This e-book is devoted to Global Optimization algorithms, which are methods for finding solutions of high quality for an incredibly wide range of problems. We introduce the basic concepts of optimization and discuss features which make optimization problems difficult and which should thus be considered when trying to solve them. In this book, we focus on metaheuristic approaches like Evolutionary Computation, Simulated Annealing, Extremal Optimization, Tabu Search, and Random Optimization. Especially the Evolutionary Computation methods, which subsume Evolutionary Algorithms, Genetic Algorithms, Genetic Programming, Learning Classifier Systems, Evolution Strategies, Differential Evolution, Particle Swarm Optimization, and Ant Colony Optimization, are discussed in detail.
In this third edition, we try to make a transition from a pure material collection and
compendium to a more structured book. We try to address two major audience groups:
1. Our book may help students, since we try to describe the algorithms in an understandable, consistent way. Therefore, we also provide the fundamentals and much background knowledge. You can find (short and simplified) summaries on stochastic theory and theoretical computer science in Part VI on page 638. Additionally, application examples are provided which give an idea of how problems can be tackled with the different techniques and what results can be expected.
2. Fellow researchers and PhD students may find the application examples helpful too. For them, in-depth discussions of the individual approaches are included, supported by a large set of useful literature references.
The contents of this book are divided into three parts. In the first part, different optimization technologies are introduced and their features are described. Small examples are often given in order to ease understanding. In the second part, starting at page 530, we elaborate on different application examples in detail. Finally, in the last part, following at page 638, the aforementioned background knowledge is provided.
In order to maximize the utility of this electronic book, it contains automatic, clickable
links. They are shaded with dark gray so the book is still b/w printable. You can click on
1. entries in the table of contents,
2. citation references like “Heitkötter and Beasley [1220]”,
3. page references like “253”,
4. references such as “see Figure 28.1 on page 254” to sections, figures, tables, and listings,
and
5. URLs and links like “http://www.lania.mx/~ccoello/EMOO/ [accessed 2007-10-25]”.
URLs are usually annotated with the date we accessed them, like http://www.lania.mx/~ccoello/EMOO/ [accessed 2007-10-25]. We can neither guarantee that their content remains unchanged, nor that these sites stay available. We also assume no responsibility for anything we linked to.
The following scenario is an example for using the book: A student reads the text and finds a passage that she wants to investigate in depth. She clicks on a citation which seems interesting, and the corresponding reference is shown. For some of the references which are available online, links are provided in the reference text. By clicking on such a link, the Adobe Reader® will open another window and load the corresponding document (or a browser window of a site that links to the document). After reading it, the student may use the “backwards” button in the Acrobat Reader’s navigation utility to go back to the text initially read in the e-book.
If this book contains something you want to cite or reference in your work, please use the citation suggestion provided in Chapter A on page 945. Also, I would be very happy if you provide feedback, report errors or missing things that you have (or have not) found, criticize something, or have any additional ideas or suggestions. Do not hesitate to contact me via my email address tweise@gmx.de. As a matter of fact, a large number of people have helped me to improve this book over time. I have enumerated the most important contributors in Chapter D – thank you guys, I really appreciate your help! At many places in this book we refer to Wikipedia – The Free Encyclopedia [2938], which is a great source of knowledge. Wikipedia – The Free Encyclopedia contains articles and definitions for many of the aspects discussed in this book. Like this book, it is updated and improved frequently. Therefore, including the links adds greatly to the book’s utility, in my opinion.
The updates and improvements will result in new versions of the book, which will regularly appear at http://www.it-weise.de/projects/book.pdf. The complete LaTeX source code of this book, including all graphics and the bibliography, is hosted at SourceForge under http://sourceforge.net/projects/goa-taa/ in the CVS repository goa-taa.cvs.sourceforge.net. You can browse the repository under http://goa-taa.cvs.sourceforge.net/goa-taa/ and anonymously download the complete sources by using the CVS command given in Listing 1.
cvs -z3 -d:pserver:anonymous@goa-taa.cvs.sourceforge.net:/cvsroot/goa-taa
checkout -P Book
Listing 1: The CVS command for anonymously checking out the complete book sources.
Compiling the sources requires multiple runs of LaTeX, BibTeX, and makeindex because of the nifty way the references are incorporated. In the repository, an Ant script is provided for doing this. Also, the complete bibliography of this book is stored in a MySQL database, and the scripts for creating this database as well as tools for querying it (Java) and a Microsoft Access frontend are part of the sources you can download. These resources as well as all contents of this book (unless explicitly stated otherwise) are licensed under the GNU Free Documentation License (FDL, see Chapter B on page 947). Some of the algorithms provided are made available under the LGPL license (see Chapter C on page 955).
Title Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Part I Foundations
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.1 What is Global Optimization? . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.1.1 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.1.2 Algorithms and Programs . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.1.3 Optimization versus Dedicated Algorithms . . . . . . . . . . . . . . . 24
1.1.4 Structure of this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2 Types of Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2.1 Combinatorial Optimization Problems . . . . . . . . . . . . . . . . . . 24
1.2.2 Numerical Optimization Problems . . . . . . . . . . . . . . . . . . . . 27
1.2.3 Discrete and Mixed Problems . . . . . . . . . . . . . . . . . . . . . . 29
1.2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3 Classes of Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.1 Algorithm Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.2 Monte Carlo Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.3.3 Heuristics and Metaheuristics . . . . . . . . . . . . . . . . . . . . . . 34
1.3.4 Evolutionary Computation . . . . . . . . . . . . . . . . . . . . . . . . 36
1.3.5 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.4 Classification According to Optimization Time . . . . . . . . . . . . . . . . . 37
1.5 Number of Optimization Criteria . . . . . . . . . . . . . . . . . . . . . . . . 39
1.6 Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5 Fitness and Problem Landscape: How does the Optimizer see it? . . . . 93
5.1 Fitness Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Fitness as a Relative Measure of Utility . . . . . . . . . . . . . . . . . . . . . 94
5.3 Problem Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
11 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
15 Deceptiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.2 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
25 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
25.1 General Information on Metaheuristics . . . . . . . . . . . . . . . . . . . . . 226
25.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 226
25.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
25.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 227
25.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
42 GRASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
42.1 General Information on GRASPs . . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
45 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Part V Applications
50 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.2 Bit-String based Problem Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.2.1 Kauffman’s NK Fitness Landscapes . . . . . . . . . . . . . . . . . . . 553
50.2.2 The p-Spin Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
50.2.3 The ND Family of Fitness Landscapes . . . . . . . . . . . . . . . . . . 557
50.2.4 The Royal Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
50.2.5 OneMax and BinInt . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
50.2.6 Long Path Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
50.2.7 Tunable Model for Problematic Phenomena . . . . . . . . . . . . . . 566
50.3 Real Problem Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
50.3.1 Single-Objective Optimization . . . . . . . . . . . . . . . . . . . . . . 580
50.4 Close-to-Real Vehicle Routing Problem Benchmark . . . . . . . . . . . . . . 621
50.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
50.4.2 Involved Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
50.4.3 Problem Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
50.4.4 Objectives and Constraints . . . . . . . . . . . . . . . . . . . . . . . . 634
50.4.5 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Part VI Background
51 Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
51.1 Set Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
51.2 Special Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
51.3 Relations between Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
51.4 Operations on Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
51.5 Tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
51.6 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
51.7 Binary Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
51.8 Order Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
51.9 Equivalence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
51.10 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
51.10.1 Monotonicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
51.11 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
51.11.1 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
51.11.2 Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
51.11.3 Transformations between Sets and Lists . . . . . . . . . . . . . . . . . 650
54 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
57 Demos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
57.1 Demos for String-based Problem Spaces . . . . . . . . . . . . . . . . . . . . . 859
57.1.1 Real-Vector based Problem Spaces X ⊆ Rn . . . . . . . . . . . . . . . 859
57.1.2 Permutation-based Problem Spaces . . . . . . . . . . . . . . . . . . . 887
57.2 Demos for Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . 905
57.2.1 Demos for Genetic Programming Applied to Mathematical Problems 905
57.3 The VRP Benchmark Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 914
57.3.1 The Involved Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
57.3.2 The Optimization Utilities . . . . . . . . . . . . . . . . . . . . . . . . 925
Appendices
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
Part I Foundations
Chapter 1
Introduction
One of the most fundamental principles in our world is the search for optimal states. It begins in the microcosm of physics, where atomic bonds minimize the energy of electrons [2137]. When molecules combine into solid bodies during the process of freezing, they assume crystal structures which come close to energy-optimal configurations. Such a natural optimization process, of course, is not driven by any higher intention but results from the laws of physics.
The same goes for the biological principle of survival of the fittest [2571] which, together with biological evolution [684], leads to a better adaptation of species to their environment. Here, a local optimum is a well-adapted species that can maintain a stable population in its niche.
1.1.1 Optimization
Since optimization covers a wide variety of subject areas, there exist different points of view
on what it actually is. We approach this question by first clarifying the goal of optimization:
solving optimization problems.
Every task which has the goal of approaching certain configurations considered optimal in the context of pre-defined criteria can be viewed as an optimization problem. After giving these rough definitions, which we will later refine in Section 6.2 on page 103, we can also specify the topic of this book.
The term algorithm is derived from the name of the mathematician Mohammed ibn Musa al-Khwarizmi, who was part of the royal court in Baghdad and who lived from about 780 to 850. Al-Khwarizmi’s work is the likely source of the word algebra as well. Definition D1.6 provides a basic understanding of what an optimization algorithm is, a “working hypothesis” for the time being. It will later be refined in Definition D6.3 on page 103.
Therefore, the primitive instructions of the algorithm must be expressed either in a form that
the machine can process directly (machine code7 [2135]), in a form that can be translated
(1:1) into such code (assembly language [2378], Java byte code [1109], etc.), or in a high-
level programming language8 [1817] which can be translated (n:m) into the latter using
special software (compiler) [1556].
In this book, we will provide several algorithms for solving optimization problems. In our algorithms, we will try to strike a balance between abstraction, ease of reading, and closeness to the syntax of high-level programming languages such as Java. In the algorithms, we will use self-explanatory primitives which are still based on clear definitions (such as the list operations discussed in Section 51.11 on page 647).
3 http://en.wikipedia.org/wiki/Mathematical_economics [accessed 2009-06-16]
4 http://en.wikipedia.org/wiki/Operations_research [accessed 2009-06-19]
5 http://en.wikipedia.org/wiki/Algorithm [accessed 2007-07-03]
6 http://en.wikipedia.org/wiki/Computer_program [accessed 2007-07-03]
7 http://en.wikipedia.org/wiki/Machine_code [accessed 2007-07-04]
8 http://en.wikipedia.org/wiki/High-level_programming_language [accessed 2007-07-03]
1.1.3 Optimization versus Dedicated Algorithms
During at least fifty five years, research in computer science led to the development of many
algorithms which solve given problem classes to optimality. Basically, we can distinguish
two kinds of problem-solving algorithms: dedicated algorithms and optimization algorithms.
Dedicated algorithms are specialized to exactly solve a given class of problems in the
shortest possible time. They are most often deterministic and always utilize exact theoretical
knowledge about the problem at hand. Kruskal’s algorithm [1625], for example, is the best
way for searching the minimum spanning tree in a graph with weighted edges. The shortest
paths from a source node to the other nodes in a network can best be found with Dijkstra’s
algorithm [796] and the maximum flow in a flow network can be computed quickly with the
algorithm given by Dinitz [800]. There are many problems for which such fast and efficient
dedicated algorithms exist. For each optimization problem, we should always check first if
it can be solved with such a specialized, exact algorithm (see also Chapter 7 on page 109).
However, for many optimization tasks no efficient algorithm exists. The reason can be that (1) the problem is too specific – after all, researchers usually develop algorithms only for problems which are of general interest. (2) Furthermore, for some problems (the current scientific opinion is that) no efficient, dedicated algorithm can be constructed at all (see Section 12.2.2 on page 146).
For such problems, the more general optimization algorithms are employed. Most often, these algorithms only need a definition of the structure of possible solutions and a function f which measures the quality of a candidate solution. Based on this information, they try to find solutions for which f takes on the best values. Since optimization methods utilize much less information than exact algorithms, they are usually slower and cannot provide the same solution quality. In cases 1 and 2, however, we have no choice but to use them. And indeed, there exists a vast number of situations where either Point 1, Point 2, or both conditions hold, as you can see in, for example, Table 25.1 on page 226 or Table 28.4 on page 314.
In the first part of this book, we will discuss the things which constitute an optimization problem and the basic components of each optimization process. In Part II, we take a look at all the possible aspects which can make an optimization task difficult. Understanding these issues will allow you to anticipate possible problems when solving an optimization task and, hopefully, to prevent their occurrence from the start. We then discuss metaheuristic optimization algorithms (and especially Evolutionary Computation methods) in Part III. Whereas these optimization approaches are clearly the center of this book, we also introduce traditional, non-metaheuristic, numerical, and/or exact methods in Part IV. In Part V, we then list some examples of applications for which optimization algorithms can be used. Finally, Part VI is a collection of definitions and knowledge which can be useful for understanding the terms and formulas used in the rest of the book. It is provided in an effort to make the book stand-alone and to allow students to understand it even if they have little mathematical or theoretical background.
Combinatorial tasks are, for example, the Traveling Salesman Problem (see Example E2.2 on page 45 [1429]), Vehicle Routing Problems (see Section 49.3 on page 536 and [2162, 2912]), graph coloring [1455], graph partitioning [344], scheduling [564], packing [564], and satisfiability problems [2335]. Algorithms suitable for such problems are, amongst others, Genetic Algorithms (Chapter 29), Simulated Annealing (Chapter 27), Tabu Search (Chapter 40), and Extremal Optimization (Chapter 41).
Figure 1.2: One example instance of a bin packing problem: a distribution of n = 10 objects of sizes a1 = 4, a2 = 4, a3 = 1, a4 = 1, a5 = 1, a6 = 2, a7 = 2, a8 = 2, a9 = 2, a10 = 3 into k = 5 bins of size b = 5.
as follows: Given are a number k ∈ N1 of bins, each of which having size b ∈ N1, and a number n ∈ N1 of objects with the weights a1, a2, . . . , an. The question to answer is: Can the n objects be distributed over the k bins in a way that none of the bins overflows? A configuration which meets this requirement is sketched in Figure 1.2. If another object of size a11 = 4 was added, the answer to the question would become No.
The bin packing problem can be formalized as asking whether a mapping x from the numbers 1..n to 1..k exists so that Equation 1.1 holds. In Equation 1.1, the mapping is represented as a list x of k sets where the ith set contains the indices of the objects to be packed into the ith bin.
∃x : ∀i ∈ 0..k − 1 : ∑_{∀j ∈ x[i]} a_j ≤ b    (1.1)
Finding out whether this equation holds or not is known to be NP-complete. The question could be transformed into an optimization problem with the goal of finding the smallest
9 http://en.wikipedia.org/wiki/Combinatorial_optimization [accessed 2010-08-04]
10 http://en.wikipedia.org/wiki/Bin_packing_problem [accessed 2010-08-27]
number k of bins of size b which can contain all the objects listed in a. Alternatively, we
could try to find the mapping x for which the function fbp given in Equation 1.2 takes on
the lowest value, for example.
f_bp(x) = ∑_{i=0}^{k−1} max{0, (∑_{∀j ∈ x[i]} a_j) − b}    (1.2)
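A direct transcription of Equation 1.2 might look as follows. The object sizes are those of the instance in Figure 1.2, while the particular packing x is just one feasible assignment assumed for illustration; note that f_bp(x) = 0 exactly when the condition of Equation 1.1 holds:

```python
def f_bp(x, a, b):
    """Equation 1.2: the total overflow, i.e., the sum over all bins of
    max(0, load - b). x is a list of k sets of object indices (1-based, as
    in the text), a maps each index to an object size, b is the bin size."""
    return sum(max(0, sum(a[j] for j in bin_i) - b) for bin_i in x)

# Object sizes of the instance in Figure 1.2 (n = 10, k = 5, b = 5).
a = {1: 4, 2: 4, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2, 10: 3}
x = [{1, 3}, {2, 4}, {5, 6, 7}, {8, 9}, {10}]  # one assumed feasible packing
print(f_bp(x, a, b=5))         # 0: no bin overflows, so Equation 1.1 holds
print(f_bp([set(a)], a, b=5))  # 17: all objects in one bin overflow it by 22 - 5
```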
Figure 1.3: An example circuit layout problem: a card with S = 20 × 11 slots, the circuits C = {A, B, C, D, E, F, G, H, I, J, K, L, M}, and the connections R = {(A,B), (B,C), (C,F), (C,G), (C,M), (D,E), (D,J), (E,F), (F,K), (G,J), (G,M), (H,J), (H,I), (H,L), (I,K), (I,L), (K,L), (L,M)}.
Such a problem is given by a set S of slots on a circuit card, a set C of circuits to be placed on that card, and a connection list R ⊆ C × C where (c1, c2) ∈ R means that the two circuits c1, c2 ∈ C have to be connected. The goal is to find a layout x which assigns each circuit c ∈ C to a location s ∈ S so that the overall length of wiring required to establish all the connections r ∈ R becomes minimal, under the constraint that no wire may cross another one. An example for such a scenario is given in Figure 1.3.
Similar circuit layout problems have existed in a variety of forms for quite a while now [1541] and are known to be quite hard. They may be extended with constraints such as also minimizing the heat emission of the circuit, minimizing cross-wire induction, and so on. They may also be extended to evolve not only the circuit locations but also the circuit types and numbers [1479, 2792] and can thus become arbitrarily complex.
Numerical problems defined over Rn (or any kind of space containing it) are called continuous optimization problems. Function optimization [194], engineering design optimization tasks [2059], and classification and data mining tasks [633] are common continuous optimization problems. They can often be solved efficiently with Evolution Strategies (Chapter 30), Differential Evolution (Chapter 33), Evolutionary Programming (Chapter 32), Estimation of Distribution Algorithms (Chapter 34), or Particle Swarm Optimization (Chapter 39), for example.
However, we can transform it into the optimization problem “Minimize the objective function f1(x) = |0 − g(x)| = |g(x)|” or “Minimize the objective function f2(x) = (0 − g(x))² = (g(x))²”. Different from the original formulation, the objective functions f1 and f2 provide us with a
12 http://en.wikipedia.org/wiki/Optimization_(mathematics) [accessed 2010-08-04]
Figure 1.4: The graph of g(x) as given in Equation 1.3 with the four roots x⋆1 to x⋆4 .
measure of how close an element x is to be a valid solution. For the global optima x⋆ of
the problem, f1 (x⋆ ) = f2 (x⋆ ) = 0 holds. Based on this formulation, we can hand any of the
two objective functions to a method capable of solving numerical optimization problems. If
these indeed discover elements x⋆ with f1 (x⋆ ) = f2 (x⋆ ) = 0, we have answered the initial
question since the elements x⋆ are the roots of g(x).
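The idea can be sketched in a few lines. Equation 1.3 itself is not reproduced here, so the g below is only a stand-in polynomial with four roots, and the descent routine and its parameters are likewise illustrative assumptions:

```python
import random

def g(x):
    # Stand-in for Equation 1.3: a polynomial with the four roots -4, -2, 1, 3.
    return (x + 4) * (x + 2) * (x - 1) * (x - 3) / 20.0

f1 = lambda x: abs(g(x))  # f1 is 0 exactly at the roots of g

def descend(start, steps=20000, sigma=0.05):
    """Crude stochastic descent on f1: accept only non-worsening moves."""
    x, fx = start, f1(start)
    for _ in range(steps):
        y = x + random.gauss(0, sigma)
        fy = f1(y)
        if fy <= fx:
            x, fx = y, fy
    return x

random.seed(0)
roots = [descend(s) for s in (-4.5, -2.3, 0.5, 3.5)]
print([round(r, 3) for r in roots])  # close to [-4.0, -2.0, 1.0, 3.0]
```

Each start point lies in the basin of one root of the stand-in g, so minimizing f1 from these starts recovers all four roots, mirroring the argument in the text.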
Figure 1.5: A truss optimization problem like the one given in [18].
Such a problem is defined by a set of possible beams (light gray), a few fixed points (left hand side of the truss sketches), and a point where a weight will be attached (right hand side of the truss sketches, F⃗). The thickness di of each of the i = 1..n beams which leads to the most stable truss whilst not exceeding a maximum volume of material V would be subject to optimization. A structure like the one sketched on the right hand side of the figure could result from a numerical optimization process: some beams receive different non-zero thicknesses (illustrated with thicker lines), while some other beams have been erased from the design (receive thickness di = 0), sketched as thin, light gray lines.
Notice that, depending on the problem formulation, truss optimization can also become a discrete or even combinatorial problem: we cannot choose the thicknesses of the beams in the truss arbitrarily in all real-world design tasks. This may be possible when synthesizing the support structures for air plane wings or motor blocks. However, when building a truss from pre-defined building blocks, such as the beams available for house construction, the set of possible beam thicknesses is finite, for example, di ∈ {10mm, 20mm, 30mm, 50mm, 100mm, 300mm} ∀i ∈ 1..n. In such a case, a problem such as the one sketched in Figure 1.5 effectively becomes combinatorial or, at least, a discrete integer programming task.
Of course, numerical problems may also be defined over discrete spaces, such as Nn0 (the vectors of non-negative integers). Problems of this type are called integer programming problems.
1.2.4 Summary
Figure 1.6 is not intended to provide a precise hierarchy of problem types. Instead, it is only a rough sketch of the relations between them. Basically, there are smooth transitions between many problem classes, and the type of a given problem is often subject to interpretation.
The candidate solutions of combinatorial problems stem from finite sets and are discrete. They can therefore always be expressed as discrete numerical vectors and could thus theoretically be solved with methods for numerical optimization, such as integer programming techniques.
Of course, every discrete numerical problem can also be expressed as a continuous numerical problem, since any vector from Rn can easily be transformed to a vector in Zn by simply rounding each of its elements. Hence, the techniques for continuous optimization can theoretically be used for discrete numerical problems (integer programming) as well as for MIPs and for combinatorial problems.
However, algorithms for numerical optimization problems often apply numerical methods, such as adding vectors, multiplying them with some number, or rotating them. Combinatorial problems, on the other hand, are usually solved by applying combinatorial modifications to solution structures, such as changing the order of elements in a sequence. These two approaches are fundamentally different and thus, although algorithms for continuous
optimization can theoretically be used for combinatorial optimization, it is quite clear that
this is rarely the case in practice and would generally yield bad results.
Furthermore, the precision with which numbers can be represented on a computer is finite
since the memory available in any possible physical device is limited. From this perspective,
any continuous optimization problem, if solved on a physical machine, is a discrete problem
as well. In theory, it could thus be solved with approaches for combinatorial optimization.
Again, it is quite easy to see that adjusting the values and sequences of bits in a floating
point representation of a real number will probably lead to inferior results or, at least, to
slower optimization speed than using advanced mathematical operations.
Finally, there may be optimization tasks with both combinatorial and continuous characteristics. Example E1.6 provides a problem which can exist in a continuous and in a discrete flavor, both of which can be tackled with numerical methods, whereas the latter one can also be solved efficiently with approaches for discrete problems.
In this book, we will only be able to discuss a small fraction of the wide variety of Global
Optimization techniques [1068, 2126]. Before digging any deeper into the matter, we will
attempt to classify some of these algorithms in order to give a rough overview of what will
come in Part III and Part IV. Figure 1.7 sketches a rough taxonomy of Global Optimization
methods. Please be aware that the idea of creating a precise taxonomy of optimization methods makes no sense: not only are there many hybrid approaches which combine several different techniques, there also exist various definitions for what things like metaheuristics, artificial intelligence, Evolutionary Computation, Soft Computing, and so on exactly are and which specific algorithms belong to them and which do not. Furthermore, many of these fuzzy collective terms also overlap widely. The area of Global Optimization is a living and breathing research discipline. Many contributions come from scientists from various research areas, ranging from physics to economics and from biology to the fine arts. Just as important are the numerous ideas developed by practitioners in engineering, industry, and business. As stated before, Figure 1.7 should thus be considered as just a rough sketch, as one possible way to classify optimizers according to their algorithm structure. In the remainder of this section, we will try to outline some of the terms used in Figure 1.7.
Generally, optimization algorithms can be divided in two basic classes: deterministic and
probabilistic algorithms.
For the same input data, deterministic algorithms will always produce the same results.
This means that whenever they have to make a decision on how to proceed, they will come
to the same decision each time the available data is the same.
Figure 1.7: A rough taxonomy of Global Optimization methods. Probabilistic approaches comprise metaheuristics (Chapter 26) such as Simulated Annealing (Chapter 27), Tabu Search (Chapter 40), Extremal Optimization (Chapter 41), GRASP (Chapter 42), Downhill Simplex (Chapter 43), Random Optimization (Chapter 44), and Harmony Search [1033]; Evolutionary Algorithms (Chapter 28) with Genetic Algorithms (Chapter 29), Evolution Strategies (Chapter 30), Genetic Programming (Chapter 31, including SGP in Section 31.3, LGP in Section 31.4, GGGP in Section 31.5, and graph-based GP in Section 31.6), Evolutionary Programming (Chapter 32), Differential Evolution (Chapter 33), EDAs (Chapter 34), LCS (Chapter 35), and Memetic Algorithms (Chapter 36); and Swarm Intelligence with ACO (Chapter 37), RFD (Chapter 38), and PSO (Chapter 39). Deterministic approaches comprise uninformed search (Section 46.2: BFS in Section 46.2.1, DFS in Section 46.2.2, IDDFS in Section 46.2.4), informed search (Section 46.3: greedy search in Section 46.3.1, A⋆ search in Section 46.3.2), Branch & Bound (Chapter 47), and the Cutting-Plane Method (Chapter 48).
Deterministic algorithms are thus used when decisions can efficiently be made on the data
available to the optimization process. This is the case if a clear relation between the char-
acteristics of the possible solutions and their utility for a given problem exists. Then, the
search space can efficiently be explored with, for example, a divide and conquer scheme14.
There may be situations, however, where deterministic decision making may prevent the
optimizer from finding the best results. If the relation between a candidate solution and its
“fitness” is not obvious, dynamically changing, too complicated, or the scale of the search
space is very high, applying many of the deterministic approaches becomes less efficient and
often infeasible. Then, probabilistic optimization algorithms come into play. Here lies the
focus of this book.
Generally, exact algorithms can be much more efficient than probabilistic ones in many
domains. Probabilistic algorithms have the further drawback of not being deterministic,
i. e., they may produce different results in each run even for the same input. Yet, in situations
like the one described in Example E1.8, randomized decision making can be very advantageous.
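This behavior can be illustrated with the perhaps simplest randomized optimizer, pure random sampling; a Python sketch (the objective function and its bounds are made up for illustration – the book's own listings are in Java, but the idea is language-independent):

```python
import random

def random_search(f, bounds, steps, rng):
    """Pure random sampling: keep the best of `steps` uniform samples."""
    lo, hi = bounds
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(steps - 1):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:           # minimization: keep the smaller value
            best_x, best_f = x, fx
    return best_x, best_f

# A made-up objective with its minimum at x = 3.
f = lambda x: (x - 3.0) ** 2

# Two runs with different seeds generally return different results,
# yet both land close to the optimum.
x_a, _ = random_search(f, (-10.0, 10.0), 1000, random.Random(1))
x_b, _ = random_search(f, (-10.0, 10.0), 1000, random.Random(2))
```

The two runs differ because the decisions are random, which is exactly the non-determinism discussed above.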
Early work in this area, which has now become one of the most important research fields in
optimization, was started at least sixty years ago. The foundations of many of the efficient
methods currently in use were laid by researchers such as Bledsoe and Browning [327],
Friedberg [995], Robbins and Monro [2313], and Bremermann [407].
14 http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm [accessed 2007-07-09]
15 http://en.wikipedia.org/wiki/Randomized_algorithm [accessed 2007-07-03]
1.3.2 Monte Carlo Algorithms
Most probabilistic algorithms used for optimization are Monte Carlo-based approaches (see
Section 12.4). They trade the guaranteed global optimality of the solutions for a shorter
runtime. This does not mean that the results obtained by using them are incorrect – they
may just not be the global optima. Still, a result a little inferior to the best possible
solution x⋆ is still better than waiting 10^100 years until x⋆ is found. . .
An umbrella term closely related to Monte Carlo algorithms is Soft Computing (SC),
which summarizes all methods that find solutions for computationally hard tasks (such as
N P-complete problems, see Section 12.2.2) in relatively short time, again by trading in
optimality for higher computation speed [760, 1242, 1430, 2685].
Most efficient optimization algorithms which search for good candidate solutions in some
targeted way, i. e., all methods better than uninformed searches, utilize information from
objective (see Section 2.2) or heuristic functions. The term heuristic stems from the Greek
heuriskein, which translates to “to find” or “to discover”.
1.3.3.1 Heuristics
Heuristics used in optimization are functions that rate the utility of an element and provide
support for the decision regarding which element out of a set of possible solutions is to be
examined next. Deterministic algorithms usually employ heuristics to define the processing
order of the candidate solutions. Approaches using such functions are called informed search
methods (which will be discussed in Section 46.3 on page 518). Many probabilistic methods,
on the other hand, only consider those elements of the search space in further computations
that have been selected by the heuristic while discarding the rest.
A heuristic may, for instance, estimate the number of search steps required from a given
solution to the optimal results. In this case, the lower the heuristic value, the better the
solution would be. It is quite clear that heuristics are problem class dependent. They can be
considered as the equivalent of fitness or objective functions in (deterministic) state space
search methods [1467] (see Section 46.3 and the more specific Definition D46.5 on page 518).
An objective function f(x) usually has a clear meaning denoting an absolute utility of a
candidate solution x. Different from that, heuristic values φ(p) often represent more indirect
criteria such as the aforementioned search costs required to reach the global optimum [1467]
from a given individual.
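As a toy illustration (our own, not from the book): for a bit-string problem whose optimum happens to be known, the Hamming distance to that optimum could serve as such a heuristic, estimating the number of single-bit search steps still required – a Python sketch:

```python
def hamming_heuristic(candidate, target):
    """Estimate phi(p) of the remaining search steps: the number of bit
    positions in which the candidate still differs from the (here assumed
    known) optimal string. Lower values indicate better candidates."""
    return sum(1 for a, b in zip(candidate, target) if a != b)

# Hypothetical 5-bit problem whose optimum is "10110".
target = "10110"
assert hamming_heuristic("10110", target) == 0   # the optimum itself
assert hamming_heuristic("00111", target) == 2   # two bits still wrong
```

Note how problem-class dependent this is: the same heuristic is meaningless for, say, a real-valued problem.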
We will discuss heuristics in more depth in Section 46.3. It should be mentioned that
optimization and artificial intelligence (AI) merge seamlessly with each other. This becomes
especially clear when considering that one of the most important research areas of AI is the
design of good heuristics for search, especially for the state space search algorithms which
are listed here as deterministic optimization methods.
Anyway, knowing the proximity of the meaning of the terms heuristic and objective
function, it may as well be possible to consider many of the following, randomized ap-
proaches to be informed searches (which brings us back to the fuzziness of terms). There
is, however, another difference between heuristics and objective functions: Heuristics usually
involve more knowledge about the relation between the structure of the candidate solutions
and their utility, as may have become clear from Example E1.9.
1.3.3.2 Metaheuristics
This knowledge is a luxury often not available. In many cases, the optimization algorithms
have a “horizon” limited to the search operations and the objective functions with unknown
characteristics. They can produce new candidate solutions by applying search operators to
the currently available genotypes. Yet, the exact result of the operators is (especially in
probabilistic approaches) not clear in advance and has to be measured with the objective
functions as sketched in Figure 1.8. Then, the only choices an optimization algorithm has
are:
1. Which candidate solutions should be investigated further?
2. Which search operations should be applied to them? (If more than one operator is
available, that is.)
Figure 1.8: The optimization algorithm as a black box: the outcome of the search operations is unknown in advance and can only be measured through the objective values f(x ∈ X).
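The two choices listed above can be sketched as a generic black-box optimization loop; a Python sketch with a made-up objective and search operator (the book's own listings are in Java, and this skeleton is our own illustration, not a specific algorithm from the book):

```python
import random

def black_box_optimizer(f, x0, mutate, steps, rng):
    """Skeleton of a black-box metaheuristic: the only information used
    is the objective value f(x). The effect of the search operator
    `mutate` is not known in advance and must be measured via f."""
    x, fx = x0, f(x0)
    for _ in range(steps):
        y = mutate(x, rng)        # choice 2: apply a search operation
        fy = f(y)                 # measure its (unknown) result
        if fy <= fx:              # choice 1: which candidate to keep
            x, fx = y, fy
    return x, fx

# Made-up problem: minimize a 1-D quadratic using a small Gaussian move.
f = lambda x: (x - 5.0) ** 2
mutate = lambda x, rng: x + rng.gauss(0.0, 0.5)
x_best, f_best = black_box_optimizer(f, 0.0, mutate, 2000, random.Random(42))
```

Replacing the acceptance rule or the operator yields different metaheuristics; the black-box structure stays the same.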
Metaheuristics, which you can find discussed in Part III, obtain knowledge on the struc-
ture of the problem space and the objective functions by utilizing statistics obtained from
stochastically sampling the search space. Many metaheuristics are inspired by a model of
some natural phenomenon or physical process. Simulated Annealing, for example, decides
which candidate solution is used during the next search step according to the Boltzmann
probability factor of atom configurations of solidifying metal melts. The Raindrop Method
emulates the waves on a sheet of water after a raindrop has fallen into it. There are also
techniques without a direct real-world role model, like Tabu Search and Random Optimization.
Unified models of metaheuristic optimization procedures have been proposed by Osman
[2103], Rayward-Smith [2275], Vaessens et al. [2761, 2762], and Taillard et al. [2650].
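The Boltzmann acceptance criterion mentioned above for Simulated Annealing can be sketched in a few lines of Python (the temperature values are arbitrary illustration values, not recommendations):

```python
import math
import random

def accept(delta_e, temperature, rng):
    """Metropolis/Boltzmann criterion: always accept improvements
    (delta_e <= 0); accept a worsening of delta_e > 0 with
    probability exp(-delta_e / temperature)."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

rng = random.Random(0)
# At a high temperature, worsenings are accepted often;
# at a low temperature, almost never.
hot  = sum(accept(1.0, 10.0, rng) for _ in range(10_000))
cold = sum(accept(1.0, 0.1, rng) for _ in range(10_000))
```

Lowering the temperature over time ("cooling") makes the search increasingly greedy, which is the core idea of Simulated Annealing.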
Obviously, metaheuristics are not necessarily probabilistic methods. The A⋆ search, for
instance, a deterministic informed search approach, is considered to be a metaheuristic too
(although it incorporates more knowledge about the structure of the heuristic functions
than, for instance, Evolutionary Algorithms do). Still, in this book, we are interested in
probabilistic metaheuristics the most.
The terms heuristic, objective function, fitness function (which will be introduced in detail
in Section 28.1.2.5 and Section 28.3 on page 274), and metaheuristic may easily be mixed
up, and it is maybe not entirely obvious which is which and where the differences lie.
Therefore, we here want to give a short outline of the main differences:
3. Fitness Function. Secondary utility measure which may combine information from a
set of primary utility measures (such as objective functions or heuristics) to a single
rating for an individual. This process may also take place relative to a set of given
individuals, i. e., rate the utility of a solution in comparison to another set of solutions
(such as a population in Evolutionary Algorithms).
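As an illustration of such a relative, secondary measure (our own sketch, not the book's definition): a rank-based fitness rates each individual only in comparison to the rest of the population, derived from a primary objective function:

```python
def rank_fitness(population, objective):
    """Secondary utility measure: assign each individual a fitness equal
    to its rank (0 = best) relative to the other population members,
    derived from the primary objective values (minimization)."""
    order = sorted(population, key=objective)
    return {x: rank for rank, x in enumerate(order)}

# Hypothetical population of three real-valued individuals; the
# (primary) objective has its optimum at 2.
pop = [4.0, 1.0, 3.0]
fitness = rank_fitness(pop, objective=lambda x: (x - 2.0) ** 2)
# 1.0 is closest to the optimum, so it receives the best (lowest) rank.
```

Note that the fitness of an individual changes if the population changes – exactly the relative character described in item 3.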
Evolutionary Algorithms can again be divided into very diverse approaches, spanning from
Genetic Algorithms (GAs), which try to find bit- or integer-strings of high utility (see, for
instance, Example E4.2 on page 83), and Evolution Strategies (ESs), which do the same with
vectors of real numbers to Genetic Programming (GP), where optimal tree data structures
or programs are sought. The variety of Evolutionary Algorithms is the focus of this book,
also including Memetic Algorithms (MAs) which are EAs hybridized with other search al-
gorithms.
The taxonomy just introduced classifies the optimization methods according to their algo-
rithmic structure and underlying principles, in other words, from the viewpoint of theory. A
software engineer or a user who wants to solve a problem with such an approach is however
more interested in its “interfacing features” such as speed and precision.
Speed and precision are often conflicting objectives, at least in terms of probabilistic
algorithms. A general rule of thumb is that you can gain improvements in accuracy of
optimization only by investing more time. Scientists in the area of global optimization
try to push this Pareto frontier further by inventing new approaches and enhancing or
tweaking existing ones. When it comes to time constraints and hence, the required speed of
the optimization algorithm, we can distinguish two types of optimization use cases.
Definition D1.16 (Online Optimization). Online optimization problems are tasks that
need to be solved quickly, in a time span usually ranging between ten milliseconds and a few
minutes.
In order to find a solution in this short time, optimality is often significantly traded in
for speed gains. Examples for online optimization are robot localization, load balancing,
services composition for business processes, or updating a factory’s machine job schedule
after new orders come in.
Figure 1.9: The robotic soccer team CarpeNoctem (on the right side) on the fourth day of
the RoboCup German Open 2009.
the goal? Should it try to pass the ball to a team mate? Should it continue to drive towards
the goal in order to get into a better shooting position?
To find a sequence of actions which maximizes the chance of scoring a goal in a contin-
uously changing world is an online optimization problem. If a robot takes too long to make
a decision, opposing and cooperating robots may already have moved to different locations
and the decision may have become outdated.
Not only must the problem be solved in a few milliseconds, this process often involves
finding an agreement between different, independent robots each having a possibly different
view on what is currently going on in the world. Additionally, the degrees of freedom in the
real world, the points where the robot can move or shoot the ball to, are basically infinite.
It is clear that in such situations, no complex optimization algorithm can be applied.
Instead, only a smaller set of possible alternatives can be tested or, alternatively, predefined
programs may be processed, thus ignoring the optimization character of the decision making.
Online optimization tasks are often carried out repetitively – new orders will, for instance,
continuously arrive in a production facility and need to be scheduled to machines in a way
that minimizes the waiting time of all jobs.
Such problems include, for example, design optimization, some data mining tasks, and the
creation of long-term schedules for transportation crews. These optimization processes will
usually be carried out only once in a long while.
Before doing anything else, one must be sure about which of these two classes the
problem to be solved belongs to. There are two measures which characterize whether an
optimization algorithm is suited to the timing constraints of a problem:
An algorithm with high internal complexity and a low FHT may be suitable for a problem
where evaluating the objective functions takes long. On the other hand, it may be less
feasible than a simple approach if the quality of a candidate solution can be determined
quickly.
1.5 Number of Optimization Criteria
Optimization problems can be divided into three classes according to their optimization criteria:
1. those which try to find the best values of single objective functions f (see Section 3.1
on page 51),
2. those which optimize sets f of target functions (see Section 3.3 on page 55), and
3. those which optimize one or multiple objective functions while adhering to additional
constraints (see Section 3.4 on page 70).
1. Name five optimization problems which can be encountered in daily life. Describe
the criteria which are subject to optimization, whether they should be maximized or
minimized, and what the investigated space of possible solutions looks like.
[10 points]
3. Classify the problems from Task 1 and 2 into numerical, combinatorial, discrete, and
mixed integer programming, the classes introduced in Section 1.2 and sketched in Fig-
ure 1.6 on page 30. Check if different classifications are possible for the problems and
if so, outline how the problems should be formulated in each case. The truss opti-
mization problem (see Example E1.6 on page 29) is, for example, usually considered
to be a numerical problem since the thickness of each beam is a real value. If there
is, however, a finite set of beam thicknesses we can choose from, it becomes a discrete
problem and could even be considered as a combinatorial one.
[10 points]
5. Find at least three features which make Algorithm 1.1 on page 40 rather unsuitable for
practical applications. Describe these drawbacks, their impact, and point out possible
remedies. You may use an implementation of the algorithm as requested in Task 4 for
gaining experience with what is problematic for the algorithm.
[6 points]
7. In which way are the requirements for online and offline optimization different? In
your opinion, does a need for online optimization influence the choice of the optimization
algorithm and the general algorithm structure which can be used?
[5 points]
8. What are the differences between solving general optimization problems and specific
ones? Discuss especially the difference between black box optimization and optimizing
functions which are completely known and can maybe even be differentiated.
[5 points]
Chapter 2
Problem Space and Objective Functions
The first step which has to be taken when tackling any optimization problem is to define the
structure of the possible solutions. For minimizing a simple mathematical function, the
solution will be a real number, and if we wish to determine the optimal shape of an airplane’s
wing, we are looking for a blueprint describing it.
Usually, more than one problem space can be defined for a given optimization problem. A
few lines before, we said that as problem space for minimizing a mathematical function, the
real numbers R would be fine. On the other hand, we could as well restrict ourselves to
the natural numbers N0 or widen the search to the whole complex plane C. This choice has
major impact: On the one hand, it determines which solutions we can possibly find. On the
other hand, it also has subtle influence on the way in which an optimization algorithm can
perform its search. Between each two different points in R, for instance, there are infinitely
many other numbers, while in N0 , there are not. Then again, unlike N0 and R, there is no
total order defined on C.
In this book, we will especially focus on evolutionary optimization algorithms. One of the
oldest such methods, the Genetic Algorithm, is inspired by natural evolution and re-uses
many terms from biology. Following the terminology of Genetic Algorithms, we synonymously
refer to the problem space as phenome and to candidate solutions as phenotypes. The problem
space X is often restricted by practical constraints. It is, for instance, impossible to take
all real numbers into consideration in the minimization process of a real function. On our
off-the-shelf CPUs or with the Java programming language, we can only use 64 bit floating
point numbers, with which values can only be expressed up to a certain precision. If the
computer can only handle numbers with up to 15 decimals, an element x = 1 + 10^−16 cannot
be discovered.
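This rounding effect is easy to observe directly; a short Python sketch (Python floats are the same IEEE 754 64-bit doubles as Java's `double`):

```python
# 64-bit IEEE 754 doubles carry roughly 15-16 significant decimal digits,
# so adding 10^-16 to 1 is rounded away entirely:
x = 1.0 + 1e-16
assert x == 1.0          # the candidate 1 + 10^-16 is indistinguishable from 1

# whereas a difference of 10^-15 is still (barely) representable:
y = 1.0 + 1e-15
assert y != 1.0
```

An optimizer working with such floats therefore effectively searches a finite, discrete subset of R.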
2.2 Objective Functions: Is it good?
The next basic step when defining an optimization problem is to define a measure for what
is good. Such measures are called objective functions. As parameter, they take a candidate
solution x from the problem space X and return a value which rates its quality.
In Listing 55.8 on page 712, you can find a Java interface resembling Definition D2.3.
Usually, objective functions are subject to minimization. In other words, the smaller f(x),
the better is the candidate solution x ∈ X. Notice that although we state that an objective
function is a mathematical function, this is not always fully true. In Section 51.10 on
page 646 we introduce the concept of functions as functional binary relations which assign
to each element x from the domain X one and only one element from
the codomain (in case of the objective functions, the codomain is R). However, the evaluation
of the objective functions may incorporate randomized simulations (Example E2.3) or even
human interaction (Example E2.6). In such cases, the functionality constraint may be
violated. For the purpose of simplicity, let us, however, stick with the concept given in
Definition D2.3.
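Listing 55.8 of the book gives such an interface in Java; purely as an illustration, a minimal Python counterpart might look as follows (the names `ObjectiveFunction`, `compute`, and `SquaredError` are our own, not the book's):

```python
from abc import ABC, abstractmethod

class ObjectiveFunction(ABC):
    """Rates a candidate solution x from the problem space X with a real
    number; by the convention used here, smaller values are better."""

    @abstractmethod
    def compute(self, x):
        """Return the objective value f(x) in R."""

class SquaredError(ObjectiveFunction):
    """A concrete example objective: squared distance to a target value."""

    def __init__(self, target):
        self.target = target

    def compute(self, x):
        return (x - self.target) ** 2

f = SquaredError(target=2.0)
```

The interface fixes only the contract "candidate solution in, real number out"; any minimization algorithm can then be written against it.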
In the following, we will use several examples to build an understanding of what the
three concepts problem space X, candidate solution x, and objective function f mean. These
example should give an impression on the variety of possible ways in which they can be
specified and structured.
Figure 2.1: A sketch of the stone’s throw example: the throw distance d(x) at launch angle a versus the target distance of 100 m.
The problem space X of this problem equals the real numbers R and we can define
an objective function f : R → R which denotes the absolute difference between the actual
throw distance d(x) and the target distance 100m when a stone is thrown with x m/s as
sketched in Figure 2.1:
Example E2.1 was very easy. We had a very clear objective function f for which even the
precise mathematical definition was available and, if not known beforehand, could have
easily been obtained from literature. Furthermore, the space of possible solutions, the
problem space, was very simple since it equaled the (positive) real numbers.
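If we additionally assume the idealized textbook formula for projectile range at a fixed launch angle (an assumption for illustration; the book defines d(x) only via Figure 2.1), Example E2.1 can be sketched in Python:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def throw_distance(v, angle_deg=45.0):
    """Idealized projectile range d(v) = v^2 * sin(2*alpha) / g
    (no air resistance, launch and landing at the same height)."""
    return v ** 2 * math.sin(2.0 * math.radians(angle_deg)) / G

def objective(v):
    """f(v): absolute difference between the throw distance and 100 m."""
    return abs(throw_distance(v) - 100.0)

# Under these assumptions, the optimum at 45 degrees is
# v* = sqrt(100 * g), roughly 31.3 m/s.
v_star = math.sqrt(100.0 * G)
```

The optimizer's task is then simply to find the v that drives `objective(v)` to zero.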
Definition D2.4 (Traveling Salesman Problem). The goal of the Traveling Salesman
Problem (TSP) is to find a cyclic path of minimum total weight2 which visits all vertices of
a weighted graph.
Figure 2.2: An example for Traveling Salesman Problems, based on data from GoogleMaps.
Assume that we have the task to solve the Traveling Salesman Problem given in Figure 2.2,
i. e., to find the shortest cyclic tour through Beijing, Chengdu, Guangzhou, Hefei, and
Shanghai. The (symmetric) distance matrix3 for computing a tour’s length is given in the
figure, too.
Here, the set of possible solutions X is not as trivial as in Example E2.1. Instead of
being a mapping from the real numbers to the real numbers, the objective function f takes
2 see Equation 52.3 on page 651
3 Based on the routing utility GoogleMaps (http://maps.google.com/ [accessed 2009-06-16]).
46 2 PROBLEM SPACE AND OBJECTIVE FUNCTIONS
one possible permutation of the five cities as parameter. Since the tours which we want
to find are cyclic, we only need to take the permutations of four cities into consideration.
If we, for instance, assume that the tour starts in Hefei, it will also end in Hefei and each
possible solution is completely characterized by the sequence in which Beijing, Chengdu,
Guangzhou, and Shanghai are visited. Hence, X is the set of all such permutations.
X = Π({Beijing, Chengdu, Guangzhou, Shanghai}) (2.7)
We can furthermore halve the search space by recognizing that it plays no role for the
total distance in which direction we take our tour x ∈ X. In other words, a tour
(Hefei → Shanghai → Beijing → Chengdu → Guangzhou → Hefei) has the same length and
is equivalent to (Hefei → Guangzhou → Chengdu → Beijing → Shanghai → Hefei). Hence,
for a TSP with n cities, there are ½ · (n − 1)! possible solutions. In our case, this amounts
to 12 unique permutations.
f(x) = w(Hefei, x[0]) + ∑_{i=0}^{2} w(x[i], x[i + 1]) + w(x[3], Hefei)     (2.8)
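Equation 2.8 can be evaluated by brute force over all permutations; a Python sketch with made-up distance values (the actual matrix from Figure 2.2 is not reproduced here):

```python
from itertools import permutations

# Made-up symmetric distances (NOT the values from Figure 2.2),
# in arbitrary units.
data = {("Hefei", "Beijing"): 10, ("Hefei", "Chengdu"): 14,
        ("Hefei", "Guangzhou"): 12, ("Hefei", "Shanghai"): 4,
        ("Beijing", "Chengdu"): 15, ("Beijing", "Guangzhou"): 19,
        ("Beijing", "Shanghai"): 11, ("Chengdu", "Guangzhou"): 13,
        ("Chengdu", "Shanghai"): 16, ("Guangzhou", "Shanghai"): 12}
w = {frozenset(pair): d for pair, d in data.items()}

def dist(a, b):
    return w[frozenset((a, b))]

cities = ["Beijing", "Chengdu", "Guangzhou", "Shanghai"]

def tour_length(x):
    """Equation 2.8: a cyclic tour starting and ending in Hefei."""
    total = dist("Hefei", x[0]) + dist(x[3], "Hefei")
    for i in range(3):
        total += dist(x[i], x[i + 1])
    return total

# Exhaustive enumeration of all 4! = 24 permutations (only 12 unique
# tours, since a tour and its reversal have the same length).
best = min(permutations(cities), key=tour_length)
```

The reversal symmetry from the text shows up directly: every permutation and its reverse yield the same `tour_length`.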
Enumerating all possible solutions is, of course, the most time-consuming optimization ap-
proach possible. For TSPs with more cities, it quickly becomes infeasible since the number
of solutions grows faster than exponentially ((n/3)^n ≤ n!). Furthermore, in the simple Exam-
ple E2.1, enumerating all candidate solutions would have even been impossible since X = R.
From this example we can learn three things:
4 http://en.wikipedia.org/wiki/Brute-force_search [accessed 2009-06-16]
1. The problem space containing the parameters of the objective functions is not re-
stricted to the real numbers but can, indeed, have a very different structure.
2. There are objective functions which cannot be resolved mathematically. As a matter
of fact, most “interesting” optimization problems have this feature.
3. Enumerating all solutions is not a good optimization approach and is only applicable
to “small” or trivial problem instances.
Although the objective function of Example E2.2 could not be solved for x, its results can
still be easily determined for any given candidate solution. Yet, there are many optimization
problems where this is not the case.
Let us now summarize the information gained from these examples. The first step when
approaching an optimization problem is to choose the problem space X, as we will later
discuss in detail in Chapter 7 on page 109. Objective functions are not necessarily mere
mathematical expressions, but can be complex algorithms that, for example, involve multiple
simulations. Global Optimization comprises all techniques that can be used to find the
best elements x⋆ in X with respect to such criteria. The next question to answer is: What
constitutes these “best” elements?
9. In Example E1.2 on page 26, we stated the layout of circuits as a combinatorial opti-
mization problem. Define a possible problem space X for this task that allows us to
assign each circuit c ∈ C to one location of an m × n-sized card. Define an objective
function f : X → R+ which denotes the total length of wiring required for a
given element x ∈ X. For this purpose, you can ignore wire crossings and simply assume
that a wire going from location (i, j) to location (k, l) has length |i − k| + |j − l|.
[10 points]
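As a starting point for Task 9, the suggested simplified wire-length measure might be sketched in Python as follows (the placement and wire list are hypothetical):

```python
def wiring_length(placement, wires):
    """Total Manhattan wire length of a circuit placement.

    placement: dict mapping circuit name -> (row, column) on the m x n card
    wires:     list of (circuit_a, circuit_b) connections
    Each wire from (i, j) to (k, l) contributes |i - k| + |j - l|,
    ignoring wire crossings as suggested in Task 9."""
    total = 0
    for a, b in wires:
        (i, j), (k, l) = placement[a], placement[b]
        total += abs(i - k) + abs(j - l)
    return total

# A tiny hypothetical instance: three circuits on a 2 x 2 card.
placement = {"c1": (0, 0), "c2": (0, 1), "c3": (1, 1)}
wires = [("c1", "c2"), ("c2", "c3"), ("c1", "c3")]
```

An optimizer over X would then search over all assignments of circuits to grid cells, minimizing this function.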
10. Name some problems which can occur in circuit layout optimization as outlined in
Task 9 if the mentioned simplified assumption is indeed applied.
[5 points]
11. Define a problem space for general job shop scheduling problems as outlined in Exam-
ple E1.3 on page 26. Do not make the problem specific to the cola/lemonade example.
[10 points]
We already said that Global Optimization is about finding the best possible solutions for
given problems. Let us therefore now discuss what it is that makes a solution optimal.
Figure 3.1: Sketch of a function over a two-dimensional problem space (dimensions X1 and X2) with the global minimum marked.
A local minimum x̌l is a point which has the smallest objective value amongst its neighbors
in the problem space. Similarly, a local maximum x̂l has the highest objective value when
compared to its neighbors. Such points are called (local) extrema. In optimization, a local
optimum is always an extremum, i. e., either a local minimum or maximum.
Back in high school, we learned: “If f : X → R (X ⊆ R) has a (local) extremum at x⋆l
and is differentiable at that point, then the first derivative f′ is zero at x⋆l.”
Alternatively, we can also detect a local minimum or maximum when the first derivative
undergoes a sign change. If, for a small h, sign(f′(x⋆l − h)) ≠ sign(f′(x⋆l + h)), then we can
state that:

(f′(x⋆l) = 0) ∧ lim_{h→0} f′(x⋆l − h) < 0 ∧ lim_{h→0} f′(x⋆l + h) > 0 ⇒ x⋆l is a local minimum     (3.4)

(f′(x⋆l) = 0) ∧ lim_{h→0} f′(x⋆l − h) > 0 ∧ lim_{h→0} f′(x⋆l + h) < 0 ⇒ x⋆l is a local maximum     (3.5)
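For a differentiable one-dimensional function, the sign-change test of Equations 3.4 and 3.5 can be checked numerically; a Python sketch using f(x) = x² − 4x, whose derivative 2x − 4 changes sign at the minimum x = 2 (the function is our own illustration):

```python
def classify_critical_point(df, x, h=1e-6):
    """Classify x via the sign change of the first derivative df
    (Equations 3.4/3.5): a change from - to + indicates a local minimum,
    from + to - a local maximum."""
    left, right = df(x - h), df(x + h)
    if left < 0 < right:
        return "local minimum"
    if left > 0 > right:
        return "local maximum"
    return "no extremum detected"

df = lambda x: 2 * x - 4          # derivative of f(x) = x^2 - 4x
```

Note that this requires f′ to be available, which, as discussed next, is often not the case in practice.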
From many of our previous examples such as Example E1.1 on page 25, Example E2.2 on
page 45, or Example E2.4 on page 47, we know that objective functions in many optimization
problems cannot be expressed in a mathematically closed form, let alone in one that can be
differentiated. Furthermore, Equation 3.1 to Equation 3.5 can only be directly applied if the
problem space is a subset of the real numbers X ⊆ R. They do not work for discrete spaces
or for vector spaces. For the two-dimensional objective function given in Example E3.1, for
example, we would need to apply Equation 3.6, the generalized form of Equation 3.1:
For distinguishing maxima, minima, and saddle points, the Hessian matrix3 would be needed.
Even if the objective function exists in a differentiable form, for problem spaces X ⊆ Rn
of larger scales n > 5, computing the gradient, the Hessian, and their properties becomes
cumbersome. Derivative-based optimization is hence only feasible in a subset of optimization
problems.
In this book, we focus on derivative-free optimization. The precise definition of local
optima without using derivatives requires some sort of measure of what neighbor means
in the problem space. Since this requires some further definitions, we will postpone it to
Section 4.4.
The definitions of globally optimal points are much more straightforward and can be given
based on the terms we already have discussed:
Of course, each global optimum, minimum, or maximum is, at the same time, also a local
optimum, minimum, or maximum. As stated before, the definition of local optima is a
bit more complicated and will be discussed in detail in Section 4.4.
3 http://en.wikipedia.org/wiki/Hessian_matrix [accessed 2010-09-07]
3.2 Multiple Optima
Many optimization problems have more than one globally optimal solution. Even a one-
dimensional function f : R → R can have more than one global maximum, multiple global
minima, or even both in its domain X.
Figure 3.2: Examples of functions with multiple optima. Fig. 3.2.a: x³ − 8x²: a function with two global minima. Fig. 3.2.b: the cosine function: infinitely many optima. Fig. 3.2.c: a function with uncountably many minima.
3.3.1 Introduction
Global Optimization techniques are not just used for finding the maxima or minima of single
functions f. In many real-world design or decision making problems, there are multiple
criteria to be optimized at the same time. Such tasks can be defined as in Definition D3.6
which will later be refined in Definition D6.2 on page 103.
In the following, we will treat these sets as vector functions f : X → Rn which return a
vector of real numbers of dimension n when applied to a candidate solution x ∈ X. We will
refer to this Rn as the objective space Y. Algorithms designed to optimize sets
of objective functions are usually named with the prefix multi-objective, like multi-objective
Evolutionary Algorithms (see Definition D28.2 on page 253).
Purshouse [2228, 2230] provides a nice classification of the possible relations between
the objective functions, which we illustrate here as Figure 3.3. In the best case from the
perspective of the optimization process, two dependent objectives are in harmony: improving
one of them also improves the other.

Figure 3.3: The possible relations between objective functions (dependent – conflict or harmony – versus independent) as given in [2228, 2230].
The opposite are conflicting objectives, where an improvement in one criterion leads to a
degradation of the utility of the candidate solutions according to another criterion. If two
objectives f3 and f4 conflict, we denote this as f3 ≁ f4. Obviously, objective functions do not
need to be conflicting or harmonious over their complete domain X, but may instead exhibit
these properties on certain intervals only.
Besides harmonizing and conflicting optimization criteria, there may also be some which are
independent of each other. In a harmonious relationship, the optimization criteria can
be optimized simultaneously, i. e., an improvement of one leads to an improvement of the
other. If the criteria conflict, the exact opposite is the case. Independent criteria, however,
do not influence each other: It is possible to modify candidate solutions in a way that leads
to a change in one objective value while the other one remains constant (see Example E3.8).
If the subset under consideration equals the complete problem space X, these relations are called total and the subscript is omitted in the notations. In
the following, we give some examples for typical situations involving harmonizing, conflicting,
and independent optimization criteria. We also list some typical multi-objective optimization
applications in Table 3.1, before discussing basic techniques suitable for defining what is a
“good solution” in problems with multiple, conflicting criteria.
Example E3.4 (Factory Example).
Multi-objective optimization often means to compromise conflicting goals. If we go back to
our factory example, we can specify the following objectives that all are subject to optimiza-
tion:
1. Minimize the time between an incoming order and the shipment of the corresponding
product.
2. Maximize the profit.
3. Minimize the costs for advertising, personnel, raw materials, etc.
4. Maximize the quality of the products.
5. Minimize the negative impact of waste and pollution caused by the production process
on the environment.
The last two objectives seem to clearly conflict with the goal of cost minimization. Between
the personnel costs, the time needed for production, and the product quality there should also
be some kind of contradictory relation, since producing many quality goods in a short time
naturally requires many hands. The exact mutual influences between optimization criteria
can apparently become complicated and are not always obvious.
From both of these examples, we can gain another insight: To find the global optimum could
mean to maximize one function fi ∈ f and to minimize another one fj ∈ f (i ≠ j). Hence,
it makes no sense to talk about a global maximum or a global minimum in terms of multi-
objective optimization. We will thus retreat to the notion of the set of optimal elements
x⋆ ∈ X⋆ ⊆ X in the following discussions.
Since compromises for conflicting criteria can be defined in many ways, there exist mul-
tiple approaches to define what an optimum is. These different definitions, in turn, lead to
different sets X ⋆ . We will discuss some of these approaches in the following by using two
graphical examples for illustration purposes.
Figure 3.4: Two functions f1 and f2 with different maxima x̂1 and x̂2.
Figure 3.5: Two functions f3 and f4 with different minima x̌1, x̌2, x̌3, and x̌4.
In the first example, sketched in Figure 3.4, we maximize two functions f1 and f2 which
map the problem space X1 to the real numbers. In the second example, sketched
in Figure 3.5, we instead minimize two functions f3 and f4 that map a two-dimensional
problem space X2 ⊂ R2 to the real numbers R. Both functions have two global minima; the
lowest values of f3 are attained at x̌1 and x̌2, whereas f4 becomes minimal at x̌3 and x̌4. It
should be noted that x̌1 ≠ x̌2 ≠ x̌3 ≠ x̌4.
The maybe simplest way to optimize a set f of objective functions is to solve them according
to priorities [93, 860, 2291, 2820]. Without loss of generality, we could, for instance, assume
that f1 is strictly more important than f2, which, in turn, is more important than f3, and
so on. We then would first determine the optimal set X⋆(1) according to f1. If this set
contains more than one solution, we will then retain only those elements in X⋆(1) which
have the best objective values according to f2 compared to the other members of X⋆(1) and
obtain X⋆(1,2) ⊆ X⋆(1). This process is repeated until either |X⋆(1,2,...)| = 1 or all criteria
have been processed. By doing so, the multi-objective optimization problem has been reduced
to n = |f| single-objective ones to be solved iteratively. Then, the problem space Xi of the
ith problem equals X only for i = 1 and is Xi = X⋆(1,2,...,i−1) ∀i > 1 [860].
x1 <lex x2 ⇔ (ωi fi (x1 ) < ωi fi (x2 )) ∧ (i = min {k : fk (x1 ) ≠ fk (x2 )}) (3.13)

ωj = 1 if fj should be minimized, −1 if fj should be maximized (3.14)
where the ωj in Equation 3.13 only denote whether fj should be maximized or minimized,
as specified in Equation 3.14.
where n = len(f ) is the number of objective functions, π [i] is the ith element of the permu-
tation π, and the ωi retain their meaning from Equation 3.14.
which perform exceptionally well in terms of the production time but sacrifice profit, running
costs, production quality, and will largely disregard any environmental issues. No trade-off
between the objectives will happen at all. Therefore, the set of all lexicographically optimal
solutions X ⋆ will contain the elements x⋆ ∈ X which have the best objective values regarding
one single objective function while largely neglecting all others.
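The priority-based comparison behind Equation 3.13 can be sketched in Java (the book's implementation language). This is a simplified stand-in which assumes that all objectives have already been oriented for minimization, i.e., that the factors ωi of Equation 3.14 have been folded into the fi values:

```java
// Simplified lexicographic comparison: objective vectors are compared along
// a fixed priority order f1, f2, ...; all objectives are assumed (for
// simplicity) to be subject to minimization.
public final class Lexicographic {
  /** @return negative if a is lexicographically better than b,
   *          positive if b is better, 0 if all objective values are equal */
  public static int compare(double[] a, double[] b) {
    for (int i = 0; i < a.length; i++) {      // f1 strictly outranks f2, ...
      if (a[i] < b[i]) return -1;             // a better on the first differing fi
      if (a[i] > b[i]) return  1;             // b better on the first differing fi
    }
    return 0;                                  // equal on every objective
  }

  public static void main(String[] args) {
    // a wins on f1 even though it is far worse on f2: no trade-off happens
    System.out.println(compare(new double[]{1.0, 9.0}, new double[]{2.0, 0.0})); // -1
  }
}
```

Note how the second objective is completely ignored as soon as the first one differs, which is exactly the lack of trade-off criticized above.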
Another very simple method to define what an optimal solution is, is to compute a weighted
sum ws(x) of all objective functions fi (x) ∈ f . Since a weighted sum is a linear aggregation
of the objective values, it is also often referred to as linear aggregating. Each objective fi is
multiplied with a weight wi representing its importance. Using signed weights allows us to
minimize one objective while maximizing another one. We can, for instance, apply a weight
wa = 1 to an objective function fa and the weight wb = −1 to the criterion fb . By minimizing
ws(x), we then actually minimize the first and maximize the second objective function. If
we instead maximize ws(x), the effect would be the inverse: fb would be minimized while
fa is maximized. Either way, multi-objective problems are reduced to single-objective ones
with this method. As illustrated in Equation 3.19, a global optimum then is a candidate
solution x⋆ ∈ X for which the weighted sum of all objective values is less or equal to those
of all other candidate solutions (if minimization of the weighted sum is assumed).
ws(x) = ∑_{i=1}^{n} wi fi (x) = ∑_{∀fi ∈f } wi fi (x) (3.18)
x⋆ ∈ X ⋆ ⇔ ws(x⋆ ) ≤ ws(x) ∀x ∈ X (3.19)
With this method, we can also emulate lexicographical optimization in many cases by simply
setting the weights to values of different magnitude. If the objective functions fi ∈ f all had
the range fi : X ↦ 0..10, for instance, setting the weights to wi = 10^i would effectively
achieve this.
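As a small illustrative sketch, Equation 3.18 reduces to a single loop; the weights and objective values below are made-up example numbers:

```java
// Illustrative weighted-sum aggregation ws(x) from Equation 3.18; candidate
// solutions are represented directly by their objective value vectors.
public final class WeightedSum {
  /** ws = sum of w_i * f_i(x), where fx holds the objective values f_i(x). */
  public static double ws(double[] w, double[] fx) {
    double sum = 0.0;
    for (int i = 0; i < w.length; i++) sum += w[i] * fx[i];
    return sum;
  }

  public static void main(String[] args) {
    // wa = 1 minimizes fa while wb = -1 effectively maximizes fb
    // when ws is minimized, as described in the text
    double[] weights = {1.0, -1.0};
    double[] x1 = {3.0, 10.0};   // f_a(x1)=3, f_b(x1)=10
    double[] x2 = {2.0,  4.0};   // f_a(x2)=2, f_b(x2)=4
    System.out.println(ws(weights, x1)); // -7.0: preferred under minimization
    System.out.println(ws(weights, x2)); // -2.0
  }
}
```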
Figure 3.6: Optimization using the weighted sum approach (first example).
Figure 3.7: Optimization using the weighted sum approach (second example).
This approach has a variety of drawbacks. One is that it cannot properly handle functions
that rise or fall at different speeds. Whitley [2929] pointed out that the results of an
objective function should not be considered as a precise measure of utility. Adding multiple
such values up thus does not necessarily make sense.
Figure 3.8: A problematic constellation for the weighted sum approach.
Weighted sums are only suitable to optimize functions that at least share the same big-O
notation (see Definition D12.1 on page 144). Often, it is not obvious how the objective
functions will fall or rise. How can we, for instance, determine whether the objective max-
imizing the food piled by an Artificial Ant rises in comparison to the objective minimizing
the distance walked by the simulated insect?
And even if the shape of the objective functions and their complexity class were clear,
the question about how to set the weights w properly still remains open in most cases [689].
In the same paper, Das and Dennis [689] also show that with weighted sum approaches, not
necessarily all elements considered optimal in terms of Pareto domination (see next section)
will be found. Instead, candidate solutions hidden in concavities of the trade-off curve will
not be discovered [609, 1621, 1622] (see Example E3.17 on the next page for different shapes
of trade-off curves). Also, the candidate solutions which will finally be contained in the
result set X will not necessarily be evenly spaced [1000]. Further authors who discuss the
drawbacks of the linear aggregation method include Koski [1582] and Messac and
Mattson [1873].
x⋆ ∈ X ⋆ ⇔ ¬∃x ∈ X : x ⊣ x⋆ (3.23)
The Pareto optimal set will also comprise all lexicographically optimal solutions since, per
definition, it includes all the extreme points of the objective functions. Additionally, it
provides the trade-off solutions which we longed for in Section 3.3.2.1.
Definition D3.15 (Pareto Frontier). For a given optimization problem, the Pareto
front(ier) F ⋆ ⊂ Rn is defined as the set of results the objective function vector f creates
when it is applied to all the elements of the Pareto-optimal set X ⋆ .
F ⋆ = {f (x⋆ ) : x⋆ ∈ X ⋆ } (3.24)
Fig. 3.9.a: Non-Convex (Concave). Fig. 3.9.b: Disconnected. Fig. 3.9.c: Linear. Fig. 3.9.d: Non-Uniformly Distributed.
shapes as well. In Figure 3.9 [2915], we show some examples, including non-convex (concave),
disconnected, linear, and non-uniformly distributed Pareto fronts. Different structures of the
optimal sets pose different challenges to achieve good convergence and spread. Optimization
processes driven by linear aggregating functions will, for instance, have problems to find all
portions of Pareto fronts with non-convex geometry as shown by Das and Dennis [689].
3.3. MULTIPLE OBJECTIVE FUNCTIONS 67
In this book, we will refer to candidate solutions which are not dominated by any element
in a reference set as non-dominated. This reference set may be the whole problem space X
itself (in which case the non-dominated elements are global optima) or any subset of X.
it. Many optimization algorithms, for instance, work on populations of candidate solutions
and in such a context, a non-dominated candidate solution would be one not dominated by
any other element in the population.
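This notion can be sketched in Java under the assumption that candidate solutions are represented by their objective vectors and that all objectives are minimized (the method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Pareto dominance (the relation used in Equation 3.23) and of
// filtering the non-dominated elements out of a population; all objective
// values are assumed to be subject to minimization.
public final class Pareto {
  /** a dominates b: a is no worse in every objective and strictly better in one. */
  public static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;          // worse in objective i: no domination
      if (a[i] < b[i]) strictlyBetter = true; // strictly better at least once
    }
    return strictlyBetter;
  }

  /** Return the members of the population dominated by no other member. */
  public static List<double[]> nonDominated(List<double[]> pop) {
    List<double[]> result = new ArrayList<>();
    for (double[] cand : pop) {
      boolean dominated = false;
      for (double[] other : pop)
        if (dominates(other, cand)) { dominated = true; break; }
      if (!dominated) result.add(cand);
    }
    return result;
  }
}
```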
Figure 3.10: The two functions f1 and f2 over the problem space X1 , with the points x1 to x6 marked on the x-axis.
The points in the area between x1 and x2 (shaded in light gray) are dominated by other
points in the same region or in [x2 , x3 ], since both functions f1 and f2 can be improved by
increasing x. In other words, f1 and f2 harmonize in [x1 , x2 ): f1 ∼[x1 ,x2 ) f2 . If we start at
the leftmost point in X (which is position x1 ), for instance, we can go one small step ∆
to the right and will find a point x1 + ∆ dominating x1 because f1 (x1 + ∆) > f1 (x1 ) and
f2 (x1 + ∆) > f2 (x1 ). We can repeat this procedure and will always find a new dominating
point until we reach x2 . x2 marks the global maximum of f2 – the point with the highest
possible f2 value – which cannot be dominated by any other element of X by definition (see
Equation 3.21).
From here on, f2 will decrease for some time, but f1 keeps rising. If we now go a
small step ∆ to the right, we will find a point x2 + ∆ with f2 (x2 + ∆) < f2 (x2 ) but also
f1 (x2 + ∆) > f1 (x2 ). One objective can only get better if another one degrades, i. e., f1
and f2 conflict: f1 ≁[x2 ,x3 ] f2 . In order to increase f1 , f2 would have to be decreased and vice versa
and hence, the new point is not dominated by x2 . Although some of the f2 (x) values of the
other points x ∈ [x1 , x2 ) may be larger than f2 (x2 + ∆), f1 (x2 + ∆) > f1 (x) holds for all of
them. This means that no point in [x1 , x2 ) can dominate any point in [x2 , x4 ] because f1
keeps rising until x4 is reached.
At x3 however, f2 steeply falls to a very low level, a level lower than f2 (x5 ). Since the f1
values of the points in [x5 , x6 ] are also higher than those of the points in (x3 , x4 ], all points
in the set [x5 , x6 ] (which also contains the global maximum of f1 ) dominate those in (x3 , x4 ].
For all the points in the white area between x4 and x5 and after x6 , we can derive similar
relations. All of them are also dominated by the non-dominated regions that we have just
discussed.
Figure 3.11: Optimization using the Pareto Frontier approach (second example).
If we compare Figure 3.11 with the plots of the two functions f3 and f4 in Figure 3.5, we
can see that hills in the domination space occur at positions where both f3 and f4 have high
values. Conversely, regions of the problem space where both functions have small values are
dominated by very few elements.
Besides these examples here, another illustration of the domination relation which may help
understanding Pareto optimization can be found in Section 28.3.3 on page 275 (Figure 28.4
and Table 28.1).
3.3.5.1 Problems of Pure Pareto Optimization
Besides the fact that we will usually not be able to list all Pareto optimal elements, the
complete Pareto optimal set is often also not the wanted result of the optimization process.
Usually, we are rather interested in some special areas of the Pareto front only.
1. Maximize the amount of food piled.
2. Minimize the distance covered or the time needed to find the food.
If we applied pure Pareto optimization to these objectives, we might, for example, obtain:
1. A program x1 consisting of 100 instructions, allowing the ant to gather 50 food items
when walking a distance of 500 length units.
2. A program x2 consisting of 100 instructions, allowing the ant to gather 60 food items
when walking a distance of 5000 length units.
5. A program x5 consisting of 0 instructions, allowing the ant to gather 0 food items when
walking a distance of 0 length units.
The result of the optimization process obviously contains useless but non-dominated indi-
viduals which occupy space in the non-dominated set. A program x3 which forces the ant
to scan each and every location on the map is not feasible even if it leads to a maximum
in food gathering. Also, an empty program x5 is useless because it will lead to no food
collection at all. Between x1 and x2 , the difference in the amount of piled food is only 20%,
but the ant driven by x2 has to cover ten times the distance of the ant driven by x1 . Therefore, the
question of the actual utility arises in this case, too.
If such candidate solutions are created and archived, processing time is invested in evaluating
them (and we have already seen that evaluation may be a costly step of the optimization
process). Ikeda et al. [1400] show that various elements far from the true
Pareto frontier may survive as hardly-dominated solutions (called dominance resistant, see
Section 20.2). Also, such solutions may dominate others which are not optimal but fall into
the space behind the interesting part of the Pareto front. Furthermore, memory restrictions
usually force us to limit the size of the list of non-dominated solutions X found during
the search. When this size limit is reached, some optimization algorithms use a clustering
technique to prune the optimal set while maintaining diversity. On the one hand, this is good
since it preserves a broad scan of the Pareto frontier. On the other hand, in our example a
short but useless program is of course very different from a longer, intelligent one. It will
therefore be kept in the list, while other solutions which differ less from each other but are
more interesting for us will be discarded.
Worse yet, non-dominated elements usually have a higher probability of being explored
further. This then leads inevitably to the creation of a great proportion of useless offspring.
In the next iteration of the optimization process, these useless offspring will need a good
share of the processing time to be evaluated.
Thus, there are several reasons to force the optimization process into a wanted direction.
In ?? on page ?? you can find an illustrative discussion on the drawbacks of strict Pareto
optimization in a practical example (evolving web service compositions).
Purshouse [2228] identifies three goals of Pareto optimization which also mirror the afore-
mentioned problems:
1. Proximity: The optimizer should obtain result sets X which come as close to the Pareto
frontier as possible.
2. Diversity: If the real set of optimal solutions is too large, the set X of solutions actually
discovered should have a good distribution in both uniformity and extent.
3. Pertinency: The set X of solutions discovered should match the interests of the decision
maker using the optimizer. There is no use in containing candidate solutions with no
actual merits for the human operator.
Often, there are feasibility limits for the parameters of the candidate solutions or the objec-
tive function values. Such regions of interest are one of the reasons for one further extension
of multi-objective optimization problems:
A candidate solution x for which isFeasible(x) does not hold is called infeasible. Obviously,
only a feasible individual can be a solution, i. e., an optimum, for a given optimization
problem. Constraint handling and the notion of feasibility can be combined with all the
methods for single-objective and multi-objective optimization previously discussed. Com-
prehensive reviews on techniques for such problems have been provided by Michalewicz
[1885], Michalewicz and Schoenauer [1890], and Coello Coello et al. [594, 599] in the con-
text of Evolutionary Computation. In Table 3.2, we give some examples for constraints in
optimization problems.
One approach to ensure that constraints are always obeyed is to ensure that only feasible
candidate solutions can occur during the optimization process.
1. Coding: This can be done as part of the genotype-phenotype mapping by using so-
called decoders. An example for a decoder is given in [2472, 2473]. [2228]
2. Repairing: Infeasible candidate solutions could be repaired, i. e., turned into feasible
ones [2228, 3041] as part of the genotype-phenotype mapping as well.
Probably the easiest way of dealing with constraints in objective and fitness space is to simply
reject all infeasible candidate solutions right away and not consider them any further in
the optimization process. This death penalty [1885, 1888] can only work in problems where
the feasible regions are very large. If the problem space contains only few or non-continuously
distributed feasible elements, such an approach will lead the optimization process to stagnate,
because most effort is spent sampling blindly in the hope of finding feasible candidate
solutions. Once this has happened, the transition to other feasible elements may be too
complicated and the search only finds local optima. Additionally, if infeasible elements are
directly discarded, the information which could be gained from them is disposed with them.
Maybe one of the most popular approaches for dealing with constraints, especially in the
area of single-objective optimization, goes back to Courant [647] who introduced the idea of
penalty functions in 1943. Here, the constraints are combined with the objective function f,
resulting in a new function f′ which is then actually optimized. The basic idea is that this
combination is done in a way which ensures that an infeasible candidate solution always has
a worse f′ -value than a feasible one with the same objective values. In [647], this is achieved
by defining f′ as f′ (x) = f(x) + v[h(x)]² . Various similar approaches exist. Carroll [500, 501],
for instance, chose a penalty function of the form f′ (x) = f(x) + v ∑_{i=1}^{p} [gi (x)]⁻¹ in order to
ensure that the functions gi always stay larger than zero.
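Courant's construction can be sketched as a toy example; the objective f(x) = x², the equality constraint h(x) = x − 1 (i.e., x should equal 1), and the penalty factor v are all invented for illustration:

```java
// Toy sketch of Courant's penalty idea: an equality constraint h(x) = 0 is
// folded into the objective as f'(x) = f(x) + v * h(x)^2, so infeasible
// points always score worse than feasible points with equal objective value.
public final class Penalty {
  static double f(double x) { return x * x; }        // example objective, minimized
  static double h(double x) { return x - 1.0; }      // example constraint: x = 1

  static double fPrime(double x, double v) {
    double hx = h(x);
    return f(x) + v * hx * hx;                       // infeasible points pay v*h(x)^2
  }

  public static void main(String[] args) {
    System.out.println(fPrime(1.0, 100.0)); // 1.0   (feasible: no penalty)
    System.out.println(fPrime(0.0, 100.0)); // 100.0 (infeasible: penalized)
  }
}
```

With a large enough v, the penalized optimum is pushed toward the feasible region even though the unconstrained minimum of f lies at x = 0.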
There are practically no limits for the ways in which a penalty for infeasibility can be
integrated into the objective functions. Several researchers suggest dynamic penalties which
incorporate the index of the current iteration of the optimizer [1458, 2072] or adaptive penal-
ties which additionally utilize population statistics [237, 1161, 2487]. Thorough discussions
on penalty functions have been contributed by Fiacco and McCormick [908] and Smith and
Coit [2526].
3.4.4 Constraints as Additional Objectives
Another idea for handling constraints would be to consider them as new objective functions.
If g(x) ≥ 0 must hold, for instance, we can transform this to a new objective function which
is subject to minimization.
f≥ (x) = −min {g(x) , 0} (3.27)
As long as g is larger than 0, the constraint is met. f≥ will be zero for all candidate solutions
which fulfill this requirement, since there is no use in maximizing g beyond 0. However,
if g becomes negative, the minimum term in f≥ will become negative as well, and the minus
sign in front turns it positive. The higher the absolute value of the negative g, the higher
the value of f≥ becomes. Since f≥ is subject to minimization, we thus apply a force in
the direction of meeting the constraint. No optimization pressure is applied as long as the
constraint is met. An equality constraint h can similarly be transformed to a function f= (x)
(again subject to minimization).
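Both transformations might be sketched as follows; note that Math.max(−g, 0) equals −min{g, 0} from Equation 3.27 while sidestepping Java's negative zero:

```java
// Sketch of turning constraints into minimization objectives: g(x) >= 0
// becomes fGeq = -min{g, 0} (Equation 3.27), and an equality constraint
// h(x) = 0 becomes fEq = |h| (both subject to minimization).
public final class ConstraintObjectives {
  /** -min{g, 0}, written as max(-g, 0): 0 while g >= 0, grows with violation. */
  static double fGeq(double g) { return Math.max(-g, 0.0); }

  /** |h|: 0 only when the equality constraint is met exactly. */
  static double fEq(double h)  { return Math.abs(h); }

  public static void main(String[] args) {
    System.out.println(fGeq( 2.5)); // 0.0: constraint met, no pressure
    System.out.println(fGeq(-2.5)); // 2.5: violation, pressure toward g >= 0
    System.out.println(fEq(-0.3));  // 0.3: distance from fulfilling h = 0
  }
}
```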
f= (x) minimizes the absolute value of h. If h is continuous and real, it may arbitrarily closely
approach 0 but never actually reach it. Then, defining a small cut-off value ǫ (for instance
ǫ = 10−6 ) may help the optimizer to satisfy the constraint. An approach similar to the
idea of interpreting constraints as objectives is Deb’s Goal Programming method [736, 739].
Constraint violations and objective values are ranked together in a part of the MOEA by
Ray et al. [2273] [749].
General inequality constraints can also be processed according to the Method of Inequalities
(MOI) introduced by Zakian [3050, 3052, 3054] in his seminal work on computer-aided
control systems design (CACSD) [2404, 2924, 3070]. In the MOI, an area of interest is
specified in the form of a goal range [r̆i , r̂i ] for each objective function fi .
Pohlheim [2190] outlines how this approach can be applied to the set f of objective
functions and combined with Pareto optimization: Based on the inequalities, three categories
of candidate solutions can be defined and each element x ∈ X belongs to one of them:
1. The candidate solutions which fulfill all goals are preferred over all other individuals,
which fulfill only some or none of the goals.
2. The candidate solutions which are not able to fulfill any of the goals succumb to those
which fulfill at least some goals.
3. Only the solutions that are in the same group are compared on the basis of the Pareto
domination relation.
3.4. CONSTRAINT HANDLING 73
By doing so, the optimization process will be driven into the direction of the interesting
part of the Pareto frontier. Less effort will be spent on creating and evaluating individuals
in parts of the problem space that most probably do not contain any valid solution.
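One possible reading of this three-category scheme in Java; the goal ranges and the class numbering (1 for best) below are illustrative assumptions:

```java
// Possible sketch of Pohlheim's category scheme: count how many objective
// values fall inside their goal range [lo_i, hi_i]. Solutions fulfilling all
// goals (class 1) rank before those fulfilling some (class 2), which rank
// before those fulfilling none (class 3); within a class, Pareto domination
// would be applied.
public final class MoiClasses {
  /** @return 1 if all goals are fulfilled, 2 if some are, 3 if none are */
  static int moiClass(double[] fx, double[] lo, double[] hi) {
    int met = 0;
    for (int i = 0; i < fx.length; i++)
      if (fx[i] >= lo[i] && fx[i] <= hi[i]) met++;   // objective i inside its goal range
    if (met == fx.length) return 1;
    return met > 0 ? 2 : 3;
  }

  public static void main(String[] args) {
    double[] lo = {0.0, 0.0}, hi = {1.0, 1.0};       // invented goal ranges
    System.out.println(moiClass(new double[]{0.5, 0.5}, lo, hi)); // 1
    System.out.println(moiClass(new double[]{0.5, 2.0}, lo, hi)); // 2
    System.out.println(moiClass(new double[]{5.0, 2.0}, lo, hi)); // 3
  }
}
```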
Figure 3.12: Optimization using the Pareto-based Method of Inequalities approach (first
example).
Fig. 3.13.b: The Pareto-based Method of Inequalities class division. Fig. 3.13.c: The Pareto-based Method of Inequalities ranking.
Figure 3.13: Optimization using the Pareto-based Method of Inequalities approach (second
example).
3.5. UNIFYING APPROACHES 75
A good overview on techniques for the Method of Inequalities is given by Whidborne et al.
[2924].
3.4.6 Constrained-Domination
The idea of constrained-domination is very similar to the Method of Inequalities but natively
applies to optimization problems as specified in Definition D3.17. Deb et al. [749] define
this principle which can easily be incorporated into the dominance comparison between any
two candidate solutions. Here, a candidate solution x1 ∈ X is preferred in comparison to
an element x2 ∈ X [547, 749, 1297] if
1. x1 is feasible while x2 is not, or
2. x1 and x2 both are infeasible but x1 has a smaller overall constraint violation, or
3. x1 and x2 both are feasible but x1 dominates x2 .
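This preference test might be sketched as follows, assuming an overall constraint-violation measure where 0 means feasible, and objective vectors that are subject to minimization:

```java
// Sketch of a constrained-domination preference test: feasibility is encoded
// as an overall violation measure (0 = feasible), and `dominates` is an
// ordinary Pareto-dominance test on minimized objective vectors.
public final class ConstrainedDomination {
  static boolean dominates(double[] a, double[] b) {
    boolean better = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;     // worse somewhere: no domination
      if (a[i] < b[i]) better = true;    // strictly better at least once
    }
    return better;
  }

  /** true if (f1, v1) is preferred over (f2, v2) under constrained-domination */
  static boolean preferred(double[] f1, double v1, double[] f2, double v2) {
    if (v1 == 0.0 && v2 > 0.0) return true;               // feasible beats infeasible
    if (v1 > 0.0 && v2 > 0.0) return v1 < v2;             // smaller violation wins
    if (v1 == 0.0 && v2 == 0.0) return dominates(f1, f2); // ordinary Pareto test
    return false;
  }

  public static void main(String[] args) {
    // a feasible solution is preferred regardless of its objective values
    System.out.println(preferred(new double[]{5, 5}, 0.0, new double[]{1, 1}, 0.5)); // true
  }
}
```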
Other approaches for incorporating constraints into optimization are Goal Attainment [951,
2951] and Goal Programming7 [530, 531]. Especially interesting in our context are methods
which have been integrated into Evolutionary Algorithms [736, 739, 2190, 2393, 2662], such
as the popular Goal Attainment approach by Fonseca and Fleming [951] which is similar to
the Pareto-MOI we have adopted from Pohlheim [2190]. Again, an overview on this subject
is given by Coello Coello et al. in [599].
All approaches for defining what optima are and how constraints should be considered until
now were rather specific and bound to certain mathematical constructs. The more general
concept of an External Decision Maker which (or who) decides which candidate solutions
prevail has been introduced by Fonseca and Fleming [952, 954].
One of the ideas behind “externalizing” the assessment process on what is good and
what is bad is that Pareto optimization as well as, for instance, the Method of Inequalities,
impose rigid, immutable partial orders8 on the candidate solutions. The first drawback of
such a partial order is that elements may exists which neither succeed nor precede each
other. As we have seen in the examples discussed in Section 3.3.5, there can, for instance,
be two candidate solutions x1 , x2 ∈ X with neither x1 ⊣ x2 nor x2 ⊣ x1 . Since there can be
arbitrarily many such elements, the optimization process may run into a situation where it
set of currently discovered “best” candidate solutions X grows quickly, which leads to high
memory consumption and declining search speed.
The second drawback of only using rigid partial orders is that they make no statement
about the quality relations within X. There is no “best of the best”. We cannot decide which
of two candidate solutions not dominating each other is the most interesting one for further
7 http://en.wikipedia.org/wiki/Goal_programming [accessed 2007-07-03]
8 A definition of partial order relations is specified in Definition D51.16 on page 645.
investigation. Also, the plain Pareto relation does not allow us to dispose of the “weakest”
element in X in order to save memory, since all elements of X are alike.
One example for introducing a quality measure additional to the Pareto relation is the
Pareto ranking which we already used in Example E3.19 and which we will discuss later
in Section 28.3.3 on page 275. Here, the utility of a candidate solution is defined by the
number of other candidate solutions it is dominated by. Into this number, additional density
information can be incorporated which deems areas of X that have only sparsely been
sampled as more interesting than those which have already been investigated extensively.
While such ordering methods are good default approaches, able to direct the search
into the direction of the Pareto frontier and to deliver a broad scan of it, they neglect the
fact that the user of the optimization most often is not interested in the whole optimal
set but has preferences for certain regions of interest [955]. This region would exclude, for
example, the infeasible (but Pareto optimal) programs for the Artificial Ant as discussed in
Section 3.3.5.1. What the user wants is a detailed scan of these areas, which often cannot
be delivered by pure Pareto optimization.
Figure 3.14: An external decision maker providing an Evolutionary Algorithm with utility
values.
Here the External Decision Maker, an expression of the user’s preferences [949], comes
into play, as illustrated in Figure 3.14. The task of this decision maker is to provide a cost
function u : Rn ↦ R (or utility function, if the underlying optimizer is maximizing) which
maps the space of objective values (Rn ) to the real numbers R. Since there is a total order
defined on the real numbers, this process is another way of resolving the “incomparability-
situation”. The structure of the decision making process u can freely be defined and may
incorporate any of the previously mentioned methods. u could, for example, be reduced
to compute a weighted sum of the objective values, to perform an implicit Pareto ranking,
or to compare individuals based on pre-specified goal-vectors. Furthermore, it may even
incorporate forms of artificial intelligence, other forms of multi-criterion Decision Making,
and even interaction with the user (similar to Example E2.6 on page 48, for instance). This
technique allows focusing the search onto solutions which are not only good in the Pareto
sense, but also feasible and interesting from the viewpoint of the user.
Fonseca and Fleming make a clear distinction between fitness and cost values. Cost values
have some meaning outside the optimization process and are based on user preferences.
Fitness values on the other hand are an internal construct of the search with no meaning
outside the optimizer (see Definition D5.1 on page 94 for more details). If External Decision
Makers are applied in Evolutionary Algorithms or other search paradigms based on fitness
measures, these will be computed using the values of the cost function instead of the objective
functions [949, 950, 956].
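The decision maker's role can be sketched as wrapping an arbitrary cost function u into a comparator that imposes a total order on objective vectors; the weighted-sum exampleCost below is only an illustrative stand-in for actual user preferences:

```java
import java.util.Comparator;
import java.util.function.ToDoubleFunction;

// Minimal sketch of an External Decision Maker: any cost function
// u : R^n -> R makes objective vectors totally ordered and thereby resolves
// the "incomparability situation" of the plain Pareto relation.
public final class DecisionMaker {
  /** Example cost function u; assumed to be subject to minimization. */
  static double exampleCost(double[] fx) { return 1.0 * fx[0] + 2.0 * fx[1]; }

  /** Wrap a cost function into a comparator: lower cost ranks first. */
  static Comparator<double[]> byCost(ToDoubleFunction<double[]> u) {
    return Comparator.comparingDouble(u);
  }

  public static void main(String[] args) {
    Comparator<double[]> cmp = byCost(DecisionMaker::exampleCost);
    // u({1,1}) = 3 < u({4,0}) = 4, so the first vector is preferred
    System.out.println(cmp.compare(new double[]{1, 1}, new double[]{4, 0}) < 0); // true
  }
}
```

Any of the previously discussed schemes, from weighted sums to implicit Pareto ranking, could be plugged in as u without changing the optimizer itself.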
We have now discussed various ways to define what optima in terms of multi-objective
optimization are and how to steer the search process in their direction. Let us subsume all
of them in a general approach. From the concept of Pareto optimization to the Method of
Inequalities, the need to compare elements of the problem space in terms of their quality
as solutions for a given problem winds like a red thread through this matter. Even the
weighted sum approach and the External Decision Maker do nothing else than mapping
multi-dimensional vectors to the real numbers in order to make them comparable.
If we compare two candidate solutions x1 and x2 , either x1 is better than x2 , vice versa,
or the result is undecidable. In the latter case, we assume that both are of equal quality.
Hence, there are three possible relations between two elements of the problem space. These
three results can be expressed with a comparator function cmpf .
We provide a Java interface for the functionality of such comparator functions in Listing 55.15
on page 719. From the three defining equations, many features of “cmp” can be deduced.
It is, for instance, transitive, i. e., cmp(a1 , a2 ) < 0 ∧ cmp(a2 , a3 ) < 0 ⇒ cmp(a1 , a3 ) < 0.
Provided with the knowledge of the objective functions f ∈ f , such a comparator function
cmpf can be imposed on the problem spaces of our optimization problems:
The subscript f in cmpf illustrates that the comparator has access to all the values of the
objective functions in addition to the problem space elements which are its parameters. As
shortcut for this comparator function, we introduce the prevalence notation as follows:
It is easy to see that we can fit lexicographic orderings, Pareto domination relations, the
Method of Inequalities-based comparisons, as well as the weighted sum combination of ob-
jective values easily into this notation. Together with the fitness assignment strategies for
Evolutionary Algorithms which will be introduced later in this book (see Section 28.3 on
page 274), it covers many of the most sophisticated multi-objective techniques that are pro-
posed, for instance, in [952, 1531, 2662]. By replacing the Pareto approach with prevalence
comparisons, all the optimization algorithms (especially many of the evolutionary tech-
niques) relying on domination relations can be used in their original form while offering the
additional ability of scanning special regions of interests of the optimal frontier.
9 Partial orders are introduced in Definition D51.15 on page 645.
Since the comparator function cmpf and the prevalence relation impose a partial order
on the problem space X like the domination relation does, we can construct the optimal set
X ⋆ containing globally optimal elements x⋆ in a way very similar to Equation 3.23:
x⋆ ∈ X ⋆ ⇔ ¬∃x ∈ X : x 6= x⋆ ∧ x ≺ x⋆ (3.38)
For illustration purposes, we will exercise the prevalence approach on the examples of lexico-
graphic optimality (see Section 3.3.2) and define the comparator function cmpf ,lex . For the
weighted sum method with the weights wi (see Section 3.3.3) we similarly provide cmpf ,ws
and the domination-based Pareto optimization with the objective directions ωi as defined
in Section 3.3.5 can be realized with cmpf ,⊣ .
cmpf ,lex (x1 , x2 ) =
−1 if ∃π ∈ Π(1.. |f |) : (x1 <lex,π x2 ) ∧ ¬∃π ∈ Π(1.. |f |) : (x2 <lex,π x1 )
1 if ∃π ∈ Π(1.. |f |) : (x2 <lex,π x1 ) ∧ ¬∃π ∈ Π(1.. |f |) : (x1 <lex,π x2 )
0 otherwise (3.39)
cmpf ,ws (x1 , x2 ) = ∑_{i=1}^{|f |} (wi fi (x1 ) − wi fi (x2 )) ≡ ws(x1 ) − ws(x2 ) (3.40)
cmpf ,⊣ (x1 , x2 ) = −1 if x1 ⊣ x2 ; 1 if x2 ⊣ x1 ; 0 otherwise (3.41)
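For instance, cmpf,⊣ from Equation 3.41 might be realized as follows, with objective vectors (assumed to be minimized) standing in for the candidate solutions:

```java
// One way to realize the comparator cmp_{f,dom} of Equation 3.41: -1 if x1
// dominates x2, 1 if x2 dominates x1, 0 otherwise (incomparable or equal).
public final class PrevalenceComparator {
  static boolean dominates(double[] a, double[] b) {
    boolean better = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;   // worse in some objective: no domination
      if (a[i] < b[i]) better = true;  // strictly better at least once
    }
    return better;
  }

  static int cmp(double[] x1, double[] x2) {
    if (dominates(x1, x2)) return -1;
    if (dominates(x2, x1)) return  1;
    return 0; // incomparable or equal: treated as equally good
  }

  public static void main(String[] args) {
    System.out.println(cmp(new double[]{1, 1}, new double[]{2, 2})); // -1 (x1 dominates)
    System.out.println(cmp(new double[]{1, 3}, new double[]{3, 1})); // 0  (incomparable)
  }
}
```

Swapping this comparator for, say, a weighted-sum or MOI-based one changes the search direction without touching the optimization algorithm itself, which is exactly the point of the prevalence formalism.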
Later in this book, we will discuss some of the most popular optimization strategies. Al-
though they are usually implemented based on Pareto optimization, we will always introduce
them using prevalence.
Tasks T3
13. In Example E1.2 on page 26, we stated the layout of circuits as a combinatorial opti-
mization problem. Name at least three criteria which could be subject to optimization
or constraints in this scenario.
[6 points]
14. The job shop scheduling problem was outlined in Example E1.3 on page 26. The most
basic goal when solving job shop problems is to meet all production deadlines. Would
you formulate this goal as a constraint or rather as an objective function? Why?
[4 points]
15. Let us consider the university life of a student as an optimization problem with the goal
of obtaining good grades in all the subjects she attends. How could we formulate an
objective function for this purpose?
Additionally, phrase at least two constraints in daily life which, for example, prevent
the student from learning 24 hours per day.
[10 points]
16. Consider a bi-objective optimization problem with X = R as problem space and the two
objective functions f1 (x) = e|x−3| and f2 (x) = |x − 100| both subject to minimization.
What is your opinion about the viability of solving this problem with the weighted
sum approach with both weights set to 1?
[5 points]
18. In your opinion, is it good if most of the candidate solutions under investigation
by a Pareto-based optimization algorithm are non-dominated? For answering this
question, consider an optimization process that decides which individual should be
used for further investigation solely on basis of its objective values and, hence, the
Pareto dominance relation. Consider the extreme case that all points known to such
an algorithm are non-dominated to answer the question.
Based on your answer, make a statement about the expected impact of the fact dis-
cussed in Task 17 on the quality of solutions that a Pareto-based optimizer can deliver
for rising dimensionality of the objective space, i. e., for increasing values of n = |f |.
[10 points]
19. Use Equation 3.1 to Equation 3.5 from page 52 to determine all extrema (optima) for
In Section 2.2, we gave various examples for optimization problems, ranging from simple
real-valued problems (Example E2.1) and antenna design (Example E2.3) to tasks in aircraft
development (Example E2.5). Obviously, the problem spaces X of these problems are quite
different. A solution of the stone’s throw task, for instance, was a real number of meters per
second. An antenna can be described by its blueprint which is structured very differently
from the blueprints of aircraft wings or rotors.
Optimization algorithms usually start out with a set of either predefined or automatically
generated elements from X which are far from being optimal (otherwise, optimization would
not make much sense). From these initial solutions, new elements from X are explored
in the hope to improve the solution quality. In order to find these new candidate solutions,
search operators are used which modify or combine the elements already known.
Since the problem spaces of the three aforementioned examples are different, this would
mean that different sets of search operators are required, too. It would, however, be cumber-
some to develop search operations time and again for each new problem space we encounter.
Such an approach would not only be error-prone, it would also make it very hard to formulate
general laws and to consolidate findings.
Instead, we often reuse well-known search spaces for many different problems. Once
the problem space is defined, we will always look for ways to map it to such a search space
for which search operators are already available. This is not always possible, but in most of
the problems we will encounter, it will save a lot of effort.
Example E4.1 (Search Spaces for Example E2.1, E2.3, and E2.5).
The stone’s throw as well as many antenna and wing blueprints, for instance, can all be
represented as vectors of real numbers. For the stone’s throw, these vectors would have
length one (Fig. 4.1.a). An antenna consisting of n segments could be encoded as a vector of
length 3n, where the element at index i specifies a length and the elements at i + 1 and i + 2
are angles denoting into which direction the segment is bent, as sketched in Fig. 4.1.b. The
aircraft wing could be represented as a vector of length 3n, too, where the element at index i
is the segment width and i + 1 and i + 2 denote its disposition and height (Fig. 4.1.c).
82 4 SEARCH SPACE AND OPERATORS: HOW CAN WE FIND IT?
Fig. 4.1.a: Stone’s throw. Fig. 4.1.b: Antenna design. Fig. 4.1.c: Wing design.
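As a small illustration, such a flat real-vector encoding could be written down as follows. This is only a sketch: the class name, the per-segment layout (length at index 3i, the two angles at 3i + 1 and 3i + 2), and the double[][] input format are our own assumptions and not taken from the book’s listings.

```java
// Sketch: encode n antenna segments as a flat genotype vector of length 3n.
public class SegmentEncoding {
  // segments[i] = { length, firstAngle, secondAngle } for segment i (assumed layout).
  public static double[] encode(double[][] segments) {
    double[] g = new double[3 * segments.length];
    for (int i = 0; i < segments.length; i++) {
      g[3 * i] = segments[i][0];     // segment length
      g[3 * i + 1] = segments[i][1]; // first bend angle
      g[3 * i + 2] = segments[i][2]; // second bend angle
    }
    return g;
  }
}
```

The wing encoding mentioned above would work the same way, only with width, disposition, and height taking the three slots per segment.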
Definition D4.1 (Search Space). The search space G (genome) of an optimization prob-
lem is the set of all elements g which can be processed by the search operations.
Definition D4.5 (Locus). The locus5 is the position where a specific gene can be found
in a genotype [634].
Figure 4.2: The relation of genome, genes, and the problem space.
Figure 4.2 illustrates how this can be achieved by using two bits for each of x[0] and x[1].
The first gene resides at locus 0 and encodes x[0], whereas the two bits at locus 1 stand for
x[1]. Four bits suffice for encoding the two natural numbers in X1 = X2 = 0..3 and thus, for the
whole candidate solution (phenotype) x. With this trivial encoding, the Genetic Algorithm
can be applied to the problem and will quickly find the solution x = (0, 3).
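In Java, this decoding step could be sketched as follows. The class and method names are our own, as is the bit order within a gene (most significant bit first); the book’s actual interfaces appear in Chapter 55.

```java
// Sketch of the two-gene binary encoding discussed with Figure 4.2.
public class TwoGeneGpm {
  // Decode a 4-bit genotype: bits 0-1 form the gene at locus 0 (value of x[0]),
  // bits 2-3 form the gene at locus 1 (value of x[1]).
  public static int[] gpm(boolean[] g) {
    int x0 = (g[0] ? 2 : 0) + (g[1] ? 1 : 0); // allele at locus 0
    int x1 = (g[2] ? 2 : 0) + (g[3] ? 1 : 0); // allele at locus 1
    return new int[] { x0, x1 };
  }
}
```

With this decoding, the genotype 0011 maps to the phenotype x = (0, 3) mentioned in the text.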
Search operations are used by an optimization algorithm in order to explore the search space.
n is an operator-specific value, its arity6 . An operation with n = 0, i. e., a nullary operator,
receives no parameters and may, for instance, create randomly configured genotypes (see,
for instance, Listing 55.2 on page 708). An operator with n = 1 could randomly modify
5 http://en.wikipedia.org/wiki/Locus_%28genetics%29 [accessed 2007-07-03]
6 http://en.wikipedia.org/wiki/Arity [accessed 2008-02-15]
its parameter – the mutation operator in Evolutionary Algorithms is an example for this
behavior.
The key idea of all search operations which take at least one parameter (n > 0) (see
Listing 55.3) is that
1. it is possible to take a genotype g of a candidate solution x = gpm(g) with good
objective values f (x) and to
2. modify this genotype g to get a new one g ′ = searchOp(g, . . .) which
3. can be translated to a phenotype x′ = gpm(g ′ ) with similar or maybe even better
objective values f (x′ ).
This assumption is one of the most basic concepts of iterative optimization algorithms.
It is known as causality or locality and discussed in-depth in Section 14.2 on page 161
and Section 24.9 on page 218. If it does not hold, then applying a small change to a genotype
leads to large changes in its utility. Instead of modifying a genotype, we could then as well
create a new, random one directly.
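For a real-valued search space, a unary operation obeying this locality idea could be sketched as follows. The Gaussian step and the parameter sigma are illustrative choices of ours, similar in spirit to the mutation operators of Evolutionary Algorithms mentioned above.

```java
import java.util.Random;

// Sketch: a unary search operation that applies a small random change,
// so that g' stays close to g (the causality/locality assumption).
public class GaussianMutation {
  private static final Random RND = new Random();

  // Returns g' = g + N(0, sigma^2) per element; the input genotype is not modified.
  public static double[] searchOp(double[] g, double sigma) {
    double[] out = g.clone();
    for (int i = 0; i < out.length; i++) {
      out[i] += sigma * RND.nextGaussian();
    }
    return out;
  }
}
```

Choosing sigma small keeps offspring in the neighborhood of the parent; choosing it very large would approach the behavior of a purely random (nullary) operation.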
Many operations which have n > 1 try to combine the genotypes they receive in the
input. This is done in the hope that positive characteristics of them can be joined into an
even better offspring genotype. An example for such operations is the crossover operator in
Genetic Algorithms. Differential Evolution utilizes an operator with n = 3, i. e., which takes
three arguments. Here, the idea often is that good characteristics can, step by step, be
accumulated and united by an optimization process. This Building Block Hypothesis is discussed
in detail in Section 29.5.5 on page 346, where criticism of this point of view is provided as well.
Search operations often are randomized, i. e., they are not functions in the mathematical
sense. Instead, the return values of two successive invocations of one search operation with
exactly the same parameters may be different. Since optimization processes quite often use
more than a single search operation at a time, we define the set Op.
Definition D4.7 (Search Operation Set). Op is the set of search operations searchOp ∈
Op which are available to an optimization process. Op(g1 , g2 , .., gn ) denotes the application
of any n-ary search operation in Op to the genotypes g1 , g2 , · · · ∈ G.
∀g1 , g2 , .., gn ∈ G : Op(g1 , g2 , .., gn ) ∈ G if an n-ary operation exists in Op, and Op(g1 , g2 , .., gn ) = ∅ otherwise (4.2)
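In code, Op can be modeled as a collection of operations plus a decision rule. The following sketch, with names of our own choosing, uses the simplest decision scheme mentioned below: a uniformly random choice among the available unary operations.

```java
import java.util.List;
import java.util.Random;
import java.util.function.UnaryOperator;

// Sketch: a set Op of unary search operations; Op(g) picks one operation
// uniformly at random and applies it to the genotype g.
public class OpSet {
  private static final Random RND = new Random();

  public static int[] apply(List<UnaryOperator<int[]>> ops, int[] g) {
    UnaryOperator<int[]> chosen = ops.get(RND.nextInt(ops.size()));
    return chosen.apply(g);
  }
}
```

A schedule-based or adaptive decision process would replace only the selection line; the surrounding structure stays the same.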
It should be noted that, if the applied n-ary search operation from Op is a randomized
operator, the behavior of Op(. . .) is obviously randomized as well. Furthermore,
if more than one n-ary operation is contained in Op, the optimization algorithm using Op
usually performs some form of decision on which operation to apply. This decision may
either be random or follow a specific schedule and the decision making process may even
change during the course of the optimization process. Writing Op(g) in order to denote the
application of a unary search operation from Op to the genotype g ∈ G is thus a simplified
expression which encompasses both, possible randomness in the search operations and in
the decision process on which operation to apply.
With Opk (g1 , g2 , . . .) we denote k successive applications of (possibly different) search
operators (with respect to their arities). The k-th application receives all possible results of
the (k−1)-th application of the search operators as parameter, i. e., Opk is defined recursively as

Opk (g1 , g2 , . . .) = ⋃_{∀G ∈ Π(P(Opk−1 (g1 ,g2 ,...)))} Op(G) (4.3)

where Op0 (g1 , g2 , . . .) = {g1 , g2 , . . . }, P (A) is the power set of a set A, and Π(A) is the set
of all possible permutations of the elements of A.
If the parameters are left out, i. e., just Opk is written, this chain has to start with
a search operation of zero arity. In the style of Badea and Stanciu [177] and Skubch
[2516, 2517], we can now define:
4.3. THE CONNECTION BETWEEN SEARCH AND PROBLEM SPACE 85
Definition D4.8 (Completeness). A set Op of search operations “searchOp” is complete
if and only if every point g1 in the search space G can be reached from every other point
g2 ∈ G by applying only operations searchOp ∈ Op.
∀g1 , g2 ∈ G ⇒ ∃k ∈ N1 : g1 ∈ Opk (g2 ) (4.4)
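For a finite search space, completeness in the sense of Equation 4.4 can be checked mechanically by a reachability search. The following sketch, a construction of our own, verifies that the single-bit-flip operation is complete on G = B3 by encoding each bit string of length 3 as an integer from 0 to 7.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch: breadth-first reachability under the unary single-bit-flip operation
// on the search space of bit strings of length 3 (encoded as integers 0..7).
public class CompletenessCheck {
  public static Set<Integer> reachable(int start) {
    Set<Integer> seen = new HashSet<>();
    Deque<Integer> open = new ArrayDeque<>();
    open.add(start);
    seen.add(start);
    while (!open.isEmpty()) {
      int g = open.poll();
      for (int bit = 0; bit < 3; bit++) {
        int g2 = g ^ (1 << bit); // apply searchOp: flip exactly one bit
        if (seen.add(g2)) open.add(g2);
      }
    }
    return seen;
  }
}
```

If reachable(g) covers the whole search space for every start point g, the operation set is complete according to Equation 4.4.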
A weakly complete set of search operations usually contains at least one operation without
parameters, i. e., of zero arity. If at least one such operation exists and this operation can
potentially return all possible genotypes, then the set of search operations is already weakly
complete. However, nullary operations are often only used in the beginning of the search
process (see, for instance, the Hill Climbing Algorithm 26.1 on page 230). If Op achieved its
weak completeness only because of a nullary search operation, the initialization of the algorithm
would determine which solutions can be found and which cannot. If the set of search operations
is strongly complete, the search process, in theory, can find any solution regardless of the
initialization.
From the opposite point of view, this means that if the set of search operations is not
complete, there are elements in the search space which cannot be reached from a random
starting point. If it is not even weakly complete, parts of the search space can never be
discovered. Then, the optimization process is probably not able to explore the problem
space adequately and may not find optimal or satisfactorily good solutions.
∀g ∈ G ∃x ∈ X : gpm(g) = x (4.7)
In Listing 55.7 on page 711, you can find a Java interface which resembles Equation 4.7. The
only hard criterion we impose on genotype-phenotype mappings in this book is left-totality,
i. e., that they map each element of the search space to at least one candidate solution. The
GPM may be a functional relation if it is deterministic. Nevertheless, it is possible to create
mappings which involve random numbers and hence, cannot be considered to be functions
in the mathematical sense of Section 51.10 on page 646. In this case, we would rewrite
Equation 4.7 as Equation 4.8:
Genotype-phenotype mappings should further be surjective [2247], i. e., relate at least one
genotype to each element of the problem space. Otherwise, some candidate solutions can
never be found and evaluated by the optimization algorithm even if the search operations
are complete. Then, there is no guarantee whether the solution of a given problem can
be discovered or not. If a genotype-phenotype mapping is injective, which means that it
assigns distinct phenotypes to distinct elements of the search space, we say that it is free from
redundancy. There are different forms of redundancy, some are considered to be harmful for
the optimization process, others have positive influence8 . Most often, GPMs are not bijective
(since they are neither necessarily injective nor surjective). Nevertheless, if a genotype-
phenotype mapping is bijective, we can construct an inverse mapping gpm−1 : X 7→ G.
Based on the genotype-phenotype mapping and the definition of adjacency in the search
space given earlier in this section, we can also define a neighboring relation for the problem
space, which, of course, is also not necessarily symmetric.
Definition D4.12 (Neighboring (Problem Space)). Under the condition that the can-
didate solution x1 ∈ X was obtained from the element g1 ∈ G by applying the genotype-
phenotype mapping, i. e., x1 = gpm(g1 ), a point x2 is ε-neighboring to x1 if it can be reached
by applying a single search operation and a subsequent genotype-phenotype mapping to g1
with at least a probability of ε ∈ [0, 1). Equation 4.10 provides a general definition of the
neighboring relation. If the genotype-phenotype mapping is deterministic (which usually is
the case), this definition boils down to Equation 4.11.
neighboringε (x2 , x1 ) ⇔ Σ_{∀g2 : adjacent0 (g2 ,g1 )} P (Op(g1 ) = g2 ) · P (x2 = gpm(g2 ) | Op(g1 ) = g2 ) > ε, with x1 = gpm(g1 ) (4.10)

neighboringε (x2 , x1 ) ⇔ Σ_{∀g2 : adjacent0 (g2 ,g1 ) ∧ x2 = gpm(g2 )} P (Op(g1 ) = g2 ) > ε, with x1 = gpm(g1 ) (4.11)
7 See Equation 51.29 on page 644 to Point 5 for an outline of the properties of binary relations.
8 See Section 16.5 on page 174 for more information.
4.3. THE CONNECTION BETWEEN SEARCH AND PROBLEM SPACE 87
The condition x1 = gpm(g1 ) is relevant if the genotype-phenotype mapping “gpm” is not
injective (see Equation 51.31 on page 644), i. e., if more than one genotype can map to
the same phenotype. Then, the set of ε-neighbors of a candidate solution x1 depends
on how it was discovered, on the genotype g1 it was obtained from, as can be seen in
Example E4.3. Example E5.1 on page 96 gives an account of how the search operations and,
thus, the definition of neighborhood, influence the performance of an optimization algorithm.
Figure 4.3: The problem space X = {0, 1, 2} (the natural numbers 0, 1, and 2), the search space G = B3 = {0, 1}3 (the bit strings of length 3), and the genotype-phenotype mapping gpm(g) = (g1 + g2 ) ∗ g3 , with the genotypes gA = 010 and gB = 110 highlighted.
Let us imagine that we have a very simple optimization problem where the problem space
X only consists of the three numbers 0, 1, and 2, as sketched in Figure 4.3. Imagine
further that we would use the bit strings of length three as search space, i. e., G = B3 =
{0, 1}3 . As genotype-phenotype mapping gpm : G 7→ X, we apply the function
x = gpm(g) = (g1 + g2 ) ∗ g3 . The two genotypes gA = 010 and gB = 110 both map to 0
since (0 + 1) ∗ 0 = (1 + 1) ∗ 0 = 0.
If the only search operation available would be a single-bit flip mutation, then all geno-
types which have a Hamming distance of 1 are adjacent in the search space. This means
that gA = 010 is adjacent to 110, 000, and 011 while gB = 110 is adjacent to 010, 100, and
111.
The candidate solution x = 0 is neighboring to 0 and 1 if it was obtained by applying the
genotype-phenotype mapping to gA = 010. With one search step, gpm(110) = gpm(000) =
0 and gpm(011) = 1 can be reached. However, if x = 0 was the result of applying the
genotype-phenotype mapping to gB = 110, its neighborhood is {0, 2}, since gpm(010) =
gpm(100) = 0 and gpm(111) = 2.
Hence, the neighborhood relation under non-injective genotype-phenotype mappings de-
pends on the genotypes from which the phenotypes under investigation originated. For
injective genotype-phenotype mappings, this is not the case.
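The neighborhoods computed in this example can be checked with a few lines of code. The following sketch uses illustrative names and is restricted to the single-bit-flip operation of the example.

```java
import java.util.Set;
import java.util.TreeSet;

// Sketch of Example E4.3: gpm(g) = (g1 + g2) * g3 over bit strings of length 3,
// with single-bit flips as the only search operation.
public class NeighborhoodExample {
  public static int gpm(int[] g) {
    return (g[0] + g[1]) * g[2];
  }

  // Phenotypes reachable from g by one single-bit flip plus the GPM.
  public static Set<Integer> neighbors(int[] g) {
    Set<Integer> n = new TreeSet<>();
    for (int i = 0; i < g.length; i++) {
      int[] g2 = g.clone();
      g2[i] ^= 1; // flip exactly one bit
      n.add(gpm(g2));
    }
    return n;
  }
}
```

Running this on gA = 010 yields the neighborhood {0, 1}, and on gB = 110 the neighborhood {0, 2}, exactly as derived in the example.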
If the search space G and the problem space X are the same (G = X) and the genotype-
phenotype mapping is the identity mapping, it follows that g2 = x2 and the sum in Equa-
tion 4.11 breaks down to P (Op(g1 ) = g2 ). Then the neighboring relation (defined on the
problem space) equals the adjacency relation (defined on the search space), as can be seen
when taking a look at Equation 4.6. Since the identity mapping is injective, the condition
x1 = gpm(g1 ) is always true, too.
In the following, we will use neighboringε to denote an ε-neighborhood relation, implicitly
assuming the dependencies on the search space or that the GPM is injective. Furthermore,
with neighboring(x1 , x2 ), we denote that x1 and x2 are neighbors for ε = 0.
Figure 4.4: An objective function f over the problem space X with a locally optimal plateau X⋆l and a set XN of worse neighboring elements.
be found. Obviously, every global optimum is also a local optimum. In the case of single-
objective minimization, every global minimum is also a local minimum.
In order to define what a local minimum is, as a first step, we could say “A (local)
minimum x̌l ∈ X of one (objective) function f : X 7→ R is an input element with f(x̌l ) < f(x)
for all x neighboring to x̌l .” However, such a definition would disregard locally optimal
plateaus, such as the set Xl⋆ of candidate solutions sketched in Figure 4.4.
If we simply exchange the “<” in f(x̌l ) < f(x) with a “≤” and obtain f(x̌l ) ≤ f(x) however,
we would classify the elements in XN as a local optima as well. This would contradict the
intuition, since they are clearly worse than the elements on the right side of XN . Hence, a
more fine-grained definition is necessary.
Similar to Handl et al. [1178], let us start by creating a definition of areas in the search
space based on the specification of ε-neighboring (Definition D4.12).
Definition D4.14 (ε-Local Minimal Set). An ε-local minimal set X̌l ⊆ X under objective
function f and problem space X is a connected set for which Equation 4.14 holds.

neighboringε (x, x̌l ) ⇒ [(x ∈ X̌l ) ∧ (f(x) = f(x̌l ))] ∨ [(x ∉ X̌l ) ∧ (f(x̌l ) < f(x))] ∀x̌l ∈ X̌l , x ∈ X (4.14)
An ε-local minimum x̌l is an element of an ε-local minimal set. We will use the term local
minimum set and local minimum to denote scenarios where ε = 0.
Definition D4.15 (ε-Local Maximal Set). An ε-local maximal set X̂l ⊆ X under objective
function f and problem space X is a connected set for which Equation 4.15 holds.

neighboringε (x, x̂l ) ⇒ [(x ∈ X̂l ) ∧ (f(x) = f(x̂l ))] ∨ [(x ∉ X̂l ) ∧ (f(x̂l ) > f(x))] ∀x̂l ∈ X̂l , x ∈ X (4.15)
Again, an ε-local maximum x̂l is an element of an ε-local maximal set and we will use the
term local maximum set and local maximum to indicate scenarios where ε = 0.
Similar to the case of local maxima and minima, we will again use the terms ε-local optimum,
local optimum set, and local optimum.
As basis for the definition of local optima in multi-objective problems we use the prevalence
relations and the concepts provided by Handl et al. [1178]. In Section 3.5.2, we showed
that these encompass the other multi-objective concepts as well. Therefore, in the following
definition, “prevails” can as well be read as “dominates”, which is probably the most common
use.
Again, we use the term ε-local optimum to denote candidate solutions which are part of
ε-local optimal sets. If ε is omitted in the name, ε = 0 is assumed. See, for example, Exam-
ple E13.4 on page 158 for an example of local optima in multi-objective problems.
4.5 Further Definitions
In order to ease the discussion of different Global Optimization algorithms, we furthermore
define the data structure individual, which is the combination of the genotype and phenotype of
an individual.
Besides this basic individual structure, many practical realizations of optimization algo-
rithms use extended variants of such a data structure to store additional information like
the objective and fitness values. Then, we will consider individuals as tuples in G × X × W,
where W is the space of the additional information – W = Rn × R, for instance. Such en-
dogenous information [302] often represents local optimization strategy parameters. It then
usually coexists with global (exogenous) configuration settings. In the algorithm definitions
later in this book, we will often access the phenotypes p.x without explicitly referring to
the genotype-phenotype mapping “gpm”, since “gpm” is already implicitly included in the
relation of p.x and p.g according to Definition D4.18. Again relating to the notation used
in Genetic Algorithms, we will refer to lists of such individual records as populations.
20. Assume that we are car salesmen and search for an optimal configuration for a car.
We have a problem space X = X1 × X2 × X3 × X4 where
Define a proper search space for this problem along with a unary search operation and
a genotype-phenotype mapping.
[10 points]
21. Define a nullary search operation which creates new random points in the n-dimensional
search space G = {0, 1, 2}n . A nullary search operation has no parameters
(see Section 4.2). {0, 1, 2}n denotes a search space which consists of strings of the
length n where each element of the string is from the set {0, 1, 2}. An example for
such a string for n = 5 is (1, 2, 1, 1, 0). A nullary search operation here would thus just
create a new, random string of this type.
[5 points]
22. Define a unary search operation which randomly modifies a point in the n-dimensional
search space G = {0, 1, 2}n . See Task 21 for a description of the search space. A unary
search operation for G = {0, 1, 2}n has one parameter, one string of length n where
each element is from the set {0, 1, 2}. A unary search operation takes such a string
g ∈ G and modifies it. It returns a new string g ′ ∈ G which is a bit different from g.
This modification has to be done in a random way, by adding, subtracting, multiplying,
or permutating g randomly, for example.
[5 points]
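One possible shape of such operations for Tasks 21 and 22 — a sketch, not the only valid answer — could look as follows; the replace-one-element mutation is just one of the random modifications the task permits, and all names are our own.

```java
import java.util.Random;

// Sketch: nullary and unary search operations on G = {0, 1, 2}^n.
public class TernaryStringOps {
  private static final Random RND = new Random();

  // Nullary operation (Task 21): a new uniformly random string from {0, 1, 2}^n.
  public static int[] create(int n) {
    int[] g = new int[n];
    for (int i = 0; i < n; i++) {
      g[i] = RND.nextInt(3);
    }
    return g;
  }

  // Unary operation (Task 22): copy g and replace one randomly chosen element
  // with one of the two other values, so the result differs in exactly one position.
  public static int[] mutate(int[] g) {
    int[] g2 = g.clone();
    int i = RND.nextInt(g2.length);
    g2[i] = (g2[i] + 1 + RND.nextInt(2)) % 3;
    return g2;
  }
}
```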
23. Define a possible unary search operation which modifies a point g ∈ R2 and which
could be used for minimizing a black box objective function f : R2 7→ R.
[10 points]
24. When trying to find the minimum of a black box objective function f : R3 7→ R, we
could use the same search space and problem space G = X = R3 . Instead, we could
use G = R2 and apply a genotype-phenotype mapping gpm : R2 7→ R3 . For example,
we could set gpm(g) = (g[0], g[1], g[0] + g[1])T for g = (g[0], g[1])T . By this approach, our
optimization algorithm would only face a two-dimensional problem instead of a three-
dimensional one. What do you think about this idea? Is it good?
[5 points]
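The mapping suggested in Task 24 can be written down directly (a sketch with illustrative names):

```java
// Sketch of the embedding GPM from Task 24: gpm : R^2 -> R^3,
// mapping (g[0], g[1]) to (g[0], g[1], g[0] + g[1]).
public class EmbeddingGpm {
  public static double[] gpm(double[] g) {
    return new double[] { g[0], g[1], g[0] + g[1] };
  }
}
```

Note that the image of this mapping is only a plane inside R3, which is exactly the point the task asks you to reflect on.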
25. Discuss some basic assumptions usually underlying the design of unary and binary
search operations.
[5 points]
26. Define a search space and search operations for the Traveling Salesman Problem as
discussed in Example E2.2 on page 45.
[10 points]
27. We can define a search space G = (1..n)n for a Traveling Salesman Problem with n
cities. This search space would clearly be numerical since G ⊆ Nn1 ⊆ Rn . Discuss
whether it makes sense to use a pure numerical optimization algorithm (with
operations similar to the one you defined in Task 23, for example) to find solutions to
the Traveling Salesman Problem in this search space.
[5 points]
28. Assume that you want to solve an optimization problem defined over an n-dimensional
vector of natural numbers where each element of the vector stems from the natural
interval9 a..b. In other words, the problem space X is a subset of the n-dimensional
vectors of natural numbers (i. e., X ⊆ Nn0 ); more specifically: X = (a..b)n . You have an optimization
algorithm available which is suitable for continuous optimization problems defined over
m-dimensional real vectors where each vector element stems from the real interval10
[c, d], i. e., which deals with search spaces G ⊆ Rm ; more specifically G = [c, d]m . m, c,
and d can be chosen freely. n, a, and b are fixed constants that are specific to the
optimization problem you have.
How can you solve your optimization problem with the available algorithm? Provide an
implementation of the necessary module in a programming language of your choice.
The implementation should obey the interface definitions in Chapter 55. If you
choose a programming language different from Java, you should give the methods
names similar to those used in that section and write comments relating your code to
the interfaces given there.
[10 points]
9 The natural interval a..b contains only the natural numbers from a to b. For example 1..5 = {1, 2, 3, 4, 5}
and π 6∈ 1..4 .
10 The real interval [c, d] contains all the real numbers between c and d. For example π ∈ [1, 4] and π 6∈ [1, 3].
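One possible solution sketch for Task 28: choose m = n, c = 0, d = 1, and let a genotype-phenotype mapping scale each real gene into the natural interval a..b. The names below are illustrative and do not follow the exact interfaces of Chapter 55.

```java
// Sketch: GPM from G = [0, 1]^n (real vectors) to X = (a..b)^n (natural vectors).
public class RealToNaturalGpm {
  public static int[] gpm(double[] g, int a, int b) {
    int[] x = new int[g.length];
    for (int i = 0; i < g.length; i++) {
      // Scale [0, 1] onto the b - a + 1 natural values; the clamp guards
      // against the boundary case g[i] == 1.0 mapping to b + 1.
      int v = a + (int) Math.floor(g[i] * (b - a + 1));
      x[i] = Math.min(v, b);
    }
    return x;
  }
}
```

With this mapping, the continuous optimizer searches in [0, 1]^n while the objective function is always evaluated on valid natural vectors from (a..b)^n.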
A very powerful metaphor in Global Optimization is the fitness landscape1 . Like many other
abstractions in optimization, fitness landscapes have been developed and extensively been
researched by evolutionary biologists [701, 1030, 1500, 2985]. Basically, they are visualiza-
tions of the relationship between the genotypes or phenotypes in a given population and
their corresponding reproduction probability. The idea of such visualizations goes back to
Wright [2985], who used level contour diagrams in order to outline the effects of selection,
mutation, and crossover on the capabilities of populations to escape locally optimal config-
urations. Similar abstractions arise in many other areas [2598], like in the physics of disordered
systems such as spin glasses [311, 1879], for instance.
In Chapter 28, we will discuss Evolutionary Algorithms, which are optimization methods
inspired by natural evolution. The Evolutionary Algorithm research community has widely
adopted the fitness landscape as the relation between individuals and their objective values [865,
1912]. Langdon and Poli [1673]2 explain that fitness landscapes can be imagined as a view
on a countryside from far above. Here, the height of each point is analogous to its objective
value. An optimizer can then be considered as a short-sighted hiker who tries to find the
lowest valley or the highest hilltop. Starting from a random point on the map, she wants to
reach this goal by walking the minimum distance.
Evolutionary Algorithms were initially developed as single-objective optimization meth-
ods. Then, the objective values were directly used as fitness and the “reproduction prob-
ability”, i. e., the chance of a candidate solution for being subject to further investigation,
was proportional to them. In multi-objective optimization applications with more sophisti-
cated fitness assignment and selection processes, this simple approach does not reflect the
biological metaphor correctly anymore.
In the context of this book, we therefore deviate from this view. Since it would possibly
be confusing for the reader if we used a different definition for fitness landscapes than the
rest of the world, we introduce the new term problem landscape and keep using the term
fitness landscape in the traditional manner. In Figure 11.1 on page 140, you can find some
examples for fitness landscapes.
The fitness mapping v is usually built by incorporating the information of a set of indi-
viduals, i. e., a population pop. This allows the search process to compute the fitness as
a relative utility rating. In multi-objective optimization, a fitness assignment procedure
v = assignFitness(pop, cmpf ) as defined in Definition D28.6 on page 274 is typically used
to construct v based on the set pop of all individuals which are currently under investigation
and a comparison criterion cmpf (see Definition D3.20 on page 77). This procedure is
usually repeated in each iteration t of the optimizer, and different fitness assignments
vt=1 , vt=2 , . . . result.
We can detect a similarity between a fitness function v and a heuristic function φ as
defined in Definition D1.14 on page 34 and the more precise Definition D46.5 on page 518.
Both of them give indicators of the utility of p which, most often, are only valid and make
sense only within the optimization process. Both measures reflect some ranking criterion
used to pick the point(s) in the search space which should be investigated further in the next
iteration. The difference between fitness functions and heuristics is that heuristics usually
are based on some insights about the concrete structure of the optimization problem whereas
a fitness assignment process obtains its information primarily from the objective values of
the individuals. They are thus more abstract and general. Also, as mentioned before, the
fitness can change during the optimization process and is often a relative measure, whereas
heuristic values for a fixed point in the search space are usually constant and independent
of the structure of the other candidate solutions under investigation.
Although it is a bit counter-intuitive that a fitness function is built repeatedly and is not
a constant mapping, this is indeed the case in most multi-objective optimization situations.
In practical implementations, the individual records are often extended to also store the
fitness values. If we refer to v(p) multiple times in an algorithm for the same p and pop,
this should be understood as access to such a cached value which is actually computed only
once. This is indeed a good implementation decision, since some fitness assignment processes
3 http://en.wikipedia.org/wiki/Fitness_(genetic_algorithm) [accessed 2008-08-10]
5.3. PROBLEM LANDSCAPES 95
have a complexity of O(len(pop)²) or higher, because they may involve comparing each
individual with every other one in pop as well as computing density information or clustering.
The term fitness has been borrowed from biology4 [2136, 2542] by the Evolutionary
Algorithms community. As already mentioned, when the first applications of Genetic Algorithms
were developed, the focus was mainly on single-objective optimization. Back then, they
called this single function the fitness function and thus, set objective value ≡ fitness value. This
point of view is obsolete in principle, yet you will find many contemporary publications that
use this notion. This is partly due to the fact that in simple problems with only one objective
function, the old approach of using the objective values directly as fitness, i. e., v(p) = f(p.x),
can sometimes actually be applied. In multi-objective optimization processes, this is not
possible and fitness assignment processes like those which we are going to elaborate on
in Section 28.3 on page 274 are used instead.
In the context of this book, fitness is subject to minimization, i. e., elements with smaller
fitness are “better” than those with higher fitness. Although this definition differs from the
biological perception of fitness, it complies with the idea that optimization algorithms are to
find the minima of mathematical functions (if nothing else has been stated). It makes fitness
and objective values compatible in that it allows us to consider the fitness of a multi-objective
problem to be the objective value of a single-objective one.
This definition of problem landscape is very similar to the performance measure definition
used by Wolpert and Macready [2963, 2964] in their No Free Lunch Theorem (see Chapter 23
on page 209). In our understanding, problem landscapes are not only closer to the original
meaning of fitness landscapes in biology, they also have another advantage. According to
this definition, all entities involved in an optimization process over a finite problem space
directly influence the problem landscape. The choice of the search operations in the search
space G, the way the initial elements are picked, the genotype-phenotype mapping, the
objective functions, the fitness assignment process, and the way individuals are selected for
further exploration all have an impact on the probability of discovering certain candidate solutions
– the problem landscape Φ.
Problem landscapes are essentially some form of cumulative distribution functions
(see Definition D53.18 on page 658). As such, they can be considered as the counterparts
of the Markov chain models for the search state used in convergence proofs such as those
in [2043], which resemble probability mass/density functions. On the basis of its cumulative
character, we can furthermore make the following assumptions about Φ(x, τ ):
Φ(x, τ1 ) ≤ Φ(x, τ2 ) ∀τ1 < τ2 ∧ x ∈ X, τ1 , τ2 ∈ N1 (5.2)
0 ≤ Φ(x, τ ) ≤ 1 ∀x ∈ X, τ ∈ N1 (5.3)
4 http://en.wikipedia.org/wiki/Fitness_%28biology%29 [accessed 2008-02-22]
96 5 FITNESS AND PROBLEM LANDSCAPE: HOW DOES THE OPTIMIZER SEE IT?
Example E5.1 (Influence of Search Operations on Φ).
Let us now give a simple example for problem landscapes and how they are influenced by
the optimization algorithm applied to them. Figure 5.1 illustrates one objective function,
defined over a finite subset X of the two-dimensional real plane, which we are going to
optimize. In this example, we use the problem space X also as search space G, i. e., the
genotype-phenotype mapping is the identity mapping. For optimization, we will use a very
simple Hill Climbing algorithm5 , which initially draws one candidate solution with
uniform probability from X. In each iteration, it creates a new candidate solution from
the current one by using a unary search operation. The old and the new candidate are
compared, and the better one is kept. Hence, we do not need to differentiate between fitness
and objective values. In the example, better means lower fitness.
In Figure 5.1, we can spot one local optimum x⋆l and one global optimum x⋆ . Between
them, there is a hill, an area of very bad fitness. The rest of the problem space exhibits a
small gradient into the direction of its center. The optimization algorithm will likely follow
this gradient and sooner or later discover x⋆l or x⋆ . The chances of finding x⋆l first are higher,
since it is closer to the center of X and thus has a bigger basin of attraction.
With this setting, we have recorded the traces of two experiments with 1.3 million runs of
the optimizer (8000 iterations each). From these records, we can approximate the problem
landscapes very well.
In the first experiment, depicted in Figure 5.2, the Hill Climbing optimizer used a search
operation searchOp1 : X 7→ X which creates a new candidate solution according to a (discretized)
normal distribution around the old one. We divided X into a regular lattice and,
in the second experiment series, we used an operator searchOp2 : X 7→ X which produces
candidate solutions that are direct neighbors of the old ones in this lattice. The problem
landscape Φ produced by this operator is shown in Figure 5.3. Both operators are complete,
since each point in the search space can be reached from each other point by applying them.
In both experiments, the probabilities of the elements in the problem space of being discov-
ered are very low, near zero, and rather uniformly distributed in the first few iterations.
To be precise, since our problem space is a 36 × 36 lattice, this probability is exactly 1/36²
in the first iteration. Starting with the tenth or so evaluation, small peaks begin to form
around the places where the optima are located. These peaks then begin to grow.
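The two operators can be sketched as follows. This is only an illustration of the idea: the standard deviation of the discretized Gaussian jump, the clipping at the lattice border, and the class and method names are our own assumptions, not the settings used in the recorded experiments.

```java
import java.util.Random;

/** Sketch of the two unary search operators from the experiment: a discretized
 *  Gaussian jump (searchOp1) and a direct lattice-neighbor step (searchOp2),
 *  both on a 36x36 lattice with coordinates clipped to the grid. */
public class SearchOps {
    static final int SIZE = 36;
    static final Random RND = new Random();

    static int clip(int v) { return Math.max(0, Math.min(SIZE - 1, v)); }

    /** searchOp1: rounds a normally distributed offset around the old point. */
    public static int[] searchOp1(int[] g) {
        return new int[] { clip(g[0] + (int) Math.round(RND.nextGaussian() * 2.0)),
                           clip(g[1] + (int) Math.round(RND.nextGaussian() * 2.0)) };
    }

    /** searchOp2: moves to one of the (up to) four direct lattice neighbors. */
    public static int[] searchOp2(int[] g) {
        int[][] moves = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        int[] m = moves[RND.nextInt(moves.length)];
        return new int[] { clip(g[0] + m[0]), clip(g[1] + m[1]) };
    }

    public static void main(String[] args) {
        int[] g = { 18, 18 };
        System.out.println(java.util.Arrays.toString(searchOp1(g)));
        System.out.println(java.util.Arrays.toString(searchOp2(g)));
    }
}
```

The difference discussed below follows directly from these definitions: searchOp1 can, with small probability, jump anywhere in the lattice, while searchOp2 can only ever reach a point far away by a chain of many single steps.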
5 Hill Climbing algorithms are discussed thoroughly in Chapter 26.
5.3. PROBLEM LANDSCAPES 97
Figure 5.2: The problem landscape of the example problem derived with searchOp1 (subfigures Fig. 5.2.a to Fig. 5.2.l show Φ(x, t) over X for t = 1, 2, 5, 10, 50, 100, 500, 1000, 2000, 4000, 6000, and 8000).
Figure 5.3: The problem landscape of the example problem derived with searchOp2 (subfigures Fig. 5.3.a to Fig. 5.3.l show Φ(x, t) over X for t = 1, 2, 5, 10, 50, 100, 500, 1000, 2000, 4000, 6000, and 8000).
Since the local optimum x⋆l resides at the center of a large basin and the gradients point
straight into its direction, it has a higher probability of being found than the global optimum
x⋆ . The difference between the two search operators tested becomes obvious starting with
approximately the 2000th iteration. In the process with the operator utilizing the normal
distribution, the Φ value of the global optimum rises farther and farther, finally surpassing
that of the local optimum. Even if the optimizer gets trapped in the local optimum, it will
still eventually discover the global optimum, and if we ran this experiment longer, the
corresponding probability would converge to 1. The reason for this is that with
the normal distribution, all points in the search space have a non-zero probability of being
found from all other points in the search space. In other words, all elements of the search
space are adjacent.
The operator based on the uniform distribution is only able to create candidate solutions
in the direct neighborhood of the known points. Hence, if an optimizer gets trapped in the
local optimum, it can never escape. If it arrives at the global optimum, it will never discover
the local one. In Fig. 5.3.l, we can see that Φ(x⋆l , 8000) ≈ 0.7 and Φ(x⋆ , 8000) ≈ 0.3. In
other words, only one of the two points will be the result of the optimization process whereas
the other optimum remains undiscovered.
From this example, we can draw the following conclusions:
1. Optimization algorithms discover good elements with higher probability than elements
with bad characteristics, given the problem is defined in a way which allows this to
happen.
2. The success of optimization depends very much on the way the search is conducted,
i. e., the definition of neighborhoods of the candidate solutions (see Definition D4.12
on page 86).
3. It also depends on the time (or the number of iterations) the optimizer is allowed to use.
4. Hill Climbing algorithms are not Global Optimization algorithms, since they have no
general means of preventing themselves from getting stuck at local optima.
Tasks T5
29. What are the similarities and differences between fitness values and traditional
heuristic values?
[10 points]
30. For many (single-objective) applications, a fitness assignment process which assigns
the rank of an individual p in a population pop according to the objective value f(p.x)
of its phenotype p.x as its fitness v(p) is highly useful. Specify an algorithm for such
an assignment process which sorts the individuals in a list (population pop) according
to their objective value and assigns the index of an individual in this sorted list as its
fitness.
Should the individuals be sorted in ascending or descending order if f is subject to
maximization and the fitness v is subject to minimization?
[10 points]
Chapter 6
The Structure of Optimization: Putting it together.
After discussing the single entities involved in optimization, it is time to put them together
to the general structure common to all optimization processes, to an overall picture. This
structure consists of the previously introduced well-defined spaces and sets as well as the
mappings between them. One example for this structure of optimization processes is given in
Figure 6.1 by using a Genetic Algorithm which encodes the coordinates of points in a plane
into bit strings as an illustration. The same structure was already used in Example E4.2.
At the lowest level, there is the search space G inside which the search operations navigate.
Optimization algorithms can usually only process a subset of this space at a time. The
elements of G are mapped to the problem space X by the genotype-phenotype mapping
“gpm”. In many cases, “gpm” is the identity mapping (if G = X) or a rather trivial, injective
transformation. Together with the elements under investigation in the search space, the
corresponding candidate solutions form the population pop of the optimization algorithm.
A population may contain the same individual record multiple times if the algorithm
permits this. Some optimization methods only process one candidate solution at a time.
In this case, the size of pop is one and a single variable instead of a list data structure is
used to hold the element’s data. Having determined the corresponding phenotypes of the
genotypes produced by the search operations, the objective function(s) can be evaluated.
In the single-objective case, a single value f(p.x) determines the utility of p.x. In case
of a multi-objective optimization, on the other hand, a vector f (p.x) of objective values
is computed for each candidate solution p.x in the population. Based on these objective
values, the optimizer may compute a real fitness v(p) denoting how promising an individual
p is for further investigation, i. e., as parameter for the search operations, relative to the
other known candidate solutions in the population pop. Either based on such a fitness value,
directly on the objective values, or on other criteria, the optimizer will then decide how to
proceed, how to generate new points in G.
102 6 THE STRUCTURE OF OPTIMIZATION: PUTTING IT TOGETHER.
[Figure 6.1: the structure of optimization processes, illustrated with a Genetic Algorithm. Fitness values v : G × X → R+ are computed by a fitness assignment process from the objective values f1(x) of the individuals in the population Pop of genotype-phenotype pairs from G × X; the genotypes, bit strings such as 0000, 0001, 1000, and 1001, stem from the search space G. Fitness and heuristic values normally have a meaning only in the context of a population or a set of candidate solutions.]
Many optimization algorithms are general enough to work on different search spaces and
utilize various search operations. Others can only be applied to a distinct class of search
spaces or have fixed operators. Each optimization algorithm is defined by its specification
that states which operations have to be carried out in which order. Definition D6.2 provides
us with a mathematical expression for one specific application of an optimization algorithm
to a specific optimization problem instance.
This setup, together with the specification of the algorithm, entirely defines the behavior of
the optimizer “OptAlgo”.
Apart from this very mathematical definition, we provide a Java interface which unites the
functionality and features of optimization algorithms in Section 55.1.5 on page 713. Here,
the functionality (finding a list of good individuals) and the properties (the objective func-
tion(s), the search operations, the genotype-phenotype mapping, the termination criterion)
characterize an optimizer. This perspective comes closer to the practitioner’s point of view
whereas Definition D6.3 gives a good summary of the theoretical idea of optimization.
If an optimization algorithm can effectively be applied to an optimization problem, it
will find at least one local optimum x⋆l if granted infinite processing time and if such an
optimum exists. Usual prerequisites for effective problem solving are a weakly complete
set of search operations Op, a surjective genotype-phenotype mapping “gpm”, and that the
objective functions f do provide more information than needle-in-a-haystack configurations
(see Section 16.7 on page 175).
∃x⋆l ∈ X : limτ→∞ Φ(x⋆l , τ ) = 1 (6.1)
In other words, if we have a differentiable smooth function with low total variation2 [2224],
the gradient would be the perfect hint for the optimizer, giving the best direction into which
the search should be steered. What many optimization algorithms which treat objective
functions as black boxes do is to use rough estimations of the gradient for this purpose. The
Downhill Simplex (see Chapter 43 on page 501), for example, uses a polytope consisting of
n + 1 points in G = Rn for approximating the direction with the steepest descent.
In most cases, we cannot directly differentiate the objective functions with the Nabla op-
erator3 ∇f because its exact mathematical characteristics are not known or too complicated.
Also, the search space G is often not a vector space over the real numbers R.
Thus, samples of the search space are used to approximate the gradient. If we compare
two elements g1 and g2 via their corresponding phenotypes x1 = gpm(g1 ) and x2 = gpm(g2 )
and find x1 ≺ x2 , we can assume that there is some sort of gradient facing upwards from
x1 to x2 and hence, from g1 to g2 . When descending this gradient, we can hope to find an
x3 with x3 ≺ x1 and finally the global minimum.
1 http://en.wikipedia.org/wiki/Gradient [accessed 2007-11-06]
2 http://en.wikipedia.org/wiki/Total_variation [accessed 2009-07-01]
3 http://en.wikipedia.org/wiki/Del [accessed 2008-02-15]
6.3. OTHER GENERAL FEATURES 105
6.3.2 Iterations
Global optimization algorithms often proceed iteratively and step by step evaluate candi-
date solutions in order to approach the optima. We distinguish between evaluations τ and
iterations t.
Definition D6.5 (Evaluation). The value τ ∈ N0 denotes the number of candidate solu-
tions for which the set of objective functions f has been evaluated.
Algorithms are referred to as iterative if most of their work is done by cyclic repetition of
one main loop. In the context of this book, an iterative optimization algorithm starts with
the first step t = 0. The value t ∈ N0 is the index of the iteration currently performed by
the algorithm and t + 1 refers to the following step. One example of an iterative algorithm
is Algorithm 6.1. In some optimization algorithms, such as Genetic Algorithms, iterations
are referred to as generations.
There often exists a well-defined relation between the number of performed evaluations
τ of candidate solutions and the index of the current iteration t in an optimization pro-
cess: Many Global Optimization algorithms create and evaluate exactly a certain number
of individuals per generation.
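Under the assumption that exactly ps individuals are created and evaluated per generation, this relation can be written down directly. The method names below are illustrative:

```java
/** Sketch of the relation between evaluations and iterations for algorithms
 *  that evaluate exactly ps individuals per generation: after finishing
 *  iteration t (counted from 0), tau = ps * (t + 1) evaluations were performed. */
public class EvaluationCounter {
    public static long evaluationsAfterIteration(long ps, long t) {
        return ps * (t + 1);
    }

    /** Inverse direction: the (0-based) iteration during which evaluation tau happens. */
    public static long iterationOfEvaluation(long ps, long tau) {
        return (tau - 1) / ps;
    }

    public static void main(String[] args) {
        // 100 individuals per generation, 10 generations -> 1000 evaluations.
        System.out.println(evaluationsAfterIteration(100, 9));
        System.out.println(iterationOfEvaluation(100, 1000));
    }
}
```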
In Listing 55.9 on page 712, you can find a Java interface resembling Definition D6.7. Some
possible criteria that can be used to decide whether an optimizer should terminate or not
are [2160, 2622, 3088, 3089]:
1. The user may grant the optimization algorithm a maximum computation time. If this
time has been exceeded, the optimizer should stop. Here we should note that the time
needed for evaluating single individuals may vary, and so will the times needed for
iterations. Hence, this time threshold sometimes cannot be abided by exactly and may
lead to different total numbers of iterations during multiple runs of the same algorithm.
2. Instead of specifying a time limit, an upper limit for the number of iterations t̆ or
individual evaluations τ̆ may be specified. Such criteria are most interesting for the
researcher, since she often wants to know whether a qualitatively interesting solution
can be found for a given problem using at most a predefined number of objective
function evaluations. In Listing 56.53 on page 836, we give a Java implementation of
the terminationCriterion operation which can be used for that purpose.
4 http://en.wikipedia.org/wiki/Iteration [accessed 2007-07-03]
3. An optimization process may be stopped when no significant improvement in the
solution quality was detected for a specified number of iterations. Then, the process
most probably has converged to a (hopefully good) solution and will most likely not
be able to make further progress.
4. If we optimize something like a decision maker or classifier based on a sample data
set, we will normally divide this data into a training and a test set. The training set is
used to guide the optimization process whereas the test set is used to verify its results.
We can compare the performance of our solution when fed with the training set to
its properties if fed with the test set. This comparison helps us detect when most
probably no further generalization can be achieved by the optimizer and we should
terminate the process.
5. Obviously, we can terminate an optimization process if it has already yielded a suffi-
ciently good solution.
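A sketch of how several of the above criteria can be combined into one termination test is given below. All thresholds, names, and the concrete stagnation rule are illustrative assumptions, not the interface from Listing 55.9:

```java
/** Sketch of a combined termination criterion: stop when a time budget or an
 *  evaluation budget is exhausted, or when no improvement of the best objective
 *  value was seen for a given number of iterations. */
public class TerminationCriterion {
    private final long endTimeMillis;
    private final long maxEvaluations;
    private final int maxStagnantIterations;

    private long evaluations = 0;
    private int stagnantIterations = 0;
    private double bestSoFar = Double.POSITIVE_INFINITY;

    public TerminationCriterion(long budgetMillis, long maxEvaluations,
                                int maxStagnantIterations) {
        this.endTimeMillis = System.currentTimeMillis() + budgetMillis;
        this.maxEvaluations = maxEvaluations;
        this.maxStagnantIterations = maxStagnantIterations;
    }

    /** Call once per iteration with the best objective value found so far
     *  and the number of evaluations performed in this iteration. */
    public boolean shouldTerminate(double bestObjective, long newEvaluations) {
        evaluations += newEvaluations;
        if (bestObjective < bestSoFar) {
            bestSoFar = bestObjective;
            stagnantIterations = 0;   // improvement: reset the stagnation counter
        } else {
            stagnantIterations++;
        }
        return System.currentTimeMillis() >= endTimeMillis
            || evaluations >= maxEvaluations
            || stagnantIterations >= maxStagnantIterations;
    }

    public static void main(String[] args) {
        TerminationCriterion tc = new TerminationCriterion(60_000, 1000, 50);
        System.out.println(tc.shouldTerminate(5.0, 100));
    }
}
```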
In practical applications, we can apply any combination of the criteria above in order to
determine when to halt. Algorithm 1.1 on page 40 is an example of a badly designed termi-
nation criterion: the algorithm will not halt until a globally optimal solution is found. From
its structure, however, it is clear that it is very prone to premature convergence (see Chap-
ter 13 on page 151) and thus may get stuck at a local optimum. Then, it will run forever
even if an optimal solution exists. This example also shows another thing: The structure
of the termination criterion decides whether a probabilistic optimization technique is a Las
Vegas or Monte Carlo algorithm (see Definition D12.2 and D12.3). How the termination
criterion may be tested in an iterative algorithm is illustrated in Algorithm 6.1.
6.3.4 Minimization
The original form of many optimization algorithms was developed for single-objective op-
timization. Such algorithms may be used for both minimization and maximization. As
previously mentioned, we will present them as minimization processes, since this is the most
commonly used notation, without loss of generality. An algorithm that maximizes the func-
tion f may be transformed into a minimizing one by using −f instead.
Note that, using the prevalence comparisons introduced in Section 3.5.2 on page 76,
multi-objective optimization processes can be transformed into single-objective minimization
processes. Then, x1 ≺ x2 ⇔ cmpf (x1 , x2 ) < 0 holds.
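Both transformations can be sketched compactly. The class and method names here are hypothetical:

```java
import java.util.Comparator;
import java.util.function.Function;

/** Sketch of the two reductions: maximizing f equals minimizing -f, and a
 *  comparator cmpf turns "which solution is better" into a single-objective
 *  minimization decision. */
public class MinimizationTransforms {
    /** Wraps a function subject to maximization into one subject to minimization. */
    public static <X> Function<X, Double> negate(Function<X, Double> f) {
        return x -> -f.apply(x);
    }

    /** cmpf(x1, x2) < 0 iff x1 is better, i.e., has the lower objective value. */
    public static <X> Comparator<X> cmpf(Function<X, Double> f) {
        return (x1, x2) -> Double.compare(f.apply(x1), f.apply(x2));
    }

    public static void main(String[] args) {
        Function<Double, Double> gain = x -> 10.0 - x * x;   // to be maximized
        Function<Double, Double> loss = negate(gain);        // to be minimized
        // x = 0 maximizes gain, so it is the better solution under cmpf(loss).
        System.out.println(cmpf(loss).compare(0.0, 2.0) < 0);
    }
}
```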
While there are a lot of problems where the objective functions are mathematical expressions
that can directly be computed, there exist problem classes far away from such simple function
optimization that require complex models and simulations.
Definition D6.8 (Model). A model5 is an abstraction or approximation of a system that
allows us to reason and to deduce properties of the system.
Models are often simplifications or idealizations of real-world issues. They are defined by
leaving out facts that probably have only minor impact on the conclusions drawn from
them. In the area of Global Optimization, we often need two types of abstractions:
1. The models of the potential solutions shape the problem space X. Examples are
(a) programs in Genetic Programming, for example for the Artificial Ant problem,
(b) construction plans of a skyscraper,
(c) distributed algorithms represented as programs for Genetic Programming,
(d) construction plans of a turbine,
(e) circuit diagrams for logical circuits, and so on.
2. Models of the environment in which we can test and explore the properties of the
potential solutions, like
(a) a map on which the Artificial Ant will move driven by an optimized program,
(b) an abstraction from the environment in which the skyscraper will be built, with
wind blowing from several directions,
(c) a model of the network in which the evolved distributed algorithms can run,
(d) a physical model of air which blows through the turbine,
(e) the model of an energy source and the other pins which will be attached to the circuit,
together with the possible voltages on these pins.
Models themselves are rather static structures of descriptions and formulas. Deriving con-
crete results (objective values) from them is often complicated. It often makes more sense
to bring the construction plan of a skyscraper to life in a simulation. Then we can test the
influence of various wind forces and directions on the building's structure and approximate
the properties which define the objective values.
Simulations are executable, live representations of models that can be as meaningful as real
experiments. They allow us to reason if a model makes sense or not and how certain objects
behave in the context of a model.
There are many highly efficient algorithms for solving special problems. For finding
the shortest paths from one source node to the other nodes in a network, for example, one
would never use a metaheuristic but Dijkstra’s algorithm [796]. For computing the maximum
flow in a flow network, the algorithm by Dinitz [800] beats any probabilistic optimization
method. The key to winning with metaheuristics is to know when to apply them and when not to.
The first step before using them is always an internet or literature search in order to check
for alternatives.
If this search did not lead to the discovery of a suitable method to solve the problem,
a randomized or traditional optimization technique can be used. Therefore, the following
steps should be performed in the order they are listed below and sketched in Figure 7.2.
0. Check for possible alternatives to using optimization algorithms!
1. Define the data structures for the possible solutions, the problem space X (see Sec-
tion 2.1).
2. Define the objective functions f ∈ f which are subject to optimization (see Section 2.2).
3. Define the equality constraints h and inequality constraints g, if needed (see Sec-
tion 3.4).
4. Define a comparator function cmpf (x1 , x2 ) able to decide which one of two candidate
solutions x1 and x2 is better (based on f , and the constraints hi and gi , see Sec-
tion 3.5.2). In the unconstrained, single-objective case which is most common, this
function will just select the one individual with the lower objective value.
5. Decide upon the optimization algorithm to be used for the optimization problem (see
Section 1.3).
6. Choose an appropriate search space G (see Section 4.1). This decision may be directly
implied by the choice in Point 5.
110 7 SOLVING AN OPTIMIZATION PROBLEM
Figure 7.2: The steps of solving an optimization problem as a flowchart: determine the data structure for solutions, the problem space X; define the optimization criteria and constraints (f : X → Rn; v(p ∈ Pop, Pop) → R+); run the algorithm and check whether the results are satisfying; if so, perform a real experiment consisting of enough runs (e.g., 30 runs for each configuration and problem instance) to get statistically significant results; finally, in the production stage, integrate the optimizer into the real application.
7. Define a genotype-phenotype mapping “gpm” between the search space G and the
problem space X (see Section 4.3).
10. Run a small-scale test with the algorithm and the settings and check whether they
approximately meet the expectations. If not, go back to Point 9, 5, or even Point 1,
depending on how bad the results are. Otherwise, continue at Point 11.
11. Conduct experiments with some different configurations and perform many runs for
each configuration. Obtain enough results to make statistical comparisons between
the configurations and with other algorithms (see, e.g., Section 53.7). If the results are
bad, again consider going back to a previous step.
12. Now either the results of the optimizer can be used and published or the developed
optimization system can become part of an application (see Part V and [2912] as an
example), depending on the goal of the work.
Tasks T7
31. Why is it important to always look for traditional dedicated and deterministic alterna-
tives before trying to solve an optimization problem with a metaheuristic optimization
algorithm?
[5 points]
32. Why is a single experiment with a randomized optimization algorithm not enough to
show that this algorithm is good (or bad) for solving a given optimization task? Why
do we need comprehensive experiments instead, as pointed out in Point 11 on the
previous page?
[5 points]
Chapter 8
Baseline Search Patterns
Metaheuristic optimization algorithms trade in the guaranteed optimality of their results for
shorter runtime. This means that their results can be (but not necessarily are) worse than
the global optima of a given optimization problem. The results are acceptable as long as the
user is satisfied with them. An interesting question, however, is: When is an optimization
algorithm suitable for a given problem? The answer to this question is strongly related to
how well it can utilize the information it gains during the optimization process in order to
find promising regions in the search space.
The primary information source for any optimizer is the (set of) objective functions.
Hence, it makes sense to declare an optimization algorithm as efficient if it can outperform
primitive approaches which do not utilize the information gained from evaluating candidate
solutions with objective functions, such as the random sampling and random walk methods
which will be introduced in this chapter. The comparison criterion should be the solution
quality in terms of objective values.
A third baseline algorithm is exhaustive enumeration, the testing of all possible solutions. It
does not utilize the information gained from the objective functions either. It can be used
as a yardstick in terms of optimization speed.
Notice that x is the only variable used for information aggregation and that there is
no form of feedback. The search makes no use of the knowledge about good structures of
candidate solutions nor does it further investigate a genotype which has produced a good
phenotype. Instead, if “create” samples the search space G in a uniform, unbiased way,
random sampling will be completely unbiased as well.
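A minimal sketch of this scheme, assuming the "create" operation is supplied as a function and a single objective f is to be minimized, is given below. All names and the toy objective in main are illustrative:

```java
import java.util.Random;
import java.util.function.Function;
import java.util.function.Supplier;

/** Sketch of random sampling: repeatedly draw unrelated genotypes via a
 *  "create" operation and keep the best one; no feedback flows between samples. */
public class RandomSampling {
    public static <G> G sample(Supplier<G> create, Function<G, Double> f, int maxSamples) {
        G best = create.get();
        for (int i = 1; i < maxSamples; i++) {
            G x = create.get();                  // each sample ignores all previous ones
            if (f.apply(x) < f.apply(best)) best = x;
        }
        return best;
    }

    public static void main(String[] args) {
        Random rnd = new Random(123);
        Supplier<Double> create = () -> rnd.nextDouble() * 10.0; // uniform over [0, 10)
        double best = sample(create, x -> Math.abs(x - 7.0), 1000);
        System.out.println("best = " + best);
    }
}
```

Note that although the loop remembers the best sample, the objective values never influence where the next sample is drawn; that is exactly the absence of feedback discussed above.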
This unbiasedness and ignorance of available information turns random sampling into
one of the baseline algorithms used for checking whether an optimization method is suitable
for a given problem. If an optimization algorithm does not perform better than random
sampling on a given optimization problem, we can deduce that (a) it is not able to utilize the
information gained from the objective function(s) efficiently (see Chapter 14 and 16), (b) its
search operations cannot modify existing genotypes in a useful manner (see Section 14.2),
or (c) the information given by the objective functions is largely misleading (see Chapter 15).
It is well known from the No Free Lunch Theorem (see Chapter 23) that, averaged over the set
of all possible optimization problems, no optimization algorithm can beat random sampling.
However, for specific families of problems and for most practically relevant problems in
general, outperforming random sampling is not too complicated. Still, a comparison to this
primitive algorithm is necessary whenever a new, not yet analyzed optimization task is to
be solved.
Like random sampling, random walks1 [899, 1302, 2143] (sometimes also called drunkard's
walks) can be considered uninformed, undirected search methods. They form the
second of the two baseline algorithms to which optimization methods must be compared
for specific problems in order to test their utility. We specify the random walk algorithm in
Algorithm 8.2 and implement it in Listing 56.5 on page 732.
While random sampling algorithms make use of no information gathered so far, random
walks base their next “search move” on the most recently investigated genotype p.g. Instead
of sampling the search space uniformly as random sampling does, a random walk starts at one
(possibly random) point. In each iteration, it uses a unary search operation (in Algorithm 8.2
simply denoted as searchOp(p.g)) to move to the next point.
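This structure can be sketched in a few lines. The sketch below is not the implementation from Listing 56.5 on page 732; the trace-recording and the 1-d step operation in main are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.UnaryOperator;

/** Sketch of a random walk: unlike random sampling, each step is derived from
 *  the most recently visited point, but no objective-value feedback is used. */
public class RandomWalk {
    public static <G> List<G> walk(G start, UnaryOperator<G> searchOp, int steps) {
        List<G> trace = new ArrayList<>();
        G current = start;
        trace.add(current);
        for (int i = 0; i < steps; i++) {
            current = searchOp.apply(current);  // always accepted, better or worse
            trace.add(current);
        }
        return trace;
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        // 1-d walk with axis-parallel steps of length 1, as in Fig. 8.1.a.
        List<Integer> trace = walk(0, x -> x + (rnd.nextBoolean() ? 1 : -1), 100);
        System.out.println("final position: " + trace.get(trace.size() - 1));
    }
}
```

The only difference to the Hill Climbing loop is that the new point is accepted unconditionally; this single change removes all exploitation of objective-function information.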
Because of this structure, random walks are quite similar in their structure to basic
optimization algorithms such as Hill Climbing (see Chapter 26). The difference is that
optimization algorithms make use of the information they gain by evaluating the candidate
1 http://en.wikipedia.org/wiki/Random_walk [accessed 2010-08-26]
8.2. RANDOM WALKS 115
solutions with the objective functions. In any useful configuration for optimization problems
which are not too ill-defined, they should thus be able to outperform random walks and
random sampling.
In cases where they cannot, applying one of these two uninformed, primitive strategies is
advised. Indeed, since optimization methods always introduce some bias into the search
process, i.e., they always try to steer into the direction of the search space where they expect
a fitness improvement, random walks or random sampling may even outperform them if the
problems are deceptive enough (see Chapter 15). Circumstances where random walks can
be the search algorithms of choice are, for example,
1. if a search algorithm encounters a state explosion because there are too many states
and transcending to them would consume too much memory.
2. In certain cases of online search it is not possible to apply systematic approaches
like breadth-first search or depth-first search. If the environment, for instance, is only
partially observable and each state transition represents an immediate interaction with
this environment, we are maybe not able to navigate to past states again. One example
for such a case is discussed in the work of Skubch [2516, 2517] about reasoning agents.
Random walks are often used in optimization theory for determining features of a fitness
landscape. Measures that can be derived mathematically from a walk model include esti-
mates for the number of steps needed until a certain configuration is found, the chance to
find a certain configuration at all, and the average difference of the objective values of two
consecutive populations. From practically running random walks, some information about
the search space can be extracted. Skubch [2516, 2517], for instance, uses the number of
encounters of certain state transitions during the walk in order to successively build a
heuristic function.
Figure 8.1 illustrates some examples for random walks. The subfigures 8.1.a to 8.1.c
each illustrate three random walks in which axis-parallel steps of length 1 are taken. The
dimensions range from 1 to 3 from the left to the right. The same goes for Fig. 8.1.d to
Fig. 8.1.f, each of which displays three random walks with Gaussian (standard-normally)
distributed step widths and arbitrary step directions.
116 8 BASELINE SEARCH PATTERNS
[Figure 8.1, subfigure captions: Fig. 8.1.a: 1d, step length 1; Fig. 8.1.b: 2d, axis-parallel, step length 1; Fig. 8.1.c: 3d, axis-parallel, step length 1; Fig. 8.1.d: 1d, Gaussian step length; Fig. 8.1.e: 2d, arbitrary angle, Gaussian step length; Fig. 8.1.f: 3d, arbitrary angle, Gaussian step length]
An adaptive walk is a theoretical optimization method which, like a random walk, usually
works on a population of size 1. It starts at a random location in the search space and
proceeds by changing (or mutating) its single individual. For this modification, three methods
are available:
1. One-mutant change: The optimization process chooses a single new individual from
the set of “one-mutant change” neighbors. These neighbors differ from the current
candidate solution in only one property, i. e., are in its ε = 0-neighborhood. If the new
individual is better, it replaces its ancestor, otherwise it is discarded.
2. Greedy dynamics: The optimization process chooses a single new individual from the
set of “one-mutant change” neighbors. If it is not better than the current candidate
solution, the search continues until a better one has been found or all neighbors have
been enumerated. The major difference to the previous form is the number of steps
that are needed per improvement.
3. Fitter Dynamics: The optimization process enumerates all one-mutant neighbors of
the current candidate solution and transcends to the best one.
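The first variant, the "one-mutant change", can be sketched on bit strings as follows. The OneMax-style toy objective and all names are our own illustrative choices:

```java
import java.util.Random;
import java.util.function.Function;

/** Sketch of the "one-mutant change" adaptive walk on bit strings: pick a random
 *  one-bit neighbor and move there only if it is better (lower objective value). */
public class AdaptiveWalk {
    public static boolean[] walk(boolean[] start, Function<boolean[], Double> f,
                                 int steps, Random rnd) {
        boolean[] current = start.clone();
        for (int i = 0; i < steps; i++) {
            boolean[] neighbor = current.clone();
            int bit = rnd.nextInt(neighbor.length);
            neighbor[bit] = !neighbor[bit];              // one-mutant neighbor
            if (f.apply(neighbor) < f.apply(current)) {  // keep only improvements
                current = neighbor;
            }
        }
        return current;
    }

    public static void main(String[] args) {
        // Toy objective: minimize the number of zero bits (OneMax-style).
        Function<boolean[], Double> f = g -> {
            double zeros = 0;
            for (boolean b : g) if (!b) zeros++;
            return zeros;
        };
        boolean[] result = walk(new boolean[16], f, 500, new Random(3));
        System.out.println("zero bits left: " + f.apply(result));
    }
}
```

The greedy-dynamics and fitter-dynamics variants differ only in how the neighbor set is enumerated before a move is made, not in this basic accept-if-better skeleton.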
From these elaborations, it becomes clear that adaptive walks are very similar to Hill Climb-
ing and Random Optimization. The major difference is that an adaptive walk is a theoreti-
cal construct that, very much like random walks, helps us to determine properties of fitness
landscapes whereas the other two are practical realizations of optimization algorithms.
Adaptive walks are a very common construct in evolutionary biology. Biological
populations have been evolving for a very long time, and so their genetic compositions are as-
sumed to be relatively converged [79, 1057]. The dynamics of such populations in near-
equilibrium states with low mutation rates can be approximated with one-mutant adaptive
walks [79, 1057, 2528].
Algorithm 8.3 contains a definition of the exhaustive search algorithm. It is based on the
assumption that the search space G can be traversed in a linear order, where the tth element
in the search space can be accessed as G[t]. The second parameter of the algorithm is the
lower limit y̆ of the objective value. If this limit is known, the algorithm may terminate
before traversing the complete search space if it discovers a global optimum with f(x) = y̆
earlier. If no lower limit for f is known, y̆ = −∞ can be provided.
It is interesting to see that for many hard problems such as NP-complete ones (Sec-
tion 12.2.2), no algorithms are known which can generally be faster than exhaustive enu-
meration. For an input size of n, algorithms for such problems need O(2ⁿ) iterations.
Exhaustive search requires exactly the same complexity. If G = Bⁿ, i. e., the bit strings of
length n, the algorithm will need exactly 2ⁿ steps if no lower limit y̆ for the value of f is
known.
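A sketch of this enumeration scheme, assuming the search space is given as an array that realizes the linear order G[t], is shown below. The class name and the toy objective are illustrative:

```java
import java.util.function.Function;

/** Sketch of exhaustive enumeration over a linearly ordered search space
 *  G[0..n-1]: every element is tested, stopping early only if the known lower
 *  limit yLower of the objective value is reached. */
public class ExhaustiveSearch {
    public static <G> G search(G[] space, Function<G, Double> f, double yLower) {
        G best = space[0];
        double bestValue = f.apply(best);
        for (int t = 1; t < space.length && bestValue > yLower; t++) {
            double y = f.apply(space[t]);   // every candidate G[t] is evaluated once
            if (y < bestValue) { best = space[t]; bestValue = y; }
        }
        return best;
    }

    public static void main(String[] args) {
        Integer[] space = { 4, -1, 7, 3, -5, 2 };
        // Minimize f(x) = x^2; the lower limit 0 allows an early exit if f(x) = 0 is hit.
        Integer best = search(space, x -> (double) (x * x), 0.0);
        System.out.println("best = " + best);
    }
}
```

If no lower limit is known, passing Double.NEGATIVE_INFINITY as yLower disables the early exit, and the loop visits all elements of the search space.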
Tasks T8
33. What are the similarities and differences between random walks, random sampling,
and adaptive walks?
[10 points]
34. Why should we consider an optimization algorithm as ineffective for a given problem
if it does not perform better than random sampling or a random walk on the problem?
[5 points]
35. Is it possible that an optimization algorithm such as, for example, Algorithm 1.1 on
page 40 may give us even worse results than a random walk or random sampling? Give
reasons. If yes, try to find a problem (problem space, objective function) where this
could be the case.
[5 points]
36. Implement random sampling for the bin packing problem as a comparison algorithm
for Algorithm 1.1 on page 40 and test it on the same problem instances used to
verify Task 4 on page 42. Discuss your results.
[10 points]
37. Compare your answers to Tasks 35 and 36. Is the experiment in Task 36 suitable
to validate your hypothesis? If not, can you find some instances of the bin packing
problem where it can be used to verify your idea? If so, apply both Algorithm 1.1
on page 40 and your random sampling implementation to these instances. Did the
experiments support your hypothesis?
[15 points]
38. Can we implement an exhaustive search pattern to find the minimum of a mathe-
matical function f defined over the real interval (−1, 1)? Answer this question both
from a theoretical and from a practical point of view.
39. Implement an exhaustive-search based optimization procedure for the bin packing
problem as a comparison algorithm for Algorithm 1.1 on page 40 and test it on the
same problem instances used to verify Task 4 on page 42 and Task 36. Discuss your results.
[10 points]
40. Implement an exhaustive search pattern to solve the instance of the Traveling Salesman
Problem given in Example E2.2 on page 45.
[10 points]
Chapter 9
Forma Analysis
The Schema Theorem has been stated for Genetic Algorithms by Holland [1250] in his seminal
work [711, 1250, 1253]. In this section, we are going to discuss it in the more general version
introduced by Radcliffe and Surry [2241–2243, 2246, 2634], as presented by Weicker [2879].
1 More information on symbolic regression can be found in Section 49.1 on page 531.
[Figure: A population Pop ⊆ X (with G = X) of eight elements G1, ..., G8, partitioned by a coloring property φ5 into the three formae Aφ5=black, Aφ5=white, and Aφ5=gray.]
Obviously, for any two candidate solutions x1 and x2 , either x1 ∼φi x2 or x1 ≁φi x2 holds.
These relations divide the search space into equivalence classes Aφi =v .
Definition D9.2 (Forma). An equivalence class Aφi =v that contains all the individuals
sharing the same characteristic v in terms of the property φi is called a forma [2241] or
predicate [2826].
The number of formae induced by a property, i. e., the number of its different characteristics,
is called its precision [2241]. Two formae Aφi =v and Aφj =w are said to be compatible, written
as Aφi =v ⊲⊳ Aφj =w , if there can exist at least one individual which is an instance of both.
Of course, two different formae of the same property φi , i. e., two different characteristics of
φi, are always incompatible.
[Figure: The functions z1(x)=x+1, z2(x)=x²+1.1, z3(x)=x+2, z4(x)=2(x+1), z5(x)=tan x, z6(x)=(sin x)(x+1), z7(x)=(cos x)(x+1), and z8(x)=(tan x)+1 as a population Pop ⊆ X (with G = X), partitioned into formae by several properties: φ1 (Aφ1=true vs. Aφ1=false), φ2 (Aφ2=true vs. Aφ2=false), and φ4 (Aφ4=0, Aφ4=1, Aφ4=1.1, Aφ4=2).]
In our initial symbolic regression example, Aφ1=true ̸⊲⊳ Aφ1=false holds, since it is not
possible that a function z contains a term x + 1 and at the same time does not contain
it. All formae of the properties φ1 and φ2, on the other hand, are compatible: Aφ1=false ⊲⊳
Aφ2=false, Aφ1=false ⊲⊳ Aφ2=true, Aφ1=true ⊲⊳ Aφ2=false, and Aφ1=true ⊲⊳ Aφ2=true all
hold. If we take φ6 into consideration, we will find that there exist some formae compatible
with some of φ2 and some that are not, like Aφ2=true ⊲⊳ Aφ6=1 and Aφ2=false ⊲⊳ Aφ6=2, but
Aφ2=true ̸⊲⊳ Aφ6=0 and Aφ2=false ̸⊲⊳ Aφ6=0.95.
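These definitions translate directly into code. The sketch below is illustrative only: the property functions `phi1` and `phi2` are simplified stand-ins for the book's φ1 and φ2, and the candidates are abstract tuples rather than actual functions.

```python
def formae(candidates, prop):
    """Partition candidates into equivalence classes A_{prop=v}: all
    candidates sharing the same characteristic v of the property prop."""
    classes = {}
    for x in candidates:
        classes.setdefault(prop(x), []).append(x)
    return classes

def compatible(candidates, prop_i, v, prop_j, w):
    """Two formae A_{prop_i=v} and A_{prop_j=w} are compatible iff at
    least one candidate is an instance of both."""
    return any(prop_i(x) == v and prop_j(x) == w for x in candidates)

# Stand-in example: each candidate is a (has_x_plus_1_term, is_periodic) tuple
candidates = [(True, True), (True, False), (False, True), (False, False)]
phi1 = lambda x: x[0]   # stand-in for "contains the term x+1"
phi2 = lambda x: x[1]   # stand-in for a second, independent property

print(formae(candidates, phi1))                        # two formae: True / False
print(compatible(candidates, phi1, True, phi2, False)) # True: one such candidate
print(compatible(candidates, phi1, True, phi1, False)) # False: same property
```

The last call illustrates the closing remark above: two different formae of the same property can never share an instance.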
The discussion of formae and their dependencies stems from the Evolutionary Algorithm
community, and there especially from the supporters of the Building Block Hypothesis.
The idea is that an optimization algorithm should first discover formae which have a good
influence on the overall fitness of the candidate solutions. The hope is that there are many
compatible ones among these formae that can gradually be combined in the search process.
In this text, we have defined formae and the corresponding terms on the basis of individ-
uals p, which are records that assign an element of the problem space p.x ∈ X to an element
of the search space p.g ∈ G. Generally, we will relax this notation and also discuss formae
directly in the context of the search space G or the problem space X, when appropriate.
Tasks T9
41. Define at least three reasonable properties on the individuals, genotypes, or phenotypes
of the Traveling Salesman Problem as introduced in Example E2.2 on page 45 and
further discussed in Task 26 on page 91.
[6 points]
42. List the relevant formae for each of the properties you have defined in Task 41 and
in Task 26 on page 91.
[12 points]
43. List all the members of your formae given in Task 42 for the Traveling Salesman Problem
instance provided in Table 2.1 on page 46.
[10 points]
Chapter 10
General Information on Optimization
10.2 Books
1. Handbook of Evolutionary Computation [171]
2. New Ideas in Optimization [638]
3. Scalable Optimization via Probabilistic Modeling – From Algorithms to Applications [2158]
4. Handbook of Metaheuristics [1068]
5. New Optimization Techniques in Engineering [2083]
6. Metaheuristics for Multiobjective Optimisation [1014]
7. Selected Papers from the Workshop on Pattern Directed Inference Systems [2869]
8. Applications of Multi-Objective Evolutionary Algorithms [597]
9. Handbook of Applied Optimization [2125]
10. The Traveling Salesman Problem: A Computational Study [118]
11. Intelligent Systems for Automated Learning and Adaptation: Emerging Trends and Applica-
tions [561]
12. Optimization and Operations Research [773]
13. Readings in Artificial Intelligence: A Collection of Articles [2872]
14. Machine Learning: Principles and Techniques [974]
15. Experimental Methods for the Analysis of Optimization Algorithms [228]
16. Evolutionary Multiobjective Optimization – Theoretical Advances and Applications [8]
17. Evolutionary Algorithms for Solving Multi-Objective Problems [599]
18. Pareto Optimality, Game Theory and Equilibria [560]
19. Pattern Classification [844]
20. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [1694]
21. Parallel Evolutionary Computations [2015]
22. Introduction to Global Optimization [2126]
23. Global Optimization Algorithms – Theory and Application [2892]
24. Computational Intelligence in Control [1922]
25. Introduction to Operations Research [580]
26. Management Models and Industrial Applications of Linear Programming [530]
27. Efficient and Accurate Parallel Genetic Algorithms [477]
28. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
29. Nonlinear Programming: Sequential Unconstrained Minimization Techniques [908]
30. Nonlinear Programming: Sequential Unconstrained Minimization Techniques [909]
31. Global Optimization: Deterministic Approaches [1271]
32. Soft Computing: Methodologies and Applications [1242]
33. Handbook of Global Optimization [1272]
34. New Learning Paradigms in Soft Computing [1430]
35. How to Solve It: Modern Heuristics [1887]
36. Numerical Optimization [2040]
37. Artificial Intelligence: A Modern Approach [2356]
38. Electromagnetic Optimization by Genetic Algorithms [2250]
39. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems [2685]
40. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations
[2961]
41. Multi-Objective Swarm Intelligent Systems: Theory & Experiences [2016]
42. Evolutionary Optimization [2394]
43. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
44. Multi-Objective Optimization in Computational Intelligence: Theory and Practice [438]
45. An Operations Research Approach to the Economic Optimization of a Kraft Pulping Process
[500]
46. Fundamental Methods of Mathematical Economics [559]
47. Einführung in die Künstliche Intelligenz [797]
48. Stochastic and Global Optimization [850]
49. Machine Learning Applications in Expert Systems and Information Retrieval [975]
50. Nonlinear and Dynamic Programming [1162]
51. Introduction to Operations Research [1232]
52. Introduction to Stochastic Search and Optimization [2561]
53. Nonlinear Multiobjective Optimization [1893]
54. Heuristics: Intelligent Search Strategies for Computer Problem Solving [2140]
55. Global Optimization with Non-Convex Constraints [2623]
56. The Nature of Statistical Learning Theory [2788]
57. Advances in Greedy Algorithms [247]
58. Multiobjective Decision Making Theory and Methodology [528]
59. Multiobjective Optimization: Principles and Case Studies [609]
60. Multi-Objective Optimization Using Evolutionary Algorithms [740]
61. Multicriteria Optimization [860]
62. Multiple Criteria Optimization: Theory, Computation and Application [2610]
63. Global Optimization [2714]
64. Soft Computing [760]
10.4 Journals
1. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics published by
IEEE Systems, Man, and Cybernetics Society
2. Information Sciences – Informatics and Computer Science Intelligent Systems Applications:
An International Journal published by Elsevier Science Publishers B.V.
3. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
4. Machine Learning published by Kluwer Academic Publishers and Springer Netherlands
5. Applied Soft Computing published by Elsevier Science Publishers B.V.
6. Soft Computing – A Fusion of Foundations, Methodologies and Applications published by
Springer-Verlag
7. Computers & Operations Research published by Elsevier Science Publishers B.V. and Perg-
amon Press
8. Artificial Intelligence published by Elsevier Science Publishers B.V.
9. Operations Research published by HighWire Press (Stanford University) and Institute for
Operations Research and the Management Sciences (INFORMS)
10. Annals of Operations Research published by J. C. Baltzer AG, Science Publishers and Springer
Netherlands
11. International Journal of Computational Intelligence and Applications (IJCIA) published by
Imperial College Press Co. and World Scientific Publishing Co.
12. ORSA Journal on Computing published by Operations Research Society of America (ORSA)
13. Management Science published by HighWire Press (Stanford University) and Institute for
Operations Research and the Management Sciences (INFORMS)
14. Journal of Artificial Intelligence Research (JAIR) published by AAAI Press and AI Access
Foundation, Inc.
15. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
16. The Journal of the Operational Research Society (JORS) published by Operations Research
Society and Palgrave Macmillan Ltd.
17. Data Mining and Knowledge Discovery published by Springer Netherlands
18. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
19. Mathematics of Operations Research (MOR) published by Institute for Operations Research
and the Management Sciences (INFORMS)
20. Journal of Global Optimization published by Springer Netherlands
21. AIAA Journal published by American Institute of Aeronautics and Astronautics
22. Journal of Artificial Evolution and Applications published by Hindawi Publishing Corporation
23. Applied Intelligence – The International Journal of Artificial Intelligence, Neural Networks,
and Complex Problem-Solving Technologies published by Springer Netherlands
24. Artificial Intelligence Review – An International Science and Engineering Journal published
by Kluwer Academic Publishers and Springer Netherlands
25. Annals of Mathematics and Artificial Intelligence published by Springer Netherlands
26. IEEE Transactions on Knowledge and Data Engineering published by IEEE Computer Soci-
ety Press
27. IEEE Intelligent Systems Magazine published by IEEE Computer Society Press
28. ACM SIGART Bulletin published by ACM Press
29. Journal of Machine Learning Research (JMLR) published by MIT Press
30. Discrete Optimization published by Elsevier Science Publishers B.V.
31. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
32. Artificial Intelligence in Engineering published by Elsevier Science Publishers B.V.
33. Engineering Optimization published by Informa plc and Taylor and Francis LLC
34. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
35. International Journal of Organizational and Collective Intelligence (IJOCI) published by Idea
Group Publishing (Idea Group Inc., IGI Global)
36. Journal of Intelligent Information Systems published by Springer-Verlag GmbH
37. International Journal of Knowledge-Based and Intelligent Engineering Systems published by
IOS Press
38. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) published by
IEEE Computer Society
39. Optimization Methods and Software published by Taylor and Francis LLC
40. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
41. Structural and Multidisciplinary Optimization published by Springer-Verlag GmbH
42. OR/MS Today published by Institute for Operations Research and the Management Sciences
(INFORMS) and Lionheart Publishing, Inc.
43. AI Magazine published by AAAI Press
44. Computers in Industry – An International, Application Oriented Research Journal published
by Elsevier Science Publishers B.V.
45. International Journal of Approximate Reasoning published by Elsevier Science Publishers B.V.
46. International Journal of Intelligent Systems published by Wiley Interscience
47. International Journal of Production Research published by Taylor and Francis LLC
48. International Transactions in Operational Research published by Blackwell Publishing Ltd
49. INFORMS Journal on Computing (JOC) published by HighWire Press (Stanford University)
and Institute for Operations Research and the Management Sciences (INFORMS)
50. OR Spectrum – Quantitative Approaches in Management published by Springer-Verlag
51. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization pub-
lished by SIAM Activity Group on Optimization – SIAG/OPT
52. Künstliche Intelligenz (KI) published by Böttcher IT Verlag
Part II
Difficulties in Optimization
Chapter 11
Introduction
The classification of optimization algorithms in Section 1.3 and the table of contents of this
book enumerate a wide variety of optimization algorithms. Yet, the approaches introduced
here represent only a fraction of the actual number of available methods. It is a justified
question to ask why there are so many different approaches and why this variety is needed.
One possible answer is simply that there are so many different kinds of optimization tasks.
Each of them puts different obstacles in the way of the optimizers and comes with its own
characteristic difficulties.
In this part we want to discuss the most important of these complications, the major
problems that may be encountered during optimization [2915]. You may wish to read this
part after reading through some of the later parts on specific optimization algorithms or
after solving your first optimization task. You will then realize that many of the problems
you encountered are discussed here in depth.
Some of the subjects in the following text concern Global Optimization in general (multi-
modality and overfitting, for instance); others apply especially to nature-inspired approaches
like Genetic Algorithms (epistasis and neutrality, for example). Sometimes, neglecting a
single one of these aspects during the development of the optimization process can render
the whole invested effort useless, even if highly efficient optimization techniques are applied.
By giving clear definitions and comprehensive introductions to these topics, we want to raise
the awareness of scientists and practitioners in industry and hope to help them use
optimization algorithms more efficiently.
In Figure 11.1, we have sketched a set of different types of fitness landscapes (see Sec-
tion 5.1) which we are going to discuss. The objective values in the figure are subject to
minimization and the small bubbles represent candidate solutions under investigation. An
arrow from one bubble to another means that the second individual is found by applying
one search operation to the first one.
Before we go into more detail about what makes these landscapes difficult, we should first
establish the term difficulty in the context of optimization. The degree of difficulty of solving a certain
problem with a dedicated algorithm is closely related to its computational complexity. We
will outline this in the next section and show that for many problems, there is no algorithm
which can exactly solve them in reasonable time.
As already stated, optimization algorithms are guided by objective functions. A function
is difficult from a mathematical perspective in this context if it is not continuous, not
differentiable, or if it has multiple maxima and minima. This understanding of difficulty
comes very close to the intuitive sketches in Figure 11.1.
In many real world applications of metaheuristic optimization, the characteristics of the
objective functions are not known in advance. The problems are usually very complex and it
is only rarely possible to derive boundaries for the performance or the runtime of optimizers
in advance, let alone exact estimates with mathematical precision.
In such cases, experience and general rules of thumb are often the best guides available
[Figure 11.1 (excerpt; objective values f(x) over x, subject to minimization):
Fig. 11.1.e: Deceptive (a region with misleading gradient information).
Fig. 11.1.f: Neutral (a neutral area providing no gradient information).
Fig. 11.1.g: Needle-In-A-Haystack (an isolated optimum inside a neutral area or an area without much information).
Fig. 11.1.h: Nightmare.]
Most of the algorithms we will discuss in this book, in particular the metaheuristics, usually
give approximations as results but cannot guarantee to always find the globally optimal
solution. However, we are actually able to find the global optima for all given problems (of
course, limited to the precision of the available computers). Indeed, the easiest way to do
so would be exhaustive enumeration (see Section 8.3), to simply test all possible values from
the problem space and to return the best one.
This method, obviously, will take extremely long so our goal is to find an algorithm
which is better. More precisely, we want to find the best algorithm for solving a given class
of problems. This means that we need some sort of metric with which we can compare the
efficiency of algorithms [2877, 2940].
In other words, a function f(x) is in O of another function g(x) if and only if there exists a
real number x0 and a constant, positive factor m so that the absolute value of f(x) is smaller
than (or equal to) m times the absolute value of g(x) for all x greater than x0. f(x)
thus cannot grow (considerably) faster than g(x). Therefore, x³ + x² + x + 1 = f(x) ∈ O(x³),
since for m = 5 and x0 = 2 it holds that 5x³ > x³ + x² + x + 1 ∀x ≥ 2.
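The witness constants m = 5 and x0 = 2 from this example can be spot-checked numerically. This is a finite-range sanity check of the definition, not a proof:

```python
# Spot-check the big-O witness: |f(x)| <= m * |g(x)| for all checked x >= x0.
f = lambda x: x**3 + x**2 + x + 1
g = lambda x: x**3
m, x0 = 5, 2
assert all(abs(f(x)) <= m * abs(g(x)) for x in range(x0, 10_000))
print("f is in O(g) on the checked range")
```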
In terms of algorithmic complexity, we specify the number of steps or memory units an
algorithm needs in dependence on the size of its inputs in the big-O notation. More gen-
erally, this specification is based on key features of the input data. For a graph-processing
algorithm, these may be the number of edges |E| and the number of vertexes |V| of the graph
to be processed, and the algorithmic complexity may be O(|V|² + √(|E||V|)). Some examples
for single-parametric functions can be found in Example E12.1.
[Figure: log-scaled plot of the functions f(x) = x, f(x) = x², and f(x) = x⁴ for x = 1 to 2048, with the objective-value axis ranging from 1 up to 10²⁰ and reference marks at 100, 1000, one million, one billion, and one trillion.]
Figure 12.1: Illustration of the rising speed of some functions, gleaned from [2365].
In Example E12.1, we give some examples for the big-O notation and name kinds of algo-
rithms which have such complexities. The rising behavior of some functions is illustrated in
Figure 12.1. From this figure it becomes clear that problems for which only algorithms with
high complexities such as O(2ⁿ) exist quickly become intractable even for small input sizes n.
The amount of resources required at least to solve a problem depends on the problem
itself and on the machine used for solving it. One of the most basic computer models is
the deterministic Turing machine6 (DTM) [2737], as sketched in Fig. 12.2.a. It consists of
a tape of infinite length which is divided into distinct cells, each capable of holding exactly
one symbol from a finite alphabet. A Turing machine has a head which is always located
at one of these cells and can read and write the symbols. The machine has a finite set of
possible states and is driven by rules. These rules decide which symbol should be written
and/or move the head one unit to the left or right based on the machine’s current state and
the symbol on the tape under its head. With this simple machine, any of our computers
and, hence, also the programs executed by them, can be simulated.7
4 http://en.wikipedia.org/wiki/Big_O_notation [accessed 2007-07-03]
5 http://en.wikipedia.org/wiki/Computational_complexity_theory [accessed 2010-06-24]
6 http://en.wikipedia.org/wiki/Turing_machine [accessed 2010-06-24]
7 Another computational model, RAM (Random Access Machine) is quite close to today’s computer
architectures and can be simulated with Turing machines efficiently [1804] and vice versa.
146 12 PROBLEM HARDNESS
State  Read Symbol  ⇒  Print  Motion  Next State
A      0            ⇒  1      Right   A
A      1            ⇒  0      Right   B
B      0            ⇒  1      Right   B
B      1            ⇒  1      Left    C
C      1            ⇒  0      Left    C
C      0            ⇒  1      Right   A

Fig. 12.2.a: A deterministic Turing machine (tape sketch omitted; the table lists its transition rules).
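The transition rules of Fig. 12.2.a can be executed directly. The simulator below is a generic sketch: the tape is modeled as a dictionary whose unwritten cells read 0, and, since this particular machine has no halting state, the simulation is bounded by a step limit.

```python
# Rules of Fig. 12.2.a: (state, read symbol) -> (print, motion, next state)
RULES = {
    ("A", 0): (1, +1, "A"), ("A", 1): (0, +1, "B"),
    ("B", 0): (1, +1, "B"), ("B", 1): (1, -1, "C"),
    ("C", 1): (0, -1, "C"), ("C", 0): (1, +1, "A"),
}

def run_dtm(rules, tape, state="A", pos=0, max_steps=100):
    """Simulate a deterministic Turing machine for at most max_steps steps.
    tape: dict mapping position -> symbol; unwritten cells read as 0."""
    for _ in range(max_steps):
        symbol = tape.get(pos, 0)
        if (state, symbol) not in rules:   # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return tape, state, pos

# Start on a tape holding two 1-symbols and watch the machine evolve.
tape, state, pos = run_dtm(RULES, {0: 1, 1: 1}, max_steps=10)
print(state, pos, sorted(tape.items()))
```

Because every state/symbol pair has exactly one applicable rule, each step is fully determined by the current state and the symbol under the head, which is precisely the determinism discussed above.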
Like DTMs, non-deterministic Turing machines8 (NTMs, see Fig. 12.2.b) are thought
experiments rather than real computers. They differ from DTMs in that they may
have more than one rule for a state/input pair. Whenever there is an ambiguous situation
where n > 1 rules could be applied, the NTM can be thought of as “forking” into n independent
NTMs running in parallel. The computation of the overall system stops when any of these
reaches a final, accepting state. NTMs can be simulated with DTMs by performing a
breadth-first search on the possible computation steps of the NTM until it finds an accepting
state. The number of steps required for this simulation, however, grows exponentially with
the length of the shortest accepting path of the NTM [1483].
The set of all problems which can be solved by a deterministic Turing machine within
polynomial time is called P [1025]. This means that for the problems within this class,
there exists at least one algorithm (for a DTM) with an algorithmic complexity in O(nᵖ)
(where n is the input size and p ∈ N1 is some natural number).
Algorithms designed for Turing machines can be translated to algorithms for normal PCs.
Since it is quite easy to simulate a DTM (with finite tape) with an off-the-shelf computer and
vice versa, these are also the problems that can be solved within polynomial time by existing,
real computers. These are the problems which are said to be exactly solvable in a feasible
way [359, 1092].
N P, on the other hand, includes all problems which can be solved by a non-deterministic
Turing machine in a runtime rising polynomially with the input size. It is the set of all problems
for which solutions can be checked in polynomial time [1549].
Clearly, these include all the problems from P, since DTMs can be considered as a
subset of NTMs and if a deterministic Turing machine can solve something in polynomial
time, a non-deterministic one can as well. The other way around, i.e., the question whether
N P ⊆ P and, hence, P = N P, is much more complicated and so far, there exists neither
a proof for nor against it [1549]. However, it is strongly assumed that problems in N P need
super-polynomial (exponential) time to be solved by deterministic Turing machines and
therefore also by computers as we know and use them.
8 http://en.wikipedia.org/wiki/Non-deterministic_Turing_machine [accessed 2010-06-24]
12.3. THE PROBLEM: MANY REAL-WORLD TASKS ARE N P-HARD 147
If problem A can be transformed in a way so that we can use an algorithm for problem
B to solve A, we say A reduces to B. This means that A is no more difficult than B. A
problem is hard for a complexity class if every other problem in this class can be reduced
to it. For classes larger than P, reductions which take no more than polynomial time in the
input size are usually used. The problems which are hard for N P are called N P-hard. A
problem which is in a complexity class C and hard for C is called complete for C, hence the
name N P-complete.
From a practical perspective, problems in P are usually friendly. Everything with an algo-
rithmic complexity in O(n⁵) and less can usually be solved more or less efficiently if n does
not get too large. Searching and sorting algorithms, with complexities between O(log n) and
O(n log n), are good examples of such problems which, in most cases, are not bothersome.
However, many real-world problems are N P-hard or, without formal proof, can be as-
sumed to be at least as bad as that. So far, no polynomial-time algorithm for DTMs has
been discovered for this class of problems and the runtime of the algorithms we have for
them increases exponentially with the input size.
In 1962, Bremermann [407] pointed out that “No data processing system whether artificial
or living can process more than 2 · 1047 bits per second per gram of its mass”. This is an
absolute limit to computation power which we are very unlikely to ever be able to unleash.
Yet, even with this power, if the number of possible states of a system grows exponentially
with its size, any solution attempt quickly becomes infeasible. Minsky [1907], for instance,
estimates the number of all possible move sequences in chess to be around 10¹²⁰.
The Traveling Salesman Problem (TSP), which we initially introduced in Example E2.2 on
page 45, is a classical example of an N P-complete combinatorial optimization problem.
Since many Vehicle Routing Problems (VRPs) [2912, 2914] are basically iterated variants
of the TSP, they fall at least into the same complexity class. This means that until now,
there exists no algorithm with a less-than exponential complexity which can be used to find
optimal transportation plans for a logistics company. In other words, as soon as a logistics
company has to satisfy transportation orders of more than a handful of customers, it will
likely not be able to serve them in an optimal way.
The Knapsack problem [463], i. e., answering the question of how to fit as many items
(of different weight and value) as possible into a weight-limited container so that the total value of
the items in the container is maximized, is N P-complete as well. Like the TSP, it occurs in
a variety of practical applications.
The first known N P-complete problem is the satisfiability problem (SAT) [626]. Here,
the problem is to decide whether or not there is an assignment of values to a set of Boolean
variables x1 , x2 , . . . , xn which makes a given Boolean expression based on these variables,
parentheses, ∧, ∨, and ¬ evaluate to true.
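The asymmetry that defines N P is easy to see on SAT: checking one assignment is cheap, while the only generally known exact approach tries up to 2ⁿ assignments. The sketch below is a minimal illustration with an assumed encoding: a CNF formula is a list of clauses, and a literal i means xᵢ while −i means ¬xᵢ.

```python
from itertools import product

def satisfied(cnf, assignment):
    """Polynomial-time check: every clause has at least one true literal."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in cnf)

def brute_force_sat(cnf, n):
    """Exhaustive search over all 2^n assignments of x_1..x_n."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if satisfied(cnf, assignment):
            return assignment
    return None

# (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(cnf, 3))
```

Note that `satisfied` runs in time linear in the formula size, whereas `brute_force_sat` loops over 2ⁿ candidate assignments: solutions are easy to check but, as far as we know, hard to find.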
These few examples and the wide area of practical application problems which can be
reduced to them clearly show one thing: Unless P = N P can be found at some point in
time, there will be no way to solve a variety of problems exactly.
12.4 Countermeasures
Solving a problem exactly to optimality is not always possible, as we have learned in the
previous section. If we have to deal with N P-complete problems, we will therefore have
to give up on some of the features which we expect an algorithm or a solution to have. A
randomized algorithm includes at least one instruction that acts on the basis of random
decisions, as stated in Definition D1.13 on page 33. In other words, as opposed to deterministic
algorithms9 which always produce the same results when given the same inputs, a random-
ized algorithm violates the constraint of determinism. Randomized algorithms are also often
called probabilistic algorithms [1281, 1282, 1919, 1966, 1967]. There are two general classes
of randomized algorithms: Las Vegas and Monte Carlo algorithms.
If a Las Vegas algorithm returns, its outcome is deterministic (but not the algorithm itself).
The termination, however, cannot be guaranteed. There usually exists an expected runtime
limit for such algorithms – their actual execution however may take arbitrarily long. In
summary, a Las Vegas algorithm terminates with a positive probability and is (partially)
correct.
Definition D12.3 (Monte Carlo Algorithm). A Monte Carlo algorithm11 always ter-
minates. Its result, however, can be either correct or incorrect [1281, 1282, 1966].
In contrast to Las Vegas algorithms, Monte Carlo algorithms always terminate but are
(partially) correct only with a positive probability. As pointed out in Section 1.3, they are
usually the way to go to tackle complex problems.
46. What is the difference between a deterministic Turing machine (DTM) and non-
deterministic one (NTM)?
[10 points]
47. Write down the states and rules of a deterministic Turing machine which inverts a
sequence of zeros and ones on the tape while moving its head to the right and that
stops when encountering a blank symbol.
[5 points]
48. Use literature or the internet to find at least three more N P-complete problems. Define
them as optimization problems by specifying proper problem spaces X and objective
functions f which are subject to minimization.
[15 points]
Chapter 13
Unsatisfying Convergence
13.1 Introduction
Optimization processes will usually converge at some point in time. In nature, a similar phe-
nomenon can be observed according to Koza [1602]: The niche preemption principle states
that a niche in a natural environment tends to become dominated by a single species [1812].
Linear convergence to the global optimum x⋆ with respect to an objective function f is
defined in Equation 13.1, where t is the iteration index of the optimization algorithm and
x(t) is the best candidate solution discovered until iteration t [1976]. This is basically the
best behavior of an optimization process we could wish for.
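A common way to state linear convergence, consistent with the notation above, is the following (a hedged reconstruction of Equation 13.1; the book's exact notation may differ):

```latex
\left| f\!\left(x(t+1)\right) - f\!\left(x^\star\right) \right|
  \;\le\; c \cdot \left| f\!\left(x(t)\right) - f\!\left(x^\star\right) \right|
\qquad \text{for some } c \in (0,1) \text{ and all sufficiently large } t
```

In words: the gap between the best objective value found so far and the global optimum shrinks by at least a constant factor per iteration.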
One of the problems in Global Optimization (and basically, also in nature) is that it is often
not possible to determine whether the best solution currently known is situated on local or
a global optimum and thus, if convergence is acceptable. In other words, it is usually not
clear whether the optimization process can be stopped, whether it should concentrate on
refining the current optimum, or whether it should examine other parts of the search space
instead. This can, of course, only become cumbersome if there are multiple (local) optima,
i. e., the problem is multi-modal [1266, 2267] as depicted in Fig. 11.1.c on page 140.
The existence of multiple global optima itself is not problematic and the discovery of only
a subset of them can still be considered as successful in many cases. The occurrence of
numerous local optima, however, is more complicated.
1 according to a suitable metric like numbers of modifications or mutations which need to be applied to
There are three basic problems which are related to the convergence of an optimization algo-
rithm: premature, non-uniform, and domino convergence. The first one is most important
in optimization, but the latter two may also cause considerable inconvenience.
The goal of optimization is, of course, to find the global optima. Premature convergence,
basically, is the convergence of an optimization process to a local optimum. A prematurely
converged optimization process hence provides solutions which are below the possible or
anticipated quality.
[Figure 13.1 (objective values over x): Fig. 13.1.a: Example 1: Maximization; Fig. 13.1.b: Example 2: Minimization.]
This definition is rather fuzzy, but it becomes much clearer when taking a look at
Example E13.2. The goal of optimization in problems with infinitely many optima is often
not to discover only one or two optima. Instead, usually we want to find a set X of solutions
which has a good spread over all possible optimal features.
Fig. 13.2.a: Bad Convergence and Good Spread. Fig. 13.2.b: Good Convergence and Bad Spread. Fig. 13.2.c: Good Convergence and Spread. (Each plot shows the front returned by the optimizer in the f1-f2 objective space.)
Let us examine the three fronts included in Figure 13.2 [2915]. The first picture
(Fig. 13.2.a) shows a result X having a very good spread (or diversity) of solutions, but
the points are far away from the true Pareto front. Such results are not attractive because
they do not provide Pareto-optimal solutions and we would consider the convergence to be
premature in this case. The second example (Fig. 13.2.b) contains a set of solutions which
are very close to the true Pareto front but cover it only partially, so the decision maker
could lose important trade-off options. Finally, the front depicted in Fig. 13.2.c has the two
desirable properties of good convergence and spread.
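The notions of convergence and spread can be quantified in a few lines. The following sketch is illustrative only: the function names, the sampled "true front" f1 + f2 = 1, and the two example fronts are our own assumptions, not a fixed standard. It measures convergence as the mean distance of a returned front to a sampling of the true Pareto front, and spread as the span covered along each objective:

```python
import math

def convergence(front, true_front):
    # Mean Euclidean distance from each returned solution to its
    # nearest point on (a sampling of) the true Pareto front.
    return sum(min(math.dist(p, t) for t in true_front)
               for p in front) / len(front)

def spread(front):
    # Crude diversity measure: the span covered along each objective.
    f1 = [p[0] for p in front]
    f2 = [p[1] for p in front]
    return (max(f1) - min(f1)) + (max(f2) - min(f2))

# Assumed true front for this sketch: f1 + f2 = 1, both objectives minimized.
true_front = [(i / 100, 1 - i / 100) for i in range(101)]
good = [(i / 10, 1 - i / 10) for i in range(11)]              # like Fig. 13.2.c
premature = [(0.3 + i / 10, 1.3 - i / 10) for i in range(5)]  # like Fig. 13.2.a
```

For the well-converged front, `convergence` is (numerically) zero, while the prematurely converged front sits at a distance of roughly 0.42 from the true front.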
The phenomenon of domino convergence has been brought to attention by Rudnick [2347]
who studied it in the context of his BinInt problem [2347, 2698] which is discussed in
Section 50.2.5.2. In principle, domino convergence occurs when the candidate solutions
have features which contribute to significantly different degrees to the total fitness. If these
features are encoded in separate genes (or building blocks) in the genotypes, they are likely to
be treated with different priorities, at least in randomized or heuristic optimization methods.
Building blocks with a very strong positive influence on the objective values, for instance,
will quickly be adopted by the optimization process (i. e., “converge”). During this time, the
alleles of genes with a smaller contribution are ignored. They do not come into play until
154 13 UNSATISFYING CONVERGENCE
the optimal alleles of the more “important” blocks have been accumulated. Rudnick [2347]
called this sequential convergence phenomenon domino convergence due to its resemblance
to a row of falling domino stones [2698].
Let us consider the application of a Genetic Algorithm, an optimization method working
on bit strings, in such a scenario. Mutation operators from time to time destroy building
blocks with strong positive influence which are then reconstructed by the search. If this
happens with a high enough frequency, the optimization process will never get to optimize
the less salient blocks because repairing and rediscovering those with higher importance
takes precedence. Thus, the mutation rate of the EA limits the probability of finding the
global optima in such a situation.
In the worst case, the contributions of the less salient genes may almost look like noise
and they are not optimized at all. Such a situation is also an instance of premature conver-
gence, since the global optimum which would involve optimal configurations of all building
blocks will not be discovered. In this situation, restarting the optimization process will not
help because it will turn out the same way with very high probability. Example problems
which are often likely to exhibit domino convergence are the Royal Road and the aforemen-
tioned BinInt problem, which you can find discussed in Section 50.2.4 and Section 50.2.5.2,
respectively.
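The effect can be simulated with a BinInt-style fitness function in which bit i contributes 2^(n-1-i), so the leftmost bits dominate the total fitness. The (1+1) EA sketch below (parameter values, seeds, and names are our own choices, not taken from the original problem definitions) fixes the salient bits first, since a mutation that disturbs them can never be compensated by the remaining bits:

```python
import random

def binint(bits):
    # BinInt-style fitness (maximization): bit i contributes 2**(n-1-i),
    # so the leftmost bits dominate the total fitness.
    n = len(bits)
    return sum(b << (n - 1 - i) for i, b in enumerate(bits))

def one_plus_one_ea(n=16, steps=2000, rate=None, seed=1):
    # A (1+1) EA: flip each bit with probability `rate`, keep the child
    # if it is at least as good. Records when each bit first became 1
    # in the current best individual.
    rng = random.Random(seed)
    rate = rate if rate is not None else 1.0 / n
    cur = [rng.randint(0, 1) for _ in range(n)]
    first_one = [None] * n
    for t in range(steps):
        child = [b ^ (rng.random() < rate) for b in cur]
        if binint(child) >= binint(cur):
            cur = child
        for i, b in enumerate(cur):
            if b and first_one[i] is None:
                first_one[i] = t
    return cur, first_one
```

Losing the leftmost bit always costs more (2^15) than all remaining bits together can compensate (2^15 - 1), so once set it is never lost, while the low-order bits keep churning; with a high mutation rate they may never settle at all.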
There are some relations between domino convergence and the Siren Pitfall in feature
selection for classification in data mining [961]. Here, if one feature is highly useful in a hard
sub-problem of the classification task, its utility may still be overshadowed by mediocre
features contributing to easy sub-tasks.
In biology, diversity is the variety and abundance of organisms at a given place and
time [1813, 2112]. Much of the beauty and efficiency of natural ecosystems is based on
a dazzling array of species interacting in manifold ways. Diversification is also a good strat-
egy utilized by investors in the economy in order to increase their wealth.
In population-based Global Optimization algorithms, maintaining a set of diverse candi-
date solutions is very important as well. Losing diversity means approaching a state where all
the candidate solutions under investigation are similar to each other. Another term for this
state is convergence. Discussions about how diversity can be measured have been provided
by Cousins [648], Magurran [1813], Morrison [1956], Morrison and De Jong [1958], Paenke
et al. [2112], Routledge [2344], and Burke et al. [454, 458].
Preserving diversity is directly linked with maintaining a good balance between exploitation
and exploration [2112] and has been studied by researchers from many domains.
The operations which create new solutions from existing ones have a very large impact on
the speed of convergence and the diversity of the populations [884, 2535]. The step size in
Evolution Strategy is a good example of this issue: setting it properly is very important and
leads directly to the “exploration versus exploitation” problem [1250]. The dilemma of deciding
which of the two principles to apply to which degree at a certain stage of optimization is
sketched in Figure 13.4 and can be observed in many areas of Global Optimization.2
2 The closely related concepts of intensification and diversification have been introduced by Glover [1067, 1069] in the context of Tabu Search.
Figure 13.4: Exploration versus Exploitation. (Initial situation: a good candidate solution has been found. Exploitation: search the close proximity of the good candidate. Exploration: search also the distant areas.)
Since computers have only limited memory, already evaluated candidate solutions usually
have to be discarded in order to accommodate the new ones. Exploration is a metaphor
for the procedure which allows search operations to find novel and maybe better solution
structures. Such operators (like mutation in Evolutionary Algorithms) have a high chance
of creating inferior solutions by destroying good building blocks but also a small chance of
finding totally new, superior traits (which, however, is not guaranteed at all).
The most basic measure to decrease the probability of premature convergence is to make
sure that the search operations are complete (see Definition D4.8 on page 85), i. e., to make
sure that they can at least theoretically reach every point in the search space from every
other point. Then, it is (at least theoretically) possible to escape arbitrary local optima.
A prime example for bad operator design is Algorithm 1.1 on page 40 which cannot escape
any local optima. The two different operators given in Equation 5.4 and Equation 5.5 on page 96
are another example for the influence of the search operation design on the vulnerability of
the optimization process to local convergence.
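For bit-string search spaces, uniform bit-flip mutation is a simple complete operator: with a per-bit flip probability strictly between 0 and 1, it can produce any genotype from any other in a single application, albeit with exponentially small probability. A minimal sketch (function names are ours):

```python
import random

def uniform_bitflip(genotype, rate, rng=random):
    # Flip every bit independently with probability `rate`; for
    # 0 < rate < 1 every genotype can be reached from every other
    # one in a single application.
    return [b ^ (rng.random() < rate) for b in genotype]

def reach_probability(src, dst, rate):
    # P(dst = uniform_bitflip(src)): a factor `rate` per differing bit
    # and (1 - rate) per agreeing bit -- never zero for 0 < rate < 1.
    p = 1.0
    for a, b in zip(src, dst):
        p *= rate if a != b else 1.0 - rate
    return p
```

For example, `reach_probability([0,0,0], [1,1,1], 0.5)` is 0.5³ = 0.125; for small rates the probability of distant jumps shrinks exponentially but stays positive, which is exactly the completeness property needed to escape arbitrary local optima.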
13.4.2 Restarting
A very crude and yet sometimes effective measure is restarting the optimization process at
randomly chosen points in time. One example for this method are GRASPs, the Greedy
Randomized Adaptive Search Procedures [902, 906] (see Chapter 42 on page 499), which continuously
restart the process of creating an initial solution and refining it with local search. Still, such
approaches are likely to fail in domino convergence situations. Increasing the proportion of
exploration operations may also reduce the chance of premature convergence.
Generally, the higher the chance that candidate solutions with bad fitness are investigated
instead of being discarded in favor of the seemingly better ones, the lower the chance of
getting stuck at a local optimum. This is the exact idea which distinguishes Simulated
Annealing from Hill Climbing (see Chapter 27). It is known that Simulated Annealing can
find the global optimum, whereas simple Hill Climbers are likely to prematurely converge.
In an EA, too, using low selection pressure decreases the chance of premature convergence.
However, such an approach also decreases the speed with which good solutions are exploited.
Simulated Annealing, for instance, takes extremely long if a guaranteed convergence to the
global optimum is required.
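The acceptance rule that realizes this idea in Simulated Annealing is the Metropolis criterion: improvements are always accepted, while a worsening by Δ is accepted with probability e^(-Δ/T). A sketch for minimization (the function name and signature are ours):

```python
import math
import random

def accept(delta, temperature, rng=random):
    # Metropolis criterion for minimization: always accept improvements
    # (delta <= 0); accept a worsening delta > 0 with probability
    # exp(-delta / temperature). Temperature 0 degenerates to Hill Climbing.
    if delta <= 0:
        return True
    if temperature <= 0:
        return False
    return rng.random() < math.exp(-delta / temperature)
```

At high temperature almost every candidate is accepted (exploration); as the temperature is lowered, the rule gradually hardens into pure Hill Climbing (exploitation).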
Increasing the population size of a population-based optimizer may be useful as well,
since larger populations can maintain more individuals (and hence, also be more diverse).
However, the idea that larger populations will always lead to better optimization results
does not hold in general, as shown by van Nimwegen and Crutchfield [2778].
Instead of increasing the population size, it thus makes sense to “gain more” from the smaller
populations. In order to extend the duration of the evolution in Evolutionary Algorithms,
many methods have been devised for steering the search away from areas which have already
been frequently sampled. In steady-state EAs (see Paragraph 28.1.4.2.1), it is common to
remove duplicate genotypes from the population [2324].
More generally, the exploration capabilities of an optimizer can be improved by integrating
density metrics into the fitness assignment process. The most popular of such approaches
are sharing and niching (see Section 28.3.4, [735, 742, 1080, 1085, 1250, 1899, 2391]). The
Strength Pareto Algorithms, which are widely accepted to be highly efficient, use another
idea: they adopt the number of individuals that one candidate solution dominates as a density
measure [3097, 3100].
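A minimal sketch of fitness sharing, using the common triangular sharing function (the positions, niche radius, and names below are illustrative assumptions of ours): each raw fitness is divided by the individual's niche count, which penalizes crowded regions.

```python
def sharing(distance, sigma, alpha=1.0):
    # Triangular sharing function: 1 at distance 0, dropping to 0 at
    # the niche radius sigma.
    return 1.0 - (distance / sigma) ** alpha if distance < sigma else 0.0

def shared_fitness(fitnesses, dist, sigma):
    # Divide each raw fitness (maximization) by the niche count, so
    # individuals in crowded regions of the search space are penalized.
    n = len(fitnesses)
    return [fitnesses[i] / sum(sharing(dist(i, j), sigma) for j in range(n))
            for i in range(n)]
```

With individuals at positions 0.0, 0.05, and 0.9 (σ = 0.2) and equal raw fitness, the isolated individual keeps its full fitness while the clustered pair is scaled down, steering selection toward under-explored regions.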
Pétrowski [2167, 2168] introduced the related clearing approach which you can find
discussed in Section 28.4.7, where individuals are grouped according to their distance in the
phenotypic or genotypic space and all but a certain number of individuals from each group
just receive the worst possible fitness. A similar method aiming for convergence prevention
is introduced in Section 28.4.7, where individuals close to each other in objective space are
deleted.
13.4.5 Self-Adaptation
13.4.6 Multi-Objectivization
Recently, the idea of using helper objectives has emerged. Here, usually a single-objective
problem is transformed into a multi-objective one by adding new objective functions [1429,
1440, 1442, 1553, 1757, 2025]. Brockhoff et al. [425] proved that in some cases, such changes
can speed up the optimization process. The new objectives are often derived from the main
objective by decomposition [1178] or from certain characteristics of the problem [1429]. They
are then optimized together with the original objective function with some multi-objective
technique, say an MOEA (see Definition D28.2). Problems where the main objective is
some sort of sum of different components contributing to the fitness are especially suitable
for that purpose, since such a component naturally lends itself as helper objective. It has
been shown that the original objective function has at least as many local optima as the
total number of all local optima present in the decomposed objectives for sum-of-parts
multi-objectivizations [1178].
Fig. 13.5.a: The original objective f(x) = f1(x) + f2(x) and its local optima x⋆l,2 , x⋆l,3 , x⋆l,4 (over x ∈ X).
Fig. 13.5.b: The helper objectives f1(x) and f2(x) with the locally optimal regions A⋆1 , A⋆2 , A⋆3 in the objective space (over x ∈ X).
Outside of A⋆2 , the helper objectives increase, i. e., get worse. The neighboring elements
outside of A⋆2 are thus dominated by the elements on its borders.
Multi-objectivization may thus help here to overcome the local optima, since there are
fewer locally optimal regions. However, a look at A⋆2 shows us that it contains x⋆l,2 , x⋆l,3 , and
x⋆l,4 and – since it is continuous – also uncountably many other points. In this
case, it would thus not be clear a priori whether multi-objectivization is beneficial
or not.
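A tiny sketch of the idea (the decomposition f = f1 + f2 and all names are illustrative assumptions of ours): the summed objective is split into a vector of helper objectives, and candidates are then compared by Pareto dominance, so solutions that excel in different components can coexist.

```python
def dominates(a, b):
    # Pareto dominance for minimization: a is no worse in every
    # objective and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def multi_objectivize(f1, f2):
    # Replace the sum-of-parts objective f = f1 + f2 by the vector
    # (f1, f2); candidates that are incomparable under dominance now
    # survive side by side even if their summed values differ.
    return lambda x: (f1(x), f2(x))
```

With f1(x) = (x-1)² and f2(x) = (x+1)², the points x = 1, x = 0, and x = -1 map to (0, 4), (1, 1), and (4, 0): none dominates another, so all three remain in play although their sums f1 + f2 are 4, 2, and 4.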
Tasks T13
49. Sketch one possible objective function for a single-objective optimization problem
which has infinitely many solutions. Copy this graph three times and paint six points
into each copy which represent:
50. Discuss the dilemma of exploration versus exploitation. What are the advantages and
disadvantages of these two general search principles?
[5 points]
51. Discuss why the Hill Climbing optimization algorithm (Chapter 26) is particularly
prone to premature convergence and how Simulated Annealing (Chapter 27) mitigates
the cause of this vulnerability.
[5 points]
Chapter 14
Ruggedness and Weak Causality
Optimization algorithms generally depend on some form of gradient in the objective or fitness
space. The objective functions should be continuous and exhibit low total variation1, so the
optimizer can descend the gradient easily. If the objective functions become more unsteady
or fluctuating, i. e., go up and down often, it becomes more complicated for the optimization
process to find the right directions to proceed to, as sketched in Figure 14.1. The more
rugged a function gets, the harder it gets to optimize it. In short, one could say ruggedness
is multi-modality plus steep ascents and descents in the fitness landscape. Examples of
rugged landscapes are Kauffman’s NK fitness landscape (see Section 50.2.1), the p-Spin
model discussed in Section 50.2.2, Bergman and Feldman’s jagged fitness landscape [276],
and the sketch in Fig. 11.1.d on page 140.
During an optimization process, new points in the search space are created by the search
operations. Generally we can assume that the genotypes which are the input of the search
operations correspond to phenotypes which have previously been selected. Usually, the
better or the more promising an individual is, the higher are its chances of being selected for
further investigation. Reversing this statement suggests that individuals which are passed
to the search operations are likely to have a good fitness. Since the fitness of a candidate
solution depends on its properties, it can be assumed that the features of these individuals
1 http://en.wikipedia.org/wiki/Total_variation [accessed 2008-04-23]
162 14 RUGGEDNESS AND WEAK CAUSALITY
are not so bad either. It should thus be possible for the optimizer to introduce slight changes
to their properties in order to find out whether they can be improved any further2. Normally,
such exploitive modifications should also lead to small changes in the objective values and
hence, in the fitness of the candidate solution.
Definition D14.1 (Strong Causality). Strong causality (locality) means that small
changes in the properties of an object also lead to small changes in its behavior [2278,
2279, 2329].
This principle (proposed by Rechenberg [2278, 2279]) should not only hold for the search
spaces and operations designed for optimization (see Equation 24.5 on page 218 in Sec-
tion 24.9), but applies to natural genomes as well. The offspring resulting from sexual
reproduction of two fish, for instance, has a different genotype than its parents. Yet, it is
far more probable that these variations manifest in a unique color pattern of the scales, for
example, instead of leading to a totally different creature.
Apart from this straightforward, informal explanation here, causality has been investi-
gated thoroughly in different fields of optimization, such as Evolution Strategy [835, 2278],
structure evolution [1760, 1761], Genetic Programming [835, 1396, 2327, 2329], genotype-
phenotype mappings [2451], search operators [835], and Evolutionary Algorithms in gen-
eral [835, 2338, 2599].
Sendhoff et al. [2451, 2452] introduce a very nice definition for causality which combines
the search space G, the (unary) search operations searchOp1 ∈ Op, the genotype-phenotype
mapping “gpm” and the problem space X with the objective functions f . First, a distance
measure “dist caus” based on the stochastic nature of the unary search operation searchOp1
is defined.3
dist caus(g1 , g2 ) = − log ( P (g2 = searchOp1 (g1 )) / Pid )    (14.1)

Pid = P (g = searchOp1 (g))    (14.2)
In Equation 14.1, g1 and g2 are two elements of the search space G, P (g2 = searchOp1 (g1 ))
is the probability that g2 is the result of one application of the search operation searchOp1 to
g1 , and Pid is the probability that searchOp1 maps an element to itself. dist caus(g1 , g2 )
falls with rising probability that g1 is mapped to g2 . Sendhoff et al. [2451, 2452] then define
strong causality as the state where Equation 14.3 holds:
∀g1 , g2 , g3 ∈ G : ||f (gpm(g1 )) − f (gpm(g2 ))||2 < ||f (gpm(g1 )) − f (gpm(g3 ))||2
⇒ dist caus(g1 , g2 ) < dist caus(g1 , g3 )
⇒ P (g2 = searchOp1 (g1 )) > P (g3 = searchOp1 (g1 ))    (14.3)
In other words, strong causality means that the search operations have a higher chance to
produce points g2 which (map to candidate solutions which) are closer in objective space
to their inputs g1 than to create elements g3 which (map to candidate solutions which) are
farther away.
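For the uniform bit-flip operator with per-bit rate p, this distance can be evaluated in closed form: P(g2 = searchOp1(g1)) = p^h (1-p)^(n-h) with h the Hamming distance between g1 and g2, so dist caus(g1, g2) = h · log((1-p)/p) grows linearly with h. A sketch (function names are ours):

```python
import math

def p_mutate(g1, g2, rate):
    # Probability that uniform bit-flip mutation (per-bit flip
    # probability `rate`) turns g1 into exactly g2.
    h = sum(a != b for a, b in zip(g1, g2))
    return rate ** h * (1.0 - rate) ** (len(g1) - h)

def dist_caus(g1, g2, rate):
    # Equation 14.1 for this operator; Pid = (1 - rate)**n, and the
    # distance reduces to h * log((1 - rate) / rate).
    p_id = (1.0 - rate) ** len(g1)
    return -math.log(p_mutate(g1, g2, rate) / p_id)
```

Under this measure, genotypes one bit-flip apart are "close", and the distance increases by log((1-p)/p) for every additional differing bit, matching the intuition that improbable jumps are far away.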
In fitness landscapes with weak (low) causality, small changes in the candidate solutions
often lead to large changes in the objective values, i. e., ruggedness. It then becomes harder
to decide which region of the problem space to explore and the optimizer cannot find reliable
gradient information to follow. A small modification of a very bad candidate solution may
then lead to a new local optimum and the best candidate solution currently known may be
surrounded by points that are inferior to all other tested individuals.
The lower the causality of an optimization problem, the more rugged its fitness landscape
is, which leads to a degeneration of the performance of the optimizer [1563]. This does not
necessarily mean that it is impossible to find good solutions, but it may take very long to
do so.
2 We have already mentioned this under the subject of exploitation.
3 This measure has the slight drawback that it is undefined for P (g2 = searchOp1 (g1 )) = 0 or Pid = 0.
14.3 Fitness Landscape Measures
As measures for the ruggedness of a fitness landscape (or its general difficulty), many
different metrics have been proposed. Wedge and Kell [2874] and Altenberg [80] provide
nice lists of them in their works4, which we summarize here:
1. Weinberger [2881] introduced the autocorrelation function and the correlation length
of random walks.
2. The correlation of the search operators was used by Manderick et al. [1821] in con-
junction with the autocorrelation [2267].
3. Jones and Forrest [1467, 1468] proposed the fitness distance correlation (FDC), the
correlation of the fitness of an individual and its distance to the global optimum. This
measure has been extended by researchers such as Clergue et al. [590, 2784].
4. The probability that search operations create offspring fitter than their parents, as
defined by Rechenberg [2278] and Beyer [295] (and called evolvability by Altenberg
[77]), will be discussed in Section 16.2 on page 172 in depth.
5. Simulation dynamics have been researched by Altenberg [77] and Grefenstette [1130].
6. Another interesting metric is the fitness variance of formae (Radcliffe and Surry [2246])
and schemas (Reeves and Wright [2284]).
7. The error threshold method from theoretical biology [867, 2057] has been adopted by
Ochoa et al. [2062] for Evolutionary Algorithms. It is the “critical mutation rate be-
yond which structures obtained by the evolutionary process are destroyed by mutation
more frequently than selection can reproduce them” [2062].
8. The negative slope coefficient (NSC) by Vanneschi et al. [2785, 2786] may be considered
as an extension of Altenberg’s evolvability measure.
9. Davidor [691] uses the epistatic variance as a measure of utility of a certain represen-
tation in Genetic Algorithms. We discuss the issue of epistasis in Chapter 17.
10. The genotype-fitness correlation (GFC) of Wedge and Kell [2874] is a new measure for
ruggedness in fitness landscape and has been shown to be a good guide for determining
optimal population sizes in Genetic Programming.
11. Rana [2267] defined the Static-φ and the Basin-δ metrics based on the notion of
schemas in binary-coded search spaces.
As an example, let us take a look at the autocorrelation function as well as the correlation
length of random walks [2881]. Here we borrow its definition from Vérel et al. [2802]: for a
random walk x0 , x1 , . . . through the landscape, the autocorrelation function at lag k is

ρ(k, f) = ( E [f(xi )f(xi+k )] − E [f(xi )] E [f(xi+k )] ) / D2 [f(xi )]

where E [f(xi )] and D2 [f(xi )] are the expected value and the variance of f(xi ).
4 Especially the one of Wedge and Kell [2874] is beautiful and far more detailed than this summary here.
The correlation length τ = −1/ log ρ(1, f) measures how the autocorrelation function decreases
and summarizes the ruggedness of the fitness landscape: the larger the correlation length,
the lower the total variation of the landscape. From the works of Kinnear, Jr [1540] and
Lipsitch [1743], however, we also know that correlation measures do not always fully
represent the hardness of a problem landscape.
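These quantities are easy to estimate empirically. The sketch below (the landscape choices, step sizes, and function names are our own assumptions) records fitness values along a random walk and computes the empirical autocorrelation; a smooth landscape yields ρ(1) near 1 and hence a large correlation length, while a rugged one decorrelates within a few steps:

```python
import math
import random

def random_walk_fitness(f, x0, neighbor, steps, rng):
    # Fitness values f(x0), f(x1), ... along a random walk.
    values, x = [], x0
    for _ in range(steps):
        values.append(f(x))
        x = neighbor(x, rng)
    return values

def autocorrelation(values, k):
    # Empirical autocorrelation rho(k) of the fitness series.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    cov = sum((values[i] - mean) * (values[i + k] - mean)
              for i in range(n - k)) / (n - k)
    return cov / var

def correlation_length(values):
    # tau = -1 / log rho(1, f): a large tau indicates a smooth landscape.
    return -1.0 / math.log(autocorrelation(values, 1))
```

Walking with small Gaussian steps over the smooth f(x) = x² gives a lag-1 autocorrelation close to 1, whereas a rapidly oscillating f(x) = sin(40x) gives a much smaller one.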
14.4 Countermeasures
To the knowledge of the author, no viable method which can directly mitigate the effects
of rugged fitness landscapes exists. In population-based approaches, using large population
sizes and applying methods to increase the diversity can reduce the influence of ruggedness,
but only up to a certain degree. Utilizing Lamarckian evolution [722, 2933] or the Baldwin
effect [191, 1235, 1236, 2933], i. e., incorporating a local search into the optimization pro-
cess, may further help to smooth out the fitness landscape [1144] (see Section 36.2 and
Section 36.3, respectively).
Weak causality is often a home-made problem because it results to some extent from
the choice of the solution representation and search operations. We pointed out that explo-
ration operations are important for lowering the risk of premature convergence. Exploitation
operators are just as important for refining solutions. In order to
apply optimization algorithms in an efficient manner, it is necessary to find representations
which allow for iterative modifications with bounded influence on the objective values, i. e.,
exploitation. In Chapter 24, we present some further rules-of-thumb for search space and
operation design.
Tasks T14
53. Why is causality important in metaheuristic optimization? Why are problems with
low causality normally harder than problems with strong causality?
[5 points]
54. Define three objective functions by yourself: a uni-modal one (f1 : R → R+), a simple
multi-modal one (f2 : R → R+), and a rugged one (f3 : R → R+). All three functions
should have the same global minimum (optimum) 0 with objective value 0. Try to solve
all three functions with a trivial implementation of Hill Climbing (see Algorithm 26.1 on
page 230) by applying it 30 times to each function. Note your observations regarding
the problem hardness.
[15 points]
15.1 Introduction
A particularly annoying property which fitness landscapes may exhibit is deceptiveness (or deceptivity). The gradient
of deceptive objective functions leads the optimizer away from the optima, as illustrated in
Figure 15.1 and Fig. 11.1.e on page 140.
The term deceptiveness [288] is mainly used in the Genetic Algorithm1 community in the
context of the Schema Theorem [1076, 1077]. Schemas describe certain areas (hyperplanes)
in the search space. If an optimization algorithm has discovered an area with a better
average fitness compared to other regions, it will focus on exploring this region based on
the assumption that highly fit areas are likely to contain the true optimum. Objective
functions where this is not the case are called deceptive [288, 1075, 1739]. Examples for
deceptiveness are the ND fitness landscapes outlined in Section 50.2.3, trap functions (see
Section 50.2.3.1), and the fully deceptive problems given by Goldberg et al. [743, 1076, 1083]
and [288].
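A standard example of such deceptiveness is a trap function on bit strings: fitness grows with the number of zeros, yet the all-ones string is the global optimum, so the gradient points exactly away from it. A sketch (this particular scaling is one common variant, not the only one found in the literature):

```python
def trap(bits):
    # Fully deceptive trap function (maximization): fitness grows with
    # the number of zeros, yet the all-ones string (which looks worst
    # from everywhere else) is the global optimum.
    n = len(bits)
    u = sum(bits)  # number of ones
    return n if u == n else n - 1 - u
```

For n = 4 the fitness of 0000 is 3, of 1110 it is 0, and of 1111 it is 4: every path of single-bit improvements leads to the deceptive optimum 0000, one bit-flip short of perfection in the wrong direction.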
15.3 Countermeasures
Solving deceptive optimization tasks perfectly involves sampling many individuals with very
bad features and low fitness. This contradicts the basic ideas of metaheuristics and thus,
there are no efficient countermeasures against deceptivity. Using large population sizes,
maintaining a very high diversity, and utilizing linkage learning (see Section 17.3.3) are,
maybe, the only approaches which can provide at least a small chance of finding good
solutions.
A completely deceptive fitness landscape would be even worse than the needle-in-a-
haystack problems outlined in Section 16.7. Exhaustive enumeration of the search
space or searching by using random walks may be the only possible ways to go in extremely
deceptive cases. However, the question arises what kind of practical problem would have
such a feature. The solution of a completely deceptive problem would be the one that, when
looking at all possible solutions, seems to be the most improbable one. However, when
actually testing it, this worst-looking point in the problem space turns out to be the best-
possible solution. Such kinds of problems are, as far as the author of this book is concerned,
rather unlikely and rare.
Tasks T15
55. Why is deceptiveness fatal for metaheuristic optimization? Why are non-deceptive
problems usually easier than deceptive ones?
[5 points]
16.1 Introduction
It is challenging for optimization algorithms if the best candidate solution currently known
is situated on a plane of the fitness landscape, i. e., all adjacent candidate solutions have
the same objective values. As illustrated in Figure 16.1 and Fig. 11.1.f on page 140, an
optimizer then cannot find any gradient information and thus, no direction in which to
proceed in a systematic manner. From its point of view, each search operation will yield
identical individuals. Furthermore, optimization algorithms usually maintain a list of the
best individuals found, which will then overflow eventually or require pruning.
The degree of neutrality ν is defined as the fraction of neutral results among all possible
products of the search operations applied to a specific genotype [219]. We can generalize
this measure to areas G in the search space G by averaging over all their elements. Regions
where ν is close to one are considered as neutral.
∀g1 ∈ G ⇒ ν (g1 ) = |{g2 : P (g2 = Op(g1 )) > 0 ∧ f (gpm(g2 )) = f (gpm(g1 ))}| / |{g2 : P (g2 = Op(g1 )) > 0}|    (16.1)

∀G ⊆ G ⇒ ν (G) = (1/|G|) · Σg∈G ν (g)    (16.2)
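For the single-bit-flip operator, Equation 16.1 can be evaluated directly by enumerating all neighbors. A sketch (the function name is ours; the genotype-phenotype mapping defaults to the identity):

```python
def neutrality_degree(g, f, gpm=lambda g: g):
    # nu(g) under the single-bit-flip operator (Equation 16.1): the
    # fraction of reachable neighbors whose phenotype has the same
    # objective value as g's.
    neighbors = [g[:i] + [1 - g[i]] + g[i + 1:] for i in range(len(g))]
    neutral = sum(f(gpm(h)) == f(gpm(g)) for h in neighbors)
    return neutral / len(neighbors)
```

If, for instance, the objective only depends on the first two bits of a four-bit genotype, flipping either of the last two bits is neutral and ν = 0.5; ν = 1 characterizes a fully redundant region, while ν = 0 means every move changes the objective value.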
172 16 NEUTRALITY AND REDUNDANCY
16.2 Evolvability
Another metaphor in Global Optimization which has been borrowed from biological sys-
tems [1289] is evolvability1 [699]. Wagner [2833, 2834] points out that this word has two
uses in biology: According to Kirschner and Gerhart [1543], a biological system is evolv-
able if it is able to generate heritable, selectable phenotypic variations. Such properties
can then be spread by natural selection and changed during the course of evolution. In its
second sense, a system is evolvable if it can acquire new characteristics via genetic change
that help the organism(s) to survive and to reproduce. Theories about how the ability
of generating adaptive variants has evolved have been proposed by Riedl [2305], Wagner
and Altenberg [78, 2835], Bonner [357], and Conrad [624], amongst others. The idea of
evolvability can be adopted for Global Optimization as follows:
The direct probability of success [295, 2278], i. e., the chance that search operators produce
offspring fitter than their parents, is also sometimes referred to as evolvability in the context
of Evolutionary Algorithms [77, 80].
The link between evolvability and neutrality has been discussed by many researchers [2834,
3039]. The evolvability of neutral parts of a fitness landscape depends on the optimization
algorithm used. It is especially low for Hill Climbing and similar approaches, since the search
operations cannot directly provide improvements or even changes. The optimization process
then degenerates to a random walk, as illustrated in Fig. 11.1.f on page 140. The work of
Beaudoin et al. [242] on the ND fitness landscapes2 shows that neutrality may “destroy”
useful information such as correlation.
Researchers in molecular evolution, on the other hand, found indications that the major-
ity of mutations in biology have no selective influence [971, 1315] and that the transformation
from genotypes to phenotypes is a many-to-one mapping. Wagner [2834] states that neutral-
ity in natural genomes is beneficial if it concerns only a subset of the properties peculiar to
the offspring of a candidate solution while allowing meaningful modifications of the others.
Toussaint and Igel [2719] even go as far as declaring it a necessity for self-adaptation.
The theory of punctuated equilibria3, introduced in biology by Eldredge and Gould
[875, 876], states that species experience long periods of evolutionary inactivity which are
interrupted by sudden, localized, and rapid phenotypic evolutions [185].4 It is assumed that
the populations explore neutral layers5 during the time of stasis until, suddenly, a relevant
change in a genotype leads to a better adapted phenotype [2780] which then reproduces
quickly. Similar phenomena can be observed/are utilized in EAs [604, 1838].
“Uh?”, you may think, “How does this fit together?” The key to differentiating between
“good” and “bad” neutrality is its degree ν in relation to the number of possible solutions
maintained by the optimization algorithms. Smith et al. [2538] have used illustrative ex-
amples similar to Figure 16.2 showing that a certain amount of neutral reproductions can
foster the progress of optimization. In Fig. 16.2.a, basically the same scenario of premature
1 http://en.wikipedia.org/wiki/Evolvability [accessed 2007-07-03]
2 See Section 50.2.3 on page 557 for a detailed elaboration on the ND fitness landscape.
3 http://en.wikipedia.org/wiki/Punctuated_equilibrium [accessed 2008-07-01]
4 A very similar idea is utilized in the Extremal Optimization method discussed in Chapter 41.
5 Or neutral networks, as discussed in Section 16.4.
convergence as in Fig. 13.1.a on page 152 is depicted. The optimizer is drawn to a local opti-
mum from which it cannot escape anymore. Fig. 16.2.b shows that a little shot of neutrality
could form a bridge to the global optimum. The optimizer now has a chance to escape the
smaller peak if it is able to find and follow that bridge, i. e., the evolvability of the system
has increased. If this bridge gets wider, as sketched in Fig. 16.2.c, the chance of finding the
global optimum increases as well. Of course, if the bridge gets too wide, the optimization
process may end up in a scenario like in Fig. 11.1.f on page 140 where it cannot find any
direction. Furthermore, in this scenario we expect the neutral bridge to lead to somewhere
useful, which is not necessarily the case in reality.
Fig. 16.2.a: Premature Convergence. Fig. 16.2.b: Small Neutral Bridge. Fig. 16.2.c: Wide Neutral Bridge. (Each sketch marks the local and the global optimum.)
Recently, the idea of utilizing the processes of molecular6 and evolutionary7 biology as a
complement to Darwinian evolution for optimization has gained interest [212]. Scientists like
Hu and Banzhaf [1286, 1287, 1289] began to study the application of metrics such as the
evolution rate of gene sequences [2973, 3021] to Evolutionary Algorithms. Here, the degree
of neutrality (synonymous vs. non-synonymous changes) seems to play an important role.
Examples for neutrality in fitness landscapes are the ND family (see Section 50.2.3),
the NKp and NKq models (discussed in Section 50.2.1.5), and the Royal Road (see Sec-
tion 50.2.4). Another common instance of neutrality is bloat in Genetic Programming,
which is outlined in ?? on page ??.
1. the rate of discovery of innovations remains constant for a reasonably large number of
applications of the search operations [1314], and
Networks with this property may prove very helpful if they connect the optima in the fitness
landscape. Stewart [2611] utilizes neutral networks and the idea of punctuated equilibria
in his extrema selection, a Genetic Algorithm variant that focuses on exploring individuals
which are far away from the centroid of the set of currently investigated candidate solutions
(but still have good objective values). Then again, Barnett [218] showed that populations
in Genetic Algorithms tend to dwell in neutral networks of high dimensions of neutrality
regardless of their objective values, which (obviously) cannot be considered advantageous.
The convergence on neutral networks has furthermore been studied by Bornberg-Bauer
and Chan [361], van Nimwegen et al. [2778, 2779], and Wilke [2942]. Their results show that
the topology of neutral networks strongly determines the distribution of genotypes on them.
Generally, the genotypes are “drawn” to the solutions with the highest degree of neutrality
ν on the neutral network, as noted by Beaudoin et al. [242].
16.6 Summary
Unlike ruggedness, which is always bad for optimization algorithms, neutrality has
aspects that may either further or hinder the process of finding good solutions. Generally
we can state that degrees of neutrality ν very close to 1 degenerate optimization processes
to random walks. Some forms of neutral networks accompanied by low (nonzero) values of
ν can improve the evolvability and hence, increase the chance of finding good solutions.
Adverse forms of neutrality are often caused by bad design of the search space or
genotype-phenotype mapping. Uniform redundancy in the genome should be avoided where
possible and the amount of neutrality in the search space should generally be limited.
16.7 Needle-In-A-Haystack
Besides fully deceptive problems, one of the worst cases of fitness landscapes is the needle-
in-a-haystack (NIAH) problem [? ] sketched in Fig. 11.1.g on page 140, where the optimum
appears as an isolated spike [1078] in a plane. In other words, small instances of extreme
ruggedness combine with a general lack of information in the fitness landscape. Such problems
are extremely hard to solve and the optimization processes often will converge prematurely
or take very long to find the global optimum. An example for such fitness landscapes is
the all-or-nothing property often inherent to Genetic Programming of algorithms [2729], as
discussed in ?? on page ??.
176 16 NEUTRALITY AND REDUNDANCY
Tasks T16
57. Describe the possible positive and negative aspects of neutrality in the fitness land-
scape.
[10 points]
58. Construct two different objective functions f1 , f2 : R → R+ which have neutral parts
in the fitness landscape. Like the three functions from Task 54 on page 165 and the
two from Task 56 on page 169, they should have the global minimum (optimum) 0
with objective value 0. Try to solve these functions with a trivial implementation of
Hill Climbing (see Algorithm 26.1 on page 230) by applying it 30 times to both of
them. Note your observations regarding the problem hardness and compare them to
your results from Task 54.
[7 points]
Chapter 17
Epistasis, Pleiotropy, and Separability
17.1 Introduction
We speak of minimal epistasis when every gene is independent of every other gene. Then,
the optimization process equals finding the best value for each gene and can most efficiently
be carried out by a simple greedy search (see Section 46.3.1) [692]. A problem is maximally
epistatic when no proper subset of genes is independent of any other gene [2007, 2562]. The
effects of epistasis as described here are closely related to another biological phenomenon:
Definition D17.2 (Pleiotropy). Pleiotropy 2 denotes that a single gene is responsible for
multiple phenotypical traits [1288, 2944].
Like epistasis, pleiotropy can sometimes lead to unexpected improvements but often is harm-
ful for the evolutionary system [77]. Both phenomena may easily intertwine. If one gene
epistatically influences, for instance, two others which are responsible for distinct phenotyp-
ical traits, it has both epistatic and pleiotropic effects. We will therefore consider pleiotropy
and epistasis together and when discussing the effects of the latter, also implicitly refer to
the former.
[Figure: the genome of an example dinosaur; gene labels: shape, color, length, length.]
The figure sketches the genome of a fictional dinosaur
consisting of four genes. Gene 1 influences the color of the creature and is neither pleiotropic
nor has any epistatic relations. Gene 2, however, exhibits pleiotropy since it determines the
length of the hind legs and forelegs. At the same time, it is epistatically connected with
gene 3 which too, influences the length of the forelegs – maybe preventing them from looking
exactly like the hind legs. The fourth gene is again pleiotropic by determining the shape of
the bone armor on the top of the dinosaur’s skull.
17.1.2 Separability
In the area of optimizing over continuous problem spaces (such as X ⊆ Rn ), epistasis and
pleiotropy are closely related to the term separability. Separability is a feature of the objective
functions of an optimization problem [2668]. A function of n variables is separable if it can
be rewritten as a sum of n functions of just one variable [1162, 2099]. Hence, the decision
variables involved in the problem can be optimized independently of each other, i. e., are
minimally epistatic, and the problem is said to be separable according to the formal definition
given by Auger et al. [147, 1181] as follows:
The higher the degree of nonseparability, the more difficult a function usually becomes
for optimization [147, 1181, 2635]. Often, the term nonseparable is used in the sense
of fully-nonseparable. In other words, in between separable and fully nonseparable problems,
a variety of partially separable problems exists. Definition D17.5 is, hence, an alternative
for Definition D17.4.
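To make the notion concrete, consider the following sketch (our own illustration, not part of the original text; all function and variable names are invented): a separable function can be minimized one decision variable at a time, since the best value of each variable does not depend on the others.

```python
# Sketch (our own illustration): a separable function is a sum of
# one-variable functions, so each decision variable can be optimized
# on its own. The sphere function below is separable; the second
# function couples x[0] and x[1] and is therefore nonseparable.

def sphere(x):          # separable: a sum of n one-variable terms
    return sum(xi * xi for xi in x)

def coupled(x):         # nonseparable: the product term links x[0], x[1]
    return x[0] * x[0] + x[1] * x[1] + 10.0 * x[0] * x[1]

def optimize_coordinatewise(f, x, candidates):
    """Greedy per-variable search: try candidate values for one
    variable at a time while keeping the others fixed."""
    x = list(x)
    for i in range(len(x)):
        best = min(candidates, key=lambda v: f(x[:i] + [v] + x[i + 1:]))
        x[i] = best
    return x

grid = [v / 10.0 for v in range(-20, 21)]   # -2.0, -1.9, ..., 2.0
# For the separable sphere, one coordinate-wise sweep finds the optimum:
print(optimize_coordinatewise(sphere, [1.7, -1.3], grid))   # -> [0.0, 0.0]
```

For `coupled`, the best value of `x[0]` depends on the current value of `x[1]`, so the outcome of such a sweep depends on the starting point and visiting order; for `sphere` it does not.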
17.1.3 Examples
Examples of problems with a high degree of epistasis are Kauffman’s NK fitness land-
scape [1499, 1501] (Section 50.2.1), the p-Spin model [86] (Section 50.2.2), and the tun-
able model by Weise et al. [2910] (Section 50.2.7). Nonseparable objective functions often
used for benchmarking are, for instance, the Griewank function [1106, 2934, 3026] (see Sec-
tion 50.3.1.11) and Rosenbrock’s Function [3026] (see Section 50.3.1.5). Partially separable
test problems are given in, for example, [2670, 2708].
As sketched in Figure 17.2, epistasis has a strong influence on many of the previously
discussed problematic features. If one gene can “turn off” or affect the expression of other
genes, a modification of this gene will lead to a large change in the features of the phenotype.
Hence, the causality will be weakened and ruggedness ensues in the fitness landscape. It also
becomes harder to define search operations with exploitive character. Moreover, subsequent
changes to the “deactivated” genes may have no influence on the phenotype at all, which
would then increase the degree of neutrality in the search space. Bowers [375] found that
representations and genotypes with low pleiotropy lead to better and more robust solutions.
Epistasis and pleiotropy are mainly aspects of the way in which objective functions f ∈ f ,
the genome G, and the genotype-phenotype mapping are defined and interact. They should
be avoided where possible.
[Figure 17.2: the influence of epistasis on other problematic features, such as needle-in-a-haystack situations, ruggedness, and multi-modality.]
Furthermore, let the search space be identical to the problem space, i. e., G = X. In the
optimization problem defined by XB and f , the elements of the candidate solutions are
totally independent and can be optimized separately. Thus, f1 is minimally epistatic. In f2 ,
only the first two (x[0], x[1]) and second two (x[2], x[3]) genes of each genotype influence
each other. Both groups are independent of each other and of x[4]. Hence, the problem
has some epistasis. f3 is a needle-in-a-haystack problem where all five genes directly influence
each other. A single zero in a candidate solution deactivates all other genes and the optimum
– the string containing all ones – is the only point in the search space with a non-zero
objective value.
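The exact formulas of f1 , f2 , and f3 belong to the book's example; the following stand-ins are our own plausible reconstructions with the stated properties (maximization over bit strings of length 5 is assumed):

```python
# Hypothetical stand-ins (our assumption, NOT the book's exact formulas)
# for three functions over bit strings of length 5 with the properties
# described in the example above; higher values are better.

def f1(x):
    """Minimally epistatic: every bit contributes independently."""
    return sum(x)

def f2(x):
    """Partially epistatic: (x[0], x[1]) and (x[2], x[3]) are coupled
    pairs, while x[4] is independent of everything else."""
    return x[0] * x[1] + x[2] * x[3] + x[4]

def f3(x):
    """Needle in a haystack: a single 0 anywhere 'deactivates' all
    other genes; only the all-ones string scores non-zero."""
    p = 1
    for bit in x:
        p *= bit
    return p

print(f1([1, 0, 1, 0, 1]), f2([1, 1, 0, 0, 1]), f3([1, 1, 0, 1, 1]))  # -> 3 2 0
```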
[Figure: two packings of twelve objects a0 , . . ., a11 (sizes a0 = 2, a1 = a2 = a3 = 3, a4 = 4, a5 = 5, a6 = 6, a7 = 7, a8 = 8, a9 = 9, a10 = 10, a11 = 12) into bins of size b = 14.]
First candidate solution x1 requires 6 bins. Candidate x2 , a slightly modified version of x1 ,
requires 7 bins and exhibits very different behavior.
17.3 Countermeasures
We have shown that epistasis is a root cause for multiple problematic features of optimization
tasks. General countermeasures against epistasis can be divided into two groups. The
symptoms of epistasis can be mitigated with the same methods which increase the chance
of finding good solutions in the presence of ruggedness or neutrality.
Choosing the problem space and the genotype-phenotype mapping efficiently can lead to a
great improvement in the quality of the solutions produced by the optimization process [2888,
2905]. Introducing specialized search operations can achieve the same [2478].
We outline some general rules for representation design in Chapter 24. Considering these
suggestions may further improve the performance of an optimizer.
Using larger populations and favoring explorative search operations seems to be helpful in
epistatic problems since this should increase the diversity. Shinkai et al. [2478] showed that
applying (a) higher selection pressure, i. e., increasing the chance to pick the best candidate
solutions for further investigation instead of the weaker ones, and (b) extinctive selection,
i. e., only working with the newest produced set of candidate solutions while discarding
the ones from which they were derived, can also increase the reliability of an optimizer to
find good solutions. These two concepts are slightly contradictory, so careful adjustment of
the algorithm settings appears to be vital in epistatic environments. Shinkai et al. [2478]
observed that higher selection pressure also leads to earlier convergence, a fact that we pointed
out in Chapter 13.
According to Winter et al. [2960], linkage is “the tendency for alleles of different genes to
be passed together from one generation to the next” in genetics. This usually indicates that
these genes are closely located in the same chromosome. In the context of Evolutionary
Algorithms, this notion is not useful since identifying spatially close elements inside the
genotypes g ∈ G is trivial. Instead, we are interested in alleles of different genes which have
a joint effect on the fitness [1980, 1981].
Identifying these linked genes, i. e., learning their epistatic interaction, is very helpful
for the optimization process. Such knowledge can be used to protect building blocks3 from
being destroyed by the search operations (such as crossover in Genetic Algorithms), for
instance. Finding approaches for linkage learning has become an especially popular discipline
3 See Section 29.5.5 for information on the Building Block Hypothesis.
in the area of Evolutionary Algorithms with binary [552, 1196, 1981] and real [754] genomes.
Two important methods from this area are the messy GA (mGA, see Section 29.6) by
Goldberg et al. [1083] and the Bayesian Optimization Algorithm (BOA) [483, 2151]. Module
acquisition [108] may be considered as such an effort.
Variable Interaction Learning (VIL) techniques can be used to discover the relations be-
tween decision variables, as introduced by Chen et al. [551] and discussed in the context of
cooperative coevolution in Section 21.2.3 on page 204.
Tasks T17
63. How are epistasis and separability related to each other?
[5 points]
Chapter 18
Noise and Robustness
In the context of optimization, three types of noise can be distinguished. The first form is
(i) noise in the training data used as the basis for learning or for the objective functions [3019]. In
many applications of machine learning or optimization where a model for a given system is
to be learned, data samples including the input of the system and its measured response are
used for training.
1. the usage of Global Optimization for building classifiers (for example for predicting
buying behavior using data gathered in a customer survey for training),
2. the usage of simulations for determining the objective values in Genetic Programming
(here, the simulated scenarios correspond to training cases), and
3. the fitting of mathematical functions to (x, y)-data samples (with artificial neural
networks or symbolic regression, for instance).
Since no measurement device is 100% accurate and there are always random errors, noise is
present in such optimization problems.
Besides inexactnesses and fluctuations in the input data of the optimization process, pertur-
bations are also likely to occur during the application of its results. This category subsumes
the other two types of noise: perturbations that may arise from (ii) inaccuracies in the pro-
cess of realizing the solutions and (iii) environmentally induced perturbations during the
applications of the products.
The effects of noise in optimization have been studied by various researchers; Miller and
Goldberg [1897, 1898], Lee and Wong [1703], and Gurin and Rastrigin [1155] are some of
them. Many Global Optimization algorithms and theoretical results have been proposed
which can deal with noise. Some of them are, for instance, specialized
1. Genetic Algorithms [932, 1544, 2389, 2390, 2733, 2734],
2. Evolution Strategies [168, 294, 1172], and
3. Particle Swarm Optimization [1175, 2119] approaches.
Therefore, a local optimum (or even a non-optimal element) for which slight disturbances
only lead to gentle performance degradation is usually favored over a global optimum
located in a highly rugged area of the fitness landscape [395]. In other words, local optima in
regions of the fitness landscape with strong causality are sometimes better than global optima
with weak causality. Of course, the level of this acceptability is application-dependent.
Figure 18.1 illustrates the issue of local optima which are robust vs. global optima which
are not. Table 18.1 gives some examples on problems which require robust optimization.
[Figure 18.1: a robust local optimum versus a non-robust global optimum of f(x).]
18.3 Countermeasures
For the special case where the phenome is a real vector space (X ⊆ Rn ), several approaches
for dealing with the need for robustness have been developed. Inspired by Taguchi
methods1 [2647], possible disturbances are represented by a vector δ = (δ1 , δ2 , . . . , δn )T ,
δi ∈ R in the method suggested by Greiner [1134, 1135]. If the distributions and influences
of the δi are known, the objective function f(x) : x ∈ X can be rewritten as f̃(x, δ) [2937].
In the special case where δ is normally distributed, this can be simplified to
f̃ (x1 + δ1 , x2 + δ2 , . . . , xn + δn )T . It would then make sense to sample the probability
distribution of δ a number of t times and to use the mean values of f̃(x, δ) for each objective
function evaluation during the optimization process. In cases where the optimal value y ⋆
of the objective function f is known, Equation 18.1 can be minimized. This approach is
also used in the work of Wiesmann et al. [2936, 2937] and basically turns the optimization
algorithm into something like a maximum likelihood estimator (see Section 53.6.2 and
Equation 53.257 on page 695).

f′ (x) = (1/t) Σi=1..t (y ⋆ − f̃(x, δi ))2    (18.1)
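A minimal sketch of this sampling scheme (our own illustration, assuming normally distributed disturbances with known standard deviations; a sphere function stands in for f̃):

```python
import random

def f_tilde(x, delta):
    """Disturbed objective: a simple sphere function evaluated at the
    perturbed point x + delta (stands in for the real objective)."""
    return sum((xi + di) ** 2 for xi, di in zip(x, delta))

def robust_objective(x, y_star, sigma, t=100, rng=random):
    """Equation-18.1-style estimate: the average squared deviation of
    the disturbed objective from the known optimal value y_star over
    t sampled disturbance vectors delta."""
    total = 0.0
    for _ in range(t):
        delta = [rng.gauss(0.0, s) for s in sigma]
        total += (y_star - f_tilde(x, delta)) ** 2
    return total / t

random.seed(42)
# A point at the optimum of the undisturbed sphere (y_star = 0) keeps
# a small robust objective under small Gaussian disturbances:
print(robust_objective([0.0, 0.0], y_star=0.0, sigma=[0.1, 0.1]))
```

Minimizing this estimate favors candidate solutions whose whole neighborhood stays close to y ⋆ , i. e., robust solutions rather than narrow spikes.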
This method corresponds to using multiple, different training scenarios during the objective
function evaluation in situations where X ⊈ Rn . By adding random noise and artificial
perturbations to the training cases, the chance of obtaining robust solutions which are
stable when applied or realized under noisy conditions can be increased.
In all scenarios where optimizers evaluate some of the objective values of the candidate
solutions by using training data, two additional phenomena with negative influence can be
observed: overfitting and (what we call) oversimplification. This is especially true if the
solutions represent functional dependencies inside the training data St .
19.1 Overfitting
[Figure 19.1. Fig. 19.1.a: Three sample points of a linear function. Fig. 19.1.b: The optimal solution x⋆ . Fig. 19.1.c: The overly complex variant x̀.]
A very common cause for overfitting is noise in the sample data. As we have already pointed
out, there exists no measurement device for physical processes which delivers perfect results
without error. Surveys that represent the opinions of people on a certain topic or randomized
simulations will exhibit variations from the true interdependencies of the observed entities,
too. Hence, data samples based on measurements will always contain some noise.
[Figure 19.2. Fig. 19.2.a: The original physical process. Fig. 19.2.b: The measured training data. Fig. 19.2.c: The overfitted result x̀.]
From the examples we can see that the major problem that results from overfitted solutions
is the loss of generality.
19.1.2 Countermeasures
There exist multiple techniques which can be utilized in order to prevent overfitting to a
certain degree. It is most efficient to apply multiple such techniques together in order to
achieve best results.
A very simple approach is to restrict the problem space X in a way that only solutions up
to a given maximum complexity can be found. In terms of polynomial function fitting, this
could mean limiting the maximum degree of the polynomials to be tested, for example.
Furthermore, the functional objective functions (such as those defined in Example E19.1)
which solely concentrate on the error of the candidate solutions should be augmented by
penalty terms and non-functional criteria which favor small and simple models [792, 1510].
Large sets of sample data, although slowing down the optimization process, may improve the
generalization capabilities of the derived solutions. If arbitrarily many training datasets or
training scenarios can be generated, there are two approaches which work against overfitting:
1. The first method is to use a new set of (randomized) scenarios for each evaluation
of each candidate solution. The resulting objective values then may differ largely
even if the same individual is evaluated twice in a row, introducing incoherence and
ruggedness into the fitness landscape.
2. At the beginning of each iteration of the optimizer, a new set of (randomized) scenarios
is generated which is used for all individual evaluations during that iteration. This
method leads to objective values which can be compared without bias. They can be
made even more comparable if the objective functions are always normalized into some
fixed interval, say [0, 1].
In both cases it is helpful to use more than one training sample or scenario per evaluation
and to set the resulting objective value to the average (or better median) of the outcomes.
Otherwise, the fluctuations of the objective values between the iterations will be very large,
making it hard for the optimizers to follow a stable gradient for multiple steps.
Another simple method to prevent overfitting is to limit the runtime of the optimizers [2397].
It is commonly assumed that learning processes normally first find relatively general solutions
which subsequently begin to overfit because the noise “is learned”, too.
For the same reason, some algorithms allow the rate at which the candidate solutions
are modified to be decreased over time. Such a decay of the learning rate makes overfitting
less likely.
If only one finite set of data samples is available for training/optimization, it is common
practice to separate it into a set of training data St and a set of test cases Sc . During the
optimization process, only the training data is used. The resulting solutions x are tested
with the test cases afterwards. If their behavior is significantly worse when applied to Sc
than when applied to St , they are probably overfitted.
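As an illustration of this train/test practice (our own sketch, not from the book; the interpolating polynomial plays the role of the overfitted model x̀, the least-squares line that of a simple model):

```python
import random

def lagrange_fit(points):
    """Lagrange interpolating polynomial through the (x, y) points --
    the maximally complex model, with zero error on the training data."""
    def poly(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return poly

def linear_fit(points):
    """Least-squares straight line through the points -- a simple model."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = sum((x - mx) * (y - my) for x, y in points) / \
        sum((x - mx) ** 2 for x, _ in points)
    a = my - b * mx
    return lambda x: a + b * x

def mse(model, points):
    """Mean squared error of a model on a set of (x, y) samples."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

random.seed(7)
real = lambda x: 2.0 * x + 1.0                      # the "real system"
noisy = lambda x: real(x) + random.gauss(0.0, 0.3)  # measured samples
train = [(x, noisy(x)) for x in [0, 1, 2, 3, 4, 5, 6, 7]]   # St
test = [(x, noisy(x)) for x in [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5]]  # Sc

overfit, simple = lagrange_fit(train), linear_fit(train)
# Zero error on the training data does not imply generalization:
print(mse(overfit, train), mse(overfit, test))
print(mse(simple, train), mse(simple, test))
```

Typically the interpolating polynomial's error on the test cases Sc far exceeds that of the simple linear model, revealing the overfitting that the training error alone hides.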
The same approach can be used to detect when the optimization process should be
stopped. The best known candidate solutions can be checked with the test cases in each
iteration without influencing their objective values which solely depend on the training data.
If their performance on the test cases begins to decrease, there are no benefits in letting the
optimization process continue any further.
19.2 Oversimplification
With the term oversimplification (or overgeneralization) we refer to the opposite of overfit-
ting. Whereas overfitting denotes the emergence of overly complicated candidate solutions,
oversimplified solutions are not complicated enough. Although they represent the training
samples used during the optimization process seemingly well, they are rough overgeneral-
izations which fail to provide good results for cases not part of the training.
A common cause for oversimplification is sketched in Figure 19.3: The training sets only
represent a fraction of the possible inputs. As this is normally the case, one should always
be aware that such an incomplete coverage may fail to represent some of the dependencies
and characteristics of the data, which then may lead to oversimplified solutions. Another
possible reason for oversimplification is that ruggedness, deceptiveness, too much neutrality,
or high epistasis in the fitness landscape may lead to premature convergence and prevent
the optimizer from surpassing a certain quality of the candidate solutions. It then cannot
adapt them completely even if the training data perfectly represents the sampled process. A
third possible cause is that a problem space could have been chosen which does not include
the correct solution.
[Figure 19.3. Fig. 19.3.a: The “real system” and the points describing it. Fig. 19.3.b: The sampled training data. Fig. 19.3.c: The oversimplified result x̀.]
19.2.2 Countermeasures
It is not always possible to have training scenarios which cover the complete input space
A. If a program which takes a 32 bit integer number as input is synthesized with Genetic
Programming, we would need 2^32 = 4 294 967 296 samples. By using multiple scenarios for each
individual evaluation, the chance of missing important aspects is decreased. These scenarios
can be replaced with new, randomly created ones in each generation, in order to decrease
this chance even more.
19.2.2.2 Using a larger Problem Space
The problem space, i. e., the representation of the candidate solutions, should further be
chosen in a way which allows constructing a correct solution to the problem defined. Then
again, releasing too many constraints on the solution structure increases the risk of overfit-
ting and thus, careful proceeding is recommended.
Chapter 20
Dimensionality (Objective Functions)
20.1 Introduction
The most common way to define optima in multi-objective problems (MOPs, see Defini-
tion D3.6 on page 55) is to use the Pareto domination relation discussed in Section 3.3.5
on page 65. A candidate solution x1 ∈ X is said to dominate another candidate solution
x2 ∈ X (x1 ⊣ x2 ) if and only if its corresponding vector of objective values f (x1 ) is (par-
tially) less than the one of x2 , i. e., ∀i ∈ 1..n : fi (x1 ) ≤ fi (x2 ) and ∃i ∈ 1..n : fi (x1 ) < fi (x2 )
(in minimization problems, see Definition D3.13). The domination rank #dom(x, X) of a
candidate solution x ∈ X is the number of elements in a reference set X ⊆ X it is dominated
by (Definition D3.16), a non-dominated element (relative to X) has a domination rank of
#dom(x, X) = 0, and the solutions considered to be global optima are non-dominated for
the whole problem space X (#dom(x, X) = 0, Definition D3.14). These are the elements we
would like to find with optimization or, in the case of probabilistic metaheuristics, at least
wish to approximate as closely as possible.
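The domination relation and the domination rank can be expressed directly in code (a minimal sketch for minimization; the function names are ours):

```python
def dominates(f1, f2):
    """True iff objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f1, f2)) and \
           any(a < b for a, b in zip(f1, f2))

def domination_rank(f, population):
    """Number of elements of the reference set that dominate f."""
    return sum(1 for g in population if dominates(g, f))

pop = [(1, 5), (2, 2), (3, 1), (4, 4)]
# (4, 4) is dominated by both (2, 2) and (3, 1); the rest are non-dominated:
print([domination_rank(f, pop) for f in pop])   # -> [0, 0, 0, 2]
```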
Many studies in the literature consider mainly bi-objective MOPs [1237, 1530]. As a conse-
quence, many algorithms are designed to deal with that kind of problem. However, MOPs
having a higher number of objective functions are common in practice – sometimes the
number of objectives reaches double figures [599] – leading to the so-called many-objective
optimization [2228, 2915]. This term has been coined by the Operations Research commu-
nity to denote problems with more than two or three objective functions [894, 3099]. It is
currently a hot research topic with the first conference on evolutionary multi-objective opti-
mization held in 2001 [3099]. Table 20.1 gives some examples for many-objective situations.
When the dimension of the MOPs increases, the majority of candidate solutions are non-
dominated. As a consequence, traditional nature-inspired algorithms have to be redesigned:
According to Hughes [1303] and on basis of the results provided by Purshouse and Fleming
[2231], it can be assumed that purely Pareto-based optimization approaches (maybe with
added diversity preservation methods) will not perform well in problems with four or more
objectives. Results from the application of such algorithms (NSGA-II [749], SPEA-2 [3100],
PESA [639]) to two or three objectives cannot simply be extrapolated to higher numbers
of optimization criteria [1530]. Kang et al. [1491] consider Pareto optimality as unfair
and imperfect in many-objective problems. Hughes [1303] indicates that the following two
hypotheses may hold:
1. An optimizer that produces an entire Pareto set in one run is better than generating the
Pareto set through many single-objective optimizations using an aggregation approach
such as weighted min-max (see Section 3.3.4) if the number of objective function
evaluations is fixed.
2. Optimizers that use Pareto ranking based methods to sort the population will be very
effective for small numbers of objectives, but not perform as effectively for many-
objective optimization in comparison with methods based on other approaches.
The results of Jaszkiewicz [1437] and Ishibuchi et al. [1418] further demonstrate the degener-
ation of the performance of the traditional multi-objective metaheuristics in many-objective
problems in comparison with single-objective approaches. Ikeda et al. [1400] show that vari-
ous elements, which are apart from the true Pareto frontier, may survive as hardly-dominated
solutions which leads to a decrease in the probability of producing new candidate solutions
dominating existing ones. This phenomenon is called dominance resistance. Deb et al. [750]
recognize this problem of redundant solutions and incorporate an example function provok-
ing it into their test function suite for real-valued, multi-objective optimization.
In addition to these algorithm-side limitations, Bouyssou [374] suggests that based on,
for instance, the limited capabilities of the human mind [1900], an operator will not be able
to make efficient decisions if more than a dozen objectives are involved. Visualizing the
solutions in a human-understandable way becomes more complex with the rising number of
dimensions, too [1420].
[Figure 20.1: surface plots of P(#dom = k | ps, n) over the domination rank k and the number of objectives n, for population sizes ps = 5, 10, 50, 100, 500, 1000, and 3000.]
Figure 20.1: Approximated distribution of the domination rank in random samples for
several population sizes.
The distribution P ( #dom = k| ps, n) of the probability that the domination rank “#dom”
of a randomly selected individual from the initial, randomly created population is exactly
k ∈ 0..ps − 1 depends on the population size ps and the number of objective functions n. We
have approximated this probability distribution with experiments where we determined the
domination rank of ps n-dimensional vectors where each element is drawn from a uniform
distribution for several values of n and ps, spanning from n = 2 to n = 50 and ps = 3 to
ps = 3600. (For n = 1, no experiments were performed since there, each domination rank
from 0 to ps − 1 occurs exactly once in the population.)
Figure 20.1 and Figure 20.2 illustrate the results of these experiments, the arithmetic
means of 100 000 runs for each configuration. In Figure 20.1, the approximated distribu-
tion of P ( #dom = k| ps, n) is illustrated for various n. The value of P ( #dom = 0| ps, n)
quickly approaches one for rising n. We limited the z-axis to 0.3 so the behavior of the
other ranks k > 0, which quickly becomes negligible, can still be seen. For fixed n and
ps, P ( #dom = k| ps, n) drops rapidly, in a shape roughly similar to an exponential distribution
(see Section 53.4.3). For a fixed k and ps, P ( #dom = k| ps, n) takes on roughly
the shape of a Poisson distribution (see Section 53.3.2) along the n-axis, i. e., it rises for
some time and then falls again. With rising ps, the maximum of this distribution shifts
to the right, meaning that an algorithm with a higher population size may be able to deal
with more objective functions. Still, even for ps = 3000, around 64% of the population are
already non-dominated in this experiment.
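This experiment can be reproduced in miniature (our own sketch, with small population sizes and few repetitions to keep it fast):

```python
import random

def dominates(f1, f2):
    """Pareto domination for minimization."""
    return all(a <= b for a, b in zip(f1, f2)) and \
           any(a < b for a, b in zip(f1, f2))

def fraction_nondominated(ps, n, runs=200, rng=random):
    """Monte Carlo estimate of P(#dom = 0 | ps, n): the expected share
    of non-dominated vectors among ps uniform random n-dimensional
    objective vectors, averaged over several random populations."""
    total = 0
    for _ in range(runs):
        pop = [[rng.random() for _ in range(n)] for _ in range(ps)]
        total += sum(1 for f in pop
                     if not any(dominates(g, f) for g in pop if g is not f))
    return total / (runs * ps)

random.seed(1)
# With a fixed population size, the non-dominated share grows with n:
print(fraction_nondominated(ps=10, n=2), fraction_nondominated(ps=10, n=5))
```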
[Figure 20.2: surface plot of P(#dom = 0 | ps, n) over the population size ps and the number of objectives n.]
Figure 20.2: The proportion of non-dominated candidate solutions for several population
sizes ps and dimensionalities n.
Finally, it should be noted that there also exists interesting research where objectives are
added to a single-objective problem to make it easier [1553, 2025], which you can find
summarized in Section 13.4.6. Indeed, Brockhoff et al. [425] showed that in some cases,
adding objective functions may be beneficial (while it is not in others).
20.3 Countermeasures
Various countermeasures have been proposed against the problem of dimensionality. Nice
surveys on contemporary approaches to many-objective optimization with evolutionary
approaches, on the difficulties arising in many-objective optimization, and on benchmark
problems have been provided by Hiroyasu et al. [1237], Ishibuchi et al. [1420], and López
Jaimes and Coello Coello [1775]. In the following we list some of the approaches for
many-objective optimization, mostly utilizing the information provided in [1237, 1420].
The most trivial measure, however, has not been introduced in these studies: increasing
the population size. Well, from Example E20.1 we know that this will work only for a
few objective functions and we have to throw in exponentially more individuals in order
to fight the influence of many objectives. For five or six objective functions, we may still
be able to obtain good results if we do this – for the price of many additional objective
function evaluations. If we follow the rough approximation equation from the example, it
would already take a population size of around ps = 100 000 in order to keep the fraction of
non-dominated candidate solutions below 0.5 (in the scenario used there, that is). Hence,
increasing the population size can only get us so far.
A very simple idea for improving the way multi-objective approaches scale with increasing
dimensionality is to increase the exploration pressure in the direction of the Pareto frontier.
Ishibuchi et al. [1420] distinguish approaches which modify the definition of domination in
order to reduce the number of non-dominated candidate solutions in the population [2403] and
methods which assign different ranks to non-dominated solutions [636, 829, 830, 1576, 1635,
2629]. Farina and Amato [894] propose to rely on fuzzy Pareto methods instead of pure Pareto
comparisons.
Ishibuchi et al. [1420] further point out that fitness assignment methods could be used which
are not based on Pareto dominance. According to them, one approach is using indicator
functions such as those involving hypervolume metrics [1419, 2838].
20.3.4 Scalarizing Approaches
Another class of fitness assignment methods are scalarizing functions [1420], which treat many-
objective problems with single-objective-style methods. Advanced single-objective optimiza-
tion methods like those developed by Hughes [1303] could then be used instead of Pareto
comparisons. Wagner et al. [2837, 2838] support the results and assumptions of Hughes [1303]
that single-objective methods fare better on many-objective problems than traditional multi-objective
Evolutionary Algorithms such as NSGA-II [749] or SPEA 2 [3100], but refute the general
claim that no Pareto-based method can keep up: they show that
more recent approaches [880, 2010, 3096] which favor more than just distribution aspects
can perform very well. Other scalarizing methods can be found in [1304, 1416, 1417, 1437].
Hiroyasu et al. [1237] point out that the search area in the objective space can be lim-
ited. This leads to a decrease in the number of non-dominated points [1420] and can be
achieved by either incorporating preference information [746, 933, 2695] or by reducing the
dimensionality [422–424, 745, 2411].
In Chapter 20, we discussed how an increasing number of objective functions can threaten
the performance of optimization algorithms. We referred to this problem as dimensionality,
i. e., the dimension of the objective space. However, there is another space in optimization
whose dimension can make an optimization problem difficult: the search space G. The
“curse of dimensionality” of the search space stands for the exponential increase of its
volume with the number of decision variables [260, 261]. In order to better distinguish
between the dimensionality of the search space and that of the objective space, we will refer
to the former as scale.
In Example E21.1, we give an example of the growth of the search space with the scale
of a problem, and in Table 21.1, some large-scale optimization problems are listed.
21.2 Countermeasures
When facing large-scale problems, the problematic symptom is the high runtime require-
ment. Thus, any measure that harnesses as much computational power as is available can be a remedy.
Obviously, there (currently) exists no way to solve large-scale N P-hard problems exactly
within feasible time. With more computers, cores, or hardware, only a linear improvement
of runtime can be achieved at most. However, for problems residing in the gray area between
feasible and infeasible, distributed computing [1747] may be the method of choice.
The best way to solve a large-scale problem is to solve it as a small-scale problem. For some
optimization tasks, it is possible to choose a search space which has a smaller size than
the problem space (|G| < |X|). Indirect genotype-phenotype mappings can link the spaces
together, as we will discuss later in Section 28.2.2.
Here, one option is generative mappings (see Section 28.2.2.1), which step-by-step con-
struct a complex phenotype by translating a genotype according to some rules. Grammatical
Evolution, for instance, unfolds a symbol according to a BNF grammar with rules identified
in a genotype. This recursive process can basically lead to arbitrarily complex phenotypes.
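The principle of such a generative mapping can be sketched in a few lines of Java. The toy grammar, the codon-modulo rule selection, and all identifiers below are illustrative assumptions of ours, not the canonical Grammatical Evolution implementation:

```java
/** Illustrative sketch of a generative genotype-phenotype mapping: a
 *  genotype of integer codons selects production rules of a tiny
 *  expression grammar, unfolding the start symbol step by step. */
public class GrammarMapping {

    // Toy grammar (an assumption here): <expr> ::= <expr>+<expr> | x | y
    static final String[] EXPR_RULES = {"<expr>+<expr>", "x", "y"};

    /** Unfolds "<expr>" using one codon per expansion; stops when no
     *  non-terminal is left or the genotype is exhausted (in the latter
     *  case the phenotype may remain incomplete). */
    static String map(int[] genotype) {
        String phenotype = "<expr>";
        int codon = 0;
        while (phenotype.contains("<expr>") && codon < genotype.length) {
            String chosen = EXPR_RULES[genotype[codon++] % EXPR_RULES.length];
            phenotype = phenotype.replaceFirst("<expr>", chosen);
        }
        return phenotype;
    }

    public static void main(String[] args) {
        // codon 0 expands, codons 1 and 2 pick the terminals x and y
        System.out.println(map(new int[]{0, 1, 2})); // x+y
    }
}
```

Even this tiny example shows the key property: a short, fixed-format genotype can encode phenotypes of in principle unbounded structural complexity.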
Another approach is to apply a developmental, ontogenic mapping (see Section 28.2.2.2)
that uses feedback from simulations or objective functions in this process.
Sometimes, parts of candidate solutions are independent from each other and can be opti-
mized more or less separately. In such a case of low epistasis (see Chapter 17), a large-scale
problem can be divided into several problems of smaller scale which can be optimized sepa-
rately. If solving a problem of scale n takes 2^n algorithm steps, solving two problems of scale
n/2 each (2 · 2^(n/2) steps in total) will clearly lead to a great runtime reduction. Such a
reduction may even be worth sacrificing some solution quality. If the optimization problem
at hand exhibits low epistasis or is separable, such a sacrifice may even be unnecessary.
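This back-of-the-envelope argument can be made concrete. The following sketch (our own illustration; the exponential step count is the assumption stated above) contrasts the 2^n steps for the full problem with the 2 · 2^(n/2) steps for the two half-sized subproblems:

```java
/** Back-of-the-envelope comparison of the runtime gain from decomposing
 *  a separable problem, assuming an algorithm needing 2^n steps for a
 *  problem of scale n. */
public class DecompositionGain {

    /** Steps for the undivided problem: 2^n. */
    static long fullSteps(int n) {
        return 1L << n;
    }

    /** Steps for two independent subproblems of scale n/2: 2 * 2^(n/2). */
    static long splitSteps(int n) {
        return 2L * (1L << (n / 2));
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 40; n += 10)
            System.out.printf("n=%d: full=%d, decomposed=%d%n",
                    n, fullSteps(n), splitSteps(n));
    }
}
```

Already for n = 40, the decomposed variant needs roughly a million times fewer steps, which is why exploiting separability is so attractive.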
Coevolution has been shown to be an efficient approach in combinatorial optimization [1311].
If extended with a cooperative component, i. e., to Cooperative Coevolution [2210, 2211],
it can efficiently exploit separability in numerical problems and lead to better results [551,
3020].
Generally, it can be a good idea to concurrently use sets of algorithms [1689] or portfolios [2161] working on the same or on different populations. This way, the different strengths
of the optimization methods can be combined. In the beginning, for instance, an algorithm
with good convergence speed may be granted more runtime. Later, the focus can shift
towards methods which can retain diversity and are not prone to premature convergence.
Alternatively, a sequential approach can be performed, which starts with one algorithm and
switches to another one when no further improvements are found. This way, first an
interesting area in the search space can be found which then can be investigated in more
detail.
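Such a sequential switch can be sketched as follows. The two phases, their budgets, and the step sizes below are purely illustrative assumptions on a toy one-dimensional problem, not a recommendation from the text:

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

/** Illustrative sketch of a sequential portfolio: a coarse random
 *  sampling phase first locates an interesting region of the search
 *  space, then a fine-grained local refinement phase investigates it
 *  in more detail. All parameters are toy choices. */
public class SequentialPortfolio {

    static double minimize(DoubleUnaryOperator f, Random rand) {
        // Phase 1: coarse random sampling over [-10, 10]
        double best = 0.0;
        for (int i = 0; i < 200; i++) {
            double x = -10 + 20 * rand.nextDouble();
            if (f.applyAsDouble(x) < f.applyAsDouble(best)) best = x;
        }
        // Phase 2: switch algorithms -- local refinement with small
        // Gaussian steps around the best point found so far
        for (int i = 0; i < 2000; i++) {
            double cand = best + 0.1 * rand.nextGaussian();
            if (f.applyAsDouble(cand) < f.applyAsDouble(best)) best = cand;
        }
        return best;
    }

    public static void main(String[] args) {
        // minimum of (x-3)^2 lies at x = 3
        double x = minimize(v -> (v - 3) * (v - 3), new Random(42));
        System.out.println("found minimum near: " + x);
    }
}
```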
Chapter 22
Dynamically Changing Fitness Landscape
It should also be mentioned that there exist problems with dynamically changing fitness
landscapes [396, 397, 400, 1957, 2302]. The task of an optimization algorithm is then to
provide candidate solutions with momentarily optimal objective values for each point in time.
Here we face the problem that an optimum in iteration t will possibly no longer be an optimum in
iteration t + 1. Problems with dynamic characteristics can, for example, be tackled
with special forms [3019] of Evolutionary Algorithms.
The moving peaks benchmarks by Branke [396, 397] and Morrison and De Jong [1957] are
good examples of dynamically changing fitness landscapes. In Table 22.1, we give some
examples of applications of dynamic optimization.
By now, we know the most important difficulties that can be encountered when applying an
optimization algorithm to a given problem. Furthermore, we have seen that it is arguable
what an optimum actually is if multiple criteria are optimized at once. The fact that there
is most likely no optimization method that can outperform all others on all problems can,
thus, easily be accepted. Instead, there exists a variety of optimization methods specialized
in solving different types of problems. There are also algorithms which deliver good results
for many different problem classes, but may be outperformed by highly specialized methods
in each of them. These facts have been formalized by Wolpert and Macready [2963, 2964]
in their No Free Lunch Theorems1 (NFL) for search and optimization algorithms.
2 Notice that we have partly utilized our own notations here in order to be consistent throughout the
book.
issue. Each distinct TSP produces a different structure of φ. Yet, we would use the same
optimization algorithm a for all problems of this class without knowing the exact shape of φ.
This corresponds to the assumption that there is a set of very similar optimization problems
which we may encounter here, although their exact structure is not known. We act as if there
were a probability distribution over all possible problems which is non-zero for the TSP-like
ones and zero for all others.
23.3 Implications
From this theorem, it immediately follows that, in order to outperform a1 on one opti-
mization problem, a2 will necessarily perform worse on another. Figure 23.1 visualizes this
issue. It shows that general optimization approaches like Evolutionary Algorithms can solve
a variety of problem classes with reasonable performance. In this figure, we have chosen a
performance measure Φ subject to maximization, i. e., the higher its values, the faster the
problem will be solved. Hill Climbing approaches, for instance, will be much faster than
Evolutionary Algorithms if the objective functions are continuous and monotonic, that is, on a
smaller set of optimization tasks. Greedy search methods will perform fast on all problems
with matroid structure. Evolutionary Algorithms will most often still be able to solve these
problems, it just takes them longer to do so. The performance of Hill Climbing and greedy
approaches degrades on other classes of optimization tasks as a trade-off for their high
utility in their “area of expertise”.
One interpretation of the No Free Lunch Theorem is that it is impossible for any opti-
mization algorithm to outperform random walks or exhaustive enumerations on all possible
problems. For every problem where a given method leads to good results, we can construct
a problem where the same method has exactly the opposite effect (see Chapter 15). As
a matter of fact, doing so is even a common practice to find weaknesses of optimization
algorithms and to compare them with each other, see Section 50.2.6, for example.
Fig. 23.2.a: A landscape good for Hill Climbing. Fig. 23.2.b: A landscape bad for Hill Climbing.
Consider the two objective functions sketched in Figure 23.2, which are both subject to
minimization. We divided both fitness landscapes
into sections labeled with roman numerals. A Hill Climbing algorithm will generate points
in the proximity of the currently best solution and move to these points if they are
better. Hence, in both problems it will always follow the downward gradient. In
Fig. 23.2.a, this is the right decision in sections II and III, because the global
optimum is located at the border between them, but wrong in the deceptive section I. In
Fig. 23.2.b, exactly the opposite is the case: here, a gradient descent is only a good idea
in section IV (which corresponds to I in Fig. 23.2.a), whereas it leads away from the global
optimum in section V, which has the same extent as sections II and III in Fig. 23.2.a.
Hence, our Hill Climbing algorithm would exhibit very different performance when applied to
Fig. 23.2.a and Fig. 23.2.b. Depending on the search operations available, it may even be
outpaced by a random walk on Fig. 23.2.b on average. Since we may not know the nature of
the objective functions before the optimization starts, this may become problematic. How-
ever, Fig. 23.2.b is deceptive in the whole section V, i. e., most of its problem space, which
is not the case to such an extent in many practical problems.
Another interpretation is that every useful optimization algorithm utilizes some form of
problem-specific knowledge. Radcliffe [2244] basically states that without such knowledge,
search algorithms cannot exceed the performance of simple enumerations. Incorporating
knowledge starts with relying on simple assumptions like “if x is a good candidate solution,
then we can expect other good candidate solutions in its vicinity”, i. e., strong causality.
The more (correct) problem-specific knowledge is integrated into the algorithm
structure, the better the algorithm will perform. On the other hand, knowledge correct
for one class of problems is, quite possibly, misleading for another class. In reality, we use
optimizers to solve a given set of problems and are not interested in their performance when
(wrongly) applied to other classes.
The rough meaning of the NFL is that all black-box optimization methods perform equally
well over the complete set of all optimization problems [2074]. In practice, we do not want
to apply an optimizer to all possible problems but to only some, restricted classes. In terms
of these classes, we can make statements about which optimizer performs better.
Today, there exists a wide range of work on No Free Lunch Theorems for many different
aspects of machine learning. The website http://www.no-free-lunch.org/4 gives
a good overview of them. Further summaries, extensions, and criticisms have been
provided by Köppen et al. [1578], Droste et al. [837, 838, 839, 840], Oltean [2074], and
Igel and Toussaint [1397, 1398]. Radcliffe and Surry [2247] discuss the NFL in the context
of Evolutionary Algorithms and the representations used as search spaces. The No Free
Lunch Theorem is furthermore closely related to the Ugly Duckling Theorem5 proposed by
Watanabe [2868] for classification and pattern recognition.
4 accessed: 2008-03-28
5 http://en.wikipedia.org/wiki/Ugly_duckling_theorem [accessed 2008-08-22]
23.4 Infinite and Continuous Domains
Recently, Auger and Teytaud [145, 146] showed that the No Free Lunch Theorem holds only
in a weaker form for countably infinite domains. For continuous domains (i. e., uncountably
infinite ones), it does not hold. This means that, for a given performance measure and a limit
on the maximally allowed objective function evaluations, it can be possible to construct an
optimal algorithm for a certain family of objective functions [145, 146].
A factor that should be considered when thinking about these issues is that real, physically
existing computers can only work with finite search spaces, since they have finite memory.
In the end, even floating point numbers are finite in value and precision. Hence, there is
always a difference between the behavior of the algorithmic, mathematical definition of an
optimization method and its practical implementation for a given computing environment.
Unfortunately, the author of this book finds himself unable to state with certainty whether
this means that the NFL does hold for realistic continuous optimization or whether the
proofs by Auger and Teytaud [145, 146] carry over to actual programs.
23.5 Conclusions
So far, the subject of our discussion was the question of what makes optimization prob-
lems hard, especially for metaheuristic approaches. We have discussed numerous different
phenomena which can affect the optimization process and lead to disappointing results.
If an optimization process has converged prematurely, it has been trapped in a non-
optimal region of the search space from which it cannot “escape” anymore (Chapter 13).
Ruggedness (Chapter 14) and deceptiveness (Chapter 15) in the fitness landscape, often
caused by epistatic effects (Chapter 17), can misguide the search into such a region. Neu-
trality and redundancy (Chapter 16) can either slow down optimization because the appli-
cation of the search operations does not lead to a gain in information or may also contribute
positively by creating neutral networks from which the search space can be explored and
local optima can be escaped from. Noise is present in virtually all practical optimization
problems. The solutions that are derived for them should be robust (Chapter 18). Also,
they should neither be too general (oversimplification, Section 19.2) nor too specifically
aligned only to the training data (overfitting, Section 19.1). Furthermore, many practical
problems are multi-objective, i. e., involve the optimization of more than one criterion at
once (partially discussed in Section 3.3), or concern objectives which may change over time
(Chapter 22).
The No Free Lunch Theorem argues that it is not possible to develop the one optimization
algorithm, the problem-solving machine which can provide us with near-optimal solutions
in short time for every possible optimization task (at least in finite domains). Even though
this proof does not directly hold for purely continuous or infinite search spaces, this must
sound very depressing for everybody new to this subject.
Actually, quite the opposite is the case, at least from the point of view of a researcher.
The No Free Lunch Theorem means that there will always be new ideas, new approaches
which will lead to better optimization algorithms to solve a given problem. Instead of being
doomed to obsolescence, it is far more likely that most of the currently known optimization
methods have at least one niche, one area where they are excellent. It also means that it
is very likely that the “puzzle of optimization algorithms” will never be completed. There
will always be a chance that an inspiring moment, an observation in nature, for instance,
may lead to the invention of a new optimization algorithm which performs better in some
problem areas than all currently known ones.
Most Global Optimization algorithms share the premise that solutions to problems are either
(a) elements of a somewhat continuous space that can be approximated stepwise or (b) that
they can be composed of smaller modules which already have good attributes even when oc-
curring separately. Especially in the second case, the design of the search space (or genome)
G and the genotype-phenotype mapping “gpm” is vital for the success of the optimization
process. It determines to what degree these expected features can be exploited by defining
how the properties and the behavior of the candidate solutions are encoded and how the
search operations influence them. In this chapter, we will first discuss a general theory
on how properties of individuals can be defined, classified, and how they are related. We
will then outline some general rules for the design of the search space G, the search opera-
tions searchOp ∈ Op, and the genotype-phenotype mapping “gpm”. These rules represent
the lessons learned from our discussion of the possible problematic aspects of optimization
tasks. In software engineering, there exists a variety of design patterns that describe good
practice and experience values. Utilizing these patterns helps the software engineer to
create well-organized, extensible, and maintainable applications.
Whenever we want to solve a problem with Global Optimization algorithms, we need to
define the structure of a genome. The individual representation, along with the genotype-
phenotype mapping, is a vital part of Genetic Algorithms and has a major impact on the
chance of finding good solutions. As in software engineering, there exist basic patterns
and rules of thumb on how to design them properly.
Based on our previous discussions, we can now provide some general best practices for the
design of encodings from different perspectives. These principles can lead to finding better
solutions or higher optimization speed if considered in the design phase [1075, 2032, 2338].
We will outline suggestions given by different researchers transposed into the more general
vocabulary of formae (introduced in Chapter 9 on page 119).
Beyer and Schwefel [300, 302] furthermore call for unbiased variation operators. The
unary mutation operations in Evolution Strategies, for example, should not give preference to
any specific change of the genotypes. The reason for this is that the selection operations, i. e.,
the methods which choose the candidate solutions, already exploit fitness information and
guide the search into one direction. The variation operators thus should not do this job as
well but should instead try to increase the diversity. They should perform exploration as opposed
to the exploitation performed by selection.
Beyer and Schwefel [302] therefore again suggest to use normal distributions to mutate
real-valued vectors if G ⊆ Rn, and Rudolph [2348] proposes to use geometrically distributed
random numbers to offset genotype vectors in G ⊆ Zn.
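A minimal sketch of such an unbiased mutation for real-valued genotypes could look as follows; the step size sigma and all identifiers are illustrative choices of ours, not the Evolution Strategy reference implementation:

```java
import java.util.Arrays;
import java.util.Random;

/** Illustrative sketch of an unbiased mutation for G = R^n: each gene
 *  receives a zero-mean, normally distributed offset, so no direction
 *  of change is preferred over any other. */
public class GaussianMutation {

    /** Returns a mutated copy of genotype g; the parent is left intact. */
    static double[] mutate(double[] g, double sigma, Random rand) {
        double[] child = g.clone();
        for (int i = 0; i < child.length; i++)
            child[i] += sigma * rand.nextGaussian(); // zero mean: unbiased
        return child;
    }

    public static void main(String[] args) {
        double[] parent = {1.0, 2.0, 3.0};
        double[] child = mutate(parent, 0.1, new Random(7));
        System.out.println(Arrays.toString(child));
    }
}
```

Because the offsets have zero mean, the operator explores the neighborhood of the parent without pushing the search in any particular direction; the exploitation is left entirely to selection.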
Palmer [2113], Palmer and Kershenbaum [2115] and Della Croce et al. [765] state that a
good search space G and genotype-phenotype mapping “gpm” should be able to repre-
sent all possible phenotypes and solution structures, i. e., be surjective (see Section 51.7 on
page 644) [2032].
∀x ∈ X : ∃g ∈ G : x = gpm(g) (24.2)
In the spirit of Section 24.1, and with the goal of lowering redundancy, the number of genotypes
which map to the same phenotype should be low [2322–2324]. In Section 16.5 we provided a
thorough discussion of redundancy in the genotypes and its demerits (but also its possible
advantages). Combining the idea of achieving low uniform redundancy with the principle
stated in Section 24.3, Palmer and Kershenbaum [2113, 2115] conclude that the GPM should
ideally be both simple and bijective.
The genotype-phenotype mapping should always yield valid phenotypes [2032, 2113, 2115].
The meaning of valid in this context is that if the problem space X is the set of all possible
trees, only trees should be encoded in the genome. If we use R3 as the problem space,
no vectors with fewer or more than three elements should be produced by the genotype-
phenotype mapping. Anything else would not make sense anyway, so this is a pretty basic
rule. This form of validity does not imply that the individuals are also correct solutions in
terms of the objective functions.
Constraints on what a good solution is can, on one hand, be realized by defining them ex-
plicitly and making them available to the optimization process (see Section 3.4 on page 70).
On the other hand, they can also be implemented by defining the encoding, search opera-
tions, and genotype-phenotype mapping in a way which ensures that all phenotypes created
are feasible by default [1888, 2323, 2912, 2914].
24.6 Formae Inheritance and Preservation
The search operations should be able to preserve the (good) formae and (good) configurations
of multiple formae [978, 2242, 2323, 2879]. Good formae should not easily get lost during
the exploration of the search space.
From a binary search operation like recombination (e.g., crossover in GAs), we would
expect that, if its two parameters g1 and g2 are members of a forma A, the resulting element
will also be an instance of A.
If we can furthermore assume that all instances of all formae A with minimal precision
(A ∈ mini) of an individual are inherited from at least one parent, the binary reproduction
operation is considered pure [2242, 2879].
If this is the case, all properties of a genotype g3 which is a combination of two others (g1, g2)
can be traced back to at least one of its parents. Otherwise, “searchOp” also performs an
implicit unary search step, a mutation in Genetic Algorithms, for instance. Such properties,
although discussed here for binary search operations only, can be extended to arbitrary n-ary
operators.
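A simple example of a pure binary operator in this sense is uniform crossover over fixed-length integer strings: every gene of the child is copied from one of the two parents, so no property can appear that neither parent possessed. The following sketch (our own illustration; all identifiers are ours) also checks this purity property explicitly:

```java
import java.util.Random;

/** Illustrative sketch of a "pure" binary recombination: uniform
 *  crossover copies each gene from one of the two parents, so every
 *  minimal-precision forma of the child is inherited from a parent. */
public class UniformCrossover {

    /** For each position, inherit the gene from a randomly chosen parent. */
    static int[] recombine(int[] g1, int[] g2, Random rand) {
        int[] child = new int[g1.length];
        for (int i = 0; i < child.length; i++)
            child[i] = rand.nextBoolean() ? g1[i] : g2[i]; // inherit, never invent
        return child;
    }

    /** Purity check: every gene of the child occurs in one of the parents
     *  at the same position. */
    static boolean isPure(int[] child, int[] g1, int[] g2) {
        for (int i = 0; i < child.length; i++)
            if (child[i] != g1[i] && child[i] != g2[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        Random rand = new Random(5);
        int[] g1 = {0, 0, 0, 0}, g2 = {1, 1, 1, 1};
        int[] child = recombine(g1, g2, rand);
        System.out.println(isPure(child, g1, g2)); // holds by construction
    }
}
```

A mutation operator, by contrast, would deliberately violate isPure: it introduces gene values not present in either parent and thus performs the implicit unary search step mentioned above.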
The optimization algorithms find new elements in the search space G by applying the search
operations searchOp ∈ Op. These operations can only create, modify, or combine genotypical
formae since they usually have no information about the problem space. Most mathematical
models dealing with the propagation of formae, like the Building Block Hypothesis and the
Schema Theorem, thus focus on the search space and show that highly fit genotypical formae
are more likely to be investigated further than those of low utility. Our goal, however, is to
find highly fit formae in the problem space X. Such properties can only be created, modified,
and combined by the search operations if they correspond to genotypical formae. According
to Radcliffe [2242] and Weicker [2879], a good genotype-phenotype mapping should provide
this feature.
It furthermore becomes clear that useful separate properties in phenotypic space can only
be combined by the search operations properly if they are represented by separate formae
in genotypic space too.
If this principle holds for many individuals and formae, useful properties can be combined by
the optimization step by step, narrowing down the precision of the arising, most interesting
formae more and more. This should lead the search to the most promising regions of the
search space.
Minimal-sized genomes are not always the best approach. An interesting aspect of genome
design supporting this claim is inspired by the works of the theoretical biologist Conrad
[621, 622, 623, 625]. According to his extradimensional bypass principle, it is possible to
transform a rugged fitness landscape with isolated peaks into one with connected saddle
points by increasing the dimensionality of the search space [496, 544]. In [625] he states that
the chance of isolated peaks in randomly created fitness landscapes decreases when their
dimensionality grows.
This partly contradicts Section 24.1, which states that genomes should be as compact
as possible. Conrad [625] does not suggest that nature includes useless sequences in the
genome, but rather genes which allow for
1. new phenotypical characteristics or
2. redundancy providing new degrees of freedom for the evolution of a species.
In some cases, such an increase in freedom more than makes up for the additional “costs”
arising from the enlargement of the search space. The extradimensional bypass can be
considered as an example of positive neutrality (see Chapter 16).
Figure 24.1: Examples for an increase of the dimensionality of a search space G (1d) to G′
(2d). Fig. 24.1.a: Useful increase of dimensionality. Fig. 24.1.b: Useless increase of dimensionality.
In Fig. 24.1.a, an example of the extradimensional bypass (similar to Fig. 6 in [355])
is sketched. The original problem had a one-dimensional search space G corresponding to
the horizontal axis up front. As can be seen in the plane in the foreground, the objective
function had two peaks: a local optimum on the left and a global optimum on the right,
separated by a larger valley. When the optimization process began climbing up the local
optimum, it was very unlikely ever to escape this hill and reach the global one.
Increasing the search space to two dimensions (G′), however, opened up a pathway
between them. The two isolated peaks became saddle points on a longer ridge. The global
optimum is now reachable from all points on the local optimum.
Generally, increasing the dimension of the search space only makes sense if the added
dimension has a non-trivial influence on the objective functions. Simply adding a useless new
dimension (as done in Fig. 24.1.b) would be an example for some sort of uniform redundancy
from which we already know (see Section 16.5) that it is not beneficial. Then again, adding
useful new dimensions may be hard or impossible to achieve in most practical applications.
A good example for this issue is given by Bongard and Paul [355] who used an EA to
evolve a neural network for the motion control of a bipedal robot. They performed runs
where the evolution had control over some morphological aspects and runs where it had
not. The ability to change the leg width of the robots, for instance, comes at the expense
of an increase in the dimensions of the search space. Hence, one would expect the
optimization to perform worse. Instead, in one series of experiments, the results were
much better with the extended search space. The runs did not converge to one particular
leg shape but to a wide range of different structures. This led to the assumption that the
morphology itself was not so much the target of the optimization; rather, the ability to change it
transformed the fitness landscape into a structure more navigable by the evolution. In some
other experimental runs performed by Bongard and Paul [355], this phenomenon could not
be observed, most likely because
1. the robot configuration led to a problem of too high complexity, i. e., ruggedness in
the fitness landscape and/or
2. the increase in dimensionality this time was too large to be compensated by the gain
of evolvability.
Further examples for possible benefits of “gradually complexifying” the search space are
given by Malkin in his doctoral thesis [1818].
Part III
25.1.2 Books
1. Metaheuristics for Scheduling in Industrial and Manufacturing Applications [2992]
2. Handbook of Metaheuristics [1068]
3. Fleet Management and Logistics [651]
4. Metaheuristics for Multiobjective Optimisation [1014]
5. Large-Scale Network Parameter Configuration using On-line Simulation Framework [3031]
6. Modern Heuristic Techniques for Combinatorial Problems [2283]
7. How to Solve It: Modern Heuristics [1887]
8. Search Methodologies – Introductory Tutorials in Optimization and Decision Support Tech-
niques [453]
9. Handbook of Approximation Algorithms and Metaheuristics [1103]
25.1.4 Journals
1. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
2. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
3. The Journal of the Operational Research Society (JORS) published by Operations Research
Society and Palgrave Macmillan Ltd.
4. Journal of Heuristics published by Springer Netherlands
5. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
6. Journal of Global Optimization published by Springer Netherlands
7. Discrete Optimization published by Elsevier Science Publishers B.V.
8. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
9. Engineering Optimization published by Informa plc and Taylor and Francis LLC
10. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
11. Optimization Methods and Software published by Taylor and Francis LLC
12. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
13. Structural and Multidisciplinary Optimization published by Springer-Verlag GmbH
14. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization pub-
lished by SIAM Activity Group on Optimization – SIAG/OPT
Chapter 26
Hill Climbing
26.1 Introduction
In other words, a local search algorithm in each iteration either keeps the current candidate
solution or moves to one element from its neighborhood as defined in Definition D4.12 on
page 86. Local search algorithms have two major advantages: They use very little memory
(normally only a constant amount) and are often able to find solutions in large or infinite
search spaces. These advantages come, of course, with large trade-offs in processing time
and the risk of premature convergence.
Hill Climbing2 (HC) [2356] is one of the oldest and simplest local search and optimiza-
tion algorithms for single objective functions f. In Algorithm 1.1 on page 40, we gave an
example of a simple and unstructured Hill Climbing algorithm. Here we will generalize
from Algorithm 1.1 to a more basic Monte Carlo Hill Climbing algorithm structure.
26.2 Algorithm
The Hill Climbing procedure usually starts either from a random point in the search space
or from a point chosen according to some problem-specific criterion (the “create”
operator in Algorithm 26.1). Then, in a loop, the currently known best individual p⋆
is modified (with the “mutate” operator) in order to produce one offspring pnew. If this new
individual is better than its parent, it replaces it. Otherwise, it is discarded. This procedure
is repeated until the termination criterion “terminationCriterion” is met. In Listing 56.6 on
page 735, we provide a Java implementation which resembles Algorithm 26.1.
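A minimal sketch of this loop (our own illustration for G = X = R under minimization, not the book's Listing 56.6; the Gaussian step size is an arbitrary choice) could look as follows:

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

/** Illustrative Monte Carlo Hill Climbing sketch in the spirit of
 *  Algorithm 26.1, for G = X = R and a single objective f to minimize. */
public class HillClimber {

    static double climb(DoubleUnaryOperator f, double start,
                        int maxSteps, Random rand) {
        double best = start;                          // p* <- create()
        double bestF = f.applyAsDouble(best);
        for (int t = 0; t < maxSteps; t++) {          // terminationCriterion()
            double cand = best + 0.1 * rand.nextGaussian(); // mutate(p*)
            double candF = f.applyAsDouble(cand);
            if (candF < bestF) {                      // keep only improvements
                best = cand;
                bestF = candF;
            }                                         // otherwise: discarded
        }
        return best;
    }

    public static void main(String[] args) {
        // minimize f(x) = x^2 starting from x = 5
        double x = climb(v -> v * v, 5.0, 5000, new Random(0));
        System.out.println("best x: " + x);
    }
}
```

Note how this is exactly the (1 + 1) scheme: one parent, one offspring, and survival of the better of the two.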
In this sense, Hill Climbing is similar to an Evolutionary Algorithm (see Chapter 28)
with a population size of 1 and truncation selection. As a matter of fact, it resembles a (1 + 1)
Evolution Strategy. In Algorithm 30.7 on page 369, we provide such an optimization
algorithm for G ⊆ Rn which additionally adapts the search step size.
1 http://en.wikipedia.org/wiki/Local_search_(optimization) [accessed 2010-07-24]
2 http://en.wikipedia.org/wiki/Hill_climbing [accessed 2007-07-03]
Although the search space G and the problem space X are most often the same in Hill
Climbing, we distinguish them in Algorithm 26.1 for the sake of generality. It should further
be noted that Hill Climbing can be implemented in a deterministic manner if, for each
genotype g1 ∈ G, the set of adjacent points {g2 ∈ G : adjacent(g2, g1)} is finite.
As illustrated in Algorithm 26.2, we can easily extend Hill Climbing algorithms with
support for multi-objective optimization by using some of the methods from Section 3.5.2
on page 76 and some which we later will discuss in the context of Evolutionary Algorithms.
This extended approach will then return a set X of the best candidate solutions found instead
of a single point x from the problem space as done in Algorithm 26.1.
The set archive of currently known best individuals may contain more than one element. In each round, we pick one individual from the archive at random. We then use this individual to create the next point in the problem space with the “mutate” operator. The algorithm then checks whether the new candidate solution is not prevailed by any element in archive. If so, it will enter the archive, and all individuals from archive which now are dominated are removed. Since an optimization problem may potentially have infinitely many global optima, we subsequently have to ensure that the archive does not grow too big: pruneOptimalSet(archive, as) makes sure that it contains at most as individuals and, if necessary, deletes some individuals. In case of Algorithm 26.2, the archive can grow to at most as + 1 individuals before the pruning, so at most one individual will be deleted.
When the loop ends because the termination criterion terminationCriterion was met, we
extract all the phenotypes from the archived individuals (extractPhenotypes(archive)) and
return them as the set X. An example implementation of this method in Java can be found
in Listing 56.7 on page 737.
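The core of this archive maintenance can be sketched as follows (a simplified sketch of our own that works directly on objective vectors; in Algorithm 26.2 the archive stores complete individuals, and pruneOptimalSet additionally enforces the size limit as):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a Pareto-archive update for multi-objective Hill Climbing. */
public class ArchiveSketch {

    /** true iff a dominates b: no worse in all objectives,
     *  strictly better in at least one (all objectives minimized). */
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    /** Insert objective vector v if it is non-dominated;
     *  drop all archive entries that v now dominates. */
    static void updateOptimalSet(List<double[]> archive, double[] v) {
        for (double[] a : archive) if (dominates(a, v)) return; // v is dominated
        archive.removeIf(a -> dominates(v, a));
        archive.add(v);
    }

    public static void main(String[] args) {
        List<double[]> archive = new ArrayList<>();
        updateOptimalSet(archive, new double[]{1, 2});
        updateOptimalSet(archive, new double[]{2, 1}); // incomparable, kept
        updateOptimalSet(archive, new double[]{0, 0}); // dominates both
        System.out.println(archive.size()); // 1
    }
}
```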
26.4 Problems in Hill Climbing
The major problem of Hill Climbing is premature convergence. We introduced this phe-
nomenon in Chapter 13 on page 151: The optimization gets stuck at a local optimum and
cannot discover the global optimum anymore.
Both previously specified versions of the algorithm are very likely to fall prey to this problem, since they always move towards improvements in the objective values and thus cannot reach any candidate solution which is situated behind an inferior point. They will only follow a path of candidate solutions if it is monotonously3 improving the objective function(s). The function used in Example E5.1 on page 96 is, for example, already problematic for Hill Climbing.
As previously mentioned, Hill Climbing in this form is a local search rather than a Global Optimization algorithm. By making a few slight modifications to the algorithm, however, it can become a valuable technique capable of performing Global Optimization:
1. A tabu-list which stores the elements recently evaluated can be added. By preventing
the algorithm from visiting them again, a better exploration of the problem space X can
be enforced. This technique is used in Tabu Search which is discussed in Chapter 40
on page 489.
4. Randomly restarting the search after so-and-so many steps is a crude but efficient
method to explore wide ranges of the problem space with Hill Climbing. You can find
it outlined in Section 26.5.
5. Using a reproduction scheme that does not necessarily generate candidate solutions directly neighboring p⋆.x, as done in Random Optimization, an optimization approach defined in Chapter 44 on page 505, may prove even more efficient.
Hill Climbing with random restarts is also called Stochastic Hill Climbing (SH) or Stochastic gradient descent4 [844, 2561]. In the following, we will outline how random restarts can be incorporated into Hill Climbing and serve as a measure against premature convergence. We will further combine this approach directly with the multi-objective Hill Climbing approach defined in Algorithm 26.2. The new algorithm incorporates two archives for optimal solutions: archive1, the overall optimal set, and archive2, the set of the best individuals in the current run. We additionally define the restart criterion shouldRestart, which is evaluated in every iteration and determines whether or not the algorithm should be restarted. shouldRestart could, for example, count the iterations performed or check whether any improvement was produced in the last ten iterations. After each single run, archive2 is incorporated into archive1, from which we extract and return the problem space elements at the end of the Hill Climbing process, as defined in Algorithm 26.3.
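A single-objective simplification of this restart scheme can be sketched as follows (our own sketch; the “patience” restart criterion and all names are our assumptions):

```java
import java.util.Random;
import java.util.function.Function;

/** Sketch: single-objective Hill Climbing with random restarts.
 *  (Algorithm 26.3 is multi-objective and archive-based; this simplification
 *  keeps only the best point of the current run and the best point overall.) */
public class RestartHillClimber {

    public static double[] run(Function<double[], Double> f, int n,
                               int totalSteps, int patience, Random rnd) {
        double[] overallBest = null;                  // plays the role of archive1
        double overallBestF = Double.POSITIVE_INFINITY;
        double[] cur = null;                          // plays the role of archive2
        double curF = Double.POSITIVE_INFINITY;
        int noImprovement = patience;                 // forces the initial start

        for (int t = 0; t < totalSteps; t++) {
            if (noImprovement >= patience) {          // shouldRestart criterion
                cur = new double[n];                  // random restart in [-10,10]^n
                for (int i = 0; i < n; i++) cur[i] = rnd.nextDouble() * 20 - 10;
                curF = f.apply(cur);
                noImprovement = 0;
            }
            double[] cand = cur.clone();              // mutate one coordinate
            cand[rnd.nextInt(n)] += rnd.nextGaussian();
            double candF = f.apply(cand);
            if (candF < curF) { cur = cand; curF = candF; noImprovement = 0; }
            else noImprovement++;
            if (curF < overallBestF) {                // merge run result overall
                overallBest = cur; overallBestF = curF;
            }
        }
        return overallBest;
    }

    public static void main(String[] args) {
        Function<double[], Double> sphere = x -> {
            double s = 0; for (double xi : x) s += xi * xi; return s;
        };
        double[] best = run(sphere, 3, 50_000, 200, new Random(7));
        System.out.println("f(best) = " + sphere.apply(best));
    }
}
```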
26.6.2 Books
1. Artificial Intelligence: A Modern Approach [2356]
2. Introduction to Stochastic Search and Optimization [2561]
3. Evolution and Optimum Seeking [2437]
26.6.4 Journals
1. Journal of Heuristics published by Springer Netherlands
11. If x is better than x⋆ , that is, x ≺ x⋆ , set x⋆ = x. Otherwise, decrease the iteration
counter τ .
12. If the termination criterion has not yet been met, go back to Point 3 if τ > 0 and to Point 2 if τ = 0.
The iteration counter τ here is used to allow the search to explore solutions more distant from the current optimum x⋆. The higher the initial value maxτ specified by the user, the more iterations without improvement are allowed before reverting x to x⋆. The name Raindrop Method comes from the fact that the constraint violations caused by the perturbation of the valid solution radiate away from the modified components like waves on a water surface radiate away from the point where a raindrop hits.
26.7.1.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
26.7.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Tasks T26
64. Use the implementation of Hill Climbing given in Listing 56.6 on page 735 (or an equivalent one of your own as in Task 65) in order to find approximations of the global minima of
f_1(\mathbf{x}) = 0.5 - \frac{1}{2n} \sum_{i=1}^{n} \cos\bigl(\ln(1 + |x_i|)\bigr)    (26.1)

f_2(\mathbf{x}) = 0.5 - 0.5 \cos\left(\ln\left(1 + \sum_{i=1}^{n} |x_i|\right)\right)    (26.2)

f_3(\mathbf{x}) = 0.01 \cdot \max_{i=1}^{n} x_i^2    (26.3)
(a) Run ten experiments for n = 2, n = 5, and n = 10 each. Note all parameter
settings of the algorithm you used (the number of algorithm steps, operators,
etc.), the results, the vectors discovered, and their objective values. [20 points]
(b) Compute the median (see Section 53.2.6 on page 665), the arithmetic mean (see
Section 53.2.2, especially Definition D53.32 on page 661) and the standard devi-
ation (see Section 53.2.3, especially Section 53.2.3.2 on page 663) of the achieved
objective values in the ten runs. [10 points]
(c) The three objective functions are designed so that their value is always in [0, 1]
in the interval [−10, 10]n . Try to order the optimization problems according to
the median of the achieved objective values. Which problems seem to be the
hardest? What is the influence of the dimension n on the achieved objective
values? [10 points]
(d) If you compare f1 and f2, you find striking similarities. Does one of the functions seem to be harder? If so, which one, and what, in your opinion, makes it harder? [5 points]
[45 points]
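For reference, the three objective functions of Task 64 can be written down directly in Java (a sketch of our own, not the book's listings; it assumes the formulas as stated in Equations 26.1 to 26.3, with class and method names of our choosing):

```java
/** Sketch of the three benchmark functions of Task 64 (names are ours). */
public class Task64Functions {

    /** Equation 26.1: 0.5 - (1/(2n)) * sum of cos(ln(1 + |x_i|)). */
    static double f1(double[] x) {
        double s = 0.0;
        for (double xi : x) s += Math.cos(Math.log(1.0 + Math.abs(xi)));
        return 0.5 - s / (2.0 * x.length);
    }

    /** Equation 26.2: 0.5 - 0.5 * cos(ln(1 + sum of |x_i|)). */
    static double f2(double[] x) {
        double s = 0.0;
        for (double xi : x) s += Math.abs(xi);
        return 0.5 - 0.5 * Math.cos(Math.log(1.0 + s));
    }

    /** Equation 26.3: 0.01 * max of x_i^2. */
    static double f3(double[] x) {
        double m = 0.0;
        for (double xi : x) m = Math.max(m, xi * xi);
        return 0.01 * m;
    }

    public static void main(String[] args) {
        double[] zero = new double[5]; // the all-zero vector is a global minimum
        System.out.println(f1(zero)); // 0.0
        System.out.println(f2(zero)); // 0.0
        System.out.println(f3(zero)); // 0.0
    }
}
```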
65. Implement the Hill Climbing procedure Algorithm 26.1 on page 230 in a programming
language different from Java. You can specialize your implementation for only solving
continuous numerical problems, i. e., you do not need to follow the general, interface-
based approach we adhere to in Listing 56.6 on page 735. Apply your implementation
to the Sphere function (see Section 50.3.1.1 on page 581 and Listing 57.1 on page 859)
for n = 10 dimensions in order to test its functionality.
[20 points]
66. Extend the Hill Climbing algorithm implementation given in Listing 56.6 on page 735
to Hill Climbing with restarts, i. e., to a Hill Climbing which restarts the search pro-
cedure every n steps (while preserving the results of the optimization process in an
additional variable). You may use the discussion in Section 26.5 on page 232 as refer-
ence. However, you do not need to provide the multi-objective optimization capabilities
of Algorithm 26.3 on page 233. Instead, only extend the single-objective Hill Climbing
method in Listing 56.6 on page 735 or provide a completely new implementation of
Hill Climbing with restarts in a programming language of your choice. Test your im-
plementation on one of the previously discussed benchmark functions for real-valued,
continuous optimization (see, e.g., Task 64).
[25 points]
67. Let us now solve a modified version of the bin packing problem from Example E1.1 on page 25. In Table 26.5, we give four datasets from Data Set 1 in [2422]. Each dataset stands for one bin packing problem and provides the bin size b, the number n of objects to be put into the bins, and the sizes ai of the objects (with i ∈ 1..n). The problem space is the set of all possible assignments of objects to bins, where the number k of bins is not provided. For each of the four bin packing problems, find a candidate solution x which contains an assignment of objects to as few as possible bins of the given size b. In other words: For each problem, we know the size b of the bin, the number n of objects, and the object sizes a. We want to find out how many bins we need at least to store all the objects (k ∈ N1, the objective, subject to minimization) and how we can store the objects in these bins (x ∈ X).
Name [2422] b n ai k⋆
N1C1W1 A 100 50 [50, 100, 99, 99, 96, 96, 92, 92, 91, 88, 87, 86, 85, 76, 74, 25
72, 69, 67, 67, 62, 61, 56, 52, 51, 49, 46, 44, 42, 40, 40,
33, 33, 30, 30, 29, 28, 28, 27, 25, 24, 23, 22, 21, 20, 17,
14, 13, 11, 10, 7, 7, 3]
N1C2W2 D 120 50 [50, 120, 99, 98, 98, 97, 96, 90, 88, 86, 82, 82, 80, 79, 76, 24
76, 76, 74, 69, 67, 66, 64, 62, 59, 55, 52, 51, 51, 50, 49,
44, 43, 41, 41, 41, 41, 41, 37, 35, 33, 32, 32, 31, 31, 31,
30, 29, 23, 23, 22, 20, 20]
N2C1W1 A 100 100 [100, 100, 99, 97, 95, 95, 94, 92, 91, 89, 86, 86, 85, 84, 80, 48
80, 80, 80, 80, 79, 76, 76, 75, 74, 73, 71, 71, 69, 65, 64,
64, 64, 63, 63, 62, 60, 59, 58, 57, 54, 53, 52, 51, 50, 48,
48, 48, 46, 44, 43, 43, 43, 43, 42, 41, 40, 40, 39, 38, 38,
38, 38, 37, 37, 37, 37, 36, 35, 34, 33, 32, 30, 29, 28, 26,
26, 26, 24, 23, 22, 21, 21, 19, 18, 17, 16, 16, 15, 14, 13,
12, 12, 11, 9, 9, 8, 8, 7, 6, 6, 5, 1]
N2C3W2 C 150 100 [100, 150, 100, 99, 99, 98, 97, 97, 97, 96, 96, 95, 95, 95, 41
94, 93, 93, 93, 92, 91, 89, 88, 87, 86, 84, 84, 83, 83, 82,
81, 81, 81, 78, 78, 75, 74, 73, 72, 72, 71, 70, 68, 67, 66,
65, 64, 63, 63, 62, 60, 60, 59, 59, 58, 57, 56, 56, 55, 54,
51, 49, 49, 48, 47, 47, 46, 45, 45, 45, 45, 44, 44, 44, 44,
43, 41, 41, 40, 39, 39, 39, 37, 37, 37, 35, 35, 34, 32, 31,
31, 30, 28, 26, 25, 24, 24, 23, 23, 22, 21, 20, 20]
Table 26.5: Bin Packing test data from [2422], Data Set 1.
Solve this optimization problem with Hill Climbing. It is not the goal to solve it to
optimality, but to get solutions with Hill Climbing which are as good as possible. For
the sake of completeness and in order to allow comparisons, we provide the lowest
known number k ⋆ of bins for the problems given by Scholl and Klein [2422] in the last
column of Table 26.5.
In order to solve this optimization problem, you may proceed as discussed in Chap-
ter 7 on page 109, except that you have to use Hill Climbing. Below, we give some
suggestions – but you do not necessarily need to follow them:
(a) A possible problem space X for n objects would be the set of all permutations of the first n natural numbers (X = Π(1..n)). A candidate solution x ∈ X, i. e., such a permutation, could describe the order in which the objects are put into bins. If a bin is filled to its capacity (or the current object does not fit into the bin anymore), a new bin is opened. This process is repeated until all objects have been packed into bins.
(b) For the objective function, there exist many different choices.
i. As objective function f1 , the number k(x) of bins required for storing the
objects according to a permutation x ∈ X could be used. This value can
easily be computed as discussed in our outline of the problem space X above.
You can find an example implementation of this function in Listing 57.10 on
page 892.
ii. Alternatively, we could also sum up the squares of the free space left over in
each bin as f2 . It is easy to see that if we put each object into a single bin,
the free capacity left over is maximal. If there was a way to perfectly fill each
bin with objects, the free capacity would become 0. Such a configuration
would also require the least number of bins, since using fewer bins would
mean to not store all objects. By summing up the squares of the free spaces,
we punish bins with lots of free space more severely. In other words, we
reward solutions where the bins are filled equally. Naturally one would “feel”
that such solutions are “good”. However, this approach may prevent us from finding the optimum if it is a solution where the objects are distributed unevenly. We implement this objective function as Listing 57.11 on page 893.
iii. Also, a combination of the above would be possible, such as f3(x) = f1(x) − 1/(1 + f2(x)), for example. An example implementation of this function is given as Listing 57.12 on page 894.
iv. Then again, we could consider the total number of bins required but additionally try to maximize the space lastWaste(x) left over in the last bin, i. e., set f4(x) = f1(x) + 1/(1 + lastWaste(x)). The idea here is that the dominating factor of the objective should still be the number of bins required. But if two solutions require the same number of bins, we prefer the one where more space is free in the last bin. If this waste in the last bin is increased more and more, the volume of objects in that bin is reduced. It may eventually become 0, which would equal leaving the bin empty. The empty bin would not be used and hence, we would save one bin. An implementation of such a function for the suggested problem space is given in Listing 57.13 on page 896.
v. Finally, instead of only considering the remaining empty space in the last
bin, we could consider the maximum empty space over all bins. The idea
is pretty similar: If we have a bin which is almost empty, maybe we can
make it disappear with just a few moves (object swaps). Hence, we could
prefer solutions where there is one bin with much free space. In order to find
such solutions, we go over all the bins required by the solution, and check how
much space is free in them. We then take the bin with the most free space mf and add this space to the objective function, similar to what we did in the previous function: f5(x) = f1(x) + 1/(1 + mf(x)).
An implementation of this function for the suggested problem space is given
at Listing 57.14 on page 897.
(c) If the suggested problem space is used, it could also be used as the search space, i. e., G = X. Then, a simple unary search operation (mutation) would swap elements of the permutation, such as the operations provided in Listing 56.38 on page 806 and Listing 56.39. A creation operation could draw a permutation according to the uniform distribution, as shown in Listing 56.37.
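The decoding scheme from suggestion (a) together with objective function f1 from suggestion (b)i can be sketched as follows (our own sketch; the object sizes in the example are hypothetical and not taken from Table 26.5):

```java
/** Sketch: decode a permutation into bins in order (Task 67, suggestions (a)/(b)i). */
public class BinPackingObjective {

    /** Objective f1: number of bins needed when the objects are packed
     *  one after another in the order given by perm, opening a new bin
     *  whenever the current object no longer fits. */
    static int countBins(int[] perm, int[] sizes, int binSize) {
        int bins = 0;
        int free = 0; // free space in the currently open bin
        for (int idx : perm) {
            int a = sizes[idx];
            if (a > free) {   // object does not fit: open a new bin
                bins++;
                free = binSize;
            }
            free -= a;
        }
        return bins;
    }

    public static void main(String[] args) {
        int[] sizes = {6, 5, 4, 3};  // hypothetical object sizes
        int[] perm  = {0, 1, 2, 3};  // identity permutation
        System.out.println(countBins(perm, sizes, 10)); // 3: bins {6}, {5,4}, {3}
    }
}
```

A mutation that swaps two permutation elements can then move an object into an earlier bin, possibly reducing this count.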
Use the Hill Climbing Algorithm 26.1 given in Listing 56.6 on page 735 for solving the
task and compare its results with the results produced by equivalent random walks
and random sampling (see Chapter 8 and Section 56.1.2 on page 729). For your
experiments, you may use the program given in Listing 57.15 on page 899.
(a) For each algorithm or objective function you test, perform at least 30 independent
runs to get reliable results (Remember: our Hill Climbing algorithm is randomized
and can produce different results in each run.)
(b) For each of the four problems, write down the best assignment of objects to bins that you have found (along with the number of bins required). You may add your own ideas as well.
(c) Discuss which of the objective functions and which algorithm or algorithm configuration performed best and give reasons.
(d) Also provide all your source code.
[40 points]
Chapter 27
Simulated Annealing
27.1 Introduction
In 1953, Metropolis et al. [1874] developed a Monte Carlo method for “calculating the
properties of any substance which may be considered as composed of interacting individual
molecules”. With this new “Metropolis” procedure stemming from statistical mechanics,
the manner in which metal crystals reconfigure and reach equilibriums in the process of
annealing can be simulated, for example. This inspired Kirkpatrick et al. [1541] to develop
the Simulated Annealing1 (SA) algorithm for Global Optimization in the early 1980s and
to apply it to various combinatorial optimization problems. Independently around the same
time, Černý [516] employed a similar approach to the Traveling Salesman Problem. Even earlier, Jacobs et al. [1426, 1427] used a similar method to optimize program code. Simulated
Annealing is a general-purpose optimization algorithm that can be applied to arbitrary
search and problem spaces. Like simple Hill Climbing, Simulated Annealing only needs a
single initial individual as starting point and a unary search operation and thus belongs to
the family of local search algorithms.
If you are a chemist and have suggestions for improving/correcting the discussion, please drop me an email.
244 27 SIMULATED ANNEALING
Figure 27.1: A sketch of annealing in metallurgy as role model for Simulated Annealing: a metal structure with defects, increased movement of atoms/ions, and finally a low-energy structure with fewer defects, as the temperature falls from Ts = getTemperature(1) towards 0 K = getTemperature(∞).
approaches its equilibrium state. During this process, the dislocations in the metal crystal can disappear. When annealing metal, the initial temperature must not be too low and the cooling must be done sufficiently slowly, so as to avoid stopping the ions too early and to prevent the system from getting stuck in a meta-stable non-crystalline state representing a local minimum of energy.
In the Metropolis simulation of the annealing process, each set of positions pos of all atoms of a system is weighted by its Boltzmann probability factor e^{-E(pos)/(k_B T)}, where E(pos) is the energy of the configuration pos, T is the temperature measured in Kelvin, and kB is the Boltzmann constant4:

k_B \approx 1.380\,650\,524 \cdot 10^{-23}\ \mathrm{J/K}    (27.1)
The Metropolis procedure is a model of this physical process which can be used to simulate a collection of atoms in thermodynamic equilibrium at a given temperature. In each iteration, a new nearby geometry posi+1 is generated as a random displacement from the current geometry posi of the atoms. The energy of the resulting new geometry is computed and ∆E, the energetic difference between the current and the new geometry, is determined. The probability that this new geometry is accepted, P(∆E), is defined in Equation 27.3. Systems in chemistry and physics always “strive” to attain low-energy states, i. e., the lower the energy, the more stable the system. Thus, if the new nearby geometry has a lower energy level, the change of atom positions is accepted in the simulation. Otherwise, a uniformly distributed random number r = randomUni[0, 1) is drawn and the step will only be realized if r is less than or equal to the Boltzmann probability factor, i. e., r ≤ P(∆E). At high temperatures T, this factor is very close to 1, leading to the acceptance of many uphill steps.
4 http://en.wikipedia.org/wiki/Boltzmann%27s_constant [accessed 2007-07-03]
As the temperature falls, the proportion of steps accepted which would increase the energy
level decreases. Now the system will not escape local regions anymore and (hopefully) comes
to a rest in the global energy minimum at temperature T = 0K.
27.2 Algorithm
The abstraction of the Metropolis simulation method in order to allow arbitrary problem
spaces is straightforward – the energy computation E(posi ) is replaced by an objective
function f or maybe even by the result v of a fitness assignment process. Algorithm 27.1
illustrates the basic course of Simulated Annealing which you can find implemented in List-
ing 56.8 on page 740. Without loss of generality, we reuse the definitions from Evolutionary
Algorithms for the search operations and set Op = {create, mutate}.
Let us discuss in a very simplified way how this algorithm works. During the course of
the optimization process, the temperature T decreases (see Section 27.3). If the temperature
is high in the beginning, this means that, different from a Hill Climbing process, Simulated
Annealing accepts also many worse individuals. Better candidate solutions are always ac-
cepted. In summary, the chance that the search will transcend to a new candidate solution,
regardless whether it is better or worse, is high. In its behavior, Simulated Annealing is
then a bit similar to a random walk. As the temperature decreases, the chance that worse
individuals are accepted decreases. If T approaches 0, this probability becomes 0 as well
and Simulated Annealing becomes a pure Hill Climbing algorithm. The rationale of having an algorithm that initially behaves like a random walk and then becomes a Hill Climber is to make use of the benefits of both extreme cases.
A random walk is a maximally-explorative and minimally-exploitive algorithm. It can
never get stuck at a local optimum and will explore all areas in the search space with
the same probability. Yet, it cannot exploit a good solution, i. e., is unable to make small
modifications to an individual in a targeted way in order to check if there are better solutions
nearby.
A Hill Climber, on the other hand, is minimally-explorative and maximally-exploitive. It
always tries to modify and improve on the current solution and never investigates far-away
areas of the search space.
Simulated Annealing combines both properties: it first performs a lot of exploration in order to discover interesting areas of the search space. The hope is that, during this time, it will discover the basin of attraction of the global optimum. The more the optimization process becomes like a Hill Climber, the less likely it is to move away from a good solution to a worse one; it will instead concentrate on exploiting the solution.
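This transition from random-walk-like to Hill-Climber-like behavior can be sketched as follows (a simplified sketch of our own, not the book's Listing 56.8; the exponential schedule standing in for getTemperature and all names are our assumptions):

```java
import java.util.Random;
import java.util.function.Function;

/** Minimal Simulated Annealing sketch for minimization over R^n. */
public class SimulatedAnnealingSketch {

    public static double[] anneal(Function<double[], Double> f, double[] start,
                                  int maxSteps, double startTemp, Random rnd) {
        double[] cur = start.clone(), best = start.clone();
        double curF = f.apply(cur), bestF = curF;
        for (int t = 1; t <= maxSteps; t++) {
            // simple exponential schedule standing in for getTemperature(t)
            double T = startTemp * Math.pow(0.999, t);
            double[] cand = cur.clone();               // mutate one coordinate
            cand[rnd.nextInt(cand.length)] += rnd.nextGaussian();
            double candF = f.apply(cand);
            double dE = candF - curF;
            // better solutions are always accepted; worse ones with P = e^{-dE/T}
            if (dE < 0 || rnd.nextDouble() < Math.exp(-dE / T)) {
                cur = cand;
                curF = candF;
            }
            if (curF < bestF) { best = cur.clone(); bestF = curF; }
        }
        return best; // best candidate solution encountered during the run
    }

    public static void main(String[] args) {
        Function<double[], Double> sphere = x -> {
            double s = 0; for (double xi : x) s += xi * xi; return s;
        };
        double[] best = anneal(sphere, new double[]{5, 5, 5}, 50_000, 1.0,
                               new Random(3));
        System.out.println("f(best) = " + sphere.apply(best));
    }
}
```

Early on, T is large and most moves are accepted (random-walk-like behavior); as T decays, uphill moves become ever rarer until the loop behaves like a Hill Climber.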
Notice the difference between Line 15 in Algorithm 27.1 and Equation 27.3 on page 244:
Boltzmann’s constant has been removed. The reason is that with Simulated Annealing, we
do not want to simulate a physical process but instead, aim to solve an optimization problem.
The nominal value of Boltzmann’s constant, given in Equation 27.1 (something around
1.4 · 10−23 ), has quite a heavy impact on the Boltzmann probability. In a combinatorial
problem, where the differences in the objective values between two candidate solutions are
often discrete, this becomes obvious: In order to achieve an acceptance probability of 0.001
for a solution which is 1 worse than the best known one, one would need a temperature T
of:
P = 0.001 = e^{-\frac{1}{k_B T}}    (27.4)

\ln 0.001 = -\frac{1}{k_B T}    (27.5)

-\frac{1}{\ln 0.001} = k_B T    (27.6)

T = -\frac{1}{k_B \ln 0.001}    (27.7)

T \approx 1.05 \cdot 10^{22}\ \mathrm{K}    (27.8)
Choosing such a setting, i. e., a nominal value on the scale of 1 · 10^22 or more, for such a problem is neither obvious nor easy to justify. Leaving out the constant kB leads to:
P = 0.001 = e^{-\frac{1}{T}}    (27.9)

\ln 0.001 = -\frac{1}{T}    (27.10)

T = -\frac{1}{\ln 0.001}    (27.11)

T \approx 0.145    (27.12)
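Both calculations can be checked mechanically (a small sketch of our own):

```java
/** Sketch: reproduce the temperature calculations of Equations 27.4 to 27.12. */
public class BoltzmannTemperature {
    public static void main(String[] args) {
        double kB = 1.380650524e-23;                   // J/K, Equation 27.1
        double withKB = -1.0 / (kB * Math.log(0.001)); // Equation 27.7
        double withoutKB = -1.0 / Math.log(0.001);     // Equation 27.11
        System.out.printf("%.3e%n", withKB);    // ~ 1.049e22 (Kelvin)
        System.out.printf("%.3f%n", withoutKB); // 0.145
    }
}
```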
27.2.2 Convergence
It has been shown that Simulated Annealing algorithms with appropriate cooling strategies
will asymptotically converge to the global optimum [1403, 2043]. The probability that
the individual pcur as used in Algorithm 27.1 represents a global optimum approaches 1
for t → ∞. This feature is closely related to the ergodicity5 property of an optimization algorithm.
Nolte and Schrader [2043] and van Laarhoven and Aarts [2776] provide lists of the most
important works on the issue of guaranteed convergence to the global optimum. Nolte and
Schrader [2043] further list the research providing deterministic, non-infinite boundaries
for the asymptotic convergence by Anily and Federgruen [111], Gidas [1054], Nolte and
Schrader [2042], and Mitra et al. [1917]. In the same paper, they introduce a significantly
lower bound. They conclude that, in general, pcur .x in the Simulated Annealing process is
a globally optimal candidate solution with a high probability after a number of iterations
which exceeds the cardinality of the problem space. This is a boundary which is, well, slow
from the viewpoint of a practitioner [1403]: In other words, it would be faster to enumerate
all possible candidate solutions in order to find the global optimum with certainty than
applying Simulated Annealing. Indeed, this makes sense: If this was not the case, Simulated
Annealing could be used to solve N P-complete problems in sub-exponential time. . .
This does not mean that Simulated Annealing is generally slow. It only needs that much
time if we insist on the convergence to the global optimum. Speeding up the cooling process will result in a faster search but, on the other hand, voids the guaranteed convergence. Such sped-up algorithms are called Simulated Quenching (SQ) [1058, 1402, 2405].
Quenching6, too, is a process in metallurgy. Here, a work piece is first heated and then cooled rapidly, often by dipping it into a water bath. This way, steel objects can be hardened, for example.
Definition D27.2 (Temperature Schedule). The temperature schedule defines how the
temperature parameter T in the Simulated Annealing process (as defined in Algorithm 27.1)
is set. The operator getTemperature : N1 → R+ maps the current iteration index t to a (positive) real temperature value T.
If the number of iterations t approaches infinity, the temperature should approach 0. If the
temperature sinks too fast, the ergodicity of the search may be lost and the global optimum
may not be discovered. In order to avoid that the resulting Simulated Quenching algorithm
gets stuck at a local optimum, random restarting can be applied, which already has been
discussed in the context of Hill Climbing in Section 26.5 on page 232.
There exists a wide range of methods to determine this temperature schedule. Miki et al. [1896], for example, used Genetic Algorithms for this purpose. The most established methods are listed below and implemented in Paragraph 56.1.3.2.1; some examples of them are sketched in Figure 27.2.

Figure 27.2: Example temperature schedules over t ∈ [0, 100], all starting from Ts: logarithmic scaling, linear scaling (i.e., polynomial with a = 1), polynomial scaling with a = 2, and exponential scaling with ε = 0.025 and with ε = 0.05.
Scale the temperature logarithmically, where Ts is set to a value larger than the greatest difference between the objective value of a local minimum and its best neighboring candidate solution. This value may, of course, be estimated, since it usually is not known. [1167, 2043] An implementation of this method is given in Listing 56.9 on page 743.

getTemperature_{\ln}(t) = \begin{cases} T_s & \text{if } t < e \\ T_s / \ln t & \text{otherwise} \end{cases}    (27.16)
Reduce T to (1 − ε)T in each step (or every m steps, see Equation 27.19), where the exact value of 0 < ε < 1 is determined by experiment [2808]. Such an exponential cooling strategy will turn Simulated Annealing into Simulated Quenching [1403]. This strategy is implemented in Listing 56.10 on page 744.
Instead of basing the temperature solely on the iteration index t, other information may be
used for self-adaptation as well. After every m moves, the temperature T can be set to β
times ∆Ec = f(pcur .x) − f(x), where β is an experimentally determined constant, f(pcur .x)
is the objective value of the currently examined candidate solution pcur .x, and f(x) is the
objective value of the best phenotype x found so far. Since ∆Ec may be 0, we limit the
temperature change to a maximum of T ∗ γ with 0 < γ < 1. [2808]
If the temperature reduction should not take place continuously but only every m ∈ N1
iterations, t′ computed according to Equation 27.19 can be used in the previous formulas,
i. e., getTemperature(t′ ) instead of getTemperature(t).
t′ = m⌊t/m⌋ (27.19)
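The schedules discussed above can be sketched as follows (our own method names; getTemperature corresponds to Definition D27.2):

```java
/** Sketch of the temperature schedules discussed above (names are ours). */
public class TemperatureSchedules {

    /** Logarithmic schedule, Equation 27.16. */
    static double logarithmic(double ts, int t) {
        return t < Math.E ? ts : ts / Math.log(t);
    }

    /** Exponential schedule: T is reduced to (1 - eps) * T in each step. */
    static double exponential(double ts, double eps, int t) {
        return ts * Math.pow(1.0 - eps, t);
    }

    /** Stepwise iteration index t' of Equation 27.19: only change T every m steps. */
    static int stepwise(int t, int m) {
        return m * (t / m); // integer division implements m * floor(t/m)
    }

    public static void main(String[] args) {
        System.out.println(logarithmic(1.0, 2)); // 1.0 (t < e)
        System.out.println(stepwise(17, 5));     // 15
    }
}
```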
27.4 General Information on Simulated Annealing
27.4.2 Books
1. Genetic Algorithms and Simulated Annealing [694]
2. Simulated Annealing: Theory and Applications [2776]
3. Electromagnetic Optimization by Genetic Algorithms [2250]
4. Facts, Conjectures, and Improvements for Simulated Annealing [2369]
5. Simulated Annealing for VLSI Design [2968]
6. Introduction to Stochastic Search and Optimization [2561]
7. Simulated Annealing [2966]
27.4.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Tasks T27
68. Perform the same experiments as in Task 64 on page 238 with Simulated Annealing.
Therefore, use the Java Simulated Annealing implementation given in Listing 56.8 on
page 740 or develop an equivalent own implementation.
[45 points]
69. Compare your results from Task 64 on page 238 and Task 68. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for both tasks? Try to give reasons
for your results.
[10 points]
70. Perform the same experiments as in Task 67 on page 238 with Simulated Annealing and
Simulated Quenching. Therefore, use the Java Simulated Annealing implementation
given in Listing 56.8 on page 740 or develop an equivalent own implementation.
[25 points]
71. Compare your results from Task 67 on page 238 and Task 70. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for both tasks? Try to give reasons
for your results. Put your explanations into the context of the discussions provided
in Example E17.4 on page 181.
[10 points]
72. Try solving the bin packing problem again, this time by using the representation given
by Khuri, Schütz, and Heitkötter in [1534]. Use both Simulated Annealing and Hill Climbing to evaluate the performance of the new representation, similar to what we
do in Section 57.1.2.1 on page 887. Compare the results with the results listed in that
section and obtained by you in Task 70. Which method is better? Provide reasons for
your observation.
[50 points]
73. Compute the probabilities that a new solution with ∆E1 = 0, ∆E2 = 0.001, ∆E3 =
0.01, ∆E4 = 0.1, ∆E5 = 1.0, ∆E6 = 10.0, and ∆E7 = 100 is accepted by the Simulated
Annealing algorithm at temperatures T1 = 0, T2 = 1, T3 = 10, and T4 = 100. Use
the same formulas used in Algorithm 27.1 (consider the discussion in Section 27.2.1).
[15 points]
74. Compute the temperatures needed so that solutions with ∆E1 = 0, ∆E2 = 0.001,
∆E3 = 0.01, ∆E4 = 0.1, ∆E5 = 1.0, ∆E6 = 10.0, and ∆E7 = 100 are accepted by the
Simulated Annealing algorithm with probabilities P1 = 0, P2 = 0.1, P3 = 0.25, and
P4 = 0.5. Base your calculations on the formulas used in Algorithm 27.1 and consider
the discussion in Section 27.2.1.
[15 points]
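As a starting point for these two tasks, the acceptance rule can be evaluated mechanically (our own sketch; treating T = 0 as a limiting case is our assumption):

```java
/** Sketch: acceptance probability P(dE) = e^{-dE/T} as used in Algorithm 27.1.
 *  The handling of T = 0 as the limiting case is our assumption. */
public class AcceptanceTable {

    static double p(double dE, double T) {
        if (dE <= 0) return 1.0;  // non-worsening moves are always accepted
        if (T <= 0) return 0.0;   // limit T -> 0: no worsening move is accepted
        return Math.exp(-dE / T);
    }

    public static void main(String[] args) {
        double[] dEs = {0, 0.001, 0.01, 0.1, 1.0, 10.0, 100};
        double[] Ts = {0, 1, 10, 100};
        for (double T : Ts)
            for (double dE : dEs)
                System.out.printf("T = %5.1f  dE = %7.3f  P = %.4f%n",
                                  T, dE, p(dE, T));
    }
}
```

For Task 74, the computation is inverted: solving P = e^{-∆E/T} for T gives T = -∆E / ln P for 0 < P < 1.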
Chapter 28
Evolutionary Algorithms
28.1 Introduction
Evolutionary Algorithms have not been invented in a canonical form. Instead, they are a
large family of different optimization methods which have, to a certain degree, been de-
veloped independently by different researchers. Information on their historic development
is thus largely distributed amongst the chapters of this book which deal with the specific
members of this family of algorithms.
Like other metaheuristics, all EAs share the “black box” optimization character and make only few assumptions about the underlying objective functions. Their consistently high performance in many different problem domains (see, e.g., Table 28.4 on page 314) and the appealing idea of copying natural evolution for optimization have made them one of the most popular metaheuristics.
As in most metaheuristic optimization algorithms, we can distinguish between single-
objective and multi-objective variants. The latter means that multiple, possibly conflict-
ing criteria are optimized as discussed in Section 3.3. The general area of Evolutionary
Computation that deals with multi-objective optimization is called EMOO, evolutionary
multi-objective optimization and the multi-objective EAs are called, well, multi-objective
Evolutionary Algorithms.
In this section, we will first outline the basic cycle in which Evolutionary Algorithms proceed
in Section 28.1.1. EAs are inspired by natural evolution, so we will compare the natural
paragons of the single steps of this cycle in detail in Section 28.1.2. Later in this chapter,
we will outline possible algorithms which can be used for the processes in EAs and finally
give some information on conferences, journals, and books on EAs as well as their applications.
1 http://en.wikipedia.org/wiki/Artificial_evolution [accessed 2007-07-03]
254 28 EVOLUTIONARY ALGORITHMS
28.1.1 The Basic Cycle of EAs
All evolutionary algorithms proceed in principle according to the scheme illustrated in Figure 28.1, which is a specialization of the basic cycle of metaheuristics given in Figure 1.8 on page 35. We define the basic EA for single-objective optimization in Algorithm 28.1 and the multi-objective version in Algorithm 28.2.

[Figure 28.1: The basic cycle of Evolutionary Algorithms: GPM (apply the genotype-phenotype mapping and obtain the phenotypes), Fitness Assignment (use the objective values to determine fitness values), Selection (select the fittest individuals for reproduction), and Reproduction (create new individuals from the mating pool by crossover and mutation).]
1. In the first generation (t = 1), a population pop of ps individuals p is created. The
function createPop(ps) produces this population. It depends on its implementation
whether the individuals p have random genomes p.g (the usual case) or whether the
population is seeded (see Paragraph 28.1.2.2.2).
2. The genotypes p.g are translated to phenotypes p.x (see Section 28.1.2.3). This
genotype-phenotype mapping may either directly be performed to all individuals
(performGPM) in the population or can be a part of the process of evaluating the
objective functions.
3. The values of the objective functions f ∈ f are evaluated for each candidate solu-
tion p.x in pop (and usually stored in the individual records) with the operation
“computeObjectives”. This evaluation may incorporate complicated simulations and
calculations. In single-objective Evolutionary Algorithms, f = {f} and, hence, |f | = 1
holds.
4. With the objective functions, the utilities of the different features of the candidate
solutions p.x have been determined and a scalar fitness value v(p) can now be assigned
to each of them (see Section 28.1.2.5). In single-objective Evolutionary Algorithms,
v ≡ f holds.
5. A subsequent selection process selection(pop, v, mps) (or selection(pop, f, mps), if v ≡ f,
as is the case in single-objective Evolutionary Algorithms) filters out the candidate
solutions with bad fitness and allows those with good fitness to enter the mating pool
with a higher probability (see Section 28.1.2.6).
6. In the reproduction phase, offspring are derived from the genotypes p.g of the selected
individuals p ∈ mate by applying the search operations searchOp ∈ Op (which are
called reproduction operations in the context of EAs, see Section 28.1.2.7). There usu-
ally are two different reproduction operations: mutation, which modifies one genotype,
and crossover, which combines two genotypes into a new one. Whether the whole population is replaced by the offspring or whether they are integrated into the population, as well as which individuals are recombined with each other, depends on the implementation
of the operation “reproducePop”. We highlight the transition between the generations
in Algorithm 28.1 and 28.2 by incrementing a generation counter t and annotating the
populations with it, i. e., pop(t).
7. If the terminationCriterion is met, the evolution stops here. Otherwise, the algo-
rithm continues at Point 2.
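As an illustration (not part of the formal definition), the seven steps above can be condensed into a Python sketch. The bit-string search space with identity GPM, truncation selection, one-point crossover, and single-bit mutation used here are our own simplifying assumptions:

```python
import random

def basic_ea(f, n=20, ps=16, mps=8, generations=50):
    """Minimal single-objective EA over bit strings (sketch of Algorithm 28.1).
    Here G = X (identity GPM) and the fitness v equals the objective f."""
    # step 1: createPop(ps) with random genotypes
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(ps)]
    best = min(pop, key=f)                        # best-so-far archive
    for t in range(1, generations + 1):
        # steps 2-5: GPM is the identity, objectives double as fitness;
        # truncation selection: the mps fittest enter the mating pool
        mate = sorted(pop, key=f)[:mps]
        best = min([best, mate[0]], key=f)
        # step 6: reproduction by one-point crossover plus point mutation
        pop = []
        while len(pop) < ps:
            a, b = random.sample(mate, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]             # crossover
            i = random.randrange(n)
            child[i] ^= 1                         # mutation
            pop.append(child)
        # step 7: terminationCriterion would be checked here
    return best

# minimize the number of zeros, i.e., evolve towards the all-ones string
result = basic_ea(lambda x: x.count(0))
```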
Algorithm 28.1 is strongly simplified. The termination criterion terminationCriterion,
for example, is usually invoked after each and every evaluation of the objective functions.
Often, we have a limit in terms of objective function evaluations (see, for instance Sec-
tion 6.3.3). Then, it is not sufficient to check terminationCriterion after each generation,
since a generation involves computing the objective values of multiple individuals. A more
precise version of the basic Evolutionary Algorithm scheme (with steady-state population
treatment) is specified in Listing 56.11 on page 746.
Also, the return value(s) of an Evolutionary Algorithm often is not the best individual(s)
from the last generation, but the best candidate solution discovered so far. This, however,
demands maintaining an archive, which can become complicated especially in the case of
multi-objective or multi-modal optimization.
An example implementation of Algorithm 28.2 is given in Listing 56.12 on page 750.
In 1859, Darwin [684] published his book “On the Origin of Species”2 in which he identified
the principles of natural selection and survival of the fittest as driving forces behind the
biological evolution. His theory can be condensed into ten observations and deductions
[684, 1845, 2938]:
1. The individuals of a species possess great fertility and produce more offspring than
can grow into adulthood.
2. Under the absence of external influences (like natural disasters, human beings, etc.),
the population size of a species roughly remains constant.
3. Again, if no external influences occur, the food resources are limited but stable over
time.
4. Since the individuals compete for these limited resources, a struggle for survival ensues.
5. Especially in sexually reproducing species, no two individuals are equal.
6. Some of the variations between the individuals will affect their fitness and hence, their
ability to survive.
7. Some of these variations are inheritable.
8. Individuals which are less fit are also less likely to reproduce. The fittest individuals
will survive and produce offspring more probably.
9. Individuals that survive and reproduce will likely pass on their traits to their offspring.
10. A species will slowly change and adapt more and more to a given environment during
this process which may finally even result in new species.
These historic ideas themselves remain valid to a large degree even today. However, some
of Darwin’s specific conclusions can be challenged based on data and methods which simply
were not available at his time. Point 10, the assumption that change happens slowly over
time, for instance, became controversial as fossils indicate that evolution seems to “at rest”
for longer times which exchange with short periods of sudden, fast developments [473, 2778]
(see Section 41.1.2 on page 493).
28.1.2.2 Initialization
28.1.2.2.1 Biology
Until now, it is not clear how life started on earth. It is assumed that before the first organ-
isms emerged, an abiogenesis 3 , a chemical evolution took place in which the chemical com-
ponents of life formed. This process was likely divided into three steps: (1) In the first step,
simple organic molecules such as alcohols, acids, purines4 (such as adenine and guanine),
and pyrimidines5 (such as thymine, cytosine, and uracil) were synthesized from inorganic
2 http://en.wikipedia.org/wiki/The_Origin_of_Species [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Abiogenesis [accessed 2009-08-05]
4 http://en.wikipedia.org/wiki/Purine [accessed 2009-08-05]
5 http://en.wikipedia.org/wiki/Pyrimidine [accessed 2009-08-05]
substances. (2) Then, simple organic molecules were combined to the basic components of
complex organic molecules such as monosaccharides, amino acids6 , pyrroles, fatty acids, and
nucleotides7 . (3) Finally, complex organic molecules emerged. Many different theories exist
on how this could have taken place [176, 1300, 1698, 1713, 1837, 2084, 2091, 2432]. An impor-
tant milestone in the research in this area is likely the Miller-Urey experiment 8 [1904, 1905]
where it was shown that amino acids can form from inorganic substances under conditions
such as those expected to have been present on the early earth. The first single-celled life
most probably occurred in the Eoarchean9, the earth age starting 3.8 billion years ago,
marking the onset of evolution.
Usually, when an Evolutionary Algorithm starts up, there exists no information about what
is good or what is bad. Basically, only random genes are coupled together as individuals
in the initial population pop(t = 0). This form of initialization can be considered to be
similar to the early steps of chemical and biological evolution, where it was unclear which
substances or primitive life form would be successful and nature experimented with many
different reactions and concepts.
Besides random initialization, Evolutionary Algorithms can also be seeded [1126, 1139,
1471]. If seeding is applied, a set of individuals which already are adapted to the optimization
criteria is inserted into the initial population. Sometimes, the entire first population consists
of such individuals with good characteristics only. The candidate solutions used for this may
either have been obtained manually or by a previous optimization step [1737, 2019].
The goal of seeding is to achieve faster convergence and better results in the evolutionary
process. However, it also raises the danger of premature convergence (see Chapter 13
on page 151), since the already-adapted candidate solutions will likely be much more com-
petitive than the random ones and thus, may wipe them out of the population. Care must
thus be taken in order to achieve the wanted performance improvement.
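A possible realization of createPop(ps) with optional seeding could look as follows; the bit-string genome and the parameter names are illustrative assumptions:

```python
import random

def create_pop(ps, n, seeds=()):
    """Create an initial population of ps bit-string genotypes of length n.
    Individuals passed in `seeds` (already adapted candidate solutions) are
    copied into the population first; the remainder is filled randomly."""
    pop = [list(s) for s in seeds[:ps]]
    while len(pop) < ps:
        pop.append([random.randint(0, 1) for _ in range(n)])
    return pop

# seed with one hand-crafted solution, fill the rest randomly
pop = create_pop(ps=10, n=8, seeds=[[1] * 8])
```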
28.1.2.3.1 Biology
TODO
Example E28.1 (Evolution of Fish).
In the following, let us discuss the evolution on the example of fish. We can consider a fish as
an instance of its heredity information, as one possible realization of the blueprint encoded
in its DNA.
28.1.2.4.1 Biology
There are different criteria which determine whether an organism can survive in its environ-
ment. Often, individuals are good in some characteristics but rarely reach perfection in all
possible aspects.
To sum it up, we could consider the life of the fish as the evaluation process of its genotype
in an environment where good qualities in one aspect can turn out as drawbacks from other
perspectives. In multi-objective Evolutionary Algorithms, this is exactly the same. For each
problem that we want to solve, we can specify one or multiple so-called objective functions
f ∈ f . An objective function f represents one feature that we are interested in.
28.1.2.5 Fitness
28.1.2.5.1 Biology
As stated before, different characteristics determine the fitness of an individual. The biolog-
ical fitness is a scalar value that describes the number of offspring of an individual [2542].
Example E28.4 (Evolution of Fish (Example E28.1) Cont.).
In Example E28.1, the fish genome was instantiated and, in Example E28.2, it proved its
physical properties in several different categories. Whether one specific fish is fit or not, however,
is still not easy to say: fitness is always relative; it depends on the environment. The fitness
of the fish is not only defined by its own characteristics but also results from the features of
the other fish in the population (as well as from the abilities of its prey and predators). If
one fish can beat another one in all categories, i. e., is bigger, stronger, smarter, and so on,
it is better adapted to its environment and thus, will have a higher chance to survive. This
relation is transitive but only forms a partial order since a fish which is strong but not very
clever and a fish that is clever but not strong may have the same probability to reproduce
and hence, are not directly comparable.
It is not clear whether a weak fish with a clever behavioral pattern is worse or better
than a really strong but less cunning one. Both traits are useful in the evolutionary process
and maybe, one fish of the first kind will sometimes mate with one of the latter and produce
an offspring which is both, intelligent and powerful.
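The relation described in this example, where one fish only beats another if it is at least as good in every category and strictly better in at least one, is exactly the Pareto dominance of Section 3.3.5. A minimal sketch, assuming each trait is to be maximized:

```python
def dominates(a, b):
    """True iff individual a Pareto-dominates b: a is at least as good in
    every trait and strictly better in at least one (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

strong_dumb = (9, 2)   # (strength, cleverness)
weak_clever = (2, 9)
super_fish  = (9, 9)

assert dominates(super_fish, strong_dumb)       # better in all categories
assert not dominates(strong_dumb, weak_clever)  # incomparable: only a
assert not dominates(weak_clever, strong_dumb)  # partial order exists
```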
28.1.2.6 Selection
28.1.2.6.1 Biology
Selection in biology has many different facets [2945]. Let us again assume fitness to be a
measure for the expected number of offspring, as done in Paragraph 28.1.2.5.1. How fit an
individual is then does not necessarily determine directly if it can produce offspring and, if
so, how many. There are two more aspects which have influence on this: the environment
(including the rest of the population) and the potential mating partners. These factors are
the driving forces behind the three selection steps in natural selection 11 : (1) environmental
selection 12 (or survival selection and ecological selection) which determines whether an in-
dividual survives to reach fertility [684], (2) sexual selection, i. e., the choice of the mating
partner(s) [657, 2300], and (3) parental selection – possible choices of the parents against
pregnancy or newborn [2300].
10 Pareto comparisons are discussed in Section 3.3.5 on page 65 and Pareto ranking is introduced in
Section 28.3.3.
11 http://en.wikipedia.org/wiki/Natural_selection [accessed 2010-08-03]
12 http://en.wikipedia.org/wiki/Ecological_selection [accessed 2010-08-03]
Example E28.6 (Evolution of Fish (Example E28.1) Cont.).
An intelligent fish may be eaten by a shark, a strong one can die from a disease, and even
the most perfect fish could die from a lightning strike before ever reproducing. The process
of selection is always stochastic, without guarantees – even a fish that is small, slow, and
lacks any sophisticated behavior might survive and could produce even more offspring than
a highly fit one. The degree to which a fish is adapted to its environment therefore can
be considered as some sort of basic probability which determines its expected chances to
survive environment selection. This is the form of selection propagated by Darwin [684] in
his seminal book “On the Origin of Species”.
The sexual or mating selection describes the fact that survival alone does not guarantee
reproduction: At least for sexually reproducing species, two individuals are involved in this
process. Hence, even a fish which is perfectly adapted to its environment will have low
chances to create offspring if it is unable to attract female interest. Sexual selection is
considered to be responsible for much of nature’s beauty, such as the colorful wings of a
butterfly or the tail of a peacock [657, 2300].
The third selection factor, parental selection, may lead to examples for reinforcement of certain
physical characteristics which are much more extreme than Example E28.7. Rich Harris [2300]
argues that after the fertilization, i. e., before or after birth, parents may make a choice
for or against the life of their offspring. If the offspring does or is likely to exhibit certain
unwanted traits, this decision may be negative. A negative decision may also be caused by
environmental or economical decisions. A negative decision, however, may also again be
overruled after birth if the offspring turns out to possess especially favorable characteris-
tics. Rich Harris [2300], for instance, argues that parents in prehistoric European and Asian
cultures may this way have actively selected hairlessness and pale skin.
In Evolutionary Algorithms, survival selection is always applied in order to steer the op-
timization process towards good areas in the search space. From time to time, also a
subsequent sexual selection is used. For survival selection, the so-called selection algorithms
mate = selection(pop, v, mps) determine the mps individuals from the population pop which
should enter the mating pool mate based on their scalar fitness v. selection(pop, v, mps)
usually picks the fittest individuals and places them into mate with the highest probability.
The oldest selection scheme is called Roulette wheel and will be introduced in Section 28.4.3
on page 290 in detail. In the original version of this algorithm (intended for fitness maximization),
the chance of an individual p to reproduce is proportional to its fitness v(p) in relation to the
total fitness of all individuals in the population pop.
More information on selection algorithms is given in Section 28.4.
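A minimal sketch of Roulette wheel selection for fitness maximization could look as follows; the linear scan over cumulative fitness values is the simplest, though not the fastest, realization:

```python
import random

def roulette_wheel(pop, v, mps):
    """Fitness-proportionate selection (maximization): individual p enters
    the mating pool with probability v(p) / (sum of all fitness values)."""
    total = sum(v(p) for p in pop)
    mate = []
    for _ in range(mps):
        r = random.uniform(0.0, total)   # spin the wheel
        acc = 0.0
        for p in pop:
            acc += v(p)
            if acc >= r:                 # r falls into p's slice
                mate.append(p)
                break
    return mate

pop = ["a", "b", "c"]
fitness = {"a": 1.0, "b": 2.0, "c": 7.0}
mate = roulette_wheel(pop, fitness.get, mps=5)   # "c" is drawn most often
```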
28.1.2.7 Reproduction
28.1.2.7.1 Biology
Last but not least, there is the reproduction. For this purpose, two approaches exist in
nature: the sexual and the asexual method.
Asexually reproducing species basically create clones of themselves, a process during which,
again, mutations may take place. Regardless of how they were produced, the new generation
of offspring now enters the struggle for survival and the cycle of evolution starts in another
round.
In Evolutionary Algorithms, individuals usually do not have such a “gender”. Yet, the
concept of recombination, i. e., of binary search operations, is one of the central ideas of this
metaheuristic. Each individual from the mating pool can potentially be recombined with
every other one. In the car example, this means that we could copy the design of the engine
of one car and place it into the construction plan of the body of another car.
The concept of mutation is present in EAs as well. Parts of the genotype (and hence,
properties of the phenotype) can randomly be changed with a unary search operation. This
way, new construction plans for new cars are generated. The common definitions for repro-
duction operations in Evolutionary Algorithms are given in Section 28.5.
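For bit-string genomes, the two reproduction operations can be sketched as follows; the per-bit mutation rate and the one-point crossover variant are illustrative choices:

```python
import random

def mutate(g, rate=0.1):
    """Unary search operation: flip each bit with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in g]

def crossover(g1, g2):
    """Binary search operation: one-point crossover of two genotypes."""
    cut = random.randrange(1, len(g1))
    return g1[:cut] + g2[cut:]

# an offspring combining two parents, then slightly perturbed
child = mutate(crossover([0] * 8, [1] * 8))
```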
At this point it should be mentioned that the direct reference to Darwinian evolution in
Evolutionary Algorithms is somehow controversial. The strongest argument against this
reference is that Evolutionary Algorithms introduce a change in semantics by being goal-
driven [2772] while the natural evolution is not.
Paterson [2134] further points out that “neither GAs [Genetic Algorithms] nor GP [Ge-
netic Programming] are concerned with the evolution of new species, nor do they use natural
selection.” On the other hand, nobody would claim that the idea of selection has not been
borrowed from nature although many additions and modifications have been introduced in
favor for better algorithmic performance.
An argument against close correspondence between EAs and natural evolution concerns
the development of different species. Whether it holds depends on the definition of species: According to
Wikipedia – The Free Encyclopedia [2938], “a species is a class of organisms which are very
similar in many aspects such as appearance, physiology, and genetics.” In principle, there
is some elbowroom for us and we may indeed consider even different solutions to a single
problem in Evolutionary Algorithms as members of different species – especially if the
binary search operation crossover/recombination applied to their genomes cannot produce
another valid candidate solution.
In Genetic Algorithms, the crossover and mutation operations may be considered to be
strongly simplified models of their natural counterparts. In many advanced EAs there may
be, however, operations which work entirely differently. Differential Evolution, for instance,
features a ternary search operation (see Chapter 33). Another example is [2912, 2914], where
the search operations directly modify the phenotypes (G = X) according to heuristics and
intelligent decisions.
Furthermore, although the concept of fitness13 in nature is controversial [2542], it is often
considered as an a posteriori measurement. It then defines the ratio between the numbers
of occurrences of a genotype in a population after and before selection or the number of
offspring an individual has in relation to the number of offspring of another individual. In
Evolutionary Algorithms, fitness is an a priori quantity denoting a value that determines
the expected number of instances of a genotype that should survive the selection process.
However, one could conclude that biological fitness is just an approximation of this a priori
quantity, which arose due to the hardness (if not impossibility) of measuring it directly.
My personal opinion (which may as well be wrong) is that the citation of Darwin here is
well motivated since there are close parallels between Darwinian evolution and Evolutionary
Algorithms. Nevertheless, natural and artificial evolution are still two different things and
phenomena observed in either of the two do not necessarily carry over to the other.
Let us review our introductory fish example in terms of forma analysis. Fish can, for instance,
be characterized by the properties “clever” and “strong”. For the sake of simplicity, let us
assume that both properties may be true or false for a single individual. They hence
define two formae each. A third property can be the color, for which many different possible
variations exist. Some of them may be good in terms of camouflage, others maybe good in
terms of finding mating partners. Now a fish can be clever and strong at the same time, as
well as weak and green. Here, a living fish allows nature to evaluate the utility of at least
three different formae.
This issue has first been stated by Holland [1250] for Genetic Algorithms and is termed
implicit parallelism (or intrinsic parallelism). Since then, it has been studied by many dif-
ferent researchers [284, 1128, 1133, 2827]. If the search space and the genotype-phenotype
mapping are properly designed, implicit parallelism in conjunction with the crossover/re-
combination operations is one of the reasons why Evolutionary Algorithms are such a suc-
cessful class of optimization algorithms. The Schema Theorem discussed in Section 29.5.3
on page 342 gives a further explanation for this form of parallel evaluation.
1. Genetic Algorithms (GAs) are introduced in Chapter 29 on page 325. GAs subsume
all Evolutionary Algorithms which have string-based search spaces G in which they
mainly perform combinatorial search.
2. The set of Evolutionary Algorithms which explore the space of real vectors X ⊆ Rn ,
often using more advanced mathematical operations, is called Evolution Strategies (ES,
see Chapter 30 on page 359). Closely related is Differential Evolution, a numerical
optimizer using a ternary search operation.
13 http://en.wikipedia.org/wiki/Fitness_(biology) [accessed 2008-08-10]
3. For Genetic Programming (GP), which will be elaborated on in Chapter 31 on page 379,
we can provide two definitions: On one hand, GP includes all Evolutionary Algorithms
that grow programs, algorithms, and the like. On the other hand, all EAs that evolve
tree-shaped individuals are instances of Genetic Programming.
4. Learning Classifier Systems (LCS), discussed in Chapter 35 on page 457, are learning
approaches that assign output values to given input values. They often internally use
a Genetic Algorithm to find new rules for this mapping.
5. Evolutionary Programming (EP, see Chapter 32 on page 413) approaches usually fall
into the same category as Evolution Strategies, they optimize vectors of real numbers.
Historically, such approaches have also been used to derive algorithm-like structures
such as finite state machines.
[Figure 28.2: The family of Evolutionary Algorithms: Genetic Programming (subsuming GGGP, SGP, and LGP), Learning Classifier Systems, Evolution Strategy, and Differential Evolution are all subsets of Evolutionary Algorithms.]
The early research [718] in Genetic Algorithms (see Section 29.1 on page 325), Genetic
Programming (see Section 31.1.1 on page 379), and Evolutionary Programming (see ??
on page ??) dates back to the 1950s and 60s. Besides the pioneering work listed in these
sections, at least one other important early contribution should not go unmentioned here: the
Evolutionary Operation (EVOP) approach introduced by Box and Draper [376, 377] in the
late 1950s. The idea of EVOP was to apply a continuous and systematic scheme of small
changes in the control variables of a process. The effects of these modifications are evaluated
and the process is slowly shifted into the direction of improvement. This idea was never
realized as a computer algorithm, but Spendley et al. [2572] used it as basis for their simplex
method which then served as progenitor of the Downhill Simplex algorithm14 by Nelder and
Mead [2017] [718, 1719]. Satterthwaite’s REVOP [2407, 2408], a randomized Evolutionary
Operation approach, however, was rejected at the time [718].
Like in Section 1.3 on page 31, we here classified different algorithms according to their
semantics, in other words, corresponding to their specific search and problem spaces and
the search operations [634]. All five major approaches can be realized with the basic scheme
defined in Algorithm 28.1. To this simple structure, there exist many general improve-
ments and extensions. Often, these do not directly concern the search or problem spaces.
14 We discuss Nelder and Mead’s Downhill Simplex [2017] optimization method in Chapter 43 on page 501.
Then, they also can be applied to all members of the EA family alike. Later in this chap-
ter, we will discuss the major components of many of today’s most efficient Evolutionary
Algorithms [593]. Some of the distinctive features of these EAs are:
1. The population size and the number of populations used.
2. The method of selecting the individuals for reproduction.
3. The way the offspring is included into the population(s).
Extinctive Evolutionary Algorithms can further be divided into left and right selec-
tion [2987]. In left extinctive selections, the best individuals are not allowed to reproduce in
order to prevent premature convergence of the optimization process. Conversely, the worst
individuals are not permitted to breed in right extinctive selection schemes in order to reduce
the selective pressure since they would otherwise scatter the fitness too much [2987].
Here, the EA creates a single offspring from a single parent and the offspring then replaces
its parent. This is equivalent to a random walk and hence does not optimize very well, see
Section 8.2 on page 114.
28.1.4.3.5 (µ + 1)-EAs
Here, the population contains µ individuals from which one is drawn randomly. This indi-
vidual is reproduced. Alternatively, two parents can be chosen and recombined [302, 2278].
From the joint set of its offspring and the current population, the least fit individual is
removed.
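One generation of such a (µ + 1)-EA (without the optional recombination) might be sketched as follows, assuming minimization of the objective f; the integer genome in the usage example is an illustrative assumption:

```python
import random

def mu_plus_one_step(pop, f, mutate):
    """One generation of a (mu+1)-EA: draw a random parent, create one
    offspring by mutation, then drop the least fit of the mu+1 individuals
    (minimization of f)."""
    parent = random.choice(pop)
    offspring = mutate(parent)
    joint = pop + [offspring]
    joint.remove(max(joint, key=f))   # remove the worst individual
    return joint

# integer genomes drifting towards the optimum of f(x) = |x| at 0
pop = [5, 7, 9]
for _ in range(100):
    pop = mu_plus_one_step(pop, abs, lambda x: x + random.choice((-1, 1)))
```

Since the worst of the µ + 1 individuals is always discarded, the worst objective value in the population can never increase from one generation to the next.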
The notation (µ/ρ, λ) is a generalization of (µ, λ) used to signify the application of ρ-ary
reproduction operators. In other words, the parameter ρ is added to denote the number of
parent individuals of one offspring. Normally, only unary mutation operators (ρ = 1) are
used in Evolution Strategies, the family of EAs the µ-λ notation stems from. If (binary)
recombination is also applied (as is common in general Evolutionary Algorithms), ρ = 2 holds. A
special case of (µ/ρ, λ) algorithms is the (µ/µ, λ) Evolution Strategy [1843] which, basically,
is similar to an Estimation of Distribution Algorithm.
Geyer et al. [1044, 1045, 1046] have developed nested Evolution Strategies where λ′ off-
spring are created and isolated for γ generations from a population of the size µ′ . In each
of the γ generations, λ children are created from which the fittest µ are passed on to the
next generation. After the γ generations, the best individuals from each of the isolated
sub-populations are propagated back to the top-level population, i. e., selected. Then, the
cycle starts again with λ′ new child individuals. This nested Evolution Strategy can be
more efficient than the other approaches when applied to complex multimodal fitness envi-
ronments [1046, 2279].
28.1.4.4 Elitism
Definition D28.4 (Elitism). An elitist Evolutionary Algorithm [595, 711, 1691] ensures
that at least one copy of the best individual(s) of the current generation is propagated on
to the next generation.
The main advantage of elitism is that the algorithm is guaranteed to converge, meaning
that once the basin of attraction of the global optimum has been discovered, the Evolutionary
Algorithm will converge to that optimum. On the other hand, the risk
of converging to a local optimum is also higher. Elitism is an additional feature of Global
Optimization algorithms – a special type of preservative strategy – which is often realized
by using a secondary population (called the archive) only containing the non-prevailed in-
dividuals. This population is updated at the end of each iteration. It should be noted that
Evolutionary Algorithms are very robust and efficient optimizers which can often also provide
good solutions for hard problems. However, their efficiency is strongly influenced by design
choices and configuration parameters. In Chapter 24, we gave some general rules-of-thumb
for the design of the representation used in an optimizer. Here we want to list the set of
configuration and setup choices available to the user of an EA.
[Figure 28.3: The configuration parameters of Evolutionary Algorithms: the fitness assignment process, the selection algorithm, the search space and search operations, the genotype-phenotype mapping, the archive/pruning strategy, and the basic parameters such as population size, crossover rate, and mutation rate.]
Figure 28.3 illustrates these parameters. The performance and success of an evolutionary
optimization approach applied to a problem given by a set of objective functions f and a
problem space X is defined by
1. its basic parameter settings such as the population size ps or the crossover rate cr and
mutation rate mr,
2. whether it uses an archive archive of the best individuals found and, if so, the archive
size as and which pruning technology is used to prevent it from overflowing,
3. the fitness assignment process “assignFitness” and the selection algorithm “selection”,
4. the choice of the search space G and the search operations Op, and, finally
5. the genotype-phenotype mapping “gpm” connecting the search space and the problem
space.
In ??, we go more into detail on how to state the configuration of an optimization algorithm
in order to fully describe experiments and to make them reproducible.
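One simple way to make such a configuration explicit and reproducible is to collect the five groups of choices in a single record; the field names and example values below are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class EAConfiguration:
    """The setup choices listed above, gathered so that an experiment
    can be described (and reproduced) by a single record."""
    ps: int                    # 1. population size
    cr: float                  #    crossover rate
    mr: float                  #    mutation rate
    archive_size: int          # 2. archive size (0 = no archive)
    fitness_assignment: str    # 3. fitness assignment process
    selection: str             #    selection algorithm
    search_ops: Sequence[str]  # 4. search operations over G
    gpm: str                   # 5. genotype-phenotype mapping

cfg = EAConfiguration(ps=100, cr=0.7, mr=0.01, archive_size=10,
                      fitness_assignment="pareto-ranking",
                      selection="tournament(2)",
                      search_ops=("one-point-xover", "bit-flip"),
                      gpm="identity")
```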
In Evolutionary Algorithms as well as in any other optimization method, the search space
G is not necessarily the same as the problem space X. In other words, the elements g
processed by the search operations searchOp ∈ Op may differ from the candidate solutions
x ∈ X which are evaluated by the objective functions f : X 7→ R. In such cases, a mapping
between G and X is necessary: the genotype-phenotype mapping (GPM). In Section 4.3 on
page 85 we gave a precise definition of the term GPM. Generally, we can distinguish four
types of genotype-phenotype mappings:
2. In many cases, direct mappings, i. e., simple functional transformations which directly
relate parts of the genotypes and phenotypes are used (see Section 28.2.1).
3. In generative mappings, the phenotypes are built step by step in a process directed by
the genotypes. Here, the genotypes serve as programs which direct the phenotypical
developments (see Section 28.2.2.1).
Which kind of genotype-phenotype mapping should be used for a given optimization task
strongly depends on the type of the task at hand. For most small-scale and many large-
scale problems, direct mappings are completely sufficient. However, especially if large-scale,
possibly repetitive or recursive structures are to be created, generative or ontogenic mappings
can be the best choice [777, 2735].
Direct mappings are explicit functional relationships between the genotypes and the phe-
notypes [887]. Usually, each element of the phenotype is related to one element in the
genotype [2735] or to a group of genes. Also, the algorithmic complexity of these mappings
is usually not above O(n log n) where n is the length of a genotype. Based on the
work of Devert [777], we define explicit or direct mappings as follows:
28.2. GENOTYPE-PHENOTYPE MAPPINGS 271
Definition D28.5 (Explicit Representation). An explicit (or direct) representation is
a representation for which a computable function exists that takes as argument the size
of the genotype and gives an upper bound for the number of algorithm steps of the genotype-
phenotype mapping until the phenotype-building process has converged to a stable pheno-
type.
The algorithm steps mentioned in Definition D28.5 can be, for example, the write operations
to the phenotype. In the direct mappings we discuss in the remainder of this section, usually
each element of a candidate solution is written to exactly once. If bit strings are transformed
to vectors of natural numbers, for example, each element of the numerical vector is set to
one value. The same goes when scaling real vectors: the elements of the resulting vectors are
written to once. In the following, we provide some examples for direct genotype-phenotype
mappings.
A genotype-phenotype mapping which allows the search to operate on G = [0, 1]^n instead
of X = [x̆1, x̂1] × [x̆2, x̂2] × · · · × [x̆n, x̂n], where x̆i ∈ R and x̂i ∈ R are the lower and upper
bounds of the ith dimension of the problem space X, respectively, would prevent such pitfalls
from the beginning:

gpm_scale(g) = (x̆i + g[i] · (x̂i − x̆i)) ∀i ∈ 1..n    (28.1)
Full ontogenic (or developmental [887]) mappings even go a step farther than just utilizing
state information during the transformation of the genotypes to phenotypes [777]. This
additional information can stem from the objective functions. If simulations are involved
in the computation of the objective functions, the mapping may access the full simulation
information as well.
28.3.1 Introduction
Listing 55.14 on page 718 provides a Java interface which enables us to implement different
fitness assignment processes according to Definition D28.6. In the context of this book, we
generally minimize fitness values, i. e., the lower the fitness of a candidate solution, the
better. If no density or sharing information is included and the fitness only reflects the
strict partial order introduced by the Pareto dominance relation (see Section 3.3.5) or a prevalence relation (see
Section 3.5.2), Equation 28.4 will hold. If further information such as diversity metrics is
included in the fitness, this equation may no longer be applicable.
p1 .x ≺ p2 .x ⇒ v(p1 ) < v(p2 ) ∀p1 , p2 ∈ pop ∪ archive (28.4)
28.3. FITNESS ASSIGNMENT 275
28.3.2 Weighted Sum Fitness Assignment
The most primitive fitness assignment strategy is to just use a weighted sum of
the objective values. This approach is very static and brings about the same problems as
the weighted sum-based definition of what an optimum is, discussed in Section 3.3.3 on
page 62. It makes no use of Pareto or prevalence relations. For computing the weighted sum
of the different objective values of a candidate solution, we reuse Equation 3.18 on page 62
from the weighted sum optimum definition. The weights have to be chosen in a way that
ensures that v(p) ∈ R+ holds for all individuals p ∈ pop.
v(p) = assignFitnessWeightedSum(pop) ⇔ (∀p ∈ pop ⇒ v(p) = ws(p.x)) (28.5)
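A minimal sketch of this assignment, assuming (as in Equation 3.18, which is not reproduced here) that ws(x) is the sum of weighted objective values; the class and method names are ours:

```java
/** A sketch of the weighted sum fitness assignment of Equation 28.5. */
public final class WeightedSumFitness {

  /**
   * v[j] = ws(pop[j].x) = sum_i w[i] * f_i(pop[j].x); here the objective
   * value vectors of the individuals are passed in directly.
   */
  public static double[] assign(final double[][] objectives, final double[] w) {
    final double[] v = new double[objectives.length];
    for (int j = 0; j < objectives.length; j++) {
      double ws = 0.0;
      for (int i = 0; i < w.length; i++) {
        ws += w[i] * objectives[j][i];
      }
      v[j] = ws; // fitness is subject to minimization
    }
    return v;
  }

  public static void main(final String[] args) {
    // two individuals with objective vectors (f1, f2) and equal weights
    final double[] v = assign(new double[][] { { 1.0, 7.0 }, { 2.0, 4.0 } },
                              new double[] { 0.5, 0.5 });
    System.out.println(v[0] + " " + v[1]); // 4.0 3.0
  }
}
```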
In case 1, individuals that dominate many others will here receive a lower fitness value
than those which are prevailed by many. Since fitness is subject to minimization they will
strongly be preferred. Although this idea looks appealing at first glance, it has a decisive
drawback which can also be observed in Example E28.16: It promotes individuals that reside
in crowded regions of the problem space and underrates those in sparsely explored areas. If an
individual has many neighbors, it can potentially prevail many other individuals. However,
a non-prevailed candidate solution in a sparsely explored region of the search space may not
prevail any other individual and hence, receive fitness 1, the worst possible fitness under v1 .
Thus, the fitness assignment process achieves exactly the opposite of what we want.
Instead of exploring the problem space and delivering a wide scan of the frontier of best
possible candidate solutions, it will focus all effort on a small set of individuals. We will
only obtain a subset of the best solutions and it is even possible that this fitness assignment
method leads to premature convergence to a local optimum.
A much better second approach for fitness assignment was first proposed by Gold-
berg [1075]. Here, the idea is to assign to each candidate solution the number of individuals
it is prevailed by [1125, 1783]. This way, the previously mentioned negative effects will not
occur and the exploration pressure is applied to a much wider area of the Pareto frontier.
This so-called Pareto ranking can be performed by first removing all non-prevailed (non-
dominated) individuals from the population and assigning the rank 0 to them (since they
are prevailed by no individual and according to Equation 28.7 should receive v2 ). Then, the
same is performed with the rest of the population. The individuals only dominated by those
on rank 0 (now non-dominated) will be removed and get the rank 1. This is repeated until
all candidate solutions have a proper fitness assigned to them. Since we follow the idea of the
freer prevalence comparators instead of Pareto dominance relations, we will synonymously
refer to this approach as Prevalence ranking and define Algorithm 28.4 which is implemented
in Java in Listing 56.16 on page 759. It should be noted that this simple algorithm has
a complexity of O(ps² · |f|). A more efficient approach with complexity O(ps · log^(|f|−1) ps)
has been introduced by Jensen [1441] based on a divide-and-conquer strategy.
276 28 EVOLUTIONARY ALGORITHMS
Figure 28.4: An example population of 15 individuals in the objective space spanned by f1 and f2; the non-dominated individuals 1, 2, 3, and 4 form the Pareto frontier.
Table 28.1: The Pareto domination relation of the individuals illustrated in Figure 28.4.
We computed the fitness values for both approaches for the individuals in the population
given in Figure 28.4 and noted them in Table 28.1. In column v1 (p), we provide the fitness
values of the first approach, which are inversely proportional to the number of individuals dominated by p.
A good example for the problem that approach 1 prefers crowded regions in the search
space while ignoring interesting candidate solutions in less populated regions are the four
non-prevailed individuals {1, 2, 3, 4} on the Pareto frontier. The best fitness is assigned to
the element 2, followed by individual 1. Although individual 7 is dominated (by 1), its
fitness is better than the fitness of the non-dominated element 3.
The candidate solution 4 gets the worst possible fitness 1, since it prevails no other
element. Its chances for reproduction are similarly low as those of individual 15, which
is dominated by all other elements except 4. Hence, both candidate solutions will most
probably not be selected and will vanish in the next generation. The loss of individual 4 will
greatly decrease the diversity and further increase the focus on the crowded area near 1 and
2.
Column v2 (p) in Table 28.1 shows that all four non-prevailed individuals now have the
best possible fitness 0 if the method given as Algorithm 28.4 is applied. Therefore, the focus
is no longer centered on the overcrowded regions.
However, the region around the individuals 1 and 2 has probably already extensively
been explored, whereas the surrounding of candidate solution 4 is rather unknown. More
individuals are present in the top-left area of Figure 28.4 and at least two individuals with
the same fitness as 3 and 4 reside there. Thus, chances are sill that more individuals
from this area will be selected than from the bottom-right area. A better approach of
fitness assignment should incorporate such information and put a bit more pressure into the
direction of individual 4, in order to make the Evolutionary Algorithm investigate this area
more thoroughly. Algorithm 28.4 cannot fulfill this hope.
Previously, we have mentioned that the drawback of Pareto ranking is that it does not
incorporate any information about whether the candidate solutions in the population reside
closely to each other or in regions of the problem space which are only sparsely covered
by individuals. Sharing, as a method for including such diversity information into the
fitness assignment process, was introduced by Holland [1250] and later refined by Deb [735],
Goldberg and Richardson [1080], and Deb and Goldberg [742] and further investigated
by [1899, 2063, 2391].
Sharing functions can be employed in many different ways and are used by a variety of fitness
assignment processes [735, 1080]. Typically, the simple triangular function “Sh tri” [1268]
or one of its convex (Sh cvexξ ) or concave (Sh ccavξ ) counterparts with the power
ξ ∈ R+ , ξ > 0 is applied. Besides using different powers of the distance-σ-ratio, another
approach is the exponential sharing method “Sh exp”.
Sh tri_σ(d) = { 1 − d/σ if 0 ≤ d < σ; 0 otherwise }    (28.9)

Sh cvex_σ,ξ(d) = { (1 − d/σ)^ξ if 0 ≤ d < σ; 0 otherwise }    (28.10)

Sh ccav_σ,ξ(d) = { 1 − (d/σ)^ξ if 0 ≤ d < σ; 0 otherwise }    (28.11)

Sh exp_σ,ξ(d) = { 1 if d ≤ 0; 0 if d ≥ σ; (e^(−ξd/σ) − e^(−ξ)) / (1 − e^(−ξ)) otherwise }    (28.12)
For sharing, the distance of the individuals in the search space G as well as their distance
in the problem space X or the objective space Y may be used. If the candidate solutions
are real vectors in the Rn , we could use the Euclidean distance of the phenotypes of the
individuals directly, i. e., compute dist eucl(p1 .x, p2 .x). However, this only makes sense in
lower-dimensional spaces (small values of n), since the Euclidean distance in Rn does not
convey much information for high n.
In Genetic Algorithms, where the search space is the set of all bit strings G = Bn
of the length n, another suitable approach would be to use the Hamming distance22
dist ham(p1 .g, p2 .g) of the genotypes. The work of Deb [735] indicates that phenotypical
sharing will often be superior to genotypical sharing.
Definition D28.8 (Niche Count). The niche count m(p, P ) [738, 1899] of an individual
p ∈ P is the sum of its sharing values with all individuals in a list P .

m(p, P ) = Σ_{∀p′ ∈ P} Sh_σ(dist(p, p′))  ∀p ∈ P    (28.13)
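Equation 28.13 can be sketched as follows, here with the triangular sharing function and a precomputed distance matrix; the class and method names are ours:

```java
/** A sketch of the niche count m(p, P) of Equation 28.13. */
public final class NicheCount {

  /** Triangular sharing function Sh tri (Equation 28.9). */
  static double shTri(final double d, final double sigma) {
    return ((d >= 0) && (d < sigma)) ? (1.0 - (d / sigma)) : 0.0;
  }

  /** m(p, P) = sum over all p' in P of Sh(dist(p, p')), using a distance matrix. */
  public static double m(final int p, final double[][] dist, final double sigma) {
    double sum = 0.0;
    for (int q = 0; q < dist[p].length; q++) {
      sum += shTri(dist[p][q], sigma); // includes Sh(dist(p, p)) = 1, so m >= 1
    }
    return sum;
  }

  public static void main(final String[] args) {
    // three individuals: p0 and p1 are close together, p2 is remote
    final double[][] dist = { { 0.0, 0.2, 5.0 },
                              { 0.2, 0.0, 5.0 },
                              { 5.0, 5.0, 0.0 } };
    System.out.println(m(0, dist, 1.0)); // 1.8: p0 sits in a dense region
    System.out.println(m(2, dist, 1.0)); // 1.0: p2 only shares with itself
  }
}
```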
The niche count “m” is always greater than or equal to 1, since p ∈ P and, hence,
Shσ (dist(p, p)) = 1 is computed and added up at least once. The original sharing ap-
proach was developed for fitness assignment in single-objective optimization where only one
objective function f was subject to maximization. In this case, the objective value was sim-
ply divided by the niche count, punishing solutions in crowded regions [1899]. The goal of
22 See ?? on page ?? for more information on the Hamming distance.
sharing was to distribute the population over a number of different peaks in the fitness land-
scape, with each peak receiving a fraction of the population proportional to its height [1268].
The result of dividing the fitness by the niche counts strongly depends on the height dif-
ferences of the peaks and thus, on the complexity class23 of f. On an f1 ∈ O(x), for instance,
the influence of “m” is much bigger than on an f2 ∈ O(ex ).
By multiplying predetermined fitness values v′ with the niche count “m”, we can use this
approach for fitness minimization in conjunction with a variety of other fitness
assignment processes, but we also inherit its shortcomings:
Sharing was traditionally combined with fitness proportionate, i. e., roulette wheel selec-
tion24 . Oei et al. [2063] have shown that if the sharing function is computed using the
parental individuals of the “old” population and then naı̈vely combined with the more so-
phisticated tournament selection25 , the resulting behavior of the Evolutionary Algorithm
may be chaotic. They suggested to use the partially filled “new” population to circumvent
this problem. The layout of Evolutionary Algorithms, as defined in this book, bases the
fitness computation on the whole set of “new” individuals and assumes that their objec-
tive values have already been completely determined. In other words, such issues simply
do not exist in multi-objective Evolutionary Algorithms as introduced here and the chaotic
behavior does occur.
For computing the niche count “m”, O(n²) comparisons are needed. According to
Goldberg et al. [1085], sampling the population can be sufficient to approximate “m” in
order to avoid this quadratic complexity.
Using sharing and the niche counts naı̈vely leads to more or less unpredictable effects. Of
course, it promotes solutions located in sparsely populated niches but how much their fitness
will be improved is rather unclear. Using distance measures which are not normalized can
lead to strange effects, too. Imagine two objective functions f1 and f2 . If the values of f1
span from 0 to 1 for the individuals in the population whereas those of f2 range from 0 to
10 000, the components of f1 will most often be negligible in the Euclidean distance of two
individuals in the objective space Y. Another problem is that the effect of simple sharing
on the pressure into the direction of the Pareto frontier is not obvious either, or depends on
the sharing approach applied. Some methods simply add a niche count to the Pareto rank,
which may cause non-dominated individuals to have a worse fitness than any others in the
population. Other approaches scale the niche count into the interval [0, 1) before adding it,
which not only ensures that non-dominated individuals have the best fitness but also leaves
the relation between individuals at different ranks intact, which, however, does not further
variety very much.
In some of our experiments [2888, 2912, 2914], we applied a Variety Preserving Fitness
Assignment, an approach based on Pareto ranking using prevalence comparators and well-
dosed sharing. We developed it in order to mitigate the previously mentioned side effects
and balance the evolutionary pressure between optimizing the objective functions and max-
imizing the variety inside the population. In the following, we will describe it in detail and
give its specification in Algorithm 28.5.
Before this fitness assignment process can begin, it is required that all individuals with
infinite objective values must be removed from the population pop. If such a candidate
solution is optimal, i. e., if it has negative infinitely large objectives in a minimization process,
for instance, it should receive fitness zero, since fitness is subject to minimization. If the
23 See Chapter 12 on page 143 for a detailed introduction into complexity and the O-notation.
24 Roulette wheel selection is discussed in Section 28.4.3 on page 290.
25 You can find an outline of tournament selection in Section 28.4.4 on page 296.
Figure 28.5: The sharing potential in the Variety Preserving Example E28.17 (the share potential plotted over the objective space spanned by f1 and f2; the positions of the 15 individuals appear as spikes, and the Pareto frontier is marked).
Table 28.2: The fitness assignment in the Variety Preserving example (x: individual, rank: Pareto rank, share/u and share/s: unscaled and scaled share values, v(x): resulting fitness).

x  | f1 | f2 | rank | share/u | share/s | v(x)
1  | 1  | 7  | 0    | 0.71    | 0.779   | 0.779
2  | 2  | 4  | 0    | 0.239   | 0.246   | 0.246
3  | 6  | 2  | 0    | 0.201   | 0.202   | 0.202
4  | 10 | 1  | 0    | 0.022   | 0       | 0
5  | 1  | 8  | 1    | 0.622   | 0.679   | 3.446
6  | 2  | 7  | 2    | 0.906   | 1       | 5.606
7  | 3  | 5  | 1    | 0.531   | 0.576   | 3.077
8  | 2  | 9  | 4    | 0.314   | 0.33    | 5.191
9  | 3  | 7  | 4    | 0.719   | 0.789   | 6.845
10 | 4  | 6  | 2    | 0.592   | 0.645   | 4.325
11 | 5  | 5  | 2    | 0.363   | 0.386   | 3.39
12 | 7  | 3  | 1    | 0.346   | 0.366   | 2.321
13 | 8  | 4  | 3    | 0.217   | 0.221   | 3.797
14 | 7  | 7  | 9    | 0.094   | 0.081   | 9.292
15 | 9  | 9  | 13   | 0.025   | 0.004   | 13.01
But first things first; as already mentioned, we know the Pareto ranks of the candidate
solutions from Table 28.1, so the next step is to determine the ranges of values the objective
functions take on for the example population. These can again easily be found out from
Figure 28.4. f1 spans from 1 to 10, which leads to rangeScale[0] = 1/9. rangeScale[1] = 1/8
since the maximum of f2 is 9 and its minimum is 1. With this, we can now compute the
(dimensionally scaled) distances amongst the candidate solutions in the objective space,
the values of √dist in Algorithm 28.5, as well as the corresponding values of
the sharing function Sh exp_{√|f|, 16}(√dist). We noted these in Table 28.3, using the upper
triangle of the table for the distances and the lower triangle for the shares.
Table 28.3: The distance and sharing matrix of the example from Table 28.2.
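The computation of rangeScale and of the scaled objective-space distances can be sketched as follows. This is a simplified sketch with names of our own choosing; for the demonstration, only two individuals spanning the objective ranges of the example are used:

```java
/** A sketch of the dimensionally scaled objective distances used in Example E28.17. */
public final class ScaledDistance {

  /** rangeScale[i] = 1 / (max_i - min_i), computed per objective over the population. */
  public static double[] rangeScale(final double[][] objectives) {
    final int n = objectives[0].length;
    final double[] scale = new double[n];
    for (int i = 0; i < n; i++) {
      double min = Double.POSITIVE_INFINITY;
      double max = Double.NEGATIVE_INFINITY;
      for (final double[] f : objectives) {
        min = Math.min(min, f[i]);
        max = Math.max(max, f[i]);
      }
      scale[i] = 1.0 / (max - min);
    }
    return scale;
  }

  /** Euclidean distance of two objective vectors after per-dimension scaling. */
  public static double dist(final double[] a, final double[] b, final double[] scale) {
    double s = 0.0;
    for (int i = 0; i < a.length; i++) {
      final double d = (a[i] - b[i]) * scale[i];
      s += d * d;
    }
    return Math.sqrt(s);
  }

  public static void main(final String[] args) {
    // f1 spans 1..10 and f2 spans 1..9 in the example population
    final double[] scale = rangeScale(new double[][] { { 1.0, 1.0 }, { 10.0, 9.0 } });
    System.out.println(scale[0] + " " + scale[1]); // 1/9 and 1/8
  }
}
```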
The value of the sharing function can be imagined as a scalar field, as illustrated in
Figure 28.5. In this case, each individual in the population can be considered as an electron
that will build an electrical field around it resulting in a potential. If two electrons come
close, repulsing forces occur, which is pretty much the same as what we want to achieve with Variety
Preserving. Unlike in an electrical field, the power of the sharing potential falls off exponentially,
resulting in the relatively steep spikes in Figure 28.5, which gives proximity and density a heavier
influence. Electrons in atoms or on planets are limited in their movement by other influences
like gravity or nuclear forces, which are often stronger than the electromagnetic force. In
Variety Preserving, the prevalence rank plays this role – as you can see in Table 28.2, its
influence on the fitness is often dominant.
By summing up the single sharing potentials for each individual in the example, we
obtain the fifth column of Table 28.2, the unscaled share values. Their minimum is around
0.022 and their maximum is 0.906. Therefore, we subtract 0.022 from each of these values
and multiply the result with 1.131. By doing so, we build the column share/s. Finally, we
can compute the fitness values v(x) according to lines 38 and 37 in Algorithm 28.5.
The last column of Table 28.2 lists these results. All non-prevailed individuals have
retained a fitness value less than one, lower than those of any other candidate solution in
the population. However, amongst these best individuals, candidate solution 4 is strongly
preferred, since it is located in a very remote location of the objective space. Individual
1 is the least interesting non-dominated one, because it has the densest neighborhood in
Figure 28.4. In this neighborhood, the individuals 5 and 6 with the Pareto ranks 1 and 2 are
located. They are strongly penalized by the sharing process and receive the fitness values
v(5) = 3.446 and v(6) = 5.606. In other words, individual 5 becomes less interesting than
candidate solution 7, which has a worse Pareto rank. Individual 6 is now even worse than individual 8,
which would have a fitness better by two if strict Pareto ranking were applied.
Based on these fitness values, algorithms like Tournament selection (see Section 28.4.4)
or fitness proportionate approaches (discussed in Section 28.4.3) will pick elements in a
way that preserves the pressure into the direction of the Pareto frontier but also leads to a
balanced and sustainable variety in the population. The benefits of this approach have been
shown, for instance, in [2188, 2888, 2912, 2914].
28.4 Selection
28.4.1 Introduction
Definition D28.10 (Mating Pool). The mating pool mate in an Evolutionary Algorithm
contains the individuals which are selected for further investigation and on which the repro-
duction operations (see Section 28.5 on page 306) will subsequently be applied.
27 http://en.wikipedia.org/wiki/Selection_%28genetic_algorithm%29 [accessed 2007-07-03]
Selection may behave in a deterministic or in a randomized manner, depending on the
algorithm chosen and its application-dependent implementation. Furthermore, elitist Evolutionary
Algorithms may incorporate an archive archive in the selection process, as outlined
in ??.
Generally, there are two classes of selection algorithms: those with replacement (annotated
with a subscript r in this book) and those without replacement (here annotated with a
subscript w ) [2399]. In Figure 28.6, we present the transition from generation t to t + 1 for
both types of selection methods.

Figure 28.6: Selection with and without replacement. With replacement (top row), individuals can enter the mating pool more than once; without replacement (bottom row), individuals enter the mating pool at most once. In some EAs (steady-state, (µ+λ) Evolution Strategies, ...), the parents additionally compete with the offspring.

In a selection algorithm without replacement, each individual
from the population pop can enter the mating pool mate at most once in each generation.
If a candidate solution has been selected, it cannot be selected a second time in the same
generation. This process is illustrated in the bottom row of Figure 28.6.
The mating pool returned by algorithms with replacement can contain the same indi-
vidual multiple times, as sketched in the top row of Figure 28.6. The dashed gray line in
this figure stands for the possibility to include the parent generation in the next selection
process. In other words, the parents selected into mate(t) may compete with their children
in popt+1 and together take part in the next environmental selection step. This is the case
in (µ + λ)-Evolution Strategies and in steady-state Evolutionary Algorithms, for example.
In extinctive/generational EAs, such as, for example, (µ, λ)-Evolution Strategies, the parent
generation simply is discarded, as discussed in Section 28.1.4.1.
The selection algorithms have a major impact on the performance of Evolutionary Algorithms.
Their behavior has thus been subject to several detailed studies, conducted by, for instance,
Blickle and Thiele [340], Chakraborty et al. [520], Goldberg and Deb [1079], Sastry and
Goldberg [2400], and Zhong et al. [3079], just to name a few.
Usually, fitness assignment processes are carried out before selection and the selection
algorithms base their decisions solely on the fitness v of the individuals. It is possible to
28.4. SELECTION 287
rely on the prevalence or dominance relation, i. e., to write selection(pop, cmpf , mps) instead
of selection(pop, v, mps), or, in case of single-objective optimization, to use the objective
values directly (selection(pop, f, mps)) and thus, to save the costs of the fitness assignment
process. However, this can lead to the same problems that occurred in the first approach of
Pareto-ranking fitness assignment (see Point 1 in Section 28.3.3 on page 275).
As outlined in Paragraph 28.1.2.6.2, Evolutionary Algorithms generally apply survival
selection, sometimes followed by a sexual selection procedure. In this section,
we will focus on the former one.
28.4.1.1 Visualization
In the following sections, we will discuss multiple selection algorithms. In order to ease
understanding, we will visualize the expected number of offspring ES(p) of an indi-
vidual, i. e., the number of times it will take part in reproduction. If we assume uniform
sexual selection (i. e., no sexual selection), this value will be proportional to the individual's
number of occurrences in the mating pool.
1. As sketched in Fig. 28.7.a, the individual pi has fitness i, i. e., v1 (p0 ) = 0, v1 (p1 ) =
1, . . . , v1 (p999 ) = 999.
Figure 28.7: The four example fitness configurations. Fig. 28.7.a: Case 1: v1(pi) = i. Fig. 28.7.b: Case 2: v2(pi) = (i + 1)³. Fig. 28.7.c: Case 3: v3(pi) = 0.0001i if i < 999, 1 otherwise. Fig. 28.7.d: Case 4: v4(pi) = 1 + v3(pi).
Figure 28.8: The expected offspring numbers ES(pi) for the individuals from Example E28.18 if truncation selection is applied, for k ∈ {0.100, 0.125, 0.250, 0.500, 1.000} · ps.

As stated, k is less than or equal to the
mating pool size mps. Under the assumption that no sexual selection takes place, ESp only
depends on k.
Under truncation selection, the diagram will look exactly the same regardless of which of
the four example fitness configurations we use. Here, only the order of the individuals, and not
the numerical relation of their fitness values, is important. If we set k = ps, each individual will
have one offspring on average. If k = mps/2, the top-50% individuals will have two offspring
each and the others none. For k = mps/10, only the best 100 of the 1000 candidate solutions
will reach the mating pool and reproduce 10 times on average.
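Truncation selection can be sketched in a few lines: sort by fitness, keep the best k, and cycle through them until the mating pool is full. The class and method names are ours:

```java
import java.util.Arrays;
import java.util.Comparator;

/** A sketch of truncation selection: the best k of ps individuals fill the mating pool. */
public final class TruncationSelection {

  /** Returns mps indices: the k individuals with the lowest fitness, repeated in turn. */
  public static int[] select(final double[] v, final int k, final int mps) {
    final Integer[] idx = new Integer[v.length];
    for (int i = 0; i < v.length; i++) {
      idx[i] = i;
    }
    Arrays.sort(idx, Comparator.comparingDouble(i -> v[i])); // fitness is minimized
    final int[] mate = new int[mps];
    for (int j = 0; j < mps; j++) {
      mate[j] = idx[j % k]; // cycle through the best k individuals
    }
    return mate;
  }

  public static void main(final String[] args) {
    // ps = 4, k = 2, mps = 4: the two fittest individuals get two offspring each
    final int[] mate = select(new double[] { 3.0, 1.0, 4.0, 2.0 }, 2, 4);
    System.out.println(Arrays.toString(mate)); // [1, 3, 1, 3]
  }
}
```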
There exists a variety of approaches which realize such probability distributions [1079], like
stochastic remainder selection [358, 409] and stochastic universal selection [188, 1133]. The
most commonly known method is the Monte Carlo roulette wheel selection by De Jong
[711], where we imagine the individuals of a population to be placed on a roulette30 wheel
as sketched in Fig. 28.9.a. The size of the area on the wheel standing for a candidate solution
is proportional to its fitness. The wheel is spun, and the individual where it stops is placed
into the mating pool mate. This procedure is repeated until mps individuals have been
selected.
In the context of this book, fitness is subject to minimization. Here, higher fitness values
v(p) indicate unfit candidate solutions p.x whereas lower fitness denotes high utility. Fur-
thermore, the fitness values are normalized into a range of [0, 1], because otherwise, fitness
proportionate selection will handle the set of fitness values {0, 1, 2} in a different way than
{10, 11, 12}. Equation 28.22 defines the framework for such a (normalized) fitness propor-
tionate selection. It is realized in Algorithm 28.9 as a variant with and in Algorithm 28.10
without replacement.
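A minimal sketch of roulette wheel selection for normalized fitness minimization follows. The normalization used here, (max(v) − v(p)) / Σ, reproduces the wheel areas of the minimization example below; the class and method names are ours, and this is not the book's Algorithm 28.9 verbatim:

```java
import java.util.Arrays;
import java.util.Random;

/** A sketch of roulette wheel selection for normalized fitness minimization. */
public final class RouletteWheel {

  /** Wheel area of each individual: (max(v) - v(p)) / sum, so lower fitness gets a larger slice. */
  public static double[] areas(final double[] v) {
    double max = Double.NEGATIVE_INFINITY;
    for (final double x : v) {
      max = Math.max(max, x);
    }
    final double[] a = new double[v.length];
    double sum = 0.0;
    for (int i = 0; i < v.length; i++) {
      a[i] = max - v[i];
      sum += a[i];
    }
    if (sum <= 0.0) { // degenerate case: all fitness values equal
      Arrays.fill(a, 1.0 / v.length);
      return a;
    }
    for (int i = 0; i < v.length; i++) {
      a[i] /= sum;
    }
    return a;
  }

  /** Spin the wheel once: pick an index with probability proportional to its area. */
  public static int spin(final double[] a, final Random rand) {
    double r = rand.nextDouble();
    for (int i = 0; i < a.length; i++) {
      r -= a[i];
      if (r <= 0) {
        return i;
      }
    }
    return a.length - 1; // guard against floating point rounding
  }

  public static void main(final String[] args) {
    // fitness 10, 20, 30, 40 -> wheel slices 1/2, 1/3, 1/6, 0
    System.out.println(Arrays.toString(areas(new double[] { 10, 20, 30, 40 })));
  }
}
```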
Figure 28.9: Examples for fitness proportionate selection with v(p1) = 10, v(p2) = 20, v(p3) = 30, and v(p4) = 40. Fig. 28.9.a: fitness maximization, where the wheel area A(p) of an individual is proportional to its fitness (e.g., A(p4) = 2/5 · A). Fig. 28.9.b: normalized fitness minimization, where lower fitness yields a larger area (e.g., A(p1) = 30/60 · A = 1/2 · A and A(p4) = 0/60 · A = 0).
Figure 28.9 contrasts fitness maximization (Fig. 28.9.a) and normalized minimization (Fig. 28.9.b). In this example we
compute the expected proportion in which each of the four elements p1 ..p4 will occur in the
mating pool mate if the fitness values are v(p1 ) = 10, v(p2 ) = 20, v(p3 ) = 30, and v(p4 ) = 40.
Amongst others, Whitley [2929] points out that even fitness normalization as performed here
cannot overcome the drawbacks of fitness proportional selection methods. In order to clarify
these drawbacks, let us visualize the expected results of roulette wheel selection applied to
the special cases stated in Section 28.4.1.1.
Figure 28.10: The expected results of roulette wheel selection applied to the special cases of Section 28.4.1.1. Fig. 28.10.c: v3(pi) = 0.0001i if i < 999, 1 otherwise. Fig. 28.10.d: v4(pi) = 1 + v3(pi).
Roulette wheel selection has a bad performance compared to other schemes like tournament
selection or ranking selection [1079]. It is mainly included here for the sake of completeness
and because it is easy to understand and suitable for educational purposes.
Tournament selection31 , proposed by Wetzel [2923] and studied by Brindle [409], is one
of the most popular and effective selection schemes. Its features are well-known and have
been analyzed by a variety of researchers such as Blickle and Thiele [339, 340], Lee et al.
[1707], Miller and Goldberg [1898], Sastry and Goldberg [2400], and Oei et al. [2063]. In
tournament selection, k elements are picked from the population pop and compared with
each other in a tournament. The winner of this competition will then enter mating pool
mate. Although being a simple selection strategy, it is very powerful and therefore used in
many practical applications [81, 96, 461, 1880, 2912, 2914].
Figure 28.11: The expected offspring numbers ES(pi) under tournament selection for tournament sizes k ∈ {1, 2, 3, 4, 5, 10}.

The absolute values of the fitness play no role in tournament selection. The only thing that
matters is whether or not the fitness of one individual is higher than the fitness of another
one, not the fitness difference itself. The expected numbers of offspring for the four example
cases from Example E28.18 are therefore the same. Tournament selection thus gets rid of the
problematic dependency on the relative proportions of the fitness values in roulette-wheel
style selection schemes.
Figure 28.11 depicts the expected offspring numbers for different tournament sizes k ∈
{1, 2, 3, 4, 5, 10}. If k = 1, tournament selection degenerates to randomly picking individuals,
and each candidate solution will occur once in the mating pool on average. With rising
k, the selection pressure increases: individuals with good fitness values create more and
more offspring whereas the chance of worse candidate solutions to reproduce decreases.
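Deterministic tournament selection with replacement can be sketched as follows (a sketch in the spirit of Algorithm 28.11, not the book's listing; the class and method names are ours):

```java
import java.util.Random;

/** A sketch of deterministic tournament selection with replacement. */
public final class TournamentSelection {

  /** Fill a mating pool of size mps; each slot is won by the best of k randomly picked individuals. */
  public static int[] select(final double[] v, final int k, final int mps, final Random rand) {
    final int[] mate = new int[mps];
    for (int j = 0; j < mps; j++) {
      int best = rand.nextInt(v.length);
      for (int c = 1; c < k; c++) { // k contestants per tournament
        final int contestant = rand.nextInt(v.length);
        if (v[contestant] < v[best]) { // fitness is minimized
          best = contestant;
        }
      }
      mate[j] = best;
    }
    return mate;
  }

  public static void main(final String[] args) {
    final double[] v = { 5.0, 1.0, 3.0, 4.0, 2.0 };
    // with k = 2, fitter individuals win their tournaments more often
    final int[] mate = select(v, 2, 8, new Random(42));
    for (final int i : mate) {
      System.out.print(i + " ");
    }
  }
}
```

Since individuals are picked with replacement, the same candidate may appear in several tournaments and hence several times in the mating pool.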
Tournament selection with replacement (TSR) is presented in Algorithm 28.11 which has
been implemented in Listing 56.14 on page 756. Tournament selection without replacement
(TSoR) [43, 1707] can be defined in two forms. In the first variant, specified as Algo-
rithm 28.12, a candidate solution cannot compete against itself.
In Algorithm 28.13, on the other hand, an individual may enter the mating pool at most
once.
The algorithms specified here should more precisely be called deterministic tour-
nament selection algorithms, since the winner of the k contestants that take part in each
tournament enters the mating pool. In the non-deterministic variants, this is not necessarily
the case. There, a probability η is defined. The best individual in the tournament is selected
with probability η, the second best with probability η(1 − η), the third best with probability
η(1 − η)², and so on. The ith best individual in a tournament enters the mating pool with
probability η(1 − η)^i. Algorithm 28.14 realizes this behavior for a tournament selection with
replacement. Notice that it becomes equivalent to Algorithm 28.11 on the preceding page
if η is set to 1. Besides the algorithms discussed here, a set of additional tournament-based
selection methods has been introduced by Lee et al. [1707].
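The non-deterministic winner choice can be sketched as a walk along the sorted contestants: the ith best is accepted with probability η, so it wins with probability η(1 − η)^i. In this sketch (names are ours), the last contestant absorbs the remaining probability mass so that the probabilities sum to one:

```java
import java.util.Arrays;
import java.util.Random;

/** A sketch of the winner choice in non-deterministic tournament selection. */
public final class NonDeterministicTournament {

  /** Picks the winner among the contestant indices; the i-th best wins with probability eta*(1-eta)^i. */
  public static int winner(final int[] contestants, final double[] v,
                           final double eta, final Random rand) {
    final Integer[] order = new Integer[contestants.length];
    for (int i = 0; i < contestants.length; i++) {
      order[i] = contestants[i];
    }
    Arrays.sort(order, (a, b) -> Double.compare(v[a], v[b])); // best (lowest fitness) first
    for (int i = 0; i < order.length - 1; i++) {
      if (rand.nextDouble() < eta) {
        return order[i]; // best accepted with probability eta, second best eta*(1-eta), ...
      }
    }
    return order[order.length - 1]; // fall through to the worst contestant
  }

  public static void main(final String[] args) {
    final double[] v = { 9.0, 1.0, 5.0 };
    // with eta = 1, the behavior degenerates to deterministic tournament selection
    System.out.println(winner(new int[] { 0, 1, 2 }, v, 1.0, new Random())); // 1
  }
}
```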
Linear ranking selection [340], introduced by Baker [187] and more thoroughly discussed by
Blickle and Thiele [340], Whitley [2929], and Goldberg and Deb [1079], is another approach
for circumventing the problems of fitness proportionate selection methods. In ranking se-
lection [187, 1133, 2929], the probability of an individual being selected is proportional to
its position (rank) in the sorted list of all individuals in the population. Using the rank
smoothes out larger differences between the fitness values and emphasizes small ones. Generally,
we can consider the conventional ranking selection method as the application of a fitness
assignment process setting the rank as fitness (which can be achieved with Pareto ranking,
for instance) and a subsequent fitness proportional selection.
In its original definition discussed in [340], linear ranking selection was used to maximize
fitness. Here, we will discuss it again for fitness minimization. In linear ranking selection with
replacement, the probability P (select(pi )) that the individual at rank i ∈ 0..ps − 1 is picked in
each selection decision is given in Equation 28.24 below. Notice that each individual always
receives a different rank, and even if two individuals have the same fitness, they must be
assigned to different ranks [340].

P (select(pi )) = (1/ps) · (η− + (η+ − η−) · (ps − i − 1)/(ps − 1))    (28.24)
In Equation 28.24, the best individual (at rank 0) will have the probability η+/ps of being
picked, while the worst individual (rank ps − 1) is chosen with η−/ps chance:

P (select(p0 )) = (1/ps) · (η− + (η+ − η−) · (ps − 0 − 1)/(ps − 1)) = (1/ps) · (η− + η+ − η−) = η+/ps    (28.25)

P (select(pps−1 )) = (1/ps) · (η− + (η+ − η−) · (ps − (ps − 1) − 1)/(ps − 1))    (28.26)
= (1/ps) · (η− + (η+ − η−) · 0/(ps − 1)) = η−/ps    (28.27)
Since each selection decision picks exactly one individual, the probabilities over all ranks
must sum to 1. From this requirement, the relation η^+ = 2 − η^- follows:

\sum_{i=0}^{ps-1} P(select(p_i)) = 1    (28.28)

1 = \sum_{i=0}^{ps-1} \frac{1}{ps}\left[\eta^- + \left(\eta^+ - \eta^-\right)\frac{ps - i - 1}{ps - 1}\right]    (28.29)

= \frac{1}{ps} \sum_{i=0}^{ps-1} \left[\eta^- + \left(\eta^+ - \eta^-\right)\frac{ps - i - 1}{ps - 1}\right]    (28.30)

= \frac{1}{ps} \left[ps\,\eta^- + \sum_{i=0}^{ps-1} \left(\eta^+ - \eta^-\right)\frac{ps - i - 1}{ps - 1}\right]    (28.31)

= \frac{1}{ps} \left[ps\,\eta^- + \left(\eta^+ - \eta^-\right) \sum_{i=0}^{ps-1} \frac{ps - i - 1}{ps - 1}\right]    (28.32)

= \frac{1}{ps} \left[ps\,\eta^- + \frac{\eta^+ - \eta^-}{ps - 1} \sum_{i=0}^{ps-1} \left(ps - i - 1\right)\right]    (28.33)

= \frac{1}{ps} \left[ps\,\eta^- + \frac{\eta^+ - \eta^-}{ps - 1} \left(ps\,(ps - 1) - \sum_{i=0}^{ps-1} i\right)\right]    (28.34)

= \frac{1}{ps} \left[ps\,\eta^- + \frac{\eta^+ - \eta^-}{ps - 1} \left(ps\,(ps - 1) - \frac{1}{2}\,ps\,(ps - 1)\right)\right]    (28.35)

= \frac{1}{ps} \left[ps\,\eta^- + \frac{\eta^+ - \eta^-}{ps - 1}\,\frac{1}{2}\,ps\,(ps - 1)\right]    (28.36)

= \frac{1}{ps} \left[ps\,\eta^- + \frac{1}{2}\,ps\,\left(\eta^+ - \eta^-\right)\right]    (28.37)

= \frac{1}{ps} \left[\frac{1}{2}\,ps\,\eta^- + \frac{1}{2}\,ps\,\eta^+\right]    (28.38)

1 = \frac{1}{2}\,\eta^- + \frac{1}{2}\,\eta^+    (28.39)

1 - \frac{1}{2}\,\eta^- = \frac{1}{2}\,\eta^+    (28.40)

2 - \eta^- = \eta^+    (28.41)
As mentioned, we can simply emulate this scheme by (1) first building a secondary fit-
ness function v′ based on the ranks of the individuals pi according to the original fitness v
and Equation 28.24 and then (2) applying fitness proportionate selection as given in Algo-
rithm 28.9 on page 292. We therefore only need to set v′ (pi ) = 1 − P (select(pi )) (since Al-
gorithm 28.9 on page 292 minimizes fitness).
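As a concrete illustration, the two-step emulation just described can be sketched in Python. This is a hypothetical sketch of ours, not one of the book's listings: it assumes fitness minimization, a population size ps ≥ 2, and uses η^+ = 2 − η^- from Equation 28.41.

```python
import random

def linear_ranking_probabilities(ps, eta_minus):
    """Selection probability per rank (0 = best) after Equation 28.24,
    with eta_plus = 2 - eta_minus so that the probabilities sum to 1."""
    eta_plus = 2.0 - eta_minus
    return [(eta_minus + (eta_plus - eta_minus) * (ps - i - 1) / (ps - 1)) / ps
            for i in range(ps)]

def linear_ranking_select(population, fitness, eta_minus, mps):
    """Fill a mating pool of size mps. Sorting assigns every individual a
    distinct rank, even when fitness values are equal (lower = better)."""
    ranked = sorted(population, key=fitness)
    probs = linear_ranking_probabilities(len(ranked), eta_minus)
    return random.choices(ranked, weights=probs, k=mps)
```

With η^- = 0.5 and ps = 5, the best rank is picked with probability 1.5/5 = 0.3 and the worst with 0.5/5 = 0.1, exactly as Equation 28.25 to Equation 28.27 predict.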
Random selection is a selection method which simply fills the mating pool with randomly
picked individuals from the population. Obviously, using this selection scheme for sur-
vival/environmental selection, i. e., to determine which individuals are allowed to produce
offspring (see Section 28.1.2.6 on page 260), makes little sense. It would turn the Evolu-
tionary Algorithm into some sort of parallel random walk, since it disregards all fitness
information.
However, after an actual mating pool has been filled, it may make sense to use random
selection to choose the parents for the different individuals, especially in cases where there
are many parents such as in Evolution Strategies which are introduced in Chapter 30 on
page 359.
In our experiments, especially in Genetic Programming [2888] and problems with discrete
objective functions [2912, 2914], we often use a very simple mechanism to prevent premature
convergence. Premature convergence, as outlined in Chapter 13, means that the optimization
process gets stuck at a local optimum and loses the ability to investigate other areas in the
search space, in particular the area where the global optimum resides. Our SCP method
is neither a fitness assignment nor a selection algorithm, but rather a filter which is placed
after computing the objectives and before applying a fitness assignment process. Since our
method actively deletes individuals from the population, we placed it into this section.
The idea is simple: the more similar individuals exist in the population, the more likely
the optimization process has converged. It is not clear whether it converged to a global
optimum or to a local one. If it got stuck at a local optimum, the fraction of the population
which resides at that spot should be limited. Then, some individuals necessarily will be
located somewhere else and, if they survive, may find a path away from the local optimum
towards a different area in the search space. In case that the optimizer indeed converged to
the global optimum, such a measure does not cause problems because in the end, because
better points than the global optimum cannot be discovered anyway.
28.4.7.1 Clearing
The first one to apply an explicit limitation of the number of individuals in an area of
the search space was Pétrowski [2167, 2168] whose clearing approach is applied in each
generation and works as specified in Algorithm 28.16 where fitness is subject to minimization.
Basically, clearing divides the population of an EA into several sub-populations according
to a distance measure “dist” applied in the genotypic (G) or phenotypic space (X) in each
generation. The individuals of each sub-population have at most the distance σ to the
fittest individual in this niche. Then, the fitness of all but the k best individuals in such a
sub-population is set to the worst possible value. This effectively prevents a niche from
becoming too crowded. Sareni and Krähenbühl [2391] showed that this method is very promising.
Singh and Deb [2503] suggest a modified clearing approach which shifts individuals that
would be cleared farther away and reevaluates their fitness.
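Pétrowski's clearing procedure can be sketched as follows. This is an illustrative Python re-implementation under our own naming, not the book's Algorithm 28.16; `dist`, `sigma`, and `k` are as described above, and fitness is subject to minimization.

```python
import math

def clearing(population, fitness, dist, sigma, k, worst=math.inf):
    """Clearing (minimization): within each niche of radius sigma around the
    niche's fittest individual, at most k individuals keep their fitness;
    all others are set to the worst possible value.
    Returns a dict mapping individual -> cleared fitness."""
    ranked = sorted(population, key=fitness)   # best individuals first
    cleared = {p: fitness(p) for p in population}
    winners = []                               # fittest individual per niche
    counts = []                                # survivors per niche so far
    for p in ranked:
        placed = False
        for j, w in enumerate(winners):
            if dist(p, w) <= sigma:            # p belongs to w's niche
                if counts[j] < k:
                    counts[j] += 1             # p survives in this niche
                else:
                    cleared[p] = worst         # niche already full: clear p
                placed = True
                break
        if not placed:
            winners.append(p)                  # p founds a new niche
            counts.append(1)
    return cleared
```

For example, with σ = 1 and k = 1 on the one-dimensional individuals {0.0, 0.5, 3.0} (fitness = value), 0.5 falls into the niche of 0.0 and is cleared, while 3.0 founds its own niche and survives.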
28.4.7.2 SCP
We modify this approach in two respects: we measure similarity not as a distance in
the search space G or the problem space X, but in the objective space Y ⊆ R^|f|. All
individuals are compared with each other. If two have exactly the same objective values32 ,
one of them is thrown away with probability33 c ∈ [0, 1] and does not take part in any further
comparisons. This way, we weed out similar individuals without making any assumptions
about G or X and make room in the population and mating pool for a wider diversity of
candidate solutions. For c = 0, this prevention mechanism is turned off; for c = 1, all
remaining individuals will have different objective values.
Although this approach is very simple, the results of our experiments were often
significantly better with this convergence prevention method turned on than without
it [2188, 2888, 2912, 2914]. Additionally, in none of our experiments were the outcomes
influenced negatively by this filter, which makes it even more robust than other methods for
convergence prevention like sharing or variety preserving. Algorithm 28.17 specifies how our
simple mechanism works. It has to be applied after the evaluation of the objective values of
the individuals in the population and before any fitness assignment or selection takes place.
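The filter itself can be sketched in a few lines. This is a hypothetical Python rendering of the idea rather than the book's Algorithm 28.17; `objectives(p)` is assumed to return a comparable objective value or tuple.

```python
import random

def scp_filter(population, objectives, c, rng=random):
    """Simple convergence prevention: whenever an individual has exactly the
    same objective values as one already kept, it is thrown away with
    probability c and takes part in no further comparisons."""
    kept = []
    for p in population:
        duplicate = any(objectives(p) == objectives(q) for q in kept)
        if duplicate and rng.random() < c:
            continue                       # weed out the duplicate
        kept.append(p)
    return kept
```

For c = 1 every duplicate is removed, so all remaining individuals have distinct objective values; for c = 0 the filter is a no-op. Among n identical individuals, the expected number of survivors is (1 − (1 − c)^n)/c, consistent with Equation 28.48 below.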
In Figure 28.12, we sketch the expected number of remaining instances of the individual p
after this pruning process if it occurred n times in the population before Algorithm 28.17
was applied.
From Equation 28.47 it follows that even a population of infinite size which has fully con-
verged to one single value will probably not contain more than 1/c copies of this individual
32 The exactly-the-same criterion makes sense in combinatorial optimization and many Genetic Program-
ming problems but may easily be replaced with a limit imposed on the Euclidean distance in real-valued
optimization problems, for instance.
33 instead of defining a fixed threshold k
Figure 28.12: The expected numbers ES(p) of occurrences for different values of n and c (curves for c = 0.1, 0.2, 0.3, 0.5, and 0.7).
after the simple convergence prevention has been applied. This threshold is also visible in
Figure 28.12.
\lim_{n \to \infty} ES(p) = \lim_{n \to \infty} \frac{1 - (1 - c)^n}{c} = \frac{1 - 0}{c} = \frac{1}{c}    (28.48)
28.4.7.3 Discussion
In Pétrowski’s clearing approach [2167], the maximum number of individuals which can
survive in a niche was a fixed constant k and, if fewer than k individuals resided in a niche,
none of them would be affected. Different from that, in SCP, the probability c specifies
only an expected value for the number of individuals allowed in a niche, which may be
both exceeded and undercut.
Another difference of the approaches arises from the space in which the distance is
computed: Whereas clearing prevents the EA from concentrating too much on a certain area
in the search or problem space, SCP stops it from keeping too many individuals with equal
utility. The former approach works against premature convergence to a certain solution
structure while the latter forces the EA to “keep track” of a trail to candidate solutions
with worse fitness which may later evolve to good individuals with traits different from
the currently exploited ones. Furthermore, since our method does not need to access any
information apart from the objective values, it is independent of the search and problem
space and can, once implemented, be combined with any kind of population-based optimizer.
Which of the two approaches is better has not yet been tested in comparative ex-
periments and is part of our future work. At present, we assume that clearing should be
more suitable in real-valued search or problem spaces, whereas we know from experiments
using our approach only that SCP performs very well in combinatorial problems [2188, 2914]
and Genetic Programming [2888].
28.5 Reproduction
An optimization algorithm uses the information gathered up to step t for creating the can-
didate solutions to be evaluated in step t + 1. There exist different methods to do so. In
Evolutionary Algorithms, the aggregated information corresponds to the population pop
and the archive archive of best individuals, if such an archive is maintained. The search
operations searchOp ∈ Op used in the Evolutionary Algorithm family are called reproduction
operations, inspired by the biological procreation mechanisms34 of mother nature [2303].
There are four basic operations:
1. Creation simply creates a new genotype without any ancestors or heritage.
2. Duplication directly copies an existing genotype without any modification.
3. Mutation in Evolutionary Algorithms corresponds to small, random variations in the
genotype of an individual, exactly like natural mutation35 .
4. Like in sexual reproduction, recombination 36 combines two parental genotypes into a
new genotype including traits from both elders.
In the following, we will discuss these operations in detail and provide general definitions
for them. When an Evolutionary Algorithm starts, no information about the search space
has been gathered yet. Hence, unless seeding is applied (see Paragraph 28.1.2.2.2), we cannot
use existing candidate solutions to derive new ones, and search operations with an arity higher
than zero cannot be applied. Creation is thus used to fill the initial population pop(t = 1).
Definition D28.11 (Creation). The creation operation create is used to produce a new
genotype g ∈ G with a random configuration.
g = create ⇒ g ∈ G (28.49)
For creating the initial population of the size ps, we define the function createPop(ps) in
Algorithm 28.18.
Duplication is just a placeholder for copying an element of the search space, i. e., it is
what occurs when neither mutation nor recombination are applied. It is useful to increase
the share of a given type of individual in a population.
g = duplicate(g) ∀g ∈ G (28.50)
In most EAs, a unary search operation, mutation, is provided. This operation takes one
genotype as argument and modifies it. The way this modification is performed is application-
dependent. It may happen in a randomized or in a deterministic fashion. Together with
duplication, mutation corresponds to the asexual reproduction in nature.
gn = mutate(g) : g ∈ G ⇒ gn ∈ G (28.51)
Like mutation, the recombination is a search operation inspired by natural reproduction. Its
role model is sexual reproduction. The goal of implementing a recombination operation is to
provide the search process with some means to join beneficial features of different candidate
solutions. This may happen in a deterministic or randomized fashion.
Notice that the term recombination is more general than crossover, since it stands for
arbitrary search operations that combine the traits of two individuals. Crossover, however,
is only used if the elements of the search space G are linear representations. Then, it stands
for exchanging parts of these so-called strings.
It is not unusual to chain the application of search operators, i. e., to perform something
like mutate(recombine(g1 , g2 )). The four operators are altogether used to reproduce whole
populations of individuals.
Whenever a new individual p is created, the set P holding the best individuals known so far
may change. It is possible that the new candidate solution must be included in the optimal
set or even prevails some of the phenotypes already contained therein, which then must be
removed.
We define two equivalent approaches in Algorithm 28.20 and Algorithm 28.21 which perform
the necessary operations. Algorithm 28.20 creates a new, empty optimal set and successively
inserts non-prevailed elements whereas Algorithm 28.21 removes all elements which are pre-
vailed by the new individual pnew from the old set Pold .
Especially in the case of Evolutionary Algorithms, not a single new element is created
in each generation but a set P . Let us define the operation “updateOptimalSetN” for this
purpose. This operation can easily be realized by iteratively applying “updateOptimalSet”,
as shown in Algorithm 28.22.
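The update of a non-prevailed set can be sketched as follows. This is an illustrative Python sketch that uses plain Pareto dominance on objective tuples (minimization) as the prevalence relation; the names `update_optimal_set` and `update_optimal_set_n` mirror the operations described above but are our own rendering, not the book's listings.

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_optimal_set(P_old, p_new, key=lambda p: p):
    """Insert p_new into the non-dominated set: discard p_new if it is
    dominated, otherwise remove every element it dominates and add it."""
    f_new = key(p_new)
    if any(dominates(key(p), f_new) for p in P_old):
        return list(P_old)                 # p_new is prevailed: set unchanged
    return [p for p in P_old if not dominates(f_new, key(p))] + [p_new]

def update_optimal_set_n(P_old, P_new, key=lambda p: p):
    """updateOptimalSetN: fold a whole set of new individuals in one by one."""
    for p in P_new:
        P_old = update_optimal_set(P_old, p, key)
    return P_old
```

Applied to the objective tuples (1, 2), (2, 1), (2, 2), (0, 3), the set ends up holding the three mutually non-dominated points, while (2, 2) is rejected because (1, 2) dominates it.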
When the optimization process finishes, “extractPhenotypes” can then be
used to obtain the non-prevailed elements of the problem space and return them to the
user. However, not all optimization methods maintain a non-prevailed set all the time.
When they terminate, they have to extract all optimal elements from the set of individuals pop
currently known.
Algorithm 28.23 defines one possible realization of “extractBest”. By the way, this approach
could also be used for updating an optimal set.
28.6.3 Pruning the Optimal Set
In some optimization problems, there may be very many, if not infinitely many, optimal indi-
viduals. The set X of the best discovered candidate solutions computed by the optimization
algorithm, however, cannot grow infinitely because we only have limited memory. The same
holds for the archives archive of non-dominated individuals maintained by several MOEAs.
Therefore, we need to perform an action called pruning which reduces the size of the set of
non-prevailed individuals to a given limit k [605, 1959, 2645].
There exists a variety of possible pruning operations. Morse [1959] and Taboada and Coit
[2644], for instance, suggest using clustering algorithms38 for this purpose. In principle, any
combination of the fitness assignment and selection schemes discussed in Chapter 28
would also do. It is very important that the loss of generality during a pruning operation is
minimized. Fieldsend et al. [912], for instance, point out that if the extreme elements of
the optimal frontier are lost, the resulting set may only represent a small fraction of the
optima that could have been found without pruning. Instead of working on the set X of best
discovered candidate solutions x, we again base our approach on a set of individuals P and
we define:
Algorithm 28.24 uses clustering to provide the functionality specified in this definition and
thereby realizes the idea of Morse [1959] and Taboada and Coit [2644]. Basically, any given
clustering algorithm could be used as replacement for “cluster” – see ?? on page ?? for more
information on clustering.
Let us discuss the adaptive grid archiving algorithm (AGA) as example for a more sophis-
ticated approach to prune the set of non-dominated / non-prevailed individuals. AGA has
been introduced for the Evolutionary Algorithm PAES39 by Knowles and Corne [1552] and
uses the objective values (computed by the set of objective functions f ) directly. Hence, it
can treat the individuals as |f |-dimensional vectors where each dimension corresponds to one
objective function f ∈ f . This |f |-dimensional objective space Y is divided into a grid with d
divisions in each dimension. Its span in each dimension is defined by the corresponding min-
imum and maximum objective values. The individuals with the minimum/maximum values
are always preserved. This circumvents the phenomenon of narrowing down the optimal
set described by Fieldsend et al. [912] and distinguishes the AGA approach from clustering-
based methods. Hence, it is not possible to define maximum set sizes k which are smaller
than 2|f |. If individuals need to be removed from the set because it became too large, the
AGA approach removes those that reside in regions which are the most crowded.
The original sources outline the algorithm basically with descriptions and defini-
tions. Here, we introduce a more or less trivial specification in Algorithm 28.25 on the next
page and Algorithm 28.26 on page 314.
The function “divideAGA” is internally used to perform the grid division. It transforms
the objective values of each individual to grid coordinates stored in the array lst. Further-
more, “divideAGA” also counts the number of individuals that reside in the same coordi-
nates for each individual and makes it available in cnt. It ensures the preservation of border
individuals by assigning a negative cnt value to them. This basically disables their later
disposal by the pruning algorithm “pruneOptimalSetAGA”, since “pruneOptimalSetAGA”
deletes the individuals from the set Pold that have the largest cnt values first.
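The grid division and pruning can be sketched as follows. This is a simplified Python rendering of the AGA idea, not the book's Algorithm 28.25/28.26: it represents individuals directly by their objective tuples and assumes k ≥ 2|f|, so the protected border individuals alone never exceed the limit.

```python
def prune_optimal_set_aga(P, key, d, k):
    """AGA-style pruning (minimization): map each objective vector onto a
    grid with d divisions per dimension, protect the individuals holding a
    per-dimension minimum or maximum, then repeatedly delete an individual
    from the most crowded grid cell until at most k individuals remain."""
    P = list(P)
    if len(P) <= k:
        return P
    m = len(key(P[0]))                                 # number of objectives
    lo = [min(key(p)[j] for p in P) for j in range(m)]
    hi = [max(key(p)[j] for p in P) for j in range(m)]

    def cell(p):
        """Grid coordinates of p; the span per dimension is [lo[j], hi[j]]."""
        f = key(p)
        return tuple(min(d - 1, int(d * (f[j] - lo[j]) / ((hi[j] - lo[j]) or 1)))
                     for j in range(m))

    # Border individuals (extreme objective values) are always preserved.
    border = {p for p in P for j in range(m) if key(p)[j] in (lo[j], hi[j])}
    while len(P) > k:
        counts = {}
        for p in P:
            c = cell(p)
            counts[c] = counts.get(c, 0) + 1
        victims = [p for p in P if p not in border]
        p_del = max(victims, key=lambda p: counts[cell(p)])
        P.remove(p_del)                                # thin the crowded cell
    return P
```

Keeping the extremes out of the victim list is what distinguishes AGA from naive clustering-based pruning: the frontier never narrows, only the crowded interior is thinned.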
28.7.2 Books
1. Handbook of Evolutionary Computation [171]
2. Variants of Evolutionary Algorithms for Real-World Applications [567]
3. New Ideas in Optimization [638]
4. Multiobjective Problem Solving from Nature – From Concepts to Applications [1554]
5. Natural Intelligence for Scheduling, Planning and Packing Problems [564]
6. Advances in Evolutionary Computing – Theory and Applications [1049]
7. Bio-inspired Algorithms for the Vehicle Routing Problem [2162]
8. Evolutionary Computation 1: Basic Algorithms and Operators [174]
9. Nature-Inspired Algorithms for Optimisation [562]
10. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
11. Hybrid Evolutionary Algorithms [1141]
12. Success in Evolutionary Computation [3014]
13. Computational Intelligence in Telecommunications Networks [2144]
14. Evolutionary Design by Computers [272]
15. Evolutionary Computation in Economics and Finance [549]
16. Parameter Setting in Evolutionary Algorithms [1753]
17. Evolutionary Computation in Dynamic and Uncertain Environments [3019]
18. Applications of Multi-Objective Evolutionary Algorithms [597]
19. Introduction to Evolutionary Computing [865]
20. Nature-Inspired Informatics for Intelligent Applications and Knowledge Discovery: Implications in Business, Science and Engineering [563]
21. Evolutionary Computation [847]
22. Evolutionary Computation [863]
23. Evolutionary Multiobjective Optimization – Theoretical Advances and Applications [8]
24. Evolutionary Computation 2: Advanced Algorithms and Operators [175]
25. Evolutionary Algorithms for Solving Multi-Objective Problems [599]
26. Genetic and Evolutionary Computation for Image Processing and Analysis [466]
27. Evolutionary Computation: The Fossil Record [939]
28. Swarm Intelligence: Collective, Adaptive [1524]
29. Parallel Evolutionary Computations [2015]
30. Evolutionary Computation in Practice [3043]
31. Design by Evolution: Advances in Evolutionary Design [1234]
32. New Learning Paradigms in Soft Computing [1430]
33. Designing Evolutionary Algorithms for Dynamic Environments [1956]
34. How to Solve It: Modern Heuristics [1887]
35. Electromagnetic Optimization by Genetic Algorithms [2250]
36. Advances in Metaheuristics for Hard Optimization [2485]
37. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems [2685]
38. The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music [2320]
39. Evolutionary Optimization [2394]
40. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
41. Multi-Objective Optimization in Computational Intelligence: Theory and Practice [438]
42. Computer Simulation in Genetics [659]
43. Evolutionary Computation: A Unified Approach [715]
44. Genetic Algorithms in Search, Optimization, and Machine Learning [1075]
45. Evolutionary Computation in Data Mining [1048]
46. Introduction to Stochastic Search and Optimization [2561]
47. Advances in Evolutionary Algorithms [1581]
48. Self-Adaptive Heuristics for Evolutionary Computation [1617]
49. Representations for Genetic and Evolutionary Algorithms [2338]
50. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms [167]
51. Search Methodologies – Introductory Tutorials in Optimization and Decision Support Techniques [453]
52. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
53. Evolutionäre Algorithmen [2879]
54. Advances in Evolutionary Algorithms – Theory, Design and Practice [41]
55. Evolutionary Algorithms in Molecular Design [587]
56. Multi-Objective Optimization Using Evolutionary Algorithms [740]
57. Evolutionary Dynamics in Time-Dependent Environments [2941]
58. Evolutionary Computation: Theory and Applications [3025]
59. Evolutionary Computation [2987]
28.7.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Evolutionary Computation published by MIT Press
3. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics published by
IEEE Systems, Man, and Cybernetics Society
4. International Journal of Computational Intelligence and Applications (IJCIA) published by
Imperial College Press Co. and World Scientific Publishing Co.
5. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
6. Natural Computing: An International Journal published by Kluwer Academic Publishers and
Springer Netherlands
7. Journal of Heuristics published by Springer Netherlands
8. Journal of Artificial Evolution and Applications published by Hindawi Publishing Corporation
9. International Journal of Computational Intelligence Research (IJCIR) published by Research
India Publications
10. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) pub-
lished by Idea Group Publishing (Idea Group Inc., IGI Global)
11. IEEE Computational Intelligence Magazine published by IEEE Computational Intelligence
Society
12. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
13. Computational Intelligence published by Wiley Periodicals, Inc.
14. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Tasks T28
75. Implement a general steady-state Evolutionary Algorithm. You may use Listing 56.11
as blueprint and create a new class derived from EABase (given in ?? on page ??) which
provides the steady-state functionality described in Paragraph 28.1.4.2.1 on page 266.
[20 points]
76. Implement a general Evolutionary Algorithm with archive-based elitism. You may use
Listing 56.11 as blueprint and create a new class derived from EABase (given in ?? on
page ??) which provides the archive-based elitism described in Section 28.1.4.4 on
page 267.
[20 points]
77. Implement the linear ranking selection scheme given in Section 28.4.5 on page 300
for your version of Evolutionary Algorithms. You may use Listing 56.14 as blueprint
and implement the interface ISelectionAlgorithm given in Listing 55.13 in a new class.
[20 points]
Chapter 29
Genetic Algorithms
29.1 Introduction
Genetic Algorithms1 (GAs) are a subclass of Evolutionary Algorithms where (a) the
genotypes g of the search space G are strings of primitive elements (usually all of the
same type) such as bits, integers, or real numbers, and (b) search operations such as
mutation and crossover directly modify these genotypes. Because of the simple struc-
ture of the search spaces of Genetic Algorithms, often a non-identity mapping is used
as genotype-phenotype mapping to translate the elements g ∈ G into candidate solutions
x ∈ X [1075, 1220, 1250, 2931].
Genetic Algorithms are the original prototype of Evolutionary Algorithms and adhere to
the description given in Section 28.1.1. They provide search operators which try to closely
copy sexual and asexual reproduction schemes from nature. In such “sexual” search opera-
tions, the genotypes of the two parents will recombine. In asexual reproduction,
mutations are the only changes that occur.
29.1.1 History
The roots of Genetic Algorithms go back to the mid-1950s, when biologists like Barri-
celli [220, 221, 222, 223] and the computer scientist Fraser [941, 988, 989] began to apply
computer-aided simulations in order to gain more insight into genetic processes and the
natural evolution and selection. Bremermann [407] and Bledsoe [323, 324, 325, 326] used
evolutionary approaches based on binary string genomes for solving inequalities, for func-
tion optimization, and for determining the weights in neural networks in the early 1960s.
At the end of that decade, important research on such search spaces was contributed by
Bagley [180] (who introduced the term Genetic Algorithm), Rosenberg [2331], Cavicchio, Jr.
[510, 511], and Frantz [985] – all under the advice of Holland at the University of Michigan.
As a result of Holland’s work [1247–1250], Genetic Algorithms could be formalized as a new
approach for problem solving and finally became widely recognized and popular. Just as
important was the work of De Jong [711] on benchmarking GAs for function optimization
from 1975, preceded by Hollstien’s PhD thesis [1258]. Today, there are many applications
in science, economy, and research and development [564, 2232] that can be tackled with
Genetic Algorithms.
It should further be mentioned that, because of the close relation to biology and since
Genetic Algorithms were originally applied to single-objective optimization, the objective
functions f here are often referred to as fitness functions. This is a historically grown
1 http://en.wikipedia.org/wiki/Genetic_algorithm [accessed 2007-07-03]
misnomer which should not be mixed up with the fitness assignment processes discussed
in Section 28.3 on page 274 and the fitness values v used in the context of this book.
29.2.4 Introns
Besides the functional genes and their alleles, there are also parts of natural genomes which
have no (obvious) function [1072, 2870]. The American biochemist Gilbert [1056] coined the
term intron 2 for such parts. In some representations in Evolutionary Algorithms, similar
structures can be observed, especially in variable-length genomes.
Definition D29.1 (Intron). An intron is a part of a genotype g ∈ G that does not con-
tribute to the phenotype x = gpm(g), i. e., plays no role during the genotype-phenotype
mapping.
2 http://en.wikipedia.org/wiki/Intron [accessed 2007-07-05]
Biological introns have often been thought of as junk DNA or “old code”, i. e., parts of the
genome that were translated to proteins in the evolutionary past but are not used anymore.
Currently though, many researchers assume that introns are maybe not as useless as initially
assumed [660]. Instead, they seem to provide support for efficient splicing, for instance.
Similarly, in some cases, the role of introns in Genetic Algorithms is mysterious as well.
They represent a form of redundancy which may have positive as well as negative effects, as
outlined in Section 16.5 on page 174 and ??.
The creation of random fixed-length string individuals means to create a new tuple of the
structure defined by the genome and initialize it with random values. In reference to Equa-
tion 29.1 on page 326, we could roughly describe this process with Equation 29.5 for fixed-
length genomes where the elementary types Gi from G0 × G1 × .. × Gn−1 = G are finite sets
and their j th elements can be accessed with an index Gi [j]. Although creation is a nullary
reproduction operation, i. e., one which does not receive any genotypes g ∈ G as parameter,
in Equation 29.5 we provide n, the dimension of the search space, as well as the search space
G to the operation in order to keep the definition general. In Algorithm 29.1, we show how
this general equation can be implemented for fixed-length bit-string genomes, i. e., we define
an algorithm createGAFLBin(n) ≡ createGAFL(n, Bn ) which creates bit strings g ∈ G of
length n (G = Bn ).
createGAFL(n, G) ≡ (g[0], g[1], .., g[n − 1]) : g[i] = G_i[⌊randomUni[0, len(G_i))⌋] ∀i ∈ 0..n − 1    (29.5)
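A minimal Python sketch of this creation operation (our own naming; each finite allele set G_i is given as a Python list):

```python
import random

def create_ga_fl(n, G):
    """createGAFL: draw each of the n genes uniformly from its finite
    allele set G[i], mirroring Equation 29.5."""
    return [random.choice(G[i]) for i in range(n)]

def create_ga_fl_bin(n):
    """createGAFLBin(n) = createGAFL(n, B^n): a random bit string of length n."""
    return create_ga_fl(n, [[0, 1]] * n)
```

The same function handles mixed genomes, e.g. `create_ga_fl(3, [[1, 2], [3], [4, 5, 6]])` draws each gene from its own allele set.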
Besides arbitrary vectors over some vector space (be it Bn , Nn0 , or Rn ), Genetic Algorithms
can also work on permutations, as defined in Section 51.6 on page 643. If G is the space of all
possible permutations of the n natural numbers from 0 to n − 1, i. e., G = Π(0..n − 1), we
can create a new genotype g ∈ G using the algorithm Algorithm 29.2 which is implemented
in Listing 56.37 on page 804.
From a more general perspective, we can consider the problem space X to be the space
X = Π(S) of all permutations of the elements si : i ∈ 0..n − 1 of a given set S. Usually,
of course, S = 0..n − 1 and si = i ∀i ∈ 0..n − 1. The genotypes g in the search space
G are vectors of natural numbers from 0 to n − 1. The mapping of these vectors to the
permutations can be done implicitly or explicitly according to the following Equation 29.6.
In other words, the gene at locus i in the genotype g ∈ G defines which element s from the
set S should appear at position i in the permutation x ∈ X = Π(S). More precisely, the
allelic value g [i] of the gene is the index of the element sg[i] ∈ S which will appear at that
position in the phenotype x.
29.3.2 Mutation: Unary Reproduction
Mutation is an important method for preserving the diversity of the candidate solutions by
introducing small, random changes into them.
Fig. 29.2.c: Uniform multi-gene mutation (ii). Fig. 29.2.d: Complete mutation.
In fixed-length string chromosomes, this can be achieved by randomly modifying the value
(allele) of a gene, as illustrated in Fig. 29.2.a. If the genome consists of bit strings, this
single-point mutation is called single-bit flip mutation. In this case, this operation creates a
neighborhood where all points which have a Hamming distance of exactly one are adjacent
(see Definition D4.10 on page 85).
In real-coded genomes, it is common to instead modify all genes, as sketched in Fig. 29.2.d.
This would make little sense in binary-coded search spaces, since inverting all bits
means completely discarding all information of the parent individual.
Changing a single gene in binary-coded chromosomes, i. e., encodings where each gene is a
single bit, means that the value at a given locus is simply inverted as sketched in Fig. 29.3.a.
This operation is referred to as bit-flip mutation.
For real-encoded genomes, modifying an element g[i] can be done by replacing it with a
number drawn from a normal distribution (see Section 53.4.2) with expected value g[i] and
some standard deviation σ, i. e., g[i]_new ∼ N(g[i], σ²), which we illustrate in Fig. 29.3.b and
outline in detail in Section 30.4.1 on page 364.
Fig. 29.3.a: Mutation in binary genomes. Fig. 29.3.b: Mutation in real genomes (the new value g[i]′ is drawn from a normal distribution centered at g[i] with standard deviation σ).
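Both mutation flavors can be sketched in Python. This is an illustrative sketch of ours: single-bit flip for bit strings, gene-wise normal perturbation for real vectors.

```python
import random

def bit_flip_mutation(g, rng=random):
    """Single-point mutation for bit strings: invert one randomly chosen
    bit, yielding an offspring at Hamming distance exactly 1."""
    gn = list(g)
    i = rng.randrange(len(gn))
    gn[i] ^= 1
    return gn

def gaussian_mutation(g, sigma, rng=random):
    """Real-coded mutation: replace every gene g[i] by a normally
    distributed value with expected value g[i] and standard deviation sigma."""
    return [rng.gauss(x, sigma) for x in g]
```

Flipping a single bit realizes exactly the neighborhood described above, where all points at Hamming distance one are adjacent.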
The permutation operation is an alternative mutation method where the alleles of two genes
are exchanged, as sketched in Figure 29.4. This, of course, only makes sense if all genes have
similar data types and the location of a gene has influence on its meaning. Permutation is,
for instance, useful when solving problems that involve finding an optimal sequence of items,
like the Traveling Salesman Problem mentioned in Example E2.2. Here, a genotype g
could encode the sequence in which the cities are visited. Exchanging two alleles then equals
switching two cities in the route.
In Algorithm 29.4 which is implemented in Listing 56.38 on page 806, we provide a sim-
ple unary search operation which exchanges the alleles of two genes, exactly as sketched
in Figure 29.4. If the genotype was a proper permutation obeying the conditions given
in Definition D51.13 on page 643, these conditions will hold for the new offspring as well.
Different from Algorithm 29.4, Algorithm 29.5 (implemented in Listing 56.39 on page 808)
repetitively exchanges elements. The number of elements which are swapped is roughly
exponentially distributed (see Section 53.4.3). This operation also preserves the validity of
the conditions for permutations given in Definition D51.13 on page 643.
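The two operators can be sketched as follows. This is our own simplified code, not the book's Listings 56.38/56.39, and the class and method names are hypothetical:

```java
import java.util.Random;

// Sketch of the permutation mutation operators of Algorithm 29.4 (one
// swap) and Algorithm 29.5 (an exponentially distributed number of swaps).
final class PermutationMutation {

  // Exchange the alleles of two distinct, randomly chosen genes.
  static int[] swapTwo(int[] g, Random rnd) {
    int[] child = g.clone();
    int i = rnd.nextInt(child.length);
    int j = rnd.nextInt(child.length - 1);
    if (j >= i) j++;                       // ensure j != i
    int t = child[i]; child[i] = child[j]; child[j] = t;
    return child;
  }

  // Perform a roughly exponentially distributed number of swaps: after
  // each swap, continue with probability 0.5, so k swaps occur with
  // probability about 2^-k.
  static int[] swapMany(int[] g, Random rnd) {
    int[] child = swapTwo(g, rnd);
    while (rnd.nextBoolean()) child = swapTwo(child, rnd);
    return child;
  }
}
```

Since both methods only exchange existing alleles, a genotype which is a valid permutation stays a valid permutation, as stated above.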
There is an incredibly wide variety of crossover operators for Genetic Algorithms. In his
2006 book [1159], Gwiazda alone mentions 11 standard, 66 binary-coded, and 89 real-coded
crossover operators. Here, we can only discuss a small fraction of them.
Amongst all Evolutionary Algorithms, some of the recombination operations which
probably come closest to the natural paragon can be found in Genetic Algorithms. Fig-
ure 29.5 outlines the most basic methods to recombine two string chromosomes, the so-called
crossover, which is performed by swapping parts of two genotypes. Although these still can-
not nearly resemble the idea of natural crossover, they at least mirror the school book view
of it.
Fig. 29.5.a: Single-point Crossover (SPX, 1-PX). Fig. 29.5.b: Two-point Crossover (TPX, 2-PX). Fig. 29.5.c: Multi-point Crossover (MPX, k-PX). Fig. 29.5.d: Uniform Crossover (UX, each gene taken from either parent with p = 0.5).
When performing single-point crossover (SPX3 , 1-PX, [1159]), both parental chromosomes
are split at a randomly determined crossover point. Subsequently, a new child genotype is
created by appending the second part of the second parent to the first part of the first parent
as illustrated in Fig. 29.5.a. With crossover, it is possible to create two offspring at
once from the two parents; the second offspring is shown in parentheses. Doing so is,
however, not mandatory.
In two-point crossover (TPX, 2-PX, sketched in Fig. 29.5.b), both parental genotypes are
split at two points and a new offspring is created by using parts number one and three from
the first, and the middle part from the second parent chromosome. The second possible
offspring of this operation is again displayed in parentheses.
In Algorithm 29.6, the generalized form of this technique is specified: the k-point crossover
operation (k-PX, [1159]), also called multi-point crossover (MPX). The operator takes the
two parent genotypes gp1 and gp2 from an n-dimensional search space as well as the number
0 < k < n of crossover points as parameters.
By setting the parameter k to 1 or 2, single-point and two-point crossover can be emulated,
and with k = n − 1, the operator resembles uniform crossover (see Section 29.3.4.4). It is
furthermore possible to choose k randomly from 1..n − 1 for each application of the operator,
for example by drawing it from a bounded exponential distribution. This way, many different
recombination operations can be performed with the same algorithm.
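A sketch of this generalized operator, producing one offspring, could look as follows. This is our own simplified rendering of the idea behind Algorithm 29.6, not the book's implementation:

```java
import java.util.Random;

// Sketch of k-point (multi-point) crossover: k distinct crossover points
// are drawn, and the segments between them are copied alternately from
// the two parents.
final class MultiPointCrossover {

  // Create one offspring from gp1 and gp2 using k crossover points,
  // with 0 < k < n where n is the genome length.
  static boolean[] kPointCrossover(boolean[] gp1, boolean[] gp2,
                                   int k, Random rnd) {
    int n = gp1.length;
    // draw k distinct, sorted crossover points in 1..n-1
    int[] points = rnd.ints(1, n).distinct().limit(k).sorted().toArray();
    boolean[] child = new boolean[n];
    boolean fromFirst = true;              // which parent supplies the segment
    int start = 0;
    for (int p = 0; p <= k; p++) {
      int end = (p < k) ? points[p] : n;
      for (int i = start; i < end; i++)
        child[i] = fromFirst ? gp1[i] : gp2[i];
      fromFirst = !fromFirst;
      start = end;
    }
    return child;
  }
}
```

With k = 1 this reduces to single-point crossover, with k = 2 to two-point crossover.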
3 This abbreviation is also used for simplex crossover, see ??.
Uniform crossover (UX [1159, 2563], sketched in Fig. 29.5.d, specified in Algorithm 29.7, and
exemplarily implemented in Listing 56.36 on page 803) follows another scheme: Here, for
each locus in the offspring chromosome, the allelic value is chosen with the same probability
(50%) from the corresponding locus of either the first or the second parent.
Dominant (discrete) recombination, which is introduced in Section 30.3.1 on page 362, is a
generalization of UX to a ρ-ary operator, i. e., to an operator which creates one child from ρ
parents.
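The scheme is simple enough to sketch in a few lines. The following is our own minimal rendering of the idea behind Algorithm 29.7, not the book's Listing 56.36:

```java
import java.util.Random;

// Sketch of uniform crossover (UX): each gene of the child is taken with
// probability 0.5 from the corresponding locus of either parent.
final class UniformCrossover {
  static boolean[] recombine(boolean[] gp1, boolean[] gp2, Random rnd) {
    boolean[] child = new boolean[gp1.length];
    for (int i = 0; i < child.length; i++)
      child[i] = rnd.nextBoolean() ? gp1[i] : gp2[i];
    return child;
  }
}
```

Observe that loci where both parents agree are always preserved in the child, a property that the Genetic Repair discussion later in this chapter builds on.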
The previously discussed crossover methods have primarily been developed for bit string
genomes G = Bn . If they combine two strings g1 and g2 , the offspring will necessarily be
located on one of the corners of the hypercube described by both parents [2099] and sketched
in Figure 29.6. In binary-coded Genetic Algorithms, this indeed makes sense.
For real-valued genomes, we would like to explore the volume inside the hypercube as
well. Therefore, we can use a (weighted) average of the two parent vectors as offspring. One
way to do so is Equation 29.9 for search spaces G ⊆ Rn . In this equation, the elements of
the two parents g1 and g2 are combined by a weighted average to gnew . For pure average
crossover, the weight γ will always be set to γ = 0.5. If the average is randomly weighted,
γ can, for instance, be drawn uniformly from [0, 1].
Figure 29.6: The offspring of the string-based crossover operators (1-PX, 2-PX, MPX, UX, etc.) lie on the corners of the hypercube spanned by the parents (g1[0], g1[1], g1[2]) and (g2[0], g2[1], g2[2]), whereas weighted average crossover can reach the points inside it.
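The (weighted) average recombination just described can be sketched as follows; class and method names are our own, and the random weighting is one plausible choice:

```java
import java.util.Random;

// Sketch of (weighted) average crossover for real-coded genomes, following
// the idea of Equation 29.9: gnew[i] = gamma*g1[i] + (1-gamma)*g2[i].
final class AverageCrossover {

  // Pure average crossover uses gamma = 0.5.
  static double[] average(double[] g1, double[] g2, double gamma) {
    double[] child = new double[g1.length];
    for (int i = 0; i < child.length; i++)
      child[i] = gamma * g1[i] + (1.0 - gamma) * g2[i];
    return child;
  }

  // Randomly weighted variant: gamma drawn uniformly from [0, 1).
  static double[] randomAverage(double[] g1, double[] g2, Random rnd) {
    return average(g1, g2, rnd.nextDouble());
  }
}
```

Unlike the bit-string operators, the offspring here lie inside the hypercube spanned by the parents, not only on its corners.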
According to Banzhaf et al. [209], natural crossover is very restricted and usually exchanges
only genes that express the same functionality and are located at the same positions (loci)
on the chromosomes.
In other words, homologous genetic material is very similar and in nature, only such material
is exchanged in sexual reproduction. Park et al. [2128] propose that a similar approach
should be useful in Genetic Algorithms. As basis for their analysis, they take multi-point
crossover (MPX, k-PX) as introduced here in Algorithm 29.6. In MPX, k > 0 crossover
points are chosen randomly and the subsequences between two points (as well as between the
start of the string and the first crossover point, and between the last crossover point and the
end of the parent strings) are exchanged.
Park et al. [2128] point out that such an exchange is likely to destroy schemas which
extend over crossover points while leaving only those completely embedded in a sequence
subject to exchange intact. They thus make exchange of genetic information conditional:
4 http://en.wikipedia.org/wiki/Homology_(biology) [accessed 2008-06-17]
5 http://en.wikipedia.org/wiki/Homologous_chromosome [accessed 2008-06-17]
their HRO (Homologous Recombination Operator, HX) always chooses the subsequence
from the first parent gp1 if
1. it would be the turn of gp1 anyway (b = true in Algorithm 29.6 on Line 15), if
2. the exchanged sequence would be too short, i. e., its length would be below a given
threshold w, or if
3. the sequences are too different. As the measure of difference, the Hamming
distance “dist ham” divided by the sequence length (e − s) is used. If this value is above
a threshold u, again the sequence from gp1 is used.
Besides the schema preservation which was the reason for the invention of the HRO, another
explanation for the measured better performance of homologous crossover [2128] is the
similarity extraction discussed in Section 29.5.6 on page 346.
The reader should be informed that in Point 3 of the enumeration and in Equation 29.10,
we deviate from the suggestions given in [1159, 2128]. There, combining the two subse-
quences with xor and then counting the number of 1-bits is suggested as similarity measure.
Actually, like the Hamming distance, this is a dissimilarity measure. Therefore, in Equa-
tion 29.10, we permit an information exchange between the parental chromosomes only if
it is below a threshold u, while using the genes from the first parent if it is above or equal
to the threshold. Either the quoted works just contain a small typo regarding this issue, or
the author of this book misunderstood the concept of homology...
In his 1995 paper [1466], Jones introduces an experiment to test whether a crossover operator
is efficient or not. His random crossover or headless chicken crossover [101] (g3 , g4 ) ←−
recombineR (g1 , g2 ) uses an internal crossover operation recombinei and a genotype creation
routine “create”. It receives two parent individuals g1 and g2 and produces two offspring
g3 and g4 . However, it does not perform “real” crossover. Instead, it internally creates two
new (random) genotypes g5 , g6 and uses the internal operator recombinei to recombine each
of them with one of the parents.
(g3 , g4 ) = recombineR (g1 , g2 ) = (recombinei (g1 , create()) , recombinei (g2 , create()))    (29.11)
If the performance of a Genetic Algorithm which uses recombinei as crossover operator is not
better than that of one using recombineR (which internally applies recombinei ), then the
efficiency of recombinei is questionable. In the experiments of Jones [1466] with recombinei =
TPX, two-point crossover outperformed random crossover only in problems with well-defined
building blocks, while not leading to better results in problems without this feature.
Variable-length genomes for Genetic Algorithms were first proposed by Smith in his PhD
thesis [2536]. There, he introduced a new variant of classifier systems (see Chapter 35) with
the goal of evolving programs for playing poker [2240, 2536].
29.4.1 Creation: Nullary Reproduction
Variable-length strings can be created by first randomly drawing a length l > 0 and then
creating a list of that length filled with random elements. l could be drawn from a bounded
exponential distribution.
createGAVL (GT) = createGAFL (l, GT) with l ∼ exp    (29.12)
If the string chromosomes are of variable length, the set of mutation operations introduced
in Section 29.3 can be extended by two additional methods. First, we could insert a couple
of genes with randomly chosen alleles at any given position into a chromosome, as sketched
in Fig. 29.7.a. Second, this operation can be reversed by deleting elements from the string,
as shown in Fig. 29.7.b. It should be noted that both insertion and deletion are also
implicitly performed by crossover. Recombining two identical strings with each other
can, for example, lead to the deletion of genes. The crossover of different strings may turn
out as an insertion of new genes into an individual.
Figure 29.7: Search operators for variable-length strings (additional to those from Sec-
tion 29.3.2 and Section 29.3.3).
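The two additional operators can be sketched as follows; names and the detail of keeping at least one gene are our own choices:

```java
import java.util.Random;

// Sketch of the two extra mutation operators for variable-length strings:
// inserting random genes at a random position, and deleting genes.
final class VariableLengthMutation {

  // Insert `count` genes with random alleles at a random position.
  static boolean[] insert(boolean[] g, int count, Random rnd) {
    int pos = rnd.nextInt(g.length + 1);
    boolean[] child = new boolean[g.length + count];
    System.arraycopy(g, 0, child, 0, pos);
    for (int i = 0; i < count; i++) child[pos + i] = rnd.nextBoolean();
    System.arraycopy(g, pos, child, pos + count, g.length - pos);
    return child;
  }

  // Delete `count` consecutive genes at a random position.
  static boolean[] delete(boolean[] g, int count, Random rnd) {
    count = Math.min(count, g.length - 1);   // keep at least one gene
    int pos = rnd.nextInt(g.length - count + 1);
    boolean[] child = new boolean[g.length - count];
    System.arraycopy(g, 0, child, 0, pos);
    System.arraycopy(g, pos + count, child, pos, g.length - pos - count);
    return child;
  }
}
```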
For variable-length string chromosomes, the same crossover operations are available as for
fixed-length strings except that the strings are no longer necessarily split at the same loci.
The lengths of the new strings resulting from such a cut and splice operation may differ from
the lengths of the parents, as sketched in Figure 29.8. A special case of this type of recom-
bination is the homologous crossover, where only genes at the same loci and with similar
content are exchanged, as discussed in Section 29.3.4.6 on page 338 and in Section 31.4.4.1
on page 407.
Fig. 29.8.a: Single-Point Crossover. Fig. 29.8.b: Two-Point Crossover. Fig. 29.8.c: Multi-Point Crossover.
Assume that the genotypes g in the search space G of Genetic Algorithms are strings of
a fixed-length l over an alphabet6 Σ, i. e., G = Σl . Normally, Σ is the binary alphabet
Σ = {true, false} = {0, 1}. From forma analysis, we know that properties can be
defined on the genotypic or the phenotypic space. For fixed-length string genomes, we can
consider the values at certain loci as properties of a genotype. There are two basic principles
for defining such properties: masks and don’t care symbols.
Definition D29.3 (Mask). For a fixed-length string genome G = Σl , the set of all geno-
typic masks M (l) is the power set7 of the valid loci M (l) = P (0..l − 1) [2879]. Every mask
mi ∈ M (l) defines a property φi and an equivalence relation:
Definition D29.4 (Order). The order order(mi ) of the mask mi is the number of loci
defined by it:
order(mi ) = |mi | (29.14)
Definition D29.5 (Defined Length). The defined length δ (mi ) of a mask mi is the max-
imum distance between two indices in the mask:
δ (mi ) = max {|j − k| : j, k ∈ mi }    (29.15)
A mask contains the indices of all elements in a string that are interesting in terms of the
property it defines.
Definition D29.6 (Schema). A forma defined on a string genome concerning the values
of the characters at specified loci is called a schema [558, 1250].
The second method of specifying such schemata is to use don’t care symbols (wildcards)
to create “blueprints” H of their member individuals. Therefore, we place the don’t care
symbol * at all irrelevant positions and the characterizing values of the property at the
others.
∀j ∈ 0..l − 1 : H (j) = { g [j] if j ∈ mi ; ∗ otherwise }    (29.16)
H (j) ∈ Σ ∪ {∗} ∀j ∈ 0..l − 1 (29.17)
Schemas correspond to masks and thus, definitions like the defined length and order can
easily be transported into their context.
Figure 29.9: The schemata H1 = (0, 0, ∗), H2 = (0, 1, ∗), H3 = (1, 0, ∗), H4 = (1, 1, ∗), and H5 = (1, ∗, ∗) as hyperplanes (edges and faces) of the cube spanned by the three-bit genotypes (0, 0, 0) to (1, 1, 1) along the axes g0 , g1 , g2 .
These formae can be defined with masks as well: Aφ1 =(0,0) ≡ H1 = (0, 0, ∗), Aφ1 =(0,1) ≡
H2 = (0, 1, ∗), Aφ1 =(1,0) ≡ H3 = (1, 0, ∗), and Aφ1 =(1,1) ≡ H4 = (1, 1, ∗). These schemata
mark hyperplanes in the search space G, as illustrated in Figure 29.9 for the three bit
genome.
The Schema Theorem8 was defined by Holland [1250] for Genetic Algorithms which use
fitness-proportionate selection (see Section 28.4.3 on page 290) where fitness is subject to
8 http://en.wikipedia.org/wiki/Holland%27s_Schema_Theorem [accessed 2007-07-29]
maximization [711, 1253]. Let us first consider only selection, in particular fitness propor-
tionate selection for fitness maximization with replacement, and ignore reproduction
operations such as mutation and crossover.
For fitness proportionate selection in this case, Equation 28.18 on page 291 defines the
probability to pick an individual in a single selection choice according to its fitness. We here
print this equation in a slightly adapted form. If the genotype of pH,i is an instance of the
schema H, its probability of being selected with this method equals its fitness divided by
the sum of the fitnesses of all individuals (p′ ) in the population pop(t) at generation t.
P (select(pH,i )) = v(pH,i ) / Σ∀p′ ∈pop(t) v(p′ )    (29.18)
The chance to select any instance of schema H, here denoted as P (select(pH )), is thus
simply the sum of the probabilities of selecting any of the count(H, pop(t)) instances of this
schema in the current population:
P (select(pH )) = Σi=1..count(H,pop(t)) P (select(pH,i )) = [ Σi=1..count(H,pop(t)) v(pH,i ) ] / Σ∀p′ ∈pop(t) v(p′ )    (29.19)
We can rewrite this equation to represent the arithmetic mean vt (H) of the fitness of all
instances of the schema in the population:
vt (H) = [ 1 / count(H, pop(t)) ] · Σi=1..count(H,pop(t)) v(pH,i )    (29.20)

Σi=1..count(H,pop(t)) v(pH,i ) = count(H, pop(t)) · vt (H)    (29.21)
P (select(pH )) = [ count(H, pop(t)) · vt (H) ] / Σj=0..ps−1 v(pj )    (29.22)
We can do a similar thing to compute the mean fitness vt of all the ps individuals p′ in the
population of generation t:
vt = (1/ps) · Σ∀p′ ∈pop(t) v(p′ )    (29.23)

Σ∀p′ ∈pop(t) v(p′ ) = ps · vt    (29.24)
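Substituting Equation 29.24 into the denominator of Equation 29.22 directly relates the selection probability of the schema to the ratio between its mean fitness and the population’s mean fitness (this combined form is spelled out here by us for illustration; it is a straightforward combination of the identities above):

P (select(pH )) = [ count(H, pop(t)) · vt (H) ] / (ps · vt )

The better the instances of H perform relative to the population average, the more likely an instance of H is picked in each selection choice.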
From Equation 29.29 or Equation 29.33, one could assume that short, highly fit schemas
would spread exponentially in the population since their number would multiply with a cer-
tain factor in every generation. This deduction is however a very optimistic assumption and
not generally true, maybe even misleading. If a highly fit schema has many offspring with
good fitness, this will also improve the overall fitness of the population. Hence, the proba-
bilities in Equation 29.29 will shift over time. Generally, the Schema Theorem represents a
lower bound that will only hold for one generation [2931]. Trying to derive predictions for
more than one or two generations using the Schema Theorem as is will lead to deceptive or
wrong results [1129, 1133].
Furthermore, the population of a Genetic Algorithm only represents a sample of limited
size of the search space G. This limits the reproduction of the schemata and also makes
statements about probabilities in general more complicated. Since we only have samples
of the schemata H, we cannot be sure if vt (H) really represents the average fitness of all
the members of the schema. That is why we annotate it with t instead of writing v(H).
Thus, even reproduction operators which preserve the instances of the schema may lead to
a decrease of vt+... (H) over time. It is also possible that parts of the population already have
converged and other members of a schema will not be explored anymore, so we do not get
further information about its real utility.
Additionally, we cannot know if it is really good if one specific schema spreads fast, even
if it is very fit. Remember that we have already discussed the exploration versus exploitation
topic and the importance of diversity in Section 13.3.1 on page 155.
Another issue is that we implicitly assume that most schemata are compatible and can
be combined, i. e., that there is little interaction between different genes. This is also not
generally valid: Epistatic effects, for instance, can lead to schema incompatibilities. The
expressiveness of masks and blueprints is also limited, and it can be argued that there are
properties which we cannot specify with them. Take, for example, the set D3 of numbers
divisible by three, D3 = {3, 6, 9, 12, . . . }. Representing them as binary strings will lead
to D3 = {0011, 0110, 1001, 1100, . . . } if we have a bit-string genome of length 4.
Obviously, we cannot capture these genotypes in a schema using the discussed approach. They
may, however, be gathered in a forma. The Schema Theorem, however, cannot hold for such
a forma since the probability p of destruction may be different from instance to instance.
Finally, we mentioned that the Schema Theorem gives rise to implicit parallelism since it
holds for any possible schema at the same time. However, it should be noted that (a) only a
small subset of schemas is usually actually instantiated within the population and (b) only a
few samples are usually available per schema (if any). Generally, this assumption (like many
others) about the Schema Theorem only makes sense if the population size is assumed to
be infinite and the sampling of the population and the schemata were truly uniform.
1. When a Genetic Algorithm solves a problem, there exist some low-order, low-defining
length schemata with above-average fitness (the so-called building blocks).
2. These schemata are combined step by step by the Genetic Algorithm in order to form
larger and better strings. By using the building blocks instead of testing any possible
binary configuration, Genetic Algorithms efficiently decrease the complexity of the
problem. [1075]
Although it seems as if the Building Block Hypothesis is supported by the Schema Theorem,
this cannot be verified easily. Experiments that were originally intended to prove this theory
often did not work out as planned [1913]. Also consider the criticisms of the Schema Theorem
mentioned in the previous section, which turn higher abstractions like the Building Block
Hypothesis into a touchy subject. In general, there exists much criticism of the Building
Block Hypothesis and, although it is a very nice model, it cannot yet be considered as
sufficiently proven.
The GR Hypothesis by Beyer [298], based on the Genetic Repair (GR) Effect [296], takes
a counter position to the Building Block Hypothesis [302]. It states that it is not the different
schemas of the parent individuals that are combined to form the child; instead, the
similarities are inherited.
Let us take a look at uniform crossover (UX) for bit-string based search spaces, as
introduced in Section 29.3.4.4 on page 337. Here, the value for each locus of the child
individual gc is chosen randomly from the values at the same locus in one of the two parents
gp,a and gp,b . This means that if the parents have the same value at locus i, then the
child will inherit this value too: gp,a [i] = gp,b [i] ⇒ gc [i] = gp,a [i] = gp,b [i]. If the two parents
disagree at one locus j, however, the value at that locus in gc is either 1 or 0 – both with
a probability of 50%. In other words, uniform crossover preserves the positions where the
parents are equal and randomly fills up the remaining genes.
If a sequence of genes carries the same alleles in both parents, there are three possible
reasons: (a) it emerged randomly, which has a probability of 2^(−l) for a length-l sequence
and is thus very unlikely, (b) both parents inherited the sequence, or (c) a combination
of inherited sequences of length m less than l and a random coincidence of l − m equal bits
(resulting, for example, from mutation). The latter two cases, the inheritance of the sequence
or subsequence, imply that the sequence (or parts thereof) was previously selected, i. e.,
seemingly has good fitness. Uniform crossover preserves these components. The remaining
bits are uninteresting, maybe even harmful, so they can be replaced. In UX, this is done
with maximum entropy, in a maximally random fashion [302].
Hence, the idea of Genetic Repair is that useful gene sequences which are built step by
step by mutation (or the variation resulting from crossover) are aggregated. The useless
sequences, however, are dampened or destroyed by the same operator. This theory has been
explained for bit-string based as well as real-valued vector based search spaces alike [298,
302].
Figure 29.10: Two linked genes and their destruction probability under single-point
crossover.
29.6.1 Representation
The idea behind the genomes used in messy GAs goes back to the work of Bagley
[180] from 1967, who first introduced a representation where the ordering of the
genes was not fixed. Instead, each gene is a tuple (φ, γ) of its position (lo-
cus) φ and its value (allele) γ. For instance, the bit string 000111 can
be represented as g1 = ((0, 0) , (1, 0) , (2, 0) , (3, 1) , (4, 1) , (5, 1)) but as well as g2 =
((5, 1) , (1, 0) , (3, 1) , (2, 0) , (0, 0) , (4, 1)), where both genotypes map to the same phenotype,
i. e., gpm(g1 ) = gpm(g2 ).
(0,0) (1,0) (2,0) (3,0) (4,1) (5,1) (6,1) (7,1) first inversion
(0,0) (1,0) (4,1) (3,0) (2,0) (5,1) (6,1) (7,1) second inversion
Figure 29.11: An example for two subsequent applications of the inversion operation [1196].
The cut operator splits a genotype g into two with the probability pc = (len(g) − 1) pK , where
pK is a bitwise probability and len(g) the length of the genotype [1551]. With pK = 0.1, the
genotype g1 = ((0, 0) , (1, 0) , (2, 0) , (3, 1) , (4, 1) , (5, 1)) has a cut probability of pc = (6 − 1)∗0.1 = 0.5.
A cut at position 4 would lead to g3 = ((0, 0) , (1, 0) , (2, 0) , (3, 1)) and g4 = ((4, 1) , (5, 1)).
The splice operator joins two genotypes with a predefined probability ps by simply attach-
ing one to the other [1551]. Splicing g2 = ((5, 1) , (1, 0) , (3, 1) , (2, 0) , (0, 0) , (4, 1)) and g4 =
((4, 1) , (5, 1)), for instance, leads to g5 = ((5, 1) , (1, 0) , (3, 1) , (2, 0) , (0, 0) , (4, 1) , (4, 1) , (5, 1)).
In summary, the application of two cut operations and a subsequent splice operation to two
genotypes has roughly the same effect as a single-point crossover operator on variable-length
string chromosomes (Section 29.4.3).
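The two operators can be sketched as follows, representing each (locus, allele) gene as a two-element int array; class and method names are our own:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the messy-GA cut and splice operators. A gene is an int[2]
// tuple {locus, allele}.
final class CutAndSplice {

  // Cut g at a random position with probability pc = (len(g)-1)*pK;
  // returns either {g} (no cut happened) or the two resulting pieces.
  static int[][][] cut(int[][] g, double pK, Random rnd) {
    double pc = (g.length - 1) * pK;
    if (g.length < 2 || rnd.nextDouble() >= pc)
      return new int[][][] { g };
    int pos = 1 + rnd.nextInt(g.length - 1);   // cut position in 1..len-1
    return new int[][][] {
      Arrays.copyOfRange(g, 0, pos),
      Arrays.copyOfRange(g, pos, g.length)
    };
  }

  // Splice: join two genotypes by simply attaching one to the other.
  static int[][] splice(int[][] a, int[][] b) {
    int[][] joined = Arrays.copyOf(a, a.length + b.length);
    System.arraycopy(b, 0, joined, a.length, b.length);
    return joined;
  }
}
```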
The genotypes in messy GAs have a variable length and the cut and splice operators
can lead to genotypes being over or underspecified. If we assume a three bit genome,
the genotype g6 = ((2, 0) , (0, 0) , (2, 1) , (1, 0)) is overspecified since it contains two (in this
example, different) alleles for the third gene (at locus 2). g7 = ((2, 0) , (0, 0)), in turn, is
underspecified since it does not contain any value for the gene in the middle (at locus 1).
Dealing with overspecification is rather simple [847, 1551]: The genes are processed from
left to right during the genotype-phenotype mapping, and the first allele found for a specific
locus wins. In other words, g6 from above codes for 000 and the second value for locus 2
is discarded. The loci left open during the interpretation of underspecified genes are filled
with values from a template string [1551]. If this string was 000, g7 would code for 000, too.
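Both rules can be sketched in a short genotype-phenotype mapping; names are ours:

```java
// Sketch of the messy-GA genotype-phenotype mapping: genes are
// {locus, allele} tuples, the first allele found for a locus wins
// (overspecification), and loci left open are filled from a template
// string (underspecification).
final class MessyDecode {
  static int[] decode(int[][] genotype, int[] template) {
    int[] phenotype = template.clone();
    boolean[] defined = new boolean[template.length];
    for (int[] gene : genotype) {
      int locus = gene[0];
      if (!defined[locus]) {        // first allele for this locus wins
        phenotype[locus] = gene[1];
        defined[locus] = true;
      }
    }
    return phenotype;
  }
}
```

With the template 000, both g6 and g7 from the text decode to 000, matching the examples above.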
In a simple Genetic Algorithm, building blocks are identified and recombined simultaneously,
which leads to a race between recombination and selection [1196]. In the messy GA [1083,
1084], this race is avoided by separating the evolutionary process into two stages:
1. In the primordial phase, building blocks are identified. In the original conception of
the messy GA, all possible building blocks of a particular order k are generated. Via
selection, the best ones are identified and spread in the population.
2. These building blocks are recombined with the cut and splice operators in the subse-
quent juxtapositional phase.
Due to its complexity, the original mGA needed a bootstrap phase in order to identify the
order-k building blocks, which required identifying the order-(k − 1) blocks first. This boot-
strapping was done by applying the primordial and juxtapositional phases for all orders from
1 to k − 1. This process was later improved by using a probabilistic complete initialization
algorithm [1087] instead.
In Section 29.3.1.2 and 29.3.3, we introduced operators for processing strings which directly
represent permutations of the elements from a given set S = {s0 , s1 , .., sn−1 }. This direct
representation is achieved by using a search space G which contains the n-dimensional vectors
over the natural numbers from 0 to n − 1, i. e., G = [0..n − 1]n . The gene g [i] at locus i of
the genotype g ∈ G hence has a value j = g [i] ∈ 0..n − 1 and specifies that element sj should be
at position i in the permutation, as defined in the genotype-phenotype mapping introduced
in Equation 29.6 on page 329.
It is quite clear that every number in 0..n − 1 must occur exactly once in each genotype.
Performing a simple single-point crossover on two different genotypes is hence not feasi-
ble. The two genotypes g1 = (0, 1, 2, 4, 3) and g2 = (3, 1, 2, 4, 0) could lead to the offspring
(0, 1, 2, 4, 0) or (3, 1, 2, 4, 3), both of which contain one number twice while missing an-
other one. We can therefore apply neither Algorithm 29.6 nor Algorithm 29.7 to such genotypes.
The Random Keys encoding defined by Bean [236] is a simple idea of how to circumvent
this problem. It introduces a genotype-phenotype mapping which translates n-dimensional
vectors of real numbers (G ⊆ Rn ) to permutations of length n. Each gene g [i] of a genotype
g with i ∈ 0..n − 1 is thus a real number or, most commonly, an element of the subset [0, 1]
of the real numbers. The gene at locus i stands for the element si . The permutation of these
elements, i. e., the position at which the element si appears in the phenotype, is defined by
the position of the gene g [i] in the sorted list of all genes. In Algorithm 29.8, we present
an algorithm which performs the Random Keys genotype-phenotype mapping. An example
implementation of the algorithm in Java is given in Listing 56.52 on page 834.
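The mapping boils down to sorting the element indices by their keys. The following is our own simplified sketch of the idea behind Algorithm 29.8, not the book's Listing 56.52:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the Random Keys genotype-phenotype mapping: sorting the
// indices 0..n-1 by their real-valued keys yields a permutation, so
// ANY real vector decodes to a valid permutation.
final class RandomKeys {
  static int[] toPermutation(double[] keys) {
    Integer[] idx = new Integer[keys.length];
    for (int i = 0; i < idx.length; i++) idx[i] = i;
    // element with the p-th smallest key goes to position p
    Arrays.sort(idx, Comparator.comparingDouble(i -> keys[i]));
    int[] perm = new int[keys.length];
    for (int i = 0; i < perm.length; i++) perm[i] = idx[i];
    return perm;
  }
}
```

Because every real vector decodes to a valid permutation, standard real-vector operators such as uniform or average crossover can be applied without ever producing an invalid phenotype.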
29.8.2 Books
29.8.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T29
78. Perform the same experiments as in Task 64 on page 238 (and 68) with a Genetic
Algorithm. Use a bit-string based search space and utilize an appropriate genotype-
phenotype mapping.
[40 points]
79. Perform the same experiments as in Task 64 on page 238 (and 68 and 78) with a
Genetic Algorithm. Use a real-vector based search space. Implement an appropriate
crossover operator.
[40 points]
80. Compare your results from Task 64 on page 238, 68, and 78. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for all three tasks? Try to give
reasons for your results.
[10 points]
81. Perform the same experiments as in Task 67 on page 238 (and 70) with a Genetic
Algorithm. If you use a permutation-based search space, you may re-use the nullary
and unary search operations we already have developed for this task. Yet, you need
to define a recombination operation which does not violate the permutation-character
of the genotypes.
[40 points]
82. Compare your results from Task 67 on page 238, 70 and 81. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for both tasks? Try to give reasons
for your results. Put your explanations into the context of the discussions provided
in Example E17.4 on page 181.
[10 points]
83. Try solving the bin packing problem again, this time by reproducing the experiments
given by Khuri, Schütz, and Heitkötter in [1534]. Compare the results with the results
listed in that section and obtained by you in Task 70. Which method is better?
Provide reasons for your observation.
[50 points]
84. In Paragraph 56.2.1.2.1 on page 801, we provide a Java implementation of some of the
standard operators of Genetic Algorithms based on arrays of boolean. Obviously, using
boolean[] is a bit inefficient, since it wastes memory. Each boolean will take up at least
1 byte, probably even 4, 8, or maybe even 16 bytes. Since it only represents a single bit,
i. e., one eighth of a byte, it would be more economical to represent bit strings as arrays
of int or long and to access specific bits via bit arithmetic (via, e.g., &, |, ˆ, <<, or >>>).
Please re-implement the operators from Paragraph 56.2.1.2.1 on page 801 in a more
efficient way, by either using byte[], int[], or long[], or a specific class such as
java.util.BitSet for storing the bit strings instead of boolean[].
[30 points]
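As a starting point for this task, a bit string packed into a long[] could be sketched as follows (one possible approach among several; the class and method names are our own):

```java
// Sketch of a packed bit string: 64 bits per long instead of one
// boolean per (at least) one byte; access via bit arithmetic.
final class BitString {
  final long[] words;
  final int length;

  BitString(int length) {
    this.length = length;
    this.words = new long[(length + 63) >>> 6];   // ceil(length / 64)
  }

  // i >>> 6 selects the word, i & 63 the bit within that word.
  boolean get(int i) {
    return (words[i >>> 6] & (1L << (i & 63))) != 0L;
  }

  void set(int i, boolean value) {
    if (value) words[i >>> 6] |=  (1L << (i & 63));
    else       words[i >>> 6] &= ~(1L << (i & 63));
  }

  void flip(int i) {                               // bit-flip mutation
    words[i >>> 6] ^= (1L << (i & 63));
  }
}
```

Crossover operators can then copy whole words with System.arraycopy and only need bit masks at the crossover points.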
Chapter 30
Evolution Strategies
30.1 Introduction
Evolution Strategies1 (ES), introduced by Rechenberg [2277, 2278, 2279] and Schwefel [2433,
2434, 2435], are a heuristic optimization technique based on the ideas of adaptation and
evolution, a special form of Evolutionary Algorithms [170, 300, 302, 1220, 2277–2279, 2437].
The first Evolution Strategies were quite similar to Hill Climbing methods. They did not
yet focus on evolving real-valued, numerical vectors and only featured two rules for finding
good solutions: (a) slightly modify one candidate solution and (b) keep it only if it
is at least as good as its parent. Since there was only one parent and one offspring in this
form of Evolution Strategy, this early method more or less resembles the (1 + 1) population
discussed in Paragraph 28.1.4.3.3 on page 266. From there on, (µ + λ) and (µ, λ) Evolution
Strategies were developed – a step into a research direction which was not very welcome
back in the 1960s [302].
The search space of today’s Evolution Strategies normally are vectors from the Rn , but
bit strings or integer strings are common as well [302]. Here, we will focus mainly on the
former, the real-vector based search spaces. In this case, the problem space is often identical
with the search space, i. e., X = G ⊆ Rn . ESs normally follow one of the population handling
methods classified in the µ-λ notation given in Section 28.1.4.3.
Mutation and selection are the primary reproduction operators, and recombination is
used less often in Evolution Strategies. Most often, normally distributed random numbers
are used for mutation. The parameter of the mutation is the standard deviation of these
random numbers. The Evolution Strategy may either
1. maintain a single standard deviation parameter σ and use identical normal dis-
tributions for generating the random numbers added to each element of the solution
vectors,
2. use a separate standard deviation (from a standard deviation vector σ) for each element
of the genotypes, i. e., create random numbers from different normal distributions
for mutations in order to facilitate different strengths and resolutions of the decision
variables (see, for instance, Example E28.11), or
3. use a complete covariance matrix C for creating random vectors distributed in a hy-
perellipse and thus also taking into account binary interactions between the elements
of the solution vectors.
The standard deviations are governed by self-adaptation [1185, 1617, 1878] and may result
from a stochastic analysis of the elements in the population [1182–1184, 1436]. They are often
1 http://en.wikipedia.org/wiki/Evolution_strategy [accessed 2007-07-03]
treated as endogenous strategy parameters which can directly be stored in the individual
records p and evolve along with them [302]. The idea of self-adaptation can be justified
with the following thoughts. Assume we have a Hill Climbing algorithm which uses a fixed
mutation step size σ to modify the (single) genotype it currently has under investigation.
1. If a large value is used for σ, the Hill Climber will initially quickly find interesting
areas in the search space. However, due to its large step size, it will often jump in and
out of smaller highly fit areas without being able to investigate them thoroughly.
2. If a small value is used for σ, the Hill Climber will progress very slowly. However, once
it finds the basin of attraction of an optimum, it will be able to trace that optimum
and deliver solutions.
It thus makes sense to adapt the step width of the mutation operation. Starting with a larger
step size, we may reduce it while approaching the global optimum, as we will explain in
Section 30.5. The parameters µ and λ, on the other hand, are called exogenous parameters
and are usually kept constant during the evolutionary optimization process [302].
We can describe the idea of endogenous parameters for governing the search step width
as follows. Evolution Strategies are Evolutionary Algorithms which usually add the step
length vector as endogenous information to the individuals. This vector undergoes selec-
tion, recombination, and mutation together with the individual. The idea is that the EA
selects individuals which are good. By doing so, it also selects the step widths that led to
their creation. These step widths can be considered good, since they were responsible for
creating good individuals. Bad step widths, on the other hand, will lead to the creation of inferior
candidate solutions and thus, perish together with them during selection. In other words,
the adaptation of the step length of the search operations is governed by the same selection
and reproduction mechanisms which are applied in the Evolutionary Algorithm to find good
solutions. Since we expect these mechanisms to work for this purpose, it makes sense to
also assume that they can do well to govern the adaptation of the search steps. We can
further expect that initially, larger steps are favored by the evolution, which help the algo-
rithm to explore wide areas in the search space. Later, when the basin(s) of attraction of
the optimum/optima are reached, only small search steps can lead to improvements in the
objective values of the candidate solutions. Hence, individuals created by search operations
which perform small modifications will be selected. These individuals will hold endogenous
information which leads to small step sizes. Mutation and recombination will lead to the
construction of endogenous information which favors smaller and smaller search steps – the
optimization process converges.
30.2 Algorithm
In Algorithm 30.1 (and in the corresponding Java implementation in Listing 56.17 on
page 760) we present the basic single-objective Evolution Strategy algorithm as specified
by Beyer and Schwefel [302]. The algorithm can facilitate the two basic strategies (µ/ρ, λ)
(Paragraph 28.1.4.3.6) and (µ/ρ + λ) (Paragraph 28.1.4.3.7). These general strategies also
encompass (µ, λ) ≡ (µ/1, λ) and (µ + λ) ≡ (µ/1 + λ) population treatment.
The algorithm starts with the creation of a population of µ random individuals in Line 3,
maybe by using a procedure similar to the Java code given in Listing 56.24. In each
generation, first a genotype-phenotype mapping may be performed (Line 6) and then the
objective function is evaluated.
In every generation (except the first one with t = 1), there are λ individuals in the
population pop. We therefore have to select µ from them into the mating pool mate. The
way in which this selection is performed depends on the type of the Evolution Strategy.
In the (µ/ρ, λ) strategy, only the individuals of generation t in pop(t) are considered (Line 13). If
the Evolution Strategy is a (µ/ρ + λ) strategy, then the previously selected individuals from the
mating pool mate(t − 1) are considered as well.
30.3 Recombination
The standard recombination operators in Evolution Strategy basically resemble generalizations of uniform crossover and average crossover to ρ ≥ 1 parents for (µ/ρ +, λ) strategies.
If ρ > 2, we speak of multi-recombination.
Figure 30.1: Sampling Results from Normal Distributions. [Figure omitted: point clouds over the axes G0 and G1 , sampled around the selected parents, with part (B) using a single standard deviation, part (C) a standard deviation vector, and part (D) a full covariance matrix C.]
The simplest way to mutate a vector g′ ∈ Rn with a normal distribution is given in Algorithm 30.4 and implemented in Listing 56.25 on page 786. Here, the endogenous information
is a scalar number w = σ. For each dimension i ∈ 0..n − 1, a random value distributed
according to N(0, σ 2 ) is drawn and added to g′ [i].
As illustrated in part (B) of Figure 30.1, this operation creates an unbiased distribution
of possible genotypes g. The mutants are symmetrically distributed around g′ . The dis-
tribution is isotropic, i. e., the surfaces of constant density form concentric spheres around
g′ .
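This isotropic mutation can be sketched as follows in Java (our own minimal illustration, not the book’s Listing 56.25; the class and method names are ours):

```java
import java.util.Random;

/** Sketch of isotropic mutation: add N(0, sigma^2) noise to every gene. */
public class IsotropicMutation {

  /** Returns a mutated copy of g: each element gets independent Gaussian noise. */
  public static double[] mutate(double[] g, double sigma, Random rnd) {
    double[] child = g.clone();
    for (int i = 0; i < child.length; i++) {
      // nextGaussian() ~ N(0,1); scaling by sigma yields N(0, sigma^2)
      child[i] += sigma * rnd.nextGaussian();
    }
    return child;
  }

  public static void main(String[] args) {
    double[] parent = {1.0, 2.0, 3.0};
    double[] child = mutate(parent, 0.1, new Random(42));
    System.out.println(java.util.Arrays.toString(child));
  }
}
```

Since the same σ is used for every dimension, the resulting offspring cloud is exactly the isotropic distribution sketched in part (B) of Figure 30.1.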
The advantage of this mutation operator is that it only needs a single, scalar endogenous
parameter, w = σ. The drawback is that the resulting genotype distribution is always
isotropic and cannot reflect different scales along the axes. In Example E28.11 on page 271
we pointed out that the influence of the genes of a genotype on the objective values, i. e.,
the influence of the dimensions of the search space, may differ largely. Furthermore, it may
even change depending on which area of the search space is currently investigated by the
optimization process.
Figure 30.1 is the result of an experiment where we illustrate the distribution of a sampled
point cloud. For the normal distribution in part (B), we chose a standard deviation which
equals the mean of the standard deviations along both axes G0 and G1 .
By using n different standard deviations, i. e., a vector σ, we can scale the cloud of points
along the axes of the coordinate system as can be seen in part (C) of Figure 30.1. Algo-
rithm 30.5 illustrates how this can be done and Listing 56.25 on page 786 again provides a
suitable implementation of the idea. Instead of sampling all dimensions from the same normal distribution N(0, σ 2 ), we now use n distinct univariate normal distributions N(0, σ [i]2 )
with i ∈ 0..n − 1. For Figure 30.1 part (C), we sampled points with the standard deviations
of the parent points along the axes.
The endogenous parameter w = σ now has n dimensions as well. Even though the
problem of axis scale has been resolved, we still cannot sample an arbitrarily rotated cloud
of points – the main axes of the iso-density ellipsoids of the distribution are still always
parallel to the main axes of the coordinate system.
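Assuming the same conventions as above, the axis-parallel variant with a standard deviation vector σ might look like this (again our own sketch, not the book’s listing):

```java
import java.util.Random;

/** Sketch of axis-parallel mutation: gene i gets noise from N(0, sigma[i]^2). */
public class VectorSigmaMutation {

  public static double[] mutate(double[] g, double[] sigma, Random rnd) {
    if (g.length != sigma.length) {
      throw new IllegalArgumentException("one sigma per dimension required");
    }
    double[] child = g.clone();
    for (int i = 0; i < child.length; i++) {
      child[i] += sigma[i] * rnd.nextGaussian(); // per-dimension standard deviation
    }
    return child;
  }

  public static void main(String[] args) {
    double[] parent = {0.0, 0.0};
    // large spread along axis 0, small spread along axis 1
    double[] child = mutate(parent, new double[]{10.0, 0.01}, new Random(7));
    System.out.println(java.util.Arrays.toString(child));
  }
}
```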
Algorithm 30.6 shows how to mutate vectors g′ by adding random numbers which are
correlated according to a covariance matrix C. We then can set, for example, the rotation and
scaling matrix M to be the matrix of all the normalized and scaled (n dimensional) Eigenvectors
Λi (C) : i ∈ 1..n of the covariance matrix C. These vectors are normalized by dividing them by
their length, i. e., by ||Λi (C)||2 . Then, they are scaled with the square roots of the corresponding
Eigenvalues λi (C). By multiplying a vector of n independent standard normal distributions with
the matrix of normalized Eigenvectors, it is rotated to be aligned to the covariance matrix C.
By scaling the Eigenvectors with the square roots √λi (C) of the corresponding Eigenvalues, we
stretch the standard normal distributions from spheres to ellipsoids with the same shape as
described by the covariance matrix.

M = ( (√λi (C) / ||Λi (C)||2 ) · Λi (C) for i from 1 to n )   (30.2)

If C is a diagonal matrix, M[i, j] = √C[i, j] ∀i, j ∈ 0..n − 1 can be used. Otherwise,
computing the Eigenvectors and Eigenvalues is only easy for dimensions n ∈ {1, 2, 3} but
becomes computationally intense for higher dimensions [1097].
If this last mutation operator is indeed applied with a matrix M directly derived from the
covariance matrix C estimated from the parent individuals, the Evolution Strategy basically
becomes a real-valued Estimation of Distribution Algorithm as introduced in Section 34.5
on page 442.
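For the two-dimensional case, where the Eigendecomposition needed for Equation 30.2 has a simple closed form, building M from a symmetric covariance matrix C and sampling a correlated mutation vector can be sketched as follows (our own illustration; it assumes C is symmetric and positive definite):

```java
import java.util.Random;

/** Sketch: build M from a symmetric 2x2 covariance matrix C (Equation 30.2) and sample M*z. */
public class CorrelatedMutation2D {

  /** Columns of M are the unit Eigenvectors of C scaled by the square roots of the Eigenvalues. */
  public static double[][] buildM(double a, double b, double c) {
    if (b == 0.0) { // C already diagonal: M[i][j] = sqrt(C[i][j])
      return new double[][]{{Math.sqrt(a), 0.0}, {0.0, Math.sqrt(c)}};
    }
    double tr = a + c, det = a * c - b * b;
    double disc = Math.sqrt(tr * tr - 4.0 * det);
    double[] ls = {(tr + disc) / 2.0, (tr - disc) / 2.0}; // the two Eigenvalues
    double[][] m = new double[2][2];
    for (int k = 0; k < 2; k++) {
      // Eigenvector for lambda is (b, lambda - a); normalize, then scale by sqrt(lambda)
      double vx = b, vy = ls[k] - a;
      double norm = Math.hypot(vx, vy);
      m[0][k] = Math.sqrt(ls[k]) * vx / norm;
      m[1][k] = Math.sqrt(ls[k]) * vy / norm;
    }
    return m;
  }

  /** Draw z ~ N(0, I) and return M*z, which is distributed according to C. */
  public static double[] sample(double[][] m, Random rnd) {
    double z0 = rnd.nextGaussian(), z1 = rnd.nextGaussian();
    return new double[]{m[0][0] * z0 + m[0][1] * z1, m[1][0] * z0 + m[1][1] * z1};
  }

  public static void main(String[] args) {
    double[][] m = buildM(4.0, 1.5, 1.0);
    System.out.println(java.util.Arrays.toString(sample(m, new Random(3))));
  }
}
```

By construction, M·Mᵀ reproduces C, which is exactly the property needed for the sampled hyperellipse to match the covariance matrix.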
One of the most well-known adaptation rules for Evolution Strategies is the 1/5th Rule
which adapts the single parameter σ of the isotropic mutation operator introduced in Sec-
tion 30.4.1.1. Since σ defines the dispersion of the mutation results, we can refer to it as
mutation strength.
Beyer [300] and Rechenberg [2278] considered two key performance metrics in their works
on (1 + 1)-ESs: the success probability PS of the mutation operation and progress rate ϕ de-
scribing the expected distance gain towards the optimum. Like Beyer [300] and Rechenberg
[2278], let us discuss the behavior of these parameters on the Sphere function fRSO1 discussed
in Section 50.3.1.1 on page 581 for n → ∞.
fRSO1 (x = g) = ∑_{i=1}^{n} x_i^2  with G = X ⊆ Rn   (30.3)
The isotropic mutation operator samples points in a sphere around the intermediate genotype
g′ . Since we consider the Sphere function as benchmark, g′ will always lie on an iso-fitness
sphere. All points which have a lower Euclidean distance to the coordinate center 0 than
g′ have a better objective value and all points with larger distance have a worse objective
value.
Both PS and ϕ depend on the value of σ. If σ is very small, the chance PS of sampling
a point g with a smaller norm ||g||2 than g′ approaches 0.5. However, the expected distance
gain ϕ towards the optimum then also approaches zero.
If σ → +∞, on the other hand, the success probability PS becomes zero, since mutation will
likely jump over the iso-fitness sphere of g′ even if it samples in the right direction.
Since PS is zero, the expected distance gain towards the optimum disappears as well.
lim_{σ→+∞} PS = 0   (30.6)

lim_{σ→+∞} ϕ = 0   (30.7)
In between these two extreme cases, i. e., for 0 < σ < +∞, lies an area where both ϕ > 0
and 0 < PS < 0.5. The goal of adaptation is to have a high expected progress towards the
global optimum, i. e., to maximize ϕ. If ϕ is large, the Evolution Strategy will converge
quickly to the global optimum. σ is the parameter we can tune in order to achieve this goal.
PS can be estimated as feedback from the optimizer: the estimate of PS is the number of
times a mutated individual replaced its parent in the population divided by the total number of
performed mutations. Beyer and Schwefel [302] and Rechenberg [2278] found theoretically
that ϕ becomes maximal for values of PS in [0.18, 0.28] for different mathematical benchmark
functions. For larger as well as for lower values of PS , ϕ begins to decline. As a compromise,
they conceived the 1/5th Rule:
Definition D30.1 (1/5th Rule). In order to obtain nearly optimal (local) performance
of the (1 + 1)-ES with isotropic mutation in real-valued search spaces, tune the mutation
strength σ in such a way that the (estimated) success rate PS is about 1/5 [302].
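A complete (1 + 1)-ES with 1/5th rule adaptation on the Sphere function might be sketched as follows (our own illustration; the window length of 20 mutations and the adaptation factor 1.5 are arbitrary choices for the sketch, not values prescribed by the sources cited above):

```java
import java.util.Random;

/** Sketch of a (1+1)-ES with 1/5th rule adaptation on the Sphere function. */
public class OneFifthRuleES {

  static double sphere(double[] x) {
    double s = 0.0;
    for (double v : x) s += v * v;
    return s;
  }

  public static double optimize(int n, int generations, Random rnd) {
    double[] x = new double[n];
    for (int i = 0; i < n; i++) x[i] = rnd.nextDouble() * 10.0 - 5.0; // random start
    double f = sphere(x), sigma = 1.0;
    int successes = 0, window = 20; // adapt sigma every `window` mutations
    for (int t = 1; t <= generations; t++) {
      double[] y = x.clone();
      for (int i = 0; i < n; i++) y[i] += sigma * rnd.nextGaussian(); // isotropic mutation
      double fy = sphere(y);
      if (fy <= f) { x = y; f = fy; successes++; } // (1+1) survival selection
      if (t % window == 0) { // 1/5th rule: grow sigma if P_S > 1/5, shrink if below
        double ps = successes / (double) window;
        sigma *= (ps > 0.2) ? 1.5 : (ps < 0.2 ? 1.0 / 1.5 : 1.0);
        successes = 0;
      }
    }
    return f;
  }

  public static void main(String[] args) {
    System.out.println("best f = " + optimize(10, 2000, new Random(42)));
  }
}
```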
The mutation of the single strategy parameter σ, the mutation strength, can be achieved
by multiplying it by a positive number ν. If 0 < ν < 1, the strength decreases and if ν > 1, the
mutation becomes more volatile. The interesting question is how to create this number.
Rechenberg’s Two-Point Rule [2279] sets ν = α with 50% probability and ν = 1/α otherwise.
α > 1 is a fixed exogenous strategy parameter. We can then define the operation mutateW,σ
(for the recombined mutation strength σ ′ ) given in Line 24 of Algorithm 30.1 on page 361
according to Equation 30.8.
mutateW,σ (σ ′ ) = α·σ ′ if randomUni[0, 1) < 0.5, and σ ′ /α otherwise   (30.8)

α = 1 + τ   (30.9)
According to Beyer [297], nearly optimal behavior of the Evolution Strategy can be achieved
if α is set according to Equation 30.9. The meaning of τ will be discussed below and values
for it will be given in Equation 30.11. The interesting thing about Equation 30.8 is that it
creates an extremely limited neighborhood of σ-values if we extend the concept of adjacency
from Definition D4.10 on page 85 to the space of endogenous information W. As a matter of fact,
only the values σ0 · αi : i ∈ Z can be reached at all if the mutation strength is initialized
with σ0 . Only the direct predecessor and successor of an element in this series can be found
in one step by mutation.
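The two-point rule itself is a one-liner; the following sketch (our own) implements Equation 30.8 and illustrates that every reachable mutation strength has the form σ0 · αi :

```java
import java.util.Random;

/** Sketch of Rechenberg's two-point rule (Equation 30.8): sigma is multiplied
 *  by alpha or divided by alpha, each with probability 1/2. */
public class TwoPointRule {

  public static double mutateSigma(double sigma, double alpha, Random rnd) {
    return (rnd.nextDouble() < 0.5) ? alpha * sigma : sigma / alpha;
  }

  public static void main(String[] args) {
    Random rnd = new Random(5);
    double sigma = 1.0, alpha = 1.3; // alpha = 1 + tau, an exogenous parameter
    for (int i = 0; i < 5; i++) {
      sigma = mutateSigma(sigma, alpha, rnd);
      // every value printed here is of the form sigma0 * alpha^i, i in Z
      System.out.println(sigma);
    }
  }
}
```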
The ε = 0 adjacency for the strategy parameter can be extended to the whole R+ by
setting the multiplication parameter ν to e raised to the power of a normally distributed
number. Since all real numbers are ε = 0-adjacent under a normal distribution, all values
in R+ \ {0} are adjacent under e^N(0,1) .
Algorithm 30.8 presents a way to mutate vectors σ ′ which represent axis-parallel mutation
strengths.

Algorithm 30.8: σ ←− mutateW,σ (σ ′ )
Input: σ ′ ∈ Rn : the old mutation strength vector
Data: i: a counter variable
Output: σ ∈ Rn : the new mutation strength vector
1 begin
2   ν ←− e^(τ0 · randomNorm(0,1))
3   σ ←− 0
4   for i ←− 0 up to n − 1 do
5     σ [i] ←− ν · e^(τ · randomNorm(0,1)) · σ ′ [i]
6   return σ

σ ′ may be the result from recombination of several parents. In Algorithm 30.8,
we first compute a general scaling factor ν which is log-normally distributed. Each field i of
the new mutation strength vector σ then is the old value σ ′ [i] multiplied with another log-
normally distributed random number and ν. This idea has exemplarily been implemented
in Listing 56.26 on page 787.
τ0 = c / √(2n)   (30.12)

τ = c / √(2√n)   (30.13)
For σ ∈ Rn , the settings in Equation 30.12 and Equation 30.13 are recommended by Beyer
and Schwefel [302]. For a (10, 100)-Evolution Strategy, Schwefel [2437] further suggests to
use c = 1.
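Algorithm 30.8 together with the parameter settings of Equation 30.12 and Equation 30.13 can be sketched as follows (our own illustration, not the book’s Listing 56.26; we fix c = 1 as in the suggestion above):

```java
import java.util.Random;

/** Sketch of Algorithm 30.8: log-normal self-adaptation of a mutation strength vector. */
public class LogNormalSelfAdaptation {

  /** Returns a mutated copy of the strength vector sigma'. */
  public static double[] mutateSigma(double[] sigma, Random rnd) {
    int n = sigma.length;
    double c = 1.0;                                  // c = 1, as suggested for a (10,100)-ES
    double tau0 = c / Math.sqrt(2.0 * n);            // Equation 30.12
    double tau = c / Math.sqrt(2.0 * Math.sqrt(n));  // Equation 30.13
    double nu = Math.exp(tau0 * rnd.nextGaussian()); // shared log-normal scaling factor
    double[] out = new double[n];
    for (int i = 0; i < n; i++) {
      // each field gets its own log-normal factor on top of the shared one
      out[i] = nu * Math.exp(tau * rnd.nextGaussian()) * sigma[i];
    }
    return out;
  }

  public static void main(String[] args) {
    double[] s = {0.5, 0.5, 0.5, 0.5};
    System.out.println(java.util.Arrays.toString(mutateSigma(s, new Random(9))));
  }
}
```

Because the factors are exponentials of normally distributed numbers, the strengths always stay positive and, unlike under the two-point rule, the whole of R+ \ {0} is reachable.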
30.6 General Information on Evolution Strategies
30.6.2 Books
1. Handbook of Evolutionary Computation [171]
2. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163]
3. Self-Adaptive Heuristics for Evolutionary Computation [1617]
4. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evo-
lution [2278]
5. Evolutionsstrategie ’94 [2279]
6. Evolution and Optimum Seeking [2437]
7. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
8. The Theory of Evolution Strategies [300]
9. Cybernetic Solution Path of an Experimental Problem [2277]
10. Numerical Optimization of Computer Models [2436]
30.6.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T30
85. Implement the simple Evolution Strategy Algorithm 30.7 on page 369 in a program-
ming language of your choice.
[40 points]
86. How does the parameter adaptation method in Algorithm 30.7 on page 369 differ from
the method applied in Algorithm 30.1 on page 361? What is the general difference be-
tween endogenous and exogenous parameters and the underlying ideas of adaptation?
[10 points]
87. How do the parameters λ, µ, and ρ of an Evolution Strategy relate to the population
size ps and the mating pool size mps used in general Evolutionary Algorithms? You
may use Section 28.1.4 on page 265 as a reference.
[10 points]
88. How does the n-ary dominant recombination algorithm in Evolution Strategies
(see Section 30.3.1 on page 362) relate to the uniform crossover operator (see Section 29.3.4.4 on page 337) in Genetic Algorithms? What are the similarities? What
are the differences?
[10 points]
89. Perform the same experiments as in Task 67 on page 238 (and Task 70 and Task 81 on
page 357) with an Evolution Strategy using the Random Keys encoding given in Section 29.7 on page 349. In Section 57.1.2.1 on page 887, you can find the necessary
code for that. How does the Evolution Strategy perform in comparison with the other
algorithms and previous experimental results? What conclusions can we draw about
the utility of the Random Keys genotype-phenotype mapping?
[10 points]
90. Apply an Evolution Strategy using the Random Keys encoding given in Section 29.7
on page 349 to the Traveling Salesman Problem instance att48 taken from [2288] and
listed in Paragraph 57.1.2.2.1.
As already discussed in Example E2.2 on page 45, a solution to an n-city Traveling
Salesman Problem can be represented by a permutation of length n − 1. Such per-
mutations can be created and optimized in a way very similar to what we already did
with the bin packing problem a few times. Only the objective function needs to be
re-defined. Additionally to the Evolution Strategy, also apply a simple Hill Climber
to the same problem, using a method similar to what we did in Task 67 on page 238
and a representation according to Section 29.3.1.2 on page 329.
How does the Evolution Strategy perform in comparison with the Hill Climber on
this problem? What conclusions can we draw about the utility of the Random Keys
genotype-phenotype mapping? How close do the algorithms come to the optimal,
shortest path for this problem which has the length 10 628?
[50 points]
Chapter 31
Genetic Programming
31.1 Introduction
The term Genetic Programming1 (GP) [1220, 1602, 2199] has two possible mean-
ings. (a) First, it is often used to subsume all Evolutionary Algorithms that have tree data
structures as genotypes. (b) Second, it can also be defined as the set of all Evolutionary
Algorithms that breed programs, algorithms, and similar constructs. In this chapter, we
focus on the latter definition which still includes discussing tree-shaped genomes.
The well-known input-processing-output model from computer science states that a run-
ning instance of a program uses its input information to compute and return output data.
In Genetic Programming, usually some inputs and corresponding output data samples are
known, can be produced, or simulated. The goal then is to find a program that connects
them or that exhibits some kind of desired behavior according to the specified situations, as
sketched in Figure 31.1.
31.1.1 History
The history of Genetic Programming [102] goes back to the early days of computer science
as sketched in Figure 31.2. In 1957, Friedberg [995] left the first footprints in this area by
using a learning algorithm to stepwise improve a program. The program was represented as
a sequence of instructions for a theoretical computer called Herman [995, 996]. Friedberg
did not use an evolutionary, population-based approach for searching the programs – the
idea of Evolutionary Computation had not been developed yet at that time, as you can see
1 http://en.wikipedia.org/wiki/Genetic_programming [accessed 2007-07-03]
Figure 31.2: Some important works on Genetic Programming illustrated on a time line. [Figure omitted; the entries include Evolutionary Programming (Fogel et al., 1966), Grammars (Antonisse, 1990), GADS 1/2 (Paterson/Livesey, 1995), Grammatical Evolution (Ryan et al., 1995), LOGENPRO (Wong/Leung, 1995), PDGP (Poli, 1996), Cartesian GP (Miller/Thomson, 1998), TAG3P (Nguyen, 2004), and RBGP (Weise, 2007).]
in Section 29.1.1 on page 325. Also, the computational capacity of the computers of that era
was very limited and would not have allowed for experiments on program synthesis driven
by a (population-based) Evolutionary Algorithm.
Around the same time, Samuel applied machine learning to the game of checkers and, by
doing so, created the world’s first self-learning program. In the future development section
of his 1959 paper [2383], he suggested that effort could be spent on allowing the (checkers)
program to learn scoring polynomials – an activity which would be similar to symbolic
regression. Yet, in his 1967 follow-up work [2385], he could not report any progress on this
issue.
The Evolutionary Programming approach for evolving Finite State Machines by Fogel
et al. [945] (see also Chapter 32 on page 413) dates back to 1966. In order to build predictors,
different forms of mutation (but no crossover) were used for creating offspring from successful
individuals.
Fourteen years later, the next generation of scientists began to look for ways to evolve
programs. New results were reported by Smith [2536] in his PhD thesis in 1980. Forsyth
[972] evolved trees denoting fully bracketed Boolean expressions for classification problems
in 1981 [972, 973, 975].
The mid-1980s were a very productive period for the development of Genetic Program-
ming. Cramer [652] applied a Genetic Algorithm in order to evolve a program written in
a subset of the programming language PL in 1985. This GA used a string of integers as
genome and employed a genotype-phenotype mapping that recursively transformed them
into program trees. We discuss this approach in ?? on page ??.
At the same time, the undergraduate student Schmidhuber [2420] also used a Genetic Al-
gorithm to evolve programs at the Siemens AG. He re-implemented his approach in Prolog
at the TU Munich in 1987 [790, 2420]. Hicklin [1230] and Fujiki [999] implemented reproduction operations for manipulating the if-then clauses of LISP programs consisting of
single COND-statements. With this approach, Fujiki and Dickinson [998] evolved strategies
for playing the iterated prisoner’s dilemma game. Bickel and Bickel [309] evolved sets of
rules which were represented as trees, using tree-based mutation and crossover operators.
Genetic Programming became fully accepted at the end of this productive decade mainly
because of the work of Koza [1590, 1591]. Koza studied many benchmark applications of
Genetic Programming, such as learning of Boolean functions [1592, 1596], the Artificial Ant
problem (as discussed in ?? on page ??) [1594, 1595, 1602], and symbolic regression [1596,
1602], a method for obtaining mathematical expressions that match given data samples
(see Section 49.1 on page 531). Koza formalized (and patented [1590, 1598]) the idea of
employing genomes purely based on tree data structures rather than string chromosomes as
used in Genetic Algorithms. We will discuss this form of Genetic Programming in-depth in
Section 31.3.
In the first half of the 1990s, researchers such as Banzhaf [205], Perkis [2164], Openshaw
and Turton [2085, 2086], and Crepeau [653], on the other hand, began to evolve programs in
a linear shape. Similar to programs written in machine code, string genomes used in linear
Genetic Programming (LGP ) consist of single, simple commands. Each gene identifies an
instruction and its parameters. Linear Genetic Programming is discussed in Section 31.4.
Also in the first half of the 1990s, Genetic Programming systems which are driven by a
given grammar were developed. Here, the candidate solutions are sentences in a language
defined by, for example, an EBNF. Antonisse [113], Stefanski [2605], Roston [2336], and
Mizoguchi et al. [1920] were the first ones to delve into grammar-guided Genetic Programming
(GGGP, G3P ), which is outlined in Section 31.5.
31.2 General Information on Genetic Programming
31.2.2 Books
1. Advances in Genetic Programming II [106]
2. Handbook of Evolutionary Computation [171]
3. Advances in Genetic Programming [2569]
4. Genetic Algorithms and Genetic Programming at Stanford [1601]
5. Genetic Programming IV: Routine Human-Competitive Machine Intelligence [1615]
6. Dynamic, Genetic, and Chaotic Programming: The Sixth-Generation [2559]
7. Genetic Programming: An Introduction – On the Automatic Evolution of Computer Pro-
grams and Its Applications [209]
8. Design by Evolution: Advances in Evolutionary Design [1234]
9. Linear Genetic Programming [394]
10. A Field Guide to Genetic Programming [2199]
11. Representations for Genetic and Evolutionary Algorithms [2338]
12. Automatic Quantum Computer Programming – A Genetic Programming Approach [2568]
13. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
14. Evolutionäre Algorithmen [2879]
15. Foundations of Genetic Programming [1673]
16. Genetic Programming II: Automatic Discovery of Reusable Programs [1599]
31.2.4 Journals
31.3.1 Introduction
Definition D31.1 (Terminal Node Types). The set Σ contains the node types i which
do not allow their instances t to have child nodes. The nodes t with typeOf (t) ∈ i ∧ i ∈ Σ
are thus always leaf nodes.
Definition D31.2 (Non-Terminal Node Types). Each node type i from the set N of non-
terminal (function) node types in a Genetic Programming system has a definite number k ∈
N1 of children (k ≥ 1). The nodes t which belong to a non-terminal node type typeOf (t) ∈
i ∧ i ∈ N thus are always inner nodes of a tree.
In Section 56.3.1 on page 824, we provide simple Java implementations for basic tree node
classes (Listing 56.48). In this implementation, each node belongs to a specific type (Listing 56.49). Such a node type does not only describe the number of children that its instance
nodes can have. In our example implementation, we go one step further and also define
the possible types of children a node can have by specifying a type set (Listing 56.50) for
each possible child node location. We thus provide a fully-fledged Strongly-Typed Genetic
Programming (STGP) [1930–1932] implementation in the style of Montana.
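A stripped-down version of such typed tree nodes might look as follows (our own sketch, much simpler than the Listings 56.48–56.50 referenced above; it only stores the arity, not full child type sets):

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of typed tree nodes for Genetic Programming. */
public class GPTree {

  /** A node type knows how many children its instances must have. */
  public static class NodeType {
    final String name;
    final int arity; // 0 for terminal types, >= 1 for function types
    public NodeType(String name, int arity) { this.name = name; this.arity = arity; }
    public boolean isTerminal() { return arity == 0; }
  }

  /** A tree node: an instance of a type, plus its children. */
  public static class Node {
    final NodeType type;
    final List<Node> children = new ArrayList<>();
    public Node(NodeType type) { this.type = type; }
    public int size() { // number of nodes in this subtree
      int s = 1;
      for (Node c : children) s += c.size();
      return s;
    }
  }

  public static void main(String[] args) {
    NodeType plus = new NodeType("+", 2), x = new NodeType("x", 0), one = new NodeType("1", 0);
    Node root = new Node(plus); // the tree for the expression x + 1
    root.children.add(new Node(x));
    root.children.add(new Node(one));
    System.out.println(root.size()); // 3
  }
}
```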
Figure 31.3: The formula e^(sin x) + 3·√|x| expressed as a tree. [Figure omitted: the root is a + node; the leaves are the variable x and constants such as 3 and 0.5.]
Finding mathematical expressions like the one in Figure 31.3 which match given data
is called symbolic regression and is discussed in detail in Section 49.1 on page 531, but
it can serve here as a good example for the abilities of Genetic Programming. Standard
Genetic Programming (SGP) has initially been used by Koza to evolve such expressions.
Here, an inner node stands for a mathematical operation and its child nodes are the
parameters of the operation. Leaf nodes then are terminal symbols like numbers or variables.
Trees are very (and deceptively) close to the natural structure of algorithms and programs.
The syntax of most of the high-level programming languages, for example, leads to a certain
hierarchy of modules and alternatives. Not only does this form normally constitute a tree –
compilers even use tree representations internally.
Figure 31.4: The algorithm createPop(s), which creates a population Pop of s random individuals, rendered as pseudocode, as a schematic Java program (a createPop method that fills an ArrayList<IIndividual> by calling create() s times in a loop), and as an abstract syntax tree. [Figure omitted.]
When reading the source code of a program, compilers first split it into tokens3 . Then
2 http://en.wikipedia.org/wiki/Infix_notation [accessed 2010-08-21]
3 http://en.wikipedia.org/wiki/Lexical_analysis [accessed 2007-07-03]
they parse these tokens. Finally, an abstract syntax tree (AST) is created [1278, 1462].
The internal nodes of ASTs are labeled by operators and the leaf nodes contain the operands
of these operators. In principle, we can illustrate almost every6 program or algorithm as such
an AST (see Figure 31.4). Tree-based Genetic Programming directly evolves individuals in
this form.
It should, however, be pointed out that Genetic Programming is not suitable to evolve
programs such as the one illustrated in Figure 31.4. As we will discuss in ??, programs in
representations which we know from our everyday life as students or programmers exhibit
too much epistasis. Synthesizing them is thus not possible with GP.
Another interesting aspect of the tree genome is that it has no natural role model. While Ge-
netic Algorithms match their direct biological metaphor, the DNA, to some degree, Genetic
Programming introduces completely new characteristics and traits.
Genetic Programming is one of the few techniques that are able to learn structures
of potentially unbounded complexity. It can be considered as more general than Genetic
Algorithms because it makes fewer assumptions about the shape of the possible solutions.
Furthermore, it often offers white-box solutions that are human-interpretable. Many other
learning algorithms like artificial neural networks, for example, usually generate black-box
outputs, which are highly complicated if not impossible to fully grasp [1853].
Before the evolutionary process in Standard Genetic Programming can begin, an initial
population composed of randomized individuals is needed. In Genetic Algorithms, a set
of random bit strings is created for this purpose. In Genetic Programming, random trees
instead of such one-dimensional sequences are constructed.
Normally, there is a maximum depth d̂ specified which the tree individuals are not allowed
to surpass. Then, the creation operation will return only trees where the path between the
root and the most distant leaf node is not longer than d̂. There are three basic ways for
realizing the create and “createPop” operations for trees, which can be distinguished
according to the depth of the produced individuals.
31.3.2.1 Full
The full method illustrated in Figure 31.5 creates trees where each (non-backtracking)
path from the root to the leaf nodes has exactly the length d̂. Algorithm 31.1 illustrates
this process: For each node of a depth below d̂, the type is chosen from the non-terminal
symbols (the functions N ). A chain of function nodes is recursively constructed until the
maximum depth minus one. If we reach d̂, of course, only leaf nodes (with types from Σ)
can be attached.
31.3.2.2 Grow
The grow method depicted in Figure 31.6 and specified in Algorithm 31.2 also creates trees
where each (non-backtracking) path from the root to the leaf nodes is not longer than d̂,
but may be shorter. This is achieved by deciding randomly for each node if it should be a
leaf or not when it is attached to the tree. Of course, to nodes of the depth d̂ − 1, only leaf
nodes can be attached.
Koza [1602] additionally introduced a mixture method called ramped half-and-half, defined
here in Algorithm 31.3. For each tree to be created, this algorithm draws a number r
uniformly distributed between 2 and d̂: r = ⌊randomUni[2, d̂ + 1)⌋. Now either full or
Algorithm 31.2: g ←− createGPGrow(d̂, N, Σ)
Input: N : the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: j: a counter variable
Data: V = N ∪ Σ: the joint list of function and terminal node types
Data: i ∈ V : the selected tree node type
Output: g: a new genotype
1 begin
2   if d̂ > 1 then
3     V ←− N ∪ Σ
4     i ←− V [⌊randomUni[0, len(V ))⌋]
5     g ←− instantiate(i)
6     for j ←− 1 up to numChildren(i) do
7       g ←− addChild(g, createGPGrow(d̂ − 1, N, Σ))
8   else
9     i ←− Σ[⌊randomUni[0, len(Σ))⌋]
10    g ←− instantiate(i)
11  return g
Algorithm 31.3: g ←− createGPRamped(d̂, N, Σ)
Input: N : the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: r: the maximum depth chosen for the genotype
Output: g ∈ G: a new genotype
1 begin
2   r ←− ⌊randomUni[2, d̂ + 1)⌋
3   if randomUni[0, 1) < 0.5 then return createGPGrow(r, N, Σ)
4   else return createGPFull(r, N, Σ)
grow is chosen to finally create a tree with the maximum depth r (in place of d̂). This
method is often preferred since it produces an especially wide range of different tree depths
and shapes and thus provides a great initial diversity. The Ramped Half-and-Half algorithm
for Strongly-Typed Genetic Programming is implemented in Listing 56.42 on page 814.
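The full, grow, and ramped half-and-half schemes can be sketched together as follows (our own illustration using a toy node model with a single binary function type; it assumes d̂ ≥ 2 for the ramped method):

```java
import java.util.Random;

/** Sketch of the full, grow, and ramped half-and-half tree creation schemes. */
public class TreeCreation {

  static final int FUNCTION_ARITY = 2; // toy model: one binary function type

  public static class Node {
    final boolean terminal;
    final Node[] children;
    Node(boolean terminal) {
      this.terminal = terminal;
      this.children = terminal ? new Node[0] : new Node[FUNCTION_ARITY];
    }
  }

  /** Full: every path from the root to a leaf has exactly length maxDepth. */
  public static Node full(int maxDepth, Random rnd) {
    if (maxDepth <= 1) return new Node(true); // only terminals at the bottom
    Node n = new Node(false);
    for (int i = 0; i < n.children.length; i++) n.children[i] = full(maxDepth - 1, rnd);
    return n;
  }

  /** Grow: each node may become a leaf early; depth never exceeds maxDepth. */
  public static Node grow(int maxDepth, Random rnd) {
    if (maxDepth <= 1 || rnd.nextDouble() < 0.5) return new Node(true);
    Node n = new Node(false);
    for (int i = 0; i < n.children.length; i++) n.children[i] = grow(maxDepth - 1, rnd);
    return n;
  }

  /** Ramped half-and-half: pick r in 2..maxDepth, then use full or grow at random. */
  public static Node rampedHalfAndHalf(int maxDepth, Random rnd) {
    int r = 2 + rnd.nextInt(maxDepth - 1);
    return (rnd.nextDouble() < 0.5) ? grow(r, rnd) : full(r, rnd);
  }

  static int depth(Node n) {
    int d = 0;
    for (Node c : n.children) d = Math.max(d, depth(c));
    return d + 1;
  }

  public static void main(String[] args) {
    Random rnd = new Random(11);
    System.out.println(depth(full(5, rnd)));              // always exactly 5
    System.out.println(depth(grow(5, rnd)));              // at most 5
    System.out.println(depth(rampedHalfAndHalf(5, rnd))); // at most 5
  }
}
```

In a real Genetic Programming system, the 50% leaf probability in grow would instead follow from drawing uniformly out of V = N ∪ Σ, as in Algorithm 31.2.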
In most of the reproduction operations for tree genomes, in mutation as well as in recom-
bination, certain nodes in the trees need to be selected. In order to apply mutation, for
instance, we first need to find the node which is to be altered. For recombination, we need
one node in each parent tree. These nodes are then exchanged. We introduce an operator
“selectNode” for choosing these nodes.
A good method for doing so could select all nodes tc in a tree with exactly the
same probability, as done by the method selectNodeuni given in Algorithm 31.4. There,
P (selectNodeuni (tr ) = tc ) = P (selectNodeuni (tr ) = tn ) for all nodes tc and tn in the tree
with root tr .
In order to achieve such behavior, we first define the weight nodeWeight(tn ) of a tree
node tn to be the total number of nodes in the subtree with tn as root, i. e., the tree which
contains tn itself, its children, grandchildren, grand-grandchildren, and so on.
nodeWeight(t) = 1 + Σ_{i=0}^{numChildren(t)−1} nodeWeight(t.children[i])    (31.3)
Thus, the weight of the root of a tree is the number of all nodes in the tree and the weight of
each of the leaves is exactly 1. In selectNodeuni , the probability for a node of being selected
in a tree with root tr is thus 1/nodeWeight(tr ). We provide an example implementation of
such a node selection method in Listing 56.41 on page 811.
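A weight-based descent that yields exactly these uniform probabilities might look as follows. This is a Python sketch under the assumption that trees are nested tuples; it is not the book's Listing 56.41:

```python
import random

def node_weight(t):
    # Equation 31.3: total number of nodes in the subtree rooted at t
    return 1 + sum(node_weight(c) for c in t[1:])

def node_at(t, r):
    # the r-th node of the subtree t in pre-order, 0 <= r < node_weight(t)
    if r == 0:
        return t
    r -= 1
    for child in t[1:]:
        w = node_weight(child)
        if r < w:
            return node_at(child, r)
        r -= w

def select_node_uniform(t):
    # every node is returned with probability exactly 1 / node_weight(t)
    return node_at(t, random.randrange(node_weight(t)))
```

Drawing the random index r once at the root and descending by child weights avoids the biased 50:50-per-level scheme criticized in the surrounding text.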
A tree descent with probabilities different from those defined here may lead to
unbalanced node selection probability distributions. Then, the reproduction operators will
prefer accessing some parts of the trees while very rarely altering the other regions. We
could, for example, descend the tree by starting at the root tr and return the current
node with probability 0.5 or recursively go to one of its children (also with 50% probability).
Then, the root tr would have a 50 : 50 chance of being the starting point of a reproduction
operation. Its direct children each have at most probability 0.5^2/numChildren(tr ), their
children would have a base multiplier of 0.5^3 for their probabilities, and so on. Hence, the
leaves would almost never actively take part in reproduction.
We could also choose other probabilities which strongly prefer descending to the children
of the tree, but then the nodes near the root will most likely be left untouched during
reproduction. Often, this approach is favored by selection methods, although leaves in
different branches of the tree are not chosen with the same probability if the branches
differ in depth. When applying Algorithm 31.4, on the other hand, there exist no regions in
the trees that have lower selection probabilities than others.
Tree genotypes may undergo small variations during the reproduction process in the Evo-
lutionary Algorithm. Such a mutation is usually defined as the random selection of a node
in the tree, removing this node and all of its children, and finally replacing it with another
node [1602]. Algorithm 31.5 shows one possible way to realize such a mutation. First, a
copy g ∈ G of the original genotype gp ∈ G is created. Then, a random parent node t
is selected from g. One child of this node is replaced with a newly created subtree. For
creating this tree, we pick the ramped half-and-half method with a maximum depth similar
to that of the child which it replaces. The possible outcome of this algorithm
is illustrated in Fig. 31.7.a and an example implementation in Java for Strongly-Typed
Genetic Programming is given in Listing 56.43 on page 815.
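The subtree replacement just described can be sketched like this. It is a Python sketch; the helper names and the uniform node choice are illustrative assumptions, not the book's Listing 56.43:

```python
import random

def node_weight(t):
    # number of nodes in the subtree rooted at t (trees are nested tuples)
    return 1 + sum(node_weight(c) for c in t[1:])

def replace_node(t, r, repl):
    # return a copy of t in which the r-th node (pre-order) is replaced by repl
    if r == 0:
        return repl
    r -= 1
    out = [t[0]]
    for child in t[1:]:
        w = node_weight(child)
        out.append(replace_node(child, r, repl) if 0 <= r < w else child)
        r -= w
    return tuple(out)

def mutate(t, create_subtree):
    # choose a node uniformly and substitute a freshly created subtree for it
    r = random.randrange(node_weight(t))
    return replace_node(t, r, create_subtree())
```

Because trees are immutable tuples here, `replace_node` implicitly performs the copy step that the algorithm requires before modification.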
Furthermore, in the special case that the tree nodes may have a variable number of chil-
dren, two additional operations for mutation become available: (a) insertions of new nodes
or small trees (Fig. 31.7.b), and (b) the deletion of nodes, as illustrated in Fig. 31.7.c. The
effects of insertion and deletion can, of course, also be achieved with replacement.
[Figure 31.7: Possible tree mutation operations: replacing a randomly chosen node with a randomly created subtree (Fig. 31.7.a), subtree insertion (Fig. 31.7.b), and subtree deletion (Fig. 31.7.c), all subject to the maximum depth.]
Fig. 31.7.a shows the application of Algorithm 31.5 to a mathematical expression encoded
as a tree. The initial genotype represents
the mathematical formula (11 + x) ∗ (3 + |x|). From this tree, the portion (11 + x) is re-
placed with a randomly created subtree which resembles the expression x/12. The resulting
genotype then stands for the formula (x/12) ∗ (3 + |x|).
The tree permutation operation illustrated in Figure 31.9 resembles the permutation oper-
ation of string genomes or the inversion used in messy GA (Section 29.6.2.1, [1602]). Like
mutation, it is used to reproduce one single tree. It first selects an internal node of the
parental tree. The child nodes attached to that node are then shuffled randomly, i. e., per-
muted. If the tree represents a mathematical formula and the operation represented by
the node is commutative, this has no direct effect. The main goal of this operation is to
re-arrange the nodes in highly fit subtrees in order to make them less fragile for other oper-
ations such as recombination. The effects of this operation are questionable and most often,
it is not applied [1602].
[Figure 31.10: An example for tree editing: the tree representing (b + (7 − 4)) + (1 ∗ a) is transformed into the tree for (b + 3) + a.]
Consider a tree standing for the expression (b + (7 − 4)) + (1 ∗ a), for instance. This
expression clearly can be written in a shorter way by replacing (7 − 4)
with 3 and (1 ∗ a) with a. By doing so, we improve its readability and also decrease the
computational time needed to evaluate the expression for concrete values of a and b. Similar
measures can often be applied to algorithms and program code.
The positive aspect of editing here is that it reduces the number of nodes in the tree by
removing useless expressions. This makes it easier for recombination operations to pick
“important” building blocks. At the same time, the expression (7 − 4) is now less likely to
be destroyed by the reproduction processes since it is replaced by the single terminal node
3.
A negative aspect would be if (in our example) a fitter expression was (7 − (4 ∗ a)) and a
is a variable close to 1. Then, transforming (7 − 4) into 3 prevents an evolutionary transition
to the fitter expression.
Besides the aspects mentioned in Example E31.3, editing also reduces the diversity in the
genome which could degrade the performance by decreasing the variety of structures avail-
able. In Koza’s experiments, Genetic Programming with and without editing showed equal
performance, so the overall benefits of this operation are not fully clear [1602].
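A tiny editing step such as the one in the example, constant folding plus removal of the neutral factor 1, could be sketched as follows. The tuple encoding and the operator table are illustrative assumptions, not a fixed part of any GP system:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def is_const(t):
    # leaf carrying a numeric constant
    return isinstance(t[0], (int, float))

def edit(t):
    # bottom-up: fold constant subexpressions and drop multiplications by 1
    if len(t) == 1:
        return t
    op = t[0]
    children = tuple(edit(c) for c in t[1:])
    if op in OPS and all(is_const(c) for c in children):
        return (OPS[op](children[0][0], children[1][0]),)  # e.g. (7-4) -> 3
    if op == "*" and (1,) in children:                     # e.g. 1*a -> a
        return children[0] if children[1] == (1,) else children[1]
    return (op,) + children
```

Note that such a simplifier is deliberately conservative: it only rewrites what it can prove, which is exactly why it can also freeze subexpressions that evolution might still want to vary.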
The idea behind the encapsulation operation is to identify potentially useful subtrees and to
turn them into atomic building blocks as sketched in Figure 31.11. To put it plainly, we create
new terminal symbols that (internally hidden) are trees with multiple nodes. This way,
they will no longer be subject to potential damage by other reproduction operations. The
new terminal may spread throughout the population in the further course of the evolution.
According to Koza, this operation has no substantial effect but may be useful in special
applications like the evolution of artificial neural networks [1602].
Applying the wrapping operation means to first select an arbitrary node n in the tree. A new
non-terminal node m is created outside of the tree. In m, at least one child node position is
left unoccupied. Then, n (and all its potential child nodes) is cut from the original tree and
appended to m by plugging it into the free spot. Finally, m is placed at the tree position
that formerly was occupied by n.
The purpose of this reproduction method illustrated in Figure 31.12 is to allow modifi-
cations of non-terminal nodes that have a high probability of being useful. Simple mutation
would, for example, cut n from the tree or replace it with another expression. This will
always change the meaning of the whole subtree below n dramatically.
The idea of wrapping is to introduce a search operation with high locality which allows
smaller changes to genotypes. The wrapping operation is used by the author in his exper-
iments and so far, I have not seen another source where it is used. Then again, the idea
of this operation is especially appealing to the evolution of executable structures such as
mathematical formulas and quite simple, so it is likely used in similar ways by a variety of
researchers.
While wrapping allows nodes to be inserted in non-terminal positions with small changes to
the tree's semantics, lifting is able to remove them in the same way. It is the inverse operation
to wrapping, which becomes obvious when comparing Figure 31.12 and Figure 31.13.
Lifting begins with selecting an arbitrary inner node n of the tree. This node then
replaces its parent node. The parent node, including all of its child nodes (except n),
is removed from the tree. With lifting, a tree that represents the mathematical formula
(b + a) ∗ 3 can be transformed to b ∗ 3 in a single step.
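Lifting can be expressed compactly on a tuple-encoded tree. In this Python sketch, the path-based addressing (a list of child indices leading to the selected node) is an assumption made for brevity:

```python
def lift(t, path):
    # the node addressed by `path` replaces its own parent; all other
    # children of that parent are discarded with it
    if len(path) == 1:
        return t[1 + path[0]]            # the child takes its parent's place
    i = path[0]
    return t[:1 + i] + (lift(t[1 + i], path[1:]),) + t[2 + i:]
```

Applied to the tree for (b + a) ∗ 3 with the path leading to b, this yields the tree for b ∗ 3 in a single step, matching the example in the text.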
Lifting is used by the author in his experiments with Genetic Programming (see for
example ?? on page ??). Again, so far I have not seen a similar operation defined explicitly
in other works, but because of its simplicity, it would be strange if it had not been used by
various researchers in the past.
The mating process in nature – the recombination of the genotypes of two individuals – is
also copied in tree-based Genetic Programming. Exchanging parts of genotypes is not much
different in tree-based representations than in string-based ones.
Applying the default subtree exchange recombination operator to two parental trees means
to swap subtrees between them as defined in Algorithm 31.6 and illustrated in Figure 31.14.
Therefore, single subtrees are selected randomly from each of the parents and subsequently
are cut out and reinserted in the partner genotype.
If a depth restriction is imposed on the genome, both the mutation and the recombi-
nation operation have to respect it. The new trees they create must not exceed it. An
example implementation of this operation in Java for Strongly-Typed Genetic Programming
is given in Listing 56.44 on page 817.
The intent of using the recombination operation in Genetic Programming is the same as
in Genetic Algorithms. Over many generations, successful building blocks – for example a
highly fit expression in a mathematical formula – should spread throughout the population
and be combined with good genes of different candidate solutions.
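Subtree exchange can be sketched along these lines. This is a Python sketch; in particular, the retry loop for enforcing the depth restriction is only one possible policy, not necessarily the one used in the book's Listing 56.44:

```python
import random

def node_weight(t):
    # number of nodes in the subtree rooted at t (trees are nested tuples)
    return 1 + sum(node_weight(c) for c in t[1:])

def tree_depth(t):
    return 1 if len(t) == 1 else 1 + max(tree_depth(c) for c in t[1:])

def node_at(t, r):
    # r-th node of t in pre-order
    if r == 0:
        return t
    r -= 1
    for child in t[1:]:
        w = node_weight(child)
        if r < w:
            return node_at(child, r)
        r -= w

def replace_node(t, r, repl):
    # copy of t with the r-th node replaced by repl
    if r == 0:
        return repl
    r -= 1
    out = [t[0]]
    for child in t[1:]:
        w = node_weight(child)
        out.append(replace_node(child, r, repl) if 0 <= r < w else child)
        r -= w
    return tuple(out)

def subtree_exchange(p1, p2, max_depth=17):
    # swap a random subtree of p2 into a random position of p1;
    # retry until the offspring respects the depth restriction
    while True:
        r1 = random.randrange(node_weight(p1))
        r2 = random.randrange(node_weight(p2))
        child = replace_node(p1, r1, node_at(p2, r2))
        if tree_depth(child) <= max_depth:
            return child
```

Swapping the roles of p1 and p2 in a second call yields the second offspring of the operator.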
[Figure 31.14: An example application of Algorithm 31.6: two mathematical expressions encoded as trees are recombined by the exchange of subtrees, i.e., of mathematical expressions, subject to the maximum depth.]
The first parent tree gp1 represents the formula (sin(3/e))x and the second one (gp2 )
denotes (e − x) − |3|. The node which stands for the variable x in the first parent is selected
as well as the subtree e − x in the second parent. x is cut from gp1 and replaced with e − x,
yielding an offspring which encodes the formula (sin(3/e))e−x .
SHCC is very close to subtree mutation, since a sub-tree in the parent is replaced with a
randomly created one. It resembles a macromutation. Angeline [101] showed that SHCC
performed at least as well as, if not better than, subtree crossover on a variety of problems
and that the performance of WHCC is good, too.
His conclusion is that subtree crossover actually works similarly to a macromutation oper-
ator whose contents are limited by the subtrees available in the population. This contradicts
the common (prior) belief that subtree crossover would be a building block combining fa-
cility. In problems where a constant incoming stream of new genetic material and variation
is necessary, the pure macromutation provided by SHCC thus outperforms crossover.
Interestingly, a few years prior, Lang [1665] had claimed that a Hill Climbing algorithm with
a similar macromutation could outperform Genetic Programming. These claims, however,
were drawn from inconclusive and ill-designed experiments and, as a consequence, could not
withstand a thorough investigation [1600]. The similarity of crossover and mutation (which also becomes
obvious when comparing Algorithm 31.5 and 31.6) in Genetic Programming was thus first
profoundly pointed out by Angeline [101].
Several techniques have been proposed in order to mitigate the destructive effects of re-
combination in tree-based representations. In 1994, D’Haeseleer [782] obtained modest im-
provements with his strong context preserving crossover that permitted only the exchange
of subtrees that occupied the same positions in the parents. Poli and Langdon [2193, 2194]
define the similar single-point crossover for tree genomes with the same purpose: increasing
the probability of exchanging genetic material which is structurally and functionally akin and
thus decreasing the disruptiveness. A related approach defined by Francone et al. [982] for
linear Genetic Programming is discussed in Section 31.4.4.1 on page 407.
31.3.6 Modules
The concept of automatically defined functions (ADFs) as introduced by Koza [1602] pro-
vides some sort of pre-specified modularity for Genetic Programming. Finding a way to
evolve modules and reusable building blocks is one of the key issues in using GP to derive
higher-level abstractions and solutions to more complex problems [107, 108, 1599]. If ADFs
are used, a certain structure is defined for the genome.
The root of the tree usually loses its functional responsibility and now serves only as glue
that holds the individual together. It has a fixed number n of children, of which n − 1 are
automatically defined functions and one is the result-generating branch. When evaluating
the fitness of an individual, often only this first branch is taken into consideration whereas
the root and the ADFs are ignored. The result-generating branch, however, may use any of
the automatically defined functions to produce its output.
Hence, g(x) ≡ ((−4) ∗ (x + 7)) ∗ ((x + 7) + 3). The number of children of the function
calls in the result-generating branch must be equal to the number of the parameters of the
corresponding ADF.
Although ADFs were first introduced for symbolic regression by Koza [1602], they can also
be applied to a variety of other problems like in the evolution of agent behaviors [96, 2240],
electrical circuit design [1608], or the evolution of robotic behavior [98].
7 A very common example for function approximation, Genetic Programming-based symbolic regression,
[Figure: An automatically defined function ADF(param a) ≡ (a+a) compared to an automatically defined macro ADM(param a) ≡ (a+a).

  Program with ADF:             Program with ADM:
    main_program                  main_program
    begin                         begin
      print("out: " + ADF(y))       print("out: " + ADM(y))
    end                           end

  After expansion:              After expansion:
    main_program                  main_program
    begin                         begin
      variable temp1, temp2         variable temp
      temp1 = y()                   temp = y() + y()
      temp2 = temp1 + temp1
      print("out: " + temp2)        print("out: " + temp)
    end                           end
]
The ideas of automatically defined macros and automatically defined functions are very
close to each other. Automatically defined macros are likely to be useful in scenarios where
context-sensitive or side-effect-producing operators play important roles [2566, 2567]. In
other scenarios, there is not much difference between the application of ADFs and ADMs.
Finally, it should be mentioned that the concepts of automatically defined functions and
macros are not restricted to the standard tree genomes but are also applicable in other forms
of Genetic Programming, such as linear Genetic Programming (see Section 31.4) or PADO
(see ?? on page ??).
31.4.1 Introduction
In the beginning of this chapter, it was stated that one of the two meanings of Genetic
Programming is the automated, evolutionary synthesis of programs that solve a given set of
problems. It was discussed that tree genomes are suitable to encode programs or formulas
and how the genetic operators can be applied to them.
Nevertheless, trees are not the only way of representing programs. As a matter of fact, a
computer processes programs as sequences of simple machine instructions instead.
These sequences may contain branches in the form of jumps to other places in the code.
Every possible flowchart describing the behavior of a program can be translated into such a
sequence. It is therefore only natural that the first approach to automated program gener-
ation developed by Friedberg [995] at the end of the 1950s used a fixed-length instruction
sequence genome [995, 996]. The area of Genetic Programming focused on such instruction
string genomes is called linear Genetic Programming (LGP).
In linear Genetic Programming, instruction strings (or bit/integer strings encoding them)
are the center of the evolution and contain the program code directly. This distinguishes it
from many grammar-guided Genetic Programming approaches like Grammatical Evolution
(see ?? on page ??), where strings are just genotypic, intermediate representations that
encode the program trees. Some of the most important early contributions to LGP come
from [2199]:
1. Banzhaf [205], who used a genotype-phenotype mapping with repair mechanisms to
translate a bit string to nested simple arithmetic instructions in 1993,
2. Perkis [2164] (1994), whose stack-based GP evaluated arithmetic expressions in Re-
verse Polish Notation (RPN),
3. Openshaw and Turton [2086] (1994), who also used Perkis's approach but had already
represented mathematical equations as fixed-length bit strings back in the 1980s [2085],
and
4. Crepeau [653], who developed a machine code GP system around an emulator for the
Z80 processor.
Besides the methods discussed in this section (and especially in Section 31.4.3), other inter-
esting approaches to linear Genetic Programming are the LGP variants developed by Eklund
[870] and Leung et al. [536, 1715] on specialized hardware, the commercial system by Foster
[976], and the MicroGP (µGP) system for test program induction by Corno et al. [642, 2592].
The advantage of linear Genetic Programming lies in the straightforward evaluation of the
evolved algorithms. Like a normal computer program, a (simulated) processor starts at the
beginning of the instruction string. It reads the first instruction and its operands from the
string and carries it out. Then, it moves to the second instruction and so on. The size of
the status information needed by an interpreter for tree-based programs is in O(d) (where d
is the depth of the tree), because a recursive depth-first traversal of the trees is required.
For processing a linear instruction sequence, the required status information is in O(1), since
only the index of the currently executed instruction needs to be stored.
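The evaluation loop can be made concrete with a toy register machine. In this Python sketch, the instruction set and encoding are invented for illustration; note that the only interpreter state carried between instructions is the single index ip, and a step counter trivially bounds the runtime:

```python
def run(program, registers, max_steps=1000):
    # program: list of (opcode, a, b, c) tuples operating on a register file
    ip = 0       # instruction pointer: the O(1) interpreter state
    steps = 0
    while 0 <= ip < len(program) and steps < max_steps:
        steps += 1
        op, a, b, c = program[ip]
        if op == "add":
            registers[c] = registers[a] + registers[b]
        elif op == "sub":
            registers[c] = registers[a] - registers[b]
        elif op == "mul":
            registers[c] = registers[a] * registers[b]
        elif op == "jgt" and registers[a] > registers[b]:
            ip += c      # conditional jump by offset c
            continue
        ip += 1
    return registers
```

The `max_steps` bound is one simple way to "limit the runtime in the program evaluation" even when evolved jump instructions form endless loops.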
Because of this structure, it is also easy to limit the runtime in the program evalua-
tion and even to simulate parallelism. Like Standard Genetic Programming, LGP is most
often applied with a simple function set comprised of primitive instructions and some math-
ematical operators. In SGP, such an instruction set can be extended with alternatives
(if-then-else) and loops. Similarly, in linear Genetic Programming jump instructions may
be added to allow for the evolution of a more complex control flow.
Then, the drawback arises that simply reusing the genetic operators for variable-length
string genomes (discussed in Section 29.4 on page 339), which randomly insert, delete, or
toggle bits, is no longer feasible: jump instructions are highly epistatic (see Chapter 17 and
??).
One approach to increase the locality in linear Genetic Programming with jumps is to use
intelligent mutation and crossover operators which preserve the control flow of the program
when inserting or deleting instructions. Such operations could, for instance, analyze the
program structure and automatically correct jump targets. Operations which are restricted
to have only minimal effect on the control flow from the start can also easily be introduced.
In Section 31.4.3.4, we shortly outline some of the work of Brameier and Banzhaf [391],
who define some interesting approaches to this issue. Section 31.4.4.1 discusses the homol-
ogous crossover operation which represents another method for decreasing the destructive
effects of reproduction in LGP.
One of the early linear Genetic Programming approaches was developed by Nordin [2044],
who was dissatisfied with the performance of GP systems written in an interpreted language
which, in turn, interpret the programs evolved using a tree-shaped genome. In 1994, he
published his work on a new Compiling Genetic Programming System (CGPS) written in the
C programming language8 [1525] directly manipulating individuals represented as machine
code.
CGPS evolves linear programs which accept some input data and produce output data
as result. Each candidate solution in CGPS consists of (a) a prologue for shoveling the
input data from the stack into registers, (b) a set of instructions for information process-
ing, (c) and an epilogue for terminating the function [2045]. The prologue and epilogue
were never modified by the genetic operations. As instructions for the middle part, the
8 http://en.wikipedia.org/wiki/C_(programming_language) [accessed 2008-09-16]
Genetic Programming system had arithmetical operations and bit-shift operators at its dis-
posal in [2044], but no control flow manipulation primitives like jumps or procedure calls.
These were added in [2046] along with ADFs, making this LGP approach Turing-complete.
Nordin [2044] used the classification of Swedish words as task in the first experiments
with this new system. He found that it had approximately the same capability for grow-
ing classifiers as artificial neural networks but performed much faster. Another interesting
application of his system was the compression of images and audio data [2047].
CGPS originally evolved code for Sun SPARC processors, which belong to the RISC9
processor class. This had the advantage that all instructions have the same size. In the Au-
tomatic Induction of Machine Code with GP system (AIM-GP, AIMGP), the successor of
CGPS, the support for multiple other architectures was added by Nordin, Banzhaf, and
Francone [2050, 2053], including Java bytecode10 and CISC11 CPUs with variable instruc-
tion widths such as Intel 80x86 processors. A new interesting application for linear Genetic
Programming tackled with AIMGP is the evolution of robot behavior such as obstacle avoid-
ing and wall following [2052].
Besides AIMGP, there exist numerous other approaches to the evolution of linear Java
bytecode functions. The Java Bytecode Genetic Programming system (JBGP, also Java
Method Evolver, JME) by Lukschandl et al. [1792–1795] is written in Java. A genotype
in JBGP contains the maximum allowed stack depth together with a linear list of instruc-
tion descriptors. Each instruction descriptor holds information such as the corresponding
bytecode and a branch offset. The genotypes are transformed with a genotype-phenotype
mapping into methods of a Java class which then can be loaded into the JVM, executed,
and evaluated [1206, 1207].
In the JAPHET system of Klahold et al. [1547], the user provides an initial Java class
at startup. Classes are divided into a static and a dynamic part. The static parts contain
things like version information and are not affected by the reproduction operations. The dynamic
parts, containing the methods, are modified by the genetic operations which add new byte
code [1206, 1207].
Harvey et al. [1206, 1207] introduce byte code GP (bcGP), where the whole population
of each generation is represented by one class file. Like in AIMGP, each individual is a linear
sequence of Java bytecode and is surrounded by a prologue and epilogue. Furthermore, by
adding buffer space, each individual has the same size and, thus, the whole population can
be kept inside a byte array of a fixed size, too.
In the Genetic Programming system developed by Brameier and Banzhaf [391] based on
former experience with AIMGP, an individual is represented as a linear sequence of simple
C instructions as outlined in the example Listing 31.1 (a slightly modified version of the
example given in [391]). Due to reproduction operations such as mutation and crossover,
linearly encoded programs may contain introns, i. e., instructions not influencing the result
(see Definition D29.1 and ??). Given that the program defined in Listing 31.1 stores its
outputs in the registers v[0] and v[1], all the lines marked with (I) do not contribute to
the overall functional fitness.
9 http://en.wikipedia.org/wiki/Reduced_Instruction_Set_Computing [accessed 2008-09-16]
10 http://en.wikipedia.org/wiki/Bytecode [accessed 2008-09-16]
11 http://en.wikipedia.org/wiki/Complex_Instruction_Set_Computing [accessed 2008-09-16]
31.4. LINEAR GENETIC PROGRAMMING 407
void ind(double v[8]) {
  ...
  v[0] = v[5] + 73;
  v[7] = v[0] - 59;      /* (I) */
  if (v[1] > 0)
    if (v[5] > 23)
      v[4] = v[2] * v[1];
  v[2] = v[5] + v[4];    /* (I) */
  v[6] = v[0] * 25;      /* (I) */
  v[6] = v[4] - 4;
  v[1] = sin(v[6]);
  if (v[0] > v[1])       /* (I) */
    v[3] = v[5] * v[5];  /* (I) */
  v[7] = v[6] * 2;
  v[5] = v[7] + 115;     /* (I) */
  if (v[1] <= v[6])
    v[1] = sin(v[7]);
}
Listing 31.1: A genotype of an individual in Brameier and Banzhaf’s LGP system.
Brameier and Banzhaf [391] introduce an algorithm which removes these introns during
the genotype-phenotype mapping, before the fitness evaluation. Basically, this approach can
be considered as an editing operation (see Section 31.3.4.3 on page 396) for linear Genetic
Programming. The GP system based on this approach was successfully tested with several
classification tasks [390–392], function approximation and Boolean function synthesis [393].
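For straight-line code without branches, the intron detection underlying such an editing step reduces to a backward scan over the registers that can still influence the outputs. This is a simplified Python sketch; the real algorithm of Brameier and Banzhaf additionally has to handle conditional branches:

```python
def effective_instructions(program, outputs):
    # program: list of (destination_register, {source_registers}) pairs
    needed = set(outputs)        # registers the output still depends on
    effective = []
    for idx in range(len(program) - 1, -1, -1):
        dest, sources = program[idx]
        if dest in needed:       # this write is observed later -> effective
            effective.append(idx)
            needed.discard(dest)
            needed.update(sources)
        # otherwise the instruction is an intron and can be skipped
    return sorted(effective)
```

Only the effective instructions need to be copied into the phenotype before fitness evaluation, which is what makes this genotype-phenotype mapping cheap.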
In his doctoral dissertation, Brameier [388] elaborates that the control flow of linear
Genetic Programming resembles a graph more than a tree because of jump and call instructions.
In the earlier work of Brameier and Banzhaf [391] mentioned just a few lines ago, introns were
only excluded by the genotype-phenotype mapping but preserved in the genotypes because
they were expected to make the programs robust against variations. In [388], Brameier
concludes that such implicit introns representing unreachable or ineffective code have no
real protective effect but reduce the efficiency of the reproduction operations and, thus,
should be avoided or at least minimized by them. Instead, the concept of explicitly defined
introns (EDIs) proposed by Nordin et al. [2051] is utilized in form of something like nop
instructions in order to decrease the destructive effect of crossover. Brameier finds that
introducing EDIs decreases the proportion of introns arising from unreachable or ineffective
code and leads to better results. In comparison with standard tree-based GP, his linear
Genetic Programming approach performed better during experiments with classification,
regression, and Boolean function evolution benchmarks.
31.4.4 Recombination
In Section 29.3.4.6 on page 338, we discussed homologous crossover for Genetic Algorithms.
Normal crossover operations like MPX may have a disruptive effect on building blocks. The
idea of homologous crossover is that, similar to nature [209], only genes that
express the same functionality and are located at the same loci are exchanged between
parental genotypes. It is assumed that such a recombination operator would have a lower
disruptiveness than plain MPX.
The programs which evolve in linear Genetic Programming are encoded in string chro-
mosomes, usually of variable length. Hence, a homologous crossover mechanism could here
be beneficial as well.
Francone et al. [982, 2050] introduce a sticky crossover operator which resembles ho-
mology by allowing the exchange of instructions between two genotypes (programs) only if
they reside at the same loci. In sticky crossover, first a sequence of code in the first parent
genotype is chosen. This sequence is then swapped with the sequence at exactly the same
position in the second parent.
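In code, sticky crossover amounts to swapping one identically-positioned slice between the two instruction lists. This Python sketch treats programs as lists of instructions; the way the segment boundaries are drawn is an assumption:

```python
import random

def sticky_crossover(p1, p2):
    # exchange a code segment between two programs, but only at the same loci
    n = min(len(p1), len(p2))
    start = random.randrange(n)
    end = random.randint(start + 1, n)
    c1 = p1[:start] + p2[start:end] + p1[end:]
    c2 = p2[:start] + p1[start:end] + p2[end:]
    return c1, c2
```

Because the exchanged segments share the same position and length, each offspring keeps its parent's length, and no instruction moves to a different locus.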
An approach similar to Francone et al.’s sticky crossover is the Page-based linear Genetic
Programming by Heywood and Zincir-Heywood [1229]. Here, programs are described as
sequences of pages. Each page includes the same number z of instructions.
Crossover exchanges exactly a single page between the parents but, different from the
homologous operators, is not locally bound. In other words, the third page of the first
parent may be swapped with the fifth page of the second parent. Since exactly one page is
exchanged, the length of the programs remains the same and the number of their instructions
does not change. This way of crossover is useful to identify and spread building blocks and
thus, is less destructive.
Heywood and Zincir-Heywood [1229] also devise a dynamic page size adaptation method
for their crossover operator. Instead of keeping the page size z constant throughout the
evolution, an upper limit z̆ for this size is defined. The Genetic Programming process starts
with a working page size of z = 1. If a fitness plateau is reached, i. e., no further improvement
of the best fitness can be found for some time, z is increased to the next divisor of z̆. In
other words, if z̆ = 12, z would go through 1, 2, 3, 4, 6, and 12. If z = z̆ and the GP process
stagnates again, this iteration starts all over again at z = 1. The idea of such an approach
is that initially, the building blocks in the system are likely to be simple and small. Over
time, longer useful code sequences will be discovered, calling for a larger page size in order
to be preserved.
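The page size schedule described above can be written as a small generator (a Python sketch; the name is invented for illustration):

```python
def page_size_schedule(z_max):
    # cycle through the divisors of z_max in increasing order: on each
    # fitness plateau, the GP process advances to the next page size
    divisors = [d for d in range(1, z_max + 1) if z_max % d == 0]
    while True:
        for d in divisors:
            yield d
```

Restricting the schedule to divisors of z̆ keeps the program length an exact multiple of the current page size, so crossover always cuts at page boundaries.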
31.4.5.2 Books
1. Advances in Genetic Programming [2569]
2. Genetic Programming: An Introduction – On the Automatic Evolution of Computer Pro-
grams and Its Applications [209]
3. Linear Genetic Programming [394]
4. Evolutionary Program Induction of Binary Machine Code and its Applications [2045]
31.5.1.2 Books
1. Data Mining Using Grammar Based Genetic Programming and Applications [2969]
91. Use Genetic Programming to perform symbolic regression (see Section 49.1 on page 531
and Example E31.1 on page 387) on the data given in Listing 31.2. Which mathemat-
ical function φ with b = φ(a) relates the values of a to the values of b? You can use
the files from Section 57.2.1.1 on page 905 and the other sources provided with this
book to construct your optimization process.
a b
=============================================
0.008830073673861905 0.008752332965198615
0.018694563401923325 0.018347254459257438
0.05645469964408556 0.05332752398796371
0.0766136391753689 0.0708938021131889
0.14129105849778467 0.12226631044115759
0.34977791509449296 0.241542619578675
0.35201052061616 0.24247840720311337
0.36438318566633743 0.2475456173823204
0.3893919925542615 0.2571847111870792
0.45985453110101393 0.2802156496978409
0.5294420401690347 0.29744194372200355
0.545693694874572 0.30073566724237905
0.5794622522099054 0.30675072726587754
0.5854806945538868 0.3077087939606112
0.587999637880455 0.30809978088344814
0.6134151841062928 0.31172078461868413
0.6305855626214137 0.3138417822317378
0.6451020098521425 0.315436888525947
0.6595800013242682 0.3168517955439009
0.6622167712763556 0.3170909127233424
0.7470926929713728 0.32191168989382
0.7513516563209482 0.3220146762426389
0.7571482262199194 0.32213477030839227
0.760834074066516 0.32219920416927167
0.799102390437177 0.32233694519750067
0.8823767703419055 0.31955612939384237
0.8839162433815917 0.31946827608018985
0.9278045504520468 0.3164575071414033
0.9482259113015682 0.3147394303666216
0.9483969161261779 0.31472423502162195
Listing 31.2: The data for Task 91
[50 points]
92. Use any of the optimization algorithms for real-valued problem spaces to match the
polynomial g5 a^5 + g4 a^4 + g3 a^3 + g2 a^2 + g1 a + g0 to the data given in Listing 31.2
from Task 91. Optimize the vector g = (g0 , g1 , . . . , g5 )^T by considering it as a genotype
in the six-dimensional search space G ⊆ R^6. Use an objective function which could
be roughly similar to Listing 57.20 on page 908. Do not use any tree-related data
structures.
[50 points]
93. Do the same as in Task 92, but instead of the polynomial, try to fit the function
g2 a^2 + g4 a + sin(g3 a + g2 ) + g1 ∗ e^(a∗g0) .
[20 points]
94. Combine what you have learned from Task 91, 92, and 93: Develop a system which in
a first optimization run synthesizes a formula F via symbolic regression and Genetic
Programming. In a second step, this system should discover all nodes which represent
ephemeral random constants (see Definition D49.3) and optimizes their value with a
numerical optimization procedure, while keeping the structure of the formula F intact.
In other words, first discover a good formula structure, and then adapt it in the best
possible way to the available data. Both steps should be performed by an automatic
procedure and provide you with the final result without needing manual interaction.
Repeat the experiments multiple times. Try to find functions that fit different data
created by yourself. What conclusions can you draw about the result?
[50 points]
95. Replace the Evolutionary Algorithm as the underlying optimization method for Genetic
Programming with Simulated Annealing or a Hill Climber and apply the resulting system
to Task 91 or 94. How does the result change? What are your conclusions about
GP, especially in terms of the binary reproduction operation crossover (which is not
used in Simulated Annealing or Hill Climbing)?
[20 points]
96. Instead of a tree-based representation for programs and formulas, as used in Task 91
to 95, implement a linear Genetic Programming (LGP) system which provides mathe-
matical operations in a style similar to assembler code. Repeat the experiment defined
in Task 91 with this LGP system and compare the performance with the tree-based,
Standard Genetic Programming approach.
[70 points]
Chapter 32
Evolutionary Programming
32.1 General Information on Evolutionary Programming
32.1.2 Books
1. Evolutionary Computation: The Fossil Record [939]
2. Artificial Intelligence through Simulated Evolution [945]
3. Blondie24: Playing at the Edge of AI [940]
4. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
5. Genetic Algorithms + Data Structures = Evolution Programs [1886]
32.1.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Chapter 33
Differential Evolution
33.1 Introduction
The essential idea behind Differential Evolution is the way in which the (ternary) recombina-
tion operator “recombineDE” is defined for creating new genotypes. It takes three parental
individuals g1 , g2 , and g3 as parameters. The difference between g1 and g2 in the search
space G is computed, weighted with a weight F ∈ R and added to the third parent g3 .
The algorithm is completely self-organizing. This classical reproduction strategy has been
complemented with new ideas like triangle mutation and alternations with weighted directed
strategies. Gao and Wang [1017] emphasize the close similarities between the reproduction
operators of Differential Evolution and the search step of the Downhill Simplex. Thus, it
is only logical to combine or to compare the two methods as we show in ?? on page ??.
Further improvements to the basic Differential Evolution scheme have been contributed, for
instance, by Kaelo and Ali [1475–1477]. Their DERL and DELB algorithms outperformed
standard DE on the test benchmark compiled by Ali et al. [72].
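The ternary recombination step described above can be sketched in Java as follows. The class and method names are ours, not those of Listing 56.19, and full DE variants additionally cross the resulting vector with a target vector under a crossover rate CR, which this sketch omits.

```java
/** Sketch of the classical ternary DE recombination: the weighted
    difference of two parents is added to a third parent. */
public class DERecombine {
  public static double[] recombineDE(double[] g1, double[] g2, double[] g3,
                                     double F) {
    double[] child = new double[g3.length];
    for (int i = 0; i < child.length; i++)
      child[i] = g3[i] + F * (g1[i] - g2[i]); // difference vector, weighted by F
    return child;
  }
}
```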
33.3 General Information on Differential Evolution
33.3.2 Books
1. Differential Evolution – A Practical Approach to Global Optimization [2220]
2. Differential Evolution – In Search of Solutions [903]
33.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T33
97. In Paragraph 56.2.1.1.4 on page 794, we give the implementation of the ternary search
operations of the simple Differential Evolution algorithm, which, in turn, you can find
implemented in Listing 56.19 on page 771. However, as discussed by Mezura-Montes
et al. [1882], Differential Evolution does not necessarily take only two points in the
search space to compute the difference information. Instead, it may take p point pairs.
In this case, the search operators are no longer ternary but (1 + 2p)-ary. Re-implement
the operators and Differential Evolution algorithm for this general case. You may use
a programming language of your choice.
[50 points]
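The generalized (1 + 2p)-ary operator from Task 97 can be sketched as follows; the parent ordering (base parent first, then the p difference pairs) and all names are our own conventions.

```java
/** Sketch of the generalized (1+2p)-ary DE operator: p weighted
    difference pairs are added to a base parent. parents[0] is the base;
    parents[1..2p] form the difference pairs. */
public class DERecombineP {
  public static double[] recombine(double[][] parents, double F) {
    int n = parents[0].length;
    int p = (parents.length - 1) / 2;    // number of difference pairs
    double[] child = parents[0].clone(); // start from the base parent
    for (int k = 0; k < p; k++)
      for (int i = 0; i < n; i++)
        child[i] += F * (parents[1 + 2 * k][i] - parents[2 + 2 * k][i]);
    return child;
  }
}
```

For p = 1 this reduces to the ternary operator of the simple Differential Evolution algorithm.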
98. In Task 90 on page 377, an Evolution Strategy was applied to an instance of the Trav-
eling Salesman Problem and its results were compared to what we obtain with a simple
Hill Climber that uses the straightforward integer-string permutation representation
discussed in Section 29.3.1.2 on page 329. Repeat this experiment, but now test some
Differential Evolution variants instead of Evolution Strategy. Compare your results
with those obtained with the other two algorithms in Task 90.
[50 points]
Chapter 34
Estimation Of Distribution Algorithms
34.1 Introduction
Traditional Evolutionary Algorithms are based directly on the natural paragon of Darwinian
evolution. The candidate solutions are considered to be equivalent to living organisms in
nature, which compete for survival (selection) and reproduce if successful. Estimation of
Distribution Algorithms1 (EDAs, [126, 1683, 1974, 2153, 2158]) are different. Instead of
improving possible solutions step by step, they try to learn what a perfect solution should
look like.
This is done from a stochastic point of view: Initially, nothing is known about where
the optimal solution is located in the search space, so the probability of each point which
could be investigated to be the optimum is basically the same. With every candidate solution
which has actually been evaluated with the objective functions, more information is gained.
In Genetic Algorithms or Genetic Programming, the population begins to move to certain,
promising regions. From the stochastic point of view, this corresponds to the assumption
that the probability that the optimum may be contained in these regions is estimated to be
high. The spatially denser the population becomes, the higher this probability is assumed to be.
EDAs abandon the biological example of repeated sexual and asexual reproduction [2153]
and instead, try to estimate the probability distribution M directly.
In case of an Estimation of Distribution Algorithm solving a continuous optimization
problem, initially, the probability model M(t = 0) at time step t = 0 may be estimated with
a uniform distribution. Step by step, the variance D2 [M] of the distribution corresponding to
the parameters of M should decrease and, in case that there exists a single, global optimum
x⋆ , the goal is that D2 [M(t)] = 0 and E [M(t)] = x⋆ are approached for t → ∞ (ideally,
earlier) [1197].
In Figure 34.1 we sketch the information held in a population of, for instance, an Evolu-
tion Strategy in Fig. 34.1.a to ?? and the information held in a model of an Estimation of
Distribution Algorithm in Fig. 34.1.d to 34.1.f. In the first row of the figure, from the left
to the right, it can be seen how the population of the ES converges. The model used in a
real-valued EDA may converge similarly. In this case, its parameters may shrink further
and further, thus decreasing the number of likely sampled genotypes.
[Figure 34.1: first row: the converging ES population; second row: the corresponding EDA models, plotted over the model parameters m1 and m2 with standard deviations s1 and s2. Fig. 34.1.d: Model 1, Fig. 34.1.e: Model 3, Fig. 34.1.f: Model 4]
34.2 General Information on Estimation of Distribution Algorithms
34.2.2 Books
1. Scalable Optimization via Probabilistic Modeling – From Algorithms to Applications [2158]
2. Estimation of Distribution Algorithms – A New Tool for Evolutionary Computation [1683]
3. Towards a New Evolutionary Computation – Advances on Estimation of Distribution Algo-
rithms [1782]
4. Hierarchical Bayesian Optimization Algorithm – Toward a New Generation of Evolutionary
Algorithms [2145]
34.2.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
34.3 Algorithm
Maybe the simplest EDA for binary search spaces is the Univariate Marginal Distribution
Algorithm (UMDA) [1973, 1974]. We define it here as “UMDA” in Algorithm 34.3 and
provide a simple example implementation of its modules in Java on top of our basic EDA
(see Listing 56.20) in Paragraph 56.1.4.4.1 on page 781.
The UMDA constructs a model M storing one real number M[i] corresponding to the
estimated probability that the bit at index i in the genotype p⋆ .g belonging to the optimal
individual should be 1.
[Figure: example of one UMDA iteration on bit strings of length 5. Step 1 – selection: the
individuals in the mating pool are g1 = (1, 0, 0, 0, 1), g2 = (1, 0, 1, 1, 1), g3 = (1, 1, 1, 0, 1),
g4 = (1, 0, 1, 1, 0), g5 = (0, 1, 0, 0, 0), g6 = (1, 0, 0, 1, 1), g7 = (1, 0, 1, 1, 1), g8 = (1, 1, 1, 1, 1),
g9 = (0, 1, 1, 1, 0), g10 = (1, 0, 1, 0, 1). Step 2 – model building: per locus, the number of 0s
is (2, 6, 3, 4, 3) and the number of 1s is (8, 4, 7, 6, 7). Step 3 – model sampling: for each locus,
a uniform random number randUni[0,1) is compared against the model probability.]
The model building process Algorithm 34.5 of the UMDA is very simple and you can
find a Java example implementation in Listing 56.22 on page 782. For the bit at index j,
the number c of times it was 1 is counted over all genotypes g stored in the individual records
in the mating pool mate. c divided by the number of individuals in the mating pool hence is
the relative frequency h(p.x[j] = 1, len(mate)) and serves as the estimate of the probability
that the locus j should be 1 in the optimal solution. With the new probability vector, the
cycle starts again until the termination criterion terminationCriterion is met.
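The two UMDA modules described above, model building from relative frequencies and model sampling, can be sketched in Java as follows; the class and method names are ours and do not correspond to the book's listings.

```java
import java.util.Random;

/** Sketch of the UMDA modules: build the model as per-locus relative
    frequencies of 1-bits in the mating pool, and sample genotypes from it. */
public class UMDA {
  /** M[j] = relative frequency of bit j being 1 over the mating pool. */
  public static double[] buildModel(boolean[][] mate) {
    double[] M = new double[mate[0].length];
    for (boolean[] g : mate)
      for (int j = 0; j < M.length; j++)
        if (g[j]) M[j] += 1.0;           // count the 1s per locus
    for (int j = 0; j < M.length; j++)
      M[j] /= mate.length;               // divide by the pool size
    return M;
  }

  /** Sample one genotype: bit j becomes 1 with probability M[j]. */
  public static boolean[] sample(double[] M, Random rnd) {
    boolean[] g = new boolean[M.length];
    for (int j = 0; j < M.length; j++)
      g[j] = rnd.nextDouble() < M[j];    // randUni[0,1) compared to M[j]
    return g;
  }
}
```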
One of the first steps into EDA research was the population-based incremental learning
algorithm2 (PBIL) published in 1994 by Baluja [193], [194, 195]. Here we define it as Algo-
rithm 34.6 in the version given in [195].
The structure of the models in PBIL is exactly the same as in the UMDA: For each
bit in the genotypes, the model stores a real number denoting the estimated probability that
that bit should be 1 in the optimal genotype. The model is therefore sampled like in the
UMDA and the algorithm itself also proceeds pretty much in the same way.
The main difference is the model building step presented as Algorithm 34.7. Here, a new
model M is created by merging the current model M′ and the information provided by the
mating pool mate. The learning rate λ defines how much each bit in the genotypes g of the
selected individuals p ∈ mate contributes to the new model. In PBIL, truncation selection
is used, which allows only the best mps individuals to enter the mating pool.
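One common way to read this merging step is sketched below in Java: the new model is a convex combination, controlled by the learning rate λ, of the old model and the per-locus relative frequency of 1-bits in the mating pool. This folding of the per-individual contributions into a frequency is our own formulation, not a transcript of Algorithm 34.7, and all names are illustrative.

```java
/** Sketch of a PBIL-style model update: blend the old model with the
    bit frequencies of the selected individuals using learning rate lambda. */
public class PBILUpdate {
  public static double[] update(double[] oldM, boolean[][] mate,
                                double lambda) {
    double[] M = new double[oldM.length];
    for (int j = 0; j < M.length; j++) {
      double h = 0.0; // relative frequency of 1 at locus j in the mating pool
      for (boolean[] g : mate)
        if (g[j]) h += 1.0;
      h /= mate.length;
      M[j] = (1.0 - lambda) * oldM[j] + lambda * h; // convex combination
    }
    return M;
  }
}
```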
It should be noted that in the original version of the algorithm given in [193], a simple
mutation was applied to the models, too. Here, the single probabilities were randomly
2 http://en.wikipedia.org/wiki/Population-based_incremental_learning [accessed 2009-08-20]
changed by a very small number. Also, the mating pool size was one, i. e., only the best
individual had influence on M. Similar algorithms have been introduced by Kvasnicka et al.
[1652] (Hill Climbing with Learning, HCwL) and Mühlenbein [1973] (Incremental Univariate
Marginal Distribution Algorithm, IUMDA).
The compact Genetic Algorithm (cGA) by Harik et al. [1198, 1199] again uses a vector of
(real-valued) probabilities to represent the estimation of the solutions with the best features
according to the objective function f. In Algorithm 34.8 we provide an outline of this
algorithm which modifies this vector M so that there is a direct relation between it and the
population it represents [2153]. The idea of the cGA is the following: Assume that we have
a normal Genetic Algorithm and two individuals p1 and p2 compete in a binary tournament.
In the single-objective case, usually one of them will win and reproduce whereas the other
vanishes. Assume further that p1 prevails in the competition, i. e., p1 .x ≺ p2 .x, because
f(p1 .x) < f(p2 .x) in case of minimization, for instance. Then, the absolute frequency of its
corresponding genotype p1 .g in the population will increase by one. For a population of size
ps, this means that its relative frequency increases by 1/ps. This obviously also holds for every
single gene in p1 .g, since the individual is copied as a whole.
The cGA tries to emulate this situation by creating exactly two new genotypes
in each iteration by using the same sampling method already utilized in the UMDA
(sampleModelUMDA(M, 2), see Algorithm 34.4). These two individuals then compete with
each other and the genes of the winner then are used to update the model M as sketched in
Algorithm 34.9.
The optimization process has converged if all elements of the model M have become either 1
or 0. Then, only genotypes which exactly equal the model can be sampled and the vector
will not change anymore. Hence, as termination criterion “terminationCriterionCGA” we
simply need to check whether this convergence already has taken place or not.
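The model update and the convergence check described above can be sketched together in Java; the method names are ours, and the clamping of the probabilities to [0, 1] is our own defensive choice.

```java
/** Sketch of the cGA model update: shift M by 1/ps towards the winner of
    a binary tournament wherever winner and loser genotypes differ, and
    report whether the model has converged to a pure bit vector. */
public class CGA {
  /** Updates M in place; returns true once all entries are 0 or 1. */
  public static boolean step(double[] M, boolean[] winner, boolean[] loser,
                             int ps) {
    for (int j = 0; j < M.length; j++) {
      if (winner[j] != loser[j]) {            // only disagreeing loci change
        M[j] += winner[j] ? 1.0 / ps : -1.0 / ps;
        if (M[j] < 0.0) M[j] = 0.0;           // keep probabilities in [0,1]
        if (M[j] > 1.0) M[j] = 1.0;
      }
    }
    boolean converged = true;
    for (double m : M)
      if (m != 0.0 && m != 1.0) converged = false;
    return converged;
  }
}
```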
The result of the optimization process is the phenotype gpm(g) corresponding to the
genotype g ≡ M defined by the model M in the converged state. The trivial way to produce
this element is specified here as Algorithm 34.11.
Further theoretical results concerning the efficiency of the cGA can be found in [833,
834, 2271]. Because of its compact structure – only one real vector and two bit strings need
to be held in memory – the cGA is especially suitable for implementation in hardware [117].
34.5 EDAs Searching Real Vectors
A variety of Estimation of Distribution Algorithms has been developed for finding optimal
real vectors from R^n, where n is the number of elements of the genotypes. Mühlenbein
et al. [1976], for instance, introduce one of the first real-valued EDAs and in the following,
we discuss multiple such approaches.
The Stochastic Hill Climbing with Learning by Vectors of Normal Distribution (SHCLVND)
by Rudlof and Köppen [2346] is a kind of real-valued version of the PBIL. For search spaces
G ⊆ Rn , it builds models M which consist of a vector of expected values M.µ and a vector
of standard deviations M.σ. We introduce it here as Algorithm 34.12.
Initially, the expected value vector M.µ is set to the point right in the middle of the search
space which is delimited by the vector of minimum values Ǧ and the vector of maximum
values Ĝ. Notice that these limits are soft and genotypes may be created which lie outside
the limits (due to the unconstrained normal distribution used for sampling). The vector of
standard deviations M.σ is initialized by multiplying a constant rangeToStdDev with the
range spanned by Ĝ − Ǧ. The constant rangeToStdDev is set to rangeToStdDev = 0.5 in
the original paper by Rudlof and Köppen [2346].
Algorithm 34.13 illustrates how the models M = (µ, σ) are sampled in order to create
new points in the search space. Therefore, one univariate normally distributed random
number randomNorm(µ[j], σ[j]) is created for each locus j of the genotypes according to
the value µ[j] denoting the expected location of the optimum in the jth dimension and the
prescribed standard deviation σ [j].
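This sampling step can be sketched in Java as follows; the class name is illustrative.

```java
import java.util.Random;

/** Sketch of the SHCLVND model sampling: one normally distributed random
    number per locus, using the model's means mu and deviations sigma. */
public class SHCLVNDSample {
  public static double[] sample(double[] mu, double[] sigma, Random rnd) {
    double[] g = new double[mu.length];
    for (int j = 0; j < g.length; j++)
      g[j] = mu[j] + sigma[j] * rnd.nextGaussian(); // randomNorm(mu[j], sigma[j])
    return g;
  }
}
```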
Sebag and Ducoulombier [2443] suggest a similar extension of the PBIL to continuous search
spaces: PBILC . For updating the center vector M.µ of the estimated distribution, they
consider the genotypes gb,1 and gb,2 of the two best and the genotype gw of the worst
individual discovered in the current generation. These vectors are combined in a way similar
to the reproduction operator used in Differential Evolution [2220, 2621] and here listed in
Equation 34.2.
M.µ = (1 − λ) ∗ M′ .µ + λ ∗ (gb,1 + gb,2 − gw ) (34.2)
For updating M.σ, they suggest four possibilities:
1. Setting it to a constant value would, however, prevent the search from becoming very
specific or would slow it down significantly, depending on the value,
3. to set each of its elements to the variance found in the corresponding loci of the K
best genotypes of the current generation,
4. or to adjust each of its elements to the variance found in the corresponding loci of the
K best genotypes of the current generation using the same learning mechanism as for
the center vector.
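The center update of Equation 34.2 can be sketched directly in Java; the method and parameter names are our own.

```java
/** Sketch of the PBILc center update (Equation 34.2): the mean vector
    moves towards gb1 + gb2 - gw, i.e., towards the two best genotypes
    and away from the worst one of the current generation. */
public class PBILcUpdate {
  public static double[] updateMu(double[] oldMu, double[] gb1,
                                  double[] gb2, double[] gw, double lambda) {
    double[] mu = new double[oldMu.length];
    for (int j = 0; j < mu.length; j++)
      mu[j] = (1.0 - lambda) * oldMu[j]
            + lambda * (gb1[j] + gb2[j] - gw[j]); // cf. Equation 34.2
    return mu;
  }
}
```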
The real-coded PBIL by Servet et al. [2455, 2456] follows a different approach. Here, the
model M consists of three vectors: the upper and lower boundary vectors M.Ĝ and M.Ǧ
limiting the current region of interest, and a vector M.P denoting for each locus j the
probability that a good genotype is located in the upper half of the interval [M.Ǧ[j], M.Ĝ[j]]. We
define this real-coded PBIL as Algorithm 34.15 which basically proceeds almost exactly like
the PBIL for binary search spaces (see Algorithm 34.6). In the startup phase, the boundary
vectors M.Ǧ and M.Ĝ of the model M are initialized to the problem-specific boundaries of
the search space and the M.P is set to (0.5, 0.5, . . . ).
The model can be sampled using, for instance, independent uniformly distributed random
numbers in the interval prescribed by [M.Ǧ[j], M.Ĝ[j]] for the jth gene from any of the n
loci, as done in Algorithm 34.16.
After a possible genotype-phenotype mapping, determining the objective values, and a
truncation selection step, the model M is to be updated with Algorithm 34.17. First, the
vector M.P is updated by merging it with the information of whether good traits can be found in
the upper or lower halves of the regions of interest for each locus. In case that the upper
half of one dimension j of the region of interest is very promising, i. e., M.P [j] ≥ 0.9, the
lower boundary M.Ǧ[j] of the region in this dimension is updated towards the middle
(M.Ĝ[j] + M.Ǧ[j])/2 and M.P [j] is reset to 0.5. On the other hand, if the lower half of
dimension j of the region is very interesting and M.P [j] ≤ 0.1, the same adjustment is
done with the upper boundary. In this respect, the real-coded PBIL works similarly to a
(randomized) multi-dimensional binary search.
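The interval-shrinking part of this update can be sketched in Java as follows; the arrays lo and hi stand for M.Ǧ and M.Ĝ, the thresholds 0.9 and 0.1 follow the text, and all names are our own.

```java
/** Sketch of the real-coded PBIL boundary update: if M.P[j] leaves the
    [0.1, 0.9] band, the region of interest shrinks towards its promising
    half and the probability is reset to 0.5. Arrays are updated in place. */
public class RCPBILUpdate {
  public static void update(double[] P, double[] lo, double[] hi) {
    for (int j = 0; j < P.length; j++) {
      double mid = 0.5 * (lo[j] + hi[j]);
      if (P[j] >= 0.9) {        // upper half promising: raise the lower bound
        lo[j] = mid;
        P[j] = 0.5;
      } else if (P[j] <= 0.1) { // lower half promising: lower the upper bound
        hi[j] = mid;
        P[j] = 0.5;
      }
    }
  }
}
```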
34.6 EDAs Searching Trees
Estimation of Distribution Algorithms can not only be used for evolving bit strings or real
vectors, but are also applicable to tree-based search spaces, i. e., for Genetic Programming as
discussed in Chapter 31 and [1602, 2199].
Salustowicz and Schmidhuber [2380, 2381] define such an EDA for learning (program) trees:
PIPE, the Probabilistic Incremental Program Evolution. Here, we provide this GP approach
as Algorithm 34.18. Let us assume a Genetic Programming system would construct trees
consisting of inner nodes, each belonging to one of the types in the function set N , and leaf
nodes, each belonging to one of the types in the terminal set Σ.
The function set could, for instance, be N = {+, −, ∗, /, sin} and the terminal set could
be Σ = {X, R}, where X stands for an input variable of the expression to be evolved and
R is a real constant. Programs are n-ary trees where n is the number of arguments of the
function with the most parameters in N . In the example configuration, n would be two
since, for instance, arityOf (∗) = 2. Let V = N ∪ Σ be the set of all possible node types.
Salustowicz and Schmidhuber [2380, 2381] represent the probability distribution of pos-
sible trees with a so-called Probabilistic Prototype Tree (PPT). Each node N of such a tree
consists of a vector function N .P : V 7→ [0, 1] which assigns a probability N .P (i) to each
node type i such that ∑i∈V N .P (i) = 1 holds for all tree nodes. Furthermore, each node
also contains one value N .R ∈ [0, 1] which stands for the constant value that the termi-
nal R takes on if it was selected. Each inner node N of a prototype tree has n children
N .children[0] to N .children[n − 1] which again are prototype tree nodes. The models M
are the complete prototype trees or, more precisely, their roots which, due to the children
structure N .children, hold the complete tree data.
The initial prototype tree consists only of a single node as created by the function
“newModelNodePIPE” in Algorithm 34.19. Salustowicz and Schmidhuber [2381] prescribe
a start probability PΣ for creating terminal symbols which they set to PΣ = 0.8 in their
experiments. All the |Σ| terminal symbols are given the same probability for creation and the
remaining share of the probability (1 − PΣ ) is evenly distributed amongst the |N | function
node types. The random constant N .R is set to a value uniformly distributed in [0, 1].
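This initialization can be sketched in Java as follows; we flatten the probability vector into a plain array (terminal types first, function types after, the constant R in the last slot), which is our own representation and not the book's.

```java
import java.util.Random;

/** Sketch of the PIPE prototype-node initialization: terminals share the
    start probability pSigma evenly, function node types share the
    remainder, and the constant R is drawn uniformly from [0, 1). */
public class PIPEInit {
  /** Returns probabilities for nTerm terminal and nFunc function types,
      terminals first; probs[nTerm + nFunc] holds the constant R. */
  public static double[] newNode(int nTerm, int nFunc, double pSigma,
                                 Random rnd) {
    double[] probs = new double[nTerm + nFunc + 1];
    for (int i = 0; i < nTerm; i++)
      probs[i] = pSigma / nTerm;                 // e.g. 0.8 split over Sigma
    for (int i = 0; i < nFunc; i++)
      probs[nTerm + i] = (1.0 - pSigma) / nFunc; // remainder split over N
    probs[nTerm + nFunc] = rnd.nextDouble();     // the ERC value N.R
    return probs;
  }
}
```

With the example sets Σ = {X, R} and N = {+, −, ∗, /, sin} and PΣ = 0.8, each terminal receives probability 0.4 and each function type 0.04.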
A new population pop is created by sampling the prototype tree as prescribed in Algo-
rithm 34.20. The method uses a procedure “buildTreePIPE” defined in Algorithm 34.21
which recursively builds and attaches child nodes to a new genotype. As previously pointed
out, the model tree M initially only consists of a single node. In PIPE, model nodes are
dynamically attached when required. This is done by “buildTreePIPE”, which therefore
returns a tuple (N, t) consisting of the updated node N of the prototype tree and the new
node t which was inserted in the genotype. In the top-level called instance of the function,
N corresponds to the updated model M and t is the genotype p.g of a new individual.
“buildTreePIPE” works as follows: First, a node type i is chosen from both the function
set N and the terminal set Σ according to the probability distribution defined in the input
prototype tree node N ′ which, in the top-level instance of the method, corresponds to the
current model. The selected type i is then instantiated as new node t to be inserted into
the genotype.
If i happens to be the type for real constants (which is a problem-specific node
type and may be left out in implementations for other domains), the constant value t.val
is either set to a random value uniformly distributed in [0, 1) or to the constant value N .R
held in the prototype node, depending on whether the probability of choosing the type R
surpassed a given threshold PR . Salustowicz and Schmidhuber [2381] set PR to 0.3 in their
experiments.
If t is a function node, all necessary child nodes are created by recursively calling the
same procedure. In the case that for a required child node no corresponding node exists in
the prototype tree, this node is again created via Algorithm 34.19.
Updating the model in PIPE means to adapt it towards the direction of the genotype of
the best individual p⋆ found so far. This is done by first computing the recursively defined
probability P ( p⋆ .g| M) that this individual would occur given the current model M, as noted
in Equation 34.3.
P (t |N ) = N .P (typeOf (t)) ∗ ∏_{i=0}^{numChildren(t)−1} P (t.children[i] |N .children[i]) if numChildren(t) > 0,
and P (t |N ) = N .P (typeOf (t)) otherwise. (34.3)

Pgoal ( p.g| M, x) = P ( p.g| M) + (1 − P ( p.g| M)) ∗ λ ∗ (ε + f(x)) / (ε + f(gpm(p.g))) (34.4)
Based on a learning rate λ, the target probability Pgoal ( p⋆ .g| M, x) that this tree should
receive after the model was updated is derived. Here, the objective values of the best
candidate solution x discovered so far are also taken into consideration. The relation of the
fitness of best individual p⋆ of the current generation and this elitist fitness is weighted by a
factor ε. In their experiments, Salustowicz and Schmidhuber [2381] set λ = 0.01 and ε = 1.
In Algorithm 34.22, we show how a probability-increasing function “increasePropPIPE”
(Algorithm 34.23) for the best tree of the current generation is repeatedly invoked in order
to shift its probability towards the target probability Pgoal . This function again works
recursively. It also sets the constant values stored in the prototype tree to the constant values
used in the best tree and ensures that all probabilities remain normalized. A constant c (set
to 0.1 in the original work) is used to trade-off between fast increase of the tree probabilities
and the precision in reaching Pgoal .
The prototype trees are pruned with Algorithm 34.24 after learning, which removes paths
that are not reached when expanding a prototype node with a probability of at least PP ,
set to 0.999 999 in [2381].
Algorithm 34.18 performs either elitist learning towards the best individual p⋆ discovered
during all generations with probability PE (0.2 in the original paper) or normal learning.
Also, the original work [2381] features a mutation operator which changes the probabilities
in the prototype trees with a given probability (and normalizes them again).
34.7 Difficulties
34.7.1 Diversity
A very simple way for increasing the diversity in EDAs is mutating the model itself from
time to time. By randomly modifying its parameters, new individuals will be sampled a bit
more distant from the configurations which are currently assumed to perform best. If the
newly explored parts of the search space are inferior, chances are that the EDA finds back
the previous parameters. If they are better, on the other hand, the EDA managed to escape
a local optimum. This approach is relatively old and has already been applied in PBIL [193]
and PIPE [2380, 2381].
34.7.1.2 Sampling Mutation
But diverse individuals can also be created without permanently mutating the model. In-
stead, a model mutation may be applied which only affects the sampling of one single
genotype and is reverted thereafter, i. e., which has very temporary characteristics. This
way, the danger that a good model may get lost is circumvented.
Such an operator is the Sampling Mutation by Wallin and Ryan [2843, 2844] who tem-
porarily invert the probability distribution describing one variable of the model. In a
bit-string model, for instance, we could temporarily set the ith element of the model vector M to
1 − M[i] during the sampling of one individual.
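This temporary inversion can be sketched in Java as follows; the names are our own, and the operator leaves the model itself untouched.

```java
import java.util.Random;

/** Sketch of the sampling mutation: one model element is inverted only
    for the sampling of a single genotype; the model M stays unchanged. */
public class SamplingMutation {
  public static boolean[] sample(double[] M, int invertIdx, Random rnd) {
    boolean[] g = new boolean[M.length];
    for (int j = 0; j < M.length; j++) {
      double p = (j == invertIdx) ? 1.0 - M[j] : M[j]; // temporary inversion
      g[j] = rnd.nextDouble() < p;
    }
    return g; // M is untouched afterwards
  }
}
```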
Wallin and Ryan [2843] further combine such fitness clustering with sampling mutation.
Pelikan et al. [2157], Sastry and Goldberg [2398] extend the method for multiple objective
functions. They define a multi-objective version of their hBOA algorithm as a combination
of hBOA, NSGA-II, and clustering in the objective space.
Lu and Yao [1784] introduce a basic multi-model EDA scheme for real-valued optimiza-
tion. This scheme utilizes clustering in a way similar to our work presented here. However,
they focus mainly on numerical optimization whereas we introduce a general framework
together with one possible instantiation for numerical optimization. Platel et al. [2187] pro-
pose a quantum-inspired evolutionary algorithm which is an EDA for bit-string based search
spaces. This algorithm utilizes a structured population, similar to the use of demes in EAs,
which makes it a multi-model EDA. Gallagher et al. [1009] extend the PBIL algorithm to
real-valued optimization by using an Adaptive Gaussian mixture model density estimator.
This approach can deal with multi-modal problems but is also more complex than multi-
model algorithms which utilize clustering.
Niemczyk and Weise [2038] finally introduce a general multi-model EDA which unites
research on Estimation of Distribution Algorithms and Evolutionary Algorithms: They sug-
gest dividing the population into clusters and deriving one model for each of them. These models
may then be recombined and sampled. In the case that one model is created for each se-
lected individual and additional models result from recombination, the algorithm behaves
exactly like a standard EA. If, on the other hand, only one single cluster is created and no
model recombination is performed, the algorithm equals a traditional EDA. Niemczyk and
Weise [2038] point out that this scheme applies to arbitrary search and problem spaces and
clustering methods. They instantiate it for a continuous search space, but it could as well
be used for bit-string based or tree-based genomes.
Symmetric optimization problems can be especially challenging for EDAs since optima with
similar objective values but located in different areas of the search space may hinder them
from converging. Pelikan and Goldberg [2147, 2148] first applied clustering in the search space
of general EAs and EDAs in order to alleviate this problem. In a Genetic Algorithm, they
would cluster the mating pool (i. e., the selected parents) according to the genotypes. Each
cluster will then be processed separately and yield its own offspring. The recombination
operator only uses parents which stem from the same cluster. The number of new genotypes
per cluster is proportional to its average fitness or its size. This method hence facilitates
niching. In their work, Pelikan and Goldberg [2147, 2148] use k-means clustering [1503, 1808]
both in a simple Genetic Algorithm and in the UMDA, and found that it helps to achieve better
results in the presence of symmetry.
Bosman and Thierens [363, 364] also apply k-means clustering, as well as the randomized
leader algorithm and expectation maximization, after selection. For each of the clusters,
again a separate model is created. These are combined to an overall distribution via a
weighted sum approach, where the weight of a cluster is proportional to its size.
Lu and Yao [1784] apply the heuristic competitive learning algorithm RPCL to cluster the
populations in their Clustering and Estimation of Gaussian Network Algorithms (CEGNA)
for continuous optimization problems. RPCL has the advantage that it can select the number
of clusters automatically [2999]. In CEGNA, after selection, the mating pool is clustered
into k clusters where both the clusters and k are determined by RPCL. For each of the
clusters, an independent model Mi : i ∈ 1..k is derived. In the sampling step, Ni new
genotypes are sampled for each cluster i where Ni is proportional to the average fitness of
the individuals in the cluster. The experimental results [1784] show that the new algorithm
performs well on multi-modal problems.
Other EDA approaches utilizing clustering have been provided in [42, 486, 2069]. Cluster-
ing has also been incorporated into EAs, for example as a niching method, by various research
groups [1081, 1241].
In [2357], Ruzgas et al. show that a multi-modal problem can be reduced to a set of
uni-modal ones and the quality of estimation procedures increases significantly if applying
estimators first to each cluster and later combining the results. Although their work was
not related to EDAs, it is another account for the successful integration of clustering into
distribution estimation.
34.7.2 Epistasis
Tasks T34
99. Implement the Royal Road function given in Section 50.2.4 on page 558 for genotypes
of lengths 64, 512, and 2048. Apply both the UMDA (see Section 34.4.1) and a simple
Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments
15 times. Describe your findings.
[25 points]
100. Implement the OneMax function given in Section 50.2.5.1 on page 562 for genotypes
of lengths 64, 512, and 2048. Apply both the UMDA (see Section 34.4.1) and a simple
Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments
15 times. Describe your findings.
[25 points]
101. Implement the BinInt function given in Section 50.2.5.2 on page 562 for genotypes
of lengths 16, 32, and 48. Apply both the UMDA (see Section 34.4.1) and a simple
Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments
15 times. Describe your findings.
[25 points]
102. Implement the NK Fitness Landscape given in Section 50.2.1 on page 553 for N = 12
and N = 16, K = 4 and K = 6, and random neighbors. Apply both the UMDA (see
Section 34.4.1) and a simple Genetic Algorithm (see Section 29.3) to the four resulting
functions. Repeat the experiments 15 times. Describe your findings.
[50 points]
103. Implement any of the real-valued Estimation of Distribution Algorithms defined in Sec-
tion 34.5 on page 442 and test them on suitable benchmark problems (such as those
given in Task 64 on page 238).
[50 points]
Chapter 35
Learning Classifier Systems
35.1 Introduction
In the late 1970s, Holland, the father of Genetic Algorithms, also invented the concept of
classifier systems (CS) [1251, 1254, 1256]. These systems are a special case of production
systems [696, 697] and consist of four major parts.
Over time, classifier systems have undergone several name changes. In 1986, reinforcement
learning was added to the approach and the name changed to Learning Classifier Systems1
(LCS) [1220, 2534]. Learning Classifier Systems are sometimes subsumed under a machine
learning paradigm called evolutionary reinforcement learning (ERL) [1220] or Evolutionary
Algorithms for Reinforcement Learning (EARLs) [1951].
The exact definition of Learning Classifier Systems [1257, 1588, 1677, 2534] still seems
contentious and many different implementations exist. There are, for example, versions
without a message list, where the action part of the rules does not encode messages but direct
output signals. The exact role of the Genetic Algorithm in conjunction with the reinforcement
learning component is also not quite clear: some scientists emphasize the role of the learning
components [2956], while others assign greater weight to the Genetic Algorithm [655, 1256].
The families of Learning Classifier Systems have been listed and discussed elaborately by
Brownlee [430]. Here, we just briefly summarize their differences. De Jong [712, 713]
and Grefenstette [1127] divide Learning Classifier Systems into two main types, depending
on how the Genetic Algorithm acts: The Pitt approach originated at the University of
Pittsburgh with the LS-1 system developed by Smith [2536]. It was then developed further
and applied by Bacardit i Peñarroya [158, 159], De Jong and Spears [716], De Jong et al.
1 http://en.wikipedia.org/wiki/Learning_classifier_system [accessed 2007-07-03]
[717], Spears and De Jong [2564], and Bacardit i Peñarroya and Krasnogor [161]. Pittsburgh-
style Learning Classifier Systems work on a population of separate classifier systems, which
are combined and reproduced by the Genetic Algorithm.
The original idea of Holland and Reitman [1256] was the Michigan-style LCS, where the
whole population itself is considered as one classifier system. They focus on selecting the best
rules in this rule set [706, 1074, 1751]. Wilson [2952, 2953] developed two subtypes of
Michigan-style LCS:
1. In ZCS systems, there is no message list; they use fitness sharing [440, 442, 591, 2952]
for a Q-learning-like reinforcement learning approach called QBB.
2. ZCS systems have later been somewhat superseded by XCS systems, in which the Bucket
Brigade Algorithm has fully been replaced by Q-learning. Furthermore, the credit
assignment is based on the accuracy (usefulness) of the classifiers. The Genetic Algo-
rithm is applied to sub-populations containing only classifiers which apply to the same
situations. [1587, 1681, 2953–2955]
35.3 General Information on Learning Classifier Systems
35.3.2 Books
1. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
2. Foundations of Learning Classifier Systems [443]
3. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
4. Anticipatory Learning Classifier Systems [459]
5. Applications of Learning Classifier Systems [441]
35.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Chapter 36
Memetic and Hybrid Algorithms
Starting with the research contributed by Bosworth et al. [368] in 1972, Bethke [288] in 1980,
and Brady [382] in 1985, there is a long tradition of hybridizing Evolutionary Algorithms
with other optimization methods such as Hill Climbing, Simulated Annealing, or Tabu
Search [2507]. A comprehensive review on this topic has been provided by Grosan and
Abraham [1139, 1141].
Hybridization approaches are not limited to GAs as the “basis” algorithm. In ??, for example,
we have already listed a wide variety of approaches to combine the Downhill Simplex with
population-based optimization methods spanning from Genetic Algorithms to Differential
Evolution and Particle Swarm Optimization. Many of these approaches can be subsumed
under the umbrella term Memetic Algorithms1 (MAs).
The principle of Genetic Algorithms is to simulate the natural evolution in order to solve
optimization problems. In nature, the features of phenotypes are encoded in the genes
of their genotypes. The term Memetic Algorithm was coined by Moscato [1961, 1962] as an
allegory for simulating a social evolution where behavioral patterns are passed on in memes 2 .
A meme has been defined by Dawkins [700] as a “unit of imitation in cultural transmission”.
With the simulation of social cooperation and competition, optimization problems can be
solved efficiently.
Moscato [1961] uses the example of the Chinese martial art Kung-Fu, which has developed
over many generations of masters teaching their students certain sequences of movements,
the so-called forms. Each form is composed of a set of elementary aggressive and defensive
patterns. These undecomposable sub-movements can be interpreted as memes. New memes
are rarely introduced and only few amongst the masters of the art have the ability to do
so. Being far from random, such modifications involve a lot of problem-specific knowledge
and almost always result in improvements. Furthermore, only the best of the population
of Kung-Fu practitioners can become masters and teach disciples. Kung-Fu fighters can
determine their fitness by evaluating their performance or by competing with each other in
tournaments.
Based on this analogy, Moscato [1961] creates an example algorithm for solving the
Traveling Salesman Problem [118, 1694] by using two principles:
1. Individuals can attain new, beneficial characteristics during their lifetime and lose
unused abilities.
2. They pass their traits (also those acquired during their life) on to their offspring.
While the first concept is obviously correct, the second one contradicts the state of knowledge
in modern biology. This does not decrease the merits of de Monet, Chevalier de Lamarck,
who provided an early idea about how evolution could proceed. In his era, things like genes
and the DNA simply had not been discovered yet. Weismann [2918] was the first to argue
that the heredity information of higher organisms is separated from the somatic cells and,
thus, could not be influenced by them [2739]. In nature, no reverse mapping from the
phenotype back to the genotype takes place.
Lamarckian evolution can be “included” in Evolutionary Algorithms by performing a local search
starting with each new individual resulting from applications of the reproduction operations.
This search can be thought of as training or learning and its results are coded back into the
genotypes g ∈ G [2933]. Therefore, this local optimization approach usually works directly
in the search space G. Here, algorithms such as greedy search, Hill Climbing, Simulated
Annealing, or Tabu Search can be utilized, but simply modifying the genotypes randomly
and remembering the best results is also possible.
The Baldwin effect4 [2495, 2740, 2830, 2831], first proposed by Baldwin [191, 192], Morgan
[1943, 1944], and Osborn [2101] in 1896, is an evolutionary theory which remains controversial
until today [710, 2179]. Suzuki and Arita [2637] describe it as a “possible scenario of in-
teractions between evolution and learning caused by balances between benefit and cost of
learning” [2873]. Learning is a rather local phenomenon, normally involving only single
individuals, whereas evolution usually takes place in the global scale of a population. The
Baldwin effect combines both in two steps [2739]:
3 http://en.wikipedia.org/wiki/Lamarckism [accessed 2008-09-10]
4 http://en.wikipedia.org/wiki/Baldwin_effect [accessed 2008-09-10]
1. First, lifelong learning gives the individuals the chance to adapt to their environment
or even to change their phenotype. This phenotypic plasticity 5 may help the creatures
to increase their fitness and, hence, their probability to produce more offspring. Different
from Lamarckian evolution, the abilities attained this way neither influence the genotypes
nor are they inherited.
2. In the second phase, evolution step by step generates individuals which can learn
these abilities faster and easier and, finally, will have encoded them in their genome.
Genotypical traits then replace the learning (or phenotypic adaptation) process and
serve as an energy-saving shortcut to the beneficial traits. This process is called genetic
assimilation 6 [2829–2832].
[Figure 36.1.b plots f(gpm(g)) over g ∈ G with and without learning, marking four cases:
Case 1 – neutralized landscape, Case 2 – increased gradient, Case 3 – decreased gradient,
Case 4 – smoothed out landscape; ∆g denotes the region explored by learning.]
Fig. 36.1.b: The positive and negative influence of learning capabilities of
individuals on the fitness landscape; extension of the figure in [2637].
Hinton and Nowlan [1235, 1236] performed early experiments on the Baldwin effect with
Genetic Algorithms [253, 1208]. They found that the evolutionary interaction with learning
smoothens the fitness landscape [1144] and illustrated this effect on the graphical example of
a needle-in-a-haystack problem. Mayley [1844] used experiments on Kauffman’s NK fitness
landscapes [1501] (see Section 50.2.1) to show that the Baldwin effect can also have negative
influence. We use Figure 36.1 to sketch both effects that can occur when the fitness of an
5 http://en.wikipedia.org/wiki/Phenotypic_plasticity [accessed 2008-09-10]
6 http://en.wikipedia.org/wiki/Genetic_assimilation [accessed 2008-09-10]
individual is computed by performing a local search around it, returning the objective values
of the best candidate solution found (without preserving this optimized phenotype).
Whereas learning adds gradient information in regions of the search space which are
distant from local or global optima (case 2 in Fig. 36.1.b), it decreases the information in
their near proximity (called the hiding effect [1457, 1844], case 3 in Fig. 36.1.b). Similarly,
while using the Baldwin effect in the described way may smooth out rugged fitness landscapes
(case 4), it could also erase smaller gradients (case 1). Suzuki and Arita [2637] found
that the Baldwin effect decreases the evolution speed in their rugged experimental fitness
landscape, but also led to significantly better results in the long term.
One interpretation of these issues is that learning capabilities help individuals to survive
in adverse conditions since they may find good abilities by learning and phenotypic
adaptation. On the other hand, it makes little difference whether an individual has learned
certain abilities or whether it was born with them if it can exercise them at the
same level of perfection. Thus, the selection pressure furthering the inclusion of good traits
in the heredity information decreases if a life form can learn or adapt its phenotypes.
Like Lamarckian evolution, the Baldwin effect can also be added to Evolutionary Al-
gorithms by performing a local search starting at each new offspring individual. Different
from Lamarckism, the abilities and characteristics attained by this process only influence
the objective values of an individual and are not coded back to the genotypes. Hence, it
plays no role whether the search takes place in the search space G or in the problem space
X. The best objective values f (p′.x) found in a search around individual p become its own
objective values, but the modified variant p′ of p actually scoring them is discarded [2933].
Nevertheless, the implementer will store these individuals somewhere else if they were
the best candidate solutions ever found. She must furthermore ensure that the user will be
provided with the correct objective values of the final set of candidate solutions resulting
from the optimization process (f (p.x), not f (p′.x)).
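The difference between the two hybridization schemes discussed in this chapter can be sketched as follows. The OneMax-style objective and the bit-flip hill climber used as the "learning" phase are illustrative assumptions, not part of any specific Memetic Algorithm from the literature.

```python
import random

def onemax(g):                        # illustrative objective, maximized
    return sum(g)

def local_search(g, f, steps=32, rng=random):
    """Simple bit-flip hill climber used as the 'learning' phase."""
    best = list(g)
    for _ in range(steps):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] ^= 1
        if f(cand) >= f(best):
            best = cand
    return best

def refine_lamarckian(g, f, rng=random):
    """Lamarckian hybrid: the learned traits are coded back into the
    genotype, so the offspring inherits the result of the local search."""
    g2 = local_search(g, f, rng=rng)
    return g2, f(g2)

def refine_baldwinian(g, f, rng=random):
    """Baldwin effect: the individual keeps its original genotype; only
    its objective value is taken from the best learned variant g'."""
    g2 = local_search(g, f, rng=rng)
    return list(g), f(g2)             # genotype unchanged, f(p'.x) assigned
```

In both variants the local search explores around the new individual; they differ only in whether g′ replaces the genotype or merely lends it its objective value.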
36.4 General Information on Memetic Algorithms
36.4.2 Books
1. Recent Advances in Memetic Algorithms [1205]
2. Hybrid Evolutionary Algorithms [1141]
3. Advances in Evolutionary Algorithms [1581]
36.4.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
3. Memetic Computing published by Springer-Verlag GmbH
Chapter 37
Ant Colony Optimization
37.1 Introduction
Inspired by the research done by Deneubourg et al. [767, 768, 1111] on real ants and probably
by the simulation experiments by Stickland et al. [2613], Dorigo et al. [810] developed the
Ant Colony Optimization1 (ACO) algorithm in 1996 for problems that can be reduced to
finding optimal paths in graphs [807, 811, 819, 1820, 1823]. Ant Colony Optimization is
based on the metaphor of ants seeking food. In order to do so, an ant will leave the anthill
and begin to wander into a random direction. While the little insect paces around, it lays
a trail of pheromone. Thus, after the ant has found some food, it can track its way back.
By doing so, it distributes another layer of pheromone on the path. An ant that senses
the pheromone will follow its trail with a certain probability. Each ant that finds the food
will excrete some pheromone on the path. Over time, the pheromone density of the path
will increase and more and more ants will follow it to the food and back. The higher the
pheromone density, the more likely an ant is to stay on a trail. However, the pheromones
evaporate after some time. If all the food is collected, they will no longer be renewed and the
path will disappear after a while. Now, the ants will head to new, random locations.
This process of distributing and tracking pheromones is one form of stigmergy2 and was
first described by Grassé [1123]. Today, we subsume many different ways of communication
by modifying the environment under this term, which can be divided into two groups:
sematectonic and sign-based [2425]. According to Wilson [2948], sematectonic stigmergy
denotes modifications of the environment due to a task-related action which lead other
entities involved in this task to change their behavior. If an ant drops a ball of mud somewhere,
this may cause other ants to place mud balls at the same location. Step by step, these
effects can cumulatively lead to the growth of complex structures. Sematectonic stigmergy
has been simulated on computer systems by, for instance, Théraulaz and Bonabeau [2693]
and with robotic systems by Werfel and Nagpal [2431, 2920, 2921].
The second form, sign-based stigmergy, is not directly task-related. It has been attained
evolutionarily by social insects, which use a wide range of pheromones and hormones for
communication. Computer simulations for sign-based stigmergy were first performed by
Stickland et al. [2613] in 1992.
Sign-based stigmergy is mimicked by Ant Colony Optimization [810], where optimization
problems are modeled as (directed) graphs. First, a set of ants performs randomized
walks through the graphs. Pheromones are laid out proportionally to the goodness of the
solutions denoted by the paths, i. e., the probability to walk in the direction of these paths
1 http://en.wikipedia.org/wiki/Ant_colony_optimization [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Stigmergy [accessed 2007-07-03]
is shifted. The ants run again through the graph, following the previously distributed
pheromone. However, they will not exactly follow these paths. Instead, they may deviate
from these routes by taking other turns at junctions, since their walk is still randomized.
The pheromones modify the probability distributions.
The intention of Ant Colony Optimization is to solve combinatorial problems, not to
simulate ants realistically [810]. The ants in ACO thus differ from “real” ones in three
major aspects.
It is interesting to note that even real-vector optimization problems can be mapped to graph
problems, as introduced by Korošec and Šilc [1580]. Thanks to such ideas, the applicability
of Ant Colony Optimization is greatly increased.
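The pheromone-biased walks, evaporation, and quality-proportional deposits described above can be sketched as follows. This is a toy shortest-path version: it omits the heuristic-weighting parameters of the actual Ant System of Dorigo et al., and all names and parameter values are illustrative.

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iter=50,
                      evaporation=0.5, rng=random):
    """Toy Ant Colony Optimization sketch for shortest paths.

    graph: dict mapping node -> {neighbor: edge_length}. Ants walk from
    src to dst; each edge choice is biased by pheromone, shorter tours
    deposit more pheromone, and all pheromone evaporates each iteration."""
    pher = {(u, v): 1.0 for u in graph for v in graph[u]}
    best = None
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                opts = [v for v in graph[node] if v not in visited]
                if not opts:              # dead end: abandon this ant
                    path = None
                    break
                weights = [pher[(node, v)] / graph[node][v] for v in opts]
                node = rng.choices(opts, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((length, path))
                if best is None or length < best[0]:
                    best = (length, path)
        # evaporation, then pheromone deposit proportional to tour quality
        for e in pher:
            pher[e] *= (1.0 - evaporation)
        for length, path in tours:
            for e in zip(path, path[1:]):
                pher[e] += 1.0 / length
    return best
```

On the small graph below, the short A–B–D route accumulates pheromone quickly and ends up dominating the walks.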
37.2 General Information on Ant Colony Optimization
37.2.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
2. New Optimization Techniques in Engineering [2083]
3. Swarm Intelligent Systems [2013]
4. Swarm Intelligence: From Natural to Artificial Systems [353]
5. Swarm Intelligence: Collective, Adaptive [1524]
6. Swarm Intelligence – Introduction and Applications [341]
7. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
8. Advances in Metaheuristics for Hard Optimization [2485]
9. Ant Colony Optimization [809]
10. Fundamentals of Computational Swarm Intelligence [881]
37.2.4 Journals
1. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Chapter 38
River Formation Dynamics
38.1.2 Books
1. Ant Colony Optimization [809]
2. Fundamentals of Computational Swarm Intelligence [881]
3. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
4. Swarm Intelligence – Introduction and Applications [341]
5. Swarm Intelligence: Collective, Adaptive [1524]
6. Swarm Intelligence: From Natural to Artificial Systems [353]
7. Swarm Intelligent Systems [2013]
8. Nature-Inspired Algorithms for Optimisation [562]
9. New Optimization Techniques in Engineering [2083]
38.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 39
Particle Swarm Optimization
39.1 Introduction
Particle Swarm Optimization1 (PSO), developed by Eberhart and Kennedy [853, 855, 1523]
in 1995, is a form of swarm intelligence in which the behavior of a biological social system
like a flock of birds or a school of fish [2129] is simulated. When a swarm looks for food, its
individuals will spread in the environment and move around independently. Each individual
has a degree of freedom or randomness in its movements which enables it to find food
accumulations. So, sooner or later, one of them will find something digestible and, being
social, announces this to its neighbors. These can then approach the source of food, too.
Particle Swarm Optimization has been discussed, improved, and refined by many researchers
such as Cai et al. [468], Gao and Duan [1018], Venter and Sobieszczanski-Sobieski [2799], and
Gao and Ren [1019]. Comparisons with other evolutionary approaches have been provided
by Eberhart and Shi [854] and Angeline [103].
With Particle Swarm Optimization, a swarm of particles (individuals) in an n-dimensional
search space G is simulated, where each particle p has a position p.g ∈ G ⊆ Rn and a
velocity p.v ∈ Rn . The position p.g corresponds to the genotypes, and, in most cases,
also to the candidate solutions, i. e., p.x = p.g, since most often the problem space X
is also the Rn and X = G. However, this is not necessarily the case and generally, we
can introduce any form of genotype-phenotype mapping in Particle Swarm Optimization.
The velocity vector p.v of an individual p determines in which direction the search will
continue and whether it has an explorative (high velocity) or an exploitative (low velocity) character.
It represents endogenous parameters which we discussed under the subject of Evolution
Strategy in Chapter 30. Whereas the endogenous information in Evolution Strategy is used
for an undirected mutation, the velocity in Particle Swarm Optimization is used to perform
a directed modification of the genotypes (particles’ positions).
In the initialization phase of Particle Swarm Optimization, the positions and velocities of all
individuals are randomly initialized. In each step, first the velocity of a particle is updated
and then its position. Therefore, each particle p has another endogenous parameter: a
memory holding its best position best(p) ∈ G.
1 http://en.wikipedia.org/wiki/Particle_swarm_optimization [accessed 2007-07-03]
39.2.1 Communication with Neighbors – Social Interaction
In order to realize the social component of natural swarms, a particle furthermore has
a set of topological neighbors N(p). This set could be pre-defined for each particle at
startup. Alternatively, it could contain adjacent particles within a specific perimeter, i. e.,
all individuals which are no further away from p.g than a given distance δ according to a
certain distance measure2 “dist”. Using the Euclidean distance measure “dist eucl” specified
in ?? on page ??, we get N(p) = {q ∈ pop : dist eucl(p.g, q.g) ≤ δ}.
Each particle can communicate with its neighbors, so the best position found so far by any
element in N(p) is known to all of them as best(N(p)). The best position ever visited by
any individual in the population (which the optimization algorithm always keeps track of)
is referred to as best(pop).
One of the basic methods to conduct Particle Swarm Optimization is to use either best(N(p))
or best(pop) for adjusting the velocity of the particle p. If the globally best position is
utilized, the algorithm will converge quickly but maybe prematurely, i. e., may not find the
global optimum. If, on the other hand, neighborhood communication and best(N(p)) is
used, the convergence speed drops but the global optimum is found more likely.
The search operation q = psoUpdate(p, pop) applied in Particle Swarm Optimization
creates a new particle q to replace an existing one (p) by incorporating its genotype p.g and
its velocity p.v. We distinguish local updating (Equation 39.3) and global updating (Equa-
tion 39.2), which additionally uses the data from the whole population pop. “psoUpdate”
thus fulfills one of these two equations and Equation 39.4, which shows how the ith compo-
nents of the corresponding vectors are computed.
The learning rate vectors c and d have a strong influence on the convergence speed of
Particle Swarm Optimization. The search space G (and thus, also the values of p.g) is normally
confined by minimum and maximum boundaries. For the absolute values of the velocity,
normally maximum thresholds also exist. Thus, real implementations of “psoUpdate” have
to check and refine their results before the utility of the candidate solutions is evaluated.
Algorithm 39.1 illustrates the basic form of Particle Swarm Optimization using this
simple update procedure. Like Hill Climbing, this algorithm can easily be generalized for
multi-objective optimization and for returning sets of optimal solutions (compare with Sec-
tion 26.3 on page 230).
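A minimal global-updating variant of the algorithm might look as follows. Note the assumptions this sketch makes beyond the description above: the learning rates c and d are treated as scalars rather than vectors, the widely used inertia weight w is added to damp the old velocity, and the velocity clamping and boundary confinement are implemented component-wise.

```python
import random

def pso_minimize(f, dim, bounds=(-5.0, 5.0), n_particles=30, n_iter=200,
                 w=0.7, c=1.5, d=1.5, v_max=1.0, rng=random):
    """Basic global-best PSO sketch: the new velocity mixes the (damped)
    old velocity, the pull towards the particle's memory best(p), and
    the pull towards the swarm's best position best(pop)."""
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[rng.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # best(p) memories
    gbest = min(pbest, key=f)[:]                   # best(pop)
    for _ in range(n_iter):
        for p in range(n_particles):
            for i in range(dim):                   # per-component update
                vel[p][i] = (w * vel[p][i]
                             + c * rng.random() * (pbest[p][i] - pos[p][i])
                             + d * rng.random() * (gbest[i] - pos[p][i]))
                vel[p][i] = max(-v_max, min(v_max, vel[p][i]))   # clamp v
                pos[p][i] = max(lo, min(hi, pos[p][i] + vel[p][i]))
            if f(pos[p]) < f(pbest[p]):
                pbest[p] = pos[p][:]
                if f(pbest[p]) < f(gbest):
                    gbest = pbest[p][:]
    return gbest, f(gbest)
```

Replacing gbest with the best position in a topological neighborhood N(p) turns this into the local-updating variant discussed above, trading convergence speed for a better chance of finding the global optimum.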
39.3.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
2. Swarm Intelligent Systems [2013]
3. Swarm Intelligence: Collective, Adaptive [1524]
4. Parallel Evolutionary Computations [2015]
5. Swarm Intelligence – Introduction and Applications [341]
6. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
7. Particle Swarm Optimization [589]
8. Fundamentals of Computational Swarm Intelligence [881]
9. Advances in Evolutionary Algorithms [1581]
39.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Evolutionary Computation published by MIT Press
3. Proceedings of the National Academy of Science of the United States of America (PNAS)
published by National Academy of Sciences
4. Journal of Theoretical Biology published by Academic Press Professional, Inc. and Elsevier
Science Publishers B.V.
5. Natural Computing: An International Journal published by Kluwer Academic Publishers and
Springer Netherlands
6. Biosystems published by Elsevier Science Ireland Ltd. and North-Holland Scientific Publish-
ers Ltd.
7. Australian Journal of Biological Science (AJBS)
8. BioData Mining published by BioMed Central Ltd.
9. International Journal of Computational Intelligence Research (IJCIR) published by Research
India Publications
10. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) pub-
lished by Idea Group Publishing (Idea Group Inc., IGI Global)
11. IEEE Computational Intelligence Magazine published by IEEE Computational Intelligence
Society
12. Trends in Ecology and Evolution (TREE) published by Cell Press and Elsevier Science Pub-
lishers B.V.
13. Acta Biotheoretica published by Springer Netherlands
14. Bulletin of the American Iris Society published by American Iris Society (AIS)
15. Behavioral Ecology and Sociobiology published by Springer-Verlag
16. FEBS Letters published by Elsevier Science Publishers B.V. and Federation of European
Biochemical Societies (FEBS)
17. Naturwissenschaften – The Science of Nature published by Springer-Verlag
18. International Journal of Biometric and Bioinformatics (IJBB) published by Computer Sci-
ence Journals Sdn BhD
19. Journal of Experimental Biology (JEB) published by The Company of Biologists Ltd. (COB)
20. Bulletin of Mathematical Biology published by Springer New York
21. Computational Intelligence published by Wiley Periodicals, Inc.
22. Journal of Heredity published by American Genetic Association
23. Nature Genetics published by Nature Publishing Group (NPG)
24. The American Naturalist published by University of Chicago Press for The American Society
of Naturalists
25. Philosophical Transactions of the Royal Society B: Biological Sciences published by Royal
Society of Edinburgh
26. Astrobiology published by Mary Ann Liebert, Inc.
27. Current Biology published by Elsevier Science Publishers B.V.
28. Quarterly Review of Biology (QRB) published by Stony Brook University
29. Proceedings of the Royal Society B: Biological Sciences published by Royal Society Publishing
30. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Chapter 40
Tabu Search
Tabu Search1 (TS) has been developed by Glover [1065, 1069] in the mid-1980s based on
foundations from the 1970s [1064]. Some of the basic ideas were introduced by Hansen
[1189] and further contributions in terms of formalizing this method have been made by
Glover [1066, 1067], and de Werra and Hertz [729] (as summarized by Hertz et al. [1228] in
their tutorial on Tabu Search) as well as by Battiti and Tecchiolli [232] and Cvijović and
Klinowski [665].
According to Hertz et al. [1228], Tabu Search can be considered as a metaheuristic variant
of neighborhood search as introduced in Definition D46.3 on page 512. The word “tabu”2
stems from Polynesia and describes a sacred place or object. Things that are tabu must be
left alone and may not be visited or touched.
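A minimal sketch of this neighborhood-search view of Tabu Search might look as follows. The bitstring encoding, the single-bit-flip neighborhood, and the attribute-based tabu list of recently flipped positions are illustrative assumptions, not a specific scheme from the works cited above.

```python
from collections import deque
import random

def tabu_search(f, n_bits, tabu_len=5, n_iter=100, rng=random):
    """Minimal Tabu Search sketch over bitstrings (f is maximized).

    Neighborhood search with single-bit flips: positions flipped in the
    last tabu_len moves are tabu and may not be flipped again, which
    forces the search away from recently visited solutions even when
    that temporarily worsens the objective value."""
    x = [rng.randrange(2) for _ in range(n_bits)]
    best, best_f = x[:], f(x)
    tabu = deque(maxlen=tabu_len)
    for _ in range(n_iter):
        # best admissible (non-tabu) single-bit-flip neighbor
        move, move_f = None, None
        for i in range(n_bits):
            if i in tabu:
                continue
            x[i] ^= 1
            fi = f(x)
            x[i] ^= 1
            if move is None or fi > move_f:
                move, move_f = i, fi
        if move is None:                  # every move is tabu
            break
        x[move] ^= 1                      # accept even worsening moves
        tabu.append(move)
        if move_f > best_f:
            best, best_f = x[:], move_f
    return best, best_f
```

An aspiration criterion (allowing a tabu move when it would beat the best solution found so far) is a common refinement omitted here for brevity.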
40.1.2 Books
1. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
2. How to Solve It: Modern Heuristics [1887]
40.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 41
Extremal Optimization
41.1 Introduction
Different from Simulated Annealing, which is an optimization method based on the metaphor
of thermal equilibria from physics, the Extremal Optimization1 (EO) algorithm developed
by Boettcher and Percus [344–348] is inspired by ideas of non-equilibrium physics. Especially
important in this context is the property of self-organized criticality2 (SOC) [186, 1439].
The theory of SOC states that large interactive systems evolve to a state where a change
in a single one of their elements may lead to avalanches or domino effects that can reach
any other element in the system. The probability distribution of the number n of elements
involved in these avalanches is proportional to n^−τ (τ > 0). Hence, changes involving only
few elements are most likely, but even avalanches involving the whole system are possible
with a non-zero probability. [728]
The Bak-Sneppen model3 of evolution [185] exhibits self-organized criticality and was the
inspiration for Extremal Optimization. Rather than focusing on single species, this model
considers a whole ecosystem and the co-evolution of many different species.
In the model, each species is represented only by a real fitness value between 0 and 1. In
each iteration, the species with the lowest fitness is mutated. The model does not include any
representation for genomes, instead, mutation changes the fitness of the species directly by
replacing it with a random value uniformly distributed in [0, 1]. In nature, this corresponds
to the process where one species has developed further or was replaced by another one.
So far, mutation (i. e., development) would become less likely the more the fitness
increases. Fitness can also be viewed as a barrier: New characteristics must be at least as
fit as the current ones to proliferate. In an ecosystem however, no species lives alone but
depends on others, on its successors and predecessors in the food chain, for instance. Bak
and Sneppen [185] consider this by arranging the species in a one dimensional line. If one
species is mutated, the fitness values of its successor and predecessor in that line are also
1 http://en.wikipedia.org/wiki/Extremal_optimization [accessed 2008-08-24]
2 http://en.wikipedia.org/wiki/Self-organized_criticality [accessed 2008-08-23]
3 http://en.wikipedia.org/wiki/Bak-Sneppen_model [accessed 2010-12-06]
set to random values. In nature, the development of one species can foster the development
of others and this way, even highly fit species may become able to (re-)adapt.
After a certain number of iterations, the species in simulations based on this model reach
a highly-correlated state of self-organized criticality where all of them have a fitness above
a certain threshold. This state is very similar to the idea of punctuated equilibria from
evolutionary biology: groups of species enter a state of passivity lasting multiple cycles.
Sooner or later, this state is interrupted because mutations occurring nearby undermine
their fitness. The resulting fluctuations may propagate like avalanches through the whole
ecosystem. Thus, such non-equilibrium systems exhibit a state of high adaptability without
limiting the scale of change towards better states [344].
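The Bak-Sneppen dynamics described above can be sketched in a few lines. The one-dimensional ring topology and the uniform random replacement follow the description; the parameter values are illustrative.

```python
import random

def bak_sneppen(n_species=50, n_steps=5000, rng=random):
    """Bak-Sneppen co-evolution sketch: each step replaces the fitness of
    the weakest species and of its two neighbors on a ring with fresh
    uniform random values, producing avalanches of change."""
    fitness = [rng.random() for _ in range(n_species)]
    for _ in range(n_steps):
        worst = min(range(n_species), key=fitness.__getitem__)
        for j in (worst - 1, worst, worst + 1):     # ring topology
            fitness[j % n_species] = rng.random()
    return fitness
```

After sufficiently many steps, most fitness values accumulate above the model's critical threshold (roughly 2/3 in the one-dimensional case), while the avalanches keep perturbing a small active region.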
1. Create a copy g ′ of the genotype p.g.
2. Toggle bit i in g ′ .
3. Set λ(p.g [i]) to −f(gpm(g ′ )) for maximization and to f(gpm(g ′ )) in case of minimiza-
tion.4
By doing so, λ(p.g [i]) becomes a measure for how well adapted the gene is. If f is subject to
maximization, high values of f(gpm(g ′ )) (corresponding to low λ(p.g [i])) indicate that gene i
has a low fitness and should be mutated. For minimization, a low f(gpm(g ′ )) indicates that
mutating gene i would yield a high improvement in the objective value.
4 In the original work of de Sousa et al. [728], f(x⋆ ) is subtracted from this value. Since we rank the genes, this constant offset does not change the result.
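The λ-based gene ranking above can be sketched as follows for bit strings. The rank-proportional choice of the gene to mutate (probability ∝ rank^−τ) is an assumption borrowed from the τ-EO variant of Boettcher and Percus; it is not spelled out in the fragment above, and all names are illustrative.

```python
import random

def eo_step(g, f, tau=1.5, maximize=True, rng=random):
    """One Extremal Optimization step on a bit string (sketch).

    For each gene i, toggle bit i in a copy g', evaluate, and set the
    gene's adaptation lambda_i to -f(g') for maximization (f(g') for
    minimization); then mutate a gene chosen with probability
    proportional to rank**(-tau), favouring the least adapted genes."""
    lam = []
    for i in range(len(g)):
        g2 = list(g)                  # step 1: copy the genotype
        g2[i] ^= 1                    # step 2: toggle bit i in g'
        lam.append(-f(g2) if maximize else f(g2))   # step 3: set lambda_i
    order = sorted(range(len(g)), key=lambda i: lam[i])   # least adapted first
    weights = [(r + 1) ** (-tau) for r in range(len(g))]
    pick = rng.choices(order, weights=weights)[0]
    g = list(g)
    g[pick] ^= 1                      # mutate the selected gene
    return g
```

Repeated application drives the least adapted genes towards better values while the power-law rank selection preserves the fluctuations that are characteristic of self-organized criticality.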
41.3.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
41.3.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 42
GRASPs
42.1 General Information on GRASPs
42.1.3 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 43
Downhill Simplex (Nelder and Mead)
[1864]
43.1 Introduction
The Downhill Simplex1 (or Nelder-Mead method or amoeba algorithm2 ) published by Nelder
and Mead [2017] in 1965 is a single-objective optimization approach for searching the space
of n-dimensional real vectors (G ⊆ Rn ) [1655, 2073]. Historically, it is closely related to the
simplex extension by Spendley et al. [2572] to the Evolutionary Operation method mentioned
in Section 28.1.3 on page 264 [1719]. Since it only uses the values of the objective functions
without any derivative information (explicit or implicit), it falls into the general class of
direct search methods [2724, 2984], as most of the optimization approaches discussed in this
book do.
Downhill Simplex optimization uses n + 1 points in the Rn . These points form a poly-
tope3 , a generalization of a polygon, in the n-dimensional space – a line segment in R1 , a
triangle in R2 , a tetrahedron in R3 , and so on. Non-degenerate simplices, i. e., those where
the set of edges adjacent to any vertex forms a basis of the Rn , have one important feature:
The result of replacing a vertex with its reflection through the opposite face is, again, a
non-degenerate simplex (see ??). The goal of Downhill Simplex optimization is to replace
the best vertex of the simplex with an even better one or to ascertain that it is a candidate
for the global optimum [1719]. Therefore, its other points are constantly flipped around in
an intelligent manner as we will outline in ??.
Like Hill Climbing approaches, the Downhill Simplex may not converge to the global
minimum and can get stuck at local optima [1655, 1854, 2712]. Random restarts (as in Hill
Climbing With Random Restarts discussed in Section 26.5 on page 232) can be helpful here.
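For illustration, a minimal Downhill Simplex could be sketched in Python as follows. This is a simplified variant (fixed iteration budget, inside contraction only); the function name, the coefficient defaults, and the initial-simplex construction are our own assumptions, not part of Nelder and Mead's original formulation:

```python
def downhill_simplex(f, x0, iterations=500,
                     alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f over R^n with n+1 simplex vertices: the worst vertex is
    replaced by reflection, expansion, or contraction; otherwise the whole
    simplex shrinks towards the best vertex."""
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def mul(a, s): return [x * s for x in a]
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # perturb each coordinate once
        v = list(x0)
        v[i] += 0.05 if v[i] == 0 else 0.05 * v[i]
        simplex.append(v)
    for _ in range(iterations):
        simplex.sort(key=f)                  # best vertex first, worst last
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        worst = simplex[-1]
        xr = add(centroid, mul(sub(centroid, worst), alpha))   # reflection
        if f(simplex[0]) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(simplex[0]):                            # expansion
            xe = add(centroid, mul(sub(xr, centroid), gamma))
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                                  # contraction
            xc = add(centroid, mul(sub(worst, centroid), rho))
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                              # shrink
                best = simplex[0]
                simplex = [best] + [add(best, mul(sub(v, best), sigma))
                                    for v in simplex[1:]]
    return min(simplex, key=f)
```

On a smooth unimodal function this sketch converges quickly; on multimodal functions it shares the local-optimum problem discussed above and benefits from random restarts.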
43.2.3 Journals
1. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
2. Discrete Optimization published by Elsevier Science Publishers B.V.
3. Engineering Optimization published by Informa plc and Taylor and Francis LLC
4. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
5. Journal of Global Optimization published by Springer Netherlands
6. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
7. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
8. Journal of the Operational Research Society (JORS) published by Palgrave Macmillan Ltd.
9. Optimization Methods and Software published by Taylor and Francis LLC
10. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
11. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
Chapter 44
Random Optimization
44.1 General Information on Random Optimization
44.1.3 Journals
1. Journal of Heuristics published by Springer Netherlands
Part IV
One of the most fundamental principles in our world is the search for optimal states. It begins
in the microcosm in physics, where atom bonds1 minimize the energy of electrons [2137].
When molecules combine to solid bodies during the process of freezing, they assume crystal
structures which come close to energy-optimal configurations. Such a natural optimization
process, of course, is not driven by any higher intention but results from the laws of
physics.
The same goes for the biological principle of survival of the fittest [2571] which, together
with the biological evolution [684], leads to better adaptation of the species to their envi-
ronment. Here, a local optimum is a well-adapted species that dominates its niche. We, the
Homo sapiens, have reached this level and so have ants, bears, cockroaches, lions, and many
other creatures.
As long as humankind exists, we strive for perfection in many areas. We want to reach
a maximum degree of happiness with the least amount of effort. In our economy, profit and
sales must be maximized and costs should be as low as possible. Therefore, optimization
is one of the oldest sciences which even extends into daily life [2023]. Here, we also can
observe the phenomenon of trade-offs between different criteria which are also common in
many other problem domains: One person may choose to work hard with the goal of securing
stability and pursuing wealth for her future life, accepting very little spare time in the current
stage. Another person can, deliberately, choose a relaxed life full of joy in the current phase
but a rather insecure one in the long term. Similarly, most optimization problems in real-world
applications involve multiple, usually conflicting, objectives.
Furthermore, many optimization problems involve certain constraints. The guy with the
relaxed lifestyle, for instance, regardless how lazy he might become, cannot avoid dragging
himself to the supermarket from time to time in order to acquire food. The eager beaver, on
the other hand, will not be able to keep up with 18 hours of work for seven days per week in
the long run without damaging her health considerably and thus, rendering the goal of an
enjoyable future unattainable. Likewise, the parameters of the solutions of an optimization
problem may be constrained as well.
Every task which has the goal of approaching certain configurations considered as optimal
in the context of pre-defined criteria can be viewed as an optimization problem. If something
is important, general, and abstract enough, there is always a mathematical discipline dealing
with it. Global Optimization2 [609, 2126, 2181, 2714] is the branch of applied mathematics
and numerical analysis that focuses on, well, optimization. Closely related and overlapping
areas are mathematical economics 3 [559] and operations research 4 [580, 1232].
In this book, we will first discuss the things which constitute an optimization problem
and which are the basic components of each optimization process (in the remainder of this
1 http://en.wikipedia.org/wiki/Chemical_bond [accessed 2007-07-12]
2 http://en.wikipedia.org/wiki/Global_optimization [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Mathematical_economics [accessed 2009-06-16]
4 http://en.wikipedia.org/wiki/Operations_research [accessed 2009-06-19]
chapter). In Part II, we take a look at all the possible aspects which can make an optimization
task difficult. Understanding them will allow you to anticipate possible problems and
to prevent their occurrence from the start. In the remainder of the first part of this book,
we discuss a variety of different optimization methods. Part V then gives many examples
of applications for which optimization algorithms can be used. Part VI is a collection of
definitions and knowledge which can be useful to understand the terms and formulas used
in the rest of the book. It is provided in an effort to make the book stand-alone and to allow
students to understand it even if they have little mathematical or theoretical background.
Chapter 46
State Space Search
46.1 Introduction
State space search (SSS) strategies1 are not directly counted among the optimization algorithms.
In Global Optimization, objective functions f ∈ f are optimized. Normally, all elements of
the problem space X are valid and only differ in their utility as solutions. Optimization is
about finding the elements with the best utility. State space search, instead, is based on
a criterion isGoal : X 7→ B which determines whether an element of the problem space is
a valid solution or not. The purpose of the search process is to find elements x for which
isGoal(x) is true. [635, 797, 2356]
Definition D46.1 (isGoal). The function isGoal : X 7→ B is the target predicate of state
space search algorithms which states whether a given candidate solution x ∈ X is a valid
solution (by returning true) or not (by returning false).
To build such an “isGoal” criterion for state space search strategies for Global Optimization
based on objective functions f , we could, for example, specify a threshold y̌i for each objective
function fi ∈ f . If we assume minimization, isGoal(x) of a candidate solution x can be defined
as true if and only if the values of all objective functions for x are below or equal to the
corresponding thresholds, as stated in Equation 46.1.
isGoal(x) = { true if fi (x) ≤ y̌i ∀i ∈ 1.. |f | ; false otherwise }    (46.1)
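A threshold-based goal predicate of this kind is easy to sketch in code. The following is an illustration of Equation 46.1 for minimization; the factory name and the representation of the objectives as a plain list are our own assumptions:

```python
def make_is_goal(objectives, thresholds):
    """Build the Boolean target predicate isGoal: X -> B from objective
    functions f_i and per-objective thresholds y̌_i (minimization assumed):
    isGoal(x) is true iff f_i(x) <= y̌_i for all i."""
    def is_goal(x):
        return all(f(x) <= t for f, t in zip(objectives, thresholds))
    return is_goal
```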
The search space G is referred to as state space in state space search and the genotypes
g ∈ G are called states. Furthermore, G is often identical to the problem space (G = X).
Most state space search methods can only be applied if the search space is enumerable. One feature
of the search algorithms introduced here is that they usually are deterministic. This means
that they will yield the same results in each run (when applied to the same problem, that
is). Jones [1467] provides a further comparison between SSS and Global Optimization.
1 http://en.wikipedia.org/wiki/State_space_search [accessed 2007-08-06]
46.1.2.1 The Neighborhood Expansion Operation
Instead of the basic search operations as introduced in Section 4.2, state space search al-
gorithms often use an operator which enumerates the neighborhood of a state. Here, we
call this operation “expand” and specify it in Definition D46.2. Hertz et al. [1228] refer to
optimization methods that base their search on such an operation, i. e., to state space search
approaches, as neighborhood search.
Definition D46.2 (expand). The operator expand : G 7→ P (G) receives one element g
from the search space G as input and computes a set G of elements which can be reached
from it. Based on Equation 4.3 on page 84, we can define “expand” in the context of
optimization according to Equation 46.2.
expand(g) = Op1 (g) (46.2)
[Figure: acyclic graph of states g1 , . . . , g18 ; ga : isGoal(gpm(ga )) = false, gb : isGoal(gpm(gb )) = true]
Figure 46.1: An example for a possible search space sketched as acyclic graph.
46.1.4 Key Efficiency Features
Many state space search methods fall into the category of local search algorithms as given
in Definition D26.1 on page 229. For all state space search strategies, we can define four
criteria that tell if they are suitable for a given problem or not [797].
1. Completeness. Does the search algorithm guarantee to find a solution (given that
there exists one)? (Do not mix up with the completeness of search operations specified
in Definition D4.8 on page 85.)
2. Time Consumption. How much time will the strategy need to find a solution?
3. Memory Consumption. How much memory will the algorithm need to store inter-
mediate steps? Together with time consumption this property is closely related to
complexity theory, as discussed in ?? on page ??.
4. Optimality. Will the algorithm find an optimal solution if there exist multiple correct
solutions?
The optimization algorithms that we have considered up to now always use some sort of
utility measure to decide in which direction to search for optima: The objective functions
allow us to make fine distinctions between different individuals.
The only exceptions, the only uninformed optimization methods we discussed so far, are
the trivial random walk and sampling as well as exhaustive search algorithms (see Chapter 8
on page 113). Under some circumstances, however, only the criterion “isGoal” is given as a
form of Boolean objective function.
If the objective functions or utility criteria are of Boolean nature, the methods previously
discussed will not be able to estimate and descend some form of gradient. In the best case,
they will degenerate to random walks which still have some chances to find an optimum. In
the worst case, they will converge to some area in the search space and cease to explore new
solutions soon.
Here, uninformed search strategies2 are a viable alternative, since they do not require or
take into consideration any knowledge about the special nature of the problem (apart from
the knowledge represented by the “expand” operation, of course). Such algorithms are very
general and can be applied to a wide variety of problems with small search spaces.
Their common drawback is that they are not suitable if a deep search in a large search
space is required. Without the incorporation of information in form of heuristic functions,
for example, the search may take very long and quickly becomes infeasible [635, 797, 2356].
The uninformed state space search algorithms usually start at one initial location in the search
space. Breadth-first search3 (BFS) begins with expanding the neighborhood of the corre-
sponding root candidate solution. All of the states derived from this expansion are visited
and then all their children, and so on. In general, first all states in depth d are expanded
before considering any state in depth d + 1.
BFS is complete, since it will always find a solution if there exists one. If so, it will
also find the solution that can be reached from the root state with the smallest number
of expansion steps. Hence, breadth-first search is also optimal if the number of expansion
steps needed from the origin to a state is a monotonic function of the objective value of
a solution, i. e., if depth(p1 .g) > depth(p2 .g) ⇒ f(p1 .x) > f(p2 .x) where depth(p.g) is the
number of “expand” steps from the root state r ∈ G to p.g.
Algorithm 46.2 illustrates how breadth-first search works. The algorithm is initialized
with a root state r ∈ G which marks the starting point of the search. BFS uses a state list
P which initially only contains this root individual. In a loop, the first element p is removed
from this list. If the goal predicate isGoal(p.x) evaluates to true, p.x is a goal state and
we can return a set X = {p.x} containing it as the solution. Otherwise, we expand p.g and
append the newly found individuals to the end of queue P . If no solution can be found, this
process will continue until the whole accessible search space has been enumerated and P
becomes empty. Then, an empty set ∅ is returned in place of X, because there is no element
x in the (accessible part of the) problem space X for which isGoal(x) becomes true.
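The procedure described above can be sketched in Python. This is an illustrative sketch, not a transcription of Algorithm 46.2: the function names are our own, we identify G with X for simplicity, and a visited set is added so that the sketch terminates on graph-shaped (rather than tree-shaped) state spaces:

```python
from collections import deque

def breadth_first_search(root, expand, is_goal):
    """BFS over a state space: expand() enumerates the neighbors of a state,
    is_goal() is the target predicate. Returns {solution} or the empty set."""
    queue = deque([root])
    visited = {root}
    while queue:
        state = queue.popleft()           # remove the FIRST element (FIFO)
        if is_goal(state):
            return {state}
        for child in expand(state):
            if child not in visited:      # do not re-expand known states
                visited.add(child)
                queue.append(child)
    return set()                          # accessible space exhausted
```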
In order to examine the space and time complexity of BFS, we assume a hypothetical
state space GH where the expansion of each state g ∈ GH will always return a set of
len(expand(g)) = b new states. In depth 0, we only have one state, the root state r. In
depth 1, there are b states, and in depth 2 we can expand each of them to again, b new
states which makes b2 , and so on. The total number of states up to depth d is given in
Equation 46.3:
1 + b + b² + · · · + bᵈ = (bᵈ⁺¹ − 1)/(b − 1) ∈ O(bᵈ)    (46.3)
Thus, the space and time complexity are both in O(bᵈ) and rise exponentially with d. In
the worst case, all nodes in depth d need to be stored, in the best case only those of depth
d − 1.
3 http://en.wikipedia.org/wiki/Breadth-first_search [accessed 2007-08-06]
46.2.2 Depth-First Search
Depth-first search4 (DFS) is very similar to BFS. From the algorithmic point of view, the
only difference is that it uses a stack instead of a queue as internal storage for states (compare
line 4 in Algorithm 46.3 with line 4 in Algorithm 46.2). Here, always the last element
of the set of expanded states is considered next. Thus, instead of searching level by level
in the breadth as BFS does, DFS searches in depth which – believe it or not – is the reason
for its name. Depth-first search advances in depth until the current state cannot be
expanded any further, i. e., expand(p.g) = ∅. Then, the search steps up one level again. If the whole
search space has been browsed and no solution is found, ∅ is returned.
The memory consumption of the DFS grows linearly with the search depth, because
in depth d, at most d ∗ b states are held in memory (if we again assume the hypothetical
search space GH from the previous section). If we assume a maximum depth m, the time
complexity is O(bᵐ) in the worst case, where the solution is the last child state in the path which
is explored last. If m is very large or infinite, a depth-first search may take very long to
discover a solution or will not find it at all, since it may get stuck in a “wrong” branch of the
state space which can maybe be followed (expanded) for many steps. Also, it may discover
a solution at a deep expansion level while another one may exist at a shallower level in another,
unexplored branch of the search graph. Hence, depth-first search is neither complete nor
optimal.
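The only structural change relative to the BFS sketch is the storage discipline. Again an illustration with our own function names and an added visited set, not a transcription of Algorithm 46.3:

```python
def depth_first_search(root, expand, is_goal):
    """DFS sketch: identical to the BFS sketch except that the open list is
    used as a stack (LIFO), so the search advances in depth first."""
    stack = [root]
    visited = {root}
    while stack:
        state = stack.pop()               # remove the LAST element (LIFO)
        if is_goal(state):
            return {state}
        for child in expand(state):
            if child not in visited:
                visited.add(child)
                stack.append(child)
    return set()
```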
Iterative Deepening Depth-First Search (IDDFS) iteratively runs a depth-limited search with stepwise increasing maximum depths d. In each
iteration, it visits the states according to the depth-first search order. Since the maximum
depth is always incremented by one, one new level in terms of distance in “expand”-
operations from the root is explored in each iteration. This effectively leads to some form
of breadth-first search.
IDDFS thus unites the advantages of BFS and DFS: It is complete and optimal, but only
has a linearly rising memory consumption in O(d ∗ b). The time consumption, of course, is
still in O(bᵈ). IDDFS is the best uninformed search strategy and can be applied to large
search spaces with unknown depth of the solution.
Algorithm 46.5 is intended for (countably) infinitely large search spaces. In real
systems, there is a maximum depth d̆ after which the whole space would have been explored, and the
algorithm should return ∅ if no solution was found.
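IDDFS can be sketched as a depth-limited DFS wrapped in a loop over growing depth bounds. Again an illustration with our own function names; the max_depth cap plays the role of the maximum depth d̆ mentioned above:

```python
def depth_limited_search(state, expand, is_goal, limit):
    """Recursive DFS that is not allowed to descend deeper than `limit`."""
    if is_goal(state):
        return {state}
    if limit == 0:
        return set()
    for child in expand(state):
        found = depth_limited_search(child, expand, is_goal, limit - 1)
        if found:
            return found
    return set()

def iddfs(root, expand, is_goal, max_depth=50):
    """Iterative deepening: run depth-limited searches with bound
    d = 0, 1, 2, ..., so each iteration explores exactly one new level."""
    for depth in range(max_depth + 1):
        found = depth_limited_search(root, expand, is_goal, depth)
        if found:
            return found
    return set()                 # the whole space up to d̆ = max_depth failed
```

Although each iteration re-expands the shallower levels, this only adds a constant factor, which is why the O(bᵈ) time bound is preserved while memory stays linear.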
In an informed search7 , a heuristic function φ helps to decide which states are to be expanded
next. If the heuristic is good, informed search algorithms may dramatically outperform
uninformed strategies [1887, 2140, 2276]. Based on Definition D1.14 on page 34, we now
provide the more specific Definition D46.5.
There are two possible meanings of the values returned by a heuristic function φ:
1. In the above sense, the value of a heuristic function φ(p.x) for an individual p is the
higher, the more “expand”-steps p.g is probably (or approximately) away from a
valid solution. Hence, the heuristic function represents the distance of an individual
to a solution of the optimization problem.
2. The heuristic function can also represent an objective function in some way. Suppose
that we know the minimal value y̌ for an objective function f or at least a value
from where on all solutions are feasible. If this is the case, we could set φ(p.x) =
max {0, f(p.x) − y̌}, assuming that f is subject to minimization. Now the value of the
heuristic function will be the smaller, the closer an individual is to a possible correct
solution, and Equation 46.4 still holds. In other words, a heuristic function may also
represent the distance to a solution in objective space.
Of course, both meanings are often closely related since states that are close to each other
in problem space are probably also close to each other in objective space (the opposite does
not necessarily hold).
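The second meaning, a heuristic derived from an objective function and a feasibility threshold y̌, is small enough to sketch directly. The factory name is our own:

```python
def heuristic_from_objective(f, y_min):
    """Heuristic φ(x) = max(0, f(x) − y̌) for a minimization objective f:
    the remaining distance to a good-enough objective value, exactly zero
    once x is feasible."""
    return lambda x: max(0.0, f(x) - y_min)
```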
A greedy search9 is a best-first search where the currently known candidate solution with
the lowest heuristic value is investigated next. The greedy algorithm internally sorts the list
of currently known states in descending order according to a heuristic function φ. Thus,
the elements with the best (lowest) heuristic value will be at the end of the list, which then
can be used as a stack. The greedy search as specified in Algorithm 46.6 now works like a
depth-first search on this stack and thus, also shares most of the properties of the DFS. It
is neither complete nor optimal and its worst-case time consumption is O(bᵐ). On the other
hand, like breadth-first search, its worst-case memory consumption is also O(bᵐ).
Greedy Search algorithms always find the best possible solutions very efficiently if the
problem forms a matroid10 [2110]. In other situations, however, greedy search can produce
very bad results and therefore should be used with extreme care [201].
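Instead of explicitly sorting the open list as described above, a sketch can keep the states in a priority queue, which is behaviorally equivalent. The function name and the visited set are our own additions, not part of Algorithm 46.6:

```python
import heapq

def greedy_search(root, expand, is_goal, phi):
    """Greedy best-first sketch: always expand the known state with the
    lowest heuristic value phi next (a min-heap replaces the sorted list)."""
    heap = [(phi(root), root)]
    visited = {root}
    while heap:
        _, state = heapq.heappop(heap)    # state with the best (lowest) phi
        if is_goal(state):
            return {state}
        for child in expand(state):
            if child not in visited:
                visited.add(child)
                heapq.heappush(heap, (phi(child), child))
    return set()
```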
46.3.2 A⋆ Search
A⋆ search proceeds exactly like the greedy search outlined in Algorithm 46.6, if φ⋆ is used
instead of the plain φ. For a given start state r ∈ G, an isGoal operator isGoal : X 7→ B,
and an estimation function φ⋆ according to Equation 46.5, Equation 46.6 holds.
An A⋆ search will definitely find a solution if there exists one, i. e., it is complete.
An A⋆ search is optimal if the heuristic function φ used is admissible. Optimal in this case
means that there exists no search algorithm that can find the same solution as the A⋆ search
needing fewer expansion steps if using the same heuristic. If “expand” is implemented in a
way which prevents that a state is visited more than once, φ also needs to be monotone in
order for the search to be optimal. [1204]
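An A⋆ sketch differs from the greedy sketch only in the ordering criterion: states are ranked by φ⋆(g) = depth(g) + φ(g), the expansion steps already taken plus the heuristic estimate of the steps remaining. The function name and the best-depth bookkeeping (which prevents revisiting a state via a longer path) are our own assumptions:

```python
import heapq

def a_star(root, expand, is_goal, phi):
    """A* sketch: priority = depth so far + heuristic estimate phi."""
    heap = [(phi(root), 0, root)]
    best_depth = {root: 0}
    while heap:
        _, depth, state = heapq.heappop(heap)
        if is_goal(state):
            return {state}
        for child in expand(state):
            d = depth + 1
            if d < best_depth.get(child, float("inf")):  # better path found
                best_depth[child] = d
                heapq.heappush(heap, (d + phi(child), d, child))
    return set()
```

With an admissible φ (one that never overestimates the remaining steps), this ordering is what makes the search optimal in the sense stated above.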
47.1.2 Books
1. Introduction to Global Optimization [2126]
2. Global Optimization: Deterministic Approaches [1271]
48.1.2 Books
1. Nonlinear Programming: Analysis and Methods [152]
Applications
Chapter 49
Real-World Problems
Definition D49.1 (Regression). Regression1 [828, 980, 1548] is a statistical technique used
to predict the value of a variable which depends on one or more independent variables.
The result of the regression process is a function ψ ⋆ : Rm 7→ R that relates the m independent
variables (subsumed in the vector x) to one dependent variable y ≈ ψ ⋆ (x). The function ψ ⋆
is the best estimator chosen from a set Ψ of candidate functions ψ : Rm 7→ R. Regression is
strongly related to the estimation theory outlined in Section 53.6 on page 691. In most cases,
like linear2 or nonlinear3 regression, the mathematical model of the candidate functions is
not completely free. Instead, we pick a specific one from an array of parametric functions
by finding the best values for the parameters.
Definition D49.2 (Symbolic Regression). Symbolic regression [149, 846, 1514, 1602,
2251, 2377, 2997] is a general approach to regression where regression functions can be
constructed by combining elements of a set of mathematical expressions, variables and con-
stants.
Symbolic regression is thus not limited to determining the optimal values for the set of
parameters of a certain array of functions. It is much more general than polynomial fitting
methods.
One of the most widespread methods to perform symbolic regression is to apply Genetic
Programming. Here, the candidate functions are constructed and refined by an evolutionary
process. In the following we will discuss the genotypes (which are also the phenotypes) of the
1 http://en.wikipedia.org/wiki/Regression_analysis [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Linear_regression [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Nonlinear_regression [accessed 2007-07-03]
evolution as well as the objective functions that drive it. As illustrated in Figure 49.1, the
candidate solutions, i. e., the candidate functions, are represented by a tree of mathematical
expressions where the leaf nodes are either constants or the fields of the independent variable
vector x.
[Figure 49.1: the mathematical expression 3 ∗ x ∗ x + cos(1/x) and its Genetic Programming tree representation.]
The set Ψ of functions ψ that can possibly be evolved is limited by the set of expressions
Σ available to the evolutionary process.
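Such an expression tree can be represented and evaluated very compactly. The following sketch uses nested tuples for the tree and a protected division (a common Genetic Programming convention); the function name, the node encoding, and the protection value are our own assumptions:

```python
import math

def eval_tree(node, x):
    """Evaluate a GP expression tree given as nested tuples:
    ('+', a, b), ('*', a, b), ('/', a, b), ('cos', a), the variable 'x',
    or a numeric constant leaf."""
    if node == "x":
        return x
    if isinstance(node, (int, float)):
        return node
    op, *args = node
    vals = [eval_tree(a, x) for a in args]
    if op == "+": return vals[0] + vals[1]
    if op == "*": return vals[0] * vals[1]
    if op == "/": return vals[0] / vals[1] if vals[1] != 0 else 1.0  # protected
    if op == "cos": return math.cos(vals[0])
    raise ValueError("unknown expression: %r" % op)

# the expression from Figure 49.1: 3*x*x + cos(1/x)
tree = ("+", ("*", ("*", 3, "x"), "x"), ("cos", ("/", 1, "x")))
```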
Another aspect that influences the possible results of the symbolic regression is the concept
of constants. In general, constants are not really needed since they can be constructed
indirectly via expressions. The constant 2.5, for example, equals the expression
x/(x+x) + ln(x ∗ x)/ln(x).
The evolution of such artificial constants, however, takes rather long. Koza [1602] has
therefore introduced the concept of ephemeral random constants.
Notice that the other reproduction operators for tree genomes have been discussed in detail
in Section 31.3 on page 386.
In the following elaborations, we will reuse some terms that we have applied in our discus-
sion on likelihood in Section 53.6.2 on page 693 in order to find out what measures will
49.1. SYMBOLIC REGRESSION 533
make good objective functions for symbolic regression problems. Again, we are given a finite
set of sample data A containing n = |A| pairs of (xi , yi ) where the vectors xi ∈ Rm are
known inputs to an unknown function ϕ : Rm 7→ R and the scalars yi are its observed
outputs (possibly contaminated with noise and measurement errors subsumed in the term
ηi , see Equation 53.242 on page 693). Furthermore, we can access a (possibly infinitely large)
set Ψ of functions ψ : Rm 7→ R ∈ Ψ which are possible estimators of ϕ. For the inputs xi ,
the results of these functions ψ deviate by the estimation error ε (see Definition D53.56 on
page 692) from the yi .
In order to guide the evolution of estimators (in other words, for driving the regression
process), we need an objective function that furthers candidate solutions which represent the
sample data A and thus closely resemble the function ϕ. Let us call this “driving force” the
quality function.
Definition D49.4 (Quality Function). The quality function f(ψ, A) defines the quality
of the approximation of a function ϕ by a function ψ. The smaller the value of the quality
function is, the more precise the approximation of ϕ by ψ is in the context of the sample
data A.
Under the conditions that the measurement errors ηi are uncorrelated and are all normally
distributed with an expected value of zero and the same variance (see Equation 53.243, Equa-
tion 53.244, and Equation 53.245 on page 693), we have shown in Section 53.6.2 that the best
estimators minimize the mean square error “MSE” (see Equation 53.258 on page 695, Def-
inition D53.62 on page 696 and Definition D53.59 on page 692). Thus, if the source of the
values yi complies at least in a simplified, theoretical manner with these conditions or even
is a real measurement process, the square error is the quality function to choose.
fσ≠0 (ψ, A) = Σ_{i=0}^{len(A)−1} (y_i − ψ(x_i ))²    (49.6)
While this is normally true, there is one exception to the rule: The case where the values
yi are no measurements but direct results from ϕ and η = 0. A common example for this
situation is if we apply symbolic regression in order to discover functional identities [1602,
2033, 2359] (see also Example E49.1 on the next page). Different from normal regression
analysis or estimation, we then know ϕ exactly and want to find another function ψ ⋆ that
is another, equivalent form of ϕ. Therefore, we will use ϕ to create sample data set A
beforehand, carefully selecting characteristic points xi . Thus, the noise and the measurement
errors ηi all become zero. If we still regarded them as normally distributed, their variance
σ² would be zero, too.
The proof for the statement that minimizing the square errors maximizes the likelihood
is based on the transition from Equation 53.253 to Equation 53.254 on page 695 where we
cancel divisions by σ² . This is not possible if σ becomes zero. Hence, we may or may not select
metrics different from the square error as quality function. Its feature of punishing larger
deviations more strongly than small ones, however, is attractive even if the measurement errors
become zero. Another metric which can be used as quality function in these circumstances
is the sum of the absolute values of the estimation errors:
fσ=0 (ψ, A) = Σ_{i=0}^{len(A)−1} |y_i − ψ(x_i )|    (49.7)
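Both quality functions are straightforward to sketch; the function names and the sample representation as (x, y) pairs are our own assumptions:

```python
def quality_sq(psi, samples):
    """Sum of squared estimation errors over the sample data A
    (the fσ≠0 quality function, Equation 49.6)."""
    return sum((y - psi(x)) ** 2 for x, y in samples)

def quality_abs(psi, samples):
    """Sum of absolute estimation errors over the sample data A
    (the fσ=0 quality function for noise-free data, Equation 49.7)."""
    return sum(abs(y - psi(x)) for x, y in samples)
```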
Example E49.1 (An Example and the Phenomenon of Overfitting).
If multi-objective optimization can be applied, the quality function should be complemented
by an objective function that puts pressure in the direction of smaller estimations ψ. In
symbolic regression by Genetic Programming, the problem of code bloat (discussed in ??
on page ??) is prominent. Here, functions do not only grow large because they include useless
expressions (like (x ∗ x + x)/x − x − 1). A large function may consist of functional expressions only,
but instead of really representing or approximating ϕ, it is degenerated to just some sort of
misfit decision table. This phenomenon is called overfitting and has initially been discussed
in Section 19.1 on page 191.
Let us, for example, assume we want to find a function similar to Equation 49.8. Of
course, we would hope to find something like Equation 49.9.
y = ϕ(x) = x² + 2x + 1    (49.8)
y = ψ1⋆ (x) = (x + 1)² = (x + 1)(x + 1)    (49.9)
We obtained both functions ψ1⋆ (in its second form) and ψ2⋆ using the symbolic regression
applet of Hannes Planatscher which can be found at http://www.potschi.de/sr/ [accessed 2007-07-03]4 .
It needs to be said that the first (wanted) result occurred way more often than
absurd variations like ψ2⋆ . But indeed, there are some factors which further the evolution of
such eyesores:
1. If only few sample data points are provided, the set of prospective functions that have
a low estimation error becomes larger. Therefore, chances are that symbolic regression
provides results that only match those points but differ in all other points significantly
from ϕ.
4 Another good applet for symbolic regression can be found at http://alphard.ethz.ch/gerber/
Figure 49.2: ϕ(x), the evolved ψ1⋆ (x) ≡ ϕ(x), and ψ2⋆ (x).
2. If the sample data points are not chosen wisely, their expressiveness is low. We, for
instance, chose 4.9, 5, and 5.1 as well as 2.9, 3, and 3.1, which form two groups with
members very close to each other. Therefore, a curve that approximately hits these
two clouds is rated automatically with a high quality value.
3. A small population size decreases the diversity and furthers “incest” between similar
candidate solutions. Due to a lower rate of exploration, often only a local minimum of the
quality value will be found.
4. Allowing functions of large depth and putting low pressure against bloat (see ?? on
page ??) leads to uncontrolled function growth. The real laws ϕ that we want to approximate
with symbolic regression usually do not consist of more than 40 expressions.
This is valid for most physical, mathematical, or financial equations. Therefore, the
evolution of large functions is counterproductive in those cases.
Although we made some of these mistakes intentionally, there are many situations where it
is hard to determine good parameter sets and restrictions for the evolution and they occur
accidentally.
Definition D49.5 (Data Mining). Data mining5 can be defined as the nontrivial extrac-
tion of implicit, previously unknown, and potentially useful information from data [991] and
the science of extracting useful information from large data sets or databases [1176].
Today, gigantic amounts of data are collected in the web, in medical databases, by enterprise
resource planning (ERP) and customer relationship management (CRM) systems in corpo-
rations, in web shops, by administrative and governmental bodies, and in science projects.
These data sets are way too large to be incorporated directly into a decision making process
or to be understood as-is by a human being. Instead, automated approaches have to be
applied that extract the relevant information, find underlying rules and patterns, or
detect time-dependent changes. Data mining subsumes the methods and techniques capable
to perform this task. It is very closely related to estimation theory in statistics (discussed
in Section 53.6 on page 691) – the simplest summaries of a data set are still the arithmetic
mean or the median of its elements. Data mining is also strongly related to artificial in-
telligence [797, 2356], which includes (machine) learning algorithms that can generalize the
given information. Some of the most widespread and most common data mining techniques
are:
1. artificial neural networks (ANN) [310, 314],
2. support vector machines (SVM) [450, 2773, 2788, 2849],
3. logistic regression [37],
4. decision trees [281, 406, 2234, 2961],
5. Genetic Programming for synthesizing classifiers [481, 993, 1985, 2855],
6. Learning Classifier Systems as introduced in Chapter 35 on page 457, and
7. naïve Bayes Classifiers [805, 2312].
49.2.1 Classification
Broadly speaking, the task of classification is to assign a class k from a set of possible
classes K to vectors (data samples) a = (a[0], a[1], . . . , a[n − 1]) consisting of n attribute
values a[i] ∈ Ai . In supervised approaches, the starting point is a training set At including
training samples at ∈ At for which the corresponding classes class(at ) ∈ K are already
known.
A data mining algorithm is supposed to learn a relation (called classifier ) C : A0 × A1 ×
· · ·×An−1 7→ K which can map such attribute vectors a to a corresponding class k ∈ K. The
training set At usually contains only a small subset of the possible attribute vectors and may
even include contradicting samples (at,1 = at,2 : at,1 , at,2 ∈ At ∧ class(at,1 ) 6= class(at,2 )).
The better the classifier C is, the more often it can correctly classify attribute vectors.
An overfitted classifier has no generalization capability: it learns only the exact relations
provided in the training sample but is unable to classify samples not included in At with
reasonable precision. In order to test whether a classifier C is overfitted, not all of the available data A is used for learning. Instead, A is divided into the training set At and the test set Ac. If C's precision on Ac is much worse than on At, C is overfitted and should not be used in a practical application.
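As an illustration, the train/test procedure just described can be sketched in Python. The data set, the simple 1-nearest-neighbor classifier, and all numbers are hypothetical and merely stand in for any real data mining algorithm:

```python
import random

def nearest_neighbor_classifier(training):
    """Return a classifier C mapping an attribute vector a to a class k,
    here by the simple 1-nearest-neighbor rule over the training set."""
    def classify(a):
        def sqdist(sample):
            return sum((x - y) ** 2 for x, y in zip(a, sample[0]))
        return min(training, key=sqdist)[1]
    return classify

def precision(classifier, samples):
    """The fraction of samples (a, class(a)) mapped to their correct class."""
    return sum(classifier(a) == k for a, k in samples) / len(samples)

random.seed(42)
# Hypothetical data set A: two noisy clusters, labeled with classes 0 and 1.
A = [((random.gauss(k, 1.0), random.gauss(k, 1.0)), k)
     for k in (0, 1) for _ in range(100)]
random.shuffle(A)
At, Ac = A[:150], A[150:]      # training set At and test set Ac

C = nearest_neighbor_classifier(At)
print(precision(C, At))        # 1.0: a 1-NN classifier memorizes its training set
print(round(precision(C, Ac), 2))
```

If the second number were much lower than the first, the classifier would be considered overfitted in the sense described above.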
Figure 49.3: The projected freight traffic volume on German roads, in 10⁹ t·km, from 2005 to 2050.
According to the German Federal Ministry of Economics and Technology [447], the freight
traffic volume on German roads will have doubled by 2050 as illustrated in Figure 49.3.
Reasons for this development are the effects of globalization as well as the central location
of the country in Europe. With the steadily increasing freight traffic resulting from trade
inside the European Union and global import and export [2248], transportation and logistics
become more important [2768, 2825]. Thus, a need for intelligent solutions for the strategic
planning of logistics becomes apparent [447]. Such a planning process can be considered as
a multi-objective optimization problem which has the goals [2607, 2914] of increasing the profit of the logistics companies by reducing the total distance to be covered and the work hours of the drivers, and of decreasing the CO2 production.
Fortunately, the last point is a side-effect of the others. By reducing the total distance covered and by transporting a larger fraction of the freight via (inexpensive) trains, not only the drivers' work hours and the costs are decreased, but also the CO2 production declines.
Efficient freight planning is not a static procedure. Although it involves building an
overall plan on how to deliver orders, it should also be able to dynamically react to unforeseen
problems such as traffic jams or accidents. This reaction should lead to a local adaptation of
the plan and re-routing of all involved freight vehicles whereas parts of the plan concerning
geographically distant and uninvolved objects are supposed to stay unchanged.
In the literature, the creation of freight plans is known as the Vehicle Routing Prob-
lem [497, 1091, 2162]. In this section, we present an approach to Vehicle Routing for
real-world scenarios: the freight transportation planning component of the in.west system.
in.west, or “Intelligente Wechselbrücksteuerung” in full, is a joint research project of DHL,
Deutsche Post AG, Micromata, BIBA, and OHB Teledata funded by the German Federal
Ministry of Economics and Technology.6
In the following section, we discuss different flavors of the Vehicle Routing Problem and
the general requirements of logistics departments which specify the framework for our freight
planning component. These specific conditions rendered the related approaches outlined in
Section 49.3.3 infeasible for our situation. In Section 49.3.4, we present an Evolutionary
6 See http://www.inwest.org/ [accessed 2008-10-29].
538 49 REAL-WORLD PROBLEMS
Algorithm for multi-objective, real-world freight planning problems [2189]. The problem-
specific representation of the candidate solutions and the intelligent search operators working
on them are introduced, as well as the objective functions derived from the requirements.
Our approach has been tested in many different scenarios and the experimental results are
summarized in Section 49.3.5. The freight transportation planning component described
in this chapter is only one part of the holistic in.west approach to logistics which will be
outlined in Section 49.3.6. Finally, we conclude with a discussion of the results and future
work in Section 49.3.7.
The Vehicle Routing Problem (VRP) is one of the most famous combinatorial optimization problems. In simple terms, the goal is to determine a set of routes that can satisfy several geographically scattered customers' demands while minimizing the overall costs [2162].
Usually, a fleet of vehicles located in one depot is supposed to fulfill these requests. The original version of the VRP was proposed by Dantzig and Ramser [680] in 1959, who addressed the calculation of a set of optimal routes for a fleet of gasoline delivery trucks.
As described next, a large number of variants of the VRP exist, adding different con-
straints to the original definition. Within the scope of in.west, we first identified all the restrictions of real-world Vehicle Routing Problems that occur in companies like DHL and then analyzed available approaches from the literature.
The Capacitated Vehicle Routing Problem (CVRP), for example, is similar to the classical
Vehicle Routing Problem with the additional constraint that every vehicle must have the
same capacity. A fixed fleet of delivery vehicles must service known customers’ demands
of a single commodity from a common depot at minimum transit costs [806, 2212, 2262].
The Distance Vehicle Routing Problem (DVRP) is a VRP extended with the additional
constraint on the maximum total distance traveled by each vehicle. In addition, Multiple
Depot Vehicle Routing Problems (MDVRP) have several depots from which customers can
be supplied. Therefore, the MDVRP requires the assignment of customers to depots. A
fleet of vehicles is based at each depot. Each vehicle then starts at its corresponding depot,
services the customers assigned to that depot, and returns.
Typically, the planning period for a classical VRP is a single day. Different from this
approach are Periodic Vehicle Routing Problems (PVRP), where the planning period is ex-
tended to a specific number of days and customers have to be served several times with
commodities. In practice, Vehicle Routing Problems with Backhauls (VRPB), where customers can return some commodities [2212], are very common. Here, all deliveries for each route must be completed before any pickups are made. Then, it also becomes necessary
to take into account that the goods which customers return to the deliverer must fit into
the vehicle.
The Vehicle Routing Problem with Pick-up and Delivery (VRPPD) is a capacitated Vehicle Routing Problem where each customer can both be supplied with commodities and return commodities to the deliverer. Finally, the Vehicle Routing Problem with Time
Windows (VRPTW) is similar to the classical Vehicle Routing Problem with the additional
restriction that time windows (intervals) are defined in which the customers have to be
supplied [2212]. Figure 49.4 shows the hierarchy of Vehicle Routing Problem variants and
also the problems which are relevant in the in.west case.
49.3. FREIGHT TRANSPORTATION PLANNING 539
Figure 49.4: Different flavors of the VRP and their relation to the in.west system.
As it becomes obvious from Figure 49.4, the situation in logistics companies is relatively
complicated and involves many different aspects of Vehicle Routing. The basic unit of freight
considered in this work is a swap body b, a standardized container (C 745, EN 284 [2679])
with a dimension of roughly 7.5m × 2.6m × 2.7m and special appliances for easy exchange
between transportation vehicles or railway carriages. Logistics companies like DHL usually
own up to one thousand such containers. We refer to the union of all swap bodies as the set
B.
We furthermore define the union of all possible means of transportation as the set F . All
trucks tr ∈ F can carry at most a certain maximum number v̂ (tr) of swap bodies at once.
Commonly and also in the case of DHL, this limit is v̂ (tr) = 2. The maximum load of trains
z ∈ F , on the other hand, is often more variable and usually ranges somewhere between
30 and 60 (v̂ (z) ∈ [30..60]). Trains have fixed routes, departure, and arrival times whereas
freight trucks can move freely on the map. In many companies, trucks must perform cyclic
tours, i.e., return to their point of departure by the end of the day, in order to allow the
drivers to return home.
The clients and the depots of the logistics companies together can form more than one thousand locations from which freight may be collected or to which it may be delivered. We will refer to the set of all these locations as L. Each transportation order has a fixed time window [ťs, t̂s] in which it must be collected from its source ls ∈ L. From there, it has to be carried to its destination location ld ∈ L where it must arrive within a time window [ťd, t̂d]. An order furthermore has a volume v which we assume to be an integer multiple of the capacity of a swap body. Hence, a transportation order o can fully be described by the tuple o = ⟨ls, ld, [ťs, t̂s], [ťd, t̂d], v⟩. In our approach, orders which require more than one (v > 1) swap body will be split up into multiple orders requiring one swap body (v = 1) each.
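The order tuple and the splitting of orders with v > 1 can be sketched as follows. The Order type, its field names, and the example values are our own illustrative choices, not the in.west data model:

```python
from typing import NamedTuple, Tuple

class Order(NamedTuple):
    """A transportation order o = <ls, ld, [ts_min, ts_max], [td_min, td_max], v>."""
    ls: str                   # source location
    ld: str                   # destination location
    ts: Tuple[int, int]       # collection time window at the source
    td: Tuple[int, int]       # delivery time window at the destination
    v: int                    # volume, in integer multiples of a swap body capacity

def split_order(o: Order):
    """Split an order with v > 1 into v unit orders with v = 1 each."""
    return [o._replace(v=1) for _ in range(o.v)]

o = Order("Essen", "Hannover", (9, 11), (14, 16), 3)
unit_orders = split_order(o)
print(len(unit_orders))                   # 3
print(all(u.v == 1 for u in unit_orders)) # True
```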
Logistics companies usually have to service up to a few thousand such orders per day.
The express unit of the project partner DHL, for instance, delivered between 100 and 3000 such orders
per day in 2007, depending on the day of the week as well as national holidays etc.
The result of the planning process is a set x of tours. Each single tour ξ is described by a tuple ξ = ⟨ls, ld, f, ť, t̂, b, o⟩. ls and ld are the start and destination locations, and ť and t̂ are the departure and arrival times of the vehicle f ∈ F. On this tour, f carries the set b = {b1, b2, . . . } of swap bodies which, in turn, contain the orders o = {o1, o2, . . . }. It is
assumed that, for each truck, there is at least one corresponding truck driver and that the
same holds for all trains.
Tours are the smallest unit of freight transportation. Usually, multiple tours are com-
bined for a delivery: First, a truck tr may need to drive from the depot in Dortmund to Bochum to pick up an unused swap body sb (ξ1 = ⟨Dortmund, Bochum, tr, 9am, 10am, ∅, ∅⟩). In a subsequent tour ξ2 = ⟨Bochum, Essen, tr, 10.05am, 11am, {sb}, ∅⟩, it carries the empty swap body sb to a customer in Essen. There, the order o is loaded into sb and then transported to its destination o.ld = Hannover (ξ3 = ⟨Essen, Hannover, tr, 11.30am, 4pm, {sb}, {o}⟩).
Obviously, the set x must be physically sound. It must, for instance, not contain any two temporally intersecting tours ξ1, ξ2 (ξ1.ť < ξ2.t̂ ∧ ξ2.ť < ξ1.t̂) involving the same vehicle (ξ1.f = ξ2.f), swap bodies (ξ1.b ∩ ξ2.b ≠ ∅), or orders (ξ1.o ∩ ξ2.o ≠ ∅). Also, it must be ensured that all objects involved in a tour ξ reside at ξ.ls at time ξ.ť. Furthermore, the capacity
limits of all involved means of transportation must be respected, i. e., 0 ≤ |ξ.b| ≤ v̂ (ξ.f ). If
some of the freight is carried by trains, the fixed halting locations of the trains as well as
their assigned departure and arrival times must be considered. The same goes for laws re-
stricting the maximum amount of time a truck driver is allowed to drive without breaks and
constraints imposed by the company’s policies such as the aforementioned cyclic character
of truck tours. Only plans for which all these conditions hold can be considered as correct.
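The core soundness conditions can be sketched as a validity check. The Tour record and the example data are simplified assumptions; real plans would additionally have to respect train schedules, object locations, and statutory driving breaks:

```python
from itertools import combinations
from typing import NamedTuple, FrozenSet

class Tour(NamedTuple):
    ls: str                    # start location
    ld: str                    # destination location
    f: str                     # vehicle
    t_dep: float               # departure time
    t_arr: float               # arrival time
    b: FrozenSet[str]          # swap bodies carried
    o: FrozenSet[str]          # orders carried

def intersect(t1, t2):
    """Two tours intersect in time iff each starts before the other ends."""
    return t1.t_dep < t2.t_arr and t2.t_dep < t1.t_arr

def is_sound(plan, v_max):
    """Check two of the soundness conditions from the text: capacity limits
    0 <= |b| <= v_max[f], and no two temporally intersecting tours sharing
    a vehicle, swap bodies, or orders."""
    for t in plan:
        if not 0 <= len(t.b) <= v_max[t.f]:
            return False
    for t1, t2 in combinations(plan, 2):
        if intersect(t1, t2) and (t1.f == t2.f or t1.b & t2.b or t1.o & t2.o):
            return False
    return True

v_max = {"truck1": 2}
plan = [
    Tour("Dortmund", "Bochum", "truck1", 9.0, 10.0, frozenset(), frozenset()),
    Tour("Bochum", "Essen", "truck1", 10.05, 11.0, frozenset({"sb"}), frozenset()),
]
print(is_sound(plan, v_max))   # True: the two consecutive tours do not overlap
```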
From the perspective of the planning system’s user, runtime constraints are of the same
importance: Ideally, the optimization process should not exceed one day. Even the best
results become useless if their computation takes longer than the time span from receiving
the orders to the day where they actually have to be delivered.
Experience has shown that hiring external carriers for a small fraction of the freight can often significantly reduce the number of tours to be carried out by the company's own vehicles and the corresponding total distance to be covered, provided that the organization's existing capacities are already utilized to their limits. Therefore, a good transportation planning
system should also be able to make suggestions on opportunities for such an on-demand
outsourcing. An example for this issue is illustrated in Fig. 49.7.b.
The framework introduced in this section holds for practical scenarios in logistics companies like DHL and Deutsche Post. It poses a hard challenge for research, since it involves multiple intertwined optimization problems and combines several aspects, even surpassing the complexity of the most difficult Vehicle Routing Problems known from the literature.
Evolutionary Algorithms in general are discussed in-depth in Chapter 28. Here we just focus
on the methods that we introduced to solve the transportation planning problem at hand.
We base our approach on an EA with new, intelligent operators and a complex solution
representation.
When analyzing the problem structure outlined in Section 49.3.2.2, it becomes obvious that standard encodings such as binary [1075] or integer strings, matrices, or real vectors cannot be used in the context of this very general logistics planning task. Although it might
be possible to create a genotype-phenotype mapping capable of translating an integer string
into a tuple ξ representing a valid tour, trying to encode a set x of a variable number of such
tours in an integer string is not feasible. First, there are many substructures involved in a
tour which have variable length such as the sets of orders o and swap bodies b. Second, it
would be practically impossible to ensure the required physical soundness of the tours given
that the reproduction operations would randomly modify the integer strings.
In our work, we adhered to the premise that all candidate solutions must represent
correct solutions according to the specification given in Section 49.3.2.2 and none of the
search operations are allowed to violate this correctness. A candidate solution x ∈ X does
not necessarily contain a complete plan which manages to deliver all orders. Instead, partial
solutions (again as demanded in Section 49.3.2.2) are admitted, too.
Figure 49.5: The UML specification of the phenotypes: a Phenotype (x) aggregates Tours (ξ), each with a startLocationID (ls), endLocationID (ld), startTime (ť), endTime (t̂), orderIDs[] (o), swapBodyIDs[] (b) referencing 0..v̂(f) SwapBodies (b ∈ B), and a vehicleID referencing a Vehicle (f ∈ F); an Order (o) holds its own location IDs, minStartTime (ťs), maxStartTime (t̂s), minEndTime (ťd), and maxEndTime (t̂d); Locations (l ∈ L) link Tours and Orders.
In order to achieve such a behavior, it is clear that all reproduction operations in the Evolutionary Algorithm must have access to the complete set x of tuples ξ. Only then can they check whether the modifications to be applied may impair the correctness of the plans. Therefore, the phenotypes are not encoded at all; instead, they are the plan objects in their native representation as illustrated in Figure 49.5.
This figure holds the UML specification of the phenotypes in our planning system. Exactly the same data structures are also used by the in.west middleware and graphical user interface. The location IDs (startLocationID, endLocationID) of the Orders and Tours are
indices into a database. They are also used to obtain distances and times of travel between
locations from a sparse distance matrix which can be updated asynchronously from different
information sources. The orderIDs, swapBodyIDs, and vehicleIDs are indices into a database
as well.
By using this explicit representation, the search operations have full access to all the infor-
mation in the freight plans. Standard crossover and mutation operators are, however, no
longer applicable. Instead, intelligent operators have to be introduced which respect the
correctness of the candidate solutions.
For the in.west planning system, three crossover and sixteen mutation operations have
been defined, each dealing with a specific constellation in the phenotypes and performing one
distinct type of modification. During the evolution, individuals to be mutated are processed by a randomly picked operator. If the operator is not applicable because the individual does not belong to the corresponding constellation, another operator is tried. This is repeated until either the individual is modified or all operators have been tested. Two individuals to be combined with crossover are processed by a randomly selected operator as well.
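The operator-dispatch scheme just described can be sketched as follows. The toy individuals and operators are purely illustrative, since the real operators work on complete freight plans:

```python
import random

def mutate(individual, operators, rng=random):
    """Try the mutation operators in random order until one is applicable
    or all have been tested.  Each operator returns a modified copy of the
    individual, or None if it is not applicable to this constellation."""
    untried = list(operators)
    rng.shuffle(untried)
    for op in untried:
        result = op(individual)
        if result is not None:
            return result
    return individual            # no operator applicable: leave unchanged

# Hypothetical toy operators on a list-based individual:
def add_item(x):
    return x + [len(x)]          # always applicable
def drop_item(x):
    return x[:-1] if x else None # applicable only to non-empty individuals

print(mutate([], [drop_item]))                    # [] (operator not applicable)
print(mutate([1, 2], [add_item, drop_item]) != [1, 2])  # True
```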
Figure 49.6: Some mutation operators from the freight planning EA.
Obviously, we cannot give detailed specifications of all twenty genetic operations [2188] (including the initial individual creation) in this chapter. Instead, we outline the mutation operators sketched in Figure 49.6 as examples.
The first operator (Fig. 49.6.a) is applicable if there is at least one order which would
not be delivered if the plan in the input phenotype x was carried out. This operator chooses
randomly from all available means of transportation. Available in this context means “not
involved in another tour for the time between the start and end times of the order”. The
freight transporters closer to the source of the order are picked with higher probability.
Then, a swap body is allocated in the same manner. This process leads to between one and
three new tours being added to the phenotype. If the transportation vehicle is a truck, a
fourth tour is added which allows it to travel back to its starting point. This step is optional and is applied only if the system is configured to send all trucks back "home" after the end of their routes, as is the case at DHL.
Fig. 49.6.b illustrates one operator which tries to include an additional order o into an
already existing set of related tours. If the truck driving these tours has space for another
swap body, at least one free swap body b is available, and picking up o and b as well as
delivering o is possible without violating the time constraints of the other transportation
orders already involved in the set of tours, the order is included and the corresponding new
tours are added to the plan.
The mutator sketched in Fig. 49.6.c does the same if an additional order can be included
in already existing tours because of available capacities in swap bodies. Such spare capacities
occur from time to time since the containers are not left at the customers’ locations after
unloading the goods but transported back to the depots. For all operators which add new
orders, swap bodies, or tours to the candidate solutions, inverse operations which remove
these elements are provided, too.
One exceptional operator is the "truck-meets-truck" mechanism. Often, two trucks are carrying out deliveries in opposite directions (B → D and D → B in Fig. 49.6.d). The operator tries to find a location C which is close to both B and D. If the time windows of the orders allow it, the two involved trucks can meet at this halting point C and exchange their freight. This way, the total distance that they have to drive can almost be halved from 4 ∗ BD to 2 ∗ BC + 2 ∗ CD, where BC + CD ≈ BD.
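The distance saving of the truck-meets-truck operator is easy to verify numerically; the coordinates below are hypothetical:

```python
from math import dist

# Hypothetical coordinates (in km) for the locations in Fig. 49.6.d:
B, D = (0.0, 0.0), (100.0, 0.0)
C = (50.0, 1.0)                  # a meeting point close to the B-D axis

BD, BC, CD = dist(B, D), dist(B, C), dist(C, D)

without_meeting = 4 * BD         # both trucks drive B->D->B resp. D->B->D
with_meeting = 2 * BC + 2 * CD   # they swap freight at C and turn around

print(without_meeting)           # 400.0
print(round(with_meeting, 2))    # close to 200, since BC + CD is close to BD
```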
The first recombination operator used in the in.west system copies all tours from the
first parent and then adds all tours from the second parent in a way that does not lead to a
violation of the solution’s correctness. In this process, tours which belong together such as
those created by the first mutator mentioned are kept together. A second crossover method
tries to find sets of tours in the first parent which intersect with similar sets in the second
parent and joins them into an offspring plan in the same way the truck-meets-truck mutator
combines tours.
The freight transportation planning process run by the Evolutionary Algorithm is driven
by a set f of three objective functions (f = {f1 , f2 , f3 }). These functions, all subject to
minimization, are based on the requirements stated in Section 49.3.1 and are combined via
Pareto comparisons (see Section 3.3.5 on page 65 and [593, 599]) in the fitness assignment
processes.
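A minimal sketch of the Pareto comparison used here, for minimization and with made-up objective vectors (f1, f2, f3):

```python
def dominates(f_a, f_b):
    """Pareto dominance for minimization: a dominates b iff a is no worse
    in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(f_a, f_b))
            and any(x < y for x, y in zip(f_a, f_b)))

# Hypothetical objective vectors (f1, f2, f3) of candidate plans:
print(dominates((0, 19000, 5), (2, 19000, 5)))  # True: fewer undelivered orders
print(dominates((0, 19000, 5), (0, 18000, 5)))  # False: worse total distance
print(dominates((0, 18000, 5), (0, 19000, 4)))  # False: the plans are incomparable
```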
One of the most important aspects of freight planning is to deliver as many orders as possible. Therefore, the first objective function f1(x) returns the number of orders which would not be delivered in a timely manner if the plan x was carried out. The optimum of f1 is zero. Human operators need to hire external carriers for orders which cannot be delivered (due to insufficient resources, for instance).
By using a sparse distance matrix stored in memory, the second objective function f2 determines the total distance covered by all vehicles involved. Minimizing this distance leads to lower fuel consumption and thus lower costs and less CO2 production. The global optimum of this function is not known a priori and may not be discovered by the optimization process either.
The third objective function minimizes the spare capacities of the vehicles involved in tours.
In other words, it considers the total volume left empty in the swap bodies on the road and
the unused swap body slots of the trucks and trains. f2 does not consider whether trucks
are driving tours empty or loaded with empty containers. These aspects are handled by f3
which again has the optimum zero.
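The three objective functions can be sketched as follows. The Tour record, the field names, and the toy plan are illustrative assumptions, and f3 is reduced to counting unused swap-body slots (the real f3 also accounts for the volume left empty inside the swap bodies):

```python
from collections import namedtuple

# Hypothetical tour records carrying only the fields the objectives need.
Tour = namedtuple("Tour", "ls ld f b o")

def f1(plan, all_orders):
    """f1: the number of orders not delivered by plan x (optimum: 0)."""
    delivered = set()
    for t in plan:
        delivered |= t.o
    return len(all_orders - delivered)

def f2(plan, distance):
    """f2: the total distance covered, looked up in a distance matrix."""
    return sum(distance[t.ls, t.ld] for t in plan)

def f3(plan, v_max):
    """f3: spare capacity, here only the unused swap-body slots (optimum: 0)."""
    return sum(v_max[t.f] - len(t.b) for t in plan)

distance = {("A", "B"): 120, ("B", "C"): 80}
plan = [Tour("A", "B", "tr", {"sb1"}, {"o1"}),
        Tour("B", "C", "tr", {"sb1"}, {"o1"})]
print(f1(plan, {"o1", "o2"}))   # 1: order o2 is not delivered
print(f2(plan, distance))       # 200
print(f3(plan, {"tr": 2}))      # 2: one free slot on each of the two tours
```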
49.3.5 Experiments
Because of the special requirements of the in.west project and the many constraints im-
posed on the corresponding optimization problem, the experimental results cannot be di-
rectly compared with other works. As we have shown in our discussion of related work
in Section 49.3.3, none of the approaches in the Vehicle Routing literature are sufficiently
similar to this scenario.
Hence, it was especially important to evaluate our freight planning system rigorously. We
have therefore carried out a series of tests according to the full factorial design of experiments
paradigm [378, 3030]. These experiments (which we will discuss in Section 49.3.5.1) are based
on a single, real-world set of orders. The results of additional experiments performed with
different datasets are outlined in Section 49.3.5.2. All data used have been reconstructed
from the actual order database of the project partner DHL, one of the largest logistics
companies worldwide. This database is also the yardstick with which we have measured the
utility of our system.
The experiments were conducted using a simplified distance matrix for both the EA and the original plans. Since the original plans did not involve trains, we also deactivated the mutation operators which incorporate train tours into candidate solutions – otherwise the results would have been incomparable. Legal aspects like statutory idle periods of the
truck drivers have not been considered in the reproduction operators either. However, only
plans not violating these constraints were considered in the experimental evaluation.
Evolutionary Algorithms have a wide variety of parameters, ranging from the choice of sub-
algorithms (like those computing a fitness value from the vectors of objective values for
each individual) to the mutation rate determining the fraction of the selected candidate
solutions which are to undergo mutation. The performance of an EA strongly depends on
the configuration of these parameters. Usually, different configurations are beneficial in different optimization problems, and a setting finding optimal solutions in one application may lead to premature convergence to a local optimum in other scenarios. Because of the
novelty of the presented approach for transportation planning, performing a large number
of experiments with different settings of the EA was necessary in order to find the optimal
configuration to be utilized in the in.west system in practice.
We therefore decided to conduct a full factorial experimental series, i. e., one where all
possible combinations of settings of a set of configuration parameters are tested. As basis
for this series, we used a test case consisting of 183 orders reconstructed from one day in
December 2007. The original freight plan xo for these orders contained 159 tours which
covered a total distance of d = f2 (xo ) = 19 109 km. The capacity of the vehicles involved
was filled to 65.5%. The parameters examined in these experiments are listed in Table 49.2.
Table 49.2: The parameters examined in the full factorial experiments.

Param.  Setting and Meaning
ss      In every generation of the EA, new individuals are created by the reproduction operations. The parent individuals in the population are then either discarded (generational, ss = 0) or compete with their offspring (steady-state, ss = 1).
el      Elitist Evolutionary Algorithms keep an additional archive preserving the best candidate solutions found (el = 1). Using elitism ensures that these candidate solutions cannot be lost due to the randomness of the selection process. Turning off this feature (el = 0) may allow the EA to escape local optima more easily.
ps      Allowing the EA to work with populations consisting of many individuals increases its chance of finding good solutions but also increases its runtime. Three different population sizes were tested: ps ∈ {200, 500, 1000}.
fa      Either simple Pareto ranking [599] (fa = 0) or an extended assignment process (fa = 1, called variety preserving in [2892]) with sharing was applied. Sharing [742, 1250] decreases the fitness of individuals which are very similar to others in the population in order to force the EA to explore many different areas in the search space.
c       The simple convergence prevention (SCP) method proposed in Section 28.4.7.2 on page 304 was either used (c = 0.3) or not (c = 0). SCP is a clearing approach [2167, 2391] applied in the objective space which discards candidate solutions with equal objective values with probability c.
mr/cr   Different settings for the mutation rate mr ∈ {0.6, 0.8} and the crossover rate cr ∈ {0.2, 0.4} were tested. These rates do not necessarily sum up to 1, since individuals resulting from recombination may undergo mutation as well.
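As a quick check, the full factorial design over the settings in Table 49.2 can be enumerated; 2 · 2 · 3 · 2 · 2 · 2 · 2 = 192 combinations result:

```python
from itertools import product

# The parameter settings from Table 49.2:
settings = {
    "ss": [0, 1],               # generational vs. steady-state
    "el": [0, 1],               # elitism off / on
    "ps": [200, 500, 1000],     # population size
    "fa": [0, 1],               # Pareto ranking vs. variety preserving
    "c":  [0.0, 0.3],           # SCP off / rejection probability 0.3
    "mr": [0.6, 0.8],           # mutation rate
    "cr": [0.2, 0.4],           # crossover rate
}

configurations = [dict(zip(settings, values))
                  for values in product(*settings.values())]
print(len(configurations))      # 192
```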
These settings were varied in the experiments and each of the 192 possible configurations
was tested ten times. All runs utilized a tournament selection scheme with five contestants
and were granted 10 000 generations. The measurements collected are listed in Table 49.3.
Table 49.3: The measurements collected in the experiments.

Meas.  Meaning
ar     The number of runs which found plans that completely covered all orders.
at     The median number of generations needed by these runs until such plans were found.
gr     The number of runs which managed to find such plans which additionally were at least as good as the original freight plans.
gt     The median number of generations needed by these runs in order to find such plans.
et     The median number of generations after which f2 did not improve by more than 1%, i. e., the point where the experiments could have been stopped without a significant loss in the quality of the results.
eτ     The median number of individual evaluations until this point.
d      The median value of f2, i. e., the median distance covered.
Table 49.4: The best and the worst evaluation results in the full-factorial tests.
Table 49.4 contains the thirteen best and the twelve worst configurations, sorted according to gr, d, and eτ. The best configuration managed to consistently reduce the distance to be covered by over 3000 km (17%). Even the configuration ranked 170 (not in Table 49.4) saved almost 1100 km in the median. In total, 172 out of the 192 test series managed to surpass the original plans for the orders in the dataset in all ten of their runs, and only ten configurations were unable to achieve this goal at all.
The experiments indicate that a combination of the highest tested population size
(ps = 1000), steady-state and elitist population treatment, SCP with rejection probability
c = 0.3, a sharing-based fitness assignment process, a mutation rate of 80%, and a crossover
rate of 40% is able to produce the best results. We additionally applied significance tests –
the sign test [2490] and Wilcoxon’s signed rank test [2490, 2939] – in order to check whether
there are also settings of single parameters which generally have a positive influence. On a significance level of α = 0.02, we accepted a tendency only if both (two-tailed) tests agreed. Applying the convergence prevention mechanism (SCP) [2892], larger population
sizes, variety preserving fitness assignment [2892], elitism, and higher mutation and lower
crossover rates have significantly positive influence in general.
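The two-tailed sign test used in this comparison can be sketched with the standard library (Wilcoxon's signed rank test is available, e.g., as scipy.stats.wilcoxon); the win/loss counts below are hypothetical:

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-tailed sign test: p-value under the null hypothesis that both
    configurations win equally often (ties are dropped beforehand)."""
    n, k = wins + losses, max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical outcome: one setting wins 9 of 10 paired runs.
p = sign_test_p(9, 1)
print(p < 0.05)   # True: significant at the 5% level ...
print(p < 0.02)   # False: ... but not at the alpha = 0.02 used here
```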
Interestingly, the steady-state configurations lost in the significance tests against the
generational ones, although the seven best-performing settings were steady-state. Here the
utility of full factorial tests becomes obvious: steady-state population handling performed
very well if (and only if) sharing and the SCP mechanism were applied, too. In the other
cases, it led to premature convergence.
This behavior shows the following: transportation planning is a multimodal optimization
problem with a probably rugged fitness landscape [2915] or with local optima which are many
search steps (applications of reproduction operators) apart. Hence, applying steady-state
EAs for Vehicle Routing Problems similar to the one described here can be beneficial, but
only if diversity-preserving fitness assignment or selection algorithms are used in conjunction.
Only then, the probability of premature convergence is kept low enough and different local
optima and distant areas of the search space are explored sufficiently.
We ran experiments with many other order datasets for which the actual freight plans used by the project partners were available. In all scenarios, our approach yielded an improvement which was never below 1%, usually above 5%, and on some days even exceeded 15%. Figure 49.7 illustrates the best f2-values (the total kilometers) of the individuals with the most orders satisfied in the population for two typical example evolutions.
In both diagrams, the total distance first increases as the number of orders delivered by
the candidate solutions rises due to the pressure from f1 . At some point, plans which are
able to deliver all orders evolved and f1 is satisfied (minimized). Now, its corresponding
dimension of the objective space begins to collapse, the influence of f2 intensifies, and the
total distances of the plans decrease. Soon afterwards, the efficiency of the original plans is
surpassed. Finally, the populations of the EAs converge to a Pareto frontier and no further
improvements occur. In Fig. 49.7.a, this limit was 54 993 km, an improvement of more than
8800 km or 13.8% compared to the original distance of 63 812 km.
Each point in the graph of f2 in the diagrams represents one point in the Pareto frontier
of the corresponding generation. Fig. 49.7.b illustrates one additional graph for f2 : the best
plans which can be created when at most 1% of the orders are outsourced. Compared to the
transportation plan including assignments for all orders which had a length of 79 464 km,
these plans could reduce the distance to 74 436 km, i. e., another 7% of the overall distance
could be saved. Thus, in this case, an overall reduction of around 7575 km is achieved in
comparison to the original plan, which had a length of 82 013 km.
One run of the algorithm (prototypically implemented in Java) for the dataset used in
the full factorial tests (Section 49.3.5.1) took around three hours. For sets with around
1000 orders it still easily fulfills the requirement of delivering results within 24 hours. Runs
with close to 2000 orders, however, take longer than one week (in our experiments, we used
larger population sizes for them). Here, it should be pointed out that these measurements
were taken on a single dual-core 2.6 GHz machine, which is only a fraction of the capacity
available in the dedicated data centers of the project partners. It is well known that EAs can
be efficiently parallelized and distributed over clusters [1791, 2897]. The final implementation
of the in.west system will incorporate distribution mechanisms and thus be able to deliver
results in time for all situations.
Logistics involve many aspects and the planning of efficient routes is only one of them.
Customers and legislation [2768], for instance, require traceability of the production data
and thus, also of the goods on the road. Experts require more linkage between the traffic
carriers and efficient distribution of traffic to cargo rail and water transportation.
[Figure 49.8: The in.west system: a swap body with sensor node communicates its location via satellite-based middleware with the transportation planner and the graphical user interface (software/GUI).]
Technologies like telematics are regarded as the enablers of smart logistics which optimize
the physical value chain. Only a holistic approach combining intelligent planning and such
technologies can solve the challenges [2248, 2607] faced by logistics companies. Therefore, the
focus of the in.west project was to combine software, telematics, and business optimization
approaches into one flexible and adaptive system sketched in Figure 49.8.
Data from telematics become input to the optimization system which suggests trans-
portation plans to the human operator who selects or modifies these suggestions. The final
plans then, in turn, are used to configure the telematic units. A combination of both allows
the customers to track their deliveries and the operator to react to unforeseen situations.
In such situations, for instance traffic jams or accidents, the optimization component can
again be used to make ad hoc suggestions for resolving them.
The prototype introduced in this chapter was developed in the context of the BMWi-promoted
project in.west and will be evaluated in a field test in the third quarter of 2009. The analyses
take place in the area of freight transportation, in the courier, express, and parcel services
business segment of DHL, the market leader in this business. Today, the transport volume
of this company constitutes a substantial portion of the traffic volume of the traffic carriers
road, ship, and rail. Hence, a significant decrease in the freight traffic caused by DHL might
lead to a noticeable reduction in the freight traffic volume on German roads.
The goal of this project is to achieve this decrease by utilizing information and com-
munication technologies on swap bodies, new approaches of planning, and novel control
processes. The main objective is to design a decision support tool to assist the operator
with suggestions for traffic reduction and a smart swap body telematic unit.
The requirements for the in.west software are manifold, since the needs of both the
operators and the customers are to be satisfied. For both groups, for example, a
constant documentation of the transportation processes is necessary. The operators require
that this documentation starts with the selection, movement, and employment of the swap
bodies. From the view of the customer, only tracking and tracing of the containers on the
road must be supported. With the web-based user interfaces we provide, a spontaneous
check of the load condition, the status of the container, and information about the carrier
is furthermore possible.
All the information required by customers and operators on the status of the freight has
to be obtained by the software system first. Therefore, in.west also features a hardware
development project with the goal of designing a telematic unit. A device called YellowBox
(illustrated in Figure 49.9) was developed which enables swap bodies to transmit real time
geo positioning data of the containers to a software system. The basic functions of the device
are location, communication, and identification. The data received from the YellowBox is
processed by the software for planning and controlling the swap body in the logistic network.
The YellowBox consists of a main board, a location unit (GPS), a communication unit
(GSM/GPRS), and a processor unit. Digital input and output interfaces ensure the
scalability of the device, e.g., for load volume monitoring. Swap bodies do not offer reliable
power sources for technical equipment like telematic systems. Thus, the YellowBox has
been designed as a sensor node which uses battery current [2189, 2912].
One of the most crucial criteria for the application of these sensor nodes in practice
was long battery duration and thus, low power consumption. The YellowBox is therefore
turned on and off for certain time intervals. The software system automatically configures
the device with the route along which it will be transported (and which has been evolved
by the planner). In pre-defined time intervals, it can thus check whether or not it is “on the
right track”. Only if a location deviation above a certain threshold is detected will it notify
the middleware. Also, if more than one swap body is to be transported by the same vehicle,
only one of them needs to perform this notification. With these approaches, communication
– one of the functions with the highest energy consumption – is effectively minimized.
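The duty cycle described above can be sketched as follows. This is an illustrative sketch only: the class and method names, the planar distance measure, and the threshold value are our own assumptions, not the actual in.west implementation.

```java
// Hypothetical sketch of the YellowBox check: wake up, compare the current
// GPS fix against the configured route, and notify the middleware only if
// the deviation exceeds a threshold. All names are illustrative.
public class DeviationCheck {
  /** Euclidean distance between two (x, y) positions, here taken as km. */
  static double distance(double[] a, double[] b) {
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return Math.sqrt(dx * dx + dy * dy);
  }

  /** Smallest distance from the current fix to any waypoint of the plan. */
  static double deviation(double[] fix, double[][] plannedRoute) {
    double best = Double.POSITIVE_INFINITY;
    for (double[] waypoint : plannedRoute)
      best = Math.min(best, distance(fix, waypoint));
    return best;
  }

  /** true iff the middleware must be notified (deviation above threshold). */
  static boolean mustNotify(double[] fix, double[][] route, double thresholdKm) {
    return deviation(fix, route) > thresholdKm;
  }

  public static void main(String[] args) {
    double[][] route = { {0, 0}, {10, 0}, {20, 5} };
    System.out.println(mustNotify(new double[] {10, 1}, route, 5.0));  // close to a waypoint
    System.out.println(mustNotify(new double[] {10, 40}, route, 5.0)); // far off the route
  }
}
```

In the sketch, suppressing the notification for all but one swap body on the same vehicle would simply be a further condition before calling the middleware.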
49.3.7 Conclusions
In this section, we presented the in.west freight planning component which utilizes an Evolu-
tionary Algorithm with intelligent reproduction operations for general transportation plan-
ning problems. The approach was tested rigorously on real-world data from the in.west
partners and achieved excellent results. It has been integrated as a constituting part of the
holistic in.west logistics software system.
We presented this work at the EvoTRANSLOG’09 workshop [1052] in Tübingen [2914].
One point of the very fruitful discussion there was the question why we did not utilize
heuristics to create some initial solutions for the EA. We intentionally left this for our
future work for two reasons: First, we fear that creating such initial solutions may lead
to a decrease of diversity in the population. In Section 49.3.5.1 we showed that diversity
is a key to finding good solutions to this class of problems. Second, as can be seen in the
diagrams provided in Figure 49.7, finding initial solutions where all orders are assigned to
routes is not the time consuming part of the EA – optimizing them to plans with a low
total distance is. Hence, incorporating measures for distribution and efficient parallelization
may be a more promising addition to our approach. If a cluster of, for instance, twenty
computers is available, we can assume that distribution according to client-server or island
model schemes [1108, 1838, 2897] will allow us to decrease the runtime to roughly one tenth
of the current value. Nevertheless, testing the utility of heuristics for creating the initial
population is both interesting and promising.
The runtime issues for very large datasets of our system could be resolved by parallelization
and distribution. Another interesting feature for future work would be to allow
for online updates of the distance matrix, which is used both to compute f2 and to
determine the time a truck needs to travel from one location to another. The system would
then be capable of (a) performing planning for the whole orders of one day in advance,
and (b) updating smaller portions of the plans online if traffic jams occur. It should be noted
that even with such a system, the human operator cannot be replaced. There are always
constraints and pieces of information which cannot be employed in even the most advanced
automated optimization process. Hence, the solutions generated by our transportation plan-
ner are suggestions rather than doctrines. They are displayed to the operator and she may
modify them according to her needs.
Chapter 50
Benchmarks
50.1 Introduction
In this chapter, we discuss some benchmark and toy problems which are used to demonstrate
the utility of Global Optimization algorithms. Such problems usually have no direct real-
world application but are well understood, widely researched, and can be used
1. to measure the speed and the ability of optimization algorithms to find good solutions,
as done for example by Borgulya [360], Liu et al. [1747], Luke and Panait [1787], and
Yang et al. [3020],
2. as benchmarks for comparing different optimization approaches, as done by Purshouse
and Fleming [2229], Zitzler et al. [3098], and Yang et al. [3020], for instance,
3. to derive theoretical results, as done for example by Jansen and Wegener [1433],
4. as basis to verify theories, as used for instance by Burke et al. [455] and Langdon and
Poli [1672],
5. as playground to test new ideas, research, and developments,
6. as easy-to-understand examples to discuss features and problems of optimization (as
done here in Example E3.5 on page 57, for instance), and
7. for demonstration purposes, since they normally are interesting, funny, and can be
visualized in a nice manner.
The ideas of fitness landscapes1 and epistasis2 came originally from evolutionary biology
and later were adopted by Evolutionary Computation theorists. It is thus not surprising
that biologists also contributed much to the research of both. In the late 1980s, Kauffman
1 Fitness landscapes have been introduced in Section 5.1 on page 93.
2 Epistasis is discussed in Chapter 17 on page 177.
[1499] defined the NK fitness landscape [1499, 1501, 1502], a family of objective functions
with tunable epistasis, in an effort to investigate the links between epistasis and ruggedness.
The problem space and also the search space of this problem are bit strings of the length
N, i. e., G = X = B^N. Only one single objective function is used, referred to as the fitness
function F_{N,K} : B^N \to R^+. Each gene x_i contributes one value f_i : B^{K+1} \to [0, 1] \subset R^+
to the fitness function, which is defined as the average of all of these N contributions. The
fitness f_i of a gene x_i is determined by its allele and the alleles at K other loci x_{i_1}, x_{i_2}, .., x_{i_K}
with i_{1..K} \in [0, N - 1] \setminus \{i\} \subset N_0, called its neighbors.
F_{N,K}(x) = \frac{1}{N} \sum_{i=0}^{N-1} f_i(x_i, x_{i_1}, x_{i_2}, \dots, x_{i_K})    (50.1)
Whenever the value of a gene changes, all the fitness values of the genes to whose neighbor set
it belongs will change too – to values uncorrelated to their previous state. While N describes
the basic problem complexity, the intensity of this epistatic effect can be controlled with the
parameter K ∈ 0..N : If K = 0, there is no epistasis at all, but for K = N − 1 the epistasis
is maximized and the fitness contribution of each gene depends on all other genes. Two
different models are defined for choosing the K neighbors: adjacent neighbors, where the K
nearest other genes influence the fitness of a gene, or random neighbors, where K other genes
are randomly chosen.
The single functions f_i can be implemented by a table of length 2^{K+1} which is indexed
by the (binary encoded) number represented by the gene x_i and its neighbors. These tables
contain a fitness value for each possible value of a gene and its neighbors. They can be filled
by sampling a uniform distribution in [0, 1) (or any other random distribution).
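A minimal sketch of such a table-based NK landscape with adjacent (wrapping) neighbors, following Equation 50.1. The class name and the use of java.util.Random are our own choices, not code from the original studies.

```java
import java.util.Random;

// A minimal NK landscape sketch with adjacent neighbors: each gene i indexes
// a random table of 2^(K+1) entries together with its K neighbors, and the
// fitness F is the average of the N contributions (Equation 50.1).
public class NKLandscape {
  final int n, k;
  final double[][] tables; // tables[i] holds 2^(K+1) uniform values in [0, 1)

  NKLandscape(int n, int k, long seed) {
    this.n = n;
    this.k = k;
    Random rnd = new Random(seed);
    tables = new double[n][1 << (k + 1)];
    for (int i = 0; i < n; i++)
      for (int j = 0; j < tables[i].length; j++)
        tables[i][j] = rnd.nextDouble();
  }

  /** F_{N,K}(x): the mean of the N table lookups. */
  double fitness(boolean[] x) {
    double sum = 0;
    for (int i = 0; i < n; i++) {
      int index = x[i] ? 1 : 0;
      // append the alleles of the K adjacent neighbors (wrapping around)
      for (int j = 1; j <= k; j++)
        index = (index << 1) | (x[(i + j) % n] ? 1 : 0);
      sum += tables[i][index];
    }
    return sum / n;
  }
}
```

For K = 0 each lookup depends on one gene only, so the genes can be optimized separately, exactly as stated in Section 50.2.1.1 below.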
We may also consider the fi to be single objective functions that are combined to a
fitness value FN,K by a weighted sum approach, as discussed in Section 3.3.3. Then, the
nature of NK problems will probably lead to another well known aspect of multi-objective
optimization: conflicting criteria. An improvement in one objective may very well lead to
degeneration in another one.
The properties of the NK landscapes have intensely been studied in the past and the
most significant results from Kauffman [1500], Weinberger [2882], and Fontana et al. [959]
will be discussed here. We therefore borrow from the summaries provided by Altenberg [79]
and Platel et al. [2186]. Further information can be found in [572, 573, 1016, 2515, 2982]. An
analysis of the behavior of Estimation of Distribution Algorithms and Genetic Algorithms
in NK landscapes has been provided by Pelikan [2146].
50.2.1.1 K=0
For K = 0, the fitness function is not epistatic. Hence, all genes can be optimized separately
and we have the classical additive multi-locus model.
1. There is a single optimum x⋆ which is globally attractive, i. e., which can and will be
found by any (reasonable) optimization process regardless of the initial configuration.
3. An adaptive walk3 from any point in the search space will proceed by reducing the
Hamming distance to the global optimum by 1 in each step (if each mutation only
affects one single gene). The number of better neighbors equals the Hamming distance
to the global optimum. Hence, the expected number of steps of such a walk is N/2.
50.2.1.2 K = N − 1
For K = N − 1, the fitness function equals a random assignment of fitness to each point of
the search space.
1. The probability that a genotype is a local optimum is \frac{1}{N+1}.
2. The expected total number of local optima is thus \frac{2^N}{N+1}.
6. With increasing N, the expected fitness of local optima reached by an adaptive walk from
a random initial configuration decreases towards the mean fitness \overline{F}_{N,K} = \frac{1}{2} of the
search space. This is called the complexity catastrophe [1500].
50.2.1.3 Intermediate K
1. For small K, the best local optima share many common alleles. As K increases, this
correlation diminishes. This degeneration proceeds faster for the random neighbors
method than for the nearest neighbors approach.
2. For larger K, the fitness of the local optima approaches a normal distribution with mean
m and variance s, approximately

m = \mu + \sigma \sqrt{\frac{2 \ln(K + 1)}{K + 1}}    (50.2)

s = \frac{(K + 1)\,\sigma^2}{N \left( K + 1 + 2(K + 2) \ln(K + 1) \right)}    (50.3)
3. The mean distance between local optima, roughly twice the length of an adaptive walk,
is approximately \frac{N \log_2(K + 1)}{2(K + 1)}.
4. The autocorrelation function4 \rho(k, F_{N,K}) and the correlation length \tau are:

\rho(k, F_{N,K}) = \left(1 - \frac{K + 1}{N}\right)^k    (50.4)

\tau = \frac{-1}{\ln\left(1 - \frac{K + 1}{N}\right)}    (50.5)
Altenberg [79] nicely summarizes the four most important theorems about the computational
complexity of optimization of NK fitness landscapes. These theorems have been proven using
different algorithms introduced by Weinberger [2883] and Thompson and Wright [2703].
1. The NK optimization problem with adjacent neighbors is solvable in O(2^K N) steps
and thus is in P [2883].
As we have discussed in Section 16.1, natural genomes exhibit a certain degree of neutral-
ity. Therefore, researchers have proposed extensions for the NK landscape which introduce
neutrality, too [1031, 1032]. Two of them, the NKp and NKq landscapes, achieve this by
altering the contributions f_i of the single genes to the total fitness. In the following, assume
that there are N tables, each with 2^{K+1} entries representing these contributions.
The NKp landscape devised by Barnett [219] achieves neutrality by setting a certain
number of entries in each table to zero. Hence, the corresponding allele combinations do
not contribute to the overall fitness of an individual. If a mutation leads to a transition
from one such zero configuration to another one, it is effectively neutral. The parameter p
denotes the probability that a certain allele combination does not contribute to the fitness.
As proven by Reidys and Stadler [2287], the ruggedness of the NKp landscape does not vary
for different values of p. Barnett [218] proved that the degree of neutrality in this landscape
depends on p.
Newman and Engelhardt [2028] follow a similar approach with their NKq model. Here,
the fitness contributions f_i are integers drawn from the range [0, q) and the total fitness of a
candidate solution is normalized by multiplying it with 1/(q−1). A mutation is neutral when
the new allelic combination resulting from it leads to the same contribution as the old
one. In NKq landscapes, the neutrality decreases with rising values of q. In [1032], you can
find a thorough discussion of the NK, the NKp, and the NKq fitness landscape.
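The two table-generation schemes can be sketched side by side. The class and method names are illustrative, and the sampling details are assumptions consistent with the descriptions above (q must be at least 2 for the NKq normalization).

```java
import java.util.Random;

// Sketch of the two neutral NK variants: NKp zeroes each table entry with
// probability p, NKq draws integer contributions from [0, q) and rescales
// them by 1/(q-1). Both make distinct allele combinations map to identical
// contributions, which is what creates neutral mutations.
public class NeutralTables {
  /** NKp: each of the 2^(K+1) entries is zero with probability p. */
  static double[] nkpTable(int k, double p, Random rnd) {
    double[] t = new double[1 << (k + 1)];
    for (int i = 0; i < t.length; i++)
      t[i] = (rnd.nextDouble() < p) ? 0.0 : rnd.nextDouble();
    return t;
  }

  /** NKq: integer contributions in [0, q), normalized to {0, 1/(q-1), ..., 1}. */
  static double[] nkqTable(int k, int q, Random rnd) {
    double[] t = new double[1 << (k + 1)];
    for (int i = 0; i < t.length; i++)
      t[i] = rnd.nextInt(q) / (double) (q - 1);
    return t;
  }
}
```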
With their technological landscapes, Lobo et al. [1754] follow the same approach from
the other side: they discretize the continuous total fitness function F_{N,K}. The parameter M
of their technological landscapes corresponds to a number of bins [0, 1/M), [1/M, 2/M), . . . ,
into which the fitness values are sorted.
Motivated by the wish to research the models for the origin of biological information
by Anderson [94, 2317] and Tsallis and Ferreira [2725], Amitrano et al. [86] developed
the p-spin model. This model is an alternative to the NK fitness landscape for tunable
ruggedness [2884]. Unlike the previous models, it includes a complete definition for all
genetic operations which will be discussed in this section.
The p-spin model works with a fixed population size ps of individuals of an also fixed
length N. There is no distinction between genotypes and phenotypes; in other words, G = X.
Each gene of an individual x is a binary variable which can take on the values −1 and 1.
G = \{-1, 1\}^N, \quad x_i[j] \in \{-1, 1\} \; \forall i \in [1..ps], \; j \in [0..N - 1]    (50.6)
On the 2N possible genotypic configurations, a space with the topology of an N -dimensional
hypercube is defined where neighboring individuals differ in exactly one element. On this
genome, the Hamming distance “dist ham” can be defined as
\operatorname{dist\_ham}(x_1, x_2) = \frac{1}{2} \sum_{i=0}^{N-1} \left(1 - x_1[i]\, x_2[i]\right)    (50.7)
Two configurations are said to be ν mutations away from each other if they have the Ham-
ming distance ν. Mutation in this model is applied to a fraction of the N ∗ ps genes in the
population. These genes are chosen randomly and their state is changed, i. e., x[i] → −x[i].
The objective function fK (which is called fitness function in this context) is subject to
maximization, i. e., individuals with a higher value of fK are less likely to be removed from
the population. For every subset z of exactly K genes of a genotype, one contribution A(z)
is added to the fitness.

f_K(x) = \sum_{z \in \mathcal{P}([0..N-1]) \wedge |z| = K} A(x[z[0]], x[z[1]], \dots, x[z[K - 1]])    (50.8)
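The genome operations of the p-spin model can be sketched directly from Equation 50.7 and the mutation rule x[i] → −x[i]; the class and method names are illustrative.

```java
// Sketch of the p-spin genome operations on {-1, +1} strings: the Hamming
// distance of Equation 50.7 and the mutation that flips a single spin.
public class PSpin {
  /** dist_ham(x1, x2) = (1/2) * sum_i (1 - x1[i] * x2[i]); each differing
   *  locus contributes (1 - (-1)) = 2 to the sum, each equal locus 0. */
  static int distHam(int[] x1, int[] x2) {
    int sum = 0;
    for (int i = 0; i < x1.length; i++)
      sum += 1 - x1[i] * x2[i];
    return sum / 2;
  }

  /** Mutation: flip the spin at locus i, i.e., x[i] -> -x[i]. */
  static void mutate(int[] x, int i) {
    x[i] = -x[i];
  }
}
```

A single mutation moves a configuration to one of its N neighbors on the hypercube, i.e., changes the Hamming distance to any fixed configuration by exactly one.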
Equation 50.11 shows a similar “Trap” function defined by Ackley [19], where u(x) is the
number of ones in the bit string x of length n and z = \lfloor 3n/4 \rfloor [1467]. The objective function
f(x), which is subject to maximization, is sketched in Figure 50.1.

f(x) = \begin{cases} (8n/z)\,(z - u(x)) & \text{if } u(x) \le z \\ (10n/(n - z))\,(u(x) - z) & \text{otherwise} \end{cases}    (50.11)
[Figure 50.1: The Trap function over u(x), with a local optimum that has a large basin of attraction.]
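Equation 50.11 can be sketched directly. For n = 4 and thus z = 3, the global optimum u(x) = n scores 10n = 40 while the deceptive local optimum u(x) = 0 scores only 8n = 32; the class name is our own.

```java
// A direct sketch of Ackley's Trap function (Equation 50.11): the slope on
// the region u(x) <= z leads away from the global optimum at u(x) = n,
// which is what makes the function deceptive for hill climbers.
public class Trap {
  /** u(x): the number of ones in the bit string x. */
  static int u(boolean[] x) {
    int ones = 0;
    for (boolean b : x) if (b) ones++;
    return ones;
  }

  /** f(x), subject to maximization; z = floor(3n/4). */
  static double f(boolean[] x) {
    int n = x.length, z = (3 * n) / 4;
    int ones = u(x);
    return (ones <= z) ? (8.0 * n / z) * (z - ones)
                       : (10.0 * n / (n - z)) * (ones - z);
  }
}
```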
By the way, from this example, we can easily see that a fraction of all mutation and crossover
operations applied to most of the candidate solutions will fall into the don’t care areas. Such
modifications will not yield any fitness change and therefore are neutral.
The Royal Road functions provide certain, predefined stepping stones (i. e., building
blocks) which (theoretically) can be combined by the Genetic Algorithm to successively
create schemas of higher fitness and order. Mitchell et al. [1913] performed several tests
with their Royal Road functions. These tests revealed or confirmed that
2. In the spirit of the Building Block Hypothesis, one would expect that the intermediate
steps (for instance order 32 and 16) of the Royal Road functions would help the Genetic
Algorithm to reach the optimum. The experiments of Mitchell et al. [1913] showed the
exact opposite: leaving them out speeds up the evolution significantly. The reason
is that the fitness difference between the intermediate steps and the low-order schemas is
high enough that the first instance of them will lead the GA to converge to it and wipe
out the low-order schemas. The other parts of this intermediate solution play no role
and may allow many zeros to hitchhike along.
Especially this last point gives us another insight on how we should construct genomes: the
fitness of combinations of good low-order schemas should not be too high, so that other good
low-order schemas are not driven to extinction when they emerge. Otherwise, the phenomenon of domino
convergence researched by Rudnick [2347] and outlined in Section 13.2.3 and Section 50.2.5.2
may occur.
The original Royal Road problems can be defined for binary string genomes of any given
length n, as long as n is fixed. A Royal Road benchmark for variable-length genomes has
been defined by Platel et al. [2185].
The problem space XΣ of the VLR (variable-length representation) Royal Road problem
is based on an alphabet Σ with N = |Σ| letters. The fitness of an individual x ∈ XΣ is
determined by whether or not consecutive building blocks of the length b of the letters l ∈ Σ
are present. This presence can be defined as
B_b(x, l) = \begin{cases} 1 & \text{if } \exists i : (0 \le i < (\operatorname{len}(x) - b)) \wedge (x[i + j] = l \; \forall j : 0 \le j < (b - 1)) \\ 0 & \text{otherwise} \end{cases}    (50.14)
x⋆ = AAAGTGGGTAATTTTCCCTCCC (50.17)
The relevant building blocks of x⋆ are written in bold face. As can easily be seen, their
location plays no role; only their presence is important. Furthermore, multiple occurrences of
building blocks (like the second CCC) do not contribute to the fitness. The fitness landscape
has been designed in a way ensuring that fitness degeneration by crossover can only occur
if the crossover points are located inside building blocks and not by block translocation or
concatenation. In other words, there is no inter-block epistasis.
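The presence predicate B_b(x, l) of Equation 50.14 can be sketched as a scan for a run of b consecutive copies of the letter l; the class name is our own, and the simple run-length formulation here is an assumption (the exact index bounds follow the equation above).

```java
// Sketch of the VLR Royal Road presence predicate: B_b(x, l) is 1 iff the
// string x contains a run of b consecutive copies of the letter l; neither
// the location nor the number of such runs matters.
public class VlrRoyalRoad {
  static int presence(String x, char l, int b) {
    int run = 0; // length of the current run of the letter l
    for (int i = 0; i < x.length(); i++) {
      run = (x.charAt(i) == l) ? run + 1 : 0;
      if (run >= b) return 1;
    }
    return 0;
  }
}
```

Applied to the example x⋆ above with b = 3, the predicate is 1 for each letter of the alphabet, and the second, non-consecutive occurrence of CCC changes nothing.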
Platel et al. [2186] combined their previous work on the VLR Royal Road with Kauffman’s
NK landscapes and introduced the Epistatic Road. The original NK landscape works on
binary representation of the fixed length N . To each locus i in the representation, one
fitness function fi is assigned denoting its contribution to the overall fitness. fi however is
not exclusively computed using the allele at the ith locus but also depends on the alleles of
K other loci, its neighbors.
The VLR Royal Road uses a genome based on the alphabet Σ with N = len(Σ) letters.
It defines the function Bb (x, l) which returns 1 if a building block of length b containing only
the character l is present in x and 0 otherwise. Because of the fixed size of the alphabet
Σ, there exist exactly N such functions. Hence, the variable-length representation can be
translated to a fixed-length, binary one by simply concatenating them.
Now we can define a NK landscape for the Epistatic Road by substituting the Bb (x, l)
into Equation 50.1 on page 554:
F_{N,K,b}(x) = \frac{1}{N} \sum_{i=0}^{N-1} f_i(B_b(x, \Sigma[i]), B_b(x, \Sigma[i_1]), \dots, B_b(x, \Sigma[i_K]))    (50.19)
The only thing left is to ensure that the end of the road, i. e., the presence of all N building
blocks, also is the optimum of FN,K,b . This is done by exhaustively searching the space BN
and defining the fi in a way that Bb (x, l) = 1 ∀ l ∈ Σ ⇒ FN,K,b (x) = 1.
50.2.4.3 Royal Trees
An analogue of the Royal Road for Genetic Programming has been specified by Punch et al.
[2227]. This Royal Tree problem specifies a series of functions A, B, C, . . . with increasing
arity, i. e., A has one argument, B has two arguments, C has three, and so on. Additionally,
a set of terminal nodes x, y, z is defined.
[Figure 50.2: The perfect trees of the first levels. Fig. 50.2.a: the perfect A-level tree; Fig. 50.2.b: the perfect B-level tree; Fig. 50.2.c: the perfect C-level tree.]
For the first three levels, the perfect trees are shown in Figure 50.2. An optimal A-level
tree consists of an A node with an x leaf attached to it. The perfect level-B tree has a B as
root with two perfect level-A trees as children. A node labeled with C having three children
which all are optimal B-level trees is the optimum at C-level, and so on.
The objective function, subject to maximization, is computed recursively. The raw fitness
of a node is the weighted sum of the fitness of its children. If the child is a perfect tree at the
appropriate level, a perfect C tree beneath a D-node, for instance, its fitness is multiplied
with the constant FullBonus, which normally has the value 2. If the child is not a perfect
tree, but has the correct root, the weight is PartialBonus (usually 1). If it is otherwise
incorrect, its fitness is multiplied with Penalty, which is 1/3 per default. If the whole tree is
a perfect tree, its raw fitness is finally multiplied with CompleteBonus, which normally is
also 2. The value of an x leaf is 1.
From Punch et al. [2227], we can furthermore borrow three examples for this fitness
assignment and outline them in Figure 50.3. A tree which represents a perfect A level has
the score of CompleteBonus ∗ FullBonus ∗ 1 = 2 ∗ 2 ∗ 1 = 4. A complete and perfect tree at
level B receives CompleteBonus(FullBonus ∗ 4 + FullBonus ∗ 4) = 2 ∗ (2 ∗ 4 + 2 ∗ 4) = 32. At
level C, this makes CompleteBonus(FullBonus ∗ 32 + FullBonus ∗ 32 + FullBonus ∗ 32) =
2(2 ∗ 32 + 2 ∗ 32 + 2 ∗ 32) = 384.
[Figure 50.3: Three example trees and their fitness values. Fig. 50.3.a: a perfect C-level tree, 2(2 ∗ 32 + 2 ∗ 32 + 2 ∗ 32) = 384; Fig. 50.3.b: two perfect B subtrees plus an x leaf, 2 ∗ 32 + 2 ∗ 32 + (1/3) ∗ 1 = 128 1/3; Fig. 50.3.c: one perfect B subtree plus two x leaves, 2 ∗ 32 + (1/3) ∗ 1 + (1/3) ∗ 1 = 64 2/3.]
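The recursive scoring can be sketched as follows. The tree representation is our own, but the constants and the weighting rule follow the description above; the sketch reproduces the example values 4, 32, and 384.

```java
// Sketch of the Royal Tree scoring: labels 'A','B','C',... have arity
// 1,2,3,..., leaves are 'x' with value 1. A perfect child at the appropriate
// level is weighted with FullBonus, a child with merely the correct root
// with PartialBonus, anything else with Penalty; a perfect tree's sum is
// finally multiplied with CompleteBonus.
public class RoyalTree {
  static final double FULL = 2, PARTIAL = 1, PENALTY = 1.0 / 3, COMPLETE = 2;

  static class Node {
    final char label;
    final Node[] children;
    Node(char label, Node... children) { this.label = label; this.children = children; }
  }

  /** true iff t is the perfect tree for the given level ('x' means a leaf). */
  static boolean perfect(Node t, char level) {
    if (level == 'x') return t.label == 'x' && t.children.length == 0;
    if (t.label != level || t.children.length != level - 'A' + 1) return false;
    char childLevel = (level == 'A') ? 'x' : (char) (level - 1);
    for (Node c : t.children)
      if (!perfect(c, childLevel)) return false;
    return true;
  }

  static double fitness(Node t) {
    if (t.label == 'x') return 1;
    char childLevel = (t.label == 'A') ? 'x' : (char) (t.label - 1);
    double sum = 0;
    for (Node c : t.children) {
      double weight = perfect(c, childLevel) ? FULL
                    : (c.label == childLevel ? PARTIAL : PENALTY);
      sum += weight * fitness(c);
    }
    return perfect(t, t.label) ? COMPLETE * sum : sum;
  }
}
```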
Storch and Wegener [2616, 2617, 2618] used their Real Royal Road for showing that there
exist problems where crossover helps improving the performance of Evolutionary Algorithms.
Naudts et al. [2009] have contributed generalized Royal Road functions in order
to study epistasis.
The OneMax and BinInt problems are two very simple models for measuring the convergence
of Genetic Algorithms.
The task in the OneMax (or BitCount) problem is to find a binary string of length n
consisting of all ones. The search and problem space are both the fixed-length bit strings
G = X = Bn . Each gene (bit) has two alleles 0 and 1 which also contribute exactly this
value to the total fitness, i. e.,
f(x) = \sum_{i=0}^{n-1} x[i], \quad \forall x \in X    (50.20)
For the OneMax problem, an extensive body of research has been provided by Ackley
[19], Bäck [165], Blickle and Thiele [338], Miller and Goldberg [1898], Mühlenbein and
Schlierkamp-Voosen [1975], Thierens and Goldberg [2697], and Wilson and Kaur [2947].
The BinInt problem devised by Rudnick [2347] also uses the bit strings of the length n as
search and problem space (G = X = Bn ). It is something like a perverted version of the
OneMax problem, with the objective function defined as
f(x) = \sum_{i=0}^{n-1} 2^{n-i-1}\, x[i], \quad x[i] \in \{0, 1\} \; \forall i \in [0..n - 1]    (50.21)
Since the bit at index i has a higher contribution to the fitness than all bits at higher
indices together, the comparison between two candidate solutions x1 and x2 is won by the
lexicographically bigger one. Thierens et al. [2698] give the example x1 = (1, 1, 1, 1, 0, 0, 0, 0)
and x2 = (1, 1, 0, 0, 1, 1, 0, 0), where the first deviating bit (underlined, at index 2) fully
determines the outcome of the comparison of the two.
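Both objective functions can be sketched in a few lines; with the example strings above, BinInt yields 240 for x1 and 204 for x2, while OneMax assigns 4 to both. The class name is illustrative.

```java
// Sketch of the two model objective functions: OneMax (Equation 50.20)
// simply counts the ones; BinInt (Equation 50.21) weights bit i with
// 2^(n-i-1), i.e., it interprets the string as an unsigned binary number,
// so a comparison is decided by the first (most salient) deviating bit.
public class OneMaxBinInt {
  static int oneMax(int[] x) {
    int sum = 0;
    for (int bit : x) sum += bit;
    return sum;
  }

  static long binInt(int[] x) {
    long sum = 0;
    for (int bit : x) sum = (sum << 1) | bit; // applies the 2^(n-i-1) weights
    return sum;
  }
}
```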
We can expect that the bits with high contribution (high salience) will converge quickly
whereas the other genes with lower salience are only pressured by selection when all oth-
ers have already been fully converged. Rudnick [2347] called this sequential convergence
phenomenon domino convergence due to its resemblance with a row of falling domino
stones [2698] (see Section 13.2.3). Generally, he showed that first, the highly salient genes
converge (i. e., take on the correct values in the majority of the population). Then, step
by step, the building blocks of lower significance can converge, too. Another result of Rud-
nick’s work is that mutation may stall the convergence because it can disturb the highly
significant genes, which then counters the effect of the selection pressure on the less salient
ones. Then, it becomes much less likely that the majority of the population will have the
best alleles in these genes. This somehow dovetails with the idea of error thresholds from
theoretical biology [867, 2057] which we have mentioned in Section 14.3. It also explains
some of the experimental results obtained with the Royal Road problem from Section 50.2.4.
The BinInt problem was used in the studies of Sastry and Goldberg [2401, 2402].
One of the maybe most important conclusions from the behavior of GAs applied to the
BinInt problem is that applying a Genetic Algorithm to solve a numerical problem (X ⊆ R^n)
while encoding the candidate solutions binary (G ⊆ B^n) in a straightforward manner will
likely produce suboptimal solutions. Schraudolph and Belew [2429], for instance, recognized
this problem and suggested a Dynamic Parameter Encoding (DPE) where the values of the
genes of the bit string are readjusted over time: Initially, optimization takes place on a rather
coarse-grained scale; after the optimum on this scale is approximated, the focus is shifted to a
finer interval and the genotypes are re-encoded to fit into this interval. In their experiments,
this method works better than the direct encoding.
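The zoom idea behind such an encoding can be sketched as follows; the decoding and the interval-halving policy shown here are illustrative assumptions, not Schraudolph and Belew's exact scheme.

```java
// A rough sketch of the zoom idea: the same n-bit genotype is always decoded
// into the current interval [lo, hi); once the population converges onto a
// sub-interval, the interval is narrowed, so the same number of bits then
// yields a finer resolution. The halving policy is illustrative only.
public class DpeSketch {
  /** Decode an n-bit genotype into the interval [lo, hi). */
  static double decode(int[] bits, double lo, double hi) {
    long v = 0;
    for (int b : bits) v = (v << 1) | b;
    double max = (double) (1L << bits.length); // 2^n encoding levels
    return lo + (hi - lo) * v / max;
  }

  /** Zoom into the half of [lo, hi) that contains the current best value. */
  static double[] zoom(double best, double lo, double hi) {
    double mid = (lo + hi) / 2;
    return best < mid ? new double[] {lo, mid} : new double[] {mid, hi};
  }
}
```

After a zoom step, each genotype would be re-encoded so that it decodes to (approximately) the same phenotypic value inside the narrowed interval.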
The long path problems have been designed by Horn et al. [1267] in order to construct a
unimodal, non-deceptive problem without noise which Hill Climbing algorithms still can
only solve in exponential time. The idea is to wind a path with increasing fitness through
the search space so that any two adjacent points on the path are no further away than one
search step and any two points not adjacent on the path are at least two search steps apart.
All points which are not on the path should guide the search to the path’s origin.
The problem space and search space in their concrete realization is the space of the
binary strings G = X = Bl of the fixed, odd length l. The objective function flp (x) is
subject to maximization. It is furthermore assumed that the search operations in Hill
Climbing algorithms alter at most one bit per search step, from which we can follow that
two adjacent points on the path have a Hamming distance of one and two non-adjacent
points differ in at least two bits.
The simplest instance of the long path problems that Horn et al. [1267] define is the
Root2path Pl . Paths of this type are constructed by iteratively increasing the search space
dimension. Starting with P1 = (0, 1), the path Pl+2 is constructed from two copies Pla = Plb
of the path Pl as follows. First, we prepend 00 to all elements of the path Pla and 11 to
all elements of the path Plb . For l = 1 this makes P1a = (000, 001) and P1b = (110, 111).
Obviously, two adjacent elements on Pla or on Plb still have a Hamming distance of one
whereas each element from Pla differs in at least two bits from each element on Plb . Then, a bridge
element Bl is created that equals the last element of Pla , but has 01 as the first two bits,
i. e., B1 = 011. Now the sequence of the elements in Plb is reversed and Pla , Bl , and the
reversed Plb are concatenated. Hence, P3 = (000, 001, 011, 111, 110). Due to this recursive
structure of the path construction, the path length increases exponentially with l (for odd
l): |Pl| = 3 · 2^{(l−1)/2} − 1.
The basic fitness of a candidate solution is 0 if it is not on the path and its (zero-based)
position on the path plus one if it is part of the path. The total number of points in the
space Bl is 2^l and thus, the fraction occupied by the path is approximately 3 ∗ 2^{−(l+1)/2}, i. e.,
decreases exponentially. In order to avoid that the long path problem becomes a needle-in-a-
haystack problem7, Horn et al. [1267] assign a fitness that leads the search algorithm to the
path’s origin to all off-path points xo . Since the first point of the path is always the string
00...0 containing only zeros, subtracting the number of ones from l, i. e., flp (xo ) = l minus
the number of ones in xo , is the method of choice. To all points on the path, l is added to
the basic fitness, making them superior to all other candidate solutions.
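The recursive construction of the Root2path can be sketched as follows; it reproduces the paths P3, P5, and P7 listed below. The class name is our own.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the recursive Root2path construction: P_{l+2} is built from two
// copies of P_l, one prefixed with 00 and one with 11, joined by the bridge
// element (01 followed by the last element of P_l); the 11-copy is reversed.
public class Root2Path {
  static List<String> path(int l) {
    if (l == 1) return new ArrayList<>(List.of("0", "1"));
    List<String> p = path(l - 2);
    List<String> result = new ArrayList<>();
    for (String s : p) result.add("00" + s);         // P_l^a
    result.add("01" + p.get(p.size() - 1));          // bridge element B_l
    List<String> b = new ArrayList<>();
    for (String s : p) b.add("11" + s);              // P_l^b ...
    Collections.reverse(b);                          // ... in reversed order
    result.addAll(b);
    return result;
  }
}
```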
[Figure: The path P3 in the three-dimensional search space B^3, with objective values flp(000) = 3, flp(001) = 4, flp(010) = 2, flp(011) = 5, flp(100) = 2, flp(101) = 1.]
P1 = (0 , 1 )
P3 = (000, 001, 011, 111, 110)
P5 = (00000, 00001, 00011, 00111, 00110, 01110, 11110, 11111, 11011, 11001, 11000)
P7 = (0000000, 0000001, 0000011, 0000111, 0000110, 0001110, 0011110, 0011111,
0011011, 0011001, 0011000, 0111000, 1111000, 1111001, 1111011, 1111111,
1111110, 1101110, 1100110, 1100111, 1100011, 1100001, 1100000)
P9 = (000000000, 000000001, 000000011, 000000111, 000000110, 000001110, 000011110,
000011111, 000011011, 000011001, 000011000, 000111000, 001111000, 001111001,
001111011, 001111111, 001111110, 001101110, 001100110, 001100111, 001100011,
001100001, 001100000, 011100000, 111100000, 111100001, 111100011, 111100111,
111100110, 111101110, 111111110, 111111111, 111111011, 111111001, 111111000,
110111000, 110011000, 110011001, 110011011, 110011111, 110011110, 110001110,
110000110, 110000111, 110000011, 110000001, 110000000)
P11 = (00000000000, 00000000001, 00000000011, 00000000111, 00000000110,
00000001110, 00000011110, 00000011111, 00000011011, 00000011001,
00000011000, 00000111000, 00001111000, 00001111001, 00001111011,
00001111111, 00001111110, 00001101110, 00001100110, 00001100111,
00001100011, 00001100001, 00001100000, 00011100000, 00111100000,
00111100001, 00111100011, 00111100111, 00111100110, 00111101110,
00111111110, 00111111111, 00111111011, 00111111001, 00111111000,
00110111000, 00110011000, 00110011001, 00110011011, 00110011111,
00110011110, 00110001110, 00110000110, 00110000111, 00110000011,
00110000001, 00110000000, 01110000000, 11110000000, 11110000001,
11110000011, 11110000111, 11110000110, 11110001110, 11110011110,
11110011111, 11110011011, 11110011001, 11110011000, 11110111000,
11111111000, 11111111001, 11111111011, 11111111111, 11111111110,
11111101110, 11111100110, 11111100111, 11111100011, 11111100001,
11111100000, 11011100000, 11001100000, 11001100001, 11001100011,
11001100111, 11001100110, 11001101110, 11001111110, 11001111111,
11001111011, 11001111001, 11001111000, 11000111000, 11000011000,
11000011001, 11000011011, 11000011111, 11000011110, 11000001110,
11000000110, 11000000111, 11000000011, 11000000001, 11000000000)
50.2. BIT-STRING BASED PROBLEM SPACES 565
Table 50.1: Some long Root2paths for l from 1 to 11 with underlined bridge elements.
Some examples for the construction of Root2paths can be found in Table 50.1, and the
path for l = 3 is illustrated in Figure 50.4. In Algorithm 50.1, we outline how the
objective value of a candidate solution x can be computed online. Here, please notice two
things: First, this algorithm deviates from the one introduced by Horn et al. [1267] – we
resolved the tail recursion and made some minor changes. Another algorithm
for determining flp is given by Rudolph [2349]. Second, for small
l, we would not use the algorithm during individual evaluation but rather a lookup
table. Each candidate solution could directly be used as an index into this table, which contains
the objective values. For l = 20, for example, a table with entries of 4 B each would
consume 4 MiB, which is acceptable on today’s computers.
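As a complement to Algorithm 50.1 (not reproduced here), the recursive construction and the fitness flp described above can be sketched in Java. The class and method names are our own invention, and the on-path check rebuilds the path instead of using an efficient lookup table:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the Root2path construction for odd l; names are ours, not Horn et al.'s. */
class Root2Path {

  /** Builds the path P_l (l odd) as a list of bit strings of length l. */
  static List<String> build(int l) {
    if (l == 1) {
      List<String> p1 = new ArrayList<>();
      p1.add("0");
      p1.add("1");
      return p1;
    }
    List<String> shorter = build(l - 2);
    List<String> path = new ArrayList<>();
    for (String s : shorter)                          // P_l^a: copy prefixed with 00
      path.add("00" + s);
    path.add("01" + shorter.get(shorter.size() - 1)); // bridge element B_l
    for (int i = shorter.size() - 1; i >= 0; i--)     // reversed P_l^b: prefixed with 11
      path.add("11" + shorter.get(i));
    return path;
  }

  /** f_lp as described in the text: (position + 1) + l on the path,
      l minus the number of ones for off-path points. */
  static int fLP(String x, int l) {
    int pos = build(l).indexOf(x);
    if (pos >= 0)
      return pos + 1 + l;      // on-path: basic fitness plus l
    int ones = 0;
    for (char c : x.toCharArray())
      if (c == '1') ones++;
    return l - ones;           // off-path: lead the search towards 00..0
  }
}
```

For small l, the values returned by fLP could simply be precomputed into the lookup table mentioned above, indexed by the integer value of the genotype.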
The experiments of Horn et al. [1267] showed that Hill Climbing methods that only
concentrate on sampling the neighborhood of the currently known best solution perform very
poorly on long path problems, whereas Genetic Algorithms which combine different candidate
solutions via crossover easily find the correct solution. Rudolph [2349] shows that a Genetic
Algorithm does so in polynomial expected time. He also extends this idea to long k-paths in [2350]. Droste et al.
[836] and Garnier and Kallel [1028] analyze this path and find that (1+1)-EAs can have
exponential expected runtime on such unimodal functions. Höhn and Reeves [1246] further
elaborate on why long path problems are hard for Genetic Algorithms.
566 50 BENCHMARKS
It should be mentioned that the Root2paths constructed according to the method described
in this section do not have the maximum length possible for long paths. Horn
et al. [1267] also introduce Fibonacci paths, which are longer than the Root2paths. The
problem of finding maximum-length paths in an l-dimensional hypercube is known as the
snake-in-the-box8 problem [682, 1505], which was first described by Kautz [1505] in the late
1950s. It is a very hard problem suffering from combinatorial explosion, and currently, maximum
snake lengths are only known for small values of l.
What is a good model problem? Which model fits best to our purposes? These questions
should be asked whenever we apply a benchmark, whenever we want to use something for
testing the ability of a Global Optimization approach. The mathematical functions introduced
in ??, for instance, are good for testing special mathematical reproduction operations
like those used in Evolution Strategies and for testing the capability of an Evolutionary Algorithm
to estimate the Pareto frontier in multi-objective optimization. Kauffman’s NK fitness
landscape (discussed in Section 50.2.1) was intended to be a tool for exploring the relation
of ruggedness and epistasis in fitness landscapes, but can prove very useful for finding out
how capable a Global Optimization algorithm is at dealing with problems exhibiting these
phenomena. In Section 50.2.4, we outlined the Royal Road functions, which were used to
investigate the ability of Genetic Algorithms to combine different useful formae and to test
the Building Block Hypothesis. The Artificial Ant (??) and the GCD problem from ?? are
tests for the ability of Genetic Programming to learn algorithms.
All these benchmarks and toy problems focus on specific aspects of Global Optimization
and exhibit different degrees of the problematic properties of optimization problems to
which we devoted Part II.
With the exception of the NK fitness landscape, it remains unclear to which degree these
phenomena occur in the test problems. How much intrinsic epistasis does the Artificial Ant
or the GCD problem exhibit? What is the quantity of neutrality inherent in Royal Road for
variable-length representations? Are the mathematical test functions rugged and, if so, to
which degree? All these problems are useful test instances for Global Optimization. They have
not been designed to give us answers to questions like: Which fitness assignment process
can be useful when an optimization problem exhibits weak causality and thus has a rugged
fitness landscape? How does a certain selection algorithm influence the ability of a Genetic
Algorithm to deal with neutrality? Only Kauffman’s NK landscape provides such answers,
but only for epistasis. By fine-tuning its N and K parameters, we can generate problems
with different degrees of epistasis. Applying a Genetic Algorithm to these problems then
allows us to draw conclusions on its expected performance when being fed with highly or weakly
epistatic real-world problems.
In this section, a new model problem is defined that exhibits ruggedness, epistasis, neutrality,
multi-objectivity, overfitting, and oversimplification features in a controllable manner
[2037, 2910]. Each of them is introduced as a distinct filter component
which can separately be activated, deactivated, and fine-tuned. This model provides
a perfect test bed for optimization algorithms and their configuration settings. Based on a
8 http://en.wikipedia.org/wiki/Snake-in-the-box [accessed 2008-08-13]
rough estimation of the structure of the fitness landscape of a given problem, tests can be
run very fast using the model as a benchmark for the settings of an optimization algorithm.
Thus, we could, for instance, determine a priori whether increasing the population size of
an Evolutionary Algorithm over an approximated limit is likely to provide a gain.
With it, we also can evaluate the behavior of an optimization method in the presence of
various problematic aspects, like epistasis or neutrality. This way, strengths and weaknesses
of different evolutionary approaches could be explored in a systematic manner. Additionally,
it is also well suited for theoretical analysis because of its simplicity. The layers of the model,
sketched using an example in Figure 50.5, are specified in the following.
[Figure 50.5: An example for the layers of the model problem: (1) the genotype g ∈ G; (2) introduction of neutrality (µ = 2); (3) introduction of epistasis (η = 4; where insufficient bits remain at the end, η = 2 is used instead); (5) introduction of overfitting (tc = 5, ε = 1, o = 1) with training cases T1 to T5; (6) introduction of ruggedness (γ = 57, γ′ = 34, ∆(rγ=57) = 82).]
The basic optimization task in this model is to find a bit string x⋆ of a predefined length
n = len(x⋆ ) consisting of alternating zeros and ones in the space of all possible bit strings
X = B∗ , as sketched in box 1 in Figure 50.5. The tuning parameter for the problem size is
n ∈ N1 .
Although the model is based on finding optimal bit strings, it is not intended to be a
benchmark for Genetic Algorithms. Instead, the purpose of the model is to provide problems
with specific fitness landscape features to the optimization algorithm. An optimization algo-
rithm initially creates a set of random candidate solutions with a nullary creation operator.
It evaluates these solutions and selects promising solutions. To these individuals, unary and
binary reproduction operations are applied. New candidate solutions result, which subse-
quently again undergo selection and reproduction. Notice that the nature of the nullary,
unary, and binary operations – and hence, the data structure used to represent the candi-
date solutions – is not important from the perspective of the optimization algorithm. The
algorithm makes its selection decisions purely based on the obtained objective values and
also may use this information to decide how often mutation and recombination should be
applied. In other words, the behavior of a general optimization algorithm, such as an EA,
is purely based on the fitness landscape features, and not on the precise data structures
representing genotypes and phenotypes.
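This separation between the optimizer and the representation can be made explicit in code. The following Java sketch is our own illustration (not part of the model's reference implementation): it hides the nullary, unary, and binary operators behind a generic interface and instantiates it for bit strings with single-bit mutation and single-point crossover:

```java
import java.util.Random;

/** The EA sees genotypes only through nullary/unary/binary operators. */
interface SearchOperations<G> {
  G create(Random rnd);                 // nullary creation
  G mutate(G g, Random rnd);            // unary reproduction
  G recombine(G a, G b, Random rnd);    // binary reproduction
}

/** Minimal bit-string instantiation of the operator interface. */
class BitStringOps implements SearchOperations<boolean[]> {
  private final int n;

  BitStringOps(int n) { this.n = n; }

  public boolean[] create(Random rnd) {
    boolean[] g = new boolean[n];
    for (int i = 0; i < n; i++)
      g[i] = rnd.nextBoolean();         // uniformly random genotype
    return g;
  }

  public boolean[] mutate(boolean[] g, Random rnd) {
    boolean[] c = g.clone();
    int i = rnd.nextInt(c.length);
    c[i] = !c[i];                       // single-bit mutation
    return c;
  }

  public boolean[] recombine(boolean[] a, boolean[] b, Random rnd) {
    boolean[] c = new boolean[a.length];
    int cut = rnd.nextInt(a.length);    // single-point crossover
    for (int i = 0; i < a.length; i++)
      c[i] = (i < cut) ? a[i] : b[i];
    return c;
  }
}
```

An EA written against SearchOperations<G> never inspects G itself, which is exactly the argument made above: its behavior depends only on the fitness landscape, not on the data structure.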
From this perspective, a Genetic Algorithm is just an instantiation of an Evolutionary
Algorithm with search operations for bit strings, and Genetic Programming is just an EA for
trees. If we can find a good strategy with which a Genetic Algorithm can solve a highly
epistatic problem, and this strategy does not involve changes inside the reproduction operators,
then this strategy is likely to hold for problems in Genetic Programming as well.
Hence, our benchmark model – though working on bit-string based genomes – is well suited
for providing insights which are helpful for other search spaces as well. It was used,
for example, to find good setups for large-scale experimental studies on GP, where
each experiment was very costly and direct parameter tuning was not possible [2888].
x⋆ = 0101010101010 . . . 01 (50.24)
Searching this optimal string could be done by comparing each genotype g with x⋆. For this purpose,
we would use the Hamming distance9 [1173] distHam(a, b), which defines the difference
between two bit strings a and b of equal length as the number of bits in which they differ
(see ??).
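A minimal Java sketch of this distance measure (the method name distHam is our own rendering):

```java
/** Sketch: Hamming distance between two equal-length bit strings. */
class Hamming {
  static int distHam(String a, String b) {
    if (a.length() != b.length())
      throw new IllegalArgumentException("strings must have equal length");
    int d = 0;
    for (int i = 0; i < a.length(); i++)
      if (a.charAt(i) != b.charAt(i)) d++;  // count differing positions
    return d;
  }
}
```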
Instead of doing this directly, we test the candidate solution against tc training samples
T1 , T2 , .., Ttc . These samples are modified versions of the perfect string x⋆ .
As outlined in Section 19.1 on page 191, we can distinguish between overfitting and
oversimplification. The latter is often caused by incompleteness of the training cases and
the former can originate from noise in the training data. Both forms can be expressed in
terms of our model by the objective function fε,o,tc (based on a slightly extended version of
the Hamming distance dist∗Ham ) which is subject to minimization.
In the case of oversimplification, the perfect solution x⋆ will always reach a perfect score in
all training cases. There may be incorrect solutions reaching this value in some cases too,
9 ?? on page ?? includes the specification of the Hamming distance.
because some of the facets of the problem are hidden. We take this into consideration by
placing o don’t care symbols (*) uniformly distributed into the training cases. The values of
the candidate solutions at their loci have no influence on the fitness.
When overfitting is enabled, the perfect solution will not reach the optimal score in any
training case because of the noise present. Incorrect solutions may score better in some
cases and even outperform the real solution if the noise level is high. Noise is introduced
in the training cases by toggling ε of the remaining n − o bits, again following a uniform
distribution. An optimization algorithm can find a correct solution only if there are more
training samples with correctly defined values for each locus than with wrong or don’t care
values.
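The derivation of a training case from the perfect string, as described above, could be sketched as follows; the class name and the use of java.util.Random are our own assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/** Sketch: derive one training case from the perfect string by placing o don't
    care symbols ('*') and toggling eps of the remaining bits, uniformly at random. */
class TrainingCase {
  static String derive(String perfect, int o, int eps, Random rnd) {
    char[] t = perfect.toCharArray();
    int n = t.length;
    // choose o + eps distinct loci uniformly at random
    List<Integer> loci = new ArrayList<>();
    for (int i = 0; i < n; i++) loci.add(i);
    Collections.shuffle(loci, rnd);
    for (int k = 0; k < o; k++)
      t[loci.get(k)] = '*';                   // oversimplification: don't cares
    for (int k = o; k < o + eps; k++) {       // overfitting: noise on remaining bits
      int i = loci.get(k);
      t[i] = (t[i] == '0') ? '1' : '0';
    }
    return new String(t);
  }
}
```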
The optimal objective value is zero and the maximum f̂ of fε,o,tc is limited by the upper
bound f̂ ≤ (n − o)tc. Its exact value depends on the training cases. For each bit index i,
we have to take into account whether a zero or a one in the phenotype would create larger
errors:

count(i, val) = |{j ∈ 1..tc : Tj[i] = val}|   (50.27)

e(i) = count(i, 1) if count(i, 1) ≥ count(i, 0), count(i, 0) otherwise   (50.28)

f̂ = ∑_{i=1}^{n} e(i)   (50.29)
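Assuming the training cases are given as strings over the alphabet {0, 1, *}, Equations 50.27 to 50.29 can be sketched in Java as follows (class and method names are ours):

```java
/** Sketch: maximum possible objective value f^ for a set of training cases
    over the alphabet {0, 1, *} (don't care). */
class MaxFitness {

  /** Number of training cases T_j whose i-th character equals val (Eq. 50.27). */
  static int count(int i, char val, String[] t) {
    int c = 0;
    for (String tj : t)
      if (tj.charAt(i) == val) c++;
    return c;
  }

  /** f^ = sum over all loci of the larger error count e(i) (Eqs. 50.28, 50.29). */
  static int fHat(String[] t, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
      sum += Math.max(count(i, '1', t), count(i, '0', t)); // e(i)
    return sum;
  }
}
```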
The tunable parameter η for the epistasis ranges from 2 to n ∗ m, the product of the
basic problem length n and the number of objectives m (see next section). If it is set to a
value smaller than 3, no additional epistasis is introduced. Figure 50.6 outlines the mapping
for η = 4.
Again it should be noted that, besides the explicit epistasis introduced via this transformation,
implicit epistasis can occur through the neutrality and ruggedness (see ??) mappings.
It is not possible to study these three effects completely separately, but with our
model, well-dosed degrees of epistasis, neutrality, and ruggedness can be generated separately
or jointly.
There are two (possibly interacting) sources of ruggedness in a fitness landscape. The first
one, epistasis, has already been modeled and discussed. The other source concerns the
objective functions themselves, the nature of the problem. We will introduce this type
of ruggedness a posteriori by artificially lowering the causality of the problem space. To this end,
we shuffle the objective values with a permutation r : [0, f̂] 7→ [0, f̂], where f̂ is the
abbreviation for the maximum possible objective value, as defined in Equation 50.29.
Before we do that, let us shortly outline what makes a function rugged. Ruggedness
is obviously the opposite of smoothness and causality. In a smooth objective function, the
objective values of candidate solutions neighboring in problem space are neighboring as well.
In our original problem with o = 0, ε = 0, and tc = 1, for instance, two individuals differing
in one bit will also differ by one in their objective values. We can write down the list of
objective values the candidate solutions will take on when they are stepwise improved from
the worst to the best possible configuration as (f̂, f̂ − 1, .., 2, 1, 0). If we exchange two of the
values in this list, we will create some artificial ruggedness. A measure for the ruggedness
of such a permutation r is ∆(r):

∆(r) = ∑_{i=0}^{f̂−1} |r[i] − r[i + 1]|   (50.32)
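Equation 50.32 translates directly into Java (a sketch; the class name is ours):

```java
/** Sketch: the ruggedness measure Delta(r) of a permutation r of 0..f^. */
class Ruggedness {
  static int delta(int[] r) {
    int sum = 0;
    for (int i = 0; i < r.length - 1; i++)
      sum += Math.abs(r[i] - r[i + 1]);  // absolute differences of neighbors
    return sum;
  }
}
```

For the identity permutation (0, 1, .., f̂), delta returns the minimum value f̂; every exchange of two entries can only increase it.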
[Figure 50.7: The ruggedness permutations rγ for γ = 0 to 10 and f̂ = 5.]
Algorithm 50.2: rγ ←− buildRPermutation(γ, f̂)
  Input: γ: the γ value
  Input: f̂: the maximum objective value
  Data: i, j, d, tmp: temporary variables
  Data: k, start, r: parameters of the subalgorithm
  Output: rγ: the permutation rγ

 1 begin
 2   subalgorithm r ←− permutate(k, r, start)
 3   begin
 4     if k > 0 then
 5       if k ≤ (f̂ − 1) then
 6         r ←− permutate(k − 1, r, start)
 7         tmp ←− r[f̂]
 8         r[f̂] ←− r[f̂ − k]
 9         r[f̂ − k] ←− tmp
10       else
11         i ←− ⌊(start + 1)/2⌋
12         if (start mod 2) = 0 then
13           i ←− f̂ + 1 − i
14           d ←− −1
15         else
16           d ←− 1
17         for j ←− start up to f̂ do
18           r[j] ←− i
19           i ←− i + d
20         r ←− permutate(k − f̂ + start, r, start + 1)
21   r ←− (0, 1, 2, .., f̂ − 1, f̂)
22   return permutate(γ, r, 1)
Figure 50.7 outlines all ruggedness permutations rγ for an objective function which can
range from 0 to f̂ = 5. As can be seen, the permutations scramble the objective function more
and more with rising γ and reduce its gradient information. The ruggedness transformation
is sketched in box 6 of Figure 50.5.
In this section, we will use a selection of the experimental results obtained with our model in
order to validate the correctness of the approach.10 In our experiments, we applied variable-
length bit string genomes (see Section 29.4) with lengths between 1 and 8000 bits. We applied
a plain Genetic Algorithm to the default benchmark function set f = {fε,o,tc, fnf}, where
fnf is the non-functional length criterion fnf(x) = len(x). Pareto ranking (Section 28.3.3) is
used for fitness assignment, and Tournament selection (see Section 28.4.4) with a tournament
size of k = 5 for selection. In a population of ps = 1000 individuals, we perform single-point
crossover at a crossover rate of cr = 80% and apply single-bit mutation to mr = 20% of the
genotypes. The genotypes are mapped to phenotypes as discussed in Section 50.2.7.1. Each
run performs exactly maxt = 1001 generations. For each of the experiment-specific settings
discussed later, at least 50 runs have been performed.
In the experiments, we distinguish between success and perfection. Success means finding
individuals x of optimal functional fitness, i. e., fε,o,tc(x) = 0. Multiple such successful
strings may exist, since superfluous bits at the end of genotypes do not influence their
functional objective. We will refer to the number of generations needed to find a successful
individual as success generations. The perfect string x⋆ has no useless bits; it is the shortest
possible solution with fε,o,tc = 0 and, hence, also optimal in the non-functional length
criterion. In our experiments, we measure the success generations as well as the number of
generations needed to find a perfect, i. e., neither overfitted nor oversimplified, individual.
In Figure 50.8, we have computed the minimum, average, and maximum number of
success generations for values of n ranging from 8 to 800. As illustrated, the
problem hardness increases steadily with rising string length n. Trimming down the solution
strings to the perfect length becomes more and more complicated with growing n. This is
likely because the fraction at the end of the strings where the trimming is to be performed
shrinks in comparison with the overall length.
50.2.7.2.2 Ruggedness
In Figure 50.9, we plotted the average success generations st with n = 80 and different
ruggedness settings γ. Interestingly, the gray original curve behaves very strangely. It is
divided into alternating solvable and unsolvable11 problems. The unsolvable ranges of γ
10 More experimental results and more elaborate discussions can be found in the bachelor’s thesis of
Niemczyk [2037].
11 We call a problem unsolvable if it has not been solved within 1000 generations.
Figure 50.8: The basic problem hardness.
[Plot for Figure 50.9, at n = 80: average success generations st over γ, showing the original γ curve with its deceptive gaps and the rearranged γ′ curve; the fold between ruggedness and deceptiveness is marked.]
Figure 50.9: Experimental results for the ruggedness.
correspond to gaps in the curve. With rising γ, the solvable problems require more and
more generations until they are solved. After a certain (earlier) γ threshold value, the
unsolvable sections become solvable. From there on, they become simpler with rising γ. At
some point, the two parts of the curve meet.
Algorithm 50.3: γ ←− translate(γ′, f̂)
  Input: γ′: the raw γ value
  Input: f̂: the maximum value of fε,o,tc
  Data: i, j, k, l: some temporary variables
  Output: γ: the translated γ value

 1 begin
 2   l ←− ⌊f̂/2⌋ ∗ ⌊(f̂ − 1)/2⌋
 3   i ←− ⌊f̂/2⌋ ∗ ⌊(f̂ + 1)/2⌋
 4   if γ′ ≤ i then
 5     j ←− ⌈(f̂ + 2)/2 − √(f̂²/4 + 1 − γ′)⌉
 6     k ←− γ′ − (j(f̂ + 2) − j² − f̂)
 7     return k + 2(j(f̂ + 2) − j² − f̂) − j
 8   else
 9     j ←− ⌊(f̂ mod 2 + 1)/2 + √((1 − f̂ mod 2)/4 + γ′ − 1 − i)⌋
10     k ←− γ′ − (j − (f̂ mod 2))(j − 1) − 1 − i
11     return l − k − 2j² + j − (f̂ mod 2)(−2j + 1)
The reason for this behavior is rooted in the way we construct the ruggedness
mapping r, and it illustrates the close relation between ruggedness and deceptiveness. Algorithm
50.2 is a greedy algorithm which alternates between creating groups of mappings that
are mainly rugged and groups that are mainly deceptive. In Figure 50.7, for instance, from
γ = 5 to γ = 7, the permutations exhibit a high degree of deceptiveness whilst being just
rugged before and after that range. Thus, it seems to be a good idea to rearrange these
sections of the ruggedness mapping. The identity mapping should still come first, followed
by the purely rugged mappings ordered by their ∆-values. Then, the permutations should
gradually change from rugged to deceptive, and the last mapping should be the most deceptive
one (γ = 10 in Figure 50.7). The black curve in Figure 50.9 depicts the results of
rearranging the γ-values with Algorithm 50.3. This algorithm maps deceptive gaps to higher
γ-values and, by doing so, makes the resulting curve continuous.12
Fig. 50.10.a sketches the average success generations for the rearranged ruggedness prob-
lem for multiple values of n and γ ′ . Depending on the basic problem size n, the problem
hardness increases steeply with rising values of γ ′ .
In Algorithm 50.2 and Algorithm 50.3, we use the maximum value of the functional
objective function (abbreviated with f̂) in order to build and to rearrange the ruggedness
12 This is a deviation from our original idea, but this idea did not consider deceptiveness.
Fig. 50.10.a: Unscaled ruggedness st plot. Fig. 50.10.b: Scaled ruggedness st plot.
Fig. 50.10.c: Scaled deceptiveness – average success generations st. Fig. 50.10.d: Scaled deceptiveness – failed runs.
When using this scaling mechanism, the curves resulting from experiments with different
n-values can be compared more easily: Fig. 50.10.b, based on the scale from Equation 50.33,
for instance, shows much more clearly how the problem difficulty rises with increasing ruggedness
than Fig. 50.10.a does. We can also spot some irregularities which always occur at about the
same degree of ruggedness, near g ≈ 9.5, and which we will investigate in the future.
The experiments with the deceptiveness scale from Equation 50.34 show the tremendous effect
of deceptiveness in the fitness landscape. Not only does the problem hardness rise steeply
with g (Fig. 50.10.c); after a certain threshold, the Evolutionary Algorithm becomes unable to
solve the model problem at all (in 1000 generations), and the fraction of failed experiments
in Fig. 50.10.d jumps to 100% (since the fraction s/r of solved runs goes to zero).
50.2.7.2.3 Epistasis
Fig. 50.11.a illustrates the relation between problem size n, the epistasis factor η, and
Fig. 50.11.a: Epistasis η and problem length n. Fig. 50.11.b: Epistasis η and problem length n: failed runs.
the average success generations. Although rising epistasis makes the problems harder, the
complexity does not rise as smoothly as in the previous experiments. The cause for this is
likely the presence of crossover – if only mutation were used, the impact of epistasis would
most likely be more intense. Another interesting fact is that experimental settings with odd
values of η tend to be much more complex than those with even ones. This relation becomes
even more obvious in Fig. 50.11.b, where the proportion of failed runs, i. e., those which
were not able to solve the problem within 1000 generations, is plotted. A high plateau
for greater values of η is cut by deep valleys at positions where η = 2 + 2i ∀i ∈ N1 . This
phenomenon has been thoroughly discussed by Niemczyk [2037] and can be excluded from
the experiments by applying the scaling mechanism with parameter y ∈ [0, 10] as defined in
Equation 50.35:

epistasis: η = 2 if y ≤ 0; η = 41 if y ≥ 10; η = 2⌊2y⌋ + 1 otherwise   (50.35)
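A direct Java rendering of this scaling (a sketch; the class name is ours):

```java
/** Sketch: scaling from y in [0, 10] to eta values per Equation 50.35. */
class EpistasisScale {
  static int eta(double y) {
    if (y <= 0) return 2;                     // no additional epistasis
    if (y >= 10) return 41;
    return 2 * (int) Math.floor(2 * y) + 1;   // odd values only, avoiding the valleys
  }
}
```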
50.2.7.2.4 Neutrality
Figure 50.12 illustrates the average number of generations st needed to grow an individual
Figure 50.12: The results from experiments with neutrality.
with optimal functional fitness for different values of the neutrality parameter µ. Until
µ ≈ 10, the problem hardness increases rapidly. For larger degrees of redundancy, only
minor increments in st can be observed. The reason for this strange behavior seems to be the
crossover operation. Niemczyk [2037] shows that a lower crossover rate makes experiments
involving the neutrality filter of the model problem very hard. We recommend using only
µ-values in the range from zero to eleven for testing the capability of optimization methods
of dealing with neutrality.
Our model problem consists of independent filters for the properties that may influence the
hardness of an optimization task. It is especially interesting to find out whether these filters
can be combined arbitrarily, i. e., whether they are indeed free of interaction. In the ideal case, the st
of an experiment with n = 80, µ = 8, and η = 0, added to the st of an experiment with n = 80,
µ = 0, and η = 4, should roughly equal the st of an experiment with n = 80, µ = 8, and η = 4.
In Figure 50.13, we have sketched these expected values (Fig. 50.13.a) and the results of the
corresponding real experiments (Fig. 50.13.b). In fact, these two diagrams are very similar.
The small valleys caused by the “easier” values of η (see Paragraph 50.2.7.2.3) occur in both
charts. The only difference is a stronger influence of the degree of neutrality.
It is a well-known fact that epistasis leads to ruggedness, since it violates the causality as
discussed in Chapter 17. Combining the ruggedness and the epistasis filter therefore leads
to stronger interactions. In Fig. 50.14.b, the influence of ruggedness seems to be amplified
by the presence of epistasis when compared with the estimated results shown in Fig. 50.14.a.
Apart from this increase in problem hardness, the model problem behaves as expected. The
characteristic valleys stemming from the epistasis filter are clearly visible, for instance.
Fig. 50.13.a: The expected results. Fig. 50.13.b: The experimentally obtained results.
Figure 50.13: Expectation and reality: experiments involving both epistasis and neutrality.
Fig. 50.14.a: The expected results. Fig. 50.14.b: The experimentally obtained results.
Figure 50.14: Expectation and reality: experiments involving both ruggedness and epistasis.
50.2.7.3 Summary
In summary, this model problem has proven to be a viable approach for simulating problematic
phenomena in optimization.
Niemczyk [2037] has written a stand-alone Java class implementing the model
problem which is provided at http://www.it-weise.de/documents/files/
TunableModel.java [accessed 2008-05-17]. This class allows setting the parameters discussed
in this section and provides methods for determining the objective values of individuals in
the form of byte arrays. In the future, some strange behaviors (like the irregularities in the
ruggedness filter and the gaps in epistasis) of the model need to be revisited, explained, and,
if possible, removed.
Mathematical benchmark functions are especially interesting for testing and comparing
techniques based on real vectors (X ⊆ Rn ) like plain Evolution Strategy (see Chapter 30
on page 359), Differential Evolution (see Chapter 33 on page 419), and Particle Swarm
Optimization (see Chapter 39 on page 481). However, they only require such vectors as
candidate solutions, i. e., elements of the problem space X. Hence, techniques with differ-
ent search spaces G, like Genetic Algorithms, can also be applied to them, given that a
genotype-phenotype mapping gpm : G 7→ X ⊆ Rn is provided accordingly.
The optima or the Pareto frontier of benchmark functions often have already been de-
termined theoretically. When applying an optimization algorithm to the functions, we are
interested in the number of candidate solutions which they need to process in order to find
the optima and how close we can get to them. They also give us a great opportunity to
find out about the influence of parameters like population size, the choice of the selection
algorithm, or the efficiency of reproduction operations.
In this section, we list some of the most important benchmark functions for scenarios in-
volving only a single optimization criterion. This, however, does not mean that the search
space has only a single dimension – even a single-objective optimization can take place in
n-dimensional space Rn .
50.3. REAL PROBLEM SPACES 581
50.3.1.1 Sphere
The sphere functions listed by Suganthan et al. [2626] (also given as F1 by De Jong [711] and
in [2934], and as f1 in [3020, 3026]), defined here in Table 50.2 as fRSO1, provide a very simple
measure of the efficiency of optimization methods. They have, for instance, been used by Rechenberg
[2278] for testing his Evolution Strategy approach. In Figure 50.15, we illustrate
fRSO1 for one-dimensional and two-dimensional problem spaces X, the code snippet provided
in Listing 50.2 shows how the sphere function can be computed, and in Table 50.3, we list
some results from the literature for the application of optimizers to the sphere function.
function: fRSO1(x) = ∑_{i=1}^{n} xi²   (50.36)
domain: X = [−v, v]^n, v ∈ {5.12, 10, 100}   (50.37)
optimization: minimization
optimum: x⋆ = (0, 0, .., 0)^T, fRSO1(x⋆) = 0   (50.38)
separable: yes
multimodal: no
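Listing 50.2 is not reproduced in this excerpt; a minimal Java sketch of Equation 50.36 could look as follows (the class name is ours):

```java
/** Sketch of the sphere function f_RSO1 (cf. Equation 50.36). */
class Sphere {
  static double fRSO1(double[] x) {
    double sum = 0;
    for (double xi : x)
      sum += xi * xi;  // sum of squared coordinates
    return sum;
  }
}
```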
[Figure 50.15: fRSO1 plotted for a one-dimensional (Fig. 50.15.a) and a two-dimensional (Fig. 50.15.b) problem space X.]
Table 50.3: Results obtained for the sphere function with τ function evaluations.
n, v | τ | fRSO1(x) | Algorithm | Class | Reference
500, 100 | 2.5·10^6 | 2.41E-11 | SaNSDE | DE | [3020]
500, 100 | 2.5·10^6 | 4.90E-8 | FEPCC | EA | [3020]
500, 100 | 2.5·10^6 | 2.28E-21 | DECC-O | DE | [3020]
500, 100 | 2.5·10^6 | 6.33E-27 | DECC-G | DE | [3020]
1000, 100 | 5·10^6 | 6.97 | SaNSDE | DE | [3020]
1000, 100 | 5·10^6 | 5.40E-8 | FEPCC | EA | [3020]
1000, 100 | 5·10^6 | 1.77E-20 | DECC-O | DE | [3020]
1000, 100 | 5·10^6 | 2.17E-25 | DECC-G | DE | [3020]
50.3.1.2 Schwefel’s Problem 2.22

This function is defined as f2 in [3026]. Here, we name it fRSO2 and list its features in
Table 50.4. In Figure 50.16, we sketch its one- and two-dimensional versions, Listing 50.3
contains a Java snippet showing how it can be computed, and Table 50.5 provides some
results for this function from the literature.
function: fRSO2(x) = ∑_{i=1}^{n} |xi| + ∏_{i=1}^{n} |xi|   (50.39)
domain: X = [−v, v]^n, v ∈ {10}   (50.40)
optimization: minimization
optimum: x⋆ = (0, 0, .., 0)^T, fRSO2(x⋆) = 0   (50.41)
separable: yes
multimodal: no
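Analogously to Listing 50.3, Equation 50.39 can be sketched in Java (the class name is ours):

```java
/** Sketch of Schwefel's problem 2.22, f_RSO2 (cf. Equation 50.39). */
class Schwefel222 {
  static double fRSO2(double[] x) {
    double sum = 0, prod = 1;
    for (double xi : x) {
      sum += Math.abs(xi);   // sum of absolute values
      prod *= Math.abs(xi);  // product of absolute values
    }
    return sum + prod;
  }
}
```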
[Figure 50.16: fRSO2 plotted for a one-dimensional (Fig. 50.16.a) and a two-dimensional (Fig. 50.16.b) problem space X.]
Table 50.5: Results obtained for Schwefel’s problem 2.22 with τ function evaluations.
n, v τ fRSO2 (x) Algorithm Class Reference
500, 10 2.5 · 10^6 5.27E-2 SaNSDE DE [3020]
500, 10 2.5 · 10^6 1.3E-3 FEPCC EA [3020]
500, 10 2.5 · 10^6 3.77E-10 DECC-O DE [3020]
500, 10 2.5 · 10^6 5.95E-15 DECC-G DE [3020]
500, 10 2.5 · 10^6 4.08E-19 LSRS HC [1140]
500, 10 1 · 10^7 6.0051E2 GA [1140]
500, 10 1 · 10^7 8.9119E2 PSO [1140]
1000, 10 5 · 10^6 1.24 SaNSDE DE [3020]
1000, 10 5 · 10^6 2.6E-3 FEPCC EA [3020]
1000, 10 5 · 10^6 +∞ DECC-O DE [3020]
1000, 10 5 · 10^6 5.37E-14 DECC-G DE [3020]
1000, 10 2.5 · 10^6 1.12E-17 LSRS HC [1140]
1000, 10 1 · 10^7 1.492E3 GA [1140]
1000, 10 1 · 10^7 4.1E47 PSO [1140]
50, 10 2.5 · 10^5 1.91E-11 LSRS HC [1140]
50, 10 1 · 10^7 1.346E1 GA [1140]
50, 10 1 · 10^7 4.68E1 PSO [1140]
100, 10 2.5 · 10^5 3.98E-10 LSRS HC [1140]
100, 10 1 · 10^7 4.007E1 GA [1140]
100, 10 1 · 10^7 1.0486E2 PSO [1140]
2000, 10 2.5 · 10^6 3.08E-17 LSRS HC [1140]
50.3. REAL PROBLEM SPACES 585
50.3.1.3 Schwefel’s Problem 1.2 (Quadric Function)
This function is defined as f3 in [3026] and is here introduced as fRSO3 in Table 50.6. In
Figure 50.17, we sketch this function for one and two dimensions, in Listing 50.4, we give a
Java code snippet showing how it can be computed, and Table 50.7 contains some results
from literature.
function fRSO3(x) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} x_j )^2    (50.42)
domain X = [−v, v]^n, v ∈ {10, 100}    (50.43)
optimization minimization
optimum x⋆ = (0, 0, .., 0)^T, fRSO3(x⋆) = 0    (50.44)
separable no
multimodal no
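Because the inner sum in Equation 50.42 is a prefix sum, the quadric function can be computed in a single pass by carrying the running sum along; a Java sketch (illustrative names, not the book's Listing 50.4) could look like this:

```java
/** A sketch of Schwefel's problem 1.2: sum over i of (x_1 + ... + x_i)^2. */
public final class Schwefel12 {

  public static double compute(final double[] x) {
    double result = 0.0;
    double prefix = 0.0; // running prefix sum x_1 + ... + x_i
    for (final double xi : x) {
      prefix += xi;
      result += prefix * prefix; // square the current prefix sum
    }
    return result;
  }
}
```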
[Figure 50.17: Schwefel’s problem 1.2 (fRSO3). Fig. 50.17.a: 1d, Fig. 50.17.b: 2d]
Table 50.7: Results obtained for Schwefel’s problem 1.2 with τ function evaluations.
n, v τ fRSO3 (x) Algorithm Class Reference
500, 100 2.5 · 10^6 2.03E-8 SaNSDE DE [3020]
500, 100 2.5 · 10^6 – FEPCC EA [3020]
500, 100 2.5 · 10^6 2.93E-19 DECC-O DE [3020]
500, 100 2.5 · 10^6 6.17E-25 DECC-G DE [3020]
1000, 100 5 · 10^6 6.43E1 SaNSDE DE [3020]
1000, 100 5 · 10^6 – FEPCC EA [3020]
1000, 100 5 · 10^6 8.69E-18 DECC-O DE [3020]
1000, 100 5 · 10^6 3.71E-23 DECC-G DE [3020]
50.3.1.4 Schwefel’s Problem 2.21
This function is defined as f4 in [3026] and is here introduced as fRSO4 in Table 50.8. In Figure 50.18, we sketch this function for one and two dimensions, in Listing 50.5, we give a Java code snippet showing how it can be computed, and Table 50.9 contains some results from literature.
[Figure 50.18: Schwefel’s problem 2.21 (fRSO4). Fig. 50.18.a: 1d, Fig. 50.18.b: 2d]
Table 50.9: Results obtained for Schwefel’s problem 2.21 with τ function evaluations.
n, v τ fRSO4 (x) Algorithm Class Reference
500, 100 2.5 · 10^6 4.07E1 SaNSDE DE [3020]
500, 100 2.5 · 10^6 9E-5 FEPCC EA [3020]
500, 100 2.5 · 10^6 6.01E1 DECC-O DE [3020]
500, 100 2.5 · 10^6 4.58E-5 DECC-G DE [3020]
1000, 100 5 · 10^6 4.99E1 SaNSDE DE [3020]
1000, 100 5 · 10^6 8.5E-5 FEPCC EA [3020]
1000, 100 5 · 10^6 7.92E1 DECC-O DE [3020]
1000, 100 5 · 10^6 1.01E1 DECC-G DE [3020]
50.3.1.5 Generalized Rosenbrock’s Function
Originally proposed by Rosenbrock [2333] and also known under the name “banana valley
function” [1841], this function is defined as f5 in [3026] and in [2934], the two-dimensional
variant is given as F2. The interesting feature of Rosenbrock’s function is that it is unimodal for two dimensions but becomes multimodal for higher ones [2464]. Here, we introduce it
as fRSO5 in Table 50.10. In Figure 50.19, we sketch this function for two dimensions, in
Listing 50.6, we give a Java code snippet showing how it can be computed, and Table 50.11
contains some results from literature.
function fRSO5(x) = Σ_{i=1}^{n−1} [ 100·(x_{i+1} − x_i^2)^2 + (x_i − 1)^2 ]    (50.48)
domain X = [v_1, v_2]^n, v_1 ∈ {−5, −30}, v_2 ∈ {10, 30}    (50.49)
optimization minimization
optimum x⋆ = (1, 1, .., 1)^T, fRSO5(x⋆) = 0    (50.50)
separable no
multimodal no for n ≤ 2, yes otherwise
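The sum in Equation 50.48 couples each coordinate with its successor; a Java sketch of the computation (illustrative names, not the book's Listing 50.6) could look like this:

```java
/** A sketch of the generalized Rosenbrock function. */
public final class Rosenbrock {

  public static double compute(final double[] x) {
    double sum = 0.0;
    for (int i = 0; i < (x.length - 1); i++) {
      final double a = x[i + 1] - (x[i] * x[i]); // valley term
      final double b = x[i] - 1.0;               // distance from the optimum
      sum += (100.0 * a * a) + (b * b);
    }
    return sum;
  }
}
```

The optimum x⋆ = (1, 1, .., 1)^T yields 0.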
[Figure 50.19: the generalized Rosenbrock function fRSO5 in two dimensions]
Table 50.11: Results obtained for the generalized Rosenbrock’s function with τ function
evaluations.
n, X τ fRSO5 (x) Algorithm Class Reference
500, [−30, 30] 2.5 · 10^6 1.33E3 SaNSDE DE [3020]
500, [−30, 30] 2.5 · 10^6 – FEPCC EA [3020]
500, [−30, 30] 2.5 · 10^6 6.64E2 DECC-O DE [3020]
500, [−30, 30] 2.5 · 10^6 4.92E2 DECC-G DE [3020]
50.3.1.6 Step Function
[Figure: the step function fRSO6 in one dimension]
Table 50.13: Results obtained for the step function with τ function evaluations.
n, v τ fRSO6 (x) Algorithm Class Reference
500, 100 2.5 · 10^6 3.12E2 SaNSDE DE [3020]
500, 100 2.5 · 10^6 0 FEPCC EA [3020]
500, 100 2.5 · 10^6 0 DECC-O DE [3020]
500, 100 2.5 · 10^6 0 DECC-G DE [3020]
50.3.1.7 Quartic Function with Noise
This function is defined as f7 in [3026]. Whitley et al. [2934] define a similar function as F4 but use normally distributed instead of uniformly distributed noise. Here, the quartic function with noise is introduced as fRSO7 in Table 50.14. In Figure 50.21, we sketch this function for one and two dimensions, in Listing 50.8, we give a Java code snippet showing how it can be computed, and Table 50.15 contains some results from literature.
function fRSO7(x) = Σ_{i=1}^{n} i·x_i^4 + randomUni[0, 1)    (50.54)
domain X = [−v, v]^n, v ∈ {500}    (50.55)
optimization minimization
optimum x⋆ = (0, 0, .., 0)^T, fRSO7(x⋆) = 0    (50.56)
separable yes
multimodal no
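Since the noise term randomUni[0, 1) is drawn anew for every evaluation, the objective value is stochastic. A Java sketch (illustrative names, not the book's Listing 50.8) could look like this:

```java
import java.util.Random;

/** A sketch of the quartic function with noise: sum of i * x_i^4 plus U[0,1). */
public final class QuarticNoise {

  private static final Random RAND = new Random();

  public static double compute(final double[] x) {
    double sum = 0.0;
    for (int i = 0; i < x.length; i++) {
      final double sq = x[i] * x[i];
      sum += (i + 1) * sq * sq; // (i + 1) because the formula indexes from 1
    }
    return sum + RAND.nextDouble(); // uniform noise from [0, 1)
  }
}
```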
[Figure 50.21: the quartic function with noise (fRSO7). Fig. 50.21.a: 1d, Fig. 50.21.b: 2d]
Table 50.15: Results obtained for the quartic function with noise with τ function evaluations.
n, v τ fRSO7 (x) Algorithm Class Reference
500, 500 2.5 · 10^6 1.28 SaNSDE DE [3020]
500, 500 2.5 · 10^6 – FEPCC EA [3020]
500, 500 2.5 · 10^6 1.04E1 DECC-O DE [3020]
500, 500 2.5 · 10^6 1.5E-3 DECC-G DE [3020]
1000, 500 5 · 10^6 1.18E1 SaNSDE DE [3020]
1000, 500 5 · 10^6 – FEPCC EA [3020]
1000, 500 5 · 10^6 2.21E1 DECC-O DE [3020]
1000, 500 5 · 10^6 8.4E-3 DECC-G DE [3020]
50.3.1.8 Generalized Schwefel’s Problem 2.26
This function is defined as f8 in [3026] and as F7 in [2934]. Here, we name it fRSO8 and list its features in Table 50.16. In Figure 50.22, we sketch its one- and two-dimensional versions, Listing 50.9 contains a Java snippet showing how it can be computed, and Table 50.17 provides some results for this function from the literature.
function fRSO8(x) = −Σ_{i=1}^{n} x_i·sin(√|x_i|)    (50.57)
domain X = [−v, v]^n, v ∈ {500}    (50.58)
optimization minimization
optimum v = 500: x⋆ = (420.9687, .., 420.9687)^T, fRSO8(x⋆) ≈ −418.983·n    (50.59)
separable yes
multimodal yes
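A Java sketch of Equation 50.57 (illustrative names, not the book's Listing 50.9) could look like this; at x_i = 420.9687, each summand contributes approximately −418.983 to the total:

```java
/** A sketch of Schwefel's problem 2.26: minus the sum of x_i * sin(sqrt(|x_i|)). */
public final class Schwefel226 {

  public static double compute(final double[] x) {
    double sum = 0.0;
    for (final double xi : x) {
      sum += xi * Math.sin(Math.sqrt(Math.abs(xi)));
    }
    return -sum; // note the leading minus sign of the definition
  }
}
```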
[Figure 50.22: Schwefel’s problem 2.26 (fRSO8). Fig. 50.22.a: 1d, Fig. 50.22.b: 2d]
Table 50.17: Results obtained for Schwefel’s problem 2.26 with τ function evaluations.
n, v τ fRSO8 (x) Algorithm Class Reference
500, 500 2.5 · 10^6 -201796.5 SaNSDE DE [3020]
500, 500 2.5 · 10^6 -209316.4 FEPCC EA [3020]
500, 500 2.5 · 10^6 -209491.0 DECC-O DE [3020]
500, 500 2.5 · 10^6 -209491.0 DECC-G DE [3020]
1000, 500 5 · 10^6 -372991.0 SaNSDE DE [3020]
1000, 500 5 · 10^6 -418622.6 FEPCC EA [3020]
1000, 500 5 · 10^6 -418983 DECC-O DE [3020]
1000, 500 5 · 10^6 -418983 DECC-G DE [3020]
50.3.1.9 Generalized Rastrigin’s Function
This function is defined as f9 in [3026] and as F6 in [2934]. In the context of this work, we
refer to it as fRSO9 in Table 50.18. In Figure 50.23, we sketch this function for one and two
dimensions, in Listing 50.10, we give a Java code snippet showing how it can be computed,
and Table 50.19 contains some results from literature.
function fRSO9(x) = Σ_{i=1}^{n} [ x_i^2 − 10·cos(2πx_i) + 10 ]    (50.60)
domain X = [−v, v]^n, v ∈ {5.12}    (50.61)
optimization minimization
optimum x⋆ = (0, 0, .., 0)^T, fRSO9(x⋆) = 0    (50.62)
separable yes
multimodal yes
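A Java sketch of Equation 50.60 (illustrative names, not the book's Listing 50.10) could look like this; the cosine term creates the regular grid of local optima that makes the function multimodal:

```java
/** A sketch of the generalized Rastrigin function. */
public final class Rastrigin {

  public static double compute(final double[] x) {
    double sum = 0.0;
    for (final double xi : x) {
      // per-dimension term: x_i^2 - 10*cos(2*pi*x_i) + 10
      sum += (xi * xi) - (10.0 * Math.cos(2.0 * Math.PI * xi)) + 10.0;
    }
    return sum;
  }
}
```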
[Figure 50.23: the generalized Rastrigin function fRSO9. Fig. 50.23.a: 1d, Fig. 50.23.b: 2d]
Table 50.19: Results obtained for the generalized Rastrigin’s function with τ function evaluations.
n, v τ fRSO9 (x) Algorithm Class Reference
500, 5.12 2.5 · 10^6 2.84E2 SaNSDE DE [3020]
500, 5.12 2.5 · 10^6 1.43E-1 FEPCC EA [3020]
500, 5.12 2.5 · 10^6 1.76E1 DECC-O DE [3020]
500, 5.12 2.5 · 10^6 0 DECC-G DE [3020]
1000, 5.12 5 · 10^6 8.69E2 SaNSDE DE [3020]
1000, 5.12 5 · 10^6 3.13E-1 FEPCC EA [3020]
1000, 5.12 5 · 10^6 3.12E1 DECC-O DE [3020]
1000, 5.12 5 · 10^6 3.55E-16 DECC-G DE [3020]
50.3.1.10 Ackley’s Function
Ackley’s function is defined as f10 in [3026] and is here introduced as fRSO10 in Table 50.20.
In Figure 50.24, we sketch this function for one and two dimensions, in Listing 50.11, we
give a Java code snippet showing how it can be computed, and Table 50.21 contains some
results from literature.
function fRSO10(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i^2)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e    (50.63)
domain X = [−v, v]^n, v ∈ {10, 32}    (50.64)
optimization minimization
optimum x⋆ = (0, 0, .., 0)^T, fRSO10(x⋆) = 0    (50.65)
separable yes
multimodal yes
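A Java sketch of Equation 50.63 (illustrative names, not the book's Listing 50.11) could look like this; both exponential terms depend on averages over the coordinates, so they are accumulated in one pass:

```java
/** A sketch of Ackley's function. */
public final class Ackley {

  public static double compute(final double[] x) {
    final int n = x.length;
    double sumSq = 0.0;
    double sumCos = 0.0;
    for (final double xi : x) {
      sumSq += xi * xi;
      sumCos += Math.cos(2.0 * Math.PI * xi);
    }
    // -20*exp(-0.2*sqrt(mean of squares)) - exp(mean of cosines) + 20 + e
    return (-20.0 * Math.exp(-0.2 * Math.sqrt(sumSq / n)))
        - Math.exp(sumCos / n) + 20.0 + Math.E;
  }
}
```

At x⋆ = (0, .., 0)^T, the two exponentials evaluate to 1 and e, so the constants cancel and the result is 0.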
[Figure 50.24: Ackley’s function fRSO10. Fig. 50.24.a: 1d, Fig. 50.24.b: 2d]
Table 50.21: Results obtained for Ackley’s function with τ function evaluations.
n, v τ fRSO10 (x) Algorithm Class Reference
500, 32 2.5 · 10^6 7.88 SaNSDE DE [3020]
500, 32 2.5 · 10^6 5.7E-4 FEPCC EA [3020]
500, 32 2.5 · 10^6 1.86E-11 DECC-O DE [3020]
500, 32 2.5 · 10^6 9.13E-14 DECC-G DE [3020]
1000, 32 5 · 10^6 1.12E1 SaNSDE DE [3020]
1000, 32 5 · 10^6 9.5E-4 FEPCC EA [3020]
1000, 32 5 · 10^6 4.39E-11 DECC-O DE [3020]
1000, 32 5 · 10^6 2.22E-13 DECC-G DE [3020]
50.3.1.11 Generalized Griewank Function
The generalized Griewank function (also called Griewangk function [1106, 2934]) is defined as f11 in [3026] and as F8 in [2934]. Locatelli [1755] showed that this function becomes easier to optimize with increasing dimension n, and Cho et al. [569] found a way to derive the number of (local) minima of this function (which grows exponentially with n). Here, we name the function fRSO11 and list its features in Table 50.22. In Figure 50.25, we sketch its one- and two-dimensional versions, Listing 50.12 contains a Java snippet showing how it can be computed, and Table 50.23 provides some results for this function from the literature.
function fRSO11(x) = (1/4000)·Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1    (50.66)
domain X = [−v, v]^n, v ∈ {600}    (50.67)
optimization minimization
optimum x⋆ = (0, 0, .., 0)^T, fRSO11(x⋆) = 0    (50.68)
separable no
multimodal yes
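A Java sketch of Equation 50.66 (illustrative names, not the book's Listing 50.12) could look like this; the product over cos(x_i/√i) is what makes the function non-separable:

```java
/** A sketch of the generalized Griewank function. */
public final class Griewank {

  public static double compute(final double[] x) {
    double sum = 0.0;
    double prod = 1.0;
    for (int i = 0; i < x.length; i++) {
      sum += x[i] * x[i];
      // (i + 1) because the formula indexes the coordinates from 1
      prod *= Math.cos(x[i] / Math.sqrt(i + 1));
    }
    return (sum / 4000.0) - prod + 1.0;
  }
}
```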
[Figure 50.25: the generalized Griewank function fRSO11. Fig. 50.25.a: 1d, Fig. 50.25.b: 2d]
50.3.1.12 Generalized Penalized Function 1
This function is defined as f12 in [3026] and is here introduced as fRSO12 in Table 50.24. In Figure 50.26, we sketch this function for one and two dimensions, in Listing 50.13, we give a Java code snippet showing how it can be computed, and Table 50.25 contains some results from literature.
function fRSO12(x) = (π/n)·{ 10·sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2·[1 + 10·sin^2(πy_{i+1})] + (y_n − 1)^2 } + Σ_{i=1}^{n} u(x_i, 10, 100, 4)    (50.69)
u(x_i, a, k, m) = k·(x_i − a)^m if x_i > a, 0 if −a ≤ x_i ≤ a, k·(−x_i − a)^m otherwise    (50.70)
y_i = 1 + (x_i + 1)/4    (50.71)
domain X = [−v, v]^n, v ∈ {50}    (50.72)
optimization minimization
optimum x⋆ = (−1, −1, .., −1)^T, fRSO12(x⋆) = 0    (50.73)
separable yes
multimodal yes
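A Java sketch of Equations 50.69 to 50.71 (illustrative names, not the book's Listing 50.13) could look like this; the helper u implements the piecewise penalty term of Equation 50.70:

```java
/** A sketch of the generalized penalized function 1. */
public final class Penalized1 {

  /** The penalty term u(x_i, a, k, m): zero inside [-a, a], polynomial outside. */
  static double u(final double xi, final double a, final double k, final double m) {
    if (xi > a) {
      return k * Math.pow(xi - a, m);
    }
    if (xi < -a) {
      return k * Math.pow(-xi - a, m);
    }
    return 0.0;
  }

  public static double compute(final double[] x) {
    final int n = x.length;
    final double[] y = new double[n];
    for (int i = 0; i < n; i++) {
      y[i] = 1.0 + ((x[i] + 1.0) / 4.0); // the substitution y_i = 1 + (x_i + 1)/4
    }
    double s = 10.0 * Math.pow(Math.sin(Math.PI * y[0]), 2);
    for (int i = 0; i < (n - 1); i++) {
      s += Math.pow(y[i] - 1.0, 2)
          * (1.0 + (10.0 * Math.pow(Math.sin(Math.PI * y[i + 1]), 2)));
    }
    s += Math.pow(y[n - 1] - 1.0, 2);
    double penalty = 0.0;
    for (final double xi : x) {
      penalty += u(xi, 10.0, 100.0, 4.0);
    }
    return ((Math.PI / n) * s) + penalty;
  }
}
```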
[Figure 50.26: the generalized penalized function 1 (fRSO12). Fig. 50.26.a: 1d, Fig. 50.26.b: 2d]
Table 50.25: Results obtained for the generalized penalized function 1 with τ function
evaluations.
n, v τ fRSO12 (x) Algorithm Class Reference
500, 50 2.5 · 10^6 2.96 SaNSDE DE [3020]
500, 50 2.5 · 10^6 – FEPCC EA [3020]
500, 50 2.5 · 10^6 2.17E-25 DECC-O DE [3020]
500, 50 2.5 · 10^6 4.29E-21 DECC-G DE [3020]
1000, 50 5 · 10^6 2.96 SaNSDE DE [3020]
1000, 50 5 · 10^6 – FEPCC EA [3020]
1000, 50 5 · 10^6 1.08E-24 DECC-O DE [3020]
1000, 50 5 · 10^6 6.89E-25 DECC-G DE [3020]
50.3.1.13 Generalized Penalized Function 2
This function is defined as f13 in [3026] and is here introduced as fRSO13 in Table 50.26. In
Figure 50.27, we sketch this function for one and two dimensions, in Listing 50.14, we give a
Java code snippet showing how it can be computed, and Table 50.27 contains some results
from literature.
function fRSO13(x) = 0.1·{ sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2·[1 + sin^2(3πx_{i+1})] + (x_n − 1)^2·[1 + sin^2(2πx_n)] } + Σ_{i=1}^{n} u(x_i, 5, 100, 4)    (50.74)
see Equation 50.70 for u(x_i, a, k, m)
domain X = [−v, v]^n, v ∈ {50}    (50.75)
optimization minimization
optimum x⋆ = (1, 1, .., 1)^T, fRSO13(x⋆) = 0    (50.76)
separable yes
multimodal yes
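A Java sketch of Equation 50.74 (illustrative names, not the book's Listing 50.14) could look like this; it reuses the same piecewise penalty term u as the generalized penalized function 1, here with a = 5:

```java
/** A sketch of the generalized penalized function 2. */
public final class Penalized2 {

  /** The penalty term u(x_i, a, k, m) from Equation 50.70. */
  static double u(final double xi, final double a, final double k, final double m) {
    if (xi > a) {
      return k * Math.pow(xi - a, m);
    }
    if (xi < -a) {
      return k * Math.pow(-xi - a, m);
    }
    return 0.0;
  }

  public static double compute(final double[] x) {
    final int n = x.length;
    double s = Math.pow(Math.sin(3.0 * Math.PI * x[0]), 2);
    for (int i = 0; i < (n - 1); i++) {
      s += Math.pow(x[i] - 1.0, 2)
          * (1.0 + Math.pow(Math.sin(3.0 * Math.PI * x[i + 1]), 2));
    }
    s += Math.pow(x[n - 1] - 1.0, 2)
        * (1.0 + Math.pow(Math.sin(2.0 * Math.PI * x[n - 1]), 2));
    double penalty = 0.0;
    for (final double xi : x) {
      penalty += u(xi, 5.0, 100.0, 4.0);
    }
    return (0.1 * s) + penalty;
  }
}
```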
[Figure 50.27: the generalized penalized function 2 (fRSO13). Fig. 50.27.a: 1d, Fig. 50.27.b: 2d]
Table 50.27: Results obtained for the generalized penalized function 2 with τ function
evaluations.
n, v τ fRSO13 (x) Algorithm Class Reference
500, 50 2.5 · 10^6 1.89E2 SaNSDE DE [3020]
500, 50 2.5 · 10^6 – FEPCC EA [3020]
500, 50 2.5 · 10^6 5.04E-23 DECC-O DE [3020]
500, 50 2.5 · 10^6 5.34E-18 DECC-G DE [3020]
1000, 50 5 · 10^6 7.41E2 SaNSDE DE [3020]
1000, 50 5 · 10^6 – FEPCC EA [3020]
1000, 50 5 · 10^6 4.82E-22 DECC-O DE [3020]
1000, 50 5 · 10^6 2.55E-21 DECC-G DE [3020]
50.3.1.14 Levy’s Function
[Figure 50.28: Levy’s function fRSO14. Fig. 50.28.a: 1d, Fig. 50.28.b: 2d]
Table 50.29: Results obtained for Levy’s function with τ function evaluations.
n, v τ fRSO14 (x) Algorithm Class Reference
50, 10 2.5 · 10^5 2.9E-39 LSRS HC [1140]
50, 10 1 · 10^7 3.27E-1 GA [1140]
50, 10 1 · 10^7 1.888E1 PSO [1140]
100, 10 2.5 · 10^5 2.9E-39 LSRS HC [1140]
100, 10 1 · 10^7 6.509E-1 GA [1140]
100, 10 1 · 10^7 5.572E1 PSO [1140]
500, 10 2.5 · 10^6 2.9E-39 LSRS HC [1140]
500, 10 1 · 10^7 3.33 GA [1140]
500, 10 1 · 10^7 5.4602E2 PSO [1140]
1000, 10 2.5 · 10^6 2.9E-39 LSRS HC [1140]
1000, 10 1 · 10^7 7.17 GA [1140]
1000, 10 1 · 10^7 1.62E3 PSO [1140]
2000, 10 2.5 · 10^6 2.9E-39 LSRS HC [1140]
50.3.1.15 Shifted Sphere
The sphere function given in Section 50.3.1.1 is shifted in X by a vector o and in the
objective space by a bias fbias . This function is defined as F1 in [2626, 2627] and [2668].
Here, we specify it as fRSO15 in Table 50.30 and provide a Java snippet showing how it can be computed in Listing 50.15 before listing some results from the literature in Table 50.31.
function: fRSO15(x) = Σ_{i=1}^{n} z_i^2 + fbias    (50.81)
z_i = x_i − o_i    (50.82)
domain: X = [−100, 100]^n    (50.83)
optimization: minimization
optimum: x⋆ = (o_1, o_2, .., o_n)^T, fRSO15(x⋆) = fbias    (50.84)
separable: yes
multimodal: no
shifted: yes
rotated: no
scalable: no
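For the optimum x⋆ = (o_1, .., o_n)^T to attain the value fbias, the shift vector must be subtracted from the candidate vector. A Java sketch (illustrative names, not the book's Listing 50.15) could look like this:

```java
/** A sketch of the shifted sphere function: sum of (x_i - o_i)^2 plus f_bias. */
public final class ShiftedSphere {

  public static double compute(final double[] x, final double[] o,
      final double fBias) {
    double sum = 0.0;
    for (int i = 0; i < x.length; i++) {
      final double zi = x[i] - o[i]; // subtract the shift so that x = o is optimal
      sum += zi * zi;
    }
    return sum + fBias;
  }
}
```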
Table 50.31: Results obtained for the shifted sphere function with τ function evaluations.
n, fbias τ fRSO15 (x) Algorithm Class Reference
500, 0 2.5 · 10^6 2.61E-11 SaNSDE DE [3020]
500, 0 2.5 · 10^6 1.04E-12 DECC-O DE [3020]
500, 0 2.5 · 10^6 3.71E-13 DECC-G DE [3020]
1000, 0 5 · 10^6 1.17 SaNSDE DE [3020]
1000, 0 5 · 10^6 3.66E-8 DECC-O DE [3020]
1000, 0 5 · 10^6 6.84E-13 DECC-G DE [3020]
50.3.1.16 Shifted Rotated High Conditioned Elliptic Function
Table 50.33: Results obtained for the shifted and rotated elliptic function with τ function
evaluations.
n, fbias τ fRSO16 (x) Algorithm Class Reference
500, ? 2.5 · 10^6 6.88E8 SaNSDE DE [3020]
500, ? 2.5 · 10^6 4.78E8 DECC-O DE [3020]
500, ? 2.5 · 10^6 3.06E8 DECC-G DE [3020]
1000, ? 5 · 10^6 2.34E9 SaNSDE DE [3020]
1000, ? 5 · 10^6 1.08E9 DECC-O DE [3020]
1000, ? 5 · 10^6 8.11E8 DECC-G DE [3020]
50.3.1.17 Shifted Schwefel’s Problem 2.21
Schwefel’s problem 2.21 given in Section 50.3.1.4 is shifted in X by a vector o and in the
objective space by a bias fbias = −450. This function is named F2 in [2668]. In Table 50.34,
we define it as fRSO17 and in Listing 50.16, we provide a Java snippet showing how it can be
computed.
Table 50.35: Properties of Schwefel’s modified problem 2.21 with optimum on the bounds.
Table 50.36: Results obtained for Schwefel’s modified problem 2.21 with optimum on the
bounds with τ function evaluations.
n, fbias τ fRSO18 (x) Algorithm Class Reference
500, 0 2.5 · 10^6 4.96E5 SaNSDE DE [3020]
500, 0 2.5 · 10^6 2.4E5 DECC-O DE [3020]
500, 0 2.5 · 10^6 1.15E5 DECC-G DE [3020]
1000, 0 5 · 10^6 5.03E5 SaNSDE DE [3020]
1000, 0 5 · 10^6 3.73E5 DECC-O DE [3020]
1000, 0 5 · 10^6 2.2E5 DECC-G DE [3020]
50.3.1.19 Shifted Rosenbrock’s Function
function: fRSO19(x) = Σ_{i=1}^{n−1} [ 100·(z_i^2 − z_{i+1})^2 + (z_i − 1)^2 ] + fbias    (50.96)
z_i = x_i − o_i + 1    (50.97)
domain: X = [−100, 100]^n    (50.98)
optimization: minimization
optimum: x⋆ = (o_1, o_2, .., o_n)^T, fRSO19(x⋆) = fbias    (50.99)
separable: no
multimodal: yes
shifted: yes
rotated: no
scalable: yes
Table 50.38: Results obtained for the shifted Rosenbrock’s function with τ function evaluations.
n, fbias τ fRSO19 (x) Algorithm Class Reference
500, ? 2.5 · 10^6 2.71E3 SaNSDE DE [3020]
500, ? 2.5 · 10^6 1.71E3 DECC-O DE [3020]
500, ? 2.5 · 10^6 1.56E3 DECC-G DE [3020]
Ackley’s function given in Section 50.3.1.10 is shifted in X by a vector o, rotated with a matrix M, and shifted in the objective space by a bias fbias. This function is defined as F8 in [2626, 2627].
function fRSO21(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} z_i^2)) − exp((1/n)·Σ_{i=1}^{n} cos(2πz_i)) + 20 + fbias    (50.104)
z = (x − o) · M    (50.105)
domain: X = [−32, 32]^n    (50.106)
optimization: minimization
optimum: x⋆ = (o_1, o_2, .., o_n)^T, fRSO21(x⋆) = fbias    (50.107)
separable: no
multimodal: yes
shifted: yes
rotated: yes
scalable: yes
Table 50.41: Results obtained for the shifted and rotated Ackley’s function with τ function
evaluations.
n, fbias τ fRSO21 (x) Algorithm Class Reference
500, ? 2.5 · 10^6 2.15E1 SaNSDE DE [3020]
500, ? 2.5 · 10^6 2.14E1 DECC-O DE [3020]
500, ? 2.5 · 10^6 2.16E1 DECC-G DE [3020]
1000, ? 5 · 10^6 2.16E1 SaNSDE DE [3020]
1000, ? 5 · 10^6 2.14E1 DECC-O DE [3020]
1000, ? 5 · 10^6 2.16E1 DECC-G DE [3020]
50.3.1.22 Shifted Rastrigin’s Function
Table 50.43: Results obtained for the shifted Rastrigin’s function with τ function evaluations.
n, fbias τ fRSO22 (x) Algorithm Class Reference
500, ? 2.5 · 10^6 6.6E2 SaNSDE DE [3020]
500, ? 2.5 · 10^6 8.66 DECC-O DE [3020]
500, ? 2.5 · 10^6 4.5E2 DECC-G DE [3020]
1000, ? 5 · 10^6 3.2E3 SaNSDE DE [3020]
1000, ? 5 · 10^6 8.96E1 DECC-O DE [3020]
1000, ? 5 · 10^6 6.32E2 DECC-G DE [3020]
50.3.1.23 Shifted and Rotated Rastrigin’s Function
Table 50.45: Results obtained for the shifted and rotated Rastrigin’s function with τ function
evaluations.
n, fbias τ fRSO23 (x) Algorithm Class Reference
500, ? 2.5 · 10^6 6.97E3 SaNSDE DE [3020]
500, ? 2.5 · 10^6 1.5E4 DECC-O DE [3020]
500, ? 2.5 · 10^6 5.33E3 DECC-G DE [3020]
1000, ? 5 · 10^6 1.27E4 SaNSDE DE [3020]
1000, ? 5 · 10^6 3.18E4 DECC-O DE [3020]
1000, ? 5 · 10^6 9.73E3 DECC-G DE [3020]
50.3.1.24 Schwefel’s Problem 2.13
This function is defined as F13 in [2626, 2627]. A and B are n × n matrices, the a_{ij} and b_{ij} are integer random numbers from −100..100, and the α_j in α = (α_1, α_2, .., α_n)^T are random numbers in [−π, π].
function fRSO24(x) = Σ_{i=1}^{n} (A_i − B_i(x))^2 + fbias    (50.116)
A_i = Σ_{j=1}^{n} (a_{ij}·sin(α_j) + b_{ij}·cos(α_j))    (50.117)
B_i(x) = Σ_{j=1}^{n} (a_{ij}·sin(x_j) + b_{ij}·cos(x_j))    (50.118)
domain X = [−π, π]^n    (50.119)
optimization minimization
optimum x⋆ = (α_1, α_2, .., α_n)^T, fRSO24(x⋆) = fbias    (50.120)
separable no
multimodal yes
shifted: yes
rotated: no
scalable: yes
Table 50.47: Results obtained for Schwefel’s problem 2.13 with τ function evaluations.
n, v τ fRSO24 (x) Algorithm Class Reference
500, 100 2.5 · 10^6 2.53E2 SaNSDE DE [3020]
500, 100 2.5 · 10^6 2.81E1 DECC-O DE [3020]
500, 100 2.5 · 10^6 2.09E2 DECC-G DE [3020]
1000, 100 5 · 10^6 6.61E2 SaNSDE DE [3020]
1000, 100 5 · 10^6 7.52E1 DECC-O DE [3020]
1000, 100 5 · 10^6 3.56E2 DECC-G DE [3020]
50.4. CLOSE-TO-REAL VEHICLE ROUTING PROBLEM BENCHMARK 621
50.3.1.25 Shifted Griewank Function
50.4.1 Introduction
Logistics is one of the most important services in our economy. The transportation of goods, parcels, and mail from one place to another is what connects different businesses and people. Without such transportation, basically all economic processes in an industrial society would break down.
Logistics is omnipresent. But is it efficient? We already discussed in Section 12.3 on page 147 that the Traveling Salesman Problem (see Example E2.2 on page 45) is NP-complete. Solving the TSP means finding an efficient travel plan for a single salesman. Planning for a whole fleet of vehicles and a set of orders, under various constraints and conditions,
seems to be at least as complex and, from an algorithmic and implementation perspective, is probably much harder. Hence, most logistics companies today still rely on manual planning. Knowing that, we can be pretty sure that they can benefit greatly from solving such Vehicle Routing Problems [497, 1091, 2162] in an optimal manner, which reduces costs, travel distances, and CO2 emissions.
In Section 49.3 on page 536, we discuss a real-world application of Evolutionary Algo-
rithms in the area of logistics: the search for optimal goods transportation plans, a problem
which belongs to the class of Vehicle Routing Problems (VRP). In Section 49.3, you can find a
detailed description and model gleaned from the situation in one of the divisions of a large
logistics company (Section 49.3.2) as well as an in-depth discussion of related research work
on Vehicle Routing Problems (Section 49.3.3). Here, we want to introduce a benchmark
generator for a general class of VRPs which are close to the situation in today’s parcel
companies (if not even closer than the work introduced in Section 49.3).
[Figure 50.29: sketch of the interacting objects: truck t1, containers c1 and c2, depots A and B, customer C, pickup and delivery locations; another container is picked up in another depot]
A real-world logistics company knows five kinds of objects: orders, containers, trucks,
drivers, and locations. With these objects, the situation faced by the planner can be com-
pletely described. In Figure 50.29 we sketch how these objects interact and describe this
interaction below in detail.
2. Orders are placed into containers which are provided by the logistics company. A
container can either be empty or contain exactly one order. We assume that each
order fits into exactly one container. If an order was larger than one container, we
can imagine that it will be split up into n single orders without loss of generality. A
customer has to pay for at least one container. If the orders are smaller, the company
still uses (and charges for) one full container. Containers reside at depots and, after
all deliveries are performed, must be transported back to a depot (not necessarily their
starting depot). For more information, see Section 50.4.2.4.
3. Trucks are the vehicles that transport the containers. The company can own different types of trucks with different capacities. The capacity limit per truck is usually between one and three containers. A truck can either be empty or transport at most as many containers as its capacity allows. A truck is always driven by one driver, but some trucks have additional seats so that another driver can travel along.
Trucks also reside at depots and, after they have finished their tours, must arrive at
a depot again (not necessarily their starting depot). A truck cannot drive without a
driver. See Section 50.4.2.3 on page 627 for more information.
4. Drivers work for the logistics company and drive the trucks. Each driver has a “home
depot” to which she needs to return after the orders have been delivered. For more
information, see Section 50.4.2.5 on page 628.
5. Locations are places on the map. The customers reside at locations and deliveries take
place between locations. Depots are special locations: Here, the logistics company
stores its containers, employs its drivers, and parks its trucks. Trucks pick up orders at
given locations and deliver them to other locations. Also, trucks may meet at locations
and exchange their freight. See Section 50.4.2.1 on page 625 for more information.
1. Pick up and deliver all orders within their time windows and, hence, fulfill the contracts with the customers. Every missed time window or undelivered order is considered a delivery error. The number of delivery errors must be minimized.
2. Orders should be delivered at the lowest possible costs. Hence, the costs should be
minimized.
While trying to achieve these goals, the planner obviously has to obey the laws of physics and logic. The following constraints always hold:
1. No truck, container, order, or driver can be at more than one place at one point in
time. A driver cannot drive a truck from A to B while simultaneously driving another
truck from C to D. A container cannot be used to transport order O from C to D
while, at the same time, being carried by another truck from C to E. And so on.
2. Trucks, containers, orders, and drivers can only leave from places where they actually
reside: A truck which is parked at depot A cannot leave depot B (without driving
there first). A truck driver who is located at depot C cannot get into a truck which
is parked in depot D (without driving there first). A container located at depot E cannot be loaded into a truck residing in depot F (without being driven there first).
And so on.
3. Driving from one location to another takes time. A truck cannot get from location A
to a distant location B within 0s. The time needed for getting from A to B depends
on (1) the distance between A and B and (2) the maximum speed of the truck. A
truck cannot go from one location to another faster than what the (Newtonian) laws
of physics [2029] allow.
5. Objects can only be moved by trucks and trucks are driven by drivers: (a) A truck can-
not move without a driver. (b) A driver cannot move without a truck. (c) A container
cannot move without a truck. (d) An order cannot move without a container.
6. The capacity limits of the involved objects (trucks, containers) must be obeyed. A
truck can only carry a finite weight and a container can only contain a finite volume
(here: one order).
7. Orders can only be picked up during the pickup time interval and only be delivered
during the delivery time interval.
8. No new objects appear during the planning process. No new trucks, drivers, or con-
tainers can somehow be “created” and no new orders come in.
9. The features of the involved objects do not change during the planning process. A
truck cannot become faster or slower and the capacity limits of the involved objects
cannot be increased or decreased.
The input data of an optimization algorithm for this benchmark problem comprises the
sets of objects and their features describing a complete starting situation for the planning
process.
1. The orders with their IDs, pickup and destination locations, and time windows.
2. The IDs of the containers and the corresponding starting locations (depots).
3. The trucks with their IDs, starting locations (depots), speeds and capacity limits as well as their costs (per hour-kilometer).
4. The drivers with their IDs, home depots, and salary (per day).
5. The distance matrix13 containing the distances between the different locations. Notice
that, in the real world, these matrices are slightly asymmetric, meaning that the
distance from location A to B may be (slightly) different from the distance between
B and A. The distances are given in m.
The optimizer then must find a plan which involves the input objects shortly listed in Sec-
tion 50.4.1.3. Such a plan, which can be constructed by any of the optimization algorithms
discussed in Part III and Part IV, has the following features:
1. It assigns trucks, containers, and drivers to moves. A move is one atomic logistic action which is described by (a) a truck, (b) a (possibly empty) set of orders to be transported by that truck, (c) a non-empty set of truck drivers (containing at least one driver), (d) a (possibly empty) set of containers which will hold the orders, (e) a starting location, (f) a starting time, and (g) a destination location.
Notice that the arrival time at the destination is determined by the distance between
the start and destination location, the truck’s average speed, and the starting time.
See Section 50.4.3.1 on page 632 for more information.
2. A plan may consist of an arbitrary number of such moves. See Section 50.4.3.2 on
page 634 for more information.
3. This plan must fulfill all the constraints given in Section 50.4.1.2 while minimizing the
two objectives, cost and delivery errors.
50.4.2.1 Locations
A location is a place where a truck can drive to. Each location l = (id, isDepot) has the
following features:
1. A unique identifier id which is a string that starts with an uppercase L followed by six
decimal digits. Example: L000021.
2. A Boolean value isDepot which is true if the location denotes a depot or false if it is
a normal place (such as a customer’s location). Trucks, containers, and drivers always
reside at depots at the beginning of a planning period and must return to depots at
the end of a plan. Example: true.
In our benchmark Java implementation given in Section 57.3, locations are represented by
the class Location given in Listing 57.24 on page 914. This class allows creating and reading
textual representations of the location data. Hence, using our benchmark is not bound to
using the Java language. A list of locations can be given in the form of Listing 50.21.
13 http://en.wikipedia.org/wiki/Distance_matrix [accessed 2010-11-09]
Listing 50.21: The location data of the first example problem instance.
1 %id is_depot
2 L000000 true
3 L000001 true
4 L000002 true
5 L000003 true
6 L000004 true
7 L000005 false
8 L000006 false
9 L000007 false
10 L000008 false
11 L000009 true
12 L000010 false
13 L000011 false
14 L000012 false
15 L000013 false
16 L000014 false
17 L000015 true
18 L000016 false
19 L000017 false
20 L000018 false
21 L000019 true
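The textual format above is simple to parse: each non-comment line holds a location identifier and a Boolean depot flag separated by whitespace. As an illustration (class and method names are our own, not those of the Location class in Listing 57.24), one line could be read as follows:

```java
/** A sketch of a record for one line of the location data file. */
public final class LocationRecord {

  public final String id;       // e.g. "L000005"
  public final boolean isDepot; // true for depots, false for normal places

  public LocationRecord(final String id, final boolean isDepot) {
    this.id = id;
    this.isDepot = isDepot;
  }

  /** Parses one non-comment line such as "L000005 false". */
  public static LocationRecord parse(final String line) {
    final String[] parts = line.trim().split("\\s+");
    return new LocationRecord(parts[0], Boolean.parseBoolean(parts[1]));
  }
}
```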
50.4.2.2 Orders
Listing 50.22: The order data of the first example problem instance.
1 %id pickup_window_start pickup_window_end pickup_location delivery_window_start
delivery_window_end delivery_location
2 O000000 2700000 5400000 L000016 2700000 7200000 L000000
3 O000001 1800000 5400000 L000016 2700000 7200000 L000000
4 O000002 4500000 6300000 L000010 5400000 8100000 L000006
5 O000003 4500000 7200000 L000006 4500000 8100000 L000000
6 O000004 1800000 2700000 L000008 2700000 3600000 L000011
7 O000005 0 3600000 L000011 0 5400000 L000018
8 O000006 1800000 3600000 L000018 2700000 5400000 L000005
9 O000007 2700000 5400000 L000005 2700000 6300000 L000013
10 O000008 2700000 7200000 L000010 2700000 9900000 L000008
11 O000009 3600000 5400000 L000002 4500000 7200000 L000004
7. The delivery location dl is where the order has to be delivered to. An order can only
be delivered during its delivery time window, i. e., at times t with ds ≤ t ≤ de . The
delivery location is denoted with a location identifier. Example: L000000. If a truck
arrives at a delivery location before ds , it has to wait until ds before it can deliver the
order. All deliveries outside the delivery window are delivery errors.
In Listing 57.25 on page 915, we provide a Java class to hold the order information. This
class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.22.
50.4.2.3 Trucks
1. A unique identifier id which is a string that starts with an uppercase T followed by six
decimal digits. Example: T000003.
2. A starting location sl , a depot, where the truck is parked. This location is denoted by
a location identifier. Example: L000015.
3. The maximum number maxC ∈ N1 of containers that this truck can transport. This
number, usually 0 < maxC ≤ 3, can be different for each truck. A truck cannot carry
more than maxC containers. Example: 2.
4. The maximum number maxD ∈ N1 of drivers which can be in the truck. A truck must
be driven by at least one driver. It may, however, be possible that a truck has more
than one seat, so that two truck drivers can drive to their home depot together on
their return trip, for example. Example: 2.
5. The costs per hour-kilometer costHKm in USD-cent (or RMB-fen). If the truck t drives
for m kilometers needing n hours, the total costs of this move are n ∗ m ∗ costHKm.
These costs include, for example, fuel consumption and material/maintenance costs
for the truck. These costs, of course, may be different for different trucks. Example:
170.
6. The average speed avgSpeed in km/h with which the truck can move. Of course, this
speed can again be different from truck to truck, some larger trucks may be slower,
smaller trucks may be faster. The same goes for older and newer trucks. Example:
45.
Listing 50.23: The truck data of the first example problem instance.
%id start_location max_containers max_drivers costs_per_h_km avg_speed_km_h
T000000 L000000 3 2 230 50
T000001 L000019 2 1 185 50
T000002 L000015 3 1 215 30
T000003 L000002 3 1 160 50
T000004 L000004 1 1 185 50
T000005 L000015 1 2 170 50
T000006 L000001 2 1 145 50
T000007 L000019 2 1 230 40
T000008 L000019 2 1 145 45
T000009 L000003 2 2 200 55
T000010 L000002 3 1 270 45
T000011 L000003 2 1 230 40
T000012 L000009 3 1 160 50
A truck will always be located at a depot at the beginning of a planning period and must
also be located at a depot after the plan was executed. The end depot may, however, be a
different one from the start depot.
In Listing 57.26 on page 918, we provide a Java class to hold the truck information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.23.
50.4.2.4 Containers
Containers c = (id, sl ) are simple objects which are used to package the orders for trans-
portation. They are characterized by
1. The container ID id. Container IDs start with an uppercase C followed by a six-digit
decimal number. The container IDs form a strict sequence of numbers. Example:
C000003.
2. The start location sl – the depot where the container resides. sl is a location ID.
Example: L000000.
A container will always be located at a depot at the beginning of a planning period and
must also be located at a depot after the plan was executed. The end depot may, however,
be a different one from the start depot.
In Listing 57.27 on page 920, we provide a Java class to hold the container information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.24.
50.4.2.5 Drivers
Drivers d = (id, sl , costs) are needed to drive the trucks. They are characterized by
1. The driver ID id. Driver IDs start with an uppercase D followed by a six-digit decimal
number. The driver IDs form a strict sequence of numbers. Example: D000007.
2. The home location sl – the depot where the driver starts her work. sl is a location
ID. Example: L000002.
3. The costs per day costs denote the amount in USD-cent (or RMB-fen) that has to
be paid for each day on which the driver is working. Even if the driver only works for
one millisecond of the day, the full amount must be paid.
Listing 50.24: The container data of the first example problem instance.
%id start
C000000 L000000
C000001 L000000
C000002 L000019
C000003 L000015
C000004 L000019
C000005 L000019
C000006 L000019
C000007 L000001
C000008 L000001
C000009 L000004
C000010 L000000
C000011 L000019
C000012 L000009
C000013 L000000
Listing 50.25: The driver data of the first example problem instance.
%id home costs_per_day
D000000 L000000 7000
D000001 L000000 11000
D000002 L000019 9000
D000003 L000015 10000
D000004 L000003 11000
D000005 L000009 7000
D000006 L000001 10000
D000007 L000019 7000
D000008 L000019 8000
D000009 L000009 11000
D000010 L000019 9000
D000011 L000001 11000
D000012 L000009 10000
D000013 L000009 11000
A driver will always be located at a depot at the beginning of a planning period and must
also be located at exactly this depot after the plan was executed.
In Listing 57.28 on page 921, we provide a Java class to hold the driver information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.25.
The asymmetric distance matrix ∆ contains the distances between the different locations in
meters. If there are n locations, the distance matrix is an n × n matrix.
The element ∆i,j at position i, j ∈ N0 denotes the distance between the locations i and
j. The distance between L000009 and L000017 would be found at ∆9,17 . Notice that this
may be different from the distance between L000017 and L000009 which can be found at
∆17,9 . Since the resolution of the distance matrix is meters, it is easy to understand that
these values may slightly differ. If highways are used, for example, the on-ramps to and
the exits from a highway may be at slightly different positions.
The time T (t, i, j) in milliseconds that a truck t needs to drive from location i to location
j can be computed as specified in Equation 50.125.
T (t, i, j) = ∆i,j / (1000 · t.avgSpeed) · 3600 · 1000 = (∆i,j / t.avgSpeed) · 3600 (50.125)
The driving costs C(t, i, j) in USD-cent (or RMB-fen) consumed by a truck t when driving
from location i to location j can be computed as specified in Equation 50.126.
C(t, i, j) = (T (t, i, j) / 3 600 000) · (∆i,j / 1000) · t.costHKm (50.126)
These costs do not include the costs that have to be spent for the drivers. In Listing 57.29
on page 923, we provide a Java class to hold the distance information. This class can
produce and read a textual representation of the data mentioned above. An example for
this representation is given in Listing 50.26.
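To make the two formulas concrete, the following sketch computes the travel time and driving costs for a single hop. The class and method names are our own and not part of the benchmark framework; the sample values are taken from Listing 50.23 (truck T000008: 45 km/h at 145 cent per hour-kilometer) and Listing 50.26 (∆9,17 = 5250 m).

```java
// A minimal sketch of Equations 50.125 and 50.126. The class and method names
// are our own; distances are in meters, speeds in km/h, times in milliseconds.
public class TruckMath {

    /** Equation 50.125: travel time in milliseconds over distanceMeters at avgSpeedKmH. */
    static long travelTimeMs(long distanceMeters, double avgSpeedKmH) {
        // (m / 1000) km divided by (km/h) gives hours; * 3 600 000 gives milliseconds
        return Math.round(distanceMeters / (1000.0 * avgSpeedKmH) * 3_600_000.0);
    }

    /** Equation 50.126: driving costs = hours * kilometers * costs per hour-kilometer. */
    static double drivingCosts(long distanceMeters, double avgSpeedKmH, double costHKm) {
        double hours = travelTimeMs(distanceMeters, avgSpeedKmH) / 3_600_000.0;
        double kilometers = distanceMeters / 1000.0;
        return hours * kilometers * costHKm;
    }

    public static void main(String[] args) {
        // Truck T000008 (45 km/h, 145 cent per h-km) driving the 5250 m from L000009 to L000017
        System.out.println(travelTimeMs(5250, 45));       // 420000 ms, i.e., 7 minutes
        System.out.println(drivingCosts(5250, 45, 145));  // driving costs in cent
    }
}
```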
Listing 50.26: The distance data of the first example problem instance.
0 15700 10500 16450 5600 6750 5000 4900 5200 14100 14000 11850 13150 14200 13700 12800 14450 18050 12350 4550
15550 0 17050 22950 12150 13300 11550 11450 11750 20600 20550 18350 19650 20650 20200 19300 20900 24600 18800 11050
10450 17050 0 17950 7100 8150 6450 6350 6700 15500 15500 13300 14500 15550 15150 14150 15800 19500 13700 5950
16400 23100 18050 0 12150 13300 11550 12400 11750 6500 2450 4650 3500 4050 4500 5350 7050 10750 5100 12000
5550 12250 7100 12150 0 1150 650 1450 400 9750 9700 7500 8850 9800 9400 8400 10100 13750 7950 1150
6650 13300 8200 13300 1150 0 1700 2600 1550 10850 10850 8600 9900 10950 10500 9550 11200 14900 9100 2200
4950 11650 6500 11650 650 1750 0 850 200 9200 9100 6900 8250 9300 8900 7850 9500 13200 7400 450
4800 11500 6400 12300 1450 2600 900 0 1050 9950 9900 7700 8950 10050 9600 8600 10250 13900 8100 350
5200 11850 6700 11800 400 1550 200 1050 0 9400 9300 7150 8550 9550 9150 8100 9700 13450 7650 750
13950 20600 15500 6500 9750 10850 9150 9850 9350 0 4050 2300 3600 2400 2300 1300 1500 5250 1850 9500
13950 20600 15500 2450 9750 10850 9150 9850 9300 4050 0 2250 1050 1700 1950 2950 4650 8300 2700 9500
11800 18400 13350 4700 7550 8700 6900 7700 7150 2250 2250 0 1300 2350 1950 900 2600 6300 450 7300
13050 19700 14650 3500 8800 10000 8200 8950 8400 3600 1050 1300 0 2800 3000 2200 3900 7600 1750 8600
14000 20700 15600 4050 9800 10950 9250 10000 9450 2400 1700 2350 2750 0 450 1450 3100 6750 1900 9550
13650 20250 15150 4400 9400 10500 8850 9550 9050 2350 1950 1950 3000 450 0 1000 2650 6350 1500 9150
12700 19300 14250 5400 8450 9550 7850 8600 8100 1300 2950 950 2250 1450 1000 0 1700 5350 550 8200
14350 21050 15900 7000 10100 11250 9500 10300 9750 1500 4600 2600 3900 3100 2650 1700 0 3650 2200 9950
18050 24650 19550 10750 13750 14850 13200 13900 13400 5200 8300 6250 7550 6750 6300 5400 3650 0 5850 13550
12150 18900 13700 5100 7950 9050 7350 8100 7550 1850 2700 450 1750 1900 1500 550 2200 5850 0 7750
4450 11100 6000 11950 1100 2200 450 350 650 9550 9500 7300 8550 9700 9200 8200 9850 13550 7750 0
50.4.3 Problem Space
A candidate solution for this optimization problem is a list of atomic transportation actions
which we here call moves. A move m = (t, o, d, c, sl , st , dl ) is a tuple consisting of the
following elements.
3. A non-empty set d of drivers, one of which will drive the truck (whereas the others, if
any, just use the truck for transportation). Notice that 0 < |d| ≤ t.maxD. Example:
(D000000, D000001).
5. A starting location sl from where the truck will start to drive. Example: L000010.
6. A starting time st at which the truck will leave the starting location (measured in
milliseconds from the beginning of the planning period). Example: 5 691 600.
7. A destination location dl ≠ sl where the truck will drive to. Example: L000006.
It is assumed that truck t will arrive at m.dl at time m.st + T (t, m.sl , m.dl ) (see Equa-
tion 50.125), hence it is not necessary to store an arrival time in the tuple m. One part of
the costs of the move can be computed according to Equation 50.126. The other part must
be determined by adding the driver costs (see Section 50.4.2.5).
In Listing 57.30 on page 925, we provide a Java class to hold the information of a single
move. This class can produce and read a textual representation of the data mentioned above.
An example for this representation is given in Listing 50.27.
Listing 50.27: The move data of the solution to the first example problem instance.
%truck orders drivers containers from start time to
T000000 () (D000000,D000001) (C000001,C000000) L000000 2610000 L000016
T000000 (O000001,O000000) (D000000,D000001) (C000001,C000000) L000016 3650400 L000000
T000000 () (D000000,D000001) (C000001,C000000) L000000 4683600 L000010
T000000 (O000002) (D000000,D000001) (C000001,C000000) L000010 5691600 L000006
T000000 (O000003) (D000000,D000001) (C000001,C000000) L000006 6350400 L000000
T000001 () (D000002) (C000002) L000019 2520000 L000008
T000001 (O000004) (D000002) (C000002) L000008 2566800 L000011
T000001 (O000005) (D000002) (C000002) L000011 3081600 L000018
T000001 (O000006) (D000002) (C000002) L000018 3114000 L000005
T000001 (O000007) (D000002) (C000002) L000005 3765600 L000013
T000001 () (D000002) (C000002) L000013 4554000 L000010
T000001 (O000008) (D000002) (C000002) L000010 4676400 L000008
T000001 () (D000002) (C000002) L000008 5346000 L000019
T000002 () (D000003) (C000003) L000015 3060000 L000002
T000002 (O000009) (D000003) (C000003) L000002 4770000 L000004
T000002 () (D000003) (C000003) L000004 5619600 L000015
Listing 50.28: An example output of the features of an example plan (with errors).
Total Errors : 4
Total Costs : 63863
Total Time Error : 0
Container Errors : 0
Driver Errors : 0
Order Errors : 4
Truck Errors : 0
Pickup Time Error : 0
Delivery Time Error : 0
Truck Costs : 5863
Driver Costs : 58000
In Listing 50.27, we do not display a single move. Instead, a set of multiple moves is
presented. Here, we refer to such sets of moves as plans x = (m1 , m2 , . . . ). Trucks first drive
to the pickup locations of the orders. They pick up the orders, deliver them, and then drive
back. Sometimes, a truck may not drive back directly but could pick up another order on
the way.
The problem space X thus consists of all possible plans x, i. e., of lists of all possible
moves m. It is the goal of the optimization process to construct good plans.
The solutions to an instance of the presented type of Vehicle Routing Problem follow
exactly the format given in Listing 50.27. They are textual representations of the candidate
solutions x and, hence, can be created using an arbitrary synthesis/optimization method
implemented in an arbitrary programming language.
The objectives of solving the (benchmarking) real-world Vehicle Routing Problem are quite
simple, as already stated in Section 50.4.1.2 on page 623: (a) We want to deliver all orders
in time (b) at the least possible costs. In addition to these objectives, we have to consider
constraints: the synthesized plans must be physically sound.
Evaluating a candidate solution of this kind of problem is relatively complex. Basically,
we can do this as follows:
1. Initialize an internal data structure that keeps track of the positions and arrival times
of all objects by placing all objects at their starting locations.
2. Sort the list x ∈ X of moves m ∈ x according to the starting times of the moves.
3. For each move, starting with the earliest one, do:
(a) Compute the costs of the move and add them to an internal cost counter.
(b) Update the positions and arrival times of all involved objects in each step.
(c) After each step, check if any physical or time window constraint has been violated.
If so, count the number of errors.
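The three steps above can be sketched in Java as follows. This is a deliberately simplified, hypothetical model: a move is reduced to a truck, its start and destination locations, a starting time, and a cost, and the only constraint checked is that a truck must actually be at a move's starting location. The real Compute class in Listing 57.31 additionally tracks containers, drivers, orders, and time windows.

```java
import java.util.*;

// Simplified sketch of the plan evaluation loop; NOT the Compute class from Listing 57.31.
public class PlanEval {

    record Move(String truck, int from, int to, long startTime, double cost) { }

    /** Returns {totalCosts, errorCount} for a plan and the initial truck positions. */
    static double[] evaluate(List<Move> plan, Map<String, Integer> startPositions) {
        Map<String, Integer> position = new HashMap<>(startPositions); // step 1: place all objects
        List<Move> sorted = new ArrayList<>(plan);
        sorted.sort(Comparator.comparingLong(Move::startTime));        // step 2: sort by starting time
        double costs = 0;
        int errors = 0;
        for (Move m : sorted) {                                        // step 3: replay each move
            costs += m.cost();                                         // (a) accumulate the costs
            if (!Objects.equals(position.get(m.truck()), m.from()))    // (c) physical soundness check
                errors++;
            position.put(m.truck(), m.to());                           // (b) update the positions
        }
        return new double[] { costs, errors };
    }

    public static void main(String[] args) {
        List<Move> plan = List.of(
                new Move("T000000", 0, 16, 2_610_000L, 100.0),
                new Move("T000000", 16, 0, 3_650_400L, 100.0));
        double[] result = evaluate(plan, Map.of("T000000", 0));
        System.out.println(result[0] + " cost units, " + (int) result[1] + " errors");
    }
}
```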
The feature values obtained by such a computation can be used by the objective functions.
There is no clear rule on how to design the objective functions – we already showed in Task 67
on page 238 that often, the same optimization criterion can be expressed in different ways.
Hence, we just provide a class Compute which determines some key features of a candidate
solution (plan) x in Listing 57.31 on page 931. This class can evaluate a plan and generate
outputs such as the one in Listing 50.28. The class Compute allows us to process the moves
of a plan x step-by-step and to check, in each step, if new errors occurred or how the total
costs changed. This would also allow for a successive generation of solutions.
Notice that, if you wish to use the evaluation capabilities of this class without imple-
menting your solution in Java, you can still produce plans as presented in Listing 50.27 on
page 633, parse them in with Java, and then apply Compute to the parsed list.
50.4.5 Framework
As already outlined in the previous sections, we provide a complete framework for this
benchmark problem and list the most important source files in Section 57.3 on page 914.
In addition to the classes representing the essential objects involved in the optimization
process (see Section 50.4.2 and Section 50.4.3), we also provide an instance generator which
can create datasets such as those given as examples in the two mentioned sections.
With this instance generator, we create six benchmark instances which differ in the
number of involved objects. Naturally, we would expect that smaller problem instances
(such as case01 which was used as example in our discussion on the benchmark objects)
are easier to solve. Solutions to the larger problem instances should be much harder to
discover. For each of the benchmark sets, we provide a baseline solution. This solution
is not necessarily the best possible solution to the benchmark dataset, but it fulfills all
constraints. As a matter of fact, the best solutions to the benchmark datasets are unknown
and yet to be discovered.
These benchmark datasets can be loaded via a function loadInstance of the class
InstanceLoader. Notice that each of the classes Container, Driver, Location, Order, Truck,
and DistanceMatrix contains a static variable which allows accessing the list of instances of
that class as defined in the benchmark dataset. The class Driver, for example, has a static
variable called DRIVERS which lists all of its instances. Container has a similar variable called
CONTAINERS.
Part VI
Background
Chapter 51
Set Theory
Set theory1 [763, 1169, 2615] is an important part of mathematics. Numerous
other disciplines like algebra, analysis, and topology are based on it. Since set theory is not
the topic of this book but a mere prerequisite, this chapter (and the ones to follow) will
just briefly introduce it. We make heavy use of informal definitions and, in some cases, even
rough simplifications. More information on the topics discussed can be retrieved from the
literature references and, again, from [2938].
Set theory can be divided into naïve set theory2 and axiomatic set theory3 . The first
approach, the naïve set theory, is inconsistent and therefore not considered in this book.
The expression a ∈ A means that the element a is a member of the set A while b ∉ A means
that b is not a member of A. There are three common forms to define sets:
1. With their elements in braces: A = {1, 2, 3} defines a set A containing the three
elements 1, 2, and 3. {1, 1, 2, 3} = {1, 2, 3}, since a set either contains an element or
does not; listing an element multiple times does not change the set.
2. The same set can be specified using logical operators to describe its elements: ∀a ∈
N1 : (a ≥ 1) ∧ (a < 4) ⇔ a ∈ A.
3. A shortcut for the previous form is to denote the logical expression in braces, like
A = {(a ≥ 1) ∧ (a < 4), a ∈ N1 }.
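The three forms correspond naturally to how finite sets are built in most programming languages. A small Java illustration (the class name SetForms is ours): form 1 enumerates the elements, forms 2 and 3 describe them by a predicate.

```java
import java.util.*;
import java.util.stream.*;

public class SetForms {
    public static void main(String[] args) {
        // Form 1: enumerate the elements; duplicates collapse, so {1,1,2,3} = {1,2,3}
        Set<Integer> a1 = new HashSet<>(List.of(1, 1, 2, 3));

        // Forms 2 and 3: describe the elements by a predicate over (a finite part of) N1
        Set<Integer> a2 = IntStream.rangeClosed(1, 100)
                .filter(a -> a >= 1 && a < 4)
                .boxed()
                .collect(Collectors.toSet());

        System.out.println(a1.equals(a2)); // prints "true": both denote {1, 2, 3}
    }
}
```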
The cardinality of a set A is written as |A| and stands for the count of elements in the
set.
1 http://en.wikipedia.org/wiki/Set_theory [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Naive_set_theory [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Axiomatic_set_theory [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Set_%28mathematics%29 [accessed 2007-07-03]
51.2 Special Sets
2. The natural numbers5 N1 without zero include all whole numbers bigger than 0. (N1 =
{1, 2, 3, . . . })
3. The natural numbers including zero (N0 ) comprise all whole numbers bigger than or
equal to 0. (N0 = {0, 1, 2, 3, . . . })
8. C is the set of complex numbers8 . When needed in the context of this book, we
abbreviate the imaginary unit with i. The real and imaginary parts of a complex
number z are denoted as Re z and Im z, respectively.
9. We define B = {0, 1} as the set of values a Boolean variable may take on, where
0 ≡ false and 1 ≡ true.
N1 ⊂ N0 ⊂ Z ⊂ Q ⊂ R ⊂ C (51.1)
N1 ⊂ N0 ⊂ R+ ⊂ R ⊂ C (51.2)
For these numerical sets, special subsets, so-called intervals, can be specified. [1, 5) is a set
which contains all the numbers starting from (including) 1 up to (exclusively) 5. (1, 5], on
the other hand, contains all numbers bigger than 1 and up to inclusively 5. In order to avoid
ambiguities, such sets will always be used in a context where it is clear whether the numbers
in the set are natural or real.
Between two sets A and B, the following subset relations may exist: If all elements of A are
also elements of B, A ⊆ B holds.
(∀a ∈ A ⇒ a ∈ B) ⇔ A ⊆ B (51.3)
Hence, {1, 2, 3} ⊆ {1, 2, 3, 4} and {x, y, z} ⊆ {x, y, z} but not {α, β, γ} ⊆ {β, γ, δ}. If
additionally there is at least one element b in B which is not in A, then A ⊂ B.
Thus, {1, 2, 3} ⊂ {1, 2, 3, 4} but not {x, y, z} ⊂ {x, y, z}. The set relations can be negated by
crossing them out, i. e., A ⊄ B means ¬ (A ⊂ B) and A ⊈ B denotes ¬ (A ⊆ B), respectively.
51.4 Operations on Sets
[Figure 51.1: An illustration of set operations: the union A ∪ B, the intersection A ∩ B, and the differences A \ B and B \ A.]
Let us now define the possible unary and binary operations on sets, some of which are
illustrated in Figure 51.1.
Definition D51.2 (Set Union). The union9 C of two sets A and B is written as A ∪ B
and contains all the objects that are element of at least one of the sets.
C = A ∪ B ⇔ ((c ∈ A) ∨ (c ∈ B) ⇔ (c ∈ C)) (51.5)
A∪B = B∪A (51.6)
A∪∅ = A (51.7)
A∪A = A (51.8)
A ⊆ A∪B (51.9)
Definition D51.3 (Set Intersection). The intersection10 D of two sets A and B, denoted
by A∩B, contains all the objects that are elements of both of the sets. If A∩B = ∅, meaning
that A and B have no elements in common, they are called disjoint.
D = A ∩ B ⇔ ((d ∈ A) ∧ (d ∈ B) ⇔ (d ∈ D)) (51.10)
A∩B = B∩A (51.11)
A∩∅ = ∅ (51.12)
A∩A = A (51.13)
A∩B ⊆ A (51.14)
5 http://en.wikipedia.org/wiki/Natural_numbers [accessed 2008-01-28]
6 http://en.wikipedia.org/wiki/Rational_number [accessed 2008-01-28]
7 http://en.wikipedia.org/wiki/Real_numbers [accessed 2008-01-28]
8 http://en.wikipedia.org/wiki/Complex_number [accessed 2008-01-29]
9 http://en.wikipedia.org/wiki/Union_%28set_theory%29 [accessed 2007-07-03]
10 http://en.wikipedia.org/wiki/Intersection_%28set_theory%29 [accessed 2007-07-03]
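Both operations, as well as the laws in Equations 51.6 to 51.14, can be tried out directly with standard collection classes. A small Java sketch (the helper names union and intersection are ours):

```java
import java.util.*;

public class SetOps {

    /** A ∪ B (Equation 51.5): all elements that are in at least one of the two sets. */
    static <T> Set<T> union(Set<T> a, Set<T> b) {
        Set<T> c = new HashSet<>(a);
        c.addAll(b);
        return c;
    }

    /** A ∩ B (Equation 51.10): all elements that are in both sets. */
    static <T> Set<T> intersection(Set<T> a, Set<T> b) {
        Set<T> d = new HashSet<>(a);
        d.retainAll(b);
        return d;
    }

    public static void main(String[] args) {
        Set<Integer> a = Set.of(1, 2, 3), b = Set.of(3, 4);
        System.out.println(union(a, b));                      // the set {1, 2, 3, 4}, in some order
        System.out.println(intersection(a, b));               // prints "[3]"
        System.out.println(union(a, b).equals(union(b, a)));  // A ∪ B = B ∪ A: prints "true"
    }
}
```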
Definition D51.4 (Set Difference). The difference E of two sets A and B, A\B, contains
the objects that are element of A but not of B.
Definition D51.5 (Set Complement). The complementary set Ā of the set A in a set
B ⊇ A includes all the elements which are in B but not in A:
A ⊆ B ⇒ Ā = B \ A (51.20)
Definition D51.6 (Cartesian Product). The Cartesian product11 P of two sets A and
B, denoted P = A × B is the set of all ordered pairs (a, b) whose first component is an
element from A and the second is an element of B.
P = A × B ⇔ P = {(a, b) : a ∈ A, b ∈ B} (51.21)
Aⁿ = A × A × · · · × A (n times) (51.22)
Definition D51.9 (Power Set). The power set14 P (A) of the set A is the set of all subsets
of A.
∀p : p ∈ P (A) ⇔ p ⊆ A (51.23)
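The power set of a finite set can be computed by starting from {∅} and extending every subset found so far with each element of A in turn. A Java sketch (class and method names ours):

```java
import java.util.*;

public class PowerSet {

    /** P(A): all subsets of A, built by extending each known subset with one new element. */
    static <T> Set<Set<T>> powerSet(Set<T> a) {
        Set<Set<T>> result = new HashSet<>();
        result.add(new HashSet<>());                 // the empty set is a subset of every set
        for (T element : a) {
            Set<Set<T>> extended = new HashSet<>();
            for (Set<T> subset : result) {
                Set<T> larger = new HashSet<>(subset);
                larger.add(element);
                extended.add(larger);
            }
            result.addAll(extended);
        }
        return result;
    }

    public static void main(String[] args) {
        // |P(A)| = 2^|A|: a 3-element set has 8 subsets
        System.out.println(powerSet(Set.of(1, 2, 3)).size()); // prints "8"
    }
}
```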
Definition D51.10 (Type). A type is a set of values that a variable, constant, function,
or similar entity may take on.
We can, for instance, specify the type T = {1, 2, 3}. A variable x which is an instance of
this type then can take on the values 1, 2, or 3.
To each position i in a tuple t, a type Ti is assigned. The element t[i] at a position i must
then be an element of Ti . Unlike sets, tuples may contain the same element more than
once.
In the context of this book, we define tuples with parentheses like (a, b, c) whereas sets
are specified using braces {a, b, c}. Since every item of a tuple may be of a different type,
(Monday, 23, {a, b, c}) is a valid tuple.
Definition D51.12 (Tuple Type). To formalize this relation, we define the tuple type T .
T specifies the basic sets for the elements of the tuples. If a tuple t meets the constraints
imposed on its values by T , we can write t ∈ T which means that the tuple t is an instance
of T .
51.6 Permutations
Let Π(n) be the space of all permutations π for which the
conditions from Definition D51.13 hold.
15 http://en.wikipedia.org/wiki/Tuple [accessed 2007-07-03]
16 http://en.wikipedia.org/wiki/Permutations [accessed 2008-01-31]
51.7 Binary Relations
Some types and possible properties of binary relations are listed below and illustrated in
Figure 51.2. A binary relation can be [923]:
[Figure 51.2: Examples of the types and properties of binary relations between two sets A and B.]
1. Left-total if
∀a ∈ A ∃b ∈ B : R(a, b) (51.29)
2. Surjective19 or right-total if
∀b ∈ B ∃a ∈ A : R(a, b) (51.30)
3. Injective20 if
∀a1 , a2 ∈ A, b ∈ B : R(a1 , b) ∧ R(a2 , b) ⇒ a1 = a2 (51.31)
17 http://en.wikipedia.org/wiki/Binary_relation [accessed 2007-07-03]
18 http://en.wikipedia.org/wiki/Relation_%28mathematics%29 [accessed 2007-07-03]
19 http://en.wikipedia.org/wiki/Surjective [accessed 2007-07-03]
20 http://en.wikipedia.org/wiki/Injective [accessed 2007-07-03]
4. Functional if
∀a ∈ A, b1 , b2 ∈ B : R(a, b1 ) ∧ R(a, b2 ) ⇒ b1 = b2 (51.32)
51.8 Order Relations
All of us have learned the meaning and the importance of order since our earliest years
in school. The alphabet is ordered, the natural numbers are ordered, the marks we can score
in school are ordered, and so on. As a matter of fact, we come into contact with orders even way
before entering school by learning to distinguish things according to their size, for instance.
Order relations23 are binary relations which are used to express the order amongst the
elements of a set A. Since order relations are imposed on single sets, both their domain and
their codomain are the same (A, in this case). For such relations, we can define a number
of additional properties which can be used to characterize and distinguish the different types
of order relations:
1. Antisymmetry:
R(a1 , a2 ) ∧ R(a2 , a1 ) ⇒ a1 = a2 ∀a1 , a2 ∈ A (51.34)
2. Asymmetry:
R(a1 , a2 ) ⇒ ¬R(a2 , a1 ) ∀a1 , a2 ∈ A (51.35)
3. Reflexiveness:
R(a, a) ∀a ∈ A (51.36)
4. Irreflexiveness:
∄a ∈ A : R(a, a) (51.37)
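For a relation over a small finite set, these four properties can be checked mechanically. In the following sketch (class and method names ours), a relation R on {0, . . . , n − 1} is given as a boolean matrix with r[a][b] meaning R(a, b):

```java
public class OrderProps {

    static boolean reflexive(boolean[][] r) {         // Equation 51.36
        for (int a = 0; a < r.length; a++) if (!r[a][a]) return false;
        return true;
    }

    static boolean irreflexive(boolean[][] r) {       // Equation 51.37
        for (int a = 0; a < r.length; a++) if (r[a][a]) return false;
        return true;
    }

    static boolean antisymmetric(boolean[][] r) {     // Equation 51.34
        for (int a = 0; a < r.length; a++)
            for (int b = 0; b < r.length; b++)
                if (a != b && r[a][b] && r[b][a]) return false;
        return true;
    }

    static boolean asymmetric(boolean[][] r) {        // Equation 51.35
        for (int a = 0; a < r.length; a++)
            for (int b = 0; b < r.length; b++)
                if (r[a][b] && r[b][a]) return false;
        return true;
    }

    public static void main(String[] args) {
        int n = 4;
        boolean[][] le = new boolean[n][n], lt = new boolean[n][n];
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++) { le[a][b] = a <= b; lt[a][b] = a < b; }
        System.out.println(reflexive(le) && antisymmetric(le)); // <= : prints "true"
        System.out.println(irreflexive(lt) && asymmetric(lt));  // <  : prints "true"
    }
}
```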
All order relations are transitive24 and either antisymmetric or asymmetric, and either
reflexive or irreflexive:
The ≤ and ≥ operators, for instance, represent non-strict partial orders on the set of the
complex numbers C. Partial orders that correspond to the > and < comparators are called
strict. The Pareto dominance relation introduced in Definition D3.13 on page 65 is another
example for such a strict partial order.
Definition D51.16 (Strict Partial Order). A binary relation R defines a strict (or ir-
reflexive) partial order if it is irreflexive, asymmetric, and transitive.
The real numbers R, for example, are totally ordered whereas on the set of complex
numbers C, only (strict or reflexive) partial (non-total) orders can be defined, since C
extends into two dimensions.
Another important class of relations are equivalence relations26 [2774, 2842] which are
often abbreviated with ≡ or ∼, i. e., a1 ≡ a2 and a1 ∼ a2 mean R(a1 , a2 ) for the equivalence
relation R imposed on the set A and a1 , a2 ∈ A. Unlike order relations, equivalence relations
are symmetric, i. e.,
R(a1 , a2 ) ⇒ R(a2 , a1 ) ∀a1 , a2 ∈ A (51.39)
51.10 Functions
A function maps each element of X to one element in Y . The domain X is the set of possible
input values of f and the codomain Y is the set of its possible outputs. The set of all actual
outputs {f (x) : x ∈ X} is called the range. This distinction between range and codomain can
be made obvious with a small example. The sine function can be defined as a mapping from
the real numbers to the real numbers sin : R → R, making R its codomain. Its actual range,
however, is just the real interval [−1, 1].
25 http://en.wikipedia.org/wiki/Total_order [accessed 2007-07-03]
26 http://en.wikipedia.org/wiki/Equivalence_relation [accessed 2007-07-28]
27 http://en.wikipedia.org/wiki/Equivalence_class [accessed 2007-07-28]
28 http://en.wikipedia.org/wiki/Domain_%28mathematics%29 [accessed 2007-07-03]
51.10.1 Monotonicity
Functions are monotone, i. e., have the property of monotonicity29 , if they preserve orders.
In the following definitions, we assume that two order relations RX and RY are defined on
the domain X and codomain Y of a function f , respectively. These orders are expressed
with the <, ≤, and ≥ operators. In the common case that X = Y or X ⊆ R and Y ⊆ R,
RX = RY usually holds and both correspond to the intuitive order of the real numbers.
51.11 Lists
Definition D51.23 (List). Lists30 are abstract data types which can be regarded as spe-
cial tuples. They are sequences where every item is of the same type.
Unlike our discussions on set theory, the following text about the data structure list is
strictly local to this book and not to be understood as a general mathematical theory. All
the functions and operations defined here are only given in order to allow for a clear and
well-defined notation in the other parts of the book. When specifying population-oriented
optimization algorithms, for instance, we will use lists rather than sets for representing the
populations.
The definitions provided here are not founded upon any related work in particular. We
introduce functions that will add elements to or remove elements from lists, that sort lists,
or search within them. All list functions are without side effects31 , i. e., they do not change
their parameters. Instead, the result of a function processing a list l is a new list l′ which
is the modified version of l, whereas l is left untouched.
We do not assume any specific form of list implementation such as linear arrays or linked
lists32 . Instead, we leave the actual implementation of the methods discussed in the following
completely open and try to specify their features axiomatically.
Like tuples, lists can be defined using parentheses in this book. The single elements of a
list are accessed by their index written in brackets ((a, b, c) [1] = b). The first element of a
list is located at index 0 and the last element has the index n − 1 where n is the number of
elements in the list: n = len((a, b, c)) = 3. The empty list is abbreviated with ().
Definition D51.26 (addItem). The addItem function is a shortcut for inserting one item
at the end of a list:
addItem(l, q) ≡ insertItem(l, len(l) , q) (51.45)
Definition D51.30 (count). The function count(x, l) returns the number of occurrences
of the element x in the list l.
51.11.1 Sorting
Definition D51.33 (Sorted List). A list l is considered as being sorted, if l[i] ≤ l[j] ↔
i ≤ j holds for each i, j ∈ 0..len(l) − 1 according to a natural (or explicitly specified) order
relation defined on the type of the elements of the list. Thus, we can define the predicate
“isSorted” as:
isSorted(l) ⇔ (i ∈ 1..len(l) − 1 ⇒ l[i − 1] ≤ l[i]) (51.52)
The concept of comparator functions has been introduced in Definition D3.18 on page 77.
cmp(u1 , u2 ) returns a negative value if u1 is smaller than u2 , a positive number if u1 is
greater than u2 , and 0 if both are equal. Comparator functions are very versatile: they form
the foundation of the sorting mechanisms of the Java framework [1109, 1110], for instance.
In Global Optimization, they are perfectly suited to represent the Pareto dominance or
prevalence relations discussed in Section 3.3.5 on page 65 and Section 3.5.2. Here, we use
them to express the orders according to which a list can be sorted. Hence, we can redefine
Equation 51.52 to:
isSorted(l, cmp) ⇔ (i ∈ 1..len(l) − 1 ⇒ cmp(l[i − 1], l[i]) ≤ 0) (51.53)
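This predicate translates directly to Java's Comparator interface. In the following sketch (ours), equal neighbors count as sorted, matching the non-strict order of Equation 51.52:

```java
import java.util.*;

public class Sorted {

    /** l is sorted with respect to cmp iff no adjacent pair is out of order. */
    static <T> boolean isSorted(List<T> l, Comparator<T> cmp) {
        for (int i = 1; i < l.size(); i++)
            if (cmp.compare(l.get(i - 1), l.get(i)) > 0)
                return false;
        return true;
    }

    public static void main(String[] args) {
        Comparator<Integer> cmp = Integer::compare;
        System.out.println(isSorted(List.of(1, 2, 2, 5), cmp)); // prints "true"
        System.out.println(isSorted(List.of(3, 1, 2), cmp));    // prints "false"
    }
}
```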
Sorting algorithms33 transform unsorted lists into sorted ones. Here, we will not explicitly
discuss these algorithms and, instead, only specify prototype interfaces to them. Generally,
a list U can be sorted in O(len(U ) log len(U )) time complexity. For concrete examples
of sorting algorithms, see [635, 1557, 2446]. We define the functions sortAsc(U, cmp) and
sortDsc(U, cmp) to sort a list U using a comparator function cmp(u1 , u2 ) in ascending
or descending order, respectively. In other words, their parameter list U is an arbitrary
(possibly unsorted) list and the return value is a list containing the same elements sorted
according to cmp.
S = sortAsc(U, cmp) ⇒ (∀u : count(u, U ) = count(u, S)) ∧ isSorted(S, cmp) (51.54)
The result of S = sortDsc(U, cmp) is sorted reversely:
S = sortDsc(U, cmp) ⇒ (∀u : count(u, U ) = count(u, S)) ∧ isSorted(reverseList(S) , cmp) (51.55)
Sorting according to a specific function f of only one parameter can easily be performed
by building the comparator cmp(u1 , u2 ) ≡ (f (u1 ) − f (u2 )). Thus, we will furthermore
synonymously use the sorting predicate also with unary functions f .
sort(U, f ) ≡ sort(U, cmp(u1 , u2 ) ≡ (f (u1 ) − f (u2 ))) (51.56)
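A sketch of sortAsc, sortDsc, and the unary-function shortcut of Equation 51.56 (the method names follow the text; the implementation is our own). Note that the input list is copied first, so the functions are free of side effects as required above:

```java
import java.util.*;

public class Sorting {

    /** sortAsc(U, cmp): a new list with the same elements, sorted ascending; U is untouched. */
    static <T> List<T> sortAsc(List<T> u, Comparator<T> cmp) {
        List<T> s = new ArrayList<>(u);
        s.sort(cmp);
        return s;
    }

    /** sortDsc(U, cmp): like sortAsc, but in descending order. */
    static <T> List<T> sortDsc(List<T> u, Comparator<T> cmp) {
        return sortAsc(u, cmp.reversed());
    }

    /** Equation 51.56: sorting by a unary function f via cmp(u1, u2) = f(u1) - f(u2). */
    static List<Integer> sortByFunction(List<Integer> u, java.util.function.ToIntFunction<Integer> f) {
        return sortAsc(u, Comparator.comparingInt(f));
    }

    public static void main(String[] args) {
        List<Integer> u = List.of(3, 1, 2);
        System.out.println(sortAsc(u, Integer::compare)); // prints "[1, 2, 3]"
        System.out.println(sortDsc(u, Integer::compare)); // prints "[3, 2, 1]"
        System.out.println(sortByFunction(u, x -> -x));   // prints "[3, 2, 1]"
    }
}
```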
Searching an element u in an unsorted list U means walking through it until either the ele-
ment is found or the end of the whole list has been reached, which corresponds to complexity
O(len(U )). We therefore define the operation searchItemUS(u, U ) which returns either the
index of the searched element u in the list or −1 if u 6∈ U .
searchItemUS(u, U ) = i with U [i] = u, if u ∈ U ; −1 otherwise (51.57)
Searching an element s in a sorted list S usually means performing a fast binary search³⁴
which returns the index of the element if it is contained in S. For the case that s is not contained
in S, we again specify an extension similar to the one used in Java: if s ∉ S, a search
operation for sorted lists returns a negative number indicating the position where the element
could be inserted into the list without violating its order. The function searchItemAS(s, S)
searches in a sorted list with elements in ascending order; searchItemDS(s, S) searches in a
descending sorted list. Searching in a sorted list is done in O(log len(S)) time. For concrete
algorithm examples, again see [635, 1557, 2446].
searchItemAS(s, S) = { i : S[i] = s  if s ∈ S;
                       (−i − 1) : (∀j ≥ 0, j < i ⇒ S[j] ≤ s) ∧ (∀j < len(S), j ≥ i ⇒ S[j] > s)  otherwise }   (51.58)

searchItemDS(s, S) = { i : S[i] = s  if s ∈ S;
                       (−i − 1) : (∀j ≥ 0, j < i ⇒ S[j] ≥ s) ∧ (∀j < len(S), j ≥ i ⇒ S[j] < s)  otherwise }   (51.59)
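The Java-style negative return value of searchItemAS can be sketched in a few lines of Python on top of the standard-library `bisect` module; the function name is ours.

```python
from bisect import bisect_left

def search_item_as(s, lst):
    """Binary search in an ascending sorted list lst.

    Returns the index of s if present; otherwise -(insertion point) - 1,
    mirroring the convention described above (as used by Java).
    """
    i = bisect_left(lst, s)          # leftmost position where s could be inserted
    if i < len(lst) and lst[i] == s:
        return i
    return -i - 1

sorted_list = [2, 3, 5, 7, 11]
print(search_item_as(5, sorted_list))   # 2
print(search_item_as(6, sorted_list))   # -4  (6 would be inserted at index 3)
```

From a negative result r, the insertion point is recovered as −r − 1, so the element can be inserted without violating the list's order.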
We can further define transformations between sets and lists which will be used implicitly
in this book when needed. It should be noted that setToList is not the inverse function of
listToSet.
B = setToList(A) ⇒ (∀a ∈ A ∃i : B[i] = a) ∧ (∀i ∈ 0..len(B) − 1 ⇒ B[i] ∈ A) ∧ len(setToList(A)) = len(A)   (51.61)
Graph theory [791, 2741] is an area of mathematics which investigates the properties
of graphs, relations between graphs, and algorithms applied to graphs.
Definition D52.1 (Graph). A graph G is a tuple consisting of a set of vertices (or nodes)
V and a set of edges (or connections) E which connect the vertices.
We distinguish between finite graphs, where the set of vertices is finite, and infinite
graphs, where this is not the case. The set of edges E defines a binary relation (see Sec-
tion 51.7) on the vertices v ∈ V. If the edges of a graph have no orientation, the graph
is called undirected. Then, E can either be considered as a symmetric relation, i.e.,
(v1, v2) ∈ E ⇔ (v2, v1) ∈ E, or the edges are defined as 2-multisets, i.e., {v1, v2} in-
stead of (v1, v2) and (v2, v1). The edges in directed graphs have an orientation and, in
sketches, are often represented by arrows, whereas undirected edges are usually denoted by
simple lines between the nodes.
A cycle is a path where the start and end vertex are the same, i. e., p0 = plen(p)−1 .
A graph is a weighted graph if values (weights) w are assigned to its edges e ∈ E. The
weight might stand for a distance or cost. The total weight of a path p in such a graph then
evaluates to

w(p) = Σ_{i=0}^{len(p)−1} w(pᵢ)   (52.3)
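A weighted graph and the path-weight sum of Equation 52.3 can be sketched in Python with an adjacency dictionary; the vertex names and weights here are purely illustrative, not taken from the book.

```python
# A weighted directed graph stored as an adjacency dict; the edge weights
# could stand for distances or costs.
edges = {
    ("a", "b"): 4.0,
    ("b", "c"): 1.5,
    ("c", "a"): 2.0,   # closing this edge makes (a, b, c, a) a cycle
}

def path_weight(path):
    """Total weight of a path: the sum of the weights of its edges (Eq. 52.3)."""
    return sum(edges[(path[i], path[i + 1])] for i in range(len(path) - 1))

print(path_weight(["a", "b", "c"]))        # 5.5
print(path_weight(["a", "b", "c", "a"]))   # 7.5; start == end vertex, i.e., a cycle
```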
In this chapter we give a rough introduction to stochastic theory [1438, 1693, 2292], which
subsumes
1. probability theory, the mathematical study of phenomena characterized by random-
ness or uncertainty, and
2. statistics, the art of collecting, analyzing, interpreting, and presenting data.
53.1 Probability
Probability theory is used to determine the likelihood of the occurrence of an event under
ideal mathematical conditions [1481, 1482]. Let us start this short summary with some very
basic definitions.
Definition D53.3 (Sample Space). The set of all possible outcomes (elementary events,
samples) ω of a random experiment is the sample space Ω; thus ∀ω : ω ∈ Ω.
Definition D53.5 (Certain Event). The certain event is the event that will always
occur in each repetition of a random experiment. It is therefore equal to the whole sample
space Ω.
Definition D53.6 (Impossible Event). The impossible event will never occur in any rep-
etition of a random experiment; it is defined as ∅.
Definition D53.7 (Conflicting Events). Two conflicting events A1 and A2 can never
occur together in a random experiment. Therefore, A1 ∩ A2 = ∅.
53.1.2 Combinatorics
For many random experiments, we can use combinatorial⁶ approaches in order
to determine the number of possible outcomes. We therefore want to briefly outline the
mathematical concepts of factorial numbers, combinations, and permutations.⁷
In combinatorial mathematics⁹, we often want to know in how many ways we can arrange
n ∈ N1 elements from a set Ω with M = |Ω| ≥ n elements. We can distinguish between
combinations, where the order of the elements in the arrangement plays no role, and permuta-
tions, where it is important. (a, b, c) and (c, b, a), for instance, denote the same combination
but different permutations of the elements {a, b, c}. We furthermore distinguish between
arrangements where each element of Ω can occur at most once (without repetition) and
arrangements where the same element may occur multiple times (with repetition).
6 http://en.wikipedia.org/wiki/Combinatorics [accessed 2007-07-03]
7 http://en.wikipedia.org/wiki/Combinations_and_permutations [accessed 2007-07-03]
8 http://en.wikipedia.org/wiki/Factorial [accessed 2007-07-03]
9 http://en.wikipedia.org/wiki/Combinatorics [accessed 2008-01-31]
53.1. PROBABILITY 655
53.1.2.1 Combinations
The number of possible combinations10 C (M, n) of n ∈ N1 elements out of a set Ω with
M = |Ω| ≥ n elements without repetition is
C(M, n) = M! / (n!(M − n)!)   (53.5)
        = (M/n)·((M − 1)/(n − 1))·((M − 2)/(n − 2))·…·((M − n + 1)/1)   (53.6)
C(M + 1, n) = C(M, n) + C(M, n − 1)   (53.7)
If the elements of Ω may occur repeatedly in the arrangements, the number of possible
combinations becomes

(M + n − 1)! / (n!(M − 1)!) = C(M + n − 1, n) = C(M + n − 1, M − 1)   (53.8)
53.1.2.2 Permutations
The number of possible permutations (see Section 51.6) P erm(M, n) of n ∈ N1 elements
out of a set Ω with M = |Ω| ≥ n elements without repetition is
Perm(M, n) = (M)ₙ = M! / (M − n)!   (53.9)
If an element from Ω can occur more than once in the arrangements, the number of possible
permutations is
Mⁿ   (53.10)
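Equations 53.5 to 53.10 can be checked directly with Python's standard library; `math.comb` and `math.perm` compute exactly C(M, n) and Perm(M, n).

```python
import math

M, n = 5, 3

# Combinations without repetition, Equation 53.5: C(M, n) = M! / (n! (M-n)!)
c = math.comb(M, n)                       # 10

# Combinations with repetition, Equation 53.8: C(M+n-1, n)
c_rep = math.comb(M + n - 1, n)           # 35

# Permutations without repetition, Equation 53.9: Perm(M, n) = M! / (M-n)!
p = math.perm(M, n)                       # 60

# Permutations with repetition, Equation 53.10: M^n
p_rep = M ** n                            # 125

print(c, c_rep, p, p_rep)
```

For instance, from Ω = {a, b, c, d, e} we can pick C(5, 3) = 10 different three-element combinations but Perm(5, 3) = 60 ordered arrangements.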
Definition D53.9 (Absolute Frequency). The number H(A, n) denoting how often an
event A occurred during n repetitions of a random experiment is its absolute frequency11 .
Definition D53.11 (σ-algebra). A subset S of the power set P (Ω) of the sample space
Ω is called σ-algebra12 , if the following axioms hold:
Ω ∈ S   (53.16)
∅ ∈ S   (53.17)
A ∈ S ⇔ Ā ∈ S   (53.18)
A ∈ S ∧ B ∈ S ⇒ (A ∪ B) ∈ S   (53.19)
Definition D53.13 (Probability). A mapping P which assigns a real number to each
event A in the σ-algebra S on Ω is called a probability measure if and only if the following
(Kolmogorov) axioms hold:
∀A ∈ S ⇒ 0 ≤ P (A) ≤ 1 (53.23)
P (Ω) = 1 (53.24)
∀ pairwise disjoint Aᵢ ∈ S ⇒ P(⋃ᵢ Aᵢ) = Σᵢ P(Aᵢ)   (53.25)
12 http://en.wikipedia.org/wiki/Sigma-algebra [accessed 2007-07-03]
13 http://en.wikipedia.org/wiki/Probability_measure [accessed 2007-07-03]
14 http://en.wikipedia.org/wiki/Kolmogorov_axioms [accessed 2007-07-03]
From these axioms, it can be deduced that:
P(∅) = 0   (53.26)
P(A) = 1 − P(Ā)   (53.27)
P(A ∩ B̄) = P(A) − P(A ∩ B)   (53.28)
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)   (53.29)
Definition D53.16 (Random Variable). The function X which relates the sample space
Ω to the real numbers R is called random variable16 in the probability space (Ω, S, P ).
X : Ω → R   (53.35)
Using such a random variable, we can replace the sample space Ω with the new sample space
ΩX . Furthermore, the σ-algebra S can be replaced with a σ-algebra SX , which consists of
subsets of ΩX instead of Ω. Last but not least, we replace the probability measure P which
relates the ω ∈ Ω to the interval [0, 1] by a new probability measure PX which relates the
real numbers R to this interval.
One example of such a new probability measure would be the probability that a random
variable X takes on a real value smaller than or equal to a value x:
PX (X ≤ x) = P ({ω : ω ∈ Ω ∧ X (ω) ≤ x}) (53.37)
4. The probability that the random variable X takes on values in the interval x₀ ≤ X ≤ x₁
can be computed using the CDF:

P(x₀ ≤ X ≤ x₁) = F_X(x₁) − lim_{h→0} F_X(x₀ − h)   (53.42)

5. The probability that the random variable X takes on the value of a single number
x is:

P(X = x) = F_X(x) − lim_{h→0} F_X(x − h)   (53.43)
We can further distinguish between sample spaces Ω which contain at most countably
many elements and those which are continua. Hence, there are discrete²⁰ and continuous²¹
random variables:
Definition D53.19 (Discrete Random Variable). A random variable X (and its prob-
ability measure P_X(X), respectively) is called discrete if it takes on at most countably
many values. Its cumulative distribution function F_X(X) therefore has the shape of a stair-
case.
We specify the relation between the PMF and its corresponding (discrete) CDF in Equa-
tion 53.45 and Equation 53.46. We further define the probability of an event A in Equa-
tion 53.47 using the PMF.
P_X(X ≤ x) = F_X(x) = Σ_{i=−∞}^{x} f_X(i)   (53.45)
P_X(X = x) = f_X(x) = F_X(x) − F_X(x − 1)   (53.46)
P_X(A) = Σ_{∀x∈A} f_X(x)   (53.47)
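The relations in Equations 53.45 to 53.47 can be illustrated with a small Python sketch for a fair six-sided dice (the dice example is elaborated in Section 53.5); the CDF is simply the running sum of the PMF.

```python
from itertools import accumulate

# PMF of a fair six-sided dice: each outcome 1..6 has probability 1/6.
outcomes = [1, 2, 3, 4, 5, 6]
pmf = {x: 1 / 6 for x in outcomes}

# Equation 53.45: the CDF is the running sum of the PMF.
cdf = dict(zip(outcomes, accumulate(pmf[x] for x in outcomes)))

# Equation 53.46: the PMF is recovered as the difference of adjacent CDF values.
recovered = {x: cdf[x] - cdf.get(x - 1, 0.0) for x in outcomes}

# Equation 53.47: probability of the event A = "an even number is thrown".
p_even = sum(pmf[x] for x in outcomes if x % 2 == 0)

print(round(cdf[3], 6))    # 0.5
print(round(p_even, 6))    # 0.5
```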
The probability density function23 (PDF) is the counterpart of the PMF for continuous
distributions. The PDF does not represent the probabilities of the single values of a random
variable. Since a continuous random variable can take on uncountably many values, each
distinct value itself has the probability 0.
The most primitive features of a random distribution are the minimum, maximum, and the
range of its values. From the statistical perspective, the number of values A[i] in the data
sample A is another primitive parameter.
Definition D53.23 (Count). n = len(A) is the number of elements in the data sample A.
The item number is only defined for data samples, not for random variables, since random
variables represent experiments which can be repeated infinitely often and thus stand for infinitely
many values. The number of items should not be mixed up with the possible number
of different values the random variable may take on. A data sample A may contain the
same value a multiple times. When throwing a dice seven times, one may throw A =
(1, 4, 3, 3, 2, 6, 1), for example24 .
Definition D53.24 (Minimum: Statistics). There exists no smaller element in the sam-
ple data A than the minimum sample ǎ ≡ min(A).
min(A) ≡ ǎ ∈ A : ∀a ∈ A ⇒ ǎ ≤ a (53.49)
Both definitions are fully compliant with Definition D3.2 on page 53, and the maximum can
be specified similarly.
Definition D53.26 (Maximum: Statistics). There exists no larger element in the sam-
ple data A than the maximum sample â ≡ max(A).
max(A) ≡ â ∈ A : ∀a ∈ A ⇒ â ≥ a   (53.51)
Definition D53.27 (Maximum: Stochastic). The value x̂ is the upper boundary of the
values a random variable X may take on (or positive infinity, if X is unbounded).
²⁴ Throwing a dice is discussed extensively as an example for stochastic in Section 53.5 on page 689.
Definition D53.28 (Range: Statistics). The range range(A) of the sample data A is the
difference between the maximum max(A) and the minimum min(A) elements of A and therefore
represents the span covered by the data.
range(A) = â − ǎ = max(A) − min(A) (53.53)
If the expected value EX of a random variable X is known, the following statements can
be derived for the expected values of related random variables as follows:
Y = a + X ⇒ EY = a + EX (53.57)
Z = bX ⇒ EZ = bEX (53.58)
Definition D53.31 (Sum). The sum(A) represents the sum of all elements in a set of data
samples A.
sum(A) = Σ_{i=0}^{n−1} A[i]   (53.59)
Definition D53.32 (Arithmetic Mean). The arithmetic mean²⁶ ā is the sum of all ele-
ments in the sample data A divided by the total number of values.
In the spirit of the limiting frequency method of von Mises [2822], it is an estimate of the
expected value ā ≈ EX of the random variable X that produced the sample data A.

ā = sum(A)/n = (1/n) Σ_{i=0}^{n−1} A[i] = Σ_{v ∈ listToSet(A)} v·h(v, n)   (53.60)
53.2.3.1 Variance
The variance of a discrete random variable X can be computed using Equation 53.62 and
for continuous distributions, Equation 53.63 will hold.
D²X = Σ_{x=−∞}^{∞} f_X(x)(x − EX)² = Σ_{x=−∞}^{∞} x² f_X(x) − [Σ_{x=−∞}^{∞} x f_X(x)]²
    = [Σ_{x=−∞}^{∞} x² f_X(x)] − (EX)²   (53.62)

D²X = ∫_{−∞}^{∞} (x − EX)² f_X(x) dx = ∫_{−∞}^{∞} x² f_X(x) dx − [∫_{−∞}^{∞} x f_X(x) dx]²
    = ∫_{−∞}^{∞} x² f_X(x) dx − (EX)²   (53.63)
If the variance D²X of a random variable X is known, we can derive the variances of some
related random variables as follows:
Y = a + X ⇒ D2 Y = D2 X (53.64)
Z = bX ⇒ D2 Z = b2 D2 X (53.65)
Definition D53.34 (Sum of Squares). The function sumSqrs(A) represents the sum of
the squares of all elements in the data sample A in statistics.
sumSqrs(A) = Σ_{i=0}^{n−1} (A[i])²   (53.66)
Definition D53.36 (Standard Deviation). The standard deviation29 is the square root
of the variance. The standard deviation of a random variable X is abbreviated with DX
and σ, its statistical estimate is s.
DX = √(D²X)   (53.68)
s = √(s²)   (53.69)
The standard deviation is zero for all samples with n ≤ 1.
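The estimates ā, s², and s can be computed directly; the following Python sketch does so for the dice sample used later in Section 53.5 and cross-checks against the standard library's `statistics` module.

```python
import statistics

# The dice sample also used in Section 53.5 of this book.
A = [4, 5, 3, 2, 4, 6, 4, 2, 5, 3]
n = len(A)

mean = sum(A) / n                                 # arithmetic mean, Equation 53.60
s2 = sum((a - mean) ** 2 for a in A) / (n - 1)    # unbiased sample variance estimate
s = s2 ** 0.5                                     # sample standard deviation, Eq. 53.69

print(mean)              # 3.8
print(round(s2, 4))      # matches statistics.variance(A)
assert abs(s2 - statistics.variance(A)) < 1e-12
assert abs(s - statistics.stdev(A)) < 1e-12
```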
53.2.3.3 Covariance
For covariance matrices, Σ[i, j] = Σ[j, i] holds for all i, j ∈ 1..n as well as Σ[i, i] = D2 Xi for
all i ∈ 1..n (and Σ[i, i] = s2 Xi for the estimated covariance matrix).
53.2.4 Moments
Definition D53.42 (Moment). The kth moment32 µ′k (c) about a value c is defined for a
random distribution X as

μ′ₖ(c) = E[(X − c)ᵏ]   (53.85)
It can be specified for discrete (Equation 53.86) and continuous (Equation 53.87) probability
distributions using Equation 53.55 and Equation 53.56 as follows.
μ′ₖ(c) = Σ_{x=−∞}^{∞} f_X(x)(x − c)ᵏ   (53.86)
μ′ₖ(c) = ∫_{−∞}^{∞} f_X(x)(x − c)ᵏ dx   (53.87)
Definition D53.44 (Central Moment). The kth moment about the mean (or central
moment)³³ is the expected value of the difference between elements and their expected
value raised to the kth power.

μₖ = E[(X − EX)ᵏ]   (53.89)
The two other most important moments of random distributions are the skewness γ1 and
the kurtosis γ2 and their estimates G1 and G2 .
Definition D53.47 (Kurtosis). The excess kurtosis35 γ2 is a measure for the sharpness
of a distribution’s peak.
γ₂ = μ_{σ,4} − 3 = μ₄/σ⁴ − 3   (53.93)
A distribution with a high kurtosis has a sharper “peak” and fatter “tails”, while a distri-
bution with a low kurtosis has a more rounded peak with wider “shoulders”. The normal
distribution (see Section 53.4.2) has a zero kurtosis.
For sample data A which represents only a subset of a greater amount of data, the sample
kurtosis can be approximated with the estimator G2 where s is the estimate of the sample’s
standard deviation. The estimator is only defined for samples with at least four elements.
" #
n(n + 1) X A[i] − a 4
n−1
3(n − 1)2
G2 = − (53.94)
(n − 1)(n − 2)(n − 3) i=0 s (n − 2)(n − 3)
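The estimator G₂ from Equation 53.94 is easy to implement; in the following Python sketch (function name ours) we apply it to the dice sample from Section 53.5, which, coming from a flat uniform distribution, should show negative excess kurtosis.

```python
import statistics

def sample_excess_kurtosis(A):
    """Estimator G2 of the excess kurtosis, Equation 53.94 (needs n >= 4)."""
    n = len(A)
    mean = sum(A) / n
    s = statistics.stdev(A)  # sample standard deviation
    term = sum(((a - mean) / s) ** 4 for a in A)
    return (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) * term \
        - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))

# A sample from a uniform dice has no sharp peak, hence negative excess kurtosis.
print(round(sample_excess_kurtosis([4, 5, 3, 2, 4, 6, 4, 2, 5, 3]), 4))
```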
Definition D53.48 (Median). The median med(X) is the value right in the middle of a
sample or distribution, dividing it into two equal halves.
P(X ≤ med(X)) ≥ 1/2  ∧  P(X ≥ med(X)) ≥ 1/2  ∧  P(X ≤ med(X)) ≤ P(X ≥ med(X))   (53.95)
The probability of drawing an element less than med(X) is equal to the probability of draw-
ing an element larger than med(X). We can determine the median med(X) of continuous
34 http://en.wikipedia.org/wiki/Skewness [accessed 2007-07-03]
35 http://en.wikipedia.org/wiki/Kurtosis [accessed 2008-02-01]
and discrete distributions by solving Equation 53.96 and Equation 53.97 respectively.
1/2 = ∫_{−∞}^{med(X)} f_X(x) dx   (53.96)

Σ_{i=−∞}^{med(X)−1} f_X(i) ≤ 1/2 ≤ Σ_{i=med(X)}^{∞} f_X(i)   (53.97)
If a sample A has an odd element count, the median is the element in the middle;
otherwise (in a set with an even element count there exists no single “middle” element), it is the
arithmetic mean of the two middle elements.
Quantiles36 are points taken at regular intervals from a sorted dataset (or a cumulative
distribution function).
They can be regarded as the generalized median, or vice versa, the median is the 2-quantile.
A sorted data sample is divided into q subsets of equal length by the q-quantiles. The cu-
mulative distribution function of a random variable X is divided by the q-quantiles into q
subsets of equal area. The quantiles are the boundaries between the subsets/areas. There-
fore, the kth q-quantile is the value ζ such that the probability that the random variable (or
an element of the data set) will take on a value less than ζ is at most k/q, and the probability
that it will take on a value greater than or equal to ζ is at most (q − k)/q. There exist q − 1
q-quantiles (k spans from 1 to q − 1). The kth q-quantile quantile_q^k(A) of a dataset A can
be computed as:
A_s ≡ sortAsc(A, >)   (53.102)
t = k·n/q   (53.103)
quantile_q^k(A) = { ½(A_s[t] + A_s[t − 1])  if t is an integer;  A_s[⌊t⌋] otherwise }   (53.104)
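Equations 53.102 to 53.104 translate into a few lines of Python (the function name is ours); applied to the dice sample from Section 53.5, the 2-quantile reproduces the median.

```python
import math

def quantile(A, k, q):
    """The k-th q-quantile of data sample A, following Equations 53.102-53.104."""
    a_s = sorted(A)                # Equation 53.102: ascending sorted copy
    n = len(a_s)
    t = k * n / q                  # Equation 53.103
    if t == int(t):                # t integer: average the two neighboring elements
        t = int(t)
        return 0.5 * (a_s[t] + a_s[t - 1])
    return a_s[math.floor(t)]      # otherwise: pick the element at index floor(t)

A = [4, 5, 3, 2, 4, 6, 4, 2, 5, 3]
print(quantile(A, 1, 2))                       # 4.0, the median (2-quantile)
print(quantile(A, 1, 4), quantile(A, 3, 4))    # 3 5, first and third quartile
```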
For some special values of q, the quantiles have been given special names which are listed in
Table 53.1.
36 http://en.wikipedia.org/wiki/Quantiles [accessed 2007-07-03]
Table 53.1: Special names of q-quantiles.

q     name
100   percentiles
10    deciles
9     noniles
5     quintiles
4     quartiles
2     median
Definition D53.50 (Interquartile Range). The interquartile range³⁷ is the range be-
tween the first and the third quartile and is defined as quantile_4^3(X) − quantile_4^1(X).
Definition D53.51 (Mode). The mode38 is the value that most often occurs in a data
sample or is most frequently assumed by a random variable.
There exist unimodal distributions/samples that have one mode value and multimodal dis-
tributions/samples with multiple modes. In [369, 2821] you can find further information on
the relation between the mode, the mean, and the skewness.
53.2.7 Entropy
Definition D53.53 (Differential Entropy). The differential (also called continuous) en-
tropy h(X) is a generalization of the information entropy to continuous probability density
functions f of random variables X. [1699]
h(X) = −∫_{−∞}^{∞} f_X(x) ln f_X(x) dx   (53.107)
The law of large numbers (LLN) combines statistics and probability by showing that if an
event e with the probability P (e) = p is observed in n independent repetitions of a random
experiment, its relative frequency h(e, n) (see Definition D53.10) converges to its probability
p if n becomes larger.
In the following, assume that A is an infinite sequence of samples from identically
distributed and pairwise independent random variables Xᵢ with the (same) expected value
EX. The weak law of large numbers states that the mean ā of the sequence A converges to
a value in (EX − ε, EX + ε) for every positive real number ε ∈ R⁺.
In other words, the weak law of large numbers says that the sample average will converge
to the expected value of the random experiment if the experiment is repeated many times.
According to the strong law of large numbers, the mean ā of the sequence A even con-
verges to the expected value EX of the underlying distribution for infinitely large n:

P(lim_{n→∞} ā = EX) = 1   (53.109)
The law of large numbers implies that the accumulated results of each random experiment
will approximate the underlying distribution function if repeated infinitely (under the con-
dition that there exists an invariable underlying distribution function).
Many random processes follow in their characteristics well-known probability distributions,
some of which we will discuss here. Parts of the information provided in this and the
following section have been obtained from Wikipedia – The Free Encyclopedia [2938].
In this section, we discuss some of the common discrete distributions. Discrete probability
distributions assign probabilities to the elements of a finite (or, at most, countably infinite)
set of discrete events/outcomes of a random experiment.
Table 53.2 contains the characteristics of the discrete uniform distribution. In Fig. 53.1.a
you can find some example uniform probability mass functions, and in Fig. 53.1.b we have
sketched the corresponding cumulative distribution functions.
Fig. 53.1.a: The PMFs of some discrete uniform distributions.
Fig. 53.1.b: The CDFs of some discrete uniform distributions.
The features of the Poisson distribution are listed in Table 53.3, and examples of its PMF
and CDF are illustrated in Fig. 53.2.a and Fig. 53.2.b.
Parameter     Definition
parameters    λ = µt > 0                                                     (53.123)
PMF           P(X = x) = f_X(x) = ((µt)^x / x!) e^{−µt} = (λ^x / x!) e^{−λ}  (53.124)
CDF           P(X ≤ x) = F_X(x) = Γ(⌊x + 1⌋, λ) / ⌊x⌋! = Σ_{i=0}^{⌊x⌋} e^{−λ} λ^i / i!  (53.125)
mean          EX = µt = λ                                                    (53.126)
median        med ≈ ⌊λ + 1/3 − 1/(5λ)⌋                                       (53.127)
mode          mode = ⌊λ⌋                                                     (53.128)
variance      D²X = µt = λ                                                   (53.129)
skewness      γ₁ = λ^{−1/2}                                                  (53.130)
kurtosis      γ₂ = 1/λ                                                       (53.131)
entropy       H(X) = λ(1 − ln λ) + e^{−λ} Σ_{k=0}^{∞} (λ^k ln(k!)) / k!      (53.132)
mgf           M_X(t) = e^{λ(e^t − 1)}                                        (53.133)
char. func.   ϕ_X(t) = e^{λ(e^{it} − 1)}                                     (53.134)
Fig. 53.2.a: The PMFs of some Poisson distributions.
Fig. 53.2.b: The CDFs of some Poisson distributions.
Here we use an infinitesimal version of the small-o notation.⁴⁴ The statement f ∈ o(ξ) ⇒
|f(x)| ≪ |ξ(x)| is normally only valid for x → ∞. In the infinitesimal variant, it holds for
x → 0. Thus, we can state that o(∆t) is much smaller than ∆t. In principle, the above
equations imply that in an infinitely small time span either no or one event occurs, i.e., events
do not arrive simultaneously:
lim_{t→0} P(Xₜ > 1) = 0   (53.136)
53.3.2.2 The Relation between the Poisson Process and the Exponential
Distribution
It is important to know that the (time) distance between two events of the Poisson process is
exponentially distributed (see Section 53.4.3 on page 682). If the expected value of the number
of events arriving per time unit in a Poisson process is EX_pois, then the expected value of
the time between two events is 1/EX_pois. Since this is the expected value EX_exp = 1/EX_pois of the
exponential distribution, its λ_exp-value is λ_exp = 1/EX_exp = EX_pois. Therefore, the
λ_exp-value of the exponential distribution equals the λ_pois-value of the Poisson distribution:
λ_exp = λ_pois = EX_pois. In other words, the time interval between (neighboring) events of the
Poisson process is exponentially distributed with the same λ value as the Poisson process,
as illustrated in Equation 53.137.
44 See Definition D12.1 on page 144 and ?? on page ?? for a detailed elaboration on the small-o notation.
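This relation between the Poisson process and the exponential distribution can be demonstrated by simulation: we draw exponentially distributed inter-arrival times with rate λ and count the events per unit-length time interval, which should then be Poisson distributed with the same λ. The simulation setup (seed, rate, event count) is our own illustrative choice.

```python
import random
import statistics

random.seed(7)   # arbitrary fixed seed for reproducibility
lam = 4.0        # rate of the Poisson process (events per time unit)

# Simulate the process by drawing exponentially distributed inter-arrival
# times; count how many events fall into each complete unit-length interval.
counts, gaps = [], []
t, next_unit, in_unit = 0.0, 1.0, 0
for _ in range(200_000):
    gap = random.expovariate(lam)
    gaps.append(gap)
    t += gap
    while t > next_unit:               # close all unit intervals we crossed
        counts.append(in_unit)
        in_unit, next_unit = 0, next_unit + 1.0
    in_unit += 1                       # the event belongs to the current interval

print(round(statistics.mean(gaps), 3))    # ≈ 1/λ = 0.25, mean time between events
print(round(statistics.mean(counts), 3))  # ≈ λ = 4 events per unit time
```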
53.3.3 Binomial Distribution B (n, p)
The binomial distribution⁴⁵ B(n, p) is the probability distribution that describes the prob-
ability of the possible numbers of successes in n independent experiments with success
probability p each. Such experiments are called Bernoulli experiments or Bernoulli trials. For
n = 1, the binomial distribution is a Bernoulli distribution⁴⁶.
Table 53.4⁴⁷ points out some of the properties of the binomial distribution. A few
examples of PMFs and CDFs of different binomial distributions are given in Fig. 53.3.a
and Fig. 53.3.b.
Parameter     Definition
parameters    n ∈ N0, 0 ≤ p ≤ 1, p ∈ R                                       (53.138)
PMF           P(X = x) = f_X(x) = C(n, x) p^x (1 − p)^{n−x}                  (53.139)
CDF           P(X ≤ x) = F_X(x) = Σ_{i=0}^{⌊x⌋} f_X(i) = I_{1−p}(n − ⌊x⌋, 1 + ⌊x⌋)  (53.140)
mean          EX = np                                                        (53.141)
median        med is one of {⌊np⌋ − 1, ⌊np⌋, ⌊np⌋ + 1}                       (53.142)
mode          mode = ⌊(n + 1)p⌋                                              (53.143)
variance      D²X = np(1 − p)                                                (53.144)
skewness      γ₁ = (1 − 2p) / √(np(1 − p))                                   (53.145)
kurtosis      γ₂ = (1 − 6p(1 − p)) / (np(1 − p))                             (53.146)
entropy       H(X) = ½ ln(2πn e p(1 − p)) + O(1/n)                           (53.147)
mgf           M_X(t) = (1 − p + pe^t)^n                                      (53.148)
char. func.   ϕ_X(t) = (1 − p + pe^{it})^n                                   (53.149)
In case these rules hold, we still need to transform the continuous distribution to a discrete
one. In order to do so, we add 0.5 to the x values, i.e., F_{X,bin}(x) ≈ F_{X,normal}(x + 0.5).
Fig. 53.3.a: The PMFs of some binomial distributions.
Fig. 53.3.b: The CDFs of some binomial distributions.
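Equations 53.139 and 53.140 can be computed directly from the binomial coefficient; the following Python sketch (function names ours) also verifies that the PMF sums to one over all possible success counts.

```python
import math

def binom_pmf(x, n, p):
    """PMF of B(n, p), Equation 53.139."""
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

def binom_cdf(x, n, p):
    """CDF of B(n, p): the PMF summed up to floor(x), Equation 53.140."""
    return sum(binom_pmf(i, n, p) for i in range(math.floor(x) + 1))

n, p = 20, 0.3
print(round(binom_pmf(6, n, p), 4))   # probability mass at the mean EX = np = 6
print(round(binom_cdf(20, n, p), 6))  # 1.0: all outcomes covered
assert abs(sum(binom_pmf(x, n, p) for x in range(n + 1)) - 1) < 1e-12
```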
After discussing the discrete uniform distribution in Section 53.3.1, we now elaborate on
its continuous form⁴⁸. In a uniform distribution, all possible outcomes in a range [a, b], b > a,
have exactly the same probability. The characteristics of this distribution can be found in
Table 53.5. Examples of its probability density function are illustrated in Fig. 53.4.a, whereas
the corresponding cumulative distribution functions are outlined in Fig. 53.4.b.
Parameter     Definition
parameters    a, b ∈ R, a < b                                                (53.152)
PDF           f_X(x) = { 1/(b − a) if x ∈ [a, b];  0 otherwise }             (53.153)
CDF           P(X ≤ x) = F_X(x) = { 0 if x < a;  (x − a)/(b − a) if x ∈ [a, b];  1 otherwise }  (53.154)
mean          EX = (a + b)/2                                                 (53.155)
median        med = (a + b)/2                                                (53.156)
mode          mode = ∅                                                       (53.157)
variance      D²X = (b − a)²/12                                              (53.158)
skewness      γ₁ = 0                                                         (53.159)
kurtosis      γ₂ = −6/5                                                      (53.160)
entropy       h(X) = ln(b − a)                                               (53.161)
mgf           M_X(t) = (e^{tb} − e^{ta}) / (t(b − a))                        (53.162)
char. func.   ϕ_X(t) = (e^{itb} − e^{ita}) / (it(b − a))                     (53.163)
Fig. 53.4.a: The PDFs of some uniform distributions (a = 5, b = 15 and a = 20, b = 25).
Fig. 53.4.b: The CDFs of the same uniform distributions.
Many phenomena in nature, like the size of chicken eggs, noise, errors in measurement, and
so on, can be considered as outcomes of random experiments with properties that
can be approximated by the normal distribution⁴⁹ N(µ, σ²) [3060]. Its probability density
function, shown for some example values in Fig. 53.5.a, is symmetric about the expected value
µ and becomes flatter with rising standard deviation σ.
The cumulative distribution function is outlined for the same example values in Fig. 53.5.b.
Other characteristics of the normal distribution can be found in Table 53.6.
Parameter     Definition
parameters    µ ∈ R, σ ∈ R⁺                                                  (53.164)
PDF           f_X(x) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)}                        (53.165)
CDF           P(X ≤ x) = F_X(x) = (1/(σ√(2π))) ∫_{−∞}^{x} e^{−(z−µ)²/(2σ²)} dz  (53.166)
mean          EX = µ                                                         (53.167)
median        med = µ                                                        (53.168)
mode          mode = µ                                                       (53.169)
variance      D²X = σ²                                                       (53.170)
skewness      γ₁ = 0                                                         (53.171)
kurtosis      γ₂ = 0                                                         (53.172)
entropy       h(X) = ln(σ√(2πe))                                             (53.173)
mgf           M_X(t) = e^{µt + σ²t²/2}                                       (53.174)
char. func.   ϕ_X(t) = e^{iµt − σ²t²/2}                                      (53.175)
For the sake of simplicity, the standard normal distribution N (0, 1) with the CDF Φ(x) is
defined with µ = 0 and σ = 1. Values of this function are listed in tables. You can compute
the CDF of any normal distribution using the one of the standard normal distribution by
applying Equation 53.176.
Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−z²/2} dz   (53.176)

P(X ≤ x) = Φ((x − µ)/σ)   (53.177)
Some values of Φ(x) are listed in Table 53.7. For the sake of saving space by using two
dimensions, we compose the values of x as a sum of a row and column value. If you want to
look up Φ(2.13) for example, you would go to the row which starts with 2.1 and the column
of 0.03, so you’d find Φ(2.13) ≈ 0.9834.
Fig. 53.5.a: The PDFs of some normal distributions.
Fig. 53.5.b: The CDFs of some normal distributions.
The inverse of the cumulative distribution function of the standard normal distribution is
called the “probit” function. It is also often denoted as z-quantile of the standard normal
distribution.
The values of the quantiles of the standard normal distribution can also be looked up in
Table 53.7. For this, the previously discussed process is simply reversed. If we wanted to
find the value z(0.922), we would locate the closest match in the table. In Table 53.7, we find
0.9222, which leads us to x = 1.4 + 0.02. Hence, z(0.922) ≈ 1.42.
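Instead of a printed table, both Φ(x) and the probit function can be evaluated with Python's standard library: `statistics.NormalDist` provides the CDF and its inverse. The sketch below reproduces the two table lookups discussed above and checks Equation 53.177.

```python
from statistics import NormalDist

std = NormalDist(mu=0, sigma=1)

# CDF of the standard normal distribution, i.e., Phi(x) from Equation 53.176:
print(round(std.cdf(2.13), 4))        # 0.9834, as read from Table 53.7

# The probit function (z-quantile) inverts the CDF:
print(round(std.inv_cdf(0.922), 2))   # 1.42

# Equation 53.177: any N(mu, sigma^2) reduces to the standard normal.
X = NormalDist(mu=10, sigma=2)
assert abs(X.cdf(13) - std.cdf((13 - 10) / 2)) < 1e-12
```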
The probability density function (PDF) of the multivariate normal distribution⁵⁰ [2345, 2523,
2664] is given in Equation 53.180 and Equation 53.181 for the general case (where Σ is
the covariance matrix) and in Equation 53.182 in the uncorrelated form. If the distributions,
50 http://en.wikipedia.org/wiki/Multivariate_normal_distribution [accessed 2007-07-03]
additionally to being uncorrelated, also have the same parameters σ and µ, the probability
density function of the multivariate normal distribution can be expressed as it is done in
Equation 53.183.
f_X(x) = (√|Σ⁻¹| / (2π)^{n/2}) e^{−½(x−µ)ᵀΣ⁻¹(x−µ)}   (53.180)
       = (1 / ((2π)^{n/2} |Σ|^{1/2})) e^{−½(x−µ)ᵀΣ⁻¹(x−µ)}   (53.181)

f_X(x) = Π_{i=1}^{n} (1/(√(2π)σᵢ)) e^{−(xᵢ−µᵢ)²/(2σᵢ²)}   (53.182)

f_X(x) = Π_{i=1}^{n} (1/(√(2π)σ)) e^{−(xᵢ−µ)²/(2σ²)} = (1/(2πσ²))^{n/2} e^{−Σ_{i=1}^{n}(xᵢ−µ)²/(2σ²)}   (53.183)
The exponential distribution⁵² exp(λ) [781] is often used if the probabilities of lifetimes of
apparatuses, half-life periods of radioactive elements, or the time between two events in
the Poisson process (see Section 53.3.2.2 on page 673) have to be approximated. Its PDF
is sketched in Fig. 53.6.a for some example values of λ; the corresponding CDFs are
illustrated in Fig. 53.6.b. The most important characteristics of the exponential distribution
can be obtained from Table 53.8.
Parameter     Definition
parameters    λ ∈ R⁺                                                         (53.184)
PDF           f_X(x) = { 0 if x ≤ 0;  λe^{−λx} otherwise }                   (53.185)
CDF           P(X ≤ x) = F_X(x) = { 0 if x ≤ 0;  1 − e^{−λx} otherwise }     (53.186)
mean          EX = 1/λ                                                       (53.187)
median        med = ln 2 / λ                                                 (53.188)
mode          mode = 0                                                       (53.189)
variance      D²X = 1/λ²                                                     (53.190)
skewness      γ₁ = 2                                                         (53.191)
kurtosis      γ₂ = 6                                                         (53.192)
entropy       h(X) = 1 − ln λ                                                (53.193)
mgf           M_X(t) = (1 − t/λ)⁻¹                                           (53.194)
char. func.   ϕ_X(t) = (1 − it/λ)⁻¹                                          (53.195)
Fig. 53.6.a: The PDFs of some exponential distributions (λ ∈ {0.3, 0.5, 1, 2, 3}).
Fig. 53.6.b: The CDFs of the same exponential distributions.
In Table 53.9⁵⁴, the characteristic parameters of the χ² distribution are outlined. A few
examples of the PDF and CDF of the χ² distribution are illustrated in Fig. 53.7.a and
Fig. 53.7.b.
Table 53.10 provides some selected values of the χ2 distribution. The headline of the
table contains results of the cumulative distribution function FX (x) of a χ2 distribution
with n degrees of freedom (values in the first column). The cells now denote the x values
that belong to these (n, F_X(x)) combinations.
Parameter     Definition
parameters    n ∈ R⁺, n > 0                                                  (53.196)
PDF           f_X(x) = { 0 if x ≤ 0;  (2^{−n/2}/Γ(n/2)) x^{n/2−1} e^{−x/2} otherwise }  (53.197)
CDF           P(X ≤ x) = F_X(x) = γ(n/2, x/2) / Γ(n/2) = P_γ(n/2, x/2)       (53.198)
mean          EX = n                                                         (53.199)
median        med ≈ n − 2/3                                                  (53.200)
mode          mode = n − 2 if n ≥ 2                                          (53.201)
variance      D²X = 2n                                                       (53.202)
skewness      γ₁ = √(8/n)                                                    (53.203)
kurtosis      γ₂ = 12/n                                                      (53.204)
entropy       h(X) = n/2 + ln(2Γ(n/2)) + (1 − n/2)ψ(n/2)                     (53.205)
mgf           M_X(t) = (1 − 2t)^{−n/2} for 2t < 1                            (53.206)
char. func.   ϕ_X(t) = (1 − 2it)^{−n/2}                                      (53.207)
Fig. 53.7.a: The PDFs of some χ2 distributions.
Fig. 53.7.b: The CDFs of some χ2 distributions.
The Student’s t-distribution⁵⁵ is based on the insight that the mean of a normally dis-
tributed feature of a sample is no longer normally distributed if the variance is unknown
and needs to be estimated from the data samples [929, 1112, 1113]. It was designed by
Gosset [1112], who published it under the pseudonym Student.
The parameter n of the distribution denotes the degrees of freedom of the distribution.
If n approaches infinity, the t-distribution approaches the standard normal distribution.
The characteristic properties of Student’s t-distribution are outlined in Table 53.11⁵⁶,
and examples of its PDF and CDF are illustrated in Fig. 53.8.a and Fig. 53.8.b.
Table 53.12 provides some selected values for the quantiles t1−α,n of the t-distribution
(one-sided confidence intervals, see Section 53.6.3 on page 696). The headline of the table
contains results of the cumulative distribution function FX (x) of a Student’s t-distribution
with n degrees of freedom (values in the first column). The cells now denote the x values
that belong to these (n, FX (x)) combinations.
Parameter     Definition
parameters    n ∈ R⁺, n > 0                                                  (53.208)
PDF           f_X(x) = (Γ((n+1)/2) / (√(nπ) Γ(n/2))) (1 + x²/n)^{−(n+1)/2}   (53.209)
CDF           P(X ≤ x) = F_X(x) = 1/2 + x Γ((n+1)/2) · ₂F₁(1/2, (n+1)/2; 3/2; −x²/n) / (√(nπ) Γ(n/2))  (53.210)
mean          EX = 0                                                         (53.211)
median        med = 0                                                        (53.212)
mode          mode = 0                                                       (53.213)
variance      D²X = n/(n − 2) for n > 2, otherwise undefined                 (53.214)
skewness      γ₁ = 0 for n > 3                                               (53.215)
kurtosis      γ₂ = 6/(n − 4) for n > 4                                       (53.216)
entropy       h(X) = ((n+1)/2)[ψ((n+1)/2) − ψ(n/2)] + ln(√n B(n/2, 1/2))     (53.217)
mgf           undefined                                                      (53.218)
Fig. 53.8.a: The PDFs of some Student’s t-distributions
Fig. 53.8.b: The CDFs of some Student’s t-distributions
Let us now discuss the different parameters of a random variable using the example of throwing
a dice. On a dice, the numbers one to six are written, and the result of throwing it is the
number written on the side facing upwards. If a dice is ideal, the numbers one to six will
show up with exactly the same probability, 1/6. The set of all possible outcomes of throwing
a dice Ω is thus

Ω = {1, 2, 3, 4, 5, 6}   (53.219)

We define a random variable X : Ω → R that assigns real numbers to the possible outcomes
of throwing the dice in a way that the value of X matches the number on the dice:

X : Ω → {1, 2, 3, 4, 5, 6}   (53.220)
690 53 STOCHASTIC THEORY AND STATISTICS
X is obviously a uniformly distributed discrete random variable (see Section 53.3.1 on
page 668) that can take on six states. We can now define its probability mass function (PMF)
and the corresponding cumulative distribution function (CDF) as follows (see also Figure 53.9):
FX(x) = P(X ≤ x) = { 0 if x < 1;  ⌊x⌋/6 if 1 ≤ x < 6;  1 otherwise } (53.221)

fX(x) = P(X = x) = { 1/6 if x ∈ {1, 2, .., 6};  0 otherwise } (53.222)
We now can discuss the statistical parameters of this experiment. This is a good opportunity
to compare the “real” parameters and their estimates.

[Figure 53.9: the PMF fX(x) and CDF FX(x) of the die-throwing experiment.]

We therefore assume that the die was thrown ten times (n = 10) in an experiment. The
following numbers have been thrown (as illustrated in Figure 53.10):
A = {4, 5, 3, 2, 4, 6, 4, 2, 5, 3} (53.223)
Table 53.13 outlines how the parameters of the random variable are computed. The real
values of the parameters are defined using the PMF or CDF functions, while the estimations
are based on the sample data obtained from an actual experiment.
53.6. ESTIMATION THEORY 691
Figure 53.10: The numbers thrown in the dice example
As you can see, the estimations of the parameters sometimes differ significantly from their
true values. More information about estimation can be found in the following section.
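This comparison can be reproduced in a few lines of Java, the language used in the implementation part of this book. The snippet below is an illustrative sketch, not part of the book's source package: it computes the arithmetic mean and the sample variance of the sample A from Equation 53.223 and contrasts them with the true parameters EX = 3.5 and D²X = 35/12 of an ideal die.

```java
import java.util.Arrays;

public final class DiceEstimation {

  /** arithmetic mean: an estimator for the expected value */
  public static double mean(int[] a) {
    return Arrays.stream(a).sum() / (double) a.length;
  }

  /** unbiased sample variance: an estimator for the variance */
  public static double sampleVariance(int[] a) {
    double m = mean(a);
    double s = 0.0;
    for (int x : a) {
      s += (x - m) * (x - m);
    }
    return s / (a.length - 1);
  }

  public static void main(String[] args) {
    int[] A = { 4, 5, 3, 2, 4, 6, 4, 2, 5, 3 }; // the throws from Equation 53.223

    // true parameters of the uniform distribution on {1, .., 6}
    double trueMean = 3.5;
    double trueVariance = 35.0 / 12.0; // about 2.92

    System.out.printf("mean:     estimate %.2f, true %.2f%n", mean(A), trueMean);
    System.out.printf("variance: estimate %.2f, true %.2f%n", sampleVariance(A), trueVariance);
  }
}
```

For the sample above, the mean estimate is 3.8 and the sample variance is about 1.73, illustrating how much estimates can deviate from the true values for small n.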
53.6.1 Introduction
Estimation theory is the science of approximating the values of parameters based on measurements
or otherwise obtained sample data [1506, 2203, 2299, 2494, 2782]. The focus of
this branch of statistics is to find good estimators which approximate the real values of
the parameters as closely as possible.
692 53 STOCHASTIC THEORY AND STATISTICS
Definition D53.54 (Estimator). An estimator θ̃ is a rule (most often a mathematical
function) that takes a set of sample data A as input and returns an estimation of one
parameter θ of the random distribution of the process sampled with this data set.
We have already discussed some estimators in Section 53.2 – the arithmetic mean of a sample
data set (see Definition D53.32 on page 661), for example, is an estimator for the expected
value (see Definition D53.30 on page 661), and in Equation 53.67 on page 662 we introduced
an estimator for the sample variance. Obviously, an estimator θ̃ is the better, the closer its
results (the estimates) come to the real values of the parameter θ.
Definition D53.56 ((Estimation) Error). The absolute (estimation) error ε is the difference
between the value returned by a point estimator θ̃ of a parameter θ for a certain
input A and the parameter's real value. Notice that the error ε can be zero, positive, or negative.

εA(θ̃) = θ̃(A) − θ (53.234)
In the following, we will most often not explicitly refer to the data sample A as the basis of the
estimation θ̃ anymore. We assume that it is implicitly clear that estimations are usually
based on such samples and that subscripts like the A in εA in Equation 53.234 are not
needed.
Definition D53.57 (Bias). The bias Bias(θ̃) of an estimator θ̃ is the expected value of
the difference between the estimate and the real value.

Bias(θ̃) = E[θ̃ − θ] = E[ε(θ̃)] (53.235)
Definition D53.59 (Mean Square Error). The mean square error MSE(θ̃) of an estimator
θ̃ is the expected value of the square of the estimation error ε. It is also the sum of
the variance of the estimator and the square of its bias.

MSE(θ̃) = E[(θ̃ − θ)²] = E[ε(θ̃)²] (53.237)

MSE(θ̃) = D²(θ̃) + (Bias(θ̃))² (53.238)
57 http://en.wikipedia.org/wiki/Estimator [accessed 2007-07-03], http://mathworld.wolfram.
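The decomposition in Equation 53.238 can be checked numerically. The following Java sketch (an illustration added here, not part of the book's source package) estimates the bias and the mean square error of the biased variance estimator, which divides by n instead of n − 1, on samples of size n = 5 drawn from N(0, 1). Theory predicts Bias = −σ²/n = −0.2 and MSE = 2(n−1)σ⁴/n² + Bias² = 0.32 + 0.04 = 0.36.

```java
import java.util.Random;

public final class MseDecomposition {

  /** the biased variance estimator which divides by n instead of n - 1 */
  public static double biasedVariance(double[] x) {
    double m = 0.0;
    for (double v : x) {
      m += v;
    }
    m /= x.length;
    double s = 0.0;
    for (double v : x) {
      s += (v - m) * (v - m);
    }
    return s / x.length;
  }

  /** Monte Carlo estimates of the bias and the MSE of biasedVariance */
  public static double[] biasAndMse(int n, int trials, long seed) {
    Random r = new Random(seed);
    double sumError = 0.0, sumSquaredError = 0.0;
    double trueVariance = 1.0; // we sample from N(0, 1)
    for (int t = 0; t < trials; t++) {
      double[] sample = new double[n];
      for (int i = 0; i < n; i++) {
        sample[i] = r.nextGaussian();
      }
      double error = biasedVariance(sample) - trueVariance; // the error epsilon
      sumError += error;
      sumSquaredError += error * error;
    }
    return new double[] { sumError / trials, sumSquaredError / trials };
  }

  public static void main(String[] args) {
    double[] bm = biasAndMse(5, 200000, 42L);
    System.out.printf("bias about %.3f (theory -0.2), MSE about %.3f (theory 0.36)%n",
        bm[0], bm[1]);
  }
}
```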
Definition D53.60 (Likelihood Function). The likelihood function “L” returns a value
that is proportional to the probability of a postulated underlying law or probability distribution
ϕ, given an observed outcome (denoted as the vector y).

L[ϕ|y] ∝ P(y|ϕ) (53.240)

Notice that “L” does not necessarily represent a probability density/mass function, and its
integral does not necessarily equal 1. In many sources, “L” is defined in dependence on
a parameter θ instead of the function ϕ. We prefer the latter notation since it is a more
general superset of the former.
The xi are known inputs or parameters of an unknown process defined by the function
ϕ : R ↦ R. By observing the corresponding outputs of the process, we have obtained the yi
values. During our observations, we make the measurement errors ηi.

Eηi = 0 ∀i ∈ 0..n−1 (53.243)
ηi ∼ N(0, σ²), 0 < σ < +∞ (53.244)
cov(ηi, ηj) = 0 ∀i ≠ j; i, j ∈ 0..n−1 (53.245)
1. The expected values of the ηi in Equation 53.243 are all zero. Our measurement device
thus gives us, on average, unbiased results. If the expected value of the ηi were not zero, we
could simply recalibrate our (imaginary) measurement equipment to subtract
it from all measurements and would obtain unbiased observations.
2. Furthermore, Equation 53.244 states that the ηi are normally distributed around the
zero point with an unknown, nonzero variance σ². Supposing measurement errors to
be normally distributed is quite common and a reasonably good approximation in most
60 http://en.wikipedia.org/wiki/Likelihood [accessed 2007-07-03]
61 http://en.wikipedia.org/wiki/Measurement_error [accessed 2007-07-03]
cases. The white noise in the transmission of signals, for example, is often modeled with
Gaussian distributed amplitudes. This second assumption includes, of course, the
first one: being normally distributed with N(µ = 0, σ²) implies a zero expected value
of the error.
3. With Equation 53.245, we assume that the errors ηi of the single measurements are
stochastically independent. If there existed a connection between them, it would be
part of the underlying physical law ϕ and could be incorporated in our measurement
device and again be subtracted.
Finding an f⋆ which maximizes the likelihood, however, is equal to finding an f⋆ which minimizes
the sum of the squares of the ε-values.

f⋆ ∈ F : ∑_{i=0}^{n−1} (εi(f⋆))² = min∀f∈F ∑_{i=0}^{n−1} (εi(f))² (53.254)
According to Equation 53.247, we can now substitute the εi-values with the difference between
the observed outcomes yi and the estimates f(xi).

∑_{i=0}^{n−1} (εi(f))² = ∑_{i=0}^{n−1} (yi − f(xi))² (53.255)
Under the particular assumption of uncorrelated error terms normally distributed around
zero, an MLE minimizes Equation 53.256.

f⋆ ∈ F : ∑_{i=0}^{n−1} (yi − f⋆(xi))² = min∀f∈F ∑_{i=0}^{n−1} (yi − f(xi))² (53.256)
Minimizing the sum of the squared differences between the observed yi and the estimates f(xi)
also minimizes their mean, so with this we have also shown that the estimator which minimizes
the mean square error MSE (see Definition D53.59) is the best estimator according to the
likelihood of the produced outcomes.

f⋆ ∈ F : (1/n) ∑_{i=0}^{n−1} (yi − f⋆(xi))² = min∀f∈F (1/n) ∑_{i=0}^{n−1} (yi − f(xi))² (53.257)

f⋆ ∈ F : MSE(f⋆) = min∀f∈F MSE(f) (53.258)
The term (yi − f(xi))² is often justified by the statement that large deviations of f from
the y-values are punished harder than small ones. The correct reason why we minimize
the square error, however, is that we thereby maximize the likelihood of the resulting estimator.
64 http://en.wikipedia.org/wiki/Maximum_likelihood [accessed 2007-07-03]
At this point, one should also notice that the xi could be replaced with vectors
xi ∈ Rm without any further implications or modifications of the equations.
In most practical cases, the set F of possible functions is closely circumscribed. It usually
contains only one type of parameterized function, so we only have to determine the unknown
parameters in order to find f⋆. Let us consider a set of linear functions as an example. If we
want to find estimators of the form F = {f(x) = ax + b : a, b ∈ R}, we will minimize
Equation 53.259 by determining the best possible values for a and b.
MSE(f(x)|a, b) = (1/n) ∑_{i=0}^{n−1} (axi + b − yi)² (53.259)
If we could find a perfect estimator f⋆p and our data were free of any measurement
error, all terms of the sum would become zero. For n > 2, this perfect estimator would be the
solution of the over-determined system of linear equations illustrated in Equation 53.260.

0 = ax1 + b − y1
0 = ax2 + b − y2
...               (53.260)
0 = axn + b − yn
Since it is normally not possible to obtain a perfect estimator because there are measurement
errors or other uncertainties like unknown dependencies, the system in Equation 53.260 often
cannot be solved but only minimized.
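For this linear model, the minimization has a closed-form solution via the well-known normal equations: the slope is a = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² and the intercept is b = ȳ − a·x̄. The Java sketch below (an illustrative addition, not taken from the book's source package) computes these least-square estimates:

```java
public final class LeastSquaresLine {

  /** returns {a, b} minimizing (1/n) * sum of (a*x[i] + b - y[i])^2 */
  public static double[] fit(double[] x, double[] y) {
    int n = x.length;
    double mx = 0.0, my = 0.0;
    for (int i = 0; i < n; i++) {
      mx += x[i];
      my += y[i];
    }
    mx /= n;
    my /= n;
    double sxy = 0.0, sxx = 0.0;
    for (int i = 0; i < n; i++) {
      sxy += (x[i] - mx) * (y[i] - my);
      sxx += (x[i] - mx) * (x[i] - mx);
    }
    double a = sxy / sxx;   // slope
    double b = my - a * mx; // intercept
    return new double[] { a, b };
  }

  public static void main(String[] args) {
    // noise-free data on the line y = 2x + 1: the "perfect estimator" case,
    // in which every term of the system in Equation 53.260 becomes zero
    double[] x = { 0, 1, 2, 3 };
    double[] y = { 1, 3, 5, 7 };
    double[] ab = fit(x, y);
    System.out.printf("a = %.2f, b = %.2f%n", ab[0], ab[1]);
  }
}
```

With noisy measurements, the same two formulas still yield the a and b that minimize the mean square error, even though no line passes through all points exactly.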
The Gauss-Markov Theorem defines BLUEs (best linear unbiased estimators) according
to the facts just discussed:
Definition D53.62 (BLUE). In a linear model in which the measurement errors εi are
uncorrelated and are all normally distributed with an expected value of zero and the same
variance, the best linear unbiased estimators (BLUE) of the (unknown) coefficients are the
least-square estimators [2184].
Hence, the same three assumptions (Equation 53.243, Equation 53.244, and Equation 53.245)
hold for the best linear unbiased estimator as for the maximum likelihood estimator.
There is a very simple principle in statistics that always holds: all estimates may as well
be wrong. There is no guarantee whatsoever that we have estimated a parameter of an
underlying distribution correctly, regardless of how many samples we have analyzed. However, if
we can assume or know the underlying distribution of the process which has been sampled,
we can compute certain intervals which include the real value of the estimated parameter
with a certain probability. Unlike point estimators, which approximate a parameter of a data
sample with a single value, confidence intervals (CIs) are estimations that give
upper and lower boundaries between which the true value of the parameter will be located with a
certain, predefined probability [505, 674, 931].
The advantage of confidence intervals is that we can directly derive the significance of the
data samples from them: the larger the intervals are, the less reliable is the sample. The
narrower the confidence intervals are for a given high confidence probability, the more profound, i. e.,
significant, the conclusions drawn from them will be.
A = { 120, 121, 119, 116, 115, 122, 121, 123, 122, 120,
      119, 122, 121, 120, 119, 121, 123, 117, 118, 121 }   (53.261)
n = len(A) = 20 (53.262)
From these measurements, we can determine the arithmetic mean a and the sample variance
s2 according to Equation 53.60 on page 661 and Equation 53.67 on page 662:
a = (1/n) ∑_{i=0}^{n−1} A[i] = 2400/20 = 120 (53.263)

s² = (1/(n−1)) ∑_{i=0}^{n−1} (A[i] − a)² = 92/19 (53.264)
The question that arises now is whether the mean of 120 is significant, i. e., whether it likely
approximates the expected value of the egg weight, or whether the data sample was too small to
be representative. Furthermore, we would like to know in which interval the expected value
of the egg weights will likely be located. Here, confidence intervals come into play.
First, we need to find out the underlying distribution of the random variable
that produced A as sample output. In the case of chicken eggs, we can safely assume that
it is the normal distribution discussed in Section 53.4.2 on page 678. With that, we can
calculate an interval which includes the unknown parameter µ (i. e., the real expected value)
with a confidence probability of γ. γ = 1 − α is the so-called confidence coefficient, and α is
the probability that the real value of the estimated parameter lies outside the confidence
interval.
Let us compute the interval including the expected value µ of the chicken egg weights
with a probability of γ = 1 − α = 95%. Thus, α = 0.05. We therefore have to pick the right
formula from Section 53.6.3.1 on the following page (here it is Equation 53.277 on the next
page) and substitute in the proper values:
µγ ∈ a ± t1−α/2, n−1 · s/√n (53.265)

µ95% ∈ 120 ± t0.975,19 · √(92/19)/√20 ≈ 120 ± 1.03 (53.266)
As you can see, the higher the confidence probability we specify, the larger the
interval in which the parameter is contained becomes. We can be 99% sure that the expected
value of the egg weights is somewhere between 118.56 and 121.44. If we narrow the interval down
to [119.13, 120.87], we can only be 90% confident that the real expected value falls into it,
based on the data samples which we have gathered.
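The interval computation for the egg-weight example can be carried out in a few lines of Java. The sketch below is an illustrative addition: the t quantiles are hard-coded values read from a standard t-table, so the last digits may differ slightly from the intervals quoted above depending on the table used.

```java
public final class EggConfidenceInterval {

  /** two-sided CI for the expected value: a +/- t * s / sqrt(n), Equation 53.277 */
  public static double[] interval(double mean, double s2, int n, double tQuantile) {
    double half = tQuantile * Math.sqrt(s2) / Math.sqrt(n);
    return new double[] { mean - half, mean + half };
  }

  public static void main(String[] args) {
    double mean = 120.0;     // arithmetic mean, Equation 53.263
    double s2 = 92.0 / 19.0; // sample variance, Equation 53.264
    int n = 20;

    // t quantiles for 19 degrees of freedom, taken from a standard t-table
    double t95 = 2.093; // t_{0.975, 19}
    double t99 = 2.861; // t_{0.995, 19}

    double[] ci95 = interval(mean, s2, n, t95);
    double[] ci99 = interval(mean, s2, n, t99);
    System.out.printf("95%% CI: [%.2f, %.2f]%n", ci95[0], ci95[1]);
    System.out.printf("99%% CI: [%.2f, %.2f]%n", ci99[0], ci99[1]);
  }
}
```

Raising the confidence level only changes the quantile, which is why the 99% interval is wider than the 95% one for the same data.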
53.6.3.1.1.1 With Known Variance σ²  If the exact variance σ² of the distribution
underlying our data samples is known, and we estimate the expected value µ by
the arithmetic mean a according to Equation 53.60 on page 661, the two-sided confidence
interval (of probability γ) for the expected value of the normal distribution is:

µγ ∈ a ± z(1 − α/2) · σ/√n (53.276)

Here, z(y) ≡ probit(y) ≡ Φ⁻¹(y) is the y-quantile of the standard normal distribution
(see Section 53.4.2.2 on page 680), which can, for example, be looked up in Table 53.7.
53.6.3.1.1.2 With Estimated Sample Variance s²  Often, the true variance σ² of an underlying
distribution is not known and instead estimated with the sample variance s² according
to Equation 53.67 on page 662. The two-sided confidence interval (of probability γ) for
the expected value can then be computed using the arithmetic mean a, the estimate
s = √s² of the standard deviation of the sample, and the t1−α/2, n−1 quantile of Student’s
t-distribution, which can be looked up in Table 53.12 on page 689.

µγ ∈ a ± t1−α/2, n−1 · s/√n (53.277)
53.7. STATISTICAL TESTS 699
53.6.3.1.2 Variance of a Normal Distribution
The two-sided confidence interval (of probability γ) for the variance of a normal distribution
can be computed using the sample variance s² and the χ²(p, k)-quantiles of the χ² distribution,
which can be looked up in Table 53.10 on page 686.

σ²γ ∈ [ (n−1)s² / χ²(1−α/2, n−1) , (n−1)s² / χ²(α/2, n−1) ] (53.278)
53.7.1 Introduction
With statistical tests [674, 1160, 1202, 1716, 2476, 2490], it is possible to find out whether
an alternative hypothesis H1 about the distribution(s) underlying a set of measured data A is
likely to be true. This is done by showing that the sampled data would very unlikely have
occurred if the opposite hypothesis, the null hypothesis H0, were true.
Neyman and Pearson [2030, 2031] distinguish two classes of errors that can be made when
performing hypothesis tests:
Definition D53.64 (Type 1 Error). A type 1 error (α error, false positive)
is the rejection of a correct null hypothesis H0, i. e., the acceptance of a wrong alternative
hypothesis H1. Type 1 errors are made with probability α.
Definition D53.65 (Type 2 Error). A type 2 error (β error, false negative)
is the acceptance of a wrong null hypothesis H0, i. e., the rejection of a correct alternative
hypothesis H1. Type 2 errors are made with probability β.
A few basic principles for testing should be mentioned before going more into detail:
1. The more samples we have, the better the quality and significance of the conclusions
that we can draw by testing. An arithmetic mean runtime of 7s is certainly more
significant when derived from 1000 runs of a certain algorithm than from the
sample set A = {9s, 5s}.
2. The more assumptions we can make about the sampled probability distribution,
the more powerful the available tests will be.
3. Wrong assumptions, incorrectly performed measurements, or other misconduct will nullify
all results and efforts put into testing.
In the following, we will discuss multiple methods for hypothesis testing. We can, for
instance, distinguish between tests based on paired samples and those for independent pop-
ulations. In Table 53.14, we have illustrated an example for the former, where pairs of
elements (a, b) are drawn from two different populations. Table 53.15 contains two independent
samples a and b with different numbers of elements (na = 6 ≠ nb = 8).
This example data set consists of n = na = nb = 15 tuples (a, b). The leftmost column
is the tuple index, the columns a and b contain the tuple values, and d is the inner-tuple
difference b − a. The three columns to the right contain intermediate values which will later
be used in the example calculations accompanying the descriptions of the tests.
The arithmetic mean of sample a is a = 5.8, and that of sample b is b ≈ 7.67. The
medians are med(a) = 6 and med(b) = 7. Although it looks as if a and b were different and
the values of a seem to be lower than those of sample b, we cannot confirm this
without a statistical test.
Example E53.9 (Example for Unpaired Data Samples).
Unlike the paired data in the previous example, Table 53.15 contains two unpaired samples a and
b consisting of na = 6 and nb = 8 elements, respectively. We again provide the item index
in the leftmost column, followed by the elements of a and b in the next two columns. In the
two columns to the right, again the ranks are given.
The arithmetic mean of sample a is a = 20/3 ≈ 3.33, and that of sample b is b = 5.375.
The medians are med(a) = 3 and med(b) = 5.5. Again, it looks like sample a contains
the smaller values. Whether this is actually the case, and with which error probability this
assumption holds, we will have to check with tests.
Part VII
Implementation
Chapter 54
Introduction
In this part of the book, we try to provide straightforward implementations of some of the
algorithms discussed so far. The latest versions of the sources of the following programs,
including documentation, are available at http://goa-taa.cvs.sourceforge.net/viewvc/goa-taa/Book/implementation/java/sources.zip [accessed 2010-10-02]. This
URL directly obtains the sources, binaries, and documentation from the version management
system in which the complete book is located (see page 4 for details).
In this part of the book, we will not provide much discussion. The theoretical aspects
of optimization and the algorithm structures have already been outlined in length in the
previous parts. Here, we will just give the code with some documentation (with clickable
links) which glues the implementation to these aforementioned theoretical discussions.
The implementations and program sources given here have been developed on the fly
during the course Practical Optimization Algorithm Design given by me at the School of Software
Engineering in the Suzhou Institute of Advanced Studies of the University of Science and
Technology of China (USTC) in the winter semester 2010. The goal of this course was
to solve optimization problems interactively together with my students. Some of the code
presented here is the result of my students' suggestions and of a discussion which I
tried to steer coarsely but not to control. I would thus like to thank my students for their
participation and good suggestions.
[Table 54.1 lists the hardware (4 GiB RAM; see http://www.toshiba.com/ [accessed 2010-09-30]) and software configuration.]
Table 54.1: The precise developer environment used for implementing and testing.
We use the Java programming language for the implementation, mainly because it is more
or less simple and because it is the language I am more or less fluent in. In Table 54.1, we
provide the exact description of the developer environment and testing machine used. This
may serve as a reference for verifying our implementation.
The code listed in this book is not complete. We merely provide those classes which
include the essential algorithms and skip most of the basic classes or helper classes. These
are available in the source code archive published together with this book. Here, they would
just waste space without contributing much to the understanding of the algorithms.
It is not clear how the Java language will develop, how it will be supported, or whether
the behavior of Java virtual machines will remain the same as in the past.
Hence, we cannot guarantee the functionality of our examples in newer or older versions
of Java or in different execution environments. All the code in the following sections
is put under the GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999),
which you can find in Chapter C on page 955.
Chapter 55
The Specification Package
In this package, we give interfaces providing the abstract specifications of the basic elements
of the optimization processes given in Part I. We separate the interface specifications of the
modules of the optimization algorithms and the objective functions from their actual
implementations, which will be provided in Chapter 56.
Also, we provide interfaces to modules used in some specific algorithm patterns, such as
the temperature schedule in Simulated Annealing (see Listing 55.12 and Chapter 27) and
selection in Evolutionary Algorithms (see Listing 55.13 and Chapter 28).
A nullary search operation takes no arguments and just creates one new, random genotype
g ∈ G.
Unary search operations take one existing genotype g ∈ G and use it as the basis to create a
new, similar genotype g′ ∈ G.
Binary search operations take two existing genotypes gp1, gp2 ∈ G and use them as the basis to
create a new genotype g′ ∈ G which, hopefully, unites some of the positive features of its
“parents”.
Listing 55.4: The interface for all binary search operations.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * A binary search operation (see Definition D4.6 on page 83) such
10 * as the recombination operation in Evolutionary Algorithms (see
11 * Definition D28.14 on page 307) and its specific string-based version
12 * used in Genetic Algorithms (see
13 * Section 29.3.4 on page 335).
14 *
15 * @param <G>
16 * the search space (genome, Section 4.1 on page 81)
17 * @author Thomas Weise
18 */
19 public interface IBinarySearchOperation<G> extends IOptimizationModule {
20
21 /**
22 * This is the binary search operation. It takes two existing genotypes
23 * p1 and p2 (see Definition D4.2 on page 82) from the genome and produces
24 * one new element in the search space. There are two basic assumptions
25 * about this operator: 1) Its input elements are good because they have
26 * previously been selected. 2) It is somehow possible to combine these
27 * good traits and hence, to obtain a single individual which unites them
28 * and thus, has even better overall qualities than its parents. The
29 * original underlying idea of this operation is the
30 * "Building Block Hypothesis" (see
31 * Section 29.5.5 on page 346) for which, so far, not much
32 * evidence has been found. The hypothesis
33 * "Genetic Repair and Extraction" (see Section 29.5.6 on page 346)
34 * has been developed as an alternative to explain the positive aspects
35 * of binary search operations such as recombination.
36 *
37 * @param p1
38 * the first "parent" genotype
39 * @param p2
40 * the second "parent" genotype
41 * @param r
42 * the random number generator
43 * @return a new genotype
44 */
45 public abstract G recombine(final G p1, final G p2, final Random r);
46
47 }
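To illustrate how the recombine method of this interface might be filled with life, here is a sketch of a single-point crossover for bit strings, the classical binary operation of Genetic Algorithms. For brevity, it is written as a standalone static method rather than as a class implementing IBinarySearchOperation, and it assumes both parents have the same length:

```java
import java.util.Arrays;
import java.util.Random;

public final class SinglePointCrossover {

  /**
   * Combine two parent bit strings of equal length by cutting both at a
   * randomly chosen point and joining the left part of p1 with the right
   * part of p2.
   */
  public static boolean[] recombine(boolean[] p1, boolean[] p2, Random r) {
    // pick the cut in 1..(length - 1) so that both parents contribute
    int cut = 1 + r.nextInt(p1.length - 1);
    boolean[] child = new boolean[p1.length];
    for (int i = 0; i < child.length; i++) {
      child[i] = (i < cut) ? p1[i] : p2[i];
    }
    return child;
  }

  public static void main(String[] args) {
    boolean[] p1 = { true, true, true, true };
    boolean[] p2 = { false, false, false, false };
    System.out.println(Arrays.toString(recombine(p1, p2, new Random())));
  }
}
```

Whether such an operator actually combines "good traits" depends on the problem; the sketch only shows the mechanics behind the interface.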
Ternary search operations take three existing genotypes gp1, gp2, gp3 ∈ G and use them as
the basis to create a new genotype g′ ∈ G.
N-ary search operations take n existing genotypes gp1, gp2, .., gpn ∈ G and use them as the basis
to create a new genotype g′ ∈ G which again, hopefully, unites some of the positive features
of its “parents”.
55.1. GENERAL DEFINITIONS 711
Listing 55.6: The interface for all n-ary search operations.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * This interface provides the method for an n-ary search operation. Search
10 * operations are discussed in Section 4.2 on page 83.
11 *
12 * @param <G>
13 * the search space (genome, Section 4.1 on page 81)
14 * @author Thomas Weise
15 */
16 public interface INarySearchOperation<G> extends IOptimizationModule {
17
18 /**
19 * This is an n-ary search operation. It takes n existing genotype gi
20 * (see Definition D4.2 on page 82) from the genome and produces one new
21 * element in the search space.
22 *
23 * @param gs
24 * the existing genotypes in the search space which will be
25 * combined to a new one
26 * @param r
27 * the random number generator
28 * @return a new genotype
29 */
30 public abstract G combine(final G[] gs, final Random r);
31
32 }
Termination criteria are used to determine when the optimization process should finish.
Here we provide a simple interface which unites the functionality of all single-objective
optimization algorithms discussed in this book.
The temperature schedule is used in Simulated Annealing to determine the current temper-
ature and hence, the acceptance probability for inferior candidate solutions.
55.2. ALGORITHM SPECIFIC INTERFACES 717
Listing 55.12: The interface for temperature schedules.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 /**
7 * The interface ITemperatureSchedule allows us to implement the different
8 * temperature scheduling methods for Simulated Annealing (see
9 * Chapter 27) as listed in
10 * Section 27.3.
11 *
12 * @author Thomas Weise
13 */
14 public interface ITemperatureSchedule extends IOptimizationModule {
15
16 /**
17 * Get the temperature to be used in the iteration t. This method
18 * implements the getTemperature function specified in
19 * Equation 27.13 on page 247.
20 *
21 * @param t
22 * the iteration
23 * @return the temperature to be used for determining whether a worse
24 * solution should be accepted or not
25 */
26 public abstract double getTemperature(final int t);
27
28 }
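A concrete schedule implementing getTemperature could, for instance, realize exponential cooling, where the temperature shrinks by a constant factor ε ∈ (0, 1) in every iteration. The sketch below is an illustrative stand-in which omits the IOptimizationModule plumbing of the actual package:

```java
public final class ExponentialSchedule {

  private final double startTemperature; // the temperature T at iteration 0
  private final double epsilon;          // the cooling factor in (0, 1)

  public ExponentialSchedule(double startTemperature, double epsilon) {
    this.startTemperature = startTemperature;
    this.epsilon = epsilon;
  }

  /** the temperature to be used in iteration t: T(t) = T(0) * epsilon^t */
  public double getTemperature(int t) {
    return this.startTemperature * Math.pow(this.epsilon, t);
  }

  public static void main(String[] args) {
    ExponentialSchedule s = new ExponentialSchedule(100.0, 0.95);
    for (int t = 0; t <= 3; t++) {
      System.out.printf("T(%d) = %.3f%n", t, s.getTemperature(t));
    }
  }
}
```

The faster the temperature falls, the earlier the Simulated Annealing process stops accepting inferior candidate solutions.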
In Evolutionary Algorithms, the selection step determines which individuals should become
the parents of the individuals in the next generation.
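As an illustration of such a selection step, the sketch below implements simple tournament selection over an array of objective values (smaller is better): for each slot in the mating pool, k individuals are drawn at random and the best of them wins. This is an illustrative stand-in, not the selection interface of the package itself:

```java
import java.util.Arrays;
import java.util.Random;

public final class TournamentSelection {

  /**
   * Select 'count' indices from a population, given one objective value per
   * individual (smaller is better). Each selected index is the winner of a
   * tournament among k randomly drawn competitors.
   */
  public static int[] select(double[] objective, int k, int count, Random r) {
    int[] selected = new int[count];
    for (int i = 0; i < count; i++) {
      int best = r.nextInt(objective.length);
      for (int j = 1; j < k; j++) {
        int challenger = r.nextInt(objective.length);
        if (objective[challenger] < objective[best]) {
          best = challenger;
        }
      }
      selected[i] = best;
    }
    return selected;
  }

  public static void main(String[] args) {
    double[] f = { 5.0, 1.0, 3.0, 4.0, 2.0 };
    int[] pool = select(f, 2, 6, new Random());
    System.out.println(Arrays.toString(pool));
  }
}
```

Larger tournament sizes k increase the selection pressure towards individuals with better objective values.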
In this package, we implement the interfaces given in the package org.goataa.spec and dis-
cussed in Chapter 55 on page 707.
With this class, we provide a default implementation of the interface for single-objective
optimization methods given in Listing 55.10 on page 713.
Random sampling and random walks, both discussed in Chapter 8, are two baseline search
patterns which do not utilize any information from the objective functions during their
course (apart from remembering the best candidate solution found).
Listing 56.4: The random sampling algorithm “randomSampling” given in Algorithm 8.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.searchOperations.NullarySearchOperation;
10 import org.goataa.impl.utils.Individual;
11 import org.goataa.spec.IGPM;
12 import org.goataa.spec.INullarySearchOperation;
13 import org.goataa.spec.IObjectiveFunction;
14 import org.goataa.spec.IOptimizationModule;
15 import org.goataa.spec.ITerminationCriterion;
16
17 /**
18 * A simple implementation of the Random Sampling algorithm introduced as
19 * Algorithm 8.1 on page 114.
20 *
21 * @param <G>
22 * the search space (genome, Section 4.1 on page 81)
23 * @param <X>
24 * the problem space (phenome, Section 2.1 on page 43)
25 * @author Thomas Weise
26 */
27 public final class RandomSampling<G, X> extends
28 SOOptimizationAlgorithm<G, X, Individual<G, X>> {
29
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** the nullary search operation */
34 private INullarySearchOperation<G> o0;
35
36 /** instantiate the random sampling class */
37 @SuppressWarnings("unchecked")
38 public RandomSampling() {
39 super();
40 this.o0 = (INullarySearchOperation) (NullarySearchOperation.NULL_CREATION);
41 }
42
43 /**
44 * Set the nullary search operation to be used by this optimization
45 * algorithm (see Section 4.2 on page 83 and
46 * Listing 55.2 on page 708).
47 *
48 * @param op
49 * the nullary search operation to use
50 */
51 @SuppressWarnings("unchecked")
52 public void setNullarySearchOperation(final INullarySearchOperation<G> op) {
53 this.o0 = ((op != null) ? op
54 : ((INullarySearchOperation) (NullarySearchOperation.NULL_CREATION)));
55 }
56
57 /**
58 * Get the nullary search operation which is used by this optimization
59 * algorithm (see Section 4.2 on page 83 and
60 * Listing 55.2 on page 708).
61 *
62 * @return the nullary search operation to use
63 */
64 public final INullarySearchOperation<G> getNullarySearchOperation() {
65 return this.o0;
66 }
67
68 /**
69 * Invoke the random sampling process.
70 *
71 * @param r
72 * the randomizer (will be used directly without setting the
73 * seed)
74 * @param term
75 * the termination criterion (will be used directly without
76 * resetting)
77 * @param result
78 * a list to which the results are to be appended
79 */
80 @Override
81 public void call(final Random r, final ITerminationCriterion term,
82 final List<Individual<G, X>> result) {
83
84 result.add(RandomSampling.randomSampling(this.getObjectiveFunction(),//
85 this.getNullarySearchOperation(), //
86 this.getGPM(), term, r));
87
88 }
89
90 /**
91 * We place the complete Random Sampling method as defined in
92 * Algorithm 8.1 on page 114 into this single procedure.
93 *
94 * @param f
95 * the objective function ( Definition D2.3 on page 44)
96 * @param create
97 * the nullary search operator
98 * ( Section 4.2 on page 83) for creating the initial
99 * genotype
100 * @param gpm
101 * the genotype-phenotype mapping
102 * ( Section 4.3 on page 85)
103 * @param term
104 * the termination criterion
105 * ( Section 6.3.3 on page 105)
106 * @param r
107 * the random number generator
108 * @return the individual holding the best candidate solution
109 * ( Definition D2.2 on page 43) found
110 * @param <G>
111 * the search space ( Section 4.1 on page 81)
112 * @param <X>
113 * the problem space ( Section 2.1 on page 43)
114 */
115 public static final <G, X> Individual<G, X> randomSampling(
116 //
117 final IObjectiveFunction<X> f,
118 final INullarySearchOperation<G> create,//
119 final IGPM<G, X> gpm,//
120 final ITerminationCriterion term, final Random r) {
121
122 Individual<G, X> p, pbest;
123 int t;
124
125 p = new Individual<G, X>();
126 pbest = new Individual<G, X>();
127
128 // create the first genotype, map it to a phenotype, and evaluate it
129 p.g = create.create(r);
130 t = 1;
131
132 // check the termination criterion
133 while (!(term.terminationCriterion())) {
134 p.x = gpm.gpm(p.g, r);
135 p.v = f.compute(p.x, r);
136
137 // remember the best candidate solution
138 if ((t == 1) || (p.v < pbest.v)) {
139 pbest.assign(p);
140 }
141
142 t++;
143
144 // Instead of modifying existing genotypes as done in
145 // Listing 56.5, we create
146 // new ones in each round
147 p.g = create.create(r);
148 }
149
150 return pbest;
151 }
152
153 /**
154 * Get the full configuration which holds all the data necessary to
155 * describe this object.
156 *
157 * @param longVersion
158 * true if the long version should be returned, false if the
159 * short version should be returned
160 * @return the full configuration
161 */
162 @Override
163 public String getConfiguration(final boolean longVersion) {
164 StringBuilder sb;
165 IOptimizationModule o;
166
167 sb = new StringBuilder();
168
169 sb.append(super.getConfiguration(longVersion));
170
171 o = this.getNullarySearchOperation();
172 if (o != null) {
173 if (sb.length() > 0) {
174 sb.append(',');
175 }
176 sb.append("o0="); //$NON-NLS-1$
177 sb.append(o.toString(longVersion));
178 }
179
180 return sb.toString();
181 }
182
183 /**
184 * Get the name of the optimization module
185 *
186 * @param longVersion
187 * true if the long name should be returned, false if the short
188 * name should be returned
189 * @return the name of the optimization module
190 */
191 @Override
192 public String getName(final boolean longVersion) {
193 if (longVersion) {
194 return this.getClass().getSimpleName();
195 }
196 return "RS"; //$NON-NLS-1$
197 }
198 }
Listing 56.5: The Random Walk algorithm “randomWalk” given in Algorithm 8.2.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Individual;
10 import org.goataa.spec.IGPM;
11 import org.goataa.spec.INullarySearchOperation;
12 import org.goataa.spec.IObjectiveFunction;
13 import org.goataa.spec.ITerminationCriterion;
14 import org.goataa.spec.IUnarySearchOperation;
15
16 /**
17 * A simple implementation of the Random Walk algorithm introduced as
18 * Algorithm 8.2 on page 115.
19 *
20 * @param <G>
21 * the search space (genome, Section 4.1 on page 81)
22 * @param <X>
23 * the problem space (phenome, Section 2.1 on page 43)
24 * @author Thomas Weise
25 */
26 public final class RandomWalk<G, X> extends
27 LocalSearchAlgorithm<G, X, Individual<G, X>> {
28
29 /** a constant required by Java serialization */
30 private static final long serialVersionUID = 1;
31
32 /** instantiate the random walk class */
33 public RandomWalk() {
34 super();
35 }
36
37 /**
38 * Invoke the random walk.
39 *
40 * @param r
41 * the randomizer (will be used directly without setting the
42 * seed)
43 * @param term
44 * the termination criterion (will be used directly without
45 * resetting)
46 * @param result
47 * a list to which the results are to be appended
48 */
49 @Override
50 public void call(final Random r, final ITerminationCriterion term,
51 final List<Individual<G, X>> result) {
52
53 result.add(RandomWalk.randomWalk(this.getObjectiveFunction(),//
54 this.getNullarySearchOperation(), //
55 this.getUnarySearchOperation(),//
56 this.getGPM(), term, r));
57 }
58
59 /**
60 * We place the complete Random Walk method as defined in
61 * Algorithm 8.2 on page 115 into this single procedure.
62 *
63 * @param f
64 * the objective function ( Definition D2.3 on page 44)
65 * @param create
66 * the nullary search operator
67 * ( Section 4.2 on page 83) for creating the initial
68 * genotype
69 * @param mutate
70 * the unary search operator ( Section 4.2 on page 83)
71 * for modifying existing genotypes
72 * @param gpm
73 * the genotype-phenotype mapping
74 * ( Section 4.3 on page 85)
75 * @param term
76 * the termination criterion
77 * ( Section 6.3.3 on page 105)
78 * @param r
79 * the random number generator
80 * @return the individual holding the best candidate solution
81 * ( Definition D2.2 on page 43) found
82 * @param <G>
83 * the search space ( Section 4.1 on page 81)
84 * @param <X>
85 * the problem space ( Section 2.1 on page 43)
86 */
87 public static final <G, X> Individual<G, X> randomWalk(
88 //
89 final IObjectiveFunction<X> f,
90 final INullarySearchOperation<G> create,//
91 final IUnarySearchOperation<G> mutate,//
92 final IGPM<G, X> gpm,//
93 final ITerminationCriterion term, final Random r) {
94
95 Individual<G, X> p, pbest;
96 int t;
97
98 p = new Individual<G, X>();
99 pbest = new Individual<G, X>();
100
101 // create the first genotype, map it to a phenotype, and evaluate it
102 p.g = create.create(r);
103 t = 1;
104
105 // check the termination criterion
106 while (!(term.terminationCriterion())) {
107 p.x = gpm.gpm(p.g, r);
108 p.v = f.compute(p.x, r);
109
110 // remember the best candidate solution
111 if ((t == 1) || (p.v < pbest.v)) {
112 pbest.assign(p);
113 }
114
115 t++;
116
117 // modify the last point checked, map the new point to a phenotype
118 // and evaluate it - the main difference to
119 // ?? on page ?? is that we
120 // do not use the best genotype for this
121 p.g = mutate.mutate(p.g, r);
122 }
123
124 return pbest;
125 }
126
127 /**
128 * Get the name of the optimization module
129 *
130 * @param longVersion
131 * true if the long name should be returned, false if the short
132 * name should be returned
133 * @return the name of the optimization module
134 */
135 @Override
136 public String getName(final boolean longVersion) {
137 if (longVersion) {
138 return this.getClass().getSimpleName();
139 }
140 return "RW"; //$NON-NLS-1$
141 }
142 }
Here we provide the Hill Climbing algorithms discussed in Chapter 26. Hill Climbing is one
of the most basic optimization methods: a local search that keeps a single individual and
repeatedly modifies its genotype. It transitions to the new candidate solution if and only if
it is better than the current one.
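The accept-only-if-better rule can be condensed into a minimal, self-contained sketch. The following demo code is hypothetical and does not use the `org.goataa` interfaces: it minimizes f(x) = x·x over the real numbers, with a Gaussian mutation as unary operator and a fixed step budget as termination criterion.

```java
import java.util.Random;

/** A minimal Hill Climbing sketch (hypothetical demo, not part of the
 *  org.goataa API): minimize f(x) = x*x over the real numbers. */
public final class HillClimbDemo {

  /** The objective function: smaller values are better. */
  static double f(final double x) {
    return x * x;
  }

  /** Run the accept-only-if-better loop for a fixed number of steps. */
  public static double hillClimb(final int steps, final Random r) {
    double best = 10.0 + r.nextDouble(); // nullary operator: initial genotype
    double bestV = f(best);
    for (int t = 0; t < steps; t++) { // termination criterion: step budget
      final double cand = best + (0.5 * r.nextGaussian()); // unary operator
      final double candV = f(cand);
      if (candV < bestV) { // transition if and only if strictly better
        best = cand;
        bestV = candV;
      }
    }
    return best;
  }

  public static void main(final String[] args) {
    // after many steps, the result approaches the optimum x = 0
    System.out.println(hillClimb(10000, new Random(1)));
  }
}
```

Accepting only strict improvements mirrors Listing 56.6; a variant that also accepts equally good candidates would let the search drift across plateaus of the objective function.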
Listing 56.6: The Hill Climbing algorithm “hillClimbing” given in Algorithm 26.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.hc;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.LocalSearchAlgorithm;
10 import org.goataa.impl.utils.Individual;
11 import org.goataa.spec.IGPM;
12 import org.goataa.spec.INullarySearchOperation;
13 import org.goataa.spec.IObjectiveFunction;
14 import org.goataa.spec.ITerminationCriterion;
15 import org.goataa.spec.IUnarySearchOperation;
16
17 /**
18 * A simple implementation of the Hill Climbing algorithm introduced as
19 * Algorithm 26.1 on page 230.
20 *
21 * @param <G>
22 * the search space (genome, Section 4.1 on page 81)
23 * @param <X>
24 * the problem space (phenome, Section 2.1 on page 43)
25 * @author Thomas Weise
26 */
27 public final class HillClimbing<G, X> extends
28 LocalSearchAlgorithm<G, X, Individual<G, X>> {
29
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** instantiate the hill climbing class */
34 public HillClimbing() {
35 super();
36 }
37
38 /**
39 * Invoke the optimization process. This method calls the optimizer and
40 * returns the list of best individuals (see Definition D4.18 on page 90)
41 * found. Usually, only a single individual will be returned. Different
42 * from the parameterless call method, here a randomizer and a
43 * termination criterion are directly passed in. Also, a list to fill in
44 * the optimization results is provided. This allows recursively using
45 * the optimization algorithms.
46 *
47 * @param r
48 * the randomizer (will be used directly without setting the
49 * seed)
50 * @param term
51 * the termination criterion (will be used directly without
52 * resetting)
53 * @param result
54 * a list to which the results are to be appended
55 */
56 @Override
57 public void call(final Random r, final ITerminationCriterion term,
58 final List<Individual<G, X>> result) {
59
60 result.add(HillClimbing.hillClimbing(this.getObjectiveFunction(),//
61 this.getNullarySearchOperation(), //
62 this.getUnarySearchOperation(),//
63 this.getGPM(), term, r));
64
65 }
66
67 /**
68 * We place the complete Hill Climbing method as defined in
69 * Algorithm 26.1 on page 230 into this single procedure.
70 *
71 * @param f
72 * the objective function ( Definition D2.3 on page 44)
73 * @param create
74 * the nullary search operator
75 * ( Section 4.2 on page 83) for creating the initial
76 * genotype
77 * @param mutate
78 * the unary search operator ( Section 4.2 on page 83)
79 * for modifying existing genotypes
80 * @param gpm
81 * the genotype-phenotype mapping
82 * ( Section 4.3 on page 85)
83 * @param term
84 * the termination criterion
85 * ( Section 6.3.3 on page 105)
86 * @param r
87 * the random number generator
88 * @return the individual holding the best candidate solution
89 * ( Definition D2.2 on page 43) found
90 * @param <G>
91 * the search space ( Section 4.1 on page 81)
92 * @param <X>
93 * the problem space ( Section 2.1 on page 43)
94 */
95 public static final <G, X> Individual<G, X> hillClimbing(
96 //
97 final IObjectiveFunction<X> f,
98 final INullarySearchOperation<G> create,//
99 final IUnarySearchOperation<G> mutate,//
100 final IGPM<G, X> gpm,//
101 final ITerminationCriterion term, final Random r) {
102
103 Individual<G, X> p, pnew;
104
105 p = new Individual<G, X>();
106 pnew = new Individual<G, X>();
107
108 // create the first genotype, map it to a phenotype, and evaluate it
109 p.g = create.create(r);
110 p.x = gpm.gpm(p.g, r);
111 p.v = f.compute(p.x, r);
112
113 // check the termination criterion
114 while (!(term.terminationCriterion())) {
115
116 // modify the best point known, map the new point to a phenotype and
117 // evaluate it
118 pnew.g = mutate.mutate(p.g, r);
119 pnew.x = gpm.gpm(pnew.g, r);
120 pnew.v = f.compute(pnew.x, r);
121
122 // In Algorithm 26.1 on page 230, the objective functions are
123 // evaluated here. By storing the objective values in the individual
124 // records, we avoid evaluating p.x more than once.
125 if (pnew.v < p.v) {
126 p.assign(pnew);
127 }
128 }
129
130 return p;
131 }
132
133 /**
134 * Get the name of the optimization module
135 *
136 * @param longVersion
137 * true if the long name should be returned, false if the short
138 * name should be returned
139 * @return the name of the optimization module
140 */
141 @Override
142 public String getName(final boolean longVersion) {
143 if (longVersion) {
144 return this.getClass().getSimpleName();
145 }
146 return "HC"; //$NON-NLS-1$
147 }
148 }
Listing 56.7: The Multi-Objective Hill Climbing algorithm “hillClimbingMO” given in Al-
gorithm 26.2.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.hc;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.MOLocalSearchAlgorithm;
10 import org.goataa.impl.utils.Archive;
11 import org.goataa.impl.utils.MOIndividual;
12 import org.goataa.impl.utils.Utils;
13 import org.goataa.spec.IGPM;
14 import org.goataa.spec.IIndividualComparator;
15 import org.goataa.spec.INullarySearchOperation;
16 import org.goataa.spec.IObjectiveFunction;
17 import org.goataa.spec.ITerminationCriterion;
18 import org.goataa.spec.IUnarySearchOperation;
19
20 /**
21 * An implementation of the Multi-Objective Hill Climbing algorithm
22 * introduced as Algorithm 26.2 on page 231.
23 *
24 * @param <G>
25 * the search space (genome, Section 4.1 on page 81)
26 * @param <X>
27 * the problem space (phenome, Section 2.1 on page 43)
28 * @author Thomas Weise
29 */
30 public final class MOHillClimbing<G, X> extends
31 MOLocalSearchAlgorithm<G, X> {
32
33 /** a constant required by Java serialization */
34 private static final long serialVersionUID = 1;
35
36 /** instantiate the hill climbing class */
37 public MOHillClimbing() {
38 super();
39 }
40
41 /**
42 * Invoke the multi-objective hill climber.
43 *
44 * @param r
45 * the randomizer (will be used directly without setting the
46 * seed)
47 * @param term
48 * the termination criterion (will be used directly without
49 * resetting)
50 * @param result
51 * a list to which the results are to be appended
52 */
53 @Override
54 public void call(final Random r, final ITerminationCriterion term,
55 final List<MOIndividual<G, X>> result) {
56
57 MOHillClimbing.hillClimbingMO(//
58 this.getObjectiveFunctions(),//
59 this.getComparator(),//
60 this.getNullarySearchOperation(), //
61 this.getUnarySearchOperation(),//
62 this.getGPM(), term,//
63 this.getMaxSolutions(),//
64 r, //
65 result);
66
67 }
68
69 /**
70 * We place the complete Multi-Objective Hill Climbing method as
71 * defined in Algorithm 26.2 on page 231 into this single procedure.
72 *
73 * @param f
74 * the objective functions ( Definition D2.3 on page 44)
75 * @param cmp
76 * the comparator
77 * @param create
78 * the nullary search operator
79 * ( Section 4.2 on page 83) for creating the initial
80 * genotype
81 * @param mutate
82 * the unary search operator ( Section 4.2 on page 83)
83 * for modifying existing genotypes
84 * @param gpm
85 * the genotype-phenotype mapping
86 * ( Section 4.3 on page 85)
87 * @param term
88 * the termination criterion
89 * ( Section 6.3.3 on page 105)
90 * @param result
91 * the list where the best individuals should be added to
92 * @param r
93 * the random number generator
95 * @param maxSolutions
96 * the maximum number of allowed solutions
97 * @param <G>
98 * the search space ( Section 4.1 on page 81)
99 * @param <X>
100 * the problem space ( Section 2.1 on page 43)
101 */
102 public static final <G, X> void hillClimbingMO(
103 //
104 final IObjectiveFunction<X>[] f,//
105 final IIndividualComparator cmp,//
106 final INullarySearchOperation<G> create,//
107 final IUnarySearchOperation<G> mutate,//
108 final IGPM<G, X> gpm,//
109 final ITerminationCriterion term,//
110 final int maxSolutions,//
111 final Random r,//
112 final List<MOIndividual<G, X>> result) {
113
114 MOIndividual<G, X> p, pnew;
115 Archive<G, X> arc;
116 int max;
117
118 p = new MOIndividual<G, X>(f.length);
119 pnew = new MOIndividual<G, X>(f.length);
120 arc = new Archive<G, X>(maxSolutions, cmp);
121
122 // create the first genotype, map it to a phenotype, and evaluate it
123 p.g = create.create(r);
124 p.x = gpm.gpm(p.g, r);
125 p.evaluate(f, r);
126 arc.add(p);
127
128 // check the termination criterion
129 while (!(term.terminationCriterion())) {
130 pnew = new MOIndividual<G, X>(f.length);
131
132 mutateCreate: {
133 for (max = 1000; (--max) >= 0;) {
134 // pick a random individual from the set of best individuals
135 p = arc.get(r.nextInt(arc.size()));
136
137 pnew.g = mutate.mutate(p.g, r);
138 if (!(Utils.equals(pnew.g, p.g))) {
139 break mutateCreate;
140 }
141 }
142 pnew.g = create.create(r);
143 }
144 pnew.x = gpm.gpm(pnew.g, r);
145 pnew.evaluate(f, r);
146
147 // check whether the individual should enter the archive
148 arc.add(pnew);
149 }
150
151 result.addAll(arc);
152 }
153
154 /**
155 * Get the name of the optimization module
156 *
157 * @param longVersion
158 * true if the long name should be returned, false if the short
159 * name should be returned
160 * @return the name of the optimization module
161 */
162 @Override
163 public String getName(final boolean longVersion) {
164 if (longVersion) {
165 return this.getClass().getSimpleName();
166 }
167 return "MOHC"; //$NON-NLS-1$
168 }
169 }
In this package, we provide the implementations of the Simulated Annealing algorithm and
its modules as discussed in Chapter 27.
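The heart of Simulated Annealing is the Metropolis acceptance criterion (Equation 27.3): a candidate whose objective value is worse by ΔE > 0 is still accepted with probability e^(−ΔE/T). The following self-contained sketch of this rule is hypothetical demo code, not the book's implementation.

```java
/** A sketch of the Metropolis acceptance rule used by Simulated
 *  Annealing (hypothetical demo, not the org.goataa implementation). */
public final class MetropolisDemo {

  /** Probability of accepting a candidate whose objective value is
   *  worse by DE at temperature T; DE <= 0 is always accepted. */
  public static double acceptProbability(final double DE, final double T) {
    if (DE <= 0.0) {
      return 1.0; // better or equally good: always accept
    }
    return Math.exp(-DE / T); // worse: accept with Boltzmann probability
  }

  public static void main(final String[] args) {
    // high temperature: worse solutions are accepted often
    System.out.println(acceptProbability(1.0, 10.0)); // ~0.9048
    // low temperature: the rule degenerates to pure Hill Climbing
    System.out.println(acceptProbability(1.0, 0.01));
  }
}
```

At high temperatures the algorithm thus behaves almost like a random walk; as T falls toward zero, it degenerates to pure Hill Climbing.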
Listing 56.8: The Simulated Annealing algorithm “simulatedAnnealing” given in Algo-
rithm 27.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.sa;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.LocalSearchAlgorithm;
10 import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
11 import org.goataa.impl.utils.Individual;
12 import org.goataa.spec.IGPM;
13 import org.goataa.spec.INullarySearchOperation;
14 import org.goataa.spec.IObjectiveFunction;
15 import org.goataa.spec.ITemperatureSchedule;
16 import org.goataa.spec.ITerminationCriterion;
17 import org.goataa.spec.IUnarySearchOperation;
18
19 /**
20 * A simple implementation of the Simulated Annealing algorithm introduced
21 * as Algorithm 27.1 on page 245.
22 *
23 * @param <G>
24 * the search space (genome, Section 4.1 on page 81)
25 * @param <X>
26 * the problem space (phenome, Section 2.1 on page 43)
27 * @author Thomas Weise
28 */
29 public final class SimulatedAnnealing<G, X> extends
30 LocalSearchAlgorithm<G, X, Individual<G, X>> {
31
32 /** a constant required by Java serialization */
33 private static final long serialVersionUID = 1;
34
35 /** the temperature schedule */
36 private ITemperatureSchedule schedule;
37
38 /** instantiate the simulated annealing class */
39 public SimulatedAnnealing() {
40 super();
41 }
42
43 /**
44 * Set the temperature schedule (see
45 * Definition D27.2 on page 247) which will be responsible for
46 * setting the acceptance probability of inferior candidate solutions
47 * during all optimization steps.
48 *
49 * @param aschedule
50 * the temperature schedule
51 */
52 public final void setTemperatureSchedule(
53 final ITemperatureSchedule aschedule) {
54 this.schedule = aschedule;
55 }
56
57 /**
58 * Get the temperature schedule (see
59 * Definition D27.2 on page 247) which is responsible for setting
60 * the acceptance probability of inferior candidate solutions during all
61 * optimization steps.
62 *
63 * @return the temperature schedule
64 */
65 public final ITemperatureSchedule getTemperatureSchedule() {
66 return this.schedule;
67 }
68
69 /**
70 * Invoke the optimization process. This method calls the optimizer and
71 * returns the list of best individuals (see Definition D4.18 on page 90)
72 * found. Usually, only a single individual will be returned. Different
73 * from the parameterless call method, here a randomizer and a
74 * termination criterion are directly passed in. Also, a list to fill in
75 * the optimization results is provided. This allows recursively using
76 * the optimization algorithms.
77 *
78 * @param r
79 * the randomizer (will be used directly without setting the
80 * seed)
81 * @param term
82 * the termination criterion (will be used directly without
83 * resetting)
84 * @param result
85 * a list to which the results are to be appended
86 */
87 @Override
88 public void call(final Random r, final ITerminationCriterion term,
89 final List<Individual<G, X>> result) {
90
91 result.add(SimulatedAnnealing.simulatedAnnealing(//
92 this.getObjectiveFunction(),//
93 this.getNullarySearchOperation(), //
94 this.getUnarySearchOperation(),//
95 this.getGPM(), this.getTemperatureSchedule(),//
96 term, r));
97 }
98
99 /**
100 * We place the complete Simulated Annealing method as defined in
101 * Algorithm 27.1 on page 245 into this single procedure.
102 *
103 * @param f
104 * the objective function ( Definition D2.3 on page 44)
105 * @param create
106 * the nullary search operator
107 * ( Section 4.2 on page 83) for creating the initial
108 * genotype
109 * @param mutate
110 * the unary search operator ( Section 4.2 on page 83)
111 * for modifying existing genotypes
112 * @param gpm
113 * the genotype-phenotype mapping
114 * ( Section 4.3 on page 85)
115 * @param temperature
116 * the temperature schedule according to
117 * ( Definition D27.2 on page 247)
118 * @param term
119 * the termination criterion
120 * ( Section 6.3.3 on page 105)
121 * @param r
122 * the random number generator
123 * @return the best candidate solution
124 * ( Definition D2.2 on page 43) found
125 * @param <G>
126 * the search space ( Section 4.1 on page 81)
127 * @param <X>
128 * the problem space ( Section 2.1 on page 43)
129 */
130 public static final <G, X> Individual<G, X> simulatedAnnealing(
131 //
132 final IObjectiveFunction<X> f,
133 final INullarySearchOperation<G> create,//
134 final IUnarySearchOperation<G> mutate,//
135 final IGPM<G, X> gpm,//
136 final ITemperatureSchedule temperature,
137 final ITerminationCriterion term, final Random r) {
138
139 Individual<G, X> pbest, pcur, pnew;
140 int t;
141 double T, DE;
142
143 pbest = new Individual<G, X>();
144 pnew = new Individual<G, X>();
145 pcur = new Individual<G, X>();
146
147 // create the first genotype, map it to a phenotype, and evaluate it
148 pcur.g = create.create(r);
149 pcur.x = gpm.gpm(pcur.g, r);
150 pcur.v = f.compute(pcur.x, r);
151 pbest.assign(pcur);
152 t = 1;
153
154 // check the termination criterion
155 while (!(term.terminationCriterion())) {
156
157 // compute the current temperature
158 T = temperature.getTemperature(t);
159
160 // modify the best point known, map the new point to a phenotype and
161 // evaluate it
162 pnew.g = mutate.mutate(pcur.g, r);
163 pnew.x = gpm.gpm(pnew.g, r);
164 pnew.v = f.compute(pnew.x, r);
165
166 // compute the energy difference according to
167 // Equation 27.2 on page 244
168 DE = pnew.v - pcur.v;
169
170 // implement Equation 27.3 on page 244
171 if (DE <= 0.0d) {
172 pcur.assign(pnew);
173
174 // remember the best candidate solution
175 if (pnew.v < pbest.v) {
176 pbest.assign(pnew);
177 }
178 } else {
179 // otherwise, use
180 if (r.nextDouble() < Math.exp(-DE / T)) {
181 pcur.assign(pnew);
182 }
183 }
184
185 t++;
186 }
187
188 return pbest;
189 }
190
191 /**
192 * Get the name of the optimization module
193 *
194 * @param longVersion
195 * true if the long name should be returned, false if the short
196 * name should be returned
197 * @return the name of the optimization module
198 */
199 @Override
200 public String getName(final boolean longVersion) {
201 ITemperatureSchedule ts;
202
203 ts = this.getTemperatureSchedule();
204
205 if (ts instanceof Exponential) {
206 return (longVersion ? "SimulatedQuenching" : "SQ");//$NON-NLS-1$//$NON-NLS-2$
207 }
208
209 if (longVersion) {
210 return this.getClass().getSimpleName();
211 }
212 return "SA"; //$NON-NLS-1$
213 }
214
215 /**
216 * Get the full configuration which holds all the data necessary to
217 * describe this object.
218 *
219 * @param longVersion
220 * true if the long version should be returned, false if the
221 * short version should be returned
222 * @return the full configuration
223 */
224 @Override
225 public String getConfiguration(final boolean longVersion) {
226 ITemperatureSchedule ts;
227 String s;
228
229 s = super.getConfiguration(longVersion);
230 ts = this.getTemperatureSchedule();
231 if (ts != null) {
232 s += ',' + ts.toString(longVersion);
233 }
234
235 return s;
236 }
237 }
In this package, we provide the implementations of the temperature schedules for the Sim-
ulated Annealing algorithm and its modules as discussed in Section 27.3.
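A typical member of this package is an exponential cooling schedule. The sketch below is hypothetical: it assumes the common form T(t) = T0 · α^(t−1), while the exact formula used by the book's Exponential class may differ.

```java
/** A hypothetical exponential cooling schedule T(t) = T0 * alpha^(t-1);
 *  demo code, not the org.goataa Exponential implementation. */
public final class ExponentialScheduleDemo {

  /** the start temperature T0 */
  private final double t0;

  /** the cooling factor alpha in (0, 1) */
  private final double alpha;

  public ExponentialScheduleDemo(final double t0, final double alpha) {
    this.t0 = t0;
    this.alpha = alpha;
  }

  /** Get the temperature for iteration t = 1, 2, 3, ... */
  public double getTemperature(final int t) {
    return this.t0 * Math.pow(this.alpha, t - 1);
  }

  public static void main(final String[] args) {
    final ExponentialScheduleDemo s = new ExponentialScheduleDemo(100.0, 0.9);
    System.out.println(s.getTemperature(1)); // 100.0: the start temperature
    System.out.println(s.getTemperature(2)); // 90.0: reduced by factor alpha
  }
}
```

Faster cooling (smaller α) shifts the behavior toward simulated quenching, which is why the getName method of Listing 56.8 reports "SQ" when an Exponential schedule is used.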
Listing 56.11: A class providing the features of Algorithm 28.1 in a generational way (see
Section 28.1.4.1).
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.ea;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Constants;
10 import org.goataa.impl.utils.Individual;
11 import org.goataa.spec.IBinarySearchOperation;
12 import org.goataa.spec.IGPM;
13 import org.goataa.spec.INullarySearchOperation;
14 import org.goataa.spec.IObjectiveFunction;
15 import org.goataa.spec.IOptimizationModule;
16 import org.goataa.spec.ISelectionAlgorithm;
17 import org.goataa.spec.ITerminationCriterion;
18 import org.goataa.spec.IUnarySearchOperation;
19
20 /**
21 * A straightforward implementation of the Evolutionary Algorithm given in
22 * Section 28.1.1 and specified in
23 * Algorithm 28.1 on page 255 with generational population treatment
24 * (see Section 28.1.4.1 on page 265).
25 *
26 * @param <G>
27 * the search space (genome, Section 4.1 on page 81)
28 * @param <X>
29 * the problem space (phenome, Section 2.1 on page 43)
30 * @author Thomas Weise
31 */
32 public final class SimpleGenerationalEA<G, X> extends EABase<G, X> {
33
34 /** a constant required by Java serialization */
35 private static final long serialVersionUID = 1;
36
37 /** instantiate the simple generational EA */
38 public SimpleGenerationalEA() {
39 super();
40 }
41
42 /**
43 * Perform a simple EA as defined in Algorithm 28.1 on page 255 while
44 * using a generational population handling (see
45 * Section 28.1.4.1 on page 265).
46 *
47 * @param f
48 * the objective function ( Definition D2.3 on page 44)
49 * @param create
50 * the nullary search operator
51 * ( Section 4.2 on page 83) for creating the initial
52 * genotype
53 * @param mutate
54 * the unary search operator (mutation,
55 * Section 4.2 on page 83) for modifying existing
56 * genotypes
57 * @param recombine
58 * the recombination operator ( Definition D28.14 on page 307)
59 * @param gpm
60 * the genotype-phenotype mapping
61 * ( Section 4.3 on page 85)
62 * @param sel
63 * the selection algorithm ( Section 28.4 on page 285)
64 * @param term
65 * the termination criterion
66 * ( Section 6.3.3 on page 105)
67 * @param ps
68 * the population size
69 * @param mps
70 * the mating pool size
71 * @param cr
72 * the crossover rate
73 * @param mr
74 * the mutation rate
75 * @param r
76 * the random number generator
77 * @return the individual containing the best candidate solution
78 * ( Definition D2.2 on page 43) found
79 * @param <G>
80 * the search space ( Section 4.1 on page 81)
81 * @param <X>
82 * the problem space ( Section 2.1 on page 43)
83 */
84 @SuppressWarnings("unchecked")
85 public static final <G, X> Individual<G, X> evolutionaryAlgorithm(
86 //
87 final IObjectiveFunction<X> f,
88 final INullarySearchOperation<G> create,//
89 final IUnarySearchOperation<G> mutate,//
90 final IBinarySearchOperation<G> recombine,
91 final IGPM<G, X> gpm,//
92 final ISelectionAlgorithm sel, final int ps, final int mps,
93 final double mr, final double cr, final ITerminationCriterion term,
94 final Random r) {
95
96 Individual<G, X>[] pop, mate;
97 int mateIndex, popIndex, i, j, k;
98 Individual<G, X> p, best;
99
100 best = new Individual<G, X>();
101 best.v = Constants.WORST_FITNESS;
102 pop = new Individual[ps];
103 mate = new Individual[mps];
104
105 // build the initial population of random individuals
106 for (i = pop.length; (--i) >= 0;) {
107 p = new Individual<G, X>();
108 pop[i] = p;
109 p.g = create.create(r);
110 }
111
112 // the basic loop of the Evolutionary Algorithm 28.1 on page 255
113 for (;;) {
114 // for each generation do...
115
116 // for each individual in the population
117 for (i = pop.length; (--i) >= 0;) {
118 p = pop[i];
119
120 // perform the genotype-phenotype mapping
121 p.x = gpm.gpm(p.g, r);
122 // compute the objective value
123 p.v = f.compute(p.x, r);
124
125 // is the current individual the best one so far?
126 if (p.v < best.v) {
127 best.assign(p);
128 }
129
130 // after each objective function evaluation, check if we should
131 // stop
132 if (term.terminationCriterion()) {
133 // if we should stop, return the best individual found
134 return best;
135 }
136 }
137
138 // perform the selection step
139 sel.select(pop, 0, pop.length, mate, 0, mate.length, r);
140
141 mateIndex = 0;
142 popIndex = ps;
143
144 // create cr*ps individuals with crossover
145 k = (int) (0.5 + (cr * ps));
146 for (j = 0; (j < k) && (popIndex > 0); j++) {
147 p = new Individual<G, X>();
148
149 // recombine an individual with another
150 p.g = recombine.recombine(mate[mateIndex].g,
151 mate[r.nextInt(mps)].g, r);
152 pop[--popIndex] = p;
153 mateIndex = ((mateIndex + 1) % mps);
154 }
155
156 // create mr*ps individuals with mutation
157 k = (int) (0.5 + (mr * ps));
158 for (j = 0; (j < k) && (popIndex > 0); j++) {
159 p = new Individual<G, X>();
160
161 // mutate an individual
162 p.g = mutate.mutate(mate[mateIndex].g, r);
163 pop[--popIndex] = p;
164 mateIndex = ((mateIndex + 1) % mps);
165 }
166
167 // copy the remaining individuals
168 for (; popIndex > 0;) {
169 pop[--popIndex] = mate[mateIndex];
170 mateIndex = ((mateIndex + 1) % mps);
171 }
172 }
173 }
174
175 /**
176 * Invoke the simple ga
177 *
178 * @param r
179 * the randomizer (will be used directly without setting the
180 * seed)
181 * @param term
182 * the termination criterion (will be used directly without
183 * resetting)
184 * @param result
185 * a list to which the results are to be appended
186 */
187 @Override
188 public void call(final Random r, final ITerminationCriterion term,
189 final List<Individual<G, X>> result) {
190
191 result.add(SimpleGenerationalEA.evolutionaryAlgorithm(//
192 this.getObjectiveFunction(),//
193 this.getNullarySearchOperation(), //
194 this.getUnarySearchOperation(),//
195 this.getBinarySearchOperation(),//
196 this.getGPM(), //
197 this.getSelectionAlgorithm(),//
198 this.getPopulationSize(),//
199 this.getMatingPoolSize(),//
200 this.getMutationRate(),//
201 this.getCrossoverRate(),//
202 term, //
203 r));
204 }
205
206 /**
207 * Get the name of the optimization module
208 *
209 * @param longVersion
210 * true if the long name should be returned, false if the short
211 * name should be returned
212 * @return the name of the optimization module
213 */
214 @Override
215 public String getName(final boolean longVersion) {
216 IOptimizationModule om;
217 boolean ga;
218
219 ga = false;
220
221 om = this.getNullarySearchOperation();
222 if (om != null) {
223 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
224 ga = true;
225 }
226 } else {
227 om = this.getUnarySearchOperation();
228 if (om != null) {
229 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
230 ga = true;
231 }
232 } else {
233 om = this.getBinarySearchOperation();
234 if (om != null) {
235 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
236 ga = true;
237 }
238 }
239 }
240 }
241
242 if (ga) {
243 return (longVersion ? "sGeneticAlgorithm" : "sGA");//$NON-NLS-1$//$NON-NLS-2$
244 }
245
246 if (longVersion) {
247 return this.getClass().getSimpleName();
248 }
249 return "sEA"; //$NON-NLS-1$
250 }
251
252 }
Listing 56.12: A class providing the features of Algorithm 28.2 in a generational way.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.ea;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Archive;
10 import org.goataa.impl.utils.MOIndividual;
11 import org.goataa.spec.IBinarySearchOperation;
12 import org.goataa.spec.IFitnessAssignmentProcess;
13 import org.goataa.spec.IGPM;
14 import org.goataa.spec.IIndividualComparator;
15 import org.goataa.spec.INullarySearchOperation;
16 import org.goataa.spec.IObjectiveFunction;
17 import org.goataa.spec.IOptimizationModule;
18 import org.goataa.spec.ISelectionAlgorithm;
19 import org.goataa.spec.ITerminationCriterion;
20 import org.goataa.spec.IUnarySearchOperation;
21
22 /**
23 * A straightforward implementation of the multi-objective Evolutionary
24 * Algorithm given in Section 28.1.1 and specified in
25 * Algorithm 28.2 on page 256 with generational population treatment
26 * (see Section 28.1.4.1 on page 265).
27 *
28 * @param <G>
29 * the search space (genome, Section 4.1 on page 81)
30 * @param <X>
31 * the problem space (phenome, Section 2.1 on page 43)
32 * @author Thomas Weise
33 */
34 public class SimpleGenerationalMOEA<G, X> extends MOEABase<G, X> {
35
36 /** a constant required by Java serialization */
37 private static final long serialVersionUID = 1;
38
39 /** instantiate the simple generational EA */
40 public SimpleGenerationalMOEA() {
41 super();
42 }
43
44 /**
45 * Perform a simple multi-objective EA as defined in Algorithm 28.2 on page 256 while
46 * using a generational population handling (see
47 * Section 28.1.4.1 on page 265).
48 *
49 * @param f
50 * the objective functions ( Definition D2.3 on page 44)
51 * @param cmp
52 * the comparator
53 * @param create
54 * the nullary search operator
55 * ( Section 4.2 on page 83) for creating the initial
56 * genotype
57 * @param mutate
58 * the unary search operator (mutation,
59 * Section 4.2 on page 83) for modifying existing
60 * genotypes
61 * @param recombine
62 * the recombination operator ( Definition D28.14 on page 307)
63 * @param gpm
64 * the genotype-phenotype mapping
65 * ( Section 4.3 on page 85)
66 * @param sel
67    *          the selection algorithm ( Section 28.4 on page 285)
68 * @param fa
69 * the fitness assignment process
70 * @param term
71 * the termination criterion
72 * ( Section 6.3.3 on page 105)
73 * @param ps
74 * the population size
75 * @param mps
76 * the mating pool size
77 * @param cr
78 * the crossover rate
79 * @param mr
80 * the mutation rate
81 * @param maxSolutions
82 * the maximum number of solutions
83 * @param r
84 * the random number generator
85 * @param result
86 * the list to add the best candidate solutions
87 * ( Definition D2.2 on page 43) to
88 * @param <G>
89 * the search space ( Section 4.1 on page 81)
90 * @param <X>
91 * the problem space ( Section 2.1 on page 43)
92 */
93 @SuppressWarnings("unchecked")
94 public static final <G, X> void evolutionaryAlgorithmMO(
95 //
96 final IObjectiveFunction<X>[] f,//
97 final IIndividualComparator cmp,//
98 final INullarySearchOperation<G> create,//
99 final IUnarySearchOperation<G> mutate,//
100 final IBinarySearchOperation<G> recombine, final IGPM<G, X> gpm,//
101 final ISelectionAlgorithm sel,//
102 final IFitnessAssignmentProcess fa,//
103 final int ps, final int mps,//
104 final double mr,//
105 final double cr,//
106 final ITerminationCriterion term, final int maxSolutions,//
107 final Random r,//
108 final List<MOIndividual<G, X>> result) {
109
110 MOIndividual<G, X>[] pop, mate;
111 int mateIndex, popIndex, i, j, k;
112 MOIndividual<G, X> p;
113 Archive<G, X> best;
114
115 best = new Archive<G, X>(maxSolutions, cmp);
116 pop = new MOIndividual[ps];
117 mate = new MOIndividual[mps];
118
119 // build the initial population of random individuals
120 for (i = pop.length; (--i) >= 0;) {
121 p = new MOIndividual<G, X>(f.length);
122 pop[i] = p;
123 p.g = create.create(r);
124 }
125
126 // the basic loop of the Evolutionary Algorithm 28.1 on page 255
127 for (;;) {
128 // for each generation do...
129
130 // for each individual in the population
131 for (i = pop.length; (--i) >= 0;) {
132 p = pop[i];
133
134 // perform the genotype-phenotype mapping
135 p.x = gpm.gpm(p.g, r);
136
137 // compute the objective values
138 p.evaluate(f, r);
139
140 // update the archive
141 best.add(p);
142
143 // after each objective function evaluation, check if we should
144 // stop
145 if (term.terminationCriterion()) {
146 // if we should stop, return the best individuals found
147 result.addAll(best);
148 return;
149 }
150 }
151
152 // assign the fitness
153 fa.assignFitness(pop, 0, pop.length, cmp, r);
154
155 // perform the selection step
156 sel.select(pop, 0, pop.length, mate, 0, mate.length, r);
157
158 mateIndex = 0;
159 popIndex = ps;
160
161 // create cr*ps individuals with crossover
162 k = (int) (0.5 + (cr * ps));
163 for (j = 0; (j < k) && (popIndex > 0); j++) {
164 p = new MOIndividual<G, X>(f.length);
165
166 // recombine an individual with another
167 p.g = recombine.recombine(mate[mateIndex].g,
168 mate[r.nextInt(mps)].g, r);
169 pop[--popIndex] = p;
170 mateIndex = ((mateIndex + 1) % mps);
171 }
172
173 // create mr*ps individuals with mutation
174 k = (int) (0.5 + (mr * ps));
175 for (j = 0; (j < k) && (popIndex > 0); j++) {
176 p = new MOIndividual<G, X>(f.length);
177
178         // mutate an individual from the mating pool
179 p.g = mutate.mutate(mate[mateIndex].g, r);
180 pop[--popIndex] = p;
181 mateIndex = ((mateIndex + 1) % mps);
182 }
183
184 // copy the remaining individuals
185 for (; popIndex > 0;) {
186 pop[--popIndex] = mate[mateIndex];
187 mateIndex = ((mateIndex + 1) % mps);
188 }
189 }
190 }
191
192 /**
193 * Invoke the multi-objective EA.
194 *
195 * @param r
196 * the randomizer (will be used directly without setting the
197 * seed)
198 * @param term
199 * the termination criterion (will be used directly without
200 * resetting)
201 * @param result
202 * a list to which the results are to be appended
203 */
204 @Override
205 public void call(final Random r, final ITerminationCriterion term,
206 final List<MOIndividual<G, X>> result) {
207
208 evolutionaryAlgorithmMO(//
209 this.getObjectiveFunctions(),//
210 this.getComparator(),//
211 this.getNullarySearchOperation(), //
212 this.getUnarySearchOperation(),//
213 this.getBinarySearchOperation(),//
214 this.getGPM(), //
215 this.getSelectionAlgorithm(),//
216 this.getFitnesAssignmentProcess(),//
217 this.getPopulationSize(),//
218 this.getMatingPoolSize(),//
219 this.getMutationRate(),//
220 this.getCrossoverRate(),//
221 term, //
222 this.getMaxSolutions(),//
223 r,//
224 result);
225 }
226
227 /**
228 * Get the name of the optimization module
229 *
230 * @param longVersion
231 * true if the long name should be returned, false if the short
232 * name should be returned
233 * @return the name of the optimization module
234 */
235 @Override
236 public String getName(final boolean longVersion) {
237 IOptimizationModule om;
238 boolean ga;
239
240 ga = false;
241
242 om = this.getNullarySearchOperation();
243 if (om != null) {
244       if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
245 ga = true;
246 }
247 } else {
248 om = this.getUnarySearchOperation();
249 if (om != null) {
250         if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
251 ga = true;
252 }
253 } else {
254 om = this.getBinarySearchOperation();
255 if (om != null) {
256           if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
257 ga = true;
258 }
259 }
260 }
261 }
262
263 if (ga) {
264       return (longVersion ? "sMOGeneticAlgorithm" : "sMOGA");//$NON-NLS-1$//$NON-NLS-2$
265 }
266
267 if (longVersion) {
268 return this.getClass().getSimpleName();
269 }
270 return "sMOEA"; //N ON − N LS − 1
271 }
272
273 }
Listing 56.13: The truncation selection method given in Algorithm 28.8 on page 289.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.impl.algorithms.ea.selection;
10
11 import java.util.Arrays;
12 import java.util.Random;
13
14 import org.goataa.impl.OptimizationModule;
15 import org.goataa.impl.utils.Constants;
16 import org.goataa.impl.utils.Individual;
17 import org.goataa.spec.ISelectionAlgorithm;
18
19 /**
20 * An implementation of the truncation selection algorithm which unites
21  * both Algorithm 28.7 on page 289 and
22 * Algorithm 28.8 on page 289 from
23 * Section 28.4.2 in one class.
24 *
25 * @author Thomas Weise
26 */
27 public class TruncationSelection extends OptimizationModule implements
28 ISelectionAlgorithm {
29
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** the number of best individuals to select */
34 private final int k;
35
36 /** Create a new instance of the truncation selection algorithm */
37 public TruncationSelection() {
38 this(-1);
39 }
40
41 /**
42 * Create a new instance of the truncation selection algorithm
43 *
44 * @param ak
45 * the number of survivors
46 */
47 public TruncationSelection(final int ak) {
48 super();
49 this.k = ((ak > 0) ? ak : Integer.MAX_VALUE);
50 }
51
52 /**
53 * Perform the truncation selection as defined in
54 * Section 28.4.2.
55 *
56 * @param pop
57 * the population (see Definition D4.19 on page 90)
58 * @param popStart
59 * the index of the first individual in the population
60 * @param popCount
61 * the number of individuals to select from
62 * @param mate
63 * the mating pool (see Definition D28.10 on page 285)
64 * @param mateStart
65 * the first index to which the selected individuals should be
66 * copied
67 * @param mateCount
68 * the number of individuals to be selected
69 * @param r
70 * the random number generator
71 */
72 @Override
73 public void select(final Individual<?, ?>[] pop, final int popStart,
74 final int popCount, final Individual<?, ?>[] mate,
75 final int mateStart, final int mateCount, final Random r) {
76 int i, x, e;
77
78 Arrays.sort(pop, popStart, popStart + popCount,
79 Constants.FITNESS_SORT_ASC_COMPARATOR);
80
81 i = mateStart;
82 e = (i + mateCount);
83 while (i < e) {
84       x = Math.min(this.k, Math.min(e - i, popCount)); // e - i: slots left in the mating pool
85 System.arraycopy(pop, popStart, mate, i, x);
86 i += x;
87 }
88 }
89
90 /**
91 * Get the name of the optimization module
92 *
93 * @param longVersion
94 * true if the long name should be returned, false if the short
95 * name should be returned
96 * @return the name of the optimization module
97 */
98 @Override
99 public String getName(final boolean longVersion) {
100 if (longVersion) {
101 return this.getClass().getSimpleName();
102 }
103 return "Trunc"; //N ON − N LS − 1
104 }
105
106 /**
107 * Get the full configuration which holds all the data necessary to
108 * describe this object.
109 *
110 * @param longVersion
111 * true if the long version should be returned, false if the
112 * short version should be returned
113 * @return the full configuration
114 */
115 @Override
116 public String getConfiguration(final boolean longVersion) {
117 if ((this.k > 0) && (this.k < Integer.MAX_VALUE)) {
118 return "k=" + String.valueOf(this.k); //N ON − N LS − 1
119 }
120 return super.getConfiguration(longVersion);
121 }
122 }
Listing 56.14: The tournament selection method given in Algorithm 28.11 on page 298.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.impl.algorithms.ea.selection;
10
11 import java.util.Random;
12
13 import org.goataa.impl.OptimizationModule;
14 import org.goataa.impl.utils.Individual;
15 import org.goataa.spec.ISelectionAlgorithm;
16
17 /**
18 * The tournament selection algorithm with replacement as specified in
19 * Algorithm 28.11 on page 298 in
20 * Section 28.4.4.
21 *
22 * @author Thomas Weise
23 */
24 public class TournamentSelection extends OptimizationModule implements
25 ISelectionAlgorithm {
26
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /** the tournament size */
31 private final int k;
32
33 /**
34 * Create a new instance of the tournament selection algorithm with
35 * replacement
36 *
37 * @param ik
38 * the tournament size
39 */
40 public TournamentSelection(final int ik) {
41 super();
42 this.k = ik;
43 }
44
45 /**
46 * Perform the tournament selection as defined in
47 * Algorithm 28.11 on page 298.
48 *
49 * @param pop
50 * the population (see Definition D4.19 on page 90)
51 * @param popStart
52 * the index of the first individual in the population
53 * @param popCount
54 * the number of individuals to select from
55 * @param mate
56 * the mating pool (see Definition D28.10 on page 285)
57 * @param mateStart
58 * the first index to which the selected individuals should be
59 * copied
60 * @param mateCount
61 * the number of individuals to be selected
62 * @param r
63 * the random number generator
64 */
65 @Override
66 public void select(final Individual<?, ?>[] pop, final int popStart,
67 final int popCount, final Individual<?, ?>[] mate,
68 final int mateStart, final int mateCount, final Random r) {
69 Individual<?, ?> x, y;
70 int i, j;
71
72 for (i = (mateStart + mateCount); (--i) >= mateStart;) {
73 x = pop[popStart + r.nextInt(popCount)];
74 for (j = 2; j <= this.k; j++) {
75 y = pop[popStart + r.nextInt(popCount)];
76 if (y.v < x.v) {
77 x = y;
78 }
79 }
80 mate[i] = x;
81 }
82
83 }
84
85 /**
86 * Get the name of the optimization module
87 *
88 * @param longVersion
89 * true if the long name should be returned, false if the short
90 * name should be returned
91 * @return the name of the optimization module
92 */
93 @Override
94 public String getName(final boolean longVersion) {
95 if (longVersion) {
96 return this.getClass().getSimpleName();
97 }
98 return "Tour"; //N ON − N LS − 1
99 }
100
101 /**
102 * Get the full configuration which holds all the data necessary to
103 * describe this object.
104 *
105 * @param longVersion
106 * true if the long version should be returned, false if the
107 * short version should be returned
108 * @return the full configuration
109 */
110 @Override
111 public String getConfiguration(final boolean longVersion) {
112 return "k=" + String.valueOf(this.k); //N ON − N LS − 1
113 }
114 }
Listing 56.15: The random selection method given in Algorithm 28.15 on page 302.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.impl.algorithms.ea.selection;
10
11 import java.util.Random;
12
13 import org.goataa.impl.utils.Individual;
14 import org.goataa.spec.ISelectionAlgorithm;
15
16 /**
17  * A selection operator which randomly selects individuals as specified in
18 * Algorithm 28.15 on page 302. This operator should mainly be used
19 * for parental selection but not for survival selection.
20 *
21 * @author Thomas Weise
22 */
23 public class RandomSelection extends SelectionAlgorithm {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** the globally shared instance of random selection */
29 public static final ISelectionAlgorithm RANDOM_SELECTION = new RandomSelection();
30
31 /** Create a new instance of the random selection algorithm */
32 protected RandomSelection() {
33 super();
34 }
35
36 /**
37 * Perform the random selection as defined in
38 * Algorithm 28.15 on page 302.
39 *
40 * @param pop
41 * the population (see Definition D4.19 on page 90)
42 * @param popStart
43 * the index of the first individual in the population
44 * @param popCount
45 * the number of individuals to select from
46 * @param mate
47 * the mating pool (see Definition D28.10 on page 285)
48 * @param mateStart
49 * the first index to which the selected individuals should be
50 * copied
51 * @param mateCount
52 * the number of individuals to be selected
53 * @param r
54 * the random number generator
55 */
56 @Override
57 public final void select(final Individual<?, ?>[] pop,
58 final int popStart, final int popCount,
59 final Individual<?, ?>[] mate, final int mateStart,
60 final int mateCount, final Random r) {
61 int i;
62
63 for (i = (mateStart + mateCount); (--i) >= mateStart;) {
64 mate[i] = pop[popStart + r.nextInt(popCount)];
65 }
66
67 }
68
69 /**
70 * Get the name of the optimization module
71 *
72 * @param longVersion
73 * true if the long name should be returned, false if the short
74 * name should be returned
75 * @return the name of the optimization module
76 */
77 @Override
78 public String getName(final boolean longVersion) {
79 if (longVersion) {
80 return this.getClass().getSimpleName();
81 }
82 return "RND"; //N ON − N LS − 1
83 }
84 }
56.1.4.4.1 UMDA
Listing 56.21: The creator for the initial model to be used in UMDA.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.eda.umda;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.impl.OptimizationModule;
10 import org.goataa.spec.INullarySearchOperation;
11
12 /**
13 * The UMDA-style creation operation for models, as defined in
14 * Algorithm 34.3 on page 436, fills the initial probability vector with 0.5.
15 *
16 * @author Thomas Weise
17 */
18 public class UMDAModelCreator extends OptimizationModule implements
19 INullarySearchOperation<double[]> {
20 /** a constant required by Java serialization */
21 private static final long serialVersionUID = 1;
22
23 /** the dimension */
24 private final int dimension;
25
26 /**
27 * Instantiate the UMDA model creation operation
28 *
29 * @param dim
30 * the dimension of the search space
31 */
32 public UMDAModelCreator(final int dim) {
33 super();
34 this.dimension = dim;
35 }
36
37 /**
38 * Create a vector of all 0.5 values, as stated in Algorithm 34.3 on page 436
39 *
40 * @param r
41 * the random number generator
42 * @return the vector of 0.5 values
43 */
44 public final double[] create(final Random r) {
45 double[] d;
46
47 d = new double[this.dimension];
48 Arrays.fill(d, 0.5d);
49
50 return d;
51 }
52
53 /**
54 * Get the full configuration which holds all the data necessary to
55 * describe this object.
56 *
57 * @param longVersion
58 * true if the long version should be returned, false if the
59 * short version should be returned
60 * @return the full configuration
61 */
62 @Override
63 public String getConfiguration(final boolean longVersion) {
64 return ("dim=" + this.dimension); //N ON − N LS − 1
65 }
66 }
Here we discuss string-based search operators. Strings are very common search spaces, be they strings of bits or vectors of real numbers, and they can also be used to represent permutations of objects. One of the most common search spaces is the vector of real numbers, and here we introduce search operators for it.
Here we provide the implementation of Listing 55.2 on page 708 for real vectors, that is, a nullary search operation which creates random vectors whose elements are uniformly distributed in a given range.
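As a hedged sketch of such a nullary creation operation (the class and method names below are illustrative assumptions, not taken from the `org.goataa` package):

```java
import java.util.Random;

/**
 * Illustrative sketch: a nullary creation operation which fills a new
 * real vector of the given dimension with values drawn uniformly from
 * the range [min, max).
 */
public final class UniformRealVectorCreation {

  /** Create a random real vector with uniformly distributed elements. */
  public static double[] create(final int dim, final double min,
      final double max, final Random r) {
    final double[] g = new double[dim];
    for (int i = 0; i < dim; i++) {
      // nextDouble() is uniform in [0, 1); scale and shift into [min, max)
      g[i] = min + (r.nextDouble() * (max - min));
    }
    return g;
  }
}
```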
It is quite easy to define multiple different search operators for real vectors which adhere to
the interface given in Listing 55.3 on page 708.
56.2.1.1.2.1 Evolution Strategy Operators
Listing 56.25: Modify a real vector by adding normally distributed random numbers with a
given standard deviation.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
10
11 /**
12  * This is a mutation operation that performs mutation using normally
13 * distributed random numbers according to a given set of standard
14 * deviation parameters, as suggested in
15  * Section 30.4.1.2 on page 365. This operator is similar to, but
16  * more sophisticated than, the simple mutation method defined in
17  * Listing 56.28 on page 791.
18 *
19 * @author Thomas Weise
20 */
21 public final class DoubleArrayStdDevNormalMutation extends
22 RealVectorMutation {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /** the standard deviation to use */
28 private final double[] stddev;
29
30 /**
31 * Create a new real-vector mutation operation
32 *
33 * @param dim
34 * the dimension of the search space
35 * @param mi
36 * the minimum value of the allele ( Definition D4.4 on page 83) of
37 * a gene
38 * @param ma
39 * the maximum value of the allele of a gene
40 */
41 public DoubleArrayStdDevNormalMutation(final int dim, final double mi,
42 final double ma) {
43 super(mi, ma);
44 this.stddev = new double[dim];
45 Arrays.fill(this.stddev, 1d);
46 }
47
48 /**
49  * Perform a mutation operation that performs mutation using normally
50 * distributed random numbers according to a given set of standard
51 * deviation parameters, as suggested in
52 * Section 30.4.1.2 on page 365.
53 *
54 * @param g
55 * the existing genotype in the search space from which a
56 * slightly modified copy should be created
57 * @param r
58 * the random number generator
59 * @return a new genotype
60 */
61 @Override
62 public final double[] mutate(final double[] g, final Random r) {
63 final double[] gnew, stddevs;
64 double d;
65 int i;
66
67 i = g.length;
68
69 // create a new real vector of dimension n
70 gnew = new double[i];
71 stddevs = this.stddev;
72
73 // set each gene Definition D4.3 on page 82 of gnew to ...
74 for (; (--i) >= 0;) {
75 do {
76 // Use a normally distributed random number with a standard
77 // deviation as specified in the array stddev.
78 d = (g[i] + (r.nextGaussian() * stddevs[i]));
79 } while ((d < this.min) || (d > this.max));
80 gnew[i] = d;
81 }
82
83 return gnew;
84 }
85
86 /**
87 * Set the standard deviations
88 *
89 * @param s
90 * the standard deviations
91 */
92 public final void setStdDevs(final double[] s) {
93 System.arraycopy(s, 0, this.stddev, 0, this.stddev.length);
94 }
95
96 /**
97 * Set the standard deviations to a single, fixed value
98 *
99 * @param s
100 * the standard deviation
101 */
102 public final void setStdDev(final double s) {
103 Arrays.fill(this.stddev, s);
104 }
105
106 /**
107 * Get the name of the optimization module
108 *
109 * @param longVersion
110 * true if the long name should be returned, false if the short
111 * name should be returned
112 * @return the name of the optimization module
113 */
114 @Override
115 public String getName(final boolean longVersion) {
116 if (longVersion) {
117 return super.getName(true);
118 }
119 return "DA-SDNM"; //N ON − N LS − 1
120 }
121 }
Listing 56.26: A strategy mutation operator for Evolution Strategy.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
9
10 /**
11 * This operator realizes the mutation strength strategy vector mutation
12 * for Evolution Strategies ( Chapter 30 on page 359) as
13 * specified in Algorithm 30.8 on page 371,
14 * Equation 30.12, and Equation 30.13.
15 *
16 * @author Thomas Weise
17 */
18 public final class DoubleArrayStrategyLogNormalMutation extends
19 RealVectorMutation {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /**
25 * Create a new real-vector mutation operation
26 *
27 * @param ma
28 * the maximum value of the allele of a gene
29 */
30 public DoubleArrayStrategyLogNormalMutation(final double ma) {
31 super(Double.MIN_VALUE, ma);
32 }
33
34 /**
35 * Perform the mutation strength strategy vector mutation for Evolution
36 * Strategies ( Chapter 30 on page 359) as specified in
37 * Algorithm 30.8 on page 371.
38 *
39 * @param g
40 * the existing genotype in the search space from which a
41 * slightly modified copy should be created
42 * @param r
43 * the random number generator
44 * @return a new genotype
45 */
46 @Override
47 public final double[] mutate(final double[] g, final Random r) {
48 final double[] gnew;
49 final double t0, t, c, nu;
50 int i;
51 double x;
52
53 i = g.length;
54
55 c = 1d;
56
57 // set tau0 according to Equation 30.12 on page 372
58 t0 = (c / Math.sqrt(2 * i));
59
60 // set tau according to Equation 30.13 on page 372
61 t = (c / Math.sqrt(2 * Math.sqrt(i)));
62
63 nu = Math.exp(t0 * r.nextGaussian());
64 gnew = g.clone();
65
66 // set each gene Definition D4.3 on page 82 of gnew to ...
67 for (; (--i) >= 0;) {
68 do {
69 x = gnew[i] * nu * Math.exp(t * r.nextGaussian());
70 } while ((x < this.min) || (x > this.max));
71 gnew[i] = x;
72 }
73
74 return gnew;
75 }
76
77 /**
78 * Get the name of the optimization module
79 *
80 * @param longVersion
81 * true if the long name should be returned, false if the short
82 * name should be returned
83 * @return the name of the optimization module
84 */
85 @Override
86 public String getName(final boolean longVersion) {
87 if (longVersion) {
88 return super.getName(true);
89 }
90 return "DA-SLN"; //N ON − N LS − 1
91 }
92 }
56.2.1.1.2.2 Operators Suggested by Students The next two operators introduced here stem from the discussion in my lecture: Listing 56.27 presents an operator which performs small uniformly distributed changes to the vector elements, whereas Listing 56.28 performs a similar modification using normally distributed random numbers.
Listing 56.27: Modify a real vector by adding uniformly distributed random numbers.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
9
10 /**
11 * A unary search operation (see Section 4.2 on page 83) for
12 * real vectors in a bounded subspace of the real vectors of dimension n.
13  * This operation takes an existing genotype (see
14 * Definition D4.2 on page 82) and adds a small uniformly distributed (see
15 * Section 53.4.1 on page 676,
16 * Section 53.3.1 on page 668) random number to each of its
17 * genes (see Definition D4.3 on page 82).
18 *
19 * @author Thomas Weise
20 */
21 public final class DoubleArrayAllUniformMutation extends
22 RealVectorMutation {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /**
28 * Create a new real-vector mutation operation
29 *
30 * @param mi
31 * the minimum value of the allele ( Definition D4.4 on page 83) of
32 * a gene
33 * @param ma
34 * the maximum value of the allele of a gene
35 */
36 public DoubleArrayAllUniformMutation(final double mi, final double ma) {
37 super(mi, ma);
38 }
39
40 /**
41  * This is a unary search operation for vectors of real numbers. It
42 * takes one existing genotype g (see Definition D4.2 on page 82) from the
43 * search space and produces one new genotype. This new element is a
44 * slightly modified version of g which is obtained by adding uniformly
45 * distributed random numbers to its elements.
46 *
47 * @param g
48 * the existing genotype in the search space from which a
49 * slightly modified copy should be created
50 * @param r
51 * the random number generator
52 * @return a new genotype
53 */
54 @Override
55 public final double[] mutate(final double[] g, final Random r) {
56 final double[] gnew;
57 final double strength;
58 int i;
59
60 i = g.length;
61
62 // create a new real vector of dimension n
63 gnew = new double[i];
64
65 // the mutation strength: here we use a constant which is small
66 // compared to the range min...max
67 strength = 0.001d * (this.max - this.min);
68
69 // set each gene Definition D4.3 on page 82 of gnew to ...
70 for (; (--i) >= 0;) {
71 do {
72 // the original allele ( Definition D4.4 on page 83) of the gene plus a
73 // random number uniformly distributed
74 // Section 53.4.1 on page 676 in
75 // [-0.5*strength,+0.5*strength]
76 gnew[i] = g[i] + ((r.nextDouble() - 0.5) * strength);
77 // and repeat this until the new allele falls into the specified
78 // boundaries
79 } while ((gnew[i] < this.min) || (gnew[i] > this.max));
80 }
81
82 return gnew;
83 }
84
85 /**
86 * Get the name of the optimization module
87 *
88 * @param longVersion
89 * true if the long name should be returned, false if the short
90 * name should be returned
91 * @return the name of the optimization module
92 */
93 @Override
94 public String getName(final boolean longVersion) {
95 if (longVersion) {
96 return super.getName(true);
97 }
98 return "DA-AUM"; //N ON − N LS − 1
99 }
100 }
Listing 56.28: Modify a real vector by adding normally distributed random numbers.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
9
10 /**
11 * A unary search operation (see Section 4.2 on page 83) for
12 * real vectors in a bounded subspace of the real vectors of dimension n.
13  * This operation takes an existing genotype (see
14 * Definition D4.2 on page 82) and adds a small normally distributed random
15 * number to each of its genes (see Definition D4.3 on page 82). There are two
16 * main differences between normally distributed random numbers and
17 * uniformly distributed ones: 1) The normal distribution (see
18 * Section 53.4.2 on page 678) is unbounded whereas the uniform
19 * distribution has limits to each side (see
20 * Section 53.4.1 on page 676 and
21  * Section 53.3.1 on page 668). 2) The normal distribution gives
22  * elements close to its expected value (see
23  * Section 53.2.2 on page 661) a higher probability and elements
24  * distant from it a lower probability, whereas all possible samples
25  * have the same probability under a uniform distribution.
26 *
27 * @author Thomas Weise
28 */
29 public final class DoubleArrayAllNormalMutation extends RealVectorMutation {
30
31 /** a constant required by Java serialization */
32 private static final long serialVersionUID = 1;
33
34 /**
35 * Create a new real-vector mutation operation
36 *
37 * @param mi
38 * the minimum value of the allele ( Definition D4.4 on page 83) of
39 * a gene
40 * @param ma
41 * the maximum value of the allele of a gene
42 */
43 public DoubleArrayAllNormalMutation(final double mi, final double ma) {
44 super(mi, ma);
45 }
46
47 /**
48  * This is a unary search operation for vectors of real numbers. It
49 * takes one existing genotype g (see Definition D4.2 on page 82) from the
50 * search space and produces one new genotype. This new element is a
51 * slightly modified version of g which is obtained by adding normally
52 * distributed random numbers to its elements.
53 *
54 * @param g
55 * the existing genotype in the search space from which a
56 * slightly modified copy should be created
57 * @param r
58 * the random number generator
59 * @return a new genotype
60 */
61 @Override
62 public final double[] mutate(final double[] g, final Random r) {
63 final double[] gnew;
64 final double strength;
65 int i;
66
67 i = g.length;
68
69 // create a new real vector of dimension n
70 gnew = new double[i];
71
72 // the mutation strength: here we use a constant which is small
73 // compared to the range min...max
74 strength = 0.001d * (this.max - this.min);
75
76 // set each gene Definition D4.3 on page 82 of gnew to ...
77 for (; (--i) >= 0;) {
78 do {
79 // the original allele ( Definition D4.4 on page 83) of the gene plus a
80 // random number normally distributed
81        // Section 53.4.2 on page 678 with N(0, strength^2)
82 gnew[i] = g[i] + (r.nextGaussian() * strength);
83 // and repeat this until the new allele falls into the specified
84 // boundaries
85 } while ((gnew[i] < this.min) || (gnew[i] > this.max));
86 }
87
88 return gnew;
89 }
90
91 /**
92 * Get the name of the optimization module
93 *
94 * @param longVersion
95 * true if the long name should be returned, false if the short
96 * name should be returned
97 * @return the name of the optimization module
98 */
99 @Override
100 public String getName(final boolean longVersion) {
101 if (longVersion) {
102 return super.getName(true);
103 }
104 return "DA-NAM"; //N ON − N LS − 1
105 }
106 }
This package contains n-ary search operations for real-valued strings, i.e., implementations of the interface given in Listing 55.6 on page 711 for arrays of real numbers.
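As a hedged illustration of such an n-ary operation for the binary case, the following sketch implements intermediate recombination, where each offspring gene is a random convex combination of the corresponding parental alleles. The class name is a hypothetical example, not an operator from the package.

```java
import java.util.Random;

/**
 * Illustrative sketch: intermediate recombination for real vectors.
 * Each offspring gene is a random convex combination of the two
 * corresponding parental alleles and hence always lies between them.
 */
public final class RealVectorIntermediateRecombination {

  /** Recombine two equally long real vectors gene by gene. */
  public static double[] recombine(final double[] a, final double[] b,
      final Random r) {
    final double[] gnew = new double[a.length];
    for (int i = 0; i < a.length; i++) {
      final double alpha = r.nextDouble(); // weight in [0, 1)
      gnew[i] = (alpha * a[i]) + ((1d - alpha) * b[i]);
    }
    return gnew;
  }
}
```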
In this package we provide search operations for bit strings as used by the Genetic Algorithms discussed in Chapter 29.
A trivial implementation of the search operations for bit strings, intended for teaching
purposes only. Bit strings can be stored much more compactly in an array of byte, where we
pack eight bits into each byte. If we use an array of boolean instead, we waste a lot of
memory... but for teaching purposes, it is OK.
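The packed representation mentioned above could look like the following sketch. The class and method names here are our own illustration, not part of the package:

```java
// Hypothetical helper (names are ours): packing a bit string into a byte[]
// so that each byte stores eight bits instead of one boolean per array slot.
public final class BitPacking {

  /** number of bytes needed to store n bits */
  public static int bytesFor(final int n) {
    return (n + 7) >>> 3;
  }

  /** set bit i of the packed string g to the given value */
  public static void set(final byte[] g, final int i, final boolean v) {
    if (v) {
      g[i >>> 3] |= (1 << (i & 7)); // byte index i/8, bit index i%8
    } else {
      g[i >>> 3] &= ~(1 << (i & 7));
    }
  }

  /** read bit i of the packed string g */
  public static boolean get(final byte[] g, final int i) {
    return ((g[i >>> 3] >>> (i & 7)) & 1) != 0;
  }
}
```

The boolean-array version in the listings trades this eight-fold memory overhead for readability.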
56.2.1.2.1.1 Nullary Search Operations: create: ∅ ↦ G The bit string creation operations
(nullary search operators).
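Such a nullary creation operator can be sketched in a few lines (a minimal version of our own, not the package's class): each of the n genes is drawn uniformly at random.

```java
import java.util.Random;

// A minimal sketch of the nullary bit string creation operator: every gene
// is set to true or false with equal probability.
public final class BitStringCreation {
  public static boolean[] create(final int n, final Random r) {
    final boolean[] g = new boolean[n];
    for (int i = n; (--i) >= 0;) {
      g[i] = r.nextBoolean(); // each of the 2^n strings is equally likely
    }
    return g;
  }
}
```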
Here we present creation operations for permutations. Such nullary operators create integer
arrays of n elements, each of which is a unique number from 0 to n − 1. They should
furthermore guarantee that every possible permutation has the same chance of being
created. Create a new permutation according to Algorithm 29.2.
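The uniformity requirement can be met with a Fisher-Yates shuffle of the canonical permutation. The following is our own minimal sketch in the spirit of the operator described above, not the package's class:

```java
import java.util.Random;

// A sketch of the nullary permutation creation operator: fill the array with
// 0..n-1, then apply a Fisher-Yates shuffle, which gives each of the n!
// permutations exactly the same probability of being produced.
public final class PermutationCreation {
  public static int[] create(final int n, final Random r) {
    final int[] g = new int[n];
    for (int i = 0; i < n; i++) {
      g[i] = i; // the canonical permutation 0, 1, ..., n-1
    }
    for (int i = n; i > 1; i--) {
      // swap position i-1 with a uniformly chosen position in 0..i-1
      final int j = r.nextInt(i);
      final int t = g[i - 1];
      g[i - 1] = g[j];
      g[j] = t;
    }
    return g;
  }
}
```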
Here we provide mutation operations for permutations. Such unary operators take one
integer array as parameter and return a slightly modified version. The integer arrays have
length n and represent permutations of the first n natural numbers, i.e., from 0 to n−1. The
mutation operations hence must ensure that no duplicate alleles occur and that no number
is lost. They may achieve this by simply swapping the elements of the arrays. Listing 56.38
switches elements according to Algorithm 29.4 on page 333 whereas Listing 56.39 is an
implementation of Algorithm 29.5 on page 334.
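The single-swap idea behind these operators can be sketched as follows. This is our own minimal version for illustration, not one of the two listings named above:

```java
import java.util.Random;

// A minimal sketch of a swap-based permutation mutation: copy the parent and
// exchange two distinct, randomly chosen positions. Since complete alleles
// merely change places, no number is duplicated and none is lost.
public final class PermutationSwapMutation {
  public static int[] mutate(final int[] g, final Random r) {
    final int[] gnew = g.clone(); // never modify the parent genotype
    final int i = r.nextInt(gnew.length);
    int j;
    do { // draw a second index different from the first
      j = r.nextInt(gnew.length);
    } while (j == i);
    final int t = gnew[i];
    gnew[i] = gnew[j];
    gnew[j] = t;
    return gnew;
  }
}
```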
Listing 56.40: The base class for search operations for trees.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.searchSpaces.trees.Node;
10 import org.goataa.impl.searchSpaces.trees.NodeType;
11 import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
12
13 /**
14 * A base class for tree-based search operations
15 *
16 * @author Thomas Weise
17 */
18 public class TreeOperation extends OptimizationModule {
19 /** a constant required by Java serialization */
20 private static final long serialVersionUID = 1;
21
22 /** the maximum depth */
23 public final int maxDepth;
24
25 /**
26 * Create a new tree operation
27 *
28 * @param md
29 * the maximum tree depth
30 */
31 protected TreeOperation(final int md) {
32 super();
33 this.maxDepth = Math.max(2, md);
34 }
35
36 /**
37 * Get the full configuration which holds all the data necessary to
38 * describe this object.
39 *
40 * @param longVersion
41 * true if the long version should be returned, false if the
42 * short version should be returned
43 * @return the full configuration
44 */
45 @Override
46 public String getConfiguration(final boolean longVersion) {
47 return (((longVersion) ? "maxDepth" : //$NON-NLS-1$
48 "md") + this.maxDepth); //$NON-NLS-1$
49 }
50
51 /**
52 * Create a sub-tree of the specified maximum depth. This function
53 * facilitates both an implementation of the full method
54 * (Algorithm 31.1 on page 390) and one of the grow method
55 * (Algorithm 31.2 on page 391), which can be chosen by setting the
56 * parameter 'full' appropriately. It can be used to create trees during
57 * the random population initialization phase or during mutation steps.
58 *
59 * @param types
60 * the node types available for creating the tree
61 * @param maxDepth
62 * the maximum depth of the tree
63 * @param full
64 * Should we construct a sub-tree according to the full method?
65 * If full is false, grow is used.
66 * @param r
67 * the random number generator
68 * @return the new tree
69 * @param <NT>
70 * the node type
71 */
72 @SuppressWarnings("unchecked")
73 public static final <NT extends Node<NT>> NT createTree(
74 final NodeTypeSet<NT> types, final int maxDepth, final boolean full,
75 final Random r) {
76 NodeType<NT, NT> t;
77 NT[] x;
78 int i;
79
80 t = null;
81 if (maxDepth <= 1) {
82 t = types.randomTerminalType(r);
83 } else {
84 if (full) {
85 t = types.randomNonTerminalType(r);
86 }
87 if (t == null) {
88 t = types.randomType(r);
89 }
90 }
91 if (t.isTerminal()) {
92 return types.randomTerminalType(r).instantiate(null, r);
93 }
94
95 i = t.getChildCount();
96 x = ((NT[]) (new Node[i]));
97 for (; (--i) >= 0;) {
98 x[i] = createTree(t.getChildTypes(i), maxDepth - 1, full, r);
99 }
100 return t.instantiate(x, r);
101 }
102
103 }
Listing 56.41: A utility class to construct random paths in a tree, i.e., to select random
nodes.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.searchSpaces.trees.Node;
10
11 /**
12 * This class allows us to select a path in a tree. When applying a
13 * reproduction operation, it is not only necessary to find a tree node
14 * within the tree, but also to find the complete path to this node. This
15 * class allows us to perform such a selection. It also allows us to
16 * replace the end node of such a path, hence creating a completely new
17 * tree: Since genotypes must not directly be changed (because of possible
18 * side-effects if an individual is selected more than once), all
19 * modifications (such as the replacement of a node) will lead to the
20 * creation of new trees.
21 *
22 * @param <NT>
23 * the node type
24 * @author Thomas Weise
25 */
26 public class TreePath<NT extends Node<NT>> extends OptimizationModule {
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /** the path */
31 private NT[] path;
32
33 /** the path indexes */
34 private int[] pathidx;
35
36 /** the length */
37 private int len;
38
39 /** Create a tree path */
40 @SuppressWarnings("unchecked")
41 public TreePath() {
42 super();
43 this.path = ((NT[]) (new Node[16]));
44 this.pathidx = new int[16];
45 }
46
47 /**
48 * Get the path length
49 *
50 * @return the path length
51 */
52 public final int size() {
53 return this.len;
54 }
55
56 /**
57 * Get the element at the specified index
58 *
59 * @param index
60 * the index
61 * @return the element
62 */
63 public final NT get(final int index) {
64 return this.path[index];
65 }
66
67 /**
68 * Get the index of the next child in the path
69 *
70 * @param index
71 * the index of the current parent
72 * @return the index of the next child in the path
73 */
74 public final int getChildIndex(final int index) {
75 return this.pathidx[index];
76 }
77
78 /**
79 * Create a random path through the tree. Each node in the tree is
80 * selected with exactly the same probability.
81 *
82 * @param node
83 * the node
84 * @param r
85 * the randomizer
86 */
87 public final void randomPath(final NT node, final Random r) {
88 int i, w, w2;
89 NT cur, next;
90
91 this.len = 0;
92 cur = node;
93 w = r.nextInt(node.getWeight());
94
95 for (;;) {
96
97 if (w <= 0) {
98 this.addPath(cur, -1);
99 return;
100 }
101
102 // iterate over the children
103 innerLoop: for (i = cur.size(); (--i) >= 0;) {
104 next = cur.get(i);
105
106 w2 = next.getWeight();
107 if (w2 >= w) {
108 this.addPath(cur, i);
109 cur = next;
110 break innerLoop;
111 }
112 w -= w2;
113 }
114
115 // account for the currently selected node
116 w--;
117 }
118 }
119
120 /**
121 * An internal method used to add an element to the path.
122 *
123 * @param node
124 * the node
125 * @param pi
126 * the parent index
127 */
128 @SuppressWarnings("unchecked")
129 private final void addPath(final NT node, final int pi) {
130 NT[] ppp;
131 int l;
132 int[] idx;
133
134 ppp = this.path;
135 idx = this.pathidx;
136 l = this.len;
137
138 if (l >= ppp.length) {
139 ppp = (NT[]) (new Node[l << 1]);
140 System.arraycopy(this.path, 0, ppp, 0, l);
141 this.path = ppp;
142
143 idx = new int[l << 1];
144 System.arraycopy(this.pathidx, 0, idx, 0, l);
145 this.pathidx = idx;
146 }
147
148 idx[l] = pi;
149 ppp[l++] = node;
150
151 this.len = l;
152 }
153
154 /**
155 * Replace the end of this path with the new node. Since tree nodes are
156 * immutable, this will result in the creation of a completely new tree
157 * and a new root node. The path is updated while this operation is
158 * performed, i.e., it is still valid afterwards.
159 *
160 * @param newNode
161 * the new node
162 * @return the resulting new root node of the path
163 */
164 public final NT replaceEnd(final NT newNode) {
165 NT[] ppp;
166 int l;
167 int[] idx;
168 NT x;
169
170 l = this.len;
171 ppp = this.path;
172 idx = this.pathidx;
173 x = newNode;
174
175 l = this.len;
176 ppp[--l] = x;
177 for (; (--l) >= 0;) {
178 ppp[l] = x = ppp[l].setChild(x, idx[l]);
179 }
180
181 return x;
182 }
183 }
Tree creation operations as discussed in Section 31.3.2 on page 389 (but for strongly-typed
trees).
Tree crossover operations as discussed in Section 31.3.5 on page 398 (but for strongly-typed
trees).
Listing 56.44: A binary search operation for trees.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees.binary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.trees.TreeOperation;
9 import org.goataa.impl.searchOperations.trees.TreePath;
10 import org.goataa.impl.searchSpaces.trees.Node;
11 import org.goataa.spec.IBinarySearchOperation;
12
13 /**
14 * A simple recombination operation for a given tree which implants a
15 * randomly chosen subtree from one parent into the other parent, thereby
16 * replacing a randomly picked node in the second parent. This operation
17 * basically proceeds according to the ideas discussed in
18 * Section 31.3.5 on page 398 with the extension that
19 * it also respects the type system of the strongly-typed GP system.
20 *
21 * @author Thomas Weise
22 */
23 public class TreeRecombination extends TreeOperation implements
24 IBinarySearchOperation<Node<?>> {
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** the internal path 1 */
29 private final TreePath<?> pa1;
30
31 /** the internal path 2 */
32 private final TreePath<?> pa2;
33
34 /**
35 * Create a new tree recombination operation
36 *
37 * @param md
38 * the maximum tree depth
39 */
40 @SuppressWarnings("unchecked")
41 public TreeRecombination(final int md) {
42 super(md);
43 this.pa1 = new TreePath();
44 this.pa2 = new TreePath();
45 }
46
47 /**
48 * This is the binary search operation. It takes two existing genotypes
49 * p1 and p2 (see Definition D4.2 on page 82) from the genome and produces
50 * one new element in the search space. There are two basic assumptions
51 * about this operator: 1) Its input elements are good because they have
52 * previously been selected. 2) It is somehow possible to combine these
53 * good traits and hence, to obtain a single individual which unites them
54 * and thus, has even better overall qualities than its parents. The
55 * original underlying idea of this operation is the
56 * "Building Block Hypothesis" (see
57 * Section 29.5.5 on page 346) for which, so far, not much
58 * evidence has been found. The hypothesis
59 * "Genetic Repair and Extraction" (see Section 29.5.6 on page 346)
60 * has been developed as an alternative to explain the positive aspects
61 * of binary search operations such as recombination.
62 *
63 * @param p1
64 * the first "parent" genotype
65 * @param p2
66 * the second "parent" genotype
67 * @param r
68 * the random number generator
69 * @return a new genotype
70 */
71 @SuppressWarnings("unchecked")
72 public Node<?> recombine(final Node<?> p1, final Node<?> p2,
73 final Random r) {
74 TreePath pt1, pt2;
75 int trials, i;
76 Node<?> e1, e2;
77
78 pt1 = this.pa1;
79 pt2 = this.pa2;
80
81 outer: for (trials = 100; trials > 0; trials--) {
82 pt1.randomPath(p1, r);
83 pt2.randomPath(p2, r);
84
85 i = pt1.size();
86 if (i <= 1) {
87 continue outer;
88 }
89
90 e2 = pt2.get(pt2.size() - 1);
91 i -= 2;
92 e1 = pt1.get(i);
93 if (((i + e2.getHeight() + 1) < this.maxDepth)
94 && e1.getType().getChildTypes(pt1.getChildIndex(i))
95 .containsType(e2.getType())) {
96 return pt1.replaceEnd(e2);
97 }
98 }
99
100 return (r.nextBoolean() ? p1 : p2);
101 }
102 }
Data structure to support the synthesis of strongly-typed trees as discussed in Section 31.3.
56.6 Comparator
Implementations of the individual comparator interface for multi-objective optimization as
introduced in Section 3.5 on page 75.
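The core of such comparators is Pareto dominance, which for minimization could be sketched as follows. The class name, method signature, and return-value contract are our own assumptions for illustration, not necessarily those of the package's interface:

```java
// A sketch of a Pareto-dominance comparison for minimization: f1 dominates
// f2 if it is no worse in every objective and strictly better in at least
// one. Returns a negative value if f1 dominates f2, a positive value if f2
// dominates f1, and 0 if the two vectors are equal or incomparable.
public final class ParetoComparison {
  public static int compare(final double[] f1, final double[] f2) {
    boolean firstBetter = false, secondBetter = false;
    for (int i = f1.length; (--i) >= 0;) {
      if (f1[i] < f2[i]) {
        firstBetter = true; // f1 strictly better in objective i
      } else if (f1[i] > f2[i]) {
        secondBetter = true; // f2 strictly better in objective i
      }
    }
    if (firstBetter && !secondBetter) {
      return -1;
    }
    if (secondBetter && !firstBetter) {
      return 1;
    }
    return 0; // equal or mutually non-dominating
  }
}
```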
Listing 56.60: A small and handy class for computing some useful statistics.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import java.util.Arrays;
7
8 import org.goataa.impl.OptimizationModule;
9
10 /**
11 * This class helps us to collect statistics
12 *
13 * @author Thomas Weise
14 */
15 public class BufferedStatistics extends OptimizationModule {
16
17 /** a constant required by Java serialization */
18 private static final long serialVersionUID = 1;
19
20 /** the data */
21 private double[] data;
22
23 /** the length */
24 private int len;
25
26 /** did we calculate the statistics? */
27 private boolean calculated;
28
29 /** the mean */
30 private double mean;
31
32 /** the median */
33 private double med;
34
35 /** the minimum */
36 private double min;
37
38 /** the maximum */
39 private double max;
40
41 /**
42 * Create a new statistics object
43 */
44 public BufferedStatistics() {
45 super();
46 this.data = new double[128];
47 this.clear();
48 }
49
50 /**
51 * Add a double to the buffer
52 *
53 * @param d
54 * the double
55 */
56 public final void add(final double d) {
57 double[] x;
58 int l;
59
60 this.calculated = false;
61 x = this.data;
62 l = this.len;
63 if (l >= x.length) {
64 x = new double[l << 1];
65 System.arraycopy(this.data, 0, x, 0, l);
66 this.data = x;
67 }
68
69 x[l] = d;
70 this.len = (l + 1);
71 }
72
73 /**
74 * Calculate the statistics
75 */
76 private final void calculate() {
77 double[] x;
78 int l;
79 double s;
80
81 if (this.calculated) {
82 return;
83 }
84
85 x = this.data;
86 l = this.len;
87 if (l <= 0) {
88 this.max = Double.NaN;
89 this.min = Double.NaN;
90 this.mean = Double.NaN;
91 this.med = Double.NaN;
92 } else {
93 Arrays.sort(x, 0, l);
94 this.min = x[0];
95 this.max = x[l - 1];
96
97 if ((l & 1) == 0) {
98 this.med = (0.5d * (x[l >>> 1] + x[(l >>> 1) - 1]));
99 } else {
100 this.med = x[l >>> 1];
101 }
102
103 s = 0d;
104 for (; (--l) >= 0;) {
105 s += x[l];
106 }
107 this.mean = (s / this.len);
108 }
109
110 this.calculated = true;
111 }
112
113 /**
114 * Get the minimum of the collected values
115 *
116 * @return the minimum value
117 */
118 public final double min() {
119 this.calculate();
120 return this.min;
121 }
122
123 /**
124 * Get the maximum of the collected values
125 *
126 * @return the maximum value
127 */
128 public final double max() {
129 this.calculate();
130 return this.max;
131 }
132
133 /**
134 * Get the median of the collected values
135 *
136 * @return the median value
137 */
138 public final double median() {
139 this.calculate();
140 return this.med;
141 }
142
143 /**
144 * Clear this statistics record
145 */
146 public final void clear() {
147 this.len = 0;
148 this.calculated = false;
149 }
150
151 /**
152 * Get the mean of the collected values
153 *
154 * @return the mean value
155 */
156 public final double mean() {
157 this.calculate();
158 return this.mean;
159 }
160
161 /**
162 * Get the full configuration which holds all the data necessary to
163 * describe this object.
164 *
165 * @param longVersion
166 * true if the long version should be returned, false if the
167 * short version should be returned
168 * @return the full configuration
169 */
170 @Override
171 public String getConfiguration(final boolean longVersion) {
172 StringBuilder sb;
173
174 sb = new StringBuilder();
175 sb.append("min="); //$NON-NLS-1$
176 sb.append(TextUtils.formatNumberSameWidth(this.min()));
177 sb.append(longVersion ? ", median=" : " med=");//$NON-NLS-1$//$NON-NLS-2$
178 sb.append(TextUtils.formatNumberSameWidth(this.median()));
179 sb.append(longVersion ? ", mean=" : " avg=");//$NON-NLS-1$//$NON-NLS-2$
180 sb.append(TextUtils.formatNumberSameWidth(this.mean()));
181 sb.append(longVersion ? ", max=" : " max=");//$NON-NLS-1$//$NON-NLS-2$
182 sb.append(TextUtils.formatNumberSameWidth(this.max()));
183
184 return sb.toString();
185 }
186
187 /**
188 * Get the name of the optimization module
189 *
190 * @param longVersion
191 * true if the long name should be returned, false if the short
192 * name should be returned
193 * @return the name of the optimization module
194 */
195 @Override
196 public String getName(final boolean longVersion) {
197 if (longVersion) {
198 return super.getName(true);
199 }
200 return "stat"; //$NON-NLS-1$
201 }
202 }
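The median convention used in calculate() above can be illustrated in isolation. The following is a stand-alone sketch of that rule, not the package class itself:

```java
import java.util.Arrays;

// A stand-alone illustration of the median rule in BufferedStatistics:
// sort the buffered values, then take the single middle element for odd
// lengths and the mean of the two middle elements for even lengths.
public final class MedianDemo {
  public static double median(final double[] values, final int len) {
    final double[] x = Arrays.copyOf(values, len); // work on a copy
    Arrays.sort(x);
    if ((len & 1) == 0) { // even length: average the two middle values
      return 0.5d * (x[len >>> 1] + x[(len >>> 1) - 1]);
    }
    return x[len >>> 1]; // odd length: the single middle value
  }
}
```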
Here we provide some benchmark problems for real-valued continuous vector spaces.
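One classical member of this family is the sphere function f(x) = Σ x_i², which also appears as f=Sphere in the experiment output below. A minimal sketch (our own, for illustration):

```java
// The sphere benchmark function f(x) = sum of x[i]^2. It is unimodal and
// has its global optimum at x = (0, ..., 0) with objective value 0.
public final class Sphere {
  public static double compute(final double[] x) {
    double s = 0d;
    for (int i = x.length; (--i) >= 0;) {
      s += (x[i] * x[i]);
    }
    return s;
  }
}
```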
57.1.1.2 Experiments
60 min=1.68E-110 med=1.56E-60 avg=3.08E-11 max=2.56E-09 ES((10/10+200), o1=SDNM([-10,10]^2), r)
61 min=4.84E-272 med=1.35E-257 avg=1.17E-251 max=7.21E-250 ES((10/10,200), o1=SDNM([-10,10]^2), r)
62 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), r)
63 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), r)
64 min=2.58E-137 med=1.88E-89 avg=7.19E-40 max=7.19E-38 ES((20/10+50), o1=SDNM([-10,10]^2), r)
65 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), r)
66 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), r)
67 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), r)
68 min=1.15E-122 med=7.92E-82 avg=1.21E-32 max=1.21E-30 ES((20/10+100), o1=SDNM([-10,10]^2), r)
69 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), r)
70 min=1.51E-321 med=2.06E-310 avg=9.43E-301 max=5.23E-299 ES((20/2+200), o1=SDNM([-10,10]^2), r)
71 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), r)
72 min=1.73E-93 med=3.59E-61 avg=4.60E-25 max=4.60E-23 ES((20/10+200), o1=SDNM([-10,10]^2), r)
73 min=1.18E-237 med=5.92E-231 avg=1.38E-225 max=1.19E-223 ES((20/10,200), o1=SDNM([-10,10]^2), r)
74 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=Sphere, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
75 min=0.00E00 med=0.00E00 avg=1.32E-161 max=1.32E-159 DE-1(f=Sphere, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
76 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=Sphere, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
77 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=Sphere, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
78 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=Sphere, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
79 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=Sphere, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
80 min=3.94E-219 med=8.33E-216 avg=3.42E-213 max=1.47E-211 DE-1(f=Sphere, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
81 min=9.03E-275 med=1.66E-269 avg=5.76E-267 max=1.99E-265 DE-1(f=Sphere, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
82 min=1.46E-243 med=5.73E-240 avg=6.28E-237 max=5.39E-235 DE-1(f=Sphere, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
83 min=2.81E-204 med=2.98E-200 avg=1.13E-198 max=4.94E-197 DE-1(f=Sphere, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
84 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=Sphere, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
85 min=1.79E-265 med=1.23E-259 avg=1.41E-257 max=3.95E-256 DE-1(f=Sphere, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
86 min=1.73E-111 med=1.26E-108 avg=9.40E-108 max=3.69E-106 DE-1(f=Sphere, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
87 min=6.92E-140 med=6.38E-136 avg=8.48E-135 max=2.66E-133 DE-1(f=Sphere, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
88 min=3.68E-123 med=6.74E-121 avg=1.14E-119 max=6.96E-118 DE-1(f=Sphere, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
89 min=1.67E-103 med=2.59E-101 avg=2.08E-100 max=6.16E-99 DE-1(f=Sphere, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
90 min=4.08E-170 med=8.84E-167 avg=1.66E-165 max=7.69E-164 DE-1(f=Sphere, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
91 min=1.06E-133 med=8.09E-131 avg=5.89E-130 max=1.08E-128 DE-1(f=Sphere, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
92 min=1.09E-05 med=1.07E01 avg=1.68E01 max=7.14E01 RW(o1=NM([-10,10]^2), r)
93 min=1.18E00 med=3.09E01 avg=3.78E01 max=9.14E01 RW(o1=UM([-10,10]^2), r)
94 min=9.36E-06 med=8.16E-04 avg=1.16E-03 max=9.55E-03 RS(f=Sphere, r)
95
96
97 SumCosLn
98
99 min=9.30E-13 med=6.37E-10 avg=9.17E-10 max=5.52E-09 HC(o1=NM([-10,10]^2), f=f1, r)
100 min=8.60E-13 med=9.58E-11 avg=1.44E-10 max=7.21E-10 HC(o1=UM([-10,10]^2), f=f1, r)
101 min=3.14E-09 med=1.28E-04 avg=4.20E-02 max=3.55E-01 SA(o1=NM([-10,10]^2), f=f1, r, log(1))
102 min=3.66E-02 med=4.21E-01 avg=4.16E-01 max=7.06E-01 SA(o1=UM([-10,10]^2), f=f1, r, log(1))
103 min=3.14E-09 med=1.28E-04 avg=4.20E-02 max=3.55E-01 SA(o1=NM([-10,10]^2), f=f1, r, log(1))
104 min=3.66E-02 med=4.21E-01 avg=4.16E-01 max=7.06E-01 SA(o1=UM([-10,10]^2), f=f1, r, log(1))
105 min=4.21E-09 med=1.02E-07 avg=1.35E-07 max=4.62E-07 SA(o1=NM([-10,10]^2), f=f1, r, log(0.1))
106 min=1.69E-09 med=4.40E-07 avg=5.28E-02 max=4.12E-01 SA(o1=UM([-10,10]^2), f=f1, r, log(0.1))
107 min=1.33E-11 med=1.12E-09 avg=1.74E-09 max=7.74E-09 SA(o1=NM([-10,10]^2), f=f1, r, log(0.001))
108 min=1.33E-11 med=7.39E-10 avg=1.12E-09 max=6.27E-09 SA(o1=UM([-10,10]^2), f=f1, r, log(0.001))
109 min=2.25E-11 med=7.34E-10 avg=8.95E-10 max=3.90E-09 SA(o1=NM([-10,10]^2), f=f1, r, log(1.0E-4))
110 min=3.15E-13 med=1.57E-10 avg=2.09E-10 max=1.18E-09 SA(o1=UM([-10,10]^2), f=f1, r, log(1.0E-4))
111 min=3.98E-06 med=2.25E-01 avg=2.17E-01 max=5.82E-01 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=1, e=1.0E-5))
112 min=1.19E-01 med=4.71E-01 avg=4.55E-01 max=7.06E-01 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=1, e=1.0E-5))
113 min=9.79E-11 med=5.96E-09 avg=8.12E-09 max=3.36E-08 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=1, e=1.0E-4))
114 min=1.57E-10 med=3.60E-09 avg=4.63E-09 max=2.29E-08 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=1, e=1.0E-4))
115 min=2.33E-08 med=2.56E-06 avg=1.01E-02 max=3.11E-01 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=0.1, e=1.0E-5))
116 min=2.34E-04 med=3.89E-01 avg=3.83E-01 max=7.06E-01 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=0.1, e=1.0E-5))
117 min=3.52E-11 med=1.83E-09 avg=2.89E-09 max=1.34E-08 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=0.1, e=1.0E-4))
118 min=8.68E-13 med=6.22E-10 avg=8.85E-10 max=4.60E-09 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=0.1, e=1.0E-4))
119 min=3.41E-11 med=5.67E-09 avg=7.23E-09 max=2.85E-08 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=0.001, e=1.0E-5))
120 min=1.54E-10 med=5.72E-09 avg=8.41E-09 max=4.16E-08 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=0.001, e=1.0E-5))
121 min=1.60E-12 med=1.15E-09 avg=1.42E-09 max=7.01E-09 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=0.001, e=1.0E-4))
122 min=6.90E-12 med=1.69E-10 avg=2.43E-10 max=1.19E-09 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=0.001, e=1.0E-4))
123 min=8.97E-13 med=1.12E-09 avg=1.60E-09 max=9.43E-09 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
124 min=1.62E-11 med=4.33E-10 avg=7.91E-10 max=3.44E-09 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
125 min=3.18E-12 med=7.35E-10 avg=9.88E-10 max=3.87E-09 SQ(o1=NM([-10,10]^2), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
126 min=1.15E-12 med=1.28E-10 avg=1.74E-10 max=1.08E-09 SQ(o1=UM([-10,10]^2), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
127 min=7.87E-13 med=3.31E-10 avg=5.00E-10 max=3.08E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
128 min=6.81E-14 med=4.53E-11 avg=6.00E-11 max=3.38E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
129 min=9.03E-13 med=1.05E-10 avg=1.28E-10 max=4.85E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
130 min=2.20E-13 med=1.30E-11 avg=1.98E-11 max=1.04E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
131 min=3.65E-14 med=3.24E-12 avg=6.00E-12 max=4.22E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
132 min=3.44E-15 med=4.92E-13 avg=7.62E-13 max=4.89E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
133 min=0.00E00 med=5.77E-12 avg=1.48E-10 max=3.29E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
134 min=0.00E00 med=1.40E-13 avg=1.78E-11 max=3.05E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
135 min=3.99E-12 med=4.41E-11 avg=6.29E-11 max=3.08E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
136 min=1.01E-13 med=6.92E-12 avg=1.01E-11 max=4.91E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
137 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f1, r)
138 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f1, r)
139 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), o1=SDNM([-10,10]^2), f=f1, r)
140 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f1, r)
141 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f1, r)
142 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f1, r)
143 min=0.00E00 med=0.00E00 avg=1.38E-11 max=9.47E-10 ES((10/10+100), o1=SDNM([-10,10]^2), f=f1, r)
144 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f1, r)
145 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f1, r)
146 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f1, r)
147 min=0.00E00 med=0.00E00 avg=3.65E-16 max=1.81E-14 ES((10/10+200), o1=SDNM([-10,10]^2), f=f1, r)
148 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), o1=SDNM([-10,10]^2), f=f1, r)
149 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f1, r)
150 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f1, r)
151 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), o1=SDNM([-10,10]^2), f=f1, r)
152 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f1, r)
153 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f1, r)
154 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f1, r)
155 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), o1=SDNM([-10,10]^2), f=f1, r)
156 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f1, r)
157 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), o1=SDNM([-10,10]^2), f=f1, r)
158 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f1, r)
159 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), o1=SDNM([-10,10]^2), f=f1, r)
160 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), o1=SDNM([-10,10]^2), f=f1, r)
161 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
162 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
163 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
164 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
165 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
166 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
167 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
168 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
169 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F = 0 . 5 ) )
170 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F = 0 . 7 ) )
171 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F = 0 . 3 ) )
172 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F = 0 . 5 ) )
173 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F = 0 . 7 ) )
174 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F = 0 . 3 ) )
175 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F = 0 . 5 ) )
176 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F = 0 . 7 ) )
177 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F = 0 . 3 ) )
178 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F = 0 . 5 ) )
179 min = 1 . 3 6E−06 med = 2 . 5 7E−01 avg = 2 . 5 7E−01 max= 6 . 4 9E−01 RW( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
180 min = 6 . 9 6E−02 med = 4 . 7 7E−01 avg = 4 . 5 8E−01 max= 6 . 9 4E−01 RW( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
181 min = 1 . 1 7E−06 med = 9 . 9 5E−05 avg = 1 . 4 0E−04 max= 1 . 1 1E−03 RS ( f=f 1 , r )
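The DE-1 entries above use the op3=1/rand/bin and 1/rand/exp operators, i.e., Storn and Price's DE/rand/1 mutation combined with binomial (or exponential) crossover. As a rough sketch of what one such step does, the following builds a single DE/rand/1/bin trial vector; function and variable names are illustrative and not taken from the book's framework:

```python
import random

def de_rand_1_bin(pop, i, F, cr, lo, hi):
    """Build one DE/rand/1/bin trial vector for population member i.

    pop: list of real-valued vectors (the population)
    F:   differential weight (0.3, 0.5, or 0.7 in the runs above)
    cr:  crossover rate
    [lo, hi]: box constraint applied per dimension
    """
    n = len(pop[i])
    # three distinct donors, all different from the target index i
    a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
    j_rand = random.randrange(n)  # guarantees at least one mutated gene
    trial = []
    for j in range(n):
        if random.random() < cr or j == j_rand:
            v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # differential mutation
        else:
            v = pop[i][j]  # inherit the gene from the target vector
        trial.append(min(max(v, lo), hi))  # clip into [lo, hi]
    return trial
```

In DE, the trial vector then replaces pop[i] only if it is not worse; the cr/F pairs tried in the log (0.3/0.7, 0.5/0.5, 0.7/0.3) trade exploration against convergence speed.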
182
183
184 CosLnSum
185
186 min=1.03E-11 med=1.87E-09 avg=2.57E-09 max=1.95E-08 HC(o1=NM([-10,10]^2), f=f2, r)
187 min=1.22E-11 med=3.57E-10 avg=5.45E-10 max=2.91E-09 HC(o1=UM([-10,10]^2), f=f2, r)
188 min=2.62E-08 med=4.57E-06 avg=1.05E-01 max=8.73E-01 SA(o1=NM([-10,10]^2), f=f2, r, log(1))
189 min=3.32E-03 med=7.61E-01 avg=6.83E-01 max=9.38E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(1))
190 min=2.62E-08 med=4.57E-06 avg=1.05E-01 max=8.73E-01 SA(o1=NM([-10,10]^2), f=f2, r, log(1))
191 min=3.32E-03 med=7.61E-01 avg=6.83E-01 max=9.38E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(1))
57 DEMOS
192 min=1.90E-09 med=7.03E-08 avg=1.12E-07 max=4.19E-07 SA(o1=NM([-10,10]^2), f=f2, r, log(0.1))
193 min=6.36E-10 med=2.51E-07 avg=5.77E-02 max=8.51E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(0.1))
194 min=3.54E-12 med=2.50E-09 avg=3.58E-09 max=1.79E-08 SA(o1=NM([-10,10]^2), f=f2, r, log(0.001))
195 min=9.03E-12 med=8.60E-10 avg=1.33E-09 max=7.40E-09 SA(o1=UM([-10,10]^2), f=f2, r, log(0.001))
196 min=2.18E-12 med=2.13E-09 avg=2.68E-09 max=1.16E-08 SA(o1=NM([-10,10]^2), f=f2, r, log(1.0E-4))
197 min=5.52E-12 med=3.08E-10 avg=4.76E-10 max=3.75E-09 SA(o1=UM([-10,10]^2), f=f2, r, log(1.0E-4))
198 min=5.49E-06 med=4.79E-01 avg=4.20E-01 max=8.92E-01 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-5))
199 min=1.93E-01 med=7.90E-01 avg=7.37E-01 max=9.44E-01 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-5))
200 min=1.84E-10 med=8.98E-09 avg=1.60E-08 max=7.80E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-4))
201 min=6.02E-12 med=4.27E-09 avg=6.51E-09 max=3.26E-08 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-4))
202 min=9.49E-09 med=1.06E-06 avg=3.74E-02 max=8.14E-01 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-5))
203 min=5.39E-06 med=7.39E-01 avg=6.27E-01 max=9.34E-01 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-5))
204 min=9.42E-11 med=4.92E-09 avg=6.85E-09 max=3.47E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-4))
205 min=5.32E-12 med=1.26E-09 avg=1.69E-09 max=8.53E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-4))
206 min=9.65E-12 med=7.97E-09 avg=9.72E-09 max=4.83E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-5))
207 min=6.07E-12 med=3.65E-09 avg=5.84E-09 max=3.74E-08 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-5))
208 min=2.02E-11 med=2.47E-09 avg=4.28E-09 max=1.74E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-4))
209 min=7.77E-12 med=4.60E-10 avg=7.18E-10 max=2.72E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-4))
210 min=4.39E-11 med=2.47E-09 avg=3.69E-09 max=1.51E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
211 min=2.25E-12 med=7.74E-10 avg=9.91E-10 max=5.23E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
212 min=5.10E-12 med=2.50E-09 avg=3.70E-09 max=1.87E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
213 min=1.93E-11 med=4.51E-10 avg=6.45E-10 max=2.73E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
214 min=1.53E-11 med=1.10E-09 avg=1.53E-09 max=8.57E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
215 min=5.49E-13 med=1.00E-10 avg=1.72E-10 max=9.17E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
216 min=1.76E-11 med=4.65E-10 avg=5.69E-10 max=2.56E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
217 min=3.90E-13 med=3.68E-11 avg=5.57E-11 max=2.60E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
218 min=1.36E-13 med=1.29E-11 avg=1.88E-11 max=1.06E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
219 min=1.29E-14 med=1.54E-12 avg=1.90E-12 max=7.51E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
220 min=0.00E00 med=3.82E-12 avg=6.62E-10 max=1.01E-08 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
221 min=0.00E00 med=1.00E-12 avg=8.91E-11 max=2.07E-09 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
222 min=1.69E-12 med=1.37E-10 avg=2.18E-10 max=8.50E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
223 min=8.48E-14 med=1.96E-11 avg=2.79E-11 max=1.82E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
224 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f2, r)
225 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f2, r)
226 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), o1=SDNM([-10,10]^2), f=f2, r)
227 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f2, r)
228 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f2, r)
229 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f2, r)
230 min=0.00E00 med=0.00E00 avg=3.53E-11 max=3.53E-09 ES((10/10+100), o1=SDNM([-10,10]^2), f=f2, r)
231 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f2, r)
232 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f2, r)
233 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f2, r)
234 min=0.00E00 med=0.00E00 avg=3.13E-12 max=3.13E-10 ES((10/10+200), o1=SDNM([-10,10]^2), f=f2, r)
235 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), o1=SDNM([-10,10]^2), f=f2, r)
236 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f2, r)
237 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f2, r)
238 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), o1=SDNM([-10,10]^2), f=f2, r)
239 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f2, r)
240 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f2, r)
241 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f2, r)
242 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), o1=SDNM([-10,10]^2), f=f2, r)
243 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f2, r)
244 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), o1=SDNM([-10,10]^2), f=f2, r)
245 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f2, r)
246 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), o1=SDNM([-10,10]^2), f=f2, r)
247 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), o1=SDNM([-10,10]^2), f=f2, r)
248 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
249 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
250 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
251 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
252 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
253 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
254 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
255 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
256 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
257 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
258 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
259 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
260 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
261 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
262 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
263 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
264 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
265 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
266 min=4.41E-06 med=5.13E-01 avg=4.73E-01 max=9.08E-01 RW(o1=NM([-10,10]^2), f=f2, r)
267 min=1.69E-01 med=7.72E-01 avg=7.32E-01 max=9.41E-01 RW(o1=UM([-10,10]^2), f=f2, r)
268 min=3.74E-06 med=2.52E-04 avg=4.52E-04 max=3.88E-03 RS(f=f2, r)
269
270
271 MaxSqr
272
273 min=2.55E-13 med=4.21E-11 avg=5.83E-11 max=2.85E-10 HC(o1=NM([-10,10]^2), f=f3, r)
274 min=7.12E-14 med=8.55E-12 avg=1.15E-11 max=5.25E-11 HC(o1=UM([-10,10]^2), f=f3, r)
275 min=7.45E-08 med=1.70E-04 avg=5.57E-03 max=7.33E-02 SA(o1=NM([-10,10]^2), f=f3, r, log(1))
276 min=2.62E-03 med=1.61E-01 avg=1.84E-01 max=6.41E-01 SA(o1=UM([-10,10]^2), f=f3, r, log(1))
277 min=7.45E-08 med=1.70E-04 avg=5.57E-03 max=7.33E-02 SA(o1=NM([-10,10]^2), f=f3, r, log(1))
278 min=2.62E-03 med=1.61E-01 avg=1.84E-01 max=6.41E-01 SA(o1=UM([-10,10]^2), f=f3, r, log(1))
279 min=1.47E-10 med=7.32E-08 avg=1.27E-07 max=7.98E-07 SA(o1=NM([-10,10]^2), f=f3, r, log(0.1))
280 min=5.13E-09 med=4.80E-03 avg=7.63E-03 max=3.21E-02 SA(o1=UM([-10,10]^2), f=f3, r, log(0.1))
281 min=1.26E-13 med=6.51E-10 avg=9.54E-10 max=6.62E-09 SA(o1=NM([-10,10]^2), f=f3, r, log(0.001))
282 min=4.57E-12 med=6.73E-10 avg=8.88E-10 max=4.15E-09 SA(o1=UM([-10,10]^2), f=f3, r, log(0.001))
283 min=4.10E-13 med=1.12E-10 avg=1.74E-10 max=7.75E-10 SA(o1=NM([-10,10]^2), f=f3, r, log(1.0E-4))
284 min=5.69E-14 med=7.47E-11 avg=1.12E-10 max=7.10E-10 SA(o1=UM([-10,10]^2), f=f3, r, log(1.0E-4))
285 min=7.03E-08 med=6.65E-02 avg=8.79E-02 max=4.36E-01 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-5))
286 min=1.36E-02 med=1.82E-01 avg=2.44E-01 max=7.49E-01 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-5))
287 min=3.21E-11 med=3.90E-09 avg=5.54E-09 max=2.48E-08 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-4))
288 min=2.13E-11 med=3.26E-09 avg=5.33E-09 max=3.80E-08 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-4))
289 min=1.55E-08 med=3.43E-06 avg=4.59E-04 max=8.41E-03 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-5))
290 min=9.65E-04 med=1.45E-01 avg=1.50E-01 max=4.74E-01 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-5))
291 min=4.80E-13 med=6.25E-10 avg=7.75E-10 max=3.89E-09 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-4))
292 min=1.33E-11 med=4.11E-10 avg=5.69E-10 max=2.83E-09 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-4))
293 min=8.79E-12 med=3.89E-09 avg=5.83E-09 max=3.44E-08 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-5))
294 min=1.05E-10 med=4.46E-09 avg=9.31E-09 max=1.00E-07 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-5))
295 min=1.70E-13 med=9.98E-11 avg=1.46E-10 max=6.55E-10 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-4))
296 min=2.45E-13 med=1.50E-11 avg=2.25E-11 max=1.42E-10 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-4))
297 min=1.28E-12 med=5.37E-10 avg=6.63E-10 max=3.63E-09 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
298 min=4.20E-12 med=4.60E-10 avg=6.85E-10 max=3.89E-09 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
299 min=8.34E-13 med=4.95E-11 avg=9.03E-11 max=7.97E-10 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
300 min=2.92E-13 med=1.38E-11 avg=1.68E-11 max=9.63E-11 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
301 min=1.94E-13 med=2.34E-11 avg=3.61E-11 max=2.07E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
302 min=3.80E-14 med=2.39E-12 avg=3.25E-12 max=1.44E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
303 min=7.11E-14 med=6.81E-12 avg=9.05E-12 max=5.08E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
304 min=1.63E-14 med=8.74E-13 avg=1.27E-12 max=8.12E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
305 min=5.34E-16 med=2.03E-13 avg=3.57E-13 max=2.47E-12 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
306 min=2.16E-17 med=4.03E-14 avg=5.68E-14 max=3.28E-13 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
307 min=6.17E-55 med=3.59E-13 avg=1.57E-11 max=1.75E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
308 min=1.40E-37 med=8.14E-15 avg=1.81E-12 max=4.51E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
309 min=5.43E-14 med=3.68E-12 avg=4.99E-12 max=1.98E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
310 min=2.57E-15 med=4.66E-13 avg=6.44E-13 max=2.69E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
311 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f3, r)
312 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f3, r)
313 min=3.42E-176 med=2.70E-90 avg=5.23E-14 max=5.22E-12 ES((10/10+50), o1=SDNM([-10,10]^2), f=f3, r)
314 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f3, r)
315 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f3, r)
316 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f3, r)
317 min=7.98E-129 med=7.66E-76 avg=1.98E-15 max=1.45E-13 ES((10/10+100), o1=SDNM([-10,10]^2), f=f3, r)
318 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f3, r)
319 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f3, r)
320 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f3, r)
321 min=2.10E-110 med=8.99E-60 avg=2.80E-15 max=2.52E-13 ES((10/10+200), o1=SDNM([-10,10]^2), f=f3, r)
322 min=1.32E-271 med=1.78E-258 avg=4.92E-245 max=4.92E-243 ES((10/10,200), o1=SDNM([-10,10]^2), f=f3, r)
323 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f3, r)
324 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f3, r)
325 min=5.52E-153 med=6.29E-90 avg=3.92E-33 max=3.92E-31 ES((20/10+50), o1=SDNM([-10,10]^2), f=f3, r)
326 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f3, r)
327 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f3, r)
328 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f3, r)
329 min=3.62E-112 med=1.99E-81 avg=1.08E-18 max=1.08E-16 ES((20/10+100), o1=SDNM([-10,10]^2), f=f3, r)
330 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f3, r)
331 min=3.85E-322 med=2.38E-308 avg=7.45E-300 max=6.98E-298 ES((20/2+200), o1=SDNM([-10,10]^2), f=f3, r)
332 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f3, r)
333 min=1.46E-93 med=3.52E-64 avg=2.84E-28 max=2.84E-26 ES((20/10+200), o1=SDNM([-10,10]^2), f=f3, r)
334 min=2.14E-237 med=2.05E-231 avg=4.20E-226 max=3.63E-224 ES((20/10,200), o1=SDNM([-10,10]^2), f=f3, r)
335 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
336 min=0.00E00 med=0.00E00 avg=3.68E-145 max=3.68E-143 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
337 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
338 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
339 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
340 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
341 min=2.74E-215 med=4.06E-211 avg=8.01E-209 max=3.03E-207 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
342 min=2.75E-276 med=2.91E-269 avg=2.73E-265 max=1.76E-263 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
343 min=6.22E-241 med=9.41E-238 avg=1.86E-235 max=5.05E-234 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
344 min=1.40E-201 med=4.26E-197 avg=9.18E-196 max=2.04E-194 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
345 min=0.00E00 med=0.00E00 avg=1.92E-321 max=1.04E-319 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
346 min=1.92E-259 med=1.68E-255 avg=9.92E-253 max=8.59E-251 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
347 min=1.44E-109 med=2.04E-107 avg=1.03E-106 max=2.10E-105 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
348 min=8.25E-140 med=4.68E-137 avg=2.32E-135 max=7.72E-134 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
349 min=1.72E-123 med=7.18E-121 avg=5.00E-120 max=7.40E-119 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
350 min=2.83E-103 med=1.67E-100 avg=7.00E-100 max=8.58E-99 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
351 min=1.06E-167 med=1.51E-164 avg=6.79E-163 max=3.11E-161 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
352 min=2.43E-132 med=1.21E-129 avg=1.11E-128 max=5.49E-127 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
353 min=9.72E-08 med=7.52E-02 avg=1.18E-01 max=5.20E-01 RW(o1=NM([-10,10]^2), f=f3, r)
354 min=9.30E-03 med=1.94E-01 avg=2.60E-01 max=7.11E-01 RW(o1=UM([-10,10]^2), f=f3, r)
355 min=8.41E-08 med=6.32E-06 avg=9.18E-06 max=7.30E-05 RS(f=f3, r)
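The ES configurations above are written in the usual (mu/rho + lambda) and (mu/rho, lambda) notation, e.g. ES((20/10+200)) versus ES((20/10,200)). The two variants differ only in survivor selection: a plus strategy lets the parents compete with their lambda offspring, while a comma strategy discards the parents every generation. A minimal sketch of that selection step for minimization (function and parameter names are illustrative):

```python
def es_survivors(parents, offspring, mu, plus, fitness):
    """Survivor selection of a (mu/rho + lambda)- or (mu/rho, lambda)-ES.

    plus=True  -> '+' strategy: parents and offspring compete together
    plus=False -> ',' strategy: only the offspring can survive
    fitness: objective function to minimize
    """
    pool = parents + offspring if plus else offspring
    # keep the mu best individuals of the selection pool
    return sorted(pool, key=fitness)[:mu]
```

For a comma strategy, lambda >= mu must hold so that enough offspring exist; the configurations in this log (e.g. mu=10 or 20 with lambda=50 to 200) satisfy this comfortably.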
356
357
358 Dimensions: 5
359
360
361
362 Sphere
363
364 min=1.20E-06 med=1.24E-05 avg=1.26E-05 max=2.38E-05 HC(o1=NM([-10,10]^5), r)
365 min=3.85E-07 med=1.79E-06 avg=1.94E-06 max=3.64E-06 HC(o1=UM([-10,10]^5), r)
366 min=2.20E-04 med=1.59E-03 avg=1.72E-03 max=4.94E-03 SA(o1=NM([-10,10]^5), r, log(1))
367 min=3.28E-04 med=5.23E-03 avg=6.27E-03 max=2.57E-02 SA(o1=UM([-10,10]^5), r, log(1))
368 min=2.20E-04 med=1.59E-03 avg=1.72E-03 max=4.94E-03 SA(o1=NM([-10,10]^5), r, log(1))
369 min=3.28E-04 med=5.23E-03 avg=6.27E-03 max=2.57E-02 SA(o1=UM([-10,10]^5), r, log(1))
370 min=4.21E-05 med=1.31E-04 avg=1.53E-04 max=3.26E-04 SA(o1=NM([-10,10]^5), r, log(0.1))
371 min=3.06E-05 med=1.73E-04 avg=1.82E-04 max=5.07E-04 SA(o1=UM([-10,10]^5), r, log(0.1))
372 min=4.14E-07 med=1.25E-05 avg=1.33E-05 max=2.82E-05 SA(o1=NM([-10,10]^5), r, log(0.001))
373 min=5.58E-07 med=2.53E-06 avg=2.52E-06 max=5.87E-06 SA(o1=UM([-10,10]^5), r, log(0.001))
374 min=1.47E-06 med=1.10E-05 avg=1.14E-05 max=2.30E-05 SA(o1=NM([-10,10]^5), r, log(1.0E-4))
375 min=3.26E-07 med=1.79E-06 avg=1.84E-06 max=4.10E-06 SA(o1=UM([-10,10]^5), r, log(1.0E-4))
376 min=1.05E-03 med=1.81E-02 avg=2.26E-02 max=5.89E-02 SQ(o1=NM([-10,10]^5), r, exp(Ts=1, e=1.0E-5))
377 min=1.47E-03 med=1.65E-01 avg=2.11E-01 max=7.56E-01 SQ(o1=UM([-10,10]^5), r, exp(Ts=1, e=1.0E-5))
378 min=1.93E-06 med=2.40E-05 avg=2.44E-05 max=4.55E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1, e=1.0E-4))
379 min=1.24E-06 med=5.29E-06 avg=5.24E-06 max=1.16E-05 SQ(o1=UM([-10,10]^5), r, exp(Ts=1, e=1.0E-4))
380 min=1.79E-04 med=7.35E-04 avg=8.43E-04 max=2.25E-03 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-5))
381 min=2.93E-04 med=2.04E-03 avg=2.39E-03 max=7.23E-03 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-5))
382 min=3.58E-06 med=1.60E-05 avg=1.67E-05 max=3.15E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-4))
383 min=4.27E-07 med=3.02E-06 avg=2.97E-06 max=5.89E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-4))
384 min=3.21E-06 med=2.14E-05 avg=2.23E-05 max=4.19E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-5))
385 min=1.20E-06 med=9.34E-06 avg=9.27E-06 max=1.79E-05 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-5))
386 min=1.63E-06 med=1.34E-05 avg=1.31E-05 max=2.75E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-4))
387 min=2.59E-07 med=2.05E-06 avg=2.08E-06 max=3.98E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-4))
388 min=2.14E-06 med=1.28E-05 avg=1.25E-05 max=2.48E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-5))
389 min=3.57E-07 med=2.13E-06 avg=2.20E-06 max=4.00E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-5))
390 min=2.57E-06 med=1.14E-05 avg=1.20E-05 max=2.62E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-4))
391 min=1.52E-07 med=1.69E-06 avg=1.81E-06 max=3.38E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-4))
392 min=9.94E-07 med=5.42E-06 avg=5.77E-06 max=1.06E-05 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
393 min=8.81E-08 med=5.34E-07 avg=5.68E-07 max=1.25E-06 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
394 min=3.12E-07 med=1.70E-06 avg=1.81E-06 max=3.92E-06 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
395 min=4.04E-08 med=1.89E-07 avg=1.95E-07 max=4.06E-07 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
396 min=1.75E-08 med=1.36E-07 avg=1.44E-07 max=3.36E-07 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
397 min=8.33E-10 med=1.64E-08 avg=1.63E-08 max=3.51E-08 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
398 min=8.50E-08 med=6.45E-06 avg=7.57E-06 max=2.77E-05 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
399 min=2.56E-08 med=7.81E-07 avg=9.70E-07 max=3.98E-06 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
400 min=2.87E-07 med=1.10E-06 avg=1.14E-06 max=2.38E-06 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
401 min=7.20E-09 med=1.05E-07 avg=1.15E-07 max=2.72E-07 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
402 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), r)
403 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), r)
404 min=3.92E-262 med=1.39E-243 avg=1.32E-215 max=1.32E-213 ES((10/10+50), r)
405 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), r)
406 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), r)
407 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), r)
408 min=1.37E-203 med=1.80E-187 avg=3.94E-166 max=3.79E-164 ES((10/10+100), r)
409 min=2.64E-298 med=1.55E-290 avg=2.24E-285 max=2.11E-283 ES((10/10,100), r)
410 min=1.19E-223 med=8.42E-217 avg=2.61E-210 max=1.71E-208 ES((10/2+200), r)
411 min=1.81E-249 med=7.40E-240 avg=2.38E-233 max=2.05E-231 ES((10/2,200), r)
412 min=1.74E-137 med=2.02E-124 avg=2.68E-105 max=2.61E-103 ES((10/10+200), r)
413 min=4.56E-179 med=7.87E-173 avg=5.69E-170 max=2.04E-168 ES((10/10,200), r)
414 min=1.97E-310 med=1.00E-301 avg=5.60E-290 max=5.60E-288 ES((20/2+50), r)
415 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), r)
416 min=1.13E-169 med=1.73E-158 avg=3.83E-146 max=3.80E-144 ES((20/10+50), r)
417 min=6.23E-257 med=3.30E-251 avg=3.14E-245 max=1.22E-243 ES((20/10,50), r)
418 min=2.50E-258 med=1.58E-249 avg=5.53E-242 max=4.50E-240 ES((20/2+100), r)
419 min=0.00E00 med=1.50E-316 avg=2.86E-311 max=2.37E-309 ES((20/2,100), r)
420 min=4.53E-146 med=4.30E-138 avg=1.19E-130 max=5.91E-129 ES((20/10+100), r)
421 min=8.39E-227 med=2.67E-223 avg=5.05E-220 max=2.33E-218 ES((20/10,100), r)
422 min=8.75E-183 med=2.88E-177 avg=2.26E-173 max=1.85E-171 ES((20/2+200), r)
423 min=3.85E-211 med=1.08E-205 avg=6.32E-203 max=2.90E-201 ES((20/2,200), r)
424 min=3.56E-110 med=2.28E-102 avg=2.25E-97 max=2.08E-95 ES((20/10+200), r)
425 min=6.80E-151 med=2.91E-147 avg=5.99E-146 max=1.25E-144 ES((20/10,200), r)
426 min=1.08E-192 med=1.41E-186 avg=1.69E-36 max=1.69E-34 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
427 min=5.14E-09 med=2.73E-02 avg=1.56E-01 max=2.73E00 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
428 min=9.27E-209 med=7.43E-203 avg=1.49E-09 max=1.49E-07 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
429 min=9.94E-195 med=1.05E-189 avg=2.28E-187 max=6.99E-186 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
430 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
431 min=6.90E-282 med=1.14E-276 avg=2.51E-273 max=1.10E-271 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
432 min=6.46E-97 med=4.08E-93 avg=4.86E-92 max=1.27E-90 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
433 min=2.31E-35 med=1.96E-08 avg=1.48E-03 max=7.69E-02 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
434 min=6.68E-105 med=6.09E-102 avg=5.40E-101 max=1.01E-99 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
435 min=2.80E-97 med=2.43E-95 avg=1.78E-94 max=1.99E-93 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
436 min=3.73E-193 med=9.35E-187 avg=1.71E-184 max=4.09E-183 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
437 min=2.25E-143 med=9.42E-140 avg=1.69E-138 max=3.38E-137 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
438 min=7.59E-48 med=4.53E-46 avg=2.13E-45 max=4.92E-44 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
439 min=1.36E-58 med=2.03E-56 avg=7.80E-22 max=7.80E-20 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
440 min=4.12E-52 med=1.26E-50 avg=5.02E-50 max=1.12E-48 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
441 min=9.27E-49 med=1.23E-47 avg=3.03E-47 max=3.31E-46 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
442 min=1.51E-96 med=6.85E-94 avg=8.07E-93 max=2.31E-91 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
443 min=5.87E-72 med=8.49E-70 avg=2.34E-69 max=3.81E-68 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
444 min=2.15E00 med=6.91E01 avg=7.64E01 max=1.81E02 RW(o1=NM([-10,10]^5), r)
445 min=2.43E01 med=1.22E02 avg=1.23E02 max=2.44E02 RW(o1=UM([-10,10]^5), r)
446 min=4.67E-01 med=1.84E00 avg=1.94E00 max=3.74E00 RS(r)
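The DE-1 rows with op3=1/rand/bin denote differential evolution with the rand/1 donor vector and binomial crossover at rate cr. A sketch of that scheme, here applied to the 5-dimensional sphere function; the generation budget, bound clipping, and greedy replacement details below are assumptions for illustration, not necessarily the exact setup behind these runs:

```python
import random

def de_rand_1_bin(f, bounds, n, ps=50, F=0.7, cr=0.3, gens=200, seed=0):
    """DE/rand/1/bin sketch: for each target vector i, three distinct random
    members a, b, c form the donor a + F*(b - c); binomial crossover with
    rate cr mixes donor and target; the trial replaces the target if it is
    not worse. Defaults (ps, F, cr, gens) are illustrative."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(ps)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(ps):
            a, b, c = rng.sample([j for j in range(ps) if j != i], 3)
            jrand = rng.randrange(n)  # guarantees at least one donor component
            trial = [
                min(hi, max(lo, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (rng.random() < cr or j == jrand) else pop[i][j]
                for j in range(n)
            ]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    return min(fit)

sphere = lambda x: sum(v * v for v in x)
print(de_rand_1_bin(sphere, (-10.0, 10.0), 5))
```

The 1/rand/exp variants in the listings differ only in the crossover: components are copied from the donor in one contiguous run of geometrically distributed length instead of independent per-component coin flips.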
447
448
449 SumCosLn
450
451 min=7.09E-08 med=5.64E-07 avg=5.91E-07 max=1.37E-06 HC(o1=NM([-10,10]^5), f=f1, r)
452 min=1.19E-08 med=9.04E-08 avg=9.24E-08 max=1.90E-07 HC(o1=UM([-10,10]^5), f=f1, r)
453 min=5.34E-03 med=2.59E-01 avg=2.66E-01 max=5.52E-01 SA(o1=NM([-10,10]^5), f=f1, r, log(1))
454 min=2.14E-01 med=4.90E-01 avg=4.80E-01 max=7.07E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(1))
455 min=5.34E-03 med=2.59E-01 avg=2.66E-01 max=5.52E-01 SA(o1=NM([-10,10]^5), f=f1, r, log(1))
57 DEMOS
456 min=2.14E-01 med=4.90E-01 avg=4.80E-01 max=7.07E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(1))
457 min=5.23E-05 med=4.73E-04 avg=6.15E-04 max=2.92E-03 SA(o1=NM([-10,10]^5), f=f1, r, log(0.1))
458 min=1.10E-02 med=3.06E-01 avg=2.79E-01 max=6.82E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(0.1))
459 min=2.36E-07 med=1.89E-06 avg=2.01E-06 max=4.66E-06 SA(o1=NM([-10,10]^5), f=f1, r, log(0.001))
460 min=2.61E-07 med=1.63E-06 avg=1.62E-06 max=3.23E-06 SA(o1=UM([-10,10]^5), f=f1, r, log(0.001))
461 min=1.12E-07 med=6.97E-07 avg=7.29E-07 max=1.88E-06 SA(o1=NM([-10,10]^5), f=f1, r, log(1.0E-4))
462 min=2.08E-08 med=1.82E-07 avg=1.84E-07 max=3.60E-07 SA(o1=UM([-10,10]^5), f=f1, r, log(1.0E-4))
463 min=1.16E-01 med=3.67E-01 avg=3.59E-01 max=5.90E-01 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-5))
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
464 min=2.58E-01 med=5.08E-01 avg=4.96E-01 max=7.29E-01 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-5))
465 min=5.37E-07 med=3.93E-06 avg=4.00E-06 max=8.69E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-4))
466 min=4.86E-07 med=2.35E-06 avg=2.66E-06 max=7.77E-06 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-4))
467 min=1.03E-02 med=1.81E-01 avg=2.00E-01 max=5.44E-01 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-5))
468 min=1.83E-01 med=4.91E-01 avg=4.71E-01 max=7.25E-01 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-5))
469 min=1.36E-07 med=1.31E-06 avg=1.38E-06 max=2.89E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-4))
470 min=6.36E-08 med=3.83E-07 avg=3.83E-07 max=7.91E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-4))
471 min=1.26E-06 med=9.50E-06 avg=9.76E-06 max=2.11E-05 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-5))
472 min=2.52E-06 med=1.16E-05 avg=1.48E-05 max=5.73E-05 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-5))
473 min=1.25E-07 med=7.24E-07 avg=7.09E-07 max=1.58E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-4))
474 min=2.02E-08 med=1.22E-07 avg=1.19E-07 max=2.84E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-4))
475 min=4.07E-07 med=1.30E-06 avg=1.36E-06 max=3.60E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
476 min=6.60E-08 med=7.91E-07 avg=8.50E-07 max=1.58E-06 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
477 min=4.55E-08 med=6.77E-07 avg=6.63E-07 max=1.31E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
478 min=1.06E-08 med=1.05E-07 avg=1.07E-07 max=2.16E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
479 min=7.35E-08 med=2.98E-07 avg=3.00E-07 max=5.58E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
480 min=7.08E-09 med=3.02E-08 avg=3.08E-08 max=5.69E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
481 min=1.36E-08 med=8.74E-08 avg=8.95E-08 max=1.77E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
482 min=6.52E-10 med=9.08E-09 avg=9.62E-09 max=2.46E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
483 min=1.01E-09 med=6.66E-09 avg=7.17E-09 max=1.97E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
484 min=9.14E-11 med=7.68E-10 avg=7.96E-10 max=1.81E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
485 min=4.17E-09 med=2.43E-07 avg=2.72E-07 max=8.29E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
486 min=1.23E-09 med=4.27E-08 avg=5.06E-08 max=1.59E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
487 min=7.80E-09 med=5.38E-08 avg=5.72E-08 max=1.12E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
488 min=8.89E-10 med=6.25E-09 avg=6.44E-09 max=1.22E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
489 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f1, r)
490 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f1, r)
491 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f1, r)
492 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f1, r)
493 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f1, r)
494 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f1, r)
495 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f1, r)
496 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f1, r)
497 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f1, r)
498 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f1, r)
499 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f1, r)
500 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f1, r)
501 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f1, r)
502 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f1, r)
503 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f1, r)
504 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f1, r)
505 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f1, r)
506 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f1, r)
507 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f1, r)
508 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f1, r)
509 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f1, r)
510 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f1, r)
511 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f1, r)
512 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f1, r)
513 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
514 min=2.17E-13 med=3.33E-04 avg=1.61E-03 max=2.28E-02 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
515 min=0.00E00 med=0.00E00 avg=2.24E-12 max=2.24E-10 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
516 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
517 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
518 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
519 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
520 min=0.00E00 med=3.14E-10 avg=6.53E-06 max=4.03E-04 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
521 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
522 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
523 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
524 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
525 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
526 min=0.00E00 med=0.00E00 avg=5.00E-18 max=5.00E-16 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
527 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
528 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
529 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
530 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
531 min=5.06E-02 med=3.65E-01 avg=3.66E-01 max=6.38E-01 RW(o1=NM([-10,10]^5), f=f1, r)
532 min=2.33E-01 med=4.99E-01 avg=4.87E-01 max=7.16E-01 RW(o1=UM([-10,10]^5), f=f1, r)
533 min=1.65E-02 med=4.65E-02 avg=4.70E-02 max=8.39E-02 RS(f=f1, r)
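The ES((mu/rho + lambda)) entries use standard evolution-strategy notation: mu parents, of which rho are recombined per offspring, lambda offspring per generation, with "+" keeping parents in the selection pool and "," discarding them. A sketch of such a strategy with intermediate recombination and log-normal step-size self-adaptation, run on the sphere function as a stand-in for f1; the recombination and adaptation details are assumptions, not necessarily the exact variant behind these rows:

```python
import math
import random

def es_mu_rho_plus_lambda(f, bounds, n, mu=10, rho=2, lam=50, gens=100, seed=0):
    """(mu/rho + lam)-ES sketch: each offspring intermediately recombines
    rho of the mu parents, self-adapts its step size log-normally, and
    mutates with Gaussian noise; plus-selection keeps the best mu of
    parents and offspring. Defaults mirror the ES((10/2+50)) notation."""
    rng = random.Random(seed)
    lo, hi = bounds
    tau = 1.0 / math.sqrt(n)  # learning rate for step-size self-adaptation
    # individual = (genome, step size, fitness)
    pop = []
    for _ in range(mu):
        x = [rng.uniform(lo, hi) for _ in range(n)]
        pop.append((x, 1.0, f(x)))
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            parents = rng.sample(pop, rho)
            x = [sum(p[0][j] for p in parents) / rho for j in range(n)]
            sigma = math.exp(sum(math.log(p[1]) for p in parents) / rho)
            sigma *= math.exp(tau * rng.gauss(0.0, 1.0))  # log-normal update
            x = [min(hi, max(lo, v + sigma * rng.gauss(0.0, 1.0))) for v in x]
            offspring.append((x, sigma, f(x)))
        pop = sorted(pop + offspring, key=lambda ind: ind[2])[:mu]  # plus-selection
    return pop[0][2]

sphere = lambda x: sum(v * v for v in x)
print(es_mu_rho_plus_lambda(sphere, (-10.0, 10.0), 5))
```

A comma-strategy would select only from `offspring` instead of `pop + offspring`, which discards parents each generation and can help the step-size adaptation escape stagnation.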
534
535
536 CosLnSum
537
538 min=4.38E-06 med=8.52E-01 avg=5.62E-01 max=8.52E-01 HC(o1=NM([-10,10]^5), f=f2, r)
539 min=2.30E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 HC(o1=UM([-10,10]^5), f=f2, r)
540 min=5.28E-03 med=9.22E-01 avg=8.72E-01 max=9.89E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1))
541 min=8.16E-01 med=9.73E-01 avg=9.66E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1))
542 min=5.28E-03 med=9.22E-01 avg=8.72E-01 max=9.89E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1))
543 min=8.16E-01 med=9.73E-01 avg=9.66E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1))
544 min=6.30E-05 med=8.60E-01 avg=5.76E-01 max=9.45E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(0.1))
545 min=1.52E-04 med=9.57E-01 avg=8.33E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(0.1))
546 min=4.00E-06 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(0.001))
547 min=5.53E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(0.001))
548 min=2.42E-06 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1.0E-4))
549 min=1.54E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1.0E-4))
550 min=4.07E-01 med=9.30E-01 avg=9.13E-01 max=9.78E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-5))
551 min=8.58E-01 med=9.73E-01 avg=9.67E-01 max=9.97E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-5))
552 min=4.77E-06 med=8.52E-01 avg=6.30E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-4))
553 min=2.64E-06 med=8.52E-01 avg=6.05E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-4))
554 min=7.24E-04 med=9.27E-01 avg=8.53E-01 max=9.87E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-5))
555 min=7.77E-01 med=9.72E-01 avg=9.64E-01 max=9.97E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-5))
556 min=3.82E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-4))
557 min=8.19E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-4))
558 min=5.46E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-5))
559 min=2.61E-06 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-5))
560 min=1.37E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-4))
561 min=6.34E-07 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-4))
562 min=9.08E-07 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
563 min=3.52E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
564 min=2.67E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
565 min=3.26E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
566 min=3.56E-07 med=5.02E-06 avg=5.05E-06 max=1.09E-05 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
567 min=2.78E-08 med=4.73E-07 avg=4.87E-07 max=1.04E-06 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
568 min=3.65E-07 med=1.45E-06 avg=1.56E-06 max=3.77E-06 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
569 min=1.78E-08 med=1.61E-07 avg=1.71E-07 max=4.17E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
570 min=1.88E-08 med=9.93E-08 avg=1.14E-07 max=3.53E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
571 min=3.02E-09 med=1.32E-08 avg=1.70E-02 max=8.52E-01 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
572 min=2.53E-07 med=5.00E-06 avg=1.70E-02 max=8.52E-01 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
573 min=8.89E-09 med=7.65E-07 avg=1.91E-02 max=8.52E-01 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
574 min=1.95E-07 med=8.59E-07 avg=8.95E-07 max=1.81E-06 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
575 min=2.06E-08 med=9.71E-08 avg=1.04E-07 max=2.33E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
576 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f2, r)
577 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f2, r)
578 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f2, r)
579 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f2, r)
580 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f2, r)
581 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f2, r)
582 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f2, r)
583 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f2, r)
584 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f2, r)
585 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f2, r)
586 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f2, r)
587 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f2, r)
588 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f2, r)
589 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f2, r)
590 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f2, r)
591 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f2, r)
592 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f2, r)
593 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f2, r)
594 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f2, r)
min=0.00E00 med=8.61E-03 avg=2.99E-01 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=3.86E-10 med=6.31E-04 avg=6.61E-02 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=7.68E-02 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=1.67E-01 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=1.70E-02 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=8.52E-03 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=7.64E-02 max=8.52E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=8.53E-03 max=8.52E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=4.15E-04 max=4.15E-02 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=5.09E-02 max=8.52E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=1.17E-02 max=8.52E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=3.33E-01 med=9.34E-01 avg=9.06E-01 max=9.75E-01 RW(o1=NM([-10,10]^5), f=f2, r)
min=8.45E-01 med=9.75E-01 avg=9.69E-01 max=9.98E-01 RW(o1=UM([-10,10]^5), f=f2, r)
min=1.41E-01 med=3.34E-01 avg=3.30E-01 max=5.11E-01 RS(f=f2, r)

MaxSqr

min=6.93E-09 med=6.00E-08 avg=5.75E-08 max=1.15E-07 HC(o1=NM([-10,10]^5), f=f3, r)
min=2.54E-09 med=9.30E-09 avg=9.43E-09 max=1.75E-08 HC(o1=UM([-10,10]^5), f=f3, r)
min=5.03E-03 med=6.25E-02 avg=8.35E-02 max=3.55E-01 SA(o1=NM([-10,10]^5), f=f3, r, log(1))
min=7.92E-02 med=3.85E-01 avg=3.95E-01 max=8.43E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(1))
min=5.03E-03 med=6.25E-02 avg=8.35E-02 max=3.55E-01 SA(o1=NM([-10,10]^5), f=f3, r, log(1))
min=7.92E-02 med=3.85E-01 avg=3.95E-01 max=8.43E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(1))
min=6.55E-05 med=7.02E-04 avg=8.35E-04 max=3.83E-03 SA(o1=NM([-10,10]^5), f=f3, r, log(0.1))
min=8.30E-03 med=6.63E-02 avg=7.16E-02 max=2.18E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(0.1))
min=3.12E-07 med=1.44E-06 avg=1.46E-06 max=3.00E-06 SA(o1=NM([-10,10]^5), f=f3, r, log(0.001))
min=2.19E-07 med=2.06E-06 avg=2.40E-06 max=6.79E-06 SA(o1=UM([-10,10]^5), f=f3, r, log(0.001))
min=4.41E-08 med=1.92E-07 avg=2.07E-07 max=4.13E-07 SA(o1=NM([-10,10]^5), f=f3, r, log(1.0E-4))
min=1.58E-08 med=1.40E-07 avg=1.48E-07 max=3.85E-07 SA(o1=UM([-10,10]^5), f=f3, r, log(1.0E-4))
min=1.93E-02 med=2.63E-01 avg=2.72E-01 max=6.09E-01 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-5))
min=7.70E-02 med=5.27E-01 avg=4.87E-01 max=8.23E-01 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-5))
min=4.08E-07 med=2.66E-06 avg=2.75E-06 max=5.81E-06 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-4))
min=3.68E-07 med=5.00E-06 avg=5.54E-06 max=1.97E-05 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-4))
min=1.61E-03 med=2.65E-02 avg=3.80E-02 max=1.75E-01 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=3.66E-02 med=3.09E-01 avg=3.36E-01 max=7.00E-01 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=5.18E-08 med=4.03E-07 avg=3.96E-07 max=9.86E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=4.77E-08 med=2.40E-07 avg=2.68E-07 max=7.34E-07 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=1.43E-06 med=9.02E-06 avg=9.89E-06 max=2.40E-05 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=2.43E-06 med=3.15E-05 avg=4.10E-05 max=1.36E-04 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=1.19E-08 med=9.70E-08 avg=9.41E-08 max=2.03E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=2.69E-09 med=1.41E-08 avg=1.54E-08 max=3.48E-08 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=1.20E-07 med=8.51E-07 avg=8.30E-07 max=2.02E-06 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.38E-07 med=9.26E-07 avg=1.05E-06 max=2.17E-06 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.05E-08 med=6.50E-08 avg=6.87E-08 max=1.63E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=3.28E-09 med=1.07E-08 avg=1.17E-08 max=2.63E-08 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.53E-09 med=2.87E-08 avg=2.92E-08 max=6.16E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=2.71E-10 med=3.12E-09 avg=3.14E-09 max=7.35E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.95E-09 med=1.12E-08 avg=1.11E-08 max=2.30E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.63E-10 med=1.04E-09 avg=1.08E-09 max=2.35E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.19E-10 med=7.26E-10 avg=7.82E-10 max=1.92E-09 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=6.16E-12 med=9.65E-11 avg=1.05E-10 max=2.57E-10 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=4.36E-10 med=2.54E-08 avg=3.17E-08 max=9.12E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.62E-11 med=5.32E-09 avg=5.96E-09 max=1.79E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=8.67E-10 med=6.19E-09 avg=6.08E-09 max=1.37E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=1.02E-10 med=6.16E-10 avg=6.54E-10 max=1.30E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f3, r)
min=8.15E-238 med=6.63E-218 avg=3.07E-190 max=2.37E-188 ES((10/10+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f3, r)
min=0.00E00 med=2.65E-318 avg=1.82E-307 max=1.22E-305 ES((10/2+100), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f3, r)
min=1.19E-185 med=1.05E-170 avg=7.28E-154 max=7.27E-152 ES((10/10+100), f=f3, r)
min=4.64E-282 med=2.48E-275 avg=3.08E-270 max=9.35E-269 ES((10/10,100), f=f3, r)
min=3.74E-218 med=5.54E-208 avg=3.69E-200 max=3.66E-198 ES((10/2+200), f=f3, r)
min=5.37E-238 med=5.98E-232 avg=8.61E-228 max=2.80E-226 ES((10/2,200), f=f3, r)
min=4.55E-130 med=1.55E-116 avg=5.67E-105 max=5.64E-103 ES((10/10+200), f=f3, r)
min=9.46E-173 med=2.47E-167 avg=3.57E-163 max=3.09E-161 ES((10/10,200), f=f3, r)
min=1.16E-291 med=1.05E-279 avg=1.29E-268 max=1.29E-266 ES((20/2+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f3, r)
min=1.16E-151 med=9.90E-140 avg=6.72E-128 max=6.71E-126 ES((20/10+50), f=f3, r)
min=4.35E-225 med=2.00E-219 avg=9.17E-212 max=9.17E-210 ES((20/10,50), f=f3, r)
min=5.53E-244 med=2.43E-234 avg=1.43E-229 max=5.26E-228 ES((20/2+100), f=f3, r)
min=6.62E-308 med=3.25E-301 avg=1.13E-297 max=6.70E-296 ES((20/2,100), f=f3, r)
min=1.01E-136 med=1.34E-125 avg=1.28E-116 max=1.06E-114 ES((20/10+100), f=f3, r)
min=9.53E-211 med=5.43E-207 avg=5.92E-203 max=4.01E-201 ES((20/10,100), f=f3, r)
min=2.02E-174 med=2.65E-169 avg=1.93E-166 max=6.47E-165 ES((20/2+200), f=f3, r)
min=2.40E-202 med=6.07E-199 avg=4.08E-196 max=1.89E-194 ES((20/2,200), f=f3, r)
min=9.85E-102 med=1.18E-95 avg=7.79E-89 max=7.63E-87 ES((20/10+200), f=f3, r)
min=3.79E-144 med=6.30E-141 avg=4.73E-139 max=1.40E-137 ES((20/10,200), f=f3, r)
min=1.58E-169 med=2.96E-154 avg=4.26E-12 max=4.26E-10 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=4.59E-08 med=1.30E-04 avg=7.84E-04 max=6.30E-03 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=6.24E-195 med=2.00E-32 avg=3.33E-06 max=1.24E-04 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=2.22E-174 med=3.14E-168 avg=3.11E-165 max=1.41E-163 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=4.36E-318 avg=8.26E-23 max=8.26E-21 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=7.39E-253 med=1.49E-245 avg=1.91E-240 max=1.41E-238 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=1.30E-88 med=3.47E-84 avg=3.35E-82 max=2.65E-80 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=6.32E-51 med=9.16E-11 avg=2.07E-05 max=1.10E-03 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=5.17E-101 med=5.73E-97 avg=7.36E-35 max=7.36E-33 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=9.72E-89 med=2.52E-86 avg=4.12E-85 max=1.33E-83 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=2.31E-173 med=1.88E-165 avg=5.64E-161 max=5.62E-159 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=8.34E-130 med=8.26E-127 avg=1.18E-124 max=5.33E-123 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=1.18E-44 med=3.23E-43 avg=8.41E-43 max=7.50E-42 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=4.14E-60 med=4.31E-57 avg=3.07E-30 max=1.58E-28 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=5.50E-51 med=4.38E-49 avg=1.26E-48 max=1.46E-47 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=9.58E-46 med=2.22E-44 avg=5.00E-44 max=8.09E-43 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=3.42E-88 med=1.58E-85 avg=1.67E-84 max=5.29E-83 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=7.45E-67 med=7.80E-65 avg=2.54E-64 max=2.98E-63 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=1.08E-02 med=2.82E-01 avg=2.83E-01 max=5.85E-01 RW(o1=NM([-10,10]^5), f=f3, r)
min=9.70E-02 med=5.36E-01 avg=4.92E-01 max=8.04E-01 RW(o1=UM([-10,10]^5), f=f3, r)
min=1.45E-03 med=9.36E-03 avg=9.67E-03 max=2.09E-02 RS(f=f3, r)

Dimensions: 10

Sphere

min=9.51E-05 med=2.25E-04 avg=2.21E-04 max=3.09E-04 HC(o1=NM([-10,10]^10), r)
min=1.36E-05 med=3.15E-05 avg=3.02E-05 max=4.32E-05 HC(o1=UM([-10,10]^10), r)
min=1.24E-02 med=3.19E-02 avg=3.23E-02 max=5.84E-02 SA(o1=NM([-10,10]^10), r, log(1))
min=1.80E-02 med=6.46E-02 avg=6.68E-02 max=1.32E-01 SA(o1=UM([-10,10]^10), r, log(1))
min=1.24E-02 med=3.19E-02 avg=3.23E-02 max=5.84E-02 SA(o1=NM([-10,10]^10), r, log(1))
min=1.80E-02 med=6.46E-02 avg=6.68E-02 max=1.32E-01 SA(o1=UM([-10,10]^10), r, log(1))
min=7.67E-04 med=2.77E-03 avg=2.65E-03 max=4.03E-03 SA(o1=NM([-10,10]^10), r, log(0.1))
min=1.16E-03 med=3.38E-03 avg=3.39E-03 max=6.63E-03 SA(o1=UM([-10,10]^10), r, log(0.1))
min=1.11E-04 med=2.32E-04 avg=2.30E-04 max=3.59E-04 SA(o1=NM([-10,10]^10), r, log(0.001))
min=1.59E-05 med=4.53E-05 avg=4.50E-05 max=6.74E-05 SA(o1=UM([-10,10]^10), r, log(0.001))
min=5.94E-05 med=2.24E-04 avg=2.25E-04 max=3.77E-04 SA(o1=NM([-10,10]^10), r, log(1.0E-4))
min=9.82E-06 med=3.05E-05 avg=2.92E-05 max=4.86E-05 SA(o1=UM([-10,10]^10), r, log(1.0E-4))
min=5.62E-02 med=2.65E-01 avg=2.70E-01 max=5.35E-01 SQ(o1=NM([-10,10]^10), r, exp(Ts=1, e=1.0E-5))
min=1.38E-01 med=7.82E-01 avg=8.64E-01 max=2.08E00 SQ(o1=UM([-10,10]^10), r, exp(Ts=1, e=1.0E-5))
min=1.35E-04 med=3.15E-04 avg=3.22E-04 max=5.56E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1, e=1.0E-4))
min=1.85E-05 med=6.17E-05 avg=5.94E-05 max=9.37E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1, e=1.0E-4))
min=5.04E-03 med=1.51E-02 avg=1.53E-02 max=2.54E-02 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-5))
min=7.21E-03 med=2.79E-02 avg=2.90E-02 max=6.19E-02 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-5))
min=1.03E-04 med=2.81E-04 avg=2.79E-04 max=4.21E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-4))
min=9.88E-06 med=4.01E-05 avg=3.94E-05 max=5.48E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-4))
min=6.37E-05 med=3.45E-04 avg=3.42E-04 max=5.32E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-5))
min=6.37E-05 med=1.38E-04 avg=1.43E-04 max=2.17E-04 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-5))
min=9.63E-05 med=2.26E-04 avg=2.31E-04 max=3.62E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-4))
min=1.05E-05 med=3.28E-05 avg=3.22E-05 max=5.30E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-4))
min=9.69E-05 med=2.41E-04 avg=2.35E-04 max=3.65E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.74E-05 med=3.65E-05 avg=3.59E-05 max=5.12E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.04E-04 med=2.21E-04 avg=2.23E-04 max=3.28E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.52E-05 med=3.22E-05 avg=3.19E-05 max=4.56E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-4))
min=3.92E-05 med=1.03E-04 avg=1.02E-04 max=1.57E-04 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.93E-06 med=9.65E-06 avg=9.42E-06 max=1.59E-05 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.30E-05 med=3.24E-05 avg=3.27E-05 max=5.11E-05 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.40E-06 med=3.45E-06 avg=3.48E-06 max=5.80E-06 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.39E-06 med=4.51E-06 avg=4.52E-06 max=8.36E-06 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.76E-07 med=4.42E-07 avg=4.57E-07 max=8.17E-07 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.92E-05 med=1.30E-04 avg=1.41E-04 max=4.01E-04 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=4.24E-06 med=2.22E-05 avg=2.18E-05 max=4.46E-05 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=9.48E-06 med=2.24E-05 avg=2.23E-05 max=3.59E-05 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=8.02E-07 med=2.15E-06 avg=2.13E-06 max=3.78E-06 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=2.73E-310 med=3.58E-300 avg=1.95E-287 max=1.95E-285 ES((10/2+50), r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), r)
min=6.49E-199 med=1.01E-187 avg=2.96E-177 max=2.96E-175 ES((10/10+50), r)
min=2.17E-297 med=2.76E-288 avg=1.13E-282 max=1.12E-280 ES((10/10,50), r)
min=4.69E-228 med=5.97E-218 avg=4.13E-211 max=3.94E-209 ES((10/2+100), r)
min=5.05E-265 med=1.27E-259 avg=3.39E-255 max=2.21E-253 ES((10/2,100), r)
min=1.20E-151 med=1.91E-144 avg=1.74E-139 max=1.12E-137 ES((10/10+100), r)
min=2.23E-201 med=1.44E-196 avg=3.69E-193 max=3.28E-191 ES((10/10,100), r)
min=5.89E-146 med=2.62E-141 avg=1.32E-138 max=1.13E-136 ES((10/2+200), r)
min=1.52E-159 med=2.29E-156 avg=1.59E-152 max=1.53E-150 ES((10/2,200), r)
min=1.30E-102 med=3.15E-97 avg=2.04E-93 max=1.92E-91 ES((10/10+200), r)
min=6.40E-123 med=1.25E-119 avg=1.65E-117 max=9.15E-116 ES((10/10,200), r)
min=2.03E-199 med=5.24E-193 avg=2.41E-189 max=1.16E-187 ES((20/2+50), r)
min=1.50E-246 med=8.13E-241 avg=1.01E-236 max=5.88E-235 ES((20/2,50), r)
min=4.06E-124 med=1.58E-118 avg=3.59E-115 max=1.89E-113 ES((20/10+50), r)
min=3.64E-164 med=3.64E-160 avg=4.60E-156 max=2.82E-154 ES((20/10,50), r)
min=1.94E-165 med=2.82E-161 avg=1.00E-157 max=4.59E-156 ES((20/2+100), r)
min=2.35E-207 med=1.98E-202 avg=3.19E-199 max=2.98E-197 ES((20/2,100), r)
min=2.19E-108 med=5.48E-104 avg=1.65E-101 max=4.48E-100 ES((20/10+100), r)
min=2.97E-151 med=2.15E-148 avg=2.14E-146 max=6.39E-145 ES((20/10,100), r)
min=6.35E-119 med=8.72E-116 avg=1.01E-113 max=4.61E-112 ES((20/2+200), r)
min=1.50E-136 med=2.32E-133 avg=3.89E-132 max=1.92E-130 ES((20/2,200), r)
min=2.01E-80 med=1.37E-77 avg=2.66E-76 max=7.45E-75 ES((20/10+200), r)
min=6.73E-103 med=2.71E-100 avg=2.16E-99 max=4.52E-98 ES((20/10,200), r)
min=7.64E-98 med=2.54E-88 avg=7.44E-04 max=7.08E-02 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=2.14E-01 med=3.15E00 avg=3.98E00 max=1.69E01 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=2.42E-24 med=3.78E-07 avg=6.14E-03 max=3.41E-01 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=2.93E-117 med=1.31E-113 avg=1.11E-111 max=5.49E-110 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=8.02E-223 med=2.59E-216 avg=4.83E-38 max=4.83E-36 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=1.59E-170 med=2.72E-166 avg=2.51E-163 max=1.98E-161 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=3.27E-49 med=4.75E-47 avg=1.23E-46 max=1.73E-45 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=2.66E-03 med=1.50E-01 avg=3.39E-01 max=2.41E00 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=7.14E-55 med=7.35E-53 avg=5.53E-52 max=9.77E-51 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.25E-59 med=5.50E-57 avg=1.92E-56 max=3.56E-55 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=1.32E-112 med=1.03E-109 avg=2.17E-107 max=7.80E-106 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=3.77E-86 med=3.76E-84 avg=7.17E-83 max=3.10E-81 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=3.08E-24 med=3.95E-22 avg=8.05E-22 max=7.20E-21 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=9.00E-10 med=1.17E-04 avg=7.73E-04 max=1.12E-02 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=2.98E-26 med=5.14E-25 avg=8.44E-25 max=5.57E-24 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.50E-29 med=3.15E-28 avg=6.09E-28 max=6.15E-27 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=4.57E-58 med=2.72E-55 avg=1.91E-54 max=5.04E-53 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=1.04E-43 med=4.87E-42 avg=1.70E-41 max=4.04E-40 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=6.52E01 med=2.04E02 avg=1.99E02 max=3.39E02 RW(o1=NM([-10,10]^10), r)
min=9.78E01 med=2.80E02 avg=2.80E02 max=4.80E02 RW(o1=UM([-10,10]^10), r)
min=1.48E01 med=3.21E01 avg=3.14E01 max=4.40E01 RS(r)

SumCosLn

min=1.81E-06 med=5.45E-06 avg=5.35E-06 max=8.32E-06 HC(o1=NM([-10,10]^10), f=f1, r)
min=2.87E-07 med=7.85E-07 avg=7.68E-07 max=1.07E-06 HC(o1=UM([-10,10]^10), f=f1, r)
min=1.74E-01 med=3.73E-01 avg=3.72E-01 max=5.47E-01 SA(o1=NM([-10,10]^10), f=f1, r, log(1))
min=2.66E-01 med=5.08E-01 avg=5.01E-01 max=6.91E-01 SA(o1=UM([-10,10]^10), f=f1, r, log(1))
min=1.74E-01 med=3.73E-01 avg=3.72E-01 max=5.47E-01 SA(o1=NM([-10,10]^10), f=f1, r, log(1))
min=2.66E-01 med=5.08E-01 avg=5.01E-01 max=6.91E-01 SA(o1=UM([-10,10]^10), f=f1, r, log(1))
min=4.49E-03 med=1.90E-02 avg=3.10E-02 max=1.58E-01 SA(o1=NM([-10,10]^10), f=f1, r, log(0.1))
min=1.59E-01 med=4.00E-01 avg=4.05E-01 max=6.28E-01 SA(o1=UM([-10,10]^10), f=f1, r, log(0.1))
min=1.30E-05 med=3.01E-05 avg=3.02E-05 max=4.60E-05 SA(o1=NM([-10,10]^10), f=f1, r, log(0.001))
min=9.12E-06 med=3.06E-05 avg=3.09E-05 max=4.65E-05 SA(o1=UM([-10,10]^10), f=f1, r, log(0.001))
min=1.91E-06 med=7.65E-06 avg=7.37E-06 max=1.08E-05 SA(o1=NM([-10,10]^10), f=f1, r, log(1.0E-4))
min=9.69E-07 med=2.81E-06 avg=2.73E-06 max=4.64E-06 SA(o1=UM([-10,10]^10), f=f1, r, log(1.0E-4))
min=1.85E-01 med=4.12E-01 avg=4.11E-01 max=6.28E-01 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-5))
min=2.77E-01 med=5.05E-01 avg=5.08E-01 max=6.98E-01 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-5))
min=1.51E-05 med=3.79E-05 avg=3.69E-05 max=5.62E-05 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-4))
min=1.42E-05 med=3.80E-05 avg=3.86E-05 max=7.46E-05 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-4))
min=1.19E-01 med=3.28E-01 avg=3.30E-01 max=5.16E-01 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-5))
min=2.57E-01 med=4.96E-01 avg=4.98E-01 max=6.93E-01 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-5))
min=5.31E-06 med=1.09E-05 avg=1.08E-05 max=1.61E-05 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-4))
min=5.44E-07 med=3.33E-06 avg=3.30E-06 max=5.07E-06 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-4))
min=5.18E-05 med=1.70E-04 avg=1.67E-04 max=2.70E-04 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-5))
min=6.21E-05 med=2.74E-04 avg=2.80E-04 max=5.32E-04 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-5))
min=2.92E-06 med=6.12E-06 avg=6.17E-06 max=9.99E-06 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-4))
min=4.20E-07 med=9.52E-07 avg=9.52E-07 max=1.33E-06 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-4))
min=6.32E-06 med=2.00E-05 avg=1.93E-05 max=3.06E-05 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
min=7.75E-06 med=1.49E-05 avg=1.50E-05 max=2.83E-05 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.41E-06 med=5.96E-06 avg=5.82E-06 max=8.84E-06 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
min=4.20E-07 med=8.13E-07 avg=8.34E-07 max=1.22E-06 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
min=9.08E-07 med=2.40E-06 avg=2.39E-06 max=4.11E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=9.25E-08 med=2.33E-07 avg=2.34E-07 max=4.25E-07 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=3.54E-07 med=8.83E-07 avg=8.81E-07 max=1.39E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=3.99E-08 med=8.99E-08 avg=8.96E-08 max=1.43E-07 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=2.91E-08 med=1.16E-07 avg=1.12E-07 max=1.94E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=4.16E-09 med=1.13E-08 avg=1.14E-08 max=2.03E-08 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.61E-07 med=3.11E-06 avg=3.32E-06 max=8.17E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=1.22E-07 med=4.85E-07 avg=5.19E-07 max=1.18E-06 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=1.61E-07 med=5.61E-07 avg=5.64E-07 max=9.32E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
840 min = 1 . 9 1E−08 med = 5 . 4 8E−08 avg = 5 . 4 7E−08 max= 1 . 1 4E−07 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 1 0 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=50) )
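The sEA entries vary the selection scheme between tournament selection with tournament sizes k=2, 3, and 5 (Tour(k)) and truncation selection keeping the mps best individuals (Trunc(mps)). The following is a minimal standalone sketch of these two schemes for a minimization problem; it is illustrative only and not the framework code that produced the listings above:

```python
import random

def tournament_select(pop, fitness, k, n):
    """Draw n parents; each wins a size-k tournament (lower fitness is better)."""
    return [min(random.sample(pop, k), key=fitness) for _ in range(n)]

def truncation_select(pop, fitness, mps, n):
    """Keep the mps best individuals, then draw the n parents uniformly from them."""
    best = sorted(pop, key=fitness)[:mps]
    return [random.choice(best) for _ in range(n)]
```

Larger k or smaller mps raise the selection pressure, which is consistent with the trend in the rows above: Tour(k=5) reaches better end results than Tour(k=2) on this problem.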
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f1, r)
57 DEMOS
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f1, r)
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f1, r)
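The ES configurations use the standard (mu/rho +, lambda) notation: mu parents, rho of which are recombined to create each of the lambda offspring; a "+" strategy selects the next mu parents from parents and offspring together, while a "," strategy selects from the lambda offspring only. A hedged sketch of one generation of this selection logic follows; the intermediate recombination and fixed Gaussian step size are illustrative assumptions, since self-adaptation and the framework's actual operators are not reproduced here:

```python
import random

def es_generation(parents, fitness, rho, lam, plus):
    """One (mu/rho + lam) or (mu/rho, lam) generation for minimization.
    Recombination: intermediate (average of rho sampled parents);
    mutation: Gaussian with a fixed step size (illustrative choice)."""
    mu = len(parents)
    offspring = []
    for _ in range(lam):
        group = random.sample(parents, rho)
        child = [sum(xs) / rho for xs in zip(*group)]      # intermediate recombination
        child = [x + random.gauss(0.0, 0.1) for x in child]  # Gaussian mutation
        offspring.append(child)
    pool = parents + offspring if plus else offspring      # "+" vs "," selection pool
    return sorted(pool, key=fitness)[:mu]
```

Because a "+" strategy keeps the parents in the selection pool, it is elitist: the best objective value can never get worse from one generation to the next.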
min=0.00E00 med=0.00E00 avg=1.50E-06 max=1.40E-04 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=1.96E-03 med=1.85E-02 avg=2.46E-02 max=8.89E-02 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=1.39E-10 avg=6.28E-06 max=2.57E-04 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=6.42E-06 med=1.26E-03 avg=1.79E-03 max=1.46E-02 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=3.37E-12 med=1.12E-06 avg=1.11E-05 max=2.30E-04 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=2.48E-01 med=4.20E-01 avg=4.15E-01 max=5.84E-01 RW(o1=NM([-10,10]^10), f=f1, r)
min=2.73E-01 med=5.09E-01 avg=5.05E-01 max=7.04E-01 RW(o1=UM([-10,10]^10), f=f1, r)
min=1.14E-01 med=1.68E-01 avg=1.66E-01 max=1.95E-01 RS(f=f1, r)

CosLnSum

min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 HC(o1=NM([-10,10]^10), f=f2, r)
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 HC(o1=UM([-10,10]^10), f=f2, r)
min=6.38E-01 med=7.44E-01 avg=7.42E-01 max=8.92E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(1))
min=6.80E-01 med=8.23E-01 avg=8.17E-01 max=9.54E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(1))
min=6.38E-01 med=7.44E-01 avg=7.42E-01 max=8.92E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(1))
min=6.80E-01 med=8.23E-01 avg=8.17E-01 max=9.54E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(1))
min=5.68E-01 med=5.99E-01 avg=6.02E-01 max=6.91E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(0.1))
min=6.65E-01 med=7.78E-01 avg=7.77E-01 max=9.21E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(0.1))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(0.001))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(0.001))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(1.0E-4))
min=6.64E-01 med=7.61E-01 avg=7.61E-01 max=8.87E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-5))
min=6.81E-01 med=8.27E-01 avg=8.19E-01 max=9.57E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-4))
min=6.31E-01 med=7.27E-01 avg=7.33E-01 max=8.51E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=6.85E-01 med=8.17E-01 avg=8.14E-01 max=9.54E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=5.49E-01 med=5.50E-01 avg=5.50E-01 max=5.50E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=5.49E-01 med=5.50E-01 avg=5.50E-01 max=5.51E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=5.49E-01 med=5.55E-01 avg=5.57E-01 max=5.83E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=5.76E-01 med=6.39E-01 avg=6.38E-01 max=6.72E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.55E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=5.65E-01 med=6.13E-01 avg=6.12E-01 max=6.45E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.57E-01 med=5.95E-01 avg=5.94E-01 max=6.24E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.52E-01 med=5.76E-01 avg=5.77E-01 max=6.07E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.49E-01 med=5.56E-01 avg=5.58E-01 max=5.87E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=5.78E-01 med=6.32E-01 avg=6.31E-01 max=6.75E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.09E-01 max=7.25E-01 ES((10/10+50), f=f2, r)
min=0.00E00 med=6.88E-01 avg=6.58E-01 max=7.40E-01 ES((10/10,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.93E-02 max=6.68E-01 ES((10/2,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=6.34E-02 max=7.32E-01 ES((10/10+100), f=f2, r)
min=0.00E00 med=6.99E-01 avg=6.52E-01 max=7.38E-01 ES((10/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=2.08E-02 max=7.13E-01 ES((10/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=7.61E-02 max=7.24E-01 ES((10/10+200), f=f2, r)
min=0.00E00 med=6.97E-01 avg=6.07E-01 max=7.39E-01 ES((10/10,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=3.22E-02 max=5.91E-01 ES((20/2,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.43E-01 max=7.32E-01 ES((20/10+50), f=f2, r)
min=4.35E-01 med=6.73E-01 avg=6.61E-01 max=7.34E-01 ES((20/10,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=2.51E-01 max=7.17E-01 ES((20/2,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.04E-01 max=7.38E-01 ES((20/10+100), f=f2, r)
min=5.08E-01 med=6.85E-01 avg=6.78E-01 max=7.42E-01 ES((20/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=7.11E-02 max=7.21E-01 ES((20/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=6.18E-02 max=7.20E-01 ES((20/10+200), f=f2, r)
min=0.00E00 med=6.98E-01 avg=6.74E-01 max=7.36E-01 ES((20/10,200), f=f2, r)
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.60E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=5.49E-01 avg=5.27E-01 max=5.50E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=1.60E-05 med=5.49E-01 avg=4.67E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=9.22E-05 med=5.49E-01 avg=4.92E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=2.20E-01 avg=2.98E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=5.49E-01 avg=4.83E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=1.16E-04 avg=1.18E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=6.79E-01 med=7.60E-01 avg=7.65E-01 max=9.00E-01 RW(o1=NM([-10,10]^10), f=f2, r)
min=7.01E-01 med=8.19E-01 avg=8.20E-01 max=9.50E-01 RW(o1=UM([-10,10]^10), f=f2, r)
min=5.94E-01 med=6.20E-01 avg=6.20E-01 max=6.41E-01 RS(f=f2, r)

MaxSqr

min=2.48E-07 med=7.09E-07 avg=6.72E-07 max=1.08E-06 HC(o1=NM([-10,10]^10), f=f3, r)
min=3.89E-08 med=9.93E-08 avg=9.92E-08 max=1.55E-07 HC(o1=UM([-10,10]^10), f=f3, r)
min=5.83E-02 med=2.45E-01 avg=2.61E-01 max=5.84E-01 SA(o1=NM([-10,10]^10), f=f3, r, log(1))
min=1.60E-01 med=5.66E-01 avg=5.59E-01 max=8.42E-01 SA(o1=UM([-10,10]^10), f=f3, r, log(1))
min=5.83E-02 med=2.45E-01 avg=2.61E-01 max=5.84E-01 SA(o1=NM([-10,10]^10), f=f3, r, log(1))
min=1.60E-01 med=5.66E-01 avg=5.59E-01 max=8.42E-01 SA(o1=UM([-10,10]^10), f=f3, r, log(1))
min=2.30E-03 med=7.57E-03 avg=8.24E-03 max=2.10E-02 SA(o1=NM([-10,10]^10), f=f3, r, log(0.1))
min=6.02E-02 med=1.59E-01 avg=1.75E-01 max=3.77E-01 SA(o1=UM([-10,10]^10), f=f3, r, log(0.1))
min=6.61E-06 med=2.66E-05 avg=2.66E-05 max=4.27E-05 SA(o1=NM([-10,10]^10), f=f3, r, log(0.001))
min=1.40E-05 med=3.90E-05 avg=4.09E-05 max=8.48E-05 SA(o1=UM([-10,10]^10), f=f3, r, log(0.001))
min=8.38E-07 med=3.12E-06 avg=3.09E-06 max=4.69E-06 SA(o1=NM([-10,10]^10), f=f3, r, log(1.0E-4))
min=7.49E-07 med=2.77E-06 avg=2.80E-06 max=4.92E-06 SA(o1=UM([-10,10]^10), f=f3, r, log(1.0E-4))
min=1.58E-01 med=4.69E-01 avg=4.61E-01 max=6.79E-01 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-5))
min=1.65E-01 med=6.48E-01 avg=6.39E-01 max=8.85E-01 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-5))
min=1.26E-05 med=3.16E-05 avg=3.23E-05 max=6.22E-05 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-4))
min=1.55E-05 med=5.95E-05 avg=6.39E-05 max=1.71E-04 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-4))
min=2.17E-02 med=1.42E-01 avg=1.46E-01 max=3.40E-01 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=1.73E-01 med=5.09E-01 avg=4.98E-01 max=7.88E-01 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=1.69E-06 med=4.27E-06 avg=4.07E-06 max=6.38E-06 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=8.95E-07 med=3.30E-06 avg=3.23E-06 max=5.54E-06 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=5.81E-05 med=1.91E-04 avg=1.87E-04 max=3.08E-04 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=1.39E-04 med=4.25E-04 avg=4.56E-04 max=1.31E-03 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=3.82E-07 med=8.67E-07 avg=8.59E-07 max=1.44E-06 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=4.17E-08 med=1.45E-07 avg=1.39E-07 max=2.20E-07 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=5.15E-06 med=1.49E-05 avg=1.45E-05 max=2.36E-05 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.36E-06 med=1.74E-05 avg=1.87E-05 max=3.65E-05 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=4.33E-07 med=7.68E-07 avg=7.77E-07 max=1.37E-06 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=4.87E-08 med=1.15E-07 avg=1.15E-07 max=1.83E-07 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.62E-07 med=3.64E-07 avg=3.62E-07 max=5.36E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.55E-08 med=6.68E-08 avg=1.74E-04 max=3.38E-03 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=4.80E-08 med=1.23E-07 avg=1.27E-07 max=2.09E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.63E-09 med=1.54E-08 avg=6.88E-05 max=6.27E-03 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=8.05E-09 med=1.85E-08 avg=1.83E-08 max=3.93E-08 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.56E-10 med=2.09E-09 avg=1.85E-06 max=1.09E-04 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.12E-07 med=5.34E-07 avg=5.18E-07 max=1.01E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=3.10E-08 med=8.64E-08 avg=1.51E-04 max=6.32E-03 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=4.05E-08 med=8.51E-08 avg=8.52E-08 max=1.39E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=4.04E-09 med=1.03E-08 avg=3.12E-05 max=7.60E-04 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=2.16E-263 med=3.27E-252 avg=4.47E-241 max=4.13E-239 ES((10/2+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f3, r)
min=2.75E-157 med=5.60E-143 avg=5.19E-132 max=4.18E-130 ES((10/10+50), f=f3, r)
min=2.48E-247 med=1.34E-240 avg=4.35E-234 max=3.88E-232 ES((10/10,50), f=f3, r)
min=2.04E-198 med=5.86E-191 avg=6.18E-184 max=6.13E-182 ES((10/2+100), f=f3, r)
min=1.84E-243 med=1.06E-235 avg=4.38E-231 max=1.52E-229 ES((10/2,100), f=f3, r)
min=4.43E-126 med=2.74E-116 avg=6.56E-112 max=4.19E-110 ES((10/10+100), f=f3, r)
min=1.67E-178 med=1.86E-172 avg=3.71E-169 max=2.51E-167 ES((10/10,100), f=f3, r)
min=1.70E-132 med=3.68E-128 avg=3.75E-124 max=3.66E-122 ES((10/2+200), f=f3, r)
min=1.48E-149 med=4.41E-145 avg=1.11E-140 max=1.08E-138 ES((10/2,200), f=f3, r)
min=5.62E-89 med=2.70E-83 avg=4.31E-79 max=3.22E-77 ES((10/10+200), f=f3, r)
min=1.51E-112 med=5.22E-109 avg=4.88E-107 max=2.13E-105 ES((10/10,200), f=f3, r)
min=4.74E-170 med=1.89E-161 avg=5.44E-155 max=5.43E-153 ES((20/2+50), f=f3, r)
min=1.67E-211 med=4.91E-204 avg=5.56E-200 max=3.92E-198 ES((20/2,50), f=f3, r)
min=1.26E-97 med=8.56E-91 avg=2.17E-87 max=4.95E-86 ES((20/10+50), f=f3, r)
min=9.37E-124 med=1.61E-119 avg=1.46E-115 max=1.01E-113 ES((20/10,50), f=f3, r)
min=1.30E-147 med=1.48E-140 avg=1.61E-136 max=1.26E-134 ES((20/2+100), f=f3, r)
min=4.62E-186 med=3.39E-181 avg=7.32E-179 max=5.29E-177 ES((20/2,100), f=f3, r)
min=5.45E-87 med=3.16E-83 avg=2.15E-79 max=1.07E-77 ES((20/10+100), f=f3, r)
min=5.71E-129 med=1.04E-125 avg=3.03E-123 max=2.03E-121 ES((20/10,100), f=f3, r)
min=1.37E-107 med=3.07E-104 avg=9.60E-103 max=5.28E-101 ES((20/2+200), f=f3, r)
min=5.72E-127 med=3.11E-123 avg=6.42E-122 max=9.39E-121 ES((20/2,200), f=f3, r)
min=2.47E-70 med=1.89E-65 avg=5.60E-64 max=2.38E-62 ES((20/10+200), f=f3, r)
min=6.72E-92 med=1.25E-89 avg=1.31E-88 max=5.09E-87 ES((20/10,200), f=f3, r)
min=4.79E-08 med=4.50E-04 avg=4.44E-03 max=1.05E-01 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=6.86E-04 med=1.88E-02 avg=2.34E-02 max=7.67E-02 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=2.67E-07 med=1.67E-03 avg=3.43E-03 max=2.38E-02 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=3.31E-93 med=6.71E-89 avg=6.34E-87 max=2.35E-85 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=4.67E-152 med=4.73E-23 avg=3.02E-04 max=2.17E-02 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=5.11E-134 med=3.46E-128 avg=1.74E-123 max=1.54E-121 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=6.35E-40 med=4.25E-37 avg=3.81E-07 max=3.43E-05 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=4.63E-05 med=3.12E-03 avg=4.20E-03 max=1.92E-02 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=2.01E-46 med=2.30E-13 avg=1.92E-05 max=8.15E-04 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=1.56E-49 med=5.08E-47 avg=5.86E-46 max=1.68E-44 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=2.96E-89 med=3.16E-83 avg=2.31E-18 max=1.99E-16 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=8.32E-71 med=2.69E-68 avg=5.74E-67 max=2.74E-65 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=3.62E-21 med=2.32E-19 avg=3.31E-19 max=2.69E-18 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=3.65E-09 med=3.57E-05 avg=1.71E-04 max=2.21E-03 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=9.65E-25 med=2.18E-23 avg=4.58E-23 max=3.99E-22 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
883
1054 min = 3 . 4 5E−26 med = 1 . 0 8E−24 avg = 2 . 1 3E−24 max= 1 . 2 8E−23 DE−1( f=f 3 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F= 0 . 7 ) )
1055 min = 3 . 7 0E−47 med = 9 . 8 1E−45 avg = 8 . 4 1E−44 max= 2 . 4 8E−42 DE−1( f=f 3 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F= 0 . 3 ) )
884
1056 min = 3 . 6 7E−38 med = 6 . 0 8E−36 avg = 1 . 6 9E−35 max= 1 . 4 9E−34 DE−1( f=f 3 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F= 0 . 5 ) )
1057 min = 1 . 8 4E−01 med = 4 . 7 8E−01 avg = 4 . 6 9E−01 max= 7 . 1 2E−01 RW( o1=NM( [ − 1 0 , 1 0 ] ˆ 1 0 ) , f=f 3 , r )
1058 min = 2 . 2 6E−01 med = 6 . 7 8E−01 avg = 6 . 5 6E−01 max= 8 . 4 5E−01 RW( o1=UM( [ − 1 0 , 1 0 ] ˆ 1 0 ) , f=f 3 , r )
1059 min = 4 . 7 0E−02 med = 9 . 8 1E−02 avg = 9 . 4 5E−02 max= 1 . 3 4E−01 RS ( f=f 3 , r )
57 DEMOS
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES 885
57.1.1.2.2 Bin Packing Problem
The Bin Packing problem has been introduced in Example E1.1 on page 25. In Task 67
on page 238, we provide a version of the problem where the aim is to pack n objects
with sizes a1 to an into as few bins of size b as possible. The number of bins required
is k.
Here, we focus on that problem. The objective functions introduced in this package
consider a candidate solution x to be a permutation of the natural numbers 0 to n − 1 which
denotes the order in which the objects are put into bins. Starting with the first object
and an empty bin, the objects are packed in a loop: the ith element x[i] is taken, and if
the size a[x[i]] of the corresponding object does not exceed the remaining space in the
current bin, the object is placed into the bin and the bin's free space is reduced by
a[x[i]]. If it does not fit, a new bin is opened; the object is put into that bin, and
b − a[x[i]] remains as free space. This process is repeated until all objects have been
packed into bins according to the permutation x, and the number k of bins used is counted.
This number is subject to minimization. There are several different ways to express
this optimization criterion, and we provide some of them in this package. Besides counting
the bins directly, we could, for instance, also sum up the (squares of the) space remaining free
(wasted) in each bin and minimize that sum instead.
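The permutation-decoding rule described above can be sketched as follows. The class and method names are illustrative and are not the ones used in the actual package:

```java
// A minimal sketch of decoding a permutation x into the bin count k via
// the first-fit-in-order rule described in the text above.
final class BinPackingObjective {
  /** the sizes a[0..n-1] of the n objects */
  private final int[] sizes;
  /** the capacity b of each bin */
  private final int binSize;

  BinPackingObjective(final int[] psizes, final int pbinSize) {
    this.sizes = psizes;
    this.binSize = pbinSize;
  }

  /** count the number k of bins needed when packing in the order given by x */
  int countBins(final int[] x) {
    int k = 1;               // start with one (empty) bin
    int free = this.binSize; // remaining free space of the current bin
    for (final int element : x) {
      final int a = this.sizes[element];
      if (a <= free) {       // the object fits into the current bin
        free -= a;
      } else {               // it does not fit: open a new bin
        k++;
        free = this.binSize - a;
      }
    }
    return k;                // objective value subject to minimization
  }
}
```

The alternative objective mentioned above could track `free` for every closed bin in the same loop and accumulate `free * free` instead of incrementing a counter.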
57.1.2.1.1 Base Classes
57.1.2.1.2 Experiment
57.1.2.2 Example Instances of the Traveling Salesman Problem
A Traveling Salesman Problem which corresponds to finding the optimal tour through
the 48 capitals of the US. This problem has been taken from
http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp/. In this task,
each of the 48 cities is represented by a coordinate pair, and the distance between two
cities is the Euclidean distance of their coordinates. The length of the optimal tour
for this problem is 10628.
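Under this distance measure, evaluating a candidate tour is a simple loop over consecutive cities. The sketch below uses illustrative names (not the book's actual classes) and follows the common TSPLIB convention of rounding the Euclidean distance to the nearest integer:

```java
// Illustrative sketch: evaluate the length of a closed tour over cities
// given as (x, y) coordinate pairs.
final class TourLength {
  /** the Euclidean distance between cities i and j, rounded to an integer */
  static int dist(final double[][] coords, final int i, final int j) {
    final double dx = coords[i][0] - coords[j][0];
    final double dy = coords[i][1] - coords[j][1];
    return (int) Math.round(Math.sqrt((dx * dx) + (dy * dy)));
  }

  /** the total length of the closed tour described by the permutation tour */
  static long evaluate(final double[][] coords, final int[] tour) {
    long sum = 0L;
    for (int i = 0; i < tour.length; i++) {
      // wrap around: the last city connects back to the first one
      sum += dist(coords, tour[i], tour[(i + 1) % tour.length]);
    }
    return sum;
  }
}
```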
The following listings contain all the objects involved in the benchmark problem discussed in Section 50.4.2.
Listing 57.24: An instance of this class holds the information of one location.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * This class represents a location. Each customer, depot, and halting
11 * point is always associated with a location.
12 *
13 * @author Thomas Weise
14 */
15 public class Location extends InputObject {
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /** the locations */
20 public static final InputObjectList<Location> LOCATIONS = new InputObjectList<Location>(
21 Location.class);
22
23 /** the headers */
24 private static final List<String> HEADERS;
25
26 static {
27 HEADERS = new ArrayList<String>();
28 HEADERS.addAll(IHEADERS);
29 HEADERS.add("is_depot");//$NON-NLS-1$
30 }
31
32 /** is the location a depot? */
33 public final boolean isDepot;
34
35 /**
36 * Create a new location of the given id
37 *
38 * @param pid
39 * the id
40 * @param depot
41 * true if and only if the location is a depot
42 */
43 public Location(final int pid, final boolean depot) {
44 super(pid, LOCATIONS);
45 this.isDepot = depot;
46 }
47
48 /**
49 * Create a new location
50 *
51 * @param data
52 * the data
53 */
54 public Location(final String[] data) {
55 super(data, LOCATIONS);
56 this.isDepot = Boolean.parseBoolean(data[1]);
57 }
58
59 /**
60 * {@inheritDoc}
61 */
62 @Override
63 final char getIDPrefixChar() {
64 return 'L';
65 }
66
67 /**
68 * Get the column headers for a csv data representation. Child classes
69 * may add additional fields.
70 *
71 * @return the column headers for representing the data of this object in
72 * csv format
73 */
74 @Override
75 public List<String> getCSVColumnHeader() {
76 return HEADERS;
77 }
78
79 /**
80 * Fill in the data of this object into a string array which can be used
81 * to serialize the object in a nice way
82 *
83 * @param output
84 * the output
85 */
86 @Override
87 public void fillInCSVData(final String[] output) {
88 super.fillInCSVData(output);
89 output[1] = String.valueOf(this.isDepot);
90 }
91 }
Listing 57.25: An instance of this class holds the information of one order.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * An order is a set of goods that must be taken from one place to another
11 * one. It requires a single container and has time windows for pickup
12 * and delivery as well as a start and an end location.
13 *
14 * @author Thomas Weise
15 */
16 public class Order extends InputObject {
17 /** a constant required by Java serialization */
18 private static final long serialVersionUID = 1;
19
20 /** the list of orders */
21 public static final InputObjectList<Order> ORDERS = new InputObjectList<Order>(
22 Order.class);
23
24 /** no orders */
25 public static final Order[] NO_ORDERS = new Order[0];
26
27 /** the headers */
28 private static final List<String> HEADERS;
29
30 static {
31 HEADERS = new ArrayList<String>();
32 HEADERS.addAll(IHEADERS);
33 HEADERS.add("pickup_window_start");//$NON-NLS-1$
34 HEADERS.add("pickup_window_end");//$NON-NLS-1$
35 HEADERS.add("pickup_location");//$NON-NLS-1$
36 HEADERS.add("delivery_window_start");//$NON-NLS-1$
37 HEADERS.add("delivery_window_end");//$NON-NLS-1$
38 HEADERS.add("delivery_location");//$NON-NLS-1$
39 }
40
41 /** the beginning of the pickup time */
42 public final long pickupWindowStart;
43
44 /** the ending of the pickup time */
45 public final long pickupWindowEnd;
46
47 /** the pickup location */
48 public final Location pickupLocation;
49
50 /** the beginning of the delivery time */
51 public final long deliveryWindowStart;
52
53 /** the ending of the deliver time */
54 public final long deliveryWindowEnd;
55
56 /** the delivery location */
57 public final Location deliveryLocation;
58
59 /**
60 * Create a new order
61 *
62 * @param pid
63 * the id
64 * @param ppickupWindowStart
65 * the beginning of the pickup time
66 * @param ppickupWindowEnd
67 * the ending of the pickup time
68 * @param ppickupLocation
69 * the pickup location
70 * @param pdeliveryWindowStart
71 * the beginning of the delivery time
72 * @param pdeliveryWindowEnd
73 * the ending of the deliver time
74 * @param pdeliveryLocation
75 * the delivery location
76 */
77 public Order(final int pid, final long ppickupWindowStart,
78 final long ppickupWindowEnd, final Location ppickupLocation,
79 final long pdeliveryWindowStart, final long pdeliveryWindowEnd,
80 final Location pdeliveryLocation) {
81 super(pid, ORDERS);
82 this.pickupWindowStart = ppickupWindowStart;
83 this.pickupWindowEnd = ppickupWindowEnd;
84 this.pickupLocation = ppickupLocation;
85 this.deliveryWindowStart = pdeliveryWindowStart;
86 this.deliveryWindowEnd = pdeliveryWindowEnd;
87 this.deliveryLocation = pdeliveryLocation;
88 }
89
90 /**
91 * Create a new order
92 *
93 * @param data
94 * the order data
95 */
96 public Order(final String[] data) {
97 super(data, ORDERS);
98
99 this.pickupWindowStart = Long.parseLong(data[1]);
100 this.pickupWindowEnd = Long.parseLong(data[2]);
101 this.pickupLocation = Location.LOCATIONS.find(extractId(data[3]));
102 this.deliveryWindowStart = Long.parseLong(data[4]);
103 this.deliveryWindowEnd = Long.parseLong(data[5]);
104 this.deliveryLocation = Location.LOCATIONS.find(extractId(data[6]));
105 }
106
107 /**
108 * {@inheritDoc}
109 */
110 @Override
111 final char getIDPrefixChar() {
112 return 'O';
113 }
114
115 /**
116 * Get the column headers for a csv data representation. Child classes
117 * may add additional fields.
118 *
119 * @return the column headers for representing the data of this object in
120 * csv format
121 */
122 @Override
123 public List<String> getCSVColumnHeader() {
124 return HEADERS;
125 }
126
127 /**
128 * Fill in the data of this object into a string array which can be used
129 * to serialize the object in a nice way
130 *
131 * @param output
132 * the output
133 */
134 @Override
135 public void fillInCSVData(final String[] output) {
136 super.fillInCSVData(output);
137 output[1] = String.valueOf(this.pickupWindowStart);
138 output[2] = String.valueOf(this.pickupWindowEnd);
139 output[3] = String.valueOf(this.pickupLocation.getIDString());
140 output[4] = String.valueOf(this.deliveryWindowStart);
141 output[5] = String.valueOf(this.deliveryWindowEnd);
142 output[6] = String.valueOf(this.deliveryLocation.getIDString());
143 }
144 }
Listing 57.26: An instance of this class holds the information of one truck.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * A truck is a vehicle that can transport containers.
11 *
12 * @author Thomas Weise
13 */
14 public class Truck extends InputObject {
15 /** a constant required by Java serialization */
16 private static final long serialVersionUID = 1;
17
18 /** the trucks */
19 public static final InputObjectList<Truck> TRUCKS = new InputObjectList<Truck>(
20 Truck.class);
21
22 /** the headers */
23 private static final List<String> HEADERS;
24
25 static {
26 HEADERS = new ArrayList<String>();
27 HEADERS.addAll(IHEADERS);
28 HEADERS.add("start_location");//$NON-NLS-1$
29 HEADERS.add("max_containers");//$NON-NLS-1$
30 HEADERS.add("max_drivers");//$NON-NLS-1$
31 HEADERS.add("costs_per_h_km");//$NON-NLS-1$
32 HEADERS.add("avg_speed_km_h");//$NON-NLS-1$
33 }
34
35 /** the starting location */
36 public final Location start;
37
38 /** the maximum number of containers this truck can transport */
39 public final int maxContainers;
40
41 /** the maximum number of drivers */
42 public final int maxDrivers;
43
44 /** the costs per hour*kilometer in cent */
45 public final int costsPerHKm;
46
47 /** the average speed in km/h */
48 public final int avgSpeedKmH;
49
50 /**
51 * Create a new truck
52 *
53 * @param pid
54 * the id
55 * @param lstart
56 * the starting location
57 * @param maxCont
58 * the maximum number of containers this truck can transport
59 * @param maxDriv
60 * the maximum number of drivers that can sit in this truck
61 * @param costsHKM
62 * the costs per hour*kilometer in cent
63 * @param avgSpeed
64 * the average speed in km/h
65 */
66 public Truck(final int pid, final Location lstart, final int maxCont,
67 final int maxDriv, final int costsHKM, final int avgSpeed) {
68 super(pid, TRUCKS);
69 this.start = lstart;
70 this.maxContainers = maxCont;
71 this.maxDrivers = maxDriv;
72 this.costsPerHKm = costsHKM;
73 this.avgSpeedKmH = avgSpeed;
74 }
75
76 /**
77 * Create a new truck
78 *
79 * @param data
80 * the data
81 */
82 public Truck(final String[] data) {
83 super(data, TRUCKS);
84 this.start = Location.LOCATIONS.find(InputObject.extractId(data[1]));
85 this.maxContainers = Integer.parseInt(data[2]);
86 this.maxDrivers = Integer.parseInt(data[3]);
87 this.costsPerHKm = Integer.parseInt(data[4]);
88 this.avgSpeedKmH = Integer.parseInt(data[5]);
89 }
90
91 /**
92 * {@inheritDoc}
93 */
94 @Override
95 final char getIDPrefixChar() {
96 return 'T';
97 }
98
99 /**
100 * Get the column headers for a csv data representation. Child classes
101 * may add additional fields.
102 *
103 * @return the column headers for representing the data of this object in
104 * csv format
105 */
106 @Override
107 public List<String> getCSVColumnHeader() {
108 return HEADERS;
109 }
110
111 /**
112 * Fill in the data of this object into a string array which can be used
113 * to serialize the object in a nice way
114 *
115 * @param output
116 * the output
117 */
118 @Override
119 public void fillInCSVData(final String[] output) {
120 super.fillInCSVData(output);
121 output[1] = this.start.getIDString();
122 output[2] = String.valueOf(this.maxContainers);
123 output[3] = String.valueOf(this.maxDrivers);
124 output[4] = String.valueOf(this.costsPerHKm);
125 output[5] = String.valueOf(this.avgSpeedKmH);
126 }
127 }
Listing 57.27: An instance of this class holds the information of one container.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * A container is an object which can be used to package one order.
11 *
12 * @author Thomas Weise
13 */
14 public class Container extends InputObject {
15 /** a constant required by Java serialization */
16 private static final long serialVersionUID = 1;
17
18 /** the containers */
19 public static final InputObjectList<Container> CONTAINERS = new InputObjectList<Container>(
20 Container.class);
21
22 /** no containers */
23 public static final Container[] NO_CONTAINERS = new Container[0];
24
25 /** the headers */
26 private static final List<String> HEADERS;
27
28 static {
29 HEADERS = new ArrayList<String>();
30 HEADERS.addAll(IHEADERS);
31 HEADERS.add("start");//$NON-NLS-1$
32 }
33
34 /** the start location */
35 public final Location start;
36
37 /**
38 * Create a new container
39 *
40 * @param pid
41 * the id
42 * @param lstart
43 * the start location
44 */
45 public Container(final int pid, final Location lstart) {
46 super(pid, CONTAINERS);
47 this.start = lstart;
48 }
49
50 /**
51 * Create a new container
52 *
53 * @param data
54 * the data
55 */
56 public Container(final String[] data) {
57 super(data, CONTAINERS);
58 this.start = Location.LOCATIONS.find(InputObject.extractId(data[1]));
59 }
60
61 /**
62 * {@inheritDoc}
63 */
64 @Override
65 final char getIDPrefixChar() {
66 return 'C';
67 }
68
69 /**
70 * Get the column headers for a csv data representation. Child classes
71 * may add additional fields.
72 *
73 * @return the column headers for representing the data of this object in
74 * csv format
75 */
76 @Override
77 public List<String> getCSVColumnHeader() {
78 return HEADERS;
79 }
80
81 /**
82 * Fill in the data of this object into a string array which can be used
83 * to serialize the object in a nice way
84 *
85 * @param output
86 * the output
87 */
88 @Override
89 public void fillInCSVData(final String[] output) {
90 super.fillInCSVData(output);
91 output[1] = this.start.getIDString();
92 }
93 }
Listing 57.28: An instance of this class holds the information of one driver.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * A driver is a person that can drive a truck.
11 *
12 * @author Thomas Weise
13 */
14 public class Driver extends InputObject {
15 /** a constant required by Java serialization */
16 private static final long serialVersionUID = 1;
17
18 /** the drivers */
19 public static final InputObjectList<Driver> DRIVERS = new InputObjectList<Driver>(
20 Driver.class);
21
22 /** no drivers */
23 public static final Driver[] NO_DRIVERS = new Driver[0];
24
25 /** the headers */
26 private static final List<String> HEADERS;
27
28 static {
29 HEADERS = new ArrayList<String>();
30 HEADERS.addAll(IHEADERS);
31 HEADERS.add("home");//$NON-NLS-1$
32 HEADERS.add("costs_per_day");//$NON-NLS-1$
33 }
34
35 /** the maximum driving time, in milliseconds (8 hours) */
36 public static int MAX_DRIVING_TIME = 8 * 60 * 60 * 1000;
37
38 /** the minimum break time, in milliseconds (6 hours) */
39 public static int MIN_BREAK_TIME = 6 * 60 * 60 * 1000;
40
41 /** the home location */
42 public final Location home;
43
44 /** the costs for any day during which the driver works */
45 public final int costsPerDay;
46
47 /**
48 * Create a new driver
49 *
50 * @param pid
51 * the id
52 * @param lhome
53 * the home location
54 * @param costs
55 * the costs per day
56 */
57 public Driver(final int pid, final Location lhome, final int costs) {
58 super(pid, DRIVERS);
59 this.home = lhome;
60 this.costsPerDay = costs;
61 }
62
63 /**
64 * Create a new driver
65 *
66 * @param data
67 * the data
68 */
69 public Driver(final String[] data) {
70 super(data, DRIVERS);
71 this.home = Location.LOCATIONS.find(InputObject.extractId(data[1]));
72 this.costsPerDay = Integer.parseInt(data[2]);
73 }
74
75 /**
76 * {@inheritDoc}
77 */
78 @Override
79 final char getIDPrefixChar() {
80 return 'D';
81 }
82
83 /**
84 * Get the column headers for a csv data representation. Child classes
85 * may add additional fields.
86 *
87 * @return the column headers for representing the data of this object in
88 * csv format
89 */
90 @Override
91 public List<String> getCSVColumnHeader() {
92 return HEADERS;
93 }
94
95 /**
96 * Fill in the data of this object into a string array which can be used
97 * to serialize the object in a nice way
98 *
99 * @param output
100 * the output
101 */
102 @Override
103 public void fillInCSVData(final String[] output) {
104 super.fillInCSVData(output);
105 output[1] = this.home.getIDString();
106 output[2] = String.valueOf(this.costsPerDay);
107 }
108 }
Listing 57.29: This class holds the complete information about the distances between locations.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package demos.org.goataa.vrpBenchmark.objects;
10
11 import java.io.BufferedReader;
12 import java.io.IOException;
13 import java.io.Writer;
14
15 import org.goataa.impl.utils.TextUtils;
16
17 /**
18 * A distance matrix calculates the distance between two locations
19 *
20 * @author Thomas Weise
21 */
22 public class DistanceMatrix extends TextSerializable {
23 /** a constant required by Java serialization */
24 private static final long serialVersionUID = 1;
25
26 /** the globally shared distance matrix instance */
27 public static final DistanceMatrix MATRIX = new DistanceMatrix();
28
29 /** the matrix */
30 private int[][] matrix;
31
32 /**
33 * Create a new distance matrix
34 */
35 public DistanceMatrix() {
36 super();
37 }
38
39 /**
40 * Initialize to a given matrix
41 *
42 * @param m
43 * the matrix
44 */
45 public void init(final int[][] m) {
46 int[] drow;
47 int i, j;
48
49 this.matrix = m;
50
51 if (m != null) {
52 i = m.length;
53
54 for (; (--i) >= 0;) {
55 drow = m[i];
56
57 for (j = drow.length; (--j) >= 0;) {
58 if (i != j) {
59 drow[j] = Math.max(50, (50 * ((drow[j] + 49) / 50)));
60 } else {
61 drow[j] = 0;
62 }
63 }
64 }
65 }
66 }
67
68 /**
69 * Obtain the distance between two locations
70 *
71 * @param l1
72 * the first location
73 * @param l2
74 * the second location
75 * @return the distance in meters
76 */
77 public final int getDistance(final Location l1, final Location l2) {
78 if (l1 == l2) {
79 return 0;
80 }
81 return this.matrix[l1.id][l2.id];
82 }
83
84 /**
85 * Get the time needed by a given truck for getting from l1 to l2. We
86 * assume that a truck drives, on average, at 66% of its top speed.
87 *
88 * @param l1
89 * the first location
90 * @param l2
91 * the second location
92 * @param t
93 * the truck
94 * @return the time needed, in milliseconds
95 */
96 public final int getTimeNeeded(final Location l1, final Location l2,
97 final Truck t) {
98
99 if (l1 == l2) {
100 return 0;
101 }
102
103 return (int) (0.5d + ((3600.0d * this.matrix[l1.id][l2.id]) / t.avgSpeedKmH));
104 }
105
106 /**
107 * Serialize to a writer
108 *
109 * @param w
110 * the writer
111 * @throws IOException
112 * if anything goes wrong
113 */
114 @Override
115 public void serialize(final Writer w) throws IOException {
116 int[][] ma;
117 int[] l;
118 int i, j;
119
120 ma = this.matrix;
121 for (i = 0; i < ma.length; i++) {
122 if (i > 0) {
123 w.write(TextUtils.NEWLINE);
124 }
125 l = ma[i];
126 for (j = 0; j < l.length; j++) {
127 if (j > 0) {
128 w.write('\t');
129 }
130 w.write(String.valueOf(l[j]));
131 }
132 }
133 }
134
135 /**
136 * Deserialize from a buffered reader
137 *
138 * @param r
139 * reader
140 * @throws IOException
141 * if something goes wrong
142 */
143 @Override
144 public void deserialize(final BufferedReader r) throws IOException {
145 String[] s;
146 String l;
147 int[][] ma;
148 int[] q;
149 int v, i, j;
150
151 l = r.readLine();
152 s = TransportationObjectList.SPLIT.split(l);
153
154 v = s.length;
155 this.matrix = ma = new int[v][v];
156 for (i = 0; i < v; i++) {
157 q = ma[i];
158 for (j = 0; j < v; j++) {
159 q[j] = Integer.parseInt(s[j]);
160 }
161 if (i < (v - 1)) {
162 s = TransportationObjectList.SPLIT.split(r.readLine());
163 }
164 }
165
166 }
167 }
The following listings contain the objective value computation utility classes from Section 50.4.4 and the candidate solution element according to Section 50.4.3.
Listing 57.31: An object which can be used to compute the features of a candidate solution
of the benchmark problem.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.optimization;
5
6 import java.io.Serializable;
7 import java.util.Arrays;
8 import java.util.Comparator;
9
10 import demos.org.goataa.vrpBenchmark.objects.Container;
11 import demos.org.goataa.vrpBenchmark.objects.DistanceMatrix;
12 import demos.org.goataa.vrpBenchmark.objects.Driver;
13 import demos.org.goataa.vrpBenchmark.objects.Location;
14 import demos.org.goataa.vrpBenchmark.objects.Order;
15 import demos.org.goataa.vrpBenchmark.objects.TransportationObjectList;
16 import demos.org.goataa.vrpBenchmark.objects.Truck;
17
18 /**
19 * This class allows us to compute the state of the world and, hence, the
20 * objective values. The initial state of the world is t=0 and all objects
21 * are at their starting locations. Then, we can make moves. Every move
22 * consists of a truck driving from one location to another. By doing so,
23 * it may carry objects (containers, orders) and is driven by at least one
24 * driver (but may carry more). After the move, the involved objects are at
25 * their new locations.
26 *
27 * @author Thomas Weise
28 */
29 public class Compute extends PerformanceValues {
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** the internal comparator */
34 private static final Comparator<Move> MOVE_COMPARATOR = new CMP();
35
36 /** the drivers */
37 private final DriverTrack[] drivers;
38
39 /** the trucks */
40 private final TruckTrack[] trucks;
41
42 /** the container */
43 private final Track[] containers;
44
45 /** the order */
46 private final Track[] orders;
47
48 /** the temporary move list */
49 private transient Move[] temp;
50
51 /** Create a new compute instance */
52 public Compute() {
53 super();
54
55 Track[] t;
56 DriverTrack[] x;
57 TruckTrack[] y;
58 int i;
59
60 i = Truck.TRUCKS.size();
61 this.trucks = y = new TruckTrack[i];
62 for (; (--i) >= 0;) {
63 y[i] = new TruckTrack();
64 }
65
66 i = Driver.DRIVERS.size();
67 this.drivers = x = new DriverTrack[i];
68 for (; (--i) >= 0;) {
69 x[i] = new DriverTrack();
70 }
71
72 i = Container.CONTAINERS.size();
73 this.containers = t = new Track[i];
74 for (; (--i) >= 0;) {
75 t[i] = new Track();
76 }
77
78 i = Order.ORDERS.size();
79 this.orders = t = new Track[i];
80 for (; (--i) >= 0;) {
81 t[i] = new Track();
82 }
83
84 this.temp = null;
85 }
86
87 /**
88 * Initialize the computer: reset all variables and move all objects to
89 * their default locations
90 */
91 @Override
92 public final void init() {
93 Track[] ts;
94 DriverTrack[] dts;
95 TruckTrack[] tts;
96 TruckTrack q;
97 Track t;
98 DriverTrack dt;
99 Truck a;
100 Driver b;
101 Container c;
102 Order o;
103 int i;
104
105 super.init();
106
107 tts = this.trucks;
108 for (i = tts.length; (--i) >= 0;) {
109 q = tts[i];
110 a = Truck.TRUCKS.get(i);
111 q.location = a.start;
112 q.time = 0l;
113 q.orders = null;
114 q.lastTime = -1l;
115 }
116
117 dts = this.drivers;
118 for (i = dts.length; (--i) >= 0;) {
119 dt = dts[i];
120 b = Driver.DRIVERS.get(i);
121 dt.location = b.home;
122 dt.time = 0l;
123 dt.lastDay = -1;
124 }
125
126 ts = this.containers;
127 for (i = ts.length; (--i) >= 0;) {
128 t = ts[i];
129 c = Container.CONTAINERS.get(i);
130 t.location = c.start;
131 t.time = 0l;
132 }
133
134 ts = this.orders;
135 for (i = ts.length; (--i) >= 0;) {
136 t = ts[i];
137 o = Order.ORDERS.get(i);
138 t.location = o.pickupLocation;
139 t.time = o.pickupWindowStart;
140 }
141
142 }
143
144 /**
145 * Make a move: Truck t drives from the location "from" to the location
146 * "to" at the given start time. It carries the drivers "d", the orders
147 * "o", and the containers "c".
148 *
149 * @param t
150 * the truck
151 * @param o
152 * the orders
153 * @param d
154 * the drivers
155 * @param c
156 * the container
157 * @param from
158 * the starting location
159 * @param startTime
160 * the start time
161 * @param to
162 * the destination location
163 */
164 public final void move(final Truck t, final Order[] o, final Driver[] d,
165 final Container[] c, final Location from, long startTime,
166 final Location to) {
167 final Track[] lcontainers, lorders;
168 final DriverTrack[] ldrivers;
169 Track track;
170 final TruckTrack truck;
171 Container cc;
172 Location startLoc;
173 long start, end, m, timeNeeded, lastTime;
174 int totalContainers, totalOrders, totalDrivers;
175 int i, ed, sd, td;
176 Order oo;
177 Driver dd;
178 DriverTrack dt;
179 Order[] o2;
180
181 this.clearNew();
182
183 ldrivers = this.drivers;
184 lcontainers = this.containers;
185 lorders = this.orders;
186
187 // move the truck
188 startLoc = from;
189 start = startTime;
190
191 if (t != null) {
192 truck = this.trucks[t.id];
193 lastTime = truck.lastTime;
194
195 // is the truck at the right location?
196 startLoc = truck.location;
197 if (startLoc != from) {
198 this.addTruckError(1);
199 }
200
201 // is it there at the right time
202 if (truck.time > start) {
203 this.addTruckError(1);
204 start = truck.time;
205 }
206
207 // move the truck, if the start location is different from the end
208 if (startLoc != to) {
209 // compute the time to travel
210 m = DistanceMatrix.MATRIX.getDistance(startLoc, to);
211 timeNeeded = DistanceMatrix.MATRIX.getTimeNeeded(startLoc, to, t);
212
213 this.addTruckCosts(((long) (0.5d + ((t.costsPerHKm
214 * timeNeeded * m) / 3600000000.0d))));
215
216 end = (start + timeNeeded);
217 truck.location = to;
218 } else {
219 end = (start + 1l);
220 this.addTruckCosts(1000l);
221 }
222
223 truck.time = end;
224 } else {
225 // no truck? -> error
226 this.addTruckError(1);
227 end = start + 10000000l;
228 this.addTruckCosts(1000l);
229 truck = null;
230 lastTime = -1l;
231 }
232
233 // move all containers
234 totalContainers = 0;
235 if (c != null) {
236 for (i = c.length; (--i) >= 0;) {
237 cc = c[i];
238
239 // aha, a container
240 if (cc != null) {
241 totalContainers++;
242 track = lcontainers[cc.id];
243
244 // check the current location of the container
245 if (track.location != startLoc) {
246 this.addContainerError(1);
247 }
248
249 // check the time at which the container is there
250 if (track.time > start) {
251 this.addContainerError(1);
252 }
253
254 // move the container
255 track.location = to;
256 track.time = end;
257 }
258 }
259
260 // are there too many containers?
261 i = (totalContainers - ((t == null) ? 0 : t.maxContainers));
262 if (i > 0) {
263 this.addContainerError(i);
264 }
265 }
266
267 // move all orders
268 totalOrders = 0;
269 if (o != null) {
270 for (i = o.length; (--i) >= 0;) {
271 oo = o[i];
272
273 // aha, an order
274 if (oo != null) {
275 totalOrders++;
276 track = lorders[oo.id];
277
278 // check the current location of the order
279 if (track.location != startLoc) {
280 this.addOrderError(1);
281 }
282
283 // is the order at the starting location?
284 if ((track.location == oo.pickupLocation)
285 && (track.time <= oo.pickupWindowStart)) {
286
287 if (lastTime >= 0l) {
288 m = windowIntersection(oo.pickupWindowStart,
289 oo.pickupWindowEnd, lastTime, start);
290 } else {
291 m = start;
292 }
293
294 m = Compute.windowViolation(oo.pickupWindowStart,
295 oo.pickupWindowEnd, m);
296
297 if (m > 0l) {
298 this.addPickupErrorTime(m);
299 this.addOrderError(1);
300 }
301 } else {
302 // check the time at which the order is there
303 if (track.time > start) {
304 this.addOrderError(1);
305 }
306 }
307
308 // move the order
309 track.location = to;
310 track.time = end;
311 }
312 }
313 }
314
315 // now let us check if delivery took place in the time windows by
316 // possibly correcting drop-off times. It could be that a truck arrives
317 // before the delivery time window but leaves after it
318 if (truck != null) {
319
320 if (lastTime >= 0l) {
321 o2 = truck.orders;
322 if (o2 != null) {
323 // for each order
          for (i = o2.length; (--i) >= 0;) {
            oo = o2[i];
            if (oo != null) {
              // get the tracking object
              track = lorders[oo.id];
              // is the order still at the place where this truck dropped it
              // off and is the order at the delivery location?
              if ((track.time == lastTime) && (track.location == startLoc)
                  && (track.location == oo.deliveryLocation)) {
                track.time = windowIntersection(oo.deliveryWindowStart,
                    oo.deliveryWindowEnd, lastTime, start);
              }
            }
          }
        }
      }

      truck.orders = o;
      truck.lastTime = end;
    }

    // are there more orders than containers?
    if (totalOrders > totalContainers) {
      this.addContainerError((totalOrders - totalContainers));
    }

    // move all drivers
    totalDrivers = 0;
    if (d != null) {
      // start and end day indices: one day = 86 400 000 ms
      sd = ((int) ((start / 86400000)));
      ed = ((int) ((end / 86400000)));
      td = (ed - sd + 1);

      for (i = d.length; (--i) >= 0;) {
        dd = d[i];

        // aha, a driver
        if (dd != null) {
          totalDrivers++;
          dt = ldrivers[dd.id];

          // check the current location of the driver
          if (dt.location != startLoc) {
            this.addDriverError(1);
          }

          // check the time at which the driver is there
          if (dt.time > start) {
            this.addDriverError(1);
          }

          // compute costs of that driver: each day, he receives a salary
          if (ed > dt.lastDay) {
            dt.lastDay = ed;
            this.addDriverCosts((td * dd.costsPerDay));
          }

          // move the driver
          dt.location = to;
          dt.time = end;
        }
      }
    }

    // no driver? that's an error
    if (totalDrivers <= 0) {
      this.addDriverError(1);
    }

    this.commitNew();
  }

  /**
   * Compute the time window violation.
   *
   * @param ws
   *          the window start
   * @param we
   *          the window end
   * @param t
   *          the time
   * @return the window violation
   */
  private static final long windowViolation(final long ws, final long we,
      final long t) {
    if (t < ws) {
      return (ws - t);
    }
    if (t > we) {
      return (t - we);
    }
    return 0L;
  }
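For illustration, the helper above can be exercised in isolation. The sketch below is hypothetical (the class name `WindowViolationDemo` and the standalone setting are invented; only the method body is transcribed from the listing) and shows that the return value is the distance by which a time t misses the window [ws, we]:

```java
public final class WindowViolationDemo {

  /** Distance by which time t lies outside the window [ws, we]; 0 if inside. */
  static long windowViolation(final long ws, final long we, final long t) {
    if (t < ws) {
      return (ws - t); // too early: time remaining until the window opens
    }
    if (t > we) {
      return (t - we); // too late: time elapsed since the window closed
    }
    return 0L; // t lies inside the window
  }

  public static void main(final String[] args) {
    System.out.println(windowViolation(100L, 200L, 70L)); // 30 (too early)
    System.out.println(windowViolation(100L, 200L, 250L)); // 50 (too late)
    System.out.println(windowViolation(100L, 200L, 150L)); // 0 (on time)
  }
}
```

In the simulation, a positive return value is both counted as an order error and accumulated as pickup or delivery error time.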

  /**
   * Get the best window intersection time.
   *
   * @param w1s
   *          the start of the first window
   * @param w1e
   *          the end of the first window
   * @param w2s
   *          the start of the second window
   * @param w2e
   *          the end of the second window
   * @return the earliest/best time within the second window which creates
   *         the smallest violation of the first window
   */
  private static final long windowIntersection(final long w1s,
      final long w1e, final long w2s, final long w2e) {

    if ((w2e >= w1s) && (w2s <= w1e)) {
      return Math.max(w1s, w2s);
    }

    if (w2e < w1s) {
      return w2e;
    }

    if (w2s > w1e) {
      return w2s;
    }

    throw new RuntimeException();
  }
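The three cases of this method can be made concrete with a small driver. Again, this is a hypothetical standalone transcription (the class name is invented, the method body is taken from the listing):

```java
public final class WindowIntersectionDemo {

  /** Earliest time in [w2s, w2e] with the smallest violation of [w1s, w1e]. */
  static long windowIntersection(final long w1s, final long w1e,
      final long w2s, final long w2e) {
    if ((w2e >= w1s) && (w2s <= w1e)) {
      return Math.max(w1s, w2s); // windows overlap: earliest feasible time
    }
    if (w2e < w1s) {
      return w2e; // second window ends too early: its end is closest
    }
    if (w2s > w1e) {
      return w2s; // second window starts too late: its start is closest
    }
    throw new RuntimeException(); // unreachable: one of the cases must hold
  }

  public static void main(final String[] args) {
    // overlap: [100, 200] vs. [150, 300] -> 150
    System.out.println(windowIntersection(100L, 200L, 150L, 300L));
    // second window entirely before the first: [10, 50] -> 50
    System.out.println(windowIntersection(100L, 200L, 10L, 50L));
    // second window entirely after the first: [250, 300] -> 250
    System.out.println(windowIntersection(100L, 200L, 250L, 300L));
  }
}
```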

  /**
   * Finalize the computation, i.e., check if everything is placed at
   * positions to which it belongs.
   */
  public final void finish() {
    final Track[] ltrucks, lcontainers, lorders;
    final DriverTrack[] ldrivers;
    Track track;
    int i;
    Order oo;
    long y;

    this.clearNew();

    // check the trucks
    ltrucks = this.trucks;
    for (i = ltrucks.length; (--i) >= 0;) {
      track = ltrucks[i];
      // all trucks need to be at a depot
      if (!(track.location.isDepot)) {
        this.addTruckError(1);
      }
    }

    // check the drivers
    ldrivers = this.drivers;
    for (i = ldrivers.length; (--i) >= 0;) {
      track = ldrivers[i];
      // all drivers need to be home at the end of the jobs
      if (track.location != Driver.DRIVERS.get(i).home) {
        this.addDriverError(1);
      }
    }

    // check the containers
    lcontainers = this.containers;
    for (i = lcontainers.length; (--i) >= 0;) {
      track = lcontainers[i];
      // all containers need to be at a depot
      if (!(track.location.isDepot)) {
        this.addContainerError(1);
      }
    }

    // check the orders
    lorders = this.orders;
    for (i = lorders.length; (--i) >= 0;) {
      track = lorders[i];
      oo = Order.ORDERS.get(i);
      // all orders need to be at their destination
      if (track.location != oo.deliveryLocation) {
        this.addOrderError(1);
      }
      y = windowViolation(oo.deliveryWindowStart, oo.deliveryWindowEnd,
          track.time);
      if (y > 0) {
        this.addOrderError(1);
        this.addDeliveryErrorTime(y);
      }
    }

    this.commitNew();
  }

  /**
   * Make a move.
   *
   * @param m
   *          the move
   */
  public final void move(final Move m) {
    if (m != null) {
      this.move(m.truck, m.orders, m.drivers, m.containers, m.from,
          m.startTime, m.to);
    }
  }

  /**
   * Do some moves internally.
   *
   * @param moves
   *          the moves
   * @param len
   *          the length
   */
  private final void move(final Move[] moves, final int len) {
    int i;

    Arrays.sort(moves, 0, len, MOVE_COMPARATOR);
    for (i = len; (--i) >= 0;) {
      this.move(moves[i]);
    }
  }
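Note the interplay between MOVE_COMPARATOR and the backward loop above: the comparator (class CMP further below) sorts the moves in descending order of start time, and iterating from the end of the sorted range then processes them chronologically. A minimal sketch of this pattern, with a hypothetical, heavily simplified stand-in for the Move records:

```java
import java.util.Arrays;
import java.util.Comparator;

public final class MoveOrderDemo {

  /** A simplified, invented stand-in for the Move records of the listing. */
  static final class Move {
    final long startTime;

    Move(final long startTime) {
      this.startTime = startTime;
    }
  }

  /** Same ordering as CMP: moves with later start times sort first. */
  static final Comparator<Move> MOVE_COMPARATOR = (a, b) -> {
    if (a.startTime < b.startTime) {
      return 1;
    }
    if (b.startTime < a.startTime) {
      return -1;
    }
    return 0;
  };

  public static void main(final String[] args) {
    final Move[] moves = { new Move(30L), new Move(10L), new Move(20L) };
    // descending sort: 30, 20, 10
    Arrays.sort(moves, 0, moves.length, MOVE_COMPARATOR);
    // iterate backwards as in move(Move[], int): chronological order
    for (int i = moves.length; (--i) >= 0;) {
      System.out.println(moves[i].startTime); // prints 10, then 20, then 30
    }
  }
}
```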

  /**
   * Get the internal move array.
   *
   * @param len
   *          the length
   * @return the move array
   */
  private final Move[] getMoves(final int len) {
    Move[] m;

    m = this.temp;
    if ((m == null) || (m.length < len)) {
      return (this.temp = new Move[len]);
    }
    return m;
  }

  /**
   * Make the given moves.
   *
   * @param ms
   *          the move list
   */
  public final void move(final TransportationObjectList<Move> ms) {
    int i, s;
    Move[] m;

    if (ms != null) {
      s = ms.size();
      if (s > 0) {
        m = this.getMoves(s);
        for (i = s; (--i) >= 0;) {
          m[i] = ms.get(i);
        }
        this.move(m, s);
      }
    }
  }

  /**
   * Make the given moves.
   *
   * @param ms
   *          the moves
   */
  public final void move(final Move[] ms) {
    Move[] m;
    int i;

    if (ms != null) {
      i = ms.length;
      if (i > 0) {
        m = this.getMoves(i);
        System.arraycopy(ms, 0, m, 0, i);
        this.move(m, i);
      }
    }
  }

  /**
   * Get the location where the given container was placed last.
   *
   * @param c
   *          the container
   * @return the location where the given container was placed last
   */
  public final Location getLocation(final Container c) {
    return this.containers[c.id].location;
  }

  /**
   * Get the location where the given truck was placed last.
   *
   * @param c
   *          the truck
   * @return the location where the given truck was placed last
   */
  public final Location getLocation(final Truck c) {
    return this.trucks[c.id].location;
  }

  /**
   * Get the location where the given driver was placed last.
   *
   * @param c
   *          the driver
   * @return the location where the given driver was placed last
   */
  public final Location getLocation(final Driver c) {
    return this.drivers[c.id].location;
  }

  /**
   * Get the location where the given order was placed last.
   *
   * @param c
   *          the order
   * @return the location where the given order was placed last
   */
  public final Location getLocation(final Order c) {
    return this.orders[c.id].location;
  }

  /**
   * A class used for tracking.
   *
   * @author Thomas Weise
   */
  private static class Track implements Serializable {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** the location */
    Location location;

    /** the time */
    long time;

    /** the track */
    Track() {
      super();
    }

  }

  /**
   * A class used for driver tracking.
   *
   * @author Thomas Weise
   */
  private static class DriverTrack extends Track {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** the days during which a driver was driving */
    int lastDay;

    /** the track */
    DriverTrack() {
      super();
    }

  }

  /**
   * A class used for truck tracking.
   *
   * @author Thomas Weise
   */
  private static class TruckTrack extends Track {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** the last time the truck arrived somewhere */
    long lastTime;

    /** orders which were dropped off */
    Order[] orders;

    /** the track */
    TruckTrack() {
      super();
    }

  }

  /**
   * The move comparator for sorting.
   *
   * @author Thomas Weise
   */
  private static final class CMP implements Comparator<Move> {

    /** instantiate the comparator */
    CMP() {
      super();
    }

    /**
     * Compare two moves according to their start time: moves with later
     * start times sort first, so that backward iteration over the sorted
     * array processes the moves in chronological order.
     *
     * @param a
     *          the first move
     * @param b
     *          the second move
     * @return the comparison result
     */
    public final int compare(final Move a, final Move b) {
      if (a.startTime < b.startTime) {
        return 1;
      }
      if (b.startTime < a.startTime) {
        return -1;
      }
      return 0;
    }
  }
}
Appendices
Appendix A
Citation Suggestion
@book{W2011GOEB,
  author       = {Thomas Weise},
  title        = {Global Optimization Algorithms -- Theory and Application},
  year         = {2011},
  month        = dec # {~7,},
  publisher    = {www.it-weise.de},
  howpublished = {Online as E-Book},
  edition      = {third},
  copyright    = {Copyright {\copyright} 2006-2011 Thomas Weise, licensed under GNU FDL},
  keywords     = {Global Optimization, Evolutionary Computation, Evolutionary Algorithms,
                  Genetic Algorithms, Genetic Programming, Evolution Strategy,
                  Learning Classifier Systems, Extremal Optimization, Raindrop Method,
                  Ant Colony Optimization, Particle Swarm Optimization, Downhill Simplex,
                  Simulated Annealing, Hill Climbing, Combinatorial Optimization,
                  Numerical Optimization, Java},
  language     = {en},
  url          = {http://www.it-weise.de/projects/book.pdf},
}
%0 Book
%A Weise, Thomas
%T Global Optimization Algorithms – Theory and Application
%F Thomas Weise: “Global Optimization Algorithms -- Theory and Application”
%I www.it-weise.de
%D 2011
%8 december 7
%7 third
%9 Online available as E-Book at http://www.it-weise.de/projects/book.pdf
Appendix B
GNU Free Documentation License (FDL)
Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other functional and useful
document “free” in the sense of freedom: to assure everyone the effective freedom to copy
and redistribute it, with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way to get credit for their
work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document
must themselves be free in the same sense. It complements the GNU General Public License,
which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free
software needs free documentation: a free program should come with manuals providing the
same freedoms that the software does. But this License is not limited to software manuals; it
can be used for any textual work, regardless of subject matter or whether it is published as a
printed book. We recommend this License principally for works whose purpose is instruction
or reference.
B.1 Applicability and Definitions
This License applies to any manual or other work, in any medium, that contains a notice
placed by the copyright holder saying it can be distributed under the terms of this License.
Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein. The “Document”, below, refers to any such
manual or work. Any member of the public is a licensee, and is addressed as “you”. You
accept the license if you copy, modify or distribute the work in a way requiring permission
under copyright law.
A “Modified Version” of the Document means any work containing the Document or
a portion of it, either copied verbatim, or with modifications and/or translated into another
language.
A “Secondary Section” is a named appendix or a front-matter section of the Document
that deals exclusively with the relationship of the publishers or authors of the Document
to the Document’s overall subject (or to related matters) and contains nothing that could
fall directly within that overall subject. (Thus, if the Document is in part a textbook of
mathematics, a Secondary Section may not explain any mathematics.) The relationship
could be a matter of historical connection with the subject or with related matters, or of
legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated,
as being those of Invariant Sections, in the notice that says that the Document is released
under this License. If a section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant. The Document may contain zero Invariant Sections.
If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under this
License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at
most 25 words.
A “Transparent” copy of the Document means a machine readable copy, represented
in a format whose specification is available to the general public, that is suitable for revising
the document straightforwardly with generic text editors or (for images composed of pixels)
generic paint programs or (for drawings) some widely available drawing editor, and that is
suitable for input to text formatters or for automatic translation to a variety of formats
suitable for input to text formatters. A copy made in an otherwise Transparent file format
whose markup, or absence of markup, has been arranged to thwart or discourage subsequent
modification by readers is not Transparent. An image format is not Transparent if used for
any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without
markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly avail-
able DTD, and standard-conforming simple HTML, PostScript or PDF designed for human
modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque
formats include proprietary formats that can be read and edited only by proprietary word
processors, SGML or XML for which the DTD and/or processing tools are not generally
available, and the machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following
pages as are needed to hold, legibly, the material this License requires to appear in the title
page. For works in formats which do not have any title page as such, “Title Page” means
the text near the most prominent appearance of the work’s title, preceding the beginning of
the body of the text.
The “publisher” means any person or entity that distributes copies of the Document
to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title ei-
ther is precisely XYZ or contains XYZ in parentheses following text that translates XYZ
in another language. (Here XYZ stands for a specific section name mentioned below,
such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To
“Preserve the Title” of such a section when you modify the Document means that it
remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that
this License applies to the Document. These Warranty Disclaimers are considered to be
included by reference in this License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has no effect on the
meaning of this License.
B.4 Modifications
You may copy and distribute a Modified Version of the Document under the conditions
of sections 2 and 3 above, provided that you release the Modified Version under precisely
this License, with the Modified Version filling the role of the Document, thus licensing
distribution and modification of the Modified Version to whoever possesses a copy of it. In
addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the
Document, and from those of previous versions (which should, if there were any, be
listed in the History section of the Document). You may use the same title as a
previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for
authorship of the modifications in the Modified Version, together with at least five of
the principal authors of the Document (all of its principal authors, if it has fewer than
five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the
publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form
shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover
Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified Version as
given on the Title Page. If there is no section Entitled “History” in the Document,
create one stating the title, year, authors, and publisher of the Document as given
on its Title Page, then add an item describing the Modified Version as stated in the
previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a
Transparent copy of the Document, and likewise the network locations given in the
Document for previous versions it was based on. These may be placed in the “History”
section. You may omit a network location for a work that was published at least four
years before the Document itself, or if the original publisher of the version it refers to
gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title
of the section, and preserve in the section all the substance and tone of each of the
contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in
their titles. Section numbers or the equivalent are not considered part of the section
titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included in
the Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in
title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as
Secondary Sections and contain no material copied from the Document, you may at your
option designate some or all of these sections as invariant. To do this, add their titles to
the list of Invariant Sections in the Modified Version’s license notice. These titles must be
distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but en-
dorsements of your Modified Version by various parties – for example, statements of peer
review or that the text has been approved by an organization as the authoritative definition
of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up
to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified
Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added
by (or through arrangements made by) any one entity. If the Document already includes
a cover text for the same cover, previously added by you or by arrangement made by the
same entity you are acting on behalf of, you may not add another; but you may replace the
old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to
use their names for publicity for or to assert or imply endorsement of any Modified Version.
B.5 Combining Documents
You may combine the Document with other documents released under this License, under
the terms defined in Section B.4 above for modified versions, provided that you include in
the combination all of the Invariant Sections of all of the original documents, unmodified,
and list them all as Invariant Sections of your combined work in its license notice, and that
you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections
with the same name but different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original author or publisher of that
section if known, or else a unique number. Make the same adjustment to the section titles
in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various
original documents, forming one section Entitled “History”; likewise combine any sections
Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete
all sections Entitled “Endorsements”.
B.6 Collections of Documents
You may make a collection consisting of the Document and other documents released
under this License, and replace the individual copies of this License in the various documents
with a single copy that is included in the collection, provided that you follow the rules of
this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually
under this License, provided you insert a copy of this License into the extracted document,
and follow this License in all other respects regarding verbatim copying of that document.
B.7 Aggregation with Independent Works
A compilation of the Document or its derivatives with other separate and independent
documents or works, in or on a volume of a storage or distribution medium, is called an
“aggregate” if the copyright resulting from the compilation is not used to limit the legal rights
of the compilation’s users beyond what the individual works permit. When the Document
is included in an aggregate, this License does not apply to the other works in the aggregate
which are not themselves derivative works of the Document.
If the Cover Text requirement of Section B.3 is applicable to these copies of the Docu-
ment, then if the Document is less than one half of the entire aggregate, the Document’s
Cover Texts may be placed on covers that bracket the Document within the aggregate, or
the electronic equivalent of covers if the Document is in electronic form. Otherwise they
must appear on printed covers that bracket the whole aggregate.
B.8 Translation
Translation is considered a kind of modification, so you may distribute translations of the
Document under the terms of section 4. Replacing Invariant Sections with translations
requires special permission from their copyright holders, but you may include translations
of some or all Invariant Sections in addition to the original versions of these Invariant
Sections. You may include a translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also include the original
English version of this License and the original versions of those notices and disclaimers.
In case of a disagreement between the translation and the original version of this License
or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”,
the requirement (section 4) to Preserve its Title (section 1) will typically require changing
the actual title.
B.9 Termination
You may not copy, modify, sublicense, or distribute the Document except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute
it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular
copyright holder is reinstated (a) provisionally, unless and until the copyright holder explic-
itly and finally terminates your license, and (b) permanently, if the copyright holder fails to
notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if
the copyright holder notifies you of the violation by some reasonable means, this is the first
time you have received notice of violation of this License (for any work) from that copyright
holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties
who have received copies or rights from you under this License. If your rights have been
terminated and not permanently reinstated, receipt of a copy of some or all of the same
material does not give you any rights to use it.
B.10 Future Revisions of this License
The Free Software Foundation may publish new, revised versions of the GNU Free
Documentation License from time to time. Such new versions will be similar in spirit to the
present version, but may differ in detail to address new problems or concerns. See
http://www.gnu.org/copyleft/ [accessed 2009-06-07].
Each version of the License is given a distinguishing version number. If the Document
specifies that a particular numbered version of this License “or any later version” applies
to it, you have the option of following the terms and conditions either of that specified
version or of any later version that has been published (not as a draft) by the Free Software
Foundation. If the Document does not specify a version number of this License, you may
choose any version ever published (not as a draft) by the Free Software Foundation. If the
Document specifies that a proxy can decide which future versions of this License can be
used, that proxy’s public statement of acceptance of a version permanently authorizes you
to choose that version for the Document.
B.11 Relicensing
“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web
server that publishes copyrightable works and also provides prominent facilities for anybody
to edit those works. A public wiki that anybody can edit is an example of such a server. A
“Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of
copyrightable works thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published
by Creative Commons Corporation, a not-for-profit corporation with a principal place of
business in San Francisco, California, as well as future copyleft versions of that license
published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part, as part of
another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and if all works
that were first published under this License somewhere other than this MMC, and subse-
quently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant
sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-
BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible
for relicensing.
To use this License in a document you have written, include a copy of the License in the
document and put the following copyright and license notices just after the title page:
Copyright © YEAR YOUR NAME. Permission is granted to copy, distribute and/or
modify this document under the terms of the GNU Free Documentation License,
Version 1.3 or any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the
license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the
“with … Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover
Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the
three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing
these examples in parallel under your choice of free software license, such as the GNU General
Public License, to permit their use in free software.
Appendix C
GNU Lesser General Public License (LGPL)
Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.
This version of the GNU Lesser General Public License incorporates the terms and
conditions of version 3 of the GNU General Public License, supplemented by the additional
permissions listed below.
C.1 Additional Definitions
As used herein, “this License” refers to version 3 of the GNU Lesser General Public
License, and the “GNU GPL” refers to version 3 of the GNU General Public License.
“The Library” refers to a covered work governed by this License, other than an Appli-
cation or a Combined Work as defined below.
An “Application” is any work that makes use of an interface provided by the Library,
but which is not otherwise based on the Library. Defining a subclass of a class defined by
the Library is deemed a mode of using an interface provided by the Library.
A “Combined Work” is a work produced by combining or linking an Application with
the Library. The particular version of the Library with which the Combined Work was made
is also called the “Linked Version”.
The “Minimal Corresponding Source” for a Combined Work means the Corresponding
Source for the Combined Work, excluding any source code for portions of the Combined
Work that, considered in isolation, are based on the Application, and not on the Linked
Version.
The “Corresponding Application Code” for a Combined Work means the object code
and/or source code for the Application, including any data and utility programs needed for
reproducing the Combined Work from the Application, but excluding the System Libraries
of the Combined Work.
C.2 Exception to Section 3 of the GNU GPL
You may convey a covered work under Section C.4 and Section C.5 of this License without
being bound by section 3 of the GNU GPL.
C.3 Conveying Modified Versions
If you modify a copy of the Library, and, in your modifications, a facility refers to a
function or data to be supplied by an Application that uses the facility (other than as an
argument passed when the facility is invoked), then you may convey a copy of the modified
version:
1. under this License, provided that you make a good faith effort to ensure that, in the
event an Application does not supply the function or data, the facility still operates,
and performs whatever part of its purpose remains meaningful, or
2. under the GNU GPL, with none of the additional permissions of this License applicable
to that copy.
C.4 Object Code Incorporating Material from Library Header Files
The object code form of an Application may incorporate material from a header file
that is part of the Library. You may convey such object code under terms of your choice,
provided that, if the incorporated material is not limited to numerical parameters, data
structure layouts and accessors, or small macros, inline functions and templates (ten or
fewer lines in length), you do both of the following:
1. Give prominent notice with each copy of the object code that the Library is used in it
and that the Library and its use are covered by this License.
2. Accompany the object code with a copy of the GNU GPL and this license document.
C.5 Combined Works
You may convey a Combined Work under terms of your choice that, taken together,
effectively do not restrict modification of the portions of the Library contained in the
Combined Work and reverse engineering for debugging such modifications, if you also do
each of the following:
1. Give prominent notice with each copy of the Combined Work that the Library is used
in it and that the Library and its use are covered by this License.
2. Accompany the Combined Work with a copy of the GNU GPL and this license docu-
ment.
3. For a Combined Work that displays copyright notices during execution, include the
copyright notice for the Library among these notices, as well as a reference directing
the user to the copies of the GNU GPL and this license document.
C.6 Combined Libraries
You may place library facilities that are a work based on the Library side by side in
a single library together with other library facilities that are not Applications and are not
covered by this License, and convey such a combined library under terms of your choice, if
you do both of the following:
1. Accompany the combined library with a copy of the same work based on the Library,
uncombined with any other library facilities, conveyed under the terms of this License.
2. Give prominent notice with the combined library that part of it is a work based on
the Library, and explaining where to find the accompanying uncombined form of the
same work.
The Free Software Foundation may publish revised and/or new versions of the GNU
Lesser General Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library as you received it
specifies that a certain numbered version of the GNU Lesser General Public License “or any
later version” applies to it, you have the option of following the terms and conditions either
of that published version or of any later version published by the Free Software Foundation.
If the Library as you received it does not specify a version number of the GNU Lesser General
Public License, you may choose any version of the GNU Lesser General Public License ever
published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide whether future versions
of the GNU Lesser General Public License shall apply, that proxy’s public statement of
acceptance of any version is permanent authorization for you to choose that version for the
Library.
Appendix D
Credits and Contributors
In this section I want to give credit to those who deserve it (by directly or indirectly contributing to this book). So it's props to:
Stefan Achler
For working together at the 2007 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Steffen Bleul
For working together at the 2006, 2007, and 2008 Web Service Challenge.
See ?? on page ?? and [334, 335, 2903, 2908].
Raymond Chiong
For co-authoring a book chapter corresponding to Part II and, by doing so, helping to correct
many mistakes.
See Part II on page 138.
Distributed Systems Group, University of Kassel
To the whole Distributed Systems Group at the University of Kassel for being supportive
and coming up again and again with new ideas and helpful advice.
See each and every research project described in this book.
Qiu Li
For careful reading and pointing out inconsistencies and spelling mistakes.
See Chapter 51.
Gan Min
For careful reading and pointing out inconsistencies and spelling mistakes.
See especially the Pareto optimization area Section 3.3.5 on page 65.
Kurt Geihs
For being supportive and contributing to many of my research projects.
See for instance ?? on page ?? and [334, 335, 2896, 2897, 2904].
Martin Göb
For working together at the 2007 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Ibrahim Ibrahim
For careful proofreading.
See Chapter 1 on page 21.
Antonio Nebro
For co-authoring a book chapter corresponding to Part II and, by doing so, helping to correct
many mistakes.
See Chapter 3 on page 51.
Stefan Niemczyk
For careful proofreading and scientific support.
See for instance Chapter 1 on page 21, Section 50.2.7 on page 566, and [2037, 2910].
Roland Reichle
For fruitful discussions and scientific support.
See for instance Section 53.6.2 on page 693, Section 50.2.7 on page 566, and [2910].
Laurent Rodriguez
For contributing corrections in some of the chapters.
See for instance Section 53.1 on page 653 and Section 51.11 on page 647.
Richard Seabrook
For pointing out mistakes in some of the set theory definitions.
See Chapter 51 on page 639.
Hendrik Skubch
For fruitful discussions and scientific support.
See for instance ?? on page ??, Section 50.2.7 on page 566, and [2910].
Christian Voigtmann
For working together at the 2007 and 2008 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Wibul Wongpoowarak
For valuable suggestions in different areas, such as the inclusion of random optimization.
See for instance ?? on page ?? and [2971], as well as Chapter 44 on page 505.
Michael Zapf
For helping me to make many sections more comprehensible.
See for instance ?? on page ??, ?? on page ??, Section 31.5 on page 409, and [2906?].
References
49. Marco Aiello, Nico van Benthem, and Elie el Khoury. Visualizing Compositions of Services
from Large Repositories. In CEC/EEE’08 [1346], pages 359–362, 2008. doi: 10.1109/CECandEEE.2008.149. INSPEC Accession Number: 10475111. See also [48, 50, 204, 761].
50. Marco Aiello, Elie el Khoury, Alexander Lazovik, and Patrick Ratelband. Opti-
mal QoS–Aware Web Service Composition. In CEC’09 [1245], pages 491–494, 2009.
doi: 10.1109/CEC.2009.63. INSPEC Accession Number: 10839135. See also [48, 49, 761, 1571].
51. Jan Aikins and Howard Shrobe, editors. Proceedings of the Seventh Conference on Innovative Applications of Artificial Intelligence (IAAI’95), August 21–25, 1995, Montréal,
QC, Canada. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-81-4. Partly
available at http://www.aaai.org/Conferences/IAAI/iaai95.php [accessed 2007-09-06].
OCLC: 33813525, 633974570, and 635878104. GBV-Identification (PPN): 212259687.
52. P. Aiyarak, A. S. Saket, and Mark C. Sinclair. Genetic Programming Approaches
for Minimum Cost Topology Optimisation of Optical Telecommunication Networks. In
GALESIA’97 [3056], pages 415–420, 1997. doi: 10.1049/cp:19971216. Fully available
at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/aiyarak_1997_GPtootn.
html [accessed 2009-09-01]. CiteSeerx: 10.1.1.55.6656. See also [2498].
53. M. Ali Akbar and Muddassar Farooq. Application of Evolutionary Algorithms in De-
tection of SIP based Flooding Attacks. In GECCO’09-I [2342], pages 1419–1426,
2009. doi: 10.1145/1569901.1570092. Fully available at http://nexginrc.org/papers/
gecco09-ali.pdf [accessed 2009-09-03].
54. Rama Akkiraju, Biplav Srivastava, Anca Ivan, Richard Goodwin, and Tanveer Syeda-
Mahmood. Semantic Matching to Achieve Web Service Discovery and Composition. In
CEC/EEE’06 [2967], pages 445–447, 2006. doi: 10.1109/CEC-EEE.2006.78. INSPEC Acces-
sion Number: 9199704. See also [318].
55. Selim G. Akl, Cristian S. Calude, Michael J. Dinneen, Grzegorz Rozenberg, and Todd Ware-
ham, editors. Proceedings of the 6th International Conference on Unconventional Compu-
tation (UC’07), August 13–17, 2007, Kingston, Canada, volume 4618/2007 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 978-3-540-73553-3.
56. Jarmo Tapani Alander. An Indexed Bibliography of Genetic Algorithms in Signal and Im-
age Processing. Technical Report 94-1-SIGNAL, University of Vaasa: Finland, May 18,
1998. Fully available at ftp://ftp.uwasa.fi/cs/report94-1/gaSIGNALbib.ps.Z
and http://citeseer.ist.psu.edu/319835.html [accessed 2008-05-18].
57. Jarmo Tapani Alander, editor. Proceedings of the 1st Finnish Workshop on Genetic Algo-
rithms and Their Applications (1FWGA), November 4–5, 1992, Helsinki University of Tech-
nology: Espoo, Finland. Helsinki University of Technology, Department of Computer Science:
Helsinki, Finland. Technical Report TKO-A30, partly in Finnish, published 1993.
58. Jarmo Tapani Alander, editor. Proceedings of the Second Finnish Workshop on Genetic Al-
gorithms and Their Applications (2FWGA), March 16–18, 1994, Vaasa, Finland. University
of Vaasa, Department of Information Technology and Production Economics: Vaasa, Fin-
land. isbn: 951-683-526-0. Fully available at ftp://ftp.uwasa.fi/cs/report94-2
[accessed 2008-07-24]. issn: 1237-7589. Technical Report 94-2.
59. Jarmo Tapani Alander, editor. Proceedings of the First Nordic Workshop on Genetic Algo-
rithms (The Third Finnish Workshop on Genetic Algorithms, 3FWGA) (1NWGA), June 9–
13, 1995, Vaasa, Finland, volume 2 in Proceedings of the University of Vaasa. University of Vaasa,
Department of Information Technology and Production Economics: Vaasa, Finland. Fully
available at ftp://ftp.uwasa.fi/cs/1NWGA [accessed 2008-07-24]. Technical Report 95-1.
60. Jarmo Tapani Alander, editor. Proceedings of the Second Nordic Workshop on Genetic Al-
gorithms (2NWGA), August 19–23, 1996, Vaasa, Finland, volume 13 in Proceedings of the
University of Vaasa. Fully available at ftp://ftp.uwasa.fi/cs/2NWGA [accessed 2008-07-24].
In collaboration with the Finnish Artificial Intelligence Society.
61. Jarmo Tapani Alander, editor. Proceedings of the Third Nordic Workshop on Genetic Al-
gorithms (3NWGA), August 20–22, 1997, Helsinki, Finland. Finnish Artificial Intelligence
Society (FAIS): Vantaa, Finland. Fully available at ftp://ftp.uwasa.fi/cs/3NWGA [ac-
cessed 2008-07-24].
62. Jarmo Tapani Alander, editor. Proceedings of the Fourth Nordic Workshop on Genetic Al-
gorithms (4NWGA), July 27–31, 1998, Norwegian University of Science and Technology:
Trondheim, Norway. Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. The workshop seemingly did not take place.
63. Enrique Alba Torres and J. Francisco Chicano. Evolutionary Algorithms in Telecommunica-
tions. In MELECON’06 [2388], pages 795–798, 2006. doi: 10.1109/MELCON.2006.1653218.
Fully available at http://dea.brunel.ac.uk/ugprojects/docs/melecon06.pdf [ac-
cessed 2008-08-01].
64. Enrique Alba Torres and Bernabé Dorronsoro Dı́az. Solving the Vehicle Routing Problem by
Using Cellular Genetic Algorithms. In EvoCOP’04 [1115], pages 11–20, 2004. Fully available
at http://www.lcc.uma.es/˜eat/pdf/evocop04.pdf [accessed 2008-10-27]. See also [65].
65. Enrique Alba Torres and Bernabé Dorronsoro Dı́az. Computing Nine New Best-So-Far So-
lutions for Capacitated VRP with a Cellular Genetic Algorithm. Information Processing
Letters, 98(6):225–230, June 30, 2006, Elsevier Science Publishers B.V.: Amsterdam, The
Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/j.ipl.2006.02.006. See also [64].
66. Enrique Alba Torres and Bernabé Dorronsoro Dı́az. Cellular Genetic Algorithms, volume 42
in Operations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin,
Germany, June 2008. isbn: 0387776095.
67. Enrique Alba Torres, Carlos Cotta, J. Francisco Chicano, and Antonio Jesús Nebro Ur-
baneja. Parallel Evolutionary Algorithms in Telecommunications: Two Case Studies. In
CACIC’02 [2747], 2002. Fully available at http://www.lcc.uma.es/˜eat/./pdf/
cacic02.pdf [accessed 2008-08-01]. CiteSeerx: 10.1.1.8.4073.
68. Enrique Alba Torres, José Garcı́a-Nieto, Javid Taheri, and Albert Zomaya. New Research
in Nature Inspired Algorithms for Mobility Management in GSM Networks. In EvoWork-
shops’08 [1051], pages 1–10, 2008. doi: 10.1007/978-3-540-78761-7_1.
69. Rudolf F. Albrecht, Colin R. Reeves, and Nigel C. Steele, editors. Proceedings of
Artificial Neural Nets and Genetic Algorithms (ANNGA’93), April 14–16, 1993, Inns-
bruck, Austria. Springer New York: New York, NY, USA. isbn: 0-387-82459-6 and
3-211-82459-6. Google Books ID: ZYkAQAAIAAJ. OCLC: 27936988, 300357118,
443980789, 611581837, and 633734458. GBV-Identification (PPN): 04374902X and
126104662.
70. John Aldrich. R.A. Fisher and the Making of Maximum Likelihood 1912-1922. Discussion Pa-
per Series In Economics And Econometrics, Statistical Science, 3(12):162–176, January 1995,
Institute of Mathematical Statistics, University of Southampton. doi: 10.1214/ss/1030037906.
Fully available at http://projecteuclid.org/euclid.ss/1030037906 [accessed 2007-09-
15]. Zentralblatt MATH identifier: 0955.62525. Mathematical Reviews number (Math-
SciNet): 1617519.
71. Christos E. Alexakos, Konstantinos C. Giotopoulos, Eleni J. Thermogianni, Grigorios N.
Beligiannis, and Spiridon D. Likothanassis. Integrating E-learning Environments with Com-
putational Intelligence Assessment Agents. Proceedings of World Academy of Science, En-
gineering and Technology, 13:233–239, May 2006. Fully available at http://www.waset.
org/pwaset/v13/v13-45.pdf [accessed 2009-03-31].
72. M. Montaz Ali, Charoenchai Khompatraporn, and Zelda B. Zabinsky. A Numerical Eval-
uation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test
Problems. Journal of Global Optimization, 31(4):635–672, April 2005, Springer Netherlands:
Dordrecht, Netherlands. doi: 10.1007/s10898-004-9972-2.
73. Eugene L. Allgower and Kurt Georg, editors. Computational Solution of Nonlinear Systems
of Equations – Proceedings of the 1988 SIAM-AMS Summer Seminar on Computational So-
lution of Nonlinear Systems of Equations, July 18–29, 1988, Colorado State University: Fort
Collins, CO, USA, volume 26 in Lectures in Applied Mathematics (LAM). American Math-
ematical Society (AMS) Bookstore: Providence, RI, USA. isbn: 0-8218-1131-2. Google
Books ID: UAabWwXBifoC. OCLC: 20992934, 243444170, 311687125, and 423088702.
Published in 1990.
74. Jean-Marc Alliot, Evelyne Lutton, and Marc Schoenauer, editors. European Conference on
Artificial Evolution (AE’94), September 20–23, 1994, ENAC: Toulouse, France. Cépaduès:
Toulouse, France. isbn: 2854284119. Google Books ID: AhBWAAAACAAJ. OCLC: 34264284
and 258151166. Published in January 1995.
75. Jean-Marc Alliot, Evelyne Lutton, Edmund M. A. Ronald, Marc Schoenauer, and Dominique
Snyers, editors. Selected Papers of the European Conference on Artificial Evolution (AE’95),
September 4–6, 1995, Brest, France, volume 1063 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-61108-8. Published in
1996.
76. Riyad Alshammari, Peter I. Lichodzijewski, Malcolm Ian Heywood, and A. Nur Zincir-
Heywood. Classifying SSH Encrypted Traffic with Minimum Packet Header Fea-
tures using Genetic Programming. In GECCO’09-II [2343], pages 2539–2546, 2009.
doi: 10.1145/1570256.1570358.
77. Lee Altenberg. The Schema Theorem and Price’s Theorem. In FOGA 3 [2932], pages 23–
49, 1994. Fully available at http://citeseer.ist.psu.edu/old/700230.html and
http://dynamics.org/Altenberg/FILES/LeeSTPT.pdf [accessed 2009-07-10].
78. Lee Altenberg. Genome Growth and the Evolution of the Genotype-Phenotype Map. In
Evolution and Biocomputation – Computational Models of Evolution [206]. Springer-Verlag
GmbH: Berlin, Germany, 1995. CiteSeerx: 10.1.1.39.3876. Corrections and revisions,
October 1997.
79. Lee Altenberg. NK Fitness Landscapes. In Handbook of Evolutionary Computation [171],
Chapter B2.7.2. Oxford University Press, Inc.: New York, NY, USA, Institute of Physics
Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press, Inc.:
Boca Raton, FL, USA, 1997. Fully available at http://dynamics.org/Altenberg/
FILES/LeeNKFL.pdf and http://www.cmi.univ-mrs.fr/˜pardoux/LeeNKFL.pdf
[accessed 2010-08-26]. CiteSeerx: 10.1.1.79.1351.
95. David Andre. Evolution of Mapmaking Ability: Strategies for the Evolution of Learning,
Planning, and Memory using Genetic Programming. In CEC’94 [1891], pages 250–255,
volume 1, 1994. See also [96].
96. David Andre. The Evolution of Agents that Build Mental Models and Create Simple Plans
Using Genetic Programming. In ICGA’95 [883], pages 248–255, 1995. See also [95].
97. David Andre. Learning and Upgrading Rules for an Optical Character Recognition System
Using Genetic Programming. In Handbook of Evolutionary Computation [171]. Oxford Uni-
versity Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac
House, Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997.
98. David Andre and Astro Teller. Evolving Team Darwin United. In RoboCup-98: Robot
Soccer World Cup II [129], pages 346–351, 1998. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/Andre_1999_ETD.html and http://www.
cs.cmu.edu/afs/cs/usr/astro/public/papers/Teller_Astro.ps [accessed 2010-08-21]. CiteSeerx: 10.1.1.36.2696.
99. David Andre, Forrest H. Bennett III, and John R. Koza. Discovery by Genetic Programming
of a Cellular Automata Rule that is Better than any Known Rule for the Majority Classifi-
cation Problem. pages 3–11. Fully available at http://citeseer.ist.psu.edu/33008.
html [accessed 2007-08-01]. See also [100].
100. David Andre, Forrest H. Bennett III, and John R. Koza. Evolution of Intricate Long-
Distance Communication Signals in Cellular Automata using Genetic Programming. In
Artificial Life V [1911], volume 1, 1996. Fully available at http://citeseer.ist.
psu.edu/andre96evolution.html and http://www.genetic-programming.com/
jkpdf/alife1996gkl.pdf [accessed 2007-10-03]. See also [99].
101. Peter John Angeline. Subtree Crossover: Building Block Engine or Macromutation? In
GP’97 [1611], pages 9–17, 1997. Fully available at http://ncra.ucd.ie/COMP30290/
SubtreeXoverBuildingBlockorMacromutation_angeline_gp97.ps [accessed 2010-08-
20].
102. Peter John Angeline. A Historical Perspective on the Evolution of Executable Struc-
tures. Fundamenta Informaticae – Annales Societatis Mathematicae Polonae, Series IV,
35(1-2):179–195, August 1998, IOS Press: Amsterdam, The Netherlands and European
Association for Theoretical Computer Science (EATCS): Rio, Greece. Fully available
at http://www.natural-selection.com/publications_1998.html [accessed 2009-07-
09]. CiteSeerx: 10.1.1.15.6779. Special volume: Evolutionary Computation. See also
[105].
103. Peter John Angeline. Evolutionary Optimization versus Particle Swarm Optimiza-
tion: Philosophy and Performance Differences. In EP’98 [2208], pages 601–610, 1998.
doi: 10.1007/BFb0040811.
104. Peter John Angeline. Subtree Crossover Causes Bloat. In GP’98 [1612], pages 745–752, 1998.
105. Peter John Angeline. A Historical Perspective on the Evolution of Executable Structures. In
Evolutionary Computation [863]. IOS Press: Amsterdam, The Netherlands, 1999. See also
[102].
106. Peter John Angeline and Kenneth E. Kinnear, Jr, editors. Advances in Genetic Programming
II, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, Octo-
ber 26, 1996. isbn: 0-262-01158-1. Google Books ID: c3G7QgAACAAJ. OCLC: 29595260
and 59664755. GBV-Identification (PPN): 282212574.
107. Peter John Angeline and Jordan B. Pollack. Coevolving High-Level Representations. In
Artificial Life III [1667], pages 55–71, 1992. Fully available at http://citeseer.
ist.psu.edu/angeline94coevolving.html and http://demo.cs.brandeis.edu/
papers/alife3.pdf [accessed 2007-10-05].
108. Peter John Angeline and Jordan B. Pollack. Evolutionary Module Acquisition. In
EP’93 [943], pages 154–163, 1993. Fully available at http://www.demo.cs.brandeis.
edu/papers/ep93.pdf and https://eprints.kfupm.edu.sa/38410/ [accessed 2009-07-11]. CiteSeerx: 10.1.1.50.938 and 10.1.1.57.3280.
109. Peter John Angeline, Robert G. Reynolds, John Robert McDonnell, and Russel C. Eber-
hart, editors. Proceedings of the 6th International Conference on Evolutionary Pro-
gramming (EP’97), April 13–16, 1997, Indianapolis, IN, USA, volume 1213/1997 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 354062788X. Google Books ID: 4cviSAAACAAJ. OCLC: 36629908, 247514632,
316851036, 441306808, 466121928, and 473591147.
110. Peter John Angeline, Zbigniew Michalewicz, Marc Schoenauer, Xin Yao, and Ali
M. S. Zalzala, editors. Proceedings of the IEEE Congress on Evolutionary Compu-
tation (CEC’99), July 6–9, 1999, Mayflower Hotel: Washington, DC, USA, volume
1-3. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-5536-9 and
0-7803-5537-7. Google Books ID: GJtVAAAAMAAJ, YZpVAAAAMAAJ, hM6nAAAACAAJ,
and r5hVAAAAMAAJ. OCLC: 42253982 and 64431060. Library of Congress Control Num-
ber (LCCN): 99-61143. INSPEC Accession Number: 6338817. Catalogue no.: 99TH8406.
CEC-99 – A joint meeting of the IEEE, Evolutionary Programming Society, Galesia, and the
IEE.
111. Shoshana Anily and Awi Federgruen. Ergodicity in Parametric Non-Stationary Markov
Chains: An Application to Simulated Annealing Methods. Operations Research, 35(6):867–
874, November–December 1987, Institute for Operations Research and the Management Sci-
ences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cam-
bridge, MA, USA. JSTOR Stable ID: 171435.
112. Pavlos Antoniou, Andreas Pitsilides, Tim Blackwell, and Andries P. Engelbrecht. Employing
the Flocking Behavior of Birds for Controlling Congestion in Autonomous Decentralized Net-
works. In CEC’09 [1350], pages 1753–1761, 2009. doi: 10.1109/CEC.2009.4983153. INSPEC
Accession Number: 10688748.
113. Hendrik James Antonisse. A Grammar-Based Genetic Algorithm. In FOGA’90 [2562], pages
193–204, 1990.
114. 19th European Conference on Machine Learning and the 12th European Conference on Princi-
ples and Practice of Knowledge Discovery in Databases (ECML PKDD’08), September 15–19,
2008, Antwerp, Belgium.
115. Kamel Aouiche and Jérôme Darmont. Data Mining-Based Materialized View and Index
Selection in Data Warehouses. Journal of Intelligent Information Systems, 33(1):65–93, Au-
gust 2009, Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/s10844-009-0080-0.
116. Kamel Aouiche, Pierre-Emmanuel Jouve, and Jérôme Darmont. Clustering-Based Mate-
rialized View Selection in Data Warehouses. In ADBIS’06 [1828], pages 81–95, 2009.
doi: 10.1007/11827252_9. Fully available at http://hal.archives-ouvertes.fr/docs/
00/32/06/24/PDF/adbis_06.pdf and http://hal.archives-ouvertes.fr/docs/
00/32/06/24/PS/adbis_06.ps [accessed 2011-04-12]. CiteSeerx: 10.1.1.95.8281. arXiv
ID: cs/0703114v1.
117. Chatchawit Aporntewan and Prabhas Chongstitvatana. A Hardware Implementation of
the Compact Genetic Algorithm. In CEC’01 [1334], pages 624–629, volume 1, 2001.
doi: 10.1109/CEC.2001.934449. Fully available at http://pioneer.netserv.chula.
ac.th/˜achatcha/Publications/0004.pdf [accessed 2009-08-21]. INSPEC Accession Num-
ber: 7006160.
118. David L. Applegate, Robert E. Bixby, Vašek Chvátal, and William J. Cook. The Travel-
ing Salesman Problem: A Computational Study, Princeton Series in Applied Mathematics.
Princeton University Press: Princeton, NJ, USA, February 2007. isbn: 0-691-12993-2.
Google Books ID: nmF4rVNJMVsC.
119. Hamid R. Arabnia, editor. Proceedings of the 2010 International Conference on Genetic and
Evolutionary Methods (GEM’10), July 12–15, 2010, Monte Carlo Resort: Las Vegas, NV,
USA.
120. Hamid R. Arabnia and Youngsong Mun, editors. Proceedings of the 2008 International Con-
ference on Genetic and Evolutionary Methods (GEM’08), June 14–17, 2008, Monte Carlo
Resort: Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-069-8.
121. Hamid R. Arabnia and Ashu M. G. Solo, editors. Proceedings of the 2009 International
Conference on Genetic and Evolutionary Methods (GEM’09), July 13–16, 2009, Monte Carlo
Resort: Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-106-6.
122. Hamid R. Arabnia, Jack Y. Yang, and Mary Qu Yang, editors. Proceedings of the 2007
International Conference on Genetic and Evolutionary Methods (GEM’07), June 25–28, 2007,
Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-038-8.
123. Victoria S. Aragón and Susana C. Esquivel. An Evolutionary Algorithm to Track Changes
of Optimum Value Locations in Dynamic Environments. Journal of Computer Science
& Technology (JCS&T), 4(3(12)):127–133, October 2004, Iberoamerican Science & Tech-
nology Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM,
USA. Fully available at http://journal.info.unlp.edu.ar/journal/journal12/
papers/JCST-Oct04-1.pdf [accessed 2009-07-13]. Invited paper.
124. Edoardo Ardizzone, Salvatore Gaglio, and Filippo Sorbello, editors. Proceedings of the 2nd
Congress of the Italian Association for Artificial Intelligence (AI*IA) on Trends in Artificial
Intelligence (AIIA’91), October 29, 1991, Palermo, Italy, volume 549 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0387547126 and
3-540-54712-6. OCLC: 150398646, 311526865, 320328143, 440934107, 465123834,
and 466123701.
125. Maribel G. Arenas, Pierre Collet, Ágoston E. Eiben, Márk Jelasity, Juan Julián Merelo-
Guervós, Ben Paechter, Mike Preuß, and Marc Schoenauer. A Framework for Distributed
Evolutionary Algorithms. In PPSN VII [1870], pages 665–675, 2002. Fully available
at http://citeseer.ist.psu.edu/arenas02framework.html [accessed 2008-04-06].
126. Rubén Armañanzas, Iñaki Inza, Roberto Santana, Yvan Saeys, Jose Luis Flores, José An-
tonio Lozano, Yves Van de Peer, Rosa Blanco, Vı́ctor Robles, Concha Bielza, and Pedro
Larrañaga. A Review of Estimation of Distribution Algorithms in Bioinformatics. Bio-
Data Mining, 1(6), 2008, BioMed Central Ltd.: London, UK. doi: 10.1186/1756-0381-1-6.
Fully available at http://www.biodatamining.org/content/1/1/6 [accessed 2009-08-28].
PubMed ID: 18822112.
127. Grenville Armitage, Mark Claypool, and Philip Branch. Network Latency, Jitter and Loss.
In Networking and Online Games: Understanding and Engineering Multiplayer Internet
Games [128], Chapter 5, pages 69–82. John Wiley & Sons Ltd.: New York, NY, USA, 2006.
doi: 10.1002/047003047X.ch5.
128. Grenville Armitage, Mark Claypool, and Philip Branch. Networking and Online Games:
Understanding and Engineering Multiplayer Internet Games. John Wiley & Sons Ltd.:
New York, NY, USA, 2006. doi: 10.1002/047003047X. isbn: 0470018577. Google Books
ID: AQ7bIwAACAAJ and mK9QAAAAMAAJ. OCLC: 62896496 and 222232694.
129. Minoru Asada and Hiroaki Kitano, editors. RoboCup-98: Robot Soccer World Cup II,
July 1998, Paris, France, volume 1604 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-66320-7. Published in 1999.
130. B. Ashadevi and R. Balasubramanian. Cost Effective Approach for Materialized View Se-
lection in Data Warehousing Environment. International Journal of Computer Science and
Network Security (IJCSNS), 8(10):236–242, 2008, IJCSNS: Seoul, South Korea. Fully avail-
able at http://paper.ijcsns.org/07_book/200810/20081036.pdf [accessed 2011-04-12].
131. B. Ashadevi and R. Balasubramanian. Optimized Cost Effective Approach for Se-
lection of Materialized Views in Data Warehousing. Journal of Computer Science
& Technology (JCS&T), 9(1):21–26, April 2009, Iberoamerican Science & Technol-
ogy Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM,
USA. Fully available at http://journal.info.unlp.edu.ar/journal/journal25/
papers/JCST-Apr09-4.pdf [accessed 2011-04-12].
132. Workshop on Practical Combinatorial Optimization (I ALIO/EURO), August 14–18, 1989,
Rio de Janeiro, RJ, Brazil. Asociación Latino-Iberoamericana de Investigación Operativa
(ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research Soci-
eties (EURO): Brussels, Belgium.
133. Workshop on Practical Combinatorial Optimization (II ALIO/EURO), November 11–15,
1996, Catholic University: Valparaı́so, Chile. Asociación Latino-Iberoamericana de Inves-
tigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European Opera-
tional Research Societies (EURO): Brussels, Belgium.
134. Workshop on Applied Combinatorial Optimization (III ALIO/EURO), November 1–7, 1999,
Erice, Italy. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de
Janeiro, RJ, Brazil and Association of European Operational Research Societies (EURO):
Brussels, Belgium.
135. Workshop on Applied Combinatorial Optimization (IV ALIO/EURO), November 4–6, 2002,
Pucón, Chile. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de
Janeiro, RJ, Brazil and Association of European Operational Research Societies (EURO):
Brussels, Belgium.
136. The Fifth ALIO/EURO Conference on Combinatorial Optimization (V ALIO/EURO), Octo-
ber 26–28, 2005, Paris, France. Asociación Latino-Iberoamericana de Investigación Operativa
(ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research Soci-
eties (EURO): Brussels, Belgium.
137. Workshop on Applied Combinatorial Optimization (VI ALIO/EURO), December 15–17,
2008, Buenos Aires, Argentina. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research
Societies (EURO): Brussels, Belgium.
138. Workshop on Applied Combinatorial Optimization (VII ALIO/EURO), May 4–6, 2011, Uni-
versity of Porto, Faculty of Sciences: Porto, Portugal. Asociación Latino-Iberoamericana de
Investigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European
Operational Research Societies (EURO): Brussels, Belgium.
139. Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing
Systems (CHI’08), April 5–10, 2008, Florence, Italy. Association for Computing Machinery
(ACM): New York, NY, USA.
140. Proceedings of the 17th International World Wide Web Conference (WWW’08), April 21–25,
2008, Běijı̄ng International Convention Center: Běijı̄ng, China. Association for Computing
Machinery (ACM): New York, NY, USA.
141. Mikhail J. Atallah, editor. Algorithms and Theory of Computation Handbook, Chapman and
Hall/CRC Applied Algorithms and Data Structures Series. CRC Press, Inc.: Boca Raton,
FL, USA, first edition, 1998. isbn: 0-8493-2649-4. Google Books ID: fyTk5AwnnoQC.
OCLC: 39606815, 54016189, 222972701, and 471519169. Library of Congress Con-
trol Number (LCCN): 98038016. GBV-Identification (PPN): 247401749. LC Classifica-
tion: QA76.9.A43 A43 1999. Produced By Suzanne Lassandro.
142. Proceedings of the Fifth Joint Conference on Information Science (JCIS’00), February 27–
March 3, 2000, Atlantic City, NJ, USA.
143. Paolo Atzeni, Albertas Caplinskas, and Hannu Jaakkola, editors. Proceedings of the 12th
East European Conference on Advances in Databases and Information Systems (ADBIS),
September 5–9, 2008, Pori, Finland, volume 5207 in Information Systems and Application,
incl. Internet/Web and HCI (SL 3), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-85713-6. isbn: 3-540-85712-5.
Library of Congress Control Number (LCCN): 2008933208.
144. Wai-Ho Au, Keith C. C. Chan, and Xin Yao. A Novel Evolutionary Data Mining Algorithm
with Applications to Churn Prediction. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 7(6):532–545, December 2003, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.819264. CiteSeerx: 10.1.1.10.8230.
145. Anne Auger and Olivier Teytaud. Continuous Lunches are Free Plus the Design of
Optimal Optimization Algorithms. Rapports de Recherche inria-00369788, Insti-
tut National de Recherche en Informatique et en Automatique (INRIA), March 21,
2009. Fully available at http://hal.inria.fr/docs/00/36/97/88/PDF/
ccflRevisedVersionAugerTeytaud.pdf [accessed 2010-10-29]. See also [146].
146. Anne Auger and Olivier Teytaud. Continuous Lunches are Free Plus the Design of Optimal
Optimization Algorithms. Algorithmica, 57(1):121–146, May 2010, Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/s00453-008-9244-5. See also [145].
147. Anne Auger, Nikolaus Hansen, Nikolas Mauny, Raymond Ros, and Marc Schoenauer. Bio-
Inspired Continuous Optimization: The Coming of Age. In CEC’07 [1343], 2007. Fully
available at http://www.lri.fr/˜marc/schoenauerCEC07.pdf [accessed 2009-10-20]. In-
vited Talk at CEC’2007. See also [1181].
148. P. Augerat, José Manuel Belenguer Ribera, Enrique Benavent, A. Corberán, D. Naddef, and
G. Rinaldi. Computational Results with a Branch and Cut Code for the Capacitated Vehicle
Routing Problem. Technical Report Research Report 949-M, Université Joseph Fourier:
Grenoble, France, 1995.
149. Douglas A. Augusto and Helio José Correa Barbosa. Symbolic Regression via Genetic
Programming. In SBRN’00 [981], pages 173–178, 2000. doi: 10.1109/SBRN.2000.889734.
INSPEC Accession Number: 6813089.
150. Lerina Aversano, Massimiliano Di Penta, and Kunal Taneja. A Genetic Programming Approach
to Support the Design of Service Compositions. International Journal of Computer Systems
Science and Engineering (CSSE), 21(4):247–254, July 2006, CRL Publishing Limited: Le-
icester, UK. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
Aversano_2006_IJCSSE.html and http://www.rcost.unisannio.it/mdipenta/
papers/csse06.pdf [accessed 2011-01-19]. CiteSeerx: 10.1.1.145.843.
151. Erel Avineri, Mario Köppen, Keshav Dahal, Yos Sunitiyoso, and Rajkumar Roy, editors. Ap-
plications of Soft Computing: 12th Online World Conference on Soft Computing in Industrial
Applications (WSC12), October 16–26, 2007, volume 52 in Advances in Intelligent and Soft
972 REFERENCES
Computing. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-88079-0. Library of
Congress Control Number (LCCN): 2008935493.
152. Mordecai Avriel. Nonlinear Programming: Analysis and Methods. Dover Publica-
tions: Mineola, NY, USA, September 16, 2003. isbn: 0-486-43227-0. Google Books
ID: 815RAAAAMAAJ and byF4Xb1QbvMC.
153. Proceedings of the 3rd International Conference on Bio-Inspired Models of Network, Informa-
tion, and Computing Systems (BIONETICS’08), November 25–28, 2008, Awaji Yumebutai
International Conference Center: Awaji City, Hyogo, Japan.
154. Mehmet E. Aydin, Jun Yang, and Jie Zhang. A Comparative Investigation on Heuristic
Optimization of WCDMA Radio Networks. In EvoWorkshops’07 [1050], pages 111–120,
2007. doi: 10.1007/978-3-540-71805-5_12.
155. Mehmet E. Aydin, Raymond S. K. Kwan, Cyril Leung, and Jie Zhang. Multiuser Scheduling
in HSDPA with Particle Swarm Optimization. In EvoWorkshops’09 [1052], pages 71–80,
2009. doi: 10.1007/978-3-642-01129-0_8.
156. Antonia Azzini, Matteo De Felice, Sandro Meloni, and Andrea G. B. Tettamanzi. Soft
Computing Techniques for Internet Backbone Traffic Anomaly Detection. In EvoWorkshops’09 [1052], pages 99–104, 2009. doi: 10.1007/978-3-642-01129-0_12.
157. Sara Baase and Allen Van Gelder. Computer Algorithms: Introduction to Design and Analy-
sis. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 3rd edition, Novem-
ber 1999. isbn: 0-201-61244-5. Google Books ID: GInCQgAACAAJ. OCLC: 40813345 and
474499890. Library of Congress Control Number (LCCN): 99014185. GBV-Identification
(PPN): 301465576 and 314912401. LC Classification: QA76.9.A43 B33 2000.
158. Jaume Bacardit i Peñarroya. Pittsburgh Genetics-Based Machine Learning in the Data Min-
ing Era: Representations, Generalization, and Run-time. PhD thesis, Ramon Llull Univer-
sity: Barcelona, Catalonia, Spain, October 8, 2004, Josep Maria Garrell i Guiu, Advisor.
Fully available at http://www.cs.nott.ac.uk/˜jqb/publications/thesis.pdf [ac-
cessed 2007-09-12].
161. Jaume Bacardit i Peñarroya and Natalio Krasnogor. Smart Crossover Operator with Multiple
Parents for a Pittsburgh Learning Classifier System. In GECCO’06 [1516], pages 1441–1448,
2006. doi: 10.1145/1143997.1144235. Fully available at http://www.cs.nott.ac.uk/
˜jqb/publications/gecco2006-sx.pdf [accessed 2007-09-12].
162. Jaume Bacardit i Peñarroya and Natalio Krasnogor. A Mixed Discrete-
Continuous Attribute List Representation for Large Scale Classification Domains.
In GECCO’09-I [2342], pages 1155–1162, 2009. doi: 10.1145/1569901.1570057.
Fully available at http://www.asap.cs.nott.ac.uk/publications/pdf/
AMixedDiscreteContinuousAttributeListRepresentation.pdf [accessed 2009-09-
05].
163. Paul Gustav Heinrich Bachmann. Die Analytische Zahlentheorie / Dargestellt von Paul Bach-
mann, volume Zweiter Theil in Zahlentheorie: Versuch einer Gesamtdarstellung dieser Wis-
senschaft in ihren Haupttheilen. University of Michigan Library, Scholarly Publishing Office:
Ann Arbor, MI, USA, 1894–2004. isbn: 1418169633, 141818540X, and 978-1418169633.
Fully available at http://gallica.bnf.fr/ark:/12148/bpt6k994750 [accessed 2008-05-
20]. Google Books ID: SfKYPQAACAAJ, SvKYPQAACAAJ, and ujyhGQAACAAJ.
164. George Baciu, Yingxu Wang, Yiyu Yao, Witold Kinsner, Keith C. C. Chan, and Lotfi A.
Zadeh, editors. Cognitive Computing and Semantic Mining – Proceedings of the 8th IEEE
International Conference on Cognitive Informatics (ICCI’09), June 15–17, 2009, Hong Kong
Polytechnic University: Kowloon, Hong Kong / Xiānggǎng, China. IEEE Computer Society:
Piscataway, NJ, USA. isbn: 1-4244-4642-2. Google Books ID: MTx3PwAACAAJ.
165. Thomas Bäck. Generalized Convergence Models for Tournament- and (µ, λ)-Selection. In
ICGA’95 [883], pages 2–8, 1995. CiteSeerx : 10.1.1.56.5969.
166. Thomas Bäck, editor. Proceedings of The Seventh International Conference on Genetic Al-
gorithms (ICGA’97), July 19–23, 1997, Michigan State University: East Lansing, MI, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-487-1. Google
Books ID: eP1QAAAAMAAJ and fI0BAAAACAAJ. OCLC: 37024995 and 247130717.
167. Thomas Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evo-
lutionary Programming, Genetic Algorithms. Oxford University Press, Inc.: New York,
NY, USA, January 1996. isbn: 0-19-509971-0. Google Books ID: 6MOqAAAACAAJ and
EaN7kvl5coYC. OCLC: 32350061 and 246886085. Library of Congress Control Number
(LCCN): 95013506. GBV-Identification (PPN): 185320007 and 27885365X. LC Classifi-
cation: QA402.5 .B333 1996.
168. Thomas Bäck and Ulrich Hammel. Evolution Strategies Applied to Perturbed Objective
Functions. In CEC’94 [1891], pages 40–45, volume 1, 1994. doi: 10.1109/ICEC.1994.350045.
Fully available at http://citeseer.ist.psu.edu/16973.html [accessed 2008-07-19].
169. Thomas Bäck and Hans-Paul Schwefel. An Overview of Evolutionary Algorithms for Pa-
rameter Optimization. Evolutionary Computation, 1(1):1–23, Spring 1993, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1993.1.1.1. Fully available at http://www.
santafe.edu/events/workshops/images/7/7e/Back-ec93.pdf and www.robots.
ox.ac.uk/˜cvrg/trinity2002/EC/EAs.ps.gz [accessed 2009-10-18].
170. Thomas Bäck, Frank Hoffmeister, and Hans-Paul Schwefel. A Survey of Evolution Strate-
gies. In ICGA’91 [254], pages 2–9, 1991. Fully available at http://130.203.133.121:
8080/viewdoc/summary?doi=10.1.1.42.3375, http://bi.snu.ac.kr/Info/EC/
A%20Survey%20of%20Evolution%20Strategies.pdf, and http://www6.uniovi.
es/pub/EC/GA/papers/icga91.ps.gz [accessed 2009-09-18]. CiteSeerx : 10.1.1.42.3375.
171. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Handbook of Evolutionary
Computation, Computational Intelligence Library. Oxford University Press, Inc.: New York,
NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK,
and CRC Press, Inc.: Boca Raton, FL, USA, January 1, 1997. isbn: 0-7503-0392-1 and
0-7503-0895-8. Google Books ID: kgqGQgAACAAJ, n5nuiIZvmpAC, and neKNGAAACAAJ.
OCLC: 173074676 and 327018351. Library of Congress Control Number
(LCCN): 97004461. GBV-Identification (PPN): 224708430 and 364589272. LC Classifi-
cation: QA76.87 .H357 1997.
172. Thomas Bäck, Ulrich Hammel, and Hans-Paul Schwefel. Evolutionary Computation:
Comments on the History and Current State. IEEE Transactions on Evolution-
ary Computation (IEEE-EC), 1(1):3–17, April 1997, IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/4235.585888. Fully available at http://sci2s.
ugr.es/docencia/doctobio/EC-History-IEEETEC-1-1-1997.pdf [accessed 2009-08-05].
CiteSeerx : 10.1.1.6.5943. INSPEC Accession Number: 5611369.
173. Thomas Bäck, Zbigniew Michalewicz, and Xin Yao, editors. IEEE International Confer-
ence on Evolutionary Computation (CEC’97), April 13–16, 1997, Indianapolis, IN, USA.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-3949-5. Google Books
ID: 6JZVAAAAMAAJ and so7XHQAACAAJ. INSPEC Accession Number: 5577833.
174. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Evolutionary Compu-
tation 1: Basic Algorithms and Operators. Institute of Physics Publishing Ltd. (IOP):
Dirac House, Temple Back, Bristol, UK, January 2000. isbn: 0750306645. Google Books
ID: 4HMYCq9US78C.
175. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Evolutionary Computation
2: Advanced Algorithms and Operators. Institute of Physics Publishing Ltd. (IOP): Dirac
House, Temple Back, Bristol, UK, November 2000. isbn: 0750306653.
176. Jeffrey L. Bada, C. Bigham, and Stanley L. Miller. Impact Melting of Frozen Oceans on
the Early Earth: Implications for the Origin of Life. Proceedings of the National Academy
of Sciences of the United States of America (PNAS), 91(4):1248–1250, February 15, 1994,
National Academy of Sciences: Washington, DC, USA. doi: 10.1073/pnas.91.4.1248. PubMed
ID: 11539550.
177. Liviu Badea and Monica Stanciu. Refinement Operators Can Be (Weakly) Perfect. In ILP-99 [851], pages 21–32, 1999. doi: 10.1007/3-540-48751-4_4. Fully available at http://www.
ai.ici.ro/papers/ilp99.ps.gz [accessed 2008-02-22].
178. P. Badeau, Michel Gendreau, F. Guertin, Jean-Yves Potvin, and Éric D. Taillard. A Parallel
Tabu Search Heuristic for the Vehicle Routing Problem with Time Windows. Transportation
Research Part C: Emerging Technologies, 5(2):109–122, April 1997, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands and Pergamon Press: Oxford, UK.
179. Philipp Andreas Baer and Roland Reichle. Communication and Collaboration in Heteroge-
neous Teams of Soccer Robots. In Robotic Soccer [1740], Chapter 1, pages 1–28. I-Tech Educa-
tion and Publishing: Vienna, Austria, 2007. Fully available at http://s.i-techonline.
com/Book/Robotic-Soccer/ISBN978-3-902613-21-9-rs01.pdf [accessed 2008-04-23].
180. John Daniel Bagley. The Behavior of Adaptive Systems which Employ Genetic and Cor-
relation Algorithms. PhD thesis, University of Michigan: Ann Arbor, MI, USA, De-
cember 1967. Fully available at http://deepblue.lib.umich.edu/handle/2027.
42/3354 [accessed 2010-08-03]. Order No.: AAI6807556. University Microfilms No.: 68-7556.
ID: bab0779.0001.001. ORA PROJECT.01-i252. See also [181].
181. John Daniel Bagley. The Behavior of Adaptive Systems which employ Genetic and Correla-
tion Algorithms. Dissertation Abstracts International (DAI), 28(12):5160B, ProQuest: Ann
Arbor, MI, USA. See also [180].
182. Ruzena Bajcsy, editor. Proceedings of the 13th International Joint Conference on Artificial In-
telligence (IJCAI’93-I), August 28–September 3, 1993, Chambéry, France, volume 1. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-300-X. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-93-VOL1/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [183, 927, 2259].
183. Ruzena Bajcsy, editor. Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI’93-II), August 28–September 3, 1993, Chambéry, France, volume 2. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-300-X. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-93-VOL2/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [182, 927, 2259].
184. Javier Bajo, Emilio S. Corchado, Álvaro Herrero, and Juan M. Corchado, editors. Proceed-
ings of the 1st International Workshop on Hybrid Artificial Intelligence Systems (HAIS’06),
October 27, 2006, Ribeirão Preto, SP, Brazil. University of Salamanca: Salamanca, Spain.
isbn: 84-934181-9-6.
185. Per Bak and Kim Sneppen. Punctuated Equilibrium and Criticality in a Simple Model of
Evolution. Physical Review Letters, 71(24):4083–4086, December 13, 1993, American Phys-
ical Society: College Park, MD, USA. doi: 10.1103/PhysRevLett.71.4083. Fully available
at http://prola.aps.org/abstract/PRL/v71/i24/p4083_1 [accessed 2008-08-24].
186. Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-Organized Criticality. Physical Review
A, 38(1):364–374, July 1, 1988, American Physical Society: College Park, MD, USA.
doi: 10.1103/PhysRevA.38.364. Fully available at http://prola.aps.org/abstract/
PRA/v38/i1/p364_1 [accessed 2008-08-23].
187. James E. Baker. Adaptive Selection Methods for Genetic Algorithms. In ICGA’85 [1131],
pages 101–111, 1985.
188. James E. Baker. Reducing Bias and Inefficiency in the Selection Algorithm. In
ICGA’87 [1132], pages 14–21, 1987.
189. Lazlo Bako. Real-time Classification of Datasets with Hardware Embedded Neuromorphic
Neural Networks. Briefings in Bioinformatics, 11(3):348–363, 2010, Oxford University Press,
Inc.: New York, NY, USA. doi: 10.1093/bib/bbp066.
190. Roberto Baldacci, Maria Battarra, and Daniele Vigo. Routing a Heterogeneous Fleet
of Vehicles. Technical Report DEIS OR.INGCE 2007/1, Università degli Studi di
Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS), Operations
Research Group (Ricerca Operativa): Bológna, Emilia-Romagna, Italy, January 2007.
Fully available at or.ingce.unibo.it/ricerca/technical-reports-or-ingce/
papers/bbv_hvrp.pdf [accessed 2011-10-12].
191. James Mark Baldwin. A New Factor in Evolution. The American Naturalist, 30(354):441–451,
June 1896, University of Chicago Press for The American Society of Naturalists: Chicago,
IL, USA. Fully available at http://www.brocku.ca/MeadProject/Baldwin/Baldwin_
1896_h.html [accessed 2008-09-10]. See also [192].
192. James Mark Baldwin. A New Factor in Evolution. In Adaptive Individuals in Evolving Pop-
ulations: Models and Algorithms [255], pages 59–80. Addison-Wesley Longman Publishing
Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [191].
193. Shumeet Baluja. Population-Based Incremental Learning – A Method for Integrating Genetic
Search Based Function Optimization and Competitive Learning. Technical Report CMU-CS-94-163, Carnegie Mellon University (CMU), School of Computer Science, Computer Science
Department: Pittsburgh, PA, USA, June 2, 1994. Fully available at http://www.ri.cmu.edu/pub_files/pub1/baluja_shumeet_1994_2/baluja_shumeet_1994_2.pdf [accessed 2011-12-05]. CiteSeerx : 10.1.1.61.8554.
194. Shumeet Baluja. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics. Technical Report CMU-CS-95-193, Carnegie Mellon University (CMU),
School of Computer Science, Computer Science Department: Pittsburgh, PA, USA, Septem-
ber 1, 1995. CiteSeerx : 10.1.1.43.1108.
195. Shumeet Baluja and Richard A. Caruana. Removing the Genetics from the Standard Genetic
Algorithm. In ICML’95 [2221], pages 38–46, 1995. Fully available at http://www.cs.
cornell.edu/˜caruana/ml95.ps [accessed 2009-08-20]. See also [196].
196. Shumeet Baluja and Richard A. Caruana. Removing the Genetics from the Standard Genetic Algorithm. Technical Report CMU-CS-95-141, Carnegie Mellon University
(CMU), School of Computer Science, Computer Science Department: Pittsburgh, PA,
USA, May 22, 1995. Fully available at http://www.i3s.unice.fr/˜verel/supports/
articles/baluja95removing.pdf and https://eprints.kfupm.edu.sa/62112/
1/62112.pdf [accessed 2009-08-20]. CiteSeerx : 10.1.1.44.5424. See also [195].
197. Robert Balzer, editor. Proceedings of the 1st Annual National Conference on Artificial Intel-
ligence (AAAI’80), August 18–20, 1980, Stanford University: Stanford, CA, USA. American
Association for Artificial Intelligence: Los Altos, CA, USA and John W. Kaufmann Inc.:
Washington, DC, USA. isbn: 0-262-51050-2. Partly available at http://www.aaai.
org/Conferences/AAAI/aaai80.php [accessed 2007-09-06].
198. Carlos A. Bana e Costa, editor. Selected Readings from the Third International Summer
School on Multicriteria Decision Aid: Methods, Applications, and Software (MCDA’90),
July 23–27, 1988, Monte Estoril, Lisbon, Portugal. Springer-Verlag GmbH: Berlin, Germany. isbn: 0387529500 and 3540529500. Google Books ID: F2yfGAAACAAJ. Library of
Congress Control Number (LCCN): 90010313. LC Classification: T58.62 .R43 198. Pub-
lished in 1990.
199. Saswatee Banerjee and Lakshminarayan N. Hazra. Thin Lens Design of Cooke Triplet
Lenses: Application of a Global Optimization Technique. In Novel Optical Systems and
Large-Aperture Imaging [259], pages 175–183, 1998. doi: 10.1117/12.332474.
200. Proceedings of the 10th IEEE International Conference on Cognitive Informatics & Cogni-
tive Computing (ICCI*CC’11), August 18–20, 2011, Banff, AB, Canada. Partly available
at http://www.ucalgary.ca/icci_cc2011/ [accessed 2011-02-28].
201. Jørgen Bang-Jensen, Gregory Gutin, and Anders Yeo. When the Greedy Algorithm
Fails. Discrete Optimization, 1(2):121–127, November 15, 2004, Elsevier Science Publish-
ers B.V.: Essex, UK. doi: 10.1016/j.disopt.2004.03.007. Fully available at http://www.
optimization-online.org/DB_FILE/2003/03/622.pdf.
202. Balázs Bánhelyi, Marco Biazzini, Alberto Montresor, and Márk Jelasity. Peer-to-Peer Op-
timization in Large Unreliable Networks with Branch-and-Bound and Particle Swarms. In
EvoWorkshops’09 [1052], pages 87–92, 2009. doi: 10.1007/978-3-642-01129-0_10.
203. David Banks, Leanna House, Frederick R. McMorris, Phipps Arabie, and Wolfgang Gaul,
editors. Classification, Clustering, and Data Mining Applications: Proceedings of the Meeting
of the International Federation of Classification Societies (IFCS), July 15–18, 2004, Studies in
Classification, Data Analysis, and Knowledge Organization. Springer New York: New York,
NY, USA. isbn: 3540220143.
204. Ajay Bansal, M. Brian Blake, Srividya Kona, Steffen Bleul, Thomas Weise, and Michael C.
Jäger. WSC-08: Continuing the Web Services Challenge. In CEC/EEE’08 [1346], pages
351–354, 2008. doi: 10.1109/CECandEEE.2008.146. Fully available at http://www.
it-weise.de/documents/files/BBKBWJ2008WSC08CTWSC.pdf. INSPEC Accession
Number: 10475109. Ei ID: 20091512027446. IDS (SCI): BJF01.
205. Wolfgang Banzhaf. Genetic Programming for Pedestrians. In ICGA’93 [969], pages 17–21,
1993. Fully available at ftp://ftp.cs.cuhk.hk/pub/EC/GP/papers/pedes93.
ps.gz, http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/banzhaf_mrl.html,
and http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/
GenProg_forPed.ps.Z [accessed 2010-08-18]. CiteSeerx : 10.1.1.38.1144. Also: MRL
Technical Report 93-03.
206. Wolfgang Banzhaf and Frank H. Eeckman, editors. Evolution and Biocomputation –
Computational Models of Evolution, volume 899/1995 in Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany, kindle edition, April 13, 1995.
doi: 10.1007/3-540-59046-3. isbn: 0-387-59046-3 and 3-540-59046-3. Google Books
ID: dZf-yrmSZPkC. asin: B000V1DAP0. Revised papers from the “Evolution as a Compu-
tational Process” workshop held in July 1992 in Monterey, CA, USA.
207. Wolfgang Banzhaf and Christian W. G. Lasarczyk. Genetic Programming of an Algorithmic Chemistry. In GPTP’04 [2089], pages 175–190, 2004. doi: 10.1007/0-387-23254-0_11.
Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/banzhaf_
2004_GPTP.html and http://www.cs.mun.ca/˜banzhaf/papers/algochem.pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.94.6816.
208. Wolfgang Banzhaf and Colin R. Reeves, editors. Proceedings of the Fifth Workshop on Foun-
dations of Genetic Algorithms (FOGA’98), July 22–25, 1998, Madison, WI, USA. Morgan
Kaufmann: San Mateo, CA, USA. isbn: 1-55860-559-2. Published April 2, 1999.
209. Wolfgang Banzhaf, Peter Nordin, Robert E. Keller, and Frank D. Francone. Genetic Pro-
gramming: An Introduction – On the Automatic Evolution of Computer Programs and Its Ap-
plications. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA and dpunkt.verlag:
Heidelberg, Germany, November 30, 1997. isbn: 1-558-60510-X and 3-920993-58-6.
Google Books ID: 1697qefFdtIC. OCLC: 38067741, 247738380, 468426941, and
474902629. Library of Congress Control Number (LCCN): 97051603. GBV-Identification
(PPN): 238379108, 279969872, and 307343650. LC Classification: QA76.623 .G46 1998.
210. Wolfgang Banzhaf, Riccardo Poli, Marc Schoenauer, and Terence Claus Fogarty, editors. Pro-
ceedings of the First European Workshop on Genetic Programming (EuroGP’98), April 14–
15, 1998, Paris, France, volume 1391/1998 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0055923. isbn: 3-540-64360-5.
See also [2195].
211. Wolfgang Banzhaf, Jason M. Daida, Ágoston E. Eiben, Max H. Garzon, Vasant Honavar,
Mark J. Jakiela, and Robert Elliott Smith, editors. Proceedings of the Genetic and Evolu-
tionary Computation Conference (GECCO’99), July 13–17, 1999, Orlando, FL, USA. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-611-4. Google
Books ID: -1vTAAAACAAJ and hZlQAAAAMAAJ. OCLC: 42918215, 43044489, 59333111,
and 316858650. The 1999 Genetic and Evolutionary Computational Conference (GECCO-
99) combined the longest running conferences in evolutionary computation (ICGA) and the
world’s two largest EC conferences (GP and ICGA) to create a unique opportunity to col-
lect the best in research in this growing field of computer science and engineering. See also
[2093, 2502].
212. Wolfgang Banzhaf, Guillaume Beslon, Steffen Christensen, James A. Foster, François Képès,
Virginie Lefort, Julian Francis Miller, Miroslav Radman, and Jeremy J. Ramsden. From
Artificial Evolution to Computational Evolution: A Research Agenda. Nature Reviews Ge-
netics, 7(9):729–735, September 2006, Nature Publishing Group (NPG): Houndmills, Bas-
ingstoke, Hampshire, UK. doi: 10.1038/nrg1921. Fully available at http://www.cs.mun.
ca/˜banzhaf/papers/nrg2006.pdf [accessed 2008-07-20]. Guidelines/Perspectives.
213. Elena Baralis, Stefano Paraboschi, and Ernest Teniente. Materialized View Selection in a Mul-
tidimensional Database. In VLDB’97 [1434], pages 156–165, 1997. Fully available at http://
www.vldb.org/conf/1997/P156.PDF [accessed 2011-04-13]. CiteSeerx : 10.1.1.41.7437
and 10.1.1.78.9501.
214. Helio José Correa Barbosa, Fernanda M. P. Raupp, Carlile Campos Lavor, Harry C. Lima,
and Nelson Maculan. A Hybrid Genetic Algorithm for Finding Stable Conformations of Small
Molecules. In SBRN’00 [981], pages 90–94, 2000. doi: 10.1109/SBRN.2000.889719. INSPEC
Accession Number: 6813074.
215. 11th European Conference on Machine Learning (ECML’00), May 30–June 2, 2000,
Barcelona, Catalonia, Spain.
216. Proceedings of the EUROGEN2003 Conference: Evolutionary Methods for Design Optimiza-
tion and Control with Applications to Industrial Problems (EUROGEN’03), September 15–
17, 2003, Barcelona, Catalonia, Spain. isbn: 84-95999-33-1. Partly available at http://
congress.cimne.upc.es/eurogen03/ [accessed 2007-09-16]. Published on CD.
217. Tiffany Barnes, Michel Desmarais, Cristóbal Romero, and Sebastián Ventura, ed-
itors. Proceedings of the 2nd International Conference on Educational Data
Mining (EDM’09), July 1–3, 2009, Córdoba, Spain. isbn: 84-613-2308-4.
Fully available at http://www.educationaldatamining.org/EDM2009/uploads/
proceedings/edm-proceedings-2009.pdf [accessed 2011-10-25]. OCLC: 733734163.
218. Lionel Barnett. Tangled Webs: Evolutionary Dynamics on Fitness Landscapes with Neu-
trality. Master’s thesis, University of Sussex, School of Cognitive Science: Brighton, UK,
Summer 1997, Inman Harvey, Supervisor. CiteSeerx : 10.1.1.48.2862.
219. Lionel Barnett. Ruggedness and Neutrality – The NKp Family of Fitness Landscapes. In
Artificial Life VI [32], pages 18–27, 1998. Fully available at http://www.cogs.susx.ac.
uk/users/lionelb/downloads/publications/alife6_paper.ps.gz [accessed 2009-07-10]. CiteSeerx : 10.1.1.43.5010.
220. Nils Aall Barricelli. Esempi Numerici di Processi di Evoluzione. Methodos, 6(21–22):45–68,
1954.
221. Nils Aall Barricelli. Symbiogenetic Evolution Processes Realized by Artificial Methods.
Methodos, 9(35–36):143–182, 1957.
222. Nils Aall Barricelli. Numerical Testing of Evolution Theories. Part I. Theoretical Introduction and Basic Tests. Acta Biotheoretica, 16(1/2):69–98, March 1962, Springer Netherlands:
Dordrecht, Netherlands. doi: 10.1007/BF01556771. Received: 27 November 1961. See also
[223].
223. Nils Aall Barricelli. Numerical Testing of Evolution Theories. Part II. Preliminary Tests
of Performance. Symbiogenesis and Terrestrial Life. Acta Biotheoretica, 16(3/4):99–126,
September 1963, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01556602.
Received: 27 November 1961. See also [222].
224. Alwyn M. Barry, editor. Proceedings of the Bird of a Feather Workshop at Genetic and
Evolutionary Computation Conference (GECCO’02 WS), July 8, 2002, Roosevelt Hotel: New
York, NY, USA. AAAI Press: Menlo Park, CA, USA. Bird-of-a-feather Workshops, GECCO-
2002. See also [478, 1675, 1790].
225. Peter Bartalos and Mária Bieliková. Semantic Web Service Composition Framework Based
on Parallel Processing. In CEC’09 [1245], pages 495–498, 2009. doi: 10.1109/CEC.2009.27.
INSPEC Accession Number: 10839132. See also [1571].
226. Thomas Barth, Bernd Freisleben, Manfred Grauer, and Frank Thilo. Distributed Solution of
Optimal Hybrid Control Problems on Networks of Workstations. In CLUSTER 2000 [1365],
pages 162–169, 2000. doi: 10.1109/CLUSTR.2000.889027.
227. Thomas Bartz-Beielstein. Experimental Analysis of Evolution Strategies – Overview and
Comprehensive Introduction. Technical Report 157, Universität Dortmund, Collabora-
tive Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia,
Germany, May 24, 2004. Fully available at http://hdl.handle.net/2003/5449 and
http://ls11-www.cs.uni-dortmund.de/people/tom/158703.pdf [accessed 2009-09-09].
CiteSeerx : 10.1.1.5.2261.
228. Thomas Bartz-Beielstein, Marco Chiarandini, Luís Paquete, and Mike Preuß, editors. Experimental Methods for the Analysis of Optimization Algorithms. Springer-Verlag GmbH:
Berlin, Germany, February 2010. doi: 10.1007/978-3-642-02538-9. isbn: 3-642-02537-4.
Libri-Number: 9596283.
229. Pablo Manuel Rabanal Basalo, Ismael Rodríguez Laguna, and Fernando Rubio Díez. Using
River Formation Dynamics to Design Heuristic Algorithms. In UC’07 [55], pages 163–177,
2007. doi: 10.1007/978-3-540-73554-0_16.
230. Pablo Manuel Rabanal Basalo, Ismael Rodríguez Laguna, and Fernando Rubio Díez. Finding
Minimum Spanning/Distance Trees by Using River Formation Dynamics. In ANTS’08 [2580],
pages 60–71, 2008. doi: 10.1007/978-3-540-87527-7_6.
231. William Bateson. Mendel’s Principles of Heredity, Kessinger Publishing’s Rare
Reprints. Cambridge University Press: Cambridge, UK, 1909–1930. isbn: 1428648194.
Google Books ID: ASoFAQAAIAAJ, NskrHgAACAAJ, WWZfDQljn8gC, YpTPAAAAMAAJ, and
dRjSCFULQegC.
232. Roberto Battiti and Giampietro Tecchiolli. The Reactive Tabu Search. ORSA
Journal on Computing, 6(2):126–140, 1994, Operations Research Society of Amer-
ica (ORSA). doi: 10.1287/ijoc.6.2.126. Fully available at http://citeseer.ist.
psu.edu/141556.html and http://rtm.science.unitn.it/˜battiti/archive/
reactive-tabu-search.ps.gz [accessed 2007-09-15].
233. Andreas Bauer and Wolfgang Lehner. On Solving the View Selection Problem in
Distributed Data Warehouse Architectures. In SSDBM’03 [1368], pages 43–51, 2003.
doi: 10.1109/SSDM.2003.1214953. INSPEC Accession Number: 7804318.
234. Helmut Baumgarten, editor. Das Beste Der Logistik: Innovationen, Strategien, Umsetzun-
gen. Springer-Verlag GmbH: Berlin, Germany and Bundesvereinigung Logistik e.V. (BVL):
Bremen, Germany, 2008. isbn: 3-540-78404-7. Google Books ID: di5g8FY5X_YC.
235. James C. Bean. Genetics and Random Keys for Sequencing and Optimization. Technical
Report 92-43, University of Michigan, Department of Industrial and Operations Engineering:
Ann Arbor, MI, USA, June 1992–December 17, 1993. Fully available at http://deepblue.
lib.umich.edu/bitstream/2027.42/3481/5/ban1152.0001.001.pdf [accessed 2010-
11-22]. See also [236].
236. James C. Bean. Genetic Algorithms and Random Keys for Sequencing and Optimization.
ORSA Journal on Computing, 6(2):154–160, Spring 1994, Operations Research Society of
America (ORSA). doi: 10.1287/ijoc.6.2.154. See also [235].
237. James C. Bean and Atidel Ben Hadj-Alouane. A Dual Genetic Algorithm for Bounded Integer
Programs. Technical Report 92-53, University of Michigan, Department of Industrial and
Operations Engineering: Ann Arbor, MI, USA, October 1992. Fully available at http://
hdl.handle.net/2027.42/3480 [accessed 2008-11-15]. Revised in June 1993.
238. David Beasley, David R. Bull, and Ralph R. Martin. An Overview of Genetic Algorithms:
Part 1. Fundamentals. University Computing, 15(2):58–69, 1993, Inter-University Commit-
tee on Computing: Cambridge, MA, USA. Fully available at http://ralph.cs.cf.ac.
uk/papers/GAs/ga_overview1.pdf [accessed 2010-07-23]. CiteSeerx : 10.1.1.44.3757 and
10.1.1.61.3619. See also [239].
239. David Beasley, David R. Bull, and Ralph R. Martin. An Overview of Genetic Algorithms:
Part 2, Research Topics. University Computing, 15(4):170–181, 1993, Inter-University
Committee on Computing: Cambridge, MA, USA. Fully available at http://proton.
tau.ac.il/˜marek/genetic_algorithms_beasley93_overview_partII.pdf and
http://ralph.cs.cf.ac.uk/papers/GAs/ga_overview2.pdf [accessed 2010-07-23].
241. John Edward Beasley. OR-Library. Brunel University, Department of Mathematical Sciences:
Uxbridge, Middlesex, UK, June 1990. Fully available at http://people.brunel.ac.uk/
˜mastjjb/jeb/info.html [accessed 2011-10-12].
242. William Beaudoin, Sébastien Vérel, Philippe Collard, and Cathy Escazut. Deceptiveness and
Neutrality the ND Family of Fitness Landscapes. In GECCO’06 [1516], pages 507–514, 2006.
doi: 10.1145/1143997.1144091.
243. Jörg D. Becker and Ignaz Eisele, editors. Proceedings of the Workshop on Parallel Process-
ing: Logic, Organization and Technology (WOPPLOT’83), June 27–29, 1983, Federal Armed
Forces University Munich (HSBw M): Neubiberg, Bavaria, Germany, volume 196/1984 in
Lecture Notes in Physics. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BFb0018249.
isbn: 3540129170. Published in 1984.
244. Jörg D. Becker and Ignaz Eisele, editors. Proceedings of the Workshop on Parallel Pro-
cessing: Logic, Organization, and Technology (WOPPLOT’86), July 2–4, 1986, Federal
Armed Forces University Munich (HSBw M): Neubiberg, Bavaria, Germany, volume 253/1987
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-18022-2. isbn: 0-387-18022-2 and 3-540-18022-2. Published in
1987.
245. Jörg D. Becker, Ignaz Eisele, and F. W. Mündemann, editors. Parallelism, Learning, Evolu-
tion: Workshop on Evolutionary Models and Strategies (Neubiberg, Germany, 1989-03-10/11)
and Workshop on Parallel Processing: Logic, Organization, and Technology (Wildbad Kreuth,
Germany, 1989-07-24 to 28) (WOPPLOT’89), 1989, Germany, volume 565/1991 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-55027-5. isbn: 0-387-55027-5
and 3-540-55027-5. Google Books ID: AJrVD3dAousC. OCLC: 25111932, 231262527,
311366610, 320328459, and 440927259. Published in 1991.
246. Mark A. Bedau, John S. McCaskill, Norman H. Packard, and Steen Rasmussen, edi-
tors. Proceedings of the Seventh International Conference on Artificial Life (Artificial Life
VII), August 1–2, 2000, Reed College: Portland, OR, USA, Complex Adaptive Systems,
Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 026252290X. Google Books
ID: xLGm7KFGy4C. OCLC: 44117888.
247. Witold Bednorz, editor. Advances in Greedy Algorithms. IN-TECH Education and Pub-
lishing: Vienna, Austria, November 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-27-5&type=B [accessed 2009-01-13].
248. Niko Beerenwinkel, Lior Pachter, and Bernd Sturmfels. Epistasis and Shapes of Fitness
Landscapes. Statistica Sinica, 17(4):1317–1342, October 2007, Academia Sinica, Institute of
Statistical Science: Taipei, Taiwan and International Chinese Statistical Association (ICSA).
Fully available at http://www3.stat.sinica.edu.tw/statistica/J17N4/J17N43/
J17N43.html [accessed 2009-07-11]. arXiv ID: q-bio/0603034.
249. Catriel Beeri and Peter Buneman, editors. Proceedings of the 7th International Confer-
ence on Database Theory (ICDT’99), January 10–12, 1999, Jerusalem, Israel, volume 1540
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-65452-6.
250. José Manuel Belenguer Ribera. CARPLIB. Universitat de València, Departament
d’Estadística i Investigació Operativa: València, Spain, November 5, 2005. Fully available
at http://www.uv.es/˜belengue/carp/READ_ME [accessed 2011-09-18].
251. José Manuel Belenguer Ribera. The Capacitated Arc Routing Problem (CARPLIB). Universitat de València, Departament d’Estadística i Investigació Operativa: València, Spain, November 5, 2005. Fully available at http://www.uv.es/belengue/carp.html [accessed 2010-12-03].
252. José Manuel Belenguer Ribera and Enrique Benavent. A Cutting Plane Algorithm for
the Capacitated Arc Routing Problem. Computers & Operations Research, 30(5):705–728,
April 2003, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0305-0548(02)00046-1.
253. Richard K. Belew. When both Individuals and Populations Search: Adding Simple Learning
to the Genetic Algorithm. In ICGA’89 [2414], pages 34–41, 1989.
254. Richard K. Belew and Lashon Bernard Booker, editors. Proceedings of the Fourth Interna-
tional Conference on Genetic Algorithms (ICGA’91), July 13–16, 1991, University of Cal-
ifornia (UCSD): San Diego, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-208-9. Google Books ID: OtnJOwAACAAJ and YvBQAAAAMAAJ.
OCLC: 24132977.
255. Richard K. Belew and Melanie Mitchell, editors. Adaptive Individuals in Evolving Popu-
lations: Models and Algorithms, volume 26 in Santa Fe Institute Studies in the Sciences of
Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and West-
view Press, January 15, 1996. isbn: 0-201-48364-5 and 0-201-48369-6. Google Books
ID: OKkytWTQxeEC, YurWHgAACAAJ, and qOAhIAAACAAJ.
256. Richard K. Belew and Michael D. Vose, editors. Proceedings of the 4th Workshop on
Foundations of Genetic Algorithms (FOGA’96), August 5, 1996, University of San Diego:
San Diego, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA.
isbn: 1-55860-460-X. Published March 1, 1997.
257. Bartlomiej Beliczyński, Andrzej Dzieliński, Marcin Iwanowski, and Bernardete Ribeiro, ed-
itors. Proceedings of the 8th International Conference on Adaptive and Natural Comput-
ing Algorithms, Part I (ICANNGA’07-I), April 11–17, 2007, Warsaw University: War-
saw, Poland, volume 4431/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71618-1. isbn: 3-540-71589-4. Partly available at http://
icannga07.ee.pw.edu.pl/ [accessed 2009-08-20]. Google Books ID: WYvMFlgm1KkC.
OCLC: 122935609, 154951366, and 318299908. Library of Congress Control Number
(LCCN): 2007923870. See also [258].
258. Bartlomiej Beliczyński, Andrzej Dzieliński, Marcin Iwanowski, and Bernardete Ribeiro, ed-
itors. Proceedings of the 8th International Conference on Adaptive and Natural Comput-
ing Algorithms, Part II (ICANNGA’07-II), April 11–17, 2007, Warsaw University: War-
saw, Poland, volume 4432/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71629-7. isbn: 3-540-71590-8 and 3-540-71629-7. Google Books
ID: 5yYBOAAACAAJ and V9ts51 ia2sC. OCLC: 122935609 and 154951366. Library of
Congress Control Number (LCCN): 2007923870. See also [257].
259. Kevin D. Bell, Michael K. Powers, and Jose M. Sasian, editors. Novel Optical Systems and
Large-Aperture Imaging, July 21, 1998, San Diego, CA, USA, volume 3430 in The Proceedings
of SPIE. Society of Photographic Instrumentation Engineers (SPIE): Bellingham, WA, USA.
Published in December 1998.
260. Richard Ernest Bellman. Dynamic Programming, Dover Books on Mathematics. Prince-
ton University Press: Princeton, NJ, USA, 1957–2003. isbn: 0486428095. Google Books
ID: L ChPwAACAAJ, fyVtp3EMxasC, and wdtoPwAACAAJ. OCLC: 50866848.
261. Richard Ernest Bellman. Adaptive Control Processes: A Guided Tour. Prince-
ton University Press: Princeton, NJ, USA, 1961–1990. isbn: 0691079013. Google
Books ID: 8wY3PAAACAAJ, POAmAAAAMAAJ, cXJEAAAAIAAJ, and pVu0PQAACAAJ.
OCLC: 67470618.
262. David A. Belsley and Christopher F. Baum, editors. Fifth International Conference on Com-
puting in Economics and Finance, June 24–26, 1999, Boston College: Boston, MA, USA.
263. David A. Belsley and Erricos John Kontoghiorghes, editors. Handbook on Com-
putational Econometrics. Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.1002/9780470748916. isbn: 0-470-74385-9. OCLC: 319499406. Library of
Congress Control Number (LCCN): 2009025907. GBV-Identification (PPN): 599321954.
LC Classification: HB143.5 .H357 2009.
264. Aharon Ben-Tal. Characterization of Pareto and Lexicographic Optimal Solutions. In
MCDM’80 [1960], pages 1–11, 1980.
265. Enrique Benavent, V. Campos, A. Corberán, and E. Mota. The Capacitated Arc Routing
Problem. Lower Bounds. Networks, 22(7):669–690, December 1992, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1002/net.3230220706.
266. Stefano Benedettini, Christian Blum, and Andrea Roli. A Randomized Iterated Greedy
Algorithm for the Founder Sequence Reconstruction Problem. In LION 4 [2581], pages
37–51, 2010. doi: 10.1007/978-3-642-13800-3_4.
267. Ernesto Benini and Andrea Toffolo. Optimal Design of Horizontal-Axis Wind Turbines Using
Blade-Element Theory and Evolutionary Computation. Journal of Solar Energy Engineering,
124(4):357–363, November 2002, American Society of Mechanical Engineers (ASME): New
York, NY, USA. doi: 10.1115/1.1510868.
268. J. H. Bennett, editor. The Collected Papers of R.A. Fisher, volume I–V. University
of Adelaide: SA, Australia, 1971–1974. Fully available at http://digital.library.
adelaide.edu.au/coll/special/fisher/stat_math.html [accessed 2008-12-08]. Col-
lected Papers of R.A. Fisher, Relating to Statistical and Mathematical Theory and Ap-
plications (excluding Genetics).
269. Forrest H. Bennett III. Emergence of a Multi-Agent Architecture and New Tactics For the
Ant Colony Foraging Problem Using Genetic Programming. In SAB’96 [1811], pages 430–
439, 1996.
270. Forrest H. Bennett III. Automatic Creation of an Efficient Multi-Agent Architecture Using
Genetic Programming with Architecture-Altering Operations. pages 30–38.
271. Forrest H. Bennett III, John R. Koza, David Andre, and Martin A. Keane. Evolu-
tion of a 60 Decibel Op Amp using Genetic Programming. In ICES’96 [1231], 1996.
doi: 10.1007/3-540-63173-9_65. CiteSeerx : 10.1.1.56.6051. See also [1607].
272. Peter J. Bentley, editor. Evolutionary Design by Computers. Morgan Kaufmann Pub-
lishers Inc.: San Francisco, CA, USA, May 1999. isbn: 1-55860-605-X. Google Books
ID: EgC6LBAH5r8C.
273. Peter J. Bentley and S.P. Kumar. The Ways to Grow Designs: A Comparison of Embryogenies
for an Evolutionary Design Problem. In GECCO’99 [211], pages 35–43, 1999.
274. Peter J. Bentley, Doheon Lee, and Sungwon Jung, editors. Proceedings of the 7th International
Conference on Artificial Immune Systems (ICARIS’08), August 10–13, 2008, Phuket, Thai-
land, volume 5132 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany.
275. Carlos Bento, Amı́lcar Cardoso, and Gaël Dias, editors. Progress in Artificial Intelligence
– Proceedings of the 13th Portuguese Conference on Artificial Intelligence (EPIA’05),
December 5–8, 2005, Covilhã, Portugal, volume 3808 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
276. Aviv Bergman and Marcus W. Feldman. Recombination Dynamics and the Fitness Land-
scape. Physica D: Nonlinear Phenomena, 56(1):57–67, April 1992, Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Am-
sterdam, The Netherlands. doi: 10.1016/0167-2789(92)90050-W.
277. Pavel Berkhin. Survey Of Clustering Data Mining Techniques. Technical Report, Ac-
crue Software: San José, CA, USA, 2002. Fully available at http://citeseer.ist.
psu.edu/berkhin02survey.html and http://www.ee.ucr.edu/~barth/EE242/
clustering_survey.pdf [accessed 2007-08-27].
278. 17th European Conference on Machine Learning (ECML’06), September 18–22, 2006, Berlin,
Germany.
279. Ester Bernadó, Xavier Llorà, and Josep Maria Garrell i Guiu. XCS and GALE: A Com-
parative Study of Two Learning Classifier Systems with Six Other Learning Algorithms on
Classification Tasks. In IWLCS’01 [1679], pages 115–132, 2001. doi: 10.1007/3-540-48104-4_8.
CiteSeerx : 10.1.1.1.4467.
280. Gérard Berry, Hubert Comon, and Alain Finkel, editors. Computer Aided Verifica-
tion – Proceedings of the 13th International Conference on Computer Aided Verification
(CAV’01), July 18–22, 2001, Palais de la Mutualité: Paris, France, volume 2102/2001
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-44585-4. isbn: 3-540-42345-1. Google Books ID: 1llkK-8I7YYC.
OCLC: 47282220, 47675546, 48729917, 61847966, 67015253, 76232298, 243488788,
and 248623804.
281. Michael J. A. Berry and Gordon S. Linoff. Mastering Data Mining: The Art and Science
of Customer Relationship Management. Wiley Interscience: Chichester, West Sussex, UK,
December 1999. isbn: 0471331236. Google Books ID: QeVJPwAACAAJ. Library of Congress
Control Number (LCCN): 00265296. GBV-Identification (PPN): 311887082. LC Classifi-
cation: HF5415.125 .B47 2000.
282. Hugues Bersini and Jorge Carneiro, editors. Proceedings of the 5th International Conference
on Artificial Immune Systems (ICARIS’06), September 4–6, 2006, Instituto Gulbenkian de
Ciência: Oeiras, Portugal, volume 4163 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-37749-2.
283. Elisa Bertino and Silvana Castano, editors. Atti del Settimo Convegno Nazionale Sistemi
Evoluti per Basi di Dati, June 23–25, 1999, Villa Olmo, Como, Italy.
284. Alberto Bertoni and Marco Dorigo. Implicit Parallelism in Genetic Algorithms. Artifi-
cial Intelligence, 61(2):307–314, June 1993, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(93)90071-I. See also [285].
285. Alberto Bertoni and Marco Dorigo. Implicit Parallelism in Genetic Algorithms. Tech-
nical Report TR-93-001-Revised, Università Statale di Milano, Dipartimento di Scienze
dell’Informazione: Milano, Italy and International Computer Science Institute (ICSI), Uni-
versity of California: Berkeley, CA, USA, April 1993. Fully available at ftp://ftp.
icsi.berkeley.edu/pub/techreports/1993/tr-93-001.ps.gz and http://ftp.
icsi.berkeley.edu/ftp/pub/techreports/1993/tr-93-001.pdf [accessed 2010-07-30].
CiteSeerx : 10.1.1.34.3275. See also [284].
286. Patricia Besson, Jean-Marc Vesin, Vlad Popovici, and Murat Kunt. Differential Evolu-
tion Applied to a Multimodal Information Theoretic Optimization Problem. In EvoWork-
shops’06 [2341], pages 505–509, 2006. doi: 10.1007/11732242_46.
287. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. In CSC’78 [774],
1978. Fully available at http://hdl.handle.net/2027.42/3572 [accessed 2010-07-23].
ID: UMR0341. See also [288].
288. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. PhD thesis, University
of Michigan: Ann Arbor, MI, USA, 1980. Order No.: AAI8106101. University Microfilms
No.: 8106101. National Science Foundation Grant No. MCS76-04297. See also [287, 289].
289. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. Dissertation Abstracts
International (DAI), 41(9):3503B, 1980, ProQuest: Ann Arbor, MI, USA. See also [288].
290. Pete Bettinger and Jianping Zhu. A New Heuristic Method for Solving Spatially Constrained
Forest Planning Problems based on Mitigation of Infeasibilities Radiating Outward from
a Forced Choice. Silva Fennica, 40(2):315–333, 2006, Finnish Society of Forest Science,
The Finnish Forest Research Institute: Finland, Eeva Korpilahti, editor. Fully available
at http://www.metla.fi/silvafennica/full/sf40/sf402315.pdf [accessed 2010-07-24].
CiteSeerx : 10.1.1.133.2656.
291. Gregory Beurier, Fabien Michel, and Jacques Ferber. A Morphogenesis Model for Multia-
gent Embryogeny. In ALIFE’06 [2314], 2006. Fully available at http://www.lirmm.fr/
~fmichel/publi/pdfs/beurier06alifeX.pdf [accessed 2010-08-06].
292. Hans-Georg Beyer. Some Aspects of the ’Evolution Strategy’ for Solving TSP-Like Optimiza-
tion Problems Appearing at the Design Studies of a 0.5T eV e+ e− -Linear Collider. In PPSN
II [1827], pages 361–370, 1992.
293. Hans-Georg Beyer. Towards a Theory of ’Evolution Strategies’: Some Asymptotical Results
from the (1, +λ)-Theory. Technical Report SYS–5/92, Universität Dortmund, Fachbereich
Informatik: Dortmund, North Rhine-Westphalia, Germany, December 1, 1992. Fully avail-
able at http://ls11-www.cs.uni-dortmund.de/~beyer/coll/Bey93a/Bey93a.ps
[accessed 2009-07-11]. See also [294].
294. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: Some Asymptotical Re-
sults from the (1, +λ)-Theory. Evolutionary Computation, 1(2):165–188, Summer 1993,
MIT Press: Cambridge, MA, USA, Marc Schoenauer, editor. Fully available at http://
ls11-www.cs.uni-dortmund.de/~beyer/coll/Bey93a/Bey93a.ps [accessed 2009-07-11].
CiteSeerx : 10.1.1.38.3238. See also [293].
295. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: The (µ, λ)-Theory.
Evolutionary Computation, 2(4):381–407, Winter 1994, MIT Press: Cambridge, MA,
USA. doi: 10.1162/evco.1994.2.4.381. Fully available at http://ls11-www.cs.
uni-dortmund.de/people/beyer/coll/Bey95a/Bey95a.ps [accessed 2009-07-10].
CiteSeerx : 10.1.1.47.3088.
296. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: On the Benefit of
Sex – The (µ/µ, λ)-Theory. Evolutionary Computation, 3(1):81–111, Spring 1995,
MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.1.81. Fully avail-
able at http://ls11-www.informatik.uni-dortmund.de/people/beyer/coll/
Bey95b/Bey95b.ps [accessed 2010-08-18]. CiteSeerx : 10.1.1.30.5489.
297. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: Self-Adaptation. Evo-
lutionary Computation, 3(3):311–347, Fall 1995, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1995.3.3.311. Fully available at http://ls11-www.cs.uni-dortmund.
de/~beyer/coll/Bey95e/Bey95e.ps [accessed 2010-08-18]. CiteSeerx : 10.1.1.26.3527.
298. Hans-Georg Beyer. An Alternative Explanation for the Manner in which Genetic Algorithms
Operate. Biosystems, 41(1):1–15, January 1997, Elsevier Science Ireland Ltd.: East Park,
Shannon, Ireland and North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/S0303-2647(96)01657-7. CiteSeerx : 10.1.1.39.307.
299. Hans-Georg Beyer. Design Optimization of a Linear Accelerator using Evolution Strategy:
Solving a TSP-like Optimization Problem. In Handbook of Evolutionary Computation [171],
Chapter G4.2, pages 1–8. Oxford University Press, Inc.: New York, NY, USA, Institute of
Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press,
Inc.: Boca Raton, FL, USA, 1997.
300. Hans-Georg Beyer. The Theory of Evolution Strategies, Natural Computing Series. Springer
New York: New York, NY, USA, May 27, 2001. isbn: 3-540-67297-4. Google Books
ID: 8tbInLufkTMC. OCLC: 46394059 and 247928112. Library of Congress Control
Number (LCCN): 2001020620. LC Classification: QA76.618 B49 2001.
301. Hans-Georg Beyer and William B. Langdon, editors. Foundations of Genetic Algorithms
XI (FOGA’11), January 5–9, 2011, Hirschen-Schwarzenberg: Schwarzenberg, Austria. ACM
Press: New York, NY, USA.
302. Hans-Georg Beyer and Hans-Paul Schwefel. Evolution Strategies – A Comprehensive In-
troduction. Natural Computing: An International Journal, 1(1):3–52, March 2002, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1015059928466. Fully available at http://www.cs.bham.ac.uk/
~pxt/NIL/es.pdf [accessed 2010-08-09].
303. Hans-Georg Beyer, Eva Brucherseifer, Wilfried Jakob, Hartmut Pohlheim, Bernhard Send-
hoff, and Thanh Binh To. Evolutionary Algorithms – Terms and Definitions. Universität
Dortmund: Dortmund, North Rhine-Westphalia, Germany, version e-1.2 edition, Febru-
ary 25, 2002. Fully available at http://ls11-www.cs.uni-dortmund.de/people/
beyer/EA-glossary/ [accessed 2008-04-10].
304. Hans-Georg Beyer, Una-May O’Reilly, Dirk V. Arnold, Wolfgang Banzhaf, Christian Blum,
Eric W. Bonabeau, Erick Cantú-Paz, Dipankar Dasgupta, Kalyanmoy Deb, James A. Fos-
ter, Edwin D. de Jong, Hod Lipson, Xavier Llorà, Spiros Mancoridis, Martin Pelikan,
Günther R. Raidl, Terence Soule, Jean-Paul Watson, and Eckart Zitzler, editors. Pro-
ceedings of Genetic and Evolutionary Computation Conference (GECCO’05), June 25–
27, 2005, Loews L’Enfant Plaza Hotel: Washington, DC, USA. ACM Press: New York,
NY, USA. isbn: 1-59593-010-8. Google Books ID: 1Y5QAAAAMAAJ, CY9QAAAAMAAJ,
KY1QAAAAMAAJ, TlWQAAAACAAJ, and VI5QAAAAMAAJ. OCLC: 62461063 and 63125628.
ACM Order No.: 910050. A joint meeting of the fourteenth international conference on genetic
algorithms (ICGA-2005) and the tenth annual genetic programming conference (GP-2005).
See also [2337, 2339].
305. James C. Bezdek, James M. Keller, Raghu Krishnapuram, Ludmila I. Kuncheva, and
Nikhil R. Pal. Will the Real Iris Data Please Stand Up? IEEE Transactions on Fuzzy
Systems (TFS), 7(3):368–369, June 1999, IEEE Computational Intelligence Society: Piscat-
away, NJ, USA. doi: 10.1109/91.771092. INSPEC Accession Number: 6293498. See also
[92].
306. V. Bhaskar, Santosh K. Gupta, and A.K. Ray. Applications of Multiobjective Optimization
in Chemical Engineering. Reviews in Chemical Engineering, 16(1):1–54, 2000.
307. Gouri K. Bhattacharyya and Richard A. Johnson. Statistical Concepts and Methods. John Wi-
ley & Sons Ltd.: New York, NY, USA, March 8, 1977. isbn: 0471035327 and 0471072044.
Google Books ID: Xk9AAAACAAJ. OCLC: 2597235.
308. Leonora Bianchi, Marco Dorigo, Luca Maria Gambardella, and Walter J. Gutjahr. A Sur-
vey on Metaheuristics for Stochastic Combinatorial Optimization. Natural Computing: An
International Journal, 8(2):239–287, June 2009, Kluwer Academic Publishers: Norwell, MA,
USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s11047-008-9098-4.
309. Arthur S. Bickel and Riva Wenig Bickel. Tree Structured Rules in Genetic Algorithms. In
ICGA’87 [1132], pages 77–81, 1987.
310. Joseph P. Bigus. Data Mining With Neural Networks: Solving Business Problems from Ap-
plication Development to Decision Support. McGraw-Hill: New York, NY, USA, May 20,
1996. isbn: 0-07-005779-6. OCLC: 34282366. Library of Congress
Control Number (LCCN): 96033779. GBV-Identification (PPN): 194728420. LC Classifi-
cation: QA76.87 .B55 1996.
311. Kurt Binder and A. Peter Young. Spin Glasses: Experimental Facts, Theoretical Concepts,
and Open Questions. Reviews of Modern Physics, 58(4):801–976, October 1986, American
Physical Society: College Park, MD, USA. doi: 10.1103/RevModPhys.58.801. Fully available
at http://link.aps.org/abstract/RMP/v58/p801 [accessed 2008-07-02].
312. Lothar Birk, Günther F. Clauss, and June Y. Lee. Practical Application of Global Optimiza-
tion to the Design of Offshore Structures. In Proceedings of OMAE04, 23rd International
Conference on Offshore Mechanics and Arctic Engineering [85], pages 567–579, volume III,
2004. Fully available at http://130.149.35.79/downloads/publikationen/2004/
OMAE2004-51225.pdf [accessed 2007-09-01]. Paper no. OMAE2004-51225.
313. Lawrence A. Birnbaum and Gregg C. Collins, editors. Proceedings of the 8th International
Workshop on Machine Learning (ML’91), June 1991, Evanston, IL, USA. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-200-3. Google Books
ID: OnNgQgAACAAJ. OCLC: 24106151, 438477848, and 1558602003. GBV-Identification
(PPN): 022371885.
314. Christopher M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press
(Oxford University Press): Oxford, UK, November 3, 1995. isbn: 0-19-853849-9
and 0-19-853864-2. Google Books ID: -aAwQO -rXwC. OCLC: 33101074,
174549000, 257120262, and 263662370. Library of Congress Control Number
(LCCN): 95040465. GBV-Identification (PPN): 082852855, 265786185, 315914459,
354835041, 475047974, 506020282, and 611503409. LC Classification: QA76.87 .B574
1995.
315. Susanne Biundo, Karen Myers, and Kanna Rajan, editors. Proceedings of the Fifteenth
International Conference on Automated Planning and Scheduling (ICAPS’05), June 5–10,
2005, Monterey, CA, USA. AAAI Press: Menlo Park, CA, USA.
316. Tim Blackwell. Particle Swarm Optimization in Dynamic Environments. In Evolution-
ary Computation in Dynamic and Uncertain Environments [3019], Chapter 2, pages 29–49.
Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-49774-5_2. Fully available
at http://igor.gold.ac.uk/~mas01tb/papers/PSOdynenv.pdf [accessed 2009-07-13].
317. M. Brian Blake, Kwok Ching Tsui, and Andreas Wombacher. The EEE-05 Challenge:
A New Web Service Discovery and Composition Competition. In EEE’05 [1371], pages
780–783, 2005. doi: 10.1109/EEE.2005.131. Fully available at http://ws-challenge.
georgetown.edu/ws-challenge/The%20EEE.htm [accessed 2007-09-02]. INSPEC Accession
Number: 8530438. Ei ID: 2006049663049.
318. M. Brian Blake, William K.W. Cheung, Michael C. Jäger, and Andreas Wombacher.
WSC-06: The Web Service Challenge. In CEC/EEE’06 [2967], pages 505–508, 2006.
doi: 10.1109/CEC-EEE.2006.98. INSPEC Accession Number: 9189338.
319. M. Brian Blake, William K.W. Cheung, Michael C. Jäger, and Andreas Wombacher. WSC-
07: Evolving the Web Service Challenge. In CEC/EEE’07 [1344], pages 505–508, 2007.
doi: 10.1109/CEC-EEE.2007.107. INSPEC Accession Number: 9868634.
320. M. Brian Blake, Thomas Weise, and Steffen Bleul. WSC-2010: Web Services Composition and
Evaluation. In SOCA’10 [1378], pages 1–4, 2010. doi: 10.1109/SOCA.2010.5707190. Fully
available at http://www.it-weise.de/documents/files/BWB2010W2WSCAE.pdf.
321. 20th European Conference on Machine Learning and the 13th European Conference on Princi-
ples and Practice of Knowledge Discovery in Databases (ECML PKDD’09), September 7–11,
2009, Bled, Slovenia.
322. Progress in Machine Learning – Proceedings of the Second European Working Session on
Learning (EWSL’87), May 13–17, 1987, Bled, Yugoslavia (now Slovenia).
323. Woodrow “Woody” Wilson Bledsoe. The Use of Biological Concepts in the Analytical Study
of Systems. Technical Report PRI 2, Panoramic Research, Inc.: Palo Alto, CA, USA, 1961.
Presented at ORSA-TIMS National Meeting, San Francisco, California, November 10, 1961.
324. Woodrow “Woody” Wilson Bledsoe. Lethally Dependent Genes Using Instant Selection.
Technical Report PRI 1, Panoramic Research, Inc.: Palo Alto, CA, USA, 1961.
325. Woodrow “Woody” Wilson Bledsoe. An Analysis of Genetic Populations. Technical Report,
Panoramic Research, Inc.: Palo Alto, CA, USA, 1962.
326. Woodrow “Woody” Wilson Bledsoe. The Evolutionary Method in Hill Climbing: Convergence
Rates. Technical Report, Panoramic Research, Inc.: Palo Alto, CA, USA, 1962.
327. Woodrow “Woody” Wilson Bledsoe and I. Browning. Pattern Recognition and Reading by
Machine. In Proceedings of the Eastern Joint Computer Conference (EJCC) – Papers and
Discussions Presented at the Joint IRE – AIEE – ACM Computer Conference [1395], pages
225–232, 1959. doi: 10.1109/AFIPS.1959.88.
328. Maria J. Blesa and Christian Blum. Ant Colony Optimization for the Maximum Edge-Disjoint
Paths Problem. In EvoWorkshops’04 [2254], pages 160–169, 2004.
329. Maria J. Blesa, Pablo Moscato, and Fatos Xhafa. A Memetic Algorithm for the Minimum
Weighted k-Cardinality Tree Subgraph Problem. In MIC’01 [2294], pages 85–90, 2001.
CiteSeerx : 10.1.1.11.2478 and 10.1.1.21.5376.
330. Steffen Bleul and Kurt Geihs. Automatic Quality-Aware Service Discovery and Matching.
In HPOVUA [370], 2006.
331. Steffen Bleul and Thomas Weise. An Ontology for Quality-Aware Service Discovery. In
WESC’05 [3095], volume RC23821, 2005. Fully available at http://www.it-weise.de/
documents/files/BW2005QASD.pdf [accessed 2011-01-11]. CiteSeerx : 10.1.1.111.390. See
also [333].
332. Steffen Bleul and Michael Zapf. Ontology-Based Self-Organization in Service-Oriented Ar-
chitectures. In Proceedings of the Workshop SAKS 2007 [2801], 2007.
333. Steffen Bleul, Thomas Weise, and Kurt Geihs. An Ontology for Quality-Aware Service
Discovery. International Journal of Computer Systems Science and Engineering (CSSE), 21
(4):227–234, July 2006, CRL Publishing Limited: Leicester, UK. Fully available at http://
www.it-weise.de/documents/files/BWG2006QASD.pdf. Ei ID: 20064410208380. IDS
(SCI): 095XP. Special issue on “Engineering Design and Composition of Service-Oriented
Applications”. See also [331].
334. Steffen Bleul, Thomas Weise, and Kurt Geihs. Large-Scale Service Composi-
tion in Semantic Service Discovery. In CEC/EEE’06 [2967], pages 427–429,
2006. doi: 10.1109/CEC-EEE.2006.59. Fully available at http://www.it-weise.
de/documents/files/BWG2006WSC.pdf. INSPEC Accession Number: 9189339. Ei
ID: 20070110350633. 1st place in 2006 WSC. See also [318, 335, 2909].
335. Steffen Bleul, Thomas Weise, and Kurt Geihs. Making a Fast Semantic Ser-
vice Composition System Faster. In CEC/EEE’07 [1344], pages 517–520, 2007.
doi: 10.1109/CEC-EEE.2007.62. Fully available at http://www.it-weise.de/
documents/files/BWG2007WSC.pdf. INSPEC Accession Number: 9868637. Ei
ID: 20083511485208. IDS (SCI): BGR20. 2nd place in 2007 WSC. See also [319, 334, 2909].
336. Tobias Blickle. Theory of Evolutionary Algorithms and Application to System Synthe-
sis, TIK-Schriftenreihe. PhD thesis, Eidgenössische Technische Hochschule (ETH) Zürich:
Zürich, Switzerland, Verlag der Fachvereine Hochschulverlag AG der ETH Zürich: Zürich,
Switzerland, November 1996, Lothar Thiele and Hans-Paul Schwefel, Committee mem-
bers. doi: 10.3929/ethz-a-001710359. Fully available at http://www.handshake.de/
user/blickle/publications/diss.pdf and http://www.tik.ee.ethz.ch/~tec/
publications/bli97a/diss.ps.gz [accessed 2007-09-07]. ETH Thesis Number: 11894.
337. Tobias Blickle and Lothar Thiele. Genetic Programming and Redundancy. In Genetic Algo-
rithms within the Framework of Evolutionary Computation, Workshop at KI-94 [1265], pages
33–38, 1994. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/people/thiele/
paper/BlTh94.ps [accessed 2007-09-07]. CiteSeerx : 10.1.1.48.8474.
338. Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes Used in Genetic Al-
gorithms. TIK-Report 11, Eidgenössische Technische Hochschule (ETH) Zürich, Department
of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK): Zürich,
Switzerland, December 1995. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/
publications/TIK-Report11.ps and http://www.handshake.de/user/blickle/
publications/tik-report11_v2.ps [accessed 2010-08-03]. CiteSeerx : 10.1.1.11.509. See
also [340].
339. Tobias Blickle and Lothar Thiele. A Mathematical Analysis of Tournament Se-
lection. In ICGA’95 [883], pages 9–16, 1995. Fully available at http://
www.handshake.de/user/blickle/publications/tournament.ps [accessed 2010-08-04].
CiteSeerx : 10.1.1.52.9907.
340. Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in
Evolutionary Algorithms. Evolutionary Computation, 4(4):361–394, Winter 1996,
MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.4.361. Fully available
at http://www.handshake.de/user/blickle/publications/ECfinal.ps [accessed 2010-08-03].
CiteSeerx : 10.1.1.15.9584. See also [338].
341. Christian Blum and Daniel Merkle, editors. Swarm Intelligence – Introduction and Ap-
plications, Natural Computing Series. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-3-540-74089-6. Google Books ID: 6Ky4bVPCXqMC, B1rrPQAACAAJ, and
ofRKPAAACAAJ. Library of Congress Control Number (LCCN): 2008933849.
342. Christian Blum and Andrea Roli. Metaheuristics in Combinatorial Optimization: Overview
and Conceptual Comparison. ACM Computing Surveys (CSUR), 35(3):268–308, Septem-
ber 2003, ACM Press: New York, NY, USA. doi: 10.1145/937503.937505. Fully available
at http://iridia.ulb.ac.be/~meta/newsite/downloads/ACSUR-blum-roli.
pdf [accessed 2008-10-06].
343. Egbert J. W. Boers, Jens Gottlieb, Pier Luca Lanzi, Robert Elliott Smith, Stefano Cagnoni,
Emma Hart, Günther R. Raidl, and Harald Tijink, editors. Applications of Evolution-
ary Computing, Proceedings of EvoWorkshops 2001: EvoCOP, EvoFlight, EvoIASP, Ev-
oLearn, and EvoSTIM (EvoWorkshops’01), April 18–20, 2001, Lake Como, Milan, Italy,
volume 2037/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-45365-2. isbn: 3-540-41920-9. Google Books
ID: EhROxow98NoC.
344. Stefan Boettcher. Extremal Optimization of Graph Partitioning at the Percolation
Threshold. Journal of Physics A: Mathematical and General, 32(28):5201–5211, July 16,
1999, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol,
UK. doi: 10.1088/0305-4470/32/28/302. Fully available at http://www.iop.org/
EJ/article/0305-4470/32/28/302/a92802.pdf and http://xxx.lanl.gov/abs/
cond-mat/9901353 [accessed 2008-08-23]. arXiv ID: cond-mat/9901353v2.
345. Stefan Boettcher and Allon G. Percus. Extremal Optimization: Methods Derived from Co-
Evolution. In GECCO’99 [211], pages 825–832, 1999. arXiv ID: math.OC/9904056.
346. Stefan Boettcher and Allon G. Percus. Extremal Optimization for Graph Partitioning.
Physical Review E, 64(2), July 2001, American Physical Society: College Park, MD,
USA. doi: 10.1103/PhysRevE.64.026114. Fully available at http://www.math.ucla.edu/
~percus/Publications/eopre.pdf [accessed 2008-08-23]. arXiv ID: cond-mat/0104214.
347. Stefan Boettcher and Allon G. Percus. Optimization with Extremal Dynamics. Physi-
cal Review Letters, 86(23):5211–5214, June 4, 2001, American Physical Society: College
Park, MD, USA. doi: 10.1103/PhysRevLett.86.5211. CiteSeerx : 10.1.1.94.3835. arXiv
ID: cond-mat/0010337.
348. Stefan Boettcher and Allon G. Percus. Optimization with Extremal Dynamics. Complexity,
8(2):57–62, March 21, 2003, Wiley Periodicals, Inc.: Wilmington, DE, USA. doi: 10.1002/c-
plx.10072. Special Issue: Complex Adaptive Systems: Part I.
349. The 5th International Conference on Bioinspired Optimization Methods and their Applica-
tions (BIOMA’12), May 24–25, 2012, Bohinj, Slovenia. Partly available at http://bioma.
ijs.si/conference/2012/ [accessed 2011-06-21].
350. Celia C. Bojarczuk, Heitor Silverio Lopes, and Alex Alves Freitas. Data Mining with
Constrained-Syntax Genetic Programming: Applications to Medical Data Sets. In
IDAMAP’01 [3103], 2001. Fully available at http://www.cs.bham.ac.uk/~wbl/
biblio/gp-html/bojarczuk_2001_idamap.html, http://www.graco.unb.br/
alvares/emule/Data%20Mining%20with%20Constrained-Syntax%20Genetic
%20Programming%20Applications%20in%20Medical%20Data%20Set.pdf, and
http://www.ailab.si/idamap/idamap2001/papers/bojarczuk.pdf [accessed 2009-09-09].
CiteSeerx : 10.1.1.21.6146.
351. Celia C. Bojarczuk, Heitor Silverio Lopes, and Alex Alves Freitas. An Innova-
tive Application of a Constrained-Syntax Genetic Programming System to the Prob-
lem of Predicting Survival of Patients. In EuroGP’03 [2360], pages 11–59, 2003.
doi: 10.1007/3-540-36599-0_2. Fully available at http://www.cs.kent.ac.uk/people/
staff/aaf/pub_papers.dir/EuroGP-2003-Celia.pdf [accessed 2007-09-09].
352. Alessandro Bollini and Marco Piastra. Distributed and Persistent Evolutionary Algorithms:
A Design Pattern. In EuroGP’99 [2196], pages 173–183, 1999.
353. Eric W. Bonabeau, Marco Dorigo, and Guy Théraulaz. Swarm Intelligence: From Nat-
ural to Artificial Systems. Oxford University Press, Inc.: New York, NY, USA, Au-
gust 1999. isbn: 0195131592. Google Books ID: PvTDhzqMr7cC, fcTcHvSsRMYC, and
mhLj70l8HfAC. OCLC: 40331208, 232159801, and 318382300.
354. Eric W. Bonabeau, Marco Dorigo, and Guy Théraulaz. Inspiration for Optimization from So-
cial Insect Behavior. Nature, 406:39–42, July 6, 2000. doi: 10.1038/35017500. Fully available
at http://biology.unm.edu/pibbs/classes/readings/socialinsectbehavior.
pdf and http://www.nature.com/nature/journal/v406/n6791/abs/406039a0.
html [accessed 2011-05-02].
355. Josh C. Bongard and Chandana Paul. Making Evolution an Offer It Can’t Refuse: Mor-
phology and the Extradimensional Bypass. In ECAL’01 [1520], pages 401–412, 2001.
doi: 10.1007/3-540-44811-X_43. Fully available at http://www.cs.uvm.edu/~jbongard/
papers/bongardPaulEcal2001.ps.gz [accessed 2008-11-02]. CiteSeerx : 10.1.1.27.8240.
356. Josh C. Bongard and Rolf Pfeifer. Repeated Structure and Dissociation of Genotypic and
Phenotypic Complexity in Artificial Ontogeny. In GECCO’01 [2570], pages 829–836, 2001.
CiteSeerx : 10.1.1.11.6911.
357. John Tyler Bonner. On Development: The Biology of Form, Commonwealth Fund Pub-
lications. Harvard University Press: Cambridge, MA, USA, new ed edition, Decem-
ber 12, 1974. isbn: 0674634128. Google Books ID: dNa2AAAACAAJ, dda2AAAACAAJ, and
lO5BGACTstMC.
358. Lashon Bernard Booker. Intelligent Behavior as an Adaptation to the Task Environment.
PhD thesis, University of Michigan: Ann Arbor, MI, USA. Fully available at http://
hdl.handle.net/2027.42/3746 and http://hdl.handle.net/2027.42/3747 [ac-
cessed 2010-08-03]. Order No.: AAI8214966. University Microfilms No.: 8214966. Technical
Report No. 243.
359. Egon Börger. Computability, Complexity, Logic, volume 128 in Studies in Logic and the
Foundations of Mathematics. Elsevier Science Publishing Company, Inc.: New York, NY,
USA. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, 1989.
isbn: 0-444-87406-2. Google Books ID: T88gs0nemygC.
360. István Borgulya. A Cluster-based Evolutionary Algorithm for the Single Machine Total
Weighted Tardiness-Scheduling Problem. Journal of Computing and Information Technology
(CIT), 10(3):211–217, September 2002, University Computing Centre: Zagreb, Croatia. Fully
available at http://cit.zesoi.fer.hr/downloadPaper.php?paper=384 [accessed 2008-
04-07].
361. Erich G. Bornberg-Bauer and Hue Sun Chan. Modeling Evolutionary Landscapes: Muta-
tional Stability, Topology, and Superfunnels in Sequence Space. Proceedings of the National
Academy of Science of the United States of America (PNAS), 96(19):10689–10694, Septem-
ber 14, 1999, National Academy of Sciences: Washington, DC, USA, Peter G. Wolynes, edi-
tor. Fully available at http://www.pnas.org/cgi/content/abstract/96/19/10689
[accessed 2008-07-02].
362. Bruno Bosacchi, David B. Fogel, and James C. Bezdek, editors. Proceedings of SPIE’s
45th International Symposium on Optical Science and Technology: Applications and Science
of Neural Networks, Fuzzy Systems, and Evolutionary Computation III, July 30–August 4,
2000, San Diego, CA, USA, volume 4120. Society of Photographic Instrumentation Engineers
(SPIE): Bellingham, WA, USA. isbn: 0-8194-3765-4. Google Books ID: tLhQAAAAMAAJ.
OCLC: 45211584 and 248133501.
363. Peter A. N. Bosman and Dirk Thierens. Mixed IDEAs. Technical Report UU-CS-2000-
45, Utrecht University, Department of Information and Computing Sciences: Utrecht,
The Netherlands, December 2000. Fully available at http://www.cs.uu.nl/research/
techreps/UU-CS-2000-45.html [accessed 2009-08-28].
364. Peter A. N. Bosman and Dirk Thierens. Advancing Continuous IDEAs with Mixture
Distributions and Factorization Selection Metrics. In OBUPM’01 [2149], pages 208–212,
2001. Fully available at http://citeseer.ist.psu.edu/old/bosman01advancing.
html [accessed 2009-08-28]. CiteSeerx : 10.1.1.19.7412.
365. Peter A. N. Bosman and Dirk Thierens. Multi-Objective Optimization with Diversity Pre-
serving Mixture-based Iterated Density Estimation Evolutionary Algorithms. International
Journal of Approximate Reasoning, 31(2):259–289, November 2002, Elsevier Science Publishers
B.V.: Essex, UK. doi: 10.1016/S0888-613X(02)00090-7.
366. Peter A. N. Bosman and Dirk Thierens. A Thorough Documentation of Obtained Results on
Real-Valued Continuous And Combinatorial Multi-Objective Optimization Problems Using
Diversity Preserving Mixture-Based Iterated Density Estimation Evolutionary Algorithms.
Technical Report UU-CS-2002-052, Utrecht University, Institute of Information and Comput-
ing Sciences: Utrecht, The Netherlands, December 2002. Fully available at http://www.
cs.uu.nl/research/techreps/repo/CS-2002/2002-052.pdf [accessed 2007-08-24].
367. Proceedings of the 5th International ICST Conference on Bio-Inspired Models of Network,
Information, and Computing Systems (BIONETICS’10), December 1–3, 2010, Boston, MA,
USA.
368. Jack Bosworth, Norman Foo, and Bernard P. Zeigler. Comparison of Genetic Algorithms
with Conjugate Gradient Methods. Technical Report 00312-1-T, University of Michigan:
Ann Arbor, MI, USA, February 1972. Fully available at http://hdl.handle.net/2027.
42/3761 [accessed 2010-05-29]. ID: UMR0554. Other Identifiers: UMR0554.
369. Henry Bottomley. Relationship between the Mean, Median, Mode, and Standard Deviation in
a Unimodal Distribution. self-published, September 2006. Fully available at http://www.
btinternet.com/~se16/hgb/median.htm [accessed 2007-09-15].
370. Karima Boudaoud, Nicolas Nobelis, and Thomas Nebe, editors. Proceedings of the 13th
Annual Workshop of HP OpenView University Association (HPOVUA), May 16, 2006, Côte
d’Azur, France. Infonomics-Consulting.
371. Abdelmadjid Boukra, Mohamed Ahmed-nacer, and Sadek Bouroubi. Selection of Views to
Materialize in Data Warehouse: A Hybrid Solution. International Journal of Computational
Intelligence Research (IJCIR), 3(4):327–334, 2007, Research India Publications: Delhi, India.
doi: 10.5019/j.ijcir.2004.113. Fully available at http://www.ijcir.com/download.php?
idArticle=113 [accessed 2011-04-10].
372. Anu G. Bourgeois, editor. Proceedings of the 8th International Conference on Algorithms and
Architectures for Parallel Processing (ICA3PP), June 9–11, 2008, Ayia Napa, Cyprus, volume
5022/2008 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-69501-1. isbn: 3-540-69500-1. OCLC: 496756775.
GBV-Identification (PPN): 568737482.
373. Craig Boutilier, editor. Proceedings of the International Joint Conference on Artificial Intel-
ligence (IJCAI’09), July 11–17, 2009, Pasadena, CA, USA. AAAI Press: Menlo Park, CA,
USA. Fully available at http://ijcai.org/papers09/ [accessed 2010-06-23].
374. Denis Bouyssou. Building Criteria: A Prerequisite For MCDA. In MCDA’90 [198],
pages 58–80, 1998. Fully available at http://www.lamsade.dauphine.fr/~bouyssou/
CRITERIA.PDF [accessed 2009-07-17]. CiteSeerx : 10.1.1.2.7628.
375. Chris P. Bowers. Simulating Evolution with a Computational Model of Embryogeny: Ob-
taining Robustness from Evolved Individuals. In ECAL’05 [488], pages 149–158, 2005. Fully
available at http://www.cs.bham.ac.uk/~cpb/publications/ecal05_bowers.pdf
[accessed 2010-08-06].
376. George Edward Pelham Box. Evolutionary Operation: A Method for Increasing Industrial
Productivity. Journal of the Royal Statistical Society: Series C – Applied Statistics, 6(2):
81–101, June 1957, Blackwell Publishing for the Royal Statistical Society: Chichester, West
Sussex, UK.
377. George Edward Pelham Box and Norman Richard Draper. Evolutionary Operation. A Sta-
tistical Method for Process Improvement, Wiley Publication in Applied Statistics. John
Wiley & Sons Ltd.: New York, NY, USA, 1969. isbn: 0-471-09305-X. OCLC: 2592,
252982175, and 639061685. Library of Congress Control Number (LCCN): 68056159.
GBV-Identification (PPN): 020827873. LC Classification: TP155.7 .B6.
378. George Edward Pelham Box, J. Stuart Hunter, and William G. Hunter. Statistics for Ex-
perimenters: Design, Innovation, and Discovery. John Wiley & Sons Ltd.: New York, NY,
USA, 1978–2005. isbn: 0-471-09315-7 and 0-471-71813-0.
379. Anthony Brabazon and Michael O’Neill. Evolving Financial Models using Grammatical Evo-
lution. In Proceedings of The Annual Conference of the South Eastern Accounting Group
(SEAG) 2003 [1771], 2003.
380. Anthony Brabazon, Michael O’Neill, Robin Matthews, and Conor Ryan. Grammati-
cal Evolution And Corporate Failure Prediction. In GECCO’02 [1675], pages 1011–
1018, 2002. Fully available at http://business.kingston.ac.uk/research/intbus/
paper4.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gecco2002/RWA145.
pdf [accessed 2007-09-09].
381. Ronald J. Brachman, editor. Proceedings of the Fourth National Conference on Artifi-
cial Intelligence (AAAI’84), August 6–10, 1984, University of Texas: Austin, TX, USA.
isbn: 0-262-51053-7. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai84.php [accessed 2007-09-06].
382. R. M. Brady. Optimization Strategies Gleaned from Biological Evolution. Nature, 317(6040):
804–806, October 31, 1985. doi: 10.1038/317804a0. Fully available at http://www.nature.
com/nature/journal/v317/n6040/pdf/317804a0.pdf [accessed 2008-09-10]. In: Letters to
Nature.
383. Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Fed-
erico Michele Facca, and Christina Tziviskou. Improvements and Future Perspectives on
Web Engineering Methods for Automating Web Services Mediation, Choreography and
Discovery. In Third Workshop of the Semantic Web Service Challenge 2006 – Chal-
lenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006.
Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/
SWS-phase-III_polimi_cefriel_v1.0.pdf [accessed 2010-12-16]. See also [384–387, 1650,
1831].
384. Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca,
Piero Fraternali, and Christina Tziviskou. Web Modeling-based Approach to Automat-
ing Web Services Mediation, Choreography and Discovery. In First Workshop of the Se-
mantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation,
Choreography and Discovery [2688], 2006. Fully available at http://sws-challenge.
org/workshops/2006-Stanford/papers/01.pdf [accessed 2010-12-12]. See also [383, 385–
387, 1650, 1831].
385. Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele
Facca, and Christina Tziviskou. Coping with Requirements Changes: SWS-Challenge
Phase II. In Second Workshop of the Semantic Web Service Challenge 2006 – Chal-
lenge on Automating Web Services Mediation, Choreography and Discovery [2689], 2006.
Fully available at http://sws-challenge.org/workshops/2006-Budva/papers/
SWS-phase-Finale_polimi_cefriel.pdf [accessed 2010-12-12]. See also [383, 384, 386, 387,
1650, 1831].
386. Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle,
Federico Michele Facca, Andrea Turati, and Christina Tziviskou. WebML and Glue:
An Integrated Discovery Approach for the SWS Challenge. In KWEB SWS Chal-
lenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/
workshops/2007-Innsbruck/papers/3-SWS-phase-IV_polimi_cefriel_v1.
2.pdf [accessed 2010-12-16]. See also [383–385, 387, 1650, 1831].
387. Marco Brambilla, Stefano Ceri, Federico Michele Facca, Christina Tziviskou, Irene Celino,
Dario Cerizza, Emanuele Della Valle, and Andrea Turati. WebML and Glue: An Integrated
Discovery Approach for the SWS Challenge. In WI/IAT Workshops’07 [1731], pages 148–
151, 2007. doi: 10.1109/WI-IATW.2007.58. INSPEC Accession Number: 9894902. See also
[384–386, 1831].
388. Markus F. Brameier. On Linear Genetic Programming. PhD thesis, Universität Dort-
mund, Fachbereich Informatik: Dortmund, North Rhine-Westphalia, Germany, Febru-
ary 13, 2004, Wolfgang Banzhaf, Martin Riedmiller, and Peter Nordin, Committee members.
Fully available at http://hdl.handle.net/2003/20098 and https://eldorado.
tu-dortmund.de/handle/2003/20098 [accessed 2010-08-22].
389. Markus F. Brameier and Wolfgang Banzhaf. A Comparison of Genetic Programming and
Neural Networks in Medical Data Analysis. Reihe computational intelligence: Design and
management of complex technical processes and systems by means of computational intelli-
gence methods, Universität Dortmund, Collaborative Research Center (Sonderforschungs-
bereich) 531: Dortmund, North Rhine-Westphalia, Germany, November 8, 1998. Fully
available at https://eldorado.tu-dortmund.de/handle/2003/
5344 [accessed 2009-07-22]. CiteSeerx : 10.1.1.36.4678. issn: 1433-3325. Reihe Computa-
tional Intelligence, Sonderforschungsbereich 531. See also [391].
390. Markus F. Brameier and Wolfgang Banzhaf. Evolving Teams of Predictors with Linear Ge-
netic Programming. Genetic Programming and Evolvable Machines, 2(4):381–407, Decem-
ber 2001, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Pub-
lishers: Norwell, MA, USA. doi: 10.1023/A:1012978805372. CiteSeerx : 10.1.1.16.7410.
391. Markus F. Brameier and Wolfgang Banzhaf. A Comparison of Linear Genetic Program-
ming and Neural Networks in Medical Data Mining. IEEE Transactions on Evolution-
ary Computation (IEEE-EC), 5(1):17–26, 2001, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.910462. Fully available at http://web.cs.mun.ca/~banzhaf/
papers/ieee_taec.pdf [accessed 2009-07-22]. CiteSeerx : 10.1.1.41.208. INSPEC Accession
Number: 6876106. See also [389].
392. Markus F. Brameier and Wolfgang Banzhaf. Explicit Control of Diversity and Effective Vari-
ation Distance in Linear Genetic Programming. In EuroGP’02 [977], pages 37–49, 2002.
doi: 10.1007/3-540-45984-7_4. Fully available at http://eldorado.uni-dortmund.de:
8080/bitstream/2003/5419/1/123.pdf and http://www.cs.mun.ca/~banzhaf/
papers/eurogp02_dist.pdf [accessed 2010-08-22]. CiteSeerx : 10.1.1.11.5531.
393. Markus F. Brameier and Wolfgang Banzhaf. Neutral Variations Cause Bloat in Linear GP.
In EuroGP’03 [2360], pages 997–1039, 2003. CiteSeerx : 10.1.1.120.8070.
394. Markus F. Brameier and Wolfgang Banzhaf. Linear Genetic Programming, volume 1 in
Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Aca-
demic Publishers: Norwell, MA, USA, December 11, 2006. doi: 10.1007/978-0-387-31030-5.
isbn: 0-387-31029-0 and 0-387-31030-4. Library of Congress Control Number
(LCCN): 2006920909. Series Editors: David E. Goldberg, John R. Koza.
395. Jürgen Branke. Creating Robust Solutions by Means of Evolutionary Algorithms. In PPSN
V [866], pages 119–128, 1998. doi: 10.1007/BFb0056855.
396. Jürgen Branke. The Moving Peaks Benchmark. Technical Report, University of Karlsruhe,
Institute for Applied Computer Science and Formal Description Methods (AIFB): Karlsruhe,
Germany, December 16, 1999. Fully available at http://www.aifb.uni-karlsruhe.de/
~jbr/MovPeaks/ [accessed 2007-08-19]. See also [397].
397. Jürgen Branke. Memory Enhanced Evolutionary Algorithms for Changing Optimization
Problems. In CEC’99 [110], pages 1875–1882, volume 3, 1999. doi: 10.1109/CEC.1999.785502.
Fully available at http://www.aifb.uni-karlsruhe.de/~jbr/Papers/memory_
final2.ps.gz [accessed 2008-12-07]. CiteSeerx : 10.1.1.2.8897. INSPEC Accession Num-
ber: 6346978.
398. Jürgen Branke. Evolutionary Optimization in Dynamic Environments, volume 3 in Genetic
and Evolutionary Computation. PhD thesis, University of Karlsruhe, Department of Eco-
nomics and Business Engineering: Karlsruhe, Germany, Springer US: Boston, MA, USA and
Kluwer Academic Publishers: Norwell, MA, USA, December 2000–2001, Hartmut Schmeck,
Georg Bol, and Lothar Thiele, Advisors. isbn: 0792376315 and 9780792376316. Google
Books ID: weXuTB5JjFsC.
399. Jürgen Branke, Kalyanmoy Deb, Kaisa Miettinen, and Ralph E. Steuer, editors. Practi-
cal Approaches to Multi-Objective Optimization, November 7–12, 2004, Schloss Dagstuhl:
Wadern, Germany, 04461 in Dagstuhl Seminar Proceedings. Internationales Begegnungs-
und Forschungszentrum für Informatik (IBFI): Schloss Dagstuhl, Wadern, Germany.
Fully available at http://drops.dagstuhl.de/portals/index.php?semnr=04461
and http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=04461 [ac-
cessed 2007-09-19]. Published in 2005.
400. Jürgen Branke, Erdem Salihoğlu, and A. Şima Uyar. Towards an Analysis of Dynamic
Environments. In GECCO’05 [304], pages 1433–1440, 2005. doi: 10.1145/1068009.1068237.
401. Ivan Bratko and Sašo Džeroski, editors. Proceedings of the 16th International Conference on
Machine Learning (ICML’99), June 27–30, 1999, Bled, Slovenia. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. isbn: 1-55860-612-2.
402. Olli Bräysy. Genetic Algorithms for the Vehicle Routing Problem with Time Windows.
Arpakannus – Newsletter of the Finnish Artificial Intelligence Society (FAIS), 1:33–38,
May 2001, Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. Special issue
on Bioinformatics and Genetic Algorithms.
403. Olli Bräysy and Michel Gendreau. Tabu Search Heuristics for the Vehicle Routing Problem
with Time Windows. TOP: An Official Journal of the Spanish Society of Statistics and Op-
erations Research, 10(2):211–237, December 2002, Springer-Verlag GmbH: Berlin, Germany
and Sociedad de Estadística e Investigación Operativa. doi: 10.1007/BF02579017.
404. Mihaela Breaban, Lenuta Alboaie, and Henri Luchian. Guiding Users within Trust
Networks Using Swarm Algorithms. In CEC’09 [1350], pages 1770–1777, 2009.
doi: 10.1109/CEC.2009.4983155. INSPEC Accession Number: 10688750.
405. Leo Breiman. Bagging Predictors. Machine Learning, 24(2):123–140, August 1996, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/BF00058655. Fully available at http://www.salford-systems.com/doc/
BAGGING_PREDICTORS.pdf [accessed 2009-03-18]. CiteSeerx : 10.1.1.32.9399.
406. Leo Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1010933404324. Fully available at http://www.springerlink.
com/content/u0p06167n6173512/fulltext.pdf [accessed 2008-12-27].
407. Hans J. Bremermann. Optimization Through Evolution and Recombination. In Self-
Organizing Systems (Proceedings of the conference sponsored by the Information Systems
Branch of the Office of Naval Research and the Armour Research Foundation of the Illinois
Institute of Technology.) [3037], pages 93–103, 1962. Fully available at http://holtz.org/
Library/Natural%20Science/Physics/ [accessed 2007-10-31].
408. Janez Brest, Viljem Žumer, and Mirjam Sepesy Maučec. Control Parameters
in Self-Adaptive Differential Evolution. In BIOMA’06 [919], pages 35–44, 2006.
CiteSeerx : 10.1.1.106.8106.
409. Anne F. Brindle. Genetic Algorithms for Function Optimization. PhD thesis, University of
Alberta: Edmonton, Alberta, Canada, 1980. isbn: 0-315-01039-8. OCLC: 15896420,
59840399, and 632337062. Technical Report TR81-2.
410. David Brittain, Jon Sims Williams, and Chris McMahon. A Genetic Algorithm Approach
To Planning The Telecommunications Access Network. In ICGA’97 [166], pages 623–628,
1997. Fully available at http://citeseer.ist.psu.edu/brittain97genetic.html
[accessed 2008-08-01].
411. Proceedings of the 14th International Conference on Soft Computing (MENDEL’08), June 18–
20, 2008, Brno, Czech Republic.
412. Proceedings of the 15th International Conference on Soft Computing (MENDEL’09), June 17–
26, 2009, Brno, Czech Republic.
413. Proceedings of the 12th International Conference on Soft Computing (MENDEL’06), May 31–
June 2, 2006, Brno University of Technology: Brno, Czech Republic. Brno University of Tech-
nology, Faculty of Mechanical Engineering: Brno, Czech Republic. isbn: 80-214-3195-4.
414. Proceedings of the 1st International Mendel Conference on Genetic Algorithms on the occasion
of 130th anniversary of Mendel’s laws (Mendel’95), September 1995, Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-0672-0.
415. Proceedings of the 2nd International Mendel Conference on Genetic Algorithms
(MENDEL’96), June 26–28, 1996, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-0769-7.
416. Proceedings of the 3rd International Mendel Conference on Genetic Algorithms
(MENDEL’97), June 26–28, 1997, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-0769-7.
417. Proceedings of the 4th International Mendel Conference on Genetic Algorithms, Optimisa-
tion Problems, Fuzzy Logic, Neural Networks, Rough Sets (MENDEL’98), June 24–26, 1998,
Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav
Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-1131-7.
418. Proceedings of the 5th International Conference on Genetic Algorithms (MENDEL’99),
June 1999, Brno University of Technology: Brno, Czech Republic. Brno University of
Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-1131-7.
419. Proceedings of the 8th International Conference on Soft Computing (MENDEL’02), June 5–7,
2002, Brno University of Technology: Brno, Czech Republic. Brno University of Technology,
Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2135-5.
420. Proceedings of the 10th International Conference on Soft Computing (MENDEL’04), June 16–
18, 2004, Brno University of Technology: Brno, Czech Republic. Brno University of Technol-
ogy, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2676-4.
421. Proceedings of the 11th International Conference on Soft Computing (MENDEL’05), June 15–
17, 2005, Brno University of Technology: Brno, Czech Republic. Brno University of Technol-
ogy, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2961-5.
422. Dimo Brockhoff and Eckart Zitzler. Are All Objectives Necessary? On Dimensionality Re-
duction in Evolutionary Multiobjective Optimization. In PPSN IX [2355], pages 533–542,
2006. doi: 10.1007/11844297_54. Fully available at http://www.tik.ee.ethz.ch/sop/
publicationListFiles/bz2006d.pdf [accessed 2009-07-18].
423. Dimo Brockhoff and Eckart Zitzler. Dimensionality Reduction in Multiobjective Optimiza-
tion: The Minimum Objective Subset Problem. In Selected Papers of the Annual Interna-
tional Conference of the German Operations Research Society (GOR), Jointly Organized with
the Austrian Society of Operations Research (ÖGOR) and the Swiss Society of Operations
Research (SVOR) [2840], pages 423–429, 2006. doi: 10.1007/978-3-540-69995-8_68. Fully
available at http://www.tik.ee.ethz.ch/sop/publicationListFiles/bz2007d.
pdf [accessed 2009-07-18].
424. Dimo Brockhoff and Eckart Zitzler. Improving Hypervolume-based Multiobjective Evolution-
ary Algorithms by using Objective Reduction Methods. In CEC’07 [1343], pages 2086–2093,
2007. doi: 10.1109/CEC.2007.4424730. Fully available at http://www.tik.ee.ethz.ch/
sop/publicationListFiles/bz2007c.pdf [accessed 2009-07-18]. INSPEC Accession Num-
ber: 9889552.
425. Dimo Brockhoff, Tobias Friedrich, Nils Hebbinghaus, Christian Klein, Frank Neumann, and
Eckart Zitzler. Do Additional Objectives Make a Problem Harder? In GECCO’07-I [2699],
pages 765–772, 2007. doi: 10.1145/1276958.1277114.
426. Carla E. Brodley, editor. Proceedings of the 21st International Conference on Machine Learn-
ing (ICML’04), July 4–8, 2004, Banff, AB, Canada, volume 69 in ACM International Con-
ference Proceeding Series (AICPS). ACM Press: New York, NY, USA.
427. Carla E. Brodley and Andrea Pohoreckyj Danyluk, editors. Proceedings of the 18th Interna-
tional Conference on Machine Learning (ICML’01), June 28–July 1, 2001, Williams College:
Williamstown, MA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA.
isbn: 1-55860-778-1.
428. Alexander M. Bronstein and Michael M. Bronstein. Numerical Optimization. Project TOSCA
– Tools for Non-Rigid Shape Comparison and Analysis, Computer Science Department, Tech-
nion – Israel Institute of Technology: Haifa, Israel, 2008. Fully available at http://tosca.
cs.technion.ac.il/book/slides/Milano08_optimization.ppt [accessed 2010-08-28].
429. Donald E. Brown, Christopher L. Huntley, and Andrew R. Spillane. A Parallel Genetic
Heuristic for the Quadratic Assignment Problem. In ICGA’89 [2414], pages 406–415, 1989.
430. Jason Brownlee. Learning Classifier Systems. Technical Report 070514A, Swinburne Uni-
versity of Technology, Faculty of Information and Communication Technologies, Centre for
Information Technology Research, Complex Intelligent Systems Laboratory: Melbourne,
VIC, Australia, May 2007. Fully available at http://www.ict.swin.edu.au/personal/
jbrownlee/2007/TR24-2007.pdf [accessed 2008-04-03].
431. Jason Brownlee. OAT HowTo: High-Level Domain, Problem, and Algorithm Implementation.
Technical Report 20071218A, Swinburne University of Technology, Faculty of Information and
Communication Technologies, Centre for Information Technology Research, Complex Intel-
ligent Systems Laboratory: Melbourne, VIC, Australia, December 2007. Fully available
at http://optalgtoolkit.sourceforge.net/docs/TR46-2007.pdf [accessed 2010-03-
11].
432. Jason Brownlee. OAT: The Optimization Algorithm Toolkit. Technical Report 20071220A,
Swinburne University of Technology, Faculty of Information and Communication Technolo-
gies, Centre for Information Technology Research, Complex Intelligent Systems Laboratory:
Melbourne, VIC, Australia, December 2007. Fully available at http://www.ict.swin.
edu.au/personal/jbrownlee/2007/TR47-2007.pdf [accessed 2010-03-11].
433. Axel T. Brünger, Paul D. Adams, and Luke M. Rice. New Applications of Simulated
Annealing in X-Ray Crystallography and Solution NMR. Structure, 5:325–336, 1997. Fully available
at http://atb.slac.stanford.edu/public/papers.php?sendfile=44 [accessed 2007-
08-25].
434. Seventh International Conference on Swarm Intelligence (ANTS’10), September 8–10, 2010,
Brussels, Belgium.
435. Barrett Richard Bryant, editor. Proceedings of the 12th/1997 ACM/SIGAPP Symposium
on Applied Computing (SAC’97), February 28–March 2, 1997, Hyatt St. Claire: San José,
CA, USA. ACM Press: New York, NY, USA. isbn: 0-89791-850-9. Google Books
ID: JbAmPQAACAAJ. OCLC: 36711176 and 246580075.
436. Kylie Bryant. Genetic Algorithms and the Traveling Salesman Problem. Master’s the-
sis, Harvey Mudd College (HMC), Department of Mathematics: Claremont, CA, USA,
December 2000, Arthur Benjamin, Advisor, Lisette de Pillis, Committee member. Fully
available at http://www.math.hmc.edu/seniorthesis/archives/2001/kbryant/
kbryant-2001-thesis.pdf [accessed 2009-09-30].
437. Bradley J. Buckham and Casey Lambert. Simulated Annealing Applications. University
of Victoria, Mechanical Engineering Department: Victoria, BC, Canada, November 1999,
Zuomin Dong, Advisor. Fully available at http://www.me.uvic.ca/~zdong/courses/
mech620/SA_App.PDF [accessed 2007-08-25]. Seminar presentation: MECH620 Quantitative
Analysis, Reasoning and Optimization Methods in CAD/CAM and Concurrent Engineering.
438. Lam Thu Bui and Sameer Alam, editors. Multi-Objective Optimization in Computational
Intelligence: Theory and Practice, Premier Reference Source. Idea Group Publishing (Idea
Group Inc., IGI Global): New York, NY, USA, March 30, 2008. isbn: 1599044986.
439. Vinh Bui, Lam Thu Bui, Hussein A. Abbass, Axel Bender, and Pradeep Ray. On the Role of
Information Networks in Logistics: An Evolutionary Approach with Military Scenarios. In
CEC’09 [1350], pages 598–605, 2009. doi: 10.1109/CEC.2009.4983000. INSPEC Accession
Number: 10688573.
440. Larry Bull. On using ZCS in a Simulated Continuous Double-Auction Market. In
GECCO’99 [211], pages 83–90, 1999. Fully available at http://www.cs.bham.ac.uk/
~wbl/biblio/gecco1999/GA-806.pdf [accessed 2007-09-12].
441. Larry Bull, editor. Applications of Learning Classifier Systems, volume 150 in Studies
in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, April 2004.
isbn: 3540211098. Google Books ID: aBIjqGag5-kC.
442. Larry Bull and Jacob Hurst. ZCS Redux. Evolutionary Computation, 10(2):185–205, Sum-
mer 2002, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365602320169848. Fully
available at http://www.cems.uwe.ac.uk/lcsg/papers/zcsredux.ps [accessed 2007-09-
12].
443. Larry Bull and Tim Kovacs, editors. Foundations of Learning Classifier Systems, Studies
in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, September 1,
2005. isbn: 3540250735.
444. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. Applying the Ant System to
the Vehicle Routing Problem. In MIC’97 [2828], 1997. CiteSeerx : 10.1.1.48.7946.
445. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. An improved Ant System Al-
gorithm for the Vehicle Routing Problem. Annals of Operations Research, 89(0):319–328,
January 1999, Springer Netherlands: Dordrecht, Netherlands and J. C. Baltzer AG, Science
Publishers: Amsterdam, The Netherlands. doi: 10.1023/A:1018940026670. Fully
available at http://www.macs.hw.ac.uk/~dwcorne/Teaching/bullnheimer-vr.pdf
[accessed 2008-10-27]. CiteSeerx : 10.1.1.49.1415.
446. Seth Bullock, Jason Noble, Richard A. Watson, and Mark A. Bedau, editors. Proceedings
of the Eleventh International Conference on the Simulation and Synthesis of Living Systems
(Artificial Life XI), August 5–8, 2008, Winchester, Hampshire, UK. MIT Press: Cambridge,
MA, USA.
447. Bundesministerium für Wirtschaft und Technologie (BMWi): Berlin, Germany. Innovation-
spolitik, Informationsgesellschaft, Telekommunikation – Mobilität und Verkehrstechnologien –
Das 3. Verkehrsforschungsprogramm der Bundesregierung. Bundesministerium für Wirtschaft
und Technologie (BMWi), Öffentlichkeitsarbeit: Berlin, Germany, May 2008.
448. Alan Bundy, editor. Proceedings of the 8th International Joint Conference on Artificial Intel-
ligence (IJCAI’83-I), August 1983, Karlsruhe, Germany, volume 1. William Kaufmann: Los
Altos, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-83-VOL-1/
CONTENT/content.htm [accessed 2008-04-01]. See also [449].
449. Alan Bundy, editor. Proceedings of the 8th International Joint Conference on Artificial Intel-
ligence (IJCAI’83-II), August 1983, Karlsruhe, Germany, volume 2. William Kaufmann: Los
Altos, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-83-VOL-2/
CONTENT/content.htm [accessed 2008-04-01]. See also [448].
450. Christopher J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recog-
nition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998, Springer Nether-
lands: Dordrecht, Netherlands, Usama Fayyad, editor. doi: 10.1023/A:1009715923555.
CiteSeerx : 10.1.1.18.1083.
451. Luciana Buriol, Paulo M. França, and Pablo Moscato. A New Memetic Algo-
rithm for the Asymmetric Traveling Salesman Problem. Journal of Heuristics,
10(5):483–506, September 2004, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/B:HEUR.0000045321.59202.52. Fully available at http://citeseer.
ist.psu.edu/544761.html and http://www.springerlink.com/content/
w617486q60mphg88/fulltext.pdf [accessed 2007-09-12]. CiteSeerx : 10.1.1.20.1660.
452. Edmund K. Burke and Wilhelm Erben, editors. Selected Papers from the Third Inter-
national Conference on Practice and Theory of Automated Timetabling (PATAT’00), Au-
gust 16–18, 2000, Konstanz, Germany, volume 2079/2001 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44629-X.
isbn: 3-540-42421-0.
453. Edmund K. Burke and Graham Kendall, editors. Search Methodologies – Introductory Tuto-
rials in Optimization and Decision Support Techniques. Springer Science+Business Media,
Inc.: New York, NY, USA, 2005. doi: 10.1007/0-387-28356-0. isbn: 0-387-23460-8. Google
Books ID: X4BcOZEa4HsC. Library of Congress Control Number (LCCN): 2005051623.
454. Edmund K. Burke, Steven Matt Gustafson, and Graham Kendall. Survey and Analysis of
Diversity Measures in Genetic Programming. In GECCO’02 [1675], pages 716–723, 2002.
Fully available at http://www.gustafsonresearch.com/research/publications/
gecco-diversity-2002.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.2.3757.
455. Edmund K. Burke, Steven Matt Gustafson, Graham Kendall, and Natalio Krasnogor. Ad-
vanced Population Diversity Measures in Genetic Programming. In PPSN VII [1870],
pages 341–350, 2002. doi: 10.1007/3-540-45712-7 33. Fully available at http://www.cs.
bham.ac.uk/˜wbl/biblio/gp-html/burke_ppsn2002_pp341.html and http://
www.gustafsonresearch.com/research/publications/ppsn-2002.ps [accessed 2009-
08-07]. CiteSeerx : 10.1.1.18.6142.
456. Edmund K. Burke, Steven Matt Gustafson, Graham Kendall, and Natalio Krasnogor. Is
Increasing Diversity in Genetic Programming Beneficial? An Analysis of the Effects on Fit-
ness. In CEC’03 [2395], pages 1398–1405, volume 2, 2003. doi: 10.1109/CEC.2003.1299834.
Fully available at http://www.gustafsonresearch.com/research/publications/
cec-2003.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.1.1929.
457. Edmund K. Burke, Graham Kendall, and Eric Soubeiga. A Tabu Search Hyperheuristic for
Timetabling and Rostering. Journal of Heuristics, 9(6):451–470, December 2003, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1023/B:HEUR.0000012446.94732.b6. Fully
available at http://www.asap.cs.nott.ac.uk/publications/pdf/Journal_eric.
pdf and http://www.asap.cs.nott.ac.uk/publications/pdf/Journal_eric03.
pdf [accessed 2011-04-23]. CiteSeerx : 10.1.1.1.9539.
458. Edmund K. Burke, Steven Matt Gustafson, and Graham Kendall. Diversity in Genetic
Programming: An Analysis of Measures and Correlation with Fitness. IEEE Transac-
tions on Evolutionary Computation (IEEE-EC), 8(1):47–62, February 2004, IEEE Com-
puter Society: Washington, DC, USA, Xin Yao, editor. doi: 10.1109/TEVC.2003.819263.
Fully available at http://www.gustafsonresearch.com/research/publications/
gustafson-ieee2004.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.59.6663.
459. Martin V. Butz. Anticipatory Learning Classifier Systems, Genetic and Evolutionary Com-
putation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA,
USA, January 25, 2002. isbn: 0792376307.
460. Martin V. Butz. Rule-Based Evolutionary Online Learning Systems: A Principled Approach
to LCS Analysis and Design, Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, December 22, 2005. isbn: 3540253793.
461. Martin V. Butz, Kumara Sastry, and David Edward Goldberg. Tournament Selection in
XCS. Technical Report 2002020, Illinois Genetic Algorithms Laboratory (IlliGAL), Depart-
ment of Computer Science, Department of General Engineering, University of Illinois at
Urbana-Champaign: Urbana-Champaign, IL, USA, July 2002. Fully available at ftp://
ftp-illigal.ge.uiuc.edu/pub/papers/IlliGALs/2002020.ps.Z [accessed 2010-08-03].
CiteSeerx : 10.1.1.19.1850.
462. Elizabeth Byrnes and Jan Aikins, editors. Proceedings of the Sixth Conference on Innovative
Applications of Artificial Intelligence (IAAI’94), August 1–3, 1994, Convention Center:
Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-62-8. Partly
available at http://www.aaai.org/Conferences/IAAI/iaai94.php [accessed 2007-09-06].
OCLC: 635873487. GBV-Identification (PPN): 196175178.
463. Louis Caccetta and Araya Kulanoot. Computational Aspects of Hard Knapsack Prob-
lems. Nonlinear Analysis: Theory, Methods & Applications – An International Multidisciplinary
Journal, 47(8):5547–5558, August 2001, Elsevier Science Publishers B.V.: Essex, UK.
Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/S0362-546X(01)00658-7.
464. Stefano Cagnoni, Riccardo Poli, Yung-Chien Lin, George D. Smith, David Wolfe Corne,
Martin J. Oates, Emma Hart, Pier Luca Lanzi, Egbert J. W. Boers, Ben Paechter, and Terence
Claus Fogarty, editors. Real-World Applications of Evolutionary Computing: EvoIASP,
EvoSCONDI, EvoTel, EvoSTIM, EvoROB, and EvoFlight (EvoWorkshops’00), April 17,
2000, Edinburgh, Scotland, UK, volume 1803/2000 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45561-2.
isbn: 3-540-67353-9. Google Books ID: KQo0n1vZbi0C. OCLC: 43851409, 122976715,
174380229, and 243487744.
465. Stefano Cagnoni, Jens Gottlieb, Emma Hart, Martin Middendorf, and Günther R. Raidl,
editors. Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002: Evo-
COP, EvoIASP, EvoSTIM/EvoPLAN (EvoWorkshops’02), April 2–4, 2002, Kinsale, Ireland,
volume 2279/2002 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-46004-7. isbn: 3-540-43432-1.
466. Stefano Cagnoni, Evelyne Lutton, and Gustavo Olague, editors. Genetic and Evolutionary
Computation for Image Processing and Analysis, volume 8 in EURASIP Book Series on Signal
Processing and Communications. Hindawi Publishing Corporation: New York, NY, USA,
February 2008. Fully available at http://www.hindawi.com/books/9789774540011.
pdf [accessed 2008-05-17].
467. Sébastien Cahon, Nordine Melab, and El-Ghazali Talbi. ParadisEO: A Framework
for the Reusable Design of Parallel and Distributed Metaheuristics. Journal of
Heuristics, 10(3):357–380, May 2004, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/B:HEUR.0000026900.92269.ec.
468. Tao Cai, Feng Pan, and Jie Chen. Adaptive Particle Swarm Optimization Algorithm. In
WCICA’04 [1388], pages 2245–2247, volume 3, 2004. doi: 10.1109/WCICA.2004.1341988.
INSPEC Accession Number: 8152861.
469. Proceedings of the 7th Asia-Pacific Conference on Complex Systems (Complex’04), Decem-
ber 6–10, 2004, Cairns Convention Centre: Cairns, Australia.
470. Osvaldo Cairó, Luis Enrique Sucar, and Francisco J. Cantu, editors. Advances in Artificial
Intelligence, Proceedings of the Mexican International Conference on Artificial Intelligence
(MICAI’00), April 11–14, 2000, Acapulco, México, volume 1793/2000 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/10720076. isbn: 3-540-67354-7.
471. Craig Caldwell and Victor S. Johnston. Tracking a Criminal Suspect Through “Face-Space”
with a Genetic Algorithm. In ICGA’91 [254], pages 416–421, 1991.
472. Cristian S. Calude, Michael J. Dinneen, Gheorghe Păun, Grzegorz Rozenberg, and Susan
Stepney, editors. Proceedings of the 5th International Conference on Unconventional Com-
putation (UC’06), September 4–8, 2006, York, UK, volume 4135 in Theoretical Computer
Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11839132. isbn: 3-540-38593-2. Library of
Congress Control Number (LCCN): 2006931474.
473. William H. Calvin. The River That Flows Uphill: A Journey from the Big Bang to the Big
Brain. Macmillan Publishers Co.: New York, NY, USA, Backinprint.com: New York, NY,
USA, and Sierra Club Books: San Francisco, CA, USA, July 1986. isbn: 0025209205,
0595167004, 0792445228, and 0-87156-719-9. Google Books ID: KVdoPwAACAAJ,
OpNuPwAACAAJ, nT7rNiPqEuAC, and uavwAAAAMAAJ. OCLC: 13524443, 48962546, and
246817496. Library of Congress Control Number (LCCN): 86008490 and 87000381.
GBV-Identification (PPN): 116426063 and 277678366. LC Classification: QP356 .C35
1986b. See also [474].
474. William H. Calvin. Die Geschichte des Lebens – Vom Urknall bis zum Großhirn des Homo
Sapiens [The History of Life – From the Big Bang to the Cerebrum of Homo Sapiens].
Bechtermünz Verlag GmbH: Augsburg, Germany, 1997, Friedrich Griese, Translator.
isbn: 3-86047-567-3. OCLC: 57376043. See also [473].
475. Mike Campos, Eric W. Bonabeau, Guy Théraulaz, and Jean-Louis Deneubourg.
Dynamic Scheduling and Division of Labor in Social Insects. Adaptive Behav-
ior, 8(2):83–95, March 2000, SAGE Publications: Thousand Oaks, CA, USA.
doi: 10.1177/105971230000800201. Fully available at http://www.icosystem.com/
dynamic-scheduling-and-division-of-labor-in-social-insects/ [accessed 2011-
05-02]. CiteSeerx : 10.1.1.112.3678.
476. Erick Cantú-Paz. Designing Efficient and Accurate Parallel Genetic Algorithms. PhD the-
sis, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science,
Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-
Champaign, IL, USA, July 1999. Fully available at http://citeseer.ist.psu.edu/
cantu-paz99designing.html [accessed 2008-04-06]. Also IlliGAL report 99017. See also [477].
477. Erick Cantú-Paz. Efficient and Accurate Parallel Genetic Algorithms, volume 1 in Genetic
and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic
Publishers: Norwell, MA, USA, December 15, 2000. isbn: 0-7923-7221-2. Google Books
ID: UXkGGQbmsfAC. OCLC: 45058477. See also [476].
478. Erick Cantú-Paz, editor. Late Breaking Papers at the Genetic and Evolutionary Computation
Conference (GECCO’02 LBP), July 9–13, 2002, Roosevelt Hotel: New York, NY, USA.
AAAI Press: Menlo Park, CA, USA. See also [224, 1675, 1790].
479. Erick Cantú-Paz and Francisco Fernández de Vega, editors. First Workshop on Parallel
Architectures and Bioinspired Algorithms (WPBA’05), June 14, 2005, Oslo, Norway.
480. Erick Cantú-Paz and Chandrika Kamath. Combining Evolutionary Algorithms with Oblique
Decision Trees to Detect Bent-Double Galaxies. In Proceedings of SPIE’s 45th International
Symposium on Optical Science and Technology: Applications and Science of Neural Networks,
Fuzzy Systems, and Evolutionary Computation III [362], pages 63–71, 2000.
481. Erick Cantú-Paz and Chandrika Kamath. Using Evolutionary Algorithms to Induce
Oblique Decision Trees. In GECCO’00 [2935], pages 1053–1060, 2000. Fully avail-
able at https://computation.llnl.gov/casc/sapphire/dtrees/oc1.html and
www.evolutionaria.com/publications/gecco00-obliqueDT.ps.gz [accessed 2009-09-
09]. CiteSeerx : 10.1.1.28.7300.
482. Erick Cantú-Paz and William F. Punch, editors. Evolutionary Computation and Parallel
Processing Workshop, 1999, Orlando, FL, USA. Morgan Kaufmann Publishers Inc.: San
Francisco, CA, USA. Part of [211].
483. Erick Cantú-Paz, Martin Pelikan, and David Edward Goldberg. Linkage Problem, Dis-
tribution Estimation, and Bayesian Networks. Evolutionary Computation, 8(3):311–
340, Fall 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600750078808.
CiteSeerx : 10.1.1.16.236. See also [2150].
484. Erick Cantú-Paz, James A. Foster, Kalyanmoy Deb, Lawrence Davis, Rajkumar Roy, Una-
May O’Reilly, Hans-Georg Beyer, Russell K. Standish, Graham Kendall, Stewart W. Wilson,
Mark Harman, Joachim Wegener, Dipankar Dasgupta, Mitchell A. Potter, Alan C. Schultz,
Kathryn A. Dowsland, Natasa Jonoska, and Julian Francis Miller, editors. Proceedings of
the Genetic and Evolutionary Computation Conference, Part I (GECCO’03-I), July 12–16,
2003, Holiday Inn Chicago: Chicago, IL, USA, volume 2723/2003 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45105-6.
isbn: 3-540-40602-6. Google Books ID: IlWhIKahmsC. OCLC: 52506094, 52559224,
53068317, 243490402, and 314238192. See also [485, 2078].
485. Erick Cantú-Paz, James A. Foster, Kalyanmoy Deb, Lawrence Davis, Rajkumar Roy, Una-
May O’Reilly, Hans-Georg Beyer, Russell K. Standish, Graham Kendall, Stewart W. Wil-
son, Mark Harman, Joachim Wegener, Dipankar Dasgupta, Mitchell A. Potter, Alan C.
Schultz, Kathryn A. Dowsland, Natasa Jonoska, and Julian Francis Miller, editors. Pro-
ceedings of the Genetic and Evolutionary Computation Conference, Part II (GECCO’03-
II), July 12–16, 2003, Holiday Inn Chicago: Chicago, IL, USA, volume 2724/2003 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-45110-2. isbn: 3-540-40603-4. Google Books ID: eYlQAAAAMAAJ and
zcN2KcCllT0C. OCLC: 52506094, 52559224, 59295690, and 314195657. See also
[484, 2078].
486. Aizeng Cao, Yueting Chen, Jun Wei, and Jinping Li. A Hybrid Evolutionary Algo-
rithm Based on EDAs and Clustering Analysis. In CCC’07 [555], pages 754–758, 2007.
doi: 10.1109/CHICC.2006.4347236. INSPEC Accession Number: 9800559.
487. Hongqing Cao, Jingxian Yu, and Lishan Kang. An Evolutionary Approach for Modeling the
Equivalent Circuit for Electrochemical Impedance Spectroscopy. In CEC’03 [2395], pages
1819–1825, volume 3, 2003. doi: 10.1109/CEC.2003.1299893. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/Cao_2003_Aeafmtecfeis.html [accessed 2009-
06-16].
488. Mathieu S. Capcarrère, Alex Alves Freitas, Peter J. Bentley, Colin G. Johnson, and Jonathan
Timmis, editors. Proceedings of the 8th European Conference on Advances in Artificial Life
(ECAL’05), September 5–9, 2005, University of Kent: Canterbury, Kent, UK, volume 3630
in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-28848-1.
489. Andrea Caponio, Giuseppe Leonardo Cascella, Ferrante Neri, Nadia Salvatore, and Mark
Sumner. A Fast Adaptive Memetic Algorithm for Online and Offline Control Design of
PMSM Drives. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernet-
ics, 37(1):28–41, February 2007, IEEE Systems, Man, and Cybernetics Society: New York,
NY, USA. doi: 10.1109/TSMCB.2006.883271. PubMed ID: 17278556. INSPEC Accession
Number: 9299478.
490. Andrea Caponio, Ferrante Neri, and Ville Tirronen. Super-fit Control Adaptation in Memetic
Differential Evolution Frameworks. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 13(8-9):811–831, July 2009, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s00500-008-0357-1.
491. Alain Cardon, Thierry Galinho, and Jean-Philippe Vacher. A Multi-Objective Genetic Algorithm
in Job Shop Scheduling Problem to Refine an Agents’ Architecture. In EUROGEN’99
[1894], 1999. Fully available at http://www.jeo.org/emo/cardon99.ps.gz.
492. Jorge Cardoso and Amit P. Sheth. Introduction to Semantic Web Services and Web Process
Composition. In SWSWPC’04 [493], pages 1–13, 2004. doi: 10.1007/978-3-540-30581-1_1.
493. Jorge Cardoso and Amit P. Sheth, editors. Revised Selected Papers from the First Interna-
tional Workshop on Semantic Web Services and Web Process Composition (SWSWPC’04),
July 6, 2004, Westin Horton Plaza Hotel: San Diego, CA, USA, volume 3387/2005 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b105145. isbn: 3-540-24328-3. Google Books ID: AvChgxNuKK8C. Library
of Congress Control Number (LCCN): 2004117658.
494. Jorge Cardoso, José Cordeiro, and Joaquim Filipe, editors. Proceedings of the 9th Interna-
tional Conference on Enterprise Information Systems (ICEIS’07), June 12–16, 2007, Funchal,
Madeira, Portugal, volume SAIC. Institute for Systems and Technologies of Information,
Control and Communication (INSTICC) Press: Setubal, Portugal. isbn: 972-8865-91-0.
Google Books ID: JwTJPgAACAAJ. See also [915].
495. Alessio Carenini, Dario Cerizza, Marco Comerio, Emanuele Della Valle, Flavio De Paoli,
Andrea Maurino, Matteo Palmonari, Matteo Sassi, and Andrea Turati. Semantic Web
Service Discovery and Selection: A Test Bed Scenario. In EON-SWSC’08 [1023], 2008.
Fully available at http://sunsite.informatik.rwth-aachen.de/Publications/
CEUR-WS/Vol-359/Paper-3.pdf [accessed 2010-12-16]. See also [2166].
496. Peter A. Cariani. Extradimensional Bypass. Biosystems, 64(1-3):47–53, January 2002, El-
sevier Science Ireland Ltd.: East Park, Shannon, Ireland and North-Holland Scientific Pub-
lishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0303-2647(01)00174-5.
497. Tonci Caric and Hrvoje Gold, editors. Vehicle Routing Problem. IN-TECH Education and
Publishing: Vienna, Austria, September 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-09-1&type=B [accessed 2009-01-13].
498. Anthony Jack Carlisle. Applying the Particle Swarm Optimizer to Non-Stationary Envi-
ronments. PhD thesis, Auburn University, Graduate Faculty: Auburn, AL, USA, Decem-
ber 16, 2002, Gerry V. Dozier, Advisor, Gerry V. Dozier, Kai-Hsiung Chang, Dean Hen-
drix, and Stephen L. McFarland, Committee members. Fully available at http://antho.
huntingdon.edu/publications/Dissertation.pdf [accessed 2009-07-13].
499. Anthony Jack Carlisle and Gerry V. Dozier. Tracking Changing Extrema with Adap-
tive Particle Swarm Optimizer. In WAC’02 [1432], pages 265–270, volume 13, 2002.
doi: 10.1109/WAC.2002.1049555. Fully available at http://antho.huntingdon.edu/
publications/default.html [accessed 2007-08-19].
500. Charles W. Carroll. An Operations Research Approach to the Economic Optimization of
a Kraft Pulping Process. PhD thesis, Institute of Paper Chemistry: Appleton, WI, USA,
Georgia Institute of Technology: Atlanta, GA, USA, June 1959. Fully available at http://
hdl.handle.net/1853/5853 [accessed 2008-11-15].
501. Charles W. Carroll. The Created Response Surface Technique for Optimizing Nonlinear,
Restrained Systems. Operations Research, 9(2):169–184, March–April 1961, Institute for
Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and
HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/opre.9.2.169.
502. Ted Carson and Russell Impagliazzo. Hill-Climbing finds Random Planted Bisections. In
Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms [2543],
pages 903–909, volume 12, 2001. Fully available at http://citeseer.ist.psu.
edu/carson01hillclimbing.html and http://portal.acm.org/citation.cfm?
id=365411.365805 [accessed 2007-09-11].
503. Hugh Cartwright, editor. Intelligent Data Analysis in Science, volume 4 in Oxford Chemistry
Masters. Oxford University Press, Inc.: New York, NY, USA, June 2000. isbn: 0198502338.
504. Richard A. Caruana, Larry J. Eshelman, and J. David Schaffer. Representation and Hidden
Bias: Gray vs. Binary Coding for Genetic Algorithms. In ICML’88 [1659], pages 153–161,
1988.
505. George Casella and Roger L. Berger. Statistical Inference, Duxbury Advanced Series.
Duxbury Thomson Learning / Duxbury Press: Pacific Grove, CA, USA, 2nd edition, June 18,
2000. isbn: 0-534-24312-6. OCLC: 46538638.
506. Félix Castro, Àngela Nebot, and Francisco Mugica. A Soft Computing Decision Support
Framework to Improve the e-Learning Experience. In SpringSim’08 [2258], pages 781–
788, 2008. Fully available at http://portal.acm.org/citation.cfm?id=1400674
[accessed 2009-03-29]. SESSION: 2008 Modeling & Simulation in Education (MSE’08): The Education Continuum.
507. Seventh European Conference on Machine Learning (ECML’94), April 6–8, 1994, Catania,
Sicily, Italy.
508. H. John Caulfield, Shu-Heng Chen, Heng-Da Cheng, Richard J. Duro, Vasant Honavar,
Etienne E. Kerre, Mikulas Luptacik, Manuel Grana Romay, Timothy K. Shih, Dan Ventura,
Paul P. Wang, and Yuanyuan Yang, editors. Proceedings of the Sixth Joint Conference on
Information Science (JCIS 2002), Section: The Fourth International Workshop on Frontiers
in Evolutionary Algorithms (FEA’02), March 8–13, 2002, Research Triangle Park, NC, USA.
Association for Intelligent Machinery, Inc.: Durham, NC, USA. isbn: 0-9707890-1-7.
509. Proceedings of the 1st International Conference on Bio Inspired mOdels of NEtwork, Infor-
mation and Computing Systems (BIONETICS’06), December 11–13, 2006, Cavalese, Italy.
510. Daniel Joseph Cavicchio, Jr. Adaptive Search using Simulated Evolution. PhD thesis, Uni-
versity of Michigan, College of Literature, Science, and the Arts, Computer and Commu-
nication Sciences Department: Ann Arbor, MI, USA, August 1970, John Henry Holland,
Advisor. Fully available at http://deepblue.lib.umich.edu/handle/2027.42/4042
[accessed 2010-08-03]. ID: bab9712.0001.001.
511. Daniel Joseph Cavicchio, Jr. Reproductive Adaptive Plans. In ACM’72 [21], pages 60–70.
doi: 10.1145/800193.805822.
512. 14th European Conference on Machine Learning (ECML’03), September 22–26, 2003, Cavtat,
Dubrovnik, Croatia.
513. The Second Annual Symposium on Autonomous Intelligent Networks and Systems (AINS’03),
June 30–July 1, 2003, SRI International: Menlo Park, CA, USA. Center for Autonomous
Intelligent Networks and Systems (CAINS), University of California (UCLA): Los Angeles,
CA, USA. Fully available at http://path.berkeley.edu/ains/ [accessed 2009-07-10].
514. Proceedings of the 20th Informatics and Operations Research Meeting (20th Jornadas Argentinas
de Informática e Investigación Operativa) (JAIIO’91), August 20–23, 1991, Centro
Cultural General San Martín: Buenos Aires, Argentina.
515. Proceedings of the International Conference on Soft Computing and Pattern Recognition
(SoCPaR’10), December 7–10, 2010, Cergy-Pontoise, France.
516. Vladimír Černý. Thermodynamical Approach to the Traveling Salesman Problem: An Efficient
Simulation Algorithm. Journal of Optimization Theory and Applications, 45(1):41–
51, January 1985, Springer Netherlands: Dordrecht, Netherlands and Plenum Press: New
York, NY, USA. doi: 10.1007/BF00940812. Fully available at http://mkweb.bcgsc.ca/
papers/cerny-travelingsalesman.pdf [accessed 2010-09-25]. Communicated by S. E. Drey-
fus. Also: Technical Report, Comenius University, Mlynská Dolina, Bratislava, Czechoslo-
vakia, 1982.
517. Miroslav Červenka and Vojtěch Křesálek. Aerodynamic Wing Optimisation Us-
ing SOMA Evolutionary Algorithm. In NICSO’08 [1620], pages 127–138, 2008.
doi: 10.1007/978-3-642-03211-0_11. See also [518].
518. Miroslav Červenka and Ivan Zelinka. Application of Evolutionary Algorithm on Aerody-
namic Wing Optimisation. In ECC’08 [1819], 2008. Fully available at http://www.wseas.
us/e-library/conferences/2008/malta/ecc/ecc53.pdf [accessed 2010-12-27]. See also
[517].
519. Deepti Chafekar, Jiang Xuan, and Khaled Rasheed. Constrained Multi-objective Optimiza-
tion Using Steady State Genetic Algorithms. In GECCO’03-I [484], pages 813–824, 2003.
Fully available at http://www.cs.uga.edu/˜khaled/papers/385.pdf [accessed 2010-08-
01]. CiteSeerx : 10.1.1.15.2094.
520. Uday Kumar Chakraborty, Kalyanmoy Deb, and Mandira Chakraborty. Analysis of Se-
lection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133–167,
Summer 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.2.133.
521. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: Applications, volume
I in Practical Handbook of Genetic Algorithms. Chapman & Hall: London, UK and CRC
Press, Inc.: Boca Raton, FL, USA, 2nd ed: 2000 edition, 1995. isbn: 0849325196 and
1584882409.
522. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: New Frontiers, vol-
ume II. CRC Press, Inc.: Boca Raton, FL, USA, 1995. isbn: 0849325293. Google Books
ID: 9RCE3pgj9K4C. OCLC: 32394368, 35865793, 54016160, 61827855, 468627012,
and 503637768.
523. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: Complex Coding
Systems, volume III in Practical Handbook of Genetic Algorithms. Chapman & Hall: London,
UK and CRC Press, Inc.: Boca Raton, FL, USA, December 23, 1998. isbn: 0849325390.
524. Christine W. Chan, Witold Kinsner, Yingxu Wang, and D. Michael Miller, editors. Pro-
ceedings of the 3rd IEEE International Conference on Cognitive Informatics (ICCI’04),
August 16–17, 2004, Victoria, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-2190-8.
525. Felix T.S. Chan and Manoj Kumar Tiwari, editors. Swarm Intelligence – Focus on
Ant and Particle Swarm Optimization. I-Tech Education and Publishing: Vienna,
Austria, December 2007. Fully available at http://s.i-techonline.com/Book/
Swarm-Intelligence/Swarm-Intelligence.zip [accessed 2008-08-22].
526. Sandeep Chandran and R. Russell Rhinehart. Heuristic Random Optimizer-Version II. In
ACC’02 [1335], pages 2589–2594, volume 4, 2002. doi: 10.1109/ACC.2002.1025175.
527. C. S. Chang and Chung Min Kwan. Evaluation of Evolutionary Algorithms for
Multi-Objective Train Schedule Optimization. In AI’04 [2871], pages 803–815, 2004.
doi: 10.1007/978-3-540-30549-1_69 and 10.1007/b104336.
528. Vira Chankong and Yacov Y. Haimes. Multiobjective Decision Making Theory and Methodol-
ogy. North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands, and Dover Publications: Mineola, NY, USA,
January 1983. isbn: 0-444-00710-5. Google Books ID: fQ5VPQAACAAJ.
529. 14th Conference on Technologies and Applications of Artificial Intelligence (TAAI’09), Oc-
tober 30–31, 2009, Chaoyang University of Technology (CYU): Wufeng Township, Taichung
County, Taiwan.
530. Abraham Charnes and William Wager Cooper. Management Models and Industrial Appli-
cations of Linear Programming. John Wiley & Sons Canada Ltd.: Toronto, ON, Canada,
December 1961. isbn: 0471148504. Google Books ID: xj3WAAAACAAJ.
531. Abraham Charnes, William Wager Cooper, and R. O. Ferguson. Optimal Estimation of
Executive Compensation by Linear Programming. Management Science, 1(2):138–151, Jan-
uary 1955, Institute for Operations Research and the Management Sciences (INFORMS):
Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1287/mnsc.1.2.138.
532. Neela Chattoraj and Jibendu Sekhar Roy. Application of Genetic Algorithm to the Optimiza-
tion of Microstrip Antennas with and without Superstrate. Mikrotalasna Revija (Microwave
Review), 12(2), November 2006, Yugoslav Society for Microwave Technique and Technologies,
Serbia and Montenegro IEEE MTT-S Chapter: Novi Beograd, Serbia. Fully available
at http://www.mwr.medianis.net/pdf/Vol12No2-06-NChattoraj.pdf [accessed 2008-
09-02].
533. Leonardo Weiss F. Chaves, Erik Buchmann, Fabian Hueske, and Klemens Böhm. Towards
Materialized View Selection for Distributed Databases. In EDBT’09 [1526], pages 1088–
1099, 2009. doi: 10.1145/1516360.1516484. Fully available at http://www.ipd.kit.edu/
˜buchmann/pdfs/weiss08matviews.pdf [accessed 2011-04-09].
534. José M. Chaves-González, Miguel A. Vega-Rodríguez, David Domínguez-González, Juan A.
Gómez-Pulido, and Juan M. Sánchez-Pérez. SS vs PBIL to Solve a Real-World Frequency
Assignment Problem in GSM Networks. In EvoWorkshops’08 [1051], pages 21–30, 2008.
doi: 10.1007/978-3-540-78761-7_3.
535. Pravir K. Chawdhry, Rajkumar Roy, and Raj K. Pant, editors. Soft Computing in Engineering
Design and Manufacturing – Proceedings of the 2nd On-line World Conference
on Soft Computing in Engineering Design and Manufacture (WSC2), June 1997. Springer-
Verlag GmbH: Berlin, Germany. isbn: 3-540-76214-0. Google Books ID: mxcP1mSjOlsC.
OCLC: 37696334, 247521345, and 634603077. Library of Congress Control Number
(LCCN): 97031959. GBV-Identification (PPN): 236542257 and 279731663. LC Classifi-
cation: QA76.9.S63 S63 1998.
536. Sin Man Cheang, Kwong Sak Leung, and Kin Hong Lee. Genetic Parallel Programming:
Design and Implementation. Evolutionary Computation, 14(2):129–156, Summer 2006, MIT
Press: Cambridge, MA, USA. doi: 10.1162/evco.2006.14.2.129.
537. 10th European Conference on Machine Learning (ECML’98), April 21–24, 1998, Chemnitz,
Sachsen, Germany.
538. Arbee L. P. Chen, Jui-Shang Chiu, and Frank S.C. Tseng. Evaluating Aggregate Operations
Over Imprecise Data. IEEE Transactions on Knowledge and Data Engineering, 8(2):
273–284, April 1996, IEEE Computer Society Press: Los Alamitos, CA, USA. Fully avail-
able at http://make.cs.nthu.edu.tw/alp/alp_paper/Evaluating%20aggregate
%20operations%20over%20imprecise%20data.pdf [accessed 2007-09-12].
539. Chi-hau Chen, L. F. Pau, and P. Shen-pei Wang, editors. Handbook of Pattern Recogni-
tion and Computer Vision. World Scientific Publishing Co.: Singapore, 1st edition, Au-
gust 1993. isbn: 981-02-1136-8, 981-02-2276-9, and 981-02-3071-0. Google Books
ID: 5J6J7PP0-ZMC, pCghq7-i1oMC, and wnhwKoen4xAC.
540. Chia-Mei Chen, Bing Chiang Jeng, Chia Ru Yang, and Gu Hsin Lai. Tracing Denial of
Service Origin: Ant Colony Approach. In EvoWorkshops’06 [2341], pages 286–295, 2006.
doi: 10.1007/11732242_26.
541. Guoliang Chen, editor. Genetic Algorithm and its Application (Yı́chuán Suànfǎ Jı́ Qı́
Yı̀ngyòng), National High-Tech Key Books: Communications Technology (Quánguó Gāo
Jı̀shù Zhòngdiǎn Túshū: Tōngxı̀n Jı̀shù Lı̌ngyù). Post & Telecom Press (PTPress, Rénmı́n
Yóudiàn Chūbǎn Shè): Běijı̄ng, China, 1996–2001. isbn: 7115059640. Google Books
ID: qmNoAAAACAAJ. OCLC: 300052154.
542. Jing Chen, Zeng-zhi Li, Zhi-Gang Liao, and Yun-lan Wang. Distributed Service Performance
Management Based on Linear Regression and Genetic Programming. In ICMLC’05 [2263],
pages 560–563, volume 1, 2005. doi: 10.1109/ICMLC.2005.1527007.
543. Jing Chen, Zeng-zhi Li, and Yun-lan Wang. Distributed Service Management Based on
Genetic Programming. In AWIC’05 [2642], pages 83–88, 2005. doi: 10.1007/11495772_14.
544. Jong-Chen Chen and Michael Conrad. A Multilevel Neuromolecular Architecture that uses
the Extradimensional Bypass Principle to Facilitate Evolutionary Learning. Physica D: Non-
linear Phenomena, 75(1–3):417–437, August 1, 1994, Elsevier Science Publishers B.V.: Am-
sterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam,
The Netherlands. doi: 10.1016/0167-2789(94)90295-X.
545. Jong-Chen Chen, Guo-Xun Liao, Jr-Sung Hsie, and Cheng-Hua Liao. A Study of the Contri-
bution Made by Evolutionary Learning on Dynamic Load-Balancing Problems in Distributed
Computing Systems. Expert Systems with Applications – An International Journal, 34(1):357–
365, January 2008, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press:
Oxford, UK. doi: 10.1016/j.eswa.2006.09.036.
546. Ken Chen, Shu-Heng Chen, Heng-Da Cheng, David K.Y. Chin, Sanjoy Das, Richard J.
Duro, Zhen Jiang, Nikola Kasabov, Etienne E. Kerre, Hong Va Leong, Qing Li, Mi Lu,
Manuel Grana Romay, Dan Ventura, Paul P. Wang, and Jie Wu, editors. Proceedings of the
Seventh Joint Conference on Information Science (JCIS 2003), Section: The Fifth Interna-
tional Workshop on Frontiers in Evolutionary Algorithms (FEA’03), September 26–30, 2003,
Embassy Suites Hotel and Conference Center: Cary, NC, USA. Association for Intelligent
Machinery, Inc.: Durham, NC, USA. isbn: 0964345692. OCLC: 248354376. Workshop
held in conjunction with Seventh Joint Conference on Information Sciences.
547. Min-Rong Chen, Yong-zai Lu, and Gen-ke Yang. Multiobjective Extremal Optimization with
Applications to Engineering Design. Journal of Zhejiang University – Science A (JZUS-A), 8
(12):1905–1911, November 2007, Zhejiang University Press: Hángzhōu, Zhèjiāng, China and
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1631/jzus.2007.A1905.
548. Min-Rong Chen, Yong-zai Lu, and Gen-ke Yang. Multiobjective Extremal Optimization with
Applications to Engineering Design. Journal of Zhejiang University – Science A (JZUS-A), 8
(12):1905–1911, November 2007, Zhejiang University Press: Hángzhōu, Zhèjiāng, China and
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1631/jzus.2007.A1905.
549. Shu-Heng Chen, editor. Evolutionary Computation in Economics and Finance, volume 100 in
Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, August 5,
2002. isbn: 3790814768.
550. Tianshi Chen, Ke Tang, Guoliang Chen, and Xin Yao. A Large Population Size Can Be
Unhelpful in Evolutionary Algorithms. Theoretical Computer Science, 2011, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.tcs.2011.02.016.
551. Wenxiang Chen, Thomas Weise, Zhenyu Yang, and Ke Tang. Large-Scale Global
Optimization Using Cooperative Coevolution with Variable Interaction Learning. In
PPSN’10-2 [2413], pages 300–309, 2010. doi: 10.1007/978-3-642-15871-1_31. Fully
available at http://mail.ustc.edu.cn/˜chenwx/ppsn10Chen.pdf [accessed 2011-02-18]
and http://www.it-weise.de/documents/files/CWYT2010LSGOUCCWVIL.pdf. Ei
ID: 20104513369063. IDS (SCI): BTC53.
552. Ying-Ping Chen. Extending the Scalability of Linkage Learning Genetic Algorithms – Theory
& Practice, volume 190/2006 in Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, University of Illinois: Urbana-Champaign, IL, USA, 2004–2006,
David Edward Goldberg, Advisor. doi: 10.1007/b102053. isbn: 3-540-28459-1. Google
Books ID: kKr3rKhPU7oC. Order No.: AAI3130894.
553. Yuehui Chen and Ajith Abraham, editors. Proceedings of the Sixth International Conference
on Intelligent Systems Design and Applications (ISDA’06), October 16–18, 2006, Jı̀’nán,
Shāndōng, China. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-2528-8.
Google Books ID: xgDBAgAACAAJ. Order No.: P2528.
554. Zhixiong Chen, Charles A. Shoniregun, and Yuan-Chwen You, editors. First IEEE In-
ternational Services Computing Contest (SCContest’06), 2006, Chicago, IL, USA. IEEE
Computer Society Press: Los Alamitos, CA, USA. Fully available at http://iscc.
servicescomputing.org/2006/ [accessed 2010-12-17]. Part of [1373].
555. Dàizhǎn Chéng and Mǐn Wú, editors. Proceedings of the 26th Chinese Control Confer-
ence (CCC’07), July 26–31, 2007, Zhāngjiājiè, Húnán, China. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 7-81124-055-6. Google Books
ID: MCLlMQAACAAJ. OCLC: 182550073. Catalogue no.: 07EX1694.
556. Hui Cheng and Shengxiang Yang. Genetic Algorithms with Elitism-based Immigrants for
Dynamic Shortest Path Problem in Mobile Ad Hoc Networks. In CEC’09 [1350], pages
3135–3140, 2009. doi: 10.1109/CEC.2009.4983340. INSPEC Accession Number: 10688935.
557. Yiu-ming Cheung, Yuping Wang, and Hailin Liu, editors. Proceedings of the 2006 Inter-
national Conference on Computational Intelligence and Security (CIS’06), November 3–6,
2006, Ramada Pearl Hotel, Guǎngzhōu, Guǎngdōng, China. IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0605-6. Library of Congress
Control Number (LCCN): 2006931375. Catalogue no.: 06EX1512.
558. Bastien Chevreux. Genetische Algorithmen zur Molekülstrukturoptimierung. Master’s thesis,
Universität Heidelberg: Heidelberg, Germany, Fachhochschule Heilbronn: Heilbronn, Ger-
many, and Deutsches Krebsforschungszentrum Heidelberg: Heidelberg, Germany, March 1,
1997. Fully available at http://chevreux.org/diplom/diplom.html [accessed 2010-08-01].
559. Alpha C. Chiang and Kevin Wainwright. Fundamental Methods of Mathematical Eco-
nomics. McGraw-Hill Ltd.: Maidenhead, UK, England, 4th edition, February 2, 2005.
isbn: 0070109109.
560. Altannar Chinchuluun, Panos M. Pardalos, Athanasios Migdalas, and Leonidas S. Pit-
soulis, editors. Pareto Optimality, Game Theory and Equilibria, volume 17 in Springer
Optimization and Its Applications. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-0-387-77247-9. isbn: 0387772464. Google Books ID: kNqHxU3Tc2YC.
561. Raymond Chiong, editor. Intelligent Systems for Automated Learning and Adaptation:
Emerging Trends and Applications. Information Science Reference: Hershey, PA, USA,
September 2009. doi: 10.4018/978-1-60566-798-0. isbn: 1-60566-798-6. Google Books
ID: bjFRPgAACAAJ and r3BUPgAACAAJ. OCLC: 318420065.
562. Raymond Chiong, editor. Nature-Inspired Algorithms for Optimisation, volume 193/2009 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, April 30, 2009.
doi: 10.1007/978-3-642-00267-0. isbn: 3-642-00266-8 and 3-642-00267-6. Google Books
ID: 557PPQbkfNYC. OCLC: 400530826, 405547847, and 423750494. Library of Congress
Control Number (LCCN): 2009920517.
563. Raymond Chiong, editor. Nature-Inspired Informatics for Intelligent Applications and Knowl-
edge Discovery: Implications in Business, Science and Engineering. Information Science
Reference: Hershey, PA, USA, 2009. doi: 10.4018/978-1-60566-705-8. isbn: 1605667056.
564. Raymond Chiong and Sandeep Dhakal, editors. Natural Intelligence for Schedul-
ing, Planning and Packing Problems, volume 250 in Studies in Computational Intelli-
gence. Springer-Verlag: Berlin/Heidelberg, October 2009. doi: 10.1007/978-3-642-04039-9.
isbn: 3-642-04038-1. Google Books ID: X1XkPwAACAAJ. Library of Congress Control
Number (LCCN): 2009934305.
565. Raymond Chiong and Thomas Weise. Global Optimisation and Mobile Learning.
Learning Technology – LTTF Newsletter, 11(1-2):26–28, January–April 2009, Technical
Committee on Learning Technology. Fully available at http://www.ieeetclt.org/
issues/april2009/index.html#_Toc230200599 and http://www.it-weise.de/
documents/files/CW2009GOAML.pdf [accessed 2009-09-08].
566. Raymond Chiong, Thomas Weise, and Bee Theng Lau. Template Design using Extremal
Optimization with Multiple Search Operators. In SoCPaR’09 [1224], pages 202–207, 2009.
doi: 10.1109/SoCPaR.2009.49. Fully available at http://www.it-weise.de/documents/
files/CWL2009TDUEOWMSO.pdf [accessed 2010-06-10]. INSPEC Accession Number: 11050930.
Ei ID: 20101012758280. IDS (SCI): BOP29. See also [2895].
567. Raymond Chiong, Thomas Weise, and Zbigniew Michalewicz, editors. Variants of Evolution-
ary Algorithms for Real-World Applications. Springer-Verlag: Berlin/Heidelberg, Septem-
ber 30, 2011–2012. doi: 10.1007/978-3-642-23424-8. Google Books ID: B2ONePP40MEC.
Library of Congress Control Number (LCCN): 2011935740.
568. Rada Chirkova. The View-Selection Problem Has an Exponential-Time Lower Bound
for Conjunctive Queries and Views. In PODS’02 [2204], pages 159–168, 2002.
doi: 10.1145/543613.543634. Fully available at http://dbgroup.ncsu.edu/rychirko/
Papers/pods02.pdf [accessed 2011-03-28]. CiteSeerx: 10.1.1.88.2166.
569. Huidae Cho, Francisco Olivera, and Seth D. Guikema. A Derivation of the Number of
Minima of the Griewank Function. Journal of Applied Mathematics and Computation, 204
(2):694–701, October 15, 2008, Elsevier Science Publishers B.V.: Essex, UK. Fully available
at https://ceprofs.civil.tamu.edu/folivera/PapersPDFs/A%20derivation
%20of%20the%20number%20of%20minima%20of%20the%20Griewank%20function.
pdf [accessed 2009-08-16]. Special Issue on New Approaches in Dynamic Optimization to
Assessment of Economic and Environmental Systems.
570. Chi-Hon Choi, Jeffrey Xu Yu, and Gang Gou. What Difference Heuristics Make:
Maintenance-Cost View-Selection Revisited. In WAIM’02 [1868], pages 313–350, 2002.
doi: 10.1007/3-540-45703-8_23.
571. Chi-Hon Choi, Jeffrey Xu Yu, and Hongjun Lu. Dynamic Materialized View
Management Based on Predicates. In APWeb’03 [3082], pages 583–594, 2003.
doi: 10.1007/3-540-36901-5_58.
572. Sung-Soon Choi, Kyomin Jung, and Jeong Han Kim. Phase Transition in a
Random NK Landscape Model. In GECCO’05 [304], pages 1241–1248, 2005.
doi: 10.1145/1068009.1068212. Fully available at http://web.mit.edu/kmjung/Public/
NK%20gecco%20final.pdf [accessed 2009-02-26]. Session: Genetic Algorithms. See also [573].
573. Sung-Soon Choi, Kyomin Jung, and Jeong Han Kim. Phase Transition in a Random NK
Landscape Model. Artificial Intelligence, 172(2–3):179–203, February 2008, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.artint.2007.06.002. See also [572].
574. Proceedings of the 8th International Conference on Natural Computation (ICNC’12), May 29–
31, 2012, Chóngqìng, China.
575. Hosung Choo, Adrian Hutani, Luiz Cezar Trintinalia, and Hao Ling. Shape Optimisation of
Broadband Microstrip Antennas using Genetic Algorithm. Electronics Letters, 36(25):2057–
2058, December 7, 2000, Institution of Engineering and Technology (IET): Stevenage, Herts,
UK. doi: 10.1049/el:20001452.
576. Heather Christensen, Roger L. Wainwright, and Dale A. Schoenefeld. A Hybrid Algo-
rithm for the Point to Multipoint Routing Problem. In SAC’97 [435], pages 263–268,
1997. doi: 10.1145/331697.331751. Fully available at http://euler.mcs.utulsa.edu/
~rogerw/papers/Heather-PMP.pdf [accessed 2008-08-28]. CiteSeerx: 10.1.1.22.8559.
577. Nicos Christofides, Aristide Mingozzi, and Paolo Toth. The Vehicle Routing Problem. In
Combinatorial Optimization [578], Chapter 11, pages 315–338. John Wiley & Sons Ltd.: New
York, NY, USA, 1977.
578. Nicos Christofides, Aristide Mingozzi, Paolo Toth, and C. Sandi, editors. Combinatorial
Optimization, A Wiley-Interscience Publication. John Wiley & Sons Ltd.: New York, NY,
USA, May 30–June 11, 1977. isbn: 0471997498. OCLC: 4495250, 264568944, and
636199125. Library of Congress Control Number (LCCN): 78011131. GBV-Identification
(PPN): 027251012. LC Classification: QA402.5 .C545. Based on a series of lectures given
at the Summer School in Combinatorial Optimization held in SOGESTA, Urbino, Italy from
30th May to 11th June 1977.
579. Hyoung-Seog Chung, Seongim Choi, and Juan J. Alonso. Supersonic Business Jet Design Us-
ing Knowledge-Based Genetic Algorithm with Adaptive, Unstructured Grid Methodology. In
AIAA’03 [2550], page 3791, 2003. Fully available at http://www.lania.mx/~ccoello/
chung03.pdf.gz [accessed 2009-06-17].
580. Charles West Churchman, Russell Lincoln Ackoff, and E. Leonard Arnoff. Introduction to
Operations Research. John Wiley & Sons Ltd.: New York, NY, USA and Chapman & Hall:
London, UK, 2nd edition, 1957. asin: B0000CJP9J and B001ORMIX0.
581. Vic Ciesielski and Xiang Li. Analysis of Genetic Programming Runs. In Complex’04 [469],
2004. Fully available at http://goanna.cs.rmit.edu.au/~xiali/pub/ai04.vc.
pdf and http://www.cs.rmit.edu.au/~vc/papers/aspgp04.pdf [accessed 2010-11-20].
CiteSeerx: 10.1.1.87.4136.
582. Emilia Cimpian, Paavo Kotinurmi, Adrian Mocan, Matthew Moran, Tomas Vitvar, and Ma-
ciej Zaremba. Dynamic RosettaNet Integration on the Semantic Web Services. In First
Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web
Services Mediation, Choreography and Discovery [2688], 2006. Fully available at http://
sws-challenge.org/workshops/2006-Stanford/papers/03.pdf [accessed 2010-12-12].
See also [1210, 1940, 3057–3059].
583. Proceedings of the Third International Workshop on Local Search Techniques in Constraint
Satisfaction, September 25, 2006, Cité des Congrès: Nantes, France.
584. Dawn Cizmar, editor. Proceedings of the 22nd Annual ACM Computer Science Confer-
ence on Scaling up: Meeting the Challenge of Complexity in Real-World Computing Ap-
plications (CSC’94), March 8–10, 1994, Phoenix, AZ, USA. ACM Press: New York, NY,
USA. isbn: 0-89791-634-4. Google Books ID: 6jUbOAAACAAJ, InpQAAAAMAAJ, and
nIkzMgAACAAJ. OCLC: 31861788, 51345201, 257836439, and 312016702.
585. Christopher D. Clack, editor. Advanced Research Challenges in Financial Evolutionary Com-
puting Workshop (ARC-FEC’08), July 12, 2008, Renaissance Atlanta Hotel Downtown: At-
lanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
586. Bill Clancey and Dan Weld, editors. Proceedings of the Thirteenth National Conference on
Artificial Intelligence and Eighth Innovative Applications of Artificial Intelligence Conference
(AAAI’96, IAAI’96), August 4–8, 1996, Portland, OR, USA. isbn: 0-262-51091-X. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai96.php and http://
www.aaai.org/Conferences/IAAI/iaai96.php [accessed 2007-09-06]. 2 volumes. See also
[2207].
587. David E. Clark, editor. Evolutionary Algorithms in Molecular Design, volume 8 in Methods
and Principles in Medicinal Chemistry. Wiley-VCH Verlag GmbH & Co. KGaA: Weinheim,
Germany, September 11, 2000. isbn: 3527301550. Series editors: Raimund Mannhold, Hugo
Kubinyi, and Hendrik Timmerman.
588. The International Workshop on Modeling & Applied Simulation (MAS’03), October 2–4,
2003, Claudio Hotel & Congress Center: Bergeggi, Italy.
589. Maurice Clerc. Particle Swarm Optimization. ISTE Publishing Company: London, UK,
February 24, 2006. isbn: 1905209045. Google Books ID: a0YeAAAACAAJ.
590. Manuel Clergue, Philippe Collard, Marco Tomassini, and Leonardo Vanneschi. Fit-
ness Distance Correlation And Problem Difficulty For Genetic Programming. In
GECCO’02 [1675], pages 724–732, 2002. Fully available at http://www.cs.bham.ac.
uk/~wbl/biblio/gecco2002/GP072.pdf and http://www.cs.ucl.ac.uk/staff/
W.Langdon/ftp/papers/gecco2002/gecco-2002-14.pdf [accessed 2008-07-20]. See also
[2784].
591. Dave Cliff and Susi Ross. Adding Temporary Memory to ZCS. Adaptive Be-
havior, 3(2):101–150, Fall 1994, SAGE Publications: Thousand Oaks, CA, USA.
doi: 10.1177/105971239400300201. Fully available at ftp://ftp.informatics.sussex.
ac.uk/pub/reports/csrp/csrp347.ps.Z [accessed 2007-09-12].
592. João Clímaco, editor. Proceedings of the 11th International Conference on Multiple Criteria
Decision Making: Multicriteria Analysis (MCDM’94), August 1–6, 1994, Coimbra, Portu-
gal. Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-62074-5. OCLC: 36181115,
636440409, and 639833152. GBV-Identification (PPN): 222685735 and 25787321X.
Published in May 1997.
593. Carlos Artemio Coello Coello. A Short Tutorial on Evolutionary Multiobjective Op-
timization. In EMO’01 [3099], pages 21–40, 2001. Fully available at http://www.
cs.cinvestav.mx/~emooworkgroup/tutorial-slides-coello.pdf [accessed 2010-07-31].
CiteSeerx: 10.1.1.29.3997.
594. Carlos Artemio Coello Coello. Theoretical and Numerical Constraint-Handling Techniques
used with Evolutionary Algorithms: A Survey of the State of the Art. Computer Methods
in Applied Mechanics and Engineering, 191(11–12):1245–1287, January 14, 2002, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. doi: 10.1016/S0045-7825(01)00323-1.
595. Carlos Artemio Coello Coello. A Comprehensive Survey of Evolutionary-Based Multiobjec-
tive Optimization Techniques. Knowledge and Information Systems – An International Jour-
nal (KAIS), 1(3), August 1999, Springer-Verlag London Limited: London, UK. Fully available
at http://www.lania.mx/~ccoello/EMOO/informationfinal.ps.gz [accessed 2009-08-04].
CiteSeerx: 10.1.1.48.3504.
596. Carlos Artemio Coello Coello. An Updated Survey of Evolutionary Multiobjective Opti-
mization Techniques: State of the Art and Future Trends. In CEC’99 [110], pages 3–13,
volume 1, 1999. doi: 10.1109/CEC.1999.781901. Fully available at http://www-course.
cs.york.ac.uk/evo/SupportingDocs/coellocoello99updated.pdf [accessed 2009-06-20].
CiteSeerx: 10.1.1.50.2154.
597. Carlos Artemio Coello Coello and Gary B. Lamont, editors. Applications of Multi-Objective
Evolutionary Algorithms, volume 1 in Advances in Natural Computation. World Scien-
tific Publishing Co.: Singapore, December 2004. isbn: 981-256-106-4. Google Books
ID: viWm0k9meOcC.
598. Carlos Artemio Coello Coello, Alvaro de Albornoz, Luis Enrique Sucar, and Osvaldo Cairó
Battistutti, editors. Advances in Artificial Intelligence: Proceedings of the Second Mex-
ican International Conference on Artificial Intelligence (MICAI’02), April 22–26, 2002,
Mérida, Yucatán, México, volume 2313/2002 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46016-0. isbn: 3-540-43475-5.
599. Carlos Artemio Coello Coello, Gary B. Lamont, and David A. van Veldhuizen. Evolution-
ary Algorithms for Solving Multi-Objective Problems, volume 5 in Genetic and Evolutionary
Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell,
MA, USA, 2nd edition, 2002–2007. doi: 10.1007/978-0-387-36797-2. isbn: 0306467623 and
0387332545. Google Books ID: NZdOgAACAAJ, rXIuAMw3lGAC, and sgX_Cst_yTsC.
600. Carlos Artemio Coello Coello, Arturo Hernández Aguirre, and Eckart Zitzler, editors. Pro-
ceedings of the Third International Conference on Evolutionary Multi-Criterion Optimiza-
tion (EMO’05), March 9–11, 2005, Guanajuato, México, volume 3410/2005 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b106458. isbn: 3-540-24983-4
and 3-540-31880-1. Google Books ID: AfE5fFFl5AgC. OCLC: 58601838, 58998290,
76657382, 150417695, 316700466, 318300131, and 403748504. Library of Congress
Control Number (LCCN): 2005920590.
601. William W. Cohen and Haym Hirsh, editors. Proceedings of the 11th International Confer-
ence on Machine Learning (ICML’94), July 10–13, 1994, New Brunswick, NJ, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-335-2.
602. William W. Cohen and Andrew Moore, editors. Proceedings of the 23rd International Con-
ference on Machine Learning (ICML’06), June 25–29, 2006, Carnegie Mellon University
(CMU): Pittsburgh, PA, USA, volume 148 in ACM International Conference Proceeding
Series (AICPS). Omnipress: Madison, WI, USA. isbn: 1-59593-383-2. Fully available
at http://www.machinelearning.org/icml2006_proc.html [accessed 2010-06-23].
603. William W. Cohen, Andrew McCallum, and Sam Roweis, editors. Proceedings of the 25th
International Conference on Machine Learning (ICML’08), July 5–9, 2008, Helsinki, Fin-
land, volume 307 in ACM International Conference Proceeding Series (AICPS). Omnipress:
Madison, WI, USA. Fully available at http://www.machinelearning.org/archive/
icml2008/proceedings.shtml [accessed 2010-06-23].
604. James P. Cohoon, S. U. Hegde, Worthy Neil Martin, and D. Richards. Punctuated Equilibria:
A Parallel Genetic Algorithm. In ICGA’87 [1132], pages 148–154, 1987.
605. David W. Coit and Fatema Baheranwala. Solution of Stochastic Multi-Objective System
Reliability Design Problems using Genetic Algorithms. In ESREL’05 [1567], pages 391–
398, 2005. Fully available at http://ise.rutgers.edu/resource/research_paper/
paper_05-010.pdf [accessed 2010-12-16]. See also [2645].
606. Pierre Collet, Cyril Fonlupt, Jin-Kao Hao, Evelyne Lutton, and Marc Schoenauer, edi-
tors. Selected Papers of the 5th International Conference on Artificial Evolution, Evo-
lution Artificielle (EA’01), October 29–31, 2001, Le Creusot, France, volume 2310/2002
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46033-0. isbn: 3-540-43544-1.
607. Pierre Collet, Marco Tomassini, Marc Ebner, Steven Matt Gustafson, and Anikó Ekárt,
editors. Proceedings of the 9th European Conference Genetic Programming (EuroGP’06),
April 10–12, 2006, Budapest, Hungary, volume 3905/2006 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11729976. isbn: 3-540-33143-3. Google
Books ID: 1JNQAAAAMAAJ and ZusmAAAACAAJ. Library of Congress Control Number
(LCCN): 2006922466.
608. Pierre Collet, Nicolas Monmarché, Pierrick Legrand, Marc Schoenauer, and Evelyne Lutton,
editors. Artificial Evolution: Revised Selected Papers from the 9th International Conference,
Evolution Artificielle (EA’09), October 26–28, 2009, Strasbourg, France, volume 5975 in
Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3642141552.
609. Yann Collette and Patrick Siarry. Multiobjective Optimization: Principles and Case Stud-
ies, volume 2 in Decision Engineering. Groupe Eyrolles: Paris, France, 2002–2004.
isbn: 3540401822. Google Books ID: XNYF4hltoF0C.
610. J. J. Collins and Malachy Eaton. Genocodes for Genetic Algorithms. In MENDEL’97 [416],
pages 23–30, 1997. Fully available at http://www.csis.ul.ie/staff/jjcollins/
mendel97.ps.gz [accessed 2010-08-05]. CiteSeerx: 10.1.1.51.2349.
611. Proceedings of the Third Bi-Annual EUROMICRO International Conference on Massively
Parallel Computing Systems (MPCS’98), April 6–9, 1998, Colorado Springs, CO, USA.
612. Benoît Colson and Philippe L. Toint. Optimizing Partially Separable Functions with-
out Derivatives. Technical Report 2003/20, University of Namur (FUNDP), Department
of Mathematics: Namur, Belgium, November 6, 2003. Fully available at http://www.
fundp.ac.be/pdf/publications/51357.pdf and perso.fundp.ac.be/~phtoint/
pubs/TR03-20.ps [accessed 2009-10-26]. See also [613].
613. Benoît Colson and Philippe L. Toint. Optimizing Partially Separable Functions without
Derivatives. Optimization Methods and Software, 20(4 & 5):493–508, August 2005, Taylor
and Francis LLC: London, UK. doi: 10.1080/10556780500140227. See also [612].
614. Francesc Comellas. Using Genetic Programming to Design Broadcasting Algorithms for
Manhattan Street Networks. In EvoWorkshops’04 [2254], pages 170–177, 2004.
615. Francesc Comellas and G. Giménez. Genetic Programming to Design Communication Algo-
rithms for Parallel Architectures. Parallel Processing Letters (PPL), 8(4):549–560, Decem-
ber 1998, World Scientific Publishing Co.: Singapore. doi: 10.1142/S0129626498000547. Fully
available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Comellas_1998_
GPD.html [accessed 2009-09-01]. CiteSeerx: 10.1.1.57.4752.
616. Francesc Comellas and Juan Paz-Sánchez. Reconstruction of Networks from
Their Betweenness Centrality. In EvoWorkshops’08 [1051], pages 31–37, 2008.
doi: 10.1007/978-3-540-78761-7_4.
617. Diana Elena Comes, Steffen Bleul, Thomas Weise, and Kurt Geihs. A Flexible
Approach for Business Processes Monitoring. In DAIS’09 [2453], pages 116–128,
2009. doi: 10.1007/978-3-642-02164-0_9. Fully available at http://www.it-weise.de/
documents/files/CBTG2009AFAFBPM.pdf. Ei ID: 20093412265790. IDS (SCI): BKH10.
618. Committee on the Fundamentals of Computer Science: Challenges and Opportunities,
Computer Science and Telecommunications Board, National Research Council of the Na-
tional Academies: Washington, DC, USA, editor. Computer Science: Reflections on the
Field, Reflections from the Field. National Academies Press: Washington, DC, USA, 2004.
isbn: 0-309-09301-5 and 0-309-54529-3. Google Books ID: sTlPLMq6ZdYC.
619. Giuseppe Confessore, Graziano Galiano, and Giuseppe Stecca. An Evolutionary Algorithm
for Vehicle Routing Problem with Real Life Constraints. In Manufacturing Systems and Tech-
nologies for the New Frontier – The 41st CIRP Conference on Manufacturing Systems [1918],
pages 225–228, 2008. doi: 10.1007/978-1-84800-267-8_46.
620. Brian Connolly. Genetic Algorithms – Survival of the Fittest: Natural Selection
with Windows Forms. MSDN Magazine, August 2004, Microsoft Corporation: Red-
mond, WA, USA. Fully available at http://msdn.microsoft.com/de-de/magazine/
cc163934(en-us).aspx [accessed 2008-06-24].
621. Michael Conrad. Bootstrapping on the Adaptive Landscape. Biosystems, 11(2-
3):81–84, August 1979, Elsevier Science Ireland Ltd.: East Park, Shannon, Ireland
and North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/0303-2647(79)90009-1. Fully available at http://dx.doi.org/10.1016/
0303-2647(79)90009-1 and http://hdl.handle.net/2027.42/23514 [accessed 2008-
11-02].
622. Michael Conrad. Adaptability: The Significance of Variability from Molecule to Ecosystem.
Plenum Press: New York, NY, USA, March 31, 1983. isbn: 0-306-41223-3. Google Books
ID: 7ewUAQAAIAAJ. OCLC: 9110177, 251709596, 299388964, and 476965274. Library
of Congress Control Number (LCCN): 82024558. GBV-Identification (PPN): 014038137.
LC Classification: QH546 .C65 1983.
623. Michael Conrad. Molecular Computing. In Advances in Computers [3036], pages 235–324.
Academic Press Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands, 1990. doi: 10.1016/S0065-2458(08)60155-2.
624. Michael Conrad. The Geometry of Evolution. Biosystems, 24(1):61–81, 1990, Elsevier Science
Ireland Ltd.: East Park, Shannon, Ireland and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands. doi: 10.1016/0303-2647(90)90030-5.
625. Michael Conrad. Towards High Evolvability Dynamics. In Evolutionary Systems – Biological
and Epistemological Perspectives on Selection and Self-Organization [2771], pages 33–43.
Kluwer Academic Publishers: Norwell, MA, USA, 1998.
626. Stephen A. Cook. The Complexity of Theorem-Proving Procedures. In STOC’71 [20], pages
151–158, 1971. doi: 10.1145/800157.805047.
627. William J. Cook. Traveling Salesman Problem. Georgia Institute of Technology, College of
Engineering, School of Industrial and Systems Engineering, Operations Research: Atlanta,
GA, USA, in edition, September 18, 2011. Fully available at http://www.tsp.gatech.
edu/index.html [accessed 2011-09-18].
628. William J. Cook, William H. Cunningham, William R. Pulleyblank, and Alexander Schrijver.
Combinatorial Optimization, Estimation, Simulation, and Control – Wiley-Interscience Series
in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK,
November 12, 1997. isbn: 0-471-55894-X. OCLC: 37451897, 247024428, 288962207,
and 473487390. Library of Congress Control Number (LCCN): 97035774. GBV-
Identification (PPN): 233568778 and 280181795. LC Classification: QA402.5 .C54523
1998.
629. Susan Coombs and Lawrence Davis. Genetic Algorithms and Communication Link Speed
Design: Constraints and Operators. In ICGA’87 [1132], pages 257–260, 1987. Partly available
at http://www.questia.com/PM.qst?a=o&d=97597557 [accessed 2008-08-29].
630. D. C. Cooper, editor. Proceedings of the 2nd International Joint Conference on Arti-
ficial Intelligence (IJCAI’71), September 1–3, 1971, Imperial College London: London,
UK. isbn: 0-934613-34-6. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-1971/CONTENT/content.htm [accessed 2008-04-01]. OCLC: 17487276.
631. Emilio S. Corchado, Juan M. Corchado, and Ajith Abraham, editors. Innovations in Hybrid
Intelligent Systems – Proceedings of the 2nd International Workshop on Hybrid Artificial
Intelligence Systems (HAIS’07), November 2007, University of Salamanca: Salamanca, Spain,
volume 44/2008 in Advances in Soft Computing. Physica-Verlag GmbH & Co.: Heidelberg,
Germany. doi: 10.1007/978-3-540-74972-1. isbn: 3-540-74971-3. Library of Congress
Control Number (LCCN): 2007935489.
632. Emilio S. Corchado, Ajith Abraham, and Witold Pedrycz, editors. Proceedings of the Third
International Workshop on Hybrid Artificial Intelligence Systems (HAIS’08), September 24–
26, 2008, Burgos, Spain, volume 5271/2008 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-87656-4. isbn: 3-540-87655-3. Library of Congress Control Number
(LCCN): 2008935394.
633. Arthur L. Corcoran and Sandip Sen. Using Real-Valued Genetic Algorithms to Evolve
Rule Sets for Classification. In CEC’94 [1891], pages 120–124, volume 1, 1994.
doi: 10.1109/ICEC.1994.350030. CiteSeerx: 10.1.1.55.1864.
634. Óscar Cordón, Francisco Herrera Triguero, and Achim G. Hoffmann. Genetic Fuzzy
Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases, volume 19
in Advances in Fuzzy Systems. World Scientific Publishing Co.: Singapore, 2001.
isbn: 981-02-4016-3 and 981-02-4017-1. Google Books ID: BWmSV-38fKAC
and bwa9QgAACAAJ. OCLC: 47768307. Library of Congress Control Number
(LCCN): 2001275437. GBV-Identification (PPN): 334123739. LC Classification: QA402.5
.G4565 2001.
635. Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algo-
rithms, MIT Electrical Engineering and Computer Science. MIT Press: Cambridge, MA,
USA and McGraw-Hill Ltd.: Maidenhead, UK, England, 2nd edition, 1990–August 2001.
isbn: 0070131430, 0070131449, 0-2620-3141-8, 0-2625-3196-8, and 8120321413.
Google Books ID: CJY1KwAACAAJ, CZY1KwAACAAJ, ElKaGgAACAAJ, JlCNJwAACAAJ,
KVCNJwAACAAJ, NLngYyWFl_YC, O9WbHAAACAAJ, OdWbHAAACAAJ, PB02AAAACAAJ,
PNWbHAAACAAJ, V7wYPAAACAAJ, WLwYPAAACAAJ, oStGQAACAAJ, iLY1NwAACAAJ, and
zukGIQAACAAJ.
636. David Wolfe Corne and Joshua D. Knowles. Techniques for Highly Multiobjective Optimisa-
tion: Some Nondominated Points are Better than Others. In GECCO’07-I [2699], pages 773–
780, 2007. Fully available at http://dbkgroup.org/knowles/pap583s7-corne.pdf
and http://www.macs.hw.ac.uk/~dwcorne/p773-corne.pdf [accessed 2011-12-05]. arXiv
ID: 0908.3025v1.
637. David Wolfe Corne and Jonathan L. Shapiro, editors. Proceedings of the Workshop on Ar-
tificial Intelligence and Simulation of Behaviour, International Workshop on Evolutionary
Computing, Selected Papers (AISB’97), April 7–8, 1997, Manchester, UK, volume 1305/1997
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0027161. isbn: 3-540-63476-2.
638. David Wolfe Corne, Marco Dorigo, Fred Glover, Dipankar Dasgupta, Pablo Moscato, Ric-
cardo Poli, and Kenneth V. Price, editors. New Ideas in Optimization, McGraw-Hill’s Ad-
vanced Topics In Computer Science Series. McGraw-Hill Ltd.: Maidenhead, UK, England,
May 1999. isbn: 0-07-709506-5. Google Books ID: nC35AAAACAAJ. OCLC: 40883094
and 265809212.
639. David Wolfe Corne, Joshua D. Knowles, and Martin J. Oates. The Pareto Envelope-
Based Selection Algorithm for Multiobjective Optimization. In PPSN VI [2421], pages
839–848, 2000. doi: 10.1007/3-540-45356-3_82. Fully available at http://www.lania.
mx/~ccoello/corne00.ps.gz [accessed 2009-07-18]. CiteSeerx: 10.1.1.32.9195.
640. David Wolfe Corne, Martin J. Oates, and George D. Smith, editors. Telecommunications Opti-
mization: Heuristic and Adaptive Techniques. John Wiley & Sons Ltd.: New York, NY, USA,
September 2000. doi: 10.1002/047084163X. isbn: 047084163X and 0471988553. Google
Books ID: EgNTAAAAMAAJ. OCLC: 43187002, 43903625, 50745182, and 123099960.
641. David Wolfe Corne, Zbigniew Michalewicz, Robert Ian McKay, Ágoston E. Eiben, David B.
Fogel, Carlos M. Fonseca, Günther R. Raidl, Kay Chen Tan, and Ali M. S. Zalzala, edi-
tors. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’05), Septem-
ber 2–5, 2005, Edinburgh, Scotland, UK. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-9363-5. Google Books ID: -QIVAAAACAAJ. OCLC: 62773008 and
423962661. Catalogue no.: 05TH8834.
642. F. Corno, G. Cumani, M. Sonza Reorda, and Giovanni Squillero. Efficient Machine-
Code Test-Program Induction. In CEC’02 [944], pages 1486–1491, 2002. Fully available
at http://www.cad.polito.it/pap/db/cec2002.pdf and http://www.cs.bham.
ac.uk/~wbl/biblio/gp-html/corno_2002_emctpi.html [accessed 2008-09-17].
643. Gérard Cornuéjols. Revival of the Gomory cuts in the 1990’s. Annals of Operations Research,
149(1):63–66, February 2007, Springer Netherlands: Dordrecht, Netherlands and J. C. Baltzer
AG, Science Publishers: Amsterdam, The Netherlands. doi: 10.1007/s10479-006-0100-1. Fully
available at http://integer.tepper.cmu.edu/webpub/gomory.pdf [accessed 2009-07-05].
644. Pablo Cortés Achedad, Luis Onieva Giménez, Jesús Muñuzuri Sanz, and José Guadix
Martín. A Revision of Evolutionary Computation Techniques in Telecommunications
and an Application for the Network Global Planning Problem. In Success in Evolu-
tionary Computation [3014], pages 239–262. Springer-Verlag: Berlin/Heidelberg, 2008.
doi: 10.1007/978-3-540-76286-7_11.
645. Carlos Cotta and Peter Cowling, editors. Proceedings of the 9th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP’09), April 15–17,
2009, Eberhard-Karls-Universität Tübingen, Fakultät für Informations- und Kognitionswis-
senschaften: Tübingen, Germany, volume 5482/2009 in Theoretical Computer Science and
General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-642-01009-5. isbn: 3-642-01008-3.
646. Carlos Cotta and Jano I. van Hemert, editors. Proceedings of the 7th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP’07), April 11–13,
2007, València, Spain, volume 4446/2007 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany.
647. Richard Courant. Variational Methods for the Solution of Problems of Equilibrium and
Vibrations. Bulletin of the American Mathematical Society (AMS), 49(1):1–23, 1943,
American Mathematical Society (AMS): Providence, RI, USA. doi: 10.1090/S0002-9904-1943-07818-4. Fully avail-
able at http://projecteuclid.org/euclid.bams/1183504922 and http://www.
ams.org/bull/1943-49-01/S0002-9904-1943-07818-4/home.html [accessed 2008-11-
15]. Zentralblatt MATH identifier: 0063.00985. Mathematical Reviews number (Math-
SciNet): MR0007838.
648. Steven H. Cousins. Species Diversity Measurement: Choosing the Right Index. Trends
in Ecology and Evolution (TREE), 6(6):190–192, June 1991, Elsevier Science Pub-
lishers B.V.: Amsterdam, The Netherlands and Cell Press: St. Louis, MO, USA.
doi: 10.1016/0169-5347(91)90212-G.
649. Peter Cowling, Graham Kendall, and Eric Soubeiga. A Hyperheuristic Ap-
proach to Scheduling a Sales Summit. In PATAT’00 [452], pages 176–190, 2000.
doi: 10.1007/3-540-44629-X_11. Fully available at http://www.asap.cs.nott.ac.
uk/publications/pdf/exs_patat01.pdf and http://www.cs.nott.ac.uk/~gxk/
aim/2009/reading/hh_paper_1.pdf [accessed 2011-04-23]. CiteSeerx: 10.1.1.63.7949.
650. Louis Anthony Cox, Jr., Lawrence Davis, and Yuping Qiu. Dynamic Anticipatory Routing In
Circuit-Switched Telecommunications Networks. In Handbook of Genetic Algorithms [695],
pages 124–143. Thomson Publishing Group, Inc.: Stamford, CT, USA, 1991.
651. Teodor Gabriel Crainic and Gilbert Laporte, editors. Fleet Management and Logistics.
Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Nor-
well, MA, USA, 1998. isbn: 0792381610. Google Books ID: IXed6zq9ZigC.
652. Nichael Lynn Cramer. A Representation for the Adaptive Generation of Simple Sequential
Programs. In ICGA’85 [1131], pages 183–187, 1985. Fully available at http://www.sover.
net/˜nichael/nlc-publications/icga85/index.html [accessed 2007-09-06].
653. Ronald L. Crepeau. Genetic Evolution of Machine Language Software. In Proceedings of
the Workshop on Genetic Programming: From Theory to Real-World Applications [2326],
pages 121–134, 1995. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/crepeau_1995_GEMS.html [accessed 2008-09-17]. CiteSeerx : 10.1.1.61.7001.
654. Matej Črepinšek, Marjan Mernik, and Shih-Hsi Liu. Analysis of Exploration and Exploitation
in Evolutionary Algorithms by Ancestry Trees. International Journal of Innovative
Computing and Applications (IJICA), 3(1), January 2011, InderScience Publishers: Geneva,
Switzerland. doi: 10.1504/IJICA.2011.037947.
655. H. Brown Cribbs, III and Robert Elliott Smith. What Can I Do With a Learning Classifier
System? In Industrial Applications of Genetic Algorithms [1496], pages 299–319. CRC Press,
Inc.: Boca Raton, FL, USA, 1998.
656. Sven F. Crone, Stefan Lessmann, and Robert Stahlbock, editors. Proceedings of the 2006
International Conference on Data Mining (DMIN’06), June 26–29, 2006, Las Vegas, NV,
USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-004-3.
657. Helena Cronin. The Ant and the Peacock – Altruism and Sexual Selection from Darwin to
Today. Cambridge University Press: Cambridge, UK, 1991. isbn: 0-521-32937-X and
0-521-45765-3. Google Books ID: SwEnVmakCB0C. OCLC: 29778586, 444540292, and
490467682. GBV-Identification (PPN): 040676250 and 232802165. With a foreword by
John Maynard Smith.
658. Mark Crosbie and Gene Spafford. Applying Genetic Programming to Intrusion De-
tection. In Proceedings of 1995 AAAI Fall Symposium Series, Genetic Programming
Track [1606], 1995. Fully available at http://ftp.cerias.purdue.edu/pub/papers/
mark-crosbie/mcrosbie-spaf-AAAI.ps.Z [accessed 2008-06-17]. The paper appeared also
as technical report COAST TR 95-05 of COAST Laboratory, Department of Computer
Sciences, Purdue University, West Lafayette, IN, USA.
659. Jack L. Crosby. Computer Simulation in Genetics. John Wiley & Sons Ltd.: New York, NY,
USA, January 1973. isbn: 0-4711-8880-8.
660. Claudia Crosio, Francesco Cecconi, Paolo Mariottini, Gianni Cesareni, Sydney Brenner,
and Francesco Amaldi. Fugu Intron Oversize Reveals the Presence of U15 snoRNA Cod-
ing Sequences in Some Introns of the Ribosomal Protein S3 Gene. Genome Research, 6(12):
1227–1231, December 1996. Fully available at http://www.genome.org/cgi/content/
abstract/6/12/1227 [accessed 2008-03-23].
661. Proceedings of the Second Conference on Artificial General Intelligence (AGI’09), March 6–9,
2009, Crowne Plaza National Airport: Arlington, VA, USA.
662. Li Ying Cui, Soundar R. T. Kumara, John Jung-Woon Yoo, and Fatih Cavdur. Large-Scale
Network Decomposition and Mathematical Programming Based Web Service Composition.
In CEC’09 [1245], pages 511–514, 2009. doi: 10.1109/CEC.2009.91. INSPEC Accession
Number: 10839128. See also [1571, 1999, 2064–2067, 3034].
663. Xunxue Cui and Chuang Lin. A Multiobjective Genetic Algorithm for Distributed Database
Management. In WCICA’04 [1388], pages 2117–2121, volume 3, 2004. doi: 10.1109/W-
CICA.2004.1341959. INSPEC Accession Number: 8143777.
664. Francesco Cupertino, Ernesto Mininno, and David Naso. Compact Genetic Algorithms for
the Optimization of Induction Motor Cascaded Control. In IEMDC’07 [1391], pages 82–87,
volume 2, 2007. doi: 10.1109/IEMDC.2007.383557. INSPEC Accession Number: 9892564.
665. Djurdje Cvijović and Jacek Klinowski. Taboo Search: An Approach to the Multiple Minima
Problem. Science Magazine, 267(5198):664–666, February 3, 1995, American Association for
the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stan-
ford University): Cambridge, MA, USA. doi: 10.1126/science.267.5198.664. Fully available
at http://www-klinowski.ch.cam.ac.uk/pdfs/244.pdf [accessed 2010-10-04]. PubMed
ID: 17745843.
666. Hans Czap, Rainer Unland, Cherif Branki, and Huaglory Tianfield, editors. Self-Organization
and Autonomic Informatics (I) – International Conference on Self-Organization and Adap-
tation of Multi-agent and Grid Systems (SOAS’2005), December 11–13, 2005, University of
Paisley: Glasgow, Scotland, UK, Frontiers in Artificial Intelligence and Applications. IOS
Press: Amsterdam, The Netherlands. isbn: 1-58603-577-0 and 1601291337. Google
Books ID: 0wp9iwroM9gC and tPP8OgAACAAJ.
667. Zbigniew J. Czech and Piotr Czarnas. Parallel Simulated Annealing for the Vehicle Routing
Problem with Time Windows. In PDP’02 [1383], pages 376–383, 2002. doi: 10.1109/EM-
PDP.2002.994313. CiteSeerx : 10.1.1.16.5766.
668. António Gaspar Lopes da Cunha and José António Colaço Gomes Covas. RPSGAe – Reduced
Pareto Set Genetic Algorithm: A Multiobjective Algorithm with Elitism: Application
to Polymer Extrusion. Xavier Gandibleux, Marc Sevaux, Kenneth Sörensen, and Vincent
T’kindt, editors. Fully available at http://www2.lifl.fr/PM2O/Reunions/04112002/
gaspar.pdf [accessed 2007-09-21]. Poster on the joint PM2O-EU/ME meeting.
669. Marcus Vinicius Carvalho da Silva, Nadia Nedjah, and Luiza de Macedo Mourelle. Evolution-
ary IP Assignment for Efficient NoC-based System Design using Multi-objective Optimiza-
tion. In CEC’09 [1350], pages 2257–2264, 2009. doi: 10.1109/CEC.2009.4983221. INSPEC
Accession Number: 10688816.
670. Cihan H. Dagli, Anna L. Buczak, Joydeep Ghosh, Mark J. Embrechts, and Okan Ersoy,
editors. Intelligent Engineering Systems through Artificial Neural Networks – Smart Engi-
neering System Design: Neural Networks, Fuzzy Logic, EP, Complex Systems and Artificial
Life – Proceedings of the Artificial Neural Networks in Engineering Conference (ANNIE’03),
November 2–5, 2003, St. Louis, MO, USA, volume 13. American Society of Mechanical En-
gineers (ASME): St. Louis, MO, USA.
671. Wei Dai, Paul Moynihan, Juanqiong Gou, Ping Zou, Xi Yang, Tiedong Chen, and Xin Wan.
Services Oriented Knowledge-based Supply Chain Application. In SCContest’07 [1374], pages
660–667, 2007. doi: 10.1109/SCC.2007.106. INSPEC Accession Number: 9869254.
672. Hōsei Daigaku, Shietung Peng, Vladimir V. Savchenko, and Shuichi Yukita, editors. First
International Symposium on Cyber Worlds: Theory and Practices (CW’02), November 6–8,
2002. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-1862-1.
Google Books ID: lxIOAAAACAAJ. OCLC: 51208033, 65194958, 71489103, and
423965835.
673. R. J. Dakin. A Tree-Search Algorithm for Mixed Integer Programming Problems. The
Computer Journal, Oxford Journals, 8(3):250–255, 1965, British Computer Society: Swindon,
UK. doi: 10.1093/comjnl/8.3.250.
674. Gerard E. Dallal. The Little Handbook of Statistical Practice. Tufts University, Jean Mayer
USDA Human Nutrition Research Center on Aging, Biostatistics Unit: Boston, MA, USA,
July 16, 2008. Fully available at http://www.statisticalpractice.com/ [accessed 2008-
08-15].
675. Hai H. Dam, Hussein A. Abbass, and Chris Lokan. DXCS: An XCS System for Distributed
Data Mining. In GECCO’05 [304], pages 1883–1890, 2005. Fully available at http://doi.
acm.org/10.1145/1068009.1068326 [accessed 2007-09-12]. See also [676].
676. Hai H. Dam, Hussein A. Abbass, and Chris Lokan. DXCS: An XCS System for Distributed
Data Mining. Technical Report TR-ALAR-200504002, University of New South Wales, School
of Information Technology and Electrical Engineering, Artificial Life and Adaptive Robotics
Laboratory: Canberra, Australia, 2005. Fully available at http://www.itee.adfa.edu.
au/˜alar/techreps/200504002.pdf [accessed 2007-09-12]. See also [675].
677. David B. D’Ambrosio and Kenneth Owen Stanley. A Novel Generative Encoding for Ex-
ploiting Neural Network Sensor and Output Geometry. In GECCO’07-I [2699], pages 974–
981, 2007. doi: 10.1145/1276958.1277155. Fully available at http://eplex.cs.ucf.edu/
papers/dambrosio_gecco07.pdf [accessed 2011-06-19].
678. Martin Damsbo, Brian S. Kinnear, Matthew R. Hartings, Peder T. Ruhoff, Martin F. Jarrold,
and Mark A. Ratner. Application of Evolutionary Algorithm Methods to Polypeptide Folding:
Comparison with Experimental Results for Unsolvated Ac-(Ala-Gly-Gly)5-LysH+. Proceedings
of the National Academy of Sciences of the United States of America (PNAS), 101(19):7215–7222,
May 2004, National Academy of Sciences: Washington,
DC, USA. Fully available at http://www.pnas.org/cgi/reprint/101/19/7215.pdf?
ck=nck and http://www.pnas.org/content/vol101/issue19/ [accessed 2007-08-27].
679. Grégoire Danoy, Bernabé Dorronsoro Díaz, and Pascal Bouvry. Overcoming Partitioning in
Large Ad Hoc Networks Using Genetic Algorithms. In GECCO’09-I [2342], pages 1347–1354,
2009. doi: 10.1145/1569901.1570082.
680. George Bernhard Dantzig and J. H. Ramser. The Truck Dispatching Problem. Management
Science, 6(1):80–91, October 1959, Institute for Operations Research and the Management
Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge,
MA, USA. doi: 10.1287/mnsc.6.1.80.
681. Andrea Pohoreckyj Danyluk, Léon Bottou, and Michael Littman, editors. Proceedings of the
26th International Conference on Machine Learning (ICML’09), June 14–18, 2009, Montréal,
QC, Canada, volume 382 in ACM International Conference Proceeding Series (AICPS). ACM
Press: New York, NY, USA. Fully available at http://www.machinelearning.org/
archive/icml2009/abstracts.html [accessed 2010-06-23].
682. Ludwig Danzer and Victor Klee. Lengths of Snakes in Boxes. Journal of Combinatorial
Theory, 2(3):258–265, May 1967, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. doi: 10.1016/S0021-9800(67)80026-7.
683. Paul J. Darwen and Xin Yao. Every Niching Method has its Niche: Fitness Shar-
ing and Implicit Sharing Compared. In PPSN IV [2818], pages 398–407, 1996.
doi: 10.1007/3-540-61723-X 1004. CiteSeerx : 10.1.1.20.9897 and 10.1.1.56.4202.
684. Charles Darwin. On the Origin of Species by Means of Natural Selection, or the Preserva-
tion of Favoured Races in the Struggle for Life. John Murray: London, UK, 6th edition, 1872
(first edition published November 24, 1859). Fully available at http://www.gutenberg.org/
etext/1228 [accessed 2007-08-05]. Google Books ID: 6gcpNAAACAAJ and nNQNAAAAYAAJ. Newly published as
[685].
685. Charles Darwin, author. Gillian Beer, editor. On the Origin of Species, Oxford World’s Clas-
sics. Oxford University Press, Inc.: New York, NY, USA, May 1998. isbn: 0-19-283438-X.
Google Books ID: LDrPI52uFQsC. OCLC: 39117382. New edition of [684].
686. Erasmus Darwin. Zoönomia; or, the Laws of Organic Life, volume I. J. Johnson: St. Paul’s
Church-Yard, London, UK, second corrected edition, 1796. Fully available at http://www.
gutenberg.org/etext/15707 [accessed 2008-09-10]. Entered at Stationers’ Hall.
687. Angan Das and Ranga Vemuri. A Self-learning Optimization Technique for Topol-
ogy Design of Computer Networks. In EvoWorkshops’08 [1051], pages 38–51, 2008.
doi: 10.1007/978-3-540-78761-7 5.
688. Indraneel Das and John E. Dennis. A Closer Look at Drawbacks of Minimizing Weighted
Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Tech-
nical Report TR96.36, Rice University, Department of Computational & Applied Mathemat-
ics (CAAM): Houston, TX, USA, December 1996. Fully available at http://www.caam.
rice.edu/tech_reports/1996/TR96-36.ps [accessed 2009-07-18]. See also [689].
689. Indraneel Das and John E. Dennis. A Closer Look at Drawbacks of Minimizing Weighted
Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Struc-
tural Optimization, 14(1):63–69, August 1997, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BF01197559. See also [688].
690. Dipankar Dasgupta, Fernando Niño, Deon Garrett, Koyel Chaudhuri, Soujanya Medapati,
Aishwarya Kaushal, and James Simien. A Multiobjective Evolutionary Algorithm for the
Task based Sailor Assignment Problem. In GECCO’09-I [2342], pages 1475–1482, 2009.
doi: 10.1145/1569901.1570099. Fully available at http://ais.cs.memphis.edu/files/
papers/t13fp621-dasguptaPS.pdf [accessed 2010-09-21].
691. Yuval Davidor. Epistasis Variance: Suitability of a Representation to Genetic Algorithms.
Complex Systems, 4(4):369–383, 1990, Complex Systems Publications, Inc.: Champaign, IL,
USA, Stephen Wolfram and Todd Rowland, editors.
692. Yuval Davidor. Epistasis Variance: A Viewpoint on GA-Hardness. In FOGA’90 [2562], pages
23–35, 1990.
693. Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors. Proceedings of the
Third Conference on Parallel Problem Solving from Nature; International Conference on
Evolutionary Computation (PPSN III), October 9–14, 1994, Jerusalem, Israel, volume
866/1994 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/3-540-58484-6. isbn: 0387584846 and 3-540-58484-6. Google
Books ID: n2lXlC5mv68C. OCLC: 31132760, 190832323, 202350473, 311905312, and
422861493.
694. Lawrence Davis, editor. Genetic Algorithms and Simulated Annealing, Research Notes in
Artificial Intelligence. Pitman: London, UK, October 1987–1990. isbn: 0273087711 and
0934613443. Google Books ID: edfSSAAACAAJ. OCLC: 16355405, 257879939, and
476337140. Library of Congress Control Number (LCCN): 87021357.
695. Lawrence Davis, editor. Handbook of Genetic Algorithms, VNR Computer Library. Thom-
son Publishing Group, Inc.: Stamford, CT, USA, January 1991. isbn: 0-442-00173-8
and 1850328250. Google Books ID: vTG5PAAACAAJ. OCLC: 23081440, 36495685,
231258310, and 300431834. Library of Congress Control Number (LCCN): 90012823.
GBV-Identification (PPN): 01945077X and 276468937. LC Classification: QA402.5 .H36
1991.
696. Randall Davis and Jonathan King. An Overview of Production Systems. Technical Report
STAN-CS-75-524, Stanford University, Computer Science Department: Stanford, CA, USA,
October 1975. See also [697].
697. Randall Davis and Jonathan King. An Overview of Production Systems. In Machine Intel-
ligence 8 [874], pages 300–334, 1977. See also [696].
698. Brian D. Davison and Khaled Rasheed. Effect of Global Parallelism on a Steady State GA.
In Evolutionary Computation and Parallel Processing Workshop [482], pages 167–170, 1999.
Fully available at http://www.cse.lehigh.edu/˜brian/pubs/1999/cec99/pgado.
pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.112.5296. See also [2270].
699. Richard Dawkins. The Evolution of Evolvability. In Artificial Life’87 [1666], pages 201–220,
1987.
700. Richard Dawkins. The Selfish Gene. Oxford University Press, Inc.: New York, NY,
USA, 1st/2nd edition, 1976–October 1989. isbn: 0-192-86092-5. Google Books
ID: WkHO9HI7koEC.
701. Richard Dawkins. Climbing Mount Improbable. Penguin Books: London, UK and W.W.
Norton & Company: New York, NY, USA, 1st edition, 1996. isbn: 0140179186, 0141026170,
and 0670850187. Google Books ID: 5XUYHAAACAAJ, BuKaKAAACAAJ, TPzqAAAACAAJ, and
TvzqAAAACAAJ. asin: 0393039307 and 0393316823.
702. Sérgio Granato de Araújo, Aloysio de Castro Pinto Pedroza, and Antônio Carneiro de
Mesquita Filho. Evolutionary Synthesis of Communication Protocols. In ICT’03 [1776], pages
986–993, volume 2, 2003. doi: 10.1109/ICTEL.2003.1191573. Fully available at http://www.
gta.ufrj.br/ftp/gta/TechReports/AMP03a.pdf [accessed 2008-06-21]. See also [703, 705].
703. Sérgio Granato de Araújo, Aloysio de Castro Pinto Pedroza, and Antônio Carneiro de
Mesquita Filho. Uma Metodologia de Projeto de Protocolos de Comunicação Baseada em Técnicas
Evolutivas (A Communication Protocol Design Methodology Based on Evolutionary Techniques).
In SBrT’03 [1274], 2003. Fully available at http://www.gta.ufrj.
br/ftp/gta/TechReports/AMP03e.pdf [accessed 2008-06-21]. See also [702, 704].
704. Sérgio Granato de Araújo, Antônio Carneiro de Mesquita Filho, and Aloysio de Castro Pinto
Pedroza. Síntese de Circuitos Digitais Otimizados via Programação Genética (Synthesis of
Optimized Digital Circuits via Genetic Programming). In SEMISH’03 [3003], pages 273–285,
volume III, 2003. Fully available at http://www.gta.ufrj.
br/ftp/gta/TechReports/AMP03d.pdf [accessed 2008-06-21].
705. Sérgio Granato de Araújo, Antônio Carneiro de Mesquita Filho, and Aloysio C. P. Pedroza.
A Scenario-Based Approach to Protocol Design Using Evolutionary Techniques. In EvoWork-
shops’04 [2254], pages 178–187, 2004. See also [702].
706. Bart de Boer. Classifier Systems: A useful Approach to Machine Learning?, IR94-02. Mas-
ter’s thesis, Leiden University: Leiden, The Netherlands, August 1994, Ida Sprinkhuizen-
Kuyper and Egbert J. W. Boers, Supervisors. Fully available at ftp://ftp.cs.bham.
ac.uk/pub/authors/T.Kovacs/lcs.archive/DeBoer1994a.ps.gz [accessed 2010-12-15].
CiteSeerx : 10.1.1.38.6367.
707. Leandro Nunes de Castro, Fernando José Von Zuben, and Helder Knidel, editors. Pro-
ceedings of the 6th International Conference on Artificial Immune Systems (ICARIS’07),
August 26–29, 2007, Santos, Brazil, volume 4628 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
708. Ivanoe de Falco, Antonio Della Cioppa, Domenico Maisto, Umberto Scafuri, and Ernesto
Tarantino. Extremal Optimization as a Viable Means for Mapping in Grids. In EvoWork-
shops’09 [1052], pages 41–50, 2009. doi: 10.1007/978-3-642-01129-0 5.
709. Edwin D. de Jong, Richard A. Watson, and Jordan B. Pollack. Reducing Bloat and Pro-
moting Diversity using Multi-Objective Methods. In GECCO’01 [2570], pages 11–18, 2001.
Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/jong_2001_
gecco.html and http://www.demo.cs.brandeis.edu/papers/rbpd_gecco01.ps.
gz [accessed 2009-07-09]. CiteSeerx : 10.1.1.28.4549.
710. Gerdien de Jong. Evolution of Phenotypic Plasticity: Patterns of Plasticity and the Emer-
gence of Ecotypes. New Phytologist, 166(1):101–118, April 2005, New Phytologist Trust and
Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1111/j.1469-8137.2005.01322.x.
711. Kenneth Alan De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Sys-
tems. PhD thesis, University of Michigan: Ann Arbor, MI, USA, August 1975, John Henry
Holland, Larry K. Flanigan, Richard A. Volz, and Bernard P. Zeigler, Committee members.
Fully available at http://cs.gmu.edu/˜eclab/kdj_thesis.html [accessed 2007-08-17]. Or-
der No.: AAI7609381.
712. Kenneth Alan De Jong. On Using Genetic Algorithms to Search Program Spaces. In
ICGA’87 [1132], pages 210–216, 1987.
713. Kenneth Alan De Jong. Learning with Genetic Algorithms: An Overview. Machine Learning,
3(2-3):121–138, October 1988, Kluwer Academic Publishers: Norwell, MA, USA and Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00113894.
714. Kenneth Alan De Jong. Genetic Algorithms are NOT Function Optimizers. In
FOGA’92 [2927], pages 5–17, 1992. Fully available at http://www.mli.gmu.edu/
papers/91-95/92-12.pdf [accessed 2011-05-30]. CiteSeerx : 10.1.1.161.5655.
715. Kenneth Alan De Jong. Evolutionary Computation: A Unified Approach, volume 4 in Com-
plex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, February 2006.
isbn: 0262041944 and 8120330021. Google Books ID: OIRQAAAAMAAJ. OCLC: 46500047,
182530408, and 276452339.
716. Kenneth Alan De Jong and William M. Spears. Learning Concept Classification Rules
using Genetic Algorithms. In IJCAI’91-II [1988], pages 651–656, 1991. Fully avail-
able at http://citeseer.ist.psu.edu/dejong91learning.html [accessed 2007-09-12].
CiteSeerx : 10.1.1.104.4397 and 10.1.1.56.5640.
717. Kenneth Alan De Jong, William M. Spears, and Diana F. Gordon. Using Genetic
Algorithms for Concept Learning. Machine Learning, 13:161–188, 1993, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
Fully available at http://www.aic.nrl.navy.mil/papers/1993/AIC-93-050.ps.
Z and http://www.cs.uwyo.edu/˜dspears/papers/mlj93.pdf [accessed 2010-12-14].
CiteSeerx : 10.1.1.115.8585 and 10.1.1.51.8124.
718. Kenneth Alan De Jong, David B. Fogel, and Hans-Paul Schwefel. A History of Evolution-
ary Computation. In Evolutionary Computation 1: Basic Algorithms and Operators [174],
Chapter 6, page 40. Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back,
Bristol, UK, 2000.
719. Kenneth Alan De Jong, Riccardo Poli, and Jonathan E. Rowe, editors. Foundations of Genetic
Algorithms 7 (FOGA’02), September 4–6, 2002, Torremolinos, Spain. Morgan Kaufmann:
San Mateo, CA, USA. isbn: 0-12-208155-2. Published in 2003.
720. Pierre-Simon Marquis de Laplace. Théorie Analytique des Probabilités (Analytical Theory
of Probability). M. V. Courcier, Imprimeur-Libraire pour les Mathématiques, quai des Au-
gustins, no. 57: Paris, France, 1812. Google Books ID: 6MRLAAAAMAAJ, VWuGOwAACAAJ,
VmuGOwAACAAJ, dR6 PAAACAAJ, eB6 PAAACAAJ, and nQwAAAAAMAAJ. Première Partie.
721. Luis de-Marcos, José J. Martínez, José A. Gutiérrez, Roberto Barchino, and José M.
Gutiérrez. A New Sequencing Method in Web-Based Education. In CEC’09 [1350], pages
3219–3225, 2009. doi: 10.1109/CEC.2009.4983352. INSPEC Accession Number: 10688947.
722. Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck. Philosophie zoologique – ou
Exposition des considérations relatives à l’histoire naturelle des Animaux; à la diversité de leur
organisation et des facultés qu’ils en obtiennent; . . . (Zoological Philosophy – or Exposition of
Considerations Relative to the Natural History of Animals; on the Diversity of their Organization
and the Faculties they Obtain from It; . . . ). Dentu: Paris, France
and J. B. Baillière Liberaire: Paris, France, 1809–1830. isbn: 1412116465. Fully avail-
able at http://www.lamarck.cnrs.fr/ice/ice_book_detail.php?type=text&
bdd=lamarck&table=ouvrages_lamarck&bookId=29 [accessed 2008-09-10]. Google Books
ID: 0gkOAAAAQAAJ, L6qAG6ZPgj4C, rlAKHKY8RboC, and vUIDAAAAQAAJ. Philosophie
zoologique – ou Exposition des considérations relatives à l’histoire naturelle des Animaux; à
la diversité de leur organisation et des facultés qu’ils en obtiennent; aux causes physiques qui
maintiennent en eux la vie et donnent lieu aux mouvements qu’ils exécutant; enfin, à celles
qui produisent les unes le sentiment, et les autres l’intelligence de ceux qui en sont doués.
723. Luc de Raedt and Arno Siebes, editors. 5th European Conference on Principles of Data
Mining and Knowledge Discovery (PKDD’01), September 3–5, 2001, Freiburg, Germany,
volume 2168 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44794-6.
isbn: 3-540-42534-9. Google Books ID: U9IXQfcoqvwC.
724. Luc de Raedt and Stefan Wrobel, editors. Proceedings of the 22nd International Conference
on Machine Learning (ICML’05), August 7–11, 2005, University of Bonn: Bonn, North
Rhine-Westphalia, Germany, volume 119 in ACM International Conference Proceeding Series
(AICPS). ACM Press: New York, NY, USA. isbn: 1-59593-180-5.
725. Fabiano Luis de Sousa and Fernando Manuel Ramos. Function Optimization us-
ing Extremal Dynamics. In ICIPE’02 [2092], pages 115–119, volume 1, 2002.
CiteSeerx : 10.1.1.128.9802.
726. Fabiano Luis de Sousa and Walter Kenkiti Takahashi. Discrete Optimal De-
sign of Trusses by Generalized Extremal Optimization. In WCSMO6 [1226], 2005.
CiteSeerx : 10.1.1.76.8740.
727. Fabiano Luis de Sousa, Fernando Manuel Ramos, Pedro Paglione, and Roberto M. Girardi.
New Stochastic Algorithm for Design Optimization. AIAA Journal, 41(9):1808–1818, 2003,
American Institute of Aeronautics and Astronautics: Reston, VA, USA. doi: 10.2514/2.7299.
728. Fabiano Luis de Sousa, Valeri Vlassov, and Fernando Manuel Ramos. Generalized Extremal
Optimization: An Application in Heat Pipe Design. Applied Mathematical Modelling, 28
(10):911–931, October 2004, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
doi: 10.1016/j.apm.2004.04.004.
729. Dominique de Werra and Alain Hertz. Tabu Search Techniques: A Tutorial
and an Application to Neural Networks. OR Spectrum – Quantitative Approaches
in Management, 11(3):131–141, September 1989, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/BF01720782. Fully available at http://www.springerlink.de/content/
x25k97k0qx237553/fulltext.pdf [accessed 2008-03-27].
730. Thomas L. Dean, editor. Proceedings of the Sixteenth International Joint Conference
on Artificial Intelligence (IJCAI’99-I), July 31–August 6, 1999, City Conference Center:
Stockholm, Sweden, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-613-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-99-VOL-1/CONTENT/content.htm [accessed 2008-04-01]. See also [731, 2296].
731. Thomas L. Dean, editor. Proceedings of the Sixteenth International Joint Conference
on Artificial Intelligence (IJCAI’99-II), July 31–August 6, 1999, City Conference Cen-
ter: Stockholm, Sweden, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-613-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-99-VOL-2/CONTENT/content.htm [accessed 2008-04-01]. See also [730, 2296].
732. Thomas L. Dean and Kathleen McKeown, editors. Proceedings of the 9th National Conference
on Artificial Intelligence (AAAI’91), July 14–19, 1991, Anaheim, CA, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51059-6. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai91.php [accessed 2007-09-06].
2 volumes.
733. James S. DeArmon. A Comparison of Heuristics for the Capacitated Chinese Postman Prob-
lem. Master’s thesis, University of Maryland: College Park, MD, USA, 1981.
734. Ed Deaton, editor. Proceedings of the 1994 ACM Symposium on Applied Computing (SAC’94),
March 6–8, 1994, Phoenix Civic Plaza: Phoenix, AZ, USA. ACM Press: New York, NY, USA.
isbn: 0-89791-647-6. Google Books ID: uq1QAAAAMAAJ. OCLC: 31391245.
735. Kalyanmoy Deb. Genetic Algorithms for Multimodal Function Optimization. Master’s thesis,
Clearinghouse for Genetic Algorithms, University of Alabama: Tuscaloosa, AL, USA, 1989. TCGA
Report No. 89002.
736. Kalyanmoy Deb. Solving Goal Programming Problems Using Multi-Objective Genetic Algo-
rithms. In CEC’99 [110], pages 77–84, volume 1, 1999. doi: 10.1109/CEC.1999.781910.
737. Kalyanmoy Deb. Evolutionary Algorithms for Multi-Criterion Optimization in
Engineering Design. In EUROGEN’99 [1894], pages 135–161, 1999. Fully
available at http://www.lania.mx/˜ccoello/EMOO/deb99.ps.gz [accessed 2009-08-05].
CiteSeerx : 10.1.1.52.3761.
738. Kalyanmoy Deb. An Introduction to Genetic Algorithms. Sādhanā – Academy Proceedings in
Engineering Sciences, 24(4-5):293–315, August 1999, Indian Academy of Sciences: Bangalore,
India and Springer India: New Delhi, India. doi: 10.1007/BF02823145. Fully available
at http://www.iitk.ac.in/kangal/papers/sadhana.ps.gz [accessed 2007-07-28].
739. Kalyanmoy Deb. Nonlinear Goal Programming using Multi-Objective Genetic Al-
gorithms. The Journal of the Operational Research Society (JORS), 52(3):291–302,
March 2001, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Op-
erations Research Society: Birmingham, UK. doi: 10.1057/palgrave.jors.2601089. Fully
available at http://www.lania.mx/˜ccoello/EMOO/deb01d.ps.gz [accessed 2008-11-14].
CiteSeerx : 10.1.1.6.6775.
740. Kalyanmoy Deb. Multi-Objective Optimization Using Evolutionary Algorithms, Wiley In-
terscience Series in Systems and Optimization. John Wiley & Sons Ltd.: New York, NY,
USA, May 2001. isbn: 047187339X. Google Books ID: OSTn4GSy2uQC and ll Blt03u5IC.
OCLC: 46472121 and 46694962.
741. Kalyanmoy Deb. Genetic Algorithms for Optimization. KanGAL Report 2001002, Kan-
pur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering,
Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, 2001. Fully
available at http://www.iitk.ac.in/kangal/papers/isna.ps.gz [accessed 2009-07-09].
CiteSeerx : 10.1.1.6.9296.
742. Kalyanmoy Deb and David Edward Goldberg. An Investigation of Niche and Species For-
mation in Genetic Function Optimization. In ICGA’89 [2414], pages 42–50, 1989.
743. Kalyanmoy Deb and David Edward Goldberg. Sufficient Conditions for Deceptive and Easy
Binary Functions. Annals of Mathematics and Artificial Intelligence, 10(4):385–408, Decem-
ber 1994, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01531277. See also
[744].
744. Kalyanmoy Deb and David Edward Goldberg. Sufficient Conditions for Deceptive and Easy
Binary Functions. Technical Report 92001, Illinois Genetic Algorithms Laboratory (IlliGAL),
Department of Computer Science, Department of General Engineering, University of Illinois
at Urbana-Champaign: Urbana-Champaign, IL, USA, 1992. See also [743].
745. Kalyanmoy Deb and Dhish Kumar Saxena. On Finding Pareto-Optimal Solutions Through
Dimensionality Reduction for Certain Large-Dimensional Multi-Objective Optimization
Problems. KanGAL Report 2005011, Kanpur Genetic Algorithms Laboratory (KanGAL),
Department of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kan-
pur, Uttar Pradesh, India, December 2005. Fully available at http://www.iitk.ac.in/
kangal/papers/k2005011.pdf [accessed 2009-07-18].
746. Kalyanmoy Deb and J. Sundar. Reference Point Based Multi-Objective Optimization using
Evolutionary Algorithms. In GECCO’06 [1516], 2006. doi: 10.1145/1143997.1144112. See
also [753, 756].
747. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T Meyarivan. A Fast Elitist
Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-
II. KanGAL Report 200001, Kanpur Genetic Algorithms Laboratory (KanGAL), De-
partment of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kan-
pur, Uttar Pradesh, India, 2000. Fully available at http://vision.ucsd.edu/
˜sagarwal/nsga2.pdf and http://www.jeo.org/emo/deb00.ps.gz [accessed 2009-07-18].
CiteSeerx : 10.1.1.18.4257. See also [748, 749].
748. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T Meyarivan. A Fast Elitist Non-
Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. In PPSN
VI [2421], pages 849–858, 2000. doi: 10.1007/3-540-45356-3 83. Fully available at https://
eprints.kfupm.edu.sa/17643/1/17643.pdf [accessed 2008-10-20]. See also [747, 749].
749. Kalyanmoy Deb, Amrit Pratab, Samir Agrawal, and T Meyarivan. A Fast and Elitist Mul-
tiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 6(2):182–197, April 2002, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.996017. Fully available at http://dynamics.org/˜altenber/UH_
ICS/EC_REFS/MULTI_OBJ/DebPratapAgarwalMeyarivan.pdf [accessed 2009-07-17]. See
also [747, 748].
750. Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable Multi-
Objective Optimization Test Problems. In CEC’02 [944], pages 825–830, volume 1, 2002.
doi: 10.1109/CEC.2002.1007032. CiteSeerx : 10.1.1.18.7531. INSPEC Accession Num-
ber: 7321894.
751. Kalyanmoy Deb, Riccardo Poli, Wolfgang Banzhaf, Hans-Georg Beyer, Edmund K. Burke,
Paul J. Darwen, Dipankar Dasgupta, Dario Floreano, James A. Foster, Mark Harman,
Owen E. Holland, Pier Luca Lanzi, Lee Spector, Andrea G. B. Tettamanzi, and Dirk
Thierens, editors. Proceedings of the Genetic and Evolutionary Computation Conference, Part
I (GECCO’04-I), June 26–30, 2004, Red Lion Hotel: Seattle, WA, USA, volume 3102/2004
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-22344-4. Google Books ID: b7k G2KpRrQC. OCLC: 55857927, 181345757,
and 315013110. Library of Congress Control Number (LCCN): 2004107860. See also
[752, 1515, 2079].
752. Kalyanmoy Deb, Riccardo Poli, Wolfgang Banzhaf, Hans-Georg Beyer, Edmund K. Burke,
Paul J. Darwen, Dipankar Dasgupta, Dario Floreano, James A. Foster, Mark Harman,
Owen E. Holland, Pier Luca Lanzi, Lee Spector, Andrea G. B. Tettamanzi, and Dirk
Thierens, editors. Proceedings of the Genetic and Evolutionary Computation Conference,
Part II (GECCO’04-II), June 26–30, 2004, Red Lion Hotel: Seattle, WA, USA, volume
3103/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-22343-6. Google Books ID: ftKoWMXvHKoC. Library of Congress
Control Number (LCCN): 2004107860. See also [751, 1515, 2079].
753. Kalyanmoy Deb, J. Sundar, and Udaya Bhaskara Rao N. Reference Point Based Multi-
Objective Optimization using Evolutionary Algorithms. KanGAL Report 2005012, Kanpur
Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian
Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, December 2005. Fully
available at http://www.iitk.ac.in/kangal/papers/k2005012.pdf [accessed 2009-07-
18]. See also [746, 756].
754. Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-Objective Test Problems, Link-
ages, and Evolutionary Methodologies. In GECCO’06 [1516], pages 1141–1148, 2006.
doi: 10.1145/1143997.1144179. See also [755].
755. Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-Objective Test Problems, Link-
ages, and Evolutionary Methodologies. KanGAL Report 2006001, Kanpur Genetic Algo-
rithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian Institute of
Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, January 2006. Fully available
at http://www.iitk.ac.in/kangal/papers/k2006001.pdf [accessed 2009-07-11]. See also
[754].
756. Kalyanmoy Deb, J. Sundar, Udaya Bhaskara Rao N., and Shamik Chaudhuri. Reference Point
Based Multi-Objective Optimization using Evolutionary Algorithms. International Journal
of Computational Intelligence Research (IJCIR), 2(3):273–286, 2006, Research India Publica-
1016 REFERENCES
tions: Delhi, India. Fully available at http://www.lania.mx/˜ccoello/deb06b.pdf.
gz and http://www.ripublication.com/ijcirv2/ijcirv2n3_4.pdf [accessed 2009-07-
18]. See also [746, 753].
757. Stefano Debattisti, Nicola Marlat, Luca Mussi, and Stefano Cagnoni. Implementation of
a Simple Genetic Algorithm within the CUDA Architecture. Technical Report, Università
degli Studi di Parma, Dipartimento di Ingegneria dell’Informazione: Parma, Italy, 2009. Fully
available at http://www.gpgpgpu.com/gecco2009/3.pdf [accessed 2011-11-14].
758. Rina Dechter and Judea Pearl. Generalized Best-First Search Strategies and the Optimality of
A*. Journal of the Association for Computing Machinery (JACM), 32(3):505–536, July 1985,
ACM Press: New York, NY, USA. doi: 10.1145/3828.3830. CiteSeerx : 10.1.1.89.3090.
759. Rina Dechter and Richard Sutton, editors. Proceedings of the Eighteenth National Con-
ference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of
Artificial Intelligence (AAAI’02, IAAI’02), July 28–August 1, 2002, Edmonton, Alberta,
Canada. AAAI Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/
Conferences/AAAI/aaai02.php and http://www.aaai.org/Conferences/IAAI/
iaai02.php [accessed 2007-09-06].
760. Garg Deepak, editor. Soft Computing. Allied Publishers: Vadodara, Gujarat, India, 2005.
isbn: 817764632X. Google Books ID: IkajJC9iGxMC.
761. Viktoriya Degeler, Ilče Georgievski, Alexander Lazovik, and Marco Aiello. Concept Mapping
for Faster QoS-Aware Web Service Composition. In SOCA’10 [1378], 2010. See also [48–
50, 320].
762. Frank K. H. A. Dehne, Todd Eavis, and Andrew Rau-Chaplin. Efficient Computation of
View Subsets. In DOLAP’07 [2556], pages 65–72, 2007. doi: 10.1145/1317331.1317343.
Fully available at http://dolap07.cs.aau.dk/dehne.pdf [accessed 2011-03-29].
CiteSeerx : 10.1.1.68.9167.
763. Marcel Dekker. Introduction to Set Theory. CRC Press, Inc.: Boca Raton, FL, USA, revised
and expanded, 3rd edition, June 22, 1999. isbn: 0-8247-7915-0.
764. Sarah Jane Delany and Michael Madden, editors. 18th Irish Conference on Artificial Intelli-
gence and Cognitive Science (AICS’07), August 29–31, 2007, Dublin Institute of Technology:
Dublin, Ireland.
765. Federico Della Croce, Roberto Tadei, and Giuseppe Volta. A Genetic Algorithm for
the Job Shop Problem. Computers & Operations Research, 22(1):15–24, January 1995,
Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0305-0548(93)E0015-L.
766. Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of
Machine Learning Research (JMLR), 7:1–30, January 2006, MIT Press: Cambridge, MA,
USA. Fully available at http://jmlr.csail.mit.edu/papers/volume7/demsar06a/
demsar06a.pdf [accessed 2010-10-19]. CiteSeerx : 10.1.1.141.3142. See also [1020].
767. Jean-Louis Deneubourg and Simon Goss. Collective Patterns and Decision-Making. Ethol-
ogy, Ecology & Evolution, 1(4):295–311, December 1989. Fully available at http://www.
ulb.ac.be/sciences/use/publications/JLD/53.pdf [accessed 2010-12-09].
768. Jean-Louis Deneubourg, Jacques M. Pasteels, and J. C. Verhaeghe. Probabilistic Behaviour
in Ants: A Strategy of Errors? Journal of Theoretical Biology, 105(2):259–271, 1983, El-
sevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press
Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0022-5193(83)80007-1.
769. Berna Dengiz, Fulya Altiparmak, and Alice E. Smith. Local Search Genetic Algorithm
for Optimal Design of Reliable Networks. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 1(3):179–188, September 1997, IEEE Computer Society: Washington, DC, USA.
Fully available at http://www.eng.auburn.edu/˜aesmith/publications/journal/
ieeeec.pdf [accessed 2009-09-01]. CiteSeerx : 10.1.1.40.8821.
770. Roozbeh Derakhshan, Frank K. H. A. Dehne, Othmar Korn, and Bela Stantic. Sim-
ulated Annealing For Materialized View Selection in Data Warehousing Environment.
In DBA’06 [1407], pages 89–94, 2006. Fully available at http://www.inf.ethz.ch/
personal/droozbeh/docs/503-036.pdf [accessed 2010-09-11]. See also [771].
771. Roozbeh Derakhshan, Bela Stantic, Othmar Korn, and Frank K. H. A. Dehne. Par-
allel Simulated Annealing for Materialized View Selection in Data Warehousing Envi-
ronments. In ICA3PP [372], pages 121–132, 2008. doi: 10.1007/978-3-540-69501-1_14.
CiteSeerx : 10.1.1.148.4815. See also [770].
772. Anna Derezińska. Advanced Mutation Operators Applicable in C# Programs. In
SET’06 [2366], pages 283–288, 2006.
773. Ulrich Derigs, editor. Optimization and Operations Research, Encyclopedia of Life Support
Systems (EOLSS). Developed under the Auspices of the UNESCO, Eolss Publishers: Oxford,
UK.
774. Proceedings of the 1978 ACM 7th Annual Computer Science Conference (CSC’78), Febru-
ary 21–23, 1978, Detroit, MI, USA.
775. I. Devarenne, Alexandre Caminada, H. Mabed, and T. Defaix. Adaptive Local Search for
a New Military Frequency Hopping Planning Problem. In EvoWorkshops’08 [1051], pages
11–20, 2008. doi: 10.1007/978-3-540-78761-7_2.
776. Vladan Devedžic, editor. Proceedings of the 25th IASTED International Multi-Conference on
Artificial Intelligence and Applications (AIAP’07), February 12–14, 2007, Innsbruck, Aus-
tria. ACTA Press: Anaheim, CA, USA. Partly available at http://www.iasted.org/
conferences/pastinfo-549.html [accessed 2011-04-09].
777. Alexandre Devert. Building Processes Optimization: Toward an Artificial Ontogeny based
Approach. PhD thesis, Université Paris-Sud, Ecole Doctorale d’Informatique: Paris, France
and Institut National de Recherche en Informatique et en Automatique (INRIA), Centre de
Recherche Saclay - Île-de-France: Orsay, France, May 2009, Marc Schoenauer and Nicolas
Bredèche, Advisors.
778. Alexandre Devert, Thomas Weise, and Ke Tang. A Study on Scalable Representations for
Evolutionary Optimization of Ground Structures. Evolutionary Computation, 20, 2012, MIT
Press: Cambridge, MA, USA. doi: 10.1162/EVCO_a_00054. PubMed ID: 22004002.
779. Yves Deville and Christine Solnon, editors. EPTCS 5: Proceedings 6th International Work-
shop on Local Search Techniques in Constraint Satisfaction, September 20, 2009, Lisbon,
Portugal. doi: 10.4204/EPTCS.5. arXiv ID: 0910.1404v1.
780. Yves Deville and Christine Solnon, editors. Proceedings of the 7th Workshop on Local Search
Techniques in Constraint Satisfaction, September 6, 2010, St Andrews, Scotland, UK.
781. Luc Devroye. Non-Uniform Random Variate Generation. Springer-Verlag London Lim-
ited: London, UK, 1986. isbn: 0-387-96305-7 and 3-540-96305-7. Fully available
at http://cg.scs.carleton.ca/˜luc/rnbookindex.html [accessed 2007-07-05]. Google
Books ID: J9KZAQAACAAJ and SJgWGQAACAAJ. OCLC: 260101691 and 300414888.
782. Patrik D’Haeseleer. Context Preserving Crossover in Genetic Programming. In
CEC’94 [1891], pages 256–261, volume 1, 1994. doi: 10.1109/ICEC.1994.350006.
CiteSeerx : 10.1.1.56.2785. INSPEC Accession Number: 4812326.
783. C. A. Dhote and M. S. Ali. Materialized View Selection in Data Warehousing: A Survey.
Journal of Applied Sciences, 9(3):401–414, 2009, Asian Network for Scientific Information
(ANSI): Faisalabad, Pakistan. doi: 10.3923/jas.2009.401.414. Fully available at http://
docsdrive.com/pdfs/ansinet/jas/2009/401-414.pdf [accessed 2011-04-13].
784. Gianni A. Di Caro. Ant Colony Optimization and its Application to Adaptive Routing
in Telecommunication Networks. PhD thesis, Applied Sciences, Polytechnic School, Uni-
versité Libre de Bruxelles: Brussels, Belgium, September 2004, Marco Dorigo, Advisor.
Partly available at http://www.idsia.ch/˜gianni/Papers/routing-chapters.pdf
and http://www.idsia.ch/˜gianni/Papers/thesis-abstract.pdf [accessed 2008-07-
26]. See also [786].
785. Gianni A. Di Caro and Marco Dorigo. AntNet: Distributed Stigmergetic Control for
Communications Networks. Journal of Artificial Intelligence Research (JAIR), 9:317–
365, December 1998, AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI
Press: Menlo Park, CA, USA. Fully available at http://www.jair.org/media/544/
live-544-1748-jair.pdf [accessed 2009-07-21]. CiteSeerx : 10.1.1.53.4154. See also [786].
786. Gianni A. Di Caro and Marco Dorigo. Two Ant Colony Algorithms for Best-Effort Routing
in Datagram Networks. In PDCS’98 [1409], pages 541–546, 1998. Fully available at http://
citeseer.ist.psu.edu/dicaro98two.html and http://www.idsia.ch/˜gianni/
Papers/PDCS98.ps.gz [accessed 2008-07-26]. See also [784, 785].
787. Gianni A. Di Caro, Frederick Ducatelle, and Luca Maria Gambardella. Wireless Commu-
nications for Distributed Navigation in Robot Swarms. In EvoWorkshops’09 [1052], pages
21–30, 2009. doi: 10.1007/978-3-642-01129-0_3.
788. Cecilia Di Chio, Stefano Cagnoni, Carlos Cotta, Marc Ebner, Anikó Ekárt, Anna Isabel
Esparcia-Alcázar, Juan Julián Merelo-Guervós, Ferrante Neri, Mike Preuß, Hendrik Richter,
Julian Togelius, and Georgios N. Yannakakis, editors. Applications of Evolutionary Computa-
tion – Proceedings of EvoApplications 2011: EvoCOMPLEX, EvoGAMES, EvoIASP, EvoIN-
TELLIGENCE, EvoNUM, and EvoSTOC, Part 1 (EvoAPPLICATIONS’11), April 27–29,
2011, Torino, Italy, volume 6624 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-20525-5. Library of Congress Control Number (LCCN): 2011925061.
789. Robert P. Dick and Niraj K. Jha. MOGAC: A Multiobjective Genetic Algorithm for
Hardware-Software Co-synthesis of Hierarchical Heterogeneous Distributed Embedded Sys-
tems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,
17(10):920–935, October 1998, IEEE Circuits and Systems Society: Piscataway, NJ, USA.
doi: 10.1109/43.728914. Fully available at http://www.jeo.org/emo/dick98.ps.gz [ac-
cessed 2009-07-31]. CiteSeerx : 10.1.1.46.5584. INSPEC Accession Number: 6092176.
790. Dirk Dickmanns, Jürgen Schmidhuber, and Andreas Winklhofer. Der Genetische Algo-
rithmus: Eine Implementierung in Prolog. Technische Universität München, Institut für
Informatik, Lehrstuhl Professor Radig: Munich, Bavaria, Germany, 1987. Fully avail-
able at http://www.idsia.ch/˜juergen/geneticprogramming.html [accessed 2007-11-
01]. Advanced student practical project (Fortgeschrittenenpraktikum).
791. Reinhard Diestel. Graph Theory, volume 173 in Graduate Texts in Mathematics. Springer
New York: New York, NY, USA, 3rd edition, 2006. isbn: 3540261834. Google Books
ID: aR2TMYQr2CMC.
792. Thomas G. Dietterich. Overfitting and Undercomputing in Machine Learning. ACM
Computing Surveys (CSUR), 27(3):326–327, 1995, ACM Press: New York, NY, USA.
doi: 10.1145/212094.212114. CiteSeerx : 10.1.1.55.2069.
793. Thomas G. Dietterich and William R. Swartout, editors. Proceedings of the 8th Na-
tional Conference on Artificial Intelligence (AAAI’90), July 29–August 3, 1990, Boston,
MA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA.
isbn: 0-262-51057-X. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai90.php [accessed 2007-09-06]. 2 volumes.
794. Jason Digalakis and Konstantinos Margaritis. A Parallel Memetic Algorithm for Solv-
ing Optimization Problems. In MIC’01 [2294], pages 121–125, 2001. Fully available
at http://citeseer.ist.psu.edu/digalakis01parallel.html and http://www.
it.uom.gr/people/digalakis/digiasfull.pdf [accessed 2007-09-12]. See also [795].
795. Jason Digalakis and Konstantinos Margaritis. Performance Comparison of Memetic Al-
gorithms. Journal of Applied Mathematics and Computation, 158:237–252, October 2004,
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.amc.2003.08.115. Fully avail-
able at http://www.complexity.org.au/ci/draft/draft/digala02/digala02s.
pdf and http://www.complexity.org.au/ci/vol10/digala02/digala02s.pdf [ac-
cessed 2010-09-11]. CiteSeerx : 10.1.1.21.5495. See also [794].
796. Edsger Wybe Dijkstra. A Note on Two Problems in Connexion with Graphs. Numerische
Mathematik, 1:269–271, 1959. Fully available at http://www-m3.ma.tum.de/twiki/
pub/MN0506/WebHome/dijkstra.pdf [accessed 2008-06-24].
797. Werner Dilger. Einführung in die Künstliche Intelligenz. Chemnitz University of Technol-
ogy, Faculty of Computer Science, Chair of Artificial Intelligence (Künstliche Intelligenz):
Chemnitz, Saxony, Germany, April 2006. Lecture notes of the course on Artificial
Intelligence held by Prof. Dilger in 2006.
798. Paolo Dini. Section 1: Science, New Paradigms. Subsection 1: A Scientific Foundation
for Digital Ecosystems. In Digital Business Ecosystems [1989]. Office for Official Publi-
cations of the European Communities: Luxembourg, 2007. Fully available at http://
www.digital-ecosystems.org/book/Section1.pdf and http://www.ieee-dest.
curtin.edu.au/2007/PDini.pdf [accessed 2008-11-09]. See also: Keynote Talk “Structure
and Outlook of Digital Ecosystems Research”, IEEE Digital Ecosystems Technologies Con-
ference, DEST07, Cairns, Australia, February 20–23, 2007. See also [799].
799. Paolo Dini. Digital Business Ecosystems, WP 18: Socio-economic Perspectives of Sus-
tainability and Dynamic Specification of Behaviour in Digital Business Ecosystems,
Deliverable 18.4: Report on Self-Organisation from a Dynamical Systems and Computer
Science Viewpoint. Information Society Technologies, March 5, 2007. Fully available
at http://files.opaals.org/DBE/deliverables/Del_18.4_DBE_Report_on_
self-organisation_from_a_dynamical_systems_and_computer_viewpoint.
pdf [accessed 2008-11-09]. Contract nr. 507953.
800. Yefim Dinitz. Dinitz’ Algorithm: The Original Version and Even’s Version. In Theoretical
Computer Science: Essays in Memory of Shimon Even [1093], pages 218–240. Springer-Verlag
GmbH: Berlin, Germany, 2006. Fully available at http://www.cs.bgu.ac.il/˜dinitz/
Papers/Dinitz_alg.pdf [accessed 2010-09-01].
801. Andrej Dobnikar, Nigel C. Steele, David W. Pearson, and Rudolf F. Albrecht, editors. Pro-
ceedings of the 4th International Conference on Artificial Neural Nets and Genetic Algorithms
(ICANNGA’99), April 6–9, 1999, Portorož, Slovenia. Birkhäuser Verlag: Basel, Switzer-
land and Springer-Verlag GmbH: Berlin, Germany. isbn: 3-211-83364-1. Google Books
ID: clKwynlfZYkC.
802. Karl Doerner, Manfred Gronalt, Richard F. Hartl, Marc Reimann, Christine Strauss,
and Michael Stummer. Savings Ants for the Vehicle Routing Problem. In EvoWork-
shops’02 [465], pages 11–20, 2002. doi: 10.1007/3-540-46004-7_2. Fully avail-
able at http://epub.wu-wien.ac.at/dyn/virlib/wp/mediate/epub-wu-01_1f1.
pdf?ID=epub-wu-01_1f1 [accessed 2008-10-27]. CiteSeerx : 10.1.1.6.8882. See also [2811].
803. Benjamin Doerr, Edda Happ, and Christian Klein. Crossover can provably be
useful in Evolutionary Computation. In GECCO’08 [1519], pages 539–546, 2008.
doi: 10.1145/1389095.1389202. Fully available at http://www.mpi-inf.mpg.de/
˜doerr/papers/cpuinec.pdf and http://www.mpi-inf.mpg.de/˜edda/papers/
gecco2008_1.pdf [accessed 2011-11-23]. CiteSeerx : 10.1.1.163.9588. See also [804].
804. Benjamin Doerr, Edda Happ, and Christian Klein. Crossover can provably be useful in
Evolutionary Computation. Theoretical Computer Science, pages 1–17, 2010, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.tcs.2010.10.035. See also [803].
805. Pedro Domingos and Michael Pazzani. On the Optimality of the Simple Bayesian Clas-
sifier under Zero-One Loss. Machine Learning, 29(2-3):103–130, November 1997, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1007413511361. Fully available at http://citeseer.ist.psu.
edu/old/domingos97optimality.html and http://www.ics.uci.edu/˜pazzani/
Publications/mlj97-pedro.pdf [accessed 2009-09-10].
806. Wolfgang Domschke. Logistik, Rundreisen und Touren, Oldenbourgs Lehr- und Handbücher
der Wirtschafts- u. Sozialwissenschaften. Oldenbourg Verlag: Munich, Bavaria, Germany,
fourth edition, 1997. isbn: 978-3-486-24273-7.
807. Marco Dorigo and Christian Blum. Ant Colony Optimization Theory: A Survey. Theo-
retical Computer Science, 344(2-3), November 17, 2005, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.tcs.2005.05.020. Fully available at http://code.ulb.ac.be/
dbfiles/DorBlu2005tcs.pdf [accessed 2007-08-05].
808. Marco Dorigo and Luca Maria Gambardella. Ant Colony System: A Cooperative Learning
Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Compu-
tation (IEEE-EC), 1(1):53–66, April 1997, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.585892. Fully available at http://www.idsia.ch/˜luca/acs-ec97.
pdf [accessed 2009-06-27]. CiteSeerx : 10.1.1.49.7702.
809. Marco Dorigo and Thomas Stützle. Ant Colony Optimization, Bradford Books. MIT
Press: Cambridge, MA, USA, July 1, 2004. isbn: 0-262-04219-3. Google Books
ID: aefcpY8GiEC.
810. Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. The Ant System: Optimization by a
Colony of Cooperating Agents. IEEE Transactions on Systems, Man, and Cybernetics – Part
B: Cybernetics, 26(1):29–41, February 1996, IEEE Systems, Man, and Cybernetics Society:
New York, NY, USA. doi: 10.1109/3477.484436. Fully available at ftp://iridia.ulb.ac.
be/pub/mdorigo/journals/IJ.10-SMC96.pdf and http://www.agent.ai/doc/
upload/200302/dori96.pdf [accessed 2009-06-26]. CiteSeerx : 10.1.1.50.6491. INSPEC
Accession Number: 5191313.
811. Marco Dorigo, Gianni A. Di Caro, and Luca Maria Gambardella. Ant Algorithms for Discrete
Optimization. Technical Report IRIDIA/98-10, Université Libre de Bruxelles: Brussels,
Belgium, 1998. CiteSeerx : 10.1.1.72.9591. See also [812].
812. Marco Dorigo, Gianni A. Di Caro, and Luca Maria Gambardella. Ant Algorithms for
Discrete Optimization. Artificial Life, 5(2):137–172, Spring 1999, MIT Press: Cambridge,
MA, USA. doi: 10.1162/106454699568728. Fully available at ftp://ftp.idsia.ch/
pub/luca/papers/ij_23-alife99.ps.gz, http://code.ulb.ac.be/dbfiles/
DorDicGam1999al.pdf, http://www.cs.ubc.ca/˜hutter/earg/papers04-05/
artificial_life.pdf, http://www.idsia.ch/˜luca/abstracts/papers/ij_
23-alife99.pdf, and http://www.idsia.ch/˜luca/ij_23-alife99.pdf [ac-
cessed 2010-12-10]. See also [811].
813. Marco Dorigo, Gianni A. Di Caro, and Thomas Stützle. Special issue on “Ant Algorithms”.
Future Generation Computer Systems – The International Journal of Grid Computing: The-
ory, Methods and Applications (FGCS), 16(8):851–955, June 2000, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands.
814. Marco Dorigo, Gianni A. Di Caro, and Thomas Stützle, editors. From Ant Colonies to Artifi-
cial Ants – First International Workshop on Ant Colony Optimization and Swarm Intelligence
(ANTS’98), October 15–16, 1998, Brussels, Belgium. See also [813].
815. Marco Dorigo, Luca Maria Gambardella, Martin Middendorf, and Thomas Stützle, editors.
From Ant Colonies to Artificial Ants – Second International Workshop on Ant Colony Opti-
mization and Swarm Intelligence (ANTS’00), September 8–9, 2000, Brussels, Belgium. See
also [817].
816. Marco Dorigo, Gianni A. Di Caro, and Michael Samples, editors. From Ant Colonies to
Artificial Ants – Third International Workshop on Ant Colony Optimization (ANTS’02),
September 12–14, 2002, Brussels, Belgium, volume 2463/2002 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45724-0.
isbn: 3-540-44146-8.
817. Marco Dorigo, Luca Maria Gambardella, Martin Middendorf, and Thomas Stützle. Special
Section on “Ant Colony Optimization”. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 6(4):317–365, August 2002, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2002.802446.
818. Marco Dorigo, Mauro Birattari, Christian Blum, Luca Maria Gambardella, Francesco Mon-
dada, and Thomas Stützle, editors. Fourth International Workshop on Ant Colony Optimiza-
tion and Swarm Intelligence (ANTS’04), September 5–8, 2004, Brussels, Belgium, volume
3172/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b99492.
819. Marco Dorigo, Mauro Birattari, and Thomas Stützle. Ant Colony Optimization. IEEE
Computational Intelligence Magazine, 1(4):28–39, November 2006, IEEE Computational In-
telligence Society: Piscataway, NJ, USA. doi: 10.1109/MCI.2006.329691. INSPEC Accession
Number: 9184238.
820. Marco Dorigo, Mauro Birattari, and Thomas Stützle. Ant Colony Optimization – Artificial
Ants as a Computational Intelligence Technique. IEEE Computational Intelligence Mag-
azine, 1(4):28–39, 2006, IEEE Computational Intelligence Society: Piscataway, NJ, USA.
doi: 10.1109/MCI.2006.329691. Fully available at http://iridia.ulb.ac.be/˜mbiro/
paperi/DorBirStu2006ieee-cim.pdf and http://iridia.ulb.ac.be/˜mbiro/
paperi/IridiaTr2006-023r001.pdf [accessed 2010-09-26]. CiteSeerx : 10.1.1.64.9532.
INSPEC Accession Number: 9184238.
821. Marco Dorigo, Luca Maria Gambardella, Mauro Birattari, A. Martinoli, Riccardo Poli, and
Thomas Stützle, editors. Fifth International Workshop on Ant Colony Optimization and
Swarm Intelligence (ANTS’06), September 4–7, 2006, Brussels, Belgium, volume 4150/2006
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/11839088.
822. Jürgen Dorn, Peter Hrastnik, and Albert Rainer. Web Service Discovery and Composition
with MOVE. In EEE’05 [1371], pages 791–792, 2005. doi: 10.1109/EEE.2005.142. INSPEC
Accession Number: 8530441. See also [317, 823, 2257].
823. Jürgen Dorn, Albert Rainer, and Peter Hrastnik. Toward Semantic Composi-
tion of Web Services with MOVE. In CEC/EEE’06 [2967], pages 437–438, 2006.
doi: 10.1109/CEC-EEE.2006.89. Fully available at http://move.ec3.at/Papers/
WSC06.pdf [accessed 2007-10-25]. INSPEC Accession Number: 9189342. See also [318, 822, 2257].
824. Bernabé Dorronsoro Díaz. The VRP Web. Auren: Madrid, Spain and University of Málaga
(UMA), Complejo Tecnológico, Departamento Lenguajes y Ciencias de la Computación:
Campus de Teatinos, Málaga, Spain, March 2007. Fully available at http://neo.lcc.
uma.es/radi-aeb/WebVRP/index.html [accessed 2011-10-12].
825. Bernabé Dorronsoro Díaz. Known Best Results. University of Málaga (UMA), Networking and
Emerging Optimization (NEO): Campus de Teatinos, Málaga, Spain, March 2007. Fully avail-
able at http://neo.lcc.uma.es/radi-aeb/WebVRP/results/BestResults.htm [ac-
cessed 2007-12-28].
826. Bernabé Dorronsoro Díaz, Patricia Ruiz, Grégoire Danoy, Pascal Bouvry, and Lorenzo J. Tar-
don. Towards Connectivity Improvement in VANETs using Bypass Links. In CEC’09 [1350],
pages 2201–2208, 2009. doi: 10.1109/CEC.2009.4983214. INSPEC Accession Num-
ber: 10688809.
827. Walter Dosch and Narayan C. Debnath, editors. Proceedings of the ISCA 13th Interna-
tional Conference on Intelligent and Adaptive Systems and Software Engineering (IASSE’04),
July 1–3, 2004, Nice, France. isbn: 1-880843-51-X. Google Books ID: KigWPQAACAAJ and
TyyyAAAACAAJ.
828. Norman Richard Draper and Harry Smith. Applied Regression Analysis, Wiley Series in Prob-
ability and Mathematical Statistics – Applied Probability and Statistics Section Series. Wi-
ley Interscience: Chichester, West Sussex, UK, 1966. isbn: 0471029955. OCLC: 6486827.
GBV-Identification (PPN): 024357847. Reviewed in [879].
829. Nicole Drechsler, Rolf Drechsler, and Bernd Becker. Multi-objective Optimization in Evo-
lutionary Algorithms Using Satisfiability Classes. In International Conference on Computa-
tional Intelligence: Theory and Applications – 6th Fuzzy Days [2295], pages 108–117, 1999.
doi: 10.1007/3-540-48774-3_14. Fully available at http://citeseer.ist.psu.edu/old/
drechsler99multiobjective.html [accessed 2009-07-18].
830. Nicole Drechsler, Rolf Drechsler, and Bernd Becker. Multi-Objective Optimisation Based on
Relation Favour. In EMO’01 [3099], pages 154–166, 2001. doi: 10.1007/3-540-44719-9_11.
Fully available at http://www.lania.mx/˜ccoello/EMOO/drechsler01.pdf.gz [ac-
cessed 2011-12-05]. CiteSeerx : 10.1.1.65.5318.
831. Dimiter Driankov, Peter W. Eklund, and Anca L. Ralescu, editors. Fuzzy Logic and Fuzzy
Control: Proceedings of Workshops on Fuzzy Logic and Fuzzy Control (IJCAI’91 Fuzzy
WS), August 24, 1991, Sydney, NSW, Australia, volume 833 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-58279-7.
OCLC: 150398218, 303915934, 303915937, 303915940, 303915944, 422860424,
473465173, and 501692697. See also [1987, 1988, 2122].
832. Moshe Dror, editor. Arc Routing: Theory, Solutions and Applications. Springer-Verlag
GmbH: Berlin, Germany, 2000. isbn: 0-7923-7898-9. Google Books ID: rPiHiE0ZZCkC.
833. Stefan Droste. Not All Linear Functions Are Equally Difficult for the Compact Genetic
Algorithm. In GECCO’05 [304], pages 679–686, 2005. doi: 10.1145/1068009.1068124. See
also [834].
834. Stefan Droste. A Rigorous Analysis of the Compact Genetic Algorithm for Linear Functions.
Natural Computing: An International Journal, 5(3):257–283, September 2006, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s11047-006-9001-0. See also [833].
835. Stefan Droste and Dirk Wiesmann. On Representation and Genetic Operators in Evo-
lutionary Algorithms. Technical Report CI–41/98, Universität Dortmund, Collaborative
Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Ger-
many, June 1998. Fully available at http://hdl.handle.net/2003/5341 [accessed 2009-07-
10]. CiteSeerx : 10.1.1.36.3354.
836. Stefan Droste, Thomas Jansen, and Ingo Wegener. On the Optimization of Unimodal
Functions with the (1+1) Evolutionary Algorithm. In PPSN V [866], pages 13–22, 1998.
doi: 10.1007/BFb0056845.
837. Stefan Droste, Thomas Jansen, and Ingo Wegener. Perhaps Not a Free Lunch But At Least
a Free Appetizer. Reihe Computational Intelligence: Design and Management of Complex
Technical Processes and Systems by Means of Computational Intelligence Methods CI-45/98,
Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dort-
mund, North Rhine-Westphalia, Germany, September 1998. Fully available at http://hdl.
handle.net/2003/5339 [accessed 2009-03-01]. issn: 1433-3325. See also [838].
838. Stefan Droste, Thomas Jansen, and Ingo Wegener. Perhaps Not a Free Lunch But At Least
a Free Appetizer. In GECCO’99 [211], pages 833–839, 1999. See also [837].
839. Stefan Droste, Thomas Jansen, and Ingo Wegener. Optimization with Randomized Search
Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions. Reihe Com-
putational Intelligence: Design and Management of Complex Technical Processes and Sys-
tems by Means of Computational Intelligence Methods CI-91/00, Universität Dortmund,
Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-
Westphalia, Germany, August 2000. Fully available at http://hdl.handle.net/2003/
5394 [accessed 2009-03-01]. issn: 1433-3325. See also [840].
840. Stefan Droste, Thomas Jansen, and Ingo Wegener. Optimization with Randomized Search
Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions. Theoretical
Computer Science, 287(1):131–144, September 25, 2002, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/S0304-3975(02)00094-4. CiteSeerx : 10.1.1.35.5850. See also
[839].
841. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO’11), July 12–
16, 2011, Dublin, Ireland.
842. Marc Dubreuil, Christian Gagné, and Marc Parizeau. Analysis of a Master-Slave Archi-
tecture for Distributed Evolutionary Computations. IEEE Transactions on Systems, Man,
and Cybernetics – Part B: Cybernetics, 36(1):229–235, February 2006, IEEE Systems, Man,
and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2005.856724. Fully
available at http://vision.gel.ulaval.ca/˜parizeau/publications/smc06.pdf
[accessed 2008-04-06]. INSPEC Accession Number: 8736975.
843. Frederick Ducatelle, Martin Roth, and Luca Maria Gambardella. Design of a User Space
Software Suite for Probabilistic Routing in Ad-Hoc Networks. In EvoWorkshops’07 [1050],
pages 121–128, 2007. doi: 10.1007/978-3-540-71805-5_13.
844. Richard O. Duda, Peter Elliot Hart, and David G. Stork. Pattern Classification, Esti-
mation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and
Optimization. Wiley Interscience: Chichester, West Sussex, UK, 2nd edition, Novem-
ber 2000. isbn: 0-471-05669-3. Google Books ID: YoxQAAAAMAAJ, hyQgQAAACAAJ, and
o3I8PgAACAAJ. OCLC: 41347061, 154744650, and 474918353. Library of Congress
Control Number (LCCN): 99029981. GBV-Identification (PPN): 303957670.
845. John Duffy and Jim Engle-Warnick. Using Symbolic Regression to Infer Strategies from
Experimental Data. In Fifth International Conference on Computing in Economics and Fi-
nance [262], page 150, 1999. Fully available at http://www.pitt.edu/˜jduffy/papers/
Usr.pdf [accessed 2010-12-02]. CiteSeerx : 10.1.1.34.4205. See also [846].
846. John Duffy and Jim Engle-Warnick. Using Symbolic Regression to Infer Strategies from Ex-
perimental Data. In Evolutionary Computation in Economics and Finance [549], Chapter 4,
pages 61–84. Springer-Verlag GmbH: Berlin, Germany, 2002. Fully available at http://
www.pitt.edu/˜jduffy/docs/Usr.ps [accessed 2007-09-09]. See also [845].
847. Dumitru (Dan) Dumitrescu, Beatrice Lazzerini, Lakhmi C. Jain, and A. Dumitrescu. Evo-
lutionary Computation, volume 18 in International Series on Computational Intelligence.
CRC Press, Inc.: Boca Raton, FL, USA, June 2000. isbn: 0-8493-0588-8. Google Books
ID: MSU9ep79JvUC. OCLC: 43894173, 247021679, and 468482770. Library of Congress
Control Number (LCCN): 00030348. LC Classification: QA76.618 .E882 2000.
848. Bruce S. Duncan and Arthur J. Olson. Applications of Evolutionary Programming for the
Prediction of Protein-Protein Interactions. In EP’96 [946], pages 411–417, 1996.
849. Margaret H. Dunham. Data Mining: Introductory and Advanced Topics. Prentice Hall
International Inc.: Upper Saddle River, NJ, USA, August 2002. isbn: 0130888923.
850. G. Dzemyda, V. Saltenis, and Antanas Žilinskas, editors. Stochastic and Global Optimization,
Nonconvex Optimization and Its Applications. Springer Science+Business Media, Inc.: New
York, NY, USA, March 1, 2002.
851. Sašo Džeroski and Peter A. Flach, editors. Proceedings of 9th International Workshop
on Inductive Logic Programming (ILP-99), June 24–27, 1999, Bled, Slovenia, volume
1634/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48751-4.
isbn: 3-540-66109-3. Google Books ID: ePRRHzBNvFAC.
852. Fernando Almeida e Costa, Luís Mateus Rocha, Ernesto Jorge Fernandes Costa, Inman
Harvey, and António Coutinho, editors. Proceedings of the 9th European Conference on
REFERENCES 1023
Advances in Artificial Life (ECAL’07), September 10–14, 2007, Lisbon, Portugal, volume
4648/2007 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-74913-4.
isbn: 3-540-74912-8. Google Books ID: 8Hl-qxgNAjQC and gVLbGgAACAAJ. Library of
Congress Control Number (LCCN): 2007934544.
853. Russell C. Eberhart and James Kennedy. A New Optimizer Using Particle Swarm
Theory. In MHS’95 [1326], pages 39–43, 1995. doi: 10.1109/MHS.1995.494215.
Fully available at http://webmining.spd.louisville.edu/Websites/COMB-OPT/
FINAL-PAPERS/SwarmsPaper.pdf [accessed 2009-07-05]. INSPEC Accession Number: 5297172.
854. Russell C. Eberhart and Yuhui Shi. Comparison between Genetic Algorithms and Particle
Swarm Optimization. In EP’98 [2208], pages 611–616, 1998. doi: 10.1007/BFb0040812.
855. Russell C. Eberhart and Yuhui Shi. A Modified Particle Swarm Optimizer. In CEC’98 [2496],
pages 69–73, 1998. doi: 10.1109/ICEC.1998.699146. INSPEC Accession Number: 6021037.
856. Marc Ebner, Michael O’Neill, Anikó Ekárt, Leonardo Vanneschi, and Anna Isabel Esparcia-
Alcázar, editors. Proceedings of the 10th European Conference on Genetic Programming
(EuroGP’07), April 11–13, 2007, València, Spain, volume 4445/2007 in Theoretical Computer
Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-71605-1. isbn: 3-540-71602-5.
Library of Congress Control Number (LCCN): 2007923720.
857. Francis Ysidro Edgeworth. Mathematical Psychics: An Essay on the Applica-
tion of Mathematics to the Moral Sciences. Kessinger Publishing: Whitefish, MT,
USA and C. Kegan Paul & Co: London, UK, 1881. isbn: 0548216657 and
1163230006. Fully available at http://socserv.mcmaster.ca/econ/ugcm/3ll3/
edgeworth/mathpsychics.pdf [accessed 2011-12-02]. asin: 0548216657.
858. Proceedings of the 9th International Conference on Artificial Immune Systems (ICARIS’10),
July 26–29, 2010, Edinburgh, Scotland, UK.
859. Proceedings of the 2013 International Conference on Systems, Man, and Cybernetics
(SMC’13), October 6–9, 2013, Edinburgh, Scotland, UK.
860. Matthias Ehrgott. Multicriteria Optimization, volume 491 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag GmbH: Berlin, Germany, 2nd edition, 2005.
isbn: 3-540-21398-8. Google Books ID: yrZw9srrHroC.
861. Matthias Ehrgott, editor. Proceedings of the 19th International Conference on Multiple
Criteria Decision Making: MCDM for Sustainable Energy and Transportation Systems
(MCDM’08), June 7–12, 2008, Auckland, New Zealand. Partly available at https://
secure.orsnz.org.nz/mcdm2008/ [accessed 2007-09-10].
862. Matthias Ehrgott, Carlos M. Fonseca, Xavier Gandibleux, Jin-Kao Hao, and Marc Se-
vaux, editors. Proceedings of the 5th International Conference on Evolutionary Multi-
Criterion Optimization (EMO’09), April 7–10, 2009, Nantes, France, volume 5467/2009
in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01020-0.
isbn: 3-642-01019-9.
863. Ágoston E. Eiben, editor. Evolutionary Computation, Theoretical Computer Science. IOS
Press: Amsterdam, The Netherlands, 1999. isbn: 4-274-90269-2 and 90-5199-471-0.
Google Books ID: 8LVAGQAACAAJ. This is the book edition of the journal Fundamenta
Informaticae, Volume 35, Nos. 1-4, 1998.
864. Ágoston E. Eiben and C. A. Schippers. On Evolutionary Exploration and Exploita-
tion. Fundamenta Informaticae – Annales Societatis Mathematicae Polonae, Series IV,
35(1-2):35–50, July–August 1998, IOS Press: Amsterdam, The Netherlands and Euro-
pean Association for Theoretical Computer Science (EATCS): Rio, Greece. Fully avail-
able at http://www.cs.vu.nl/˜gusz/papers/FunInf98-Eiben-Schippers.ps [accessed 2009-07-09]. CiteSeerx: 10.1.1.29.4885.
865. Ágoston E. Eiben and James E. Smith. Introduction to Evolutionary Computing, Natural
Computing Series. Springer New York: New York, NY, USA, 1st edition, November 2003.
isbn: 3540401849. Google Books ID: 7IOE5VIpFpwC and RRKo9xVFW_QC.
866. Ágoston E. Eiben, Thomas Bäck, Marc Schoenauer, and Hans-Paul Schwefel, editors.
Proceedings of the 5th International Conference on Parallel Problem Solving from Nature
(PPSN V), September 27–30, 1998, Amsterdam, The Netherlands, volume 1498/1998 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0056843. isbn: 3-540-65078-4. Google Books ID: HLeU_1TkwTsC.
OCLC: 39739752, 150419863, 243486760, and 246324853.
867. Manfred Eigen and Peter K. Schuster. The Hypercycle: A Principle of Natural Self Orga-
nization. Springer-Verlag GmbH: Berlin, Germany, June 1979. isbn: 0387092935. Google
Books ID: P54PPAAACAAJ and UNJcAAAACAAJ. OCLC: 4665354.
868. Horst A. Eiselt, Michel Gendreau, and Gilbert Laporte. Arc Routing Problems, Part I:
The Chinese Postman Problem. Operations Research, 43(2):231–242, March–April 1995,
Institute for Operations Research and the Management Sciences (INFORMS): Linthicum,
MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. JSTOR Stable
ID: 171832.
869. Anikó Ekárt and Sándor Zoltán Németh. Selection Based on the Pareto Nondomination
Criterion for Controlling Code Growth in Genetic Programming. Genetic Programming and
Evolvable Machines, 2(1):61–73, March 2001, Springer Netherlands: Dordrecht, Netherlands.
Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1010070616149.
870. Sven E. Eklund. A Massively Parallel GP Engine in VLSI. In CEC’02 [944], pages 629–633,
2002. doi: 10.1109/CEC.2002.1006999. INSPEC Accession Number: 7321863.
871. El-Sayed M. El-Alfy. Discovering Classification Rules for Email Spam Filtering with
an Ant Colony Optimization Algorithm. In CEC’09 [1350], pages 1778–1783, 2009.
doi: 10.1109/CEC.2009.4983156. INSPEC Accession Number: 10688751.
872. Khaled El-Fakihy, Hirozumi Yamaguchi, and Gregor von Bochmann. A Method and a Ge-
netic Algorithm for Deriving Protocols for Distributed Applications with Minimum Com-
munication Cost. In PDCS’99 [1410], pages 863–868, 1999. Fully available at http://
www-higashi.ist.osaka-u.ac.jp/˜h-yamagu/resource/pdcs99.pdf [accessed 2009-08-01]. CiteSeerx: 10.1.1.27.4746. See also [3005].
873. Proceedings of the 3rd International Conference on Metaheuristics and Nature Inspired Com-
puting (META’10), October 27–31, 2010, El Mouradi Djerba Menzel: Djerba Island, Tunisia.
874. E.W. Elcock and Donald Michie, editors. Proceedings of the Eighth Machine Intelligence
Workshop (Machine Intelligence 8), 1977, NATO Advanced Study Institute: Santa Cruz,
CA, USA. Ellis Horwood: Chichester, West Sussex, UK and John Wiley & Sons Ltd.: New
York, NY, USA.
875. Niles Eldredge and Stephen Jay Gould. Punctuated Equilibria: An Alternative to Phyletic
Gradualism. In Models in Paleobiology [2427], Chapter 5, pages 82–115. Freeman, Cooper &
Co: San Francisco, CA, USA and DoubleDay: New York, NY, USA, 1972. Fully available
at http://www.blackwellpublishing.com/ridley/classictexts/eldredge.pdf
[accessed 2008-07-01].
876. Niles Eldredge and Stephen Jay Gould. Punctuated Equilibria: The Tempo and Mode of
Evolution Reconsidered. Paleobiology, 3(2):115–151, April 1, 1977, Paleontological Society:
Lawrence, KS, USA and Allen Press Online Publishing: Lawrence, KS, USA. Fully available
at http://www.nileseldredge.com/pdf_files/Punctuated_Equilibria_Gould_
Eldredge_1977.pdf [accessed 2008-11-09].
877. Proceedings of the 4th European Congress on Fuzzy and Intelligent Technologies (EUFIT’96),
September 2–5, 1996, Aachen, North Rhine-Westphalia, Germany. ELITE Foundation:
Aachen, North Rhine-Westphalia, Germany. isbn: 3896531875.
878. Khaled Elleithy, editor. Advances and Innovations in Systems, Computing Sciences and
Software Engineering – Volume 1 of Proceedings of the International Conference on Systems,
Computing Sciences and Software Engineering (SCSS’06). Springer-Verlag GmbH: Berlin,
Germany, December 5–14, 2006, University of Bridgeport: Bridgeport, CT, USA.
879. D. M. Ellis. Book Reviews: “Applied Regression Analysis” by N. R. Draper and H. Smith.
Journal of the Royal Statistical Society: Series C – Applied Statistics, 17(1):83–84, 1968,
Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK. See
also [828].
880. Michael T.M. Emmerich, Nicola Beume, and Boris Naujoks. An EMO Algorithm using
the Hypervolume Measure as Selection Criterion. In EMO’05 [600], pages 62–76, 2005.
doi: 10.1007/b106458. CiteSeerx: 10.1.1.60.2524.
881. Andries P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. John Wiley &
Sons Ltd.: New York, NY, USA, 2005. isbn: 0470091916. Google Books ID: UAg_AAAACAAJ.
882. Fatma Corut Ergin, Ayşegül Yayımlı, and A. Şima Uyar. An Evolutionary Algorithm for Sur-
vivable Virtual Topology Mapping in Optical WDM Networks. In EvoWorkshops’09 [1052],
pages 31–40, 2009. doi: 10.1007/978-3-642-01129-0_4.
883. Larry J. Eshelman, editor. Proceedings of the Sixth International Conference on Genetic
Algorithms (ICGA’95), July 15–19, 1995, University of Pittsburgh: Pittsburgh, PA, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-370-0. Google
Books ID: 5KMsOwAACAAJ and 9xRRAAAAMAAJ.
884. Larry J. Eshelman and J. David Schaffer. Preventing Premature Convergence in Genetic
Algorithms by Preventing Incest. In ICGA’91 [254], pages 115–122, 1991.
885. Larry J. Eshelman, Richard A. Caruana, and J. David Schaffer. Biases in the Crossover
Landscape. In ICGA’89 [2414], pages 10–19, 1989.
886. Anna Isabel Esparcia-Alcázar, Anikó Ekárt, Sara Silva, Stephen Dignum, and A. Şima
Etaner-Uyar, editors. Proceedings of the 13th European Conference on Genetic Programming
(EuroGP’10), April 7–10, 2010, Istanbul, Turkey, volume 6021 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/978-3-642-12148-7. isbn: 3-642-12147-0. Google
Books ID: 7mHr-QFI-gsC. Library of Congress Control Number (LCCN): 2010922339.
887. Nicolás S. Estévez and Hod Lipson. Dynamical Blueprints: Exploiting Levels
of System-Environment Interaction. In GECCO’07-I [2699], pages 238–244, 2007.
doi: 10.1145/1276958.1277009. Fully available at http://ccsl.mae.cornell.edu/
papers/gecco07_estevez.pdf [accessed 2010-08-05]. CiteSeerx: 10.1.1.130.2238.
888. European Symposium on Intelligent Techniques (ESIT’99), June 3–4, 1999, Orthodox
Academy of Crete (OAC): Kolymvari, Kissamos, Chania, Crete, Greece. European Net-
work for Fuzzy Logic and Uncertainty Modeling in Information Technology (ERUDIT).
Fully available at http://www.erudit.de/erudit/events/esit99/programme.htm
[accessed 2009-07-20].
889. Xiannian Fan, Ke Tang, and Thomas Weise. Margin-Based Over-Sampling Method for Learn-
ing From Imbalanced Datasets. In PAKDD’11 [1294], 2011.
890. Günter Fandel and Thomáš Gál, editors. Proceedings of the 3rd International Conference
on Multiple Criteria Decision Making: Theory and Application (MCDM’79), August 20–
22, 1979, Hagen/Königswinter, West Germany, volume 177 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387099638 and
3-540-09963-8. OCLC: 6143366, 263581036, 310833050, 421850617, 470793575,
and 636202402. Published in May 1980.
891. Günter Fandel, Thomáš Gál, and Thomas Hanne, editors. Proceedings of the 12th Inter-
national Conference on Multiple Criteria Decision Making (MCDM’95), June 19–23, 1995,
Hagen, North Rhine-Westphalia, Germany, volume 448 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 3-540-62097-4.
OCLC: 36017289, 174158645, 246647660, 312770616, 476484227, 490887148, and
639833174. GBV-Identification (PPN): 222832282, 257873279, and 271804556. Pub-
lished on March 21, 1997.
892. Usenet FAQs: comp.ai.neural-nets FAQ. FAQS.ORG – Internet FAQ Archives: USA,
July 12, 2009. Fully available at http://www.faqs.org/faqs/ai-faq/neural-nets/
[accessed 2009-07-12].
893. Monique P. Fargues and Ralph D. Hippenstiel, editors. Conference Record of the Thirty-
First Asilomar Conference on Signals, Systems & Computers, November 2–5, 1997, Naval
Postgraduate School (U.S.): Pacific Grove, CA, USA. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-8186-8316-3, 0-8186-8317-1, and 0-8186-8318-X. Google Books
ID: LRwNPwAACAAJ and SyjnAAAACAAJ. OCLC: 39219056. issn: 1058-6393. Order
No.: 97CB36163.
894. Marco Farina and Paolo Amato. On the Optimal Solution Definition for Many-
Criteria Optimization Problems. In NAFIPS’02 [1522], pages 233–238, 2002.
doi: 10.1109/NAFIPS.2002.1018061. Fully available at http://www.lania.mx/
˜ccoello/farina02b.pdf.gz [accessed 2009-07-17]. INSPEC Accession Number: 7388102.
895. Tom Fawcett and Nina Mishra, editors. Proceedings of the 20th International Conference
on Machine Learning (ICML’03), August 21–24, 2003, Washington, DC, USA. AAAI Press:
Menlo Park, CA, USA. isbn: 1-57735-189-4.
896. Xiang Fei, Junzhou Luo, Jieyi Wu, and Guanqun Gu. QoS Routing based on Genetic Al-
gorithms. Computer Communications – The International Journal for the Computer and
Telecommunications Industry, 22(15-16):1392–1399, October 25, 1999, Elsevier Science Pub-
lishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(99)00113-9. Fully available at http://
neo.lcc.uma.es/opticomm/main.html [accessed 2008-08-01].
897. Edward A. Feigenbaum and Julian Feldman, editors. Computers and Thought, AAAI
Press Series / AAAI Press Copublications. McGraw-Hill: New York, NY, USA, 1963–
1995. isbn: 0-262-56092-5. Google Books ID: YnlQAAAAMAAJ. OCLC: 33332474 and
246968117. Library of Congress Control Number (LCCN): 95215375. GBV-Identification
(PPN): 189350644, 197823203, and 279066740. LC Classification: Q335.5 .C66 1995.
898. Stuart I. Feldman, Mike Uretsky, Marc Najork, and Craig E. Wills, editors. Proceedings
of the 13th international conference on World Wide Web (WWW’04), May 17–20, 2004,
Manhattan, New York, NY, USA. ACM Press: New York, NY, USA. isbn: 1-58113-844-X
and 1604230304. Google Books ID: Gr8dAAAACAAJ, sYW_PAAACAAJ, and syExNAAACAAJ.
OCLC: 71006498, 223922047, and 327018361.
899. William Feller. An Introduction to Probability Theory and Its Applications, Volume 1, Wi-
ley Series in Probability and Mathematical Statistics – Applied Probability and Statis-
tics Section Series. Wiley Interscience: Chichester, West Sussex, UK, 3rd edition, 1968.
isbn: 0471257087. Google Books ID: E9WLSAAACAAJ and TkfeSAAACAAJ.
900. Dieter Fensel, Fausto Giunchiglia, Deborah L. McGuinness, and Mary-Anne Williams, edi-
tors. Proceedings of the Eighth International Conference on Knowledge Representation and
Reasoning (KR’02), April 22–25, 2002, Toulouse, France. Morgan Kaufmann Publishers Inc.:
San Francisco, CA, USA.
901. Thomas A. Feo and Jonathan F. Bard. Flight Scheduling and Maintenance Base Plan-
ning. Management Science, 35(12):1415–1432, December 1989, Institute for Operations Re-
search and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press
(Stanford University): Cambridge, MA, USA. doi: 10.1287/mnsc.35.12.1415. JSTOR Stable
ID: 2632228.
902. Thomas A. Feo and Mauricio G.C. Resende. Greedy Randomized Adaptive Search Proce-
dures. Journal of Global Optimization, 6(2):109–133, March 1995, Springer Netherlands: Dor-
drecht, Netherlands. doi: 10.1007/BF01096763. Fully available at http://www.research.
att.com/˜mgcr/doc/gtut.ps.Z [accessed 2008-10-20]. CiteSeerx: 10.1.1.48.8667.
903. Vitaliy Feoktistov. Differential Evolution – In Search of Solutions, volume 5 in Springer
Optimization and Its Applications. Springer New York: New York, NY, USA, Decem-
ber 2006. isbn: 0-387-36895-7 and 0-387-36896-5. Google Books ID: DUL7AAAACAAJ
and kG7aP_v-SU4C. OCLC: 77077713, 318292315, and 494112725. Library of Congress
Control Number (LCCN): 2006929851. GBV-Identification (PPN): 516152432 and
546651992. LC Classification: QA402.5 .F42 2006.
904. Alberto Fernández, Salvador García, Julián Luengo, Ester Bernadó-Mansilla, and Fran-
cisco Herrera Triguero. Genetics-Based Machine Learning for Rule Induction: State of
the Art, Taxonomy, and Comparative Study. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), June 21, 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2009.2039140.
905. Francisco Fernández de Vega and Erick Cantú-Paz, editors. Second Workshop on Parallel
Bioinspired Algorithms (WPBA’07), July 7, 2007, University College London (UCL): London,
UK. ACM Press: New York, NY, USA. Part of [2700].
906. Paola Festa and Mauricio G.C. Resende. An Annotated Bibliography of GRASP. AT&T
Labs Research Technical Report TD-5WYSEW, AT&T Labs: Florham Park, NJ, USA,
February 29, 2004. Fully available at http://www.research.att.com/˜mgcr/grasp/
gannbib/gannbib.html [accessed 2008-10-20]. See also [907].
907. Paola Festa and Mauricio G.C. Resende. An Annotated Bibliography of GRASP-
Part II: Applications. International Transactions in Operational Research, 16(2):
131–172, March 2009, Blackwell Publishing Ltd: Chichester, West Sussex, UK.
doi: 10.1111/j.1475-3995.2009.00664.x. See also [906].
908. Anthony V. Fiacco and Garth P. McCormick. Nonlinear Programming: Sequential Un-
constrained Minimization Techniques. John Wiley & Sons Ltd.: New York, NY, USA
and Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, January 1969.
isbn: 0471258105. Google Books ID: Hjp7OwAACAAJ and TYONOgAACAAJ. See also [909].
909. Anthony V. Fiacco and Garth P. McCormick. Nonlinear Programming: Sequential Uncon-
strained Minimization Techniques, volume 4 in Classics in Applied Mathematics Series. So-
ciety for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, new edition,
April 1990. isbn: 0898712548. Google Books ID: 8NhQAAAAMAAJ and sjD_RJfxvr0C. See also [908].
910. Volker Fickert, Winfried Kalfa, and Thomas Weise. Framework for Distributed Simula-
tion and Measuring of Complex Relations of Operating System Components. In EDME-
DIA’05 [1568], pages 3865–3870, 2005. Fully available at http://www.it-weise.de/
documents/files/FKW2005OSF.pdf [accessed 2010-12-07]. See also [2900].
911. M.V. Fidelis, Heitor Silverio Lopes, and Alex Alves Freitas. Discovering Comprehensible
Classification Rules with a Genetic Algorithm. In CEC’00 [1333], pages 805–810, volume 1,
2000. doi: 10.1109/CEC.2000.870381. CiteSeerx: 10.1.1.35.1828.
912. Jonathan E. Fieldsend, Richard M. Everson, and Sameer Singh. Using Unconstrained
Elite Archives for Multi-Objective Optimisation. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 7(3):305–323, June 2003, IEEE Computer Society: Washington,
DC, USA. doi: 10.1109/TEVC.2003.810733. Fully available at http://empslocal.ex.
ac.uk/people/staff/jefields/JF_06.pdf, http://eref.uqu.edu.sa/files/
using_unconstrained_elite_archives_for_m.pdf, and http://www.lania.mx/˜ccoello/EMOO/fieldsend03.pdf.gz [accessed 2010-12-16]. CiteSeerx: 10.1.1.14.6272
and 10.1.1.19.4563. INSPEC Accession Number: 7666738.
913. Richard Fikes and Wendy Lehnert, editors. Proceedings of the 11th National Conference
on Artificial Intelligence (AAAI’93), July 11–15, 1993, Washington, DC, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51071-5. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai93.php [accessed 2007-09-06].
914. Joaquim Filipe and Janusz Kacprzyk, editors. Proceedings of the 2nd International Con-
ference on Computational Intelligence (IJCCI’10), October 24–26, 2010, Hotel Sidi Saler:
València, Spain.
915. Joaquim Filipe, José Cordeiro, and Jorge Cardoso, editors. Revised Selected Papers of the
9th International Conference on Enterprise Information Systems (ICEIS’07), June 12–16,
2007, Funchal, Madeira, Portugal, volume 12/2009 in Lecture Notes in Business Information
Processing. Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-88710-2. See
also [494].
916. Joaquim Filipe, Ana L.N. Fred, and Bernadette Sharp, editors. Proceedings of the Interna-
tional Conference on Agents and Artificial Intelligence (ICAART’09), January 19–21, 2009,
Porto, Portugal. Institute for Systems and Technologies of Information, Control and Com-
munication (INSTICC) Press: Setubal, Portugal.
917. Joaquim Filipe, Ana L.N. Fred, and Bernadette Sharp, editors. Proceedings of the 2nd
International Conference on Agents and Artificial Intelligence (ICAART’10), January 22–
24, 2010, València, Spain. Institute for Systems and Technologies of Information, Control
and Communication (INSTICC) Press: Setubal, Portugal.
918. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the International Conference on Bioin-
spired Optimization Methods and their Applications (BIOMA’04), October 11–12, 2004, Jožef
Stefan International Postgraduate School: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubl-
jana, Slovenia. Part of the Information Society Multiconference (IS 2004).
919. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the Second International Con-
ference on Bioinspired Optimization Methods and their Applications (BIOMA’06), Oc-
tober 9–10, 2006, Jožef Stefan International Postgraduate School: Ljubljana, Slove-
nia, Informacijska Družba (Information Society). Jožef Stefan Institute: Ljubljana,
Slovenia. isbn: 961-6303-81-3. Fully available at http://is.ijs.si/is/
is2006/Proceedings/C/BIOMA06_Complete.pdf [accessed 2008-11-05]. Partly available
at http://bioma.ijs.si/conference/2006/index.html [accessed 2008-11-05]. Google
Books ID: 0JLsAAAACAAJ and 0ZLsAAAACAAJ. Part of the Information Society Multi-
conference (IS 2006).
920. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the Third International Conference
on Bioinspired Optimization Methods and their Applications (BIOMA’08), October 13–14,
2008, Jožef Stefan International Postgraduate School: Ljubljana, Slovenia. Jožef Stefan Insti-
tute: Ljubljana, Slovenia. Fully available at http://is.ijs.si/is/is2008/zborniki/
D%20-%20BIOMA.pdf [accessed 2008-11-05]. Part of the Information Society Multiconference (IS
2008).
921. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the 4th International Conference
on Bioinspired Optimization Methods and their Applications (BIOMA’10), May 20–21,
2010, Jožef Stefan Institute: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubljana, Slove-
nia. Partly available at http://bioma.ijs.si/conference/ProcBIOMA2010Intro.
pdf [accessed 2010-06-22].
922. Steven R. Finch. Mathematical Constants, volume 94 in Encyclopedia of Mathematics and its Applications. Cambridge University Press: Cam-
bridge, UK, August 18, 2003. isbn: 0521818052. Partly available at http://algo.inria.
fr/bsolve/ [accessed 2009-06-13]. Google Books ID: Pl5I2ZSI6uAC. LC Classification: QA41
.F54 2003. See also [923].
923. Steven R. Finch. Transitive Relations, Topologies and Partial Orders. Cambridge University
Press: Cambridge, UK, June 5, 2003. Fully available at http://algo.inria.fr/csolve/
posets.pdf [accessed 2009-06-13]. CiteSeerx: 10.1.1.5.402. See also [922].
924. Jenny Rose Finkel. Using Genetic Programming To Evolve an Algorithm For Factoring
Numbers. In Genetic Algorithms and Genetic Programming at Stanford [1601], pages 52–60.
Stanford University Bookstore, Stanford University: Stanford, CA, USA, 2003. Fully avail-
able at http://www.genetic-programming.org/sp2003/Finkel.pdf [accessed 2010-11-19]. CiteSeerx: 10.1.1.140.9317.
925. Hans Fischer. Die verschiedenen Formen und Funktionen des zentralen Grenzw-
ertsatzes in der Entwicklung von der klassischen zur modernen Wahrscheinlichkeit-
srechnung, Berichte aus der Mathematik. Shaker Verlag GmbH: Aachen, North
Rhine-Westphalia, Germany, August 2000. isbn: 3-8265-7767-1. Partly avail-
able at http://www.ku-eichstaett.de/Fakultaeten/MGF/Mathematik/Didmath/
Didmath.Fischer/HF_sections/content/ [accessed 2008-08-19]. OCLC: 247452236. “The
Different Forms and Applications of the Central Limit Theorem in its Development from
Classical to Modern Probability Theory”.
926. Douglas H. Fisher, editor. Proceedings of the 14th International Conference on Machine
Learning (ICML’97), July 8–12, 1997, Nashville, TN, USA. Morgan Kaufmann Publishers
Inc.: San Francisco, CA, USA. isbn: 1-55860-486-3.
927. Michael Fisher and Richard Owens, editors. Executable Modal and Temporal Logics, Work-
shop Proceedings (IJCAI’93-WS), August 28, 1993, Chambéry, France, volume 897 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-58976-7 and 3-540-58976-7.
OCLC: 150398117, 246324352, 312223057, 473142654, and 501656926. GBV-
Identification (PPN): 180339729, 272032352, and 595124461. See also [182, 183, 2259].
928. Sir Ronald Aylmer Fisher. The Correlations between Relatives on the Supposition of
Mendelian Inheritance. Philosophical Transactions of the Royal Society of Edinburgh, 52:
399–433, October 1, 1918, Royal Society of Edinburgh: Edinburgh, Scotland, UK. Fully
available at http://www.library.adelaide.edu.au/digitised/fisher/9.pdf [ac-
cessed 2009-07-11].
929. Sir Ronald Aylmer Fisher. Applications of “Student’s” distribution. Metron (Roma), 5(3):
90–104, 1925. Fully available at http://digital.library.adelaide.edu.au/coll/
special/fisher/43.pdf [accessed 2007-09-30]. See also [268].
930. Sir Ronald Aylmer Fisher. The Use of Multiple Measurements in Taxonomic Problems.
Annals of Eugenics, 7(2):179–188, 1936, University of London Galton Laboratory for National
Eugenics: Harpenden, Herts., England, UK. Fully available at http://digital.library.
adelaide.edu.au/coll/special/fisher/138.pdf [accessed 2008-08-16]. See also [92, 268].
931. Sir Ronald Aylmer Fisher. Statistical Methods and Scientific Inference. Hafner
Press: New York, NY, USA, Macmillan Publishers Co.: New York, NY, USA, and
Oliver and Boyd: London, UK, revised edition, 1956–July 1973. isbn: 0028447409.
Google Books ID: 0WoGAQAAIAAJ, IHdqAAAAMAAJ, QmsGAQAAIAAJ, oyXPAAAAMAAJ, and
p7RvGQAACAAJ. OCLC: 785822.
932. J. Michael Fitzpatrick and John J. Grefenstette. Genetic Algorithms in Noisy Environments.
Machine Learning, 3(2–3):101–120, October 1988, Kluwer Academic Publishers: Norwell,
MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00113893.
Fully available at http://www.springerlink.com/content/n6w328441t63q374/
fulltext.pdf [accessed 2008-07-19].
933. Peter J. Fleming, Robin Charles Purshouse, and Robert J. Lygoe. Many-Objective Opti-
mization: An Engineering Design Perspective. In EMO’05 [600], pages 14–32, 2005.
934. Dario Floreano, Jean-Daniel Nicoud, and Francesco Mondada, editors. Proceedings of the
5th European Conference on Advances in Artificial Life (ECAL’99), September 13–17, 1999,
Lausanne, Switzerland, volume 1674/1999 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-48304-7. isbn: 3-540-66452-1. Google Books ID: 2dWRlVcCBfIC and
sOwRVCPs5McC.
935. Henrik Flyvbjerg and Benny Lautrup. Evolution in a Rugged Fitness Landscape. Physical
Review A, 46(10), November 15, 1992, American Physical Society: College Park, MD, USA.
doi: 10.1103/PhysRevA.46.6714. CiteSeerx: 10.1.1.55.4769.
936. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and
Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Pa-
pers (AISB’94), April 11–13, 1994, Leeds, UK, volume 865/1994 in Lecture Notes in Com-
puter Science (LNCS). Society for the Study of Artificial Intelligence and the Simulation
of Behaviour (SSAISB): Chichester, West Sussex, UK, Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/3-540-58483-8. isbn: 0-387-58483-8 and 3-540-58483-8. GBV-
Identification (PPN): 164010106, 272035424, and 59512478X.
937. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and
Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Pa-
pers (AISB’95), April 3–4, 1995, Sheffield, UK, volume 993/1995 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-60469-3.
isbn: 3-540-60469-3.
938. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and
Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Pa-
pers (AISB’96), April 1–2, 1996, Brighton, UK, volume 1143/1996 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0032768.
isbn: 3-540-61749-3.
939. David B. Fogel, editor. Evolutionary Computation: The Fossil Record. IEEE Computer
Society Press: Los Alamitos, CA, USA and Wiley Interscience: Chichester, West Sussex, UK,
May 1998. isbn: 0-7803-3481-7. Google Books ID: ahYLAAAACAAJ. OCLC: 38270557.
Order No.: PC5737.
940. David B. Fogel. Blondie24: Playing at the Edge of AI, The Morgan Kaufmann Series in
Artificial Intelligence. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, Septem-
ber 2001. isbn: 1558607838. Google Books ID: 5vpuw8L0_CAC and t5RQAAAAMAAJ.
941. David B. Fogel. In Memoriam – Alex S. Fraser. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 6(5):429–430, October 2002, IEEE Computer Society: Washington,
DC, USA. doi: 10.1109/TEVC.2002.805212.
942. David B. Fogel and Wirt Atmar, editors. Proceedings of the 1st Annual Conference on
Evolutionary Programming (EP’92), February 21–22, 1992, La Jolla Marriott Hotel: La
Jolla, CA, USA. Evolutionary Programming Society: San Diego, CA, USA. Google Books
ID: UA2HAAACAAJ. OCLC: 84992066.
943. David B. Fogel and Wirt Atmar, editors. Proceedings of the 2nd Annual Conference on
Evolutionary Programming (EP’93), February 25–26, 1993, Radisson Hotel: La Jolla, CA,
USA. Evolutionary Programming Society: San Diego, CA, USA. OCLC: 28466830.
944. David B. Fogel, Mohamed A. El-Sharkawi, Xin Yao, Hitoshi Iba, Paul Marrow, and
Mark Shackleton, editors. Proceedings of the IEEE Congress on Evolutionary Computa-
tion (CEC’02), 2002, Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI,
USA, volume 1-2. IEEE Computer Society: Piscataway, NJ, USA, IEEE Computer Society
Press: Los Alamitos, CA, USA. isbn: 0-7803-7282-4. Google Books ID: -ptVAAAAMAAJ
and Irq5AQAACAAJ. OCLC: 51476813, 59461927, 64431055, and 181357364. INSPEC
Accession Number: 7321757. CEC 2002 – A joint meeting of the IEEE, the Evolutionary
Programming Society, and the IEE. Held in connection with the World Congress on Compu-
tational Intelligence (WCCI 2002). Part of [1338].
945. Lawrence Jerome Fogel, Alvin J. Owens, and Michael J. Walsh. Artificial Intelligence
through Simulated Evolution. John Wiley & Sons Ltd.: New York, NY, USA, 1966.
isbn: 0471265160. Google Books ID: 75RQAAAAMAAJ, CwaoPAAACAAJ, EerbAAAACAAJ,
QMLaAAAAMAAJ, and rJNwOwAACAAJ. OCLC: 1208284, 223078340, and 561242276.
1030 REFERENCES
GBV-Identification (PPN): 190979224. asin: B0000CNARU.
946. Lawrence Jerome Fogel, Peter John Angeline, and Thomas Bäck, editors. Evolutionary
Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming
(EP’96), February 29–March 3, 1996, San Diego, CA, USA, Complex Adaptive Systems,
Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-06190-2. Google Books
ID: d hQAAAAMAAJ.
947. Gianluigi Folino, Clara Pizzuti, and Giandomenico Spezzano. GP Ensemble for
Distributed Intrusion Detection Systems. In ICAPR’05 [2506], pages 54–62, 2005.
doi: 10.1007/11551188 6. Fully available at http://dns2.icar.cnr.it/pizzuti/
icapr05.pdf [accessed 2008-06-23].
948. Cyril Fonlupt, Jin-Kao Hao, Evelyne Lutton, Edmund M. A. Ronald, and Marc Schoenauer,
editors. Selected Papers of the 4th European Conference on Artificial Evolution (AE’99),
November 3–5, 1999, Dunkerque, France, volume 1829 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-67846-8. Published in
2000.
949. Carlos M. Fonseca. Decision Making in Evolutionary Optimization (Abstract of Invited Talk).
In EMO’07 [2061], 2007.
950. Carlos M. Fonseca. Preference Articulation in Evolutionary Multiobjective Optimisation –
Plenary Talk. In HIS’07 [1572], 2007.
951. Carlos M. Fonseca and Peter J. Fleming. Genetic Algorithms for Multiobjective Optimization:
Formulation, Discussion and Generalization. In ICGA’93 [969], pages 416–423, 1993. Fully
available at http://www.lania.mx/˜ccoello/EMOO/fonseca93.ps.gz [accessed 2009-06-23].
CiteSeerx : 10.1.1.48.9077.
952. Carlos M. Fonseca and Peter J. Fleming. An Overview of Evolutionary Algorithms in Mul-
tiobjective Optimization. Evolutionary Computation, 3(1):1–16, Spring 1995, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1995.3.1.1. CiteSeerx : 10.1.1.50.7779.
953. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Con-
straint Handling with Evolutionary Algorithms – Part II: Application example. Technical
Report 565, University of Sheffield, Department of Automatic Control and Systems Engi-
neering: Sheffield, UK, January 23, 1995. CiteSeerx : 10.1.1.55.5395. See also [955].
954. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Con-
straint Handling with Evolutionary Algorithms – Part I: A Unified Formulation. IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(1):
26–37, January 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/3468.650319. CiteSeerx : 10.1.1.52.2473. See also [955].
955. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Con-
straint Handling with Evolutionary Algorithms – Part II: Application example. IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(1):
38–47, January 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/3468.650320. See also [953, 954].
956. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple
Constraint Handling with Evolutionary Algorithms. In Practical Approaches to Multi-
Objective Optimization [399], 2004. Fully available at http://drops.dagstuhl.de/
opus/volltexte/2005/237/ [accessed 2007-09-19].
957. Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele,
editors. Proceedings of the Second International Conference on Evolutionary Multi-Criterion
Optimization (EMO’03), April 8–11, 2003, University of the Algarve: Faro, Portugal, vol-
ume 2632/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-36970-8. isbn: 3-540-01869-7. Google Books
ID: 4H9aKVMsFrcC. OCLC: 51942678, 51975842, 150396006, 166467742, 243489698,
and 248786908.
958. Walter Fontana. Algorithmic Chemistry. In Artificial Life II [1668], pages 159–210, 1990.
Fully available at http://fontana.med.harvard.edu/www/Documents/WF/Papers/
alchemy.pdf [accessed 2009-08-02].
959. Walter Fontana, Peter F. Stadler, Erich G. Bornberg-Bauer, Thomas Griesmacher, Ivo L. Ho-
facker, Manfred Tacker, Pedro Tarazona, Edward D. Weinberger, and Peter K. Schuster. RNA
Folding and Combinatory Landscapes. Physical Review E, 47(3):2083–2099, March 1993,
American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevE.47.2083.
Fully available at http://fontana.med.harvard.edu/www/Documents/WF/
Papers/combinatory.landscapes.pdf and http://www.santafe.edu/media/
workingpapers/92-10-051.pdf [accessed 2010-12-03]. CiteSeerx : 10.1.1.47.6724.
960. Kenneth D. Forbus and Howard Shrobe, editors. Proceedings of the 6th National Conference
on Artificial Intelligence (AAAI’87), July 13–17, 1987, Seattle, WA, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Partly available at http://www.aaai.org/
Conferences/AAAI/aaai87.php [accessed 2007-09-06].
961. George Forman. A Pitfall and Solution in Multi-Class Feature Selection for Text
Classification. In ICML’04 [426], pages 38–46, 2004. doi: 10.1145/1015330.1015356.
CiteSeerx : 10.1.1.64.9405.
962. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Building a Peer-
to-peer Information System in Grids via Self-Organizing Agents. Journal of Grid
Computing, 6(2):125–140, June 2008, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s10723-007-9062-z. Fully available at http://grid.deis.unical.it/
papers/pdf/JGC2008-ForestieroMastroianniSpezzano.pdf [accessed 2008-09-05]. See
also [963].
963. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Construc-
tion of a Peer-to-Peer Information System in Grids. In SOAS’2005 [666], pages
220–236, 2005. Fully available at http://grid.deis.unical.it/papers/pdf/
SOAS05-ForestieroMastroianniSpezzano.pdf [accessed 2008-09-05]. See also [962, 964–
967].
964. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Antares: An Ant-
Inspired P2P Information System for a Self-Structured Grid. In BIONETICS’07 [1342], pages
151–158, 2007. doi: 10.1109/BIMNICS.2007.4610103. Fully available at http://grid.
deis.unical.it/papers/pdf/ForestieroMastroianniSpezzano-Antares.pdf
[accessed 2008-06-13]. See also [963].
965. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. So-Grid: A Self-
Organizing Grid Featuring Bio-inspired Algorithms. ACM Transactions on Autonomous
and Adaptive Systems (TAAS), 3(2:5):1–37, May 2008, ACM Press: New York, NY,
USA. doi: 10.1145/1352789.1352790. Fully available at http://grid.deis.unical.it/
papers/pdf/Article.pdf [accessed 2008-09-05]. See also [963].
966. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. QoS-based Dis-
semination of Content in Grids. Future Generation Computer Systems – The Inter-
national Journal of Grid Computing: Theory, Methods and Applications (FGCS), 24
(3):235–244, March 2008, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.future.2007.05.003. Fully available at http://grid.deis.unical.it/
papers/pdf/ForestieroMastroianniSpezzano-FGCS-QoS.pdf [accessed 2008-09-05]. See
also [963].
967. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Reorganization and
Discovery of Grid Information with Epidemic Tuning. Future Generation Computer Sys-
tems – The International Journal of Grid Computing: Theory, Methods and Applications
(FGCS), 24(8):788–797, October 2008, Elsevier Science Publishers B.V.: Amsterdam, The
Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/j.future.2008.04.001. Fully available at http://grid.deis.unical.
it/papers/pdf/FGCS-epidemic.pdf [accessed 2008-09-05]. See also [963].
968. M. Forina, S. Lanteri, C. Armanino, et al. PARVUS – An Extendible Package for Data
Exploration, Classification and Correlation. Technical Report, Institute of Pharmaceutical
and Food Analysis and Technologies: Genoa, Italy, Elsevier Science Publishers B.V.: Ams-
terdam, The Netherlands, 1988–1991. isbn: 0444430121. Google Books ID: qi lAAAACAAJ.
GBV-Identification (PPN): 052128725.
969. Stephanie Forrest, editor. Proceedings of the 5th International Conference on Genetic Al-
gorithms (ICGA’93), July 17–21, 1993, University of Illinois: Urbana-Champaign, IL, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-299-2 and
978-1-55860-299-1. Google Books ID: cfBQAAAAMAAJ.
970. Stephanie Forrest and Melanie Mitchell. Relative Building-Block Fitness and the
Building-Block Hypothesis. In FOGA’92 [2927], pages 109–126, 1992. Fully available
at http://web.cecs.pdx.edu/˜mm/Forrest-Mitchell-FOGA.pdf [accessed 2010-12-04].
CiteSeerx : 10.1.1.83.4366.
971. Christian V. Forst, Christian M. Reidys, and Jacqueline Weber. Evolutionary Dynamics and
Optimization: Neutral Networks as Model-Landscapes for RNA Secondary-Structure Folding-
Landscapes. In ECAL’95 [1939], pages 128–147, 1995. doi: 10.1007/3-540-59496-5 294.
CiteSeerx : 10.1.1.21.1221 and 10.1.1.56.9190. arXiv ID: adap-org/9505002v1.
972. Richard S. Forsyth. BEAGLE – A Darwinian Approach to Pattern Recognition. Kyber-
netes, 10(3):159–166, 1981, Thales Publications (W.O.) Ltd.: Irvine, CA, USA and Emerald
Group Publishing Limited: Bingley, UK. doi: 10.1108/eb005587. Fully available at http://
www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/kybernetes_forsyth.pdf [ac-
cessed 2007-11-01]. Received December 17, 1980. (copy from British Library May 1994).
973. Richard S. Forsyth. The Evolution of Intelligence. In Machine Learning: Principles and
Techniques [974], Chapter 4, pages 65–82. Chapman & Hall: London, UK, 1989. Refers also
to PC/BEAGLE. See also [972].
974. Richard S. Forsyth, editor. Machine Learning: Principles and Techniques. Chapman &
Hall: London, UK, 1989. isbn: 0-412-30570-4 and 0-412-30580-1. OCLC: 18255812,
246546781, and 315238891. Library of Congress Control Number (LCCN): 88022872.
GBV-Identification (PPN): 017008093, 025424157, and 275121755. LC Classifica-
tion: Q325 .F64 1989.
975. Richard S. Forsyth and Roy Rada. Machine Learning Applications in Expert Systems
and Information Retrieval, Ellis Horwood Series in Artificial Intelligence. John Wiley
& Sons Australia: Milton, QLD, Australia and Halsted Press: New York, NY, USA.
isbn: 0-470-20309-9 and 0-7458-0045-9. Google Books ID: TcnaAAAAMAAJ. Contains
chapters on BEAGLE. See also [972].
976. James A. Foster. Review: Discipulus: A Commercial Genetic Programming System. Ge-
netic Programming and Evolvable Machines, 2(2):201–203, June 2001, Springer Nether-
lands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1023/A:1011516717456.
977. James A. Foster, Evelyne Lutton, Julian Francis Miller, Conor Ryan, and Andrea
G. B. Tettamanzi, editors. Proceedings of the 5th European Conference on Genetic
Programming (EuroGP’02), April 3–5, 2002, Kinsale, Ireland, volume 2278/2002 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-45984-7. isbn: 3-540-43378-3. Google Books ID: RpNQAAAAMAAJ and
eCbu4GwRLusC. OCLC: 49312634, 50589656, 64651626, 66624505, 76394308, and
314133163.
978. B. R. Fox and M. B. McMahon. Genetic Operators for Sequencing Problems. In
FOGA’90 [2562], pages 284–300, 1990.
979. Dieter Fox and Carla P. Gomes, editors. Proceedings of the Twenty-Third AAAI Conference
on Artificial Intelligence (AAAI’08), July 13–17, 2008, Chicago, IL, USA. AAAI Press:
Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai08.php [accessed 2009-02-27].
980. John Fox. Applied Regression Analysis, Linear Models, and Related Methods. SAGE Publica-
tions: Thousand Oaks, CA, USA, February 1997. isbn: 0-8039-4540-X. OCLC: 36065974,
464653054, and 639274269. GBV-Identification (PPN): 225778130 and 509874118.
981. Felipe M. G. França and Carlos H. C. Ribeiro, editors. Proceedings of the VI Brazil-
ian Symposium on Neural Networks (SBRN’00), November 22–25, 2000, Rio de Janeiro,
RJ, Brazil. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-0856-1,
0-7695-0857-X, and 0-7695-0858-8. Google Books ID: Q6d KQAACAAJ and
QqGoJgAACAAJ. OCLC: 45537396, 47880868, 51498392, 59548079, 80493808,
288979392, and 423976687.
982. Frank D. Francone, Markus Conrads, Wolfgang Banzhaf, and Peter Nordin. Homolo-
gous Crossover in Genetic Programming. In GECCO’99 [211], pages 1021–1026, volume
2, 1999. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gecco1999/
GP-463.pdf [accessed 2010-08-23]. CiteSeerx : 10.1.1.153.15.
983. Eibe Frank, Mark A. Hall, Geoffrey Holmes, Richard Kirkby, Bernhard Pfahringer, Ian H.
Witten, and Leonhard Trigg. WEKA – A Machine Learning Workbench for Data
Mining. In The Data Mining and Knowledge Discovery Handbook [1815], Chapter 62,
pages 1305–1314. Springer Science+Business Media, Inc.: New York, NY, USA, 2005.
doi: 10.1007/0-387-25465-X 62. Fully available at http://www.cs.waikato.ac.nz/˜ml/
publications/2005/weka_dmh.pdf [accessed 2008-11-19].
984. Gene F. Franklin and Anthony N. Michel, editors. Proceedings of the 24th IEEE Conference
on Decision and Control (CDC’85), December 11–13, 1985, Bonaventure Hotel & Spa: Fort
Lauderdale, FL, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway,
NJ, USA. Library of Congress Control Number (LCCN): 85CH2245-9. Catalogue no.: 79-
640961.
985. Daniel Raymond Frantz. Nonlinearities in Genetic Adaptive Search. PhD thesis, University
of Michigan: Ann Arbor, MI, USA, September 1972, John Henry Holland, Advisor. Fully
available at http://deepblue.lib.umich.edu/handle/2027.42/4950 [accessed 2010-08-
03]. Order No.: AAI7311116. University Microfilms No.: 73-11116. ID: UMR1485. Published
1003. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the
10th International Conference on Knowledge-Based Intelligent Information and Engineer-
ing Systems, Part I (KES’06-I), October 9–11, 2006, Bournemouth International Centre:
Bournemouth, UK, volume 4251/2006 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-46535-9. See also [1004, 1005].
1004. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the
10th International Conference on Knowledge-Based Intelligent Information and Engineer-
ing Systems, Part II (KES’06-II), October 9–11, 2006, Bournemouth International Cen-
tre: Bournemouth, UK, volume 4252/2006 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-46537-5. See also [1003, 1005].
1005. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the
10th International Conference on Knowledge-Based Intelligent Information and Engineer-
ing Systems, Part III (KES’06-III), October 9–11, 2006, Bournemouth International Cen-
tre: Bournemouth, UK, volume 4253/2006 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/11893011. isbn: 3-540-46542-1. Library of Congress Control Number
(LCCN): 2006933827. See also [1003, 1004].
1006. Malgorzata Gadomska and Andrzej Pacut. Performance of Ant Routing Algorithms When
Using TCP. In EvoWorkshops’07 [1050], pages 1–10, 2007. doi: 10.1007/978-3-540-71805-5 1.
1007. Pablo Galiasso and Roger L. Wainwright. A Hybrid Genetic Algorithm for the Point to
Multipoint Routing Problem with Single Split Paths. In SAC’01 [24], pages 327–332, 2001.
doi: 10.1145/372202.372354. CiteSeerx : 10.1.1.20.8236.
1008. Ante Galic, Tonci Caric, and Hrvoje Gold. MARS – A Programming Language for
Solving Vehicle Routing Problems. In Recent Advances in City Logistics. Proceedings
of the 4th International Conference on City Logistics [2672], pages 47–57, 2005. Fully
available at http://www.cro-grid.hr/hr/apps/rep/app/oot/fpz_galic_caric_
gold_final_paper.doc [accessed 2011-01-14].
1009. Marcus Gallagher, Marcus R. Frean, and Tom Downs. Real-Valued Evolutionary Optimiza-
tion using a Flexible Probability Density Estimator. In GECCO’99 [211], pages 840–846,
1999. CiteSeerx : 10.1.1.35.5113.
1010. E. A. Galperin. Pareto Analysis vis-à-vis Balance Space Approach in Multiobjective Global
Optimization. Journal of Optimization Theory and Applications, 93(3):533–545, June 1997,
Springer Netherlands: Dordrecht, Netherlands and Plenum Press: New York, NY, USA.
doi: 10.1023/A:1022639028824.
1011. Luca Maria Gambardella and Marco Dorigo. Solving Symmetric and Asymmetric TSPs by
Ant Colonies. In CEC’96 [1445], pages 622–627, 1996. doi: 10.1109/ICEC.1996.542672.
Fully available at http://www.idsia.ch/˜luca/icec96-acs.pdf [accessed 2009-06-26].
CiteSeerx : 10.1.1.40.4846.
1012. Luca Maria Gambardella, Éric D. Taillard, and Marco Dorigo. Ant Colonies for the
Quadratic Assignment Problem. The Journal of the Operational Research Society (JORS),
50(2):167–176, February 1999, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hamp-
shire, UK and Operations Research Society: Birmingham, UK. doi: 10.2307/3010565.
Fully available at http://www.idsia.ch/˜luca/tr-idsia-4-97.pdf [accessed 2009-06-30].
CiteSeerx : 10.1.1.32.2245.
1013. Luca Maria Gambardella, Andrea E. Rizzoli, Fabrizio Oliverio, Norman Casagrande, Al-
berto V. Donati, Roberto Montemanni, and Enzo Lucibello. Ant Colony Optimization for
Vehicle Routing in Advanced Logistics Systems. In MAS’03 [588], pages 3–9, 2003. Fully
available at http://www.idsia.ch/˜luca/MAS2003_18.pdf [accessed 2007-08-19].
1014. Xavier Gandibleux, Marc Sevaux, Kenneth Sörensen, and Vincent T’kindt, editors. Meta-
heuristics for Multiobjective Optimisation, volume 535 in Lecture Notes in Economics and
Mathematical Systems. Springer-Verlag: Berlin/Heidelberg, 2004. isbn: 3-540-20637-X.
Google Books ID: Lb8p1FWAh-8C and TEToXuP17RYC.
1015. X. Z. Gao, António Gaspar-Cunha, Gerald Schaefer, and Jun Wang, editors. Soft Computing
in Industrial Applications – 14th Online World Conference on Soft Computing in Industrial
Applications (WSC14), November 17–29, 2009, volume 75 in Advances in Intelligent and
Soft Computing. Springer-Verlag: Berlin/Heidelberg. isbn: 3-642-11281-1. Google Books
ID: EDp3RAAACAAJ.
1016. Yong Gao and Joseph Culberson. An Analysis of Phase Transition in NK Landscapes. Jour-
nal of Artificial Intelligence Research (JAIR), 17:309–332, October 2002, AI Access Founda-
tion, Inc.: El Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available
at http://www.jair.org/media/1081/live-1081-2104-jair.pdf [accessed 2009-02-26].
CiteSeerx : 10.1.1.20.2690.
1017. Yu Gao and Yong-Jun Wang. A Memetic Differential Evolutionary Algorithm for High
Dimensional Functions’ Optimization. In ICNC’07 [1711], pages 188–192, volume 4, 2007.
doi: 10.1109/ICNC.2007.60.
1018. Yuelin Gao and Yuhong Duan. An Adaptive Particle Swarm Optimization Algo-
rithm with New Random Inertia Weight. In ICIC’07-2 [1293], pages 342–350, 2007.
doi: 10.1007/978-3-540-74282-1 39.
1019. Yuelin Gao and Zihui Ren. Adaptive Particle Swarm Optimization Algorithm With
Genetic Mutation Operation. In ICNC’07 [1711], pages 211–215, volume 2, 2007.
doi: 10.1109/ICNC.2007.161.
1020. Salvador García and Francisco Herrera Triguero. An Extension on “Statistical Com-
parisons of Classifiers over Multiple Data Sets” for all Pairwise Comparisons. Journal
of Machine Learning Research (JMLR), 9:2677–2694, December 2008, MIT Press: Cam-
bridge, MA, USA. Fully available at http://jmlr.csail.mit.edu/papers/volume9/
garcia08a/garcia08a.pdf and http://sci2s.ugr.es/publications/ficheros/
2008-Garcia-JMLR.pdf [accessed 2010-10-19]. See also [766].
1021. Alma Lilia García-Almanza and Edward P. K. Tsang. The Repository Method for
Chance Discovery in Financial Forecasting. In KES’06-III [1005], pages 30–37, 2006.
doi: 10.1007/11893011 5.
1022. Alma Lilia García-Almanza, Edward P. K. Tsang, and Edgar Galván-López.
Evolving Decision Rules to Discover Patterns in Financial Data Sets. In
Computational Methods in Financial Engineering – Essays in Honour of Man-
fred Gilli [1575], Chapter II-5, pages 239–255. Springer-Verlag: Berlin/Heidelberg,
2008. doi: 10.1007/978-3-540-77958-2 12. Fully available at http://www.bracil.
net/finance/papers/GarciaTsangGalvan-Ecr-Book-2007.pdf [accessed 2010-06-25].
CiteSeerx : 10.1.1.153.5463.
1023. Raúl García-Castro, Asunción Gómez-Pérez, Charles J. Petrie, Emanuele Della Valle, Ul-
rich Küster, Michal Zaremba, and Omair Shafiq, editors. Proceedings of the 6th Inter-
national Workshop on Evaluation of Ontology-based Tools and the Semantic Web Service
Challenge (EON-SWSC’08), June 1–2, 2008, Tenerife, Spain, volume 359 in CEUR Work-
shop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/ [accessed 2010-12-12]. See also [2166].
1024. Martin Gardner. Martin Gardner’s Sixth Book of Mathematical Diversions from Scientific
American. University of Chicago Press: Chicago, IL, USA, 1983. isbn: 0226282503.
1025. Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the
Theory of NP-Completeness, Series of Books in the Mathematical Sciences. W. H. Free-
man & Co.: New York, NY, USA, 1979. isbn: 0-7167-1044-7 and 0-7167-1045-5.
Google Books ID: OHrJJQAACAAJ, gB FAgAACAAJ, and mdBxHAAACAAJ. OCLC: 4195125,
476019082, 630176268, 632535534, and 633137406. Library of Congress Control Num-
ber (LCCN): 78012361. GBV-Identification (PPN): 250973014, 303019050, 317880381,
321971280, 354248944, 513349081, 584237758, and 59999438X. LC Classifica-
tion: QA76.6 .G35.
1026. Michael R. Garey, David S. Johnson, and Ravi Sethi. The Complexity of Flowshop and
Jobshop Scheduling. Mathematics of Operations Research (MOR), 1(2):117–129, May 1976,
Institute for Operations Research and the Management Sciences (INFORMS): Linthicum,
MD, USA. doi: 10.1287/moor.1.2.117. JSTOR Stable ID: 3689278.
1027. Wolfgang Walter Garn. Vehicle Routing Problem (VRP). Vienna University of Technol-
ogy: Vienna, Austria, July 20, 2007. Fully available at http://osiris.tuwien.ac.at/
˜wgarn/VehicleRouting/vehicle_routing.html [accessed 2011-04-25].
1028. Josselin Garnier and Leila Kallel. Statistical Distribution of the Convergence Time of Evo-
lutionary Algorithms for Long-Path Problems. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), 4(1):16–30, April 2000, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.843492. CiteSeerx : 10.1.1.31.4329. INSPEC Accession Num-
ber: 6617599.
1029. António Gaspar-Cunha and Ricardo H. C. Takahashi, editors. 15th Online World Confer-
ence on Soft Computing in Industrial Applications (WSC15), November 15–27, 2010. World
Federation on Soft Computing (WFSC).
1030. Sergey Gavrilets. Fitness Landscapes and the Origin of Species, volume MPB-41 in Mono-
graphs in Population Biology (MPB). Princeton University Press: Princeton, NJ, USA,
July 2004. isbn: 0-691-11983-X. Google Books ID: YjQKGwAACAAJ.
1031. Nicholas Geard. An Exploration of NK Landscapes with Neutrality. Master’s thesis,
University of Queensland, School of Information Technology and Electrical Engineering: St
Lucia, Q, Australia, October 2001, Janet Wiles, Supervisor. Fully available at http://
eprints.ecs.soton.ac.uk/14213/1/ng_thesis.pdf [accessed 2008-07-01].
1032. Nicholas Geard, Janet Wiles, Jennifer Hallinan, Bradley Tonkes, and Benjamin
Skellett. A Comparison of Neutral Landscapes – NK, NKp and NKq. In
CEC’02 [944], pages 205–210, 2002. Fully available at http://eprints.ecs.soton.ac.
uk/14208/1/cec1-comp.pdf and http://www.staff.ncl.ac.uk/j.s.hallinan/
pubs/cec2002.pdf [accessed 2010-12-04]. CiteSeerx : 10.1.1.16.5159.
1033. Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan. A New Heuristic Opti-
mization Algorithm: Harmony Search. Simulation – Transactions of The Society for
Modeling and Simulation International, 76(2):60–68, February 2001, Society for
Modeling and Simulation International (SCS): San Diego, CA, USA.
doi: 10.1177/003754970107600201.
1034. Zong Woo Geem, Kang Seok Lee, and Yongjin Park. Application of Harmony Search to
Vehicle Routing. American Journal of Applied Sciences, 2(12):1552–1557, December 2005,
Science Publications: New York, NY, USA. Fully available at http://www.scipub.org/
fulltext/ajas/ajas2121552-1557.pdf [accessed 2010-12-17].
1035. Johannes Gehrke, Raghu Ramakrishnan, and Venkatesh Ganti. RainForest – A Framework
for Fast Decision Tree Construction of Large Datasets. In VLDB’98 [1151], pages 416–427,
1998. Fully available at http://www.vldb.org/conf/1998/p416.pdf [accessed 2009-05-11].
CiteSeerx : 10.1.1.40.474.
1036. Johannes Gehrke, Venkatesh Ganti, Raghu Ramakrishnan, and Wei-Yin Loh. BOAT
– Optimistic Decision Tree Construction. In SIGMOD’99 [22], pages 169–180,
1999. doi: 10.1145/304182.304197. Fully available at http://doi.acm.org/10.
1145/304182.304197 and http://www.cs.cornell.edu/johannes/papers/1999/
sigmod1999-boat.pdf [accessed 2009-05-11].
1037. Alexander F. Gelbukh and Angel Fernando Kuri Morales, editors. Advances in Artificial In-
telligence: Proceedings of the 6th Mexican International Conference on Artificial Intelligence
(MICAI’07), November 4–10, 2007, Aguascalientes, México, volume 4827/2007 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-76631-5.
1038. Alexander F. Gelbukh and Eduardo F. Morales, editors. Advances in Artificial Intelligence
– Proceedings of the 7th Mexican International Conference on Artificial Intelligence (MI-
CAI’08), October 27–31, 2008, Tecnológico de Monterrey, Campus Estado de México: Ati-
zapán de Zaragoza, México, volume 5317/2008 in Lecture Notes in Artificial Intelligence
(LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-88636-5. isbn: 3-540-88635-4. Library of Congress
Control Number (LCCN): 2008936816.
1039. Alexander F. Gelbukh, Alvaro de Albornoz, and Hugo Terashima-Marín, editors. Ad-
vances in Artificial Intelligence: Proceedings of the 4th Mexican International Conference
on Artificial Intelligence (MICAI’05), November 14–18, 2005, Monterrey, México, volume
3789/2005 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11579427.
isbn: 3-540-29896-7.
1040. Stuart Geman, Elie Bienenstock, and René Doursat. Neural Networks and the Bias/Variance
Dilemma. Neural Computation, 4(1):1–58, January 1992, MIT Press: Cambridge, MA, USA.
doi: 10.1162/neco.1992.4.1.1.
1041. Michael R. Genesereth, editor. Proceedings of the National Conference on Artificial Intelli-
gence (AAAI’83), August 22–26, 1983, Washington, DC, USA. AAAI Press: Menlo Park, CA,
USA. isbn: 0-262-51052-9. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai83.php [accessed 2007-09-06]. OCLC: 228289483.
1042. Ian Gent, Hans van Maaren, and Toby Walsh, editors. SAT2000 – Highlights of Satisfiability
Research in the Year 2000, volume 63 in Frontiers in Artificial Intelligence and Applica-
tions. IOS Press: Amsterdam, The Netherlands, 2000. isbn: 1586030612. Google Books
ID: GGJM0lXMJQgC.
1043. Stuart Geman and Donald Geman. Stochastic Relaxation, Gibbs Distributions, and the
Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), PAMI-6
(6):721–741, November 1984, IEEE Computer Society: Piscataway, NJ, USA. doi: 10.1109/T-
PAMI.1984.4767596. Fully available at http://noodle.med.yale.edu/˜qian/docs/
geman.pdf [accessed 2011-11-21].
1044. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Encapsulated Evolution Strategies for the
Determination of Group Contribution Model Parameters in Order to Predict Thermodynamic
Properties. In PPSN V [866], pages 978–987, 1998. doi: 10.1007/BFb0056939. See also
[1045, 1046].
1045. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Use of Evolutionary Algorithms for
the Calculation of Group Contribution Parameters in order to Predict Thermodynamic
properties – Part 2: Encapsulated evolution strategies. Computers and Chemical En-
gineering, 23(7):955–973, July 1, 1999, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0098-1354(99)00270-7. See also [1044, 1046].
1046. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Verschachtelte Evolutionsstrategien zur
Optimierung nichtlinearer verfahrenstechnischer Regressionsprobleme [Nested Evolution Strategies
for the Optimization of Nonlinear Process-Engineering Regression Problems]. Chemie Ingenieur
Technik (CIT), 72(4):369–373, April 2000, Wiley Interscience: Chichester, West Sussex,
UK. doi: 10.1002/1522-2640(200004)72:4<369::AID-CITE369>3.0.CO;2-W. Fully avail-
able at http://www3.interscience.wiley.com/cgi-bin/fulltext/76500452/
PDFSTART [accessed 2010-08-01]. See also [1044, 1045].
1047. Zoubin Ghahramani, editor. Proceedings of the 24th Annual International Conference on
Machine Learning (ICML’07), June 20–24, 2007, Oregon State University: Corvallis, OR,
USA, volume 227 in ACM International Conference Proceeding Series (AICPS). Omipress:
Madison, WI, USA. Fully available at http://www.machinelearning.org/icml2007_
proc.html [accessed 2010-06-30].
1048. Ashish Ghosh and Lakhmi C. Jain, editors. Evolutionary Computation in Data Mining,
volume 163/2005 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH:
Berlin, Germany, 2005. doi: 10.1007/3-540-32358-9. isbn: 3-540-22370-3. Google Books
ID: Caevy3rrjPUC. OCLC: 56658773, 249236978, 318293100, and 320964275. Library
of Congress Control Number (LCCN): 2004111009.
1049. Ashish Ghosh and Shigeyoshi Tsutsui, editors. Advances in Evolutionary Computing – The-
ory and Applications, Natural Computing Series. Springer New York: New York, NY,
USA, November 22, 2002. isbn: 3-540-43330-9. Google Books ID: OGMEMC9P3vMC.
OCLC: 50207987, 249607003, and 314328282. Library of Congress Control Number
(LCCN): 2002029164. GBV-Identification (PPN): 35332180X and 358205859. LC Clas-
sification: QA76.618 .A215 2003.
1050. Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Di Caro, Rolf Drech-
sler, Muddassar Farooq, Andreas Fink, Evelyne Lutton, Penousal Machado, Stefan Min-
ner, Michael O’Neill, Juan Romero, Franz Rothlauf, Giovanni Squillero, Hideyuki Tak-
agi, A. Şima Uyar, and Shengxiang Yang, editors. Applications of Evolutionary Comput-
ing, Proceedings of EvoWorkshops 2007: EvoCoMnet, EvoFIN, EvoIASP, EvoINTERAC-
TION, EvoMUSART, EvoSTOC and EvoTransLog (EvoWorkshops’07), April 11–13, 2007,
València, Spain, volume 4448/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71805-5. isbn: 3-540-71804-4. Google Books ID: DtwgBJkn7iMC.
OCLC: 122935862, 152418528, 255966727, and 318300154. Library of Congress Con-
trol Number (LCCN): 2007923848.
1051. Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Di Caro, Rolf Drech-
sler, Anikó Ekárt, Anna Isabel Esparcia-Alcázar, Muddassar Farooq, Andreas Fink, Jon
McCormack, Michael O’Neill, Juan Romero, Franz Rothlauf, Giovanni Squillero, A. Şima
Uyar, and Shengxiang Yang, editors. Applications of Evolutionary Computing – Proceed-
ings of EvoWorkshops 2008: EvoCOMNET, EvoFIN, EvoHOT, EvoIASP, EvoMUSART,
EvoNUM, EvoSTOC, and EvoTransLog (EvoWorkshops’08), May 26–28, 2008, Naples, Italy,
volume 4974/2008 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-540-78761-7. isbn: 3-540-78760-7. Google Books
ID: ibbRDnV1vUkC.
1052. Mario Giacobini, Penousal Machado, Anthony Brabazon, Jon McCormack, Stefano Cagnoni,
Michael O’Neill, Gianni A. Di Caro, Ferrante Neri, Anikó Ekárt, Mike Preuß, Anna Is-
abel Esparcia-Alcázar, Franz Rothlauf, Muddassar Farooq, Ernesto Tarantino, Andreas
Fink, and Shengxiang Yang, editors. Applications of Evolutionary Computing – Proceedings
of EvoWorkshops 2009: EvoCOMNET, EvoENVIRONMENT, EvoFIN, EvoGAMES, Evo-
HOT, EvoIASP, EvoINTERACTION, EvoMUSART, EvoNUM, EvoSTOC, EvoTRANSLOG
(EvoWorkshops’09), April 15–17, 2009, Eberhard-Karls-Universität Tübingen, Fakultät für
Informations- und Kognitionswissenschaften: Tübingen, Germany, volume 5484/2009 in The-
oretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01129-0.
isbn: 3-642-01128-4. Partly available at http://evostar.na.icar.cnr.it/
EvoWorkshops/EvoWorkshops.html [accessed 2011-02-28]. Google Books ID: NvOd8Rz997AC.
1053. K. C. Giannakoglou, D. T. Tsahalis, J. Périaux, K. D. Papailiou, and Terence Claus Fogarty,
editors. Proceedings of the Conference on Evolutionary Methods for Design Optimization
and Control with Applications to Industrial Problems (EUROGEN’01), September 19–21,
2001, Athens, Greece. International Center for Numerical Methods in Engineering (CIMNE):
Barcelona, Catalonia, Spain. isbn: 84-89925-97-6. Partly available at http://www.
mech.ntua.gr/~eurogen2001 [accessed 2007-09-16]. GBV-Identification (PPN): 37304500X.
Published in 2002.
1054. Basilis Gidas. Nonstationary Markov Chains and Convergence of the Annealing Algorithm.
Journal of Statistical Physics, 39(1-2):73–131, April 1985, Springer Netherlands: Dordrecht,
Netherlands. doi: 10.1007/BF01007975.
1055. Yolanda Gil and Raymond J. Mooney, editors. Proceedings, The Twenty-First National
Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial
Intelligence Conference (AAAI’06 / IAAI’06), July 16–20, 2006, Boston, MA, USA. AAAI
Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai06.php and http://www.aaai.org/Conferences/IAAI/iaai06.php [ac-
cessed 2007-09-06].
1056. Walter Gilbert. Why Genes in Pieces? Nature, 271(5645):501, February 9, 1978.
doi: 10.1038/271501a0. PubMed ID: 622185.
1057. John H. Gillespie. Molecular Evolution Over the Mutational Landscape. Evolution – In-
ternational Journal of Organic Evolution, 38(5):1116–1129, September 1984, Society for
the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.2307/2408444.
1058. Jean-Numa Gillet and Yunlong Sheng. Simulated Quenching with Temperature Rescaling
for Designing Diffractive Optical Elements. In Proceedings of the 18th Congress of the Inter-
national Commission for Optics [1062], pages 683–684, 1999. doi: 10.1117/12.354954.
1059. Jean-Numa Gillet and Yunlong Sheng. Iterative Simulated Quenching for Designing Irregular-
Spot-Array Generators. Applied Optics, 39(20):3456–3465, July 10, 2000, Optical Society of
America (OSA): Washington, DC, USA. doi: 10.1364/AO.39.003456.
1060. Manfred Gilli and Peter Winker. Heuristic Optimization Methods in Econometrics. In Hand-
book on Computational Econometrics [263]. Wiley Interscience: Chichester, West Sussex, UK,
October 21, 2007. doi: 10.1002/9780470748916.ch3. Fully available at http://www.unige.
ch/ses/metri/gilli/Teaching/GilliWinkerHandbook.pdf [accessed 2010-09-17].
1061. Third European Working Session on Learning (EWSL’88), October 3–5, 1988, Glasgow,
Scotland, UK.
1062. Alexander J. Glass, Joseph W. Goodman, Milton Chang, Arthur H. Guenther, and Toshim-
itsu Asakura, editors. Proceedings of the 18th Congress of the International Commission for
Optics, August 2, 1999, San Francisco, CA, USA, volume 3749 in The Proceedings of SPIE.
Society of Photo-Optical Instrumentation Engineers (SPIE): Bellingham, WA, USA.
1063. Fred Glover. Heuristics for Integer Programming using Surrogate Constraints. Management
Science Report Series 74-6, University of Colorado, Business Research Division: Boulder,
CO, USA, 1974. asin: B0006WBQOG. ID: OL14555131M. See also [1064].
1064. Fred Glover. Heuristics for Integer Programming using Surrogate Constraints. Decision
Sciences – A Journal of the Decision Sciences Institute, 8(1):156–166, January 1977, Decision
Sciences Institute: Atlanta, GA, USA and John Wiley & Sons Ltd.: New York, NY, USA.
doi: 10.1111/j.1540-5915.1977.tb01074.x. See also [1063].
1065. Fred Glover. Future Paths for Integer Programming and Links to Artificial Intelligence.
Computers & Operations Research, 13(5):533–549, 1986, Pergamon Press: Oxford, UK and
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0305-0548(86)90048-1. Fully
available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Future
%20Paths%20for%20Integer%20Programming.pdf [accessed 2010-10-03].
1066. Fred Glover. Tabu Search – Part I. ORSA Journal on Computing, 1(3):190–206, Sum-
mer 1989, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.1.3.190.
Fully available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Part
%20I-ORSA.pdf [accessed 2008-03-27]. See also [1067].
1067. Fred Glover. Tabu Search – Part II. ORSA Journal on Computing, 2(1):4–32,
Winter 1990, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.2.1.4.
Fully available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Part
%20II-ORSA-aw.pdf [accessed 2011-12-05]. See also [1066].
1068. Fred Glover and Gary A. Kochenberger, editors. Handbook of Metaheuristics, volume 57
in International Series in Operations Research & Management Science. Kluwer Academic
Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 2003.
doi: 10.1007/b101874. isbn: 0-306-48056-5 and 1-4020-7263-5. Series Editor Frederick
S. Hillier.
1069. Fred Glover, Éric D. Taillard, and Dominique de Werra. A User’s Guide to Tabu Search.
Annals of Operations Research, 41(1):3–28, March 1993, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1007/BF02078647. Special issue on Tabu search.
1070. Helen G. Cobb and John J. Grefenstette. Genetic Algorithms for Tracking Changing En-
vironments. In ICGA’93 [969], pages 523–529, 1993. Fully available at ftp://ftp.aic.
nrl.navy.mil/pub/papers/1993/AIC-93-004.ps [accessed 2008-07-19].
1071. William L. Goffe, Gary D. Ferrier, and John Rogers. Global Optimization of Statistical Func-
tions With Simulated Annealing. Journal of Econometrics, 60(1-2):65–99, January–February
1994, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0304-4076(94)90038-8. Fully
available at http://cook.rfe.org/anneal-preprint.pdf [accessed 2009-09-18]. Partly
available at http://econpapers.repec.org/software/codfortra/simanneal.htm
[accessed 2009-09-18].
1072. J. Peter Gogarten and Elena Hilario. Inteins, Introns, and Homing Endonucleases:
Recent Revelations about the Life Cycle of Parasitic Genetic Elements. BMC Evo-
lutionary Biology, 6(94), November 13, 2006, BioMed Central Ltd.: London, UK.
doi: 10.1186/1471-2148-6-94. Fully available at http://www.biomedcentral.com/
content/pdf/1471-2148-6-94.pdf [accessed 2008-03-23].
1073. David Edward Goldberg. Computer-Aided Gas Pipeline Operation using Genetic Algorithms
and Rule Learning. Dissertation Abstracts International (DAI), 44(10):3174B, April 1984,
ProQuest: Ann Arbor, MI, USA. ID: 750217171. See also [1074].
1074. David Edward Goldberg. Computer-aided Gas Pipeline Operation using Genetic Algorithms
and Rule Learning. PhD thesis, University of Michigan: Ann Arbor, MI, USA, 1983. Uni-
versity Microfilms No.: 8402282. See also [1073].
1075. David Edward Goldberg. Genetic Algorithms in Search, Optimization, and Ma-
chine Learning. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA,
1989. isbn: 0-201-15767-5. Google Books ID: 2IIJAAAACAAJ, 3 RQAAAAMAAJ, and
bO9SAAAACAAJ. OCLC: 17674450, 246916144, 255664278, and 299467773. Library of
Congress Control Number (LCCN): 88006276. GBV-Identification (PPN): 250764105,
301364087, 357619773, 357663551, 478167571, 499485440, 558515053, and
62589894X. LC Classification: QA402.5 .G635 1989.
1076. David Edward Goldberg. Genetic Algorithms and Walsh Functions: Part I, A Gentle Intro-
duction. Complex Systems, 3(2):129–152, 1989, Complex Systems Publications, Inc.: Cham-
paign, IL, USA. Fully available at http://www.complex-systems.com/pdf/03-2-2.
pdf [accessed 2010-07-23].
1077. David Edward Goldberg. Genetic Algorithms and Walsh Functions: Part II, Deception
and Its Analysis. Complex Systems, 3(2):153–171, 1989, Complex Systems Publications,
Inc.: Champaign, IL, USA. Fully available at http://www.complex-systems.com/pdf/
03-2-3.pdf [accessed 2010-07-23].
1078. David Edward Goldberg. Making Genetic Algorithms Fly: A Lesson from the Wright Broth-
ers. Advanced Technology For Developers, 2:1–8, February 1993, High-Tech Communications:
Sewickley, PA, USA.
1079. David Edward Goldberg and Kalyanmoy Deb. A Comparative Analysis of Selection Schemes
Used in Genetic Algorithms. In FOGA’90 [2562], pages 69–93, 1990. Fully available
at http://www.cse.unr.edu/~sushil/class/gas/papers/Select.pdf [accessed 2010-08-03].
CiteSeerx: 10.1.1.101.9494.
1080. David Edward Goldberg and Jon T. Richardson. Genetic Algorithms with Sharing for Mul-
timodal Function Optimization. In ICGA’87 [1132], pages 41–49, 1987.
1081. David Edward Goldberg and Liwei Wang. Adaptive Niching via Coevolutionary Sharing. In
Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163], Chapter 2, pages 21–38. John Wiley & Sons
Ltd.: New York, NY, USA, 1998. See also [1082].
1082. David Edward Goldberg and Liwei Wang. Adaptive Niching via Coevolutionary Sharing.
IlliGAL Report 97007, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of
Computer Science, Department of General Engineering, University of Illinois at Urbana-
Champaign: Urbana-Champaign, IL, USA, August 1997. Fully available at http://
www.illigal.uiuc.edu/pub/papers/IlliGALs/97007.ps.Z and www.lania.mx/
~ccoello/EMOO/goldberg98.pdf.gz [accessed 2009-08-28]. CiteSeerx: 10.1.1.15.5500.
See also [1081].
1083. David Edward Goldberg, Kalyanmoy Deb, and Bradley Korb. Messy Genetic Algorithms:
Motivation, Analysis, and First Results. Complex Systems, 3(5):493–530, 1989, Com-
plex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://www.
complex-systems.com/pdf/03-5-5.pdf [accessed 2010-08-09]. asin: B00071YQK2.
1084. David Edward Goldberg, Kalyanmoy Deb, and Bradley Korb. Messy Genetic Algorithms
Revisited: Studies in Mixed Size and Scale. Complex Systems, 4(4):415–444, 1990, Com-
plex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://www.
complex-systems.com/pdf/04-4-4.pdf [accessed 2010-08-09].
1085. David Edward Goldberg, Kalyanmoy Deb, and Jeffrey Horn. Massive Multimodality, Decep-
tion, and Genetic Algorithms. In PPSN II [1827], pages 37–48, 1992. See also [1086].
1086. David Edward Goldberg, Kalyanmoy Deb, and Jeffrey Horn. Massive Multimodality,
Deception, and Genetic Algorithms. Illigal report, Illinois Genetic Algorithms Labora-
tory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, April 1992.
CiteSeerx : 10.1.1.48.8442. See also [1085].
1087. David Edward Goldberg, Kalyanmoy Deb, Hillol Kargupta, and Georges Raif Harik. Rapid,
Accurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. In
ICGA’93 [969], pages 56–64, 1993. Fully available at https://eprints.kfupm.edu.
sa/60781/ [accessed 2008-10-18]. CiteSeerx : 10.1.1.49.3524. See also [1088].
1088. David Edward Goldberg, Kalyanmoy Deb, Hillol Kargupta, and Georges Raif Harik. Rapid,
Accurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. IlliGAL
Report 93004, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer
Science, Department of General Engineering, University of Illinois at Urbana-Champaign:
Urbana-Champaign, IL, USA, February 1993. Fully available at http://www.illigal.
uiuc.edu/pub/papers/IlliGALs/93004.ps.Z [accessed 2010-08-09]. See also [1087].
1089. Bruce L. Golden, James S. DeArmon, and Edward K. Baker. Computational Experiments
with Algorithms for a Class of Routing Problems. Computers & Operations Research, 10(1):
47–59, 1983, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0305-0548(83)90026-6.
1090. Bruce L. Golden, Edward A. Wasil, James P. Kelly, and I-Ming Chao. The Impact of
Metaheuristics on Solving the Vehicle Routing Problem: Algorithms, Problem Sets, and
Computational Results. In Fleet Management and Logistics [651], pages 33–56. Springer
Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA,
USA, 1998.
1091. Bruce L. Golden, S. (Raghu) Raghavan, and Edward A. Wasil, editors. The Vehicle
Routing Problem: Latest Advances and New Challenges, volume 43 in Operations Re-
search/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Germany, 2008.
doi: 10.1007/978-0-387-77778-8. isbn: 0-387-77777-6. Library of Congress Control Number
(LCCN): 2008921562.
1092. Oded Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge Univer-
sity Press: Cambridge, UK, 2008. Google Books ID: EuguvA-w5OEC. Library of Congress
Control Number (LCCN): 2008006750.
1093. Oded Goldreich, Arnold L. Rosenberg, and Alan L. Selman, editors. Theoretical Computer
Science: Essays in Memory of Shimon Even, volume 3895 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 2006. doi: 10.1007/11685654.
isbn: 3-540-32880-7. OCLC: 65427613, 318290447, 487447128, 612704970, and
629703194. Library of Congress Control Number (LCCN): 2006922002. GBV-
Identification (PPN): 508402778 and 512697760. LC Classification: MLCM 2006/41498
(Q).
1094. Erik F. Golen, Bo Yuan, and Nirmala Shenoy. An Evolutionary Approach to
Underwater Sensor Deployment. In GECCO’09-I [2342], pages 1925–1926, 2009.
doi: 10.1145/1569901.1570240.
1095. Matteo Golfarelli, Dario Maio, and Stefano Rizzi. Vertical Fragmentation of Views in
Relational Data Warehouses. In Atti del Settimo Convegno Nazionale Sistemi Evoluti
per Basi di Dati [283], pages 19–33, 1999. Fully available at http://www-db.deis.
unibo.it/˜srizzi/PDF/sebd99.pdf [accessed 2011-04-11]. CiteSeerx : 10.1.1.45.5959
and 10.1.1.64.414.
1096. Matteo Golfarelli, Vittorio Maniezzo, and Stefano Rizzi. Materialization of Fragmented Views
in Multidimensional Databases. Data & Knowledge Engineering, 49(3):325–351, June 2004,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.datak.2003.11.001.
CiteSeerx : 10.1.1.81.6534.
1097. Gene Howard Golub and Charles F. Van Loan. Matrix Computations, volume 3 in Johns
Hopkins Studies in the Mathematical Sciences. Johns Hopkins University (JHU)
Press: Baltimore, MD, USA, 3rd edition, 1996. isbn: 0801837391, 080185413X, and
0801854148. Google Books ID: mlOa7wPX6OYC. OCLC: 34515797. Library of Congress
Control Number (LCCN): 96014291. LC Classification: QA188 .G65 1996.
1098. Karthik Gomadam, Jon Lathem, John A. Miller, Meenakshi Nagarajan, Cary Pennington,
Amit P. Sheth, Kunal Verma, and Zixin Wu. Semantic Web Services Challenge 2006 - Phase
I. In First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery [2688], 2006. See also [2990].
1099. Takashi Gomi, editor. Proceedings of the International Symposium on Evolutionary Robotics
– From Intelligent Robotics to Artificial Life (ER’01), October 18–19, 2001, Tokyo, Japan,
volume 2217 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-42737-6.
1100. G. Gong and Bojan Cestnik. Hepatitis Data Set. UCI Machine Learning Reposi-
tory, Center for Machine Learning and Intelligent Systems, Donald Bren School of In-
formation and Computer Science, University of California: Irvine, CA, USA, Novem-
ber 1, 1988. Fully available at http://archive.ics.uci.edu/ml/datasets/
Breast+Cancer+Wisconsin+(Original) [accessed 2008-12-27].
1101. Yiyuan Gong and Alex S. Fukunaga. Fault Tolerance in Distributed Genetic Algorithms with
Tree Topologies. In CEC’09 [1350], pages 968–975, 2009. doi: 10.1109/CEC.2009.4983050.
INSPEC Accession Number: 10688632.
1102. Juan R. González, David Alejandro Pelta, Carlos Cruz, Germán Terrazas, and Natalio
Krasnogor, editors. Proceedings of the 4th International Workshop Nature Inspired Coop-
erative Strategies for Optimization (NICSO’10), May 12–14, 2010, Granada, Spain, vol-
ume 284 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-642-12538-6. isbn: 3-642-12537-9. Google Books ID: OAfiU1kGWWAC.
Library of Congress Control Number (LCCN): 20100924760.
1103. Teofilo F. Gonzalez, editor. Handbook of Approximation Algorithms and Metaheuristics,
Chapman & Hall/CRC Computer and Information Science Series. Chapman & Hall/CRC:
Boca Raton, FL, USA, 2007. isbn: 1584885505. Google Books ID: QK3 VU8ngK8C.
1104. Erik D. Goodman, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO’01 LBP), July 7–11, 2001, Holiday Inn Golden Gateway Hotel: San
Francisco, CA, USA. AAAI Press: Menlo Park, CA, USA. Google Books ID: FjhHHQAACAAJ
and FzhHHQAACAAJ. See also [2570].
1105. Janaki Gopalan, Reda Alhajj, and Ken Barker. Discovering Accurate and Interesting Classi-
fication Rules Using Genetic Algorithm. In DMIN’06 [656], pages 389–395, 2006. Fully avail-
able at http://ww1.ucmss.com/books/LFS/CSREA2006/DMI5509.pdf [accessed 2007-07-
29].
1106. V. Scott Gordon and L. Darrell Whitley. Serial and Parallel Genetic Algorithms as Function
Optimizers. In ICGA’93 [969], pages 177–183, 1993. Fully available at http://eprints.
kfupm.edu.sa/64675/ and www.cs.colostate.edu/˜genitor/1993/icga93.ps.
gz [accessed 2009-08-20]. CiteSeerx : 10.1.1.54.3472.
1107. Martina Gorges-Schleuter. ASPARAGOS: An Asynchronous Parallel Genetic Optimization
Strategy. In ICGA’89 [2414], pages 422–427, 1989.
1108. Martina Gorges-Schleuter. Explicit Parallelism of Genetic Algorithms through Population
Structures. In PPSN I [2438], pages 150–159, 1990. doi: 10.1007/BFb0029746.
1109. James Gosling and Henry McGilton. The Java Language Environment – A White Paper.
Technical Report, Sun Microsystems, Inc.: Santa Clara, CA, USA, May 1996. Fully available
at http://java.sun.com/docs/white/langenv/ [accessed 2007-07-03].
1110. James Gosling, Bill Joy, Guy Steele, and Gilad Bracha. The JavaTM Language Specification,
The Java Series. Prentice Hall International Inc.: Upper Saddle River, NJ, USA, Sun Mi-
crosystems Press (SMP): Santa Clara, CA, USA, and Addison-Wesley Professional: Reading,
MA, USA, 3rd edition, May 2005. isbn: 0-321-24678-0. Fully available at http://java.
sun.com/docs/books/jls/ [accessed 2007-09-14].
1111. Simon Goss, R. Beckers, Jean-Louis Deneubourg, S. Aron, and Jacques M. Pasteels. How
Trail Laying and Trail Following Can Solve Foraging Problems for Ant Colonies. In NATO
Advanced Research Workshop on Behavioural Mechanisms of Food Selection [1306], pages
661–678, 1989.
1112. William Sealy Gosset. The Probable Error of a Mean. Biometrika, 6(1):1–25, March 1908,
Oxford University Press, Inc.: New York, NY, USA. Fully available at http://www.york.
ac.uk/depts/maths/histstat/student.pdf [accessed 2007-09-30]. Because the author was
not allowed to publish this article, he used the pseudonym “Student”. Reprinted in [1113].
1113. William Sealy Gosset. The Probable Error of a Mean. In “Student’s” Collected Papers [1114],
pages 11–34. Cambridge University Press: Cambridge, UK and Biometrika Office: London,
UK, 1942. Reprint of [1112].
1114. William Sealy Gosset, author. Egon Sharpe Pearson and John Wishart, editors. “Student’s”
Collected Papers. Cambridge University Press: Cambridge, UK and Biometrika Office:
London, UK, 1942. Google Books ID: 39fvGAAACAAJ, 3tfvGAAACAAJ, SdKEAAAAIAAJ,
UoJsAAAAMAAJ, and rqM2HQAACAAJ. OCLC: 220708760, 258570743, 313515170, and
318188805. With a Foreword by Launce McMullen.
1115. Jens Gottlieb and Günther R. Raidl, editors. Proceedings of the 4th European Confer-
ence on Evolutionary Computation in Combinatorial Optimization (EvoCOP’04), April 5–7,
2004, Coimbra, Portugal, volume 3004/2004 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96499. isbn: 3-540-21367-8.
1116. Jens Gottlieb and Günther R. Raidl, editors. Proceedings of the 6th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP’06), April 10–12,
2006, Budapest, Hungary, volume 3906/2006 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-33178-6.
1117. Jens Gottlieb, Elena Marchiori, and Claudio Rossi. Evolutionary Algorithms for the Satisfiability
Problem. Evolutionary Computation, 10(1):35–50, Spring 2002, MIT Press: Cam-
bridge, MA, USA. doi: 10.1162/106365602317301763. Fully available at http://www.
cs.ru.nl/˜elenam/fsat.pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.77.4306. PubMed
ID: 11911782. See also [2335].
1118. Georg Gottlob and Toby Walsh, editors. Proceedings of the Eighteenth International Joint
Conference on Artificial Intelligence (IJCAI’03), August 9–15, 2003, Acapulco, México. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 0-127-05661-0. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-2003/content.htm [accessed 2008-04-01].
See also [1485].
1119. Gang Gou, Jeffrey Xu Yu, and Hongjun Lu. A* Search: An Efficient and Flexible Approach
to Materialized View Selection. IEEE Transactions on Systems, Man, and Cybernetics – Part
C: Applications and Reviews, 36(3):411–425, May 2006, IEEE Systems, Man, and Cybernet-
ics Society: New York, NY, USA. doi: 10.1109/TSMCC.2004.843248. INSPEC Accession
Number: 8906628.
1120. Paul Grace. Genetic Programming and Protocol Configuration. Master’s thesis, Lancaster
University, Computing Department: Lancaster, Lancashire, UK, September 2000, Gordon
Blair, Supervisor. Fully available at http://www.lancs.ac.uk/postgrad/gracep/
msc.pdf [accessed 2008-06-23].
1121. Jörn Grahl. Estimation of Distribution Algorithms in Logistics: Analysis, Design, and Appli-
cation. PhD thesis, Universität Mannheim, Fakultät für Betriebswirtschaftslehre: Mannheim,
Germany, April 16, 2008, Daniel J. Veit, Supervisor, Hans H. Bauer and Stefan Minner,
Committee members. Fully available at http://madoc.bib.uni-mannheim.de/madoc/
volltexte/2008/1897/ [accessed 2010-06-18]. urn: urn:nbn:de:bsz:180-madoc-18974.
1122. The 2006 China-UK Workshop on Nature Inspired Computation and Applications
(CUWNICA’06), October 25–27, 2006, Grand Hotel Overseas Traders Club: Héféi, Ānhuī,
China.
1123. Pierre-Paul Grassé. La Reconstruction du Nid et les Coordinations Inter-Individuelles
chez Bellicositermes Natalensis et Cubitermes sp. La Théorie de la Stigmergie: Essai
d’Interprétation du Comportement des Termites Constructeurs. Insectes Sociaux, 6(1):41–80, March 1959,
Springer-Verlag GmbH: Berlin, Germany and Birkhäuser Verlag: Basel, Switzerland.
doi: 10.1007/BF02223791.
1124. Federico Greco, editor. Traveling Salesman Problem. IN-TECH Education and Publish-
ing: Vienna, Austria, September 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-10-7&type=B [accessed 2009-01-13].
1125. Garrison W. Greenwood, Xiaobo Sharon Hu, and Joseph G. D’Ambrosio. Fitness Functions
for Multiple Objective Optimization Problems: Combining Preferences with Pareto Rankings.
In FOGA’96 [256], pages 437–455, 1996.
1126. John J. Grefenstette. Incorporating Problem Specific Knowledge into Genetic Algorithms.
In Genetic Algorithms and Simulated Annealing [694], pages 42–60. Pitman: London, UK,
1987–1990.
1127. John J. Grefenstette. Credit Assignment in Rule Discovery Systems Based on Ge-
netic Algorithms. Machine Learning, 3(2-3):225–245, October 1988, Kluwer Academic
Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1022614421909.
1128. John J. Grefenstette. Conditions for Implicit Parallelism. In FOGA’90 [2562], pages 252–261,
1990. Fully available at http://www.aic.nrl.navy.mil/papers/1991/AIC-91-012.
ps.Z [accessed 2010-07-30]. CiteSeerx : 10.1.1.52.7729.
1129. John J. Grefenstette. Deception Considered Harmful. In FOGA’92 [2927], pages 75–91, 1992.
CiteSeerx : 10.1.1.49.1448.
1130. John J. Grefenstette. Predictive Models Using Fitness Distributions of Genetic Operators. In
FOGA 3 [2932], pages 139–161, 1994. Fully available at http://reference.kfupm.edu.
sa/content/p/r/predictive_models_using_fitness_distribu_672029.pdf [accessed 2009-07-10].
CiteSeerx: 10.1.1.52.4341.
1131. John J. Grefenstette, editor. Proceedings of the 1st International Conference on Ge-
netic Algorithms and their Applications (ICGA’85), June 24–26, 1985, Carnegie Mellon
University (CMU): Pittsburgh, PA, USA. Lawrence Erlbaum Associates: Hillsdale, NJ,
USA. isbn: 0-8058-0426-9. OCLC: 19702892, 174343913, 475815642, 493612786,
497548640, and 606101493.
1132. John J. Grefenstette, editor. Proceedings of the Second International Conference on Genetic
Algorithms and their Applications (ICGA’87), July 28–31, 1987, Massachusetts Institute of
Technology (MIT): Cambridge, MA, USA. Lawrence Erlbaum Associates, Inc. (LEA): Mah-
wah, NJ, USA. isbn: 0-8058-0158-8. Google Books ID: 0hdRAAAAMAAJ, 5N7LAQAACAAJ,
and bvlQAAAAMAAJ. OCLC: 17252562.
1133. John J. Grefenstette and James E. Baker. How Genetic Algorithms Work: A Critical Look
at Implicit Parallelism. In ICGA’89 [2414], pages 20–27, 1989.
1134. Horst Greiner. Robust Filter Design by Stochastic Optimization. In Optical Interference
Coatings [6], pages 150–160, 1994. doi: 10.1117/12.192107.
1135. Horst Greiner. Robust Optical Coating Design with Evolutionary Strategies. Applied Optics,
35(28):5477–5483, October 1, 1996, Optical Society of America (OSA): Washington, DC,
USA. doi: 10.1364/AO.35.005477.
1136. Andreas Griewank and Philippe L. Toint. Local Convergence Analysis for Parti-
tioned Quasi-Newton Updates. Numerische Mathematik, 39(3):429–448, October 1982.
doi: 10.1007/BF01407874.
1137. Andreas Griewank and Philippe L. Toint. Partitioned Variable Metric Updates for Large
Structured Optimization Problems. Numerische Mathematik, 39(1):119–137, February 1982.
doi: 10.1007/BF01399316.
1138. David F. Griffiths and G. A. Watson, editors. Numerical Analysis – Proceedings of the 16th
Dundee Biennial Conference in Numerical Analysis, June 27–30, 1995, University of Dundee:
Nethergate, Dundee, Scotland, UK, volume 344 in Pitman Research Notes in Mathemat-
ics. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, Chapman & Hall:
London, UK, and CRC Press, Inc.: Boca Raton, FL, USA. isbn: 0-582-27633-0. GBV-
Identification (PPN): 198456158 and 27380491X.
1139. Crina Grosan and Ajith Abraham. Hybrid Evolutionary Algorithms: Methodolo-
gies, Architectures, and Reviews. In Hybrid Evolutionary Algorithms [1141], pages
1–17. Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-73297-6_1.
Fully available at http://www.softcomputing.net/hea1.pdf [accessed 2010-09-11].
CiteSeerx : 10.1.1.160.9007.
1140. Crina Grosan and Ajith Abraham. A Novel Global Optimization Technique for High Dimen-
sional Functions. International Journal of Intelligent Systems, 24(4):421–440, April 2009,
Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1002/int.20343. Fully available
at http://www.softcomputing.net/wil2009.pdf [accessed 2009-08-07].
1141. Crina Grosan, Ajith Abraham, and Hisao Ishibuchi, editors. Hybrid Evolutionary Algorithms,
volume 75/2007 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Hei-
delberg, 2007. doi: 10.1007/978-3-540-73297-6. isbn: 3-540-73296-9. Google Books
ID: II7LCqiGFlEC. Library of Congress Control Number (LCCN): 2007932399.
1142. David A. Grossman, Luis Gravano, ChengXiang Zhai, Otthein Herzog, and David A. Evans,
editors. Proceedings of ACM Thirteenth Conference on Information and Knowledge Man-
agement (CIKM’04), November 8–13, 2004, Hyatt Arlington Hotel: Washington, DC, USA.
isbn: 1-58113-874-1. Partly available at http://ir.iit.edu/cikm2004/ [accessed 2011-
04-11].
1143. Frédéric Gruau. Neural Network Synthesis using Cellular Encoding and the Genetic Algo-
rithm. PhD thesis, l’École Normale Supérieure de Lyon, Laboratoire de l’Informatique du
Parallélisme (LIP-IMAG), Unité de Recherche Associée au CNRS: Lyon, France and Centre
d’Étude Nucléaire de Grenoble (CENG), Départment de Recherche Fondamentale Matière
Condensée: Grenoble, France, January 4, 1994, Maurice Nivat, Jean-Arcady Meyer, Jacques
Demongeot, Michel Cosnard, Jacques Mazoyer, Pierre Peretto, and L. Darrell Whitley, Com-
mittee members. Fully available at ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/PhD/
PhD1994/PhD1994-01-E.ps.Z [accessed 2010-08-05]. CiteSeerx : 10.1.1.29.5939.
1144. Frédéric Gruau and L. Darrell Whitley. Adding Learning to the Cellular Development
of Neural Networks: Evolution and the Baldwin Effect. Evolutionary Computation, 1(3):
213–233, Fall 1993, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1993.1.3.213.
CiteSeerx : 10.1.1.34.1656.
1145. Jun-hua Gu, Xiu-li Zhao, and Qing Tan. Application of Ant Colony System to Materi-
alized Views Selection. Journal of Computer Applications (Jìsuànjī Yìngyòng), 27(11):
2763–2765, November 2007, Chinese Electronic Periodical Services (CEPS): Taipei, Tai-
wan. Fully available at http://epub.cnki.net/grid2008/docdown/docdownload.
aspx?filename=JSJY200711049&dbcode=CJFD&dflag=pdfdown [accessed 2011-04-10].
ID: CNKI:SUN:JSJY.0.2007-11-049.
1146. Zhifeng Gu, Bin Xu, and Juanzi Li. Inheritance-Aware Document-Driven Service Composi-
tion. In CEC/EEE’07 [1344], pages 513–516, 2007. doi: 10.1109/CEC-EEE.2007.56. INSPEC
Accession Number: 9868636. See also [319, 1797, 2998, 3010, 3011].
1147. Frank Guerin and Wamberto Weber Vasconcelos, editors. AISB 2008 Convention: Com-
munication, Interaction and Social Intelligence (AISB’08), April 1–4, 2008, University of
Aberdeen: Aberdeen, Scotland, UK, volume 12 in Computing & Philosophy. Society for
the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester,
West Sussex, UK. isbn: 1-902956-71-0. Fully available at http://www.aisb.org.uk/
publications/proceedings.shtml [accessed 2008-09-10].
1148. Michael Guntsch and Martin Middendorf. Pheromone Modification Strategies for Ant
Algorithms Applied to Dynamic TSP. In EvoWorkshops’01 [343], pages 213–222, 2001.
doi: 10.1007/3-540-45365-2_22. Fully available at http://pacosy.informatik.
uni-leipzig.de/pv/Personen/middendorf/Papers/Papers/EvoCOP2001.
Report.ps [accessed 2009-06-29].
1149. Michael Guntsch, Martin Middendorf, and Hartmut Schmeck. An Ant Colony Optimiza-
tion Approach to Dynamic TSP. In GECCO’01 [2570], pages 860–867, 2001. Fully
available at http://pacosy.informatik.uni-leipzig.de/middendorf/Papers/
GECCO2001-Pap.ps [accessed 2009-07-13]. CiteSeerx : 10.1.1.58.1398.
1150. Maozu Guo, Liang Zhao, and Lipo Wang, editors. Proceedings of the 4th International
Conference on Natural Computation (ICNC’08), October 18–20, 2008, Jì’nán, Shāndōng,
China. IEEE Computer Society: Piscataway, NJ, USA. Library of Congress Control Number
(LCCN): 2008904182. Catalogue no.: CFP08CNC-PRT.
1151. Ashish Gupta, Oded Shmueli, and Jennifer Widom, editors. Proceedings of the 24th International
Conference on Very Large Data Bases (VLDB’98), August 24–26, 1998, New York, NY, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-566-5.
1152. Himanshu Gupta. Selection of Views to Materialize in a Data Warehouse. In
ICDT’97 [34], pages 98–112, 1997. doi: 10.1007/3-540-62222-5 39. Fully avail-
able at http://ilpubs.stanford.edu:8090/161/1/1996-34.pdf [accessed 2011-04-12].
CiteSeerx : 10.1.1.30.3892. See also [1154].
1153. Himanshu Gupta and Inderpal Singh Mumick. Selection of Views to Material-
ize under Maintenance Cost Constraint. In ICDT’99 [249], pages 453–470, 1999.
doi: 10.1007/3-540-49257-7 28.
1154. Himanshu Gupta and Inderpal Singh Mumick. Selection of Views to Materialize
in a Data Warehouse. IEEE Transactions on Knowledge and Data Engineering, 17
(1):24–43, January 2005, IEEE Computer Society Press: Los Alamitos, CA, USA.
doi: 10.1109/TKDE.2005.16. INSPEC Accession Number: 8287192. See also [1152].
1155. L. S. Gurin and Leonard A. Rastrigin. Convergence of the Random Search Method in
the Presence of Noise. Automation and Remote Control, 26:1505–1511, 1965, Springer Sci-
ence+Business Media, Inc.: New York, NY, USA and MAIK Nauka/Interperiodica – Pleiades
Publishing, Inc.: Moscow, Russia.
1156. Steven Matt Gustafson. An Analysis of Diversity in Genetic Programming. PhD thesis, Uni-
versity of Nottingham, School of Computer Science & IT: Nottingham, UK, February 2004.
Fully available at http://www.gustafsonresearch.com/thesis_html/index.html
[accessed 2009-07-09]. CiteSeerx : 10.1.1.2.9746.
1157. Steven Matt Gustafson, Anikó Ekárt, Edmund K. Burke, and Graham Kendall.
Problem Difficulty and Code Growth in Genetic Programming. Genetic Program-
ming and Evolvable Machines, 5(3):271–290, September 2004, Springer Netherlands:
Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA,
USA. doi: 10.1023/B:GENP.0000030194.98244.e3. Fully available at http://www.
gustafsonresearch.com/research/publications/gustafson-gpem2004.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.59.6659. Submitted March 4, 2003; Revised August 7,
2003.
1158. Sergio Gutiérrez, Grégory Valigiani, Pierre Collet, and Carlos Delgado Kloos. Adaptation
of the ACO Heuristic for Sequencing Learning Activities. In EC-TEL’07 Posters [2962],
2007. Fully available at http://ftp.informatik.rwth-aachen.de/Publications/
CEUR-WS/Vol-280/p15.pdf and http://sunsite.informatik.rwth-aachen.de/
Publications/CEUR-WS/Vol-280/p15.pdf [accessed 2009-09-05].
1159. Tomasz Dominik Gwiazda. Genetic Algorithms Reference – Volume 1 – Crossover for
Single-Objective Numerical Optimization Problems. TOMASZGWIAZDA Ebooks: Lomianki,
Poland, Agata Szczepańska, Translator. isbn: 83-923958-3-2 and 83-923958-4-0.
OCLC: 300393138.
1160. Lorenz Gygax. Statistik für Nutztierethologen – Einführung in die statistische Denkweise:
Was ist, was macht ein statistischer Test? Swiss Federal Veterinary Office (FVO), Centre
for Proper Housing of Ruminants and Pigs: Zürich, Switzerland, 1.1 edition, June 2003. Fully
available at http://www.lorenzgygax.ch/documents/introEtho.pdf and http://
www.proximate-biology.ch/documents/introEtho.pdf [accessed 2009-07-29].
1161. Atidel Ben Hadj-Alouane and James C. Bean. A Genetic Algorithm for the Multiple-Choice
Integer Program. Technical Report 92-50, University of Michigan, Department of Indus-
trial and Operations Engineering: Ann Arbor, MI, USA, September 1992. Fully available
at http://hdl.handle.net/2027.42/3516 [accessed 2008-11-15]. Revised in July 1993.
1162. George Hadley. Nonlinear and Dynamic Programming, World Student Series. Addison-Wesley
Professional: Reading, MA, USA, December 1964. isbn: 0201026643.
1163. Proceedings of the 27th International Conference on Machine Learning (ICML’10), June 21–
24, 2010, Haifa, Israel.
1164. Karen Haigh and Nestor Rychtyckyj, editors. Proceedings of the Twenty-first Innovative
Applications of Artificial Intelligence Conference (IAAI’09), July 14–16, 2009, Pasadena,
CA, USA. AAAI Press: Menlo Park, CA, USA.
1165. Yacov Y. Haimes, editor. Proceedings of the 6th International Conference on Multiple Cri-
teria Decision Making: Decision Making with Multiple Objectives (MCDM’84), June 4–8,
1984, Cleveland, OH, USA, volume 242 in Lecture Notes in Economics and Mathemati-
cal Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387152237 and 3-540-15223-7.
OCLC: 45927944, 310680663, 450849831, 470794588, 613247950, 634256419, and
638892702. Published December 31, 1985.
1166. Yacov Y. Haimes and Ralph E. Steuer, editors. Proceedings of the 14th International Con-
ference on Multiple Criteria Decision Making: Research and Practice in Multi Criteria De-
cision Making (MCDM’98), June 12–18, 1998, University of Virginia: Charlottesville, VA,
USA, volume 487 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 3-540-67266-1. Google Books ID: skkEdJir95QC.
1167. Bruce Hajek. Cooling Schedules for Optimal Annealing. Mathematics of Operations Research
(MOR), 13(2):311–329, May 1988, Institute for Operations Research and the Management
Sciences (INFORMS): Linthicum, MD, USA. doi: 10.1287/moor.13.2.311. Fully available
at http://web.mit.edu/6.435/www/Hajek88.pdf [accessed 2010-09-25].
1168. Hesham H. Hallal, Arnaud Dury, and Alexandre Petrenko. Web-FIM: Automated Framework
for the Inference of Business Software Models. In SERVICES Cup’09 [1351], pages 130–138,
2009. doi: 10.1109/SERVICES-I.2009.106. INSPEC Accession Number: 10975112.
1169. Paul Richard Halmos. Naive Set Theory, Undergraduate Texts in Mathematics. Lit-
ton Educational Publishing, Inc.: New York, NY, USA, 1st, new edition, 1960–1998.
isbn: 0-3879-0092-6. Google Books ID: atw2Z3lqfBIC.
1170. Proceedings of the First International Conference on Metaheuristics and Nature Inspired
Computing (META’06), November 2–4, 2006, Hammamet, Tunisia.
1171. Proceedings of the 2nd International Conference on Metaheuristics and Nature Inspired Com-
puting (META’08), October 29–31, 2008, Hammamet, Tunisia.
1172. Ulrich Hammel and Thomas Bäck. Evolution Strategies on Noisy Functions: How
to Improve Convergence Properties. In PPSN III [693], pages 159–168, 1994.
doi: 10.1007/3-540-58484-6 260. CiteSeerx : 10.1.1.57.5798.
1173. Richard W. Hamming. Error-Detecting and Error-Correcting Codes. Bell System Technical
Journal, 29(2):147–169, 1950, Bell Laboratories: Berkeley Heights, NJ, USA. Fully avail-
able at http://garfield.library.upenn.edu/classics1982/A1982NY35700001.
pdf and http://guest.engelschall.com/˜sb/hamming/ [accessed 2010-12-04].
1174. Michelle Okaley Hammond and Terence Claus Fogarty. Co-operative OuLiPian Genera-
tive Literature using Human Based Evolutionary Computing. In GECCO’05 LBP [2337],
2005. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gecco2005lbp/
papers/56-hammond.pdf [accessed 2010-08-07].
1175. Lin Han and Xingshi He. A Novel Opposition-Based Particle Swarm Optimization for Noisy
Problems. In ICNC’07 [1711], pages 624–629, volume 3, 2007. doi: 10.1109/ICNC.2007.119.
1176. David J. Hand, Heikki Mannila, and Padhraic Smyth. Principles of Data Mining, volume
6 in Adaptive Computation and Machine Learning, Bradford Books. Prentice-Hall Of India
Pvt. Ltd. (PHI): New Delhi, India, August 2001. isbn: 0-262-08290-X and 8120324579.
Google Books ID: 5i2WPwAACAAJ and SdZ-bhVhZGYC. OCLC: 46866355.
1177. Hisashi Handa, Dan Lin, Lee Chapman, and Xin Yao. Robust Solution of Salting
Route Optimisation Using Evolutionary Algorithms. In CEC’06 [3033], pages 3098–3105,
2006. doi: 10.1109/CEC.2006.1688701. Fully available at http://escholarship.lib.
okayama-u.ac.jp/industrial_engineering/2/ [accessed 2008-06-19].
1178. Julia Handl, Simon C. Lovell, and Joshua D. Knowles. Multiobjectivization by
Decomposition of Scalar Cost Functions. In PPSN X [2354], pages 31–40, 2008.
doi: 10.1007/978-3-540-87700-4 4. Fully available at http://dbkgroup.org/handl/
ppsn_decomp.pdf [accessed 2010-07-18].
1179. Proceedings of the Ninth International Conference on Simulated Evolution And Learning
(SEAL’12), December 16–19, 2012, Hanoi, Vietnam.
1180. James V. Hansen, Paul Benjamin Lowry, Rayman Meservy, and Dan McDonald. Ge-
netic Programming for Prevention of Cyberterrorism through Previous Dynamic and Evolv-
ing Intrusion Detection. Decision Support Systems, 43(4):1362–1374, August 2007, El-
sevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.dss.2006.04.004. Fully avail-
able at http://dx.doi.org/10.1016/j.dss.2006.04.004 and http://ssrn.com/
abstract=877981 [accessed 2008-06-17]. Special Issue on Clusters.
1181. Nikolaus Hansen, Raymond Ros, Nikolas Mauny, Marc Schoenauer, and Anne Auger.
PSO Facing Non-Separable and Ill-Conditioned Problems. Technical Report 6447, In-
stitut National de Recherche en Informatique et en Automatique (INRIA), February 11,
2008. Fully available at http://hal.archives-ouvertes.fr/docs/00/25/01/60/
PDF/RR-6447.pdf [accessed 2009-10-20]. See also [147].
1182. Nikolaus Hansen and Andreas Ostermeier. Adapting Arbitrary Normal Mutation Distri-
butions in Evolution Strategies: The Covariance Matrix Adaptation. In CEC’96 [1445],
pages 312–317, 1996. doi: 10.1109/ICEC.1996.542381. Fully available at http://
www.bionik.tu-berlin.de/ftp-papers/CMAES.ps.Z, http://www.ml.inf.
ethz.ch/education/ws_06_07_seminar_mustererkennung/paper_hansen_
ostermeier, and https://eprints.kfupm.edu.sa/22718/1/22718.pdf [accessed 2009-09-18]. CiteSeerx : 10.1.1.35.2147. INSPEC Accession Number: 5375406.
1183. Nikolaus Hansen and Andreas Ostermeier. Convergence Properties of Evolution Strate-
gies with the Derandomized Covariance Matrix Adaptation: The (µ/µI , λ)-CMA-ES. In EU-
FIT’97 [3091], pages 650–654, volume 1, 1997. CiteSeerx : 10.1.1.30.648.
1184. Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation
in Evolution Strategies. Evolutionary Computation, 9(2):159–195, 2001, MIT Press:
Cambridge, MA, USA. Fully available at http://www.bionik.tu-berlin.
de/user/niko/cmaartic.pdf, http://www.cs.colostate.edu/˜whitley/CS640/
cmaartic.pdf, and http://www.lri.fr/˜hansen/cmaartic.pdf [accessed 2010-08-10].
CiteSeerx : 10.1.1.13.7462.
1185. Nikolaus Hansen, Andreas Ostermeier, and Andreas Gawelczyk. On the Adaptation of Ar-
bitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adap-
tation. In ICGA’95 [883], pages 57–64, 1995. CiteSeerx : 10.1.1.29.9321.
1186. Nikolaus Hansen, Sibylle D. Müller, and Petros Koumoutsakos. Reducing the Time Com-
plexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-
ES). Evolutionary Computation, 11(1):1–18, Spring 2003, MIT Press: Cambridge, MA,
USA. doi: 10.1162/106365603321828970. Fully available at http://mitpress.mit.edu/
journals/pdf/evco_11_1_1_0.pdf [accessed 2010-03-08]. CiteSeerx : 10.1.1.13.1008.
1187. Nikolaus Hansen, Fabian Gemperle, Anne Auger, and Petros Koumoutsakos. When
Do Heavy-Tail Distributions Help? In PPSN IX [2355], pages 62–71, 2006.
doi: 10.1007/11844297 7.
1188. Nikolaus Hansen, Anne Auger, Steffen Finck, and Raymond Ros. Real-Parameter Black-
Box Optimization Benchmarking 2009: Experimental Setup. Rapports de Recherche
RR-6828, Institut National de Recherche en Informatique et en Automatique (INRIA),
October 16, 2009. Fully available at http://coco.gforge.inria.fr/doku.php?
id=bbob-2009, http://hal.archives-ouvertes.fr/inria-00362649/en/, and
http://hal.inria.fr/inria-00362649/en [accessed 2009-10-28]. Version 3.
1189. Pierre Hansen. The Steepest Ascent Mildest Descent Heuristic for Combinatorial Program-
ming. In Congress on Numerical Methods in Combinatorial Optimization [17], 1986.
1190. Pierre Hansen, editor. Proceedings of the 5th International Conference on Multiple Criteria
Decision Making: Essays and Surveys on Multiple Criteria Decision Making (MCDM’82),
August 9–13, 1982, Mons, Brussels, Belgium, volume 209 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387119914 and
3540119914. OCLC: 9350544, 239745974, 310666373, 455931654, 470793901,
497290816, 498330203, 610956168, and 634251590. Published in March 1983.
1191. Jin-Kao Hao, Evelyne Lutton, Edmund M. A. Ronald, Marc Schoenauer, and Dominique
Snyers, editors. Selected Papers of the Third European Conference on Artificial Evolu-
tion (AE’97), October 22–24, 1997, Ecole pour les Etudes et la Recherche en Informa-
tique et Electronique (EERIE): Nîmes, France, volume 1363/1998 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0026588.
isbn: 3-540-64169-6. Google Books ID: DtQQwyST3RkC. OCLC: 38519371, 65845265,
190833428, 243486301, and 246325993.
1192. Yue Hao, Jiming Liu, Yuping Wang, Yiu-ming Cheung, and Hujun Yin, editors. Proceedings
of the 2005 International Conference on Computational Intelligence and Security, Part 1
(CIS’05), December 15–19, 2005, Xī’ān, Shǎnxī, China, volume 3801 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11596448. isbn: 3540308180. Google Books
ID: lGLPmtQFyJkC.
1193. Yue Hao, Jiming Liu, Yuping Wang, Yiu-ming Cheung, and Hujun Yin, editors. Proceedings
of the 2005 International Conference on Computational Intelligence and Security, Part 2
(CIS’05), December 15–19, 2005, Xī’ān, Shǎnxī, China, volume 3802 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11596981. isbn: 3540308199. Google Books
ID: IH VSc0d1EYC.
1194. Jenny A. Harding, Muhammad Shahbaz, Srinivas, and Andrew Kusiak. Data Min-
ing in Manufacturing: A Review. Journal of Manufacturing Science and Engineer-
ing, 128(4):969–977, November 2006, American Society of Mechanical Engineers (ASME):
New York, NY, USA. Fully available at http://www.icaen.uiowa.edu/˜ankusiak/
Journal-papers/Harding.pdf [accessed 2010-06-27].
1195. Simon Harding and Wolfgang Banzhaf. Fast Genetic Programming on GPUs. In Eu-
roGP’07 [856], pages 90–101, 2007. doi: 10.1007/978-3-540-71605-1 9. Fully avail-
able at http://www.cs.mun.ca/˜banzhaf/papers/eurogp07.pdf [accessed 2008-12-24].
CiteSeerx : 10.1.1.93.1862.
1196. Georges Raif Harik. Learning Gene Linkage to Efficiently Solve Problems of Bounded Diffi-
culty using Genetic Algorithms. PhD thesis, University of Michigan: Ann Arbor, MI, USA,
1997, Keki B. Irani, David Edward Goldberg, Edmund H. Durfee, and Robert K. Lindsay,
Committee members. CiteSeerx : 10.1.1.54.7092. UMI Order No.: GAX97-32090.
1197. Georges Raif Harik. Linkage Learning via Probabilistic Modeling in the ECGA. Il-
liGAL Report 99010, Illinois Genetic Algorithms Laboratory (IlliGAL), Department
of Computer Science, Department of General Engineering, University of Illinois at
Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available
at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99010.ps.Z [accessed 2009-08-27]. CiteSeerx : 10.1.1.33.5508.
1198. Georges Raif Harik, Fernando G. Lobo, and David Edward Goldberg. The Compact
Genetic Algorithm. IlliGAL Report 97006, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, Uni-
versity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, August 1997.
CiteSeerx : 10.1.1.48.906. See also [1199].
1199. Georges Raif Harik, Fernando G. Lobo, and David Edward Goldberg. The Compact Ge-
netic Algorithm. IEEE Transactions on Evolutionary Computation (IEEE-EC), 3(4):287–297,
November 1999, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.797971.
INSPEC Accession Number: 6411650. See also [1198].
1200. Venky Harinarayan, Anand Rajaraman, and Jeffrey David Ullman. Implementing Data Cubes
Efficiently. In SIGMOD’96 [1428], pages 205–216, 1996. doi: 10.1145/233269.233333. See
also [1201].
1201. Venky Harinarayan, Anand Rajaraman, and Jeffrey David Ullman. Implementing Data Cubes
Efficiently. SIGMOD Record, 25(2):205–216, June 1996, Association for Computing Ma-
chinery Special Interest Group on Management of Data (ACM SIGMOD): New York, NY,
USA. doi: 10.1145/235968.233333. Fully available at http://infolab.stanford.edu/
pub/papers/cube.ps, http://www.cs.aau.dk/˜simas/dat5_08/presentations/
Implementing_Data_Cubes_Efficiently.pdf, and http://www.cs.sfu.ca/CC/
459/han/papers/harinarayan96.pdf [accessed 2011-03-28]. CiteSeerx : 10.1.1.63.9928.
See also [1200].
1202. Lisa Lavoie Harlow, Stanley A. Mulaik, and James H. Steiger, editors. What If There Were
No Significance Tests?, Multivariate Applications Book Series. Lawrence Erlbaum Asso-
ciates, Inc. (LEA): Mahwah, NJ, USA, August 1997. isbn: 0805826343. Google Books
ID: lZ2ybZzrnroC. OCLC: 36892849 and 264623722.
1203. Emma Hart, Chris McEwan, Jonathan Timmis, and Andy Hone, editors. Proceedings of the
10th International Conference on Artificial Immune Systems (ICARIS’11), July 18–21, 2011,
University of Cambridge, Computer Laboratory, William Gates Building: Cambridge, UK,
volume 6209 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany.
1204. Peter Elliot Hart, Nils J. Nilsson, and Bertram Raphael. Correction to “A Formal Basis for
the Heuristic Determination of Minimum Cost Paths”. ACM SIGART Bulletin, -(37):28–29,
December 1972, ACM Press: New York, NY, USA. doi: 10.1145/1056777.1056779.
1205. William Eugene Hart, Natalio Krasnogor, and James E. Smith, editors. Recent Advances in
Memetic Algorithms, volume 166/2005 in Studies in Fuzziness and Soft Computing. Springer-
Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/3-540-32363-5. isbn: 3-540-22904-3.
Google Books ID: LYf7YW4DmkUC. OCLC: 56697114 and 318297267. Library of Congress
Control Number (LCCN): 2004111139.
1206. Brad Harvey, James A. Foster, and Deborah Frincke. Byte Code Genetic Programming. In
GP’98 LBP [1605], 1998. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/harvey_1998_bcGP.html and http://www.csds.uidaho.edu/deb/jvm.
pdf [accessed 2008-09-16].
1207. Brad Harvey, James A. Foster, and Deborah Frincke. Towards Byte Code Genetic Program-
ming. In GECCO’99 [211], page 1234, volume 2, 1999. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/harvey_1999_TBCGP.html [accessed 2008-09-16].
CiteSeerx : 10.1.1.22.4974.
1208. Inman Harvey. The Puzzle of the Persistent Question Marks: A Case Study of Genetic Drift.
In ICGA’93 [969], pages 15–22, 1993. See also [1209].
1209. Inman Harvey. The Puzzle of the Persistent Question Marks: A Case Study of Genetic Drift.
Cognitive Science Research Papers (CSRP) 278, University of Sussex, School of Cognitive
Science: Brighton, UK, April 1993. CiteSeerx : 10.1.1.56.3830. issn: 1350-3162. See
also [1208].
1210. Thomas Haselwanter, Paavo Kotinurmi, Matthew Moran, Tomas Vitvar, and Maciej
Zaremba. Dynamic B2B Integration on the Semantic Web Services: SWS Challenge Phase
2. In Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Au-
tomating Web Services Mediation, Choreography and Discovery [2689], 2006. Fully avail-
able at http://sws-challenge.org/workshops/2006-Budva/papers/DERI_WSMX_
SWSChallenge_II.pdf [accessed 2010-12-12]. See also [582, 1940, 3057–3059].
1211. Rainer Hauser and Reinhard Männer. Implementation of Standard Genetic Algorithm on
MIMD Machines. In PPSN III [693], pages 504–513, 1994. doi: 10.1007/3-540-58484-6 293.
CiteSeerx : 10.1.1.26.9333.
1212. Patrick J. Hayes, editor. Proceedings of the 7th International Joint Conference on Artificial
Intelligence (IJCAI’81-I), August 24–28, 1981, University of British Columbia: Vancouver,
BC, Canada, volume 1. John W. Kaufmann Inc.: Washington, DC, USA. Fully available
at http://dli.iiit.ac.in/ijcai/IJCAI-81-VOL%201/CONTENT/content.htm [ac-
cessed 2008-04-01]. OCLC: 490050000. See also [1213].
1213. Patrick J. Hayes, editor. Proceedings of the 7th International Joint Conference on Artifi-
cial Intelligence (IJCAI’81-II), August 24–28, 1981, University of British Columbia: Van-
couver, BC, Canada, volume 2. John W. Kaufmann Inc.: Washington, DC, USA. Fully
available at http://dli.iiit.ac.in/ijcai/IJCAI-81-VOL-2/CONTENT/content.
htm [accessed 2008-04-01]. OCLC: 490050000. See also [1212].
1214. Barbara Hayes-Roth and Richard E. Korf, editors. Proceedings of the 12th National Con-
ference on Artificial Intelligence (AAAI’94), July 31–August 4, 1994, Convention Center:
Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-61102-3. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai94.php [accessed 2007-09-06].
2 volumes.
1215. Thomas D. Haynes, Roger L. Wainwright, Sandip Sen, and Dale A. Schoenefeld. Strongly
Typed Genetic Programming in Evolving Cooperation Strategies. In ICGA’95 [883], pages
271–278, 1995. Fully available at http://www.mcs.utulsa.edu/˜rogerw/papers/
Haynes-icga95.pdf [accessed 2007-10-03]. CiteSeerx : 10.1.1.48.1037.
1216. Thomas D. Haynes, Dale A. Schoenefeld, and Roger L. Wainwright. Type Inheritance
in Strongly Typed Genetic Programming. In Advances in Genetic Programming II [106],
Chapter 18, pages 359–376. MIT Press: Cambridge, MA, USA, 1996. Fully available
at http://www.mcs.utulsa.edu/˜rogerw/papers/Haynes-hier.pdf [accessed 2011-11-23]. CiteSeerx : 10.1.1.109.5646.
1217. Jun He and Xin Yao. Towards an Analytic Framework for Analysing the Computation
Time of Evolutionary Algorithms. Artificial Intelligence, 145(1-2):59–97, April 2003, El-
sevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0004-3702(02)00381-8. Fully
available at http://www.cs.bham.ac.uk/˜xin/papers/published_aij_03.pdf [accessed 2011-08-06]. CiteSeerx : 10.1.1.118.4632.
1218. Richard Heady, George F. Luger, Arthur Maccabe, and Mark Servilla. The Architecture of
a Network Level Intrusion Detection System. Technical Report CS90-20, LA-SUB–93-219,
W-7405, University of New Mexico, Department of Computer Science: Albuquerque, NM,
USA, August 15, 1990. doi: 10.2172/425295. Fully available at http://www.osti.gov/
energycitations/product.biblio.jsp?osti_id=425295 [accessed 2008-09-07].
1219. Wendi Rabiner Heinzelman, Anantha Chandrakasan, and Hari Balakrishnan. Energy-
Efficient Communication Protocol for Wireless Microsensor Networks. In HICSS’00 [2574],
page 10, volume 2, 2000. Fully available at http://pdos.csail.mit.edu/decouto/
papers/heinzelman00.pdf and https://eprints.kfupm.edu.sa/37623/1/
37623.pdf [accessed 2008-11-16]. INSPEC Accession Number: 6530466.
1220. Jörg Heitkötter and David Beasley, editors. Hitch-Hiker’s Guide to Evolutionary Compu-
tation: A List of Frequently Asked Questions (FAQ) (HHGT). ENCORE (The Evolu-
tioNary Computation REpository Network): Santa Fé, NM, USA, 8.1 edition, March 29,
2000. Fully available at http://www.aip.de/˜ast/EvolCompFAQ/, http://www.cse.
dmu.ac.uk/˜rij/gafaq/top.htm, and http://www.etsimo.uniovi.es/ftp/pub/
EC/FAQ/www/ [accessed 2010-08-07]. USENET: comp.ai.genetic.
1221. 12th European Conference on Machine Learning (ECML’02), August 19–23, 2002, Helsinki,
Finland.
1222. Jim Hendler and Devika Subramanian, editors. Proceedings of the Sixteenth National Con-
ference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Ar-
tificial Intelligence (AAAI’99, IAAI’99), July 18–22, 1999, Orlando, FL, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51106-1. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai99.php and http://
www.aaai.org/Conferences/IAAI/iaai99.php [accessed 2007-09-06].
1223. Eighth European Conference on Machine Learning (ECML’95), April 25–27, 1995, Heraclion,
Crete, Greece.
1224. Nanna Suryana Herman, Siti Mariyam Shamsuddin, and Ajith Abraham, editors. Interna-
tional Conference on SOft Computing and PAttern Recognition (SoCPaR’09), December 4–
7, 2009, Melaka International Trade Centre (MITC): Malacca, Malaysia. IEEE (Institute of
Electrical and Electronics Engineers): Piscataway, NJ, USA. Partly available at http://
fke.uitm.edu.my/index.php/socpar-2009 [accessed 2011-02-28].
1225. Hugo Hernández and Christian Blum. Self-Synchronized Duty-Cycling in Sensor Networks
with Energy Harvesting Capabilities: The Static Network Case. In GECCO’09-I [2342],
pages 33–40, 2009. doi: 10.1145/1569901.1569907.
1226. José Herskovits, Sandro Mazorche, and Alfredo Canelas, editors. Proceedings of the 6th
World Congresses of Structural and Multidisciplinary Optimization (WCSMO6), May 30–
June 3, 2005, Rio de Janeiro, RJ, Brazil. COPPE Publication: Rio de Janeiro, RJ, Brazil.
isbn: 8528500705. OCLC: 71284315 and 254952854.
1227. José Herskovits, Alfredo Canelas, Henry Cortes, and Miguel Aroztegui, editors. International
Conference on Engineering Optimization (EngOpt’08), June 1–5, 2008, Rio de Janeiro, RJ,
Brazil.
1228. Alain Hertz, Éric D. Taillard, and Dominique de Werra. A Tutorial on Tabu Search. In
AIRO’95 [2169], pages 13–24, 1995. Fully available at http://www.cs.colostate.edu/
˜whitley/CS640/hertz92tutorial.pdf and http://www.dei.unipd.it/˜fisch/
ricop/tabu_search_tutorial.pdf [accessed 2009-07-02]. CiteSeerx : 10.1.1.28.521 and
10.1.1.50.7347.
1229. Malcom Ian Heywood and A. Nur Zincir-Heywood. Dynamic Page Based Linear Genetic
Programming. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics,
32(3):380–388, June 2002, IEEE Systems, Man, and Cybernetics Society: New York, NY,
USA. doi: 10.1109/TSMCB.2002.999814. INSPEC Accession Number: 7284736.
1230. Joseph F. Hicklin. Application of the Genetic Algorithm to Automatic Program Generation.
Master’s thesis, University of Idaho, Computer Science Department: Moscow, ID, USA, 1986.
Google Books ID: oC5IOAAACAAJ.
1231. Tetsuya Higuchi, Masaya Iwata, and Weixin Liu, editors. The First International Con-
ference on Evolvable Systems: From Biology to Hardware, Revised Papers (ICES’96),
October 7–8, 1996, Tsukuba, Japan, volume 1259/1997 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-63173-9.
isbn: 3-540-63173-9.
1232. Frederick S. Hillier and Gerald J. Lieberman. Introduction to Operations Research. McGraw-
Hill Ltd.: Maidenhead, UK, England, Tsinghua University Press: Běijīng, China, and Holden-
Day: San Francisco, CA, USA, 8th edition, 1980–2005. isbn: 0070600929, 0816238677,
7302122431, and 9751400147. Google Books ID: toNN2yNKRBIC.
1233. W. Daniel Hillis. Co-Evolving Parasites Improve Simulated Evolution as an Optimization
Procedure. Physica D: Nonlinear Phenomena, 42(1-2):228–234, June 1990, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0167-2789(90)90076-2.
1234. Philip F. Hingston, Luigi C. Barone, and Zbigniew Michalewicz, editors. Design by Evo-
lution: Advances in Evolutionary Design, Natural Computing Series. Springer New York:
New York, NY, USA, 2008. doi: 10.1007/978-3-540-74111-4. isbn: 3540741097. Google Books
ID: FzJF0sYDvw8C. asin: 3540741097.
1235. Geoffrey E. Hinton and Steven J. Nowlan. How Learning Can Guide Evolution. Complex
Systems, 1(3):495–502, 1987, Complex Systems Publications, Inc.: Champaign, IL, USA.
Fully available at http://htpprints.yorku.ca/archive/00000172/ [accessed 2008-09-10].
See also [1236].
1236. Geoffrey E. Hinton and Steven J. Nowlan. How Learning Can Guide Evolution. In Adaptive
Individuals in Evolving Populations: Models and Algorithms [255], pages 447–454. Addison-
Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See
also [1235].
1237. Tomoyuki Hiroyasu, Hiroyuki Ishida, Mitsunori Miki, and Hisatake Yokouchi. Difficulties of
Evolutionary Many-Objective Optimization. Intelligent Systems Design Laboratory Research
Reports (ISDL Reports) 20081006004, Doshisha University, Department of Knowledge Engi-
neering and Computer Sciences, Intelligent Systems Design Laboratory: Kyoto, Japan, Oc-
tober 13, 2008. Fully available at http://mikilab.doshisha.ac.jp/dia/research/
report/2008/1006/004/report20081006004.html [accessed 2009-07-18].
1238. Haym Hirsh and Steve A. Chien, editors. Proceedings of the Thirteenth Innovative Appli-
cations of Artificial Intelligence Conference (IAAI’01), April 7–9, 2001, Seattle, WA, USA.
AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-134-7. Partly available at http://
www.aaai.org/Conferences/IAAI/iaai01.php [accessed 2007-09-06].
1239. Brahim Hnich, Mats Carlsson, François Fages, and Francesca Rossi, editors. Recent Ad-
vances in Constraints – Joint ERCIM/CoLogNET International Workshop on Constraint
Solving and Constraint Logic Programming, CSCLP 2005 – Revised Selected and Invited
Papers, June 20–22, 2005, Uppsala, Sweden, volume 3978/2006 in Lecture Notes in Artifi-
cial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/11754602. isbn: 3-540-34215-X. Google Books
ID: YUdkAAAACAAJ. Library of Congress Control Number (LCCN): 2006925094.
1240. Sir Charles Antony Richard (Tony) Hoare. Quicksort. The Computer Journal, Oxford Jour-
nals, 5(1):10–16, 1962, British Computer Society: Swindon, UK. doi: 10.1093/comjnl/5.1.10.
1241. Cem Hocaoğlu and Arthur C. Sanderson. Evolutionary Speciation Using Minimal Represen-
tation Size Clustering. In EP’95 [1848], pages 187–203, 1995.
1242. Frank Hoffmann, Mario Köppen, Klaus Klawonn, and Rajkumar Roy, editors. Soft Comput-
ing: Methodologies and Applications, volume 32 in Advances in Intelligent and Soft Com-
puting. Birkhäuser Verlag: Basel, Switzerland, May 2005. doi: 10.1007/3-540-32400-3.
isbn: 3-540-25726-8. Google Books ID: ATswE4Z1m14C. Library of Congress Control
Number (LCCN): 2005925479. Post-conference proceedings of the 8th Online World Con-
ferences on Soft Computing (WSC8) which took place from September 29th to October 10th,
2003, organized by World Federation of Soft Computing (WFSC).
1243. Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies – Simi-
larities and Differences. In PPSN I [2438], pages 455–469, 1990. doi: 10.1007/BFb0029787.
See also [1243].
1244. Frank Hoffmeister and Hans-Paul Schwefel. KORR 2.1 – An implementation of a (µ +,
λ)-evolution strategy. Technical Report, Universität Dortmund, Fachbereich Informatik:
Dortmund, North Rhine-Westphalia, Germany, November 1990. Interner Bericht.
1245. Birgit Hofreiter, editor. Proceedings of the 11th IEEE Conference on Commerce and En-
terprise Computing (CEC’09), July 20–23, 2009, Vienna University of Technology: Vienna,
Austria. IEEE Computer Society: Piscataway, NJ, USA and Curran Associates, Inc.: Red
Hook, NY, USA. isbn: 0-7695-3755-3. Partly available at http://cec2009.isis.
tuwien.ac.at/ [accessed 2011-02-28]. GBV-Identification (PPN): 615078826 and 615079865.
1246. Christian Höhn and Colin R. Reeves. Are Long Path Problems Hard for Genetic Algorithms?
In PPSN IV [2818], pages 134–143, 1996. doi: 10.1007/3-540-61723-X 977.
1247. John Henry Holland. Outline for a Logical Theory of Adaptive Systems. Journal of the
Association for Computing Machinery (JACM), 9(3):297–314, July 1962, ACM Press: New York,
NY, USA. doi: 10.1145/321127.321128.
1248. John Henry Holland. Nonlinear Environments Permitting Efficient Adaptation. In Proceed-
ings of the Symposium on Computer and Information Sciences II [2718], pages 147–164,
1966.
1249. John Henry Holland. Adaptive Plans Optimal for Payoff-Only Environments. In
HICSS’69 [2056], pages 917–920, 1969. DTIC Accession Number: AD0688839.
1250. John Henry Holland. Adaptation in Natural and Artificial Systems: An Introduc-
tory Analysis with Applications to Biology, Control, and Artificial Intelligence. Uni-
versity of Michigan Press: Ann Arbor, MI, USA, 1975. isbn: 0-472-08460-7.
Google Books ID: JE5RAAAAMAAJ, JOv3AQAACAAJ, Qk5RAAAAMAAJ, YE5RAAAAMAAJ, and
vk5RAAAAMAAJ. OCLC: 1531617 and 266623839. Library of Congress Control Number
(LCCN): 74078988. GBV-Identification (PPN): 075982293. LC Classification: QH546
.H64 1975. Newly published as [1252].
1251. John Henry Holland. Adaptive Algorithms for Discovering and Using General Patterns in
Growing Knowledge Bases. International Journal of Policy Analysis and Information Sys-
tems, 4(3):245–268, 1980, Plenum Press: New York, NY, USA.
1252. John Henry Holland. Adaptation in Natural and Artificial Systems: An Introductory Anal-
ysis with Applications to Biology, Control, and Artificial Intelligence. NetLibrary, Inc.:
Dublin, OH, USA, May 1992. isbn: 0585038449. Google Books ID: Jev3AQAACAAJ and
nALfIAAACAAJ. Reprint of [1250].
1253. John Henry Holland. Genetic Algorithms – Computer programs that “evolve” in ways that
resemble natural selection can solve complex problems even their creators do not fully under-
stand. Scientific American, 267(1):44–50, July 1992, Scientific American, Inc.: New York,
NY, USA. Fully available at http://members.fortunecity.com/templarseries/
algo.html, http://www.cc.gatech.edu/˜turk/bio_sim/articles/genetic_
algorithm.pdf, http://www.im.isu.edu.tw/faculty/pwu/heuristic/GA_1.
pdf, and http://www2.econ.iastate.edu/tesfatsi/holland.gaintro.htm
[accessed 2011-11-20].
1254. John Henry Holland and Arthur W. Burks. Adaptive Computing System Capable of Learn-
REFERENCES 1053
ing and Discovery. Technical Report 619349 filed on 1984-06-11, United States Patent
and Trademark Office (USPTO): Alexandria, VA, USA, 1987. Fully available at http://
www.freepatentsonline.com/4697242.html and http://www.patentstorm.us/
patents/4697242.html [accessed 2008-04-01]. US Patent Issued on September 29, 1987, Cur-
rent US Class 706/13, Genetic algorithm and genetic programming system 382/155, LEARN-
ING SYSTEMS 706/62 MISCELLANEOUS, Foreign Patent References 8501601 WO Apr.,
1985. Editors: Jerry Smith and John R Lastova.
1255. John Henry Holland and Judith S. Reitman. Cognitive Systems based on Adaptive Algo-
rithms. ACM SIGART Bulletin, 0(63):49, June 1977, ACM Press: New York, NY, USA.
doi: 10.1145/1045343.1045373. See also [1256].
1256. John Henry Holland and Judith S. Reitman. Cognitive Systems based on Adaptive Algo-
rithms. In Selected Papers from the Workshop on Pattern Directed Inference Systems [2869],
pages 313–329, 1977. See also [1255]. Reprinted in [939].
1257. John Henry Holland, Lashon Bernard Booker, Marco Colombetti, Marco Dorigo, David Ed-
ward Goldberg, Stephanie Forrest, Rick L. Riolo, Robert Elliott Smith, Pier Luca
Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson. What Is a Learning Classi-
fier System? In IWLCS’00 [1678], pages 3–32, 2000. doi: 10.1007/3-540-45027-0 1.
Fully available at http://www.cs.unm.edu/˜forrest/publications/Learning
%20Classifier%20Systems.pdf [accessed 2010-12-12].
1258. R. B. Hollstien. Artificial Genetic Adaptation in Computer Control Systems. PhD thesis,
University of Michigan: Ann Arbor, MI, USA, 1971. University Microfilms No.: 71-23.
1259. Geoffrey Holmes, Andrew Donkin, and Ian H. Witten. WEKA: A Machine Learning
Workbench. In ANZIIS’98 [1359], pages 357–361, 1994. doi: 10.1109/ANZIIS.1994.396988.
Fully available at http://www.cs.waikato.ac.nz/˜ml/publications/1994/
Holmes-ANZIIS-WEKA.pdf [accessed 2008-11-19]. INSPEC Accession Number: 4962353.
1260. Diana Holstein and Pablo Moscato. Memetic Algorithms using Guided Local Search: A Case
Study. In New Ideas in Optimization [638], pages 235–244. McGraw-Hill Ltd.: Maidenhead,
England, UK, 1999.
1261. Robert C. Holte. Combinatorial Auctions, Knapsack Problems, and Hill-Climbing Search.
In AI’01 [2624], pages 57–66, 2001. Fully available at http://www.cs.ubc.ca/˜hoos/
Courses/Trento-05/Holte01.pdf [accessed 2010-10-12]. CiteSeerx : 10.1.1.28.754.
1262. Robert C. Holte and Adele Howe, editors. Proceedings of the Twenty-Second AAAI Confer-
ence on Artificial Intelligence (AAAI’07), July 22–26, 2007, Vancouver, BC, Canada. AAAI
Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai07.php [accessed 2007-09-06].
1263. Holger H. Hoos and Thomas Stützle. SATLIB: An Online Resource for Research on SAT. In
SAT2000 – Highlights of Satisfiability Research in the Year 2000 [1042], pages 283–292. IOS
Press: Amsterdam, The Netherlands, 2000. Fully available at http://www.cs.ubc.ca/
˜hoos/Publ/sat2000-satlib.pdf [accessed 2011-09-07]. CiteSeerx : 10.1.1.42.5395. See
also [1264].
1264. Holger H. Hoos and Thomas Stützle, editors. SATLIB – The Satisfiability Library. TU Darm-
stadt: Darmstadt, Germany and University of British Columbia: Vancouver, BC, Canada,
March 22, 2005. Fully available at http://www.cs.ubc.ca/˜hoos/SATLIB/benchm.
html and http://www.satlib.org/ [accessed 2011-09-28]. See also [1263].
1265. J. Hopf, editor. Genetic Algorithms within the Framework of Evolutionary Computation,
Workshop at KI-94. Technical Report MPI-I-94-241, Max-Planck-Institut für Informatik:
Saarbrücken, Germany, September 1994.
1266. Jeffrey Horn and David Edward Goldberg. Genetic Algorithm Difficulty and the
Modality of the Fitness Landscape. In FOGA 3 [2932], pages 243–269, 1994.
CiteSeerx : 10.1.1.31.3340.
1267. Jeffrey Horn, David Edward Goldberg, and Kalyanmoy Deb. Long Path Prob-
lems. In PPSN III [693], pages 149–158, 1994. doi: 10.1007/3-540-58484-6 259.
CiteSeerx : 10.1.1.51.9758.
1268. Jeffrey Horn, Nicholas Nafpliotis, and David Edward Goldberg. A Niched Pareto Genetic
Algorithm for Multiobjective Optimization. In CEC’94 [1891], pages 82–87, volume 1, 1994.
doi: 10.1109/ICEC.1994.350037. Fully available at http://www.lania.mx/˜ccoello/
EMOO/horn2.ps.gz [accessed 2007-08-28]. CiteSeerx : 10.1.1.34.4189. Also: IEEE Sympo-
sium on Circuits and Systems, pp. 2264–2267, 1991.
1269. Jorng-Tzong Horng, Yu-Jan Chang, Baw-Jhiune Liu, and Cheng-Yan Kao. Materialized View
Selection using Genetic Algorithms in a Data Warehouse System. In CEC’99 [110], pages
22–27, 1999. doi: 10.1109/CEC.1999.785551. INSPEC Accession Number: 6339040.
1270. Jorng-Tzong Horng, Yu-Jan Chang, and Baw-Jhiune Liu. Applying Evolutionary Algo-
rithms to Materialized View Selection in a Data Warehouse. Soft Computing – A Fusion of
Foundations, Methodologies and Applications, 7(8):574–581, August 2003, Springer-Verlag:
Berlin/Heidelberg. doi: 10.1007/s00500-002-0243-1.
1271. Reiner Horst and Tuy Hoang. Global Optimization: Deterministic Approaches. Springer-
Verlag GmbH: Berlin, Germany, 3rd edition, 1996. isbn: 3-540-61038-3. Google Books
ID: HyyaAAAAIAAJ and usFjGFvuBDEC.
1272. Reiner Horst, Panos M. Pardalos, and H. Edwin Romeijn. Handbook of Global Optimiza-
tion, volume 62(1-2) in Nonconvex Optimization and Its Applications. Kluwer Academic
Publishers: Norwell, MA, USA, 1995–2002. isbn: 1-4020-0632-2 and 1-4020-0742-6.
Fully available at http://www.optimization-online.org/DB_FILE/2002/03/456.
pdf [accessed 2009-06-22]. Google Books ID: 9W0Ast3pDIC.
1273. Learning and Intelligent OptimizatioN (LION 1), February 12–18, 2007, Hotel Adler: Andalo,
Trento, Italy.
1274. XX Simpósio Brasileiro de Telecomunicações (SBrT’03), October 3–5, 2003, Hotel Glória:
Rio de Janeiro, RJ, Brazil.
1275. Proceedings of the 4th International Conference on Bio-Inspired Models of Network, Infor-
mation, and Computing Systems (BIONETICS’09), December 9–11, 2009, Hotel MERCURE
Pont d’Avignon: Avignon, France.
1276. Proceedings of the International Conference on Evolutionary Computation (ICEC’10), 2010,
Hotel Sidi Saler: València, Spain. Part of [914].
1277. Daniel Howard. Frontiers in the Convergence of Bioscience and Information Technologies
(FBIT’07), October 11–13, 2007, Jeju City, South Korea. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 0-7695-2999-2. Google Books
ID: AcgoPwAACAAJ and KbupPQAACAAJ. OCLC: 191820852, 230950453, 254360958,
and 314010039.
1278. Nicola Howarth. Abstract Syntax Tree Design. ANSA Documents: Official Record of the
ANSA Project 23/08/95 APM.1551.01, Architecture Projects Management Limited: Posei-
don House, Castle Park, Cambridge, UK, August 23, 1995. Fully available at http://www.
ansa.co.uk/ANSATech/95/Primary/155101.pdf [accessed 2007-07-03].
1279. Robert J. Howlett and Lakhmi C. Jain, editors. Proceedings of the Fourth International
Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Tech-
nologies (KES’00), August 30–September 1, 2000, University of Brighton: Sussex, UK.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-6400-7. Google Books
ID: 369VAAAAMAAJ, IHAfPwAACAAJ, and VuFVAAAAMAAJ.
1280. Peter Hrastnik and Albert Rainer. Web Service Discovery and Composition for Vir-
tual Enterprises. International Journal on Web Services Research (IJWSR), 4(1):23–
29, 2007, Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA.
doi: 10.4018/jwsr.2007010102. CiteSeerx : 10.1.1.68.5392.
1281. Juraj Hromkovič. Algorithmics for Hard Computing Problems: Introduction to Combina-
torial Optimization, Randomization, Approximation, and Heuristics, Texts in Theoretical
Computer Science – An EATCS Series. Springer-Verlag GmbH: Berlin, Germany, 2001.
isbn: 3-540-66860-8. Google Books ID: Ut1QAAAAMAAJ. OCLC: 46713245. Library of
Congress Control Number (LCCN): 2001031301. GBV-Identification (PPN): 319756777.
LC Classification: QA76.9.A43 H76 2001.
1282. Juraj Hromkovič and I. Zámecniková. Design and Analysis of Randomized Algorithms:
Introduction to Design Paradigms, Texts in Theoretical Computer Science – An EATCS
Series. Springer-Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/3-540-27903-2.
isbn: 3-540-23949-9. Google Books ID: EfuMfn8T5_oC. OCLC: 60800705, 318291536,
and 474955415. Library of Congress Control Number (LCCN): 2005926236. GBV-
Identification (PPN): 524971269. LC Classification: QA274 .H76 2005.
1283. 15th Conference on Technologies and Applications of Artificial Intelligence (TAAI’10),
November 18–20, 2010, Hsinchu, Taiwan.
1284. Chien-Chang Hsu and Yao-Wen Tang. An Intelligent Mobile Learning System for
On-the-Job Training of Luxury Brand Firms. In AI’06 [2406], pages 749–759, 2006.
doi: 10.1007/11941439 79.
1285. P. L. Hsu, R. Lai, and C. C. Chiu. The Hybrid of Association Rule Algorithms and Ge-
netic Algorithms for Tree Induction: An Example of Predicting the Student Course Perfor-
mance. Expert Systems with Applications – An International Journal, 25(1):51–62, July 2003,
Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/S0957-4174(03)00005-8.
1286. Ting Hu and Wolfgang Banzhaf. Measuring Rate of Evolution in Genetic Programming Using
Amino Acid to Synonymous Substitution Ratio ka/ks. In GECCO’08 [1519], pages 1337–
1338, 2008. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
Hu_2008_gecco.html and http://www.cs.mun.ca/˜tingh/papers/GECCO08.pdf
[accessed 2008-08-08].
1287. Ting Hu and Wolfgang Banzhaf. Nonsynonymous to Synonymous Substitution Ratio ka/ks:
Measurement for Rate of Evolution in Evolutionary Computation. In PPSN X [2354], pages
448–457, 2008. doi: 10.1007/978-3-540-87700-4 45. Fully available at http://www.cs.mun.
ca/˜tingh/papers/PPSN08.pdf [accessed 2009-07-10].
1288. Ting Hu and Wolfgang Banzhaf. Evolvability and Acceleration in Evolutionary Computa-
tion. Technical Report MUN-CS-2008-04, Memorial University, Department of Computer
Science: St. John’s, Canada, October 2008. Fully available at http://www.cs.mun.ca/
˜tingh/papers/TR08.pdf and http://www.mun.ca/computerscience/research/
MUN-CS-2008-04.pdf [accessed 2009-08-05].
1289. Ting Hu and Wolfgang Banzhaf. Evolvability and Speed of Evolutionary Algorithms in the
Light of Recent Developments in Biology. Journal of Artificial Evolution and Applications,
2010, Hindawi Publishing Corporation: Nasr City, Cairo, Egypt. doi: 10.1155/2010/568375.
Fully available at http://www.cs.mun.ca/˜tingh/papers/JAEA10.pdf and http://
www.hindawi.com/archive/2010/568375.html [accessed 2010-10-22]. ID: 568375.
1290. Ting Hu, Yuanzhu Peter Chen, Wolfgang Banzhaf, and Robert Benkoczi. An Evolutionary
Approach to Planning IEEE 802.16 Networks. In GECCO’09-I [2342], pages 1929–1930,
2009. doi: 10.1145/1569901.1570242. Fully available at http://www.cs.mun.ca/˜tingh/
papers/GECCO09-network.pdf [accessed 2009-09-05].
1291. Xiaobing Hu, Mark Leeson, and Evor Hines. An Effective Genetic Algorithm for the Network
Coding Problem. In CEC’09 [1350], pages 1714–1720, 2009. doi: 10.1109/CEC.2009.4983148.
INSPEC Accession Number: 10688743.
1292. Xiaomin Hu, Jun Zhang, and Liming Zhang. Swarm Intelligence Inspired Multicast Routing:
An Ant Colony Optimization Approach. In EvoWorkshops’09 [1052], pages 51–60, 2009.
doi: 10.1007/978-3-642-01129-0 6.
1293. De-Shuang Huang, Laurent Heutte, and Marco Loog, editors. Advanced Intelligent Com-
puting Theories and Applications. With Aspects of Contemporary Intelligent Computing
Techniques – Proceedings of the Third International Conference on Intelligent Comput-
ing (ICIC’07-2), August 21–24, 2007, Qīngdǎo, Shāndōng, China, volume 2 in Commu-
nications in Computer and Information Science. Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-74282-1. Fully available at http://www.springerlink.com/
content/k74022/ [accessed 2008-03-28].
1294. Joshua (Zhexue) Huang, Longbing Cao, and Jaideep Srivastava, editors. Proceedings of
the 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD’11),
May 24–27, 2011, Shēnzhèn, Guǎngdōng, China, Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. Partly available at http://pakdd2011.pakdd.
org/ [accessed 2011-02-28].
1295. Sheng Huang, Xiaoling Wang, and Aoying Zhou. Efficient Web Service Composition Based
on Syntactical Matching. In EEE’05 [1371], pages 782–783, 2005. doi: 10.1109/EEE.2005.66.
See also [317].
1296. Thomas S. Huang, Anton Nijholt, Maja Pantic, and Alex Pentland, editors. Artifical Intelli-
gence for Human Computing, ICMI 2006 and IJCAI 2007 International Workshops, Banff,
Canada, November 3, 2006, Hyderabad, India, January 6, 2007, Revised Selected and Invited
Papers (ICMI’06 / IJCAI’07), 2006–2007, Banff, AB, Canada and Hyderabad, India, vol-
ume 4451 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. See also [2797].
1297. V. Ling Huang, Ponnuthurai Nagaratnam Suganthan, Alex Kai Qin, and S. Baskar. Multi-
objective Differential Evolution with External Archive and Harmonic Distance-Based Diver-
sity Measure. Technical Report, Nanyang Technological University (NTU): Singapore, 2006.
Fully available at http://www3.ntu.edu.sg/home/epnsugan/index_files/papers/
Tech-Report-MODE-2005.pdf [accessed 2009-07-17].
1298. Yueh-Min Huang, Yen-Ting Lin, and Shu-Chen Cheng. An Adaptive Testing System
for Supporting Versatile Educational Assessment. Computers & Education – An Interna-
tional Journal, 52(1):53–67, January 2009, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.compedu.2008.06.007.
1299. Zhenqiu Huang, Wei Jiang, Songlin Hu, and Zhiyong Liu. Effective Pruning Algo-
rithm for QoS-Aware Service Composition. In CEC’09 [1245], pages 519–522, 2009.
doi: 10.1109/CEC.2009.41. INSPEC Accession Number: 10839126. See also [1571, 1801].
1300. Claudia Huber and Günter Wächtershäuser. Peptides by Activation of Amino Acids with
CO on (Ni,Fe)S Surfaces: Implications for the Origin of Life. Science Magazine, 281(5377):
670–672, July 31, 1998, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.281.5377.670. PubMed ID: 9685253.
1301. Lorenz Huelsbergen. Toward Simulated Evolution of Machine-Language Iteration. In
GP’96 [1609], pages 315–320, 1996. Fully available at http://cognet.mit.edu/
library/books/mitpress/0262611279/cache/chap41.pdf [accessed 2010-11-21].
1302. Barry D. Hughes. Random Walks and Random Environments: Volume 1: Random Walks.
Oxford University Press, Inc.: New York, NY, USA, May 16, 1995. isbn: 0-19-853788-3.
Google Books ID: QhOen_t0LeQC.
1303. Evan J. Hughes. Evolutionary Many-Objective Optimisation: Many Once or One Many? In
CEC’05 [641], pages 222–227, volume 1, 2005. doi: 10.1109/CEC.2005.1554688. Fully avail-
able at http://www.lania.mx/˜ccoello/hughes05.pdf.gz [accessed 2009-07-18]. INSPEC
Accession Number: 9004154.
1304. Evan J. Hughes. MSOPS-II: A General-Purpose Many-Objective Optimiser. In
CEC’07 [1343], pages 3944–3951, 2007. doi: 10.1109/CEC.2007.4424985. INSPEC Acces-
sion Number: 9889597.
1305. Evan J. Hughes. Fitness Assignment Methods for Many-Objective Problems. In Multiobjective
Problem Solving from Nature – From Concepts to Applications [1554], Chapter IV-1, pages
307–329. Springer New York: New York, NY, USA, 2008. doi: 10.1007/978-3-540-72964-8 15.
1306. Roger N. Hughes, editor. NATO Advanced Research Workshop on Behavioural Mecha-
nisms of Food Selection, July 17–21, 1989, Gregynog, Wales, UK, volume 20 in NATO Ad-
vanced Science Institutes (ASI) Series G. Ecological Sciences (NATO ASI). Springer-Verlag
GmbH: Berlin, Germany. isbn: 0-387-51762-6 and 3-540-51762-6. OCLC: 20854001,
311111175, and 472872808. Library of Congress Control Number (LCCN): 89026334.
GBV-Identification (PPN): 025232932 and 157637646. LC Classification: QL756.5 .N38
1989.
1307. F. J. Humphreys and M. Hatherly. Recrystallization and Related Annealing Phenomena,
Pergamon Materials Series. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands,
2004. isbn: 0080441645. Google Books ID: 52Gloa7HxGsC.
1308. Ming-Chuan Hung, Man-Lin Huang, Don-Lin Yang, and Nien-Lin Hsueh. Efficient Ap-
proaches for Materialized Views Selection in a Data Warehouse. Information Sciences
– Informatics and Computer Science Intelligent Systems Applications: An International
Journal, 177(6):1333–1348, March 2007, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.ins.2006.09.007.
1309. Phil Husbands and Inman Harvey, editors. Proceedings of the Fourth European Conference on
Advances in Artificial Life (ECAL’97), July 28–31, 1997, Brighton, UK, Complex Adaptive
Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-58157-4. GBV-
Identification (PPN): 234965770.
1310. Phil Husbands and Jean-Arcady Meyer, editors. Proceedings of the First European Workshop
on Evolutionary Robotics (EvoRobot’98), April 16–17, 1998, Paris, France, volume 1468/1998
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-64957-3. OCLC: 246316294, 316835898, 441663096, 471681293, and
502374580. Library of Congress Control Number (LCCN): 98039439.
1311. Phil Husbands and Frank Mill. Simulated Co-Evolution as the Mechanism for
Emergent Planning and Scheduling. In ICGA’91 [254], pages 264–270, 1991.
Fully available at http://www.informatics.sussex.ac.uk/users/philh/pubs/
icga91Husbands.pdf [accessed 2010-09-13].
1312. Marcus Hutter. Fitness Uniform Selection to Preserve Genetic Diversity. In CEC’02 [944],
pages 783–788, 2002. doi: 10.1109/CEC.2002.1007025. Fully available at ftp://
ftp.idsia.ch/pub/techrep/IDSIA-01-01.ps.gz, http://www.hutter1.net/
ai/pfuss.pdf, http://www.hutter1.net/ai/pfuss.ps, and http://www.
hutter1.net/ai/pfuss.tar [accessed 2011-04-29]. CiteSeerx : 10.1.1.106.9784. arXiv
ID: cs/0103015. INSPEC Accession Number: 7336007. Also: Technical Report IDSIA-01-
01, 17 January 2001. See also [1313, 1708, 1709].
1313. Marcus Hutter and Shane Legg. Fitness Uniform Optimization. IEEE Transactions
on Evolutionary Computation (IEEE-EC), 10(5):568–589, October 2006, IEEE Com-
puter Society: Washington, DC, USA. doi: 10.1109/TEVC.2005.863127. Fully avail-
able at http://www.idsia.ch/idsiareport/IDSIA-16-06.pdf and http://
www.vetta.org/documents/FitnessUniformOptimization.pdf [accessed 2011-03-
1315. Martijn A. Huynen, Peter F. Stadler, and Walter Fontana. Smoothness within Rugged-
ness: The Role of Neutrality in Adaptation. Proceedings of the National Academy of
Science of the United States of America (PNAS), 93(1):397–401, January 9, 1996, Na-
tional Academy of Sciences: Washington, DC, USA. Fully available at http://fontana.
med.harvard.edu/www/Documents/WF/Papers/neutrality.pdf and http://www.
pnas.org/cgi/reprint/93/1/397.pdf [accessed 2011-12-05]. CiteSeerx : 10.1.1.31.1013.
Communicated by Hans Frauenfelder, Los Alamos National Laboratory, Los Alamos, NM,
September 20, 1995 (received for review June 29, 1995).
1316. Chii-Ruey Hwang. Simulated Annealing: Theory and Applications. Acta Applicandae
Mathematicae: An International Survey Journal on Applying Mathematics and Mathemat-
ical Applications, 12(1):108–111, May 1988, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/BF00047572. Academia Sinica. Review of [2776].
1317. Hitoshi Iba. Emergent Cooperation for Multiple Agents Using Genetic Programming. In
PPSN IV [2818], pages 32–41, 1996. doi: 10.1007/3-540-61723-X 967.
1318. Hitoshi Iba. Evolutionary Learning of Communicating Agents. Information Sciences
– Informatics and Computer Science Intelligent Systems Applications: An International
Journal, 108(1–4):181–205, July 1998, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0020-0255(97)10055-X. See also [1320].
1319. Hitoshi Iba. Evolving Multiple Agents by Genetic Programming. In Advances in Genetic
Programming [2569], pages 447–466. MIT Press: Cambridge, MA, USA, 1996. Fully avail-
able at http://www.cs.bham.ac.uk/˜wbl/aigp3/ch19.pdf [accessed 2007-10-03]. See also
[1320].
1320. Hitoshi Iba, Toshihide Nozoe, and Kanji Ueda. Evolving Communicating Agents based on Ge-
netic Programming. In CEC’97 [173], pages 297–302, 1997. doi: 10.1109/ICEC.1997.592321.
Fully available at http://lis.epfl.ch/˜markus/References/Iba97.pdf [accessed 2008-
07-28]. INSPEC Accession Number: 5573068. See also [1318, 1319].
1321. Toshihide Ibaraki, Koji Nonobe, and Mutsunori Yagiura, editors. Fifth Metaheuristics In-
ternational Conference – Metaheuristics: Progress as Real Problem Solvers (MIC’03), Au-
gust 25–28, 2003, Kyoto International Conference Hall: Kyoto, Japan, volume 32 in Op-
erations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 0-387-25382-3. Partly available at http://www-or.amp.i.kyoto-u.ac.
jp/mic2003/ [accessed 2007-09-12]. Published in 2005.
1322. Proceedings of the IEEE Aerospace Conference, March 18–20, 2000, Big Sky, MT, USA.
IEEE Aerospace and Electronic Systems Society, IEEE Computer Society: Piscataway, NJ,
USA. isbn: 0-7803-5846-5 and 0780358473. INSPEC Accession Number: 6763001. LC
Classification: TL3000.A1 I22 2000. Catalogue no.: 00TH8484.
1323. Joint IEEE/IEE Workshop on Natural Algorithms in Signal Processing (NASP’93), Novem-
ber 14–16, 1993, Chelmsford, Essex, UK. IEEE Computer Society: Piscataway, NJ, USA.
1324. 1994 IEEE World Congress on Computational Intelligence (WCCI’94), June 26–July 2, 1994,
Walt Disney World Dolphin Hotel: Orlando, FL, USA. IEEE Computer Society: Piscataway,
NJ, USA.
1325. Proceedings of the 1995 American Control Conference (ACC), June 21–23, 1995, Seattle,
WA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-2445-5 and
0780324463. Google Books ID: EvlVAAAAMAAJ, MfhVAAAAMAAJ, e_VVAAAAMAAJ, and
pPlVAAAAMAAJ. INSPEC Accession Number: 5080557.
1326. Proceedings of the Sixth International Symposium on Micro Machine and Human Sci-
ence (MHS’95), October 4–6, 1995, Nagoya, Japan. IEEE Computer Society: Piscat-
away, NJ, USA. doi: 10.1109/MHS.1995.494249. isbn: 0-7803-2676-8. Google Books
ID: wqFVAAAAMAAJ. INSPEC Accession Number: 5303058.
1327. Second IEEE International Conference on Engineering of Complex Computer Systems
(ICECCS’96), October 21–25, 1996, Montréal, QC, Canada. IEEE Computer Society: Pis-
cataway, NJ, USA. isbn: 0-8186-7614-0. INSPEC Accession Number: 5436943. held
jointly with 6th CSESAW and 4th IEEE RTAW.
1328. Digest of Technical Papers of the IEEE/ACM International Conference on Computer-Aided
Design (ICCAD), November 9–13, 1997, San José, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA. isbn: 0-8186-8200-0, 0-8186-8201-9, and 0-8186-8202-7.
issn: 1093-3152. Order No.: PRO8200. Catalogue no.: 97CB36142. ACM Order No.: 477973.
1329. Proceedings of the 22nd IEEE Conference on Local Computer Networks (LCN’97), Novem-
ber 2–5, 1997, Minneapolis, MN, USA. IEEE Computer Society: Piscataway, NJ,
USA. doi: 10.1109/LCN.1997.630893. isbn: 0-8186-8141-1, 0-8186-8142-X, and
0-8186-8143-8. Google Books ID: 5e1VAAAAMAAJ, ANQWKwAACAAJ, and j_4GMwAACAAJ.
OCLC: 38188270 and 60166824. issn: 0742-1303. Order No.: 97TB100179 and PR08141.
1330. 1998 IEEE World Congress on Computational Intelligence (WCCI’98), May 4–9, 1998, Egan
Civic & Convention Center and Anchorage Hilton Hotel: Anchorage, AK, USA. IEEE Com-
puter Society: Piscataway, NJ, USA.
1331. IEEE International Conference on Systems, Man, and Cybernetics – Human Communi-
cation and Cybernetics (SMC’99), October 12–15, 1999, Tokyo University, Department of
Aeronautics and Space Engineering: Tokyo, Japan. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-5731-0, 0-7803-5732-9, and 0-7803-5733-7. Google Books
ID: J-kOAAAACAAJ and Yv6cGAAACAAJ. OCLC: 43823469 and 213032676.
1332. Proceedings of the 2000 American Control Conference (ACC), June 28–30, 2000, Hyatt
Regency Chicago: Chicago, IL, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-5519-9. Google Books ID: pQANAAAACAAJ and xD11HAAACAAJ. INSPEC
Accession Number: 6805288.
1333. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’00), July 16–19, 2000,
La Jolla Marriott Hotel: La Jolla, CA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-6375-2. Google Books ID: ttlVAAAAMAAJ. Library of Congress Control
Number (LCCN): 00-018644. Catalogue no.: 00TH8512. CEC-2000 – A joint meeting of
the IEEE, Evolutionary Programming Society, Galesia, and the IEE.
1334. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’01), May 27–30,
2001, COEX, World Trade Center: Gangnam-gu, Seoul, Korea. IEEE Computer Soci-
ety: Piscataway, NJ, USA. isbn: 0-7803-6657-3 and 0-7803-6658-1. Google Books
ID: R6FVAAAAMAAJ, WXH PQAACAAJ, XaBVAAAAMAAJ, and 1iEAAAACAAJ. Catalogue
no.: 01TH8546. CEC-2001 – A joint meeting of the IEEE, Evolutionary Programming Society,
Galesia, and the IEE.
1335. Proceedings of the American Control Conference (ACC’02), May 8–10, 2002, Honeywell,
Houston, TX, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7298-0.
Google Books ID: 98SNAAAACAAJ and IxlaHAAACAAJ. issn: 0743-1619.
1336. IEEE AP-S International Symposium and USNC/URSI National Radio Science Meeting,
June 16–21, 2002, San Antonio, TX, USA. IEEE Computer Society: Piscataway, NJ, USA.
1337. Proceedings of the 1st IEEE International Conference on Cognitive Informatics (ICCI’02),
August 19–20, 2002, Calgary, AB, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-1724-2.
1338. 2002 IEEE World Congress on Computational Intelligence (WCCI’02), May 12–17, 2002,
Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI, USA. IEEE Computer
Society: Piscataway, NJ, USA.
1339. Proceedings of the 4th IEEE International Conference on Cognitive Informatics (ICCI’05),
August 8–10, 2005, University of California: Irvine, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA.
1340. 6th International Conference on Hybrid Intelligent Systems (HIS’06), December 13–15, 2006,
AUT Technology Park: Auckland, New Zealand. IEEE Computer Society: Piscataway, NJ,
USA. isbn: 0-7695-2662-4. Partly available at http://his06.hybridsystem.com/
[accessed 2007-09-01].
1341. 2006 IEEE World Congress on Computational Intelligence (WCCI’06), July 16–21, 2006, Sher-
aton Vancouver Wall Centre Hotel: Vancouver, BC, Canada. IEEE Computer Society: Pis-
cataway, NJ, USA.
1342. Proceedings of the 2nd International Conference on Bio-Inspired Models of Network, In-
formation, and Computing Systems (BIONETICS’07), December 10–13, 2007, Radisson
SAS Beke Hotel: Budapest, Hungary. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 963-9799-05-X. Partly available at http://www.bionetics.org/2007/ [ac-
cessed 2011-02-28]. OCLC: 259234301 and 288980779. Catalogue no.: 08EX2011.
1343. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’07), September 25–
28, 2007, Swissôtel The Stamford: Singapore. IEEE Computer Society: Piscataway, NJ,
USA. isbn: 1-4244-1339-7. Google Books ID: duo6GQAACAAJ and yBzVPQAACAAJ.
OCLC: 257656187.
1344. Proceedings of IEEE Joint Conference on E-Commerce Technology (9th CEC) and Enter-
prise Computing, E-Commerce and E-Services (4th EEE) (CEC/EEE’07), July 23–26, 2007,
National Center of Sciences: Tokyo, Japan. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-2913-5. Partly available at http://ieejc.ise.eng.osaka-u.ac.jp/
CEC2007/ [accessed 2011-02-28]. Google Books ID: FXdEPAAACAAJ. OCLC: 170925181 and
288976020.
1345. Proceedings of the First IEEE Symposium Series on Computational Intelligence (SSCI’07),
April 1–5, 2007, Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ, USA.
1346. Proceedings of IEEE Joint Conference on E-Commerce Technology (10th CEC) and En-
terprise Computing, E-Commerce and E-Services (5th EEE) (CEC/EEE’08), July 21–
24, 2008, Washington, DC, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-3340-X. Partly available at http://cec2008.cs.georgetown.edu/ [ac-
cessed 2011-02-28]. Google Books ID: XrakOgAACAAJ. OCLC: 401711110. Library of Congress
1397. Christian Igel and Marc Toussaint. On Classes of Functions for which No Free Lunch
Results Hold. Information Processing Letters, 86(6):317–321, June 30, 2003, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scien-
tific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(03)00222-9.
CiteSeerx : 10.1.1.72.4278 and 10.1.1.8.6772.
1398. Christian Igel and Marc Toussaint. Recent Results on No-Free-Lunch Theorems for Opti-
mization. arXiv.org, Cornell University Library: Ithaca, NY, USA, March 31, 2003. Fully
available at http://www.citebase.org/abstract?id=oai:arXiv.org:cs/0303032
[accessed 2008-03-28]. arXiv ID: abs/cs/0303032.
1399. Auke Jan Ijspeert, Masayuki Murata, and Naoki Wakamiya, editors. Revised Selected Papers
of the First International Workshop on Biologically Inspired Approaches to Advanced In-
formation Technology (BioADIT’04), January 29–30, 2004, Lausanne, Switzerland, volume
3141/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b101281. isbn: 3-540-23339-3. Google Books ID: PdYn-iHZFDYC
and WOR7wu_jMMQC. OCLC: 56878906 and 56922928. Library of Congress Control Number
(LCCN): 2004113283.
1400. Kokolo Ikeda, Hajime Kita, and Shigenobu Kobayashi. Failure of Pareto-based MOEAs:
Does Non-Dominated Really Mean Near to Optimal? In CEC’01 [1334], pages 957–962,
volume 2, 2001. doi: 10.1109/CEC.2001.934293. INSPEC Accession Number: 7018245.
1401. Michael Iles and Dwight Lorne Deugo. A Search for Routing Strategies in a Peer-to-
Peer Network Using Genetic Programming. In SRDS’02 [1384], pages 341–346, 2002.
doi: 10.1109/RELDIS.2002.1180207. INSPEC Accession Number: 7516795.
1402. Lester Ingber. Adaptive Simulated Annealing (ASA): Lessons learned. Control and Cybernet-
ics, 25(1):33–54, 1996, Polish Academy of Sciences (Polska Akademia Nauk, PAN): Poland.
Fully available at http://www.ingber.com/asa96_lessons.pdf [accessed 2010-09-25] and
http://www.ingber.com/asa96_lessons.ps.gz. CiteSeerx : 10.1.1.15.2777.
1403. Lester Ingber. Simulated Annealing: Practice versus Theory. Mathematical and Com-
puter Modelling, 18(11):29–57, November 1993, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0895-7177(93)90204-C. Fully available at http://www.ingber.com/asa93_
sapvt.pdf [accessed 2009-09-18]. CiteSeerx : 10.1.1.15.1046.
1404. Proceedings of the 3rd International Conference on Agents and Artificial Intelligence
(ICAART’11), January 28–30, 2011, Rome, Italy. Institute for Systems and Technologies
of Information, Control and Communication (INSTICC) Press: Setubal, Portugal.
1405. Proceedings of the 2nd International Conference on Engineering Optimization (EngOpt’10),
September 6–9, 2010, Instituto Superior Técnico (IST) Congress Center: Lisbon, Portugal.
1406. Proceedings of the World Congress on Engineering and Computer Science (WCECS’07), Oc-
tober 26–27, 2007, University of California: Berkeley, CA, USA, Lecture Notes in Engineer-
ing and Computer Science. International Association of Engineers (IAENG): Hong Kong
(Xiānggǎng), China and Newswood Publications Ltd.: Hong Kong (Xiānggǎng), China.
isbn: 988-98671-6-8. Google Books ID: Lg8PLwAACAAJ. OCLC: 184910962.
1407. Proceedings of the IASTED International Conference on Databases and Applications
(DBA’06), February 13–15, 2006, Innsbruck, Austria. International Association of Science
and Technology for Development (IASTED): Calgary, AB, Canada. Part of [1408].
1408. Proceedings of the 24th IASTED International Multi-Conference on Applied Informatics
(AI’06), February 14–16, 2006, Innsbruck, Austria. International Association of Science and
Technology for Development (IASTED): Calgary, AB, Canada.
1409. Proceedings of the Tenth IASTED International Conference on Parallel and Distributed Com-
puting and Systems (PDCS’98), October 1–4, 1998, Las Vegas, NV, USA. International As-
sociation of Science and Technology for Development (IASTED): Calgary, AB, Canada and
ACTA Press: Calgary, AB, Canada.
1410. Proceedings of the Eleventh IASTED International Conference on Parallel and Distributed
Computing and Systems (PDCS’99), November 3–6, 1999, Massachusetts Institute of Tech-
nology (MIT): Cambridge, MA, USA. International Association of Science and Technology
for Development (IASTED): Calgary, AB, Canada and ACTA Press: Calgary, AB, Canada.
1411. The 2012 IEEE World Congress on Computational Intelligence (WCCI’12), June 10–15,
2012, International Convention Centre: Brisbane, QLD, Australia.
1412. 8th International Conference on Systems Research, Informatics and Cybernetics (Inter-
Symp’96), August 14–18, 1996, Baden-Baden, Baden-Württemberg, Germany. International
Institute for Advanced Studies in Systems Research and Cybernetics (IIAS): Tecumseh, ON,
Canada.
1413. Proceedings of the Symposium on Network and Distributed Systems Security (NDSS’00),
February 2–4, 2000, Catamaran Resort Hotel: San Diego, CA, USA. Internet Society
(ISOC): Reston, VA, USA. isbn: 1-891562-07-X, 1-891562-08-8, and 1-891562-11-8.
Google Books ID: 49fJPgAACAAJ, LzfCPgAACAAJ, rAGIPgAACAAJ, and sGuCPgAACAAJ.
OCLC: 59522955 and 236489300.
1414. Proceedings of the First Conference on Artificial General Intelligence (AGI’08), March 1–3,
2008, University of Memphis, FedEx Institute of Technology: Memphis, TN, USA. IOS Press:
Amsterdam, The Netherlands.
1415. Proceedings of the 4th International Workshop on Machine Learning (ML’87), 1987, Irvine,
CA, USA.
1416. Hisao Ishibuchi and Yusuke Nojima. Optimization of Scalarizing Functions Through
Evolutionary Multiobjective Optimization. In EMO’07 [2061], pages 51–65, 2007.
doi: 10.1007/978-3-540-70928-2_8. Fully available at http://ksuseer1.ist.psu.edu/
viewdoc/summary?doi=10.1.1.78.8014 and http://www.lania.mx/˜ccoello/
EMOO/ishibuchi07a.pdf.gz [accessed 2009-07-18].
1417. Hisao Ishibuchi, Tsutomu Doi, and Yusuke Nojima. Incorporation of Scalar-
izing Fitness Functions into Evolutionary Multiobjective Optimization Algorithms.
In PPSN IX [2355], pages 493–502, 2006. doi: 10.1007/11844297_50. Fully
available at http://www.ie.osakafu-u.ac.jp/˜hisaoi/ci_lab_e/research/pdf_
file/multiobjective/PPSN_2006_EMO_Camera-Ready-HP.pdf [accessed 2009-07-18].
1418. Hisao Ishibuchi, Yusuke Nojima, and Tsutomu Doi. Comparison between Single-Objective
and Multi-Objective Genetic Algorithms: Performance Comparison and Performance Mea-
sures. In CEC’06 [3033], pages 3959–3966, 2006. doi: 10.1109/CEC.2006.1688438. INSPEC
Accession Number: 9723561.
1419. Hisao Ishibuchi, Noritaka Tsukamoto, and Yusuke Nojima. Iterative Approach to
Indicator-based Multiobjective Optimization. In CEC’07 [1343], pages 3967–3974, 2007.
doi: 10.1109/CEC.2007.4424988. INSPEC Accession Number: 9889480.
1420. Hisao Ishibuchi, Noritaka Tsukamoto, and Yusuke Nojima. Evolutionary Many-
Objective Optimization: A Short Review. In CEC’08 [1889], pages 2424–2431, 2008.
doi: 10.1109/CEC.2008.4631121. Fully available at http://www.ie.osakafu-u.ac.
jp/˜hisaoi/ci_lab_e/research/pdf_file/multiobjective/CEC2008_Many_
Objective_Final.pdf [accessed 2009-07-18]. INSPEC Accession Number: 10250829.
1421. Masumi Ishikawa, editor. Fourth International Conference on Hybrid Intelligent Systems
(HIS’04), December 5–8, 2004, Kitakyushu, Japan. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7695-2291-2. OCLC: 58543493, 58804879, 61262247, 71500201,
254581031, and 423969575. INSPEC Accession Number: 8470883.
1422. Hajira Jabeen and Abdul Rauf Baig. Review of Classification using Genetic Programming. In-
ternational Journal of Engineering Science and Technology, 2(2):94–103, February 2010, Engg
Journals Publications (EJP): Otteri, Vandalur, Chennai, India. Fully available at http://
www.ijest.info/docs/IJEST10-02-02-06.pdf [accessed 2010-06-25].
1423. David Jackson. Parsing and Translation of Expressions by Genetic Programming. In
GECCO’05 [304], pages 1681–1688, 2005. doi: 10.1145/1068009.1068291.
1424. Christian Jacob, Marcin L. Pilat, Peter J. Bentley, and Jonathan Timmis, editors. Proceedings
of the 4th International Conference on Artificial Immune Systems (ICARIS’05), August 14–
17, 2005, Banff, AB, Canada, volume 3627 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-28175-4.
1425. Matthias Jacob, Alexander Kuscher, Max Plauth, and Christoph Thiele. Automated Data
Augmentation Services Using Text Mining, Data Cleansing and Web Crawling Techniques.
In Third IEEE International Services Computing Contest [1347], pages 136–143, 2008.
doi: 10.1109/SERVICES-1.2008.67. INSPEC Accession Number: 10117645.
1426. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code
Optimization. ACM SIGMICRO Newsletter, 13(4):143–148, December 1982, Association for
Computing Machinery (ACM): New York, NY, USA. See also [1427].
1427. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code
Optimization. In MICRO 15 [1385], pages 143–146, 1982. See also [1426].
1428. H. V. Jagadish and Inderpal Singh Mumick, editors. Proceedings of the 1996 ACM SIGMOD
International Conference on Management of Data (SIGMOD’96), June 4–6, 1996, Montréal,
QC, Canada. ACM Press: New York, NY, USA.
1429. Martin Jähne, Xiaodong Li, and Jürgen Branke. Evolutionary Algorithms and Multi-
Objectivization for the Travelling Salesman Problem. In GECCO’09-I [2342], pages 595–602,
2009. doi: 10.1145/1569901.1569984. Fully available at http://goanna.cs.rmit.edu.
au/˜xiaodong/publications/multi-objectivization-jahne-gecco09.pdf [ac-
cessed 2010-07-18].
1430. Lakhmi C. Jain and Janusz Kacprzyk, editors. New Learning Paradigms in Soft Computing,
volume 84 (PR-200) in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH:
Berlin, Germany, February 2002. isbn: 3-7908-1436-9. Google Books ID: u2NcykOhYYwC.
1431. Nick Jakobi. Harnessing Morphogenesis. Technical Report CSRP 423, University of Sussex,
School of Cognitive Science: Brighton, UK, 1995. CiteSeerx : 10.1.1.39.372.
1432. Mohammad Jamshidi, editor. Multimedia, Image Processing, and Soft Computing: Trends,
Principles, and Applications: Proceedings of the 5th Biannual World Automation Congress
(WAC’02), June 9–13, 2002, Orlando, FL, USA, volume 13/14 in Proceedings of the World
Automation Congress. Technology Software & Information Enterprise (TSI Press) Inc.: San
Antonio, TX, USA. isbn: 1889335185 and 1889335193. Google Books ID: jPGbAAAACAAJ.
OCLC: 51296904, 54530539, 80832353, and 248985844. Catalogue no.: D2EX548.
1433. Thomas Jansen and Ingo Wegener. A Comparison of Simulated Annealing with a Simple
Evolutionary Algorithm on Pseudo-boolean Functions of Unitation. Theoretical Computer
Science, 386(1-2):73–93, October 28, 2007, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.tcs.2007.06.003.
1434. Matthias Jarke, Michael J. Carey, Klaus R. Dittrich, Frederick H. Lochovsky, Pericles
Loucopoulos, and Manfred A. Jeusfeld, editors. Proceedings of 23rd International Confer-
ence on Very Large Data Bases (VLDB’97), August 25–27, 1997, Athens, Greece. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-470-7.
1435. Jiřı́ Jaroš and Václav Dvořák. An Evolutionary Design Technique for Collective Communica-
tions on Optimal Diameter-Degree Networks. In GECCO’08 [1519], pages 1539–1546, 2008.
doi: 10.1145/1389095.1389391.
1436. Grahame A. Jastrebski and Dirk V. Arnold. Improving Evolution Strategies through
Active Covariance Matrix Adaptation. In CEC’06 [3033], pages 9719–9726, 2006.
doi: 10.1109/CEC.2006.1688662. INSPEC Accession Number: 9723774.
1437. Andrzej Jaszkiewicz. On the Computational Efficiency of Multiple Objective Metaheuristics:
The Knapsack Problem Case Study. European Journal of Operational Research (EJOR), 158
(2):418–433, October 16, 2004, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.ejor.2003.06.015.
1438. Edwin Thompson Jaynes, author. G. Larry Bretthorst, editor. Probability Theory: The Logic
of Science. Cambridge University Press: Cambridge, UK, May 2002–June 2003, Washington
University: St. Louis, MO, USA. isbn: 0521592712. Fully available at http://bayes.
wustl.edu/etj/prob/book.pdf [accessed 2009-07-25]. Google Books ID: tTN4HuUNXjgC.
OCLC: 49859931 and 271784842.
1439. Henrik Jeldtoft Jensen. Self-Organized Criticality: Emergent Complex Behavior in Physical
and Biological Systems, volume 10 in Cambridge Lecture Notes in Physics. Cambridge Uni-
versity Press: Cambridge, UK, 1998. isbn: 0521483719. Google Books ID: 7u3x1Luq50cC.
1440. Mikkel T. Jensen. Guiding Single-Objective Optimization Using Multi-Objective Methods.
In EvoWorkshop’03 [2253], pages 91–98, 2003. doi: 10.1007/3-540-36605-9_25. See also [1442].
1441. Mikkel T. Jensen. Reducing the Run-Time Complexity of Multiobjective EAs: The
NSGA-II and other Algorithms. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 7(5):503–515, October 2003, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.817234. INSPEC Accession Number: 7757990.
1442. Mikkel T. Jensen. Helper-Objectives: Using Multi-Objective Evolutionary Algo-
rithms for Single-Objective Optimisation. Journal of Mathematical Modelling and Al-
gorithms, 3(4):323–347, December 2004, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/B:JMMA.0000049378.57591.c6. See also [1440].
1443. Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, and Feng Wu, editors. Proceedings of the
Second International Conference on Advances in Natural Computation, Part I (ICNC’06-
I), September 24–28, 2006, Xı̄’ān, Shǎnxı̄, China, volume 4221 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11881070.
isbn: 3-540-45901-4. See also [1444].
1444. Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, and Feng Wu, editors. Proceedings of the
Second International Conference on Advances in Natural Computation, Part II (ICNC’06-
II), September 24–28, 2006, Xı̄’ān, Shǎnxı̄, China, volume 4222 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11881223.
isbn: 3-540-45907-3. See also [1443].
1445. Keisoku Jidō and Seigyo Gakkai, editors. Proceedings of IEEE International Conference on
Evolutionary Computation (CEC’96), May 20–22, 1996, Nagoya University, Symposium &
Toyoda Auditorium: Nagoya, Japan. IEEE Computer Society Press: Los Alamitos, CA,
USA. isbn: 0-7803-2902-3. Google Books ID: C ZPAAAACAAJ, dNZVAAAAMAAJ, and
qi7jGAAACAAJ. OCLC: 312788226 and 639955150. INSPEC Accession Number: 5398378.
1446. Wan-rong Jih and Jane Yung-jen Hsu. Dynamic Vehicle Routing Using Hy-
brid Genetic Algorithms. In ICRA’99 [1386], pages 453–458, volume 1, 1999.
doi: 10.1109/ROBOT.1999.770019. Fully available at http://agents.csie.ntu.
edu.tw/˜jih/publication/ICRA99-GAinDVRP.pdf, http://neo.lcc.uma.es/
radi-aeb/WebVRP/data/articles/DVRP.pdf, and http://ntur.lib.ntu.edu.
tw/bitstream/246246/2007041910021646/1/00770019.pdf [accessed 2010-11-02]. IN-
SPEC Accession Number: 6345736.
1447. Prasanna Jog, Jung Y. Suh, and Dirk Van Gucht. Parallel Genetic Algorithms Applied
to the Traveling Salesman Problem. SIAM Journal on Optimization (SIOPT), 1(4):515–
529, 1991, Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA.
doi: 10.1137/0801031.
1448. Wilhelm Ludvig Johannsen. Elemente der exakten Erblichkeitslehre (mit Grundzügen der Bi-
ologischen Variationsstatistik). Verlag von Gustav Fischer: Jena, Thuringia, Germany, second
German, revised and very extended edition, 1909–1913. Fully available at http://www.zum.
de/stueber/johannsen/elemente/ [accessed 2008-08-20]. Google Books ID: KF8MGQAACAAJ,
RmRVAAAAMAAJ, dxb GwAACAAJ, ggglAAAAMAAJ, l1Q1AAAAMAAJ, oSHVJgAACAAJ, and
yoBUAAAAMAAJ. Limitations of natural selection on pure lines.
1449. Brad Johanson and Riccardo Poli. GP-Music: An Interactive Genetic Programming Sys-
tem for Music Generation with Automated Fitness Raters. Technical Report CSRP-98-13,
University of Birmingham, School of Computer Science: Birmingham, UK, 1998. Fully avail-
able at http://graphics.stanford.edu/˜bjohanso/gp-music/tech-report/ [ac-
cessed 2009-06-18]. See also [1453].
1450. Matthias John and Max J. Ammann. Design of a Wide-Band Printed Antenna Using a
Genetic Algorithm on an Array of Overlapping Sub-Patches. In iWAT’06 [1745], pages 92–
95, 2006. Fully available at http://www.ctvr.ie/docs/RF%20Pubs/01608983.
pdf [accessed 2008-09-03]. See also [1451].
1451. Matthias John and Max J. Ammann. Optimisation of a Wide-Band Printed Monopole An-
tenna using a Genetic Algorithm. In PAPC’06 [1778], pages 237–240, 2006. Fully available
at http://www.ctvr.ie/docs/RF%20Pubs/LAPC_2006_MJ.pdf [accessed 2008-09-02]. See
also [1450].
1452. Colin G. Johnson and Juan Jesús Romero Cardalda. Workshop “Genetic Algorithms in
Visual Art and Music” (GAVAM’00). In GECCO’00 WS [2986], July 8, 2000, Las Vegas,
NV, USA. Bird-of-a-feather Workshop at the 2000 Genetic and Evolutionary Computation
Conference (GECCO-2000). Part of [2986].
1453. Colin G. Johnson and Riccardo Poli. GP-Music: An Interactive Genetic Programming Sys-
tem for Music Generation with Automated Fitness Raters. In GP’98 [1612], pages 181–
186, 1998. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
johanson_1998_GP-Music.html [accessed 2009-06-18]. CiteSeerx : 10.1.1.14.1076. See
also [1449].
1454. David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine Schevon. Optimiza-
tion by Simulated Annealing: An Experimental Evaluation. Part I, Graph Partitioning. Op-
erations Research, 37(6), November–December 1989, Institute for Operations Research and
the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford
University): Cambridge, MA, USA. doi: 10.1287/opre.37.6.865. See also [1455].
1455. David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine Schevon. Op-
timization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Color-
ing and Number Partitioning. Operations Research, 39(3):378–406, May–June 1991, Insti-
tute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD,
USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1287/opre.39.3.378. Fully available at http://www.cs.uic.edu/˜jlillis/courses/cs594_
f02/JohnsonSA.pdf [accessed 2009-09-09]. See also [1454].
1456. Derek M. Johnson, Ankur M. Teredesai, and Robert T. Saltarelli. Genetic Programming in
Wireless Sensor Networks. In EuroGP’05 [1518], pages 96–107, 2005. doi: 10.1007/b107383.
Fully available at http://www.cs.rit.edu/˜amt/pubs/EuroGP05FinalTeredesai.
pdf [accessed 2008-06-17].
1457. Timothy D. Johnston. Selective Costs and Benefits in the Evolution of Learning. In Advances
in the Study of Behavior [2332], pages 65–106. Academic Press Professional, Inc.: San Diego,
CA, USA. doi: 10.1016/S0065-3454(08)60046-7.
1458. Jeffrey A. Joines and Christopher R. Houck. On the Use of Non-Stationary Penalty Func-
tions to Solve Nonlinear Constrained Optimization Problems with GA’s. In CEC’94 [1891],
pages 579–584, volume 2, 1994. doi: 10.1109/ICEC.1994.349995. Fully available
at http://www.cs.cinvestav.mx/˜constraint/papers/gacons.ps.gz [accessed 2008-11-15].
CiteSeerx : 10.1.1.28.1065.
1459. Donald Forsha Jones, editor. Proceedings of the Sixth Annual Congress of Genetics, 1932,
Ithaca, NY, USA. Brooklyn Botanic Gardens: New York, NY, USA and Menasha Corpora-
tion: Neenah, WI, USA.
1460. Donald R. Jones and Mark A. Beltramo. Solving Partitioning Problems with Genetic Algo-
rithms. In ICGA’91 [254], pages 442–449, 1991.
1461. Gareth Jones. Genetic and Evolutionary Algorithms. In Encyclopedia of Computational
Chemistry. Volume III: Databases and Expert Systems [2824], volume 29. John Wiley & Sons
Ltd.: New York, NY, USA, 1998. Fully available at http://www.wiley.com/legacy/
wileychi/ecc/samples/sample10.pdf [accessed 2010-08-01].
1462. Joel Jones. Abstract Syntax Tree Implementation Idioms. In PLoP’03 [2691],
2003. Fully available at http://hillside.net/plop/plop2003/Papers/
Jones-ImplementingASTs.pdf [accessed 2010-08-19].
1463. Josh Jones and Terence Soule. Comparing Genetic Robustness in Generational vs.
Steady State Evolutionary Algorithms. In GECCO’06 [1516], pages 143–150, 2006.
doi: 10.1145/1143997.1144024.
1464. Terry Jones. A Description of Holland’s Royal Road Function. Evolutionary Computation, 2
(4):411–417, Winter 1994, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1994.2.4.409.
See also [1465].
1465. Terry Jones. A Description of Holland’s Royal Road Function. Working Papers 94-11-
059, Santa Fe Institute: Santa Fé, NM, USA, November 1994. Fully available at http://
www.santafe.edu/media/workingpapers/94-11-059.pdf [accessed 2010-12-04]. See also
[1464].
1466. Terry Jones. Crossover, Macromutation, and Population-based Search. In ICGA’95 [883],
pages 73–80, 1995. CiteSeerx : 10.1.1.42.3194.
1467. Terry Jones. Evolutionary Algorithms, Fitness Landscapes and Search. PhD thesis, Uni-
versity of New Mexico: Albuquerque, NM, USA, May 1995, Ron Mullin, Ian Munro,
Charlie Colbourn, Douglas Hofstadter, Gregory J. E. Rawlins, John Henry Holland, and
Stephanie Forrest, Advisors. Fully available at http://jon.es/research/phd.pdf
and http://www.cs.unm.edu/˜forrest/dissertations-and-proposals/terry.
pdf [accessed 2009-07-10].
1468. Terry Jones and Stephanie Forrest. Fitness Distance Correlation as a Measure of Prob-
lem Difficulty for Genetic Algorithms. In ICGA’95 [883], pages 184–192, 1995. Fully
available at http://www.santafe.edu/research/publications/workingpapers/
95-02-022.ps [accessed 2009-07-10]. CiteSeerx : 10.1.1.42.415.
1469. Aravind K. Joshi, editor. Proceedings of the 9th International Joint Conference on Artificial
Intelligence (IJCAI’85-I), August 1985, Los Angeles, CA, USA, volume 1. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/
ijcai/IJCAI-85-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1470].
1470. Aravind K. Joshi, editor. Proceedings of the 9th International Joint Conference on Artificial
Intelligence (IJCAI’85-II), August 1985, Los Angeles, CA, USA, volume 2. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.
in/ijcai/IJCAI-85-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [1469].
1471. Bryant A. Julstrom. Seeding the Population: Improved Performance in a Genetic Al-
gorithm for the Rectilinear Steiner Problem. In SAC’94 [734], pages 222–226, 1994.
doi: 10.1145/326619.326728.
1472. Bryant A. Julstrom. Evolutionary Codings and Operators for the Terminal Assignment
Problem. In GECCO’09-I [2342], pages 1805–1806, 2009. doi: 10.1145/1569901.1570171.
1473. Lukasz Juszcyk, Anton Michlmayer, and Christian Platzer. Large Scale Web Service Discov-
ery and Composition using High Performance In-Memory Indexing. In CEC/EEE’07 [1344],
pages 509–512, 2007. doi: 10.1109/CEC-EEE.2007.60. Fully available at https://berlin.
vitalab.tuwien.ac.at/˜florian/papers/cec2007.pdf [accessed 2007-10-24]. INSPEC
Accession Number: 9868635. See also [319].
K
1474. Leslie Pack Kaelbling and Alessandro Saffiotti, editors. Proceedings of the Nineteenth In-
ternational Joint Conference on Artificial Intelligence (IJCAI’05), July 30–August 5, 2005,
Edinburgh, Scotland, UK. International Joint Conferences on Artificial Intelligence: Den-
ver, CO, USA. isbn: 0938075934. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-05/IJCAI-05%20CONTENT.htm [accessed 2008-04-01]. OCLC: 442578924.
1475. Professor Kaelo. Some Population-set based Methods for Unconstrained Global Optimization.
PhD thesis, Witwatersrand University, School of Computational and Applied Mathematics:
Johannesburg, South Africa, 2005, M. Montaz Ali, Advisor.
1476. Professor Kaelo and M. Montaz Ali. A Numerical Study of Some Modified Differen-
tial Evolution Algorithms. European Journal of Operational Research (EJOR), 169(3):
1176–1184, March 16, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.ejor.2004.08.047.
1477. Professor Kaelo and M. Montaz Ali. Differential Evolution Algorithms using Hybrid Muta-
tion. Computational Optimization and Applications, 37(2):231–246, June 2007, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s10589-007-9014-3.
1478. Tatiana Kalganova. An Extrinsic Function-Level Evolvable Hardware Approach. In Eu-
roGP’00 [2198], pages 60–75, 2000. doi: 10.1007/b75085. CiteSeerx : 10.1.1.42.1386.
1479. Tatiana Kalganova and Julian Francis Miller. Evolving More Efficient Digital Circuits by
Allowing Circuit Layout Evolution and Multi-Objective Fitness. In EH’99 [2614], pages 54–
63, 1999. doi: 10.1109/EH.1999.785435. CiteSeerx : 10.1.1.47.948. INSPEC Accession
Number: 6338575.
1480. Leila Kallel, Bart Naudts, and Alex Rogers, editors. Proceedings of the Second Evonet Sum-
mer School on Theoretical Aspects of Evolutionary Computing, September 1–7, 1999, Uni-
versity of Antwerp (RUCA): Antwerp, Belgium, Natural Computing Series. Springer New
York: New York, NY, USA. isbn: 3-540-67396-2. Google Books ID: 6TWWVJD2yXYC.
OCLC: 174724209. Library of Congress Control Number (LCCN): 00061910. GBV-
Identification (PPN): 319367169. LC Classification: QA76.618 .T47 2001.
1481. Olav Kallenberg. Foundations of Modern Probability, Probability and Its Applications,
Springer Series in Statistics (SSS). Springer New York: New York, NY, USA, 2nd edi-
tion, January 8, 2002. isbn: 0-3879-4957-7 and 0-3879-5313-2. Google Books
ID: L6fhXh13OyMC, TBgFslMy8V4C, and lzLVjXkERQQC. OCLC: 46937587.
1482. Olav Kallenberg. Probabilistic Symmetries and Invariance Principles, Probability and
Its Applications, Springer Series in Statistics (SSS). Springer New York: New York,
NY, USA, June 27, 2005. isbn: 0-3872-5115-4 and 0-3872-8861-9. Google Books
ID: E2mUQjV09y4C, Zn3rKAAACAAJ, and apT0AQAACAAJ. OCLC: 60740995, 64668233,
315796434, and 317470414.
1483. Subrahmanyam Kalyanasundaram, Richard J. Lipton, Kenneth W. Regan, and Farbod
Shokrieh. Improved Simulation of Nondeterministic Turing Machines. In MFCS’10 [2582],
2010. Fully available at http://www.cc.gatech.edu/˜subruk/pdf/simulate.pdf [ac-
cessed 2010-06-24].
1484. Subbarao Kambhampati, editor. Proceedings of the Twentieth National Conference on Artifi-
cial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Confer-
ence (AAAI’05 / IAAI’05), July 9–13, 2005, Pittsburgh, PA, USA. AAAI Press: Menlo Park,
CA, USA and MIT Press: Cambridge, MA, USA. isbn: 1-57735-236-X. Partly available
at http://www.aaai.org/Conferences/AAAI/aaai05.php and http://www.aaai.
org/Conferences/IAAI/iaai06.php [accessed 2007-09-06].
1485. Subbarao Kambhampati and Craig A. Knoblock, editors. Proceedings of IJCAI’03 Work-
shop on Information Integration on the Web (IIWeb’03), August 9–10, 2003, Acapulco,
México. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available
at http://www.isi.edu/info-agents/workshops/ijcai03/proceedings.htm [ac-
cessed 2008-04-01]. Part of [1118].
ACM Order No.: 910080 and 910081. See also [585, 1872, 2268, 2537].
1520. Jozef Kelemen and Petr Sosı́k, editors. Proceedings of the 6th European Conference on Ad-
vances in Artificial Life (ECAL’01), September 10–14, 2001, Prague, Czech Republic, volume
2159/-1 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/3-540-44811-X. isbn: 3-540-42567-5.
1521. Evelyn Fox Keller and Elisabeth A. Lloyd, editors. Keywords in Evolutionary Biology. Har-
vard University Press: Cambridge, MA, USA, November 1992. isbn: 0-674-50312-0 and
0-674-50313-9. Google Books ID: Hvm7sCuyRV4C and fgRpAAAACAAJ.
1522. James M. Keller and Olfa Nasraoui, editors. Proceedings of the Annual Meeting of the
North American Fuzzy Information Processing Society (NAFIPS’02), June 27–29, 2002, Tu-
lane University: New Orleans, LA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-7461-4. Google Books ID: mpdVAAAAMAAJ. OCLC: 50258796, 63197367,
248814326, and 268938117. Catalogue no.: 02TH8622.
1523. James Kennedy and Russell C. Eberhart. Particle Swarm Optimization. In ICNN’95 [1362],
pages 1942–1948, volume 4, 1995. doi: 10.1109/ICNN.1995.488968. Fully available
at http://www.engr.iupui.edu/˜shi/Coference/psopap4.html [accessed 2010-12-24].
INSPEC Accession Number: 5263228.
1524. James Kennedy and Russell C. Eberhart. Swarm Intelligence: Collective, Adaptive. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2001, Yuhui Shi, Publisher.
isbn: 1558605959. Partly available at http://www.engr.iupui.edu/˜eberhart/web/
PSObook.html [accessed 2008-08-22]. Google Books ID: pHku3gYfL2UC and vOx-QV3sRQsC.
1525. Brian Wilson Kernighan and Dennis M. Ritchie. The C Programming Language, Prentice-
Hall Software Series. Prentice Hall PTR: Englewood Cliffs, Upper Saddle River, NJ,
USA, first/second edition, 1978–1988. isbn: 0-13-110163-3, 0-13-110362-8, and
0-13-110370-9. OCLC: 3608698, 251418906, 257254858, and 299378788. Library
of Congress Control Number (LCCN): 77028983. GBV-Identification (PPN): 023146052,
076304213, and 423015281. LC Classification: QA76.73.C15 K47.
1526. Martin L. Kersten, Boris Novikov, Jens Teubner, Vladimir Polutin, and Stefan Manegold,
editors. Proceedings of the 12th International Conference on Extending Database Technology:
Advances in Database Technology (EDBT’09), March 24–26, 2009, Saint Petersburg, Russia,
volume 360 in ACM International Conference Proceeding Series (AICPS). ACM Press: New
York, NY, USA.
1527. Faten Kharbat, Larry Bull, and Mohammed Odeh. Mining Breast Cancer Data with
XCS. In GECCO’07-I [2699], pages 2066–2073, 2007. doi: 10.1145/1276958.1277362. Fully
available at http://www.cs.york.ac.uk/rts/docs/GECCO_2007/docs/p2066.pdf
[accessed 2010-07-26].
1586. Petros Koumoutsakos, J. Freund, and D. Parekh. Evolution Strategies for Parameter Opti-
mization in Jet Flow Control. In Proceedings of the 1998 Summer Program [1925], 1998. Fully
available at http://ctr.stanford.edu/Summer98/koumoutsakos.pdf [accessed 2009-06-
16].
1587. Tim Kovacs. XCS Classifier System Reliably Evolves Accurate, Complete, and Minimal
Representations for Boolean Functions. In WSC2 [535], pages 59–68, 1997. Fully available
at http://www.cs.bris.ac.uk/Publications/Papers/1000239.pdf [accessed 2007-08-18].
CiteSeerx : 10.1.1.38.4522 and 10.1.1.58.5539.
1588. Tim Kovacs. Two Views of Classifier Systems. In IWLCS’01 [1679], pages 74–87,
2001. doi: 10.1007/3-540-48104-4_6. Fully available at http://www.cs.bris.ac.uk/
Publications/Papers/1000647.pdf [accessed 2007-09-11].
1589. Tim Kovacs, Xavier Llorà, Keiki Takadama, Pier Luca Lanzi, Wolfgang Stolzmann, and
Stewart W. Wilson, editors. Revised Selected Papers of the International Workshops on
Learning Classifier Systems (IWLCS 03-05), 2007, volume 4399/2007 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-71231-2. isbn: 3-540-71230-5.
OCLC: 612928449. Library of Congress Control Number (LCCN): 2007923322. See also
[27, 2280, 2578].
1590. John R. Koza. Non-Linear Genetic Algorithms for Solving Problems. United States Patent
4,935,877, United States Patent and Trademark Office (USPTO): Alexandria, VA, USA,
1988. Filed May 20, 1988. Issued June 19, 1990. Australian patent 611,350 issued September
21, 1991. Canadian patent 1,311,561 issued December 15, 1992.
1591. John R. Koza. Hierarchical Genetic Algorithms Operating on Populations of Com-
puter Programs. In IJCAI’89-I [2593], pages 768–774, 1989. Fully available
at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Koza89.html and http://
www.genetic-programming.com/jkpdf/ijcai1989.pdf [accessed 2010-08-19].
1592. John R. Koza. A Hierarchical Approach to Learning the Boolean Multi-
plexer Function. In FOGA’90 [2562], pages 171–191, 1990. Fully available
at http://www.genetic-programming.com/jkpdf/foga1990.pdf [accessed 2010-08-19].
CiteSeerx : 10.1.1.141.255.
1593. John R. Koza. Concept Formation and Decision Tree Induction using the Genetic Pro-
gramming Paradigm. In PPSN I [2438], pages 124–128, 1990. Fully available at http://
citeseer.ist.psu.edu/61578.html [accessed 2008-05-29].
1594. John R. Koza. Evolution and Co-Evolution of Computer Programs to Control
Independent-Acting Agents. In SAB’90 [1876], pages 366–375, 1990. Fully avail-
able at http://www.genetic-programming.com/jkpdf/sab1990.pdf [accessed 2008-10-19].
CiteSeerx : 10.1.1.140.7387.
1595. John R. Koza. Genetic Evolution and Co-Evolution of Computer Programs. In Artificial
Life II [1668], pages 603–629, 1990. Fully available at http://citeseer.ist.psu.
edu/177879.html, http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Koza_
geCP.html, and http://www.genetic-programming.com/jkpdf/alife1990.pdf
[accessed 2008-05-29]. CiteSeerx : 10.1.1.140.6634. Revised November 29, 1990.
1596. John R. Koza. Genetic Programming: A Paradigm for Genetically Breeding Populations
of Computer Programs to Solve Problems. Technical Report STAN-CS-90-1314, Stanford
University, Computer Science Department: Stanford, CA, USA, June 1990. Fully avail-
able at http://www.genetic-programming.com/jkpdf/tr1314.pdf [accessed 2010-08-19].
CiteSeerx : 10.1.1.4.9961. See also [2559].
1597. John R. Koza. Concept Formation and Decision Tree Induction using the Genetic Pro-
gramming Paradigm. In PPSN I [2438], pages 124–128, 1990. doi: 10.1007/BFb0029742.
CiteSeerx : 10.1.1.53.5800.
1598. John R. Koza. Non-Linear Genetic Algorithms for Solving Problems by Finding a Fit Com-
position of Functions. United States Patent 5,136,686, United States Patent and Trademark
Office (USPTO): Alexandria, VA, USA. Filed March 28, 1990, Issued August 4, 1992.
1599. John R. Koza. Genetic Programming II: Automatic Discovery of Reusable Programs, Com-
plex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, July 4, 1994.
isbn: 0-262-11189-6. Google Books ID: t7tpQgAACAAJ. OCLC: 30725207, 180673805,
264562231, 311925261, 493266030, and 634102150. Library of Congress Control
Number (LCCN): 94076375. GBV-Identification (PPN): 153763795, 186785119, and
308224353. LC Classification: QA76.6 .K693 1994.
1600. John R. Koza. A Response to the ML-95 Paper entitled “Hill Climbing beats Genetic Search on a Boolean Circuit Synthesis Problem of Koza’s”. Self-published. Fully available at http://
www.genetic-programming.com/jktahoe24page.html [accessed 2009-06-16]. Version 1 dis-
tributed at the Twelfth International Machine Learning Conference (ML-95). Version 2 dis-
tributed at the Sixth International Conference on Genetic Algorithms (ICGA-95). Version 3
deposited at WWW site on May 26, 1996. See also [883, 1665, 2221].
1601. John R. Koza. Genetic Algorithms and Genetic Programming at Stanford. Stanford University
Bookstore, Stanford University: Stanford, CA, USA, Fall 2003. Book of Student Papers from
John Koza’s Course at Stanford on Genetic Algorithms and Genetic Programming. Computer
Science 426 (CS 426), Biomedical Informatics 226 (BMI 226).
1602. John R. Koza. Genetic Programming: On the Programming of Computers by
Means of Natural Selection, Bradford Books. MIT Press: Cambridge, MA, USA,
December 1992. isbn: 0-262-11170-5. Google Books ID: Bhtxo60BV0EC and
Wd8TAAAACAAJ. OCLC: 26263956, 245844858, 312492070, 473279968, 476715745,
and 610975642. Library of Congress Control Number (LCCN): 92025785. GBV-
Identification (PPN): 110450795, 152621075, 185924921, 191489522, 237675633,
246890320, 282081046, and 49354836X. LC Classification: QA76.6 .K695 1992. 1992
first edition, 1993 second edition.
1603. John R. Koza, editor. Late Breaking Papers at the First Annual Conference on Genetic Programming (GP’96 LBP), July 28–31, 1996, Stanford University: Stanford, CA, USA. Stanford
University Bookstore, Stanford University: Stanford, CA, USA. isbn: 0-18-201031-7. See
also [1609].
1604. John R. Koza, editor. Late Breaking Papers at the 1997 Genetic Programming Conference
(GP’97 LBP), July 13–16, 1997, Stanford University: Stanford, CA, USA. Stanford Univer-
sity Bookstore, Stanford University: Stanford, CA, USA. isbn: 0-18-206995-8. Google
Books ID: Vn-nAAAACAAJ and cFI7OAAACAAJ. OCLC: 38045123. See also [1611].
1605. John R. Koza, editor. Late Breaking Papers at the Genetic Programming 1998 Conference
(GP’98 LBP), July 22–25, 1998, University of Wisconsin: Madison, WI, USA. Stanford
University Bookstore, Stanford University: Stanford, CA, USA. See also [1612].
1606. John R. Koza and Eric V. Siegel, editors. Proceedings of 1995 AAAI Fall Symposium Series,
Genetic Programming Track, November 10–12, 1995, Massachusetts Institute of Technology
(MIT): Cambridge, MA, USA. The symposium proceedings appeared as Technical Report
FS-95-01.
1607. John R. Koza, David Andre, Forrest H. Bennett III, and Martin A. Keane. Evolution of a Low-
Distortion, Low-Bias 60 Decibel Op Amp with Good Frequency Generalization using Genetic
Programming. In GP’96 LBP [1603], pages 94–100, 1996. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/koza_1996_eldld60db.html and http://
www.genetic-programming.com/jkpdf/gp1996lbpamplifier.pdf [accessed 2009-06-16].
CiteSeerx : 10.1.1.50.6221. See also [271].
1608. John R. Koza, David Andre, Forrest H. Bennett III, and Martin A. Keane. Use of Au-
tomatically Defined Functions and Architecture-Altering Operations in Automated Cir-
cuit Synthesis Using Genetic Programming. In GP’96 [1609], pages 132–149, 1996.
CiteSeerx : 10.1.1.51.3933.
1609. John R. Koza, David Edward Goldberg, David B. Fogel, and Rick L. Riolo, editors. Proceedings of the First Annual Conference on Genetic Programming (GP’96), July 28–31, 1996,
Stanford University: Stanford, CA, USA, Complex Adaptive Systems, Bradford Books. MIT
Press: Cambridge, MA, USA. isbn: 0-262-61127-9. Google Books ID: ndy0QgAACAAJ.
OCLC: 35580073, 70655786, and 422034061. See also [1603].
1610. John R. Koza, Forrest H. Bennett III, Jeffrey L. Hutchings, Stephen L. Bade, Martin A.
Keane, and David Andre. Evolving Sorting Networks using Genetic Programming and the
Rapidly Reconfigurable Xilinx 6216 Field-Programmable Gate Array. In Conference Record
of the Thirty-First Asilomar Conference on Signals, Systems & Computers [893], pages 404–
410, volume 1, 1997. doi: 10.1109/ACSSC.1997.680275. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/koza_1997_ASILIMOAR.html [accessed 2008-12-
24]. INSPEC Accession Number: 6020885.
1611. John R. Koza, Kalyanmoy Deb, Marco Dorigo, David B. Fogel, Max H. Garzon, Hitoshi
Iba, and Rick L. Riolo, editors. Proceedings of the Second Annual Conference on Genetic
Programming (GP’97), July 13–16, 1997, Stanford University: Stanford, CA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-558-60483-9. Google Books
ID: PFIZAQAAIAAJ. OCLC: 60136918. GBV-Identification (PPN): 235727091. See also
[1604].
1612. John R. Koza, Wolfgang Banzhaf, Kumar Chellapilla, Kalyanmoy Deb, Marco Dorigo,
David B. Fogel, Max H. Garzon, David Edward Goldberg, Hitoshi Iba, and Rick L. Ri-
olo, editors. Proceedings of the Third Annual Genetic Programming Conference (GP’98),
July 22–25, 1998, University of Wisconsin: Madison, WI, USA. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. isbn: 1-55860-548-7. OCLC: 123292121. See also
[1605].
1613. John R. Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. The Design of Ana-
log Circuits by Means of Genetic Programming. In Evolutionary Design by Computers [272],
Chapter 16, pages 365–385. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA,
1999. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/koza_
1999_dacGP.html and http://www.genetic-programming.com/jkpdf/edc1999.
pdf [accessed 2009-06-16].
1614. John R. Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. Automatic
Design of Analog Electrical Circuits using Genetic Programming. In Hugh Cartwright, editor,
Intelligent Data Analysis in Science [503], Chapter 8, pages 172–200. Oxford University Press,
Inc.: New York, NY, USA, 2000. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/koza_2000_idas.html [accessed 2009-06-16].
1615. John R. Koza, Martin A. Keane, Matthew J. Streeter, William Mydlowec, Jessen Yu, and
Guido Lanza. Genetic Programming IV: Routine Human-Competitive Machine Intelligence,
volume 5 in Genetic Programming Series. Springer Science+Business Media, Inc.: New York,
NY, USA, 2003. isbn: 0-387-25067-0, 0-387-26417-5, 1402074468, and 6610611831.
Google Books ID: YQxWzAEnINIC and vMaVhoI-hVUC. OCLC: 51817183, 60594679,
249361385, 255445558, 318293809, and 403761573.
1616. Dexter C. Kozen. The Design and Analysis of Algorithms, Texts and Monographs in Com-
puter Science. Springer-Verlag GmbH: Berlin, Germany, 1992. isbn: 0-387-97687-6 and
3-540-97687-6. Google Books ID: L AMnf9UF9QC. OCLC: 26014868, 318320484,
475812425, and 610835985. Library of Congress Control Number (LCCN): 91036759.
GBV-Identification (PPN): 018981704 and 118353721. LC Classification: QA76.9.A43
K69 1991.
1617. Oliver Kramer. Self-Adaptive Heuristics for Evolutionary Computation, volume 147 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, July 2008.
doi: 10.1007/978-3-540-69281-2. isbn: 3-540-69280-0. Google Books ID: f4QwC AJEjwC.
OCLC: 233933242. Library of Congress Control Number (LCCN): 2008928449. LC Clas-
sification: QA76.618 .K73 2008.
1618. Natalio Krasnogor and James E. Smith. A Tutorial for Competent Memetic Algorithms:
Model, Taxonomy, and Design Issues. IEEE Transactions on Evolutionary Computa-
tion (IEEE-EC), 9(5):474–488, October 2005, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/TEVC.2005.850260. Fully available at http://www.cs.nott.ac.uk/
˜nxk/PAPERS/IEEE-TEC-lastVersion.pdf [accessed 2008-03-28]. INSPEC Accession Num-
ber: 8608338.
1619. Natalio Krasnogor, Giuseppe Nicosia, Mario Pavone, and David Alejandro Pelta, editors.
Proceedings of the 2nd International Workshop Nature Inspired Cooperative Strategies for
Optimization (NICSO’07), November 8–10, 2007, Acireale, Catania, Sicily, Italy, volume
129/2008 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-540-78987-1. Library of Congress Control Number (LCCN): 2008924783.
1620. Natalio Krasnogor, Marı́a Belén Melián-Batista, José Andrés Moreno Pérez, J. Marcos
Moreno-Vega, and David Alejandro Pelta, editors. Proceedings of the 3rd International
Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO’08), Novem-
ber 12–14, 2008, Puerto de la Cruz, Tenerife, Spain, volume 236/2009 in Studies in Com-
putational Intelligence. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-642-03211-0.
isbn: 3-642-03210-9.
1621. Ramprasad S. Krishnamachari and Panos Y. Papalambros. Hierarchical Decomposition Syn-
thesis in Optimal Systems Design. Technical Report 95-16, University of Michigan, Depart-
ment of Mechanical Engineering and Applied Mechanics, Design Laboratory: Ann Arbor, MI,
USA, August 1995–September 1996. Fully available at http://hdl.handle.net/2027.
42/6077 [accessed 2009-10-25]. CiteSeerx : 10.1.1.25.9434. See also [1622].
1622. Ramprasad S. Krishnamachari and Panos Y. Papalambros. Hierarchical Decomposition
Synthesis in Optimal Systems Design. Journal of Mechanical Design, 119(4):448–457,
December 1997, American Society of Mechanical Engineers (ASME): New York, NY,
USA. doi: 10.1115/1.2826388. Fully available at http://ode.engin.umich.edu/
publications/PapalambrosPapers/1996/102.pdf [accessed 2009-10-25]. See also [1621].
1623. David Kristoffersson. A Software Framework for Genetic Programming. Bachelor’s
thesis, Luleå University of Technology (LTU), Skellefteå Campus: Skellefteå, Swe-
den, Spring 2005, Johann Nordlander and Daniel Gjörwell, Advisors. Fully avail-
able at http://epubl.ltu.se/1404-5494/2005/28/LTU-HIP-EX-0528-SE.pdf [ac-
cessed 2010-11-21]. issn: 1404-5494. 2005:28 HIP. ISRN: LTU-HIP-EX–05/28–SE.
1624. Antonio Krüger and Rainer Malaka, editors. Proceedings of Artificial Intelligence in Mobile
System (AIMS’03), October 12, 2003, Westin Seattle: Seattle, WA, USA. Fully available
at http://w5.cs.uni-sb.de/˜krueger/aims2003/ [accessed 2009-09-05]. Held in conjunc-
tion with Ubicomp 2003.
1625. Joseph B. Kruskal, Jr. On the Shortest Spanning Subtree of a Graph and the Traveling
Salesman Problem. Proceedings of the American Mathematical Society, 7(1):48–50, Febru-
ary 1956, American Mathematical Society (AMS) Bookstore: Providence, RI, USA. JSTOR
Stable ID: 2033241.
1626. Christian Kubczak, Tiziana Margaria, and Bernhard Steffen. Semantic Web Services Chal-
lenge 2006 – An Approach to Mediation and Discovery with jABC and miAamic. In
Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automat-
ing Web Services Mediation, Choreography and Discovery [2690], 2006. Fully available
at http://sws-challenge.org/workshops/2006-Athens/papers/sws_stage_3.
pdf [accessed 2010-12-16]. See also [1627–1632, 1712, 1831, 1832].
1627. Christian Kubczak, Tiziana Margaria, and Bernhard Steffen. Semantic Web Services Chal-
lenge 2006 – The jABC Approach to Mediation and Choreography. In Second Workshop of the
Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation,
Choreography and Discovery [2689], 2006. Fully available at http://sws-challenge.
org/workshops/2006-Budva/papers/unido_sws_2006_draft.pdf [accessed 2010-12-14].
See also [1626, 1628–1632, 1712, 1831, 1832].
1628. Christian Kubczak, Ralf Nagel, Tiziana Margaria, and Bernhard Steffen. The jABC Ap-
proach to Mediation and Choreography. In First Workshop of the Semantic Web Ser-
vice Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and
Discovery [2688], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Stanford/papers/06.pdf. See also [1626, 1627, 1629–1632, 1712, 1831, 1832].
1629. Christian Kubczak, Tiziana Margaria, Bernhard Steffen, and Stefan Naujokat. Service-
Oriented Mediation with jETI/jABC: Verification and Export. In WI/IAT Work-
shops’07 [1731], pages 144–147, 2007. doi: 10.1109/WI-IATW.2007.27. INSPEC Accession
Number: 9894901. See also [1626–1628, 1630–1632, 1712, 1831, 1832].
1630. Christian Kubczak, Tiziana Margaria, Christian Winkler, and Bernhard Steffen. Se-
mantic Web Services Challenge 2007 – An Approach to Discovery with miAam-
ics and jABC. In KWEB SWS Challenge Workshop [2450], 2007. Fully avail-
able at http://sws-challenge.org/workshops/2007-Innsbruck/papers/2-sws_
stage_4_disc.pdf [accessed 2010-12-16]. See also [1626–1629, 1631, 1632, 1712, 1831, 1832].
1631. Christian Kubczak, Christian Winkler, Tiziana Margaria, and Bernhard Steffen. An Ap-
proach to Discovery with miAamics and jABC. In WI/IAT Workshops’07 [1731], pages
157–160, 2007. doi: 10.1109/WI-IATW.2007.26. INSPEC Accession Number: 9894904. See
also [1626–1630, 1712, 1831, 1832, 2166].
1632. Christian Kubczak, Tiziana Margaria, Matthias Kaiser, Jens Lemcke, and Björn Knuth.
Abductive Synthesis of the Mediator Scenario with jABC and GEM. In EON-
SWSC’08 [1023], 2008. Fully available at http://sunsite.informatik.rwth-aachen.
de/Publications/CEUR-WS/Vol-359/Paper-5.pdf [accessed 2010-12-16]. See also [1626–
1628, 1630, 1631, 1712, 1831, 1832, 2166].
1633. Michal Kudelski and Andrzej Pacut. Ant Routing with Distributed Geographical Localiza-
tion of Knowledge in Ad-Hoc Networks. In EvoWorkshops’09 [1052], pages 111–116, 2009.
doi: 10.1007/978-3-642-01129-0 14.
1634. Ben Kuipers and Bonnie Lynn Webber, editors. Proceedings of the Fourteenth National
Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelli-
gence Conference, (AAAI’97, IAAI’97), July 27–31, 1997, Providence, RI, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51095-2. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai97.php and http://
www.aaai.org/Conferences/IAAI/iaai97.php [accessed 2007-09-06].
1635. Saku Kukkonen and Jouni A. Lampinen. Ranking-Dominance and Many-Objective Optimiza-
tion. In CEC’07 [1343], pages 3983–3990, 2007. doi: 10.1109/CEC.2007.4424990. INSPEC
Accession Number: 9889569.
1636. Anup Kumar, Rakesh M. Pathak, M. C. Gupta, and Yash P. Gupta. Genetic Algorithm
based Approach for Designing Computer Network Topologies. In CSC’93 [1654], pages 358–
365, 1993. doi: 10.1145/170791.170871.
1637. Sanjeev Kumar and Peter J. Bentley. Computational Embryology: Past, Present and Future.
In Advances in Evolutionary Computing – Theory and Applications [1049], pages 461–477.
Springer New York: New York, NY, USA, 2002. Fully available at http://www.cs.ucl.
ac.uk/staff/P.Bentley/KUBECH1.pdf [accessed 2010-08-06]. CiteSeerx : 10.1.1.63.1505.
1638. T. V. Vijay Kumar and Aloke Ghoshal. A Reduced Lattice Greedy Algo-
rithm for Selecting Materialized Views. In ICISTM’09 [2218], pages 6–18, 2009.
doi: 10.1007/978-3-642-00405-6 5. See also [1639].
1639. T. V. Vijay Kumar and Aloke Ghoshal. Greedy Selection of Materialized Views. International
Journal of Computer and Communication Technology (IJCCT), 1(1):47–58, 2009, Khandgiri,
Bhubaneswar, Orissa, India. Fully available at http://www.interscience.in/ijcct/
IJCCT_Paper5.pdf [accessed 2011-04-13]. See also [1638].
1640. Viktor M. Kureichik, Sergey P. Malyukov, Vladimir V. Kureichik, and Alexan-
der S. Malioukov. Genetic Algorithms for Applied CAD Problems, volume 212
in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2009.
doi: 10.1007/978-3-540-85281-0. isbn: 3-540-85280-8. Google Books ID: lqCoF45aYn0C.
1641. Věra Kůrková, Nigel C. Steele, Roman Neruda, and Miroslav Kárný, editors. Proceed-
ings of the 5th International Conference on Artificial Neural Networks and Genetic Algo-
rithms (ICANNGA’01), April 22–25, 2001, Lichtenstein Palace: Prague, Czech Republic.
isbn: 3-211-83651-9. OCLC: 47701381, 248270107, and 440721146.
1642. Frank Kursawe. A Variant of Evolution Strategies for Vector Optimization. In PPSN I [2438],
pages 193–197, 1990. doi: 10.1007/BFb002972. Fully available at http://eprints.kfupm.
edu.sa/22064/ and http://www.lania.mx/˜ccoello/kursawe.ps.gz [accessed 2009-06-22]. CiteSeerx : 10.1.1.47.8050.
1643. Selahattin Kuru, editor. Proceedings of the Second Turkish Symposium on Artificial In-
telligence and Neural Networks (Türk Yapay Zeka ve Yapay Sinir Ağları Sempozyumu)
(TAINN’93), June 24–25, 1993, Boğaziçi Üniversitesi: Istanbul, Turkey. OCLC: 283728082.
1644. Ulrich Küster and Birgitta König-Ries. Discovery and Mediation using the DIANE Ser-
vice Description. In Third Workshop of the Semantic Web Service Challenge 2006 – Chal-
lenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006.
Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/
SWS-Challenge-2006-Athens-Full.pdf [accessed 2010-12-16]. See also [1645, 1646, 1648–
1650].
1645. Ulrich Küster and Birgitta König-Ries. Service Discovery using DIANE Service De-
scriptions – A Solution to the SWS-Challenge Discovery Scenarios. In KWEB SWS
Challenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/
workshops/2007-Innsbruck/papers/1-Ulrich-2007.pdf [accessed 2010-12-16]. See also
[1644, 1646, 1648–1650].
1646. Ulrich Küster and Birgitta König-Ries. Semantic Mediation between Business Partners
– A SWS-Challenge Solution Using DIANE Service Descriptions. In WI/IAT Work-
shops’07 [1731], pages 139–143, 2007. doi: 10.1109/WI-IATW.2007.14. INSPEC Accession
Number: 9894900. See also [1644, 1645, 1648–1650].
1647. Ulrich Küster and Birgitta König-Ries. Semantic Service Discovery with DIANE Service
Descriptions. In WI/IAT Workshops’07 [1731], pages 152–156, 2007. doi: 10.1109/WI-I-
ATW.2007.87. INSPEC Accession Number: 9894903.
1648. Ulrich Küster, Christian Klein, and Birgitta König-Ries. Discovery and Mediation using the
DIANE Service Description. In First Workshop of the Semantic Web Service Challenge 2006
– Challenge on Automating Web Services Mediation, Choreography and Discovery [2688],
2006. Fully available at http://sws-challenge.org/workshops/2006-Stanford/
papers/07.pdf [accessed 2010-12-12]. See also [1644–1646, 1649, 1650].
1649. Ulrich Küster, Birgitta König-Ries, and Michael Klein. Discovery and Mediation us-
ing DIANE Service Descriptions. In Second Workshop of the Semantic Web Service
Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and
Discovery [2689], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Budva/papers/SWS-Challenge-2006-Budva.pdf [accessed 2010-12-14]. See also
[1644–1646, 1648, 1650].
1650. Ulrich Küster, Andrea Turati, Maciej Zaremba, Birgitta König-Ries, Dario Cerizza, Emanuele
Della Valle, Marco Brambilla, Stefano Ceri, Federico Michele Facca, and Christina Tziviskou.
Service Discovery with SWE-ET and DIANE – A Comparative Evaluation by Means
of Solutions to a Common Scenario. In ICEIS’07 [494], pages 430–437, volume 4,
2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/4-ICEIS2007_DIANE_CEFRIEL.pdf [accessed 2010-12-16]. See also [383–
386, 1644–1646, 1648, 1649].
1651. Ugur Kuter, Dana Nau, Marco Pistore, and Paolo Traverso. A Hierarchical Task-Network
Planner based on Symbolic Model Checking. In ICAPS’05 [315], pages 300–309, 2005. Fully
available at http://www.cs.umd.edu/˜nau/papers/kuter05hierarchical.pdf [accessed 2011-01-11]. CiteSeerx : 10.1.1.81.669.
1652. Vladimir Kvasnicka, Martin Pelikan, and Jiri Pospichal. Hill Climbing with Learning
(An Abstraction of Genetic Algorithm). Neural Network World – International Journal
on Non-Standard Computing and Artificial Intelligence, 6:773–796, 1996, Academy of Sci-
ences of the Czech Republic, Institute of Computer Science: Prague, Czech Republic.
Fully available at http://130.203.133.121:8080/viewdoc/summary?doi=10.1.1.
56.4888 [accessed 2009-08-21]. CiteSeerx : 10.1.1.56.4888.
1653. Chung Min Kwan and C. S. Chang. Application of Evolutionary Algorithm on a Trans-
portation Scheduling Problem – The Mass Rapid Transit. In CEC’05 [641], pages 987–994,
volume 2, 2005. Fully available at http://www.lania.mx/˜ccoello/EMOO/kwan05.
pdf.gz [accessed 2007-08-27].
1654. Stan C. Kwasny and John F. Buck, editors. Proceedings of the 21st/1993 ACM Confer-
ence on Computer Science (CSC’93), February 16–18, 1993, Indianapolis, IN, USA. ACM
Press: New York, NY, USA. isbn: 0-89791-558-5. Google Books ID: CJq9PwAACAAJ and
fHpQAAAAMAAJ. OCLC: 28196251, 84641266, 221446969, 257836504,
258290585, and 311876190.
1655. Jeffrey C. Lagarias, James A. Reeds, Margaret H. Wright, and Paul E. Wright. Conver-
gence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal
on Optimization (SIOPT), 9(1):112–147, 1998, Society for Industrial and Applied Mathe-
matics (SIAM): Philadelphia, PA, USA. doi: 10.1137/S1052623496303470. Fully available
at http://www.aoe.vt.edu/˜cliff/aoe5244/nelder_mead_2.pdf [accessed 2008-06-14].
1656. Chih-Chung Lai, Chuan-Kang Ting, and Ren-Song Ko. An Effective Genetic Algorithm for
Improving Wireless Sensor Network Lifetime. In GECCO’07-I [2699], pages 2260–2260, 2007.
doi: 10.1145/1276958.1277395. Poster Session: Real-world applications. See also [1657].
1657. Chih-Chung Lai, Chuan-Kang Ting, and Ren-Song Ko. An Effective Genetic Algorithm
to Improve Wireless Sensor Network Lifetime for Large-Scale Surveillance Applications. In
CEC’07 [1343], pages 3531–3538, 2007. doi: 10.1109/CEC.2007.4424930. See also [1656].
1658. Timothy Lai. Discovery of Understandable Math Formulas Using Genetic Programming.
In Genetic Algorithms and Genetic Programming at Stanford [1601], pages 118–127. Stan-
ford University Bookstore, Stanford University: Stanford, CA, USA, 2003. Fully available
at http://www.genetic-programming.org/sp2003/Lai.pdf [accessed 2010-11-21].
1659. John E. Laird, editor. Proceedings of 5th International Conference on Machine Learning
(ICML’88), June 12–14, 1988, University of Michigan: Ann Arbor, MI, USA. Morgan Kauf-
mann: San Mateo, CA, USA. isbn: 0-934613-64-8. Google Books ID: FldQAAAAMAAJ.
OCLC: 18017017, 82090777, 260147544, 441149879, 471061365, 475831474,
476283025, 499414195, and 610512831. GBV-Identification (PPN): 04232307X.
1660. John E. Laird, editor. Proceedings of the 5th International Workshop on Machine Learning
(ML’88), June 12–14, 1988, Ann Arbor, MI, USA. Morgan Kaufmann Publishers Inc.: San
Francisco, CA, USA. isbn: 0-934613-64-8.
1661. Gary B. Lamont and Eric M. Holloway. Military Network Security using Self Orga-
nized Multi-Agent Entangled Hierarchies. In GECCO’09-II [2343], pages 2559–2566, 2009.
doi: 10.1145/1570256.1570361.
1662. Jouni A. Lampinen and Ivan Zelinka. On Stagnation of the Differential Evolution Algorithm.
In MENDEL’00 [2106], pages 76–83, 2000. CiteSeerx : 10.1.1.35.7932.
1663. Ailsa H. Land and A. G. Doig. An Automatic Method of Solving Discrete Programming
Problems. Econometrica – Journal of the Econometric Society, 28(3):497–520, July 1960,
Wiley Interscience: Chichester, West Sussex, UK and Econometric Society. Fully available
at http://jmvidal.cse.sc.edu/library/land60a.pdf [accessed 2009-06-29].
1664. Edmund Landau. Handbuch der Lehre von der Verteilung der Primzahlen. B. G. Teub-
ner: Leipzig, Germany and AMS Chelsea Publishing: Providence, RI, USA, 1909–1953.
isbn: 0821826506. Google Books ID: 2ivxUiSogLgC, 81m4AAAAIAAJ, SFm4AAAAIAAJ,
fbyvOgAACAAJ, and u YAOwAACAAJ.
1665. Kevin J. Lang. Hill Climbing Beats Genetic Search on a Boolean Circuit Synthesis Problem
of Koza’s. In ICML’95 [2221], pages 340–343, 1995. Fully available at http://www.cs.
cornell.edu/Courses/cs472/2004fa/Materials/icml-GP.pdf and http://www.
genetic-programming.com/lang-ml95.ps [accessed 2009-06-16]. See also [1600].
1666. Christopher Gale Langton, editor. The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems (Artificial Life’87), September 21, 1987, Oppenheimer Study Center: Los Alamos, NM, USA, volume 6 in Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston,
MA, USA and Westview Press. isbn: 0201093464 and 0201093561. Google Books
ID: 5W05AgAACAAJ. OCLC: 257011165 and 260158809. Published in January 1989.
1667. Christopher Gale Langton, editor. Proceedings of the Workshop on Artificial Life (Artificial Life III), June 15–19, 1992, Santa Fe, NM, USA, volume 17 in Santa Fe Institute
Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.:
Boston, MA, USA and Westview Press. isbn: 0-201-62492-3 and 0-201-62494-X.
OCLC: 29548167, 475766246, 501682096, 624447801, and 633335938. GBV-
Identification (PPN): 181621835. Published in August 1993.
1668. Christopher Gale Langton, Charles E. Taylor, J. Doyne Farmer, and Steen Rasmussen, editors. Proceedings of the Workshop on Artificial Life (Artificial Life II), February 1990, volume X in Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0201525704 and
0201525712. Google Books ID: 7WZAGQAACAAJ, 7mZAGQAACAAJ, and wAx7AAAACAAJ.
OCLC: 23733587, 177349139, 180517573, 223313171, 257010417, and 258496439.
1669. William B. Langdon. Mapping Non-conventional Extensions of Genetic Programming.
In UC’06 [472], pages 166–180, 2006. doi: 10.1007/11839132 14. Fully available
at http://www.cs.ucl.ac.uk/staff/wlangdon/ftp/papers/wbl_uc2002.pdf [ac-
cessed 2010-11-21].
1670. William B. Langdon. A SIMD Interpreter for Genetic Programming on GPU Graphics Cards.
Computer Science Technical Report CSM-470, University of Essex, Departments of Mathe-
matical and Biological Sciences: Wivenhoe Park, Colchester, Essex, UK, July 3, 2007. Fully
available at http://cswww.essex.ac.uk/technical-reports/2007/csm_470.pdf
[accessed 2008-12-24]. issn: 1744-8050. See also [1671].
1671. William B. Langdon and Wolfgang Banzhaf. A SIMD Interpreter for Genetic Pro-
gramming on GPU Graphics Cards. In EuroGP’08 [2080], pages 73–85, 2008.
doi: 10.1007/978-3-540-78671-9 7. Fully available at http://www.cs.bham.ac.uk/
˜wbl/biblio/gp-html/langdon_2008_eurogp.html and http://www.cs.ucl.ac.
uk/staff/W.Langdon/ftp/papers/langdon_2008_eurogp.pdf [accessed 2009-08-04]. See
also [1670].
1672. William B. Langdon and Riccardo Poli. Fitness Causes Bloat: Mutation. In Eu-
roGP’98 [210], pages 37–48, 1998. Fully available at ftp://ftp.cwi.nl/pub/W.
B.Langdon/papers/WBL.euro98_bloatm.ps.gz and http://citeseer.ist.psu.
edu/langdon98fitness.html [accessed 2007-09-09]. CiteSeerx : 10.1.1.40.8288.
1673. William B. Langdon and Riccardo Poli. Foundations of Genetic Programming. Springer-
Verlag GmbH: Berlin, Germany, January 2002. isbn: 3-540-42451-2. Partly avail-
able at http://www.cs.ucl.ac.uk/staff/W.Langdon/FOGP/ [accessed 2008-02-15]. Google
Books ID: PZli0s95Jd0C. OCLC: 47717957, 248023954, and 491544616. Library of
Congress Control Number (LCCN): 2001049394. GBV-Identification (PPN): 334151872.
LC Classification: QA76.623 .L35 2002.
1674. William B. Langdon, Terence Soule, Riccardo Poli, and James A. Foster. The Evolution
of Size and Shape. In Advances in Genetic Programming [2569], Chapter 8, pages 163–190.
MIT Press: Cambridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/
˜wbl/aigp3/ch08.pdf and http://www.cs.bham.ac.uk/˜wbl/aigp3/ch08.ps.gz
[accessed 2011-11-23]. CiteSeerx : 10.1.1.59.1581.
1675. William B. Langdon, Erick Cantú-Paz, Keith E. Mathias, Rajkumar Roy, David Davis, Ric-
cardo Poli, Hari Balakrishnan, Vasant Honavar, Günter Rudolph, Joachim Wegener, Larry
Bull, Mitchell A. Potter, Alan C. Schultz, Julian Francis Miller, Edmund K. Burke, and
Natasa Jonoska, editors. Proceedings of the Genetic and Evolutionary Computation Confer-
ence (GECCO’02), July 9–13, 2002, Roosevelt Hotel: New York, NY, USA. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-878-8, 1-55860-881-8,
and 1-55860-903-2. Google Books ID: 7YZQAAAAMAAJ, N3kCAAAACAAJ, QDq-IgAACAAJ,
W28dHQAACAAJ, and tBThPAAACAAJ. A joint meeting of the eleventh International Con-
ference on Genetic Algorithms (ICGA-2002) and the seventh Annual Genetic Programming
Conference (GP-2002). See also [224, 478, 1790].
1676. Pat Langley, editor. Proceedings of the 17th International Conference on Machine Learn-
ing (ICML’00), June 29–July 2, 2000, Stanford University: Stanford, CA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-707-2.
1677. Pier Luca Lanzi and Rick L. Riolo. A Roadmap to the Last Decade of Learning
Classifier System Research (From 1989 to 1999). In IWLCS’00 [1678], page 33, 2000.
doi: 10.1007/3-540-45027-0 2.
1678. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Advances in Learning
Classifier Systems, Revised Papers of the Third International Workshop on Learning Classi-
fier Systems (IWLCS’00), September 15–16, 2000, Paris, France, volume 1996/2001 in Lec-
ture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-42437-7. OCLC: 441809319. coden: LNCSD9. A joint workshop of the sixth International Conference on Simulation of Adaptive Behaviour (SAB2000) and the sixth International Conference on Parallel Problem Solving from Nature (PPSN VI), published in 2001.
1679. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Advances in Learn-
ing Classifier Systems, Revised Papers of the Fourth International Workshop on Learn-
ing Classifier Systems (IWLCS’01), July 7–8, 2001, San Francisco, CA, USA, volume
2321/2002 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48104-4.
isbn: 3-540-43793-2. Google Books ID: LX6ky1cIgXAC. Held during GECCO-2001, pub-
lished in 2002. See also [2570].
1680. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Revised Papers of
the 5th International Workshop on Learning Classifier Systems (IWLCS’02), September 7–8,
2002, Granada, Spain, volume 2661/2003 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-20544-6. Published in 2003.
1681. Pier Luca Lanzi, Daniele Loiacono, Stewart W. Wilson, and David Edward Goldberg. Gen-
eralization in the XCSF Classifier System: Analysis, Improvement, and Extension. Evo-
lutionary Computation, 15(2):133–168, Summer 2007, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.2007.15.2.133. CiteSeerx : 10.1.1.114.7870. PubMed ID: 17535137.
1682. Patrick LaRoche and A. Nur Zincir-Heywood. 802.11 Network Intrusion Detec-
tion Using Genetic Programming. In GECCO’05 WS [2339], pages 170–171,
2005. doi: 10.1145/1102256.1102295. Fully available at http://larocheonline.ca/
˜plaroche/papers/gecco-laroche-heywood.pdf [accessed 2008-06-16].
1683. Pedro Larrañaga and José Antonio Lozano, editors. Estimation of Distribution Algorithms
– A New Tool for Evolutionary Computation, volume 2 in Genetic and Evolutionary Com-
putation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA,
USA, 2001. isbn: 0-7923-7466-5. Google Books ID: kyU74gxp1rsC and o0llxS4u93wC.
OCLC: 47996547.
1684. Christian W. G. Lasarczyk and Wolfgang Banzhaf. An Algorithmic Chemistry for Genetic
Programming. In EuroGP’05 [1518], pages 1–12, 2005. doi: 10.1007/978-3-540-31989-4 1.
Fully available at http://ls11-www.cs.uni-dortmund.de/downloads/papers/
LaBa05.pdf and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/eurogp_
LasarczykB05.html [accessed 2010-08-01]. CiteSeerx : 10.1.1.131.4314.
1685. Christian W. G. Lasarczyk and Wolfgang Banzhaf. Total Synthesis of Algo-
rithmic Chemistries. In GECCO’05 [304], pages 1635–1640, volume 2, 2005.
doi: 10.1145/1068009.1068285. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gecco2005/docs/p1635.pdf [accessed 2009-08-02].
1686. Jörg Lässig and Karl Heinz Hoffmann. Threshold-selecting Strategy for Best Possible
Ground State Detection with Genetic Algorithms. Physical Review E, 79(4):046702 (8 pages),
April 2009, American Physical Society: College Park, MD, USA.
doi: 10.1103/PhysRevE.79.046702. See also [1687].
1687. Jörg Lässig, Karl Heinz Hoffmann, and Mihaela Enăchescu. Threshold Selecting: Best
Possible Probability Distribution for Crossover Selection in Genetic Algorithms. In
GECCO’08 [1519], pages 2181–2185, 2008. doi: 10.1145/1388969.1389044. See also [1686].
1688. Michael Lässig and Angelo Valleriani, editors. Biological Evolution and Statistical Physics,
volume 585/2002 in Lecture Notes in Physics. Springer-Verlag: Berlin/Heidelberg, 2002.
doi: 10.1007/3-540-45692-9. isbn: 3-540-43188-8. Google Books ID: b7SL7kc5N7oC.
issn: 0075-8450.
1689. Antonio LaTorre, José Marı́a Peña, Santiago Muelas, and Manuel Zaforas. Hybrid Evolu-
tionary Algorithms for Large Scale Continuous Problems. In GECCO’09-I [2342], pages
1863–1864, 2009. doi: 10.1145/1569901.1570205.
1690. Hui Keng Lau, Jonathan Timmis, and Ian Bate. Anomaly Detection Inspired by
Immune Network Theory: A Proposal. In CEC’09 [1350], pages 3045–3051, 2009.
doi: 10.1109/CEC.2009.4983328. Fully available at ftp://ftp.cs.york.ac.uk/papers/
rtspapers/R:Lau:2009.pdf [accessed 2009-09-05]. INSPEC Accession Number: 10688923.
1691. Marco Laumanns, Eckart Zitzler, and Lothar Thiele. A Unified Model for Multi-Objective
Evolutionary Algorithms with Elitism. In CEC’00 [1333], pages 46–53, volume 1, 2000.
doi: 10.1109/CEC.2000.870274. Fully available at http://www.lania.mx/˜ccoello/
EMOO/laumanns00.ps.gz [accessed 2010-08-01]. CiteSeerx : 10.1.1.36.4389. INSPEC Ac-
cession Number: 6734602.
1692. Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. On the Convergence
and Diversity-Preservation Properties of Multi-Objective Evolutionary Algorithms. TIK-
Report 108, Kanpur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical
Engineering, Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India,
June 13, 2001. Fully available at http://e-collection.ethbib.ethz.ch/view/eth:
24717 [accessed 2009-07-09]. CiteSeerx : 10.1.1.28.6480.
1693. Gregory F. Lawler. Introduction to Stochastic Processes. Chapman & Hall: London, UK and
CRC Press, Inc.: Boca Raton, FL, USA, 2nd edition, July 1995. isbn: 0412995115 and
158488651X. Google Books ID: 1L8uPQAACAAJ, LATJdzBlajUC, and ypD8ZFqdPlUC.
OCLC: 31288133, 64084881, and 266086758.
1694. Eugène L. Lawler, Jan Karel Lenstra, A. H. G. Rinnooy-Kan, and David B. Shmoys.
The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley-
Interscience Series in Discrete Mathematics and Optimization. Wiley Interscience: Chich-
ester, West Sussex, UK, September 1985. isbn: 0-471-90413-9. OCLC: 11756468,
260186267, 468006920, 488801427, 611882247, and 631997749. Library of
Congress Control Number (LCCN): 85003158. GBV-Identification (PPN): 011848790,
02528360X, 042459834, 042637112, 043364950, 078586313, 156527677, 175957169,
and 190388226. LC Classification: QA164 .T73 1985.
1695. Mark Lawrence and Colin Wilsdon, editors. Operational Research Tutorial Papers – Annual
Conference of the Operational Research Society, 1995, Canterbury, Kent, UK. Operational
Research Society: Birmingham, UK, Hoskyns Cap Gemini Sogeti, and Stockton Press: Hamp-
shire, UK. isbn: 0-903440-15-6. Google Books ID: pLkmAQAACAAJ. OCLC: 222984071
and 249133773.
1696. Michael Lawrence. Multiobjective Genetic Algorithms for Materialized View Se-
lection in OLAP Data Warehouses. In GECCO’06 [1516], pages 669–706, 2006.
doi: 10.1145/1143997.1144120.
1697. Steve Lawrence and C. Lee Giles. Overfitting and Neural Networks: Conjugate Gradient
and Backpropagation. In IJCNN’00 [82], pages 1114–1119, volume 1, 2000. doi: 10.1109/I-
JCNN.2000.857823. CiteSeerx : 10.1.1.27.9735. INSPEC Accession Number: 6696828.
1698. Antonio Lazcano and Stanley L. Miller. How long did it take for Life to begin and evolve
to Cyanobacteria? Journal of Molecular Evolution, 39(6):546–554, December 1994, Springer
New York: New York, NY, USA. doi: 10.1007/BF00160399. PubMed ID: 11536653.
1699. A. C. G. Verdugo Lazo and P. N. Rathie. On the Entropy of Continuous Probability Dis-
tributions. IEEE Transactions on Information Theory (TIT), 24(1):120–122, January 1978,
Institute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA.
1700. Lucien Marie Le Cam and Jerzy Neyman, editors. Proceedings of the Fifth Berkeley Sympo-
sium on Mathematical Statistics and Probability (June 21-July 18, 1965 and December 27,
1965-January 7, 1966), 1967, University of California, Statistical Laboratory: Berkeley, CA,
USA. University of California Press: Berkeley, CA, USA. Fully available at http://
projecteuclid.org/euclid.bsmsp/1200513453, http://projecteuclid.
org/euclid.bsmsp/1200513615, http://projecteuclid.org/euclid.bsmsp/
1200513981, and http://ucblibrary4.berkeley.edu:8088/xdlib//math/ucb/
mets/cumath_9_1_00019737.xml [accessed 2009-10-20]. Google Books ID: IX9WAAAAMAAJ,
LVI4AAAAIAAJ, NA-3QQAACAAJ, and f7xWAAAAMAAJ. OCLC: 5772953, 25234711,
25234721, 25234803, 25234809, 36997875, 71753794, 221157481, 223463967, and
300992815. issn: 0097-0433.
1701. Joshua Lederberg and Alexa T. McCray. ’Ome Sweet ’Omics – A Genealogical Treasury of
Words. The Scientist – Magazine of the Life Sciences, 15(7):8, April 2001. Fully available
at http://lhncbc.nlm.nih.gov/lhc/docs/published/2001/pub2001047.pdf [ac-
cessed 2009-06-28].
1702. Chang-Yong Lee and Xin Yao. Evolutionary Programming using the Mutations based
on the Lévy Probability Distribution. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 8(1):1–13, February 2004, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.816583. CiteSeerx : 10.1.1.6.2145. INSPEC Accession Num-
ber: 8003613.
1703. Jack Yiu-Bun Lee and P. C. Wong. The Effect of Function Noise on GP Efficiency. In
Progress in Evolutionary Computation, AI’93 (Melbourne, Victoria, Australia, 1993-11-16)
and AI’94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on Evolutionary Compu-
tation, Selected Papers [3023], pages 1–16, 1993–1994. doi: 10.1007/3-540-60154-6 43.
1704. Jongsoo Lee and Prabhat Hajela. Parallel Genetic Algorithm Implementation in Multidisci-
plinary Rotor Blade Design. Technical Report ADA305771, Rensselaer Polytechnic Institute,
Aeronautical Engineering and Mechanics Department, Mechanical Engineering: Troy, NY,
USA, Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, November 1995.
Fully available at http://handle.dtic.mil/100.2/ADA305771 [accessed 2009-06-17]. See
also [1705].
1705. Jongsoo Lee and Prabhat Hajela. Parallel Genetic Algorithm Implementation in Multi-
disciplinary Rotor Blade Design. Journal of Aircraft, 33(5):962–966, September–October
1996, American Institute of Aeronautics and Astronautics (AIAA): Reston, VA, USA.
doi: 10.2514/3.47042. See also [1704].
1706. Minsoo Lee and Joachim Hammer. Speeding up Materialized View Selection in Data Ware-
house using a Randomized Algorithm. International Journal of Cooperative Information
Systems (IJCIS), 10(3):327–353, September 2001, World Scientific Publishing Co.: Singa-
pore. doi: 10.1142/S0218843001000370. Fully available at http://www.cise.ufl.edu/
˜jhammer/publications/IJCIS-DW2001/minsoolee.doc [accessed 2011-04-09].
1707. S. Lee, S. Soak, K. Kim, H. Park, and M. Jeon. Statistical Properties Analysis of Real
World Tournament Selection in Genetic Algorithms. Applied Intelligence – The Interna-
tional Journal of Artificial Intelligence, Neural Networks, and Complex Problem-Solving
Technologies, 28(2):195–205, April 2008, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s10489-007-0062-2.
1708. Shane Legg, Marcus Hutter, and Akshat Kumar. Tournament versus Fitness Uniform Selec-
tion. Technical Report IDSIA-04-04, Dalle Molle Institute for Artificial Intelligence (IDSIA),
University of Lugano, Faculty of Informatics / University of Applied Sciences of Southern
Switzerland (SUPSI), Department of Innovative Technologies: Manno-Lugano, Switzerland,
March 4, 2004. Fully available at http://www.idsia.ch/idsiareport/IDSIA-04-04.
pdf [accessed 2011-04-29]. See also [1313, 1709].
1709. Shane Legg, Marcus Hutter, and Akshat Kumar. Tournament versus Fitness Uniform Selec-
tion. In CEC’04 [1369], pages 2144–2151, 2004. doi: 10.1109/CEC.2004.1331162. Fully avail-
able at http://www.hutter1.net/ai/fussexp.pdf, http://www.hutter1.net/
ai/fussexp.ps, and http://www.hutter1.net/ai/fussexp.zip [accessed 2011-04-29].
CiteSeerx : 10.1.1.71.2663. arXiv ID: cs/0403038v1. INSPEC Accession Num-
ber: 8229180. See also [1312, 1313, 1708].
1710. Joel Lehman and Kenneth Owen Stanley. Exploiting Open-Endedness to Solve Prob-
lems Through the Search for Novelty. In Artificial Life XI [446], pages 329–336,
2008. Fully available at http://alifexi.alife.org/papers/ALIFExi_pp329-336.
pdf and http://eplex.cs.ucf.edu/papers/lehman_alife08.pdf [accessed 2011-03-01].
1711. Jingsheng Lei, JingTao Yao, and Qingfu Zhang, editors. Proceedings of the Third In-
ternational Conference on Advances in Natural Computation (ICNC’07), August 24–27,
2007, Hǎikǒu, Hǎinán, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
isbn: 0-7695-2875-9. Google Books ID: oAIEHwAACAAJ. OCLC: 166273050 and
181014549. Held jointly with the 4th International Conference on Fuzzy Systems and
Knowledge Discovery (FSKD 2007).
1712. Jens Lemcke, Matthias Kaiser, Christian Kubczak, Tiziana Margaria, and Björn Knuth. Ad-
vances in Solving the Mediator Scenario with jABC and jABC/GEM. In SWSC’08, Seman-
tic Web Services Challenge: Proceedings of the 2008 Workshops [2166, 2590], pages 89–102.
Stanford University, Computer Science Department, Stanford Logic Group: Stanford, CA,
USA, 2008. Fully available at http://logic.stanford.edu/reports/LG-2009-01.
pdf [accessed 2010-12-16]. Part of [2166]. See also [1626–1632, 1831, 1832].
1713. James G. Lennox. Aristotle’s Philosophy of Biology: Studies in the Origins of Life Science,
Cambridge Studies in Philosophy and Biology. Cambridge University Press: Cambridge,
UK, April 2001. isbn: 0521650275, 0521659760, and 978-0521650274. Google Books
ID: CqmXeBDkb6oC and IijTXV3 ZLMC. OCLC: 43694261 and 45592829. LC Classifica-
tion: QH331 .L528 2001.
1714. Henry Leung, editor. Proceedings of the Sixth IASTED International Conference on Artificial
Intelligence and Soft Computing (ASC’02), July 17–19, 2002, Banff, AB, Canada. ACTA
Press: Calgary, AB, Canada. isbn: 0-88986-346-6. Google Books ID: Zd3NPQAACAAJ.
OCLC: 52161283 and 249212159.
1715. Kwong Sak Leung, Kin Hong Lee, and Sin Man Cheang. Genetic Parallel Programming –
Evolving Linear Machine Codes on a Multiple-ALU processor. In ICAIET’02 [3004], pages
207–213, 2002.
1716. Joel R. Levin. What If There Were No More Bickering about Statistical Significance Tests?
Research in the Schools (RITS), 5(2):43–53, 1998, Tony Onwuegbuzie and John Slate, ed-
itors. Fully available at http://www.personal.psu.edu/users/d/m/dmr/sigtest/
6mspdf.pdf [accessed 2008-08-15].
1717. Andrew Lewis, Gerhard Weis, Marcus Randall, Amir Galehdar, and David Thiel. Optimising
Efficiency and Gain of Small Meander Line RFID Antennas using Ant Colony System. In
CEC’09 [1350], pages 1486–1492, 2009. doi: 10.1109/CEC.2009.4983118. INSPEC Accession
Number: 10688713.
1718. Peter R. Lewis, Paul Marrow, and Xin Yao. Evolutionary Market Agents and Heterogeneous
Service Providers: Achieving Desired Resource Allocations. In CEC’09 [1350], pages 904–910,
2009. doi: 10.1109/CEC.2009.4983041. INSPEC Accession Number: 10688568.
1719. Robert Michael Lewis, Virginia Joanne Torczon, and Michael W. Trosset. Direct Search Meth-
ods: Then and Now. Technical Report NASA/CR-2000-210125, NASA Langley Research
Center, Institute for Computer Applications in Science and Engineering (ICASE): Hampton,
VA, USA, May 2000. Fully available at http://www.cs.wm.edu/˜va/research/jcam.
pdf [accessed 2010-12-25]. CiteSeerx : 10.1.1.35.5338. ICASE Report No. 2000-26. See also
[1720].
1720. Robert Michael Lewis, Virginia Joanne Torczon, and Michael W. Trosset. Direct Search
Methods: Then and Now. Journal of Computational and Applied Mathematics, 124(1-
2):191–207, December 2000, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/S0377-0427(00)00423-4. See also [1719].
1721. Jin Li. FGP: A Genetic Programming Based Tool for Financial Forecasting. PhD thesis,
University of Essex: Wivenhoe Park, Colchester, Essex, UK, October 6, 2001. See also
[1722].
1722. Jin Li, Xiaoli Li, and Xin Yao. Cost-Sensitive Classification with Genetic Programming. In
CEC’05 [641], pages 2114–2121, 2005. Fully available at http://www.cs.bham.ac.uk/
˜xin/papers/LiLiYaoCEC05.pdf [accessed 2010-06-25]. CiteSeerx : 10.1.1.66.7637. See
also [1721].
1723. Leon Y. O. Li. Vehicle Routeing for Winter Gritting. PhD thesis, Lancaster University,
Department of Management Science: Lancaster, Lancashire, UK, 1992.
1724. Leon Y. O. Li and Richard W. Eglese. An Interactive Algorithm for Vehicle Routeing for
Winter Gritting. The Journal of the Operational Research Society (JORS), 47(2):217–228,
February 1996, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and
Operational Research Society: Birmingham, UK. JSTOR Stable ID: 2584343.
1725. Wei Li. Using Genetic Algorithm for Network Intrusion Detection. In Proceedings of
the United States Department of Energy Cyber Security Group 2004 Training Confer-
ence [1492], 2004. Fully available at http://www.security.cse.msstate.edu/docs/
Publications/wli/DOECSG2004.pdf [accessed 2009-05-11].
1726. Xiang Li and Vic Ciesielski. Using Loops in Genetic Programming for a Two Class Bi-
nary Image Classification Problem. In AI’04 [2871], pages 898–909, 2004. Fully available
at http://goanna.cs.rmit.edu.au/˜vc/papers/aust-ai04.pdf [accessed 2010-11-20].
CiteSeerx : 10.1.1.87.3912.
1727. Xiaodong Li. Niching without Niching Parameters: Particle Swarm Optimization
Using a Ring Topology. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 14(1):150–169, February 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2009.2026270. Fully available at http://goanna.cs.rmit.edu.au/
˜xiaodong/publications/rpso-ieee-tec.pdf [accessed 2011-11-10]. INSPEC Accession
Number: 11088362.
1728. Xiaodong Li, Jürgen Branke, and Tim Blackwell. Particle Swarm with Speciation and
Adaptation in a Dynamic Environment. In GECCO’06 [1516], pages 51–58, 2006.
doi: 10.1145/1143997.1144005.
1729. Xiaodong Li, Michael Kirley, Mengjie Zhang, David Green, Vic Ciesielski, Hussein A.
Abbass, Zbigniew Michalewicz, Tim Hendtlass, Kalyanmoy Deb, Kay Chen Tan, Jürgen
Branke, and Yuhui Shi, editors. Proceedings of the Seventh International Conference on
Simulated Evolution And Learning (SEAL’08), December 7–10, 2008, Melbourne, VIC,
Australia, volume 5361/2008 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-89694-4. isbn: 3-540-89693-7. Library of Congress Control Number
(LCCN): 2008939923.
1730. Yong Li and Shihua Gong. Dynamic Ant Colony Optimisation for TSP. The International
Journal of Advanced Manufacturing Technology, 22(7-8):528–533, November 2003, Springer-
Verlag London Limited: London, UK. doi: 10.1007/s00170-002-1478-9.
1731. Yuefeng Li and Vijay V. Raghavan, editors. Proceedings of the 2007 IEEE/WIC/ACM In-
ternational Conferences on Web Intelligence and Intelligent Agent Technology Workshops
(WI/IAT Workshops’07), November 2–5, 2007, Silicon Valley, CA, USA. IEEE Com-
puter Society Press: Los Alamitos, CA, USA. isbn: 0-7695-3028-1. Google Books
ID: E3ALMwAACAAJ, TTBEPwAACAAJ, and wWCDRAAACAAJ. Library of Congress Control
Number (LCCN): 2007933794. Order No.: P3028.
1732. J. J. Liang, Thomas Philip Runarsson, Efrén Mezura-Montes, Maurice Clerc, Ponnuthu-
rai Nagaratnam Suganthan, Carlos Artemio Coello Coello, and Kalyanmoy Deb. Problem
Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-
Parameter Optimization. Technical Report, Nanyang Technological University (NTU): Sin-
gapore, September 18, 2006. Fully available at http://www.lania.mx/˜emezura/util/
files/tr_cec06.pdf [accessed 2009-10-26]. Partly available at http://www3.ntu.edu.sg/
home/EPNSugan/index_files/CEC-06/CEC06.htm [accessed 2009-10-26].
1733. Suiliong Liang, A. Nur Zincir-Heywood, and Malcom Ian Heywood. The Effect of Rout-
ing under Local Information using a Social Insect Metaphor. In CEC’02 [944], pages
1438–1443, 2002. Fully available at http://users.cs.dal.ca/˜mheywood/X-files/
Publications/01004454.pdf [accessed 2008-07-26]. See also [1734].
1734. Suiliong Liang, A. Nur Zincir-Heywood, and Malcom Ian Heywood. Adding More In-
telligence to the Network Routing Problem: AntNet and GA-Agents. Applied Soft
Computing, 6(3):244–257, March 2006, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2005.01.005. See also [1733].
1735. Weifa Liang, Hui Wang, and Maria E. Orlowska. Materialized View Selection under the
Maintenance Time Constraint. Data & Knowledge Engineering, 37(2):203–216, May 2001,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Sci-
entific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0169-023X(01)00007-6.
CiteSeerx : 10.1.1.90.9530.
1736. Pierre Liardet, Pierre Collet, Cyril Fonlupt, Evelyne Lutton, and Marc Schoenauer, editors.
Proceedings of the 6th International Conference on Artificial Evolution, Evolution Artificielle
(EA’03), October 27–30, 2003, Marseilles, France, volume 2936 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-21523-9. Published
in 2003.
1737. Ching-Fang Liaw. A Hybrid Genetic Algorithm for the Open Shop Scheduling Problem. Eu-
ropean Journal of Operational Research (EJOR), 124(1):28–42, July 1, 2000, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(99)00168-X.
1738. Vibeke Libby, editor. Proceedings of Data Structures and Target Classification, the SPIE’s In-
ternational Symposium on Optical Engineering and Photonics in Aerospace Sensing, April 1–
2, 1991, Orlando, FL, USA, volume 1470 in The Proceedings of SPIE. Society of Photographic
Instrumentation Engineers (SPIE): Bellingham, WA, USA. isbn: 0819405795. Google Books
ID: Ne1RAAAAMAAJ. OCLC: 24461715.
1739. Gunar E. Liepins and Michael D. Vose. Deceptiveness and Genetic Algorithm Dynamics. In
FOGA’90 [2562], pages 36–50, 1990. Fully available at http://www.osti.gov/bridge/
servlets/purl/6445602-CfqU6M/ [accessed 2007-11-05].
1740. Pedro Lima, editor. Robotic Soccer. I-Tech Education and Publishing: Vienna, Aus-
tria, 2007. isbn: 3-902613-07-6. Fully available at http://s.i-techonline.
com/Book/Robotic-Soccer/ISBN978-3-902613-21-9.html [accessed 2008-04-23]. Google
Books ID: 5lAuQwAACAAJ. OCLC: 441015574.
1741. JiGuan G. Lin. On Min-Norm and Min-Max Methods of Multi-Objective Optimiza-
tion. Mathematical Programming, 103(1):1–33, May 2005, Springer-Verlag: Berlin/Heidel-
berg. doi: 10.1007/s10107-003-0462-y.
1742. Charles X. Ling. Overfitting and Generalization in Learning Discrete Patterns. Neu-
rocomputing, 8(3):341–347, August 1995, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0925-2312(95)00050-G. CiteSeerx : 10.1.1.17.3198. Optimization and Com-
binatorics, Part I-III.
1743. Marc Lipsitch. Adaptation on Rugged Landscapes Generated by Iterated Local Interactions
of Neighboring Genes. In ICGA’91 [254], pages 128–135, 1991.
1744. Proceedings of the 15th Portuguese Conference on Artificial Intelligence (EPIA’11), Octo-
ber 10–13, 2011, Lisbon, Portugal. Partly available at http://epia2011.appia.pt/
[accessed 2011-04-24].
1745. Duixian Liu and Brian Gaucher, editors. 2006 IEEE International Workshop on Antenna
Technology: Small Antennas and Novel Metamaterials (iWAT’06), March 6–8, 2006, Crowne
Plaza Hotel: White Plains, NY, USA. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-9443-7. Google Books ID: IxvhAAAACAAJ. OCLC: 122521207,
289019234, 326736309, and 423939179. Catalogue no.: 06EX1191.
1746. Fang Liu and Le-Ping Lin. Unsupervised Anomaly Detection Based on an Evolutionary
Artificial Immune Network. In EvoWorkshops’05 [2340], pages 166–174, 2005.
1747. Pu Liu, Francis C. M. Lau, Michael J. Lewis, and Cho-li Wang. A New Asynchronous
Parallel Evolutionary Algorithm for Function Optimization. In PPSN VII [1870], pages
401–410, 2002. doi: 10.1007/3-540-45712-7 39.
1748. Tie-Yan Liu, Yiming Yang, Hao Wan, Hua-Jun Zeng, Zheng Chen, and Wei-Ying Ma. Sup-
port Vector Machines Classification with a Very Large-Scale Taxonomy. ACM SIGKDD Ex-
plorations Newsletter, 7(1):36–43, June 2005, Association for Computing Machinery (ACM):
New York, NY, USA. doi: 10.1145/1089815.1089821. CiteSeerx : 10.1.1.72.1506.
1749. Yongguo Liu, Kefei Chen, Xiaofeng Liao, and Wei Zhang. A Genetic Clustering
Method for Intrusion Detection. Pattern Recognition, 37(5):927–942, May 2004, El-
sevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.patcog.2003.09.011.
1750. Silvana Livramento, Arnaldo V. Moura, Flávio K. Miyazawa, Mário M. Harada, and Ed-
uardo Reck Miranda. A Genetic Algorithm for Telecommunication Network Design. In
EvoWorkshops’04 [2254], pages 140–149, 2004.
1751. Xavier Llorà and Kumara Sastry. Fast Rule Matching for Learning Classifier Systems via Vec-
tor Instructions. In GECCO’06 [1516], pages 1513–1520, 2006. doi: 10.1145/1143997.1144244.
See also [1752].
1752. Xavier Llorà and Kumara Sastry. Fast Rule Matching for Learning Classifier Systems via
Vector Instructions. IlliGAL Report 2006001, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, Univer-
sity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2006. Fully
available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2006001.pdf
[accessed 2007-09-12]. See also [1751].
1753. Fernando G. Lobo, Cláudio F. Lima, and Zbigniew Michalewicz, editors. Parameter
Setting in Evolutionary Algorithms, volume 54 in Studies in Computational Intelli-
gence. Springer-Verlag: Berlin/Heidelberg, March 2007. doi: 10.1007/978-3-540-69432-8.
isbn: 3-540-69431-5 and 3-540-69432-3. Google Books ID: WgMmGQAACAAJ,
vfjLsGBmi7kC, and voAuHwAACAAJ. Library of Congress Control Number
(LCCN): 2006939345.
1754. José Lobo, John H. Miller, and Walter Fontana. Neutrality in Technological Landscapes.
Working papers, Santa Fe Institute: Santa Fé, NM, USA, January 26, 2004. Fully avail-
able at http://fontana.med.harvard.edu/www/Documents/WF/Papers/tech.
neut.pdf, http://time.dufe.edu.cn/jingjiwencong/waiwenziliao/neut.
pdf, and http://zia.hss.cmu.edu/miller/papers/neut.pdf [accessed 2010-12-04].
CiteSeerx : 10.1.1.61.9296.
1755. Marco Locatelli. A Note on the Griewank Test Function. Journal of Global Opti-
mization, 25(2):169–174, February 2003, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1021956306041.
1756. Darrell F. Lochtefeld. Multi-Objectivization in Genetic Algorithms. PhD thesis, Wright
State University (WSU), School of Graduate Studies: Dayton, OH, USA, May 26, 2011,
Frank William Ciarallo, Supervisor, Raymond R. Hill, Yan Liu, W. Paul Murdock, and
Mateen M. Rizki, Committee members. Fully available at http://etd.ohiolink.edu/
send-pdf.cgi/Lochtefeld%20Darrell%20F.pdf?wright1308765665 [accessed 2011-12-
04].
1757. Darrell F. Lochtefeld and Frank William Ciarallo. Deterministic Helper Objective Sequence
Applied to the Job-Shop Scheduling Problem. In GECCO’10 [30], pages 431–438, 2010.
doi: 10.1145/1830483.1830566.
1758. Darrell F. Lochtefeld and Frank William Ciarallo. Helper-Objective Optimization Strategies
for the Job-Shop Scheduling Problem. Applied Soft Computing, 11(6):4161–4174, Septem-
ber 2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2011.03.007.
1759. A. Geoff Lockett and Gerd Islei, editors. Proceedings of the 8th International Confer-
ence on Multiple Criteria Decision Making: Improving Decision Making in Organizations
(MCDM’88), August 21–26, 1988, University of Manchester, Manchester Business School:
Manchester, UK, Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 0387517952 and 3540517952. OCLC: 20672173, 230998532,
311213400, 613307351, and 634248012. Published in December 1989.
1760. Reinhard Lohmann. Structure Evolution and Incomplete Induction. Biological Cybernetics,
69(4):319–326, August 1993, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BF00203128.
1761. Reinhard Lohmann. Structure Evolution and Neural Systems. In Dynamic, Genetic, and
Chaotic Programming: The Sixth-Generation [2559], pages 395–411. Wiley Interscience:
Chichester, West Sussex, UK, 1992.
1762. Jason D. Lohn, editor. The 2003 NASA/DoD Conference on Evolvable Hardware (EH’03),
June 9–11, 2003, Chicago, IL, USA. IEEE Computer Society: Washington, DC, USA.
1763. Jason D. Lohn and Silvano P. Colombano. Automated Analog Circuit Synthesis using a
Linear Representation. In ICES’98 [2509], pages 125–133, 1999. Fully available at http://
ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16]. See also [1764].
1764. Jason D. Lohn and Silvano P. Colombano. A Circuit Representation Technique for Automated
Circuit Design. IEEE Transactions on Evolutionary Computation (IEEE-EC), 3(3):205–219,
September 1999, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.788491.
Fully available at http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2010-08-
27]. CiteSeerx : 10.1.1.6.7507. INSPEC Accession Number: 6366632. See also [1763, 1765,
1766].
1765. Jason D. Lohn, Silvano P. Colombano, Garyl L. Haith, and Dimitris Stassinopoulos. A
Parallel Genetic Algorithm for Automated Electronic Circuit Design. In CAS’00 [2001], 2000.
Fully available at http://ic.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-
16]. See also [1764, 1766].
1766. Jason D. Lohn, Garyl L. Haith, Silvano P. Colombano, and Dimitris Stassinopoulos. To-
wards Evolving Electronic Circuits for Autonomous Space Applications. In Proceedings of
the IEEE Aerospace Conference [1322], 2000. doi: 10.1109/AERO.2000.878523. Fully avail-
able at http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16]. INSPEC
Accession Number: 6795690. See also [1764, 1765].
1767. Jason D. Lohn, William F. Kraus, and Derek Linden. Evolutionary Optimization of a Quadri-
filar Helical Antenna. In IEEE AP-S International Symposium and USNC/URSI National Ra-
dio Science Meeting [1336], 2002. Fully available at http://ti.arc.nasa.gov/people/
jlohn/Papers/aps2002.pdf [accessed 2009-06-17]. See also [1768, 1770].
1768. Jason D. Lohn, Derek Linden, Gregory S. Hornby, William F. Kraus, and Adán Rodrı́guez-
Arroyo. Evolutionary Design of an X-Band Antenna for NASA’s Space Technology 5 Mission.
In EH’03 [1762], pages 155–163, 2003. doi: 10.1109/EH.2003.1217660. See also [1767, 1770].
1769. Jason D. Lohn, Gregory S. Hornby, and Derek Linden. An Evolved Antenna for Deploy-
ment on NASA’s Space Technology 5 Mission. In GPTP’04 [2089], pages 301–315, 2004.
doi: 10.1007/b101112. Fully available at http://ic.arc.nasa.gov/people/hornby/
papers/lohn_gptp04.ps.gz [accessed 2007-08-17].
1770. Jason D. Lohn, Derek Linden, Gregory S. Hornby, William F. Kraus, Adán Rodrı́guez-Arroyo,
and Stephen E. Seufert. Evolutionary Design of an X-Band Antenna for NASA’s Space Tech-
nology 5 Mission. In 2004 IEEE Antennas and Propagation Society (A-P-S) International
Symposium and USNC/URSI National Radio Science Meeting [2121], pages 2313–2316, vol-
ume 3, 2004. doi: 10.1109/APS.2004.1331834. Fully available at http://ti.arc.nasa.
gov/projects/esg/publications/lohn_papers/aps2004.pdf [accessed 2009-06-17]. See
also [1767, 1768].
1771. Proceedings of The Annual Conference of the South Eastern Accounting Group (SEAG) 2003,
September 8, 2003, London Metropolitan University: London, UK.
1772. Heitor Silverio Lopes and Wagner R. Weinert. EGIPSYS: an Enhanced Gene Expression
Programming Approach for Symbolic Regression Problems. International Journal of Ap-
plied Mathematics and Computer Science (AMCS), 14(3), September 2004, University of
Zielona Góra: Zielona Góra, Poland and Lubuskie Scientific Society: Zielona Góra, Poland.
Fully available at http://matwbn.icm.edu.pl/ksiazki/amc/amc14/amc1437.pdf
and http://www.amcs.uz.zgora.pl/?action=get_file&file=AMCS_2004_14_3_
7.pdf [accessed 2009-09-08]. Special Issue: Evolutionary Computation.
1773. Heitor Silverio Lopes, Fabiano Luis de Sousa, and Luiz Carlos Gadelha DeSouza. The Gen-
eralized Extremal Optimization with Real Codification. In EngOpt’08 [1227], 2008. Fully
available at http://www.engopt.org/nukleo/pdfs/0201_engopt_2008_mainenti_
etal.pdf [accessed 2009-09-09].
1774. Luı́s Seabra Lopes, Nuno Lau, Pedro Mariano, and Luı́s Mateus Rocha, editors. Progress in
Artificial Intelligence – Proceedings of the 14th Portuguese Conference on Artificial Intelli-
gence (EPIA’09), October 12–15, 2009, Aveiro, Portugal, volume 5816 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
1775. Antonio López Jaimes and Carlos Artemio Coello Coello. Some Techniques to Deal
with Many-Objective Problems. In GECCO’09-II [2343], pages 2693–2696, 2009.
doi: 10.1145/1570256.1570386.
1776. Pascal Lorenz, Petre Dini, and Vladimir Uskov, editors. Proceedings of the 10th International
Conference on Telecommunications (ICT’03), February 21–March 1, 2003, Sofitel Coralia
Maeva Beach Hotel, Tahiti, Papeete, French Polynesia. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-7661-7. Google Books ID: -TurAAAACAAJ. OCLC: 52223801,
80325313, and 423983590. Catalogue no.: 03EX628.
1777. D. Lorenzoli, S. Mussino, M. Pezzé, A. Sichel, D. Tosi, and Daniela Schilling. A SOA based
Self-Adaptive PERSONAL MOBILITY MANAGER. In SCContest’06 [554], pages 479–486,
2006. doi: 10.1109/SCC.2006.16. INSPEC Accession Number: 9165412.
1778. Loughborough Antennas & Propagation Conference (LAPC’06), April 11–12, 2006. Lough-
borough University: Loughborough, Leicestershire, UK. isbn: 0947974415. Google Books
ID: K5K4NwAACAAJ.
1779. Jorge Loureiro and Orlando Belo. An Evolutionary Approach to the Selection and Allocation
of Distributed Cubes. In IDEAS’06 [1372], pages 243–248, 2006. doi: 10.1109/IDEAS.2006.9.
INSPEC Accession Number: 9353377.
1780. Jorge Loureiro and Orlando Belo. Swarm Intelligence in Cube Selection and Allo-
cation for Multi-Node OLAP Systems. In SCSS’06 [878], pages 229–239. Springer-
Verlag GmbH: Berlin, Germany, 2006, University of Bridgeport: Bridgeport, CT, USA.
doi: 10.1007/978-1-4020-6264-3 41.
1781. Jorge Loureiro and Orlando Belo. A Metamorphosis Algorithm for the Optimiza-
tion of a Multi-Node OLAP System. In EPIA’07 [2026], pages 383–394, 2007.
doi: 10.1007/978-3-540-77002-2 32.
1782. José Antonio Lozano, Pedro Larrañaga, Iñaki Inza, and Endika Bengoetxea, editors. To-
wards a New Evolutionary Computation – Advances on Estimation of Distribution Algo-
rithms, volume 192/2006 in Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, 2006. doi: 10.1007/11007937. isbn: 3-540-29006-0. Google Books
ID: 0dku9OKxl6AC and r0UrGB8y2V0C. OCLC: 63763942, 181473672, and 318299594.
Library of Congress Control Number (LCCN): 2005932568.
1783. Haiming Lu. State-of-the-Art Multiobjective Evolutionary Algorithms – Pareto Ranking, Den-
sity Estimation and Dynamic Population. PhD thesis, Oklahoma State University, Faculty of
the Graduate College: Stillwater, OK, USA, August 2002, Gary G. Yen, Advisor. Fully avail-
able at http://www.lania.mx/˜ccoello/EMOO/thesis_lu.pdf.gz [accessed 2010-08-02].
ID: AAT 3080536.
1784. Qiang Lu and Xin Yao. Clustering and Learning Gaussian Distribution for Contin-
uous Optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part C:
Applications and Reviews, 35(2):195–204, May 2005, IEEE Systems, Man, and Cyber-
netics Society: New York, NY, USA. doi: 10.1109/TSMCC.2004.841914. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/published_IEEETSMC_LuYa05.
pdf [accessed 2010-12-02]. CiteSeerx: 10.1.1.65.7063. INSPEC Accession Number: 8396860.
1785. Wei Lu and Issa Traore. Detecting New Forms of Network Intrusions Using Genetic Pro-
gramming. Computational Intelligence, 20(3):475–494, August 2004, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1111/j.0824-7935.2004.00247.x. Fully available at http://
www.isot.ece.uvic.ca/publications/journals/coi-2004.pdf [accessed 2008-06-11].
1786. Proceedings of the Third Conference on Artificial General Intelligence (AGI’10), March 5–8, 2010,
Lugano, Switzerland.
1787. Sean Luke and Liviu Panait. A Survey and Comparison of Tree Generation Algorithms.
In GECCO’01 [2570], pages 81–88, 2001. Fully available at http://www.cs.gmu.edu/
˜sean/papers/treegenalgs.pdf [accessed 2009-08-07]. CiteSeerx: 10.1.1.28.4837.
1092 REFERENCES
1788. Sean Luke and Lee Spector. Evolving Graphs and Networks with Edge Encoding: A Prelimi-
nary Report. In GP’96 LBP [1603], 1996. Fully available at http://cs.gmu.edu/˜sean/
papers/graph-paper.pdf [accessed 2008-08-02]. CiteSeerx: 10.1.1.25.9484.
1789. Sean Luke and Lee Spector. Evolving Teamwork and Coordination with Genetic Pro-
gramming. In GP’96 [1609], pages 150–156, 1996. Fully available at http://www.ifi.
uzh.ch/groups/ailab/people/nitschke/refs/Cooperation/cooperation.pdf
[accessed 2009-08-01]. CiteSeerx: 10.1.1.29.6914.
1790. Sean Luke, Conor Ryan, and Una-May O’Reilly, editors. GECCO 2002: Graduate Student
Workshop (GECCO’02 GSWS), July 9, 2002, Roosevelt Hotel: New York, NY, USA. AAAI
Press: Menlo Park, CA, USA. See also [224, 478, 1675].
1791. Sean Luke, Liviu Panait, Gabriel Balan, Sean Paus, Zbigniew Skolicki, Jeff Bassett, Robert
Hubley, and Alexander Chircop. ECJ: A Java-based Evolutionary Computation Research
System. George Mason University (GMU), Evolutionary Computation Laboratory (EClab):
Fairfax, VA, USA, 2006. Fully available at http://cs.gmu.edu/˜eclab/projects/
ecj/ [accessed 2010-11-02].
1792. Eduard Lukschandl, Magnus Holmlund, Eric Moden, Mats G. Nordahl, and Peter Nordin.
Induction of Java Bytecode with Genetic Programming. In GP’98 LBP [1605], pages 135–142,
1998.
1793. Eduard Lukschandl, Henrik Borgvall, Lars Nohle, Mats G. Nordahl, and Peter Nordin. Evolv-
ing Routing Algorithms with the JBGP-System. In EvoWorkshops’99 [2197], pages 193–202,
1999. doi: 10.1007/10704703_16. See also [1794, 1795].
1794. Eduard Lukschandl, Henrik Borgvall, Lars Nohle, Mats G. Nordahl, and Peter Nordin.
Distributed Java Bytecode Genetic Programming with Telecom Applications. In Eu-
roGP’00 [2198], pages 316–325, 2000. See also [1793].
1795. Eduard Lukschandl, Peter Nordin, and Mats G. Nordahl. Using the Java Method Evolver
for Load Balancing in Communication Networks. In GECCO’00 LBP [2928], pages 236–239,
2000. See also [1793].
1796. F. Luna, Antonio Jesús Nebro Urbaneja, Bernabé Dorronsoro Díaz, Enrique Alba Torres, Pascal Bouvry, and L. Hogie. Optimal Broadcasting in Metropolitan MANETs Using Multiobjective Scatter Search. In EvoWorkshops’06 [2341], pages 255–266, 2006. doi: 10.1007/11732242_23.
1797. Sen Luo, Bin Xu, and Yixin Yan. An Accumulated-QoS-First Search Approach for Semantic
Web Service Composition. In SOCA’10 [1378], 2010. See also [320, 1146, 1571, 2998, 3010,
3011].
1798. Mikulas Luptacik and Rudolf Vetschera, editors. Proceedings of the First MCDM Winter Con-
ference, 16th International Conference on Multiple Criteria Decision Making (MCDM’02),
February 12–22, 2002, Hotel Panhans: Semmering, Austria. Partly available at http://
orgwww.bwl.univie.ac.at/mcdm2002 [accessed 2007-09-10].
1799. Jay L. Lush. Progeny Test and Individual Performance as Indicators of an Animal’s Breeding
Value. Journal of Dairy Science (JDS), 18(1):1–19, January 1935, American Dairy Science
Association (ADSA): Champaign, IL, USA. Fully available at http://jds.fass.org/
cgi/reprint/18/1/1 [accessed 2007-11-27].
1800. Luc Luyet, Sacha Varone, and Nicolas Zufferey. An Ant Algorithm for the
Steiner Tree Problem in Graphs. In EvoWorkshops’07 [1050], pages 42–51, 2007.
doi: 10.1007/978-3-540-71805-5_5.
1801. Huanyu Ma, Wei Jiang, Songlin Hu, Zhenqiu Huang, and Zhiyong Liu. Two-Phase Graph
Search Algorithm for QoS-Aware Automatic Service Composition. In SOCA’10 [1378], 2010.
See also [320, 1299].
1802. Wolfgang Maass, editor. Proceedings of the Eighth Annual Conference on Computational
Learning Theory (COLT’95), July 5–8, 1995, Santa Cruz, CA, USA. ACM Press: New York,
NY, USA. isbn: 0-89791-723-5. Google Books ID: L2NQAAAAMAAJ and mDtrOgAACAAJ.
OCLC: 33144512, 82595159, 283286354, 300096337, and 318352896. ACM Order
No.: 604950.
1803. Penousal Machado, Jorge Tavares, Francisco Baptista Pereira, and Ernesto Jorge Fer-
nandes Costa. Vehicle Routing Problem: Doing It The Evolutionary Way.
In GECCO’02 [1675], page 690, 2002. Fully available at http://osiris.
tuwien.ac.at/˜wgarn/VehicleRouting/GECCO02_VRPCoEvo.pdf [accessed 2008-10-27].
CiteSeerx: 10.1.1.16.8208.
1804. Michael Machtey and Paul Young. An Introduction to the General Theory of Algorithms,
volume 2 in Theory of Computation, The Computer Science Library. Elsevier Science Pub-
lishing Company, Inc.: New York, NY, USA and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands, 1978. isbn: 0-444-00226-X and 0-444-00227-8. Google
Books ID: qncEAQAAIAAJ.
1805. Kenneth J. Mackin and Eiichiro Tazaki. Emergent Agent Communication in Multi-Agent
Systems using Automatically Defined Function Genetic Programming (ADF-GP). In
SMC’99 [1331], pages 138–142, volume 5, 1999. doi: 10.1109/ICSMC.1999.815536. See also
[1806, 1807].
1806. Kenneth J. Mackin and Eiichiro Tazaki. Unsupervised Training of Multiobjective Agent
Communication using Genetic Programming. In KES’00 [1279], pages 738–741, volume
2, 2000. doi: 10.1109/KES.2000.884152. Fully available at http://www.cs.bham.ac.
uk/˜wbl/biblio/gp-html/Mackin00.html and http://www.lania.mx/˜ccoello/
EMOO/mackin00.pdf.gz [accessed 2008-07-28]. See also [1805].
1807. Kenneth J. Mackin and Eiichiro Tazaki. Multiagent Communication Combining Genetic
Programming and Pheromone Communication. Kybernetes, 31(6):827–843, 2002, Thales
Publications (W.O.) Ltd.: Irvine, CA, USA and Emerald Group Publishing Limited: Bingley,
UK. doi: 10.1108/03684920210432808. Fully available at http://www.emeraldinsight.
com/10.1108/03684920210432808 [accessed 2008-07-28]. See also [1805].
1808. James B. MacQueen. Some Methods of Classification and Analysis of Multivariate Observa-
tions. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (June 21–July 18, 1965 and December 27, 1965–January 7, 1966) [1700], pages 281–297,
volume I: Statistics, 1967. Fully available at http://digitalassets.lib.berkeley.
edu/math/ucb/text/math_s5_v1_article-17.pdf [accessed 2009-09-20].
1809. Proceedings of the International Joint Conference on Computational Intelligence (IJCCI’09),
October 5, 2009, Madeira, Portugal.
1810. Alexander Maedche, Steffen Staab, Claire Nedellec, and Eduard H. Hovy, editors. Work-
shop on Ontology Learning, Proceedings of the Second Workshop on Ontology Learning
(OL’2001), August 4, 2001, Seattle, WA, USA, volume 38 in CEUR Workshop Pro-
ceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen,
Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-47/ [accessed 2008-04-01]. Held in conjunction with the 17th International Joint Conference on Artificial Intelligence (IJCAI’01). See also [2012].
1811. Pattie Maes, Maja J. Mataric, Jean-Arcady Meyer, Jordan B. Pollack, and Stewart W.
Wilson, editors. Proceedings of the Fourth International Conference on Simulation of Adap-
tive Behavior: From Animals to Animats 4 (SAB’96), September 9–13, 1996, Cape Cod,
MA, USA. MIT Press: Cambridge, MA, USA. isbn: 0-262-63178-4. Google Books
ID: V3pksEEKxUkC. issn: 1089-4365.
1812. Anne E. Magurran. Ecological Diversity and Its Measurement. Princeton University Press:
Princeton, NJ, USA and Taylor and Francis LLC: London, UK, 1987. isbn: 0-691-08485-8,
0-691-08491-2, and 0709935390. Google Books ID: GV5eJQAACAAJ, MhsuGQAACAAJ,
f38pHgAACAAJ, gH8pHgAACAAJ, iHoOAAAAQAAJ, and wvSEAAAACAAJ.
1813. Anne E. Magurran. Biological Diversity. Current Biology, 15(4):R116–R118, February 22,
2005, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.cub.2005.02.006. Fully
available at http://dx.doi.org/10.1016/j.cub.2005.02.006 [accessed 2008-11-10].
1814. Hadj Mahboubi, Kamel Aouiche, and Jérôme Darmont. Materialized View Se-
lection by Query Clustering in XML Data Warehouses. In CSIT’06 [2687],
2006. Fully available at http://eric.univ-lyon2.fr/˜jdarmont/publications/
files/CSIT06-Mahboubi-et-al.pdf and http://hal.archives-ouvertes.fr/
docs/00/32/06/32/PDF/CSIT_2006-Mahboubi-et-al.pdf [accessed 2011-04-12]. arXiv
ID: 0809.1963v1.
1815. Oded Maimon and Lior Rokach, editors. The Data Mining and Knowledge Discov-
ery Handbook. Springer Science+Business Media, Inc.: New York, NY, USA, 2005.
doi: 10.1007/b107408. isbn: 0-387-24435-2 and 0-387-25465-X.
1816. Mohammad A. Makhzan and Kwei-Jay Lin. Solution to a Complete Web Ser-
vice Discovery and Composition. In CEC/EEE’06 [2967], pages 455–457, 2006.
doi: 10.1109/CEC-EEE.2006.83. INSPEC Accession Number: 9189347. See also [318].
1817. Masud Ahmad Malik. Evolution of the High Level Programming Languages: A Critical
Perspective. SIGPLAN Notices, 33(12):72–80, December 1998, ACM Press: New York, NY,
USA. doi: 10.1145/307824.307882.
1818. David Malkin. The Evolutionary Impact of Gradual Complexification on Complex Sys-
tems. PhD thesis, University College London (UCL), Department of Computer Science:
London, UK, 2008. Fully available at http://www.cs.ucl.ac.uk/staff/P.Bentley/
DavidMalkinsthesis.pdf [accessed 2010-07-27].
1819. Proceedings of the 2nd European Computing Conference (ECC’08), September 11–13, 2008,
Malta.
1820. Bernard Manderick and Frans Moyson. The Collective Behaviour of Ants: An Example
of Self-Organization in Massive Parallelism. In Proceedings of AAAI Spring Symposium on
Parallel Models of Intelligence: How Can Slow Components Think So Fast? [2600], 1988.
1821. Bernard Manderick, Mark de Weger, and Piet Spiessens. The Genetic Algorithm and the
Structure of the Fitness Landscape. In ICGA’91 [254], pages 143–150, 1991.
1822. Vittorio Maniezzo and Matteo Milandri. An Ant-Based Framework for Very Strongly Constrained Problems. In ANTS’02 [816], pages 115–148, 2002. doi: 10.1007/3-540-45724-0_19.
1823. Vittorio Maniezzo, Luca Maria Gambardella, and Fabio de Luigi. Ant Colony Optimiza-
tion. In New Optimization Techniques in Engineering [2083], Chapter 5, pages 101–117.
Springer-Verlag GmbH: Berlin, Germany. Fully available at http://www.idsia.ch/
˜luca/aco2004.pdf [accessed 2007-09-13]. CiteSeerx: 10.1.1.10.7560.
1824. Vittorio Maniezzo, Antonella Carbonaro, Matteo Golfarelli, and Stefano Rizzi. An ANTS Al-
gorithm for Optimizing the Materialization of Fragmented Views in Data Warehouse: Prelim-
inary Results. In EvoWorkshops’01 [343], pages 80–89, 2001. doi: 10.1007/3-540-45365-2_9.
Fully available at http://bias.csr.unibo.it/golfarelli/Papers/evocop01.pdf
[accessed 2011-04-10].
1825. Vittorio Maniezzo, Antonella Carbonaro, Matteo Golfarelli, and Stefano Rizzi. ANTS for
Data Warehouse Logical Design. In MIC’01 [2294], pages 249–254, 2001. Fully avail-
able at http://www-db.deis.unibo.it/˜srizzi/PDF/mic01.pdf [accessed 2011-04-10].
CiteSeerx: 10.1.1.10.1324 and 10.1.1.74.9797.
1826. Vittorio Maniezzo, Roberto Battiti, and Jean-Paul Watson, editors. Learning and In-
telligent OptimizatioN (LION 2), December 8–12, 2007, University of Trento: Trento,
Italy, volume 5313/2008 in Theoretical Computer Science and General Issues (SL 1),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-92695-5. isbn: 3-540-92694-1. Library of Congress Control Number
(LCCN): 2008941292.
1827. Reinhard Männer and Bernard Manderick, editors. Proceedings of Parallel Problem Solv-
ing from Nature 2 (PPSN II), September 28–30, 1992, Brussels, Belgium. Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands. isbn: 0-444-89730-5. Google Books ID: jjjlAAAACAAJ.
OCLC: 26362049, 180616056, and 636493751. Library of Congress Control Number
(LCCN): 92023499. GBV-Identification (PPN): 110538498. LC Classification: QA76.58.
1828. Yannis Manolopoulos, Jaroslav Pokorný, and Timos K. Sellis, editors. Proceedings of the
10th East European Conference on Advances in Databases and Information Systems (AD-
BIS’06), September 3–7, 2006, Thessaloniki, Greece, volume 4152 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11827252.
isbn: 3-540-37899-5. Library of Congress Control Number (LCCN): 2006931262.
1829. Steven Manos, Leon Poladian, Peter J. Bentley, and Maryanne Large. A Genetic Algo-
rithm with a Variable-Length Genotype and Embryogeny for Microstructured Optical Fi-
bre Design. In GECCO’06 [1516], pages 1721–1728, 2006. doi: 10.1145/1143997.1144278.
CiteSeerx: 10.1.1.152.7629.
1830. Elena Marchiori and Adri G. Steenbeek. An Evolutionary Algorithm for Large Scale Set
Covering Problems with Application to Airline Crew Scheduling. In EvoWorkshops’00 [464],
pages 370–384, 2000. doi: 10.1007/3-540-45561-2_36. CiteSeerx: 10.1.1.36.8069.
1831. Tiziana Margaria, Christian Winkler, Christian Kubczak, Bernhard Steffen, Marco
Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele
Facca, and Christina Tziviskou. SWS Mediator with WEBML/WEBRATIO and
JABC/JETI: A Comparison. In ICEIS’07 [494], pages 422–429, volume 4,
2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/3-SWSC06-med-webml-jabc-fin.pdf [accessed 2010-12-16]. See also [383–
387, 1626–1632, 1832].
1832. Tiziana Margaria, Marco Bakera, Harald Raffelt, and Bernhard Steffen. Synthesiz-
ing the Mediator with jABC/ABC. In EON-SWSC’08 [1023], 2008. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/Paper-4.pdf [accessed 2010-12-16]. See also [1626–1632, 1712, 1831, 2166].
1833. Pedro José Marrón, editor. 5. GI/ITG KuVS Fachgespräch “Drahtlose Sensornetze”, July 17–
18, 2006, volume 2006/07. Universität Stuttgart, Fakultät 5: Informatik, Elektrotechnik und
Informationstechnik, Institut für Parallele und Verteilte Systeme (IPVS): Stuttgart, Ger-
many. Fully available at http://elib.uni-stuttgart.de/opus/volltexte/2006/
2838/pdf/TR_2006_07.pdf [accessed 2009-08-01]. urn: urn:nbn:de:bsz:93-opus-28388.
1834. Silvano Martello and Paolo Toth. Knapsack Problems – Algorithms and Computer Im-
plementations, volume 25 in Estimation, Simulation, and Control – Wiley-Interscience
Series in Discrete Mathematics and Optimization. Wiley Interscience: Chichester,
West Sussex, UK, 1990. isbn: 0471924202. Google Books ID: 0dhQAAAAMAAJ and
BrcjQAAACAAJ. OCLC: 21339027 and 231141136. Library of Congress Control Number
(LCCN): 90012279. GBV-Identification (PPN): 032258186, 073756830, and 115819762.
LC Classification: QA267.7 .M37 1990.
1835. Max Martin, Eric Drucker, and Walter Don Potter. GA, EO, and PSO Applied to the Discrete
Network Configuration Problem. In GEM’08 [120], 2008. Fully available at http://www.
cs.uga.edu/˜potter/CompIntell/GEM_Martin-et-al.pdf [accessed 2008-08-23].
1836. Trevor Philip Martin and Anca L. Ralescu, editors. Fuzzy Logic in Artificial Intelligence, To-
wards Intelligent Systems, IJCAI’95 Selected Papers from Workshop (IJCAI’95 Fuzzy WS),
August 19–21, 1995, volume 1188/1997 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-62474-0. isbn: 3-540-62474-0. Google Books ID: MGngXEbCI64C.
OCLC: 36215016, 150397549, 301584096, and 312826327. See also [1861, 2919].
1837. William Martin and Michael J. Russell. On the Origins of Cells: A Hypothesis for the
Evolutionary Transitions from Abiotic Geochemistry to Chemoautotrophic Prokaryotes, and
from Prokaryotes to Nucleated Cells. Philosophical Transactions of the Royal Society B:
Biological Sciences, 358(1429):59–85, 2003, The Royal Society: London, UK. doi: 10.1098/rstb.2002.1183. PubMed ID: 12594918.
1838. Worthy Neil Martin, Jens Lienig, and James P. Cohoon. Island (Migration) Models: Evo-
lutionary Algorithms Based on Punctuated Equilibria. In Handbook of Evolutionary Com-
putation [171], Chapter C6.3, pages 448–463. Oxford University Press, Inc.: New York, NY,
USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK,
and CRC Press, Inc.: Boca Raton, FL, USA, 1997.
1839. Jean-Philippe Martin-Flatin, Joe Sventek, and Kurt Geihs, editors. IFIP/IEEE International
Workshop on Self-Managed Systems and Services (SelfMan’05), May 19, 2005, Nice Acropolis
Exhibition Center: Nice, France.
1840. Flávio Martins, Eduardo Carrano, Elizabeth Wanner, Ricardo H. C. Takahashi, and Ger-
aldo Robson Mateus. A Dynamic Multiobjective Hybrid Approach for Designing Wireless
Sensor Networks. In CEC’09 [1350], pages 1145–1152, 2009. doi: 10.1109/CEC.2009.4983075.
INSPEC Accession Number: 10688670.
1841. Radek Matoušek and Lars Nolle. GAHC: Improved GA with HC mutation. In
WCECS’07 [1406], pages 915–920, 2007. Fully available at http://www.iaeng.org/
publication/WCECS2007/WCECS2007_pp915-920.pdf [accessed 2009-10-24].
1842. Radek Matoušek and Pavel Ošmera, editors. Proceedings of the 9th International Confer-
ence on Soft Computing (MENDEL’03), June 4–6, 2003, Brno University of Technology:
Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky:
Brno, Czech Republic. isbn: 80-214-2411-7. OCLC: 249853754 and 320595534. GBV-
Identification (PPN): 390844543.
1843. Yoshiyuki Matsumura, Kazuhiro Ohkura, and Kanji Ueda. Advantages of Global Discrete
Recombination in (µ/µ, λ)-Evolution Strategies. In CEC’02 [944], pages 1848–1853, volume
2, 2002. doi: 10.1109/CEC.2002.1004524.
1844. Giles Mayley. Guiding or Hiding: Explorations into the Effects of Learning on the Rate of
Evolution. In ECAL’97 [1309], pages 135–144, 1997. CiteSeerx: 10.1.1.50.9241.
1845. Ernst Mayr. The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Har-
vard University Press: Cambridge, MA, USA and Belknap Press: Cambridge, MA, USA,
1982. isbn: 0-674-36445-7 and 0-674-36446-5. Google Books ID: V_0TAQAAIAAJ and
pHThtE2R0UQC. OCLC: 7875904, 185404286, and 239745622.
1846. John P. McDermott, editor. Proceedings of the 10th International Joint Conference on Ar-
tificial Intelligence (IJCAI’87-I), August 1987, Milano, Italy, volume 1. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/
ijcai/IJCAI-87-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1847].
1847. John P. McDermott, editor. Proceedings of the 10th International Joint Conference on Ar-
tificial Intelligence (IJCAI’87-II), August 1987, Milano, Italy, volume 2. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/
ijcai/IJCAI-87-VOL2/content/content.htm [accessed 2008-04-01]. See also [1846].
1848. John Robert McDonnell, Robert G. Reynolds, and David B. Fogel, editors. Proceedings of
the 4th Annual Conference on Evolutionary Programming (EP’95), March 1–2, 1995, San
Diego, CA, USA, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge,
MA, USA. isbn: 0-262-13317-2. Google Books ID: OPhQAAAAMAAJ and ipiiDmTrJeAC.
OCLC: 32666904.
1849. Deborah L. McGuinness and George Ferguson, editors. Proceedings of the Nineteenth
National Conference on Artificial Intelligence and the Sixteenth Conference on Innova-
tive Applications of Artificial Intelligence (AAAI’04 / IAAI’04), July 25–29, 2004, San
José, CA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA,
USA. isbn: 0-262-51183-5. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai04.php and http://www.aaai.org/Conferences/IAAI/iaai04.php [ac-
cessed 2007-09-06].
1850. Sheila A. McIlraith and Tran Cao Son. Adapting Golog for Composition of Seman-
tic Web Services. In KR’02 [900], pages 482–496, 2002. Fully available at http://
reference.kfupm.edu.sa/content/a/d/adapting_golog_for_composition_
of_semant_10675.pdf and http://www.cs.nmsu.edu/˜tson/papers/kr02gl.pdf
[accessed 2011-01-11].
1851. Sheila A. McIlraith, Tran Cao Son, and Honglei Zeng. Semantic Web Services. IEEE Intelli-
gent Systems Magazine, 16(2):46–53, March–April 2001, IEEE Computer Society Press: Los
Alamitos, CA, USA. doi: 10.1109/5254.920599. INSPEC Accession Number: 6941088.
1852. Robert Ian McKay, Xin Yao, Charles S. Newton, Jong-Hwan Kim, and Takeshi Fu-
ruhashi, editors. Selected Papers from the Second Asia-Pacific Conference on Simulated
Evolution and Learning (SEAL’98), November 24–27, 1998, Canberra, Australia, volume
1585/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48873-1.
isbn: 3-540-65907-2.
1853. Robert Ian McKay, Xuan Hoai Nguyen, Peter Alexander Whigham, and Yin Shan. Grammars
in Genetic Programming: A Brief Review. In ISICA [1489], pages 3–18, 2005. Fully available
at http://sc.snu.ac.kr/PAPERS/isica05.pdf [accessed 2007-08-15].
1854. Ken I. M. McKinnon. Convergence of the Nelder-Mead Simplex Method to a Nonstationary
Point. SIAM Journal on Optimization (SIOPT), 9(1):148–158, 1998, Society for Industrial
and Applied Mathematics (SIAM): Philadelphia, PA, USA. doi: 10.1137/S1052623496303482.
1855. Nicholas Freitag McPhee and Justin Darwin Miller. Accurate Replication in Genetic
Programming. In ICGA’95 [883], pages 303–309, 1995. Fully available at http://
www.mrs.umn.edu/˜mcphee/Research/Accurate_replication.ps [accessed 2011-11-23].
CiteSeerx: 10.1.1.51.6390.
1856. Nicholas Freitag McPhee and Riccardo Poli. Memory with Memory: Soft As-
signment in Genetic Programming. In GECCO’08 [1519], pages 1235–1242, 2008.
doi: 10.1145/1389095.1389336. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/McPhee_2008_gecco.html [accessed 2009-08-02].
1857. Jörn Mehnen, Mario Köppen, Ashraf Saad, and Ashutosh Tiwari, editors. Applications of
Soft Computing – From Theory to Praxis: 13th Online World Conference on Soft Computing
in Industrial Applications, November 10–21, 2008, volume 58 in Advances in Intelligent and
Soft Computing. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-89619-7.
1858. Yi Mei, Ke Tang, and Xin Yao. A Global Repair Operator for Capacitated Arc
Routing Problem. IEEE Transactions on Systems, Man, and Cybernetics – Part
B: Cybernetics, 39(3):723–734, June 2009, IEEE Systems, Man, and Cybernetics So-
ciety: New York, NY, USA. doi: 10.1109/TSMCB.2008.2008906. Fully available
at http://www.cs.bham.ac.uk/˜xin/papers/TSMC09_MeiTangYao.pdf [accessed 2011-
10-01]. CiteSeerx: 10.1.1.147.5933. PubMed ID: 19211356.
1859. Yi Mei, Ke Tang, and Xin Yao. Decomposition-Based Memetic Algorithm for Multi-
Objective Capacitated Arc Routing Problem. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 15(2):151–165, April 2011, IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/TEVC.2010.2051446. Fully available at http://
ieee-cis.org/_files/EAC_Research_2009_Report_Mei.pdf and http://mail.
ustc.edu.cn/˜meiyi/Papers/D-MAENS.pdf [accessed 2011-02-22]. INSPEC Accession Num-
ber: 11887930.
1860. Proceedings of the Second International Workshop on Local Search Techniques in Constraint
Satisfaction, October 1, 2005, Melia Sitges Hotel: Sitges, Barcelona, Catalonia, Spain.
1861. Christopher S. Mellish, editor. Proceedings of the Fourteenth International Joint Conference
on Artificial Intelligence (IJCAI’95), August 20–25, 1995, Montréal, QC, Canada. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-363-8. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL%201/content/content.
htm and http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL2/CONTENT/content.htm
[accessed 2008-04-01]. Google Books ID: yI7UPwAACAAJ. OCLC: 41327748, 263632421, and
1898. Brad L. Miller and David Edward Goldberg. Genetic Algorithms, Selection Schemes, and
the Varying Effects of Noise. Evolutionary Computation, 4(2):113–131, Summer 1996, MIT
Press: Cambridge, MA, USA, Marc Schoenauer, editor. doi: 10.1162/evco.1996.4.2.113.
CiteSeerx: 10.1.1.31.3449. See also [1897].
1899. Brad L. Miller and Michael J. Shaw. Genetic Algorithms with Dynamic Niche Sharing
for Multimodal Function Optimization. IlliGAL Report 95010, Illinois Genetic Algorithms
Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, December 1, 1995.
CiteSeerx: 10.1.1.52.3568.
1900. George A. Miller. The Magical Number Seven, Plus or Minus Two: Some Limits on
our Capacity for Processing Information. Psychological Review, 63(2):81–97, March 1956,
American Psychological Association (APA): Washington, DC, USA. Fully available
at http://www.psych.utoronto.ca/users/peterson/psy430s2001/Miller%20GA
%20Magical%20Seven%20Psych%20Review%201955.pdf [accessed 2009-07-17]. PubMed
ID: 13310704.
1901. Julian Francis Miller and Peter Thomson. Cartesian Genetic Programming. In
EuroGP’00 [2198], pages 121–132, 2000. doi: 10.1007/b75085. Fully available
at http://www.cartesiangp.co.uk/papers/2000/mteurogp2000.pdf [accessed 2009-
07-04]. CiteSeerx: 10.1.1.26.8515.
1902. Julian Francis Miller, Adrian Thompson, Peter Thomson, and Terence Claus Fogarty, edi-
tors. Proceedings of Third International Conference on Evolvable Systems – From Biology
to Hardware (ICES’00), April 17–19, 2000, Edinburgh, Scotland, UK, volume 1801/2000
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46406-9. isbn: 3-540-67338-5.
1903. Julian Francis Miller, Marco Tomassini, Pier Luca Lanzi, Conor Ryan, Andrea G. B. Tet-
tamanzi, and William B. Langdon, editors. Proceedings of the 4th European Conference on
Genetic Programming (EuroGP’01), April 18–20, 2001, Lake Como, Milan, Italy, volume
2038/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-41899-7. OCLC: 46502093, 48730451, 66624523, 76281028,
456658150, 501702396, 614899119, and 634229636. Library of Congress Control Num-
ber (LCCN): 2001020853. LC Classification: QA76.623 .E94 2001.
1904. Stanley L. Miller. Production of Amino Acids under Possible Primitive Earth Conditions.
Science Magazine, 117(3046):528–529, May 15, 1953, American Association for the Advance-
ment of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford University):
Cambridge, MA, USA. doi: 10.1126/science.117.3046.528. Fully available at http://www.
issol.org/miller/miller1953.pdf [accessed 2009-08-05]. PubMed ID: 13056598.
1905. Stanley L. Miller and Harold C. Urey. Organic Compound Synthesis on the Primitive Earth:
Several Questions about the Origin of Life have been Answered, but much Remains to be
Studied. Science Magazine, 130(3370):245–251, July 31, 1959, American Association for the
Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford Uni-
versity): Cambridge, MA, USA. doi: 10.1126/science.130.3370.245. PubMed ID: 13668555.
1906. Hokey Min, Tomasz G. Smolinski, and Grzegorz M. Boratyn. A Genetic Algorithm-based
Data Mining Approach to Profiling the Adopters And Non-Adopters of E-Purchasing. In
IRI’01 [2524], pages 1–6, 2001. CiteSeerx: 10.1.1.11.4711.
1907. Marvin Lee Minsky. Steps toward Artificial Intelligence. Proceedings of the IRE, 49(1):8–30, January 1961, Institute of Radio Engineers (IRE): New York, NY, USA. doi: 10.1109/JRPROC.1961.287775. Reprinted in [1908].
1908. Marvin Lee Minsky. Steps toward Artificial Intelligence. In Computers and Thought [897],
pages 406–450. McGraw-Hill: New York, NY, USA, 1963–1995. Fully available at http://
web.media.mit.edu/˜minsky/papers/steps.html [accessed 2007-08-05]. Reprint of [1907].
1909. Daniele Miorandi, Paolo Dini, Eitan Altman, and Hisao Kameda, authors. Lidia A. R. Ya-
mamoto, editor. WP 2.2 – Paradigm Applications and Mapping, D2.2.2 Framework for Dis-
tributed On-line Evolution of Protocols and Services. BIONETS (European Commission –
Future and Emerging Technologies Project FP6-027748): Trento, Italy, 2nd edition, June 14,
2007, David Linner and Françoise Baude, Supervisors. Fully available at http://www.
bionets.eu/docs/BIONETS_D2_2_2.pdf [accessed 2008-06-23]. BIONETS/CN/wp2.2/v2.1.
Status: Final, Public.
1910. Teresa Miquélez, Endika Bengoetxea, and Pedro Larrañaga. Evolutionary Computation based
on Bayesian Classifiers. International Journal of Applied Mathematics and Computer Science
(AMCS), 14(3):335–349, September 2004, University of Zielona Góra: Zielona Góra, Poland
and Lubuskie Scientific Society: Zielona Góra, Poland. Fully available at http://matwbn.
icm.edu.pl/ksiazki/amc/amc14/amc1434.pdf and http://www.amcs.uz.zgora.
pl/?action=get_file&file=AMCS_2004_14_3_4.pdf [accessed 2009-08-27].
1911. Proceedings of the Fifth International Workshop on the Synthesis and Simulation of Living
Systems (Artificial Life V), May 16–18, 1996, Nara, Japan. MIT Press: Cambridge, MA,
USA.
1912. Melanie Mitchell. An Introduction to Genetic Algorithms, Complex Adaptive Systems, Brad-
ford Books. MIT Press: Cambridge, MA, USA, February 1998. isbn: 0-262-13316-4
and 0-262-63185-7. Google Books ID: 0eznlz0TF-IC. OCLC: 44708663 and
256769418. Library of Congress Control Number (LCCN): 95024489. GBV-Identification
(PPN): 083660852, 240449282, 269703497, and 557262135. LC Classification: QH441.2
.M55 1996.
1913. Melanie Mitchell, Stephanie Forrest, and John Henry Holland. The Royal Road for Genetic
Algorithms: Fitness Landscapes and GA Performance. In ECAL’91 [2789], pages 245–254,
1991. Fully available at http://web.cecs.pdx.edu/˜mm/ecal92.pdf [accessed 2010-08-03].
CiteSeerx: 10.1.1.49.212.
1914. Tom M. Mitchell. Generalization as Search. In Readings in Artificial Intelligence: A Col-
lection of Articles [2872], pages 517–542. Tioga Publishing Co.: Wellsboro, PA, USA and
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1981. See also [1915].
1915. Tom M. Mitchell. Generalization as Search. Artificial Intelligence, 18
(2):203–226, March 1982, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(82)90040-6. CiteSeerx: 10.1.1.121.5764. See also [1914].
1916. Tom M. Mitchell and Reid G. Smith, editors. Proceedings of the 7th National Conference
on Artificial Intelligence (AAAI’88), August 21–26, 1988, St. Paul, MN, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51055-3. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai88.php [accessed 2007-09-06].
1917. Debasis Mitra, Fabio I. Romeo, and Alberto L. Sangiovanni-Vincentelli. Convergence and
Finite-Time Behavior of Simulated Annealing. In CDC’85 [984], pages 761–767, 1985.
doi: 10.1109/CDC.1985.268600.
1918. Mamoru Mitsuishi, Kanji Ueda, and Fumihiko Kimura, editors. Manufacturing Systems and
Technologies for the New Frontier – The 41st CIRP Conference on Manufacturing Systems,
May 26–28, 2008, Hongo Campus, The University of Tokyo: Tokyo, Japan. Springer-Verlag
London Limited: London, UK. Library of Congress Control Number (LCCN): 2008924671.
1919. Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algo-
rithms and Probabilistic Analysis. Cambridge University Press: Cambridge, UK, 2005.
isbn: 0-521-83540-2. Google Books ID: 0bAYl6d7hvkC. OCLC: 55845691, 58999923,
and 502475686. Library of Congress Control Number (LCCN): 2004054540. GBV-
Identification (PPN): 390183326 and 555603164.
1920. Jun’ichi Mizoguchi, Hitoshi Hemmi, and Katsunori Shimohara. Production Genetic Algo-
rithms for Automated Hardware Design Through an Evolutionary Process. In CEC’94 [1891],
pages 661–664, volume 2, 1994. doi: 10.1109/ICEC.1994.349980. INSPEC Accession Num-
ber: 4812352.
1921. Arvind S. Mohais, Sven Schellenberg, Maksud Ibrahimov, Neal Wagner, and Zbigniew
Michalewicz. An Evolutionary Approach to Practical Constraints in Scheduling: A Case-
Study of the Wine Bottling Problem. In Variants of Evolutionary Algorithms for Real-World
Applications [567], Chapter 2, pages 31–58. Springer-Verlag: Berlin/Heidelberg, 2011–2012.
doi: 10.1007/978-3-642-23424-8_2.
1922. Masoud Mohammadian, Ruhul A. Sarker, and Xin Yao, editors. Computational Intelligence
in Control. Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA,
2003. isbn: 1591400376. Google Books ID: N3m6XYXL5K0C.
1923. Masoud Mohammadian, Lotfi A. Zadeh, and Stephen Grossberg, editors. Proceedings
of the International Conference on Computational Intelligence for Modelling, Control and
Automation and International Conference on Intelligent Agents, Web Technologies and
Internet Commerce Vol-2 (CIMCA-IAWTIC’06), November 28–December 1, 2006, Syd-
ney, NSW, Australia, volume 02. IEEE Computer Society Press: Los Alamitos, CA,
USA. isbn: 0-7695-2731-0. OCLC: 85810664 and 123929091. GBV-Identification
(PPN): 593426258. ID: E2731.
1924. Mukesh K. Mohania and A. Min Tjoa, editors. Proceedings of the First International
Conference on Data Warehousing and Knowledge Discovery (DaWaK’99), August 30–September 1,
1999, Florence, Italy, volume 1676 in Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48298-9. isbn: 3-540-66458-0.
1925. Parviz Moin and William C. Reynolds, editors. Proceedings of the 1998 Summer Program,
July 5–31, 1998. NASA Center for Turbulence Research (CTR): Stanford, CA, USA. Fully
available at http://www.stanford.edu/group/ctr/Summer/SP98.html [accessed 2009-
06-16].
1926. Guillermo Molina and Enrique Alba Torres. Location Discovery in Wireless Sensor Networks
Using a Two-Stage Simulated Annealing. In EvoWorkshops’09 [1052], pages 11–20, 2009.
doi: 10.1007/978-3-642-01129-0_2.
1927. Daniel Molina Cabrera, Manuel Lozano Márquez, and Francisco Herrera Triguero. Memetic
Algorithm with Local Search Chaining for Large Scale Continuous Optimization Problems.
In CEC’09 [1350], pages 830–837, 2009. doi: 10.1109/CEC.2009.4983031. INSPEC Accession
Number: 10688603.
1928. Nicolas Monmarché, El-Ghazali Talbi, Pierre Collet, Marc Schoenauer, and Evelyne
Lutton, editors. Revised Selected Papers of the 8th International Conference on Ar-
tificial Evolution, Evolution Artificielle (EA’07), October 29–31, 2007, Tours, France,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-79305-2. isbn: 3-540-79304-6. Partly available at http://ea07.
hant.li.univ-tours.fr/ [accessed 2007-09-09]. Published in 2008.
1929. Raul Monroy, Gustavo Arroyo-Figueroa, Luis Enrique Sucar, and Juan Humberto Sossa
Azuela, editors. Advances in Artificial Intelligence, Proceedings of the Third Mexican In-
ternational Conference on Artificial Intelligence (MICAI’04), April 26–30, 2004, Mexico
City, México, volume 2972/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b96521. isbn: 3-540-21459-3.
1930. David J. Montana. Strongly Typed Genetic Programming. BBN Technical Report #7866,
Bolt Beranek and Newman Inc. (BBN): Cambridge, MA, USA, May 7, 1993. Fully available
at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/stgp.
ps.Z [accessed 2010-12-04]. CiteSeerx : 10.1.1.38.4986. See also [1931, 1932].
1931. David J. Montana. Strongly Typed Genetic Programming. BBN Technical Report
#7866, Bolt Beranek and Newman Inc. (BBN): Cambridge, MA, USA, March 25, 1994.
Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/
papers/stgp2.ps.Z [accessed 2010-12-04]. CiteSeerx : 10.1.1.38.4986. See also [1930, 1932].
1932. David J. Montana. Strongly Typed Genetic Programming. Evolutionary Computation, 3(2):
199–230, Summer 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.2.199.
Fully available at http://personal.d.bbn.com/˜dmontana/papers/stgp.pdf
and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/montana_stgpEC.html
[accessed 2011-11-23].
1933. David J. Montana and Talib Hussain. Adaptive Reconfiguration of Data Networks using
Genetic Algorithms. Applied Soft Computing, 4(4):433–444, October 2004, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2004.02.002. CiteSeerx : 10.1.1.95.1975.
See also [1935].
1934. David J. Montana and Jason Redi. Optimizing Parameters of a Mobile Ad Hoc Net-
work Protocol with a Genetic Algorithm. In GECCO’05 [304], pages 1993–1998, 2005.
doi: 10.1145/1068009.1068342. Fully available at http://vishnu.bbn.com/papers/
erni.pdf [accessed 2008-10-16]. CiteSeerx : 10.1.1.86.5827.
1935. David J. Montana, Talib Hussain, and Tushar Saxena. Adaptive Reconfiguration of Data
Networks using Genetic Algorithms. In GECCO’02 [1675], pages 1141–1149, 2002. See also
[1933].
1936. Proceedings of the 2nd International Workshop on Machine Learning (ML’83), 1983, Monti-
cello, IL, USA.
1937. The Seventh Metaheuristics International Conference (MIC’07), June 25–29, 2007, Montréal,
QC, Canada. Partly available at http://www.mic2007.ca/ [accessed 2007-09-12].
1938. H. L. Moore. Cours d’Économie Politique. By VILFREDO PARETO, Professeur à
l’Université de Lausanne. Vol. I. Pp. 430. 1896. Vol. II. Pp. 426. 1897. Lausanne: F. Rouge.
The ANNALS of the American Academy of Political and Social Science, 9(3):128–131, 1897,
SAGE Publications: Thousand Oaks, CA, USA and American Academy of Political and
Social Science: Philadelphia, PA, USA. doi: 10.1177/000271629700900314. See also [2127].
1939. Federico Morán, Alvaro Moreno, Juan Julián Merelo-Guervós, and Pablo Chacón, editors.
Advances in Artificial Life – Proceedings of the Third European Conference on Artificial
Life (ECAL’95), June 4–6, 1995, Granada, Spain, volume 929/1995 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-59496-5. isbn: 3-540-59496-5. Google
Books ID: 6CujMeca5S0C.
1940. Matthew Moran, Maciej Zaremba, and Tomas Vitvar. Service Discovery and Composition
with WSMX for SWS Challenge Workshop IV. In KWEB SWS Challenge Workshop [2450],
2007. Fully available at http://sws-challenge.org/workshops/2007-Innsbruck/
papers/4-SWSCha_IV_2pager.pdf. See also [582, 1210, 3057–3059].
1941. Jorge J. Moré. A Collection of Nonlinear Model Problems. In Computational Solution of
Nonlinear Systems of Equations – Proceedings of the 1988 SIAM-AMS Summer Seminar on
Computational Solution of Nonlinear Systems of Equations [73], pages 723–762, 1988. See
also [1942].
1942. Jorge J. Moré. A Collection of Nonlinear Model Problems. Technical Report MCS-P60-0289,
Argonne National Laboratory: Argonne, IL, USA, February 1989. See also [1941].
1943. Conwy Lloyd Morgan. On Modification and Variation. Science, 4(99):733–740, November 20,
1896, American Association for the Advancement of Science (AAAS): Washington, DC,
USA. doi: 10.1126/science.4.99.733. See also [1944].
1944. Conwy Lloyd Morgan. On Modification and Variation. In Adaptive Individuals in Evolving
Populations: Models and Algorithms [255], pages 81–90. Addison-Wesley Longman Publish-
ing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [1943].
1945. Proceedings of the 10th International Conference on Machine Learning (ICML’93), June 27–
29, 1993, University of Massachusetts (UMASS): Amherst, MA, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-307-7.
1946. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence
(IJCAI’97-I), August 23–29, 1997, Nagoya, Japan, volume 1. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-97-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1947, 2260].
1947. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence
(IJCAI’97-II), August 23–29, 1997, Nagoya, Japan, volume 2. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-97-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [1946, 2260].
1948. Naoki Mori, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to a Changing Environment
by Means of the Thermodynamical Genetic Algorithm. In PPSN IV [2818], pages 513–522,
1996. doi: 10.1007/3-540-61723-X_1015.
1949. Naoki Mori, Seiji Imanishi, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to Changing
Environments by Means of the Memory Based Thermodynamical Genetic Algorithm. In
ICGA’97 [166], pages 299–306, 1997.
1950. Naoki Mori, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to a Changing Environment
by Means of the Feedback Thermodynamical Genetic Algorithm. In PPSN V [866], pages
149–158, 1998. doi: 10.1007/BFb0056858.
1951. David E. Moriarty, Alan C. Schultz, and John J. Grefenstette. Evolutionary Algorithms for
Reinforcement Learning. Journal of Artificial Intelligence Research (JAIR), 11:241–276,
September 1999, AI Access Foundation: El Segundo, CA, USA. doi: 10.1613/jair.613.
Fully available at http://www.jair.org/media/613/live-613-1809-jair.pdf [accessed 2007-09-12].
CiteSeerx : 10.1.1.40.2231.
1952. Artemis Moroni, Fernando José Von Zuben, and Jonatas Manzolli. ArTbitration. In
GAVAM’00 [1452], pages 143–145, 2000. See also [1953].
1953. Artemis Moroni, Fernando José Von Zuben, and Jonatas Manzolli. ArTbitrariness in Mu-
sic (Musical ArTbitrariness). In GA’00 [2201], 2000. Fully available at http://www.
generativeart.com/on/cic/2000/MORONI.HTM [accessed 2009-06-18]. See also [1952].
1954. Robert Morris, editor. Deep Blue Versus Kasparov: The Significance for Artificial In-
telligence, Collected Papers from the 1997 AAAI Workshop (AAAI’97 WS), July 28,
1997. AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-031-6. GBV-Identification
(PPN): 242427014.
1955. Ronald Walter Morrison. Designing Evolutionary Algorithms for Dynamic Environments.
PhD thesis, George Mason University (GMU): Fairfax, VA, USA, 2002, Kenneth Alan De
Jong, Advisor. See also [1956].
1956. Ronald Walter Morrison. Designing Evolutionary Algorithms for Dynamic Environments,
volume 24 in Natural Computing Series. Springer New York: New York, NY, USA, June 2004.
isbn: 3-540-21231-0. Google Books ID: EaNkNDYq 8. OCLC: 55746331 and 57168232.
Library of Congress Control Number (LCCN): 2004102479. See also [1955].
1957. Ronald Walter Morrison and Kenneth Alan De Jong. A Test Problem Generator for
Non-Stationary Environments. In CEC’99 [110], pages 2047–2053, volume 3, 1999.
doi: 10.1109/CEC.1999.785526.
1958. Ronald Walter Morrison and Kenneth Alan De Jong. Measurement of Population
Diversity. In EA’01 [606], pages 1047–1074, 2001. doi: 10.1007/3-540-46033-0_3.
CiteSeerx : 10.1.1.108.1903.
1959. Joel N. Morse. Reducing the Size of the Nondominated Set: Pruning by Clustering. Comput-
ers & Operations Research, 7(1–2):55–66, 1980, Pergamon Press: Oxford, UK and Elsevier
Science Publishers B.V.: Essex, UK. doi: 10.1016/0305-0548(80)90014-3.
1960. Joel N. Morse, editor. Proceedings of the 4th International Conference on Multiple Crite-
ria Decision Making: Organizations, Multiple Agents With Multiple Criteria (MCDM’80),
August 10–15, 1980, University of Delaware: Newark, DE, USA, volume 190 in Lec-
ture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg.
isbn: 0387108211. Google Books ID: ms5-AAAACAAJ. OCLC: 7575489, 246648572,
310683024, 421655673, 470793740, and 612090473.
1961. Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Mar-
tial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program
C3P 826, California Institute of Technology (Caltech), Caltech Concurrent Computa-
tion Program (C3P): Pasadena, CA, USA, 1989. Fully available at http://www.each.
usp.br/sarajane/SubPaginas/arquivos_aulas_IA/memetic.pdf [accessed 2011-12-05].
CiteSeerx : 10.1.1.27.9474.
1962. Pablo Moscato. Memetic Algorithms. In Handbook of Applied Optimization [2125], Chapter
3.6.4, pages 157–167. Oxford University Press, Inc.: New York, NY, USA, 2002.
1963. Pablo Moscato and Carlos Cotta. A Gentle Introduction to Memetic Algorithms. In
Handbook of Metaheuristics [1068], Chapter 5, pages 105–144. Kluwer Academic Pub-
lishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 2003.
doi: 10.1007/0-306-48056-5_5. Fully available at http://www.lcc.uma.es/˜ccottap/
papers/handbook03memetic.pdf [accessed 2010-09-11]. CiteSeerx : 10.1.1.77.5300.
1964. Sanaz Mostaghim. Multi-objective Evolutionary Algorithms: Data Structures, Convergence,
and Diversity, Elektrotechnik. PhD thesis, Universität Paderborn, Fakultät für Elektrotechnik,
Informatik und Mathematik: Paderborn, Germany, Shaker Verlag GmbH: Aachen,
North Rhine-Westphalia, Germany, February 2005. isbn: 3-8322-3661-9. Google Books
ID: HoVLAAAACAAJ and pJpeOQAACAAJ. OCLC: 62900366.
1965. Jack Mostow and Chuck Rich, editors. Proceedings of the Fifteenth National Conference
on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Confer-
ence (AAAI’98, IAAI’98), July 26–30, 1998, Madison, WI, USA. AAAI Press: Menlo Park,
CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51098-7. Partly available
at http://www.aaai.org/Conferences/AAAI/aaai98.php and http://www.aaai.
org/Conferences/IAAI/iaai98.php [accessed 2007-09-06].
1966. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms, Cambridge Interna-
tional Series on Parallel Computation. Cambridge University Press: Cambridge, UK, 1995.
isbn: 0-521-47465-5. Google Books ID: QKVY4mDivBEC. OCLC: 31606869. GBV-
Identification (PPN): 180148303, 310077257, 508849136, 555592650, and 563593563.
LC Classification: QA274 .M68 1995.
1967. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. ACM Comput-
ing Surveys (CSUR), 28(1):33–37, March 1996, ACM Press: New York, NY, USA.
doi: 10.1145/234313.234327.
1968. VIII Metaheuristics International Conference (MIC’09), July 13–16, 2009, Mövenpick Hotel
Hamburg: Hamburg, Germany.
1969. Heinz Mühlenbein. Parallel Genetic Algorithms, Population Genetics and Combinatorial
Optimization. In WOPPLOT’89 [245], pages 398–406, 1989. doi: 10.1007/3-540-55027-5_23.
See also [1970].
1970. Heinz Mühlenbein. Parallel Genetic Algorithms, Population Genetics and Combinatorial
Optimization. In ICGA’89 [2414], pages 416–421, 1989. See also [1969].
1971. Heinz Mühlenbein. Evolution in Time and Space – The Parallel Genetic Algorithm. In
FOGA’90 [2562], pages 316–337, 1990. Fully available at http://muehlenbein.org/
parallel.PDF and http://www.iais.fraunhofer.de/fileadmin/images/pics/
Abteilungen/INA/muehlenbein/PDFs/evolutionary/I/1991_Evolution.PDF [accessed 2009-08-12].
CiteSeerx : 10.1.1.54.6763.
1972. Heinz Mühlenbein. How Genetic Algorithms Really Work – I. Mutation and Hillclimbing. In
PPSN II [1827], pages 15–26, 1992. Fully available at http://muehlenbein.org/mut92.
pdf [accessed 2010-09-11].
1973. Heinz Mühlenbein. The Equation for Response to Selection and its Use for Prediction.
Evolutionary Computation, 5(3):303–346, Fall 1997, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1997.5.3.303. Fully available at http://citeseer.ist.psu.edu/old/
730919.html [accessed 2009-08-21]. PubMed ID: 10021762.
1974. Heinz Mühlenbein and Gerhard Paaß. From Recombination of Genes to the Estima-
tion of Distributions I. Binary Parameters. In PPSN IV [2818], pages 178–187, 1996.
doi: 10.1007/3-540-61723-X_982. Fully available at ftp://ftp.ais.fraunhofer.de/
pub/as/ga/gmd_as_ga-96_04.ps [accessed 2009-08-20]. CiteSeerx : 10.1.1.7.7030. See
also [1976].
1975. Heinz Mühlenbein and Dirk Schlierkamp-Voosen. Predictive Models for the Breeder
Genetic Algorithm I: Continuous Parameter Optimization. Evolutionary Computation,
1(1):25–49, Spring 1993, MIT Press: Cambridge, MA, USA, Marc Schoenauer, edi-
tor. doi: 10.1162/evco.1993.1.1.25. Fully available at http://www.muehlenbein.org/
breeder93.pdf [accessed 2010-12-04]. CiteSeerx : 10.1.1.30.9381.
1976. Heinz Mühlenbein, Jürgen Bendisch, and Hans-Michael Voigt. From Recombination of Genes
to the Estimation of Distributions II. Continuous Parameters. In PPSN IV [2818], pages 188–
197, 1996. doi: 10.1007/3-540-61723-X_983. Fully available at http://www.amspr.gfai.
de/techreports/ppsn96-1.pdf [accessed 2009-08-20]. CiteSeerx : 10.1.1.9.2941. See also
[1974].
1977. Srinivas Mukkamala, Andrew H. Sung, and Ajith Abraham. Modeling Intrusion Detection
Systems using Linear Genetic Programming Approach. In Robert Orchard, Chunsheng Yang,
and Moonis Ali, editors, IEA/AIE’04 [2088], pages 633–642, 2004. Fully available at http://
www.rmltech.com/LGP%20Based%20IDS.pdf [accessed 2008-06-17].
1978. Jörg Müller, W. Lewis Johnson, and Barbara Hayes-Roth, editors. Proceedings of the First
International Conference on Autonomous Agents (AGENTS’97), February 5–8, 1997, Marina
del Rey, CA, USA. ACM Press: New York, NY, USA. isbn: 0-89791-877-0. Fully available
at http://sigart.acm.org/proceedings/agents97/ [accessed 2009-06-27]. Google Books
ID: ktPaAQAACAAJ. OCLC: 37622491 and 71484604. Sponsor: SIGGRAPH – ACM
Special Interest Group on Computer Graphics and Interactive Techniques.
1979. Masaharu Munetomo. Designing Genetic Algorithms for Adaptive Routing Algorithms
in the Internet. In GECCO’02 WS [224], pages 215–216, 2002. Fully available
at http://uk.geocities.com/markcsinclair/ps/etppf_mun.ps.gz [accessed 2009-09-
02]. CiteSeerx : 10.1.1.50.2448. Partly available at http://assam.cims.hokudai.
ac.jp/˜munetomo/documents/gecco99-ws.ppt and http://uk.geocities.com/
markcsinclair/ps/etppf_mun_slides.ps.gz [accessed 2009-09-02]. See also [1982].
1980. Masaharu Munetomo and David Edward Goldberg. Linkage Identification by Non-
monotonicity Detection for Overlapping Functions. IlliGAL Report 99005, Illinois Ge-
netic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of
General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL,
USA, January 1999. Fully available at http://www.illigal.uiuc.edu/pub/papers/
IlliGALs/99005.ps.Z [accessed 2008-10-17]. CiteSeerx : 10.1.1.33.9322. See also [1981].
1981. Masaharu Munetomo and David Edward Goldberg. Linkage Identification by Non-
monotonicity Detection for Overlapping Functions. Evolutionary Computation, 7(4):377–398,
Winter 1999, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1999.7.4.377. See also
[1980].
1982. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. An Adaptive Network Routing
Algorithm Employing Path Genetic Operators. In ICGA’97 [166], pages 643–649, 1997. See
also [1979, 1983, 1984].
1983. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. An Adaptive Routing Algorithm
with Load Balancing by a Genetic Algorithm. Transactions of the Information Processing
Society of Japan (IPSJ), 39(2):219–227, 1998. in Japanese. See also [1982].
1984. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. A Migration Scheme for the
Genetic Adaptive Routing Algorithm. In SMC’98 [1364], pages 2774–2779, volume 3, 1998.
doi: 10.1109/ICSMC.1998.725081. See also [1982].
1985. Durga Prasad Muni, Nikhil R. Pal, and Jyotirmoy Das. A Novel Approach to Design
Classifiers using Genetic Programming. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 8(2):183–196, April 2004, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2004.825567. INSPEC Accession Number: 7973773.
1986. Nitin Muttil and Shie-Yui Liong. Superior Exploration–Exploitation Balance in Shuf-
fled Complex Evolution. Journal of Hydraulic Engineering, 130(12):1202–1205, Decem-
ber 2004, ASCE (American Society of Civil Engineers) Publications: Reston, VA, USA.
doi: 10.1061/(ASCE)0733-9429(2004)130:12(1202).
1987. John Mylopoulos and Raymond Reiter, editors. Proceedings of the 12th International
Joint Conference on Artificial Intelligence (IJCAI’91-I), August 24–30, 1991, Sydney,
NSW, Australia, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-160-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-91-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [831, 1988, 2122].
1988. John Mylopoulos and Raymond Reiter, editors. Proceedings of the 12th International
Joint Conference on Artificial Intelligence (IJCAI’91-II), August 24–30, 1991, Sydney,
NSW, Australia, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-160-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-91-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [831, 1987, 2122].
1989. Francesco Nachira, Paolo Dini, Andrea Nicolai, Marion Le Louarn, and Lorena Rivera
Lèon, editors. Digital Business Ecosystems. Office for Official Publications of the
European Communities: Luxembourg, 2007. isbn: 92-79-01817-5. Fully available
at http://www.digital-ecosystems.org/book/2006-4156_PROOF-DCS.pdf [accessed 2009-07-21].
1990. Thomas P. Nadeau and Toby J. Teorey. Achieving Scalability in OLAP Materialized
View Selection. In DOLAP’02 [2558], pages 28–34, 2002. doi: 10.1145/583890.583895.
CiteSeerx : 10.1.1.96.8347.
1991. Proceedings of the Third Asia-Pacific Conference on Simulated Evolution And Learning
(SEAL’00), October 25–27, 2000, Nagoya Congress Center: Nagoya, Japan.
1992. Third Online Workshop on Soft Computing (WSC3), August 1998, Nagoya, Japan.
1993. Fourth Online Workshop on Soft Computing (WSC4), September 1999, Nagoya, Japan.
1994. Akito Naito, Fumio Harashima, Shun-Ichi Amari, Toshio Fukuda, and Kunihiko Fukushima,
editors. Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN
’93), October 25–29, 1993, Nagoya Congress Center: Nagoya, Japan. IEEE Computer
Society Press: Los Alamitos, CA, USA. isbn: 0-7803-1421-2, 0-7803-1422-0, and
0-7803-1423-9. Library of Congress Control Number (LCCN): 93-79884. Catalogue
no.: 93CH3353-0.
1995. Tadashi Nakano and Tatsuya Suda. Adaptive and Evolvable Network Services. In GECCO’04-
I [751], pages 151–162, 2004. Fully available at http://www.cs.york.ac.uk/rts/docs/
GECCO_2004/Conference%20proceedings/papers/3102/31020151.pdf [accessed 2008-
08-29]. See also [1996, 1997].
1996. Tadashi Nakano and Tatsuya Suda. Self-Organizing Network Services with Evolutionary
Adaptation. IEEE Transactions on Neural Networks, 16(5):1269–1278, September 2005,
IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/TNN.2005.853421.
CiteSeerx : 10.1.1.74.9149. INSPEC Accession Number: 8586518. See also [1995].
1997. Tadashi Nakano and Tatsuya Suda. Applying Biological Principles to Designs of Network
Services. Applied Soft Computing, 7(3):870–878, June 2007, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.asoc.2006.04.006. See also [1995].
1998. Wonhong Nam, Hyunyoung Kil, and Dongwon Lee. Type-Aware Web Service Composi-
tion Using Boolean Satisfiability Solver. In CEC/EEE’08 [1346], pages 331–334, 2008.
doi: 10.1109/CECandEEE.2008.108. INSPEC Accession Number: 10475104. See also
[204, 1999, 2064, 2066, 3034].
1999. Wonhong Nam, Hyunyoung Kil, and Jungjae Lee. QoS-Driven Web Service Composi-
tion Using Learning-Based Depth First Search. In CEC’09 [1245], pages 507–510, 2009.
doi: 10.1109/CEC.2009.50. INSPEC Accession Number: 10839131. See also [662, 1571, 1998,
2064, 2066, 2067, 3034].
2000. Srini Narayanan and Sheila A. McIlraith. Simulation, Verification and Automated Compo-
sition of Web Services. In WWW’02 [26], pages 77–88, 2002. doi: 10.1145/511446.511457.
Fully available at http://reference.kfupm.edu.sa/content/s/i/simulation__
verification_and_automated_c_10676.pdf, http://www.cs.toronto.
edu/kr/publications/nar-mci-www11.pdf, http://www.cs.toronto.edu/
˜sheila/publications/nar-mci-www11.pdf, http://www.cs.uoregon.edu/
classes/08W/cis607sii/papers/NarayananMcIlraith02.pdf, and http://
www.icsi.berkeley.edu/˜snarayan/nar-mci-www11.pdf [accessed 2011-01-11].
CiteSeerx : 10.1.1.16.4126.
2001. NASA High Performance Computing and Communications Computational Aerosciences
Workshop (CAS’00), February 15–17, 2000, NASA Ames Research Center: Moffett Field,
CA, USA.
2002. Collected Abstracts for the First International Workshop on Learning Classifier Systems
(IWLCS’92), October 6–9, 1992, NASA Johnson Space Center: Houston, TX, USA. See
also [2534].
2003. 9th Conference on Technologies and Applications of Artificial Intelligence (TAAI’04), Novem-
ber 5–6, 2004, National Chengchi University (NCCU): Muzha, Wenshan District, Taipei City,
Taiwan. Fully available at http://www.taai.org.tw/taai2004/ [accessed 2010-07-30].
2004. 10th Conference on Technologies and Applications of Artificial Intelligence (TAAI’05), De-
cember 2–3, 2005, National University of Kaohsiung: Kaohsiung, Taiwan. Fully available
at http://www.taai.org.tw/taai2005/ [accessed 2010-07-30].
2005. Techniques foR Implementing Constraint Programming Systems (TRICS). Technical Report
TRA9/00, National University of Singapore, School of Computing: Singapore, September 22,
2000, Singapore. Workshop in conjunction with CP 2000.
2006. 12th Conference on Technologies and Applications of Artificial Intelligence (TAAI’07),
November 16–17, 2007, National Yunlin University of Science and Technology: Douliou,
Yunlin, Taiwan.
2007. Bart Naudts and Alain Verschoren. Epistasis on Finite and Infinite Spaces. In Inter-
Symp’96 [1412], pages 19–23, 1996. CiteSeerx : 10.1.1.32.6455.
2008. Bart Naudts and Alain Verschoren. Epistasis and Deceptivity. Bulletin of the Belgian Mathe-
matical Society – Simon Stevin, 6(1):147–154, 1999, Belgian Mathematical Society: Campus
de la Plaine, Brussels, Belgium. Fully available at http://projecteuclid.org/euclid.
bbms/1103149975 [accessed 2009-07-11]. CiteSeerx : 10.1.1.32.4912. Zentralblatt MATH
identifier: 0915.68143. Mathematical Reviews number (MathSciNet): MR1674709.
2009. Bart Naudts, Dominique Suys, and Alain Verschoren. Generalized Royal Road Functions
and their Epistasis. Computers and Artificial Intelligence, 19(4), 2000, Slovak Academy of
Science: Bratislava, Slovakia.
2010. Boris Naujoks, Nicola Beume, and Michael T.M. Emmerich. Multi-objective Optimisa-
tion using S-metric Selection: Application to Three-Dimensional Solution Spaces. In
CEC’05 [641], pages 1282–1289, volume 2, 2005. doi: 10.1109/CEC.2005.1554838. Fully
available at http://www.lania.mx/˜ccoello/naujoks05.pdf.gz [accessed 2009-07-18].
CiteSeerx : 10.1.1.60.4181. INSPEC Accession Number: 8715411.
2011. Yehuda Naveh and Pierre Flener, editors. Fifth International Workshop on Local Search Tech-
niques in Constraint Satisfaction (LSCS’08), September 18, 2008, Sydney, NSW, Australia.
Fully available at http://sysrun.haifa.il.ibm.com/hrl/lscs2008/ [accessed 2009-06-
24].
2012. Bernhard Nebel, editor. Proceedings of the Seventeenth International Joint Conference on
Artificial Intelligence (IJCAI’01), August 4–10, 2001, Seattle, WA, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-777-3 and 1-55860-813-3.
Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-2001/content/content.
htm [accessed 2008-04-01]. Google Books ID: BHlFAAAAYAAJ. OCLC: 48481676. See also [1810].
2013. Nadia Nedjah and Luiza de Macedo Mourelle, editors. Swarm Intelligent Systems, vol-
ume 26 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006.
doi: 10.1007/978-3-540-33869-7. isbn: 3-540-33868-3. Google Books ID: 32PmGAAACAAJ
and MJOvAAAACAAJ. Library of Congress Control Number (LCCN): 2006925434. Series
editor: Janusz Kacprzyk.
2014. Nadia Nedjah, Luiza de Macedo Mourelle, Ajith Abraham, and Mario Köppen, editors.
5th International Conference on Hybrid Intelligent Systems (HIS’05), November 6–9, 2005,
Pontifical Catholic University of Rio de Janeiro (PUC-Rio): Rio de Janeiro, RJ, Brazil.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2457-5. Partly available
at http://his05.hybridsystem.com/ [accessed 2007-09-01].
2015. Nadia Nedjah, Enrique Alba Torres, and Luiza de Macedo Mourelle, editors. Parallel Evolu-
tionary Computations, volume 22/2006 in Studies in Computational Intelligence. Springer-
Verlag: Berlin/Heidelberg, June 2006. doi: 10.1007/3-540-32839-4. isbn: 3-540-32837-8
and 3-540-32839-4. Google Books ID: J36LHQAACAAJ and kOC8GAAACAAJ. Library of
Congress Control Number (LCCN): 2006921792.
2016. Nadia Nedjah, Leandro dos Santos Coelho, and Luiza de Macedo Mourelle, editors.
Multi-Objective Swarm Intelligent Systems: Theory & Experiences, volume 261/2010
in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2010.
doi: 10.1007/978-3-642-05165-4. Library of Congress Control Number (LCCN): 2009940420.
2017. John Ashworth Nelder and Roger A. Mead. A Simplex Method for Function Minimization.
The Computer Journal, Oxford Journals, 7(4):308–313, January 1965, British Computer Soci-
ety: Swindon, UK. doi: 10.1093/comjnl/7.4.308. Fully available at http://www.garfield.
library.upenn.edu/classics1979/A1979HZ22100001.pdf and http://www.
rupley.com/˜jar/Rupley/Code/src/simplex/nelder-mead-simplex.pdf [accessed 2010-12-24].
CiteSeerx : 10.1.1.144.5920.
2018. Kevin M. Nelson. A Comparison of Evolutionary Programming and Genetic Algorithms for
Electronic Part Placement. In EP’95 [1848], pages 503–519, 1995.
2019. Venkata Ranga Neppalli, Chuen-Lung Chen, and Jatinder N. D. Gupta. Genetic Algorithms
for the Two-Stage Bicriteria Flowshop Problem. European Journal of Operational Research
(EJOR), 95(2), December 6, 1996, Elsevier Science Publishers B.V.: Amsterdam, The
Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/0377-2217(95)00275-8.
2020. Ferrante Neri, Niko Kotilainen, and Mikko Vapa. An Adaptive Global-Local Memetic Al-
gorithm to Discover Resources in P2P Networks. In EvoWorkshops’07 [1050], pages 61–70,
2007. doi: 10.1007/978-3-540-71805-5_7.
2021. Ferrante Neri, Jari Toivanen, Giuseppe Leonardo Cascella, and Yew-Soon Ong.
An Adaptive Multimeme Algorithm for Designing HIV Multidrug Therapies.
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB),
4(2), April 2007, IEEE Computer Society Press: Los Alamitos, CA, USA.
doi: 10.1109/TCBB.2007.070202 and 10.1109/tcbb.2007.1039. Fully available at http://
ntu-cg.ntu.edu.sg/ysong/journal/Adaptive-Multimeme-HIV.pdf [accessed 2011-04-
23]. CiteSeerx : 10.1.1.124.5677. PubMed ID: 17473319.
2022. Ferrante Neri, Jari Toivanen, and Raino A. E. Mäkinen. An Adaptive Evolutionary Al-
gorithm with Intelligent Mutation Local Searchers for Designing Multidrug Therapies for
HIV. Applied Intelligence – The International Journal of Artificial Intelligence, Neural Net-
works, and Complex Problem-Solving Technologies, 27(3):219–235, December 2007, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10489-007-0069-8.
2023. Arnold Neumaier. Global Optimization and Constraint Satisfaction. In GICOLAG’06 [2024],
2006. Fully available at http://www.mat.univie.ac.at/˜neum/glopt.html [ac-
cessed 2009-06-11].
2024. Arnold Neumaier, Immanuel Bomze, Laurence A. Wolsey, and Ioannis Emiris, editors. Global
Optimization – Integrating Convexity, Optimization, Logic Programming, and Computational
Algebraic Geometry (GICOLAG’06), December 4–8, 2006, International Erwin Schrödinger
Institute for Mathematical Physics (ESI): Vienna, Austria. Fully available at http://www.
mat.univie.ac.at/˜neum/gicolag.html [accessed 2009-06-11].
2025. Frank Neumann and Ingo Wegener. Can Single-Objective Optimization Profit from Multi-
objective Optimization? In Multiobjective Problem Solving from Nature – From Concepts
to Applications [1554], pages 115–130. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-3-540-72964-8 6.
2026. José Neves, Manuel Filipe Santos, and José Manuel Machado, editors. Progress in Ar-
tificial Intelligence – Proceedings of the 13th Portuguese Conference on Artificial Intelli-
gence; Workshops: GAIW, AIASTS, ALEA, AMITA, BAOSW, BI, CMBSB, IROBOT,
MASTA, STCS, and TEMA (EPIA’07), December 3–7, 2007, Guimarães, Portugal, volume
4874 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-77002-2.
isbn: 3-540-77000-3. Library of Congress Control Number (LCCN): 2007939905.
2027. Proceedings of the 12th IEEE Congress on Evolutionary Computation (CEC’11), June 5–8,
2011, New Orleans, LA, USA.
2028. Mark E. J. Newman and Robin Engelhardt. Effect of Neutral Selection on the Evolution of
Molecular Species. Proceedings of the Royal Society B: Biological Sciences, 256(1403):1333–
1338, July 22, 1998, Royal Society Publishing: London, UK. doi: 10.1098/rspb.1998.0438.
Fully available at http://www.santafe.edu/media/workingpapers/98-01-001.
pdf [accessed 2010-12-04]. CiteSeerx : 10.1.1.43.9103. arXiv ID: adap-org/9712005v1.
2029. Sir Isaac Newton. Philosophiæ Naturalis Principia Mathematica. Apud GUIL & Joh. INNYS,
Regiæ Societatis Typographos: London, UK, July 5, 1687. Fully available at http://gdz.
sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=294010 [accessed 2008-08-
21].
2030. Jerzy Neyman and Egon Sharpe Pearson. On the Use and Interpretation of Certain Test Cri-
teria for Purposes of Statistical Inference, Part I. Biometrika, 20A(1-2):175–240, July 1928,
Oxford University Press, Inc.: New York, NY, USA. doi: 10.1093/biomet/20A.1-2.175.
Reprinted in [2031].
2031. Jerzy Neyman and Egon Sharpe Pearson. Joint Statistical Papers. Cambridge Univer-
sity Press: Cambridge, UK, University of California Press: Berkeley, CA, USA, Hodder
Arnold: London, UK, and Lubrecht & Cramer, Ltd.: Port Jervis, NY, USA, October 1967.
isbn: 0520009916 and 0852647069. Google Books ID: KH1GHgAACAAJ. OCLC: 527105.
asin: B000JM5XB0 and B000P0O0T2.
2032. Xuan Hoai Nguyen. A Flexible Representation for Genetic Programming: Lessons from
Natural Language Processing. PhD thesis, University of New South Wales, University Col-
lege, School of Information Technology and Electrical Engineering: Sydney, NSW, Aus-
tralia, December 2004. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/hoai_thesis.html and http://www.cs.ucl.ac.uk/staff/W.Langdon/
ftp/papers/hoai_thesis.tar.gz [accessed 2010-07-26]. OCLC: 224847044, 455148847,
and 648818432.
2033. Xuan Hoai Nguyen, Robert Ian McKay, and Daryl Leslie Essam. Some Experimental Results
with Tree Adjunct Grammar Guided Genetic Programming. In EuroGP’02 [977], pages
229–238, 2002. doi: 10.1007/3-540-45984-7 22. Fully available at http://sc.snu.ac.kr/
PAPERS/TAGGGP.pdf [accessed 2007-09-09].
2034. Xuan Hoai Nguyen, Robert Ian McKay, Daryl Leslie Essam, and R. Chau. Solv-
ing the Symbolic Regression Problem with Tree-Adjunct Grammar Guided Genetic
Programming: The Comparative Results. In CEC’02 [944], pages 1326–1331, 2002.
doi: 10.1109/CEC.2002.1004435. Fully available at http://sc.snu.ac.kr/PAPERS/
MCEC2002.pdf [accessed 2007-09-09]. INSPEC Accession Number: 7328878.
2035. Giuseppe Nicosia, Vincenzo Cutello, Peter J. Bentley, and Jonathan Timmis, editors. Pro-
ceedings of the 3rd International Conference on Artificial Immune Systems (ICARIS’04),
September 13–16, 2004, Catania, Sicily, Italy, volume 3239 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-23097-1.
2036. Søren R. Nielsen, David Pisinger, and Per Marquardsen. Automatic Transformation of
Constraint Satisfaction Problems to Integer Linear Form – An Experimental Study. In
TRICS [2005], 2000. CiteSeerx : 10.1.1.36.9656 and 10.1.1.37.2112.
2037. Stefan Niemczyk. Ein Modellproblem mit einstellbarer Schwierigkeit zur Evaluierung Evo-
lutionärer Algorithmen. Master’s thesis, University of Kassel, Fachbereich 16: Elektrotech-
nik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, May 5, 2008, Thomas
Weise, Advisor, Kurt Geihs and Albert Zündorf, Committee members. Fully available
at http://www.it-weise.de/documents/files/N2008MPMES.pdf.
2038. Stefan Niemczyk and Thomas Weise. A General Framework for Multi-Model Estima-
tion of Distribution Algorithms. Technical Report, University of Kassel, Fachbereich
16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany,
March 10, 2010. Fully available at http://www.it-weise.de/documents/files/
NW2010AGFFMMEODA.pdf [accessed 2010-06-19]. See also [2917].
2039. Nils J. Nilsson, editor. Proceedings of the 3rd International Joint Conference on Artificial
Intelligence (IJCAI’73), August 20–23, 1973, Stanford University: Stanford, CA, USA. Fully
available at http://dli.iiit.ac.in/ijcai/IJCAI-73/CONTENT/content.htm [ac-
cessed 2008-04-01].
2040. Jorge Nocedal and Stephen J. Wright. Numerical Optimization, volume 18 in Springer Series
in Operations Research and Financial Engineering. Springer-Verlag GmbH: Berlin, Germany,
2nd edition, 2006. doi: 10.1007/978-0-387-40065-5. isbn: 0-387-98793-2. Google Books
ID: 2B9LUYAP3n4C, eNlPAAAAMAAJ, and epc5fX0lqRIC. Library of Congress Control
Number (LCCN): 2006923897.
2041. David Noever and Subbiah Baskaran. Steady-State vs. Generational Genetic Algorithms: A
Comparison of Time Complexity and Convergence Properties. Technical Report 92-07-032,
Santa Fe Institute: Santa Fé, NM, USA.
2042. Andreas Nolte and Rainer Schrader. A Note on the Finite Time Behaviour of Simulated
Annealing. In SOR’96 [3092], pages 175–180, 1996. See also [2043].
2043. Andreas Nolte and Rainer Schrader. A Note on the Finite Time Behaviour of Simulated An-
nealing. Mathematics of Operations Research (MOR), 25(3):476–484, August 2000, Institute
for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA.
doi: 10.1287/moor.25.3.476.12211. Fully available at http://www.zaik.de/˜paper/
unzip.html?file=zaik1999-347.ps [accessed 2009-07-03]. CiteSeerx : 10.1.1.40.9456.
Revised version from March 1999. See also [2042].
2044. Peter Nordin. A Compiling Genetic Programming System that Directly Manipulates the
Machine Code. In Advances in Genetic Programming Volume 1 [1539], Chapter 14, pages
311–331. MIT Press: Cambridge, MA, USA, 1994. Machine code GP for Sun SPARC and i868.
2045. Peter Nordin. Evolutionary Program Induction of Binary Machine Code and its Appli-
cations. PhD thesis, Universität Dortmund, Fachbereich Informatik: Dortmund, North
Rhine-Westphalia, Germany, Krehl Verlag: Münster, Germany. isbn: 3931546071.
OCLC: 40285878. GBV-Identification (PPN): 231072783. asin: 3931546071 and
B0033DLUWS.
2046. Peter Nordin and Wolfgang Banzhaf. Evolving Turing-Complete Programs for a
Register Machine with Self-Modifying Code. In ICGA’95 [883], pages 318–325,
1995. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
Nordin_1995_tcp.html [accessed 2008-09-16]. CiteSeerx : 10.1.1.34.4526.
2047. Peter Nordin and Wolfgang Banzhaf. Programmatic Compression of Images and Sound.
In GP’96 [1609], pages 345–350, 1996. Fully available at http://www.cs.mun.ca/
˜banzhaf/papers/gp96.pdf [accessed 2010-08-22]. CiteSeerx : 10.1.1.32.7927.
2048. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns
and Destructive Crossover in Genetic Programming. In Proceedings of the Workshop
on Genetic Programming: From Theory to Real-World Applications [2326], pages 6–
22, 1995. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
nordin_1995_introns.html [accessed 2010-08-22]. CiteSeerx : 10.1.1.38.8934. See also
[2049, 2051].
2049. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns and De-
structive Crossover in Genetic Programming. SysReport 3/95, Universität Dortmund, Fach-
bereich Informatik, Systems Analysis: Dortmund, North Rhine-Westphalia, Germany, 1995.
Fully available at http://www.cs.mun.ca/˜banzhaf/papers/ML95.pdf [accessed 2010-08-22]. CiteSeerx : 10.1.1.38.8934. See also [2048, 2051].
2050. Peter Nordin, Wolfgang Banzhaf, and Frank D. Francone. Efficient Evolution of Machine
Code for CISC Architectures using Instruction Blocks and Homologous Crossover. In Ad-
vances in Genetic Programming [2569], Chapter 12, pages 275–299. MIT Press: Cam-
bridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/˜wbl/aigp3/
ch12.pdf and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/nordin_1999_
aigp3.html [accessed 2008-09-16].
2051. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns and De-
structive Crossover in Genetic Programming. In Advances in Genetic Programming II [106],
pages 111–134. MIT Press: Cambridge, MA, USA, 1996. See also [2048, 2049].
2052. Peter Nordin, Wolfgang Banzhaf, and Markus F. Brameier. Evolution of a World
Model for a Miniature Robot using Genetic Programming. Robotics and Autonomous
Systems, 25(1–2):105–116, October 31, 1998, Elsevier Science Publishers B.V.: Amsterdam,
The Netherlands. doi: 10.1016/S0921-8890(98)00004-9. Fully available at http://
www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Nordin_1998_RAS.html [accessed 2008-09-
16]. CiteSeerx : 10.1.1.54.1211.
2053. Peter Nordin, Frank Hoffmann, Frank D. Francone, and Markus F. Brameier. AIM-
GP and Parallelism. In CEC’99 [110], pages 1059–1066, 1999. Fully available
at http://www.cs.mun.ca/˜banzhaf/papers/cec_parallel.pdf [accessed 2008-09-16].
CiteSeerx : 10.1.1.6.5450.
2054. Michael G. Norman and Pablo Moscato. A Competitive and Cooperative Approach to Com-
plex Combinatorial Search. Caltech Concurrent Computation Program 790, California Insti-
tute of Technology (Caltech), Caltech Concurrent Computation Program (C3P): Pasadena,
CA, USA, 1989. CiteSeerx : 10.1.1.44.776. See also [2055].
2055. Michael G. Norman and Pablo Moscato. A Competitive and Cooperative Approach to Com-
plex Combinatorial Search. In JAIIO’91 [514], pages 3.15–3.29, 1991. Also published as
Technical Report Caltech Concurrent Computation Program, Report. 790, California Insti-
tute of Technology, Pasadena, California, USA, 1989. See also [2054].
2056. Proceedings of the Second Hawaii International Conference on System Sciences (HICSS’69),
January 22–24, 1969, University of Hawaii at Manoa: Honolulu, HI, USA. North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands.
2057. M. Nowak and Peter K. Schuster. Error Thresholds of Replication in Finite Populations:
Mutation Frequencies and the Onset of Muller’s Ratchet. Journal of Theoretical Biology, 137
(4):375–395, April 20, 1989, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
Imprint: Academic Press Professional, Inc.: San Diego, CA, USA, D. Kirschner, Y. Iwasa,
and L. Wolpert, editors. PubMed ID: 2626057.
2058. Martin J. Oates and David Wolfe Corne. Global Web Server Load Balancing using Evo-
lutionary Computational Techniques. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 5(4):297–312, August 2001, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s005000100103.
2059. Shigeru Obayashi. Multidisciplinary Design Optimization of Aircraft Wing Planform
Based on Evolutionary Algorithms. In SMC’98 [1364], pages 3148–3153, volume 4, 1998.
doi: 10.1109/ICSMC.1998.726486. Fully available at http://www.lania.mx/˜ccoello/
obayashi98a.pdf.gz [accessed 2009-06-16]. INSPEC Accession Number: 6195712.
2060. Shigeru Obayashi and Daisuke Sasaki. Visualization and Data Mining of Pareto
Solutions Using Self-Organizing Map. In EMO’03 [957], pages 796–806, 2003.
doi: 10.1007/3-540-36970-8 56. Fully available at http://www.ifs.tohoku.ac.jp/
edge/library/Final%20EMO2003.pdf [accessed 2009-07-18].
2061. Shigeru Obayashi, Kalyanmoy Deb, Carlo Poloni, Tomoyuki Hiroyasu, and Tadahiko
Murata, editors. Proceedings of the Fourth International Conference on Evolution-
ary Multi-Criterion Optimization (EMO’07), March 5–8, 2007, Matsushima, Sendai,
Japan, volume 4403/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/978-3-540-70928-2. isbn: 3-540-70927-4, 3-540-70928-2, and
6610865078. Google Books ID: f9Lcf6imJ8MC. OCLC: 84611501, 180728448,
315394594, 318300130, and 403748482. Library of Congress Control Number
(LCCN): 2007921125.
2062. Gabriela Ochoa, Inman Harvey, and Hilary Buxton. Error Thresholds and Their Relation to
Optimal Mutation Rates. In ECAL’99 [934], pages 54–63, 1999. doi: 10.1007/3-540-48304-7.
Fully available at ftp://ftp.cogs.susx.ac.uk/pub/users/inmanh/ecal99.ps.gz
[accessed 2009-07-10]. CiteSeerx : 10.1.1.43.9263.
2063. Christopher K. Oei, David Edward Goldberg, and Shau-Jin Chang. Tournament Selection,
Niching, and the Preservation of Diversity. IlliGAL Report 91011, Illinois Genetic Algo-
rithms Laboratory (IlliGAL), Department of Computer Science, Department of General En-
gineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, Decem-
ber 1991. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/
91011.ps.Z [accessed 2008-03-15].
2064. Seog-Chan Oh, Byung-Won On, Eric J. Larson, and Dongwon Lee. BF*: Web Services
Discovery and Composition as Graph Search Problem. In EEE’05 [1371], pages 784–786,
2005. doi: 10.1109/EEE.2005.41. INSPEC Accession Number: 8530439. See also [317, 662,
1999, 2065–2067, 3034].
2065. Seog-Chan Oh, Hyunyoung Kil, Dongwon Lee, and Soundar R. T. Kumara. Algorithms
for Web Services Discovery and Composition Based on Syntactic and Semantic Service De-
scriptions. In CEC/EEE’06 [2967], pages 433–435, 2006. doi: 10.1109/CEC-EEE.2006.12.
Fully available at http://pike.psu.edu/publications/eee06.pdf [accessed 2010-12-10].
CiteSeerx : 10.1.1.77.9275. INSPEC Accession Number: 9189341. See also [318, 662,
1998, 2064, 2066, 2067, 3034].
2066. Seog-Chan Oh, John Jung-Woon Yoo, Hyunyoung Kil, Dongwon Lee, and Soundar R. T. Ku-
mara. Semantic Web-Service Discovery and Composition Using Flexible Parameter Match-
ing. In CEC/EEE’07 [1344], pages 533–536, 2007. doi: 10.1109/CEC-EEE.2007.86. INSPEC
Accession Number: 9868641. See also [319, 662, 1999, 2064, 2065, 2067, 3034].
2067. Seog-Chan Oh, Ju-Yeon Lee, Seon-Hwa Cheong, Soo-Min Lim, Min-Woo Kim, Sang-
Seok Lee, Jin-Bum Park, Sang-Do Noh, and Mye M. Sohn. WSPR*: Web-Service
Planner Augmented with A* Algorithm. In CEC’09 [1245], pages 515–518, 2009.
doi: 10.1109/CEC.2009.13. INSPEC Accession Number: 10839129. See also [662, 1571, 1999,
2064–2066, 3034].
2068. Miloš Ohlídal, Jiří Jaroš, Josef Schwarz, and Václav Dvořák. Evolutionary Design of OAB and
AAB Communication Schedules for Interconnection Network. In EvoWorkshops’06 [2341],
pages 267–278, 2006. doi: 10.1007/11732242 24.
2069. Tatsuya Okabe, Yaochu Jin, Bernhard Sendhoff, and Markus Olhofer. Voronoi-based Es-
timation of Distribution Algorithm for Multi-objective Optimization. In CEC’04 [1369],
pages 1594–1601, volume 2, 2004. doi: 10.1109/CEC.2004.1331086. Fully available
at http://www.honda-ri.org/intern/hriliterature/PN%2008-04 and http://
www.soft-computing.de/cec-2004-1445-final.pdf [accessed 2009-08-29]. INSPEC Ac-
cession Number: 8229117.
2070. Nicole Oldham, Christopher Thomas, Amit P. Sheth, and Kunal Verma. METEOR-S Web
Service Annotation Framework with Machine Learning Classification. In SWSWPC’04 [493],
pages 137–146, 2004. doi: 10.1007/978-3-540-30581-1 12.
2071. Gina Oliveira and Stefano Vita. A Multi-Objective Evolutionary Algorithm with ε-dominance
to Calculate Multicast Routes with QoS Requirements. In CEC’09 [1350], pages 182–189,
2009. doi: 10.1109/CEC.2009.4982946. INSPEC Accession Number: 10688528.
2072. Anne L. Olsen. Penalty Functions and the Knapsack Problem. In CEC’94 [1891], pages
554–558, volume 2, 1994. doi: 10.1109/ICEC.1994.350000.
2073. Donald M. Olsson and Lloyd S. Nelson. The Nelder-Mead Simplex Procedure for Function
Minimization. Technometrics, 17(1):45–51, February 1975, American Statistical Association
(ASA): Alexandria, VA, USA and American Society for Quality (ASQ): Milwaukee, WI,
USA. JSTOR Stable ID: 1267998.
2074. Mihai Oltean. Searching for a Practical Evidence of the No Free Lunch Theorems. In
BioADIT’04 [1399], pages 472–483, 2004. doi: 10.1007/b101281. Fully available at http://
www.cs.ubbcluj.ro/˜moltean/oltean_bioadit_springer2004.pdf [accessed 2009-03-
01].
2075. Beatrice M. Ombuki-Berman and Franklin T. Hanshar. Using Genetic Algorithms for Multi-
depot Vehicle Routing. In Bio-inspired Algorithms for the Vehicle Routing Problem [2162],
pages 77–99. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3 4.
2076. Proceedings of the 2007 IEEE Symposium on Artificial Life (CI-ALife’07), April 1–5, 2007,
Honolulu, HI, USA. Omipress: Madison, WI, USA and IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0701-X.
2077. Michael O’Neill and Conor Ryan, editors. Grammatical Evolution Workshop (GEWS’02),
2002, Roosevelt Hotel: New York, NY, USA. AAAI Press: Menlo Park, CA, USA. Fully
available at http://www.grammatical-evolution.org/gews2002/ [accessed 2009-07-09].
Partly available at http://www.grammatical-evolution.org/pubs.html [accessed 2009-
07-09]. Part of [1790].
2078. Michael O’Neill and Conor Ryan, editors. Proceedings of the Second Grammatical Evolution
Workshop (GEWS’03), July 12, 2003, Holiday Inn Chicago: Chicago, IL, USA. See also
[484, 485].
2079. Michael O’Neill and Conor Ryan, editors. The 3rd Grammatical Evolution Workshop
(GEWS’04), June 26, 2004, Red Lion Hotel: Seattle, WA, USA. Published as CD. See
also [751, 752].
2080. Michael O’Neill, Leonardo Vanneschi, Steven Matt Gustafson, Anna Isabel Esparcia-
Alcázar, Ivanoe de Falco, Antonio Della Cioppa, and Ernesto Tarantino, editors. Ge-
netic Programming – Proceedings of the 11th European Conference on Genetic Program-
ming (EuroGP’08), March 26–28, 2008, Naples, Italy, volume 4971/2008 in Theoret-
ical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-78671-9.
isbn: 3-540-78670-8. Partly available at http://evostar.iti.upv.es/index.
php?option=com_content&view=article&id=46 [accessed 2011-02-28]. Google Books
ID: OAHUcHbXoNkC. OCLC: 280852852. Library of Congress Control Number
(LCCN): 2008922954.
2081. Yew-Soon Ong and Andy J. Keane. Meta-Lamarckian Learning in Memetic Algorithms.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 8(2):99–110, 2004, IEEE Com-
puter Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.819944. Fully available
at http://eprints.soton.ac.uk/22794/1/ong_04.pdf [accessed 2011-04-23]. INSPEC
Accession Number: 7973767.
2082. Seizo Onoe, Mohsen Guizani, Hsiao-Hwa Chen, and Mamoru Sawahashi, editors. Proceed-
ings of the International Conference on Wireless Communications and Mobile Computing
(IWCMC’06), July 3–6, 2006, Vancouver, BC, Canada. ACM Press: New York, NY, USA.
isbn: 1-59593-306-9. OCLC: 288959906.
2083. Godfrey C. Onwubolu and B. V. Babu, editors. New Optimization Techniques in Engineering,
volume 141 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-20167-X. Google Books ID: Yrsq-Pz0kAAC.
2084. Aleksandr Ivanovich Oparin. The Origin of Life. Courier Dover Publications: North
Chelmsford, MA, USA, 2nd edition, 1953–2003. isbn: 0486495221 and 0486602133.
Partly available at http://www.valencia.edu/˜orilife/textos/The%20Origin
%20of%20Life.pdf. Google Books ID: 3CYJOwAACAAJ, 65c5AAAAMAAJ, Jv8psJCtI0gC,
lDqJwAACAAJ, kW8ZKwAACAAJ, and qZB2o69KWSwC.
2085. Stan Openshaw. Building an Automated Modelling System to Explore a Universe of Spatial
Interaction Models. Geographical Analysis – An International Journal of Theoretical Geog-
raphy, 26(1):31–46, 1998, Wiley Periodicals, Inc.: Wilmington, DE, USA and Ohio State
University, Department of Geography: Columbus, OH, USA.
2086. Stan Openshaw and Ian Turton. Building New Spatial Interaction Models Using
Genetic Programming. In AISB’94 [936], 1994. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/openshaw_1994_bnsim.html and http://
www.geog.leeds.ac.uk/papers/94-1/94-1.pdf [accessed 2008-09-16].
2087. Abstracts of the National Conference of the Operations Research Society of Japan. Operations
Research Society of Japan (ORSJ): Tokyo, Japan.
2088. Robert Orchard, Chunsheng Yang, and Moonis Ali, editors. Proceedings of the 17th
International Conference on Industrial and Engineering Applications of Artificial Intelli-
gence and Expert Systems (IEA/AIE’04), May 17–20, 2004, Ottawa, ON, Canada, vol-
ume 3029/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b97304.
isbn: 3-540-22007-0. Google Books ID: T52Lf6oWKuMC. OCLC: 55592278, 59669961,
66574763, and 249601813. Library of Congress Control Number (LCCN): 2004105117.
2089. Una-May O’Reilly, Gwoing Tina Yu, Rick L. Riolo, and Bill Worzel, editors. Genetic Pro-
gramming Theory and Practice II, Proceedings of the Second Workshop on Genetic Pro-
gramming (GPTP’04), May 13–15, 2004, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA, volume 8 in Genetic Programming Series.
Kluwer Publishers: Boston, MA, USA. doi: 10.1007/b101112. isbn: 0-387-23253-2 and
0-387-23254-0.
2090. Agustin Orfila, Juan M. Estevez-Tapiador, and Arturo Ribagorda. Evolving High-Speed,
Easy-to-Understand Network Intrusion Detection Rules with Genetic Programming. In
EvoWorkshops’09 [1052], pages 93–98, 2009. doi: 10.1007/978-3-642-01129-0 11.
2091. Leslie E. Orgel. Prebiotic Adenine Revisited: Eutectics and Photochemistry. Origins of Life
and Evolution of the Biosphere, 34(4):361–369, August 2004, Springer Netherlands: Dor-
drecht, Netherlands. doi: 10.1023/B:ORIG.0000029882.52156.c2.
2092. Helcio R. B. Orlande, editor. 4th International Conference on Inverse Problems in En-
gineering: Theory and Practice – Inverse Problems in Engineering: Theory and Practice
(ICIPE’02), May 26–31, 2002, Angra dos Reis, Rio de Janeiro, Brazil. United Engineering
Foundation (UEF): New York, NY, USA. doi: 10.1080/089161502753341870. Fully available
at http://www.lttc.coppe.ufrj.br/4icipe/ [accessed 2009-07-03].
2093. Late Breaking Papers at Genetic and Evolutionary Computation Conference (GECCO’99
LBP), July 13–17, 1999, Orlando, FL, USA. See also [211, 2502].
2094. The Second International Workshop on Learning Classifier Systems (IWLCS’99), July 13,
1999, Orlando, FL, USA. Co-located with GECCO-99. See also [211].
2095. Albert Orriols-Puig and Ester Bernadó-Mansilla. Evolutionary Rule-Based Systems
for Imbalanced Data Sets. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 13(3), February 2009, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s00500-008-0319-7.
2096. Albert Orriols-Puig, Jorge Casillas, and Ester Bernadó-Mansilla. Genetic-based Ma-
chine Learning Systems are Competitive for Pattern Recognition. Evolutionary
Intelligence, 1(3):209–232, October 2008, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/s12065-008-0013-9.
2097. First European Working Session on Learning (EWSL’86), February 3–4, 1986, Orsay, France.
2098. Proceedings of the 10th International Symposium on Experimental Algorithms (SEA’11),
May 5–7, 2011, Orthodox Academy of Crete (OAC): Kolymvari, Kissamos, Chania, Crete,
Greece.
2099. Domingo Ortiz-Boyer, César Hervás-Martínez, and Carlos A. Reyes García. CIXL2: A
Crossover Operator for Evolutionary Algorithms Based on Population Features. Journal of
Artificial Intelligence Research (JAIR), 24:1–48, July 2005, AI Access Foundation, Inc.: El
Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available at http://
www.aaai.org/Papers/JAIR/Vol24/JAIR-2401.pdf and http://www.cs.cmu.
edu/afs/cs/project/jair/pub/volume24/ortizboyer05a-html/Ortiz-Boyer.
html [accessed 2009-10-24].
2100. Emilio G. Ortiz-García, Lucas Martínez-Bernabeu, Sancho Salcedo-Sanz, Francisco Flórez,
José Antonio Portilla-Figueras, and Angel M. Pérez-Bellido. A Parallel Evolutionary Algo-
rithm for the Hub Location Problem with Fully Interconnected Backbone and Access Net-
works. In CEC’09 [1350], pages 1501–1506, 2009. doi: 10.1109/CEC.2009.4983120. INSPEC
Accession Number: 10688715.
2101. Henry F. Osborn. Ontogenic and Phylogenic Variation. Science Magazine, 4(100):786–
789, November 27, 1896, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.4.100.786.
2102. Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press: Cambridge,
MA, USA, July 1994. isbn: 0-262-65040-1. Google Books ID: 5ntdaYX4LPkC.
2103. Ibrahim H. Osman. An Introduction to Metaheuristics. In Operational Research Tutorial
Papers – Annual Conference of the Operational Research Society [1695], pages 92–122, 1995.
2104. Ibrahim H. Osman and James P. Kelly, editors. 1st Metaheuristics International Conference
– Meta-Heuristics: The Theory and Applications (MIC’95), July 22–25, 1995, Breckenridge,
CO, USA. Kluwer Publishers: Boston, MA, USA. isbn: 0-7923-9700-2. Published in 1996.
2105. Ibrahim H. Osman and Gilbert Laporte. Metaheuristics: A bibliography. Annals
of Operations Research, 63(5):511–623, October 1996, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1007/BF02125421. Fully available at http://sb.aub.edu.lb/PersonalSites/
DrOsman/docs/Articles-pdf/BIBmh96.pdf [accessed 2010-09-17].
2106. Pavel Osmera, editor. Proceedings of the 6th International Conference on Soft Comput-
ing (MENDEL’00), June 7–9, 2000, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-1609-2.
2107. Pavel Ošmera and Radek Matoušek, editors. Proceedings of the 13th International Conference
on Soft Computing (MENDEL’07), September 5–7, 2007, Prague, Czech Republic. Brno
University of Technology, Faculty of Mechanical Engineering: Brno, Czech Republic.
2108. Fernando E. B. Otero, Monique M. S. Silva, and Alex Alves Freitas. Genetic Programming
For Attribute Construction In Data Mining. In GECCO’02 [1675], page 1270, 2002. See also
[2109].
2109. Fernando E. B. Otero, Monique M. S. Silva, Alex Alves Freitas, and Julio C. Nievola. Genetic
Programming for Attribute Construction in Data Mining. In EuroGP’03 [2360], pages 101–
121, 2003. doi: 10.1007/3-540-36599-0 36. Fully available at http://www.cs.kent.ac.uk/
people/staff/aaf/pub_papers.dir/EuroGP-2003-Fernando.pdf [accessed 2007-09-09].
See also [2108].
2110. James G. Oxley. Matroid Theory, volume 3 in Oxford Graduate Texts in Mathematics.
Oxford University Press, Inc.: New York, NY, USA, 1992. isbn: 0-19-853563-5 and
0-19-920250-8. OCLC: 26053360, 440044001, and 474911564. Library of Congress
Control Number (LCCN): 92020802. GBV-Identification (PPN): 111250307, 230783953,
383918693, 470663685, 501250778, and 522307191. LC Classification: QA166.6 .O95
1992.
2111. Akira Oyama. Wing Design Using Evolutionary Algorithm. PhD thesis, Tokyo University,
Department of Aeronautics and Space Engineering: Tokyo, Japan, March 2000, Kazuhiro
Nakahashi and Shigeru Obayashi, Advisors. Fully available at http://flab.eng.isas.
ac.jp/member/oyama/index2e.html [accessed 2009-06-16]. CiteSeerx : 10.1.1.24.3306.
2112. Ingo Paenke, Jürgen Branke, and Yaochu Jin. On the Influence of Phenotype Plasticity on
Genotype Diversity. In FOCI’07 [1865], pages 33–40, 2007. doi: 10.1109/FOCI.2007.372144.
Fully available at http://www.honda-ri.org/intern/Publications/PN%2052-06
and http://www.soft-computing.de/S001P005.pdf [accessed 2008-11-10]. Best student
paper.
2113. Charles Campbell Palmer. An Approach to a Problem in Network Design using Genetic
Algorithms. PhD thesis, Polytechnic University: New York, NY, USA, April 1994, Aaron
Kershenbaum, Supervisor, Susan Flynn Hummel, Richard M. van Slyke, and Robert R.
Boorstyn, Committee members. Fully available at ftp://ftp.cs.cuhk.hk/pub/EC/
thesis/phd/palmer-phd.ps.gz [accessed 2010-07-28]. CiteSeerx : 10.1.1.38.1716. UMI
Order No.: GAX94-31740. See also [2114].
2114. Charles Campbell Palmer and Aaron Kershenbaum. An Approach to a Problem in Network
Design using Genetic Algorithms. Networks, 26(3):151–163, 1995, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1002/net.3230260305. See also [2113].
2115. Charles Campbell Palmer and Aaron Kershenbaum. Representing Trees in Genetic Algo-
rithms. In CEC’94 [1891], pages 379–384, volume 1, 1994. CiteSeerx : 10.1.1.36.1655.
See also [2116].
2116. Charles Campbell Palmer and Aaron Kershenbaum. Representing Trees in Genetic Algo-
rithms. In Handbook of Evolutionary Computation [171], Chapter G1.3. Oxford University
Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House,
Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997. Fully available
at http://www.cs.cinvestav.mx/˜constraint/papers/palmer94representing.
ps [accessed 2010-07-26]. See also [2115].
2117. Themis Palpanas, Nick Koudas, and Alberto Mendelzon. On Space Constrained Set Selection
Problem. Data & Knowledge Engineering, 67(1):200–218, October 2008, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.datak.2008.06.011.
2118. Guanyu Pan, Quansheng Dou, and Xiaohua Liu. Performance of two Improved Particle
Swarm Optimization In Dynamic Optimization Environments. In ISDA’06 [553], pages 1024–
1028, volume 2, 2006. doi: 10.1109/ISDA.2006.253752. INSPEC Accession Number: 9275253.
2119. Hui Pan, Ling Wang, and Bo Liu. Particle Swarm Optimization for Function Optimization in
Noisy Environment. Journal of Applied Mathematics and Computation, 181(2):908–919, Oc-
tober 15, 2006, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.amc.2006.01.066.
2120. Giselher Pankratz and Veikko Krypczyk. Benchmark Data Sets for Dynamic Vehicle
Routing Problems. FernUniversität in Hagen, Lehrstuhl für Wirtschaftsinformatik: Ha-
gen, North Rhine-Westphalia, Germany, June 12, 2007. Fully available at http://www.
fernuni-hagen.de/WINF/inhfrm/benchmark_data.htm [accessed 2008-10-27].
2121. Hsueh-Yuan Pao. 2004 IEEE Antennas and Propagation Society (A-P-S) International Sympo-
sium and USNC/URSI National Radio Science Meeting, June 20–25, 2004. IEEE Computer
Society: Piscataway, NJ, USA. doi: 10.1109/APS.2004.1330376. isbn: 0-7803-8302-8.
2122. Mike P. Papazoglou and John Zeleznikow, editors. The Next Generation of Informa-
tion Systems: From Data to Knowledge – A Selection of Papers Presented at Two
IJCAI-91 Workshops (IJCAI’91 WS), August 26, 1991, Sydney, NSW, Australia, volume
611 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-55616-8 and
3-540-55616-8. OCLC: 26095701, 123310072, 230127949, 231363363, 320295775,
321349775, 423036170, 441767836, 471535298, and 499252204. Library of Congress
Control Number (LCCN): 92021889. GBV-Identification (PPN): 119292769. LC Classifi-
cation: QA76.9.D3 N485 1992. See also [831, 1987, 1988].
2123. Proceedings of the 5th International Conference on Simulated Evolution And Learning
(SEAL’04), October 26–29, 2004, Paradise Hotel: Busan, South Korea.
2124. Panos M. Pardalos and Ding-Zhu Du, editors. Handbook of Combinatorial Op-
timization, volume 1–3. Kluwer Academic Publishers: Norwell, MA, USA, Oc-
tober 31, 1998. isbn: 0-7923-5018-9, 0-7923-5019-7, 0-7923-5285-8, and
0-7923-5293-9. OCLC: 38485795 and 490888316. Library of Congress Control Number
(LCCN): 98002942. LC Classification: QA402.5 .H335 1998.
2125. Panos M. Pardalos and Mauricio G.C. Resende, editors. Handbook of Applied Opti-
mization. Oxford University Press, Inc.: New York, NY, USA, February 22, 2002.
isbn: 0-19-512594-0. OCLC: 45532495 and 248337411.
2126. Panos M. Pardalos, Nguyen Van Thoai, and Reiner Horst. Introduction to Global Op-
timization, volume 3 & 48 in Nonconvex Optimization and Its Applications. Springer
Science+Business Media, Inc.: New York, NY, USA, 1st/2nd edition, 1995–2000.
isbn: 0-7923-3556-2 and 0-7923-6756-1. Google Books ID: D8LxE6PB8fIC,
atHlWfTt9KAC, dbu02-1JbLIC, and w6bRM8W-oTgC.
2127. Vilfredo Federico Pareto. Cours d’Économie Politique. F. Rouge: Lausanne/Paris, France,
1896–1897. Partly available at http://ann.sagepub.com/cgi/reprint/9/3/128.pdf
[accessed 2009-04-23]. 2 volumes. See also [1938].
2128. Jong-man Park, Jeon-gue Park, Chong-hyun Lee, and Mun-sung Han. Robust and Effi-
cient Genetic Crossover Operator: Homologous Recombination. In IJCNN ’93 [1994], pages
2975–2978, volume 3, 1993. doi: 10.1109/IJCNN.1993.714347. INSPEC Accession Num-
ber: 4962425.
2129. Julia K. Parrish and William M. Hamner, editors. Animal Groups in Three Dimen-
sions: How Species Aggregate. Cambridge University Press: Cambridge, UK, Decem-
ber 1997. doi: 10.2277/0521460247. isbn: 0521460247. Google Books ID: zj4I4HNZSR0C.
OCLC: 37156372 and 243890243.
2130. Konstantinos E. Parsopoulos. Cooperative Micro-Differential Evolution for High-Dimensional
Problems. In GECCO’09-I [2342], pages 531–538, 2009. doi: 10.1145/1569901.1569975.
2131. Janice Partyka and Randolph Hall. Vehicle Routing Software Survey, volume 37 (1). Lion-
heart Publishing, Inc.: Marietta, GA, USA, February 2010. Fully available at http://www.
lionhrtpub.com/orms/surveys/Vehicle_Routing/vrss.html [accessed 2011-04-25]. See
also [2132].
2132. Janice Partyka and Randolph Hall. Vehicle Routing Software Survey – On the Road to
Connectivity – Creative integration of computer, communication and location technologies
help a wide range of industries thrive in difficult times. OR/MS Today, February 2010,
Lionheart Publishing, Inc.: Marietta, GA, USA and Institute for Operations Research and the
Management Sciences (INFORMS): Linthicum, MD, USA. Fully available at http://www.
lionhrtpub.com/orms/orms-2-10/frsurvey.html [accessed 2011-04-25]. See also [2131].
2133. Dilip Patel, Shushma Patel, and Yingxu Wang, editors. Proceedings of the 2nd IEEE Inter-
national Conference on Cognitive Informatics (ICCI’03), August 18–20, 2003, London, UK.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-1986-5.
2134. Norman R. Paterson. Genetic Programming with Context-Sensitive Grammars. PhD thesis,
University of St Andrews, School of Computer Science: St Andrews, Scotland, UK, Septem-
ber 2002, Michael Livesey, Supervisor. Fully available at http://www.cs.bham.ac.uk/
˜wbl/biblio/gp-html/paterson_thesis.html [accessed 2009-06-29].
2135. David A. Patterson and John L. Hennessy. Computer Organization and Design: The
Hardware/Software Interface, The Morgan Kaufmann Series in Computer Architecture
and Design. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 3rd edition,
2004. isbn: 0-12-088433-X and 1-558-60604-1. Google Books ID: 1lD9LZRcIZ8C.
OCLC: 56213091 and 503402668. GBV-Identification (PPN): 390849650.
2136. Diane B. Paul. FITNESS: Historical Perspectives. In Keywords in Evolutionary Biol-
ogy [1521], pages 112–114. Harvard University Press: Cambridge, MA, USA, 1992.
2137. Linus Carl Pauling. The Nature of the Chemical Bond and the Structure of Molecules
and Crystals: An Introduction to Modern Structural Chemistry. Cornell University Press:
Ithaca, NY, USA, 3rd edition, 1960. isbn: 0-8014-0333-2. Fully available at http://
osulibrary.oregonstate.edu/specialcollections/coll/pauling/ [accessed 2007-
07-12].
2138. Terry Payne and Valentina Tamma, editors. AAAI Fall Symposium on Agents and the Seman-
tic Web, November 3–6, 2005, Arlington, VA, USA, volume FS-05-01. AAAI Press: Menlo
Park, CA, USA.
2139. Darren Pearce, editor. The 12th White House Papers Graduate Research in Cognitive and
Computing Sciences at Sussex. Cognitive Science Research Papers (CSRP) CSRP 512, Uni-
versity of Sussex, School of Cognitive Science: Brighton, UK, November 1999, Isle of Thorns,
UK. Fully available at ftp://ftp.informatics.sussex.ac.uk/pub/reports/csrp/
csrp512.ps.Z [accessed 2009-09-02]. issn: 1350-3162.
2140. Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving, The
Addison-Wesley Series in Artificial Intelligence. Addison-Wesley Longman Publishing Co.,
Inc.: Boston, MA, USA, 1984. isbn: 0-201-05594-5. Google Books ID: 0XtQAAAAMAAJ
and 1HpQAAAAMAAJ. OCLC: 9644728 and 251831871. Library of Congress Control Num-
ber (LCCN): 83012217. GBV-Identification (PPN): 013593463 and 021186472. LC
Classification: Q335 .P38 1984.
2141. David W. Pearson, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the
2nd International Conference on Artificial Neural Nets and Genetic Algorithms (ICAN-
NGA’95), April 18–21, 1995, Alès, France. Springer-Verlag: Vienna, Austria.
isbn: 3-211-82692-0. Google Books ID: 77hQAAAAMAAJ. OCLC: 32241732.
2142. David W. Pearson, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the 6th
International Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA’03),
April 23–25, 2003, Roanne, France.
2143. Karl Pearson. The Problem of the Random Walk. Nature, 72:294, July 27, 1905.
doi: 10.1038/072294b0.
2144. Witold Pedrycz and Athanasios V. Vasilakos, editors. Computational Intelligence in
Telecommunications Networks. CRC Press, Inc.: Boca Raton, FL, USA, September 15,
2000. isbn: 084931075X. Google Books ID: 7lVj5pqSxXIC and dE6d-jPoDEEC.
OCLC: 43894172 and 55103484.
2145. Martin Pelikan. Hierarchical Bayesian Optimization Algorithm – Toward a New Generation
of Evolutionary Algorithms, volume 170/2005 in Studies in Fuzziness and Soft Computing.
Springer-Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/b10910. isbn: 3-540-23774-7.
Google Books ID: R0QHqcaTfIC. OCLC: 57730016 and 288213853. Library of Congress
Control Number (LCCN): 2004116659.
2146. Martin Pelikan. Analysis of Estimation of Distribution Algorithms and Genetic Algorithms on
NK Landscapes. In GECCO’08 [1519], pages 1033–1040, 2008. doi: 10.1145/1389095.1389287.
arXiv ID: 0801.3111. Session: Genetic Algorithms Papers.
2147. Martin Pelikan and David Edward Goldberg. Genetic Algorithms, Clustering, and the Break-
ing of Symmetry. IlliGAL Report 2000013, Illinois Genetic Algorithms Laboratory (Illi-
GAL), Department of Computer Science, Department of General Engineering, University of
Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, March 11, 2003. Fully avail-
able at www.illigal.uiuc.edu/pub/papers/IlliGALs/2000013.ps.Z [accessed 2009-08-28].
CiteSeerx: 10.1.1.35.8620. See also [2148].
2148. Martin Pelikan and David Edward Goldberg. Genetic Algorithms, Clustering, and the Break-
ing of Symmetry. In PPSN VI [2421], pages 385–394, 2000. doi: 10.1007/3-540-45356-3_38.
See also [2147].
2149. Martin Pelikan and Kumara Sastry, editors. Proceedings of the Optimization by Building
and Using Probabilistic Models Workshop (OBUPM’01), July 7, 2001, Holiday Inn Golden
Gateway Hotel: San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. Part of [2570].
2150. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. Linkage Problem, Distri-
bution Estimation, and Bayesian Networks. Technical Report 98013, Illinois Genetic Algo-
rithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engi-
neering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, Novem-
ber 1998. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/
98013.ps.Z [accessed 2009-07-11]. CiteSeerx: 10.1.1.45.213. See also [483]. Newly published
as [2152].
2151. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. BOA: The Bayesian Opti-
mization Algorithm. In GECCO’99 [211], pages 525–532, 1999. Fully available at https://
eprints.kfupm.edu.sa/28537/ [accessed 2008-10-18]. CiteSeerx: 10.1.1.46.8131. See also
[2152].
2152. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. BOA: The Bayesian Op-
timization Algorithm. IlliGAL Report 99003, Illinois Genetic Algorithms Laboratory (Illi-
GAL), Department of Computer Science, Department of General Engineering, University of
Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available
at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99003.ps.Z [accessed 2009-07-11].
CiteSeerx: 10.1.1.45.934. See also [2150, 2151].
2153. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Opti-
mization by Building and Using Probabilistic Models. Technical Report 99018, Illinois
Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department
of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign,
IL, USA, September 1999. Fully available at http://eprints.kfupm.edu.sa/21437/
and http://www.google.de/url?sa=t&source=web&ct=res&cd=8&url=http
%3A%2F%2Fwww.illigal.uiuc.edu%2Fpub%2Fpapers%2FIlliGALs%2F99018.ps.
Z&ei=N5JPSoz9DZOqmgP_wfiZCw&usg=AFQjCNEuIEIxh7My7MJ2wIKuLw7tQ34DMg
[accessed 2009-07-04]. CiteSeerx: 10.1.1.40.1920. See also [2154, 2156].
2154. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimization
by Building and Using Probabilistic Models. In ACC [1332], pages 3289–3293, volume 5,
2000. doi: 10.1109/ACC.2000.879173. See also [2153, 2156].
2155. Martin Pelikan, Heinz Mühlenbein, and A. O. Rodriguez, editors. Proceedings of the Opti-
mization by Building and Using Probabilistic Models Workshop (OBUPM’2000), 2000, Riviera
Hotel and Casino: Las Vegas, NV, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. Part of [2935].
2156. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimiza-
tion by Building and Using Probabilistic Models. Computational Optimization and Appli-
cations, 21(1):5–20, January 2002, Kluwer Academic Publishers: Norwell, MA, USA and
Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1013500812258. Fully avail-
able at http://w3.ualg.pt/˜flobo/papers/pmbga-survey.pdf [accessed 2009-07-04]. See
also [2153, 2154].
2157. Martin Pelikan, Kumara Sastry, and David Edward Goldberg. Multiobjective hBOA, Clus-
tering, and Scalability. IlliGAL Report 2005005, Illinois Genetic Algorithms Laboratory
(IlliGAL), Department of Computer Science, Department of General Engineering, Univer-
sity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, February 2005. Fully
available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2005005.pdf
[accessed 2009-08-28]. arXiv ID: cs/0502034v1. See also [2398].
2158. Martin Pelikan, Kumara Sastry, and Erick Cantú-Paz, editors. Scalable Optimiza-
tion via Probabilistic Modeling – From Algorithms to Applications, volume 33 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006.
doi: 10.1007/978-3-540-34954-9. isbn: 3-540-34953-7. Google Books ID: znzGDXPh6NAC.
OCLC: 72437004, 180920474, and 318298233. Library of Congress Control Number
(LCCN): 2006928618.
2159. David Alejandro Pelta and Natalio Krasnogor, editors. Proceedings of the 1st International
Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO’06), June 29–
30, 2006, Granada, Spain. Rosillo’s: Granada, Spain. isbn: 8468993239. Google Books
ID: jzgDGQAACAAJ.
2160. Parag C. Pendharkar and Gary J. Koehler. A General Steady State Distribution based
Stopping Criteria for Finite Length Genetic Algorithms. European Journal of Operational
Research (EJOR), 176(3):1436–1451, February 2007, Elsevier Science Publishers B.V.: Am-
sterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam,
The Netherlands, Roman Slowinski, Jesus Artalejo, Jean-Charles Billaut, Robert Dyson, and
Lorenzo Peccati, editors. doi: 10.1016/j.ejor.2005.10.050.
2161. Fei Peng, Ke Tang, Guoliang Chen, and Xin Yao. Population-based Algorithm Portfo-
lios for Numerical Optimization. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 14(5):782–800, March 29, 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2010.2040183. INSPEC Accession Number: 11559790.
2162. Francisco Baptista Pereira and Jorge Tavares, editors. Bio-inspired Algorithms for the Vehi-
cle Routing Problem, volume 161 in Studies in Computational Intelligence. Springer-Verlag:
Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3. Library of Congress Control Num-
ber (LCCN): 2008933225.
2163. J. Périaux, Carlo Poloni, and Gabriel Winter, authors. D. Quagliarella, editor. Genetic
Algorithms and Evolution Strategy in Engineering and Computer Science: Recent Ad-
vances and Industrial Applications. John Wiley & Sons Ltd.: New York, NY, USA, 1998.
isbn: 0471977101. Google Books ID: y NQAAAAMAAJ. OCLC: 38935523.
2164. Timothy Perkis. Stack Based Genetic Programming. In CEC’94 [1891], pages
148–153, volume 1, 1994. Fully available at ftp://coast.cs.purdue.edu/pub/
doc/genetic_algorithms/GP/Stack-Genetic-Programming.ps.gz [accessed 2010-08-19].
CiteSeerx: 10.1.1.53.2622.
2165. Adrian Perrig and Dawn Xiaodong Song. A First Step Towards the Automatic
Generation of Security Protocols. In NDSS’00 [1413], pages 73–83, 2000. Fully
available at http://sparrow.ece.cmu.edu/˜adrian/projects/protgen/protgen.
pdf, http://www.cs.berkeley.edu/˜dawnsong/papers/apg.pdf, and http://
www.isoc.org/isoc/conferences/ndss/2000/proceedings/031.pdf [accessed 2009-09-02].
CiteSeerx: 10.1.1.42.2688.
2166. Charles J. Petrie, editor. Semantic Web Services Challenge: Proceedings of the 2008 Work-
shops, 2009, LG-2009-01. Stanford University, Computer Science Department, Stanford Logic
Group: Stanford, CA, USA. Fully available at http://logic.stanford.edu/reports/
LG-2009-01.pdf [accessed 2010-12-12]. See also [1023].
2167. Alan Pétrowski. A Clearing Procedure as a Niching Method for Genetic Algorithms. In
CEC’96 [1445], pages 798–803, 1996. doi: 10.1109/ICEC.1996.542703. Fully available
at http://sci2s.ugr.es/docencia/cursoMieres/clearing-96.pdf [accessed 2008-12-16].
CiteSeerx: 10.1.1.33.8027. INSPEC Accession Number: 5375492.
2168. Alan Pétrowski. An Efficient Hierarchical Clustering Technique for Speciation. Tech-
nical Report, Institut National des Télécommunications: Evry Cedex, France, 1997.
CiteSeerx: 10.1.1.54.1131.
2169. Ferdinando Pezzella, editor. Giornate di Lavoro (Entreprise Systems: Management of Tech-
nological and Organizational Changes, “Gestione del cambiamento tecnologico ed organizza-
tivo nei sistemi d’impresa”) (AIRO’95), September 20–22, 1995, Ancona, Italy. Associazione
Italiana di Ricerca Operativa – Optimization and Decision Sciences (AIRO): Italy.
2170. Patrick C. Phillips. The Language of Gene Interaction. Genetics, 149(3):1167–1171,
July 1998, Genetics Society of America (GSA): Bethesda, MD, USA. Fully available
at http://www.genetics.org/cgi/reprint/149/3/1167.pdf [accessed 2009-07-11].
2171. Jiratta Phuboon-on and Raweewan Auepanwiriyakul. Selecting Materialized View Using
Two-Phase Optimization with Multiple View Processing Plan. Technical Report 27, World
Academy of Science, Engineering and Technology (WASET): Baku, Azerbaijan, 2007. Fully
available at http://www.waset.org/journals/waset/v27/v27-30.pdf [accessed 2011-
04-13].
2172. Samuel Pierre. Inferring New Design Rules by Machine Learning: A Case Study of Topological
Optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and
Humans, 28(5):575–585, September 1998, IEEE Systems, Man, and Cybernetics Society:
New York, NY, USA. doi: 10.1109/3468.709602. INSPEC Accession Number: 6006762.
2173. Samuel Pierre and Ali Elgibaoui. Improving Communication Network Topologies using Tabu
Search. In LCN’97 [1329], pages 44–53, 1997. doi: 10.1109/LCN.1997.630901. INSPEC
Accession Number: 5766134.
2174. Samuel Pierre and Fabien Houéto. Assigning Cells to Switches in Cellular Mobile Networks
using Taboo Search. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cyber-
netics, 32(3):351–356, June 2002, IEEE Systems, Man, and Cybernetics Society: New York,
NY, USA. doi: 10.1109/TSMCB.2002.999810. PubMed ID: 18238132. See also [2175].
2175. Samuel Pierre and Fabien Houéto. A Tabu Search Approach for Assigning Cells to Switches
in Cellular Mobile Networks. Computer Communications – The International Journal for
the Computer and Telecommunications Industry, 25(5):464–477, March 15, 2002, Elsevier
Science Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(01)00371-1. See also [2174].
2176. Samuel Pierre and Gabriel Ioan Ivascu, editors. Proceedings of the 2nd IEEE Interna-
tional Conference on Wireless and Mobile Computing, Networking and Communications
(WiMob’06), June 19–21, 2006, Montréal, QC, Canada. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 1-4244-0494-0. OCLC: 77118231 and 122525160. Order No.: 06EX1436.
2177. Samuel Pierre and Gisèle Legault. An Evolutionary Approach for Configuring Economical
Packet Switched Computer Networks. Artificial Intelligence in Engineering, 10(2):127–134,
1996, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0954-1810(95)00022-4. See
also [2178].
2178. Samuel Pierre and Gisèle Legault. A Genetic Algorithm for Designing Distributed Computer
Network Topologies. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cyber-
netics, 28(2):249–258, May 1998, IEEE Systems, Man, and Cybernetics Society: New York,
NY, USA. doi: 10.1109/3477.662766. See also [2177].
2179. Massimo Pigliucci, Courtney J. Murren, and Carl D. Schlichting. Review Article:
Phenotypic Plasticity in Evolution. Journal of Experimental Biology (JEB), 209(12):
2362–2367, June 2006, The Company of Biologists Ltd. (COB): Cambridge, MA, USA.
doi: 10.1242/jeb.02070. Fully available at http://jeb.biologists.org/cgi/reprint/
209/12/2362.pdf [accessed 2008-09-11].
2180. Martin Pincus. A Monte Carlo Method for the Approximate Solution of Certain Types
of Constrained Optimization Problems. Operations Research, 18(6):1225–1228, November–
December 1970, Institute for Operations Research and the Management Sciences (IN-
FORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA,
USA.
2181. János D. Pintér. Global Optimization in Action – Continuous and Lipschitz Optimiza-
tion: Algorithms, Implementations and Applications, volume 6 in Nonconvex Optimization
and Its Applications. Springer Science+Business Media, Inc.: New York, NY, USA. Im-
print: Kluwer Academic Publishers: Norwell, MA, USA, 1996. isbn: 0792337573. Google
Books ID: G8pF982ckNsC. GOiA won the 2000 INFORMS Computing Society prize (typed
in by E. Loew June 2008).
2182. 15th European Conference on Machine Learning (ECML’04), September 20–24, 2004, Pisa,
Italy.
2183. Proceedings of the 1st Workshop on Machine Learning (ML’80), 1980, Pittsburgh, PA, USA.
2184. R. L. Plackett. Some Theorems in Least Squares. Biometrika, 37(1/2):149–157, 1950, Oxford
University Press, Inc.: New York, NY, USA.
2185. Michaël Defoin Platel, Manuel Clergue, and Philippe Collard. Maximum Homologous
Crossover for Linear Genetic Programming. In EuroGP’03 [2360], pages 29–48, 2003. Fully
available at http://hal.archives-ouvertes.fr/docs/00/15/97/39/PDF/eurogp_
03.pdf, http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/platel83.html, and
http://www.i3s.unice.fr/˜clergue/siteCV/publi/eurogp_03.pdf [accessed 2010-12-04].
CiteSeerx: 10.1.1.105.1838.
2186. Michaël Defoin Platel, Sébastien Vérel, Manuel Clergue, and Philippe Collard. From Royal
Road to Epistatic Road for Variable Length Evolution Algorithm. In EA’03 [1736], pages 3–
14, 2003. doi: 10.1007/b96080. Fully available at http://arxiv.org/abs/0707.0548 and
http://www.i3s.unice.fr/˜verel/publi/ea03-fromRRtoER.pdf [accessed 2007-12-01].
2187. Michaël Defoin Platel, Stefan Schliebs, and Nikola Kasabov. Quantum-Inspired Evolutionary
Algorithm: A Multimodel EDA. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 13(6):1218–1232, December 2009, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2008.2003010. INSPEC Accession Number: 10998923.
2188. Alexander Podlich. Intelligente Planung und Optimierung des Güterverkehrs auf Straße und
Schiene mit Evolutionären Algorithmen. Master’s thesis, University of Kassel, Fachbereich
16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, Febru-
ary 2009, Thomas Weise, Christian Gorldt, and Manfred Menze, Advisors, Kurt Geihs and
Bernd Scholz-Reiter, Committee members.
2189. Alexander Podlich, Thomas Weise, Manfred Menze, and Christian Gorldt. Intel-
ligente Wechselbrückensteuerung für die Logistik von Morgen. In GSN’09 [2836],
pages 1–10, 2009. Fully available at http://eceasst.cs.tu-berlin.de/index.
php/eceasst/article/viewFile/205/207, http://journal.ub.tu-berlin.de/
index.php/eceasst/article/viewFile/205/207, and http://www.it-weise.
de/documents/files/PWMG2009IWFDLVM.pdf [accessed 2010-11-02].
2190. Hartmut Pohlheim. GEATbx Introduction – Evolutionary Algorithms: Overview, Methods
and Operators. GEATbx.com: Berlin, Germany, version 3.7 edition, November 2005. Fully
available at http://www.geatbx.com/download/GEATbx_Intro_Algorithmen_v38.
pdf [accessed 2007-07-03]. Documentation for GEATbx version 3.7 (Genetic and Evolutionary
Algorithm Toolbox for use with Matlab).
2191. Riccardo Poli. Parallel Distributed Genetic Programming. Technical Report CSRP-96-15,
University of Birmingham, School of Computer Science: Birmingham, UK, September 1996.
CiteSeerx: 10.1.1.54.7666. See also [2192].
2192. Riccardo Poli. Parallel Distributed Genetic Programming. In New Ideas in Optimiza-
tion [638], pages 403–432. McGraw-Hill Ltd.: Maidenhead, England, UK, 1999. Fully avail-
able at http://cswww.essex.ac.uk/staff/rpoli/papers/Poli-NIO-1999-PDGP.
pdf [accessed 2007-11-04]. CiteSeerx: 10.1.1.15.118. See also [2191].
2193. Riccardo Poli and William B. Langdon. A New Schema Theory for Genetic Programming
with One-point Crossover and Point Mutation. In GP’97 [1611], pages 278–285, 1997.
CiteSeerx: 10.1.1.16.990. See also [2194].
2194. Riccardo Poli and William B. Langdon. A New Schema Theory for Ge-
netic Programming with One-Point Crossover and Point Mutation. Technical Re-
port CSRP-97-3, University of Birmingham, School of Computer Science: Birm-
ingham, UK, January 1997. Fully available at ftp://ftp.cs.bham.ac.uk/pub/
tech-reports/1997/CSRP-97-03.ps.gz and http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/poli_1997_schemaTR.html [accessed 2008-06-17]. Revised August 1997.
See also [2193].
2195. Riccardo Poli, William B. Langdon, Marc Schoenauer, Terence Claus Fogarty, and Wolfgang
Banzhaf, editors. Late Breaking Papers at EuroGP’98: The First European Workshop on
Genetic Programming (EuroGP’98 LBP), April 14–15, 1998, Paris, France. Distributed at
the workshop. See also [210].
2196. Riccardo Poli, Peter Nordin, William B. Langdon, and Terence Claus Fogarty, editors.
Proceedings of the Second European Workshop on Genetic Programming (EuroGP’99),
May 26–27, 1999, Göteborg, Sweden, volume 1598/1999 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-65899-8.
OCLC: 634229593. Library of Congress Control Number (LCCN): 99028995. LC Classi-
fication: QA76.623 .E94 1999.
2197. Riccardo Poli, Hans-Michael Voigt, Stefano Cagnoni, David Wolfe Corne, George D. Smith,
and Terence Claus Fogarty, editors. Proceedings of the First European Workshops on Evo-
lutionary Image Analysis, Signal Processing and Telecommunications: EvoIASP’99, Eu-
roEcTel’99 (EvoWorkshops’99), May 26–27, 1999, Göteborg, Sweden, volume 1596/1999
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/10704703. isbn: 3-540-65837-8. Google Books ID: CYDLxvi0-SYC.
OCLC: 41165439, 59385457, 190833621, and 202788825.
2198. Riccardo Poli, Wolfgang Banzhaf, William B. Langdon, Julian Francis Miller, Peter Nordin,
and Terence Claus Fogarty, editors. Proceedings of the Third European Conference on Genetic
Programming (EuroGP’00), April 15–16, 2000, Edinburgh, Scotland, UK, volume 1802/2000
in Lecture Notes in Computer Science (LNCS). EvoGP, The Genetic Programming Working
Group of EvoNET, The Network of Excellence in Evolutionary Computing: France, Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/b75085. isbn: 3-540-67339-3. Google Books
ID: KhaR0XZHedEC. OCLC: 43811252, 174380171, 243487736, and 313540688. In
conjunction with ICES2000 and EvoWorkshops2000 on evolutionary computing.
2199. Riccardo Poli, William B. Langdon, and Nicholas Freitag McPhee. A Field Guide
to Genetic Programming. Lulu Enterprises UK Ltd: London, UK, March 2008.
isbn: 1-4092-0073-6. Fully available at http://www.gp-field-guide.org.uk/
and http://www.lulu.com/items/volume_63/2167000/2167025/2/print/book.
pdf [accessed 2008-03-27]. Google Books ID: 3PBrqNK5fFQC. OCLC: 225855345. With contri-
butions by John R. Koza.
2200. Riccardo Poli, Nicholas Freitag McPhee, Luca Citi, and Ellery Crane. Memory with Memory
in Genetic Programming. Journal of Artificial Evolution and Applications, 2009(570606),
2009, Hindawi Publishing Corporation: Nasr City, Cairo, Egypt, Leonardo Vanneschi, edi-
tor. doi: 10.1155/2009/570606. Fully available at http://www.hindawi.com/journals/
jaea/2009/570606.html [accessed 2009-09-09].
2201. 3rd Generative Art International Conference (GA’00), December 14–16, 2000, Politecnico di
Milano: Milano, Italy. Politecnico di Milano, Generative Design Lab: Milano, Italy. Fully
available at http://www.generativeart.com/on/cic/GA2000.htm [accessed 2009-06-18].
2202. Shankar R. Ponnekanti and Armando Fox. SWORD: A Developer Toolkit for Web Service
Composition. In WWW’02 [26], 2002. Fully available at http://www2002.org/CDROM/
alternate/786/ [accessed 2011-01-11].
2203. H. Vincent Poor. An Introduction to Signal Detection and Estimation, Springer
Texts in Electrical Engineering. Springer-Verlag: New York, NY, USA, 2nd edition,
1994. isbn: 0-387-94173-8 and 3-540-94173-8. Google Books ID: eI1DBIUbzAQC.
OCLC: 28891765.
2204. Lucian Popa, Serge Abiteboul, and Phokion G. Kolaitis, editors. Proceedings of the
Twenty-First ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Sys-
tems (PODS’02), June 2–6, 2002, Madison, WI, USA. ACM Press: New York, NY, USA.
isbn: 1-58113-507-6.
2205. Proceedings of the 4th International Conference on Metaheuristics and Nature Inspired Com-
puting (META’12), October 27–31, 2012, Port El Kantaoui, Sousse, Tunisia.
2206. Bruce W. Porter and Raymond J. Mooney, editors. Proceedings of the 7th International
Workshop on Machine Learning (ML’90), June 21–23, 1990, Austin, TX, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-141-4.
2207. Agent Modeling, Papers from the 1996 AAAI Workshop, August 4, 1996, Portland, OR, USA.
isbn: 1-57735-008-1. GBV-Identification (PPN): 230032508. Technical report WS-96-02
of the American Association for Artificial Intelligence. See also [586].
2208. Vincent William Porto, N. Saravanan, D. Waagen, and Ágoston E. Eiben, editors. Evolu-
tionary Programming VII – Proceedings of the 7th International Conference on Evolutionary
Programming (EP’98), May 25–27, 1998, Mission Valley Marriott: San Diego, CA, USA,
volume 1447/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/BFb0040753. isbn: 3-540-64891-7. OCLC: 39539448,
150397169, 247514628, and 313127561. In cooperation with IEEE Neural Networks
Council.
2209. 16th European Conference on Machine Learning (ECML’05), October 3–7, 2005, Porto, Por-
tugal.
2210. Mitchell A. Potter and Kenneth Alan De Jong. A Cooperative Coevolutionary
Approach to Function Optimization. In PPSN III [693], pages 249–257, 1994.
doi: 10.1007/3-540-58484-6_269. CiteSeerx: 10.1.1.54.7033.
2211. Mitchell A. Potter and Kenneth Alan De Jong. Cooperative Coevolution: An Ar-
chitecture for Evolving Coadapted Subcomponents. Evolutionary Computation, 8(1):
1–29, Spring 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600568086.
Fully available at http://mitpress.mit.edu/journals/EVCO/Potter.pdf and
http://mitpress.mit.edu/journals/pdf/evco_8_1_1_0.pdf [accessed 2010-09-13].
CiteSeerx: 10.1.1.134.2926.
2212. Jean-Yves Potvin. A Review of Bio-inspired Algorithms for Vehicle Routing. In Bio-inspired
Algorithms for the Vehicle Routing Problem [2162], pages 1–34. Springer-Verlag: Berlin/Hei-
delberg, 2009. doi: 10.1007/978-3-540-85152-3_1.
2213. Kata Praditwong, Mark Harman, and Xin Yao. Software Module Clustering as
a Multi-Objective Search Problem. IEEE Transactions on Software Engineering,
37(2):264–282, March–April 2011, IEEE Computer Society: Piscataway, NJ, USA.
doi: 10.1109/TSE.2010.26. Fully available at http://www.cs.bham.ac.uk/˜xin/
papers/PraditwongHarmanYao10TSE.pdf [accessed 2011-08-07]. INSPEC Accession Num-
ber: 11883163.
2214. Bhanu Prasad, editor. Proceedings of the 1st Indian International Conference on Artificial
Intelligence (IICAI-03), December 18–20, 2003, Hyderabad, India. isbn: 0-9727412-0-8.
2215. Bhanu Prasad, editor. Proceedings of the 2nd Indian International Conference on Artificial
Intelligence (IICAI-05), December 20–22, 2005, Pune, India. isbn: 0-9727412-1-6.
2216. Bhanu Prasad, editor. Proceedings of the 3rd Indian International Conference on Artificial
Intelligence (IICAI-07), December 17–19, 2007, Pune, India.
2217. Bhanu Prasad, Pawan Lingras, and Ashwin Ram, editors. Proceedings of the 4th Indian In-
ternational Conference on Artificial Intelligence (IICAI-09), December 16–18, 2009, Tumkur,
India.
2218. Sushil K. Prasad, Susmi Routray, Reema Khurana, and Sartaj Sahni, editors. Proceed-
ings of the Third International Conference on Information Systems, Technology and Man-
agement (ICISTM’09), March 12–13, 2009, Ghaziabad, India, volume 31 in Communi-
cations in Computer and Information Science. Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-00405-6. isbn: 3-642-00404-0. Library of Congress Control Number
(LCCN): 2009921607.
2219. Steven D. Prestwich, S. Armagan Tarim, and Brahim Hnich. Template Design Under
Demand Uncertainty by Integer Linear Local Search. International Journal of Produc-
tion Research, 44(22):4915–4928, November 2006, Taylor and Francis LLC: London, UK.
doi: 10.1080/00207540600621060.
2220. Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen. Differential Evolution – A
Practical Approach to Global Optimization, Natural Computing Series. Birkhäuser Ver-
lag: Basel, Switzerland, 2005. isbn: 3-540-20950-6 and 3-540-31306-0. Google Books
ID: S67vX-KqVqUC. OCLC: 63127211, 63381529, and 318292603.
2221. Armand Prieditis and Stuart J. Russell, editors. Proceedings of the Twelfth International
Conference on Machine Learning (ICML’95), July 9–12, 1995, Tahoe City, CA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-377-8.
2222. Fifth Annual Princeton Conference on Information Science and Systems, March 1971. Prince-
ton University Press: Princeton, NJ, USA.
2223. Les Proll and Barbara Smith. Integer Linear Programming and Constraint Programming
Approaches to a Template Design Problem. INFORMS Journal on Computing (JOC), 10
(3):265–275, Summer 1998, Institute for Operations Research and the Management Sciences
(INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge,
MA, USA. doi: 10.1287/ijoc.10.3.265.
2224. Philip E. Protter. Stochastic Integration and Differential Equations, volume 21 in Stochastic
Modelling and Applied Probability. Springer-Verlag: Berlin/Heidelberg, 2nd edition, 2004.
isbn: 3-540-00313-4. Google Books ID: mJkFuqwr5xgC.
2225. Proceedings of the Fourth International Workshop on Local Search Techniques in Constraint
Satisfaction (LSCS’07), September 23, 2007, Providence, RI, USA.
2226. Adam Prügel-Bennett. Modelling GA Dynamics. In Proceedings of the Second Evonet Sum-
mer School on Theoretical Aspects of Evolutionary Computing [1480], pages 59–86, 1999.
Fully available at http://eprints.ecs.soton.ac.uk/13205 [accessed 2010-08-01].
2227. William F. Punch, Douglas Zongker, and Erik D. Goodman. The Royal Tree Problem,
a Benchmark for Single and Multiple Population Genetic Programming. In Advances
in Genetic Programming II [106], pages 299–316. MIT Press: Cambridge, MA, USA,
1996. Fully available at http://citeseer.ist.psu.edu/147908.html [accessed 2007-10-14].
CiteSeerx : 10.1.1.48.6434.
2228. Robin Charles Purshouse. On the Evolutionary Optimisation of Many Objectives. PhD
thesis, University of Sheffield, Department of Automatic Control and Systems Engineering:
Sheffield, UK, September 2003, Peter J. Fleming, Advisor. CiteSeerx : 10.1.1.10.8816.
2229. Robin Charles Purshouse and Peter J. Fleming. The Multi-Objective Genetic Algorithm Ap-
plied to Benchmark Problems – An Analysis. Research Report 796, University of Sheffield,
Department of Automatic Control and Systems Engineering: Sheffield, UK, August 2001.
Fully available at http://citeseer.ist.psu.edu/old/499210.html and http://
www.lania.mx/˜ccoello/EMOO/purshouse01.pdf.gz [accessed 2009-08-07].
2230. Robin Charles Purshouse and Peter J. Fleming. Conflict, Harmony, and Independence: Re-
lationships in Evolutionary Multi-Criterion Optimisation. In EMO’03 [957], pages 16–30,
2003. doi: 10.1007/3-540-36970-8_2. Fully available at http://www.lania.mx/˜ccoello/
EMOO/purshouse03.pdf.gz [accessed 2009-07-18].
2231. Robin Charles Purshouse and Peter J. Fleming. Evolutionary Many-Objective Optimi-
sation: An Exploratory Analysis. In CEC’03 [2395], pages 2066–2073, volume 3, 2003.
doi: 10.1109/CEC.2003.1299927. Fully available at http://www.lania.mx/˜ccoello/
EMOO/purshouse03b.pdf.gz. INSPEC Accession Number: 8494841.
2232. D. Quagliarella, J. Périaux, Carlo Poloni, and Gabriel Winter, editors. Genetic Algorithms
and Evolution Strategy in Engineering and Computer Science: Recent Advances and Indus-
trial Applications – Proceedings of the 2nd European Short Course on Genetic Algorithms
and Evolution Strategies (EUROGEN’97), November 28–December 4, 1997, Trieste, Italy, Re-
cent Advances in Industrial Applications. John Wiley & Sons Ltd.: New York, NY, USA.
isbn: 0-471-97710-1. GBV-Identification (PPN): 236675036 and 252253957.
2233. R. J. Quick, Victor J. Rayward-Smith, and George D. Smith. The Royal Road Func-
tions: Description, Intent and Experimentation. In AISB’96 [938], pages 223–235, 1996.
doi: 10.1007/BFb0032786.
2234. John Ross Quinlan. C4.5: Programs for Machine Learning, Morgan Kaufmann Series in
Machine Learning. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993.
isbn: 1-558-60238-0. Google Books ID: AABrrnZWgPEC. OCLC: 26547590, 249341510,
257317819, and 473499718. Library of Congress Control Number (LCCN): 92032653.
GBV-Identification (PPN): 376624361.
2235. Alejandro Quintero and Samuel Pierre. Assigning Cells to Switches in Cellular Mobile Net-
works: A Comparative Study. Computer Communications – The International Journal for
the Computer and Telecommunications Industry, 26(9):950–960, June 2, 2003, Elsevier Sci-
ence Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(02)00224-4. See also [2237].
2236. Alejandro Quintero and Samuel Pierre. Evolutionary Approach to Optimize the As-
signment of Cells to Switches in Personal Communication Networks. Computer Com-
munications – The International Journal for the Computer and Telecommunications In-
dustry, 26(9):927–938, June 2, 2003, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0140-3664(02)00238-4. See also [2235, 2237].
2237. Alejandro Quintero and Samuel Pierre. Sequential and Multi-Population Memetic Algo-
rithms for Assigning Cells to Switches in Mobile Networks. Computer Networks, 43(3):
247–261, October 22, 2003, Elsevier Science Publishers B.V.: Essex, UK. Imprint: North-
Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Ian F. Akyildiz, editor.
doi: 10.1016/S1389-1286(03)00270-6. See also [2235, 2236].
2238. Mohammad Adil Qureshi. Evolving Agents. In GP’96 [1609], pages 369–374, 1996. Fully
available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/AQ.gp96.
ps.gz [accessed 2007-09-17]. See also [2239, 2240].
2239. Mohammad Adil Qureshi. Evolving Agents. Research Note RN/96/4, University Col-
lege London (UCL): London, UK, January 1996. Fully available at http://www.cs.
bham.ac.uk/˜wbl/biblio/gp-html/qureshi_1996_eaRN.html and http://www.
cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/AQ.gp96.ps.gz [accessed 2008-09-02]. See
also [2238].
2240. Mohammad Adil Qureshi. The Evolution of Agents. PhD thesis, University College Lon-
don (UCL), Department of Computer Science: London, UK, July 2001, Jon Crowcroft,
Supervisor. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
qureshi_thesis.html [accessed 2009-09-02]. CiteSeerx : 10.1.1.4.3698. See also [2238,
2239].
2241. Nicholas J. Radcliffe. Equivalence Class Analysis of Genetic Algorithms. Complex Sys-
tems, 5(2):183–205, 1991, Complex Systems Publications, Inc.: Champaign, IL, USA.
Fully available at ftp://ftp.epcc.ed.ac.uk/pub/tr/90/tr9003.ps.Z [accessed 2010-07-26].
CiteSeerx : 10.1.1.7.2988.
2242. Nicholas J. Radcliffe. The Algebra of Genetic Algorithms. Technical Report TR92-11, Uni-
versity of Edinburgh, Edinburgh Parallel Computing Centre (EPCC): Edinburgh, Scotland,
UK, 1992. Fully available at ftp://ftp.epcc.ed.ac.uk/pub/tr/92/tr9211.ps.Z [ac-
cessed 2010-07-26]. See also [2244].
2243. Nicholas J. Radcliffe. Non-Linear Genetic Representations. In PPSN II [1827], pages 259–
268, 1992. Fully available at http://users.breathe.com/njr/papers/ppsn92.pdf
[accessed 2010-07-26]. CiteSeerx : 10.1.1.7.1530.
2244. Nicholas J. Radcliffe. The Algebra of Genetic Algorithms. Annals of Mathematics and Arti-
ficial Intelligence, 10(4):339–384, December 1994, Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1007/BF01531276. Fully available at http://stochasticsolutions.com/
pdf/amai94.pdf [accessed 2010-07-27]. CiteSeerx : 10.1.1.21.9675, 10.1.1.51.2195, and
10.1.1.8.2450. See also [2242].
2245. Nicholas J. Radcliffe and Patrick David Surry. Formal Memetic Algorithms. In AISB’94 [936],
pages 1–16, 1994. doi: 10.1007/3-540-58483-8_1. CiteSeerx : 10.1.1.38.9885.
2246. Nicholas J. Radcliffe and Patrick David Surry. Fitness Variance of Formae and Performance
Prediction. In FOGA 3 [2932], pages 51–72, 1994. Fully available at http://users.
breathe.com/njr/papers/foga94.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.40.4508
and 10.1.1.57.635.
2247. Nicholas J. Radcliffe and Patrick David Surry. Fundamental Limitations on Search Al-
gorithms: Evolutionary Computing in Perspective. In Computer Science Today – Recent
Trends and Developments [2777], pages 275–291. Springer-Verlag GmbH: Berlin, Germany,
1995. doi: 10.1007/BFb0015249. CiteSeerx : 10.1.1.49.2376.
2248. Sabine Radke and Deutsches Institut für Wirtschaftsforschung (DIW): Berlin, Germany.
Verkehr in Zahlen 2006/2007. Deutscher Verkehrs-Verlag GmbH: Hamburg, Germany
and Bundesministerium für Verkehr, Bau- und Stadtentwicklung: Berlin, Germany, 2006.
isbn: 3-87154-349-7. OCLC: 255962850. GBV-Identification (PPN): 526782250.
2249. R. Raghuram. Computer Simulation of Electronic Circuits. New Age International (P)
Ltd., Publishers: New Delhi, India, 1989–August 2007. isbn: 8122401112. Google Books
ID: kVs2liYNir0C.
2250. Yahya Rahmat-Samii and Eric Michielssen, editors. Electromagnetic Optimization by Genetic
Algorithms, Wiley Series in Microwave and Optical Engineering. John Wiley & Sons Ltd.:
New York, NY, USA, June 1999. isbn: 0-471-29545-0.
2251. Günther R. Raidl. A Hybrid GP Approach for Numerically Robust Symbolic Regres-
sion. In GP’98 [1612], pages 323–328, 1998. Fully available at http://www.ads.
tuwien.ac.at/publications/bib/pdf/raidl-98c.pdf and http://www.cs.
bham.ac.uk/˜wbl/biblio/gp-html/raidl_1998_hGPnrsr.html [accessed 2009-09-09].
CiteSeerx : 10.1.1.20.6555.
2252. Günther R. Raidl and Jens Gottlieb, editors. Proceedings of the 5th European Conference on
Evolutionary Computation in Combinatorial Optimization (EvoCOP’05), March 30–April 1,
2005, Lausanne, Switzerland, volume 3448/2005 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-25337-8.
2253. Günther R. Raidl, Jean-Arcady Meyer, Martin Middendorf, Stefano Cagnoni, Juan
Jesús Romero Cardalda, David Wolfe Corne, Jens Gottlieb, Agnès Guillot, Emma Hart,
Colin G. Johnson, and Elena Marchiori, editors. Applications of Evolutionary Computing,
Proceedings of EvoWorkshop 2003: EvoBIO, EvoCOP, EvoIASP, EvoMUSART, EvoROB,
and EvoSTIM (EvoWorkshop’03), April 14–16, 2003, University of Essex: Wivenhoe Park,
Colchester, Essex, UK, volume 2611/2003 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-00976-0.
2254. Günther R. Raidl, Stefano Cagnoni, Jürgen Branke, David Wolfe Corne, Rolf Drech-
sler, Yaochu Jin, Colin G. Johnson, Penousal Machado, Elena Marchiori, Franz Roth-
lauf, George D. Smith, and Giovanni Squillero, editors. Applications of Evolution-
ary Computing – Proceedings of EvoWorkshops 2004: EvoBIO, EvoCOMNET, EvoHOT,
EvoIASP, EvoMUSART, and EvoSTOC (EvoWorkshops’04), April 5–7, 2004, Coimbra, Por-
tugal, volume 3005/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/b96500. isbn: 3-540-21378-3. Google Books
ID: 1K9iHdLz0BQC. OCLC: 54880715 and 249136935. Library of Congress Control
Number (LCCN): 2004102415.
2255. Tapani Raiko, Pentti Haikonen, and Jaakko Väyrynen, editors. AI and Machine Con-
sciousness – Proceedings of the 13th Finnish Artificial Intelligence Conference (STeP’08),
August 20–22, 2008, Helsinki University of Technology: Espoo, Finland, volume 24
in Publications of the Finnish Artificial Intelligence Society. Nokia Research Center:
Helsinki, Finland. Fully available at http://www.stes.fi/step2008/proceedings/
step2008proceedings.pdf [accessed 2009-10-25].
2256. Albert Rainer. Web Service Composition using Answer Set Programming.
In PuK’05 [2409], 2005. Fully available at http://www-is.informatik.
uni-oldenburg.de/˜sauer/puk2005/paper/4_puk2005_rainer.pdf [accessed 2011-01-11].
CiteSeerx : 10.1.1.136.1048.
2257. Albert Rainer and Jürgen Dorn. MOVE: A Generic Service Composition Frame-
work for Service Oriented Architectures. In CEC’09 [1245], pages 503–506, 2009.
doi: 10.1109/CEC.2009.56. INSPEC Accession Number: 10839130. See also [822, 823, 1571].
2258. Hassan Rajaei, Gabriel Wainer, and Michael Chinni, editors. Proceedings of the 2008 Spring
Simulation Multiconference (SpringSim’08), April 14–17, 2008, Crowne Plaza Ottawa Hotel:
Ottawa, ON, Canada, volume 40 in Simulation Series. Society for Computer Simulation, Inter-
national: San Diego, CA, USA. isbn: 1-56555-319-5. Google Books ID: Y5juPgAACAAJ.
OCLC: 316331262.
2259. Anca L. Ralescu, editor. Fuzzy Logic in Artificial Intelligence, Workshop Proceedings (IJ-
CAI’93 Fuzzy WS), August 28, 1993, Chambéry, France, volume 847 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-58409-9 and
3-540-58409-9. OCLC: 31010125, 246325420, 311898734, 422861463, 473142458,
474000977, 497538589, and 502373916. GBV-Identification (PPN): 165340169,
272032549, and 595124968. See also [182, 183, 927].
2260. Anca L. Ralescu and James G. Shanahan, editors. Fuzzy Logic in Artificial Intelligence,
Selected and Invited Workshop Papers (IJCAI’97 Fuzzy WS), August 23–24, 1997, Nagoya,
Japan, volume 1566 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-66374-6.
GBV-Identification (PPN): 300375778 and 534418899. See also [1946, 1947].
2261. Theodore K. Ralphs. Vehicle Routing Data Sets. COmputational INfrastructure for Opera-
tions Research (COIN-OR Foundation, Inc.), October 3, 2003. Fully available at http://
www.coin-or.org/SYMPHONY/branchandcut/VRP/data/ [accessed 2008-10-27].
2262. Theodore K. Ralphs, L. Kopman, William R. Pulleyblank, and L. E. Trotter. On the Capaci-
tated Vehicle Routing Problem. Mathematical Programming, 94(2–3):343–359, January 2003,
Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s10107-002-0323-0.
2263. Proceedings of the 2005 International Conference on Machine Learning and Cybernetics
(ICMLC’05), August 18–21, 2005, Ramada Pearl Hotel: Guǎngzhōu, Guǎngdōng, China.
isbn: 0-7803-9091-1.
2264. Krishna Raman, Yue Zhang, Mark Panahi, and Kwei-Jay Lin. Customizable Business
Process Composition with Query Optimization. In CEC/EEE’08 [1346], pages 363–366,
2008. doi: 10.1109/CECandEEE.2008.152. INSPEC Accession Number: 10475112. See also
[204, 3067, 3073, 3074].
2265. Valliappan Ramasamy. Syntactical & Semantical Web Services Discovery And Composition.
In CEC/EEE’06 [2967], pages 439–441, 2006. doi: 10.1109/CEC-EEE.2006.86. INSPEC
Accession Number: 9189343. See also [318].
2266. Federico Ramírez and Olac Fuentes. Spectral Analysis Using Evolution Strategies. In
ASC’02 [1714], pages 357–391, 2002. Fully available at http://www.cs.utep.edu/
ofuentes/ASC02_RF.pdf [accessed 2009-09-18].
2267. Soraya Rana. Examining the Role of Local Optima and Schema Processing in Genetic
Search. PhD thesis, Colorado State University, Department of Computer Science, GENI-
TOR Research Group in Genetic Algorithms and Evolutionary Computation: Fort Collins,
CO, USA, July 1, 1999. Fully available at http://www.cs.colostate.edu/˜genitor/
dissertations/rana.ps.gz [accessed 2010-07-23]. CiteSeerx : 10.1.1.18.2879.
2268. William Rand, Sevan G. Ficici, and Rick L. Riolo, editors. Evolutionary Computation and
Multi-Agent Systems and Simulation Workshop (ECoMASS’08), July 12, 2008, Renaissance
Atlanta Hotel Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of
[1519].
2269. Alain T. Rappaport and Reid G. Smith, editors. Proceedings of the The Second Confer-
ence on Innovative Applications of Artificial Intelligence (IAAI’90), May 1–3, 1990, Wash-
ington, DC, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-68068-8. Partly
available at http://www.aaai.org/Conferences/IAAI/iaai90.php [accessed 2007-09-06].
Published in 1991.
2270. Khaled Rasheed and Brian D. Davison. Effect of Global Parallelism on the Behavior of a
Steady State Genetic Algorithm for Design Optimization. In CEC’99 [110], pages 534–541,
1999. Fully available at http://www.cse.lehigh.edu/˜brian/pubs/1999/cec99/
pgado.cec99.pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.114.2664. See also [698].
2271. Reza Rastegar and Arash Hariri. A Step Forward in Studying the Compact Genetic Algo-
rithm. Evolutionary Computation, 14(3):277–289, Fall 2006, MIT Press: Cambridge, MA,
USA. doi: 10.1162/evco.2006.14.3.277. arXiv ID: 0901.0598v1.
2272. Rajeev Rastogi and Kyuseok Shim. PUBLIC: A Decision Tree Classifier that Integrates
Building and Pruning. In VLDB’98 [1151], pages 404–415, 1998. Fully available at http://
eprints.kfupm.edu.sa/60034/ and http://www.vldb.org/conf/1998/p404.pdf
[accessed 2009-05-11].
2273. Tapabrata Ray, Kang Tai, and Kin Chye Seow. Multiobjective Design Optimization by an
Evolutionary Algorithm. Engineering Optimization, 33(4):399–424, 2001, Taylor and Francis
LLC: London, UK and Informa plc: London, UK. doi: 10.1080/03052150108940926.
2274. Victor J. Rayward-Smith, editor. Applications of Modern Heuristic Methods – Proceedings
of the UNICOM Seminar on Adaptive Computing and Information Processing, January 25–
27, 1994. Alfred Waller Ltd.: Henley-on-Thames, Oxfordshire, UK, Nelson Thornes Ltd.:
Cheltenham, Gloucestershire, UK, and Unicom Seminars Ltd.: Uxbridge, Middlesex, UK.
isbn: 1872474284. Google Books ID: BjOTAAAACAAJ and NmBRAAAAMAAJ.
2275. Victor J. Rayward-Smith. A Unified Approach to Tabu Search, Simulated Annealing and Ge-
netic Algorithms. In Applications of Modern Heuristic Methods – Proceedings of the UNICOM
Seminar on Adaptive Computing and Information Processing [2274], pages 55–78, volume 1,
1994.
2276. Victor J. Rayward-Smith, Ibrahim H. Osman, Colin R. Reeves, and George D. Smith, editors.
Modern Heuristic Search Methods. John Wiley & Sons Ltd.: New York, NY, USA, 1996.
isbn: 0470866500 and 0471962805. Google Books ID: DHFRAAAAMAAJ, Sd4LAAAACAAJ,
and btnFAAAACAAJ.
2277. Ingo Rechenberg. Cybernetic Solution Path of an Experimental Problem. Royal Aircraft
Establishment: Farnborough, Hampshire, UK, August 1965. Library Translation 1122.
Reprinted in [939].
2278. Ingo Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien
der biologischen Evolution, volume 15 in Problemata. PhD thesis, Technische Univer-
sität Berlin: Berlin, Germany, Friedrick Frommann Verlag: Stuttgart, Germany, 1971–
1973. isbn: 3-7728-0373-3 and 3-7728-0374-1. Google Books ID: QcNNGQAACAAJ.
OCLC: 9020616, 462818122, and 500569005. Library of Congress Control Number
(LCCN): 74320689. GBV-Identification (PPN): 024852090. LC Classification: QH371 .R33.
2279. Ingo Rechenberg. Evolutionsstrategie ’94, volume 1 in Werkstatt Bionik und Evolutionstech-
nik. Frommann-Holzboog Verlag: Bad Cannstatt, Stuttgart, Baden-Württemberg, Germany,
1994. isbn: 3-7728-1642-8. Google Books ID: savAAAACAAJ. OCLC: 75424354. GBV-
Identification (PPN): 153251220. Libri-Number: 8263485. See also [2278].
2280. Seventh International Workshop on Learning Classifier Systems (IWLCS’04), June 26, 2004,
Red Lion Hotel: Seattle, WA, USA. Held during GECCO 2004. See also [751, 752, 1589].
2281. R. Reddy, editor. Proceedings of the 5th International Joint Conference on Artificial In-
telligence (IJCAI’77-I), August 22–25, 1977, Massachusetts Institute of Technology (MIT):
Cambridge, MA, USA, volume 1. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-77-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2282].
2282. R. Reddy, editor. Proceedings of the 5th International Joint Conference on Artificial Intel-
ligence (IJCAI’77-II), August 22–25, 1977, Massachusetts Institute of Technology (MIT):
Cambridge, MA, USA, volume 2. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-77-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2281].
2283. Colin R. Reeves, editor. Modern Heuristic Techniques for Combinatorial Problems, Ad-
vanced Topics in Computer Science Series. Blackwell Publishing Ltd: Chichester, West Sus-
sex, UK, April 1993. isbn: 0077092392, 0-470-22079-1, and 0-632-03238-3. Google
Books ID: G6NfQgAACAAJ. OCLC: 26805931. Library of Congress Control Number
(LCCN): 92033335. GBV-Identification (PPN): 121998312 and 17228810X. LC Clas-
sification: QA402.5 .M62 1993. Outgrowth of a conference held at Bangor, North Wales in
1990.
2284. Colin R. Reeves and Christine C. Wright. An Experimental Design Perspective
on Genetic Algorithms. In FOGA 3 [2932], pages 7–22, 1994. Fully available
at http://www.cs.colostate.edu/˜whitley/CS640/walshdesign.ps.gz
[accessed 2010-09-14]. CiteSeerx : 10.1.1.54.1692.
2285. Colin R. Reeves and Christine C. Wright. Epistasis in Genetic Algorithms: An Experimental
Design Perspective. In ICGA’95 [883], pages 217–224, 1995. CiteSeerx : 10.1.1.39.29.
2286. Dirk Reichelt, Peter Gmilkowsky, and Sebastian Linser. A Study of an Iterated Local Search
on the Reliable Communication Networks Design Problem. In EvoWorkshops’05 [2340], pages
156–165, 2005.
2287. Christian M. Reidys and Peter F. Stadler. Neutrality in Fitness Landscapes. Jour-
nal of Applied Mathematics and Computation, 117(2–3):321–350, January 25, 2001,
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0096-3003(99)00166-6.
CiteSeerx : 10.1.1.46.962.
2288. Gerhard Reinelt. TSPLIB. Office Research Group Discrete Optimization, Ruprecht Karls
University of Heidelberg: Heidelberg, Germany, 1995. Fully available at http://comopt.
ifi.uni-heidelberg.de/software/TSPLIB95/ [accessed 2010-11-23]. See also [2289, 2290].
2289. Gerhard Reinelt. TSPLIB – A Traveling Salesman Problem Library. ORSA Journal on
Computing, 3(4):376–384, Fall 1991, Operations Research Society of America (ORSA).
doi: 10.1287/ijoc.3.4.376. See also [2288, 2290].
2290. Gerhard Reinelt. TSPLIB 95. Technical Report, Universität Heidelberg, Institut für
Mathematik: Heidelberg, Germany, 1995. Fully available at http://comopt.ifi.
uni-heidelberg.de/software/TSPLIB95/DOC.PS [accessed 2011-09-18]. See also [2288,
2289].
2291. Mark J. Rentmeesters, Wei K. Tsai, and Kwei-Jay Lin. A Theory of Lexi-
cographic Multi-Criteria Optimization. In ICECCS’96 [1327], pages 76–96, 1996.
doi: 10.1109/ICECCS.1996.558386.
2292. Alfréd Rényi. Probability Theory. Dover Publications: Mineola, NY, USA,
May 2007. isbn: 0486458679. Google Books ID: Z9ueAQAACAAJ and yx8IAQAAIAAJ.
OCLC: 77708371.
2293. Proceedings of the Fourth Joint Conference on Information Science (JCIS 1998), Section:
The Second International Workshop on Frontiers in Evolutionary Algorithms (FEA’98), Oc-
tober 24–28, 1998, Research Triangle Park, NC, USA, volume 2. Workshop held in conjunc-
tion with Fourth Joint Conference on Information Sciences.
2294. Mauricio G.C. Resende, Jorge Pinho de Sousa, and Ana Viana, editors. 4th Metaheuristics
International Conference – Metaheuristics: Computer Decision-Making (MIC’01), July 16–
20, 2001, Porto, Portugal, volume 86 in Applied Optimization. Springer-Verlag GmbH:
Berlin, Germany. isbn: 1-4020-7653-3. Partly available at http://paginas.fe.up.
pt/˜gauti/MIC2001/ [accessed 2007-09-12]. OCLC: 53123574 and 639072881. Library of
Congress Control Number (LCCN): 2003061990. Published in November 30, 2003.
2295. Bernd Reusch, editor. International Conference on Computational Intelligence: The-
ory and Applications – 6th Fuzzy Days, May 25–28, 1999, Dortmund, North Rhine-
Westphalia, Germany, volume 1625/1999 in Lecture Notes in Computer Science (LNCS).
International Biometric Society (IBS): Washington, DC, USA. doi: 10.1007/3-540-48774-3.
isbn: 3-540-66050-X. Google Books ID: KQm0joXBr8IC. OCLC: 41380465 and
209204083.
2296. Proceedings of the IJCAI-99 Workshop on Intelligent Information Integration (IJCAI’99
WS), July 31, 1999, City Conference Center: Stockholm, Sweden, volume 23 in CEUR Work-
shop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-23/ [accessed 2008-04-01]. Held in conjunction with the Sixteenth International Joint Con-
ference on Artificial Intelligence. See also [730, 731].
2297. Bernardete Ribeiro, Rudolf F. Albrecht, Andrej Dobnikar, David W. Pearson, and Nigel C.
Steele, editors. Proceedings of the 7th International Conference on Adaptive and Natural
Computing Algorithms (ICANNGA’05), March 21–23, 2005, University of Coimbra: Coimbra,
Portugal. Springer-Verlag GmbH: Vienna, Austria. Partly available at http://icannga05.
dei.uc.pt/ [accessed 2007-08-31].
2298. Celso C. Ribeiro and Pierre Hansen, editors. 3rd Metaheuristics International Conference
– Essays and Surveys in Metaheuristics (MIC’99), July 19–23, 1999, Angra dos Reis, Rio
de Janeiro, Brazil, volume 15 in Operations Research/Computer Science Interfaces Series.
Springer-Verlag GmbH: Berlin, Germany. isbn: 0-7923-7520-3. Published in 2001.
2299. James P. Rice. Mathematical Statistics and Data Analysis, Discussion Paper Series In Eco-
nomics And Econometrics. Academic Internet Publishers, Inc. (AIPI): www.cram101.com,
University of Southampton, 1995–April 2006. isbn: 0495110892, 0534209343, and
0534399428. Google Books ID: BXQOAAAACAAJ, bIkQAQAAIAAJ, and bbCPAAAACAAJ.
OCLC: 71364578 and 315216152.
2300. Judith Rich Harris. Parental Selection: A Third Selection Process in the Evolution of Human
Hairlessness and Skin Color. Medical Hypotheses, 66(6):1053–1059, 2006, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.mehy.2006.01.027. PubMed ID: 16527428.
2301. Sebastian Richly, Georg Püschel, Dirk Habich, and Sebastian Götz. MapReduce for Scalable
Neural Nets Training. In SERVICES Cup’10 [2457], pages 99–106, 2010.
doi: 10.1109/SERVICES.2010.36. INSPEC Accession Number: 11535225.
2302. Hendrik Richter. Behavior of Evolutionary Algorithms in Chaotically Changing Fitness Land-
scapes. In PPSN VIII [3028], pages 111–120, 2004. doi: 10.1007/b100601.
2303. Mark Ridley. Evolution. Blackwell Publishing Ltd: Chichester, West Sussex, UK, 8th edi-
tion, 2003. isbn: 0192892878, 0-632-03481-5, 0-86542-226-5, and 1-4051-0345-0.
Google Books ID: HCh7n6zuWeUC, WU wAAAAMAAJ, b-HGB9PqXCUC, xPAPgcZflzYC,
and ysddSQAACAAJ. GBV-Identification (PPN): 076515354 and 126473129. Libri-
Number: 3190331.
2304. John Riedl and Randall W. Hill Jr., editors. Proceedings of the Fifteenth Conference
on Innovative Applications of Artificial Intelligence (IAAI’03), August 12–14, 2003, Aca-
pulco, México. isbn: 1-57735-188-6. Partly available at http://www.aaai.org/
Conferences/IAAI/iaai03.php [accessed 2007-09-06].
2305. Rupert J. Riedl. A Systems-Analytical Approach to Macro-Evolutionary Phenomena. Quar-
terly Review of Biology (QRB), Chicago Journals, 52(4):351–370, December 1977, Stony
Brook University: Stony Brook, NY, USA. doi: 10.1086/410123.
2306. Rick L. Riolo and Bill Worzel, editors. Genetic Programming Theory and Practice, Pro-
ceedings of the Genetic Programming Theory Practice 2003 Workshop (GPTP’03), May 15–
17, 2003, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann
Arbor, MI, USA. Kluwer Publishers: Boston, MA, USA and Springer New York: New
York, NY, USA. isbn: 1402075812. Partly available at http://www.cscs.umich.edu/
gptp-workshops/gptp2003/ [accessed 2007-09-28].
2307. Rick L. Riolo, Terence Soule, and Bill Worzel, editors. Genetic Programming Theory
and Practice IV, Proceedings of the Genetic Programming Theory Practice 2006 Workshop
(GPTP’06), May 11–13, 2006, University of Michigan, Center for the Study of Complex
Systems (CSCS): Ann Arbor, MI, USA. Springer-Verlag GmbH: Berlin, Germany. Partly
available at http://www.cscs.umich.edu/gptp-workshops/gptp2006/ [accessed 2007-
09-28].
2308. Rick L. Riolo, Terence Soule, and Bill Worzel, editors. Genetic Programming Theory
and Practice VI, Proceedings of the Sixth Workshop on Genetic Programming (GPTP’08),
May 15–17, 2008, University of Michigan, Center for the Study of Complex Systems (CSCS):
Ann Arbor, MI, USA, Genetic and Evolutionary Computation. Springer US: Boston, MA,
USA and Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1007/978-0-387-87623-8.
Library of Congress Control Number (LCCN): 2008936134.
2309. Rick L. Riolo, Una-May O’Reilly, and Trent McConaghy, editors. Genetic Programming
Theory and Practice VII, Proceedings of the Seventh Workshop on Genetic Programming
(GPTP’09), May 14–16, 2009, University of Michigan, Center for the Study of Com-
plex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolutionary Computation.
Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1007/978-1-4419-1626-6. Library of Congress Control Number (LCCN): 2009939939.
2310. Rick L. Riolo, Trent McConaghy, and Ekaterina Vladislavleva, editors. Genetic Pro-
gramming Theory and Practice VIII, Proceedings of the Eighth Workshop on Genetic Pro-
gramming (GPTP’10), May 20–22, 2010, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA, volume 8 in Genetic and Evolutionary
Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Nor-
well, MA, USA. doi: 10.1007/978-1-4419-7747-2. Library of Congress Control Number
(LCCN): 20100938320.
2311. Jose L. Risco-Martín, Francisco Fernández de Vega, and Juan Lanchares, editors. Third
Workshop on Parallel Architectures and Bioinspired Algorithms (WPBA’10), September 11–
12, 2010, Vienna, Austria.
2312. Irina Rish. An Empirical Study of the Naive Bayes Classifier. In IJCAI’01 [2012], pages 41–
46, 2001. Fully available at http://www.cc.gatech.edu/˜isbell/classes/reading/
papers/Rish.pdf [accessed 2007-08-11].
2313. Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of
Mathematical Statistics, 22(3):400–407, September 1951, Institute of Mathematical Statis-
tics: Beachwood, OH, USA. doi: 10.1214/aoms/1177729586. Fully available at http://
projecteuclid.org/euclid.aoms/1177729586 [accessed 2008-10-11]. Zentralblatt MATH
identifier: 0054.05901. Mathematical Reviews number (MathSciNet): MR42668.
2314. Luís Mateus Rocha, Larry S. Yaeger, Mark A. Bedau, Dario Floreano, Robert L. Gold-
stone, and Alessandro Vespignani, editors. ALIFE X: Proceedings of the Tenth Interna-
tional Conference on the Simulation and Synthesis of Living Systems (ALIFE’06), June 3–
7, 2006, Bloomington, IN, USA, Bradford Books. MIT Press: Cambridge, MA, USA.
isbn: 0-262-68162-5. Library of Congress Control Number (LCCN): 2006922394. GBV-
Identification (PPN): 516051768. LC Classification: QH324.2 .I556 2006.
2315. Miguel Rocha, Pedro Sousa, Paulo Cortez, and Miguel Rio. Evolutionary Computation for
Quality of Service Internet Routing Optimization. In EvoWorkshops’07 [1050], pages 71–80,
2007. doi: 10.1007/978-3-540-71805-5_8.
2316. Alex Rogers and Adam Prügel-Bennett. Modelling the Dynamics of a Steady-State Genetic
Algorithm. In FOGA’98 [208], pages 57–68, 1998. Fully available at http://eprints.
ecs.soton.ac.uk/451/ [accessed 2010-08-01]. CiteSeerx : 10.1.1.28.1226.
2317. Daniel S. Rokhsar, Philip Warren Anderson, and Daniel L. Steingart. Self-Organization in
Prebiological Systems: Simulations of a Model for the Origin of Genetic Information. Journal
of Molecular Evolution, 23(2):119–126, June 1986, Springer New York: New York, NY, USA,
Martin Kreitman, editor. doi: 10.1007/BF02099906.
2318. Learning and Intelligent OptimizatioN (LION 5), January 17–21, 2011, Rome, Italy.
2319. Cristóbal Romero and Sebastián Ventura. Educational Data Mining: A Survey From 1995 to
2005. Expert Systems with Applications – An International Journal, 33(1):135–146, July 2007,
Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.eswa.2006.04.005.
2320. Juan Romero and Penousal Machado, editors. The Art of Artificial Evolution: A Handbook
on Evolutionary Art and Music, Natural Computing Series. Springer New York: New York,
NY, USA, November 2007. doi: 10.1007/978-3-540-72877-1. Library of Congress Control Number (LCCN): 2007937632.
REFERENCES 1131
2321. Simon Ronald. Preventing Diversity Loss in a Routing Genetic Algorithm with Hash Tag-
ging. Complexity International, 2(2), April 1995, Monash University, Faculty of Information
Technology: Victoria, Australia. Fully available at http://www.complexity.org.au/
ci/vol02/sr_hash/ [accessed 2009-07-09].
2322. Simon Ronald. Genetic Algorithms and Permutation-Encoded Problems. Diversity Preser-
vation and a Study of Multimodality. PhD thesis, University of South Australia (UniSA),
Department of Computer and Information Science: Mawson Lakes, SA, Australia, 1996.
2323. Simon Ronald. Robust Encodings in Genetic Algorithms: A Survey of Encoding Issues. In
CEC’97 [173], pages 43–48, 1997. doi: 10.1109/ICEC.1997.592265.
2324. Simon Ronald, John Asenstorfer, and Millist Vincent. Representational Redundancy
in Evolutionary Algorithms. In CEC’95 [1361], pages 631–637, volume 2, 1995.
doi: 10.1109/ICEC.1995.487457.
2325. Agostinho Rosa, editor. Proceedings of the International Conference on Evolutionary Com-
putation (ICEC’09), 2009, Madeira, Portugal. Part of [1809].
2326. Justinian P. Rosca. Proceedings of the Workshop on Genetic Programming: From Theory to
Real-World Applications. Technical Report 95.2, University of Rochester, National Resource
Laboratory for the Study of Brain and Behavior: Rochester, NY, USA, Morgan Kaufmann:
San Mateo, CA, USA, July 9, 1995, Tahoe City, CA, USA. Held in conjunction with the
twelfth International Conference on Machine Learning.
2327. Justinian P. Rosca. An Analysis of Hierarchical Genetic Programming. Technical Re-
port TR566, University of Rochester, Computer Science Department: Rochester, NY, USA,
March 1995. Fully available at ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/95.
tr566.ps.gz [accessed 2009-07-10]. CiteSeerx : 10.1.1.26.2552.
2328. Justinian P. Rosca. Generality versus Size in Genetic Programming. In GP’96 [1609],
pages 381–387, 1996. Fully available at ftp://ftp.cs.rochester.edu/pub/u/rosca/
gp/96.gp.ps.gz and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/rosca_
1996_gVsGP.html [accessed 2011-11-23]. CiteSeerx : 10.1.1.54.2363.
2329. Justinian P. Rosca and Dana H. Ballard. Causality in Genetic Programming. In
ICGA’95 [883], pages 256–263, 1995. CiteSeerx : 10.1.1.26.2503.
2330. Florian Rosenberg, Christoph Nagl, and Schahram Dustdar. Applying Distributed Busi-
ness Rules – The VIDRE Approach. In SCContest’06 [554], pages 471–478, 2006.
doi: 10.1109/SCC.2006.22. INSPEC Accession Number: 9165411.
2331. Richard S. Rosenberg. Simulation of Genetic Populations with Biochemical Properties. PhD
thesis, University of Michigan: Ann Arbor, MI, USA, June 1967. Fully available at http://
deepblue.lib.umich.edu/handle/2027.42/7321 [accessed 2010-08-03]. University Micro-
films No.: UMR3370. ID: bad1552.0001.001. ORA Project 08333.
2332. Jay S. Rosenblatt, Robert A. Hinde, Colin Beer, and Marie-Claire Busnel, editors. Advances
in the Study of Behavior, volume 12 in Advances in the Study of Behavior. Academic Press
Professional, Inc.: San Diego, CA, USA. isbn: 0-12-004512-5. OCLC: 646758333. GBV-
Identification (PPN): 011437138.
2333. Howard H. Rosenbrock. An Automatic Method for Finding the Greatest or Least Value
of a Function. The Computer Journal, Oxford Journals, 3(3):175–184, March 1960, British
Computer Society: Swindon, UK. doi: 10.1093/comjnl/3.3.175.
2334. Paul L. Rosin and Freddy Fierens. Improving Neural Network Generalisation. In
IGARSS’95 [2609], pages 1255–1257, volume 2, 1995. doi: 10.1109/IGARSS.1995.521718.
Fully available at http://users.cs.cf.ac.uk/Paul.Rosin/resources/papers/
overfitting.pdf [accessed 2009-07-12]. CiteSeerx : 10.1.1.48.3136. INSPEC Accession
Number: 5112834.
2335. Claudio Rossi, Elena Marchiori, and Joost N. Kok. An Adaptive Evolutionary Algo-
rithm for the Satisfiability Problem. In SAC’00 [23], pages 463–469, volume 1, 2000.
doi: 10.1145/335603.335912. CiteSeerx : 10.1.1.37.4771. See also [1117].
2336. Gerald P. Roston. A Genetic Methodology for Configuration Design, CMU-RI-TR-94-42. PhD
thesis, Carnegie Mellon University (CMU), Department of Mechanical Engineering: Pittsburgh, PA, USA, December 1994, Robert Sturges, Jr. and William “Red” Whittaker, Advisors. Fully available at http://www.ri.cmu.edu/pubs/pub_3335.html [accessed 2010-08-19]. CiteSeerx : 10.1.1.152.7521.
2337. Franz Rothlauf, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO’05 LBP), June 25–29, 2005, Loews L’Enfant Plaza Hotel: Washington,
DC, USA. Also distributed on CD-ROM at GECCO-2005. See also [304, 2339].
2338. Franz Rothlauf. Representations for Genetic and Evolutionary Algorithms, volume 104 in
Studies in Fuzziness and Soft Computing. Physica-Verlag GmbH & Co.: Heidelberg, Ger-
many, 2nd edition, 2002–2006. isbn: 3-540-25059-X and 3790814962. Google Books
ID: fQrSUwop4JkC, rNxNAAAACAAJ, and tHYlQfvsP6cC. Foreword by David E. Gold-
berg.
2339. Franz Rothlauf, Misty Blowers, Jürgen Branke, Stefano Cagnoni, Ivan I. Garibay, Ozlem
Garibay, Jörn Grahl, Gregory S. Hornby, Edwin D. de Jong, Tim Kovacs, Sanjeev Kumar,
Cláudio F. Lima, Xavier Llorà, Fernando G. Lobo, Laurence D. Merkle, Julian Francis Miller,
Jason H. Moore, Michael O’Neill, Martin Pelikan, Terry P. Riopka, Marylyn D. Ritchie, Ku-
mara Sastry, Stephen L. Smith, Hal Stringer, Keiki Takadama, Marc Toussaint, Stephen C.
Upton, Alden H. Wright, and Shengxiang Yang, editors. Proceedings of the 2005 Work-
shops on Genetic and Evolutionary Computation (GECCO’05 WS), June 25–26, 2005, Loews
L’Enfant Plaza Hotel: Washington, DC, USA. ACM Press: New York, NY, USA. See also
[304, 2337].
2340. Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, David Wolfe Corne, Rolf Drech-
sler, Yaochu Jin, Penousal Machado, Elena Marchiori, Juan Romero, George D. Smith,
and Giovanni Squillero, editors. Applications of Evolutionary Computing, Proceedings
of EvoWorkshops 2005 – EvoBIO, EvoCOMNET, EvoHOT, EvoIASP, EvoMUSART,
and EvoSTOC (EvoWorkshops’05), March 30–April 1, 2005, Lausanne, Switzerland,
volume 3449/2005 in Theoretical Computer Science and General Issues (SL 1), Lec-
ture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b106856. isbn: 3-540-25396-3. Google Books ID: -4dQAAAAMAAJ and
3X86XlOROyMC. OCLC: 59009564, 59223261, 145832358, and 254368497. Library
of Congress Control Number (LCCN): 2005922824.
2341. Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, Ernesto Jorge Fernandes Costa, Carlos
Cotta, Rolf Drechsler, Evelyne Lutton, Penousal Machado, Jason H. Moore, Juan Romero,
George D. Smith, Giovanni Squillero, and Hideyuki Takagi, editors. Applications of Evo-
lutionary Computing – Proceedings of EvoWorkshops 2006: EvoBIO, EvoCOMNET, Evo-
HOT, EvoIASP, EvoINTERACTION, EvoMUSART, and EvoSTOC (EvoWorkshops’06),
April 10–12, 2006, Budapest, Hungary, volume 3907/2006 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/11732242. isbn: 3-540-33237-5. Google Books
ID: S4VQAAAAMAAJ. OCLC: 65216918, 67831340, 145829509, 150398193, 181551989,
and 316263582. Library of Congress Control Number (LCCN): 2006922618.
2342. Franz Rothlauf, Günther R. Raidl, Anna Isabel Esparcia-Alcázar, Ying-Ping Chen, Gabriela
Ochoa, Ender Ozcan, Marc Schoenauer, Anne Auger, Hans-Georg Beyer, Nikolaus Hansen,
Steffen Finck, Raymond Ros, L. Darrell Whitley, Garnett Wilson, Simon Harding, William B.
Langdon, Man Leung Wong, Laurence D. Merkle, Frank W. Moore, Sevan G. Ficici, William
Rand, Rick L. Riolo, Nawwaf Kharma, William R. Buckley, Julian Francis Miller, Ken-
neth Owen Stanley, Jaume Bacardit i Peñarroya, Will N. Browne, Jan Drugowitsch, Nicola
Beume, Mike Preuß, Stephen Frederick Smith, Stefano Cagnoni, Alexandru Floares, Aaron
Baughman, Steven Matt Gustafson, Maarten Keijzer, Arthur Kordon, and Clare Bates
Congdon, editors. Proceedings of the 11th Annual Conference on Genetic and Evolution-
ary Computation (GECCO’09-I), July 8–12, 2009, Delta Centre-Ville Hotel: Montréal, QC,
Canada. Association for Computing Machinery (ACM): New York, NY, USA. Partly avail-
able at http://www.sigevo.org/gec-summit-2009/instructions-papers.html
[accessed 2011-02-28]. See also [2343].
2343. Franz Rothlauf, Günther R. Raidl, Anna Isabel Esparcia-Alcázar, Ying-Ping Chen, Gabriela
Ochoa, Ender Ozcan, Marc Schoenauer, Anne Auger, Hans-Georg Beyer, Nikolaus Hansen,
Steffen Finck, Raymond Ros, L. Darrell Whitley, Garnett Wilson, Simon Harding, William B.
Langdon, Man Leung Wong, Laurence D. Merkle, Frank W. Moore, Sevan G. Ficici, William
Rand, Rick L. Riolo, Nawwaf Kharma, William R. Buckley, Julian Francis Miller, Ken-
neth Owen Stanley, Jaume Bacardit i Peñarroya, Will N. Browne, Jan Drugowitsch, Nicola
Beume, Mike Preuß, Stephen L. Smith, Stefano Cagnoni, Alexandru Floares, Aaron Baugh-
man, Steven Matt Gustafson, Maarten Keijzer, Arthur Kordon, and Clare Bates Congdon,
editors. Proceedings of the 11th Annual Conference – Companion on Genetic and Evolu-
tionary Computation Conference (GECCO’09-II), July 8–12, 2009, Delta Centre-Ville Hotel:
Montréal, QC, Canada. Association for Computing Machinery (ACM): New York, NY, USA.
ACM Order No.: 910092. See also [2342].
2344. Rick D. Routledge. Diversity Indices: Which ones are admissible? Journal of Theoreti-
cal Biology, 76(4):503–515, February 21, 1979, Elsevier Science Publishers B.V.: Amster-
dam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA.
doi: 10.1016/0022-5193(79)90015-8.
2345. J. P. Royston. Some Techniques for Assessing Multivariate Normality Based on the Shapiro-
Wilk W. Journal of the Royal Statistical Society: Series C – Applied Statistics, 32(2):121–133,
1983, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
2346. Stefan Rudlof and Mario Köppen. Stochastic Hill Climbing with Learning by Vectors of
Normal Distribution. In WSC1 [1001], pages 60–70, 1996. Fully available at http://
eprints.kfupm.edu.sa/66958/1/66958.pdf [accessed 2009-08-21]. Second corrected and
enhanced version, 1997-09-03.
2347. William Michael Rudnick. Genetic Algorithms and Fitness Variance with an Application to
the Automated Design of Artificial Neural Networks. PhD thesis, Oregon Graduate Institute
of Science & Technology: Beaverton, OR, USA, 1992. UMI Order No.: GAX92-22642.
2348. Günter Rudolph. An Evolutionary Algorithm for Integer Programming. In PPSN
III [693], pages 139–148, 1994. doi: 10.1007/3-540-58484-6 258. Fully avail-
able at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/
papers/PPSN94.pdf [accessed 2010-08-12]. CiteSeerx : 10.1.1.40.5981.
2349. Günter Rudolph. How Mutation and Selection Solve Long-Path Problems in Polynomial
Expected Time. Evolutionary Computation, 4(2):195–205, Summer 1996, MIT Press: Cam-
bridge, MA, USA. doi: 10.1162/evco.1996.4.2.195. CiteSeerx : 10.1.1.42.2612.
2350. Günter Rudolph. Convergence Properties of Evolutionary Algorithms, volume 35 in
Forschungsergebnisse zur Informatik. PhD thesis, Universität Dortmund: Dortmund,
North Rhine-Westphalia, Germany, 1996. isbn: 3-86064-554-4. Google Books
ID: V9f9AAAACAAJ. Published 1997.
2351. Günter Rudolph. Self-Adaptation and Global Convergence: A Counter-Example. In
CEC’99 [110], pages 646–651, volume 1, 1999. doi: 10.1109/CEC.1999.781994. Fully avail-
able at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/
papers/CEC99.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.36.3755 and 10.1.1.56.9366.
2352. Günter Rudolph. Self-Adaptive Mutations may lead to Premature Convergence. IEEE Trans-
actions on Evolutionary Computation (IEEE-EC), 5(4):410–414, 2001, IEEE Computer So-
ciety: Washington, DC, USA. doi: 10.1109/4235.942534. See also [2353].
2353. Günter Rudolph. Self-Adaptive Mutations may lead to Premature Convergence. Technical
Report CI–73/99, Universität Dortmund, Fachbereich Informatik: Dortmund, North
Rhine-Westphalia, Germany, April 30, 2001. Fully available at http://ls11-www.
cs.uni-dortmund.de/people/rudolph/publications/papers/tec335.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.28.8514 and 10.1.1.36.5633. See also [2352].
2354. Günter Rudolph, Simon M. Lucas, and Carlo Poloni. Proceedings of 10th International
Conference on Parallel Problem Solving from Nature (PPSN X), September 13–17, 2008,
Dortmund, North Rhine-Westphalia, Germany, volume 5199/2008 in Theoretical Computer
Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-87700-4. isbn: 3-540-87699-5.
Google Books ID: OO-5OgAACAAJ and PPxMVCKTbtoC. Library of Congress Control Number
(LCCN): 2008934679.
2355. Thomas Philip Runarsson, Hans-Georg Beyer, Edmund K. Burke, Juan Julián Merelo-
Guervós, L. Darrell Whitley, and Xin Yao, editors. Proceedings of 9th International Con-
ference on Parallel Problem Solving from Nature (PPSN IX), September 9–13, 2006, Uni-
versity of Iceland: Reykjavik, Iceland, volume 4193/2006 in Theoretical Computer Science
and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/11844297. isbn: 3-540-38990-3. Google Books
ID: L2QFAQAACAAJ. OCLC: 71747790, 255610906, 315830350, and 318289018. Li-
brary of Congress Control Number (LCCN): 2006931845.
2356. Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA).
Prentice Hall International Inc.: Upper Saddle River, NJ, USA and Pearson Education:
Upper Saddle River, NJ, USA, 2nd edition, December 20, 2002. isbn: 0-13-080302-2,
0-13-790395-2, and 8120323823. Google Books ID: 0GldGwAACAAJ, 5WfMAQAACAAJ,
5mfMAQAACAAJ, and DvqIIwAACAAJ.
2357. Tomas Ruzgas, Rimantas Rudzkis, and Mindaugas Kavaliauskas. Application of Clustering in
the Non-Parametric Estimation of Distribution Density. Nonlinear Analysis: Modelling and
Control (NLAMC), 11(4):393–411, November 4, 2006, Lithuanian Association of Nonlinear
Analysts (LANA): Vilnius, Lithuania. Fully available at http://www.lana.lt/journal/
23/Ruzgas.pdf [accessed 2009-08-28].
2358. Conor Ryan, J. J. Collins, and Michael O’Neill. Grammatical Evolution: Evolving Pro-
grams for an Arbitrary Language. In EuroGP’98 [210], pages 83–95, 1998. Fully available
at http://www.grammatical-evolution.org/papers/eurogp98.ps [accessed 2009-07-04]. CiteSeerx : 10.1.1.38.7697.
2359. Conor Ryan, Michael O’Neill, and J. J. Collins. Grammatical Evolution: Solving Trigono-
metric Identities. In MENDEL’98 [417], pages 111–119, 1998. Fully available at http://
www.grammatical-evolution.org/papers/mendel98/index.html [accessed 2010-12-02].
CiteSeerx : 10.1.1.30.4253.
2360. Conor Ryan, Terence Soule, Maarten Keijzer, Edward P. K. Tsang, Riccardo Poli, and
Ernesto Jorge Fernandes Costa, editors. Proceedings of the 6th European Conference on
Genetic Programming (EuroGP’03), April 14–16, 2003, University of Essex, Departments
of Mathematical and Biological Sciences: Wivenhoe Park, Colchester, Essex, UK, vol-
ume 2610/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-36599-0. isbn: 3-540-00971-X. Google Books
ID: pVhXLZKMF-YC.
2361. Nestor Rychtyckyj and Daniel Shapiro, editors. Proceedings of the Twenty-second Innova-
tive Applications of Artificial Intelligence Conference (IAAI’10), July 11–15, 2010, Westin
Peachtree Plaza: Atlanta, GA, USA. AAAI Press: Menlo Park, CA, USA.
2362. Proceedings of the Second National Conference on Evolutionary Computation and Global Op-
timization (Materialy II Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Glob-
alna), September 16–19, 1997, Rytro, Poland. Google Books ID: IGP4tgAACAAJ.
2363. Paul S. Andrews, Jonathan Timmis, Nick D. L. Owens, Uwe Aickelin, Emma Hart, Andy
Hone, and Andrew M. Tyrrell, editors. Proceedings of the 8th International Conference on
Artificial Immune Systems (ICARIS’09), August 9–12, 2009, York, UK, volume 5666 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2364. Leonardo Sá, Pedro Vieira, and Antônio Carneiro de Mesquita Filho. Evolutionary Synthesis
of Low-Sensitivity Antenna Matching Networks using Adjacency Matrix Representation. In
CEC’09 [1350], pages 1201–1208, 2009. doi: 10.1109/CEC.2009.4983082. INSPEC Accession
Number: 10688677.
2365. Ashish Sabharwal. Combinatorial Problems I: Finding Solutions. In 2nd
Asian-Pacific School on Statistical Physics and Interdisciplinary Applications [987],
2008. Fully available at http://www.cs.cornell.edu/~sabhar/tutorials/
kitpc08-combinatorial-problems-I.ppt [accessed 2010-08-29].
2366. Krzysztof Sacha, editor. Software Engineering Techniques: Design for Quality – IFIP Working
Conference on Software Engineering Techniques (SET’06), October 17–20, 2006, Warsaw
University: Warsaw, Poland. Springer-Verlag: Berlin/Heidelberg. isbn: 0-387-39387-0.
Co-located with VIII Conference on Software Engineering – KKIO 2006.
2367. Lorenza Saitta, editor. Proceedings of the 13th International Conference on Machine Learning
(ICML’96), July 3–6, 1996, Bari, Italy. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-419-7.
2368. Proceedings of the Third World Congress on Nature and Biologically Inspired Computing
(NaBIC’11), November 19–21, 2011, Salamanca, Spain.
2369. Peter Salamon, Paolo Sibani, and Richard Frost. Facts, Conjectures, and Improvements
for Simulated Annealing, volume 7 in SIAM Monographs on Mathematical Modeling and
Computation. Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA,
USA, 2002. isbn: 0898715083. Google Books ID: jhAldlYvClcC.
2370. Sancho Salcedo-Sanz and Xin Yao. A Hybrid Hopfield Network-Genetic Algorithm Ap-
proach for the Terminal Assignment Problem. IEEE Transactions on Systems, Man,
and Cybernetics – Part B: Cybernetics, 34(6):2343–2353, December 2004, IEEE Systems,
Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2004.836471.
CiteSeerx : 10.1.1.62.7879. INSPEC Accession Number: 8207612. See also [2371].
2371. Sancho Salcedo-Sanz and Xin Yao. Assignment of Cells to Switches in a Cellular Mo-
bile Network using a Hybrid Hopfield Network-Genetic Algorithm Approach. Applied Soft
Computing, 8(1):216–224, January 2008, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2007.01.002. See also [2370].
2372. Sancho Salcedo-Sanz and Xin Yao. Assignment of Cells to Switches in a Cellular Mo-
bile Network using a Hybrid Hopfield Network-Genetic Algorithm Approach. Applied Soft
Computing, 8(1):216–224, January 2008, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2007.01.002.
2373. Sancho Salcedo-Sanz, Yong Xu, and Xin Yao. Hybrid Meta-Heuristics Algorithms for Task
Assignment in Heterogeneous Computing Systems. Computers & Operations Research, 33(3):
820–835, March 2006, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.cor.2004.08.010. Fully available at http://www.cs.bham.ac.
uk/~xin/papers/SalcedoSanzXuYao2006.pdf [accessed 2009-08-10].
2374. Sancho Salcedo-Sanz, José Antonio Portilla-Figueras, Emilio G. Ortiz-García, Angel M.
Pérez-Bellido, Christopher Thraves, Antonio Fernández-Anta, and Xin Yao. Optimal Switch
Location in Mobile Communication Networks using Hybrid Genetic Algorithms. Applied Soft
Computing, 8(4):1486–1497, September 2008, Elsevier Science Publishers B.V.: Essex, UK,
Rajkumar Roy, editor. doi: 10.1016/j.asoc.2007.10.021. Fully available at http://nical.
ustc.edu.cn/papers/asc2007b.pdf [accessed 2008-09-01]. CiteSeerx : 10.1.1.87.8731.
2375. Muhammad Saleem and Muddassar Farooq. BeeSensor: A Bee-Inspired Power Aware Rout-
ing Protocol for Wireless Sensor Networks. In EvoWorkshops’07 [1050], pages 81–90, 2007.
doi: 10.1007/978-3-540-71805-5 9.
2376. Abdellah Salhi, Hugh Glaser, and David de Roure. Parallel Implementation of a Genetic-
Programming based Tool for Symbolic Regression. DSSE Technical Reports DSSE-TR-97-
3, University of Southampton, Department of Electronics and Computer Science, Declara-
tive Systems & Software Engineering Group: Highfield, Southampton, UK, July 18, 1997.
Fully available at http://www.dsse.ecs.soton.ac.uk/techreports/95-03/97-3.
html [accessed 2007-09-09]. See also [2377].
2377. Abdellah Salhi, Hugh Glaser, and David de Roure. Parallel Implementation of a Genetic-
Programming Based Tool for Symbolic Regression. Information Processing Letters, 66
(6):299–307, June 30, 1998, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/S0020-0190(98)00056-8. See also [2376].
2378. David Salomon. Assemblers and Loaders, Ellis Horwood Series in Computers and Their Ap-
plications. Ellis Horwood: Chichester, West Sussex, UK, 1993. isbn: 0-13-052564-2.
OCLC: 26404704, 473153955, and 636481452. Library of Congress Control Num-
ber (LCCN): 92028178. GBV-Identification (PPN): 110448634. LC Classifica-
tion: QA76.76.A87 S35 1992.
2379. Proceedings of the Eighth Joint Conference on Information Science (JCIS 2005), Section: The
Sixth International Workshop on Frontiers in Evolutionary Algorithms (FEA’05), July 16–
21, 2005, Salt Lake City Mariott City Center: Salt Lake City, UT, USA. Partly avail-
able at http://fs.mis.kuas.edu.tw/~cobol/JCIS2005/jcis05/Track4.html [accessed 2007-09-16]. Workshop held in conjunction with Eighth Joint Conference on Information
Sciences.
2380. Rafal Salustowicz and Jürgen Schmidhuber. Probabilistic Incremental Program Evolution:
Stochastic Search Through Program Space. Evolutionary Computation, 5(2):123–141, Febru-
ary 1997, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1997.5.2.123. See also [2381].
2381. Rafal Salustowicz and Jürgen Schmidhuber. Probabilistic Incremental Program Evolu-
tion: Stochastic Search Through Program Space. In ECML’97 [2781], pages 213–220,
1997. doi: 10.1007/3-540-62858-4 86. Fully available at ftp://ftp.idsia.ch/pub/
juergen/ECML_PIPE.ps.gz and http://eprints.kfupm.edu.sa/59137/1/59137.
pdf [accessed 2009-08-23]. CiteSeerx : 10.1.1.45.231. See also [2380].
2382. Claude Sammut and Achim G. Hoffmann, editors. Proceedings of the 19th International
Conference on Machine Learning (ICML’02), July 8–12, 2002, University of New South
Wales: Sydney, NSW, Australia. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-873-7.
2383. Arthur L. Samuel. Some Studies in Machine Learning using the Game of Checkers. IBM
Journal of Research and Development, 3(3):210–229, July 1959. doi: 10.1147/rd.33.0210. See
also [2384, 2385].
2384. Arthur L. Samuel. Some Studies in Machine Learning Using the Game of Checkers. In
Computers and Thought [897], pages 71–105. McGraw-Hill: New York, NY, USA, 1963–1995.
See also [2383, 2385].
2385. Arthur L. Samuel. Some Studies in Machine Learning using the Game of Check-
ers II – Recent Progress. IBM Journal of Research and Development, 11(6):601–
617, November 1967. Fully available at http://pages.cs.wisc.edu/~dyer/cs540/
handouts/samuel-checkers.pdf and http://pages.cs.wisc.edu/~jerryzhu/
cs540/handouts/samuel-checkers.pdf [accessed 2010-08-18]. See also [2383].
2386. Proceedings of the 2014 International Conference on Systems, Man, and Cybernetics
(SMC’14), October 5–8, 2014, San Diego, CA, USA.
2387. Harilaos G. Sandalidis, Constandinos X. Mavromoustakis, and Peter P. Stavroulakis. Per-
formance Measures of an Ant Based Decentralised Routing Scheme for Circuit Switch-
ing Communication Networks. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 5(4):313–317, August 2001, Springer-Verlag: Berlin/Heidel-
berg. doi: 10.1007/s005000100104. Fully available at http://www.springerlink.com/
content/g1u7lygrugwk3nl5/fulltext.pdf [accessed 2008-09-03].
2388. Francisco Sandoval, Carlos Camacho-Peñalosa, and Antonio Puerta, editors. IEEE
Mediterranean Electrotechnical Conference (MELECON’06), May 16–19, 2006, Be-
nalmádena (Málaga), Spain, volume 2. IEEE Computer Society: Washington, DC, USA.
isbn: 1-4244-0087-2. Google Books ID: cXMpAAAACAAJ. OCLC: 75490138, 81252104,
237880307, and 317661437.
2389. Yasuhito Sano and Hajime Kita. Optimization of Noisy Fitness Functions by Means of
Genetic Algorithms Using History of Search. In PPSN VI [2421], pages 571–580, 2000.
doi: 10.1007/3-540-45356-3 56.
2390. Yasuhito Sano and Hajime Kita. Optimization of Noisy Fitness Functions by means of
Genetic Algorithms using History of Search with Test of Estimation. In CEC’02 [944], pages
360–365, 2002. doi: 10.1109/CEC.2002.1006261.
2391. Bruno Sareni and Laurent Krähenbühl. Fitness Sharing and Niching Methods Revisited.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 2(3):97–106, September 1998,
IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.735432. Fully avail-
able at http://hal.archives-ouvertes.fr/hal-00359799/en/ [accessed 2010-08-02]. IN-
SPEC Accession Number: 6114042. ID: hal-00359799, version 1.
2392. Manish Sarkar. Evolutionary Programming-Based Fuzzy Clustering. In EP’96 [946], pages
247–256, 1996.
2393. Ruhul A. Sarker, Hussein A. Abbass, and Samin Karim. An Evolutionary Algorithm for
Constrained Multiobjective Optimization Problems. In AJWIS’01 [1498], pages 113–122,
2001. Fully available at http://www.lania.mx/~ccoello/EMOO/sarker01a.pdf.gz [accessed 2008-11-14]. CiteSeerx : 10.1.1.8.995.
2394. Ruhul A. Sarker, Masoud Mohammadian, and Xin Yao, editors. Evolutionary Optimization,
volume 48 in International Series in Operations Research & Management Science. Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands,
2002. doi: 10.1007/b101816. isbn: 0-306-48041-7 and 0-7923-7654-4.
2395. Ruhul A. Sarker, Robert G. Reynolds, Hussein A. Abbass, Kay Chen Tan, Robert Ian
McKay, Daryl Leslie Essam, and Tom Gedeon, editors. Proceedings of the IEEE Congress
on Evolutionary Computation (CEC’03), December 8–13, 2003, Canberra, Australia.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7804-0. Google Books
ID: ALhVAAAAMAAJ, CyBpAAAACAAJ, hblVAAAAMAAJ, and u7hVAAAAMAAJ. INSPEC Ac-
cession Number: 8188933. Catalogue no.: 03TH8674. CEC 2003 – A joint meeting of the IEEE,
the IEAust, the EPS, and the IEE. Congress on Evolutionary Computation, IEEE/Neural
Networks Council, Evolutionary Programming Society, Galesia, IEE.
2396. Warren S. Sarle. Stopped Training and Other Remedies for Overfitting. In Interface’95 [1877],
pages 352–360, 1995. Fully available at ftp://ftp.sas.com/pub/neural/inter95.ps.
Z [accessed 2009-07-12]. CiteSeerx : 10.1.1.42.3920.
2397. Warren S. Sarle. What is overfitting and how can I avoid it? In Usenet FAQs: comp.ai.neural-
nets FAQ [892], Chapter Part 3 of 7: General, Section 3. FAQS.ORG – Internet FAQ Archives:
USA, 2009. Fully available at http://www.faqs.org/faqs/ai-faq/neural-nets/
part3/section-3.html [accessed 2008-02-29].
2398. Kumara Sastry and David Edward Goldberg. Multiobjective hBOA, Clustering, and
Scalability. In GECCO’05 [304], pages 663–670, 2005, Martin Pelikan, Advisor.
doi: 10.1145/1068009.1068122. See also [2157].
2399. Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection With
Replacement Using Apparent Added Noise. In GECCO’01 [2570], page 781, 2001.
CiteSeerx : 10.1.1.16.786. See also [2400].
2400. Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection With Replace-
ment Using Apparent Added Noise. IlliGAL Report 2001014, Illinois Genetic Algorithms Lab-
oratory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2001.
Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2001014.
ps.Z [accessed 2010-08-03]. See also [2399].
2401. Kumara Sastry and David Edward Goldberg. Let’s Get Ready to Rumble Redux: Crossover
versus Mutation Head to Head on Exponentially Scaled Problems. In GECCO’07-I [2699],
pages 1380–1387, 2007. doi: 10.1145/1276958.1277215. See also [2402].
2402. Kumara Sastry and David Edward Goldberg. Let’s Get Ready to Rumble Redux: Crossover
versus Mutation Head to Head on Exponentially Scaled Problems. IlliGAL Report 2007005,
Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, De-
partment of General Engineering, University of Illinois at Urbana-Champaign: Urbana-
Champaign, IL, USA, February 11, 2007. Fully available at http://www.illigal.uiuc.
edu/pub/papers/IlliGALs/2007005.pdf [accessed 2008-07-21]. See also [2401].
2403. Hiroyuki Sato, Arturo Hernández Aguirre, and Kiyoshi Tanaka. Controlling Dominance Area
of Solutions and Its Impact on the Performance of MOEAs. In EMO’07 [2061], pages 5–20,
2007. doi: 10.1007/978-3-540-70928-2 5.
2404. Satoshi Sato, Tadashi Ishihara, and Hikaru Inooka. Systematic Design via the Method of
Inequalities. IEEE Control Systems Magazine, pages 57–65, October 1996, IEEE Computer
Society: Piscataway, NJ, USA. doi: 10.1109/37.537209.
2405. Shinji Sato. Simulated Quenching: A New Placement Method for Module Generation.
In ICCAD [1328], pages 538–541, 1997. doi: 10.1109/ICCAD.1997.643591. Fully avail-
able at http://www.sigda.org/Archives/ProceedingArchives/Iccad/Iccad97/
papers/1997/iccad97/pdffiles/09c_2.pdf [accessed 2010-09-25]. INSPEC Accession
Number: 5781467.
2406. Abdul Sattar and Byeong-Ho Kang, editors. Advances in Artificial Intelligence. Proceed-
ings of the 19th Australian Joint Conference on Artificial Intelligence (AI’06), December 4–
6, 2006, Hobart, Australia, volume 4304/2006 in Lecture Notes in Artificial Intelligence
(LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/11941439. isbn: 3-540-49787-0. Library of Congress Control Num-
ber (LCCN): 2006936897.
2407. Franklin E. Satterthwaite. Random Balance Experimentation. Technometrics, 1(2):111–137,
May 1959, American Statistical Association (ASA): Alexandria, VA, USA and American
Society for Quality (ASQ): Milwaukee, WI, USA. JSTOR Stable ID: 1266466.
2408. Franklin E. Satterthwaite. REVOP or Random Evolutionary Operation. Technical Report
10-10-59, Merrimack College: North Andover, MA, USA, 1959.
2409. Jürgen Sauer, editor. 19. Workshop “Planen, Scheduling und Konfigurieren, Entwerfen” (PuK’05), September 11, 2005, Koblenz, Germany. Fully available at http://www-is.informatik.uni-oldenburg.de/~sauer/puk2005/paper.html [accessed 2011-01-11].
2410. Yoshikazu Sawaragi, Koichi Inoue, and Hirotaka Nakayama, editors. Proceedings of the 7th
International Conference on Multiple Criteria Decision Making (MCDM’1986), August 18–
22, 1986, Kyoto, Japan. isbn: 0-387-17718-3, 0-387-17719-1, 3-540-17718-3, and
3-540-17719-1.
2411. Dhish Kumar Saxena and Kalyanmoy Deb. Dimensionality Reduction of Objectives and
Constraints in Multi-Objective Optimization Problems: A System Design Perspective. In
CEC’08 [1889], pages 3204–3211, 2008. doi: 10.1109/CEC.2008.4631232. INSPEC Accession
Number: 10250936.
2412. Robert Schaefer, Carlos Cotta, Joanna Kolodziej, and Günter Rudolph, editors. Pro-
ceedings of the 11th International Conference on Parallel Problem Solving From Nature,
Part 1 (PPSN XI-1), September 11–15, 2010, AGH University of Science and Technol-
ogy: Kraków, Poland, volume 6238 in Theoretical Computer Science and General Issues
(SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 3-642-15843-9. Partly available at http://home.agh.edu.pl/~ppsn/
[accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2010934114.
2413. Robert Schaefer, Carlos Cotta, Joanna Kolodziej, and Günter Rudolph, editors. Pro-
ceedings of the 11th International Conference on Parallel Problem Solving From Nature,
Part 2 (PPSN’10-2), September 11–15, 2010, AGH University of Science and Technol-
ogy: Kraków, Poland, volume 6239 in Theoretical Computer Science and General Issues
(SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 3-642-15870-6. Partly available at http://home.agh.edu.pl/˜ppsn/
[accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2010934114.
2414. J. David Schaffer, editor. Proceedings of the 3rd International Conference on Genetic Algo-
rithms (ICGA’89), June 4–7, 1989, George Mason University (GMU): Fairfax, VA, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-066-3.
Google Books ID: BPBQAAAAMAAJ and TYvqPAAACAAJ. OCLC: 19722958, 84251559,
180515150, 231091143, 257885359, and 264619464.
2415. J. David Schaffer and L. Darrell Whitley, editors. Proceedings of the International Workshop
on Combinations of Genetic Algorithms and Neural Networks (COGANN’92), June 2, 1992,
Baltimore, MD, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway,
NJ, USA. doi: 10.1109/COGANN.1992.273951. isbn: 0-8186-2787-5. OCLC: 27675198,
65776456, 174278971, 224222497, 311746303, 475797050, and 496330593. INSPEC
Accession Number: 4487943. Catalogue no.: 92TH0435-8.
2416. J. David Schaffer, Richard A. Caruana, Larry J. Eshelman, and Rajarshi Das. A Study
of Control Parameters Affecting Online Performance of Genetic Algorithms for Function
Optimization. In ICGA’89 [2414], pages 51–60, 1989.
2417. J. David Schaffer, Larry J. Eshelman, and Daniel Offutt. Spurious Correlations and Prema-
ture Convergence in Genetic Algorithms. In FOGA’90 [2562], pages 102–112, 1990.
2418. Robert E. Schapire. The Strength of Weak Learnability. Machine Learning, 5:
197–227, June 1990, Kluwer Academic Publishers: Norwell, MA, USA and Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00116037. Fully available
at http://citeseer.ist.psu.edu/schapire90strength.html and http://www.
springerlink.com/content/x02406w7q5038735/fulltext.pdf [accessed 2007-09-15].
2419. R. Schilling, W. Haase, Jacques Periaux, and H. Baier, editors. Proceedings of the Sixth
Conference on Evolutionary and Deterministic Methods for Design, Optimization and Control
with Applications to Industrial and Societal Problems (EUROGEN’05), September 12–14,
2005, Technische Universität München: Munich, Bavaria, Germany. Technische Universität
München: Munich, Bavaria, Germany. Partly available at http://www.flm.mw.tum.de/
EUROGEN05/ [accessed 2007-09-16]. In association with ECCOMAS and ERCOFTAC.
2420. Jürgen Schmidhuber. Evolutionary Principles in Self-Referential Learning. (On Learning
how to Learn: The Meta-Meta-... Hook.). Master’s thesis, Technische Universität München,
Institut für Informatik, Lehrstuhl Professor Radig: Munich, Bavaria, Germany, May 14, 1987.
Fully available at http://www.idsia.ch/˜juergen/diploma.html [accessed 2007-11-01].
2421. Marc Schoenauer, Kalyanmoy Deb, Günter Rudolph, Xin Yao, Evelyne Lutton, Juan Julián
Merelo-Guervós, and Hans-Paul Schwefel, editors. Proceedings of the 6th International Con-
ference on Parallel Problem Solving from Nature (PPSN VI), September 18–20, 2000, Paris,
France, volume 1917/2000 in Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/3-540-45356-3. isbn: 3-540-41056-2. Google Books
ID: GDz2nCAJpJ4C and gI26Cld2BY0C. OCLC: 44914243, 247592642, and 313676512.
2422. Armin Scholl and Robert Klein. Bin Packing. Friedrich Schiller University of Jena,
Wirtschaftswissenschaftliche Fakultät: Jena, Thuringia, Germany, September 2, 2003. Fully
available at http://www.wiwi.uni-jena.de/Entscheidung/binpp/ [accessed 2010-10-12].
See also [1834].
2423. Armin Scholl, Robert Klein, and Christian Jürgens. BISON: A Fast Hybrid Procedure for Ex-
actly Solving the One-Dimensional Bin Packing Problem. Computers & Operations Research,
24(7):627–645, July 1997, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/S0305-0548(96)00082-2.
2424. Ruud Schoonderwoerd. Collective Intelligence for Network Control. Master’s thesis,
Delft University of Technology, Faculty of Technical Mathematics and Informatics: Delft,
The Netherlands, May 30, 1996, Henk Koppelaar, Eugene J. H. Kerckhoffs, Leon J. M.
Rothkrantz, and Janet L. Bruten, Committee members. Fully available at http://staff.
washington.edu/paymana/swarm/schoon96-thesis.pdf [accessed 2008-06-12]. See also
[2425].
2425. Ruud Schoonderwoerd, Owen E. Holland, Janet L. Bruten, and Leon J. M. Rothkrantz. Ant-
Based Load Balancing in Telecommunications Networks. Adaptive Behavior, 5(2):169–207,
Fall 1996, SAGE Publications: Thousand Oaks, CA, USA. doi: 10.1177/105971239700500203.
Fully available at http://www.ifi.uzh.ch/ailab/teaching/FG06/ [accessed 2009-06-27].
CiteSeerx : 10.1.1.4.3655. See also [2424, 2426].
2426. Ruud Schoonderwoerd, Owen E. Holland, and Janet L. Bruten. Ant-like Agents for
Load Balancing in Telecommunications Networks. In AGENTS’97 [1978], pages 209–
216, 1997. doi: 10.1145/267658.267718. Fully available at http://sigart.acm.
org/proceedings/agents97/A153/A153.ps and http://staff.washington.edu/
paymana/swarm/schoon97-icaa.pdf [accessed 2009-06-26]. See also [2425].
2427. Thomas J. M. Schopf, editor. Models in Paleobiology. Freeman, Cooper & Co: San Francisco,
CA, USA and DoubleDay: New York, NY, USA, January 1972. isbn: 0877353255. Google
Books ID: Db0eAAAACAAJ and OjjlOgAACAAJ. OCLC: 572084.
2428. Herb Schorr and Alain T. Rappaport, editors. Proceedings of the First Conference
on Innovative Applications of Artificial Intelligence (IAAI’89), March 28–30, 1989, Stan-
ford University: Stanford, CA, USA. AAAI Press: Menlo Park, CA, USA. Partly avail-
able at http://www.aaai.org/Conferences/IAAI/iaai89.php [accessed 2007-09-06]. Pub-
lished in 1991.
2429. Nicol N. Schraudolph and Richard K. Belew. Dynamic Parameter Encoding for Ge-
netic Algorithms. Machine Learning, 9:9–21, June 1992, Kluwer Academic Pub-
lishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1022624728869. Fully available at http://cnl.salk.edu/˜schraudo/
pubs/SchBel92.pdf. CiteSeerx : 10.1.1.42.1387.
2430. Alexander Schrijver. A Course in Combinatorial Optimization. self-published, January 22,
2009. Fully available at http://homepages.cwi.nl/˜lex/files/dict.pdf [accessed 2010-
08-04].
2431. Crystal Schuil, Matthew Valente, Justin Werfel, and Radhika Nagpal. Collective Construc-
tion Using Lego Robots. In AAAI’06 / IAAI’06 [1055], volume 2, 2006. Fully avail-
able at http://www.eecs.harvard.edu/˜rad/ssr/papers/aaai06-schuil.pdf [ac-
cessed 2008-06-12].
2432. Peter K. Schuster and Manfred Eigen. The Hypercycle, A Principle of Natural Self-
Organization. Springer-Verlag GmbH: Berlin, Germany, 1979. isbn: 0-387-09293-5. Google
Books ID: Af49AAAAYAAJ and P54PPAAACAAJ. OCLC: 4665354.
2433. Hans-Paul Schwefel. Kybernetische Evolution als Strategie der experimentellen Forschung
in der Strömungstechnik. Master’s thesis, Technische Universität Berlin: Berlin, Germany,
1965.
2434. Hans-Paul Schwefel. Experimentelle Optimierung einer Zweiphasendüse Teil I. Technical
Report 35, AEG Research Institute: Berlin, Germany, 1968. Project MHD–Staustrahlrohr
11.034/68.
2435. Hans-Paul Schwefel. Evolutionsstrategie und numerische Optimierung. PhD thesis, Technis-
che Universität Berlin, Institut für Meß- und Regelungstechnik, Institut für Biologie und An-
thropologie: Berlin, Germany, 1975. OCLC: 52361662 and 251655294. GBV-Identification
(PPN): 156923181.
2436. Hans-Paul Schwefel. Numerical Optimization of Computer Models. John Wiley & Sons
Ltd.: New York, NY, USA, 1981. isbn: 0-471-09988-0. OCLC: 8011455 and
301251708. Library of Congress Control Number (LCCN): 81173223. GBV-Identification
(PPN): 01489498X. LC Classification: QA402.5 .S3813.
2437. Hans-Paul Schwefel. Evolution and Optimum Seeking, Sixth-Generation Computer Technol-
ogy Series. John Wiley & Sons Ltd.: New York, NY, USA, 1995. isbn: 0-471-57148-2.
Google Books ID: dfNQAAAAMAAJ. OCLC: 30701094 and 222866514. Library of Congress
Control Number (LCCN): 94022270. GBV-Identification (PPN): 194661636. LC Classification: QA402.5 .S375 1995.
2438. Hans-Paul Schwefel and Reinhard Männer, editors. Proceedings of the 1st Workshop on Par-
allel Problem Solving from Nature (PPSN I), October 1–3, 1990, FRG Dortmund: Dort-
mund, North Rhine-Westphalia, Germany, volume 496/1991 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0029723.
isbn: 0-387-54148-9 and 3-540-54148-9. Google Books ID: W4r2KQAACAAJ,
a7LiAAAACAAJ, and cZhQAAAAMAAJ.
2439. A. Carlisle Scott and Philip Klahr, editors. Proceedings of the Fourth Conference on Innovative Applications of Artificial Intelligence (IAAI’92), July 12–16, 1992, San José, CA, USA.
AAAI Press: Menlo Park, CA, USA. isbn: 0-262-69155-8. Partly available at http://
www.aaai.org/Conferences/IAAI/iaai912.php [accessed 2007-09-06].
2440. Ian Scriven, Andrew Lewis, and Sanaz Mostaghim. Dynamic Search Initialisation Strategies
for Multi-Objective Optimisation in Peer-to-Peer Networks. In CEC’09 [1350], pages 1515–
1522, 2009. doi: 10.1109/CEC.2009.4983122. INSPEC Accession Number: 10688717.
2441. Ninth International Workshop on Learning Classifier Systems (IWLCS’06), July 8–9, 2006, Seattle, WA, USA. Held during GECCO-2006. See also [1516].
2442. Proceedings of the 28th International Conference on Machine Learning (ICML’11), June 11–
14, 2011, Seattle, WA, USA.
2443. Michèle Sebag and Antoine Ducoulombier. Extending Population-Based Incremental
Learning to Continuous Search Spaces. In PPSN V [866], pages 418–427, 1998.
CiteSeerx : 10.1.1.42.1884.
2444. Anthony V. Sebald and Lawrence Jerome Fogel, editors. Proceedings of the 3rd Annual
Conference on Evolutionary Programming (EP’94), February 24–26, 1994, San Diego, CA,
USA. World Scientific Publishing: River Edge, NJ, USA. isbn: 9810218109. Library of
Congress Control Number (LCCN): 95118447.
2445. Jimmy Secretan, Nicholas Beato, David B. D’Ambrosio, Adelein Rodriguez, Adam Campbell,
and Kenneth Owen Stanley. Evolving Pictures Collaboratively Online. In CHI’08 [139],
pages 1759–1768, 2008. doi: 10.1145/1357054.1357328. Fully available at http://eplex.
cs.ucf.edu/papers/secretan_chi08.pdf [accessed 2011-02-23].
2446. Robert Sedgewick. Algorithms in Java, Parts 1–4 (Fundamentals: Data Structures, Sort-
ing, Searching). Addison-Wesley Professional: Reading, MA, USA, 3rd edition, Septem-
ber 2002. isbn: 0-201-36120-5. Google Books ID: hyvdUQUmf2UC. GBV-Identification
(PPN): 247734144, 356227073, 511295804, 525555153, and 595338534. With Java
consultation by Michael Schidlowsky.
2447. Alberto Maria Segre, editor. Proceedings of the 6th International Workshop on Machine
Learning (ML’89), June 26–27, 1989, Cornell University: Ithaca, NY, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-036-1.
2448. Y. Ahmet Şekercioğlu, Andreas Pitsilides, and Athanasios V. Vasilakos. Computa-
tional Intelligence in Management of ATM Networks: A Survey of Current State of
Research. In ESIT’99 [888], 1999. Fully available at http://www.cs.ucy.ac.cy/
networksgroup/pubs/published/1999/CI-ATM-London99.pdf and http://www.
erudit.de/erudit/events/esit99/12525_P.pdf [accessed 2008-09-03].
2449. Bart Selman, Hector Levesque, and David Mitchell. A New Method for Solving Hard Satisfi-
ability Problems. In AAAI’92 [2639], pages 440–446, 1992. Fully available at http://www.
aaai.org/Papers/AAAI/1992/AAAI92-068.pdf [accessed 2009-06-24].
2450. KWEB SWS Challenge Workshop, June 6–7, 2007, Innsbruck Congress Center: Inns-
bruck, Austria. Semantic Technology Institute (STI) Innsbruck, Austria: Innsbruck, Aus-
tria. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_
Innsbruck [accessed 2010-12-12].
2451. Bernhard Sendhoff, Martin Kreutz, and Werner von Seelen. A Condition for the Genotype-
Phenotype Mapping: Causality. In ICGA’97 [166], pages 73–80, 1997. Fully avail-
able at ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/
articles/frame-label.ps.gz [accessed 2010-08-12]. CiteSeerx : 10.1.1.38.6750. arXiv
ID: adap-org/9711001v1.
2452. Bernhard Sendhoff, Martin Kreutz, and Werner von Seelen. Causality and the Analysis of Lo-
cal Search in Evolutionary Algorithms. INI Internal Report IR-INI 97-16, Ruhr-Universität
Bochum, Institut für Neuroinformatik: Bochum, North Rhine-Westphalia, Germany,
September 1997. Fully available at ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/IRINI/irini97-16/irini97-16.ps.gz [accessed 2009-07-15].
2459. M. Zubair Shafiq, Muddassar Farooq, and Syed Ali Khayam. A Comparative Study
of Fuzzy Inference Systems, Neural Networks and Adaptive Neuro Fuzzy Inference
Systems for Portscan Detection. In EvoWorkshops’08 [1051], pages 52–61, 2008.
doi: 10.1007/978-3-540-78761-7_6.
2460. M. Zubair Shafiq, S. Momina Tabish, and Muddassar Farooq. Are Evolutionary Rule Learn-
ing Algorithms Appropriate for Malware Detection? In GECCO’09-I [2342], pages 1915–
1916, 2009. doi: 10.1145/1569901.1570233. Fully available at http://www.nexginrc.org/
papers/gecco09-zubair.pdf [accessed 2009-09-05]. See also [2461].
2461. M. Zubair Shafiq, S. Momina Tabish, and Muddassar Farooq. On the Appropriateness of Evo-
lutionary Rule Learning Algorithms for Malware Detection. In GECCO’09-II [2343], pages
2609–2616, 2009. doi: 10.1145/1570256.1570370. Fully available at http://nexginrc.
org/papers/iwlcs09-zubair.pdf [accessed 2009-09-05]. See also [2460].
2462. Biren Shah, Karthik Ramachandran, and Vijay V. Raghavan. A Hybrid Approach for
Data Warehouse View Selection. International Journal of Data Warehousing and Mining
(IJDWM), 2(2):1–37, 2006, Idea Group Publishing (Idea Group Inc., IGI Global): New
York, NY, USA. doi: 10.4018/jdwm.2006040101. CiteSeerx : 10.1.1.85.5014.
2463. S. H. Shami, I. M. A. Kirkwood, and Mark C. Sinclair. Evolving Simple Fault-Tolerant
Routing Rules using Genetic Programming. Electronics Letters, 33(17):1440–1441, Au-
gust 14, 1997, Institution of Engineering and Technology (IET): Stevenage, Herts, UK.
doi: 10.1049/el:19970996. See also [1542].
2464. Yun-Wei Shang and Yu-Huang Qiu. A Note on the Extended Rosenbrock Function. Evo-
lutionary Computation, 14(1):119–126, Spring 2006, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.2006.14.1.119. PubMed ID: 16536893.
2465. Claude Elwood Shannon. A Mathematical Theory of Communication. Bell System Technical
Journal, 27:379–423/623–656, July–October 1948, Bell Laboratories: Berkeley Heights, NJ,
USA. Fully available at http://plan9.bell-labs.com/cm/ms/what/shannonday/
paper.html [accessed 2007-09-15]. Reprinted in [2466, 2467, 2521].
2466. Claude Elwood Shannon, author. Neil James Alexander Sloane and Aaron D. Wyner, ed-
itors. Claude Elwood Shannon: Collected Papers. Institute of Electrical and Electronics
Engineers (IEEE) Press: New York, NY, USA, 1993. isbn: 0780304349. Google Books
ID: 06QKAAAACAAJ. OCLC: 26403297, 246915902, and 318255507. See also [2465].
2467. Claude Elwood Shannon and Warren Weaver. The Mathematical Theory of Communication.
University of Illinois Press (UIP): Champaign, IL, USA, 1949–1962. isbn: 0252725468
and 0252725484. Google Books ID: BHi4OgAACAAJ, H1MiAAAACAAJ, VB_COwAACAAJ, ZiYIAQAAIAAJ, and dk0n_eGcqsUC. OCLC: 13553507, 250803800, and 318164805.
See also [2465].
2468. Nicholas Peter Sharples. Evolutionary Approaches to Adaptive Protocol Design. In The
12th White House Papers Graduate Research in Cognitive and Computing Sciences at Sus-
sex [2139], pages 60–62, 1999. See also [2470].
2469. Nicholas Peter Sharples. Evolutionary Approaches to Adaptive Protocol Design. PhD thesis,
University of Sussex, School of Cognitive Science: Brighton, UK, August 2001–2003. Google
Books ID: -BA3HAAACAAJ. OCLC: 59295838 and 195331508. asin: B001ABO1DK. See
also [2470].
2470. Nicholas Peter Sharples and Ian Wakeman. Protocol Construction Using Genetic Search
Techniques. In EvoWorkshops’00 [464], pages 73–95, 2000. doi: 10.1007/3-540-45561-2_23.
See also [2469].
2471. Jude W. Shavlik, editor. Proceedings of the 15th International Conference on Machine Learn-
ing (ICML’98), July 24–27, 1998, Madison, WI, USA. Morgan Kaufmann Publishers Inc.:
San Francisco, CA, USA. isbn: 1-55860-556-8.
2472. Katharine Jane Shaw. Using Genetic Algorithms for Practical Multi-Objective Production
Schedule Optimisation. PhD thesis, University of Sheffield, Department of Automatic Control
and Systems Engineering: Sheffield, UK, 1997. OCLC: 53552498. asin: B001AJODJE.
2473. Katharine Jane Shaw and Peter J. Fleming. Including Real-Life Problem Preferences in Ge-
netic Algorithms to Improve Optimisation of Production Schedules. In GALESIA’97 [3056],
pages 239–244, 1997. INSPEC Accession Number: 5780801.
2474. J. Shekel. Test Functions for Multimodal Search Techniques. In Fifth An-
nual Princeton Conference on Information Science and Systems [2222], pages 354–
359, 1971. Partly available at http://en.wikipedia.org/wiki/Shekel_function
and http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_
files/TestGO_files/Page2354.htm [accessed 2007-11-06].
2475. Proceedings of the Third Joint Conference on Information Science (JCIS 1997), Section: The
First International Workshop on Frontiers in Evolutionary Algorithms (FEA’98), March 1–
5, 1997, Sheraton Imperial Hotel & Convention Center: Research Triangle Park, NC, USA.
Workshop held in conjunction with Third Joint Conference on Information Sciences.
2476. David J. Sheskin. Handbook of Parametric and Nonparametric Statistical Procedures. Chap-
man & Hall: London, UK and CRC Press, Inc.: Boca Raton, FL, USA, 2nd/3rd edi-
tion, 2004. isbn: 0203489535, 0849331196, 1-58488-133-X, and 1-58488-440-1.
Google Books ID: L9c4IAAACAAJ, bmwhcJqq01cC, t6smAAAACAAJ, and tasmAAAACAAJ.
OCLC: 42643434, 52134708, 247821285, and 265211382.
2477. Amit P. Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy W.
Finin, and Krishnaprasad Thirunarayan, editors. Proceedings of the 7th International Seman-
tic Web Conference – The Semantic Web (ISWC’08), October 26–30, 2008, Kongresszen-
trum: Karlsruhe, Germany, volume 5318 in Information Systems and Application, incl. In-
ternet/Web and HCI (SL 3), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. isbn: 3-540-88563-3. Google Books ID: -YeUh7-PCkgC. Library
of Congress Control Number (LCCN): 2008937502.
2478. Masaya Shinkai, Arturo Hernández Aguirre, and Kiyoshi Tanaka. Mutation Strategy Im-
proves GAs Performance on Epistatic Problems. In CEC’02 [944], pages 968–973, volume 1,
2002. doi: 10.1109/CEC.2002.1007056.
2479. Rob Shipman. Genetic Redundancy: Desirable or Problematic for Evolutionary Adaptation?
In ICANNGA’99 [801], pages 1–11, 1999. CiteSeerx : 10.1.1.32.2032.
2480. Rob Shipman, Mark Shackleton, Marc Ebner, and Richard A. Watson. Neutral Search
Spaces for Artificial Evolution: A Lesson from Life. In Artificial Life VII [246], pages
162–167, 2000. Fully available at http://homepages.feis.herts.ac.uk/˜nehaniv/
al7ev/shipmanUH.ps and http://www.demo.cs.brandeis.edu/papers/alife7_
shipman.ps [accessed 2011-12-05]. CiteSeerx : 10.1.1.17.664.
2481. Rob Shipman, Mark Shackleton, and Inman Harvey. The Use of Neutral Genotype-Phenotype
Mappings for Improved Evolutionary Search. BT Technology Journal, 18(4):103–111, Octo-
ber 2000, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1026714927227.
CiteSeerx : 10.1.1.9.5548.
2482. Shinichi Shirakawa and Tomoharu Nagao. Graph Structured Program Evolution: Evolution of Loop Structures. In GPTP’09 [2309], pages 177–194, 2009.
doi: 10.1007/978-1-4419-1626-6_11. Fully available at http://www.cscs.umich.edu/
PmWiki/Farms/GPTP-09/uploads/Main/shirakawa-03-24-2009.pdf [accessed 2010-11-
21].
2483. Amit Shukla, Prasad M. Deshpande, and Jeffrey F. Naughton. Materialized View Selection
for Multidimensional Datasets. In VLDB’98 [1151], pages 488–499, 1998. Fully available
at http://www.vldb.org/conf/1998/p488.pdf [accessed 2011-04-10].
2484. Patrick Siarry and Gérard Berthiau. Fitting of Tabu Search to Optimize Func-
tions of Continuous Variables. International Journal for Numerical Methods in En-
gineering, 40(13):2449–2457, 1997, Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.1002/(SICI)1097-0207(19970715)40:13<2449::AID-NME172>3.0.CO;2-O.
2485. Patrick Siarry and Zbigniew Michalewicz, editors. Advances in Metaheuristics for Hard
Optimization, Natural Computing Series. Springer New York: New York, NY, USA,
November 2007. doi: 10.1007/978-3-540-72960-0. isbn: 3-540-72959-3. Google Books
ID: TkMR1b-VCR4C. Library of Congress Control Number (LCCN): 2007929485. Series
editors: G. Rozenberg, Thomas Bäck, Ágoston E. Eiben, J.N. Kok, and H.P. Spaink.
2486. Patrick Siarry, Gérard Berthiau, François Durdin, and Jacques Haussy. Enhanced Simulated
Annealing for Globally Minimizing Functions of Many-Continuous Variables. ACM Transac-
tions on Mathematical Software (TOMS), 23(2):209–228, June 1997, ACM Press: New York,
NY, USA. doi: 10.1145/264029.264043.
2487. Wojciech Wladyslaw Siedlecki and Jack Sklansky. Constrained Genetic Optimization via
Dynamic Reward-Penalty Balancing and its Use in Pattern Recognition. In ICGA’89 [2414],
pages 141–150, 1989. See also [2488].
2488. Wojciech Wladyslaw Siedlecki and Jack Sklansky. Constrained Genetic Optimization via
Dynamic Reward-Penalty Balancing and its Use in Pattern Recognition. In Handbook of
Pattern Recognition and Computer Vision [539], Chapter 1.3.3, pages 108–124. World Scien-
tific Publishing Co.: Singapore, 1993. See also [2487].
2489. Eric V. Siegel and John R. Koza, editors. Papers from the 1995 AAAI Fall Symposium on
Genetic Programming, November 10–12, 1995, Massachusetts Institute of Technology (MIT):
Cambridge, MA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-92-X. Google
Books ID: 03UmAAAACAAJ. GBV-Identification (PPN): 226307441. AAAI Technical Report
FS-95-01.
2490. Sidney Siegel and N. John Castellan Jr. Nonparametric Statistics for The Behavioral
Sciences, Humanities/Social Sciences/Languages. McGraw-Hill: New York, NY, USA,
1956–1988. isbn: 0-07-057357-3 and 070573434X. Google Books ID: ElwEAAAACAAJ,
MO1lGwAACAAJ, QYFCGgAACAAJ, and YmxEAAAAIAAJ. OCLC: 16131891 and 255536918.
2491. Karl Sigurjónsson. Taboo Search Based Metaheuristic for Solving Multiple Depot VRPPD
with Intermediary Depots. Master’s thesis, Technical University of Denmark (DTU), Infor-
matics and Mathematical Modelling (IMM): Lyngby, Denmark, September 1, 2008. Fully
available at http://orbit.dtu.dk/getResource?recordId=224453&objectId=1&
versionId=1 [accessed 2010-11-02].
2492. Sara Silva, James A. Foster, Miguel Nicolau, Penousal Machado, and Mario Giacobini, ed-
itors. Proceedings of the 14th European Conference on Genetic Programming (EuroGP’11),
April 27–29, 2011, Torino, Italy, volume 6621/2011 in Theoretical Computer Science and
General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-642-20407-4. Partly available at http://evostar.
dei.uc.pt/call-for-contributions/eurogp/ [accessed 2011-02-28]. Library of Congress
Control Number (LCCN): 2011925001.
2493. Kwang Mon Sim and Weng Hong Sun. Multiple Ant-Colony Optimization for Network
Routing. In CW’02 [672], pages 277–281, 2002. doi: 10.1109/CW.2002.1180890.
2494. Dan Simon. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. John
Wiley & Sons Ltd.: New York, NY, USA, June 2006. isbn: 0470045345 and 0471708585.
Google Books ID: UiMVoP_7TZkC. OCLC: 64084871 and 85821130.
2495. George Gaylord Simpson. The Baldwin Effect. Evolution – International Journal of Organic
Evolution, 7(2):110–117, June 1953, Society for the Study of Evolution (SSE) and Wiley
Interscience: Chichester, West Sussex, UK.
2496. Patrick K. Simpson, editor. The 1998 IEEE International Conference on Evolutionary Com-
putation (CEC’98), 1998, Egan Civic & Convention Center and Anchorage Hilton Hotel:
Anchorage, AK, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-4869-9 and 0-7803-4870-2. Google
Books ID: FMRVAAAAMAAJ, OUizJgAACAAJ, and uvhJAAAACAAJ. INSPEC Accession Num-
ber: 6016643. Catalogue no.: 98TH8360. Part of [1330].
2497. Karl Sims. Artificial Evolution for Computer Graphics. In SIGGRAPH’91 [2702],
pages 319–328, 1991. doi: 10.1145/122718.122752. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/sims91a.html [accessed 2009-06-19] and http://
www.naturewizard.com/papers/evolution%20-%20p319-sims.pdf [accessed 2009-06-
18].
2498. Mark C. Sinclair. Minimum Cost Topology Optimisation of the COST 239 European Optical
Network. In ICANNGA’95 [2141], pages 26–29, 1995. CiteSeerx : 10.1.1.53.6542. See
also [2500, 2501].
2499. Mark C. Sinclair. Evolutionary Telecommunications: A Summary. In GECCO’99 WS [2502],
pages 209–212, 1999. Fully available at http://uk.geocities.com/markcsinclair/
ps/etppf_sin_sum.ps.gz [accessed 2008-08-01]. CiteSeerx : 10.1.1.38.6797. Partly avail-
able at http://uk.geocities.com/markcsinclair/ps/etppf_sin_slides.ps.gz
[accessed 2008-08-01].
2500. Mark C. Sinclair. Optical Mesh Network Topology Design using Node-Pair Encoding
Genetic Programming. In GECCO’99 [211], pages 1192–1197, volume 2, 1999. Fully
available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/sinclair_1999_
OMNTDNEGP.html [accessed 2008-07-21]. CiteSeerx : 10.1.1.49.453. See also [2498].
2501. Mark C. Sinclair. Node-Pair Encoding Genetic Programming for Optical Mesh Network
Topology Design. In Telecommunications Optimization: Heuristic and Adaptive Tech-
niques [640], Chapter 6, pages 99–114. John Wiley & Sons Ltd.: New York, NY, USA,
2000. See also [2498].
2502. Mark C. Sinclair, David Wolfe Corne, and George D. Smith, editors. Proceedings of Evolu-
tionary Telecommunications: Past, Present and Future – A Bird-of-a-Feather Workshop at
GECCO’99 (GECCO’99 WS), July 13, 1999, Orlando, FL, USA. Fully available at http://
uk.geocities.com/markcsinclair/etppf.html [accessed 2008-06-25]. See also [211, 2093].
2503. Gulshan Singh and Kalyanmoy Deb. Comparison of Multi-Modal Optimization Algo-
rithms based on Evolutionary Algorithms. In GECCO’06 [1516], pages 1305–1312, 2006.
doi: 10.1145/1143997.1144200. Fully available at https://ceprofs.civil.tamu.edu/
ezechman/CVEN689/References/singh_deb_2006.pdf [accessed 2011-03-14]. Genetic algo-
rithms session.
2504. Madan G. Singh, editor. Systems and Control Encyclopedia. Pergamon Press: Ox-
ford, UK, 1987. isbn: 0-08-028709-3, 0080359337, and 0080406017. Google Books
ID: EbE9AAAACAAJ, ErE9AAAACAAJ, and re9QAAAAMAAJ.
2505. Rama Shankar Singh, Costas B. Krimbas, Diane B. Paul, and John Beatty, editors. Think-
ing about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge Uni-
versity Press: Cambridge, UK, November 2000. isbn: 0521620708. Google Books
ID: HmVbYJ93d-AC. LC Classification: QH366.2 .T53 2000.
2506. Sameer Singh, Maneesha Singh, Chid Apte, and Petra Perner, editors. Pattern Recognition
and Data Mining – Third International Conference on Advances in Pattern Recognition –
Part I (ICAPR’05), August 22–25, 2005, Bath, UK, volume 3686/2005 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11551188.
isbn: 3-540-28757-4.
2507. Abhishek Sinha and David Edward Goldberg. A Survey of Hybrid Genetic and Evolu-
tionary Algorithms. IlliGAL Report 2003004, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, University
of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2003. Fully avail-
able at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2003004.ps.Z [ac-
cessed 2010-09-11].
2508. Transportation Optimization Portal – TOP. SINTEF: Trondheim, Norway, October 21, 2010.
Fully available at http://www.sintef.no/Projectweb/TOP/ [accessed 2011-10-13].
2509. Moshe Sipper, Daniel Mange, and Andrés Pérez-Uribe, editors. Proceedings of the Sec-
ond International Conference on Evolvable Systems: From Biology to Hardware (ICES’98),
September 23–25, 1999, Lausanne, Switzerland, volume 1478/1998 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0057601.
isbn: 3-540-64954-9.
2510. Michael Sipser. Introduction to the Theory of Computation. Thomson Course Tech-
nology: Boston, MA, USA, Cengage Learning (Wadsworth Publishing Company):
Stamford, CT, USA, and Prindle, Weber, Schmidt (PWS) Publishing Company:
Boston, MA, USA, 1997–2006. isbn: 053494728X, 0534948103, and 0619217642.
Google Books ID: -wlWAAAACAAJ, 4CR8AQAACAAJ, wlWAAAACAAJ, and eRYFAAAACAAJ.
OCLC: 35558950 and 64006340.
2511. Evren Sirin, Bijan Parsia, Dan Wu, James Hendler, and Dana Nau. HTN Planning for Web
Service Composition using SHOP2. Web Semantics: Science, Services and Agents on the
World Wide Web, 1(4):377–396, October 2004, Elsevier Science Publishers B.V.: Essex, UK.
International Semantic Web Conference 2003.
2512. Evren Sirin, Bijan Parsia, and James Hendler. Template-based Composition of Semantic Web
Services. In AAAI Fall Symposium on Agents and the Semantic Web [2138], pages 85–92,
2005. Fully available at http://www.mindswap.org/papers/extended-abstract.
pdf [accessed 2011-01-12]. CiteSeerx : 10.1.1.74.2584.
2513. Jaroslaw Skaruz and Franciszek Seredynski. Detecting Web Application Attacks with Use of
Gene Expression Programming. In CEC’09 [1350], pages 2029–2035, 2009. doi: 10.1109/CEC.2009.4983190. INSPEC
Accession Number: 10688785.
2514. Jaroslaw Skaruz and Franciszek Seredynski. Web Application Security through
Gene Expression Programming. In EvoWorkshops’09 [1052], pages 1–10, 2009.
doi: 10.1007/978-3-642-01129-0_1.
2515. Benjamin Skellett, Benjamin Cairns, Nicholas Geard, Bradley Tonkes, and Janet Wiles.
Rugged NK Landscapes Contain the Highest Peaks. In GECCO’05 [304], pages 579–584,
2005. CiteSeerx : 10.1.1.1.5011.
2516. Hendrik Skubch. Hierarchical Strategy Learning for FLUX Agents. Master’s thesis, Technis-
che Universität Dresden: Dresden, Sachsen, Germany, February 18, 2007, Michael Thielscher,
Supervisor. See also [2517].
2517. Hendrik Skubch. Hierarchical Strategy Learning for FLUX Agents: An Applied Tech-
nique. VDM Verlag Dr. Müller AG und Co. KG: Saarbrücken, Saarland, Germany, 2006.
isbn: 3-8364-5271-5. Google Books ID: HONbJgAACAAJ and syxkPQAACAAJ. See also
[2516].
2518. Hendrik Skubch, Michael Wagner, Roland Reichle, Stefan Triller, and Kurt Geihs. Towards a
Comprehensive Teamwork Model for Highly Dynamic Domains. In ICAART’10 [917], pages
121–127, volume 2, 2010.
2519. Proceedings of the 3rd International Workshop on Machine Learning (ML’88), 1985, Skytop,
PA, USA.
2520. Derek H. Sleeman and Peter Edwards, editors. Proceedings of the 9th International Workshop
on Machine Learning (ML’92), July 1–3, 1992, Aberdeen, Scotland, UK. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-247-X.
2521. David Slepian, editor. Key Papers in the Development of Information Theory. Insti-
tute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA, 1974.
isbn: 0471798525 and 0879420286. Google Books ID: AItQAAAAMAAJ, JSfdGgAACAAJ,
VPFeAQAACAAJ, and VfFeAQAACAAJ. OCLC: 940335, 214976152, and 257564453. See
also [2465].
2522. Proceedings of the 7th International Conference on Soft Computing, Evolutionary Compu-
tation, Genetic Programming, Fuzzy Logic, Rough Sets, Neural Networks, Fractals, Bayesian
Methods (MENDEL’01), June 6–8, 2001, Brno University of Technology: Brno, Czech Re-
public. Slezská Univerzita v Opavě, Filozoficko-přírodovědecká fakulta, Ústav informatiky:
Opava, Czech Republic. isbn: 80-7248-107-X.
2523. N. J. H. Small. Miscellanea: Marginal Skewness and Kurtosis in Testing Multivariate Nor-
mality. Journal of the Royal Statistical Society: Series C – Applied Statistics, 29(1):85–87,
1980, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
2524. Waleed W. Smari, editor. Third International Conference on Information Reuse and In-
tegration (IRI’01), November 27–29, 2001, Luxor Hotel: Las Vegas, NV, USA. Inter-
national Society for Computers and Their Applications, Inc. (ISCA): Cary, NC, USA.
isbn: 1-880843-41-2. Google Books ID: QnxyAAAACAAJ. OCLC: 55764490.
2525. Mikhail I. Smirnov, editor. Autonomic Communication – Revised Selected Papers from
the First International IFIP Workshop (WAC’04), October 18–19, 2004, Berlin, Ger-
many, volume 3457/2005 in Computer Communication Networks and Telecommunica-
tions, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/11520184. isbn: 3-540-27417-0. Google Books ID: kE3DeVNfaMcC
and pfPiPgAACAAJ. OCLC: 61029791, 145832250, 255319050, 318300920, and
403681622. Library of Congress Control Number (LCCN): 2005928376.
2526. Alice E. Smith and David W. Coit. Penalty Functions. In Handbook of Evolutionary Com-
putation [171], Chapter C 5.2. Oxford University Press, Inc.: New York, NY, USA, Institute
of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press,
Inc.: Boca Raton, FL, USA, 1997. Fully available at http://www.cs.cinvestav.mx/
~constraint/papers/chapter.pdf [accessed 2008-11-15].
2527. George D. Smith, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the In-
ternational Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA’97),
April 1–4, 1997, Vienna, Austria, SpringerComputerScience. Springer Verlag GmbH: Vienna,
Austria. isbn: 3-211-83087-1. Google Books ID: KqZQAAAAMAAJ. OCLC: 40146054 and
246391089. Published in 1998.
2528. John Maynard Smith. Natural Selection and the Concept of a Protein Space. Nature, 225
(5232):563–564, February 7, 1970. doi: 10.1038/225563a0. PubMed ID: 5411867. Received
November 7, 1969.
2529. Joshua R. Smith. Designing Biomorphs with an Interactive Genetic Algorithm. In
ICGA’91 [254], pages 535–538, 1991. Fully available at http://web.media.mit.edu/
~jrs/biomorphs.pdf [accessed 2009-06-18].
2530. Michael H. Smith, M. Lee, James M. Keller, and John Yen, editors. Proceedings of the Bien-
nial Conference of the North American Fuzzy Information Processing Society (NAFIPS’96),
June 19–22, 1996, Berkeley, CA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-3225-3 and 0-7803-3226-1. Google Books ID: 6chVAAAAMAAJ and
X9SOJgAACAAJ. OCLC: 35188367 and 63197371.
2531. Murray Smith. Neural Networks for Statistical Modeling. John Wiley & Sons Ltd.: New York,
NY, USA, Van Nostrand Reinhold Company, and International Thomson Computer Press:
London, UK, 1993. isbn: 0442013108 and 1850328420. Google Books ID: TjjFAAAACAAJ,
TzjFAAAACAAJ, and lEADKgAACAAJ. OCLC: 27145760 and 34181200.
2532. Peter W. H. Smith and Kim Harries. Code Growth, Explicitly Defined Introns, and Alterna-
tive Selection Schemes. Evolutionary Computation, 6(4):339–360, Winter 1998, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1998.6.4.339. CiteSeerx : 10.1.1.24.2202.
2533. Reid G. Smith and A. Carlisle Scott, editors. Proceedings of the Third Conference
on Innovative Applications of Artificial Intelligence (IAAI’91), June 14–19, 1991, Anaheim,
CA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-69148-5. Partly available
at http://www.aaai.org/Conferences/IAAI/iaai91.php [accessed 2007-09-06].
2534. Robert Elliott Smith, editor. A Report on The First International Workshop on Learning
Classifier Systems (IWLCS-92). Technical Report, NASA Johnson Space Center: Houston,
TX, USA, 1992. CiteSeerx : 10.1.1.45.4955. See also [2002].
2535. Shana Shiang-Fong Smith. Using Multiple Genetic Operators to Reduce Premature Conver-
gence in Genetic Assembly Planning. Computers in Industry – An International, Application
Oriented Research Journal, 54(1):35–49, May 2004, Elsevier Science Publishers B.V.: Ams-
terdam, The Netherlands. doi: 10.1016/j.compind.2003.08.001.
2536. Stephen Frederick Smith. A Learning System based on Genetic Adaptive Algorithms. PhD
thesis, University of Pittsburgh: Pittsburgh, PA, USA, 1980. Order No.: AAI8112638. Uni-
versity Microfilms No.: 81-12638.
2537. Stephen L. Smith and Stefano Cagnoni, editors. Medical Applications of Genetic and Evo-
lutionary Computation Workshop (MedGEC’08), July 12, 2008, Renaissance Atlanta Hotel
Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
2538. Tom Smith, Phil Husbands, Paul Layzell, and Michael O’Shea. Fitness Landscapes and
Evolvability. Evolutionary Computation, 10(1):1–34, Summer 2002, MIT Press: Cambridge,
MA, USA. doi: 10.1162/106365602317301754. Fully available at http://wotan.liu.edu/
docis/dbl/evocom/2002_10_1_1_FLAE.html [accessed 2007-07-28].
2539. Donald L. Snyder and Michael I. Miller. Random Point Processes in Time and Space, Springer
Texts in Electrical Engineering. Springer New York: New York, NY, USA, June 19, 1991.
isbn: 0387975772 and 3540975772. Google Books ID: MIkwAQAACAAJ. OCLC: 23287518
and 246549179.
2540. Lawrence V. Snyder and Mark S. Daskin. A Random-Key Genetic Algorithm for the Gen-
eralized Traveling Salesman Problem. European Journal of Operational Research (EJOR),
174(1):38–53, October 1, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.ejor.2004.09.057. Fully available at http://www.lehigh.edu/~lvs2/
Papers/gtsp.pdf and http://www.lehigh.edu/~lvs2/Papers/gtsp_ejor.pdf [ac-
cessed 2010-11-22]. CiteSeerx: 10.1.1.1.9730, 10.1.1.71.5810, and 10.1.1.87.6258.
2541. Marcus Henrique Soares Mendes, Gustavo Luís Soares, and João Antônio de Vasconcelos.
PID Step Response Using Genetic Programming. In SEAL’10 [2585], pages 359–368, 2010.
doi: 10.1007/978-3-642-17298-4_37.
2542. Elliott Sober. The Two Faces of Fitness. In Thinking about Evolution: Historical, Philo-
sophical, and Political Perspectives [2505], Chapter 15, pages 309–321. Cambridge University
Press: Cambridge, UK, 2000. Fully available at http://philosophy.wisc.edu/sober/
tff.pdf [accessed 2008-08-11].
2543. Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms, 2001,
Washington, DC, USA. Society for Industrial and Applied Mathematics (SIAM): Philadel-
phia, PA, USA. isbn: 0-89871-490-7.
2544. AISB 2002 Convention: Logic, Language and Learning (AISB’02), April 3–5, 2002, Imperial
College London: London, UK. Society for the Study of Artificial Intelligence and the Sim-
ulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://
www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2545. AISB 2003 Convention: Cognition in Machines and Animals (AISB’03), April 7–11, 2003,
University of Wales: Aberystwyth, Wales, UK. Society for the Study of Artificial Intelligence
and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available
at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2546. AISB 2004 Convention: Motion, Emotion and Cognition (AISB’04), March 29–April 1, 2004,
University of Leeds: Leeds, UK. Society for the Study of Artificial Intelligence and the
Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://
www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2547. Social Intelligence and Interaction in Animals, Robots and Agents (AISB’05), April 12–15,
2005, University of Hertfordshire: Hatfield, England, UK. Society for the Study of Artifi-
cial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK.
Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [ac-
cessed 2008-09-10].
2548. Adaptation in Artificial and Biological Systems (AISB’06), April 3–6, 2006, University of
Bristol: Bristol, UK. Society for the Study of Artificial Intelligence and the Simulation of
Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.
org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2549. Artificial and Ambient Intelligence (AISB’07), April 1–4, 2007, Newcastle University: New-
castle upon Tyne, UK. Society for the Study of Artificial Intelligence and the Simulation of
Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.
org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2550. Bambang I. Soemarwoto, O. J. Boelens, M. Allan, M. T. Arthur, K. Bütefisch, N. Ceresola,
and W. Fritz, editors. 21st Applied Aerodynamics Conference, Towards the Simulation of
Unsteady Manoeuvre Dominated by Vortical Flow, Common Exercise I of WEAG THALES
JP12.15 (AIAA’03), June 23–26, 2003, Hilton Walt Disney World, Walt Disney World Resort:
Orlando, FL, USA. American Institute of Aeronautics and Astronautics (AIAA): Reston, VA,
USA. isbn: 1563476002 and 9781563476006.
2551. Richard Soland, Ambrose Goicoechea, L. Duckstein, and Stanley Zionts, editors. Proceed-
ings of the 9th International Conference on Multiple Criteria Decision Making: Theory
and Applications in Business, Industry, and Government (MCDM’90), August 5–8, 1990,
Fairfax, VA, USA. Springer New York: New York, NY, USA. isbn: 0-387-97805-4
and 3-540-97805-4. OCLC: 25282714, 232664357, 471769870, 633632497, and
638193262. Library of Congress Control Number (LCCN): 92002703. GBV-Identification
(PPN): 120911701. Published in July 1992.
2552. Ashu M. G. Solo, editor. Proceedings of the 2011 International Conference on Genetic and
Evolutionary Methods (GEM’11), July 18–21, 2011, Las Vegas, NV, USA.
2553. Dawn Xiaodong Song. A Linear Genetic Programming Approach to Intrusion Detection.
Master’s thesis, Dalhousie University: Halifax, NS, Canada, March 2003, Malcolm Ian Hey-
wood and A. Nur Zincir-Heywood, Advisors. Fully available at http://users.cs.dal.
ca/~mheywood/Thesis/DSong.pdf [accessed 2008-06-16]. CiteSeerx : 10.1.1.91.3779. See
also [2555].
2554. Dawn Xiaodong Song, Adrian Perrig, and Doantam Phan. AGVI – Automatic Generation,
Verification, and Implementation of Security Protocols. In CAV’01 [280], pages 241–245,
2001. doi: 10.1007/3-540-44585-4_21.
2555. Dawn Xiaodong Song, Malcolm Ian Heywood, and A. Nur Zincir-Heywood. A Linear
Genetic Programming Approach to Intrusion Detection. In GECCO’03-II [485], pages
2325–2336, 2003. Fully available at http://users.cs.dal.ca/~mheywood/X-files/
Publications/27242325.pdf [accessed 2008-06-16]. CiteSeerx : 10.1.1.61.7475. See also
[2553].
2556. Il-Yeol Song and Torben Bach Pedersen, editors. Proceedings of the ACM Tenth Interna-
tional Workshop on Data Warehousing and OLAP (DOLAP’07), November 9, 2007, Lisbon,
Portugal. ACM Press: New York, NY, USA. isbn: 978-1-59593-827-5. Part of [28].
2557. Il-Yeol Song and Karen C. Davis, editors. Proceedings of the 7th ACM International Workshop
on Data Warehousing and OLAP (DOLAP’04), November 12–13, 2004, Hyatt Arlington
Hotel: Washington, DC, USA. ACM Press: New York, NY, USA. isbn: 1-58113-977-2.
Part of [1142].
2558. Il-Yeol Song and Dimitri Theodoratos, editors. Proceedings of the Fifth International Work-
shop on Data Warehousing and OLAP (DOLAP’02), November 8, 2002, McLean, VA, USA.
ACM Press: New York, NY, USA. Part of [25? ].
2559. Branko Souček and IRIS Group, editors. Dynamic, Genetic, and Chaotic Programming: The
Sixth-Generation, Sixth Generation Computer Technologies. Wiley Interscience: Chichester,
West Sussex, UK, April 1992. isbn: 047155717X. Partly available at http://www.
amazon.com/gp/reader/047155717X/ref=sib_dp_pt/002-6076954-4198445#
reader-link [accessed 2008-05-29]. Google Books ID: tIFQAAAAMAAJ.
2560. Terence Soule and James A. Foster. Removal Bias: a New Cause of Code Growth
in Tree Based Evolutionary Programming. In CEC’98 [2496], pages 781–786, 1998.
doi: 10.1109/ICEC.1998.700151. CiteSeerx : 10.1.1.35.3659. INSPEC Accession Num-
ber: 6016655.
2561. James C. Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation,
and Control, Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley
Interscience: Chichester, West Sussex, UK, first edition, June 2003. isbn: 0-471-33052-3
and 0-471-72213-8. Google Books ID: f66OIvvkKnAC. OCLC: 50773216, 51235522,
474647590, and 637018778. Library of Congress Control Number (LCCN): 2002038049.
LC Classification: QA274 .S63 2003.
2562. Bruce M. Spatz and Gregory J. E. Rawlins, editors. Proceedings of the First Workshop on
Foundations of Genetic Algorithms (FOGA’90), July 15–18, 1990, Indiana University, Bloom-
ington Campus: Bloomington, IN, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-170-8. Google Books ID: Df12yLrlUZYC. OCLC: 24168098,
263581771, and 311490755. Published July 1, 1991.
2563. William M. Spears and Kenneth Alan De Jong. On the Virtues of Parameterized Uniform
Crossover. In ICGA’91 [254], pages 230–236, 1991. Fully available at http://www.mli.
gmu.edu/papers/91-95/91-18.pdf [accessed 2010-08-02]. CiteSeerx : 10.1.1.51.8141.
2564. William M. Spears and Kenneth Alan De Jong. Using Genetic Algorithms for Supervised Con-
cept Learning. In TAI’90 [1358], pages 335–341, 1990. doi: 10.1109/TAI.1990.130359. Fully
available at http://www.cs.uwyo.edu/~wspears/papers/ieee90.pdf [accessed 2009-07-
21]. CiteSeerx: 10.1.1.57.147. INSPEC Accession Number: 4076618.
2565. William M. Spears and Worthy Neil Martin, editors. Proceedings of the Sixth Workshop on Founda-
tions of Genetic Algorithms (FOGA’00), July 21–23, 2001, Charlottesville, VA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-734-X. Google Books
ID: QPFQAAAAMAAJ and uiX9TNaSxxYC.
2566. Lee Spector. Evolving Control Structures with Automatically Defined Macros. In
Papers from the 1995 AAAI Fall Symposium on Genetic Programming [2489], pages
99–105, 1995. Fully available at http://helios.hampshire.edu/lspector/pubs/
ADM-fallsymp-e.pdf and http://www.aaai.org/Papers/Symposia/Fall/1995/
FS-95-01/FS95-01-014.pdf [accessed 2010-08-21]. CiteSeerx : 10.1.1.51.6239. See also
[2567].
2567. Lee Spector. Simultaneous Evolution of Programs and their Control Structures. In
Advances in Genetic Programming II [106], Chapter 7, pages 137–154. MIT Press:
Cambridge, MA, USA, 1996. Fully available at http://hampshire.edu/~lasCCS/
pubs/AiGP2-post-final-e.pdf, http://helios.hampshire.edu/lspector/
pubs/AiGP2-post-final-e.pdf, and http://www.cs.bham.ac.uk/~wbl/biblio/
gp-html/spector_1996_aigp2.html [accessed 2010-08-21]. CiteSeerx : 10.1.1.39.9677.
2568. Lee Spector. Automatic Quantum Computer Programming – A Genetic Programming Ap-
proach, volume 7 in Genetic Programming Series. Springer Science+Business Media, Inc.:
New York, NY, USA, paperback edition, 2006. isbn: 0-387-36496-X and
0-387-36791-8. Google Books ID: 9wxzm8Gm-fUC and mEKfvtxmn9MC. Library of
Congress Control Number (LCCN): 2006931640. Series editor: John Koza.
2569. Lee Spector, William B. Langdon, Una-May O’Reilly, and Peter John Angeline, editors.
Advances in Genetic Programming, volume 3 in Complex Adaptive Systems, Bradford Books.
MIT Press: Cambridge, MA, USA, July 16, 1996. isbn: 0-262-19423-6. Google Books
ID: 5Qwbal3AY6oC. OCLC: 29595260, 42274498, 42579104, 60710976, 60863107,
154701926, and 222641812.
2570. Lee Spector, Erik D. Goodman, Annie S. Wu, William B. Langdon, Hans-Michael Voigt,
Mitsuo Gen, Sandip Sen, Marco Dorigo, Shahram Pezeshk, Max H. Garzon, and Ed-
mund K. Burke, editors. Proceedings of the Genetic and Evolutionary Computation Confer-
ence (GECCO’01), July 7–11, 2001, Holiday Inn Golden Gateway Hotel: San Francisco, CA,
USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-774-9.
Google Books ID: 81QKAAAACAAJ and cYVQAAAAMAAJ. See also [1104].
2571. Herbert Spencer. The Principles of Biology, volume 1. D. Appleton and Company: New York,
NY, USA and Williams and Norgate: London & Edinburgh, UK, 1st edition, 1864–1868. Fully
available at http://books.google.com/books?id=rHoOAAAAYAAJ and http://www.
archive.org/details/ThePrinciplesOfBiology [accessed 2009-06-11].
2572. W. Spendley, G. R. Hext, and F. R. Himsworth. Sequential Application of Simplex Designs
in Optimisation and Evolutionary Operation. Technometrics, 4(4):441–461, November 1962,
American Statistical Association (ASA): Alexandria, VA, USA and American Society for
Quality (ASQ): Milwaukee, WI, USA. OCLC: 486202935. JSTOR Stable ID: 1266283.
2573. Christian Spieth, Felix Streichert, Nora Speer, and Andreas Zell. Utilizing an Island
Model for EA to Preserve Solution Diversity for Inferring Gene Regulatory Networks. In
CEC’04 [1369], pages 146–151, volume 1, 2004. doi: 10.1109/CEC.2004.1330850. Fully
available at http://www-ra.informatik.uni-tuebingen.de/software/JCell/
publications.html [accessed 2007-08-24].
2574. Ralph H. Sprague, Jr., editor. Proceedings of the 33rd Hawaii International Conference
on System Sciences (HICSS’00), January 4–7, 2000, Maui, HI, USA. IEEE Computer So-
ciety: Washington, DC, USA. isbn: 0-7695-0493-0, 0769504949, and 0769504957.
Google Books ID: 7QIEAAAACAAJ and IK9VAAAAMAAJ. OCLC: 43892251, 228269935,
and 423974973. Order No.: PR00493.
2575. Genetic Programming Theory and Practice V, Proceedings of the Genetic Programming The-
ory and Practice 2007 Workshop (GPTP’07), May 17–19, 2007, University of Michigan, Center
for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolution-
ary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Nor-
well, MA, USA. Partly available at http://www.cscs.umich.edu/gptp-workshops/
gptp2007/ [accessed 2007-09-28].
2576. Proceedings of the V International Workshop on Nature Inspired Cooperative Strategies for
Optimization (NICSO’11), October 20–22, 2011, Cluj-Napoca, Romania, Studies in Compu-
tational Intelligence. Springer-Verlag: Berlin/Heidelberg. Partly available at http://www.
nicso2011.org/ [accessed 2011-05-25].
2577. Selected Papers from the First Asia-Pacific Conference on Simulated Evolution and
Learning (SEAL’96), November 9–12, 1996, Taejon, South Korea, volume 1285/1997 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0028514.
2578. Sixth International Workshop on Learning Classifier Systems (IWLCS’03), July 12–16, 2003,
Holiday Inn Chicago: Chicago, IL, USA. Springer-Verlag GmbH: Berlin, Germany. Held
during GECCO-2003. Part of [484, 485]. See also [1589].
2579. The Tenth International Workshop on Learning Classifier Systems (IWLCS’07), July 8, 2007,
University College London (UCL), Department of Computer Science: London, UK, Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. Held in association with GECCO-2007. See also
[2699, 2700].
2580. Sixth International Conference on Ant Colony Optimization and Swarm Intelligence
(ANTS’08), September 22–28, 2008, Brussels, Belgium, Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
2581. Learning and Intelligent OptimizatioN (LION 4), January 18–22, 2010, Ca’ Dolfin Building,
Universita’ Ca’ Foscari: Venice, Italy, Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany.
2582. Proceedings of the 35th International Symposium on Mathematical Foundations of Computer
Science (MFCS’10), August 21–29, 2010, Masaryk University, Faculty of Informatics: Brno,
Czech Republic, Advanced Research in Computing and Software Science (ARCoSS), Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2583. Proceedings of the 9th Mexican International Conference on Artificial Intelligence (MI-
CAI’10), November 8–13, 2010, Autonomous University of Hidalgo State: Pachuca, Mexico,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2584. Proceedings of the 9th International Symposium on Experimental Algorithms (SEA’10),
May 20–22, 2010, Hotel Continental Terme: Ischia Porto, Ischia Island, Naples, Italy, volume
6049 in Programming and Software Engineering Series, Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3642131921. Google Books
ID: XerGSAAACAAJ.
2585. Proceedings of the Eighth International Conference on Simulated Evolution And Learning
(SEAL’10), December 1–4, 2010, Indian Institute of Technology Kanpur (IIT): Kanpur,
Uttar Pradesh, India, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany.
2586. Evolution Artificielle: Proceedings of the 10th Biennial International Conference on Artificial
Evolution (AE’11), October 24–26, 2011, Angers Congress Centre: Angers, France, Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. Partly avail-
able at http://www.info.univ-angers.fr/ea2011/ [accessed 2011-03-30].
2587. Proceedings of the Fourth Conference on Artificial General Intelligence (AGI’11), August 3–
6, 2011, Mountain View, CA, USA, Lecture Notes in Artificial Intelligence (LNAI, SL7),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2588. Proceedings of the 4th European Event on Bio-Inspired Algorithms for Continuous Param-
eter Optimisation (EvoNUM’11), 2011, Torino, Italy. Springer-Verlag GmbH: Berlin, Ger-
many. Partly available at http://evostar.dei.uc.pt/call-for-contributions/
evoapplications/evonum/ [accessed 2011-02-28]. Part of [788? ].
2589. Proceedings of the 10th International Conference on Adaptive and Natural Computing Al-
gorithms (ICANNGA’11), April 14–16, 2011, University of Ljubljana: Ljubljana, Slovenia,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2590. Proceedings of the 7th Semantic Web Services Challenge Workshop (SWSC’08), October 26,
2008, Kongresszentrum: Karlsruhe, Germany. Springer-Verlag GmbH: Berlin, Germany and
Stanford University, Computer Science Department, Stanford Logic Group: Stanford, CA,
USA. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_
Karlsruhe [accessed 2010-12-12]. Part of [2166, 2477].
2591. Proceedings of the 11th UK Performance Engineering Workshop (UKPEW’95), September 5–
9, 1995, Liverpool John Moores University: Liverpool, UK. Springer-Verlag London Limited:
London, UK.
2592. Giovanni Squillero. MicroGP – An Evolutionary Assembly Program Generator. Genetic
Programming and Evolvable Machines, 6(3):247–263, September 2005, Springer Nether-
lands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1007/s10710-005-2985-x.
2593. N. S. Sridharan, editor. Proceedings of the 11th International Joint Conference on Arti-
ficial Intelligence (IJCAI’89-I), August 1989, Detroit, MI, USA, volume 1. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-094-9. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-89-VOL1/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [2594].
2594. N. S. Sridharan, editor. Proceedings of the 11th International Joint Conference on Artifi-
cial Intelligence (IJCAI’89-II), August 1989, Detroit, MI, USA, volume 2. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-094-9. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-89-VOL2/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [2593].
2630. Fuchun Sun, Yingxu Wang, Jianhua Lu, Bo Zhang, Witold Kinsner, and Lotfi A. Zadeh,
editors. Proceedings of the 9th IEEE International Conference on Cognitive Informatics
(ICCI’10), July 7–9, 2010, Tsinghua University: Běijīng, China. IEEE Computer Society
Press: Los Alamitos, CA, USA. Partly available at http://ieeexplore.ieee.org/xpl/
mostRecentIssue.jsp?punumber=5560187 [accessed 2011-02-28]. Catalogue no.: CFP10312-
CDR.
2631. Jianyong Sun, Qingfu Zhang, Jin Li, and Xin Yao. A Hybrid Estimation of Distri-
bution Algorithm for CDMA Cellular System Design. In SEAL’06 [2856], pages 905–
912, 2006. doi: 10.1007/11903697_114. Fully available at http://dces.essex.ac.uk/
staff/qzhang/conferencepaper/SEAL06SunZhangLIYao.pdf [accessed 2009-08-10]. See
also [2632].
2632. Jianyong Sun, Qingfu Zhang, Jin Li, and Xin Yao. A Hybrid Estimation of Distribution
Algorithm for CDMA Cellular System Design. International Journal of Computational Intel-
ligence and Applications (IJCIA), 7(2):187–200, June 2008, World Scientific Publishing Co.:
Singapore and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026808002235.
See also [2631].
2633. Xia Sun and Ziqiang Wang. An Efficient Materialized View Selection Algorithm Based on
PSO. In ISA’09 [2847], pages 11–14, 2009. doi: 10.1109/IWISA.2009.5072711. INSPEC
Accession Number: 10717741.
2634. Patrick David Surry. A Prescriptive Formalism for Constructing Domain-specific Evolution-
ary Algorithms. PhD thesis, University of Edinburgh, Edinburgh Parallel Computing Centre
(EPCC): Edinburgh, Scotland, UK, 1998. CiteSeerx : 10.1.1.39.7961. Google Books
ID: HOHxSAAACAAJ. OCLC: 281436714 and 606028198.
2635. Andrew M. Sutton, Monte Lunacek, and L. Darrell Whitley. Differential Evolution and Non-
Separability: Using Selective Pressure to Focus Search. In GECCO’07-I [2699], pages 1428–
1435, 2007. doi: 10.1145/1276958.1277221. Fully available at http://www.cs.colostate.
edu/~sutton/pubs/Sutton2007de.ps.gz [accessed 2009-10-20].
2636. Masuo Suzuki, H. Nishimori, Yoshiyuki Kabashima, Kazuyuki Tanaka, and T. Tanaka, edi-
tors. 2003 Joint Workshop of Hayashibara Foundation and Hayashibara Forum – Physics and
Information – and Workshop on Statistical Mechanical Approach to Probabilistic Information
Processing (SMAPIP), July 11, 2003, Okayama, Japan. Partly available at http://www.
smapip.is.tohoku.ac.jp/~smapip/2003/hayashibara/ [accessed 2009-07-11].
2637. Reiji Suzuki and Takaya Arita. Repeated Occurrences of the Baldwin Effect Can
Guide Evolution on Rugged Fitness Landscapes. In CI-ALife’07 [2076], pages 8–14,
2007. doi: 10.1109/ALIFE.2007.367652. Fully available at http://www.alife.cs.is.
nagoya-u.ac.jp/~reiji/publications/2007_ieeealife_suzuki.pdf [accessed 2008-
09-08]. INSPEC Accession Number: 9507350.
2638. Reiji Suzuki and Takaya Arita. How Learning Can Guide Evolution of Communication. In
Artificial Life XI [446], pages 608–615, 2008. CiteSeerx : 10.1.1.161.4097.
2639. William R. Swartout, Paul Rosenbloom, and Peter Szolovits, editors. Proceedings of the
Tenth National Conference on Artificial Intelligence (AAAI’92), July 12–16, 1992, San
Jose Convention Center: San José, CA, USA. AAAI Press: Menlo Park, CA, USA and
MIT Press: Cambridge, MA, USA. isbn: 0-262-51063-4. Fully available at http://
www.aaai.org/Library/AAAI/aaai92contents.php [accessed 2009-06-24]. Google Books
ID: Q55QAAAAMAAJ, ZZxQAAAAMAAJ, and zSkYAAAACAAJ.
2640. Philip H. Sweany and Steven J. Beaty. Instruction Scheduling Using Simulated Annealing.
In MPCS’98 [611], 1998. CiteSeerx : 10.1.1.67.3149.
2641. Gilbert Syswerda. A Study of Reproduction in Generational and Steady State Genetic Al-
gorithms. In FOGA’90 [2562], pages 94–101, 1990.
2642. Piotr S. Szczepaniak, Janusz Kacprzyk, and Adam Niewiadomski, editors. Advances in Web
Intelligence, Proceedings of the Third International Atlantic Web Intelligence Conference
(AWIC’05), June 6–9, 2005, Lodz, Poland, volume 3528/2005 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-26219-9.
2643. George G. Szpiro. A Search for Hidden Relationships: Data Mining with Ge-
netic Algorithms. Computational Economics, 10(3):267–277, August 1997, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1008673309609. Fully available
at http://ideas.repec.org/a/kap/compec/v10y1997i3p267-77.html [accessed 2009-
09-10]. CiteSeerx: 10.1.1.21.6037.
2644. Heidi A. Taboada and David W. Coit. Data Clustering of Solutions for Multiple Objec-
tive System Reliability Optimization Problems. Quality Technology & Quantitative Man-
agement – An International Journal, 4(2):191–210, June 2007, Chung Hua University, De-
partment of Industrial Engineering & System Management: Hsinchu, Taiwan. Fully avail-
able at http://web2.cc.nctu.edu.tw/~qtqm/qtqmpapers/2007V4N2/2007V4N2_
F3.pdf [accessed 2010-12-16]. See also [2645].
2645. Heidi A. Taboada, Fatema Baheranwala, David W. Coit, and Naruemon Wattanapongsakorn.
Practical Solutions for Multi-Objective Optimization: An Application to System Reliabil-
ity Design Problems. Reliability Engineering & System Safety, 92(3):314–322, March 2007,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands, European Safety and
Reliability Association (ESRA), and Safety Engineering and Risk Analysis Division
(SERAD) of the American Society of Mechanical Engineers (ASME): New York, NY,
USA. doi: 10.1016/j.ress.2006.04.014. Fully available at http://www.rci.rutgers.edu/
~coit/RESS_2007.pdf [accessed 2007-09-10]. See also [605].
2646. Walter Alden Tackett. Recombination, Selection, and the Genetic Construction of Com-
puter Programs. PhD thesis, University of Southern California (USC), Department of
Electrical Engineering Systems: Los Angeles, CA, USA, April 17, 1994. Fully avail-
able at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/
watphd.tar.Z [accessed 2007-09-07]. CENG Technical Report 94-13.
2647. Genichi Taguchi. Introduction to Quality Engineering: Designing Quality into Products and
Processes. Asian Productivity Organization (APO): Chiyoda-ku, Tokyo, Japan and Kraus
International Publications: Millwood, NY, USA, January 1986. isbn: 9283310837 and
9283310845. Google Books ID: 1NtTAAAAMAAJ, NeA3JAAACAAJ, NuA3JAAACAAJ, and
yMv-AQAACAAJ. OCLC: 16277853 and 180475635. Translation of “Sekkeisha no tame no
hinshitsu kanri”.
2648. Éric D. Taillard. Parallel Iterative Search Methods for Vehicle Routing Problems. Net-
works, 23(8):661–673, December 1993, Wiley Periodicals, Inc.: Wilmington, DE, USA.
doi: 10.1002/net.3230230804.
2649. Éric D. Taillard, Gilbert Laporte, and Michel Gendreau. Vehicle Routeing with Multiple
Use of Vehicles. The Journal of the Operational Research Society (JORS), 47(8):1065–1070,
August 1996, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Op-
erations Research Society: Birmingham, UK. CiteSeerx : 10.1.1.49.3923. JSTOR Stable
ID: 3010414.
2650. Éric D. Taillard, Luca Maria Gambardella, Michel Gendreau, and Jean-Yves Potvin.
Adaptive Memory Programming: A Unified View of Metaheuristics. European Journal
of Operational Research (EJOR), 135(1):1–16, November 6, 2001, Elsevier Science Pub-
lishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(00)00268-X. Fully available
at http://ina2.eivd.ch/Collaborateurs/etd/articles.dir/ejor135_1_1_16.
pdf [accessed 2008-11-11]. CiteSeerx : 10.1.1.54.8460.
2651. Hideyuki Takagi. Active User Intervention in an EC Search. In FEA’00 [2853], pages 995–
998, 2000. Fully available at http://www.design.kyushu-u.ac.jp/~takagi/TAKAGI/
IECpaper/JCIS2K_2.pdf [accessed 2010-09-03].
2652. Hideyuki Takagi. Interactive Evolutionary Computation: Fusion of the Capacities of EC
Optimization and Human Evaluation. Proceedings of the IEEE, 89(9):1275–1296, Septem-
ber 2001, Institute of Electrical and Electronics Engineers (IEEE) Press: New York,
NY, USA. doi: 10.1109/5.949485. Fully available at http://www.design.kyushu-u.
ac.jp/~takagi/TAKAGI/IECsurvey.html [accessed 2007-08-29]. INSPEC Accession Num-
ber: 7053972.
2653. Hiromitsu Takahashi and Akira Matsuyama. An Approximate Solution for the Steiner Prob-
lem in Graphs. Mathematica Japonica, 24(6):573–577, 1980, Japanese Association of Mathe-
matical Sciences (JAMS): Sakai, Osaka, Japan. Fully available at http://monet.skku.ac.
kr/course_materials/undergraduate/al/lecture/2006/TM.pdf [accessed 2010-09-27].
2654. Ricardo H. C. Takahashi, Elizabeth Wanner, Kalyanmoy Deb, and Salvatore Greco, editors.
Proceedings of the Sixth International Conference on Evolutionary Multi-Criterion Opti-
mization (EMO’11), April 5, 2011, Hotel Estalagem das Minas Gerais: Ouro Preto, MG,
Brazil.
2655. Ahmet Cagatay Talay. A Gateway Access-Point Selection Problem and Traffic Bal-
ancing in Wireless Mesh Networks. In EvoWorkshops’07 [1050], pages 161–168, 2007.
doi: 10.1007/978-3-540-71805-5 18.
2656. Ahmet Cagatay Talay and Sema Oktug. A GA/Heuristic Hybrid Technique for Routing and
Wavelength Assignment in WDM Networks. In EvoWorkshops’04 [2254], pages 150–159,
2004.
2657. El-Ghazali Talbi, Pierre Liardet, Pierre Collet, Evelyne Lutton, and Marc Schoenauer,
editors. Revised Selected Papers of the 7th International Conference on Artificial Evo-
lution, Evolution Artificielle (EA’05), October 26–28, 2005, Lille, France, volume 3871
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-33589-7. Published in 2006.
2658. Seyed Hamid Talebian and Sameem Abdul Kareem. Using Genetic Algorithm to Select
Materialized Views Subject to Dual Constraints. In ICSPS’09 [1376], pages 633–638, 2009.
doi: 10.1109/ICSPS.2009.162. INSPEC Accession Number: 10789147.
2659. 13th Conference on Technologies and Applications of Artificial Intelligence (TAAI’08),
November 21–22, 2008, Tamkang University, Lanyang campus: Chiao-hsi Shiang, I-lan
County, Taiwan.
2660. Honghua Tan and Qi Luo, editors. Proceedings of the IITA International Conference on
Control, Automation and Systems Engineering (CASE’09), July 11–12, 2009, Zhāngjiājiè,
Húnán, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
2661. Kay Chen Tan, Meng Hiot Lim, Xin Yao, and Lipo Wang, editors. Recent Advances
in Simulated Evolution and Learning – Proceedings of the Fourth Asia-Pacific Conference
on Simulated Evolution And Learning (SEAL’02), November 18–22, 2002, Singapore, vol-
ume 2 in Advances in Natural Computation. World Scientific Publishing Co.: Singapore.
isbn: 981-238-952-0. Google Books ID: Vn4bdMbo488C.
2662. Kay Chen Tan, Eik Fun Khor, Tong Heng Lee, and Ramasubramanian Sathikannan. An
Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-Objective
Optimization. Journal of Artificial Intelligence Research (JAIR), 18:183–215, February 2003,
AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI Press: Menlo Park, CA,
USA. Fully available at http://www.jair.org/media/842/live-842-1969-jair.
pdf [accessed 2009-06-23]. CiteSeerx : 10.1.1.14.6112.
2663. L. G. Tan and Mark C. Sinclair. Wavelength Assignment between the Central Nodes of
the COST 239 European Optical Network. In UKPEW’95 [2591], pages 235–247, 1995.
Fully available at http://uk.geocities.com/markcsinclair/ps/ukpew11_tan.ps.
gz [accessed 2009-09-02]. CiteSeerx : 10.1.1.55.3604.
2664. Ming Tan, Hong-Bin Fang, Guo-Liang Tian, and Gang Wei. Testing Multivariate Normality
in Incomplete Data of Small Sample Size. Journal of Multivariate Analysis (JMVA), 93(1):
164–179, March 2005, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Academic Press
Professional, Inc.: San Diego, CA, USA. doi: 10.1016/j.jmva.2004.02.014.
2665. Ying Tan, Yuhui Shi, and Kay Chen Tan, editors. Advances in Swarm Intelligence – Proceed-
ings of the First International Conference on Swarm Intelligence, Part I (ICSI’10-I), June 12–
15, 2010, Běijı̄ng, China, volume 6145/2010 in Theoretical Computer Science and General
Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-642-13495-1. isbn: 3-642-13494-7. Library of Congress
Control Number (LCCN): 2010927598.
2666. Ying Tan, Yuhui Shi, and Kay Chen Tan, editors. Advances in Swarm Intelligence – Pro-
ceedings of the First International Conference on Swarm Intelligence, Part II (ICSI’10-
II), June 12–15, 2010, Běijı̄ng, China, volume 6146/2010 in Theoretical Computer Science
and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/978-3-642-13498-2. isbn: 3-642-13497-1. Library
of Congress Control Number (LCCN): 2010927598.
2667. Jing Tang, Meng Hiot Lim, and Yew-Soon Ong. Diversity-Adaptive Parallel Memetic Algo-
rithm for Solving Large Scale Combinatorial Optimization Problems. Soft Computing – A
Fusion of Foundations, Methodologies and Applications, 11(9):873–888, July 2007, Springer-
Verlag: Berlin/Heidelberg. doi: 10.1007/s00500-006-0139-6.
2668. Ke Tang, Xin Yao, Ponnuthurai Nagaratnam Suganthan, Cara MacNish, Ying-Ping Chen,
Chih-Ming Chen, and Zhenyu Yang. Benchmark Functions for the CEC’2008 Special Session
and Competition on Large Scale Global Optimization. Technical Report, University of Science
and Technology of China (USTC), School of Computer Science and Technology, Nature In-
spired Computation and Applications Laboratory (NICAL): Héféi, Ānhuı̄, China, 2007. Fully
available at http://nical.ustc.edu.cn/cec08ss.php and http://sci2s.ugr.es/
programacion/workshop/Tech.Report.CEC2008.LSGO.pdf [accessed 2009-08-15].
2669. Ke Tang, Yi Mei, and Xin Yao. Memetic Algorithm with Extended Neighbor-
hood Search for Capacitated Arc Routing Problems. IEEE Transactions on Evo-
lutionary Computation (IEEE-EC), 13(5):1151–1166, October 2009, IEEE Computer
Society: Washington, DC, USA. doi: 10.1109/TEVC.2009.2023449. Fully avail-
able at http://staff.ustc.edu.cn/~ketang/papers/TangMeiYao_TEVC09.pdf [accessed 2011-10-01]. CiteSeerx : 10.1.1.160.9949.
2670. Ke Tang, Xiaodong Li, Ponnuthurai Nagaratnam Suganthan, Zhenyu Yang, and Thomas
Weise. Benchmark Functions for the CEC’2010 Special Session and Competition on Large-
Scale Global Optimization. Technical Report, University of Science and Technology of China
(USTC), School of Computer Science and Technology, Nature Inspired Computation and
Applications Laboratory (NICAL): Héféi, Ānhuı̄, China, January 8, 2010. Fully available
at http://www.it-weise.de/documents/files/TLSYW2009BFFTCSSACOLSGO.pdf.
2671. Ke Tang, Pu Wang, and Tianshi Chen. Towards Maximizing The Area Under The
ROC Curve For Multi-Class Classification Problems. In AAAI’11 [3], pages 483–488,
2011. Fully available at http://aaai.org/ocs/index.php/AAAI/AAAI11/paper/
download/3485/3882 [accessed 2011-09-08].
2672. Eiichi Taniguchi and Russell G. Thompson, editors. Recent Advances in City Logis-
tics. Proceedings of the 4th International Conference on City Logistics, July 12–14, 2005,
Langkawi, Malaysia. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
isbn: 0-08-044799-6. Google Books ID: dMMQb83KH-EC.
2673. Eiichi Taniguchi, Russell G. Thompson, and Tadashi Yamada. Data Collection for Modelling,
Evaluating, and Benchmarking City Logistics Schemes. In Recent Advances in City Logistics.
Proceedings of the 4th International Conference on City Logistics [2672], pages 1–14, 2005.
2674. Ajay Kumar Tanwani and Muddassar Farooq. Performance Evaluation of Evolutionary Al-
gorithms in Classification of Biomedical Datasets. In GECCO’09-II [2343], pages 2617–
2624, 2009. doi: 10.1145/1570256.1570371. Fully available at http://nexginrc.org/
nexginrcAdmin/PublicationsFiles/iwlcs09-ajay.pdf [accessed 2010-07-26].
2675. Proceedings of the 12th International Conference on Parallel Problem Solving From Nature
(PPSN’12), September 1–5, 2012, Taormina, Italy.
2676. José Juan Tapia, Enrique Morett, and Edgar E. Vallejo. A Clustering Genetic Algorithm
for Genomic Data Mining. In Foundations of Computational Intelligence – Volume 4:
Bio-Inspired Data Mining [13], pages 249–275. Springer-Verlag: Berlin/Heidelberg, 2009.
doi: 10.1007/978-3-642-01088-0 11.
2677. Jonathan Tate, Benjamin Woolford-Lim, Ian Bate, and Xin Yao. Comparing Design of
Experiments and Evolutionary Approaches to Multi-Objective Optimisation of Sensornet
Protocols. In CEC’09 [1350], pages 1137–1144, 2009. doi: 10.1109/CEC.2009.4983074.
Fully available at ftp://ftp.cs.york.ac.uk/papers/rtspapers/R:Tate:2009a.
pdf and http://www.cercia.ac.uk/projects/research/SEBASE/pdfs/2009/
jt-bwl-ib-xy-cec2009.pdf [accessed 2009-11-26]. INSPEC Accession Number: 10688669.
2678. Advance Papers of the Fourth International Joint Conference on Artificial Intelligence (IJ-
CAI’75), September 3–8, 1975, Tbilisi, Georgia, USSR, volume 1 and 2. Fully available
at http://dli.iiit.ac.in/ijcai/IJCAI-75-VOL-1&2/CONTENT/content.htm [ac-
cessed 2008-04-01].
2679. Technical Committee CEN/TC 119 “Swap body for combined transport”: Brussels, Belgium.
Swap Bodies – Non-Stackable Swap Bodies of Class C – Dimensions and General Require-
ments, EN-284:2006. CEN-CENELEC: Brussels, Belgium, November 10, 2006. See also:
DIN 70013/14, BS 3951:Part 1, ISO 668, ISO 1161, ISO 6346, 88/218/EEC, UIC 592-4, UIC
596-6.
2680. Y. S. Teh and G. P. Rangaiah. Tabu Search for Global Optimization of Continuous Func-
tions with Application to Phase Equilibrium Calculations. Computers and Chemical Engi-
neering, 27(11):1665–1679, November 15, 2003, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0098-1354(03)00134-0.
2681. Astro Teller. Genetic Programming, Indexed Memory, the Halting Problem, and other Cu-
riosities. In FLAIRS’94 [1360], pages 270–274, 1994. Fully available at http://www.
cs.cmu.edu/afs/cs/usr/astro/public/papers/Curiosities.ps [accessed 2009-07-21].
CiteSeerx : 10.1.1.39.5589.
2682. Astro Teller. Turing Completeness in the Language of Genetic Programming with Indexed
Memory. In CEC’94 [1891], pages 136–141, 1994. doi: 10.1109/ICEC.1994.350027. Fully
available at http://www.astroteller.net/pdf/Turing.pdf and http://www.cs.
cmu.edu/afs/cs/usr/astro/public/papers/Turing.ps [accessed 2010-11-21].
2683. Marc Terwilliger, Ajay K. Gupta, Ashfaq A. Khokhar, and Garrison W. Greenwood. Local-
ization Using Evolution Strategies in Sensornets. In CEC’05 [641], pages 322–327, volume
1, 2005. doi: 10.1109/CEC.2005.1554701. Fully available at http://twig.lssu.edu/
publications/LESS.pdf [accessed 2009-09-18]. INSPEC Accession Number: 9004167.
2684. Igor V. Tetko, David J. Livingstone, and Alexander I. Luik. Neural Network Studies. 1.
Comparison of Overfitting and Overtraining. Journal of Chemical Information and Computer
Sciences, 35(5):826–833, September 1995, American Chemical Society: Washington, DC,
USA. doi: 10.1021/ci00027a006.
2685. Andrea G. B. Tettamanzi and Marco Tomassini. Soft Computing: Integrating Evolu-
tionary, Neural, and Fuzzy Systems. Springer-Verlag GmbH: Berlin, Germany, 2001.
isbn: 3-540-42204-8. Google Books ID: 86jQw6v30KoC.
2686. Sam R. Thangiah. Vehicle Routing with Time Windows using Genetic Algorithms. In
Practical Handbook of Genetic Algorithms: New Frontiers [522], pages 253–277. CRC Press,
Inc.: Boca Raton, FL, USA, 1995. CiteSeerx : 10.1.1.23.7903.
2687. Proceedings of the 4th International Multiconference on Computer Science and Information
Technology (CSIT’06), April 5–7, 2006, Amman, Jordan. The Computing Research Reposi-
tory (CoRR): Ithaca, NY, USA.
2688. First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery, March 8–10, 2006, Stanford Univer-
sity, David Packard EE Building Building: Stanford, CA, USA. The Digital Enterprise
Research Institute (DERI) Stanford: Stanford, CA, USA. Fully available at http://
sws-challenge.org/wiki/index.php/Workshop_Stanford [accessed 2010-12-12].
2689. Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery, June 15–16, 2006, Hotel Maestral:
Przno, Sveti Stefan, Budva, Republic of Montenegro. The Digital Enterprise Research Insti-
tute (DERI) Stanford: Stanford, CA, USA. Fully available at http://sws-challenge.
org/wiki/index.php/Workshop_Budva [accessed 2010-12-12].
2690. Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automat-
ing Web Services Mediation, Choreography and Discovery, November 10–11, 2006, Geor-
gia Center: Athens, GA, USA. The Digital Enterprise Research Institute (DERI) Stanford:
Stanford, CA, USA. Fully available at http://sws-challenge.org/wiki/index.php/
Workshop_Athens [accessed 2010-12-12].
2691. Proceedings of The 10th Conference on Pattern Languages of Programs (PLoP’03), Septem-
ber 8–12, 2003, University of Illinois, Allerton House: Monticello, IL, USA. The Hill-
side Group: Champaign, IL, USA. Fully available at http://hillside.net/plop/
plop2003/papers.html [accessed 2010-08-19].
2692. Dimitri Theodoratos and Wugang Xu. Constructing Search Spaces for Materialized View
Selection. In DOLAP’04 [2557], pages 112–121, 2004. doi: 10.1145/1031763.1031783.
2693. Guy Théraulaz and Eric W. Bonabeau. Coordination in Distributed Building. Science Mag-
azine, 269(5224):686–688, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.269.5224.686.
2694. Guy Théraulaz and Eric W. Bonabeau. Swarm Smarts. Scientific American, pages 72–79,
March 2000, Scientific American, Inc.: New York, NY, USA. Fully available at http://
www.santafe.edu/~vince/press/SciAm-SwarmSmarts.pdf [accessed 2008-06-12].
2695. Lothar Thiele, Kaisa Miettinen, Pekka J. Korhonen, and Julian Molina. A Preference-
based Interactive Evolutionary Algorithm for Multiobjective Optimization. HSE Working
Paper W-412, Helsinki School of Economics (HSE, Helsingin kauppakorkeakoulu): Helsinki,
Finland, January 2007. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/people/
thiele/paper/TMKM07.pdf and http://hsepubl.lib.hse.fi/pdf/wp/w412.pdf
[accessed 2009-07-18]. issn: 1235-5674.
2696. Dirk Thierens. On the Scalability of Simple Genetic Algorithms. Technical Report UU-CS-
1999-48, Utrecht University, Department of Information and Computing Sciences: Utrecht,
The Netherlands, 1999. Fully available at http://igitur-archive.library.uu.nl/
math/2007-0118-201740/thierens_99_on_the_scalability.pdf and http://
www.cs.uu.nl/research/techreps/repo/CS-1999/1999-48.pdf [accessed 2008-08-12].
CiteSeerx : 10.1.1.40.5295. urn: URN:NBN:NL:UI:10-1874/18986.
2697. Dirk Thierens and David Edward Goldberg. Convergence Models of Genetic Algorithm Se-
lection Schemes. In PPSN III [693], pages 119–129, 1994. doi: 10.1007/3-540-58484-6 256.
Fully available at http://www.cs.uu.nl/groups/DSS/publications/convergence/
convMdl.ps and http://www.csie.ntu.edu.tw/~b91069/PDF/convMdl.pdf [accessed 2010-12-04].
2698. Dirk Thierens, David Edward Goldberg, and Ângela Guimarães Pereira. Domino Conver-
gence, Drift, and the Temporal-Salience Structure of Problems. In CEC’98 [2496], pages
535–540, 1998. doi: 10.1109/ICEC.1998.700085. CiteSeerx : 10.1.1.46.6116.
2699. Dirk Thierens, Hans-Georg Beyer, Josh C. Bongard, Jürgen Branke, John Andrew Clark,
Dave Cliff, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Tim Kovacs, Sanjeev
Kumar, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Riccardo
Poli, Kumara Sastry, Kenneth Owen Stanley, Thomas Stützle, Richard A. Watson, and Ingo
Wegener, editors. Proceedings of 9th Genetic and Evolutionary Computation Conference
(GECCO’07-I), July 7–11, 2007, University College London (UCL): London, UK. ACM Press:
New York, NY, USA. isbn: 1-59593-697-1. OCLC: 184827588, 232392364, 271369878,
and 288957084. ACM Order No.: 910070. See also [2700].
2700. Dirk Thierens, Hans-Georg Beyer, Josh C. Bongard, Jürgen Branke, John Andrew Clark,
Dave Cliff, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Tim Kovacs, Sanjeev
Kumar, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Riccardo
Poli, Kumara Sastry, Kenneth Owen Stanley, Thomas Stützle, Richard A. Watson, and Ingo
Wegener, editors. Genetic and Evolutionary Computation Conference – Companion Material
(GECCO’07-II), July 7–11, 2007, University College London (UCL): London, UK. ACM
Press: New York, NY, USA. ACM Order No.: 910071. See also [2699].
2701. Herve Thiriez and Stanley Zionts, editors. Proceedings of the 1st International Conference on
Multiple Criteria Decision Making (MCDM’75), May 21–23, 1975, Jouy-en-Josas, France,
volume 130 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 0387077944. OCLC: 2332357. Published in 1976.
2702. James J. Thomas, editor. Proceedings of the 18th Annual Conference on Computer Graphics
and Interactive Techniques (SIGGRAPH’91), July 28–August 2, 1991, Las Vegas, NV, USA.
SIGGRAPH: ACM Special Interest Group on Computer Graphics and Interactive Techniques,
ACM Press: New York, NY, USA. isbn: 0-89791-436-8. Order No.: 428911.
2703. Richard K. Thompson and Alden H. Wright. Additively Decomposable Fitness Func-
tions. Technical Report, University of Montana, Computer Science Department: Missoula,
MT, USA, 1996. Fully available at http://www.cs.umt.edu/u/wright/papers/add_
decomp.ps.gz [accessed 2007-11-27].
2704. Henk Tijms. Understanding Probability: Chance Rules in Everyday Life. Cambridge Uni-
versity Press: Cambridge, UK, August 16, 2004. isbn: 0521540364. Google Books
ID: y2361S44XLsC. OCLC: 54079595.
2705. Jonathan Timmis and Peter J. Bentley, editors. Proceedings of the 1st International Con-
ference on Artificial Immune Systems (ICARIS’02), University of Kent: Canterbury, Kent,
UK. isbn: 1902671325.
2706. Jonathan Timmis, Peter J. Bentley, and Emma Hart, editors. Proceedings of the 2nd In-
ternational Conference on Artificial Immune Systems (ICARIS’03), September 1–3, 2003,
Edinburgh, Scotland, UK, volume 2787 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-40766-9.
2707. Ville Tirronen, Ferrante Neri, Tommi Kärkkäinen, Kirsi Majava, and Tuomo Rossi. An
Enhanced Memetic Differential Evolution in Filter Design for Defect Detection in Paper
Production. Evolutionary Computation, 16(4):529–555, Winter 2008, MIT Press: Cambridge,
MA, USA. doi: 10.1162/evco.2008.16.4.529. PubMed ID: 19053498.
2708. Philippe L. Toint. Test Problems for Partially Separable Optimization and Results for the
Routine PSPMIN. Technical Report 83/4, University of Namur (FUNDP), Department of
Mathematics: Namur, Belgium, 1983.
2709. Proceedings of the 6th International Joint Conference on Artificial Intelligence (IJCAI’79-I),
August 20–23, 1979, Tokyo, Japan, volume 1. Fully available at http://dli.iiit.ac.
in/ijcai/IJCAI-79-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2710].
2710. Proceedings of the 6th International Joint Conference on Artificial Intelligence (IJCAI’79-II),
August 20–23, 1979, Tokyo, Japan, volume 2. Fully available at http://dli.iiit.ac.
in/ijcai/IJCAI-79-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2709].
2711. Shisanu Tongchim and Xin Yao. Parallel Evolutionary Programming. In CEC’04 [1369],
pages 1362–1367, volume 2, 2004. doi: 10.1109/CEC.2004.1331055. Fully available
at http://www.cs.bham.ac.uk/~xin/papers/TongchimYao_CEC04.pdf [accessed 2011-11-10]. INSPEC Accession Number: 8229086.
2712. Virginia Joanne Torczon. Multi-Directional Search: A Direct Search Algorithm for Parallel
Machines. PhD thesis, Rice University, Department of Computational & Applied Mathe-
matics (CAAM): Houston, TX, USA, May 1989. CiteSeerx : 10.1.1.48.9967. Supervisors:
John E. Dennis, Jr., Andrew E. Boyd, Kenneth W. Kennedy, Jr., and Richard A. Tapia.
2713. P. Tormos, A. Lova, F. Barber, L. Ingolotti, M. Abril, and M. A. Salido. A Ge-
netic Algorithm for Railway Scheduling Problems. In Metaheuristics for Scheduling in
Industrial and Manufacturing Applications [2992], Chapter 10, pages 255–276. Springer-
Verlag: Berlin/Heidelberg, January 2008. doi: 10.1007/978-3-540-78985-7 10. Fully available
at http://arrival.cti.gr/uploads/Documents.0081/ARRIVAL-TR-0081.pdf [accessed 2011-02-19]. CiteSeerx : 10.1.1.157.9570. STREP (Member of the FET Open Scheme)
Project Number FP6-021235-2: ARRIVAL “Algorithms for Robust and online Railway op-
timization: Improving the Validity and reliAbility of Large scale systems”; ARRIVAL-TR-
0081.
2714. Aimo Törn and Antanas Žilinskas. Global Optimization, volume 350/1989 in Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 1989.
doi: 10.1007/3-540-50871-6. isbn: 0-387-50871-6 and 3-540-50871-6.
2715. Proceedings of the 1st International Workshop on Local Search Techniques in Constraint
Satisfaction (LSCS’04), September 27, 2004, Toronto, ON, Canada.
2716. Douglas E. Torres D. and Claudio M. Rocco S. Empirical Models Based on Hybrid Intelligent
Systems for Assessing the Reliability of Complex Networks. In EvoWorkshops’05 [2340],
pages 147–155, 2005.
2717. Paolo Toth and Daniele Vigo, editors. The Vehicle Routing Problem, volume 9 in SIAM
Monographs on Discrete Mathematics and Applications. Society for Industrial and Applied
Mathematics (SIAM): Philadelphia, PA, USA, 2002. isbn: 0898715792. Google Books
ID: TeMgA5S74skC.
2718. Julius T. Tou, editor. Proceedings of the Symposium on Computer and Information Sciences
II. Academic Press: London, New York, August 22–24, 1966, Columbus, OH, USA. Google
Books ID: V NBAAAAIAAJ. GBV-Identification (PPN): 173850669.
2719. Marc Toussaint and Christian Igel. Neutrality: A Necessity for Self-Adaptation.
In CEC’02 [944], pages 1354–1359, 2002. doi: 10.1109/CEC.2002.1004440.
CiteSeerx : 10.1.1.16.8492. arXiv ID: nlin/0204038v1.
2720. Amy J.C. Trappey, Charles V. Trappey, and Chang-Ru Wu. Genetic Algorithm Dy-
namic Performance Evaluation for RFID Reverse Logistic Management. Expert Sys-
tems with Applications – An International Journal, 37(11):7329–7335, November 2010,
Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.eswa.2010.04.026. See also [2721].
2721. Amy J.C. Trappey, Charles V. Trappey, and Chang-Ru Wu. Corrigendum to “Genetic Al-
gorithm Dynamic Performance Evaluation for RFID Reverse Logistic Management” [Expert
Systems with Applications 37 (11) (2010) 7329–7335]. Expert Systems with Applications –
An International Journal, 38(3):2920, March 2011, Elsevier Science Publishers B.V.: Essex,
UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.eswa.2010.10.005. See also [2720].
2722. Ioan Cristian Trelea. The Particle Swarm Optimization Algorithm: Convergence Analysis and
Parameter Selection. Information Processing Letters, 85(6):317–325, March 31, 2003, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific
Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(02)00447-7.
2723. Krzysztof Trojanowski. Evolutionary Algorithms with Redundant Genetic Material for Non-
Stationary Environments. PhD thesis, Warsaw University: Warsaw, Poland, June 21, 2000,
Zbigniew Michalewicz, Advisor.
2724. Michael W. Trosset. I Know It When I See It: Toward a Definition of Direct Search Methods.
SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization, 9:7–
10, Fall 1997, SIAM Activity Group on Optimization – SIAG/OPT: Philadelphia, PA, USA.
Fully available at http://www.mcs.anl.gov/~leyffer/views/09.pdf [accessed 2010-12-25].
2725. Constantino Tsallis and Ricardo Ferreira. On the Origin of Self-Replicating Information-
Containing Polymers from Oligomeric Mixtures. Physics Letters A, 99(9):461–463,
December 26, 1983, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/0375-9601(83)90958-1.
2726. Edward P. K. Tsang, James M. Butler, and Jin Li. EDDIE Beats the
Bookies. International Journal of Software, Practice and Experience, 28(10):
1033–1043, August 1998, John Wiley & Sons Ltd.: New York, NY, USA.
doi: 10.1002/(SICI)1097-024X(199808)28:10<1033::AID-SPE198>3.0.CO;2-1. Fully avail-
able at http://cswww.essex.ac.uk/Research/CSP/finance/Eddie/Overview/ [accessed 2010-06-25]. CiteSeerx : 10.1.1.63.7597. See also [2727, 2728].
2727. Edward P. K. Tsang, Jin Li, Sheri Marina Markose, Hakan Er, Abdellah Salhi, and Giulia
Iori. EDDIE in Financial Decision Making. Journal of Management and Economics,
4(4), November 2000, Universidad de Buenos Aires, Facultad de Ciencias Económicas:
Buenos Aires, Argentina. Fully available at http://www.bracil.net/finance/papers/
Tsang-Eddie-JMgtEcon2000.pdf [accessed 2010-06-25]. CiteSeerx : 10.1.1.64.7200. See
also [2726, 2728].
2728. Edward P. K. Tsang, Paul Yung, and Jin Li. EDDIE-Automation – A Decision Support Tool
for Financial Forecasting. Decision Support Systems, 37(4):559–565, September 2004, Else-
vier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0167-9236(03)00087-3. Fully avail-
able at http://www.bracil.net/finance/papers/TsYuLi-Eddie-Dss2004.pdf [ac-
cessed 2010-06-25]. See also [2726, 2727].
2746. Tatsuo Unemi. SBART 2.4: An IEC Tool for Creating 2D Images, Movies, and Collage. In
GAVAM’00 [1452], pages 153–157, 2000. CiteSeerx : 10.1.1.31.3280.
2747. Proceedings of the 8th Argentinean Conference of Computer Science, Congreso Argentino
de Ciencias de la Computación (CACIC’02), October 15–17, 2002, Universidad de Buenos
Aires (UBA): Buenos Aires, Argentina.
2748. AISB 2000 Convention: Artificial Intelligence and Society (AISB’00), April 17–21, 2000,
University of Birmingham: Birmingham, UK. Fully available at http://www.aisb.org.
uk/publications/proceedings.shtml [accessed 2008-09-10].
2749. 4th International Conference on Hybrid Artificial Intelligence Systems (HAIS’09), June 10–
12, 2009, Salamanca, Spain, Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture
Notes in Computer Science (LNCS). University of Burgos, Applied Computational Intelli-
gence Group (GICAP): Burgos, Spain, Springer-Verlag GmbH: Berlin, Germany. co-located
with 10th International Work-Conference on Artificial Neural Networks (IWANN2009).
2750. “New State of MCDM in 21st Century” – Proceedings of the 20th International Conference on
Multiple Criteria Decision Making (MCDM’09), June 21–26, 2009, University of Electronic
Science and Technology of China (Shahe Campus): Chéngdū, Sı̀chuān, China.
2751. Proceedings of the Seventh Conference on Evolutionary and Deterministic Methods for De-
sign, Optimization and Control with Applications to Industrial and Societal Problems (EU-
ROGEN’07), June 11–13, 2007, University of Jyväskylä: Jyväskylä, Finland. Partly available
at http://www.mit.jyu.fi/scoma/Eurogen2007/ [accessed 2007-09-16].
2752. Proceedings of the First International Workshop on Advanced Computational Intelligence
(IWACI’08), June 7–8, 2008, University of Macau: Macau, China.
2753. Genetic Programming Theory and Practice IX, Proceedings of the Ninth Workshop on Genetic
Programming (GPTP’11), May 12–14, 2011, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA.
2754. 2011 International Workshop on Nature Inspired Computation and Its Applications
(IWNICA’11), April 17, 2011, University of Science and Technology of China (USTC): Héféi,
Ānhuı̄, China. Partly available at http://www.cercia.ac.uk/projects/research/
NICaiA/2011workshopsinfor.php [accessed 2011-03-20].
2755. The 2004 International Workshop on Nature Inspired Computation and Applications
(IWNICA’04), October 24–29, 2004, University of Science and Technology of China (USTC),
School of Computer Science and Technology, Nature Inspired Computation and Applications
Laboratory (NICAL): Héféi, Ānhuı̄, China.
2756. The 2004 UK-China Workshop on Nature Inspired Computation and Applications
(UCWNICA’04), October 25–29, 2004, University of Science and Technology of China
(USTC), School of Computer Science and Technology, Nature Inspired Computation and
Applications Laboratory (NICAL): Héféi, Ānhuı̄, China.
2757. The 2008 International Workshop on Nature Inspired Computation and Applications
(IWNICA’08), May 27–29, 2008, University of Science and Technology of China (USTC),
School of Computer Science and Technology, Nature Inspired Computation and Applications
Laboratory (NICAL): Héféi, Ānhuı̄, China.
2758. The 2010 International Workshop on Nature Inspired Computation and Applications
(IWNICA’10), October 23–27, 2010, University of Science and Technology of China (USTC),
School of Computer Science and Technology, Nature Inspired Computation and Applications
Laboratory (NICAL): Héféi, Ānhuı̄, China.
2759. AISB 2001 Convention: Agents and Cognition (AISB’01), March 21–24, 2001, University of
York: Heslington, York, UK. isbn: 1902956170, 1902956189, 1902956197, 1902956205,
1902956213, and 1902956221.
2760. Rasmus K. Ursem. Models for Evolutionary Algorithms and Their Applications in System
Identification and Control Optimization. PhD thesis, University of Aarhus, Department of
Computer Science: Århus, Denmark, April 1, 2003, Thiemo Krink and Brian H. Mayoh,
Advisors. Fully available at http://www.daimi.au.dk/~ursem/publications/RKU_
thesis_2003.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.13.4971.
2761. Rob J. M. Vaessens, Emile H. L. Aarts, and Jan Karel Lenstra. A Local Search Template.
In PPSN II [1827], pages 67–76, 1992. CiteSeerx : 10.1.1.54.4457. See also [2762].
2762. Rob J. M. Vaessens, Emile H. L. Aarts, and Jan Karel Lenstra. A Local Search
Template. Computers & Operations Research, 25(11):969–979, November 1, 1998,
Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0305-0548(97)00093-2. See also [2761].
2763. Grégory Valigiani, Evelyne Lutton, Cyril Fonlupt, and Pierre Collet. Optimisation
par “Hommilière” de Chemins Pédagogiques pour un Logiciel d’e-Learning. Technique
et Science Informatique (TSI), 26(10):1245–1267, 2007, TSI: Cachan cedex, France.
doi: 10.3166/tsi.26.1245-1267.
2764. Satyanarayana R. Valluri, Soujanya Vadapalli, and Kamalakar Karlapalem. View Relevance
Driven Materialized View Selection in Data Warehousing Environment. In ADC’02 [3081],
2002. doi: 10.1145/563932.563927. Fully available at http://crpit.com/confpapers/
CRPITV5Valluri.pdf [accessed 2011-03-29]. See also [2765].
2765. Satyanarayana R. Valluri, Soujanya Vadapalli, and Kamalakar Karlapalem. View Relevance
Driven Materialized View Selection in Data Warehousing Environment. Australian Computer
Science Communications, 24(2), January–February 2002, IEEE Computer Society Press: Los
Alamitos, CA, USA. doi: 10.1145/563932.563927. Fully available at http://crpit.com/
confpapers/CRPITV5Valluri.pdf [accessed 2011-03-29]. CiteSeerx : 10.1.1.18.9497. See
also [2764].
2766. Werner Van Belle. Automatic Generation of Concurrency Adaptors by Means of Learning
Algorithms. PhD thesis, Vrije Universiteit Brussel, Faculteit van de Wetenschappen, Departe-
ment Informatica: Brussels, Belgium, August 2001, Theo D’Hondt and Tom Tourwé, Advi-
sors. Fully available at http://borg.rave.org/adaptors/Thesis6.pdf [accessed 2008-06-23]. See also [2767].
2767. Werner Van Belle, Tom Mens, and Theo D’Hondt. Using Genetic Programming
to Generate Protocol Adaptors for Interprocess Communication. In ICES’03 [2743],
pages 67–73, 2003. Fully available at http://prog.vub.ac.be/Publications/
2003/vub-prog-tr-03-33.pdf and http://werner.yellowcouch.org/Papers/
genadap03/ICES-online.pdf [accessed 2008-06-23]. See also [2766].
2768. Klemens van Betteray. Gesetzliche und handelsspezifische Anforderungen an die
Rückverfolgung. In Vorträge des 7. VDEB-Infotags 2004 – Mobile Computing, RFID und
Rückverfolgbarkeit [2800], 2004. EU Verordnung 178/2002.
2769. Alex Van Breedam. An Analysis of the Behavior of Heuristics for the Vehicle Routing Prob-
lem for a Selection of Problems with Vehicle-related, Customer-related, and Time-related
Constraints. PhD thesis, University of Antwerp (RUCA): Antwerp, Belgium, 1994.
2770. Alex Van Breedam. An Analysis of the Behavior of Heuristics for the Vehicle Routing Prob-
lem for a Selection of Problems with Vehicle-Related, Customer-Related, and Time-Related
Constraints. PhD thesis, University of Antwerp (RUCA): Antwerp, Belgium, 1994.
2771. Gertrudis Van de Vijver, Stanley N. Salthe, and Manuela Delpos, editors. Evolutionary Sys-
tems – Biological and Epistemological Perspectives on Selection and Self-Organization. Kluwer
Academic Publishers: Norwell, MA, USA, 1998. isbn: 0792352602. OCLC: 39556318. Li-
brary of Congress Control Number (LCCN): 98036791. LC Classification: QH360.5 .E96
1998.
2772. J. Koen van der Hauw. Evaluating and Improving Steady State Evolutionary Algo-
rithms on Constraint Satisfaction Problems. Master’s thesis, Leiden University: Leiden,
The Netherlands, August 9, 1996, Ágoston E. Eiben and Han La Poutré, Supervisors.
CiteSeerx: 10.1.1.29.1568.
2773. Christiaan M. van der Walt and Etienne Barnard. Data Characteristics that Determine Clas-
sifier Performance. In ICAPR’05 [2506], pages 160–165, 2005. Fully available at http://
www.meraka.org.za/pubs/CvdWalt.pdf and http://www.patternrecognition.
co.za/publications/cvdwalt_data_characteristics_classifiers.pdf [ac-
cessed 2009-09-09].
2774. Bas C. van Fraassen. Laws and Symmetry, Clarendon Paperbacks. Oxford University Press,
Inc.: New York, NY, USA, 1989. isbn: 0198248601. Google Books ID: LzXuE1Sri wC.
2775. Jano I. van Hemert and Carlos Cotta, editors. Proceedings of the 8th European Conference on
Evolutionary Computation in Combinatorial Optimization (EvoCOP’08), March 26–28, 2008,
Naples, Italy, volume 4972/2008 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-78603-1. Library of Congress Control Number (LCCN): 2008922955.
2776. Peter J. M. van Laarhoven and Emile H. L. Aarts, editors. Simulated Annealing: Theory
and Applications, volume 37 in Mathematics and its Applications. Kluwer Academic Pub-
lishers: Norwell, MA, USA, 1987. isbn: 90-277-2513-6. Google Books ID: -IgUab6Dp IC
and URDXYgEACAAJ. OCLC: 15548651, 230902948, 471714266, 489996631, and
638812897. Library of Congress Control Number (LCCN): 87009666. GBV-Identification
(PPN): 020442645 and 043026036. LC Classification: QA402.5 .L3 1987. Reviewed in
[1316].
2777. Jan van Leeuwen, editor. Computer Science Today – Recent Trends and Developments,
volume 1000/1995 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany, 1995. doi: 10.1007/BFb0015232. isbn: 3-540-60105-8. Google Books
ID: YHxQAAAAMAAJ.
2778. Erik van Nimwegen and James P. Crutchfield. Optimizing Epochal Evolutionary Search:
Population-Size Dependent Theory. Machine Learning, 45(1):77–114, October 2001, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands,
Leslie Pack Kaelbling, editor. doi: 10.1023/A:1012497308906.
2779. Erik van Nimwegen, James P. Crutchfield, and Martijn A. Huynen. Neutral Evolution of
Mutational Robustness. Proceedings of the National Academy of Science of the United States
of America (PNAS), 96(17):9716–9720, August 17, 1999, National Academy of Sciences:
Washington, DC, USA. Fully available at http://www.pnas.org/cgi/reprint/96/
17/9716.pdf [accessed 2008-07-02].
2780. Erik van Nimwegen, James P. Crutchfield, and Melanie Mitchell. Statistical Dynamics of the
Royal Road Genetic Algorithm. Theoretical Computer Science, 229(1–2):41–102, November 6,
1999, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0304-3975(99)00119-X.
CiteSeerx: 10.1.1.43.3594.
2781. Maarten van Someren and Gerhard Widmer, editors. 9th European Conference on Ma-
chine Learning (ECML’97), April 23–25, 1997, Prague, Czech Republic, volume 1224/1997
in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-62858-4.
isbn: 3-540-62858-4. Google Books ID: BC4- V0D-qUC and w4lfPwAACAAJ.
OCLC: 36676064, 56867902, and 82399375.
2782. Harry L. Van Trees. Detection, Estimation, and Modulation Theory, Part I. Wiley
Interscience: Chichester, West Sussex, UK, 1968–February 2007. isbn: 0471093904,
0471095176, and 0471899550. Google Books ID: 2kkHAAAACAAJ, NSddAAAACAAJ,
RywEAAAACAAJ, RywEAAAACAAJ, Xzp7VkuFqXYC, and udD3AQAACAAJ. OCLC: 49665206,
85820539, 234257633, and 249067915.
2783. David A. van Veldhuizen and Laurence D. Merkle. Multiobjective Evolutionary Algorithms:
Classifications, Analyses, and New Innovations. PhD thesis, Air University, Air Force Insti-
tute of Technology: Wright-Patterson Air Force Base, OH, USA, June 1999, Gary B. Lamont,
Supervisor, Richard F. Deckro and Thomas F. Reid, Committee members. Fully avail-
able at http://citeseer.ist.psu.edu/old/vanveldhuizen99multiobjective.
html, http://handle.dtic.mil/100.2/ADA364478, and http://neo.lcc.uma.es/
emoo/veldhuizen99a.ps.gz [accessed 2009-08-05]. ID: AFIT/DS/ENG/99-01.
2784. Leonardo Vanneschi and Marco Tomassini. A Study on Fitness Distance Correlation
and Problem Difficulty for Genetic Programming. In GECCO’02 GSWS [1790], pages
307–310, 2002. Fully available at http://personal.disco.unimib.it/Vanneschi/
GECCO_2002_PHD_WORKSHOP.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/
gp-html/vanneschi_2002_gecco_workshop.html [accessed 2008-07-20]. See also [590].
2785. Leonardo Vanneschi, Manuel Clergue, Philippe Collard, Marco Tomassini, and Sébastien
Vérel. Fitness Clouds and Problem Hardness in Genetic Programming. In GECCO’04-
II [752], pages 690–701, 2004. doi: 10.1007/b98645. Fully available at http://hal.
archives-ouvertes.fr/hal-00160055/en/ [accessed 2007-10-14], http://hal.inria.
fr/docs/00/16/00/55/PDF/fc_gp.pdf, http://www.cs.bham.ac.uk/~wbl/
biblio/gp-html/vanneschi_fca_gecco2004.html [accessed 2008-07-20], and http://
www.i3s.unice.fr/~verel/publi/gecco04-FCandPbHardnessGP.pdf [accessed 2007-
10-14].
2786. Leonardo Vanneschi, Marco Tomassini, Philippe Collard, and Sébastien Vérel. Negative
Slope Coefficient. A Measure to Characterize Genetic Programming. In EuroGP’06 [607],
pages 178–189, 2006. doi: 10.1007/11729976_16. Fully available at http://www.cs.bham.
ac.uk/~wbl/biblio/gp-html/eurogp06_VanneschiTomassiniCollardVerel.
html [accessed 2008-07-20].
2787. Leonardo Vanneschi, Steven Matt Gustafson, Alberto Moraglio, Ivanoe de Falco, and
Marc Ebner, editors. Proceedings of the 12th European Conference on Genetic Program-
ming (EuroGP’09), April 15–17, 2009, Eberhard-Karls-Universität Tübingen, Fakultät für
Informations- und Kognitionswissenschaften: Tübingen, Germany, volume 5481/2009 in
Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01181-8.
isbn: 3-642-01180-2.
2788. Vladimir Naumovich Vapnik. The Nature of Statistical Learning Theory, Information Science
and Statistics. Springer-Verlag GmbH: Berlin, Germany, 1995–1999. isbn: 0-387-98780-0.
Google Books ID: sna9BaxVbj8C. OCLC: 41959173 and 247524710.
2789. Francisco J. Varela and Paul Bourgine, editors. Toward a Practice of Autonomous Sys-
tems: Proceedings of the First European Conference on Artificial Life (ECAL’91), De-
cember 11–13, 1991, Paris, France, Bradford Books. MIT Press: Cambridge, MA, USA.
isbn: 0-262-72019-1. Published in 1992.
2790. Himanshu Vashishtha, Michael Smit, and Eleni Stroulia. Moving Text Analysis Tools to the
Cloud. In SERVICES Cup’10 [2457], pages 107–114, 2010. doi: 10.1109/SERVICES.2010.91.
INSPEC Accession Number: 11535163.
2791. Athanasios V. Vasilakos, Kostas G. Anagnostakis, and Witold Pedrycz. Application of
Computational Intelligence Techniques in Active Networks. Soft Computing – A Fusion of
Foundations, Methodologies and Applications, 5(4):264–271, August 2001, Springer-Verlag:
Berlin/Heidelberg. doi: 10.1007/s005000100100.
2792. Vesselin K. Vassilev and Julian Francis Miller. The Advantages of Landscape Neu-
trality in Digital Circuit Evolution. In ICES’00 [1902], pages 252–263, 2000.
doi: 10.1007/3-540-46406-9_25. CiteSeerx: 10.1.1.41.3777.
2793. Miguel A. Vega-Rodrı́guez, Juan A. Gómez-Pulido, Enrique Alba Torres, David Vega-
Pérez, Silvio Priem-Mendes, and Guillermo Molina. Evaluation of Different Metaheuris-
tics Solving the RND Problem. In EvoWorkshops’07 [1050], pages 101–110, 2007.
doi: 10.1007/978-3-540-71805-5_11.
2794. Miguel A. Vega-Rodrı́guez, David Vega-Pérez, Juan A. Gómez-Pulido, and Juan M.
Sánchez-Pérez. Radio Network Design Using Population-Based Incremental Learning
and Grid Computing with BOINC. In EvoWorkshops’07 [1050], pages 91–100, 2007.
doi: 10.1007/978-3-540-71805-5_10.
2795. Goran Velinov, Danilo Gligoroski, and Margita Kon-Popovska. Hybrid Greedy and Genetic
Algorithms for Optimization of Relational Data Warehouses. In AIAP’07 [776], pages 470–
475, 2007.
2796. Goran Velinov, Boro Jakimovski, Darko Cerepnalkoski, and Margita Kon-Popovska. Improve-
ment of Data Warehouse Optimization Process by Workflow Gridification. In ADBIS [143],
pages 295–304, 2008. doi: 10.1007/978-3-540-85713-6_21.
2797. Manuela M. Veloso, editor. “AI and its benefits to society” – Proceedings of the 20th Inter-
national Joint Conference on Artificial Intelligence (IJCAI’07), January 6–12, 2007, Hyder-
abad, India. AAAI Press: Menlo Park, CA, USA. Fully available at http://dli.iiit.
ac.in/ijcai/IJCAI-2007/CONTENT/contents2007.html and http://www.ijcai.
org/papers07/contents.php [accessed 2008-04-01]. See also [1296].
2798. Gerhard Venter and Jaroslaw Sobieszczanski-Sobieski. Particle Swarm Optimization. In 43rd
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference
and Exhibit [84], 2002. CiteSeerx: 10.1.1.57.7431. See also [2799].
2799. Gerhard Venter and Jaroslaw Sobieszczanski-Sobieski. Particle Swarm Optimization. AIAA
Journal, 41(8):1583–1589, August 2003, American Institute of Aeronautics and Astronautics:
Reston, VA, USA. Fully available at http://pdf.aiaa.org/getfile.cfm?urlX=2
%3CWIG7D%2FQKU%3E6B5%3AKF5%2B%5CQ%3AK%3E%0A&urla=%26%2B%22L%2F%22P
%22%40%0A&urlb=%21%2A%20%20%20%0A&urlc=%21%2A0%20%20%0A&urld=%21%2A0
%20%20%0A&urle=%27%28%22H%22%23PJAT%20%20%20%0A [accessed 2010-12-24]. See also
[2798].
2800. Vorträge des 7. VDEB-Infotags 2004 – Mobile Computing, RFID und Rückverfolgbarkeit,
July 28, 2004. Verband der EDV-Software-und Beratungsunternehmen (VDEB): Aachen,
North Rhine-Westphalia, Germany and VDEB Verband IT-Mittelstand e.V.: Stuttgart, Ger-
many.
2801. Proceedings of the Workshop SAKS 2007, March 1, 2007, Bern, Switzerland. Verband der
Elektrotechnik, Elektronik und Informationstechnik e.V. (VDE): Frankfurt am Main, Ger-
many.
2802. Sébastien Vérel, Philippe Collard, and Manuel Clergue. Measuring the Evolvabil-
ity Landscape to Study Neutrality. In GECCO’06 [1516], pages 613–614, 2006.
doi: 10.1145/1143997.1144107. arXiv ID: 0709.4011v1.
2803. Proceedings of the Second European Congress on Fuzzy and Intelligent Technologies (EU-
FIT’94), September 20–23, 1994, Aachen, North Rhine-Westphalia, Germany. Verlag der
Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany.
2804. Proceedings of the First European Congress on Fuzzy and Intelligent Technologies (EU-
FIT’93), September 7–9, 1993, Aachen, North Rhine-Westphalia, Germany. Verlag der Au-
gustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany and ELITE Foundation:
Aachen, North Rhine-Westphalia, Germany.
2805. Proceedings of the Third European Congress on Intelligent Techniques and Soft Computing
(EUFIT’95), August 28–31, 1995, Aachen, North Rhine-Westphalia, Germany. Verlag der
Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany, Verlag und Druck
Mainz GmbH: Aachen, North Rhine-Westphalia, Germany, and ELITE Foundation: Aachen,
North Rhine-Westphalia, Germany.
2806. Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing
(EUFIT’98), September 7–10, 1998, Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen: Aachen, North Rhine-Westphalia, Germany. Verlag und Druck Mainz GmbH:
Aachen, North Rhine-Westphalia, Germany. isbn: 3-89653-500-5.
2807. Proceedings of the 7th European Congress on Intelligent Techniques and Soft Computing
(EUFIT’99), September 13–16, 1999, Aachen, North Rhine-Westphalia, Germany. Verlag
und Druck Mainz GmbH: Aachen, North Rhine-Westphalia, Germany.
2808. William T. Vetterling, Saul A. Teukolsky, William H. Press, and Brian P. Flannery. Numeri-
cal Recipes in C++ – Example Book – The Art of Scientific Computing. Cambridge Univer-
sity Press: Cambridge, UK, second edition, February 7, 2002. isbn: 0521750342. Google
Books ID: gwijz-OyIYEC. OCLC: 48265610, 314335535, 423496209, 469630365,
and 610691802. Library of Congress Control Number (LCCN): 2001052753. GBV-
Identification (PPN): 336225350 and 374254729. LC Classification: QA76.73.C153 N83
2002.
2809. Sixth European Conference on Machine Learning (ECML’93), April 5–7, 1993, Vienna, Aus-
tria.
2810. 6th Metaheuristics International Conference (MIC’05), August 22–26, 2005, Vienna, Austria.
Partly available at http://www.mic2005.org/ [accessed 2007-09-12].
2811. Savings Ants for the Vehicle Routing Problem. Technical Report 63, Vienna University of
Economics and Business Administration: Vienna, Austria, December 2001. ID: oai:epub.wu-
wien.ac.at:epub-wu-01 1f1. See also [802].
2812. Daniele Vigo. VRPLIB: A Vehicle Routing Problem LIBrary. Università degli
Studi di Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS),
Operations Research Group (Ricerca Operativa): Bológna, Emilia-Romagna, Italy,
October 3, 2003. Fully available at http://or.ingce.unibo.it/research/
vrplib-a-vehicle-routing-problem-resources-library and http://www.or.
deis.unibo.it/research_pages/ORinstances/VRPLIB/VRPLIB.html [accessed 2011-
10-12].
2813. Daniele Vigo and Maria Battarra. Database of Heterogeneous Vehicle Routing Problems.
Università degli Studi di Bologna, Dipartimento di Elettronica, Informatica e Sistemistica
(DEIS), Operations Research Group (Ricerca Operativa): Bológna, Emilia-Romagna, Italy,
January 19, 2007. Fully available at http://apice54.ingce.unibo.it/hvrp/index.
php [accessed 2011-10-12].
2814. Frank J. Villegas, Tom Cwik, Yahya Rahmat-Samii, and Majid Manteghi. A Parallel Elec-
tromagnetic Genetic-Algorithm Optimization (EGO) Application for Patch Antenna Design.
IEEE Transactions on Antennas and Propagation, 52(9):2424–2435, September 3, 2004, IEEE
Antennas & Propagation Society: Marsfield, NSW, Australia. doi: 10.1109/TAP.2004.834071.
2815. Andrei Vladimirescu. The SPICE Book. John Wiley & Sons Ltd.: New York, NY, USA,
1994. isbn: 0471609269.
2816. Bernhard Friedrich Voigt. Der Handlungsreisende – wie er sein soll und was er zu thun hat,
um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein –
von einem alten Commis-Voyageur. Voigt: Ilmenau, Germany, 1832. OCLC: 258013333.
Excerpt: “. . . Durch geeignete Auswahl und Planung der Tour kann man oft so viel Zeit
sparen, daß wir einige Vorschläge zu machen haben. . . . Der wichtigste Aspekt ist, so viele
Orte wie möglich zu erreichen, ohne einen Ort zweimal zu besuchen. . . . ”. Newly published
as [2817].
2817. Bernhard Friedrich Voigt. Der Handlungsreisende – wie er sein soll und was er zu thun
hat, um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu
sein – von einem alten Commis-Voyageur. Verlag Bernd Schramm: Kiel, Germany, 1981.
isbn: 3-921361-24-9. New edition of [2816].
2818. Hans-Michael Voigt, Werner Ebeling, Ingo Rechenberg, and Hans-Paul Schwefel, editors. Pro-
ceedings of the 4th International Conference on Parallel Problem Solving from Nature (PPSN
IV), September 22–24, 1996, Berlin, Germany, volume 1141/1996 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-61723-X.
isbn: 3-540-61723-X. Google Books ID: uN0mgUx-VRUC. OCLC: 35331240, 150419867,
and 246323673.
2819. Christian Voigtmann. Integration Evolutionärer Klassifikatoren in Weka. Bachelor’s thesis,
University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group:
Kassel, Hesse, Germany, October 26, 2008, Thomas Weise, Advisor, Kurt Geihs and Heinrich
Werner, Committee members. See also [2855, 2902].
2820. A. Volgenant. Solving Some Lexicographic Multi-Objective Combinatorial Problems. Eu-
ropean Journal of Operational Research (EJOR), 139(3):578–584, June 16, 2002, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific
Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(01)00214-4.
2821. Paul T. von Hippel. Mean, Median, and Skew: Correcting a Textbook Rule. Journal
of Statistics Education (JSE), 13(2), July 2005, American Statistical Association (ASA):
Alexandria, VA, USA. Fully available at http://www.amstat.org/publications/jse/
v13n2/vonhippel.html [accessed 2007-09-15].
2822. Richard von Mises. Probability, Statistics, and Truth. W. Hodge & Co.: Glasgow, Scot-
land, UK, Dover Publications: Mineola, NY, USA, and Allen & Unwin: Crows Nest, NSW,
Australia, 2nd edition, 1939–1957. isbn: 0486242145. Google Books ID: KSE4AAAAMAAJ,
enxeAAAAIAAJ, and wFFK P8Cpk0C. OCLC: 8320292 and 256165308.
2823. Richard von Mises. Mathematical Theory of Probability and Statistics. Academic Press
Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers B.V.: Essex, UK,
1964. isbn: 0127273565. Google Books ID: 3s8-AAAAIAAJ and 7gnFJQAACAAJ.
2824. Paul von Ragué Schleyer and Johnny Gasteiger, editors. Encyclopedia of Computational
Chemistry. Volume III: Databases and Expert Systems. John Wiley & Sons Ltd.: New York,
NY, USA, September 1998. isbn: 0-471-96588-X. OCLC: 490503997. Library of Congress
Control Number (LCCN): 98037164. LC Classification: QD39.3.E46 E53 1998.
2825. Matthias von Randow. Güterverkehr und Logistik als tragende Säule der Wirtschaft zukun-
ftssicher gestalten. In Das Beste Der Logistik: Innovationen, Strategien, Umsetzungen [234],
pages 49–53. Springer-Verlag GmbH: Berlin, Germany and Bundesvereinigung Logistik e.V.
(BVL): Bremen, Germany, 2008.
2826. Michael D. Vose. Generalizing the Notion of Schema in Genetic Algorithms. Artificial
Intelligence, 50(3):385–396, August 1991, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(91)90019-G.
2827. Michael D. Vose and Alden H. Wright. Form Invariance and Implicit Parallelism.
Evolutionary Computation, 9(3):355–370, Fall 2001, MIT Press: Cambridge, MA,
USA. doi: 10.1162/106365601750406037. Fully available at http://www.cs.umt.
edu/u/wright/papers/vose_wright2001form_invariance.ps.gz [accessed 2010-07-30].
CiteSeerx: 10.1.1.7.4294. PubMed ID: 11522211.
2828. Stefan Voß, Silvano Martello, Ibrahim H. Osman, and Cathérine Roucairol, editors. 2nd
Metaheuristics International Conference – Meta-Heuristics: Advances and Trends in Local
Search Paradigms for Optimization (MIC’97), July 21–24, 1997, Sophia-Antipolis, France.
Kluwer Publishers: Boston, MA, USA. isbn: 0-7923-8369-9. Published in 1998.
2829. Conrad Hal Waddington. Canalization of Development and the Inheritance of Acquired Char-
acters. Nature, 150(3811):563–565, November 14, 1942. doi: 10.1038/150563a0. Fully available
at http://www.nature.com/nature/journal/v150/n3811/pdf/150563a0.pdf [ac-
cessed 2008-09-10]. See also [2832].
2830. Conrad Hal Waddington. Genetic Assimilation of an Acquired Character. Evolution – In-
ternational Journal of Organic Evolution, 7(2):118–128, June 1953, Society for the Study
of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK. JSTOR Stable
ID: 2405747.
2831. Conrad Hal Waddington. The “Baldwin Effect”, “Genetic Assimilation” and “Homeostasis”.
Evolution – International Journal of Organic Evolution, 7(4):386–387, December 1953, Soci-
ety for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
JSTOR Stable ID: 2405346.
2832. Conrad Hal Waddington. Canalization of Development and the Inheritance of Acquired Char-
acters. In Adaptive Individuals in Evolving Populations: Models and Algorithms [255], pages
91–97. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview
Press, 1996. See also [2829].
2833. Andreas Wagner. Robustness and Evolvability in Living Systems, volume 9 in Prince-
ton Studies in Complexity. Princeton University Press: Princeton, NJ, USA, 2005–2007.
isbn: 0691122407 and 0691134049. Partly available at http://press.princeton.
edu/titles/8002.html [accessed 2009-07-10]. Google Books ID: 7O1bGwAACAAJ.
2834. Andreas Wagner. Robustness, Evolvability, and Neutrality. FEBS Letters, 579(8):1772–
1778, March 21, 2005, Federation of European Biochemical Societies (FEBS): London, UK
and Elsevier Science Publishers B.V.: Essex, UK, Robert Russell and Giulio Superti-Furga,
editors. doi: 10.1016/j.febslet.2005.01.063. PubMed ID: 15763550.
2835. Günter P. Wagner and Lee Altenberg. Complex Adaptations and the Evolution of Evolv-
ability. Evolution – International Journal of Organic Evolution, 50(3):967–976, June 1996,
Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex,
UK. Fully available at http://dynamics.org/Altenberg/PAPERS/CAEE/ [accessed 2009-07-10]. CiteSeerx: 10.1.1.28.1524.
2836. Michael Wagner, Dieter Hogrefe, Kurt Geihs, and Klaus David, editors. First Workshop
on Global Sensor Networks (GSN’09), March 6, 2009, University of Kassel, Fachbereich 16:
Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, volume 17.
European Association of Software Science and Technology (EASST; Universität Potsdam,
Institute for Informatics): Potsdam, Germany. Fully available at http://eceasst.cs.
tu-berlin.de/index.php/eceasst/issue/view/24 [accessed 2009-09-07]. Partly available
at http://ipvs.informatik.uni-stuttgart.de/vs/gsn09/ [accessed 2011-02-28].
2837. Tobias Wagner, Nicola Beume, and Boris Naujoks. Pareto-, Aggregation-, and Indicator-
Based Methods in Many-Objective Optimization. Reihe Computational Intelligence: De-
sign and Management of Complex Technical Processes and Systems by Means of Computa-
tional Intelligence Methods CI-217/06, Universität Dortmund, Collaborative Research Cen-
ter (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, Septem-
ber 2006. Fully available at http://hdl.handle.net/2003/26125 [accessed 2009-07-19].
issn: 1433-3325. See also [2838].
2838. Tobias Wagner, Nicola Beume, and Boris Naujoks. Pareto-, Aggregation-, and Indicator-
Based Methods in Many-Objective Optimization. In EMO’07 [2061], pages 742–756, 2007.
doi: 10.1007/978-3-540-70928-2_56. See also [2837].
2839. Markus Wälchli and Torsten Braun. Efficient Signal Processing and Anomaly Detec-
tion in Wireless Sensor Networks. In EvoWorkshops’09 [1052], pages 81–86, 2009.
doi: 10.1007/978-3-642-01129-0_9.
2840. Karl-Heinz Waldmann and Ulrike M. Stocker, editors. Selected Papers of the Annual Inter-
national Conference of the German Operations Research Society (GOR), Jointly Organized
with the Austrian Society of Operations Research (ÖGOR) and the Swiss Society of Op-
erations Research (SVOR), September 6–8, 2006, University of Karlsruhe: Karlsruhe, Ger-
many, volume 2006 in Operations Research Proceedings. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-540-69995-8. isbn: 3-540-69994-5. Google Books ID: HOwaluJXznkC.
OCLC: 185135009, 255964381, 300964495, 307478719, and 315261951.
2841. Donald E. Walker and Lewis M. Norton, editors. Proceedings of the 1st International Joint
Conference on Artificial Intelligence (IJCAI’69), May 7–9, 1969, Washington, DC, USA.
isbn: 0-934613-21-4. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-69/
CONTENT/content.htm [accessed 2008-04-01]. OCLC: 15386096.
2842. David Alexander Ross Wallace. Groups, Rings and Fields, Undergraduate Texts in Mathe-
matics. Springer New York: New York, NY, USA, February 2004. isbn: 3540761772.
2843. David Wallin and Conor Ryan. Maintaining Diversity in EDAs for Real-Valued Optimisation
Problems. In FBIT’07 [1277], pages 795–800, 2007. doi: 10.1109/FBIT.2007.132. INSPEC
Accession Number: 9979463.
2844. David Wallin and Conor Ryan. On the Diversity of Diversity. In CEC’07 [1343], pages
95–102, 2007. doi: 10.1109/CEC.2007.4424459. INSPEC Accession Number: 9889406.
2845. David L. Waltz, editor. Proceedings of the National Conference on Artificial Intelligence
(AAAI’82), August 18–20, 1982, Carnegie Mellon University (CMU): Pittsburgh, PA, USA.
American Association for Artificial Intelligence: Los Altos, CA, USA and John W. Kaufmann
Inc.: Washington, DC, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai82.php [accessed 2007-09-06]. OCLC: 159789798.
2846. Chunzhi Wang and Hongwei Chen, editors. Proceedings of the 2nd International Workshop
on Intelligent Systems and Applications (ISA’10), May 22–23, 2010, Wǔhàn, Húběi, China.
IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://www.
icaiss.org/isa2010/ [accessed 2011-04-10]. Catalogue no.: CFP1060G-ART.
2847. Chunzhi Wang and Zhiwei Ye, editors. Proceedings of the International Workshop on
Intelligent Systems and Applications (ISA’09), May 23–24, 2009, Wǔhàn, Húběi, China.
IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://
www.icaiss.org/isa2009/indexen.html [accessed 2011-04-10]. Library of Congress Con-
trol Number (LCCN): 2009900554. Catalogue no.: CFP0960G.
2848. Haiying Wang, Kay Soon Low, Kexin Wei, and Junqing Sun, editors. Proceedings of the 5th
International Conference on Natural Computation (ICNC’09), August 14–16, 2009, Tiānjı̄n,
China. IEEE Computer Society: Piscataway, NJ, USA. Library of Congress Control Number
(LCCN): 2009903793. Order No.: P3736. Catalogue no.: CFP09CNC-PRT.
2849. Lipo Wang, editor. Support Vector Machines: Theory and Applications, volume 177/2005
in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, Au-
gust 2005. doi: 10.1007/b95439. isbn: 3-540-24388-7. OCLC: 60800790, 316265870,
318299016, and 403848183. Library of Congress Control Number (LCCN): 2005921894.
2850. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International
Conference on Advances in Natural Computation, Part I (ICNC’05-I), August 27–29, 2005,
Chángshā, Húnán, China, volume 3610 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11539087. isbn: 3-540-28323-4.
See also [2851, 2852].
2851. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International
Conference on Advances in Natural Computation, Part II (ICNC’05-II), August 27–29, 2005,
volume 3611 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/11539117. isbn: 3-540-28325-0. See also [2850, 2852].
2852. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International
Conference on Advances in Natural Computation, Part III (ICNC’05-III), August 27–29,
2005, volume 3612 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/11539902. isbn: 3-540-28320-X. See also [2850, 2851].
2853. Paul P. Wang, editor. Proceedings of the Fifth Joint Conference on Information Science (JCIS
2000), Section: The Third International Workshop on Frontiers in Evolutionary Algorithms
(FEA’00), 2000, Atlantic City, NJ, USA. Part of [142].
2854. Pu Wang, Edward P. K. Tsang, Thomas Weise, Ke Tang, and Xin Yao. Using GP to
Evolve Decision Rules for Classification in Financial Data Sets. In ICCI’10 [2630], pages
722–727, 2010. doi: 10.1109/COGINF.2010.5599820. Fully available at http://mail.
ustc.edu.cn/~wuyou308/paper1.pdf and http://www.it-weise.de/documents/
files/WTWTY2010UGPTEDRFCIFDS.pdf [accessed 2010-12-09]. CiteSeerx: 10.1.1.170.3794.
INSPEC Accession Number: 11579674. Ei ID: 20105013469298.
2855. Pu Wang, Thomas Weise, and Raymond Chiong. Novel Evolutionary Algo-
rithms for Supervised Classification Problems: An Experimental Study. Evolution-
ary Intelligence, 4(1):3–16, January 12, 2011, Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/s12065-010-0047-7. Fully available at http://www.it-weise.de/
documents/files/WWC2011NEAFSCPAES.pdf. Ei ID: IP51227209. See also [2819, 2854,
2894].
2856. Tzai-Der Wang, Xiaodong Li, Shu-Heng Chen, Xufa Wang, Hussein A. Abbass, Hitoshi
Iba, Guoliang Chen, and Xin Yao, editors. Proceedings of the 6th International Conference
on Simulated Evolution and Learning (SEAL’06), October 15–18, 2006, University of Sci-
ence and Technology of China (USTC): Héféi, Ānhuı̄, China, volume 4247 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11903697. isbn: 3-540-47331-9.
OCLC: 496560934. Library of Congress Control Number (LCCN): 318289226.
2857. Yingxu Wang, Du Zhang, Jean-Claude Latombe, and Witold Kinsner, editors. Proceedings of
the Seventh IEEE International Conference on Cognitive Informatics (ICCI’08), August 14,
2008, Stanford University: Stanford, CA, USA. IEEE Computer Society: Piscataway, NJ,
USA.
2858. Yingxu Wang, Bernard Widrow, Bo Zhang, Witold Kinsner, Kenji Sugawara, Fuchun Sun,
Thomas Weise, Yixin Zhong, and Du Zhang. Perspectives on Cognitive Informatics and Its
Future Development: Summary of Plenary Panel II of IEEE ICCI’10. In ICCI’10 [2630],
pages 17–25, 2010. doi: 10.1109/COGINF.2010.5599693. Fully available at http://
www.it-weise.de/documents/files/WWZKSSWZZ2010POCIAIFDSOPPIIOII.pdf [ac-
cessed 2010-12-09]. Ei ID: 20105013469178. See also [2859, 2890].
2859. Yingxu Wang, Bernard Widrow, Bo Zhang, Witold Kinsner, Kenji Sugawara, Fuchun Sun,
Jianhua Lu, Thomas Weise, and Du Zhang. Perspectives on the Field of Cognitive Informatics
and Its Future Development. The International Journal of Cognitive Informatics and Natural
Intelligence (IJCINI), 5(1):1–17, January–March 2011, Idea Group Publishing (Idea Group
Inc., IGI Global): New York, NY, USA. doi: 10.4018/jcini.2011010101. See also [2858, 2890].
2860. Yu Wang, Bin Li, and Thomas Weise. Estimation of Distribution and Differential Evo-
lution Cooperation for Large Scale Economic Load Dispatch Optimization of Power Sys-
tems. Information Sciences – Informatics and Computer Science Intelligent Systems Ap-
plications: An International Journal, 180(12):2405–2420, June 2010, Elsevier Science Pub-
REFERENCES 1171
lishers B.V.: Essex, UK. doi: 10.1016/j.ins.2010.02.015. Fully available at http://www.
it-weise.de/documents/files/WLW2010EODADECFLSELDOOPS.pdf [accessed 2010-06-19].
Ei ID: 20101412821019. IDS (SCI): 593PM.
2861. Yu Wang, Bin Li, Thomas Weise, Jianyu Wang, Bo Yuan, and Qiongjie Tian. Self-
Adaptive Learning Based Particle Swarm Optimization. Information Sciences – Infor-
matics and Computer Science Intelligent Systems Applications: An International Jour-
nal, 181(20):4515–4538, October 15, 2011, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.ins.2010.07.013. Ei ID: IP51031102.
2862. Zai Wang, Tianshi Chen, Ke Tang, and Xin Yao. A Multi-Objective Approach to Redun-
dancy Allocation Problem in Parallel-Series Systems. In CEC’09 [1350], pages 582–589,
2009. doi: 10.1109/CEC.2009.4982998. Fully available at http://nical.ustc.edu.cn/
uploads/cec2009/P252.pdf [accessed 2009-09-05]. INSPEC Accession Number: 10688610.
2863. Zai Wang, Ke Tang, and Xin Yao. Multi-objective Approaches to Optimal Test-
ing Resource Allocation in Modular Software Systems. IEEE Transactions on
Reliability, 59(3):563–575, September 2010, IEEE Reliability Society: Lafayette,
CO, USA. doi: 10.1109/TR.2010.2057310. Fully available at http://staff.
ustc.edu.cn/˜ketang/papers/WangTangYao_TR10a.pdf [accessed 2011-02-22].
2897. Thomas Weise and Kurt Geihs. DGPF – An Adaptable Framework for Distributed Multi-
Objective Search Algorithms Applied to the Genetic Programming of Sensor Networks. In
BIOMA’06 [919], pages 157–166, 2006. Fully available at http://www.it-weise.de/
documents/files/WG2006DGPFc.pdf. CiteSeerx : 10.1.1.74.8243. See also [2896].
2898. Thomas Weise and Ke Tang. Evolving Distributed Algorithms with Genetic Programming.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 16(2), September 26, 2011–
2012, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2011.2112666.
See also [2888].
2899. Thomas Weise and Michael Zapf. Evolving Distributed Algorithms with Genetic Program-
ming: Election. In GEC’09 [3000], pages 577–584, 2009. doi: 10.1145/1543834.1543913. Fully
available at http://www.it-weise.de/documents/files/WZ2009EDAWGPE.pdf. Ei
ID: 20093012219539. See also [2888].
2900. Thomas Weise, Volker Fickert, Thomas Ziegs, Rico Roßberg, René Kreiner, Roland Fischer,
Stefan Gelbke, Harry Briem, et al. VisioOS – Visual Simulation of Operating Systems.
Chemnitz University of Technology, Faculty of Computer Science, Operating Systems Group:
Chemnitz, Sachsen, Germany, May 15, 2005, Winfried Kalfa, Supervisor. Fully available
at http://www.it-weise.de/documents/files/WEA2005VISIOS.pdf [accessed 2010-12-
07]. See also [910].
2901. Thomas Weise, Mirko Dietrich, Marc Kirchhoff, Lado Kumsiashvili, Stefan Niemczyk, and
Alexander Podlich. DGPF – Various other Documents and Slides. University of Kassel, Fach-
bereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany,
October 26, 2006. Fully available at http://www.it-weise.de/documents/files/
W2006DGPFo.pdf [accessed 2010-12-07]. See also [2888].
2902. Thomas Weise, Stefan Achler, Martin Göb, Christian Voigtmann, and Michael Zapf. Evolv-
ing Classifiers – Evolutionary Algorithms in Data Mining. Kasseler Informatikschriften
(KIS) 2007, 4, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel,
Hesse, Germany, September 28, 2007. Fully available at http://www.it-weise.de/
documents/files/WAGVZ2007DMC.pdf [accessed 2010-12-07]. CiteSeerx : 10.1.1.89.6165.
urn: urn:nbn:de:hebis:34-2007092819260.
2903. Thomas Weise, Steffen Bleul, and Kurt Geihs. Web Service Composition Sys-
tems for the Web Service Challenge – A Detailed Review. Technical Report
2007, 7, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel,
Hesse, Germany, November 19, 2007. Fully available at http://www.it-weise.de/
documents/files/WBG2007WSCb.pdf [accessed 2007-11-20]. CiteSeerx : 10.1.1.86.7774.
urn: urn:nbn:de:hebis:34-2007111919638. See also [334, 335, 2908].
2904. Thomas Weise, Kurt Geihs, and Philipp Andreas Baer. Genetic Program-
ming for Proactive Aggregation Protocols. In ICANNGA’07-I [257], pages 167–
173, 2007. doi: 10.1007/978-3-540-71618-1. Fully available at http://www.
it-weise.de/documents/files/W2007DGPFb.pdf. CiteSeerx : 10.1.1.152.9906. Ei
ID: 20080311022959. IDS (SCI): BGC96. See also [2911].
2905. Thomas Weise, Michael Zapf, and Kurt Geihs. Rule-based Genetic Programming. In
BIONETICS’07 [1342], pages 8–15, 2007. doi: 10.1109/BIMNICS.2007.4610073. Fully
available at http://www.it-weise.de/documents/files/WZG2007RBGP.pdf. Ei
ID: 20084111632894. IDS (SCI): BKM77.
2906. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Genetic
Programming meets Model-Driven Development. Kasseler Informatikschriften (KIS)
2007, 2, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel,
Hesse, Germany, July 2, 2007. Fully available at http://www.it-weise.de/
documents/files/WZKG2007DGPFd.pdf [accessed 2007-09-17]. CiteSeerx : 10.1.1.94.317.
urn: urn:nbn:de:hebis:34-2007070218786. See also [2907, 2916].
2907. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Genetic Pro-
gramming meets Model-Driven Development. In HIS’07 [1572], pages 332–335, 2007.
doi: 10.1109/HIS.2007.11. Fully available at http://www.it-weise.de/documents/
files/WZKG2007DGPFg.pdf. Ei ID: 20082911386590. See also [2906, 2916].
2908. Thomas Weise, Steffen Bleul, Diana Elena Comes, and Kurt Geihs. Different Ap-
proaches to Semantic Web Service Composition. In ICIW’08 [1862], pages 90–96,
2008. doi: 10.1109/ICIW.2008.32. Fully available at http://www.it-weise.de/
documents/files/WBCG2008ICIW.pdf. INSPEC Accession Number: 10067008. Ei
ID: 20083711536637. IDS (SCI): BRC96. See also [2903].
2909. Thomas Weise, Steffen Bleul, Marc Kirchhoff, and Kurt Geihs. Semantic Web Service
Composition for Service-Oriented Architectures. In CEC/EEE’08 [1346], pages 355–358,
2008. doi: 10.1109/CECandEEE.2008.148. Fully available at http://www.it-weise.de/
documents/files/WBKG2008SWSCFSOA.pdf. INSPEC Accession Number: 10475110. Ei
ID: 20091512027447. IDS (SCI): BJF01. See also [204, 334, 335].
2910. Thomas Weise, Stefan Niemczyk, Hendrik Skubch, Roland Reichle, and Kurt Geihs.
A Tunable Model for Multi-Objective, Epistatic, Rugged, and Neutral Fitness Land-
scapes. In GECCO’08 [1519], pages 795–802, 2008. doi: 10.1145/1389095.1389252.
Fully available at http://www.it-weise.de/documents/files/WNSRG2008GECCO.
pdf. CiteSeerx : 10.1.1.152.8952 and 10.1.1.153.308. Ei ID: 20085111786051.
2911. Thomas Weise, Michael Zapf, and Kurt Geihs. Evolving Proactive Aggregation Proto-
cols. In EuroGP'08 [2080], pages 254–265, 2008. doi: 10.1007/978-3-540-78671-9_22. Fully
available at http://www.it-weise.de/documents/files/WZG2008DGPFa.pdf. Ei
ID: 20083011392084. IDS (SCI): BHN52. See also [2904].
2912. Thomas Weise, Alexander Podlich, and Christian Gorldt. Solving Real-World Vehicle Routing
Problems with Evolutionary Algorithms. In Natural Intelligence for Scheduling, Planning
and Packing Problems [564], Chapter 2, pages 29–53. Springer-Verlag: Berlin/Heidelberg,
2009. doi: 10.1007/978-3-642-04039-9_2. Fully available at http://www.it-weise.de/
documents/files/WPG2009SRWVRPWEA.pdf [accessed 2010-06-19]. See also [2188, 2913, 2914].
2913. Thomas Weise, Alexander Podlich, Manfred Menze, and Christian Gorldt. Optimierte
Güterverkehrsplanung mit Evolutionären Algorithmen. Industrie Management – Zeitschrift
für industrielle Geschäftsprozesse, 10(3):37–40, June 2, 2009, GITO mbH – Verlag für Indus-
trielle Informationstechnik und Organisation: Berlin, Germany. Fully available at http://
www.it-weise.de/documents/files/WPMG2009OGMEA.pdf and http://www.
logistics.de/logistik/branchen.nsf/D4FF2D93CA69B47FC12575CF004870C8/
$File/gueterverkehrsplanung_optimierung_algorithmen_weise_podlich_
menze_gorldt_gitopdf.pdf [accessed 2009-09-07]. See also [2912, 2914].
2914. Thomas Weise, Alexander Podlich, Kai Reinhard, Christian Gorldt, and Kurt Geihs. Evo-
lutionary Freight Transportation Planning. In EvoWorkshops’09 [1052], pages 768–777,
2009. doi: 10.1007/978-3-642-01129-0_87. Fully available at http://www.it-weise.de/
documents/files/WPRGG2009EFTP.pdf. Ei ID: 20093012218527. IDS (SCI): BJH22. See
also [2912].
2915. Thomas Weise, Michael Zapf, Raymond Chiong, and Antonio Jesús Nebro Urbaneja. Why Is
Optimization Difficult? In Nature-Inspired Algorithms for Optimisation [562], Chapter 1,
pages 1–50. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-642-00267-0_1.
Fully available at http://www.it-weise.de/documents/files/WZCN2009WIOD.pdf.
2916. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Combining Genetic
Programming and Model-Driven Development. International Journal of Computational Intel-
ligence and Applications (IJCIA), 8(1):37–52, March 2009, World Scientific Publishing Co.:
Singapore and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026809002436.
Fully available at http://www.it-weise.de/documents/files/WZKG2009DGPFz.
pdf. Ei ID: 20092012083173. See also [2906, 2907].
2917. Thomas Weise, Stefan Niemczyk, Raymond Chiong, and Mingxu Wan. A Framework for
Multi-Model EDAs with Model Recombination. In EvoNUM’11 [2588], pages 304–313,
2011. doi: 10.1007/978-3-642-20525-5_31. Fully available at http://www.it-weise.de/
documents/files/WNCW2011AFFMMEWMR.pdf. See also [2038].
2918. August Weismann. The Germ-Plasm – A Theory of Heredity. Charles Scribner’s Sons:
New York, NY, USA and Electronic Scholarly Publishing (ESP): Seattle, WA, USA, 1893,
W. Newton Parker and Harriet Rönnfeld, Translators. Fully available at http://www.esp.
org/books/weismann/germ-plasm/facsimile/ [accessed 2008-09-10].
2919. Gerhard Weiß and Sandip Sen, editors. Adaption and Learning in Multi-Agent Systems, IJ-
CAI’95 Workshop Proceedings (IJCAI-95 WS), August 21, 1995, Montréal, QC, Canada, vol-
ume 1042/1996 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-60923-7.
isbn: 3-540-60923-7. Google Books ID: joFQAAAAMAAJ. OCLC: 34318834, 150397844,
and 246327094. See also [1836, 1861].
2920. Justin Werfel and Radhika Nagpal. Extended Stigmergy in Collective Construction. IEEE
Intelligent Systems Magazine, 21(2):20–28, March–April 2006, IEEE Computer Society Press:
Los Alamitos, CA, USA. doi: 10.1109/MIS.2006.25. Fully available at http://hebb.mit.
edu/people/jkwerfel/ieeeis06.pdf [accessed 2008-06-12].
2921. Justin Werfel, Yaneer Bar-Yam, Daniela Rus, and Radhika Nagpal. Distributed Construction
by Mobile Robots with Enhanced Building Blocks. In ICRA’06 [1389], pages 2787–2794, 2006.
doi: 10.1109/ROBOT.2006.1642123. Fully available at http://www.eecs.harvard.edu/
˜rad/ssr/papers/icra06-werfel.pdf [accessed 2008-06-12]. CiteSeerx : 10.1.1.87.6701.
INSPEC Accession Number: 9120372.
2922. Gregory M. Werner and Michael G. Dyer. Evolution of Communication in Artificial Organ-
isms. In Artificial Life II [1668], pages 659–687, 1990. Fully available at http://www.isrl.
uiuc.edu/˜amag/langev/paper/werner92evolutionOf.html [accessed 2008-07-28].
2923. A. Wetzel. Evaluation of the Effectiveness of Genetic Algorithms in Combinatorial Opti-
mization. University of Pittsburgh: Pittsburgh, PA, USA, 1983. Unpublished manuscript,
technical report.
2924. James F. Whidborne, Da-Wei Gu, and Ian Postlethwaite. Algorithms for the Method of
Inequalities – A Comparative Study. In ACC [1325], pages 3393–3397, volume 5, 1995. FA19
= 9:15.
2925. James M. Whitacre, Philipp Rohlfshagen, Axel Bender, and Xin Yao. The Role of Degenerate
Robustness in the Evolvability of Multi-Agent Systems in Dynamic Environments. In PPSN
XI-1 [2412], pages 284–293, 2010. doi: 10.1007/978-3-642-15844-5 29.
2926. R. C. White, Jr. A Survey of Random Methods for Parameter Optimization. Simu-
lation – Transactions of The Society for Modeling and Simulation International, SAGE
Journals Online, 17(5):197–205, 1971, Society for Modeling and Simulation International
(SCS): San Diego, CA, USA. doi: 10.1177/003754977101700504. Fully available at http://
alexandria.tue.nl/extra1/erap/publichtml/7101031.pdf [accessed 2009-07-04]. TH-
Report 70-E-16.
2927. L. Darrell Whitley, editor. Proceedings of the Second Workshop on Foundations of Genetic
Algorithms (FOGA’92), July 26–29, 1992, Vail, CO, USA. Morgan Kaufmann Publishers
Inc.: San Francisco, CA, USA. isbn: 1-55860-263-1. OCLC: 27723077, 312001104,
438787312, 475768457, 490655463, and 612351520. issn: 1081-6593. Published Febru-
ary 1, 1993.
2928. L. Darrell Whitley, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO’00 LBP), July 10–12, 2000, Riviera Hotel and Casino: Las Vegas, NV,
USA. AAAI Press: Menlo Park, CA, USA. See also [2935, 2986].
2929. L. Darrell Whitley. The GENITOR Algorithm and Selection Pressure: Why Rank-
Based Allocation of Reproductive Trials is Best. In ICGA’89 [2414], pages 116–121,
1989. Fully available at http://citeseer.ist.psu.edu/531140.html [accessed 2007-08-
21]. CiteSeerx : 10.1.1.18.8195.
2940. Herbert S. Wilf. Algorithms and Complexity. A K Peters, Ltd.: Natick, MA, USA, second
edition, December 2002. isbn: 1-56881-178-0. Google Books ID: jGD9pNFKI2UC. Library
of Congress Control Number (LCCN): 2002072765. LC Classification: QA63 .W55 2002.
2941. Claus O. Wilke. Evolutionary Dynamics in Time-Dependent Environments. PhD thesis,
Ruhr-Universität Bochum, Fakultät für Physik und Astronomie: Bochum, North Rhine-
Westphalia, Germany, Shaker Verlag GmbH: Aachen, North Rhine-Westphalia, Germany,
July 1999. isbn: 3-8265-6199-6. Fully available at http://wlab.biosci.utexas.
edu/˜wilke/ps/PhD.ps.gz [accessed 2007-08-19].
2942. Claus O. Wilke. Adaptive Evolution on Neutral Networks. Bulletin of Mathemat-
ical Biology, 63(4):715–730, July 2001, Springer New York: New York, NY, USA.
doi: 10.1006/bulm.2001.0244. arXiv ID: physics/0101021v1. PubMed ID: 11497165.
2943. Daniel N. Wilke, Schalk Kok, and Albert A. Groenwold. Comparison of Linear and Classical
Velocity Update Rules in Particle Swarm Optimization: Notes on Diversity. International
Journal for Numerical Methods in Engineering, 70(8):962–984, 2007, Wiley Interscience:
Chichester, West Sussex, UK. doi: 10.1002/nme.1867.
2944. George C. Williams. Pleiotropy, Natural Selection, and the Evolution of Senescence. Evo-
lution – International Journal of Organic Evolution, 11(4):398–411, December 1957, Society
for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.2307/2406060. Reprinted in [2946].
2945. George C. Williams. Adaptation and Natural Selection: A Critique of Some Current Evo-
lutionary Thought. Princeton University Press: Princeton, NJ, USA, September 28, 1966.
isbn: 0-691-02615-7. Google Books ID: TztkT5lIGOwC. OCLC: 35230452. GBV-
Identification (PPN): 083639411 and 223362034.
2946. George C. Williams. Pleiotropy, Natural Selection, and the Evolution of Senescence. Science
of Aging Knowledge Environment (SAGE KE), 2001(1), October 3, 2001, American Associ-
ation for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press
(Stanford University): Cambridge, MA, USA. Reprint of [2944].
2947. Dominic Wilson and Devinder Kaur. Using Quotient Graphs to Model Neutrality in Evolu-
tionary Search. In GECCO’08 [1519], pages 2233–2238, 2008. doi: 10.1145/1388969.1389051.
2948. Edward Osborne Wilson. Sociobiology: The New Synthesis. Belknap Press: Cambridge, MA,
USA and Harvard University Press: Cambridge, MA, USA, 1975. See also [2949].
2949. Edward Osborne Wilson. Sociobiology: The New Synthesis. Belknap Press: Cambridge, MA,
USA, twenty-fifth anniversary edition, March 2000. isbn: 0674002350. See also
[2948].
2950. Garnett Wilson and Wolfgang Banzhaf. Discovery of Email Communication Networks
from the Enron Corpus with a Genetic Algorithm using Social Network Analysis. In
CEC’09 [1350], pages 3256–3263, 2009. doi: 10.1109/CEC.2009.4983357. INSPEC Acces-
sion Number: 10688952.
2951. P. B. Wilson and M. D. Macleod. Low Implementation Cost IIR Digital Filter Design using
Genetic Algorithms. In NASP’93 [1323], pages 4/1–4/8, 1993.
2952. Stewart W. Wilson. ZCS: A Zeroth Level Classifier System. Evolution-
ary Computation, 2(1):1–18, Spring 1994, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1994.2.1.1. Fully available at ftp://ftp.cs.bham.ac.uk/pub/
authors/T.Kovacs/lcs.archive/Wilson1994a.ps.gz and http://www.eskimo.
com/˜wilson/ps/zcs.pdf [accessed 2010-12-15]. CiteSeerx : 10.1.1.38.6456.
2953. Stewart W. Wilson. Classifier Fitness Based on Accuracy. Evolutionary Computation, 3(2):
149–175, Summer 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.2.149.
Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/lcs.
archive/Wilson1995a.ps.gz and http://www.eskimo.com/˜wilson/ps/xcs.
pdf [accessed 2010-12-15]. CiteSeerx : 10.1.1.38.6508.
2954. Stewart W. Wilson. Generalization in the XCS Classifier System. In GP’98 [1612], pages 665–
674, 1998. Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/
lcs.archive/Wilson1998a.ps.gz [accessed 2010-12-15]. CiteSeerx : 10.1.1.38.6472.
2955. Stewart W. Wilson. State of XCS Classifier System Research. In IWLCS’00 [1678], pages
63–82, 2000. Fully available at http://www.eskimo.com/˜wilson/ps/state.ps.gz
[accessed 2010-12-15]. CiteSeerx : 10.1.1.54.1204.
2956. Stewart W. Wilson and David Edward Goldberg. A Critical Review of Classifier Systems.
In ICGA’89 [2414], pages 244–255, 1989. Fully available at http://www.eskimo.com/
˜wilson/ps/wg.ps.gz [accessed 2007-09-12].
2957. Öjvind Winge. Wilhelm Johannsen: The Creator of the Terms Gene, Genotype, Phe-
notype and Pure Line. Journal of Heredity, Oxford Journals, 49(2):83–88, March 1958,
American Genetic Association: Newport, OR, USA. Fully available at http://jhered.
oxfordjournals.org/cgi/reprint/49/2/83.pdf [accessed 2008-08-21]. See also [1448].
2958. Hans Winkler. Verbreitung und Ursache der Parthenogenesis im Pflanzen- und Tier-
reiche. Verlag von Gustav Fischer: Jena, Thuringia, Germany, 1920. Fully available
at http://www.archive.org/details/verbreitungundur00wink [accessed 2009-06-28].
Google Books ID: Y7xUAAAAMAAJ and kZwZAAAAIAAJ. asin: B0018HWPMU.
2959. Gabriel Winter, Jacques Periaux, M. Galán, and P. Cuesta, editors. Genetic Algorithms
in Engineering and Computer Science, European Short Course on Genetic Algorithms and
Evolution Strategies (EUROGEN’95), December 1995, University of Las Palmas: Las Pal-
mas de Gran Canaria, Spain, Wiley Series in Computational Methods in Applied Sciences.
Wiley Interscience: Chichester, West Sussex, UK. isbn: 0-471-95859-X. Google Books
ID: WPRQAAAAMAAJ. OCLC: 35318862, 174238271, 246825979, and 639660095. Library
of Congress Control Number (LCCN): 96177148. GBV-Identification (PPN): 191484377
and 27870042X. LC Classification: QA76.87 .G46 1995.
2960. Paul C. Winter, G. Ivor Hickey, and Hugh L. Fletcher. Instant Notes in Genetics. BIOS
Scientific Publishers Ltd.: Oxford, UK, Taylor and Francis LLC: London, UK, and Science
Press: Běijı̄ng, China, 1998–2006. isbn: 0-387-91562-1, 041537619X, 1-85996-166-5,
1-85996-262-9, and 703007307X. Google Books ID: -yhJHgAACAAJ, 966qAQAACAAJ,
VOjyJQAACAAJ, iKYFAAAACAAJ, and pB7PAQAACAAJ.
2961. Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations, The Morgan Kaufmann Series in Data Manage-
ment Systems. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, Octo-
ber 1999. isbn: 1558605525. Google Books ID: 6lVEKlrTq8EC, PtBQAAAAMAAJ, and
b9BQAAAAMAAJ. OCLC: 42420775, 42621057, and 300306645. Library of Congress Con-
trol Number (LCCN): 99046067. LC Classification: QA76.9.D343 W58 2000.
2962. Martin Wolpers, Ralf Klamma, and Erik Duval, editors. Proceedings of the EC-TEL
2007 Poster Session (EC-TEL’07 Posters), September 17–20, 2007, Crete, Greece, vol-
ume 280 in CEUR Workshop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Tech-
nische Hochschule (RWTH) Aachen, Sun SITE Central Europe: Aachen, North Rhine-
Westphalia, Germany. Fully available at http://sunsite.informatik.rwth-aachen.
de/Publications/CEUR-WS/Vol-280/ [accessed 2009-09-05]. See also [2962].
2963. David H. Wolpert and William G. Macready. No Free Lunch Theorems for Search. Technical
Report SFI-TR-95-02-010, Santa Fe Institute: Santa Fé, NM, USA, February 6, 1995. Fully
available at http://www.santafe.edu/research/publications/workingpapers/
95-02-010.pdf [accessed 2009-06-30]. CiteSeerx : 10.1.1.33.5447.
2964. David H. Wolpert and William G. Macready. No Free Lunch Theorems for Op-
timization. IEEE Transactions on Evolutionary Computation (IEEE-EC), 1(1):67–82,
April 1997, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.585893.
CiteSeerx : 10.1.1.39.6926.
2965. Laurence A. Wolsey. Integer Programming, volume 52 in Estimation, Simulation, and
Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley In-
terscience: Chichester, West Sussex, UK, 1998. isbn: 0-471-28366-5. Google Books
ID: x7RvQgAACAAJ. OCLC: 38976179 and 472310378. Library of Congress Control Num-
ber (LCCN): 98007296. GBV-Identification (PPN): 244286620. LC Classification: T57.74
.W67 1998.
2966. Jan Wolters. Simulated Annealing. GRIN Verlag GmbH: Munich, Bavaria, Germany, 2007.
doi: 10.3239/9783640298761. isbn: 3640303830. Google Books ID: pI9LGg8u09IC.
2967. Andreas Wombacher, Christian Huemer, and Markus Stolze, editors. Proceedings of 2006
IEEE Joint Conference on E-Commerce Technology and Enterprise Computing, E-Commerce
and E-Services (CEC/EEE’06), June 26–29, 2006, Westin San Francisco Airport: Mill-
brae, CA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2511-3.
Partly available at http://semanticweb.org/wiki/CEC/EEE2006 [accessed 2011-02-28].
OCLC: 72439855, 84651797, 232341468, 289020000, and 326786916. Library of
Congress Control Number (LCCN): 2006927609. Order No.: P2511. LC Classifica-
tion: HF5548.32 .I337 2006.
2968. D. F. Wong, Hon Wai Leong, and Chung Laung Liu. Simulated Annealing for VLSI Design,
volume 24 in Kluwer International Series in Engineering and Computer Science, Frontiers
in Logic Programming Architecture and Machine Design, Kluwer International Series in
Engineering and Computer Science: VLSI, Computer Architecture, and Digital Signal Pro-
cessing. Springer Netherlands: Dordrecht, Netherlands. isbn: 0898382564. Google Books
ID: 1ldJR6BDjkIC.
2969. Man Leung Wong and Kwong Sak Leung. Data Mining Using Grammar Based Genetic
Programming and Applications, volume 3 in Genetic Programming Series. Springer Sci-
ence+Business Media, Inc.: New York, NY, USA, January 2000. isbn: 079237746X. Google
Books ID: j8hQAAAAMAAJ. OCLC: 43031203.
2970. John R. Woodward. Evolving Turing Complete Representations. In CEC’03 [2395], pages
830–837, volume 2, 2003. doi: 10.1109/CEC.2003.1299753. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/Woodward_2003_Etcr.html [accessed 2008-11-
08]. CiteSeerx : 10.1.1.3.6859.
2971. Nimit Worakul, Wibul Wongpoowarak, and Prapaporn Boonme. Optimization in Develop-
ment of Acetaminophen Syrup Formulation. Drug Development and Industrial Pharmacy, 28
(3):345–351, March 2002, Informa Healthcare: London, UK. doi: 10.1081/DDC-120002850.
PubMed ID: 12026227. See also [2972].
2972. Nimit Worakul, Wibul Wongpoowarak, and Prapaporn Boonme. Optimization in Develop-
ment of Acetaminophen Syrup Formulation – Erratum. Drug Development and Industrial
Pharmacy, 28(8):1043–1045, September 2002, Informa Healthcare: London, UK. See also
[2971].
2973. Robert P. Worden. A Speed Limit for Evolution. Journal of Theoretical Biology, 176(1):137–
152, September 7, 1995, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Im-
print: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1006/jtbi.1995.0183.
Fully available at http://dspace.dial.pipex.com/jcollie/sle/ [accessed 2009-07-10].
2974. 5th Online World Conference on Soft Computing in Industrial Applications (WSC5), 2000.
World Federation on Soft Computing (WFSC).
2975. 6th Online World Conference on Soft Computing in Industrial Applications (WSC6), 2001.
World Federation on Soft Computing (WFSC).
2976. 7th Online World Conference on Soft Computing in Industrial Applications (WSC7), 2002.
World Federation on Soft Computing (WFSC).
2977. 8th Online World Conference on Soft Computing in Industrial Applications (WSC8), 2003.
World Federation on Soft Computing (WFSC).
2978. 9th Online World Conference on Soft Computing in Industrial Applications (WSC9), 2004.
World Federation on Soft Computing (WFSC).
2979. 10th Online World Conference on Soft Computing in Industrial Applications (WSC10), 2005.
World Federation on Soft Computing (WFSC).
2980. 11th Online World Conference on Soft Computing in Industrial Applications (WSC11),
September 18–October 6, 2006. World Federation on Soft Computing (WFSC).
2981. Alden H. Wright. Genetic Algorithms for Real Parameter Optimization. In FOGA’90 [2562],
pages 205–218, 1990. CiteSeerx : 10.1.1.25.5297.
2982. Alden H. Wright, Richard K. Thompson, and Jian Zhang. The Computational Com-
plexity of N-K Fitness Functions. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 4(4):373–379, November 2000, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.887236. Fully available at http://www.cs.umt.edu/u/wright/
papers/nkcomplexity.ps.gz [accessed 2009-02-26]. CiteSeerx : 10.1.1.40.3093. INSPEC
Accession Number: 6791340.
2983. Alden H. Wright, Michael D. Vose, Kenneth Alan De Jong, and Lothar M. Schmitt, editors.
Revised Selected Papers of the 8th International Workshop on Foundations of Genetic Al-
gorithms (FOGA’05), January 5–9, 2005, Aizu-Wakamatsu City, Japan, volume 3469/2005
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/11513575_1. Partly available at http://www.cs.umt.edu/foga05/ [accessed 2007-09-01]. Published August 22, 2005.
2984. Margaret H. Wright. Direct Search Methods: Once Scorned, Now Respectable. In Numer-
ical Analysis – Proceedings of the 16th Dundee Biennial Conference in Numerical Analy-
sis [1138], pages 191–208, 1995. Fully available at http://cm.bell-labs.com/cm/cs/
doc/96/4-02.ps.gz and http://www.math.wm.edu/˜buckaroo/pubs/LeToTr00a/
LeToTr00a.pdf [accessed 2010-12-24]. CiteSeerx : 10.1.1.80.9399.
2985. Sewall Wright. The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evo-
lution. In Proceedings of the Sixth Annual Congress of Genetics [1459], pages 356–366,
volume 1, 1932. Fully available at http://www.blackwellpublishing.com/ridley/
classictexts/wright.pdf [accessed 2009-06-29].
2986. Annie S. Wu, editor. Proceedings of the 2000 Genetic and Evolutionary Computation Con-
ference Workshop Program (GECCO’00 WS), July 8, 2000. GECCO-2000 Bird-of-a-feather
workshops. See also [2928, 2935].
2987. Hsien-Chung Wu. Evolutionary Computation. self-published, February 2005. Fully available
at http://nknucc.nknu.edu.tw/˜hcwu/pdf/evolec.pdf [accessed 2010-08-01].
2988. Nelson Wu. Differential Evolution for Optimisation in Dynamic Environments. Techni-
cal Report, Royal Melbourne Institute of Technology (RMIT) University, School of Com-
puter Science and Information Technology: Melbourne, VIC, Australia, November 2006.
CiteSeerx : 10.1.1.93.3234.
2989. Tin-Yu Wu, Guan-Hsiung Liaw, Sing-Wei Huang, Wei-Tsong Lee, and Chung-Chi Wu. A
GA-based Mobile RFID Localization Scheme for Internet of Things. Personal and Ubiquitous
Computing, Springer-Verlag London Limited: London, UK. doi: 10.1007/s00779-011-0398-9.
2990. Zixin Wu, Karthik Gomadam, Ajith Ranabahu, Amit P. Sheth, and John A. Miller. Auto-
matic Composition of Semantic Web Services Using Process Mediation. In ICEIS’07 [494],
pages 453–462, 2007. Fully available at http://knoesis.cs.wright.edu/library/
publications/download/SWSChallenge-TR-METEOR-S-Feb2007.pdf [accessed 2010-12-
16]. See also [1098].
2991. Roman Wyrzykowski, Jack Dongarra, Marcin Paprzycki, and Jerzy Waśniewski, editors.
Proceedings of the 5th International Conference on Parallel Processing and Applied Math-
ematics, revised papers (PPAM'03), September 7–10, 2003, Częstochowa, Poland, volume
3019/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b97218. isbn: 3-540-21946-3. Library of Congress Control Number
(LCCN): 2004104391. GBV-Identification (PPN): 386648921. LC Classification: QA76.58
.P69 2003.
2992. Fatos Xhafa and Ajith Abraham, editors. Metaheuristics for Scheduling in Indus-
trial and Manufacturing Applications, volume 128 in Studies in Computational Intel-
ligence. Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-78985-7.
isbn: 3-540-78984-7. Google Books ID: sn4ksQOnE5MC. Library of Congress Control
Number (LCCN): 2008924363. Libri-Number: 3608646.
2993. Fatos Xhafa, Francisco Herrera Triguero, Mario Köppen, and Jose Manuel Bénitez, editors.
8th International Conference on Hybrid Intelligent Systems (HIS’08), October 10–12, 2008,
Barcelona, Catalonia, Spain. IEEE Computer Society: Piscataway, NJ, USA. Partly available
at http://his2008.lsi.upc.edu/ [accessed 2009-03-02]. Library of Congress Control Number
(LCCN): 2008928438. IEEE Computer Society Order Number P3326. BMS Part Number
CFP08360-CDR.
2994. Bowei Xi, Zhen Liu, Mukund Raghavachari, Cathy H. Xia, and Li Zhang. A Smart Hill-
Climbing Algorithm for Application Server Configuration. In WWW’04 [898], pages 287–296,
2004. doi: 10.1145/988672.988711. CiteSeerx : 10.1.1.2.1211. Session: Server performance
and scalability.
2995. Proceedings of the Third International Workshop on Advanced Computational Intelligence
(IWACI’10), August 25–27, 2010, Xi’an Jiaotong-Liverpool University: Sūzhōu, Jiāngsū,
China.
2996. Huayang Xie. An Analysis of Selection in Genetic Programming. PhD thesis, Vic-
toria University of Wellington: Wellington, New Zealand, 2009. Fully available
at http://researcharchive.vuw.ac.nz/bitstream/handle/10063/837/thesis.
pdf [accessed 2011-11-25].
2997. Shengwu Xiong, Weiwu Wang, and Feng Li. A New Genetic Programming Approach in Sym-
bolic Regression. In ICTAI’03 [1367], pages 161–167, 2003. doi: 10.1109/TAI.2003.1250185.
INSPEC Accession Number: 7862118.
2998. Bin Xu, Tao Li, Zhifeng Gu, and Gang Wu. SWSDS: Quick Web Service Dis-
covery and Composition in SEWSIP. In CEC/EEE’06 [2967], pages 449–451, 2006.
doi: 10.1109/CEC-EEE.2006.85. INSPEC Accession Number: 9189345. See also [318, 1146,
1797, 3010, 3011].
2999. Lei Xu, Adam Krzyżak, and Erkki Oja. Rival Penalized Competitive Learning for Clus-
tering Analysis, RBF Net, and Curve Detection. IEEE Transactions on Neural Net-
works, 4(4):636–649, July 1993, IEEE Computer Society Press: Los Alamitos, CA,
USA. doi: 10.1109/72.238318. Fully available at http://www.cse.cuhk.edu.hk/˜lxu/
papers/journal/IEEETNNrpcl93.PDF [accessed 2009-08-28].
3000. Lihong Xu, Erik D. Goodman, and Yongsheng Ding, editors. Proceedings of the First
ACM/SIGEVO Summit on Genetic and Evolutionary Computation (GEC’09), June 12–14,
2009, Hua-Ting Hotel & Towers: Shànghǎi, China. ACM Press: New York, NY, USA. Partly
available at http://www.sigevo.org/gec-summit-2009/. ACM Order No.: 910093.
3001. Yong Xu, Sancho Salcedo-Sanz, and Xin Yao. Editorial to the Special Issue on Na-
ture Inspired Approaches to Networks and Telecommunications. International Journal
of Computational Intelligence and Applications (IJCIA), 5(2):iii–viii, June 2005, World
Scientific Publishing Co.: Singapore and Imperial College Press Co.: London, UK.
doi: 10.1142/S1469026805001532.
3002. Yong Xu, Sancho Salcedo-Sanz, and Xin Yao. Metaheuristic Approaches to Traffic Groom-
ing in WDM Optical Networks. International Journal of Computational Intelligence and
Applications (IJCIA), 5(2):231–249, June 2005, World Scientific Publishing Co.: Singapore
and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026805001593. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/IJCIA_TrafficGrooming2005.
pdf [accessed 2009-08-10].
3003. XXX Seminário Integrado de Software e Hardware in Proceedings of XXIII Congresso da
Sociedade Brasileira de Computação (SEMISH’03), August 4–5, 2003.
3025. Xin Yao, editor. Evolutionary Computation: Theory and Applications. World Scientific
Publishing Co.: Singapore, January 28, 1996. isbn: 981-02-2306-4. Google Books
ID: GP7ChxbJOE4C.
3026. Xin Yao, Yong Liu, and Guangming Lin. Evolutionary Programming Made Faster. IEEE
Transactions on Evolutionary Computation (IEEE-EC), 3(2):82–102, July 1999, IEEE Com-
puter Society: Washington, DC, USA. doi: 10.1109/4235.771163. Fully available at http://
eprints.kfupm.edu.sa/38413/ and http://www.u-aizu.ac.jp/˜yliu/
publication/tec22r2_online.ps.gz [accessed 2009-08-07]. CiteSeerx : 10.1.1.14.1492.
INSPEC Accession Number: 6290499.
3027. Xin Yao, Feng-Sheng Wang, K. Padmanabhan, and Sancho Salcedo-Sanz. Hybrid Evolution-
ary Approaches to Terminal Assignment in Communications Networks. In Recent Advances in
Memetic Algorithms [1205], pages 129–159. Springer-Verlag GmbH: Berlin, Germany, 2005.
doi: 10.1007/3-540-32363-5_7.
3028. Xin Yao, Edmund K. Burke, José Antonio Lozano, Jim Smith, Juan Julián Merelo-
Guervós, John A. Bullinaria, Jonathan E. Rowe, Peter Tiño, Ata Kabán, and Hans-
Paul Schwefel, editors. Proceedings of the 8th International Conference on Parallel Problem
Solving from Nature (PPSN VIII), September 18–22, 2004, Birmingham, UK, volume
3242/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b100601. isbn: 3-540-23092-0. Google Books ID: dsI61Gy0_yAC.
OCLC: 56583354, 57501425, and 249904829. Library of Congress Control Number
(LCCN): 2004112163.
3029. Yiyu Yao, Zhongzhi Shi, Yingxu Wang, and Witold Kinsner, editors. Proceedings of the
Fifth IEEE International Conference on Cognitive Informatics (ICCI’06), July 17–19, 2006,
Běijīng, China. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-0475-4.
3030. Frank Yates. The Design and Analysis of Factorial Experiments. Imperial Bureau
of Soil Science, Commonwealth Agricultural Bureaux: Harpenden, England, UK, 1937–
1952. isbn: 0851982204. Google Books ID: 5-7RAAAAMAAJ and YW1OAAAAMAAJ.
asin: B00086SIZ0. Technical Communication No. 35.
3031. Tao Ye. Large-Scale Network Parameter Configuration using On-line Simulation Framework.
PhD thesis, Rensselaer Polytechnic Institute, Department of Electrical, Computer, and
Systems Engineering: Troy, NY, USA, ProQuest: Ann Arbor, MI, USA, March 2003,
Shivkumar Kalyanaraman, Advisor, Shivkumar Kalyanaraman, Biplab Sikdar, Christopher D.
Carothers, and Aparna Gupta, Committee members. Fully available at http://www.
ecse.rpi.edu/Homepages/shivkuma/research/papers/tao-phd-thesis.pdf [ac-
cessed 2008-10-16]. Order No.: AAI3088530. See also [3032].
3032. Tao Ye and Shivkumar Kalyanaraman. An Adaptive Random Search Algorithm for Opti-
mizing Network Protocol Parameters. Technical Report, Rensselaer Polytechnic Institute,
Department of Electrical, Computer, and Systems Engineering: Troy, NY, USA, 2001. Fully
available at http://www.ecse.rpi.edu/Homepages/shivkuma/research/papers/
tao-icnp2001.pdf [accessed 2008-10-16]. CiteSeerx : 10.1.1.24.8138. See also [3031].
3033. Gary G. Yen, Simon M. Lucas, Gary B. Fogel, Graham Kendall, Ralf Salomon, Byoung-Tak
Zhang, Carlos Artemio Coello Coello, and Thomas Philip Runarsson, editors. Proceedings
of the IEEE Congress on Evolutionary Computation (CEC’06), 2006, Sheraton Vancouver
Wall Centre Hotel: Vancouver, BC, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-9487-9. Google Books
ID: 4j5kNwAACAAJ and ZP7RPQAACAAJ. OCLC: 181016874 and 192047219. Catalogue
no.: 04TH8846. Part of [1341].
3034. John Jung-Woon Yoo, Soundar R. T. Kumara, and Dongwon Lee. A Web Service Composi-
tion Framework Using Integer Programming with Non-functional Objectives and Constraints.
In CEC/EEE’08 [1346], pages 347–350, 2008. doi: 10.1109/CECandEEE.2008.144. INSPEC
Accession Number: 10475108. See also [204, 662, 1998, 1999, 2064, 2066, 2067].
3035. Hee Yong Youn and We-Duke Cho, editors. Proceedings of the 10th International Conference
on Ubiquitous Computing (UbiComp’08), September 21–24, 2008, COEX, World Trade Cen-
ter: Gangnam-gu, Seoul, Korea, volume 344 in ACM International Conference Proceeding
Series (AICPS). ACM Press: New York, NY, USA.
3036. Marshall C. Yovits, editor. Advances in Computers, volume 31 in Advances in Computers.
Academic Press Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands, October 1990. isbn: 0-12-012131-X. Google Books
ID: hIZDUKGj1w8C. OCLC: 23189502, 444245898, and 493579034. Library of Congress
Control Number (LCCN): 59-15761.
3037. Marshall C. Yovits, George T. Jacobi, and Gordon D. Goldstein, editors. Self-Organizing
Systems (Proceedings of the conference sponsored by the Information Systems Branch of the
Office of Naval Research and the Armour Research Foundation of the Illinois Institute of
Technology.), May 22–24, 1962, Chicago, IL, USA. Spartan Books: Washington, DC, USA.
asin: B000GXZFFG.
3038. Fei Yu, editor. Proceedings of the International Symposium on Electronic Commerce and Secu-
rity (ISECS’08), August 3–5, 2008, Guǎngzhōu, Guǎngdōng, China. isbn: 0-7695-3258-6.
Google Books ID: soBiPgAACAAJ. OCLC: 253559152 and 262691080.
3039. Gwoing Tina Yu. Program Evolvability Under Environmental Variations and Neutrality. In
ECAL’07 [852], pages 835–844, 2007. doi: 10.1007/978-3-540-74913-4_84. See also [3040].
3040. Gwoing Tina Yu. Program Evolvability Under Environmental Variations and Neutrality.
In GECCO’07-II [2700], pages 2973–2978, 2007. doi: 10.1145/1274000.1274041. Workshop
Session – Evolution of natural and artificial systems - metaphors and analogies in single and
multi-objective problems. See also [3039].
3041. Gwoing Tina Yu and Peter J. Bentley. Methods to Evolve Legal Phenotypes. In PPSN
V [866], pages 280–291, 1998. doi: 10.1007/BFb0056843. Fully available at http://www.
cs.ucl.ac.uk/staff/p.bentley/YUBEC2.pdf [accessed 2010-08-06].
3042. Gwoing Tina Yu and Julian Francis Miller. Finding Needles in Haystacks Is Not Hard
with Neutrality. In EuroGP’02 [977], pages 46–54, 2002. doi: 10.1007/3-540-45984-7_2.
Fully available at http://www.cs.mun.ca/˜tinayu/index_files/addr/public_
html/EuroGP2002.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.5.1303.
3043. Gwoing Tina Yu, Lawrence Davis, Cem M. Baydar, and Rajkumar Roy, editors. Evolu-
tionary Computation in Practice, volume 88/2008 in Studies in Computational Intelligence.
Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-75771-9. Google Books
ID: om-uG-vk6GgC. Library of Congress Control Number (LCCN): 2007940149. Series
editor: Janusz Kacprzyk.
3044. Jeffrey Xu Yu, Xin Yao, Chi-Hon Choi, and Gang Gou. Materialized View Selection as Con-
strained Evolutionary Optimization. IEEE Transactions on Systems, Man, and Cybernetics –
Part C: Applications and Reviews, 33(4):458–467, November 2003, IEEE Systems, Man, and
Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCC.2003.818494. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/published_tsmc_view_nov03.
pdf [accessed 2009-04-09]. CiteSeerx : 10.1.1.6.4190. INSPEC Accession Number: 7782484.
3045. Jeffrey Xu Yu, Xuemin Lin, Hongjun Lu, and Yanchun Zhang, editors. Proceedings of
the 6th Asia-Pacific Web Conference on Advanced Web Technologies and Applications (AP-
Web’04), April 14–17, 2004, Hángzhōu, Zhèjiāng, China, volume 3007 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96838.
isbn: 3-540-21371-6. Google Books ID: yHylerhe4uIC. Library of Congress Control
Number (LCCN): 2004102546.
3046. Tian-Li Yu, Scott Santarelli, and David Edward Goldberg. Military Antenna Design Using
a Simple Genetic Algorithm and hBOA. In Scalable Optimization via Probabilistic Model-
ing – From Algorithms to Applications [2158], Chapter 12, pages 275–289. Springer-Verlag:
Berlin/Heidelberg, 2006. doi: 10.1007/978-3-540-34954-9.
3047. Jing Yuan, Yu Zheng, Chengyang Zhang, Wenlei Xie, Xing Xie, Guangzhong Sun, and Yan
Huang. T-Drive: Driving Directions Based on Taxi Trajectories. In ACM SIGSPATIAL GIS
2010 [35], pages 99–108, 2010. doi: 10.1145/1869790.1869807.
3048. Deniz Yuret and Michael de la Maza. Dynamic Hill Climbing: Overcoming the
Limitations of Optimization Techniques. In TAINN’93 [1643], pages 208–212, 1993.
Fully available at http://www.denizyuret.com/pub/tainn93.html [accessed 2010-07-23].
CiteSeerx : 10.1.1.53.6367.
3049. Lotfi A. Zadeh, Youakim Badr, Ajith Abraham, Yukio Ohsawa, Richard Chbeir, Fernando
Ferri, Mario Köppen, and Dominique Laurent, editors. Proceedings of the 5th IEEE/ACM
International Conference on Soft Computing as Transdisciplinary Science and Technology
(CSTST’08), October 27–31, 2008, University of Cergy-Pontoise: Cergy-Pontoise, France.
Association for Computing Machinery (ACM): New York, NY, USA. Library of Congress
Control Number (LCCN): 2009417688.
3050. Vladimir Zakian. New Formulation for the Method of Inequalities. Proceedings of the Insti-
tution of Electrical Engineers, 126:579–584, 1979, Institution of Electrical Engineers (IEE):
London, UK and Institution of Engineering and Technology (IET): Stevenage, Herts, UK.
See also [3051].
3051. Vladimir Zakian. New Formulation for the Method of Inequalities. In Madan G. Singh,
editor, Systems and Control Encyclopedia [2504], pages 3206–3215, volume 5. Pergamon
Press: Oxford, UK, 1987. See also [3050].
3052. Vladimir Zakian. Perspectives on the Principle of Matching and the Method of Inequali-
ties. Control Systems Centre Report 769, University of Manchester Institute of Science and
Technology (UMIST), Control Systems Centre: Manchester, UK, 1992. See also [3053].
3053. Vladimir Zakian. Perspectives on the Principle of Matching and the Method of Inequalities.
International Journal of Control, 65(1):147–175, September 1996, Taylor and Francis LLC:
London, UK. doi: 10.1080/00207179608921691. See also [3052].
3054. Vladimir Zakian and U. Al-Naib. Design of Dynamical and Control Systems by the Method
of Inequalities. IEE Proceedings D: Control Theory and Applications, 120
(11):1421–1427, 1973, Institution of Engineering and Technology (IET): Stevenage, Herts,
UK and Institution of Electrical Engineers (IEE): London, UK.
3055. Ali M. S. Zalzala, editor. First International Conference on Genetic Algorithms in Engineer-
ing Systems: Innovations and Applications (GALESIA’95), September 12–14, 1995, Sheffield,
UK, volume 414 in IET Conference Publications. Institution of Engineering and Technology
(IET): Stevenage, Herts, UK.
3056. Ali M. S. Zalzala, editor. Proceedings of the Second IEE/IEEE International Conference
On Genetic Algorithms in Engineering Systems: Innovations and Applications (GALE-
SIA’97), September 2–4, 1997, University of Strathclyde: Glasgow, Scotland, UK, volume
1997 (CP446) in IET Conference Publications. Institution of Engineering and Technology
(IET): Stevenage, Herts, UK. isbn: 0-85296-693-8. Google Books ID: uBPdAAAACAAJ.
OCLC: 37864166, 38989931, 84640200, 288957978, and 312682751. coden: IECPB4.
3057. Maciej Zaremba, Tomas Vitvar, Matthew Moran, Thomas Haselwanter, and Adina Sirbu.
WSMX Discovery for SWS Challenge. In Third Workshop of the Semantic Web Service
Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and
Discovery [2690], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Athens/papers/DERI-sws-challenge-3.pdf [accessed 2010-12-16]. See also [582,
1210, 1940, 3058, 3059].
3058. Maciej Zaremba, Tomas Vitvar, Matthew Moran, Marco Brambilla, Stefano Ceri, Dario
Cerizza, Emanuele Della Valle, Federico Michele Facca, and Christina Tziviskou. Towards
Semantic Interoperability – In-depth Comparison of Two Approaches to Solving Semantic
Web Service Challenge Mediation Tasks. In ICEIS’07 [494], pages 413–421, volume
4, 2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/2-DERI-Milano%20comparison.pdf [accessed 2010-12-16]. See also [582,
1210, 1940, 3057, 3059].
3059. Maciej Zaremba, Maximilian Herold, Raluca Zaharia, and Tomas Vitvar. Data and Pro-
cess Mediation Support for B2B Integration. In EON-SWSC’08 [1023], 2008. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/ [accessed 2010-12-16]. See also [582, 1210, 1940, 2166, 3057, 3058].
3060. Marvin Zelen and Norman C. Severo. Probability Functions. In Handbook of Mathemati-
cal Functions with Formulas, Graphs, and Mathematical Tables [2606], Chapter 26. Dover
Publications: Mineola, NY, USA, 1964.
3061. Zhi-hui Zhan and Jun Zhang. Discrete Particle Swarm Optimization for Multiple
Destination Routing Problems. In EvoWorkshops’09 [1052], pages 117–122, 2009.
doi: 10.1007/978-3-642-01129-0_15.
3062. Chuan Zhang and Jian Yang. Genetic Algorithm for Materialized View Selec-
tion in Data Warehouse Environments. In DaWaK’99 [1924], pages 116–125, 1999.
doi: 10.1007/3-540-48298-9_12.
3063. Chuan Zhang, Xin Yao, and Jian Yang. Evolving Materialized Views in Data Warehouse.
In CEC’99 [110], pages 823–829, 1999. doi: 10.1109/CEC.1999.782507. Fully available
at http://www.cs.bham.ac.uk/˜xin/papers/ZhangYaoYangCEC99.pdf [accessed 2011-
04-09]. CiteSeerx : 10.1.1.45.4743 and 10.1.1.63.5130. INSPEC Accession Num-
ber: 6338891.
3064. Chuan Zhang, Xin Yao, and Jian Yang. An Evolutionary Approach to Materialized Views
Selection in a Data Warehouse Environment. IEEE Transactions on Systems, Man, and
Cybernetics – Part C: Applications and Reviews, 31(3):282–294, August 2001, IEEE Sys-
tems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/5326.971656.
CiteSeerx : 10.1.1.13.9065. INSPEC Accession Number: 7141147.
3065. Du Zhang, Yingxu Wang, and Witold Kinsner, editors. Proceedings of the Sixth IEEE Interna-
tional Conference on Cognitive Informatics (ICCI’07), August 6–8, 2007, Lake Tahoe, CA,
USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-1327-3.
3066. Guoqiang Peter Zhang. Neural Networks for Classification: A Survey. IEEE Trans-
actions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, 30(4):
451–462, 2000, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/5326.897072. Fully available at http://vis.lbl.gov/˜romano/mlgroup/
papers/neural-networks-survey.pdf [accessed 2010-07-24].
3067. Jing Zhang, Weiran Nie, Mark Panahi, Yichin Chang, and Kwei-Jay Lin. Business
Process Composition with QoS Optimization. In CEC’09 [1245], pages 499–502, 2009.
doi: 10.1109/CEC.2009.81. INSPEC Accession Number: 10839133. See also [1571, 2264,
3073, 3074].
3068. Liang-Jie Zhang, editor. Proceedings of the 5th World Conference on Services: Part I
(SERVICES’09-I), July 6–10, 2009, Los Angeles, CA, USA. IEEE Computer Society: Pis-
cataway, NJ, USA.
3069. Liang-Jie Zhang, Wil van der Aalst, and Patrick C. K. Hung, editors. Proceedings of the IEEE
International Conference on Services Computing (SCC’07), July 9–13, 2007, Salt Lake City,
UT, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2925-9.
Library of Congress Control Number (LCCN): 2007927952. Order No.: P2925.
3070. P. Zhang and A. H. Coonick. Coordinated Synthesis of PSS Parameters in Multi-Machine
Power Systems Using the Method of Inequalities Applied to Genetic Algorithms. IEEE Trans-
actions on Power Systems, 15(2):811–816, May 2000, IEEE Power & Energy Society (PES):
Piscataway, NJ, USA. doi: 10.1109/59.867178. Fully available at http://www.lania.mx/
˜ccoello/EMOO/zhang00.pdf.gz [accessed 2008-11-14]. CiteSeerx: 10.1.1.8.9768.
3071. Qingzhou Zhang, Xia Sun, and Ziqiang Wang. An Efficient MA-Based Materialized Views
Selection Algorithm. In CASE’09 [2660], pages 315–318, 2009. doi: 10.1109/CASE.2009.111.
INSPEC Accession Number: 10813304.
3072. Shichao Zhang and Ray Jarvis, editors. Advances in Artificial Intelligence. Proceedings of the
18th Australian Joint Conference on Artificial Intelligence (AI’05), December 5–9, 2005, Uni-
versity of Technology, Sydney (UTS): Sydney, NSW, Australia, volume 3809/2005 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11589990. isbn: 3-540-30462-2.
Library of Congress Control Number (LCCN): 2005936732.
3073. Yue Zhang, Tao Yu, Krishna Raman, and Kwei-Jay Lin. Strategies for Efficient Syntactical
and Semantic Web Services Discovery and Composition. In CEC/EEE’06 [2967], pages 452–
454, 2006. doi: 10.1109/CEC-EEE.2006.84. INSPEC Accession Number: 9189346. See also
[318, 2264, 3067, 3074].
3074. Yue Zhang, Krishna Raman, Mark Panahi, and Kwei-Jay Lin. Heuristic-based Service Com-
position for Business Processes with Branching and Merging. In CEC/EEE’07 [1344], pages
525–528, 2007. doi: 10.1109/CEC-EEE.2007.53. INSPEC Accession Number: 9868639. See
also [319, 2264, 3067, 3073].
3075. R. T. Zheng, N. Q. Ngo, P. Shum, S. C. Tjin, and L. N. Binh. A Staged Continuous Tabu
Search Algorithm for the Global Optimization and its Applications to the Design of Fiber
Bragg Gratings. Computational Optimization and Applications, 30(3):319–335, March 2005,
Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht,
Netherlands. doi: 10.1007/s10589-005-4563-9.
3076. Shijue Zheng, Shaojun Xiong, Yin Huang, and Shixiao Wu. Using Methods of Association
Rules Mining Optimization in Web-Based Mobile-Learning System. In ISECS’08 [3038],
pages 967–970, 2008. doi: 10.1109/ISECS.2008.223. INSPEC Accession Number: 10183507.
3077. Yu Zheng, Quannan Li, Yukun Chen, Xing Xie, and Wei-Ying Ma. Understand-
ing Mobility based on GPS Data. In UbiComp’08 [3035], pages 312–321, 2008.
doi: 10.1145/1409635.1409677. See also [1892, 3078].
3078. Yu Zheng, Like Liu, Longhao Wang, and Xing Xie. Learning Transportation Mode from
Raw GPS Data for Geographic Applications on the Web. In WWW’08 [140], pages 247–256,
2008. doi: 10.1145/1367497.1367532. Fully available at http://www2008.org/papers/
pdf/p247-zhengA.pdf [accessed 2011-02-19]. See also [1892, 3077].
3079. Jinghui Zhong, Xiaomin Hu, Jun Zhang, and Min Gu. Comparison of Performance
between Different Selection Strategies on Simple Genetic Algorithms. In CIMCA-
IAWTIC’06 [1923], pages 1115–1121, 2006. doi: 10.1109/CIMCA.2005.1631619. Fully avail-
able at http://www.ee.cityu.edu.hk/˜jzhang/papers/zhongjinghui06.pdf [accessed 2010-08-03].
CiteSeerx: 10.1.1.140.3747. INSPEC Accession Number: 9109532.
3080. Chi Zhou, Weimin Xiao, Thomas M. Tirpak, and Peter C. Nelson. Evolving Accurate and
Compact Classification Rules with Gene Expression Programming. IEEE Transactions on
Evolutionary Computation (IEEE-EC), 7(6):519–531, December 2003, IEEE Computer So-
ciety: Washington, DC, USA. doi: 10.1109/TEVC.2003.819261. INSPEC Accession Num-
ber: 7826902.
3081. Xiaofang Zhou, editor. Proceedings of the 13th Australasian Database Conference (ADC’02),
January–February 2002, Monash University: Melbourne, VIC, Australia, volume 5 in Con-
ferences in Research and Practice in Information Technology (CRPIT). ACM Press: New
York, NY, USA. isbn: 0-909-92583-6.
3082. Xiaofang Zhou, Maria E. Orlowska, and Yanchun Zhang, editors. Proceedings of the 5th Asia-
Pacific Web Conference (APWeb’03), April 23–25, 2003, Xı̄’ān, Shǎnxı̄, China, volume 2642
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-36901-5. isbn: 3-540-02354-2.
3083. Jianping Zhu, Pete Bettinger, and Rongxia (Tiffany) Li. Additional Insight into the Per-
formance of a New Heuristic for Solving Spatially Constrained Forest Planning Problems.
Silva Fennica, 41(4):687–698, Finnish Society of Forest Science, The Finnish Forest Research
Institute: Finland, Eeva Korpilahti, editor. Fully available at http://www.metla.fi/
silvafennica/full/sf41/sf414687.pdf [accessed 2008-08-23].
3084. Kenny Qili Zhu. A Diversity-Controlling Adaptive Genetic Algorithm for the Vehi-
cle Routing Problem with Time Windows. In ICTAI’03 [1367], pages 176–183, 2003.
doi: 10.1109/TAI.2003.1250187. INSPEC Accession Number: 7862120.
3085. Liming Zhu, Roger L. Wainwright, and Dale A. Schoenefeld. A Genetic Algorithm
for the Point to Multipoint Routing Problem with Varying Number of Requests. In
CEC’98 [2496], pages 171–176, 1998. doi: 10.1109/ICEC.1998.699496. Fully available
at http://euler.mcs.utulsa.edu/˜rogerw/papers/Zhu-ICEC98.pdf [accessed 2009-09-03].
CiteSeerx: 10.1.1.32.5655.
3086. Weihang Zhu. A Study of Parallel Evolution Strategy – Pattern Search on a GPU Computing
Platform. In GEC’09 [3000], pages 765–771, 2009. doi: 10.1145/1543834.1543939. See also
[3087].
3087. Weihang Zhu. Nonlinear Optimization with a Massively Parallel Evolution Strategy-
Pattern Search Algorithm on Graphics Hardware. Applied Soft Computing, 11(2):1770–1781,
March 2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2010.05.020.
See also [3086].
3088. Karin Zielinski and Rainer Laur. Stopping Criteria for Constrained Optimization with Parti-
cle Swarms. In BIOMA’06 [919], pages 45–54, 2006. Fully available at http://www.item.
uni-bremen.de/staff/zilli/zielinski06stopping_PSO.pdf [accessed 2007-09-13]. See
also [3089].
3089. Karin Zielinski and Rainer Laur. Stopping Criteria for a Constrained Single-Objective
Particle Swarm Optimization Algorithm. Informatica, 31(1):51–59, 2007, Slovensko
društvo Informatika (Slovenian Society Informatika). Fully available at http://
www.informatica.si/vols/vol31.html and http://www.item.uni-bremen.de/
staff/zilli/zielinski07informatica.pdf [accessed 2007-09-13]. Extension of [3088].
3090. Antanas Žilinskas. Algorithm AS 133: Optimization of One-Dimensional Multimodal Func-
tions. Journal of the Royal Statistical Society: Series C – Applied Statistics, 27(3):367–375,
1978, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
doi: 10.2307/2347182.
3091. Hans-Jürgen Zimmermann, editor. Proceedings of the 5th European Congress on Intelligent
Techniques and Soft Computing (EUFIT’97), September 8–11, 1997, Aachen, North Rhine-
Westphalia, Germany. ELITE Foundation: Aachen, North Rhine-Westphalia, Germany and
Wissenschaftsverlag Mainz: Germany.
3092. Uwe Zimmermann, Ulrich Derigs, Wolfgang Gaul, Rolf H. Möhring, and Karl-Peter Schuster,
editors. Operations Research Proceedings 1996, Selected Papers of the Symposium on Oper-
ations Research (SOR’96), September 3–6, 1996, Braunschweig, Lower Saxony, Germany.
Springer-Verlag: Berlin/Heidelberg. isbn: 3540626301. Google Books ID: FK5gAQAACAAJ.
OCLC: 36865524, 174184822, and 243863641. Published 1997.
3093. Lyudmila Zinchenko, Matthias Radecker, and Fabio Bisogno. Application of the Univariate
Marginal Distribution Algorithm to Mixed Analogue-Digital Circuit Design and Optimisa-
tion. In EvoWorkshops’07 [1050], pages 431–438, 2007. doi: 10.1007/978-3-540-71805-5_48.
3094. Stanley Zionts, editor. Proceedings of the 2nd International Conference on Multiple Crite-
ria Decision Making (MCDM’77), August 22–26, 1977, Buffalo, NY, USA, volume 155 in
Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg.
isbn: 0-387-08661-7 and 3-540-08661-7.
3095. Christian Zirpins, Guadalupe Ortiz, Winfried Lamersdorf, and Wolfgang Emmerich, editors.
First International Workshop on Engineering Service Compositions (WESC’05). Technical
Report RC23821, IBM Research Division: Yorktown Heights, NY, USA, December 12, 2005,
Amsterdam, The Netherlands. Partly available at http://fresco-www.informatik.
uni-hamburg.de/wesc05/ [accessed 2011-02-28].
3096. Eckart Zitzler and Simon Künzli. Indicator-based Selection in Multiobjective Search. In
PPSN VIII [3028], pages 832–842, 2004. Fully available at http://www.tik.ee.ethz.
ch/sop/publicationListFiles/zk2004a.pdf [accessed 2009-07-18].
3097. Eckart Zitzler and Lothar Thiele. An Evolutionary Algorithm for Multiobjective Op-
timization: The Strength Pareto Approach. TIK-Report 43, Eidgenössische Technis-
che Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engi-
neering and Networks Laboratory (TIK): Zürich, Switzerland, May 1998. Fully avail-
able at http://www.tik.ee.ethz.ch/sop/publicationListFiles/zt1998a.pdf [accessed 2009-07-10].
CiteSeerx: 10.1.1.40.7696.
3098. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of Multiobjective Evo-
lutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173–195, Sum-
mer 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600568202. Fully avail-
able at http://sci2s.ugr.es/docencia/cursoMieres/EC-2000-Comparison.pdf
and http://www.lania.mx/˜ccoello/EMOO/zitzler99b.ps.gz [accessed 2009-08-07].
CiteSeerx : 10.1.1.30.5848. PubMed ID: 10843520.
3099. Eckart Zitzler, Kalyanmoy Deb, Lothar Thiele, Carlos Artemio Coello Coello, and
David Wolfe Corne, editors. Proceedings of the First International Conference on
Evolutionary Multi-Criterion Optimization (EMO’01), March 7–9, 2001, Eidgenössische
Technische Hochschule (ETH) Zürich: Zürich, Switzerland, volume 1993/2001 in Lec-
ture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-44719-9. isbn: 3-540-41745-1. Google Books ID: F29RAAAAMAAJ
and d52UjQ5pfcIC. OCLC: 46240282, 59550134, 76254334, 77755314, 150396610,
243488365, and 313725155.
3100. Eckart Zitzler, Marco Laumanns, and Lothar Thiele. SPEA2: Improving the Strength
Pareto Evolutionary Algorithm. TIK-Report 101, Eidgenössische Technische Hochschule
(ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Net-
works Laboratory (TIK): Zürich, Switzerland, May 2001. Fully available at http://
www.tik.ee.ethz.ch/sop/publicationListFiles/zlt2001a.pdf [accessed 2009-07-10].
CiteSeerx : 10.1.1.17.2440. Errata added 2001-09-27.
3101. Mark Zlochin, Mauro Birattari, Nicolas Meuleau, and Marco Dorigo. Model-Based
Search for Combinatorial Optimization: A Critical Survey. Annals of Operations Re-
search, 132(1-4):373–395, November 2004, Springer Netherlands: Dordrecht, Nether-
lands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1023/B:ANOR.0000039526.52305.af.
3102. Constantin Zopounidis, editor. Proceedings of the 18th International Conference on Multiple
Criteria Decision Making (MCDM’06), June 9–13, 2006, MAICh (Mediterranean Agronomic
Institute of Chania) Conference Centre: Chania, Crete, Greece. Partly available at http://
www.dpem.tuc.gr/fel/mcdm2006/ [accessed 2007-09-10].
3103. Blaz Zupan, Elpida Keravnou, and Nada Lavrac, editors. Proceedings Intelligent Data Anal-
ysis in Medicine and Pharmacology (IDAMAP’01), September 4, 2001, Custom House Hotel
at ExCeL: London, UK, The Springer International Series in Engineering and Computer
Science. Springer US: Boston, MA, USA.
3104. Jacek M. Zurada, Gary G. Yen, and Jun Wang, editors. Computational Intelligence: Re-
search Frontiers – IEEE World Congress on Computational Intelligence – Plenary/Invited
Lectures (WCCI), June 1–6, 2008, Hong Kong Convention and Exhibition Centre: Hong
Kong (Xiānggǎng), China, volume 5050/2008 in Theoretical Computer Science and General
Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-68860-0. isbn: 3-540-68858-7 and 3540688609. Google
Books ID: X-WH9-BP6JoC. OCLC: 228414687, 243453135, 244010459, and 315620902.
Library of Congress Control Number (LCCN): 2008927523.
3105. Katharina Anna Zweig. On Local Behavior and Global Structures in the Evolu-
tion of Complex Networks. PhD thesis, Eberhard-Karls-Universität Tübingen, Fakultät
für Informations- und Kognitionswissenschaften: Tübingen, Germany, July 2007.
Fully available at http://www-pr.informatik.uni-tuebingen.de/mitarbeiter/
katharinazweig/downloads/Diss.pdf [accessed 2008-06-12]. See also [3106].
3106. Katharina Anna Zweig and Michael Kaufmann. Evolutionary Algorithms for the
Self-Organized Evolution of Networks. In GECCO’05 [304], pages 563–570, 2005.
doi: 10.1145/1068009.1068105. Fully available at http://www-pr.informatik.
uni-tuebingen.de/mitarbeiter/katharinazweig/downloads/299-lehmann.
pdf [accessed 2008-05-28]. See also [3105].
Index

    Monte Carlo . . . 106, 148
    optimization . . . 23, 103
        bayesian . . . 184
    probabilistic . . . 33
    randomized . . . 33
    stochastic . . . 33
Algorithmic Chemistry . . . 327
ALIO/EURO . . . 133, 228, 235, 237, 251, 321, 355, 375, 385, 416, 423, 430, 461, 469, 475, 479, 485, 491, 496, 500, 502, 506, 518, 521, 524, 528
Allele . . . 82 f.
Amplifier . . . 47
ANN . . . 187, 191 f., 204, 273, 389, 396, 406, 536, 541
Annealing
    simulated . . . 3, 25, 32 f., 36, 156 f., 160, 181, 231, 243–249, 252, 308, 412, 463 f., 493, 541, 557, 716, 740, 945
Ant
    artificial . . . 57
Ant Colony Optimization . . . 3, 32, 37, 207, 471 f., 477, 945
Antenna
    design . . . 47
Antisymmetry . . . 645
ANTS . . . 474, 484
ANZIIS . . . 133
AO . . . 272
Artificial Ant . . . 57, 61, 64, 69, 76, 78, 107, 380, 566
Artificial Development . . . 272
Artificial Embryogeny . . . 272
Artificial Intelligence . . . 31, 35, 76, 536
Artificial Neural Network . . . 187, 191 f., 204, 273, 389, 396, 406, 536, 541
Artificial Ontogeny . . . 272
ASC . . . 134, 321, 355, 375, 385, 416, 423, 430, 461, 469, 475, 485
Asexual Reproduction . . . 307, 325
Assimilation
    genetic . . . 465
AST . . . 389
Asymmetry . . . 645
Autocorrelation . . . 163
Automatically Defined Function . . . 273, 400 f., 403
Automatically Defined Macro . . . 401, 403

—B—
Baldwin . . . 464
    effect . . . 464
Basin-δ . . . 163
Bayesian Optimization Algorithm . . . 184
BB . . . 32, 523
BBH . . . 84, 121, 183, 217, 341, 346, 558 f., 566
    trial . . . 674
Best-First Search . . . 519
Best-Limited Search . . . 519
BFS . . . 32, 115, 515 ff., 519
Bias . . . 692
BibTeX . . . 945
Big-O notation . . . 144
Bijective . . . 645
Bijectivity . . . 645
Bin Packing . . . 25, 39, 181, 183, 238
BinInt . . . 153 f., 562 f.
Binomial Distribution . . . 674
BIOMA . . . 320, 354, 374, 384, 415, 422, 429, 460, 468, 474, 484
BIONETICS . . . 130, 320, 354, 374, 384, 415, 422, 429, 460, 468, 474, 484
BitCount . . . 562
Black Box . . . 35 f., 42, 91, 1209
Bloat . . . 173
Block
    building . . . 346
BLUE . . . 696
BOA . . . 184
Boolean . . . 640
Branch And Bound . . . 32, 523
Breadth-First Search . . . 32, 115, 515, 517, 519
Breeding Selection . . . 289
Brute Force . . . 46
Bucket Brigade Algorithm . . . 458
Building Block . . . 346
Building Block Hypothesis . . . 84, 121, 183, 217, 341, 346, 558 f., 566
building blocks . . . 84
Bypass
    extradimensional . . . 219

—C—
C . . . 405 f.
C 745 . . . 539
CACSD . . . 72
Candidate
    solution . . . 43
Capacitor . . . 47
Capacity . . . 544
CarpeNoctem . . . 37
Cartesian Genetic Programming . . . 174
Catastrophe
    complexity . . . 555
Causality . . . 84, 162, 218, 571
CDF . . . 658, 660
    continuous . . . 658
    discrete . . . 658
CEC . . . 317, 353, 373, 383, 414, 421, 429, 459, 467, 474, 484
CEGNA . . . 454
bcGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 Cellular Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Bernoulli Central Limit Theorem . . . . . . . . . . . . . . . . . . . . . 681
distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 674 CGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674 cGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439 ff.
1192 INDEX
CGP . . . 174
CGPS . . . 405 f.
Change
  non-synonymous . . . 173
  synonymous . . . 173
Chemistry . . . 21, 509
Chi-square Distribution . . . 684
Chromosome . . . 326
Chromosomes
  string
    fixed-length . . . 328
    variable-length . . . 339
  tree . . . 386
CI . . . 696
CIMCA . . . 320, 354, 374, 385, 415, 422, 430, 461, 468, 475, 485
Circuit
  analog
    design . . . 47
Circuit Layout . . . 26
CIS . . . 320, 355, 374, 385, 415, 422, 430, 461, 468, 475, 485
CISC . . . 406
Class
  equivalence . . . 646
Classification . . . 154
Classifier System . . . 339, 457 f.
  learning . . . 3, 32, 264, 457 f., 536, 945
Classifier Systems
  learning . . . 536
Clearing . . . 303
Climbing
  hill . . . 32 f., 41, 85, 96, 99, 114, 116, 157, 160, 165, 167, 169, 172, 176, 181, 210 f., 229 – 232, 238 ff., 243, 245 f., 248, 252, 266, 296, 359 f., 377, 400, 412, 425, 463 f., 482, 501, 559, 563, 565, 735, 737, 945
    random restarts . . . 501
CLT . . . 681
CO2 . . . 537
Co-Evolution . . . 493
Code
  gray . . . 271
Code Bloat . . . 534
Coding . . . 71
Coefficient
  negative slope . . . 163
Coefficient of Variation . . . 663
Coevolution . . . 205
  cooperative . . . 205
COGANN . . . 355
Combination . . . 655
Combinatorial Optimization . . . 181
Combinatorial Problem . . . 25
Combinatorics . . . 654
Competition . . . 463
Complete Enumeration . . . 212
Completeness . . . 85, 146, 157, 218, 514
  weak . . . 85
Complexity . . . 143, 203
complexity . . . 147
Complexity Catastrophe . . . 555
Computational Embryogeny . . . 272
Computing
  evolutionary . . . 36
  soft . . . 34
Confidence
  coefficient . . . 697
  interval . . . 696
Constrained-domination . . . 75
Constraint Handling . . . 70
Constraints . . . 22, 70, 509
Continuous Distributions . . . 676
Continuous Optimization . . . 27
Continuous Problem . . . 27
Controller
  robot . . . 47
Convergence . . . 151
  domino . . . 153 f., 218, 562
  non-uniform . . . 153
  premature . . . 151 f., 482, 547
  prevention . . . 304
Convergence Prevention . . . 546
Cooperation . . . 463
Copyleft . . . 947
Copyright . . . 4, 947, 955 f.
Correlation
  fitness distance . . . 163
  genotype-fitness . . . 163
  operator . . . 163
Count . . . 660
Covariance . . . 663
  estimate . . . 664
CPU . . . 406
Creation . . . 306, 328, 340, 389
Criterion
  termination . . . 105
Criticality
  self-organized . . . 493
CRM . . . 536
Crossover . . . 262, 307, 335, 340 f., 399
  1-PX . . . 335
  2-PX . . . 335
  average . . . 337, 362
  headless chicken . . . 339, 399
    strong . . . 399
    weak . . . 400
  homologous . . . 338, 340, 407
  k-PX . . . 335
  MPX . . . 335
  multi-point . . . 335, 338
  point . . . 335
  random . . . 339, 399
  single-point . . . 335, 400
  SPX . . . 335
  sticky . . . 408
  strong context preserving . . . 400
  subtree . . . 398
  TPX . . . 335
  tree . . . 398
  two-point . . . 335
  uniform . . . 337, 346, 362, 377
  UX . . . 337, 346, 362
CS . . . 339, 457 f.
CSTST . . . 132, 320, 355, 374, 385, 415, 422, 430, 461, 468, 475, 485
Cumulative Distribution Function . . . 658
Customer Relationship Management . . . 536
Cut . . . 348
Cut & Splice . . . 340
Cutting-Plane Method . . . 32
CVRP . . . 538
Cycle . . . 651

—D—
Darwin . . . 464
Data Mining . . . 154, 536
DE . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, 444, 463, 580, 771, 794, 796
Death Penalty . . . 71
Deceptiveness . . . 115, 167, 182, 557
Deceptivity . . . 167, 182
Decile . . . 667
Decision Maker
  external . . . 75
Decision Tree . . . 536
Decoder . . . 71
Decreasing . . . 647
  monotonically . . . 647
Defined Length . . . 341
DELB . . . 420
Delivery . . . 544
Depth-First Search . . . 32, 115, 516 f., 519
  iterative deepening . . . 32, 516
Depth-Limited Search . . . 516 f.
DERL . . . 420
DES . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, 444, 463, 580, 771, 794, 796
Design
  antenna . . . 47
  circuit
    analog . . . 47
Design Of Experiments . . . 545
Design of Experiments . . . 178
Determinism . . . 31
Deterministic . . . 511
Deterministic Selection . . . 289
Deutsche Post . . . 537
Development
  artificial . . . 272
Developmental Approach . . . 272
Deviation
  standard . . . 662
DFS . . . 32, 115, 516 f., 519
DHL . . . 537 f., 550
Differential Evolution . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, 444, 463, 580, 771, 794, 796
Differentiation . . . 52
Difficult . . . 138
Dilemma
  prisoner's . . . 380
Dimension . . . 197
Dimensionality . . . 197
Dinosaur . . . 177
Diodes . . . 47
Direct Mapping . . . 270
Discrete . . . 29
Discrete Distributions . . . 668
Discrete Optimization . . . 29
Discrete Problem . . . 29
Distance . . . 544
Distribution . . . 658, 668, 676
  χ2 . . . 684
  Binomial . . . 674
  chi-square . . . 684
  continuous . . . 676
  discrete . . . 668
  estimation of, algorithm . . . 27, 32, 267, 367, 427, 431, 442, 447, 452 f., 455, 554, 775
  exponential . . . 682
  normal . . . 678
    multivariate . . . 680
    standard . . . 678
  Poisson . . . 671
  Student's t . . . 687
  t . . . 687
  uniform . . . 668, 676
    continuous . . . 676
    discrete . . . 668
Diversification . . . 155
Diversity . . . 70, 154, 216, 330, 452, 547
Divisor
  greatest common . . . 180
DLS . . . 516 f.
DNA . . . 272, 338
DOE . . . 178
DoE . . . 545
Domination . . . 65
  constrained . . . 75
  rank . . . 67
  resistance . . . 69, 198
Domino
  convergence . . . 153 f., 218, 562
Domino Convergence . . . 154
Don't Care . . . 342
Downhill Simplex . . . 32, 104, 264, 420, 463, 501, 945
DPE . . . 563
Drunkard's Walk . . . 114
DTM . . . 145, 147, 149
Duplication . . . 306 f.
DVRP . . . 538

—E—
E-Book . . . 3 f., 945
E-code . . . 271
EA . . . 3, 32, 36 f., 75 ff., 84, 86, 93 ff., 121, 155 ff., 162 f., 172 f., 183 f., 207, 210, 212, 229 f., 245, 253 ff., 258, 261 – 267, 269 f., 272 ff., 277, 279, 285 ff., 289, 294, 302, 306 f., 309, 312, 324 f., 327, 335, 359 f., 377, 379 f., 393, 412, 419, 427, 434, 452 f., 457, 463 f., 466, 512, 538, 541 f., 544 ff., 551, 562, 566 ff., 577, 622, 699, 717, 746, 945
  hybrid . . . 463
EARL . . . 457
EBCOA . . . 453
EC . . . 3, 24, 31 f., 36, 48, 70, 156, 180, 253, 379, 537, 553, 945
ECML . . . 132, 461
ECML PKDD . . . 135, 462
Ecological Selection . . . 260
Economics
  mathematical . . . 23, 509
EDA . . . 27, 32, 267, 367, 427, 431, 434, 442, 447, 452 f., 455, 554, 775
Edge Encoding . . . 273
EDI . . . 407
Editing . . . 396
EDM . . . 134
Effect
  Baldwin . . . 464
  hiding . . . 466
Effectiveness . . . 113
Efficiency . . . 113
  Pareto . . . 65
Elitism . . . 267, 546
Embryogenesis . . . 272
Embryogenic . . . 272
Embryogeny . . . 272
  artificial . . . 272
  computational . . . 272
EMO . . . 55, 65, 202, 253, 257, 259 f., 279, 311, 319, 353, 374, 384, 414, 421, 429, 460, 467, 474, 484
EMOO . . . 253
EN 284 . . . 539
Encapsulation . . . 396 f.
Encoding
  cellular . . . 272
Endnote . . . 945
Endogenous . . . 90, 360, 365 ff., 481
EngOpt . . . 132, 228, 234, 237, 251, 320, 355, 375, 385, 415, 422, 430, 461, 468, 475, 478, 485, 490, 496, 500, 502, 506, 518, 521, 524, 528
Enterprise Resource Planning . . . 536
Entropy . . . 667
  continuous . . . 667
  differential . . . 667
  information . . . 667
Enumeration
  complete . . . 212
  exhaustive . . . 113, 116 f., 143, 168, 210, 212
Environment Selection . . . 261
Environmental Selection . . . 260
EO . . . 3, 25, 32, 172, 493 f., 945
  GEO . . . 495
Eoarchean . . . 258
EP . . . 27, 32, 264, 380, 413, 415
Ephemeral Random Constants . . . 532
EPIA . . . 132
Epistacy . . . 177
Epistasis . . . 163, 177, 181, 204, 389, 404, 554, 560, 562, 569, 577 f.
Epistatic Road . . . 560
Epistatic Variance . . . 163
Equilibrium . . . 493
  punctuated . . . 172, 494
Equivalence
  class . . . 646
  relation . . . 646
Ergodicity . . . 247 f.
ERL . . . 457
ERP . . . 536
Error . . . 692
  α . . . 700
  β . . . 700
  mean square . . . 692
  threshold . . . 163, 562
  type 1 . . . 700
  type 2 . . . 700
Error Threshold . . . 163, 562
ES . . . 3, 27, 32, 37, 155, 162, 188, 191, 216, 229, 263 f., 266 f., 286, 289, 302, 359 f., 362, 364, 367 f., 370 ff., 377, 419, 425, 427, 444, 481, 566, 580 f., 760, 786, 788, 945
  differential . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, 444, 463, 580, 771, 794, 796
Estimation Of Distribution Algorithm . . . 27, 32, 267, 367, 427, 431, 442, 447, 452 f., 455, 554, 775
Estimation Theory . . . 691
Estimator . . . 691 f.
  best linear unbiased . . . 696
  maximum likelihood . . . 695
  point . . . 692
  unbiased . . . 692
EUFIT . . . 132, 320, 355, 375, 385, 415, 422, 430, 461, 468, 475, 485
EUROGEN . . . 354, 374
EuroGP . . . 383
Evaluation . . . 105
Event
  certain . . . 654
  conflicting . . . 654
  elementary . . . 653
  impossible . . . 654
  random . . . 654
EvoCOP . . . 321, 355, 375, 385, 416, 422, 430, 461, 468, 475, 485
Evolution . . . 21, 509
  Baldwinian . . . 464
  chemical . . . 257
  differential . . . 419
  grammatical . . . 204, 273, 403
  Lamarckian . . . 464
Evolution Strategy . . . 3, 27, 32, 37, 155, 162, 188, 191, 216, 229, 263 f., 266 f., 286, 289, 302, 359 f., 362, 364, 367 f., 370 ff., 377, 419, 425, 427, 444, 481, 566, 580 f., 760, 786, 788, 945
  differential . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, 444, 463, 580, 771, 794, 796
Evolutionary Algorithm . . . 3, 32, 36 f., 75 ff., 84, 86, 93 ff., 121, 155 ff., 162 f., 172 f., 183 f., 207, 210, 212, 229 f., 245, 253 ff., 258, 261 – 267, 269 f., 272 ff., 277, 279, 285 ff., 289, 294, 302, 306 f., 309, 312, 324 f., 327, 335, 359 f., 377, 379 f., 393, 412, 419, 427, 434, 452 f., 457, 463 f., 466, 512, 538, 541 f., 544 ff., 551, 562, 566 ff., 577, 622, 699, 717, 746, 945
  generational . . . 265
  hybrid . . . 463
  multi-objective . . . 253
  steady-state . . . 266
Evolutionary Computation . . . 3, 24, 31 f., 36, 48, 70, 156, 180, 253, 379, 537, 553, 945
Evolutionary Multi-Objective Optimization . . . 253
Evolutionary Operation . . . 264, 501
  randomized . . . 264
Evolutionary Programming . . . 27, 32, 264, 380, 413
Evolutionary Reinforcement Learning . . . 457
Evolvability . . . 163, 172
EvoNUM . . . 321, 355, 375, 386, 416, 423, 430, 461, 469, 502
EVOP . . . 264
EvoRobots . . . 322, 356, 375, 386, 416, 423, 431, 462, 469, 476, 486
EvoWorkshops . . . 318, 353, 373, 383, 414, 421, 429, 459, 467, 474, 484
EWSL . . . 134, 461
Exhaustive Enumeration . . . 113, 116 f., 143, 168, 210, 212
Exogenous . . . 90, 360
expand . . . 512
Expected value . . . 661
Experiment
  Miller-Urey . . . 258
Explicit Mapping . . . 270
Explicit Representation . . . 270
Exploitation . . . 156, 162, 216
Exploration . . . 155, 216, 360, 512
Exponential Distribution . . . 682
External Decision Maker . . . 75 ff., 281
Extinctive Selection . . . 183, 265
  left . . . 265
  right . . . 265
Extradimensional Bypass . . . 219
Extrema Selection . . . 174
Extremal Optimization . . . 3, 25, 32, 172, 493 f., 945
  generalized . . . 495
Extremum . . . 79
  global . . . 53
  local . . . 52

—F—
Factorial . . . 654
False Negative . . . 700
False Positive . . . 700
FDC . . . 163
FDL . . . 4, 947
FEA . . . 320, 354, 374, 384, 415, 422, 429, 460, 468
Feasibility . . . 70
Feature Selection . . . 154
Fibonacci Path . . . 566
Filter
  Butterworth . . . 47
  Chebycheff . . . 47
Finite State Machine . . . 380
First Hitting Time . . . 38
Fitness . . . 94, 257, 259
  assignment process . . . 274
  nature . . . 263
  optimization . . . 263
Fitness Assignment . . . 274
  Pareto ranking . . . 275
  Prevalence ranking . . . 275
  Tournament . . . 284
  weighted sum . . . 275
Fitness Function . . . 36
Fitness Landscape . . . 93
  deceptive . . . 167
  ND . . . 557
  neutral . . . 171
  NK . . . 553
  NKp . . . 556
  NKq . . . 556
  p-Spin . . . 556
  rugged . . . 161
  technological . . . 556
FLAIRS . . . 134
FOCI . . . 321, 355, 375, 386, 416, 423, 431, 461, 469, 475, 485
FOGA . . . 353
Forma . . . 119 f., 163
  analysis . . . 119
Formae . . . 120
Free Lunch
  no . . . 209
Freight . . . 537
Frequency
  absolute . . . 655
  relative . . . 655
Full . . . 389
Function . . . 646
  automatically-defined . . . 273, 400 f., 403
  benchmark . . . 580
  cumulative distribution . . . 658
  fitness . . . 36
  Griewangk . . . 601
    shifted . . . 621
  Griewank . . . 601
    shifted . . . 621
  heuristic . . . 34, 274, 518
  monotone . . . 647
  objective . . . 22, 36, 44, 509
    conflicting . . . 56
    harmonic . . . 56

—G—
Gene . . . 82
  reuse . . . 272
Generality . . . 193
Generalized Extremal Optimization . . . 495
Generation . . . 105
Generational . . . 265, 546 f.
Genetic Algorithm . . . 3, 25, 32, 37, 43, 48, 82 – 85, 90, 95, 101, 105, 119, 139, 154 f., 163, 167, 174, 183, 188, 207, 215, 217, 248, 262 – 265, 271, 278, 290 f., 296, 325 – 329, 331, 335, 337 ff., 341 f., 345 – 348, 350, 357, 377, 380 f., 389, 398, 407, 427, 434, 439, 454 f., 457 f., 463 ff., 541, 554, 558 f., 562 f., 565 f., 568, 573, 580, 945
  compact . . . 439
  human based . . . 48
  interactive . . . 48
  messy . . . 184, 347 f., 395
Genetic Algorithms . . . 380
  real-coded . . . 327
independent . . . . . . . . . . . . . . . . . . . . . . . . . 56 Genetic Assimilation . . . . . . . . . . . . . . . . . . . . . . . 465
penalty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Genetic Programming. . . . . . . . . . . . . . . . . . . . . . . .3,
adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 32, 37, 107, 155, 162 f., 173, 175, 180,
dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 187, 192, 195, 262 ff., 273, 302, 304 f.,
probability density . . . . . . . . . . . . . . . . . . . . 659 379 ff., 386 f., 389, 396, 398, 400 f., 403,
probability mass . . . . . . . . . . . . . . . . . . . . . . 659 405 f., 408, 411 f., 427, 434, 447, 531,
royal road . . . . 154, 173, 455, 559 – 562, 566 534, 536, 561, 566, 568, 824, 945
sphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .368 byte code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
trap . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 557 f. cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Weierstraß . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 compiling system . . . . . . . . . . . . . . . . . . . . . . 405
Functional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 crossover
Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 homologous . . . . . . . . . . . . . . . . . . . . 338, 407
FWGA . . . . . . . . . . . . . . . . . . . . . . 134, 355, 475, 485 function nodes . . . . . . . . . . . . . . . . . . . . . . . . 387
grammar-guided . . . . . . . . 273, 381, 403, 409
—G— graph-based . . . . . . . . . . . . . . . . . . . . . . . 32, 409
G3P . . . . . . . . . . . . . . . . . . . . . . . . . 273, 381, 403, 409 linear . . . . 32, 327, 381, 400, 403 – 408, 412
GA . . . . . . . . . . . . . . . . . . . 3, 25, 32, 37, 43, 48, 82 – page-based. . . . . . . . . . . . . . . . . . . . . . . . . .408
85, 90, 95, 101, 105, 119, 139, 154 f., non-terminal nodes . . . . . . . . . . . . . . . . . . . . 387
163, 167, 174, 183, 188, 207, 215, 217, stack-based . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
248, 262 – 265, 271, 278, 290 f., 296, Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
325 – 329, 331, 335, 337 ff., 341 f., 345 – standard . . . . 32, 273, 386 f., 389, 399, 403 f., 412
348, 350, 357, 377, 380 f., 389, 398, strongly-typed . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
407, 427, 434, 439, 454 f., 457 f., 463 ff., 32, 37, 107, 155, 162 f., 173, 175, 180,
541, 554, 558 f., 562 f., 565 f., 568, 573, 187, 192, 195, 262 ff., 273, 302, 304 f.,
580, 945 379 ff., 386 f., 389, 396, 398, 400 f., 403,
compact. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439 405 f., 408, 411 f., 427, 434, 447, 531,
messy . . . . . . . . . . . . . . . . . 184, 327, 347 f., 395 534, 536, 561, 566, 568, 824, 945
Gads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . 387
GALESIA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355 tree-based . . . . . . . . . . . . . . . . . . . 381, 386, 389
Gauss-Markov Theorem . . . . . . . . . . . . . . . . . . . . 696 Genetic Repair . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 f.
GCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 GENITOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
GE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204, 273, 403 Genome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82, 326
GEC 321, 355, 375, 385, 416, 422, 430, 461, 468, Genomes
475, 485 string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
GECCO . 317, 353, 373, 383, 414, 421, 429, 459, tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
467, 473, 484 Genotype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39, 82
GEM 321, 355, 375, 385, 416, 422, 430, 461, 469 Genotype-Phenotype Mapping . . . . . . . . 39 ff., 71,
86 f., 90 f., 95 f., 101, 103 f., 111, 113, stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
119, 162, 168, 174 f., 179, 181 ff., 204, Grammar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
209, 215 ff., 219, 254, 258, 263, 270 – Grammar-Guided Genetic Programming . . 273,
273, 325, 327, 348 ff., 357, 360, 377, 381, 403
380, 404, 406 f., 433 f., 437, 443 f., 481, Grammatical Evolution . . . . . . . . . . . 204, 273, 403
542, 569, 580, 711, 834 Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 512, 651
direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 acyclic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
GEO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GEWS . . . . . . . . . . . . . . . . . . . . . . 132, 228, 321, 385 directed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GFC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GGGP . . . . . . . . . . . . . . . . . . . 32, 273, 381, 403, 409 infinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GICOLAG. . . .134, 228, 235, 237, 251, 321, 356, path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
375, 386, 416, 423, 431, 461, 469, 475, theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
479, 485, 491, 496, 500, 502, 506, 518, undirected . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
521, 524, 528 weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
Glass GRASP . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 157, 499
spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93, 557 Gray Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Global Optimization . . . . . . . . . . . . . . 3, 22, 31, 48, Greedy Search . . . . . . . . . . . . . . 32, 177, 464, 519 f.
51, 55, 90, 93, 99, 105, 107, 139, 151, Griewangk Function . . . . . . . . . . . . . . . . . . . . . . . . 601
154 f., 172, 174, 187 f., 215, 231, 243, shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
267, 509, 511, 542, 553, 566, 649, 945 Griewank Function . . . . . . . . . . . . . . . . . . . . . . . . . 601
Goal Attainment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Goal Programming . . . . . . . . . . . . . . . . . . . . . . 72, 75 Grow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
GP . . . . 3, 32, 37, 107, 155, 162 f., 173, 175, 180, GSM/GPRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 384, 386 f., 389, 396, 398, 400 f., —H—
403, 405 f., 408, 411 f., 427, 434, 447, HAIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
531, 534, 536, 561, 566, 568, 824, 945 Halting Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 f.
grammar-guided . . . . . . . . . . . . . 273, 381, 403 Harmony Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
graph-based . . . . . . . . . . . . . . . . . . . . . . . 32, 409 Haystack
linear . . . . 32, 327, 381, 400, 403 – 408, 412 needle in a . . . . . . . . . 103, 168, 175, 180, 465
standard . . . . 32, 273, 386 f., 389, 399, 403 f., 412 HBGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
STGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3, hBOA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .453
32, 37, 107, 155, 162 f., 173, 175, 180, HC . 32 f., 41, 85, 96, 99, 114, 116, 157, 160, 165,
187, 192, 195, 262 ff., 273, 302, 304 f., 167, 169, 172, 176, 181, 210 f., 229 –
379 ff., 386 f., 389, 396, 398, 400 f., 403, 232, 238 ff., 243, 245 f., 248, 252, 266,
405 f., 408, 411 f., 427, 434, 447, 531, 296, 359 f., 377, 400, 412, 425, 463 f.,
534, 536, 561, 566, 568, 824, 945 482, 501, 559, 563, 565, 735, 737, 945
GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 ff., 71, with random restarts . . . . . . . . . . . . . . . . . . 501
86 f., 90 f., 95 f., 101, 103 f., 111, 113, HCwL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
119, 162, 168, 174 f., 179, 181 ff., 204, Headless Chicken . . . . . . . . . . . . . . . . . . . . . . 339, 399
209, 215 ff., 219, 254, 258, 263, 270 – Herman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
273, 325, 327, 348 ff., 357, 360, 377, Hessian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
380, 404, 406 f., 433 f., 437, 443 f., 481, Heuristic . . . . . . . . . . . . . . . . . . . . . . . 34, 36, 274, 518
542, 569, 580, 711, 834 admissible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
monotonic. . . . . . . . . . . . . . . . . . . . . . . . . . . . .520
direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 Hiding Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
explicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 f.,
indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 41, 85, 96, 99, 114, 116, 157, 160, 165,
ontogenic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 167, 169, 172, 176, 181, 210 f., 229 –
GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 232, 238 ff., 243, 245 f., 248, 252, 266,
GPTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 296, 359 f., 377, 400, 412, 425, 463 f.,
GR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 482, 501, 559, 563, 565, 735, 737, 945
GR Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 multi-objective . . . . . . . . . . . . . . . . . . . . . . . . 230
Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53, 104 randomized restarts . . . . . . . . . . . . . . . . . . . 232
descent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Gradient Descent with random restarts . . . . . . . . . . . . . . . . . . 501
Hill Climbing With Random Restarts . . . . . . 501 mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
HIS . 128, 227, 234, 236, 250, 319, 353, 374, 384, Intel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
415, 421, 429, 460, 468, 474, 478, 484, Intelligence
490, 496, 500, 502, 506, 518, 521, 524, artificial . . . . . . . . . . . . . . . . . . . . 31, 35, 76, 536
528 swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37
Homologous Crossover . . . . . . . . . . . . 338, 340, 407 Intensification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Homology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338 Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
HRO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 confidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
HS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Intrinsic Parallelism . . . . . . . . . . . . . . . . . . . . . . . . 263
HX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Intron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Hybridization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 f. explicitly defined . . . . . . . . . . . . . . . . . . . . . . 407
Hyperplane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 implicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Hypothesis Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
building block . . . . . . . . . . . . . . . . . . . . . . . . . 346 IPO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Irreflexiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
—I— ISA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 475, 485
IAAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128, 474, 484 isGoal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
ICAART . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 ISICA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
ICANNGA . . . 319, 353, 374, 384, 415, 422, 429, ISIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
460, 468, 474, 484 Isolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
ICARIS . . . . . . . . . . . . . . . . . . . . . . . . . . 130, 227, 320
ICCI 128, 319, 353, 374, 384, 415, 422, 429, 460,
Isotropic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
468, 474, 484
Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
ICEC . . . . 322, 356, 376, 386, 416, 423, 431, 462,
Iterative Deepening Depth-First Search . . . . 516
469, 476, 486
IUMDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
ICGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
IWACI . . . 322, 356, 376, 386, 417, 423, 431, 462,
ICML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129, 460
469, 476, 486
ICMLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134, 462
IWLCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
ICNC . . . . 319, 354, 374, 384, 415, 422, 429, 460,
IWNICA.321, 356, 375, 386, 416, 423, 431, 462,
468, 474, 484
469, 476, 486
ICSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475, 485
ICTAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
IDDFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 516 f.
—J—
JAPHET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
IEA/AIE . . . . . . . . . . . . . . . . . . . . . . . . . 133, 475, 485
JBGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
IGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
JME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
IICAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Job Shop Scheduling . . . . . . . . . . . . . . . . . . . . . . . . 26
IJCAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
JSS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 f.
IJCCI . . . 321, 356, 375, 386, 416, 423, 431, 462,
Juxtapositional . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
469, 476, 486
JVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
Implicit Parallelism . . . . . . . . . . . . . . . 263, 345 f. —K—
in.west . . . . . . . . . . . . . . . . . . . . . . . . . . . 537, 543, 549 k-PK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .338
Increasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 k-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
monotonically . . . . . . . . . . . . . . . . . . . . . . . . . 647 Kauffman NK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 Keys
Indirect Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 272 random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Indirect Representation . . . . . . . . . . . . . . . . . . . . 272 Knapsack Problem . . . . . . . . . . . . . . . . . . . . . . . . . 147
Individual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 Kung-Fu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Inductor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
inequalities excess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
method of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Infeasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 —L—
Infix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 Lamarck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164, 464
Informed Search . . . . . . . . . . . . . . . . . . . . . . . . 32, 518 Lamarckism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Injective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 Landscape
Injectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Input-Processing-Output . . . . . . . . . . . . . . . . . . . 379 problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Integer Programming . . . . . . . . . . . . . . . . . . . . . . . . 29 Laplace
assumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 LLN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Large Numbers Local Search . . . . . . . . . . . . . . . . . 229, 243, 463, 514
law of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668 Locality . . . . . . . . . . . . . . . . . . . . . . . . . . . 84, 162, 218
Large-scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Las Vegas Algorithm. . . . . . . . . . . . . . .41, 106, 148 Logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537, 622
Layout Long k-Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Long Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
LCS . . . . . . . . . . . . . . . . . 3, 32, 264, 457 f., 536, 945 LS-1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Michigan-style . . . . . . . . . . . . . . . . . . . . . . . . 458 LSCS . . . . 133, 228, 235, 237, 251, 321, 355, 375,
Pitt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 385, 416, 423, 430, 461, 469, 475, 478,
Pittsburgh-style . . . . . . . . . . . . . . . . . . . . . . . 458 485, 491, 496, 500, 502, 506, 518, 521,
Learning 524, 528
linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Lunch
Learning Classifier System . . . . 3, 32, 264, 457 f., no free . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
536, 945
Michigan-style . . . . . . . . . . . . . . . . . . . . . . . . 458
—M—
MA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37, 463 f.
Pittsburgh-style . . . . . . . . . . . . . . . . . . . . . . . 458
Machine
Left-total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
finite state . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Length
Turing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
deterministic. . . . . . . . . . . . . . . . . . . . . . . .145
Lexicographic Optimization . . . . . . . . . . . . . . . . . 59
non-deterministic . . . . . . . . . . . . . . . . . . . 146
LGP . . . . . . . . . . 32, 327, 381, 400, 403 – 408, 412
Machine Learning. . . . . . . . . . . .187, 212, 380, 536
page-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Macro
LGPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
automatically-defined . . . . . . . . . . . . . 401, 403
License
Management
FDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947
customer relationship. . . . . . . . . . . . . . . . . .536
LGPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Many-Objectivity . . . . . . . . . . . . 197 f., 201 f., 1213
Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 f. Mapping
Likelihood Genotype-Phenotype . . . . . . . . . . . . . . . . . . 270
function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693 genotype-phenotype . . . . . . . . . . . . . 39 ff., 71,
Linear Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . 62 86 f., 90 f., 95 f., 101, 103 f., 111, 113,
Linear Genetic Programming . 32, 327, 381, 400, 119, 162, 168, 174 f., 179, 181 ff., 204,
403 – 408, 412 209, 215 ff., 219, 254, 258, 263, 270 –
page-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408 273, 325, 327, 348 ff., 357, 360, 377,
Linear Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 380, 404, 406 f., 433 f., 437, 443 f., 481,
Linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 542, 569, 580, 711, 834
Linkage Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 ontogenic . . . . . . . . . . . . . . . . . . . . . . . . . . 86, 273
LION . . . . 133, 228, 234, 237, 251, 321, 355, 375, phenotype-genotype . . . . . . . . . . . . . . . . . . . 464
385, 416, 422, 430, 461, 469, 475, 478, MAS . . 37, 59, 125, 190, 207, 226, 315, 352, 382,
485, 490, 496, 500, 502, 506, 518, 521, 459, 473
524, 528 Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 defined length . . . . . . . . . . . . . . . . . . . . . . . . . 341
addItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
appendList . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Mating Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Maximum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89, 660
createList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
deleteItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 89
deleteRange . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Maximum Likelihood Estimator . . . . . . . . . . . . 695
insertItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 MCDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
listToSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 MCDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76, 129
removeItem . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 MDVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
reverseList . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 Mean
setToList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
subList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 Median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665, 667
search (sorted) . . . . . . . . . . . . . . . . . . . . . . . . 650 Meme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
search (unsorted) . . . . . . . . . . . . . . . . . . . . . . 650 Memetic Algorithm . . . . . . . . . . . . . . . 32, 37, 463 f.
sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 Memory Consumption. . . . . . . . . . . . . . . . . . . . . .514
sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 MENDEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Messy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Multi-Criterion Decision Making . . . . . . . . . . . . 76
Messy GA . . . . . . . . . . . . . . . . . . . . . . 184, 347 f., 395 Multi-Modality 139, 151, 161, 165, 255, 452, 454
Messy Genome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 Multi-Objective Evolutionary Algorithm 55, 65,
META . . . 228, 235, 237, 251, 322, 356, 376, 386, 202, 253, 257, 259 f., 279, 311
417, 423, 431, 462, 469, 476, 479, 486, Multi-Objectivity . 39, 54 – 57, 59 – 62, 64 f., 70,
491, 497, 500, 506 76 f., 89, 93 ff., 101, 104, 106, 158, 182,
Metaheuristic . . . . . . . . . . . . . . . . . 3, 24, 31 f., 34 ff., 197 f., 201, 213, 230, 232, 238, 253 ff.,
39, 41, 109, 112 f., 139, 141, 143, 165, 274, 308, 434, 453, 482, 534, 537 f.,
168 f., 197 f., 213, 225, 253 f., 262, 367, 554, 566, 570, 716, 728, 737, 841, 906,
489, 540 f., 1209 1213, 1221 ff.
Method Multi-Objectivization . . . . . . . . . . . . . . . . . . . . . . 158
raindrop . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 235 Mutation . . . . . . . 262, 306 f., 330, 340, 393 f., 543
Method Of Inequalities . . . . . . . . . . . . . 72 – 75, 77 bit-flip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Metropolis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 ff. model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
mGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184, 347, 349 multi-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
MIC 227, 234, 236, 250, 320, 354, 374, 384, 415, sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
422, 429, 460, 468, 474, 478, 484, 490, single-point . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
496, 500, 506 strength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
MICAI. . .131, 320, 354, 374, 384, 415, 422, 429, tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
460, 468, 474, 484 vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
MicroGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Miller-Urey experiment. . . . . . . . . . . . . . . . . . . . .258
—N—
NaBIC . . . 322, 356, 375, 386, 416, 423, 431, 462,
Min-Max
469, 476, 486
weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Naiveté . . . . . . . . . . . . . . . . . . . . . . . . . 279, 536, 639
Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Natural Selection. . . . . . . . . . . . . . . . . . . . . . . . . . .260
Minimum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89, 660
ND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 172 f., 557
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
ND Landscape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 89
Needle-In-A-Haystack . . 103, 168, 175, 180, 465,
Mining
563
data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Negative
MIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
false. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .700
ML . . . . . . . . . . . . . . . . . . . . . . . . . . 187, 212, 380, 536
Negative Slope Coefficient . . . . . . . . . . . . . . . . . . 163
MLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695 Neighborhood Search . . . . . . . . . . . . . . . . . . . . . . . 512
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Neighboring Relation . . . . . . . . . . . . . . . . . . . . . . . . 86
Modularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Nelder & Mead . 32, 104, 264, 420, 463, 501, 945
Module Network
acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 artificial neural . . 187, 191 f., 204, 273, 389,
MOEA55, 65, 72, 202, 253, 257, 259 f., 274, 279, 396, 406, 536, 541
311 Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
MOI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 – 75, 77 neutral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Moment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 random Boolean. . . . . . . . . . . . . . . . . . . . . . .174
about mean . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Neural Network
central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 artificial187, 191 f., 204, 273, 389, 396, 406,
standardized . . . . . . . . . . . . . . . . . . . . . . . . . . 664 536, 541
statistical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Neutral
Monotone network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Neutrality . . . . . . . . . . . . . 171, 556 f., 559, 569, 578
heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 NFL . . . . . . . . . . . . . . . . . . . . 95, 114, 168, 209 – 213
Monotonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 NIAH . . . . . . . . . . . . . . . . . . . 103, 168, 175, 180, 465
Monotonicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Niche Count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Monte Carlo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Niche Preemption Principle . . . . . . . . . . . . . . . . 151
Monte Carlo Algorithm . . . . . . . . . . . 106, 148, 243 NICSO . . 131, 227, 234, 236, 250, 320, 354, 374,
MOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 197 385, 415, 422, 430, 460, 468, 474, 478,
Morphogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 484, 490, 496, 500, 502, 506, 518, 521,
MPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335, 338, 407 524, 528
MSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692 NK . . . . . . . . . . . . . . . . . . . . . . . . . . 161, 179, 553, 556
Multi-Agent System . 37, 59, 125, 190, 207, 226, NKp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 556
315, 352, 382, 459, 473 NKq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 556
INDEX 1201
No Free Lunch . . . 209
No Free Lunch Theorem . . . 95, 114, 168, 209 – 213
Node Selection . . . 392
nodeWeight . . . 393
Noise . . . 187, 192 f.
Non-Decreasing . . . 647
Non-Dominated . . . 67 ff., 73, 79, 197 f., 200 ff., 277, 279, 284, 1210
Non-Increasing . . . 647
Non-Synonymous Change . . . 173
Nonile . . . 667
Nonseparability . . . 179
    full . . . 179
Norm
    Tschebychev . . . 64
Normal Distribution . . . 678
    standard . . . 678
Notation
    reverse polish . . . 404
NSC . . . 163
NSGA-II . . . 198, 202, 266, 453
NTM . . . 146, 149
Numbers
    complex . . . 640
    integer . . . 640
    law of large . . . 668
    natural . . . 640
    rational . . . 640
    real . . . 640
Numerical Problem . . . 27
NWGA . . . 356

—O—
Objective Function . . . 36, 44, 544
    conflicting . . . 56
    harmonic . . . 56
    independent . . . 56
OBUPM . . . 430
Off-The-Shelf . . . 43, 146
OneMax . . . 562
Ontogenic Mapping . . . 86, 273
Ontogeny
    artificial . . . 272
Operations Research . . . 23, 197, 509
Operator
    search . . . 83
        set . . . 84
Operator Correlation . . . 163
Optimality . . . 514
    Pareto . . . 65
Optimization . . . 22
    algorithm . . . 23, 103
        classification of . . . 31
    Bayesian . . . 184
    combinatorial . . . 181
    continuous . . . 27
    derivative-based . . . 53
    derivative-free . . . 53
    difficulties . . . 138
    discrete . . . 29
    extremal . . . 3, 25, 32, 172, 493 f., 945
        generalized . . . 495
    global . . . 22, 31, 509
    interactive . . . 48
    lexicographic . . . 59
    multi-objective . . . 55
        Pareto . . . 65
        prevalence . . . 78
        weighted sum . . . 62
    offline . . . 38
    online . . . 37
    overview . . . 31
    particle swarm . . . 3, 27, 32, 37, 155, 188, 191, 207, 463, 481 f., 580, 945
    problem . . . 22, 103
    random . . . 3, 32, 36, 116, 232, 505
    setup . . . 103
    structure of . . . 101
    taxonomy . . . 31
    termination criterion . . . 105
Optimum . . . 51, 89
    global . . . 53
    isolated . . . 175
    lexicographical . . . 60
    local . . . 51, 88 f.
    optimal set . . . 54, 57
        extracting . . . 309
        obtain by deletion . . . 309
        pruning . . . 311
        updating . . . 308
        updating by insertion . . . 309
Order . . . 341
    linear . . . 646
    partial . . . 65, 645
    simple . . . 646
    total . . . 646
Overfitting . . . 191, 193, 534, 536, 568
Overgeneralization . . . 194
Oversimplification . . . 194, 568
Overspecification . . . 348

—P—
p-Spin . . . 161, 179, 556
PADO . . . 403
PAES . . . 312
PAKDD . . . 134
Parallelism
    implicit . . . 263, 345 f.
    intrinsic . . . 263
Parameter
    endogenous . . . 90, 360, 370, 481
    exogenous . . . 90, 360
Parental Selection . . . 260 f.
Pareto . . . 65
    frontier . . . 65
    optimal . . . 65
    ranking . . . 275
    set . . . 65
Pareto Ranking . . . 68, 76, 260, 275 ff., 279, 281, 284, 300, 546, 1210
Partial Order . . . 65, 645
Particle Swarm Optimization . . . 3, 27, 32, 37, 155, 188, 191, 207, 463, 481 f., 580, 945
Path . . . 651
    Fibonacci . . . 566
    long . . . 563
    long k . . . 565
PBIL . . . 438, 442 ff., 452
    real-coded . . . 444
PBILC . . . 444
PDF . . . 659 f.
Penalty
    adaptive . . . 71
    death . . . 71
    dynamic . . . 71
    function . . . 71
Percentile . . . 667
Permission . . . 4
Permutation . . . 25, 333, 395, 643, 655
    tree . . . 395
Pertinency . . . 70
Perturbation . . . 189
PESA . . . 198
PGM . . . 464
Phase
    juxtapositional . . . 349
    primordial . . . 348
Phenome . . . 43
Phenotype . . . 39, 43, 82
    plasticity . . . 465
Phenotype-Genotype Mapping . . . 464
PIPE . . . 447, 449, 452
Pitfall
    siren . . . 154
Pitt approach . . . 457
Planning . . . 540
    enterprise resource . . . 536
Plasticity
    phenotypic . . . 465
Pleiotropy . . . 177, 179
PMF . . . 659 f.
Point
    crossover . . . 335
    saddle . . . 53
Point Estimator . . . 692
Poisson . . . 671
    process . . . 671
Poisson Distribution . . . 671
Population . . . 90, 253, 265
Positive
    false . . . 700
Power . . . 700
PPSN . . . 318, 353, 374, 384, 414, 421, 429, 459, 467, 474, 484
PPT . . . 447
Preemption
    niche . . . 151
Preface . . . 3
Prehistory . . . 671
Premature Convergence . . . 151, 482, 547
Preservative Selection . . . 265
Prevalence . . . 77 f.
Prevention
    convergence . . . 304
Primordial . . . 348
Principle
    niche preemption . . . 151
Prisoner’s Dilemma . . . 380
Probabilistic Algorithm . . . 33
Probability
    Bernoulli . . . 654
    conditional . . . 657
    Kolmogorov . . . 656
    of success . . . 163, 172
    space . . . 656
        of a random variable . . . 657
    von Mises [2822] . . . 655
Probability Density Function . . . 659
Probability Mass Function . . . 659
Probit . . . 680
Problem
    bin packing . . . 25
    combinatorial . . . 25, 472
    continuous . . . 27
    discrete . . . 29
    Knapsack . . . 147
    numerical . . . 27
    optimization . . . 22, 103
        multi-objective . . . 55
    satisfiability . . . 147
    traveling salesman . . . 25, 34, 45 f., 55, 91, 118, 122, 147, 209, 243, 327, 333, 350, 377, 425, 463, 621
    vehicle routing . . . 25, 147, 266, 350, 537 f., 540 f., 548, 622, 634, 914
Problem Landscape . . . 95
Problem Space . . . 43
Product
    Cartesian . . . 642
Production Systems . . . 457
Programming
    genetic . . . 3, 32, 37, 107, 155, 162 f., 173, 175, 180, 187, 192, 195, 262 ff., 273, 302, 304 f., 379 ff., 386 f., 389, 396, 398, 400 f., 403, 405 f., 408, 411 f., 427, 434, 447, 531, 534, 536, 561, 566, 568, 824, 945
        cartesian . . . 174
        grammar-guided . . . 273, 381, 403
        linear . . . 32, 327, 381, 400, 403 – 408, 412
        standard . . . 32, 273, 386 f., 389, 399, 403 f., 412
        strongly-typed . . . 3, 32, 37, 107, 155, 162 f., 173, 175, 180, 187, 192, 195, 262 ff., 273, 302, 304 f., 379 ff., 386 f., 389, 396, 398, 400 f., 403, 405 f., 408, 411 f., 427, 434, 447, 531, 534, 536, 561, 566, 568, 824, 945
    integer . . . 29
        mixed . . . 30
Property . . . 119
    constant innovation . . . 174
Proximity . . . 70
Pruning . . . 311
PSO . . . 3, 27, 32, 37, 155, 188, 191, 207, 463, 481 f., 580, 945
psoReproduce . . . 482
PVRP . . . 538

—Q—
QBB . . . 458
Quadric . . . 585
Quantile . . . 666
Quartile . . . 667
Quenching
    simulated . . . 247 ff., 252
Quintile . . . 667

—R—
Rail . . . 550
Raindrop Method . . . 36, 235 f., 945
RAM . . . 145
Ramped Half-and-Half . . . 390
Random
    event . . . 654
    experiment . . . 653, 656
    variable . . . 657
        continuous . . . 658
        discrete . . . 658
Random Access Machine . . . 145
Random Boolean Network . . . 174
Random Keys . . . 349 f., 377, 834
random neighbors . . . 455, 554
Random Optimization . . . 3, 32, 36, 116, 232, 505
Random Sampling . . . 113 ff., 118, 181, 240, 729
Random Selection . . . 302
Random Walk . . . 113 – 116, 118, 163, 168, 172, 174 f., 181, 210, 212, 240, 245 f., 267, 302, 514, 732
Randomized Algorithm . . . 33
Range . . . 646, 661
    interquartile . . . 667
Ranking
    Pareto . . . 275
RBN . . . 174
Reachability . . . 218
Real-Coded . . . 327
Recombination . . . 262, 307, 335, 399, 544
    discrete . . . 337, 362
    dominant . . . 337, 362, 377
    homologous . . . 338
    intermediate . . . 338, 362
    multi . . . 362
    tree . . . 398
Reduction . . . 147
Redundancy . . . 86, 174, 569
Reflexiveness . . . 645
Regression . . . 531
    logistic . . . 536
    symbolic . . . 119, 121, 187, 191 f., 380 f., 387, 401, 403, 411 f., 531 – 535, 905 f., 908, 910
Relation
    binary . . . 644
        types and properties of . . . 644
    equivalence . . . 646
    order . . . 645
        partial . . . 645
        total . . . 646
Repairing . . . 71
Repairs . . . 71
Representation
    explicit . . . 270
    indirect . . . 272
Reproduction . . . 306, 542
    asexual . . . 307, 325
    binary . . . 335, 338, 340, 348, 398, 407
    nullary . . . 328, 340, 389
    sexual . . . 257, 262, 307, 325
    unary . . . 330, 333, 340, 347 f., 393, 395 ff.
Resistance
    dominance . . . 69, 198
Resistor . . . 47
Reuse . . . 272
REVOP . . . 264
RFD . . . 32, 477
RISC . . . 406
River Formation Dynamics . . . 32, 477
RO . . . 3, 32, 36, 116, 232, 505
Road . . . 550
    royal . . . 558
        variable-length . . . 559
        VLR . . . 559
RoboCup . . . 37
Robustness . . . 188
Root2path . . . 563, 565
Root2paths . . . 566
Royal Road . . . 173, 558 f.
    variable-length . . . 559
    VLR . . . 559
Royal Road Function . . . 154, 173, 455, 558 – 562, 566, 1221
Royal Tree . . . 561
RPCL . . . 454
RPN . . . 404
Ruggedness . . . 161, 554, 556, 571, 573, 578

—S—
S-Expression . . . 403
SA . . . 3, 25, 32 f., 36, 156 f., 160, 181, 231, 243 – 249, 252, 308, 412, 463 f., 493, 541, 557, 716, 740, 945
Saddle Point . . . 53
Salience
    high . . . 562
    low . . . 562
Sample space . . . 653
Sampling
    random . . . 113 ff., 118, 181, 240, 729
SAT . . . 147
SC . . . 31, 34
Scalability . . . 219
Scalability Requirement . . . 219
Scale . . . 203
    large . . . 203
    small . . . 203
Schedule
    temperature . . . 247
        adaptive . . . 249
        exponential . . . 249
        linear . . . 249
        logarithmic . . . 248
        polynomial . . . 249
Scheduling . . . 350
    job shop . . . 26
Schema . . . 119, 163, 341
    theorem . . . 341 f.
Schema Theorem . . . 119, 167, 217, 263, 341 f., 344 ff., 558
Schwefel . . . 585
SCP . . . 302 f., 305, 546
SEA . . . 135, 228, 235, 237, 251, 322, 356, 376, 386, 417, 423, 431, 462, 469, 476, 479, 486, 491, 497, 500, 502, 506, 518, 521, 525, 528
SEAL . . . 130, 320, 354, 374, 384, 415, 422, 429, 460, 468, 474, 484
Search
    A⋆ . . . 32, 519
    best-first . . . 519
    breadth-first . . . 32, 115, 515, 517, 519
    depth-first . . . 32, 115, 516 f., 519
        iterative deepening . . . 32, 516
    depth-limited . . . 516 f.
    greedy . . . 32, 519
    harmony . . . 32
    informed . . . 32, 518
    local . . . 229, 243, 463, 514
    neighborhood . . . 512
    state space . . . 32, 511
    tabu . . . 36
    uninformed . . . 32, 34, 514
Search Operation . . . 83
    set . . . 84
Search Procedure
    greedy randomized adaptive . . . 157, 499
Search Space . . . 82
Searching . . . 650
Seeding . . . 254, 258
Selection . . . 260 f., 285
    breeding . . . 289
    deterministic . . . 289
    ecological . . . 260
    elitist . . . 267
    environment . . . 261
    environmental . . . 260
    extinctive . . . 183, 265
        left . . . 265
        right . . . 265
    extrema . . . 174
    feature . . . 154
    fitness proportionate . . . 290, 292 f.
    mating . . . 261
    natural . . . 260
    node . . . 392
    parental . . . 260 f.
    preservative . . . 265
    random . . . 302
    ranking
        linear . . . 300
    roulette wheel . . . 290, 292 f.
    sexual . . . 260 f., 287
    survival . . . 260 f., 287, 302
    threshold . . . 289
    tournament . . . 296, 573
        non-deterministic . . . 300
        with replacement . . . 298, 300
        without replacement . . . 298 f.
    truncation . . . 289
Self-Adaptation . . . 219, 360, 370, 419
Self-Organization . . . 420
    criticality . . . 493
Self-Organized Criticality . . . 493
SEMCCO . . . 322, 356, 376, 386, 417, 423, 431, 462, 470, 476, 486
Sensor Node . . . 550
Separability . . . 178, 204 f.
    non . . . 179
    partial . . . 179
Sequence . . . 25
Set . . . 25, 639
    cardinality . . . 639
    Cartesian product . . . 642
    complement . . . 642
    connected . . . 88
    countable . . . 642
    difference . . . 642
    empty . . . 640
    intersection . . . 641
    List . . . 647
    membership . . . 639
    operations on . . . 641
    optimal . . . 54, 57
    power set . . . 642
    relations between . . . 640
    special . . . 640
    subset . . . 640
    theory . . . 639
    Tuple . . . 643
    uncountable . . . 642
    union . . . 641
Sexual Reproduction . . . 257, 262, 307, 325
Sexual Selection . . . 260 f., 287
SGP . . . 32, 273, 386 f., 389, 399, 403 f., 412
SH . . . 232
Sharing . . . 277, 546
    state . . . 511
Sparc . . . 406
SPEA-2 . . . 198, 202
Sphere . . . 368, 581
    shifted . . . 610
SPICE . . . 47
Spin Glass . . . 93, 557
Splice . . . 348
SPX . . . 335
SQ . . . 247 ff., 252
SSCI . . . 322, 356, 376, 386, 417, 423, 431, 462, 470, 476, 486
SSEA . . . 266, 324
SSS . . . 511
Standard Deviation . . . 662 f.
Standard Genetic Programming . . . 32, 273, 386 f.,
Variety Preserving . . . . . . . . . . . . . . . . . . . . 279 389, 399, 403 f., 412
SHCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 f. State Machine
SHCLVND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
SI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37 State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Simple Order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646 Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 511
Simplex State Space Search . . . . . . . . . . . 34 f., 511 f., 514 f.
downhill . . . 32, 104, 264, 420, 463, 501, 945 Static-φ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Simulated Annealing . . . . . 3, 25, 32 f., 36, 156 f., Satisfiability Problem . . . . . . . . . . . . . . . . . . . . . 147
160, 181, 231, 243 – 249, 252, 308, 412, Statistical Independence. . . . . . . . . . . . . . . . . . . .657
463 f., 493, 541, 557, 716, 740, 945 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
Simulated Quenching . . . . . . . . . . . . . . . . . . 247 Steady-State 157, 255, 266, 286, 289, 324, 546 ff.
temperature schedule . . . . . . . . . . . . . . . . . . 247 Steady-State Evolutionary Algorithm . 266, 324
Simulated Quenching . . . . . . . . . . . 156, 247 ff., 252 STeP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .134
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 STGP . 3, 32, 37, 107, 155, 162 f., 173, 175, 180,
annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 187, 192, 195, 262 ff., 273, 302, 304 f.,
Simulation Dynamics . . . . . . . . . . . . . . . . . . . . . . . 163 379 ff., 386 f., 389, 396, 398, 400 f., 403,
Single-Objectivity . . . . . . . . . . . . . . . . . . . 39, 54, 56, 405 f., 408, 411 f., 427, 434, 447, 531,
60 ff., 70 f., 88, 93, 95, 100 f., 106, 109, 534, 536, 561, 566, 568, 824, 945
158, 160, 198, 201 f., 209, 238, 253 f., Sticky Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
274, 278, 287, 325, 360, 362, 434, 439, Stigmergy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
501, 541, 580, 713, 724, 905, 1221 ff. sematectonic . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Siren Pitfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 sign-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
SIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476, 486 Stochastic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 33
Skewness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665 Stochastic Gradient Descent . . . . . . . . . . . . . . . . 232
Small-scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Stopping Criterion . . . . . . . . . . . . . . . . . . . . . . . . . 105
SMC 131, 227, 234, 236, 251, 320, 354, 374, 385, Strategy
415, 422, 430, 460, 468, 474, 478, 484, evolutionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
490, 496, 500, 502, 506, 518, 521, 524, Strongly-Typed Genetic Programming . . . . 387,
528 392 f., 398
SOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
SoCPaR . 133, 321, 355, 375, 385, 416, 423, 430, Student’s t-Distribution . . . . . . . . . . . . . . . . . . . . 687
461, 469, 475, 485 Subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
Soft Computing . . . . . . . . . . . . . . . . . . . . . . . . . 31, 34 Subtree . . . . . . . . . . 181, 393, 395 – 400, 404, 1211
Solution Success
candidate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 probability of . . . . . . . . . . . . . . . . . . . . . 163, 172
Space Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 sqr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 weighted . . . . . . . . . . . . . . . . . . . . . . . . . 62
sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653 Support Vector Machine . . . . . . . . . . . . . . . . . . . 536
search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Surjective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Surjectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 Tree Genomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Survival Selection . . . . . . . . . . . . . . . 260 f., 287, 302 Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 Truck-Meets-Truck . . . . . . . . . . . . . . . . . . . . . . . . . 544
Swap Body . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539, 542 Truncation Selection . . . . . . . . . . . . . . . . . . . . . . . 289
Swarm TS . . 3, 25, 32, 36, 155, 231, 463 f., 489, 512, 541
particle/optimization . . . . 3, 27, 32, 37, 155, TSoR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
188, 191, 207, 463, 481 f., 580, 945 TSP . . 25, 34, 45 f., 55, 91, 118, 122, 147, 209, 243,
Swarm Intelligence. . . . . . . . . . . . . . . . . . . . . . .32, 37 327, 333, 350, 377, 425, 463, 621
Symbolic Regression 119, 121, 187, 191 f., 380 f., TSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
387, 401, 403, 411 f., 531 – 535, 905 f., Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
908, 910 Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534 Turing Completeness . . . . . . . . . . . . . . . . . . . . . . . 406
Symmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 Turing Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Symmetry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646 deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Synonymous Change . . . . . . . . . . . . . . . . . . . . . . . 173 non-deterministic. . . . . . . . . . . . . . . . . . . . . .146
System Two-Point Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
classifier . . . . . . . . . . . . . . . . . . . . . . . . 339, 457 f. Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
learning . . . . . . 3, 32, 264, 457 f., 536, 945 Type 1 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
multi-agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Type 2 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
—T— —U—
Ugly Duckling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .212
t-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
UMDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434, 437
TAAI . . . . . . . . . . . . . . . . . . . . . . . . 134, 462, 476, 486
Unbiasedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Tabu Search . 3, 25, 32, 36, 155, 231, 463 f., 489,
Underspecification . . . . . . . . . . . . . . . . . . . . . . . . . 348
512, 541
Uni-Modality . . . . . . . . . . . . . . . . . . . . . 165, 452, 454
TAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Uniform Distribution
TAINN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
Technological Landscape . . . . . . . . . . . . . . . . . . . 556
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Telematics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .549 f.
uniformSelectNode . . . . . . . . . . . . . . . . . . . . . . . . . 392
Temperature Schedule . . . . . . . . . . . . . . . . . . . . . . 247
Uninformed Search . . . . . . . . . . . . . . . . . . . . . 32, 514
Termination Criterion . . . . . . . . . . . . . . . . . . . . . . 105
UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 346, 362
Test
statistical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
—V—
TGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381, 386, 389 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
Theorem epistatic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .163
central limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 681 estimator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .662
Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . 341, 558 Variation
ugly duckling. . . . . . . . . . . . . . . . . . . . . . . . . .212 coefficient of . . . . . . . . . . . . . . . . . . . . . . . . . . 663
Threshold Variety Preserving . . . . . . . . . . . . . . . . . . . . . . . . . 279
error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163, 562 Vehicle Routing Problem . . . . . 25, 147, 266, 350,
Threshold Selection . . . . . . . . . . . . . . . . . . . . . . . . 289 537 f., 540 f., 548, 622, 634, 914
Time Consumption . . . . . . . . . . . . . . . . . . . . . . . . . 514 Capacitated . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
TODO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Total Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 Multiple Depot . . . . . . . . . . . . . . . . . . . . . . . . 538
Tournament Selection . . . . . . . . . . . . . . . . . . . . . . 573 Periodic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
TPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 with Backhauls . . . . . . . . . . . . . . . . . . . . . . . . 538
Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 with Pick-up and Delivery . . . . . . . . . . . . . 538
Train . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539 VIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Transistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 VRP . . 25, 147, 266, 350, 537 f., 540 f., 548, 622,
Transitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 634, 914
Transitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 VRPB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Trap Function . . . . . . . . . . . . . . . . . . . . . . . 167, 557 f. VRPPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Traveling Salesman Problem . . 25, 34, 45 f., 55, 91,
118, 122, 147, 209, 243, 327, 333, 350, —W—
377, 425, 463, 621 Walk
Tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 Adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 adaptive
royal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 fitter dynamics . . . . . . . . . . . . . . . . . . . . . 116
greedy dynamics . . . . . . . . . . . . . . . . . . . . 116
one-mutant . . . . . . . . . . . . . . . . . . . . . . . . . 116
drunkard’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
random . . . . . 113 – 116, 118, 163, 168, 172,
174 f., 181, 210, 212, 240, 245 f., 267,
302, 514, 732
WCCI . . . 318, 353, 374, 384, 414, 421, 429, 460,
467, 474, 484
Wechselbrücksteuerung . . . . . . . . . . . . . . . . . . . . . 537
Weierstraß Function . . . . . . . . . . . . . . . . . . . . . . . . 165
Weighted Sum . . . . . . . . . . . . . . . . . . . . . . 62, 78, 275
WHCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Wildcard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
WOMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
WOPPLOT . . 133, 228, 235, 237, 251, 321, 355,
375, 385, 416, 423, 430, 461, 469, 475,
478, 485, 491, 496, 500, 502, 506, 518,
521, 524, 528
WPBA . . 322, 356, 375, 386, 416, 423, 431, 462,
469, 476, 486
Wrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 f.
WSC . . 131, 320, 354, 374, 385, 415, 422, 430, 461,
468, 475, 484
—X—
XCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
—Y—
Yellow Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
—Z—
Z80 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
ZCS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .458
List of Figures
12.1 Illustration of the rising speed of some functions, gleaned from [2365]. . . . . . . . . 145
12.2 Turing machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
20.1 Approximated distribution of the domination rank in random samples for several
population sizes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
20.2 The proportion of non-dominated candidate solutions for several population sizes ps
and dimensionalities n. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
24.1 Examples for an increase of the dimensionality of a search space G (1d) to G′ (2d). . 220
27.1 A sketch of annealing in metallurgy as role model for Simulated Annealing. . . . . . 244
27.2 Different temperature schedules for Simulated Annealing. . . . . . . . . . . . . . . . 248
46.1 An example for a possible search space sketched as acyclic graph. . . . . . . . . . . . 513
28.1 The Pareto domination relation of the individuals illustrated in Figure 28.4. . . . . . 277
28.2 An example for Variety Preserving based on Figure 28.4. . . . . . . . . . . . . . . . . 283
28.3 The distance and sharing matrix of the example from Table 28.2. . . . . . . . . . . . 283
28.4 Applications and Examples of Evolutionary Algorithms. . . . . . . . . . . . . . . . . 314
28.5 Conferences and Workshops on Evolutionary Algorithms. . . . . . . . . . . . . . . . 317
50.1 Some long Root2 paths for l from 1 to 11 with underlined bridge elements. . . . . . . 564
50.2 Properties of the sphere function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
50.3 Results obtained for the sphere function with τ function evaluations. . . . . . . . . . 581
LIST OF TABLES 1215
50.4 Properties of Schwefel’s problem 2.22. . . . . . . . . . . . . . . . . . . . . . . . . . . 583
50.5 Results obtained for Schwefel’s problem 2.22 with τ function evaluations. . . . . . . 583
50.6 Properties of Schwefel’s problem 1.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
50.7 Results obtained for Schwefel’s problem 1.2 with τ function evaluations. . . . . . . . 585
50.8 Properties of Schwefel’s problem 2.21. . . . . . . . . . . . . . . . . . . . . . . . . . . 587
50.9 Results obtained for Schwefel’s problem 2.21 with τ function evaluations. . . . . . . 587
50.10 Properties of the generalized Rosenbrock's function. . . . . . . . . . . . . . . . . . . 589
50.11 Results obtained for the generalized Rosenbrock's function with τ function evaluations. 590
50.12 Properties of the step function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
50.13 Results obtained for the step function with τ function evaluations. . . . . . . . . . . 591
50.14 Properties of the quartic function with noise. . . . . . . . . . . . . . . . . . . . . . . 593
50.15 Results obtained for the quartic function with noise with τ function evaluations. . . 593
50.16 Properties of Schwefel's problem 2.26. . . . . . . . . . . . . . . . . . . . . . . . . . . 595
50.17 Results obtained for Schwefel's problem 2.26 with τ function evaluations. . . . . . . 595
50.18 Properties of the generalized Rastrigin's function. . . . . . . . . . . . . . . . . . . . . 597
50.19 Results obtained for the generalized Rastrigin's function with τ function evaluations. 597
50.20 Properties of Ackley's function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
50.21 Results obtained for Ackley's function with τ function evaluations. . . . . . . . . . . 600
50.22 Properties of the generalized Griewank function. . . . . . . . . . . . . . . . . . . . . 601
50.23 Results obtained for the generalized Griewank function with τ function evaluations. 601
50.24 Properties of the generalized penalized function 1. . . . . . . . . . . . . . . . . . . . 604
50.25 Results obtained for the generalized penalized function 1 with τ function evaluations. 605
50.26 Properties of the generalized penalized function 2. . . . . . . . . . . . . . . . . . . . 606
50.27 Results obtained for the generalized penalized function 2 with τ function evaluations. 607
50.28 Properties of Levy's function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
50.29 Results obtained for Levy's function with τ function evaluations. . . . . . . . . . . . 608
50.30 Properties of the shifted sphere function. . . . . . . . . . . . . . . . . . . . . . . . . . 610
50.31 Results obtained for the shifted sphere function with τ function evaluations. . . . . . 610
50.32 Properties of the shifted and rotated elliptic function. . . . . . . . . . . . . . . . . . 611
50.33 Results obtained for the shifted and rotated elliptic function with τ function
evaluations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
50.34 Properties of shifted Schwefel's problem 2.21. . . . . . . . . . . . . . . . . . . . . . . 612
50.35 Properties of Schwefel's modified problem 2.21 with optimum on the bounds. . . . . 613
50.36 Results obtained for Schwefel's modified problem 2.21 with optimum on the bounds
with τ function evaluations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
50.37 Properties of the shifted Rosenbrock's function. . . . . . . . . . . . . . . . . . . . . . 614
50.38 Results obtained for the shifted Rosenbrock's function with τ function evaluations. . 614
50.39 Properties of shifted Ackley's function. . . . . . . . . . . . . . . . . . . . . . . . . . . 616
50.40 Properties of the shifted and rotated Ackley's function. . . . . . . . . . . . . . . . . 617
50.41 Results obtained for the shifted and rotated Ackley's function with τ function
evaluations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
50.42 Properties of the shifted Rastrigin's function. . . . . . . . . . . . . . . . . . . . . . . 618
50.43 Results obtained for the shifted Rastrigin's function with τ function evaluations. . . 618
50.44 Properties of the shifted and rotated Rastrigin's function. . . . . . . . . . . . . . . . 619
50.45 Results obtained for the shifted and rotated Rastrigin's function with τ function
evaluations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
50.46 Properties of Schwefel's problem 2.13. . . . . . . . . . . . . . . . . . . . . . . . . . . 620
50.47 Results obtained for Schwefel's problem 2.13 with τ function evaluations. . . . . . . 620
50.48 Properties of the shifted Griewank function. . . . . . . . . . . . . . . . . . . . . . . . 621
54.1 The precise developer environment used for implementing and testing. . . . . . . . . 705
List of Algorithms
1 The CVS command for anonymously checking out the complete book sources. . . . . 4
17.1 A program computing the greatest common divisor. . . . . . . . . . . . . . . . . . . 180
31.1 A genotype of an individual in Brameier and Banzhaf’s LGP system. . . . . . . . . . 407
31.2 The data for Task 91 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
50.1 An example Royal Road function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
50.2 A Java code snippet computing the sphere function. . . . . . . . . . . . . . . . . . . 581
50.3 A Java code snippet computing Schwefel’s problem 2.22. . . . . . . . . . . . . . . . 583
50.4 A Java code snippet computing Schwefel’s problem 1.2. . . . . . . . . . . . . . . . . 585
50.5 A Java code snippet computing Schwefel’s problem 2.21. . . . . . . . . . . . . . . . 587
50.6 A Java code snippet computing the generalized Rosenbrock’s function. . . . . . . . 589
50.7 A Java code snippet computing the step function. . . . . . . . . . . . . . . . . . . . 591
50.8 A Java code snippet computing the quartic function with noise. . . . . . . . . . . . 593
50.9 A Java code snippet computing Schwefel's problem 2.26. . . . . . . . . . . . . . . . 595
50.10 A Java code snippet computing the generalized Rastrigin's function. . . . . . . . . . 597
50.11 A Java code snippet computing Ackley's function. . . . . . . . . . . . . . . . . . . . 599
50.12 A Java code snippet computing the generalized Griewank function. . . . . . . . . . 601
50.13 A Java code snippet computing the generalized penalized function 1. . . . . . . . . 604
50.14 A Java code snippet computing the generalized penalized function 2. . . . . . . . . 606
50.15 A Java code snippet computing the shifted sphere function. . . . . . . . . . . . . . . 610
50.16 A Java code snippet computing shifted Schwefel's problem 2.21. . . . . . . . . . . . 612
50.17 A Java code snippet computing the shifted Rosenbrock's function. . . . . . . . . . . 614
50.18 A Java code snippet computing shifted Ackley's function. . . . . . . . . . . . . . . . 616
50.19 A Java code snippet computing the shifted Rastrigin's function. . . . . . . . . . . . 618
50.20 A Java code snippet computing the shifted Griewank function. . . . . . . . . . . . . 621
50.21 The location data of the first example problem instance. . . . . . . . . . . . . . . . . 626
50.22 The order data of the first example problem instance. . . . . . . . . . . . . . . . . . 627
50.23 The truck data of the first example problem instance. . . . . . . . . . . . . . . . . . 628
50.24 The container data of the first example problem instance. . . . . . . . . . . . . . . . 629
50.25 The driver data of the first example problem instance. . . . . . . . . . . . . . . . . . 629
50.26 The distance data of the first example problem instance. . . . . . . . . . . . . . . . . 631
50.27 The move data of the solution to the first example problem instance. . . . . . . . . . 633
50.28 An example output of the features of an example plan (with errors). . . . . . . . . . 634
55.1 The basic interface for everything involved in optimization. . . . . . . . . . . . . . . 707
55.2 The interface for all nullary search operations. . . . . . . . . . . . . . . . . . . . . . . 708
55.3 The interface for all unary search operations. . . . . . . . . . . . . . . . . . . . . . . 708
55.4 The interface for all binary search operations. . . . . . . . . . . . . . . . . . . . . . . 709
55.5 The interface for all ternary search operations. . . . . . . . . . . . . . . . . . . . . . 710
55.6 The interface for all binary search operations. . . . . . . . . . . . . . . . . . . . . . . 711
55.7 The interface common to all genotype-phenotype mappings. . . . . . . . . . . . . . . 711
55.8 The interface for objective functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.9 The interface for termination criteria. . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.10 An interface to all single-objective optimization algorithms. . . . . . . . . . . . . . . 713
55.11 An interface to all multi-objective optimization algorithms. . . . . . . . . . . . . . . 716
1222 LIST OF LISTINGS
55.12 The interface for temperature schedules. . . . . . . . . . . . . . . . . . . . . . . . . . 717
55.13 The interface for selection algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . 717
55.14 The interface for fitness assignment processes. . . . . . . . . . . . . . . . . . . . . . . 718
55.15 The interface for comparator functions. . . . . . . . . . . . . . . . . . . . . . . . . . 719
55.16 The interface for model building and updating. . . . . . . . . . . . . . . . . . . . . . 719
55.17 The interface for sampling points from a model. . . . . . . . . . . . . . . . . . . . . . 720
56.1 The base class for everything involved in optimization. . . . . . . . . . . . . . . . . . 723
56.2 A base class for all single-objective optimization methods. . . . . . . . . . . . . . . . 724
56.3 A base class for all multi-objective optimization methods. . . . . . . . . . . . . . . . 728
56.4 The random sampling algorithm “randomSampling” given in Algorithm 8.1. . . . . . 729
56.5 The random walk algorithm “randomWalk” given in Algorithm 8.2. . . . . . . . . . 732
56.6 The Hill Climbing algorithm “hillClimbing” given in Algorithm 26.1. . . . . . . . . . 735
56.7 The Multi-Objective Hill Climbing algorithm “hillClimbingMO” given in
Algorithm 26.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
56.8 The Simulated Annealing algorithm “simulatedAnnealing” given in Algorithm 27.1. 740
56.9 The logarithmic temperature schedule as specified in Equation 27.16. . . . . . . . . . 743
56.10 The logarithmic temperature schedule as specified in Equation 27.17. . . . . . . . . . 744
56.11 A base class providing the features of Algorithm 28.1 in a generational way (see
Section 28.1.4.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
56.12 A base class providing the features of Algorithm 28.2 in a generational way. . . . . . 750
56.13 The truncation selection method given in Algorithm 28.8 on page 289. . . . . . . . . 754
56.14 The tournament selection method given in Algorithm 28.11 on page 298. . . . . . . . 756
56.15 The random selection method given in Algorithm 28.15 on page 302. . . . . . . . . . 758
56.16 Pareto ranking fitness assignment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
56.17 An example implementation of the Evolution Strategy Algorithm 30.1 on page 361. . 760
56.18 An individual record also holding endogenous information. . . . . . . . . . . . . . . . 770
56.19 An example implementation of the Differential Evolution algorithm which uses
ternary crossover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
56.20 The Estimation of Distribution Algorithm "PlainEDA" given in
Algorithm 34.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
56.21 The creator for the initial model to be used in UMDA. . . . . . . . . . . . . . . . . . 781
56.22 The model building method used in UMDA. . . . . . . . . . . . . . . . . . . . . . . . 782
56.23 Sample points from the UMDA model. . . . . . . . . . . . . . . . . . . . . . . . . . . 783
56.24 Create uniformly distributed real vectors. . . . . . . . . . . . . . . . . . . . . . . . . 784
56.25 Modify a real vector by adding normally distributed random numbers with a given
standard deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
56.26 A strategy mutation operator for Evolution Strategy. . . . . . . . . . . . . . . . . . . 787
56.27 Modify a real vector by adding uniformly distributed random numbers. . . . . . . . 789
56.28 Modify a real vector by adding normally distributed random numbers. . . . . . . . . 791
56.29 A weighted average crossover operator. . . . . . . . . . . . . . . . . . . . . . . . . . . 792
56.30 The binomial crossover operator from Differential Evolution. . . . . . . . . . . . . . 794
56.31 The exponential crossover operator from Differential Evolution. . . . . . . . . . . . . 796
56.32 The dominant recombination operator defined in Section 30.3.1. . . . . . . . . . . . . 798
56.33 The intermediate recombination operator defined in Section 30.3.2. . . . . . . . . . . 799
56.34 A uniform random string generator. . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
56.35 A single bit-flip mutator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
56.36 The uniform crossover operator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
56.37 Create uniformly distributed permutations. . . . . . . . . . . . . . . . . . . . . . . . 804
56.38 Modify a permutation by swapping two genes. . . . . . . . . . . . . . . . . . . . . . . 806
56.39 Modify a permutation by repeatedly swapping two genes. . . . . . . . . . . . . . . . 808
56.40 The base class for search operations for trees. . . . . . . . . . . . . . . . . . . . . . . 809
56.41 A utility class to construct random paths in a tree, i. e., to select random nodes. . . 811
56.42 The ramped-half-and-half creation procedure. . . . . . . . . . . . . . . . . . . . . . . 814
56.43 A simple sub-tree replacement mutator. . . . . . . . . . . . . . . . . . . . . . . . . . 815
56.44 A binary search operation for trees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
56.45 A multiplexing nullary search operation. . . . . . . . . . . . . . . . . . . . . . . . . . 818
56.46 A multiplexing unary search operation. . . . . . . . . . . . . . . . . . . . . . . . . . . 820
56.47 A multiplexing binary search operation. . . . . . . . . . . . . . . . . . . . . . . . . . 822
56.48 A base class for tree nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
56.49 A type description for tree nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
56.50 A set of tree node types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
56.51 The identity mapping for cases where G = X. . . . . . . . . . . . . . . . . . . . . . . 833
56.52 The Random Keys genotype-phenotype mapping introduced in Algorithm 29.8. . . . 834
56.53 The step-limit termination criterion. . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
56.54 A comparator realizing the lexicographic relationship. . . . . . . . . . . . . . . . . . 837
56.55 A comparator realizing the Pareto dominance relationship. . . . . . . . . . . . . . . . 838
56.56 The individual record. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
56.57 The individual record for multi-objective optimization. . . . . . . . . . . . . . . . . . 841
56.58 A base class for archiving non-dominated individuals. . . . . . . . . . . . . . . . . . 843
56.59 Some constants useful for optimization. . . . . . . . . . . . . . . . . . . . . . . . . . 850
56.60 A small and handy class for computing some useful statistics. . . . . . . . . . . . . . 851
56.61 Some simple utilities for converting objects to strings. . . . . . . . . . . . . . . . . . 854
57.1 The Sphere Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
57.2 f1 as defined in Equation 26.1 on page 238. . . . . . . . . . . . . . . . . . . . . . . . 860
57.3 f2 as defined in Equation 26.2 on page 238. . . . . . . . . . . . . . . . . . . . . . . . 861
57.4 f3 as defined in Equation 26.3 on page 238. . . . . . . . . . . . . . . . . . . . . . . . 862
57.5 Testing the function from Task 64. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
57.6 An example output of the Function test program. . . . . . . . . . . . . . . . . . . . . 868
57.7 Solving the Sphere functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
57.8 An example output of the Sphere test program. . . . . . . . . . . . . . . . . . . . . . 887
57.9 An instance of the Bin Packing Problem. . . . . . . . . . . . . . . . . . . . . . . . . 888
57.10 The first objective function f1 from Task 67. . . . . . . . . . . . . . . . . . . . . . . . 892
57.11 The second objective function f2 from Task 67. . . . . . . . . . . . . . . . . . . . . . 893
57.12 The third objective function f3 from Task 67. . . . . . . . . . . . . . . . . . . . . . . 894
57.13 The fourth objective function f4 from Task 67. . . . . . . . . . . . . . . . . . . . . . 896
57.14 The fifth objective function f5 from Task 67. . . . . . . . . . . . . . . . . . . . . . . . 897
57.15 Solving the Bin Packing problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
57.16 An example output of the Bin Packing test program. . . . . . . . . . . . . . . . . . . 903
57.17 The data of the att48 problem instance. . . . . . . . . . . . . . . . . . . . . . . . . . 904
57.18 A single-objective test program for symbolic regression. . . . . . . . . . . . . . . . . 905
57.19 A multi-objective test program for symbolic regression. . . . . . . . . . . . . . . . . . 906
57.20 An objective function for Symbolic Regression. . . . . . . . . . . . . . . . . . . . . . 908
57.21 An objective function based on Listing 57.20, but also including tree-size information. 910
57.22 An objective function for Symbolic Regression. . . . . . . . . . . . . . . . . . . . . . 910
57.23 Some example functions to match with regression. . . . . . . . . . . . . . . . . . . . 912
57.24 An instance of this class holds the information of one location. . . . . . . . . . . . . 914
57.25 An instance of this class holds the information of one order. . . . . . . . . . . . . . . 915
57.26 An instance of this class holds the information of one truck. . . . . . . . . . . . . . . 918
57.27 An instance of this class holds the information of one container. . . . . . . . . . . . . 920
57.28 An instance of this class holds the information of one driver. . . . . . . . . . . . . . 921
57.29 This class holds the complete information about the distances between locations. . . 923
57.30 An object which holds a transportation move. . . . . . . . . . . . . . . . . . . . . . . 925
57.31 An object which can be used to compute the features of a candidate solution of the
benchmark problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931