
Global Optimization Algorithms

– Theory and Application –

3rd Edition

Author: Thomas Weise
tweise@gmx.de
Edition: third
Version: 2011-12-07-15:31

Newest Version: http://www.it-weise.de/projects/book.pdf

Sources/CVS-Repository: goa-taa.cvs.sourceforge.net

Don’t print me!

This book is much more useful as an electronic resource: you can search terms and click links. You can’t do that in a printed book. The book is frequently updated and improved as well; printed versions will just become outdated. Also think about the trees that would have to die for the paper!

Don’t print me!
Preface

This e-book is devoted to Global Optimization algorithms, which are methods for finding
solutions of high quality for an incredibly wide range of problems. We introduce the basic
concepts of optimization and discuss features which make optimization problems difficult
and thus should be considered when trying to solve them. In this book, we focus on
metaheuristic approaches like Evolutionary Computation, Simulated Annealing, Extremal
Optimization, Tabu Search, and Random Optimization. Especially the Evolutionary Com-
putation methods, which subsume Evolutionary Algorithms, Genetic Algorithms, Genetic
Programming, Learning Classifier Systems, Evolution Strategy, Differential Evolution, Par-
ticle Swarm Optimization, and Ant Colony Optimization, are discussed in detail.
In this third edition, we try to make the transition from a pure material collection and
compendium to a more structured book. We try to address two major audience groups:
1. Our book may help students, since we try to describe the algorithms in an under-
standable, consistent way. Therefore, we also provide the fundamentals and much
background knowledge. You can find (short and simplified) summaries on stochastic
theory and theoretical computer science in Part VI on page 638. Additionally, appli-
cation examples are provided which give an idea of how problems can be tackled with
the different techniques and what results can be expected.
2. Fellow researchers and PhD students may find the application examples helpful too.
For them, in-depth discussions of the individual approaches are included, supported
by a large set of useful literature references.
The contents of this book are divided into three parts. In the first part, different op-
timization technologies are introduced and their features described. Often, small
examples are given in order to ease understanding. In the second part, starting at page
530, we elaborate on different application examples in detail. Finally, in the last part, fol-
lowing at page 638, the aforementioned background knowledge is provided.
In order to maximize the utility of this electronic book, it contains automatic, clickable
links. They are shaded with dark gray so the book is still b/w printable. You can click on
1. entries in the table of contents,
2. citation references like “Heitkötter and Beasley [1220]”,
3. page references like “253”,
4. references such as “see Figure 28.1 on page 254” to sections, figures, tables, and listings,
and
5. URLs and links like “http://www.lania.mx/˜ccoello/EMOO/ [accessed 2007-10-25]”.1
1 URLs are usually annotated with the date we have accessed them, like http://www.lania.mx/˜ccoello/EMOO/ [accessed 2007-10-25]. We can neither guarantee that their content remains unchanged, nor that these sites stay available. We also assume no responsibility for anything we linked to.
The following scenario is an example of how to use the book: A student reads the text and
finds a passage that she wants to investigate in depth. She clicks on a citation which seems
interesting and the corresponding reference is shown. For some of the references which are
available online, links are provided in the reference text. By clicking on such a link, the
Adobe Reader®² will open another window and load the corresponding document (or a browser
window of a site that links to the document). After reading it, the student may use the
“backwards” button in the Adobe Reader’s navigation utility to go back to the text initially
read in the e-book.
If this book contains something you want to cite or reference in your work, please use
the citation suggestion provided in Chapter A on page 945. Also, I would be very happy
if you provide feedback, report errors or missing things that you have (or have not) found,
criticize something, or have any additional ideas or suggestions. Do not hesitate to contact
me via my email address tweise@gmx.de. As a matter of fact, a large number of people have helped
me to improve this book over time. I have listed the most important contributors in
Chapter D – thank you guys, I really appreciate your help! At many places in this book
we refer to Wikipedia – The Free Encyclopedia [2938], which is a great source of knowledge
and contains articles and definitions for many of the aspects
discussed in this book. Like this book, it is updated and improved frequently. Therefore,
including the links adds greatly to the book’s utility, in my opinion.
The updates and improvements will result in new versions of the book, which will
regularly appear at http://www.it-weise.de/projects/book.pdf. The complete
LaTeX source code of this book, including all graphics and the bibliography, is hosted
at SourceForge under http://sourceforge.net/projects/goa-taa/ in the CVS
repository goa-taa.cvs.sourceforge.net. You can browse the repository at
http://goa-taa.cvs.sourceforge.net/goa-taa/ and anonymously download the
complete sources by using the CVS command given in Listing 1.³

cvs -z3 -d:pserver:anonymous@goa-taa.cvs.sourceforge.net:/cvsroot/goa-taa checkout -P Book

Listing 1: The CVS command for anonymously checking out the complete book sources.

Compiling the sources requires multiple runs of LaTeX, BibTeX, and makeindex because of
the nifty way the references are incorporated. In the repository, an Ant script is provided for
doing this. Also, the complete bibliography of this book is stored in a MySQL⁴ database, and
the scripts for creating this database as well as tools for querying it (Java) and a Microsoft
Access⁵ frontend are part of the sources you can download. These resources as well as all
contents of this book (unless explicitly stated otherwise) are licensed under the GNU Free
Documentation License (FDL, see Chapter B on page 947). Some of the algorithms provided
are made available under the LGPL license (see Chapter C on page 955).
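The multi-pass build mentioned above is normally driven by the Ant script in the repository; manually, the cycle could be sketched roughly as follows. The main-file name book is an assumption for illustration, not taken from the repository:

```shell
# Hypothetical manual build cycle; the actual file names and order live
# in the repository's Ant script.
latex book        # first pass: writes .aux files with citation and index keys
bibtex book       # resolves the citations against the bibliography database
makeindex book    # builds the index from the .idx file
latex book        # second pass: pulls in bibliography and index
latex book        # third pass: settles remaining cross-references and page numbers
```

The repeated latex runs are needed because each pass can change page numbers, which in turn invalidates cross-references resolved in the previous pass.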

Copyright © 2011 Thomas Weise.


Permission is granted to copy, distribute and/or modify this document under the terms of
the GNU Free Documentation License, Version 1.3 or any later version published by the Free
Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. A copy of the license is included in Chapter B on page 947, entitled “GNU Free
Documentation License (FDL)”.
2 Adobe Reader® is available for download at http://www.adobe.com/products/reader/ [accessed 2007-08-13].
3 More information can be found at http://sourceforge.net/scm/?type=cvs&group_id=264352
4 http://en.wikipedia.org/wiki/MySQL [accessed 2010-07-29]
5 http://en.wikipedia.org/wiki/Microsoft_Access [accessed 2010-07-29]
Contents

Title Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Part I Foundations

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.1 What is Global Optimization? . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.1.1 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.1.2 Algorithms and Programs . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.1.3 Optimization versus Dedicated Algorithms . . . . . . . . . . . . . . . 24
1.1.4 Structure of this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2 Types of Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2.1 Combinatorial Optimization Problems . . . . . . . . . . . . . . . . . . 24
1.2.2 Numerical Optimization Problems . . . . . . . . . . . . . . . . . . . . 27
1.2.3 Discrete and Mixed Problems . . . . . . . . . . . . . . . . . . . . . . 29
1.2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3 Classes of Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.1 Algorithm Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.2 Monte Carlo Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.3.3 Heuristics and Metaheuristics . . . . . . . . . . . . . . . . . . . . . . 34
1.3.4 Evolutionary Computation . . . . . . . . . . . . . . . . . . . . . . . . 36
1.3.5 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.4 Classification According to Optimization Time . . . . . . . . . . . . . . . . . 37
1.5 Number of Optimization Criteria . . . . . . . . . . . . . . . . . . . . . . . . 39
1.6 Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

2 Problem Space and Objective Functions . . . . . . . . . . . . . . . . . . . . 43


2.1 Problem Space: What does it look like? . . . . . . . . . . . . . . . . . . . 43
2.2 Objective Functions: Is it good? . . . . . . . . . . . . . . . . . . . . . . . . . 44

3 Optima: What does good mean? . . . . . . . . . . . . . . . . . . . . . . . . . 51


3.1 Single Objective Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1.1 Extrema: Minima and Maxima of Differentiable Functions . . . . . . 52
3.1.2 Global Extrema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.2 Multiple Optima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Multiple Objective Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.2 Lexicographic Optimization . . . . . . . . . . . . . . . . . . . . . . . 59
3.3.3 Weighted Sums (Linear Aggregation) . . . . . . . . . . . . . . . . . . 62
3.3.4 Weighted Min-Max . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.5 Pareto Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4 Constraint Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.1 Genome and Phenome-based Approaches . . . . . . . . . . . . . . . . 71
3.4.2 Death Penalty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.3 Penalty Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.4 Constraints as Additional Objectives . . . . . . . . . . . . . . . . . . 72
3.4.5 The Method Of Inequalities . . . . . . . . . . . . . . . . . . . . . . . 72
3.4.6 Constrained-Domination . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.4.7 Limitations and Other Methods . . . . . . . . . . . . . . . . . . . . . 75
3.5 Unifying Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5.1 External Decision Maker . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5.2 Prevalence Optimization . . . . . . . . . . . . . . . . . . . . . . . . . 76

4 Search Space and Operators: How can we find it? . . . . . . . . . . . . . . 81


4.1 The Search Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.2 The Search Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.3 The Connection between Search and Problem Space . . . . . . . . . . . . . . 85
4.4 Local Optima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.4.1 Local Optima of single-objective Problems . . . . . . . . . . . . . . . 89
4.4.2 Local Optima of Multi-Objective Problems . . . . . . . . . . . . . . . 89
4.5 Further Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5 Fitness and Problem Landscape: How does the Optimizer see it? . . . . 93
5.1 Fitness Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Fitness as a Relative Measure of Utility . . . . . . . . . . . . . . . . . . . . . 94
5.3 Problem Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

6 The Structure of Optimization: Putting it together. . . . . . . . . . . . . 101


6.1 Involved Spaces and Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.3 Other General Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3.1 Gradient Descent . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3.2 Iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.3.3 Termination Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.3.4 Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.3.5 Modeling and Simulating . . . . . . . . . . . . . . . . . . . . . . . . . 106

7 Solving an Optimization Problem . . . . . . . . . . . . . . . . . . . . . . . . 109

8 Baseline Search Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113


8.1 Random Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.2 Random Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.2.1 Adaptive Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.3 Exhaustive Enumeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

9 Forma Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119


10 General Information on Optimization . . . . . . . . . . . . . . . . . . . . . . 123
10.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
10.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
10.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
10.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Part II Difficulties in Optimization

11 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

12 Problem Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143


12.1 Algorithmic Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
12.2 Complexity Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.2.1 Turing Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.2.2 P, NP, Hardness, and Completeness . . . . . . . . . . . . . . . . 146
12.3 The Problem: Many Real-World Tasks are NP-hard . . . . . . . . . . . 147
12.4 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

13 Unsatisfying Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
13.2 The Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
13.2.1 Premature Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . 152
13.2.2 Non-Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . 152
13.2.3 Domino Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.3 One Cause: Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
13.3.1 Exploration vs. Exploitation . . . . . . . . . . . . . . . . . . . . . . . 155
13.4 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.1 Search Operator Design . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.2 Restarting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.3 Low Selection Pressure and Population Size . . . . . . . . . . . . . . 157
13.4.4 Sharing, Niching, and Clearing . . . . . . . . . . . . . . . . . . . . . . 157
13.4.5 Self-Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
13.4.6 Multi-Objectivization . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

14 Ruggedness and Weak Causality . . . . . . . . . . . . . . . . . . . . . . . . . 161


14.1 The Problem: Ruggedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
14.2 One Cause: Weak Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
14.3 Fitness Landscape Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
14.3.1 Autocorrelation and Correlation Length . . . . . . . . . . . . . . . . . 163
14.4 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

15 Deceptiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.2 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

16 Neutrality and Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171


16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
16.2 Evolvability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
16.3 Neutrality: Problematic and Beneficial . . . . . . . . . . . . . . . . . . . . . 172
16.4 Neutral Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
16.5 Redundancy: Problematic and Beneficial . . . . . . . . . . . . . . . . . . . . 174
16.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
16.7 Needle-In-A-Haystack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
17 Epistasis, Pleiotropy, and Separability . . . . . . . . . . . . . . . . . . . . . 177
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
17.1.1 Epistasis and Pleiotropy . . . . . . . . . . . . . . . . . . . . . . . . . 177
17.1.2 Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
17.1.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
17.2 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
17.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
17.3.1 Choice of the Representation . . . . . . . . . . . . . . . . . . . . . . . 182
17.3.2 Parameter Tweaking . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
17.3.3 Linkage Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
17.3.4 Cooperative Coevolution . . . . . . . . . . . . . . . . . . . . . . . . . 184

18 Noise and Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187


18.1 Introduction – Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
18.2 The Problem: Need for Robustness . . . . . . . . . . . . . . . . . . . . . . . 188
18.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

19 Overfitting and Oversimplification . . . . . . . . . . . . . . . . . . . . . . . . 191


19.1 Overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
19.1.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
19.1.2 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
19.2 Oversimplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
19.2.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
19.2.2 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

20 Dimensionality (Objective Functions) . . . . . . . . . . . . . . . . . . . . . . 197


20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
20.2 The Problem: Many-Objective Optimization . . . . . . . . . . . . . . . . . . 198
20.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
20.3.1 Increasing Population Size . . . . . . . . . . . . . . . . . . . . . . . . 201
20.3.2 Increasing Selection Pressure . . . . . . . . . . . . . . . . . . . . . . . 201
20.3.3 Indicator Function-based Approaches . . . . . . . . . . . . . . . . . . 201
20.3.4 Scalarizing Approaches . . . . . . . . . . . . . . . . . . . . . . . 202
20.3.5 Limiting the Search Area in the Objective Space . . . . . . . . . . . . 202
20.3.6 Visualization Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 202

21 Scale (Decision Variables) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203


21.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
21.2 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.2.1 Parallelization and Distribution . . . . . . . . . . . . . . . . . . . . . 204
21.2.2 Genetic Representation and Development . . . . . . . . . . . . . . . . 204
21.2.3 Exploiting Separability . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.2.4 Combination of Techniques . . . . . . . . . . . . . . . . . . . . . . . . 205

22 Dynamically Changing Fitness Landscape . . . . . . . . . . . . . . . . . . . 207

23 The No Free Lunch Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 209


23.1 Initial Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
23.2 The Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
23.3 Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
23.4 Infinite and Continuous Domains . . . . . . . . . . . . . . . . . . . . . . . . 213
23.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
24 Lessons Learned: Designing Good Encodings . . . . . . . . . . . . . . . . . 215
24.1 Compact Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
24.2 Unbiased Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
24.3 Surjective GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
24.4 Injective GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
24.5 Consistent GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
24.6 Formae Inheritance and Preservation . . . . . . . . . . . . . . . . . . . . . . 217
24.7 Formae in Genotypic Space Aligned with Phenotypic Formae . . . . . . . . . 217
24.8 Compatibility of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
24.9 Representation with Causality . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.10 Combinations of Formae . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.11 Reachability of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.12 Influence of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.13 Scalability of Search Operations . . . . . . . . . . . . . . . . . . . . . 219
24.14 Appropriate Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . 219
24.15 Indirect Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
24.15.1 Extradimensional Bypass: Example for Good Complexity . . . . . . 219

Part III Metaheuristic Optimization Algorithms

25 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
25.1 General Information on Metaheuristics . . . . . . . . . . . . . . . . . . . . . 226
25.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 226
25.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
25.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 227
25.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228

26 Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229


26.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
26.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
26.3 Multi-Objective Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . 230
26.4 Problems in Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
26.5 Hill Climbing With Random Restarts . . . . . . . . . . . . . . . . . . . . . . 232
26.6 General Information on Hill Climbing . . . . . . . . . . . . . . . . . . . . . . 234
26.6.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 234
26.6.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
26.6.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 234
26.6.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
26.7 Raindrop Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
26.7.1 General Information on Raindrop Method . . . . . . . . . . . . . . . 236

27 Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243


27.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
27.1.1 Metropolis Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
27.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
27.2.1 No Boltzmann’s Constant . . . . . . . . . . . . . . . . . . . . . . . . . 246
27.2.2 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
27.3 Temperature Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
27.3.1 Logarithmic Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 248
27.3.2 Exponential Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 249
27.3.3 Polynomial and Linear Scheduling . . . . . . . . . . . . . . . . . . . . 249
27.3.4 Adaptive Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
27.3.5 Larger Step Widths . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
27.4 General Information on Simulated Annealing . . . . . . . . . . . . . . . . . . 250
27.4.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 250
27.4.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
27.4.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 250
27.4.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

28 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253


28.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
28.1.1 The Basic Cycle of EAs . . . . . . . . . . . . . . . . . . . . . . . . . . 254
28.1.2 Biological and Artificial Evolution . . . . . . . . . . . . . . . . . . . . 255
28.1.3 Historical Classification . . . . . . . . . . . . . . . . . . . . . . . . . . 263
28.1.4 Populations in Evolutionary Algorithms . . . . . . . . . . . . . . . . . 265
28.1.5 Configuration Parameters of Evolutionary Algorithms . . . . . . . . . 269
28.2 Genotype-Phenotype Mappings . . . . . . . . . . . . . . . . . . . . . . . . . 270
28.2.1 Direct Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
28.2.2 Indirect Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
28.3 Fitness Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
28.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
28.3.2 Weighted Sum Fitness Assignment . . . . . . . . . . . . . . . . . . . . 275
28.3.3 Pareto Ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
28.3.4 Sharing Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
28.3.5 Variety Preserving Fitness Assignment . . . . . . . . . . . . . . . . . 279
28.3.6 Tournament Fitness Assignment . . . . . . . . . . . . . . . . . . . . . 284
28.4 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.4.2 Truncation Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
28.4.3 Fitness Proportionate Selection . . . . . . . . . . . . . . . . . . . . . 290
28.4.4 Tournament Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
28.4.5 Linear Ranking Selection . . . . . . . . . . . . . . . . . . . . . . . . . 300
28.4.6 Random Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
28.4.7 Clearing and Simple Convergence Prevention (SCP) . . . . . . . . . . 302
28.5 Reproduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
28.6 Maintaining a Set of Non-Dominated/Non-Prevailed Individuals . . . . . . . 308
28.6.1 Updating the Optimal Set . . . . . . . . . . . . . . . . . . . . . . . . 308
28.6.2 Obtaining Non-Prevailed Elements . . . . . . . . . . . . . . . . . . . . 309
28.6.3 Pruning the Optimal Set . . . . . . . . . . . . . . . . . . . . . . . . . 311
28.7 General Information on Evolutionary Algorithms . . . . . . . . . . . . . . . 314
28.7.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 314
28.7.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
28.7.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 317
28.7.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

29 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325


29.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
29.1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
29.2 Genomes in Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 326
29.2.1 Classification According to Length . . . . . . . . . . . . . . . . . . . . 326
29.2.2 Classification According to Element Type . . . . . . . . . . . . . . . . 327
29.2.3 Classification According to Meaning of Loci . . . . . . . . . . . . . . 327
29.2.4 Introns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
29.3 Fixed-Length String Chromosomes . . . . . . . . . . . . . . . . . . . . . . . 328
29.3.1 Creation: Nullary Reproduction . . . . . . . . . . . . . . . . . . . . . 328
29.3.2 Mutation: Unary Reproduction . . . . . . . . . . . . . . . . . . . . . 330
29.3.3 Permutation: Unary Reproduction . . . . . . . . . . . . . . . . . . . . 333
29.3.4 Crossover: Binary Reproduction . . . . . . . . . . . . . . . . . . . . . 335
29.4 Variable-Length String Chromosomes . . . . . . . . . . . . . . . . . . . . . . 339
29.4.1 Creation: Nullary Reproduction . . . . . . . . . . . . . . . . . . . . . 340
29.4.2 Insertion and Deletion: Unary Reproduction . . . . . . . . . . . . . . 340
29.4.3 Crossover: Binary Reproduction . . . . . . . . . . . . . . . . . . . . . 340
29.5 Schema Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
29.5.1 Schemata and Masks . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
29.5.2 Wildcards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
29.5.3 Holland’s Schema Theorem . . . . . . . . . . . . . . . . . . . . . . . . 342
29.5.4 Criticism of the Schema Theorem . . . . . . . . . . . . . . . . . . . . 345
29.5.5 The Building Block Hypothesis . . . . . . . . . . . . . . . . . . . . . 346
29.5.6 Genetic Repair and Similarity Extraction . . . . . . . . . . . . . . . . 346
29.6 The Messy Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 347
29.6.1 Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
29.6.2 Reproduction Operations . . . . . . . . . . . . . . . . . . . . . . . . . 347
29.6.3 Overspecification and Underspecification . . . . . . . . . . . . . . . . 348
29.6.4 The Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
29.7 Random Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
29.8 General Information on Genetic Algorithms . . . . . . . . . . . . . . . . . . 351
29.8.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 351
29.8.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
29.8.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 353
29.8.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356

30 Evolution Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359


30.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
30.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
30.3 Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
30.3.1 Dominant Recombination . . . . . . . . . . . . . . . . . . . . . . . . . 362
30.3.2 Intermediate Recombination . . . . . . . . . . . . . . . . . . . . . . . 362
30.4 Mutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
30.4.1 Using Normal Distributions . . . . . . . . . . . . . . . . . . . . . . . . 364
30.5 Parameter Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
30.5.1 The 1/5th Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
30.5.2 Endogenous Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 370
30.6 General Information on Evolution Strategies . . . . . . . . . . . . . . . . . . 373
30.6.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 373
30.6.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
30.6.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 373
30.6.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376

31 Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379


31.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
31.1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
31.2 General Information on Genetic Programming . . . . . . . . . . . . . . . . . 382
31.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 382
31.2.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
31.2.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 383
31.2.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
31.3 (Standard) Tree Genomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
31.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
31.3.2 Creation: Nullary Reproduction . . . . . . . . . . . . . . . . . . . . . 389
31.3.3 Node Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
31.3.4 Unary Reproduction Operations . . . . . . . . . . . . . . . . . . . . . 393
31.3.5 Recombination: Binary Reproduction . . . . . . . . . . . . . . . . . . 398
31.3.6 Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
31.4 Linear Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
31.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
31.4.2 Advantages and Disadvantages . . . . . . . . . . . . . . . . . . . . . . 404
31.4.3 Realizations and Implementations . . . . . . . . . . . . . . . . . . . . 405
31.4.4 Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
31.4.5 General Information on Linear Genetic Programming . . . . . . . . . 408
31.5 Grammars in Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . 409
31.5.1 General Information on Grammar-Guided Genetic Programming . . . 409
31.6 Graph-based Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
31.6.1 General Information on Graph-based Genetic Programming . . . . . 409

32 Evolutionary Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413


32.1 General Information on Evolutionary Programming . . . . . . . . . . . . . . 414
32.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 414
32.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
32.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 414
32.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417

33 Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419


33.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
33.2 Ternary Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
33.2.1 Advanced Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
33.3 General Information on Differential Evolution . . . . . . . . . . . . . . . . . 421
33.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 421
33.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
33.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 421
33.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424

34 Estimation Of Distribution Algorithms . . . . . . . . . . . . . . . . . . . . . 427


34.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
34.2 General Information on Estimation Of Distribution Algorithms . . . . . . . 428
34.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 428
34.2.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
34.2.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 429
34.2.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
34.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
34.4 EDAs Searching Bit Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
34.4.1 UMDA: Univariate Marginal Distribution Algorithm . . . . . . . . . 434
34.4.2 PBIL: Population-Based Incremental Learning . . . . . . . . . . . . . 438
34.4.3 cGA: Compact Genetic Algorithm . . . . . . . . . . . . . . . . . . . . 439
34.5 EDAs Searching Real Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 442
34.5.1 SHCLVND: Stochastic Hill Climbing with Learning by Vectors of
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
34.5.2 PBILc: Continuous PBIL . . . . . . . . . . . . . . . . . . . . . . . . 444
34.5.3 Real-Coded PBIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
34.6 EDAs Searching Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
34.6.1 PIPE: Probabilistic Incremental Program Evolution . . . . . . . . . . 447
34.7 Difficulties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
34.7.1 Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
34.7.2 Epistasis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
35 Learning Classifier Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
35.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
35.2 Families of Learning Classifier Systems . . . . . . . . . . . . . . . . . . . . . 457
35.3 General Information on Learning Classifier Systems . . . . . . . . . . . . . . 459
35.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 459
35.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
35.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 459
35.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462

36 Memetic and Hybrid Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 463


36.1 Memetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
36.2 Lamarckian Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
36.3 Baldwin Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
36.4 General Information on Memetic Algorithms . . . . . . . . . . . . . . . . . . 467
36.4.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 467
36.4.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
36.4.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 467
36.4.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470

37 Ant Colony Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471


37.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
37.2 General Information on Ant Colony Optimization . . . . . . . . . . . . . . . 473
37.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 473
37.2.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
37.2.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 473
37.2.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476

38 River Formation Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477


38.1 General Information on River Formation Dynamics . . . . . . . . . . . . . . 478
38.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 478
38.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
38.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 478
38.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479

39 Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 481


39.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
39.2 The Particle Swarm Optimization Algorithm . . . . . . . . . . . . . . . . . . 481
39.2.1 Communication with Neighbors – Social Interaction . . . . . . . . . . 482
39.2.2 Particle Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
39.2.3 Basic Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
39.3 General Information on Particle Swarm Optimization . . . . . . . . . . . . . 483
39.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 483
39.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
39.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 483
39.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486

40 Tabu Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489


40.1 General Information on Tabu Search . . . . . . . . . . . . . . . . . . . . . . 490
40.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 490
40.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
40.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 490
40.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
41 Extremal Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
41.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
41.1.1 Self-Organized Criticality . . . . . . . . . . . . . . . . . . . . . . . . . 493
41.1.2 The Bak-Sneppen model of Evolution . . . . . . . . . . . . . . . . . . 493
41.2 Extremal Optimization and Generalized Extremal Optimization . . . . . . . 494
41.3 General Information on Extremal Optimization . . . . . . . . . . . . . . . . 496
41.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 496
41.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
41.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 496
41.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497

42 GRASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
42.1 General Information on GRASPs . . . . . . . . . . . . . . . . . . . . . . 500
42.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500

43 Downhill Simplex (Nelder and Mead) . . . . . . . . . . . . . . . . . . . . . . 501


43.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
43.2 General Information on Downhill Simplex . . . . . . . . . . . . . . . . . . . 502
43.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 502
43.2.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 502
43.2.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502

44 Random Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505


44.1 General Information on Random Optimization . . . . . . . . . . . . . . . . . 506
44.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 506
44.1.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 506
44.1.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506

Part IV Non-Metaheuristic Optimization Algorithms

45 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509

46 State Space Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511


46.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
46.1.1 The Baseline: A Binary Goal Criterion . . . . . . . . . . . . . . . . . 511
46.1.2 State Space and Neighborhood Search . . . . . . . . . . . . . . . . . . 511
46.1.3 The Search Space as Graph . . . . . . . . . . . . . . . . . . . . . . . . 512
46.1.4 Key Efficiency Features . . . . . . . . . . . . . . . . . . . . . . . . . . 514
46.2 Uninformed Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
46.2.1 Breadth-First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
46.2.2 Depth-First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
46.2.3 Depth-Limited Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
46.2.4 Iteratively Deepening Depth-First Search . . . . . . . . . . . . . . . . 516
46.2.5 General Information on Uninformed Search . . . . . . . . . . . . . . . 517
46.3 Informed Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
46.3.1 Greedy Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
46.3.2 A⋆ Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
46.3.3 General Information on Informed Search . . . . . . . . . . . . . . . . 520
47 Branch And Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
47.1 General Information on Branch And Bound . . . . . . . . . . . . . . . . . . 524
47.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 524
47.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
47.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 524

48 Cutting-Plane Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527


48.1 General Information on Cutting-Plane Method . . . . . . . . . . . . . . . . . 528
48.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 528
48.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
48.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 528

Part V Applications

49 Real-World Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531


49.1 Symbolic Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
49.1.1 Genetic Programming: Genome for Symbolic Regression . . . . . . . 531
49.1.2 Sample Data, Quality, and Estimation Theory . . . . . . . . . . . . . 532
49.1.3 Limits of Symbolic Regression . . . . . . . . . . . . . . . . . . . . . . 535
49.2 Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
49.2.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
49.3 Freight Transportation Planning . . . . . . . . . . . . . . . . . . . . . . . . . 536
49.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
49.3.2 Vehicle Routing in Theory and Practice . . . . . . . . . . . . . . . . . 538
49.3.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
49.3.4 Evolutionary Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 542
49.3.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
49.3.6 Holistic Approach to Logistics . . . . . . . . . . . . . . . . . . . . . . 549
49.3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551

50 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.2 Bit-String based Problem Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.2.1 Kauffman’s NK Fitness Landscapes . . . . . . . . . . . . . . . . . . . 553
50.2.2 The p-Spin Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
50.2.3 The ND Family of Fitness Landscapes . . . . . . . . . . . . . . . . . . 557
50.2.4 The Royal Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
50.2.5 OneMax and BinInt . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
50.2.6 Long Path Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
50.2.7 Tunable Model for Problematic Phenomena . . . . . . . . . . . . . . 566
50.3 Real Problem Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
50.3.1 Single-Objective Optimization . . . . . . . . . . . . . . . . . . . . . . 580
50.4 Close-to-Real Vehicle Routing Problem Benchmark . . . . . . . . . . . . . . 621
50.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
50.4.2 Involved Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
50.4.3 Problem Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
50.4.4 Objectives and Constraints . . . . . . . . . . . . . . . . . . . . . . . . 634
50.4.5 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635

Part VI Background
51 Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
51.1 Set Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
51.2 Special Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
51.3 Relations between Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
51.4 Operations on Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
51.5 Tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
51.6 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
51.7 Binary Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
51.8 Order Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
51.9 Equivalence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
51.10 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
51.10.1 Monotonicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
51.11 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
51.11.1 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
51.11.2 Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
51.11.3 Transformations between Sets and Lists . . . . . . . . . . . . . . . 650

52 Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651

53 Stochastic Theory and Statistics . . . . . . . . . . . . . . . . . . . . . . . . . 653


53.1 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
53.1.1 Probability as defined by Bernoulli (1713) . . . . . . . . . . . . . . . 654
53.1.2 Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
53.1.3 The Limiting Frequency Theory of von Mises . . . . . . . . . . . . . . 655
53.1.4 The Axioms of Kolmogorov . . . . . . . . . . . . . . . . . . . . . . . . 656
53.1.5 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . . . . 657
53.1.6 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
53.1.7 Cumulative Distribution Function . . . . . . . . . . . . . . . . . . . . 658
53.1.8 Probability Mass Function . . . . . . . . . . . . . . . . . . . . . . . . 659
53.1.9 Probability Density Function . . . . . . . . . . . . . . . . . . . . . . . 659
53.2 Parameters of Distributions and their Estimates . . . . . . . . . . . . . . . . 659
53.2.1 Count, Min, Max and Range . . . . . . . . . . . . . . . . . . . . . . . 660
53.2.2 Expected Value and Arithmetic Mean . . . . . . . . . . . . . . . . . . 661
53.2.3 Variance and Standard Deviation . . . . . . . . . . . . . . . . . . . . 662
53.2.4 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
53.2.5 Skewness and Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . 665
53.2.6 Median, Quantiles, and Mode . . . . . . . . . . . . . . . . . . . . . . 665
53.2.7 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
53.2.8 The Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . . 668
53.3 Some Discrete Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
53.3.1 Discrete Uniform Distribution . . . . . . . . . . . . . . . . . . . . . . 668
53.3.2 Poisson Distribution πλ . . . . . . . . . . . . . . . . . . . . . . . . . . 671
53.3.3 Binomial Distribution B (n, p) . . . . . . . . . . . . . . . . . . . . . . 674
53.4 Some Continuous Distributions . . . . . . . . . . . . . . . . . . . . . . . . . 676
53.4.1 Continuous Uniform Distribution . . . . . . . . . . . . . . . . . . . . 676
53.4.2 Normal Distribution N(µ, σ²) . . . . . . . . . . . . . . . . . . . . . . 678
53.4.3 Exponential Distribution exp(λ) . . . . . . . . . . . . . . . . . . . . . 682
53.4.4 Chi-square Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 684
53.4.5 Student’s t-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 687
53.5 Example – Throwing a Die . . . . . . . . . . . . . . . . . . . . . . . . . 689
53.6 Estimation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
53.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
53.6.2 Likelihood and Maximum Likelihood Estimators . . . . . . . . . . . . 693
53.6.3 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
53.7 Statistical Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
53.7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699

Part VII Implementation

54 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705

55 The Specification Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707


55.1 General Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
55.1.1 Search Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
55.1.2 Genotype-Phenotype Mapping . . . . . . . . . . . . . . . . . . . . . . 711
55.1.3 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.1.4 Termination Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.1.5 Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 713
55.2 Algorithm Specific Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
55.2.1 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 717
55.2.2 Estimation Of Distribution Algorithms . . . . . . . . . . . . . . . . . 719

56 The Implementation Package . . . . . . . . . . . . . . . . . . . . . . . . . . . 723


56.1 The Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 724
56.1.1 Algorithm Base Classes . . . . . . . . . . . . . . . . . . . . . . . . . . 724
56.1.2 Baseline Search Patterns . . . . . . . . . . . . . . . . . . . . . . . . . 729
56.1.3 Local Search Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 735
56.1.4 Population-based Metaheuristics . . . . . . . . . . . . . . . . . . . . . 746
56.2 Search Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
56.2.1 Operations for String-based Search Spaces . . . . . . . . . . . . . . . 784
56.2.2 Operations for Tree-based Search Spaces . . . . . . . . . . . . . . . . 809
56.2.3 Multiplexing Operators . . . . . . . . . . . . . . . . . . . . . . . . . . 818
56.3 Data Structures for Special Search Spaces . . . . . . . . . . . . . . . . . . . 824
56.3.1 Tree-based Search Spaces as used in Genetic Programming . . . . . . 824
56.4 Genotype-Phenotype Mappings . . . . . . . . . . . . . . . . . . . . . . . . . 833
56.5 Termination Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
56.6 Comparator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
56.7 Utility Classes, Constants, and Routines . . . . . . . . . . . . . . . . . . . . 839

57 Demos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
57.1 Demos for String-based Problem Spaces . . . . . . . . . . . . . . . . . . . . . 859
57.1.1 Real-Vector based Problem Spaces X ⊆ Rⁿ . . . . . . . . . . . . . . . 859
57.1.2 Permutation-based Problem Spaces . . . . . . . . . . . . . . . . . . . 887
57.2 Demos for Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . 905
57.2.1 Demos for Genetic Programming Applied to Mathematical Problems 905
57.3 The VRP Benchmark Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 914
57.3.1 The Involved Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
57.3.2 The Optimization Utilities . . . . . . . . . . . . . . . . . . . . . . . . 925

Appendices

A Citation Suggestion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945


B GNU Free Documentation License (FDL) . . . . . . . . . . . . . . . . . . . 947
B.1 Applicability and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
B.2 Verbatim Copying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
B.3 Copying in Quantity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
B.4 Modifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
B.5 Combining Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
B.6 Collections of Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
B.7 Aggregation with Independent Works . . . . . . . . . . . . . . . . . . . . . . 951
B.8 Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
B.9 Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
B.10 Future Revisions of this License . . . . . . . . . . . . . . . . . . . . . . . . . 952
B.11 Relicensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953

C GNU Lesser General Public License (LGPL) . . . . . . . . . . . . . . . . . 955


C.1 Additional Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
C.2 Exception to Section 3 of the GNU GPL . . . . . . . . . . . . . . . . . . . . 955
C.3 Conveying Modified Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
C.4 Object Code Incorporating Material from Library Header Files . . . . . . . . 956
C.5 Combined Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
C.6 Combined Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
C.7 Revised Versions of the GNU Lesser General Public License . . . . . . . . . 957

D Credits and Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213

List of Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217

List of Listings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221


Part I

Foundations
Chapter 1
Introduction

1.1 What is Global Optimization?

One of the most fundamental principles in our world is the search for optimal states. It begins
in the microcosm of physics, where atomic bonds1 minimize the energy of electrons [2137].
When molecules combine into solid bodies during the process of freezing, they assume crystal
structures which come close to energy-optimal configurations. Such a natural optimization
process, of course, is not driven by any higher intention but results from the laws of
physics.
The same goes for the biological principle of survival of the fittest [2571] which, together
with biological evolution [684], leads to a better adaptation of species to their environ-
ment. Here, a local optimum is a well-adapted species that can maintain a stable population
in its niche.

[Figure 1.1 shows a cloud of phrases: “biggest . . . ”, “smallest . . . ”, “fastest . . . ”, “cheapest . . . ”, “maximal . . . ”, “minimal . . . ”, “most robust . . . ”, “most precise . . . ”, “most similar to . . . ”, “best trade-offs between . . . ”, “. . . with least energy”, “. . . that fulfills all timing constraints”, “. . . on the smallest possible area”, “. . . with least aerodynamic drag”]
Figure 1.1: Terms that hint: “Optimization Problem” (inspired by [428])

1 http://en.wikipedia.org/wiki/Chemical_bond [accessed 2007-07-12]


As long as humankind exists, we strive for perfection in many areas. As sketched in
Figure 1.1 there exists an incredible variety of different terms that indicate that a situation
is actually an optimization problem. For example, we want to reach a maximum degree
of happiness with the least amount of effort. In our economy, profit and sales must be
maximized and costs should be as low as possible. Therefore, optimization is one of the oldest
sciences which even extends into daily life [2023]. We also can observe the phenomenon of
trade-offs between different criteria which are also common in many other problem domains:
One person may choose to work hard with the goal of securing stability and pursuing wealth
for her future life, accepting very little spare time at the current stage. Another person can,
deliberately, choose a lax life full of joy in the current phase, one that is rather insecure in the
long term. Similarly, most optimization problems in real-world applications involve multiple,
usually conflicting, objectives.
Furthermore, many optimization problems involve certain constraints. The guy with the
relaxed lifestyle, for instance, regardless how lazy he might become, cannot waive dragging
himself to the supermarket from time to time in order to acquire food. The eager beaver, on
the other hand, will not be able to keep up with 18 hours of work for seven days per week in
the long run without damaging her health considerably and thus, rendering the goal of an
enjoyable future unattainable. Likewise, the parameters of the solutions of an optimization
problem may be constrained as well.

1.1.1 Optimization

Since optimization covers a wide variety of subject areas, there exist different points of view
on what it actually is. We approach this question by first clarifying the goal of optimization:
solving optimization problems.

Definition D1.1 (Optimization Problem (Economical View)). An optimization prob-


lem is a situation which requires choosing one option from a set of possible alternatives
in order to reach a predefined/required benefit at minimal costs.

Definition D1.2 (Optimization Problem (Simplified Mathematical View)). Solving


an optimization problem requires finding an input value x⋆ for which a mathematical func-
tion f takes on the smallest possible value (while usually obeying some restrictions on the
possible values of x⋆ ).

Every task which has the goal of approaching certain configurations considered as optimal in
the context of pre-defined criteria can be viewed as an optimization problem. After giving
these rough definitions which we will later refine in Section 6.2 on page 103 we can also
specify the topic of this book:

Definition D1.3 (Optimization). Optimization is the process of solving an optimization


problem, i. e., finding suitable solutions for it.

Definition D1.4 (Global Optimization). Global optimization is optimization with the


goal of finding solutions x⋆ for a given optimization problem which have the property that
no other, better solutions exist.

If something is important, general, and abstract enough, there is always a mathemati-


cal discipline dealing with it. Global Optimization2 [609, 2126, 2181, 2714] is also the
2 http://en.wikipedia.org/wiki/Global_optimization [accessed 2007-07-03]
branch of applied mathematics and numerical analysis that focuses on, well, optimization.
Closely related and overlapping areas are mathematical economics 3 [559] and operations
research 4 [580, 1232].

1.1.2 Algorithms and Programs


The title of this book is Global Optimization Algorithms – Theory and Application and
so far, we have clarified what optimization is so the reader has an idea about the general
direction in which we are going to drift. Before giving an overview about the contents to
follow, let us take a quick look into the second component of the book title, algorithms.
The term algorithm comprises essentially all forms of “directives what to do to reach a
certain goal”. A culinary recipe is an algorithm, for example, since it tells how much of
which ingredient is to be added to a meal in which sequence and how everything should
be heated. The commands inside the algorithms can be very concise or very imprecise,
depending on the area of application. How accurately can we, for instance, carry out the
instruction “Add a tablespoon of sugar.”? Hence, algorithms form such a wide field that
there exist numerous different, rather fuzzy definitions for the word algorithm [46, 141, 157,
635, 1616]. We base ours on the one given by Wikipedia:

Definition D1.5 (Algorithm (Wikipedia)). An algorithm5 is a finite set of well-defined


instructions for accomplishing some task. It starts in some initial state and usually termi-
nates in a final state.

The term algorithm is derived from the name of the mathematician Mohammed ibn-Musa
al-Khwarizmi who was part of the royal court in Baghdad and who lived from about 780 to
850. Al-Khwarizmi’s work is the likely source for the word algebra as well. Definition D1.6
provides a basic understanding about what an optimization algorithm is, a “working hy-
pothesis” for the time being. It will later be refined in Definition D6.3 on page 103.

Definition D1.6 (Optimization Algorithm). An optimization algorithm is an algo-


rithm suitable to solve optimization problems.

An algorithm is a set of directions in a representation which is usually understandable for


human beings and thus, independent from specific hardware and computers. A program,
on the other hand, is basically an algorithm realized for a given computer or execution
environment.

Definition D1.7 (Program). A program6 is a set of instructions that describe a task or


an algorithm to be carried out on a computer.

Therefore, the primitive instructions of the algorithm must be expressed either in a form that
the machine can process directly (machine code7 [2135]), in a form that can be translated
(1:1) into such code (assembly language [2378], Java byte code [1109], etc.), or in a high-
level programming language8 [1817] which can be translated (n:m) into the latter using
special software (compiler) [1556].
In this book, we will provide several algorithms for solving optimization problems. In
our algorithms, we will try to bridge between abstraction, ease of reading, and closeness
to the syntax of high-level programming languages as Java. In the algorithms, we will use
self-explaining primitives which are still based on clear definitions (such as the list operations
discussed in Section 51.11 on page 647).
3 http://en.wikipedia.org/wiki/Mathematical_economics [accessed 2009-06-16]
4 http://en.wikipedia.org/wiki/Operations_research [accessed 2009-06-19]
5 http://en.wikipedia.org/wiki/Algorithm [accessed 2007-07-03]
6 http://en.wikipedia.org/wiki/Computer_program [accessed 2007-07-03]
7 http://en.wikipedia.org/wiki/Machine_code [accessed 2007-07-04]
8 http://en.wikipedia.org/wiki/High-level_programming_language [accessed 2007-07-03]
1.1.3 Optimization versus Dedicated Algorithms

Over at least fifty-five years, research in computer science has led to the development of many
algorithms which solve given problem classes to optimality. Basically, we can distinguish
two kinds of problem-solving algorithms: dedicated algorithms and optimization algorithms.
Dedicated algorithms are specialized to exactly solve a given class of problems in the
shortest possible time. They are most often deterministic and always utilize exact theoretical
knowledge about the problem at hand. Kruskal’s algorithm [1625], for example, is the best
way for searching the minimum spanning tree in a graph with weighted edges. The shortest
paths from a source node to the other nodes in a network can best be found with Dijkstra’s
algorithm [796] and the maximum flow in a flow network can be computed quickly with the
algorithm given by Dinitz [800]. There are many problems for which such fast and efficient
dedicated algorithms exist. For each optimization problem, we should always check first if
it can be solved with such a specialized, exact algorithm (see also Chapter 7 on page 109).
However, for many optimization tasks no efficient algorithm exists. The reason can be
that (1) the problem is too specific – after all, researchers usually develop algorithms only
for problems which are of general interest. (2) Furthermore, for some problems (the current
scientific opinion is that) no efficient, dedicated algorithm can be constructed at all (see Section 12.2.2
on page 146).
For such problems, the more general optimization algorithms are employed. Most often,
these algorithms only need a definition of the structure of possible solutions and a function
f which measures the quality of a candidate solution. Based on this information, they
try to find solutions for which f takes on the best values. Since optimization methods utilize
much less information than exact algorithms, they are usually slower and cannot provide the
same solution quality. In the cases 1 and 2, however, we have no choice but to use them. And
indeed, there exists a giant number of situations where either Point 1, 2, or both conditions
hold, as you can see in, for example, Table 25.1 on page 226 or 28.4 on page 314. . .

1.1.4 Structure of this Book

In the first part of this book, we will first discuss the things which constitute an optimization
problem and which are the basic components of each optimization process. In Part II,
we take a look on all the possible aspects which can make an optimization task difficult.
Understanding these issues will allow you to anticipate possible problems when solving an
optimization task and, hopefully, to prevent their occurrence from the start. We then discuss
metaheuristic optimization algorithms (and especially Evolutionary Computation methods)
in Part III. Whereas these optimization approaches are clearly the center of this book, we
also introduce traditional, non-metaheuristic, numerical, and/or exact methods in Part IV.
In Part V, we then list some examples of applications for which optimization algorithms can
be used. Finally, Part VI is a collection of definitions and knowledge which can be useful to
understand the terms and formulas used in the rest of the book. It is provided in an effort
to make the book stand-alone and to allow students to understand it even if they have little
mathematical or theoretical background.

1.2 Types of Optimization Problems


Basically, there are two general types of optimization tasks: combinatorial and numerical
problems.

1.2.1 Combinatorial Optimization Problems


Definition D1.8 (Combinatorial Optimization Problem). Combinatorial optimiza-
tion problems9 [578, 628, 2124, 2430] are problems which are defined over a finite (or numer-
able infinite) discrete problem space X and whose candidate solutions structure can either
be expressed as
1. elements from finite sets,
2. finite sequence or permutation of elements xi chosen from finite sets, i. e., x ∈ X ⇒
x = (x1 , x2 , . . . ), as
3. sets of elements, i. e., x ∈ X ⇒ x = {x1 , x2 , . . . }, as
4. tree or graph structures with node or edge properties stemming from any of the above
types, or
5. any form of nesting, combination, partitions, or subsets of the above.

Combinatorial tasks are, for example, the Traveling Salesman Problem (see Example E2.2 on
page 45 [1429]), Vehicle Routing Problems (see Section 49.3 on page 536 and [2162, 2912]),
graph coloring [1455], graph partitioning [344], scheduling [564], packing [564], and satisfia-
bility problems [2335]. Algorithms suitable for such problems are, amongst others, Genetic
Algorithms (Chapter 29), Simulated Annealing (Chapter 27), Tabu Search (Chapter 40),
and Extremal Optimization (Chapter 41).

Example E1.1 (Bin Packing Problem).


The bin packing problem10 [1025] is a combinatorial optimization problem which is defined

[Figure content: a distribution of n = 10 objects with sizes a1 = 4, a2 = 4, a3 = 1, a4 = 1, a5 = 1, a6 = 2, a7 = 2, a8 = 2, a9 = 2, a10 = 3 over k = 5 bins (bin 0 to bin 4) of size b = 5]
Figure 1.2: One example instance of a bin packing problem.

as follows: Given are a number k ∈ N1 of bins, each of which has size b ∈ N1 , and a
number n ∈ N1 of objects with the weights a1 , a2 , . . . , an . The question to answer is: Can the n objects
be distributed over the k bins in a way that none of the bins flows over? A configuration
which meets this requirement is sketched in Figure 1.2. If another object of size a11 = 4 was
added, the answer to the question would become No.
The bin packing problem can be formalized to asking whether a mapping x from the
numbers 1..n to 1..k exists so that Equation 1.1 holds. In Equation 1.1, the mapping is
represented as a list x of k sets where the ith set contains the indices of the objects to be
packed into the ith bin.

∃x : ∀i ∈ 0..k − 1 : (Σ_{∀j ∈ x[i]} aj ) ≤ b    (1.1)

Finding whether this equation holds or not is known to be N P-complete. The question
could be transformed into an optimization problem with the goal of finding the smallest
9 http://en.wikipedia.org/wiki/Combinatorial_optimization [accessed 2010-08-04]
10 http://en.wikipedia.org/wiki/Bin_packing_problem [accessed 2010-08-27]
number k of bins of size b which can contain all the objects listed in a. Alternatively, we
could try to find the mapping x for which the function fbp given in Equation 1.2 takes on
the lowest value, for example.
   
fbp (x) = Σ_{i=0}^{k−1} max{0, (Σ_{∀j ∈ x[i]} aj ) − b}    (1.2)

This problem is known to be N P-hard. See Section 12.2 for a discussion on


N P-completeness and N P-hardness and Task 67 on page 238 for a homework task con-
cerned with solving the bin packing problem.
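To make Equation 1.2 concrete, the following Python sketch evaluates fbp for a candidate mapping x represented, as in the equation, as a list of sets of object indices. It is only an illustration: the concrete packing shown is one feasible assignment assumed to match the instance of Example E1.1.

```python
def f_bp(x, a, b):
    # Equation 1.2: sum, over all bins, of the amount by which the
    # objects packed into that bin exceed the bin size b.
    return sum(max(0, sum(a[j] for j in bin_) - b) for bin_ in x)

# Instance as in Example E1.1: n = 10 objects, k = 5 bins of size b = 5.
a = {1: 4, 2: 4, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2, 10: 3}
x = [{1, 3}, {2, 5}, {4, 6, 7}, {8, 9}, {10}]  # one feasible packing
print(f_bp(x, a, 5))  # 0 -- no bin overflows, so Equation 1.1 holds
```

A mapping x with fbp (x) = 0 is exactly one for which Equation 1.1 holds; any overflow makes fbp (x) positive.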

Example E1.2 (Circuit Layout).


Given are a card with a finite set S of slots, a set C : |C| ≤ |S| of circuits to be placed on

[Figure content: a card with S = 20 × 11 slots holding the circuits C = {A, B, C, D, E, F, G, H, I, J, K, L, M} with the connection list R = {(A,B), (B,C), (C,F), (C,G), (C,M), (D,E), (D,J), (E,F), (F,K), (G,J), (G,M), (H,J), (H,I), (H,L), (I,K), (I,L), (K,L), (L,M)}]

Figure 1.3: An example for circuit layouts.

that card, and a connection list R ⊆ C × C where (c1 , c2 ) ∈ R means that the two circuits
c1 , c2 ∈ C have to be connected. The goal is to find a layout x which assigns each circuit
c ∈ C to a location s ∈ S so that the overall length of wiring required to establish all the
connections r ∈ R becomes minimal, under the constraint that no wire must cross another
one. An example for such a scenario is given in Figure 1.3.
Similar circuit layout problems have existed in a variety of forms for quite a while now [1541]
and are known to be quite hard. They may be extended with constraints such as to also
minimize the heat emission of the circuit and to minimize cross-wire induction and so on.
They may be extended to not only evolve the circuit location but also the circuit types and
numbers [1479, 2792] and thus, can become arbitrarily complex.

Example E1.3 (Job Shop Scheduling).


Job Shop Scheduling11 (JSS) is considered to be one of the hardest classes of scheduling
problems and is also N P-complete [1026]. The goal is to distribute a set of tasks to machines
in a way that all deadlines are met.
In a JSS, a set T of jobs ti ∈ T is given, where each job ti consists of a number of
sub-tasks ti,j . A partial order is defined on the sub-tasks ti,j of each job ti which describes
the precedence of the sub-tasks, i. e., constraints in which order they must be processed.
In the case of a beverage factory, for instance, 10 000 bottles of cola (t1 . . . t10 000 ) and 5000
bottles of lemonade (t10 001 . . . t15 000 ) may need to be produced.
For the cola bottles, the jobs would consist of three sub-tasks, ti,1 to ti,3 for i ∈ 1..10 000,
since the bottles first need to be labeled, then they are filled with cola, and finally closed. It is
clear that the bottles cannot be closed before they are filled (ti,2 ≺ ti,3 ) whereas the labeling
11 http://en.wikipedia.org/wiki/Job_Shop_Scheduling [accessed 2010-08-27]
can take place at any time. For the lemonade, all production steps are the same as the
ones for the cola bottles, except that step ti,2 ∀i ∈ 10 001..15 000 is replaced with filling the
bottles with lemonade.
It is furthermore obvious that the different sub-jobs have to be performed by different
machines mi ∈ M . Each of the machines mi can only process a single job at a time. Finally,
each of the jobs has a deadline until which it must be finished.
The optimization algorithm has to find a schedule S which defines for each job when it has
to be executed and on what machine while adhering to the order and deadline constraints.
Usually, the optimization process would start with random configurations which violate
some of the constraints (typically the deadlines) and step by step reduces the violations
until results are found that fulfill all requirements.
The JSS can be considered as a combinatorial problem since the optimizer creates a
sequence and assignment of jobs and the set of possible starting times (which make sense)
is also finite.
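The constraint violations mentioned above can be counted explicitly. The following Python sketch is only an illustration: the schedule representation (a dictionary mapping each sub-task to a machine and a start/end time) and the tiny one-bottle instance are assumptions, not taken from the book.

```python
def violations(schedule, precedence, deadlines):
    # schedule: {task: (machine, start, end)}
    # precedence: set of (a, b) pairs, task a must finish before b starts
    # deadlines: {task: latest allowed end time}
    v = 0
    for a, b in precedence:  # order constraints
        if schedule[a][2] > schedule[b][1]:
            v += 1
    jobs = list(schedule.values())
    for i in range(len(jobs)):  # each machine runs one task at a time
        for j in range(i + 1, len(jobs)):
            (ma, sa, ea), (mb, sb, eb) = jobs[i], jobs[j]
            if ma == mb and sa < eb and sb < ea:
                v += 1
    for t, d in deadlines.items():  # deadline constraints
        if schedule[t][2] > d:
            v += 1
    return v

# A single cola bottle: label, then fill, then close, on three machines.
s = {"label": ("m1", 0, 1), "fill": ("m2", 1, 2), "close": ("m3", 2, 3)}
print(violations(s, {("fill", "close")}, {"close": 3}))  # 0
```

An optimization process as described above would start from schedules with a positive violation count and reduce it step by step to zero.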

Example E1.4 (Routing).


In a network N composed of a finite set of nodes, find the shortest path from one node n1
to another node n2 . This optimization problem consists of composing a sequence of edges
which form a path from n1 to n2 of the shortest length. This combinatorial problem is
rather simple and has been solved by Dijkstra [796] a long time ago. Nevertheless, it is an
optimization problem.
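Since this simple routing problem is solved exactly by Dijkstra’s algorithm, a compact Python rendering of it may be instructive; the adjacency-list graph format and the small four-node example are assumptions made for illustration.

```python
import heapq

def dijkstra(graph, source, target):
    # graph: dict mapping each node to a list of (neighbor, length) pairs
    dist, prev = {source: 0}, {}
    queue, visited = [(0, source)], set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for neighbor, length in graph[node]:
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor], prev[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    path, node = [target], target  # walk back from the target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

graph = {"n1": [("a", 1), ("b", 4)], "a": [("b", 1), ("n2", 5)],
         "b": [("n2", 1)], "n2": []}
print(dijkstra(graph, "n1", "n2"))  # (['n1', 'a', 'b', 'n2'], 3)
```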

1.2.2 Numerical Optimization Problems

Definition D1.9 (Numerical Optimization Problem). Numerical optimization prob-


lems 12 [1281, 2040] are problems that are defined over problem spaces which are subspaces
of (uncountable infinite) numerical spaces, such as the real or complex vectors (X ⊆ Rn or
X ⊆ Cn ).

Numerical problems defined over Rn (or any kind of space containing it) are called contin-
uous optimization problems. Function optimization [194], engineering design optimization
tasks [2059], classification and data mining tasks [633] are common continuous optimiza-
tion problems. They often can efficiently be solved with Evolution Strategies (Chapter 30),
Differential Evolution (Chapter 33), Evolutionary Programming (Chapter 32), Estimation
of Distribution Algorithms (Chapter 34) or Particle Swarm Optimization (Chapter 39), for
example.

Example E1.5 (Function Roots and Minimization).


The task Find the roots of function g(x) defined in Equation 1.3 and sketched in Figure 1.4
can – at least by the author’s limited mathematical capabilities – hardly be solved manually.
g(x) = x/5 + ln(sin(x²) + e^{tan x} ) + cos x − arctan |x − 3|    (1.3)

However, we can transform it into the optimization problem Minimize the objective function
f1 (x) = |0 − g(x)| = |g(x)| or Minimize the objective function f2 (x) = (0 − g(x))² = (g(x))².
Different from the original formulation, the objective functions f1 and f2 provide us with a
Different from the original formulation, the objective functions f1 and f2 provide us with a
12 http://en.wikipedia.org/wiki/Optimization_(mathematics) [accessed 2010-08-04]

Figure 1.4: The graph of g(x) as given in Equation 1.3 with the four roots x⋆1 to x⋆4 .

measure of how close an element x is to be a valid solution. For the global optima x⋆ of
the problem, f1 (x⋆ ) = f2 (x⋆ ) = 0 holds. Based on this formulation, we can hand any of the
two objective functions to a method capable of solving numerical optimization problems. If
these indeed discover elements x⋆ with f1 (x⋆ ) = f2 (x⋆ ) = 0, we have answered the initial
question since the elements x⋆ are the roots of g(x).
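This transformation can be tried out with any numerical minimizer. The following Python sketch shrinks an interval around the minimizer of f1 (x) = |g(x)| by repeated grid search; to keep the example self-contained, it uses the simple stand-in function g(x) = x² − 2 (whose positive root is √2) instead of Equation 1.3 — the principle is the same.

```python
def find_root(g, lo, hi, steps=1000, rounds=30):
    # Minimize f1(x) = |g(x)| by grid search with interval refinement.
    for _ in range(rounds):
        xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
        best = min(xs, key=lambda x: abs(g(x)))
        width = (hi - lo) / steps
        lo, hi = best - width, best + width  # zoom in around the best point
    return best

# Stand-in for g(x): the positive root of x^2 - 2 is sqrt(2).
root = find_root(lambda x: x * x - 2.0, 0.0, 5.0)
print(round(root, 6))  # 1.414214
```

If the minimizer reaches f1 (x⋆ ) = 0 (up to numerical precision), then x⋆ is a root of g.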

Example E1.6 (Truss Optimization).


In a truss design optimization problem [18] as sketched in Figure 1.5, the locations of n

[Figure content: left, the truss structure with its fixed points (left side), the possible beam locations (gray), and the applied force F (right side); right, a possible optimization result with the chosen beam thicknesses (dark gray) and the unused beams (thin light gray)]

Figure 1.5: A truss optimization problem like the one given in [18].

possible beams (light gray), a few fixed points (left hand side of the truss sketches), and


a point where a weight will be attached (right hand side of the truss sketches, F ). The
thickness di of each of the i = 1..n beams which leads to the most stable truss whilst not
exceeding a maximum volume of material V would be subject to optimization. A structure
as the one sketched on the right hand side of the figure could result from a numerical
optimization process: Some beams receive different non-zero thicknesses (illustrated with
thicker lines). Some other beams have been erased from the design (receive thickness di = 0),
sketched as thin, light gray lines.
Notice that, depending on the problem formulation, truss optimization can also be-
come a discrete or even combinatorial problem: Not in all real-world design tasks can
we choose the thicknesses of the beams in the truss arbitrarily. This may be possi-
ble when synthesizing the support structures for air plane wings or motor blocks. How-
ever, when building a truss from pre-defined building blocks, such as the beams avail-
able for house construction, the set of possible beam thicknesses is finite and, for example,
di ∈ {10mm, 20mm, 30mm, 50mm, 100mm, 300mm} ∀i ∈ 1..n. In such a case, a problem
such as the one sketched in Figure 1.5 effectively becomes combinatorial or, at least, a
discrete integer programming task.

1.2.3 Discrete and Mixed Problems

Of course, numerical problems may also be defined over discrete spaces, such as the Nn0 .
Problems of this type are called integer programming problems:

Definition D1.10 (Integer Programming). An integer programming problem 13 is an


optimization problem which is defined over a finite or countable infinite numerical prob-
lem space (such as Nn0 or Zn ) [1495, 2965].

Basically, combinatorial optimization tasks can be considered as a class of such discrete


optimization problems. Because of the wide range of possible solution structures in combi-
natorial tasks, however, techniques well capable of solving integer programming problems
13 http://en.wikipedia.org/wiki/Integer_programming [accessed 2010-08-04]
are not necessarily suitable for a given combinatorial task. The same goes for tasks which
are suitable for real-valued vector optimization problems: the class of integer programming
problems is, of course, a member of the general numerical optimization tasks. However,
methods for real-valued problems are not necessarily good for integer-valued ones (and
hence, also not for combinatorial tasks). Furthermore, there exist optimization problems
which are combinations of the above classes. For instance, there may be a problem where
a graph must be built which has edges with real-valued properties, i. e., a crossover of a
combinatorial and a continuous optimization problem.

Definition D1.11 (Mixed Integer Programming). A mixed integer programming


problem (MIP) is a numerical optimization problem where some variables stem from fi-
nite (or countable infinite) (numerical) spaces while others stem from uncountable infinite
(numerical) spaces.

1.2.4 Summary

Figure 1.6 is not intended to provide a precise hierarchy of problem types. Instead, it is only

[Figure content: Numerical Optimization Problems encompass Continuous Optimization Problems, Mixed Integer Programming Problems, and Integer Programming Problems; Discrete Optimization Problems encompass Integer Programming Problems and Combinatorial Optimization Problems]

Figure 1.6: A rough sketch of the problem type relations.

a rough sketch of the relations between them. Basically, there are smooth transitions between
many problem classes and the type of a given problem is often subject to interpretation.
The candidate solutions of combinatorial problems stem from finite sets and are discrete.
They can thus always be expressed as discrete numerical vectors and could theoretically
be solved with methods for numerical optimization, such as integer programming techniques.
Of course, every discrete numerical problem can also be expressed as a continuous numeri-
cal problem since any vector from the Rn can easily be transformed into a vector in the Zn
by simply rounding each of its elements. Hence, the techniques for continuous optimization
can theoretically be used for discrete numerical problems (integer programming) as well as
for MIPs and for combinatorial problems.
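The rounding transformation sketched above can be written as a one-line wrapper; in the following Python illustration, the integer objective f_int is made up for demonstration purposes.

```python
def lift_to_continuous(f_int):
    # Let a continuous optimizer work on an integer problem: round each
    # component of the real vector to the nearest integer, then evaluate.
    return lambda x: f_int([round(v) for v in x])

f_int = lambda z: (z[0] - 3) ** 2 + (z[1] + 1) ** 2  # integer objective
f_cont = lift_to_continuous(f_int)
print(f_cont([2.8, -1.2]))  # 0 -- rounds to the integer optimum (3, -1)
```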
However, algorithms for numerical optimization problems often apply numerical meth-
ods, such as adding vectors, multiplying them with some number, or rotating vectors. Com-
binatorial problems, on the other hand, are usually solved by applying combinatorial modi-
fications to solution structures, such as changing the order of elements in a sequence. These
two approaches are fundamentally different and thus, although algorithms for continuous
optimization can theoretically be used for combinatorial optimization, it is quite clear that
this is rarely the case in practice and would generally yield bad results.
Furthermore, the precision with which numbers can be represented on a computer is finite
since the memory available in any possible physical device is limited. From this perspective,
any continuous optimization problem, if solved on a physical machine, is a discrete problem
as well. In theory, it could thus be solved with approaches for combinatorial optimization.
Again, it is quite easy to see that adjusting the values and sequences of bits in a floating
point representation of a real number will probably lead to inferior results or, at least, to
slower optimization speed than using advanced mathematical operations.
Finally, there may be optimization tasks with both, combinatorial and continuous char-
acteristics. Example E1.6 provides a problem which can exist in a continuous and in a
discrete flavor, both of which can be tackled with numerical methods whereas the latter one
can also be solved efficiently with approaches for discrete problems.

1.3 Classes of Optimization Algorithms

In this book, we will only be able to discuss a small fraction of the wide variety of Global
Optimization techniques [1068, 2126]. Before digging any deeper into the matter, we will
attempt to classify some of these algorithms in order to give a rough overview of what will
come in Part III and Part IV. Figure 1.7 sketches a rough taxonomy of Global Optimization
methods. Please be aware that the idea of creating a precise taxonomy of optimization
methods makes no sense. Not only are there many hybrid approaches which combine several
different techniques, there also exist various definitions for what things like metaheuristics,
artificial intelligence, Evolutionary Computation, Soft Computing, and so on exactly are
and which specific algorithms do belong to them and which not. Furthermore, many of
these fuzzy collective terms also widely overlap. The area of Global Optimization is a living
and breathing research discipline. Many contributions come from scientists from various
research areas, ranging from physics to economy and from biology to the fine arts. Just
as important are the numerous ideas developed by practitioners from engineering,
industry, and business. As stated before, Figure 1.7 should thus be considered as just a rough
sketch, as one possible way to classify optimizers according to their algorithm structure. In
the remainder of this section, we will try to outline some of the terms used in Figure 1.7.

1.3.1 Algorithm Classes

Generally, optimization algorithms can be divided in two basic classes: deterministic and
probabilistic algorithms.

1.3.1.1 Deterministic Algorithms

Definition D1.12 (Deterministic Algorithm). In each execution step of a determinis-


tic algorithm, there exists at most one way to proceed. If no way to proceed exists, the
algorithm has terminated.

For the same input data, deterministic algorithms will always produce the same results.
This means that whenever they have to make a decision on how to proceed, they will come
to the same decision each time the available data is the same.

[Figure content, one possible taxonomy of optimizers:
Optimization [609, 2126, 2181, 2714]
- Probabilistic Approaches
  - Metaheuristics (Part III)
    - Hill Climbing (Chapter 26), Simulated Annealing (Chapter 27), Tabu Search (Chapter 40), Extremal Optimization (Chapter 41), GRASP (Chapter 42), Downhill Simplex (Chapter 43), Random Optimization (Chapter 44), Harmony Search [1033]
    - Evolutionary Computation: Evolutionary Algorithms (Chapter 28), comprising Genetic Algorithms (Chapter 29), Evolution Strategies (Chapter 30), Genetic Programming (Chapter 31; SGP in Section 31.3, LGP in Section 31.4, GGGP in Section 31.5, graph-based approaches in Section 31.6), Evolutionary Programming (Chapter 32), Differential Evolution (Chapter 33), EDAs (Chapter 34), and LCS (Chapter 35); Memetic Algorithms (Chapter 36)
    - Swarm Intelligence: ACO (Chapter 37), RFD (Chapter 38), PSO (Chapter 39)
- Deterministic Approaches
  - Search Algorithms (Chapter 46): Uninformed Search (Section 46.2) with BFS (Section 46.2.1), DFS (Section 46.2.2), and IDDFS (Section 46.2.4); Informed Search (Section 46.3) with Greedy Search (Section 46.3.1) and A⋆ Search (Section 46.3.2)
  - Branch & Bound (Chapter 47)
  - Cutting-Plane Method (Chapter 48)]
Figure 1.7: Overview of optimization algorithms.


Example E1.7 (Decision based on Objective Value).
Let us assume, for example, that the optimization process has to select one of the two
candidate solutions x1 and x2 for further investigation. The algorithm could base its decision
on the value the objective function f subject to minimization takes on. If f(x1 ) < f(x2 ), then
x1 is chosen and x2 otherwise. For each two points xi , xj ∈ X, the decision procedure is the
same and deterministically chooses the candidate solution with the better objective value.

Deterministic algorithms are thus used when decisions can efficiently be made on the data
available to the optimization process. This is the case if a clear relation between the char-
acteristics of the possible solutions and their utility for a given problem exists. Then, the
search space can efficiently be explored with, for example, a divide and conquer scheme14 .

1.3.1.2 Probabilistic Algorithms

There may be situations, however, where deterministic decision making may prevent the
optimizer from finding the best results. If the relation between a candidate solution and its
“fitness” is not obvious, dynamically changing, too complicated, or the scale of the search
space is very high, applying many of the deterministic approaches becomes less efficient and
often infeasible. Then, probabilistic optimization algorithms come into play. Here lies the
focus of this book.

Definition D1.13 (Probabilistic Algorithm). A randomized15 (or probabilistic or


stochastic) algorithm includes at least one instruction that acts on the basis of random
numbers. In other words, a probabilistic algorithm violates the constraint of determin-
ism [1281, 1282, 1919, 1966, 1967].

Generally, exact algorithms can be much more efficient than probabilistic ones in many
domains. Probabilistic algorithms have the further drawback of not being deterministic,
i. e., may produce different results in each run even for the same input. Yet, in situations like
the one described in Example E1.8, randomized decision making can be very advantageous.

Example E1.8 (Probabilistic Decision Making (Example E1.7 Cont.)).


In Example E1.7, we described a possible decision step in a deterministic optimization
algorithm, similar to a statement such as if f(x1 ) < f(x2 ) then investigate(x1 );
else investigate(x2 );. Although this decision may be the right one in most cases, it
may be possible that sometimes, it makes sense to investigate a less fit candidate solution.
Hence, it could be a good idea to rephrase the decision to “if f(x1 ) < f(x2 ), then choose
x1 with 90% probability, but in 10% of the situations, continue with x2 ”. In the case that
f(x1 ) < f(x2 ) actually holds, we could draw a random number uniformly distributed in [0, 1)
and if this number is less than 0.9, we would pick x1 and otherwise, investigate x2 . As a
matter of fact, this kind of randomized decision making distinguishes Simulated Annealing
(Chapter 27), which has been proven to be able to always find the global optimum
(possibly in infinite time), from simple Hill Climbing (Chapter 26), which does not have this
feature.
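The randomized acceptance rule sketched above can be written down in a few lines of Python (an illustrative sketch; the function name and the 0.9 threshold are our choices for this example):

```python
import random

def choose_next(x1, x2, f, p_better=0.9):
    """Randomized decision making: pick the candidate with the better
    (smaller) objective value with probability p_better, but continue
    with the worse one in the remaining cases."""
    better, worse = (x1, x2) if f(x1) < f(x2) else (x2, x1)
    return better if random.random() < p_better else worse
```

Run often enough, roughly 90% of the decisions favor the better candidate, while the remaining 10% keep the search from discarding seemingly inferior solutions too early.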

Early work in this area, which has now become one of the most important research fields in
optimization, was started at least sixty years ago. The foundations of many of the efficient
methods currently in use were laid by researchers such as Bledsoe and Browning [327],
Friedberg [995], Robbins and Monro [2313], and Bremermann [407].
14 http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm [accessed 2007-07-09]
15 http://en.wikipedia.org/wiki/Randomized_algorithm [accessed 2007-07-03]
1.3.2 Monte Carlo Algorithms

Most probabilistic algorithms used for optimization are Monte Carlo-based approaches (see
Section 12.4). They trade in guaranteed global optimality of the solutions for a shorter
runtime. This does not mean that the results obtained by using them are incorrect – they
may just not be the global optima. Still, a result a little inferior to the best possible
solution x⋆ is still better than waiting 10^100 years until x⋆ is found16 . . .
An umbrella term closely related to Monte Carlo algorithms is Soft Computing17 (SC),
which summarizes all methods that find solutions for computationally hard tasks (such as
N P-complete problems, see Section 12.2.2) in relatively short time, again by trading in
optimality for higher computation speed [760, 1242, 1430, 2685].

1.3.3 Heuristics and Metaheuristics

Most efficient optimization algorithms which search good individuals in some targeted way,
i. e., all methods better than uninformed searches, utilize information from objective (see
Section 2.2) or heuristic functions. The term heuristic stems from the Greek heuriskein,
which translates to “to find” or “to discover”.

1.3.3.1 Heuristics

Heuristics used in optimization are functions that rate the utility of an element and provide
support for the decision regarding which element out of a set of possible solutions is to be
examined next. Deterministic algorithms usually employ heuristics to define the processing
order of the candidate solutions. Approaches using such functions are called informed search
methods (which will be discussed in Section 46.3 on page 518). Many probabilistic methods,
on the other hand, only consider those elements of the search space in further computations
that have been selected by the heuristic while discarding the rest.

Definition D1.14 (Heuristic). A heuristic18 [1887, 2140, 2276] function φ is a problem-dependent
approximate measure for the quality of one candidate solution.

A heuristic may, for instance, estimate the number of search steps required from a given
solution to the optimal results. In this case, the lower the heuristic value, the better the
solution would be. It is quite clear that heuristics are problem class dependent. They can be
considered as the equivalent of fitness or objective functions in (deterministic) state space
search methods [1467] (see Section 46.3 and the more specific Definition D46.5 on page 518).
An objective function f(x) usually has a clear meaning denoting an absolute utility of a
candidate solution x. Different from that, heuristic values φ(p) often represent more indirect
criteria such as the aforementioned search costs required to reach the global optimum [1467]
from a given individual.

Example E1.9 (Heuristic for the TSP from Example E2.2).


A heuristic function in the Traveling Salesman Problem involving five cities in China as
defined in Example E2.2 on page 45 could, for instance, state “The total distance of this
candidate solution is 1024km longer than the optimal tour.” or “You need to swap no more
than three cities in this tour in order to find the optimal solution”. Such a statement has
a different quality from an objective function just stating the total tour length and being
subject to minimization.
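Such an indirect heuristic can be sketched in Python: we rate a tour by how far its length exceeds a simple lower bound, here half the sum, over all cities, of each city's two cheapest incident edges (every tour must enter and leave each city once, so no tour can be shorter than this). The dict-of-dicts distance matrix and function names are our own illustrative choices:

```python
def tour_length(tour, w):
    """Total length of the cyclic tour (a list of city names)."""
    return sum(w[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def heuristic(tour, w):
    """Indirect heuristic: overshoot of the tour length over a lower
    bound. The bound can never exceed the optimal tour length, so the
    value is always >= 0 and approaches 0 near the optimum."""
    bound = 0.0
    for city in w:
        two_cheapest = sorted(w[city][other] for other in w if other != city)[:2]
        bound += sum(two_cheapest) / 2.0
    return tour_length(tour, w) - bound
```

Unlike an objective function, this value does not state the tour length itself but estimates (a bound on) the distance to the optimum.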
16 Kindly refer to Chapter 12 for a more in-depth discussion of complexity.
17 http://en.wikipedia.org/wiki/Soft_computing [accessed 2007-09-17]
18 http://en.wikipedia.org/wiki/Heuristic_%28computer_science%29 [accessed 2007-07-03]

We will discuss heuristics in more depth in Section 46.3. It should be mentioned that optimization
and artificial intelligence19 (AI) merge seamlessly with each other. This becomes
especially clear when considering that one of the most important research areas of AI is the
design of good heuristics for search, especially for the state space search algorithms which,
here, are listed as deterministic optimization methods.
Anyway, knowing the proximity of the meaning of the terms heuristics and objective
functions, it may as well be possible to consider many of the following, randomized approaches
to be informed searches (which brings us back to the fuzziness of terms). There
is, however, another difference between heuristics and objective functions: They usually
involve more knowledge about the relation of the structure of the candidate solutions and
their utility as may have become clear from Example E1.9.

1.3.3.2 Metaheuristics
This knowledge is a luxury often not available. In many cases, the optimization algorithms
have a “horizon” limited to the search operations and the objective functions with unknown
characteristics. They can produce new candidate solutions by applying search operators to
the currently available genotypes. Yet, the exact result of the operators is (especially in
probabilistic approaches) not clear in advance and has to be measured with the objective
functions as sketched in Figure 1.8. Then, the only choices an optimization algorithm has
are:
1. Which candidate solutions should be investigated further?
2. Which search operations should be applied to them? (If more than one operator is
available, that is.)
[Figure: the objective function f(x ∈ X) sits inside a black box; the metaheuristic can only compute fitness or heuristic values, select the best solutions, and create the new population Pop(t + 1) from the current population Pop(t) via search operations.]

Figure 1.8: Black box treatment of objective functions in metaheuristics.

19 http://en.wikipedia.org/wiki/Artificial_intelligence [accessed 2007-09-17]


Definition D1.15 (Metaheuristic). A metaheuristic20 is a method for solving general
classes of problems. It combines utility measures such as objective functions or heuristics
in an abstract and hopefully efficient way, usually without utilizing deeper insight into their
structure, i. e., by treating them as black box-procedures [342, 1068, 1103].

Metaheuristics, which you can find discussed in Part III, obtain knowledge on the struc-
ture of the problem space and the objective functions by utilizing statistics obtained from
stochastically sampling the search space. Many metaheuristics are inspired by a model of
some natural phenomenon or physical process. Simulated Annealing, for example, decides
which candidate solution is used during the next search step according to the Boltzmann
probability factor of atom configurations of solidifying metal melts. The Raindrop Method
emulates the waves on a sheet of water after a raindrop fell into it. There also are tech-
niques without direct real-world role model like Tabu Search and Random Optimization.
Unified models of metaheuristic optimization procedures have been proposed by Osman
[2103], Rayward-Smith [2275], Vaessens et al. [2761, 2762], and Taillard et al. [2650].
Obviously, metaheuristics are not necessarily probabilistic methods. The A⋆ search, for
instance, a deterministic informed search approach, is considered to be a metaheuristic too
(although it incorporates more knowledge about the structure of the heuristic functions
than, for instance, Evolutionary Algorithms do). Still, in this book, we are interested in
probabilistic metaheuristics the most.
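The black box treatment from Definition D1.15 can be condensed into a minimal skeleton (a sketch of the general pattern, not an algorithm taken from this book): the optimizer only ever samples f, it never looks inside it.

```python
import random

def metaheuristic(create, mutate, f, steps=1000):
    """Minimal black-box loop: `create` is a nullary and `mutate` a
    unary search operation; f is sampled but never inspected."""
    best = create()
    best_y = f(best)
    for _ in range(steps):
        cand = mutate(best)
        y = f(cand)        # the only information channel from the problem
        if y < best_y:     # keep the better point
            best, best_y = cand, y
    return best
```

With `create` returning a start point and `mutate` adding a small random perturbation, this already is a simple Hill Climber; exchanging the greedy acceptance rule for a randomized one turns it into a Simulated-Annealing-style search.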

1.3.3.2.1 Summary: Difference between Heuristics and Metaheuristics

The terms heuristic, objective function, fitness function (which will be introduced in detail
in Section 28.1.2.5 and Section 28.3 on page 274), and metaheuristic may easily be mixed up
and it is maybe not entirely obvious which is which and what the differences are. Therefore,
we here want to give a short outline of the main differences:

1. Heuristic. Problem-specific measure of the potential of a solution as a step towards the
global optimum. May be absolute/direct (describe the utility of a solution) or indirect
(e.g., approximate the distance of the solution to the optimum in terms of search
steps).

2. Objective Function. Problem-specific absolute/direct quality measure which rates one
aspect of a candidate solution.

3. Fitness Function. Secondary utility measure which may combine information from a
set of primary utility measures (such as objective functions or heuristics) into a single
rating for an individual. This process may also take place relative to a set of given
individuals, i. e., rate the utility of a solution in comparison to another set of solutions
(such as a population in Evolutionary Algorithms).

4. Metaheuristic. General, problem-independent (ternary) approach to select solutions
for further investigation in order to solve an optimization problem. It therefore uses
a (primary or secondary) utility measure such as a heuristic, objective function, or
fitness function.
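The role of a fitness function as a secondary measure can be illustrated with a weighted sum, one of the simplest ways to collapse several primary measures into a single rating (the weights and objectives below are arbitrary examples, not taken from this book):

```python
def fitness(x, objectives, weights):
    """Secondary utility measure: combine several objective values
    into a single rating (here: a weighted sum)."""
    return sum(w * f(x) for w, f in zip(weights, objectives))
```

Other combinations, such as Pareto-based ranking (cf. Section 3.3.5), rate a solution relative to a set of other solutions instead of computing an absolute score.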

1.3.4 Evolutionary Computation

An important class of probabilistic Monte Carlo algorithms is Evolutionary Computation21


(EC). It encompasses all randomized metaheuristic approaches which iteratively refine a set
(population) of multiple candidate solutions.
20 http://en.wikipedia.org/wiki/Metaheuristic [accessed 2007-07-03]
21 http://en.wikipedia.org/wiki/Evolutionary_computation [accessed 2007-09-17]
Some of the most important EC approaches are Evolutionary Algorithms and Swarm
Intelligence, which will be discussed in-depth in this book. Swarm Intelligence is inspired
by the fact that natural systems composed of many independent, simple agents – ants in the
case of Ant Colony Optimization (ACO) and birds or fish in the case of Particle Swarm
Optimization (PSO) – are often able to find pieces of food or shortest-distance routes very efficiently.
Evolutionary Algorithms, on the other hand, copy the process of natural evolution and
treat candidate solutions as individuals which compete and reproduce in a virtual envi-
ronment. Generation after generation, these individuals adapt better and better to the
environment (defined by the user-specified objective function(s)) and thus, become more
and more suitable solutions to the problem at hand.

1.3.5 Evolutionary Algorithms

Evolutionary Algorithms can again be divided into very diverse approaches, spanning from
Genetic Algorithms (GAs), which try to find bit- or integer-strings of high utility (see, for
instance, Example E4.2 on page 83), and Evolution Strategies (ESs), which do the same with
vectors of real numbers, to Genetic Programming (GP), where optimal tree data structures
or programs are sought. The variety of Evolutionary Algorithms is the focus of this book,
also including Memetic Algorithms (MAs), which are EAs hybridized with other search
algorithms.

1.4 Classification According to Optimization Time

The taxonomy just introduced classifies the optimization methods according to their algo-
rithmic structure and underlying principles, in other words, from the viewpoint of theory. A
software engineer or a user who wants to solve a problem with such an approach is however
more interested in its “interfacing features” such as speed and precision.
Speed and precision are often conflicting objectives, at least in terms of probabilistic
algorithms. A general rule of thumb is that you can gain improvements in accuracy of
optimization only by investing more time. Scientists in the area of global optimization
try to push this Pareto frontier22 further by inventing new approaches and enhancing or
tweaking existing ones. When it comes to time constraints and hence, the required speed of
the optimization algorithm, we can distinguish two types of optimization use cases.

Definition D1.16 (Online Optimization). Online optimization problems are tasks that
need to be solved quickly, in a time span usually ranging from ten milliseconds to a few
minutes.

In order to find a solution in this short time, optimality is often significantly traded in
for speed gains. Examples for online optimization are robot localization, load balancing,
services composition for business processes, or updating a factory’s machine job schedule
after new orders came in.

Example E1.10 (Robotic Soccer).


Figure 1.9 shows some of the robots of the robotic soccer team CarpeNoctem23 [179] as they play in the
RoboCup German Open 2009. In the games of the RoboCup competition, two teams, each
consisting of a fixed number of robotic players, oppose each other on a soccer field. The
rules are similar to human soccer and the team which scores the most goals wins.
When a soccer playing robot approaches the goal of the opposing team with the ball, it
has only a very limited time frame to decide what to do [2518]. Should it shoot directly at
22 Pareto frontiers have been discussed in Section 3.3.5 on page 65.
23 http://carpenoctem.das-lab.net

Figure 1.9: The robotic soccer team CarpeNoctem (on the right side) on the fourth day of
the RoboCup German Open 2009.

the goal? Should it try to pass the ball to a team mate? Should it continue to drive towards
the goal in order to get into a better shooting position?
To find a sequence of actions which maximizes the chance of scoring a goal in a contin-
uously changing world is an online optimization problem. If a robot takes too long to make
a decision, opposing and cooperating robots may already have moved to different locations
and the decision may have become outdated.
Not only must the problem be solved in a few milliseconds, this process often involves
finding an agreement between different, independent robots each having a possibly different
view on what is currently going on in the world. Additionally, the degrees of freedom in the
real world, the points where the robot can move or shoot the ball to, are basically infinite.
It is clear that in such situations, no complex optimization algorithm can be applied.
Instead, only a smaller set of possible alternatives can be tested or, alternatively, predefined
programs may be processed, thus ignoring the optimization character of the decision making.

Online optimization tasks are often carried out repetitively – new orders will, for instance,
continuously arrive in a production facility and need to be scheduled to machines in a way
that minimizes the waiting time of all jobs.

Definition D1.17 (Offline Optimization). In offline optimization problems, time is not


so important and a user is willing to wait maybe up to days or weeks if she can get an
optimal or close-to-optimal result.

Such problems concern, for example, design optimization, some data mining tasks, or creating
long-term schedules for transportation crews. These optimization processes will usually be
carried out only once in a long time.
Before doing anything else, one must be sure to which of these two classes the
problem to be solved belongs. There are two measures which characterize whether an
optimization algorithm is suited to the timing constraints of a problem:

1. The time consumption of the algorithm itself.


2. The number of evaluations τ until the optimum (or a sufficiently good candidate
solution) is discovered, i. e., the first hitting time FHT (or its expected value).

An algorithm with high internal complexity and a low FHT may be suitable for a problem
where evaluating the objective functions takes long. On the other hand, it may be less
feasible than a simple approach if the quality of a candidate solution can be determined
quickly.
1.5 Number of Optimization Criteria

Optimization algorithms can be divided into

1. such which try to find the best values of single objective functions f (see Section 3.1
on page 51),
2. such that optimize sets f of target functions (see Section 3.3 on page 55), and
3. such that optimize one or multiple objective functions while adhering to additional
constraints (see Section 3.4 on page 70).

This distinction between single-objective, multi-objective, and constraint optimization will
be discussed in depth in Section 3.3. In this chapter, we will also introduce unifying
approaches which allow us to transform multi-objective problems into single-objective ones
and to gain a joint perspective on constrained and unconstrained optimization.

1.6 Introductory Example

In Algorithm 1.1, we want to give a short introductory example of how metaheuristic
optimization algorithms may look and of which modules they can be composed. Algorithm 1.1
is based on the bin packing Example E1.1 given on page 25. It should be noted that
we do not claim that Algorithm 1.1 is any good at solving the problem; as a matter of fact, the
opposite is the case. However, it possesses typical components which can be found in most
metaheuristics and can relatively easily be understood.
As stated in Example E1.1, the goal of bin packing is to decide whether an assignment x of n
objects (having the sizes a1 , a2 , .., an ) to (at most) k bins, each able to hold a volume of b,
exists where the capacity of no bin is exceeded. This decision problem can be translated
into an optimization problem where it is the goal to discover the x⋆ where the bin capacities
are exceeded the least, i.e., to minimize the objective function fbp given in Equation 1.2
on page 26. Objective functions are discussed in Section 2.2 and in Algorithm 1.1, fbp is
evaluated in Line 23.
Obviously, if fbp (x⋆ ) = 0, the answer to the decision problem is “Yes, an assignment
which does not violate the constraints exists.” and otherwise, the answer is “No, it is not
possible to pack the n specified objects into the k bins of size b.”
For fbp in Example E1.1, we represented a possible assignment of the objects to bins as
a tuple x consisting of k sets. Each set can hold numbers from 1..n. If, for one specific x,
5 ∈ x[0], for example, then the fifth object should be placed into the first bin and 3 ∈ x[3]
means that the third object is to be put into the fourth bin. The set X of all such possible
solutions x is called the problem space and is discussed in Section 2.1.
Instead of exploring this space directly, Algorithm 1.1 uses a different search space G
(see Section 4.1). Every element of this search space, a so-called genotype, is a
permutation of the first n natural numbers. The sequence of these numbers stands for the
assignment order of objects to bins.
This representation is translated between Lines 13 and 22 to the tuple-of-sets representa-
tion which can be evaluated by the objective function fbp . Basically, this genotype-phenotype
mapping gpm : G ↦ X (see Section 4.3) starts with a tuple of k empty sets and sequentially
processes the genotypes g from the beginning to the end. In each processing step, it takes
the next number from the genotype and checks whether the object identified by this number
still fits into the current bin bin. If so, the number is placed into the set in the tuple x at
index bin. Otherwise, if there are empty bins left, bin is increased. This way, the first k − 1
bins are always filled to at most their capacity and only the last bin may overflow. The
translation between the genotypes g ∈ G and the candidate solutions x ∈ X is deterministic.
It is known that bin packing is N P-hard, which means that there currently exists no
algorithm which is always better than enumerating all possible solutions and testing each

Algorithm 1.1: x⋆ ←− optimizeBinPacking(k, b, a) (see Example E1.1)

Input: k ∈ N1 : the number of bins
Input: b ∈ N1 : the size of each single bin
Input: a1 , a2 , .., an ∈ N1 : the sizes of the n objects
Data: g, g ⋆ ∈ G: the current and best genotype so far (see Definition D4.2)
Data: x, x⋆ ∈ X: the current and best phenotype so far (see Definition D2.2)
Data: y, y ⋆ ∈ R+ : the current and best objective value (see Definition D2.3)
Output: x⋆ ∈ X: the best phenotype found (see Definition D3.5)

 1 begin
 2   g ⋆ ←− (1, 2, .., n)    // initialization: nullary search operation searchOp : ∅ ↦ G (see Section 4.2)
 3   y ⋆ ←− +∞
 4   repeat
       // perform a random search step: unary search operation searchOp : G ↦ G (see Section 4.2)
 5     g ←− g ⋆
 6     i ←− ⌊randomUni[0, n)⌋
 7     repeat
 8       j ←− ⌊randomUni[0, n)⌋
 9     until i ≠ j
10     t ←− g [i]
11     g [i] ←− g [j]
12     g [j] ←− t
       // compute the bins: genotype-phenotype mapping gpm : G ↦ X (see Section 4.3)
13     x ←− (∅, ∅, . . . , ∅)    // k empty sets
14     bin ←− 0
15     sum ←− 0
16     for i ←− 0 up to n − 1 do
17       j ←− g [i]
18       if ((sum + aj ) > b) ∧ (bin < (k − 1)) then
19         bin ←− bin + 1
20         sum ←− 0
21       x[bin] ←− x[bin] ∪ {j}
22       sum ←− sum + aj
       // compute the objective value: objective function f : X ↦ R (see Section 2.2)
23     y ←− fbp (x)
       // the search logic: always choose the better point (What is good? see Chapter 3 and 5.2)
24     if y < y ⋆ then
25       g ⋆ ←− g
26       x⋆ ←− x
27       y ⋆ ←− y
28   until y ⋆ = 0    // termination criterion (see Section 6.3.3)
29   return x⋆
of them. Our Algorithm 1.1 tries to explore the search space by starting with putting the
objects into the bins in their original order (Line 2). In each optimization step, it randomly
exchanges two objects (Lines 5 to 12). This operation basically is a unary search operation
which maps the elements of the search space G to new elements in G and creates new
genotypes g. The initialization step can be viewed as a nullary search operation. The basic
definitions for search operations are given in Section 4.2 on page 83.
In Line 24, the optimizer compares the objective value of the best assignment x⋆ found
so far with the objective value of the candidate solution x decoded from the newly created
genotype g with the genotype-phenotype mapping “gpm”. If the new phenotype x is better
than the old one x⋆ , the search will from now on use its corresponding genotype (g ⋆ ←− g)
for creating new candidate solutions with the search operation and remembers x as well as
its objective value y. A discussion of what the term better actually means in optimization
is given in Chapter 3.
The search process in Algorithm 1.1 continues until a perfect packing, i. e., one with no
overflowing bin, is discovered. This termination criterion (see Section 6.3.3), checked in
Line 28, is not chosen too wisely, since if no solution exists, the algorithm will run
forever. Effectively, this Hill Climbing approach (see Chapter 26 on page 229) is thus a Las
Vegas algorithm (see Definition D12.2 on page 148).
In the following chapters, we will put a simple and yet comprehensive theory around
the structure of metaheuristics. We will give clear definitions for the typical modules of an
optimization algorithm and outline how they interact. Algorithm 1.1 here served as example
to show the general direction of where we will be going.
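For concreteness, here is a compact Python transcription of Algorithm 1.1. The function names and the step limit (so the loop also terminates on infeasible instances) are our additions, and fbp is taken as the total overflow over all bins — our reading of Equation 1.2, which is not reproduced in this chapter:

```python
import random

def f_bp(x, a, b):
    # objective: total bin overflow (our reading of Eq. 1.2); 0 = feasible
    return sum(max(0, sum(a[j - 1] for j in s) - b) for s in x)

def gpm(g, k, b, a):
    # genotype-phenotype mapping: fill the k bins in the order given by g;
    # only the last bin may overflow (Lines 13-22 of Algorithm 1.1)
    x = [set() for _ in range(k)]
    bin_i, s = 0, 0
    for j in g:
        if s + a[j - 1] > b and bin_i < k - 1:
            bin_i, s = bin_i + 1, 0
        x[bin_i].add(j)
        s += a[j - 1]
    return x

def optimize_bin_packing(k, b, a, max_steps=100_000):
    n = len(a)
    g_best = list(range(1, n + 1))          # nullary search operation
    x_best = gpm(g_best, k, b, a)
    y_best = f_bp(x_best, a, b)
    for _ in range(max_steps):
        if y_best == 0:                     # termination criterion
            break
        g = g_best[:]                       # unary search op: swap two objects
        i = random.randrange(n)
        j = random.randrange(n)
        while i == j:
            j = random.randrange(n)
        g[i], g[j] = g[j], g[i]
        x = gpm(g, k, b, a)
        y = f_bp(x, a, b)
        if y < y_best:                      # always choose the better point
            g_best, x_best, y_best = g, x, y
    return x_best, y_best
```

On the instance from Task 4 (k = 5, b = 6, a = (6, 5, 4, 3, 3, 3, 2, 2, 1)), a perfect packing such as {6}, {5, 1}, {4, 2}, {3, 3}, {3, 2} exists, so the search can in principle terminate with y = 0 — though, being a Hill Climber, it may also stall on a plateau.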
Tasks T1

1. Name five optimization problems which can be encountered in daily life. Describe
the criteria which are subject to optimization, whether they should be maximized or
minimized, and what the investigated space of possible solutions looks like.
[10 points]

2. Name five optimization problems which can be encountered in engineering, business,


or science. Describe them in the same way as requested in Task 1.
[10 points]

3. Classify the problems from Task 1 and 2 into numerical, combinatorial, discrete, and
mixed integer programming, the classes introduced in Section 1.2 and sketched in Fig-
ure 1.6 on page 30. Check if different classifications are possible for the problems and
if so, outline how the problems should be formulated in each case. The truss opti-
mization problem (see Example E1.6 on page 29) is, for example, usually considered
to be a numerical problem since the thickness of each beam is a real value. If there
is, however, a finite set of beam thicknesses we can choose from, it becomes a discrete
problem and could even be considered as a combinatorial one.
[10 points]

4. Implement Algorithm 1.1 on page 40 in a programming language of your choice. Apply


your implementation to the bin packing instance given in Figure 1.2 on page 25 and
to the problem k = 5, b = 6, a = (6, 5, 4, 3, 3, 3, 2, 2, 1) and (hence) n = 9. Does the
algorithm work? Describe your observations, use a step-by-step debugger if necessary.
[20 points]

5. Find at least three features which make Algorithm 1.1 on page 40 rather unsuitable for
practical applications. Describe these drawbacks, their impact, and point out possible
remedies. You may use an implementation of the algorithm as requested in Task 4 for
gaining experience with what is problematic for the algorithm.
[6 points]

6. What are the advantages and disadvantages of randomized/probabilistic optimization


algorithms?
[4 points]

7. In which way are the requirements for online and offline optimization different? Does
a need for online optimization influence the choice of the optimization algorithm and
the general algorithm structure which can be used in your opinion?
[5 points]

8. What are the differences between solving general optimization problems and specific
ones? Discuss especially the difference between black box optimization and solving
functions which are completely known and maybe even can be differentiated.
[5 points]
Chapter 2
Problem Space and Objective Functions

2.1 Problem Space: What does it look like?

The first step which has to be taken when tackling any optimization problem is to define the
structure of the possible solutions. When minimizing a simple mathematical function, the
solution will be a real number, and if we wish to determine the optimal shape of an airplane’s
wing, we are looking for a blueprint describing it.

Definition D2.1 (Problem Space). The problem space (phenome) X of an optimization


problem is the set containing all elements x which could be its solution.

Usually, more than one problem space can be defined for a given optimization problem. A
few lines before, we said that as problem space for minimizing a mathematical function, the
real numbers R would be fine. On the other hand, we could as well restrict ourselves to
the natural numbers N0 or widen the search to the whole complex plane C. This choice has
major impact: On one hand, it determines which solutions we can possibly find. On the
other hand, it also has subtle influence on the way in which an optimization algorithm can
perform its search. Between each two different points in R, for instance, there are infinitely
many other numbers, while in N0 , there are not. Then again, unlike N0 and R, there is no
total order defined on C.

Definition D2.2 (Candidate Solution). A candidate solution (phenotype) x is an ele-


ment of the problem space X of a certain optimization problem.

In this book, we will especially focus on evolutionary optimization algorithms. One of the
oldest such methods, the Genetic Algorithm, is inspired by natural evolution and re-uses
many terms from biology. In dependence on Genetic Algorithms, we synonymously refer
to the problem space as phenome and to candidate solutions as phenotypes. The problem
space X is often restricted by practical constraints. It is, for instance, impossible to take
all real numbers into consideration in the minimization process of a real function. On our
off-the-shelf CPUs or with the Java programming language, we can only use 64 bit floating
point numbers. With them, numbers can only be expressed up to a certain precision. If the
computer can only handle numbers up to 15 decimals, an element x = 1 + 10−16 cannot be
discovered.
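This limit is easy to observe directly (a quick sketch in Python, whose float type is exactly such a 64 bit number):

```python
# 1 + 10^-16 is rounded back to 1.0: this "element" of R is
# indistinguishable from 1 in 64 bit floating point arithmetic
x = 1.0 + 1e-16
print(x == 1.0)             # the two values compare equal
print(1.0 + 1e-15 == 1.0)   # one decimal order larger is representable
```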
2.2 Objective Functions: Is it good?

The next basic step when defining an optimization problem is to define a measure for what
is good. Such measures are called objective functions. As parameter, they take a candidate
solution x from the problem space X and return a value which rates its quality.

Definition D2.3 (Objective Function). An objective function f : X ↦ R is a mathematical
function which is subject to optimization.

In Listing 55.8 on page 712, you can find a Java interface resembling Definition D2.3.
Usually, objective functions are subject to minimization. In other words, the smaller f(x),
the better is the candidate solution x ∈ X. Notice that although we state that an objective
function is a mathematical function, this is not always fully true. In Section 51.10 on
page 646 we introduce the concept of functions as functional binary relations (see ?? on
page ??) which assign to each element x from the domain X one and only one element from
the codomain (in case of the objective functions, the codomain is R). However, the evaluation
of the objective functions may incorporate randomized simulations (Example E2.3) or even
human interaction (Example E2.6). In such cases, the functionality constraint may be
violated. For the purpose of simplicity, let us, however, stick with the concept given in
Definition D2.3.
In the following, we will use several examples to build an understanding of what the
three concepts problem space X, candidate solution x, and objective function f mean. These
examples should give an impression of the variety of possible ways in which they can be
specified and structured.

Example E2.1 (A Stone’s Throw).


A very simple optimization problem is the question: Which is the best velocity with which I should
throw1 a stone (at an angle of α = 15◦ ) so that it lands exactly 100m away (assuming uniform
gravity and the absence of drag and wind)? Every one of us has solved countless such tasks
in school and it probably never came to our mind to consider them as optimization problems.

[Figure: the stone’s trajectory, thrown at angle α; d(x) is the distance covered and f(x) the deviation from the 100m target.]

Figure 2.1: A sketch of the stone’s throw in Example E2.1.

The problem space X of this problem equals the real numbers R and we can define
an objective function f : R ↦ R which denotes the absolute difference between the actual
throw distance d(x) and the target distance 100m when a stone is thrown with x m/s, as
sketched in Figure 2.1:

f(x) = |100m − d(x)| (2.1)
d(x) = (x^2 /g) · sin(2α) (2.2)
d(x) ≈ 0.051 s^2 /m · x^2 for g ≈ 9.81 m/s^2 , α = 15◦ (2.3)

1 http://en.wikipedia.org/wiki/Trajectory [accessed 2009-06-13]


This problem can easily be solved and we can obtain the (globally) optimal solution x⋆ ,
since we know that the best value which f can take on is zero:

f(x⋆ ) = 0 ≈ 100m − 0.051 s^2 /m · (x⋆ )^2 (2.4)
0.051 s^2 /m · (x⋆ )^2 ≈ 100m (2.5)
x⋆ ≈ sqrt(100m / (0.051 s^2 /m)) ≈ sqrt(1960.78 m^2 /s^2 ) ≈ 44.28 m/s (2.6)
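The closed-form result can be double-checked numerically (a short sketch; the small deviation from 44.28 m/s in the exact computation below stems from the rounded constant 0.051 used in the text):

```python
import math

def d(v, alpha_deg=15.0, g=9.81):
    """Throw distance for launch speed v in m/s (no drag): v^2/g * sin(2*alpha)."""
    return v * v / g * math.sin(math.radians(2.0 * alpha_deg))

def f(v):
    """Objective: absolute deviation from the 100 m target."""
    return abs(100.0 - d(v))

# solving f(v*) = 0 in closed form: v* = sqrt(100 * g / sin(2*alpha))
v_star = math.sqrt(100.0 * 9.81 / math.sin(math.radians(30.0)))
```

This yields v* ≈ 44.29 m/s, and f(v*) is zero up to floating point noise.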

Example E2.1 was very easy. We had a very clear objective function f for which even the
precise mathematical definition was available and, if not known beforehand, could have
easily been obtained from literature. Furthermore, the space of possible solutions, the
problem space, was very simple since it equaled the (positive) real numbers.

Example E2.2 (Traveling Salesman Problem).


The Traveling Salesman Problem [118, 1124, 1694] was first mentioned in 1831 by Voigt
[2816] and a similar (historic) problem was stated by the Irish mathematician Hamilton
[1024]. Its basic idea is that a salesman wants to visit n cities in the shortest possible
time under the assumption that no city is visited twice and that he will again arrive at the
origin by the end of the tour. The pairwise distances of the cities are known. This problem
can be formalized with the aid of graph theory (see Chapter 52) as follows:

Definition D2.4 (Traveling Salesman Problem). The goal of the Traveling Salesman
Problem (TSP) is to find a cyclic path of minimum total weight2 which visits all vertices of
a weighted graph.

w(a, b)      Shanghai   Hefei     Guangzhou   Chengdu
Beijing      1244 km    1044 km   2174 km     1854 km
Chengdu      2095 km    1615 km   1954 km
Guangzhou    1529 km    1257 km
Hefei         472 km
Figure 2.2: An example for Traveling Salesman Problems, based on data from GoogleMaps.

Assume that we have the task to solve the Traveling Salesman Problem given in Figure 2.2,
i. e., to find the shortest cyclic tour through Beijing, Chengdu, Guangzhou, Hefei, and
Shanghai. The (symmetric) distance matrix3 for computing a tour’s length is given in the
figure, too.
Here, the set of possible solutions X is not as trivial as in Example E2.1. Instead of
being a mapping from the real numbers to the real numbers, the objective function f takes
2 see Equation 52.3 on page 651
3 Based on the routing utility GoogleMaps (http://maps.google.com/ [accessed 2009-06-16]).
46 2 PROBLEM SPACE AND OBJECTIVE FUNCTIONS
one possible permutation of the five cities as parameter. Since the tours which we want
to find are cyclic, we only need to take the permutations of four cities into consideration.
If we, for instance, assume that the tour starts in Hefei, it will also end in Hefei and each
possible solution is completely characterized by the sequence in which Beijing, Chengdu,
Guangzhou, and Shanghai are visited. Hence, X is the set of all such permutations.
X = Π({Beijing, Chengdu, Guangzhou, Shanghai}) (2.7)
We can furthermore halve the search space by recognizing that it plays no role for the
total distance in which direction we take our tour x ∈ X. In other words, a tour
(Hefei → Shanghai → Beijing → Chengdu → Guangzhou → Hefei) has the same length and
is equivalent to (Hefei → Guangzhou → Chengdu → Beijing → Shanghai → Hefei). Hence,
for a TSP with n cities, there are (1/2) · (n − 1)! possible solutions. In our case, this amounts
to 12 unique permutations.
f(x) = w(Hefei, x[0]) + ∑_{i=0}^{2} w(x[i], x[i+1]) + w(x[3], Hefei)        (2.8)

The objective function f : X 7→ R – the criterion subject to optimization – is the total
distance of a tour. It equals the sum of the distances between the consecutive cities x[i]
in the list x representing the permutation, plus the distances to and from the start and end
point Hefei. We again want to minimize it. Computing it is rather simple but, unlike in
Example E2.1, we cannot simply solve for the optimal result x⋆ analytically. As a matter of
fact, the Traveling Salesman Problem is among the hardest known optimization problems.
Here, we will determine x⋆ with a so-called brute force 4 approach: We enumerate all
possible solutions and their corresponding objective values and then pick the best one.

the tour xi f(xi )


x1 Hefei → Beijing → Chengdu → Guangzhou → Shanghai → Hefei 6853km
x2 Hefei → Beijing → Chengdu → Shanghai → Guangzhou → Hefei 7779km
x3 Hefei → Beijing → Guangzhou → Chengdu → Shanghai → Hefei 7739km
x4 Hefei → Beijing → Guangzhou → Shanghai → Chengdu → Hefei 8457km
x5 Hefei → Beijing → Shanghai → Chengdu → Guangzhou → Hefei 7594km
x6 Hefei → Beijing → Shanghai → Guangzhou → Chengdu → Hefei 7386km
x7 Hefei → Chengdu → Beijing → Guangzhou → Shanghai → Hefei 7644km
x8 Hefei → Chengdu → Beijing → Shanghai → Guangzhou → Hefei 7499km
x9 Hefei → Chengdu → Guangzhou → Beijing → Shanghai → Hefei 7459km
x10 Hefei → Chengdu → Shanghai → Beijing → Guangzhou → Hefei 8385km
x11 Hefei → Guangzhou → Beijing → Chengdu → Shanghai → Hefei 7852km
x12 Hefei → Guangzhou → Chengdu → Beijing → Shanghai → Hefei 6781km

Table 2.1: The unique tours for Example E2.2.


From Table 2.1 we can easily spot the best solution x⋆ = x12 , as the route over
Guangzhou, Chengdu, Beijing, and Shanghai, both starting and ending in Hefei. With a
total distance of f(x12 ) = 6781 km, it is more than 1500 km shorter than the worst solution
x4 .
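Table 2.1 can be reproduced with a few lines of code; a brute-force sketch (the identifiers are ours) using the distance matrix of Figure 2.2:

```python
from itertools import permutations

# Symmetric distance matrix from Figure 2.2 (in km).
D = {
    ("Beijing", "Shanghai"): 1244, ("Beijing", "Hefei"): 1044,
    ("Beijing", "Guangzhou"): 2174, ("Beijing", "Chengdu"): 1854,
    ("Chengdu", "Shanghai"): 2095, ("Chengdu", "Hefei"): 1615,
    ("Chengdu", "Guangzhou"): 1954, ("Guangzhou", "Shanghai"): 1529,
    ("Guangzhou", "Hefei"): 1257, ("Hefei", "Shanghai"): 472,
}

def w(a, b):
    """Edge weight, symmetric in its arguments."""
    return D.get((a, b)) or D[(b, a)]

def f(x):
    """Objective of Equation 2.8: length of the cyclic tour Hefei -> x -> Hefei."""
    stops = ["Hefei"] + list(x) + ["Hefei"]
    return sum(w(stops[i], stops[i + 1]) for i in range(len(stops) - 1))

# Brute force: enumerate all 4! = 24 permutations (12 unique up to direction)
# and pick the shortest tour.
best = min(permutations(["Beijing", "Chengdu", "Guangzhou", "Shanghai"]), key=f)
print(best, f(best))  # ('Guangzhou', 'Chengdu', 'Beijing', 'Shanghai') 6781
```

This reproduces x12 from Table 2.1 with its tour length of 6781 km.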

Enumerating all possible solutions is, of course, the most time-consuming optimization
approach possible. For TSPs with more cities, it quickly becomes infeasible, since the number
of solutions grows faster than exponentially ((n/3)ⁿ ≤ n!). Furthermore, in the simple
Example E2.1, enumerating all candidate solutions would even have been impossible since X = R.
From this example we can learn three things:
4 http://en.wikipedia.org/wiki/Brute-force_search [accessed 2009-06-16]
1. The problem space containing the parameters of the objective functions is not re-
stricted to the real numbers but can, indeed, have a very different structure.
2. There are objective functions which cannot be resolved mathematically. As a matter
of fact, most “interesting” optimization problems have this feature.
3. Enumerating all solutions is not a good optimization approach and only applicable in
“small” or trivial problem instances.
Although the objective function of Example E2.2 cannot be solved for x, its value can still
easily be computed for any given candidate solution. Yet, there are many optimization
problems where even this is not the case.

Example E2.3 (Antenna Design).


Designing antennas is an optimization problem which has been approached by many
researchers such as Chattoraj and Roy [532], Choo et al. [575], John and Ammann
[1450, 1451], Koza et al. [1615], Lohn et al. [1769, 1770], and Villegas et al. [2814], amongst
others. Rahmat-Samii and Michielssen dedicate no less than four chapters to it in their
book on the application of (evolutionary) optimization methods to electromagnetism-related
tasks [2250]. Here, the problem spaces X are the sets of feasible antenna designs with many
degrees of freedom in up to three dimensions, also often involving wire thickness and geom-
etry.
The objective functions usually involve maximizing the antenna gain5 and bandwidth
together with constraints concerning, for instance, weight and shape of the proposed con-
structions. However, computing the gain (depending on the signal frequency and radiation
angle) of an arbitrarily structured antenna is not trivial. In most cases, a simulation
environment and considerable computational power are required. Besides the fact that there are
many possible antenna designs, different from Example E2.2 the evaluation of even small
sets of candidate solutions may take a substantial amount of time.

Example E2.4 (Analog Electronic Circuit Design).


A similar problem is the automated analog circuit design, where a circuit diagram involving
a combination of different electronic components (resistors, capacitors, inductors, diodes,
and/or transistors) is sought which fulfills certain requirements. Here, the tasks span from
the synthesis of amplifiers [1607, 1766], filters [1766], such as Butterworth [1608, 1763, 1764]
and Chebycheff filters [1608], to analog robot controllers [1613].
Computing the features of analog circuits can be done by hand with some effort, but this
requires deeper knowledge of electrical engineering. If an analysis of the frequency and timing
behavior is required, the difficulty of the equations quickly gets out of hand. Therefore, in
order to evaluate the fitness of circuits, simulation environments [2249] like SPICE [2815]
are normally used. Typically, a set of server computers is employed to execute these external
programs so the utility of multiple candidate solutions can be determined in parallel.

Example E2.5 (Aircraft Design).


Aircraft design is another domain where many optimization problems can be spotted.
Obayashi [2059] and Oyama [2111], for instance, tried to find wing designs with minimal
drag and weight but maximum tank volume, whereas Chung et al. [579] searched for a geometry
with minimum drag and shock pressure/sonic boom for transonic aircraft and Lee and
Hajela [1704, 1705] optimized the design of rotor blades for helicopters. Like in the electrical
engineering example, the computational costs of computing the objective values rating the
aforementioned features of the candidate solutions are high and involve simulating the air
flows around the wings and blades.
5 http://en.wikipedia.org/wiki/Antenna_gain [accessed 2009-06-19]

Example E2.6 (Interactive Optimization).


Especially in the area of Evolutionary Computation, some interactive optimization methods
have been developed. The Interactive Genetic Algorithms6 (IGAs) [471, 2497, 2529, 2651,
2652] outsource the objective function evaluation to human beings who, for instance, judge
whether an evolved graphic7 or movie [2497, 2746] is attractive or music8 [1453] sounds good.
By incorporating this information in the search process, esthetic works of art are created. In
this domain, it is not really possible to devise any automated method for assigning objective
values to the points in the problem space. When it comes to judging art, human beings will
not be replaced by machines for a long time.
In Kosorukoff’s Human Based Genetic Algorithm9 (HBGAs, [1583]) and Unemi’s graphic
and movie tool [2746], this approach is driven even further by also letting human “agents”
select and combine the most interesting candidate solutions step by step.
All interactive optimization algorithms have to deal with the peculiarities of their hu-
man “modules”, such as indeterminism, inaccuracy, fickleness, changing attention and loss
of focus which, obviously, also rub off on the objective functions. Nevertheless, similar fea-
tures may also occur if determining the utility of a candidate solution involves randomized
simulations or measuring data from real-world processes.

Let us now summarize the information gained from these examples. The first step when
approaching an optimization problem is to choose the problem space X, as we will later
discuss in detail in Chapter 7 on page 109. Objective functions are not necessarily mere
mathematical expressions, but can be complex algorithms that, for example, involve multiple
simulations. Global Optimization comprises all techniques that can be used to find the
best elements x⋆ in X with respect to such criteria. The next question to answer is: What
constitutes these “best” elements?

6 http://en.wikipedia.org/wiki/Interactive_genetic_algorithm [accessed 2007-07-03]


7 http://en.wikipedia.org/wiki/Evolutionary_art [accessed 2009-06-18]
8 http://en.wikipedia.org/wiki/Evolutionary_music [accessed 2009-06-18]
9 http://en.wikipedia.org/wiki/HBGA [accessed 2007-07-03]
Tasks T2

9. In Example E1.2 on page 26, we stated the layout of circuits as a combinatorial opti-
mization problem. Define a possible problem space X for this task that allows us to
assign each circuit c ∈ C to one location of an m × n-sized card. Define an objective
function f : X 7→ R+ which denotes the total length of wiring required for a
given element x ∈ X. For this purpose, you can ignore wire crossing and simply assume
that a wire going from location (i, j) to location (k, l) has length |i − k| + |j − l|.
[10 points]

10. Name some problems which can occur in circuit layout optimization as outlined in
Task 9 if the mentioned simplified assumption is indeed applied.
[5 points]

11. Define a problem space for general job shop scheduling problems as outlined in Exam-
ple E1.3 on page 26. Do not make the problem specific to the cola/lemonade example.
[10 points]

12. Assume a network N = {n1 , n2 , . . . , nm } and that an m × m distance matrix ∆ is given
that contains the distance between nodes ni and nj at position ∆ij . A value of +∞ at
∆ij means that there is no direct connection ni → nj between node ni and nj . These
nodes may, however, be connected by a path which leads over other, intermediate
nodes, such as ni → nA → nB → nj . In this case, nj can be reached from ni over
a path leading first to nA and then to nB before finally arriving at nj . Hence, there
would be a connection of finite length.
We consider the specific optimization task Find the shortest connection between node
n1 and n2 . Develop a problem space X which is able to represent all paths between
the first and second node in an arbitrary network with m ≥ 2. Note that there is not
necessarily a direct connection between n1 and n2 .
Define a suitable objective function for this problem.
[15 points]
Chapter 3
Optima: What does good mean?

We already said that Global Optimization is about finding the best possible solutions for
given problems. Let us therefore now discuss what it is that makes a solution optimal 1 .

3.1 Single Objective Functions


In the case of optimizing a single criterion f, an optimum is either its maximum or minimum,
depending on what we are looking for. In the case of antenna design (Example E2.3), the
antenna gain was maximized and in Example E2.2, the goal was to find a cyclic route of
minimal distance through five Chinese cities. If we own a manufacturing plant and have to
assign incoming orders to machines, we will do this in a way that minimizes the production
time. On the other hand, we will arrange the purchase of raw material, the employment of
staff, and the placement of commercials in a way that maximizes the profit.
In Global Optimization, it is a convention that optimization problems are most often
defined as minimization tasks. If a criterion f is subject to maximization, we usually simply
minimize its negation (−f) instead. In this section, we will give some basic definitions on
what the optima of a single (objective) function are.
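The negation convention can be illustrated in two lines; a toy sketch over a finite candidate set (the criterion is invented for illustration):

```python
# Maximizing a criterion f by minimizing its negation -f instead.
candidates = [-3.0, -1.0, 0.0, 2.0, 4.0]
f = lambda x: -(x - 2.0) ** 2 + 5.0   # invented criterion, to be maximized

best_max = max(candidates, key=f)                    # direct maximization
best_via_min = min(candidates, key=lambda x: -f(x))  # minimize the negation
print(best_max, best_via_min)  # 2.0 2.0 -- both conventions agree
```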

Example E3.1 (two-dimensional objective function).


Figure 3.1 illustrates such a function f defined over a two-dimensional space X = (X1 , X2 ) ⊂
R2 . As outlined in this graphic, we distinguish between local and global optima. As illus-
trated, a global optimum is an optimum of the whole domain X while a local optimum is
an optimum of only a subset of X.

1 http://en.wikipedia.org/wiki/Maxima_and_minima [accessed 2007-07-03]


52 3 OPTIMA: WHAT DOES GOOD MEAN?
Figure 3.1: Global and local optima of a two-dimensional function.

3.1.1 Extrema: Minima and Maxima of Differentiable Functions

A local minimum x̌l is a point which has the smallest objective value amongst its neighbors
in the problem space. Similarly, a local maximum x̂l has the highest objective value when
compared to its neighbors. Such points are called (local) extrema2 . In optimization, a local
optimum is always an extremum, i. e., either a local minimum or maximum.
Back in high school, we learned: “If f : X 7→ R (X ⊆ R) has a (local) extremum at x⋆l
and is differentiable at that point, then the first derivative f ′ is zero at x⋆l .”

x⋆l is local extremum ⇒ f ′ (x⋆l ) = 0 (3.1)

Furthermore, if f can be differentiated two times at x⋆l , the following holds:

(f ′ (x⋆l ) = 0) ∧ (f ′′ (x⋆l ) > 0) ⇒ x⋆l is local minimum (3.2)


(f ′ (x⋆l ) = 0) ∧ (f ′′ (x⋆l ) < 0) ⇒ x⋆l is local maximum (3.3)

Alternatively, we can also detect a local minimum or maximum when the first derivative
undergoes a sign change. If, for all sufficiently small h > 0, sign(f′(x⋆l − h)) ≠ sign(f′(x⋆l + h)),
then we can state that:

(f′(x⋆l) = 0) ∧ (f′(x⋆l − h) < 0) ∧ (f′(x⋆l + h) > 0) for all sufficiently small h > 0 ⇒ x⋆l is a local minimum   (3.4)

(f′(x⋆l) = 0) ∧ (f′(x⋆l − h) > 0) ∧ (f′(x⋆l + h) < 0) for all sufficiently small h > 0 ⇒ x⋆l is a local maximum   (3.5)

2 http://en.wikipedia.org/wiki/Maxima_and_minima [accessed 2010-09-07]


3.1. SINGLE OBJECTIVE FUNCTIONS 53
Example E3.2 (Extrema and Derivatives).
The function f (x) = x2 + 5 has the first derivative f ′ (x) = 2x and the second derivative
f ′′ (x) = 2. To solve f ′ (x⋆l ) = 0 we set 0 = 2x⋆l and get, obviously, x⋆l = 0. For x⋆l = 0,
f ′′ (x⋆l ) = 2 and hence, x⋆l is a local minimum according to Equation 3.2.
If we take the function g(x) = x⁴ − 2, we get g′(x) = 4x³ and g′′(x) = 12x². At x⋆l = 0,
both g′(x⋆l) and g′′(x⋆l) become 0, so we cannot use Equation 3.2. However, if we take a small
step to the left, i. e., to x < 0, then x < 0 ⇒ (4x³) < 0 ⇒ g′(x) < 0. For positive x, however,
x > 0 ⇒ (4x³) > 0 ⇒ g′(x) > 0. We have a sign change from − to + and, according to
Equation 3.4, x⋆l = 0 is a minimum of g(x).
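Where no symbolic derivative is at hand, the two tests of Example E3.2 can be imitated numerically with central differences; a sketch (the step size h is an arbitrary choice of ours):

```python
H = 1e-5  # finite-difference step size, an arbitrary small choice

def d1(f, x, h=H):
    """Central-difference estimate of the first derivative f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=H):
    """Central-difference estimate of the second derivative f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

f = lambda x: x ** 2 + 5   # Equation 3.2 applies: f'' > 0 at the root of f'
g = lambda x: x ** 4 - 2   # f'' vanishes at 0, so the sign-change test is needed

print(abs(d1(f, 0.0)) < 1e-8, d2(f, 0.0) > 0)  # True True -> minimum by Eq. 3.2
print(d1(g, -0.1) < 0 < d1(g, 0.1))            # True -> minimum by the sign change
```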

From many of our previous examples such as Example E1.1 on page 25, Example E2.2 on
page 45, or Example E2.4 on page 47, we know that objective functions in many optimization
problems cannot be expressed in a mathematically closed form, let alone in one that can be
differentiated. Furthermore, Equation 3.1 to Equation 3.5 can only be directly applied if the
problem space is a subset of the real numbers X ⊆ R. They do not work for discrete spaces
or for vector spaces. For the two-dimensional objective function given in Example E3.1, for
example, we would need to apply Equation 3.6, the generalized form of Equation 3.1:

x⋆l is local extremum ⇒ grad(f)(x⋆l ) = 0 (3.6)

For distinguishing maxima, minima, and saddle points, the Hessian matrix3 would be needed.
Even if the objective function exists in a differentiable form, for problem spaces X ⊆ Rn
of larger scales n > 5, computing the gradient, the Hessian, and their properties becomes
cumbersome. Derivative-based optimization is hence only feasible in a subset of optimization
problems.
In this book, we focus on derivative-free optimization. The precise definition of local
optima without using derivatives requires some sort of measure of what neighbor means
in the problem space. Since this requires some further definitions, we will postpone it to
Section 4.4.

3.1.2 Global Extrema

The definitions of globally optimal points are much more straightforward and can be given
based on the terms we already have discussed:

Definition D3.1 (Global Maximum). A global maximum x̂ ∈ X of one (objective) func-
tion f : X 7→ R is an input element with f(x̂) ≥ f(x) ∀x ∈ X.

Definition D3.2 (Global Minimum). A global minimum x̌ ∈ X of one (objective) func-


tion f : X 7→ R is an input element with f(x̌) ≤ f(x) ∀x ∈ X.

Definition D3.3 (Global Optimum). A global optimum x⋆ ∈ X of one (objective) func-


tion f : X 7→ R subject to minimization is a global minimum. A global optimum of a function
subject to maximization is a global maximum.

Of course, each global optimum, minimum, or maximum is, at the same time, also a local
optimum, minimum, or maximum. As stated before, the definition of local optima is a
bit more complicated and will be discussed in detail in Section 4.4.
3 http://en.wikipedia.org/wiki/Hessian_matrix [accessed 2010-09-07]
3.2 Multiple Optima

Many optimization problems have more than one globally optimal solution. Even a one-
dimensional function f : R 7→ R can have more than one global maximum, multiple global
minima, or even both in its domain X.

Example E3.3 (One-Dimensional Functions with Multiple Optima).
If the function f(x) = |x|³ − 8x² (illustrated in Fig. 3.2.a) was subject to minimization, we
would find that it has two global optima, x⋆1 = x̌1 = −16/3 and x⋆2 = x̌2 = 16/3. Another
interesting example is the cosine function sketched in Fig. 3.2.b. It has global maxima x̂i at
x̂i = 2iπ and global minima x̌i at x̌i = (2i + 1)π for all i ∈ Z. The correct solution of such
an optimization problem would then be a set X ⋆ of all optimal inputs in X rather than a
single maximum or minimum.
In Fig. 3.2.c, we sketch a function f(x) which evaluates to x² outside the interval [−5, 5]
and has the value 25 inside [−5, 5]. Hence, all the points in the real interval [−5, 5] are
minima. If the function is subject to minimization, it has uncountably infinitely many global
optima.

Fig. 3.2.a: |x|³ − 8x²: a function with two global minima.
Fig. 3.2.b: The cosine function: infinitely many optima.
Fig. 3.2.c: A function with uncountably many minima.

Figure 3.2: Examples for functions with multiple optima.
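For the function of Fig. 3.2.c, the optimal set X⋆ can be approximated by collecting all minimizers on a discretized grid; a sketch (the grid resolution is our choice):

```python
def f(x):
    """The function of Fig. 3.2.c: x^2 outside [-5, 5], constant 25 inside."""
    return x * x if abs(x) > 5 else 25.0

# Discretize X = [-10, 10] and collect every grid point attaining the minimum.
grid = [i / 10.0 for i in range(-100, 101)]
y_min = min(f(x) for x in grid)
X_star = [x for x in grid if f(x) == y_min]

print(y_min)                  # 25.0
print(X_star[0], X_star[-1])  # -5.0 5.0 -- the whole interval [-5, 5] is optimal
```

On the continuous domain the optimal set is the uncountable interval [−5, 5]; the grid can only ever return a finite sample of it, which mirrors the storage argument made below.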

As mentioned before, the exact meaning of optimal is problem dependent. In single-objective


optimization, it means to find the candidate solutions which lead to either minimum or
maximum objective values. In multi-objective optimization, there exists a variety of approaches
to define optima which we will discuss in-depth in Section 3.3.

Definition D3.4 (Optimal Set). The optimal set X ⋆ ⊆ X of an optimization problem is


the set that contains all its globally optimal solutions.
3.3. MULTIPLE OBJECTIVE FUNCTIONS 55
There are often multiple, sometimes even infinitely many optimal solutions to an optimiza-
tion problem. Since the storage space in computers is limited, it is only possible to find a
finite (sub-)set of solutions. In the further text, we will denote the result of optimization
algorithms which return single candidate solutions (in sans-serif font) as x and the results
of algorithms which are capable of producing sets of possible solutions as X = {x1 , x2 , . . . }.

Definition D3.5 (Optimization Result). The set X ⊆ X contains output elements x ∈ X


of an optimization process.

Obviously, the best outcome of an optimization process would be to find X = X ⋆ . From


Example E3.3, we already know that this might actually be impossible if there are too
many global optima. Then, the goal should be to find an X ⊂ X ⋆ in a way that the candidate
solutions are evenly distributed amongst the global optima and not all of them cling together
in a small area of X ⋆ . We will discuss the issue of diversity amongst the optima in Chapter 13
in more depth.
Besides that we are maybe not able to find all global optima for a given optimization
problem, it may also be possible that we are not able to find any global optimum at all or that
we cannot determine whether the results x ∈ X which we have obtained are optimal or not.
There exists, for instance, no algorithm which can solve the Traveling Salesman Problem
(see Example E2.2) in polynomial time, as described in detail in Section 12.3. Therefore,
if such a task involving a high enough number of locations is subject to optimization, an
efficient approximation algorithm might produce reasonably good x in short time. In order
to know whether these are global optima, we would, however, need to wait for thousands of
years. Hence, in many problems, the goal of optimization is not necessarily to find X = X ⋆
or X ⊂ X ⋆ but a good approximation X ≈ X ⋆ in a reasonable time.
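As a small illustration of trading guaranteed optimality for speed, consider a nearest-neighbor construction heuristic applied to the TSP instance of Example E2.2. This greedy heuristic is a standard textbook method (not one introduced in this chapter) and in general carries no optimality guarantee; it merely inspects O(n²) edges instead of all (n − 1)!/2 tours:

```python
# Nearest-neighbor heuristic for the 5-city TSP of Example E2.2.
# Distances (in km) from Figure 2.2.
D = {
    ("Beijing", "Shanghai"): 1244, ("Beijing", "Hefei"): 1044,
    ("Beijing", "Guangzhou"): 2174, ("Beijing", "Chengdu"): 1854,
    ("Chengdu", "Shanghai"): 2095, ("Chengdu", "Hefei"): 1615,
    ("Chengdu", "Guangzhou"): 1954, ("Guangzhou", "Shanghai"): 1529,
    ("Guangzhou", "Hefei"): 1257, ("Hefei", "Shanghai"): 472,
}
w = lambda a, b: D.get((a, b)) or D[(b, a)]

def nearest_neighbor(start, cities):
    """Greedily walk to the closest unvisited city, then return to start."""
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        tour.append(min(unvisited, key=lambda c: w(tour[-1], c)))
        unvisited.remove(tour[-1])
    tour.append(start)
    return tour, sum(w(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

cities = ["Beijing", "Chengdu", "Guangzhou", "Hefei", "Shanghai"]
tour, length = nearest_neighbor("Hefei", cities)
print(length)  # 6781 -- on this tiny instance the greedy tour happens to be optimal
```

That the greedy tour here equals the 6781 km optimum of Table 2.1 is a lucky property of this tiny instance, not of the heuristic.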

3.3 Multiple Objective Functions

3.3.1 Introduction

Global Optimization techniques are not just used for finding the maxima or minima of single
functions f. In many real-world design or decision making problems, there are multiple
criteria to be optimized at the same time. Such tasks can be defined as in Definition D3.6
which will later be refined in Definition D6.2 on page 103.

Definition D3.6 (Multi-Objective Optimization Problem (MOP)). In a multi-objective


optimization problem (MOP), a set f : X 7→ Rn consisting of n objective functions
fi : X 7→ R, each representing one criterion, is to be optimized over a problem space
X [596, 740, 954].

f = {fi : X 7→ R : i ∈ 1..n}                         (3.7)
f(x) = (f1 (x) , f2 (x) , . . . )ᵀ                    (3.8)

In the following, we will treat these sets as vector functions f : X 7→ Rn which return a
vector of real numbers of dimension n when applied to a candidate solution x ∈ X. We will
refer to this Rn as the objective space Y. Algorithms designed to optimize sets
of objective functions are usually named with the prefix multi-objective, like multi-objective
Evolutionary Algorithms (see Definition D28.2 on page 253).
Purshouse [2228, 2230] provides a nice classification of the possible relations between
the objective functions which we illustrate here as Figure 3.3. In the best case from the
relation
├── dependent
│     ├── conflict
│     └── harmony
└── independent

Figure 3.3: The possible relations between objective functions as given in [2228, 2230].

optimization perspective, two objective functions f1 and f2 are in harmony, written as f1 ∼ f2 ,


meaning that improving one criterion will also lead to an improvement in the other. Then, it
would suffice to optimize only one of them and the MOP can be reduced to a single-objective
one.

Definition D3.7 (Harmonizing Objective Functions). In a multi-objective optimiza-


tion problem, two objective functions fi and fj are harmonizing (fi ∼X fj ) in a subset X ⊆ X
of the problem space X if Equation 3.9 holds.

fi ∼X fj ⇒ [fi (x1 ) < fi (x2 ) ⇒ fj (x1 ) < fj (x2 ) ∀x1 , x2 ∈ X ⊆ X] (3.9)

The opposite are conflicting objectives where an improvement in one criterion leads to a
degeneration of the utility of the candidate solutions according to another criterion. If two
objectives f3 and f4 conflict, we denote this as f3 ≁ f4 . Obviously, objective functions do not
need to be conflicting or harmonic for their complete domain X, but instead may exhibit
these properties on certain intervals only.

Definition D3.8 (Conflicting Objective Functions). In a multi-objective optimization
problem, two objective functions fi and fj are conflicting (fi ≁X fj ) in a subset X ⊆ X of
the problem space X if Equation 3.10 holds.

fi ≁X fj ⇒ [fi (x1 ) < fi (x2 ) ⇒ fj (x1 ) > fj (x2 ) ∀x1 , x2 ∈ X ⊆ X] (3.10)

Besides harmonizing and conflicting optimization criteria, there may be such which are
independent from each other. In a harmonious relationship, the optimization criteria can
be optimized simultaneously, i. e., an improvement of one leads to an improvement of the
other. If the criteria conflict, the exact opposite is the case. Independent criteria, however,
do not influence each other: It is possible to modify candidate solutions in a way that leads
to a change in one objective value while the other one remains constant (see Example E3.8).
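Definitions D3.7 and D3.8 suggest a simple empirical test: sample pairs of candidate solutions and check whether Equation 3.9 or Equation 3.10 is violated. A sketch (sampling can refute, but never prove, a total relation; the function name is ours):

```python
from itertools import combinations

def relation(fi, fj, samples):
    """Classify fi vs. fj on the sample: 'harmony' if Equation 3.9 holds on
    every pair, 'conflict' if Equation 3.10 does, 'mixed' otherwise."""
    harmony = conflict = True
    for x1, x2 in combinations(samples, 2):
        if fi(x1) < fi(x2):
            harmony &= fj(x1) < fj(x2)
            conflict &= fj(x1) > fj(x2)
        elif fi(x2) < fi(x1):
            harmony &= fj(x2) < fj(x1)
            conflict &= fj(x2) > fj(x1)
    if harmony:
        return "harmony"
    return "conflict" if conflict else "mixed"

xs = [0.0, 1.0, 2.0, 3.0]
print(relation(lambda x: x, lambda x: 2 * x + 1, xs))  # harmony
print(relation(lambda x: x, lambda x: -x, xs))         # conflict
```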

Definition D3.9 (Independent Objective Functions). In a multi-objective optimiza-


tion problem, two objective functions fi and fj are independent from each other in a subset
X ⊆ X of the problem space X if the MOP can be decomposed into two problems which can
be optimized separately and whose solutions can then be composed to solutions of the original
problem.

If X = X, these relations are called total and the subscript X is omitted in the notations. In
the following, we give some examples for typical situations involving harmonizing, conflicting,
and independent optimization criteria. We also list some typical multi-objective optimization
applications in Table 3.1, before discussing basic techniques suitable for defining what is a
“good solution” in problems with multiple, conflicting criteria.
Example E3.4 (Factory Example).
Multi-objective optimization often means to compromise conflicting goals. If we go back to
our factory example, we can specify the following objectives that all are subject to optimiza-
tion:

1. Minimize the time between an incoming order and the shipment of the corresponding
product.
2. Maximize the profit.
3. Minimize the costs for advertising, personnel, raw materials, etc.
4. Maximize the quality of the products.
5. Minimize the negative impact of waste and pollution caused by the production process
on environment.

The last two objectives seem to clearly conflict with the goal of cost minimization. Between
the personnel costs, the time needed for production, and the product quality, there should also
be some kind of contradictory relation, since producing many quality goods in a short time
naturally requires many hands. The exact mutual influences between optimization criteria
can apparently become complicated and are not always obvious.

Example E3.5 (Artificial Ant Example).


Another example for such a situation is the Artificial Ant problem4 where the goal is to
find the most efficient controller for a simulated ant. The efficiency of an ant is not only
measured by the amount of food it is able to pile. For every food item, the ant needs to
walk to some point. The more food it aggregates, the longer the distance it needs to walk.
If its behavior is driven by a clever program, it may take the ant a shorter walk to pile the
same amount of food. This better path, however, would maybe not be discovered by an ant
with a clumsy controller. Thus, the distance the ant has to cover to find the food (or the
time it needs to do so) has to be considered in the optimization process, too.
If two control programs produce the same results and one is smaller (i. e., contains
fewer instructions) than the other, the smaller one should be preferred. Like in the factory
example, these optimization goals conflict with each other: A program which contains only
one instruction will likely not guide the ant to a food item and back to the nest.

From these two examples, we can gain another insight: To find the global optimum could
mean to maximize one function fi ∈ f and to minimize another one fj ∈ f (i ≠ j). Hence,
it makes no sense to talk about a global maximum or a global minimum in terms of multi-
objective optimization. We will thus resort to the notion of the set of optimal elements
x⋆ ∈ X ⋆ ⊆ X in the following discussions.
Since compromises for conflicting criteria can be defined in many ways, there exist mul-
tiple approaches to define what an optimum is. These different definitions, in turn, lead to
different sets X ⋆ . We will discuss some of these approaches in the following by using two
graphical examples for illustration purposes.

Example E3.6 (Graphical Example 1).


In the first graphical example pictured in Figure 3.4, we want to maximize two objective
functions f = {f1 , f2 }. Both functions have the real numbers R as problem space
X1 . The maximum (and thus, the optimum) of f1 is x̂1 and the largest value of f2 is at x̂2 .
In Figure 3.4, we can easily see that f1 and f2 are partly conflicting: Their maxima are at
different locations and there even exist areas where f1 rises while f2 falls and vice versa.

4 See ?? on page ?? for more details.



Figure 3.4: Two functions f1 and f2 with different maxima x̂1 and x̂2 .

Example E3.7 (Graphical Example 2).


Figure 3.5: Two functions f3 and f4 with different minima x̌1 , x̌2 , x̌3 , and x̌4 .

The objective functions f1 and f2 in the first example are mappings of a one-dimensional
problem space X1 to the real numbers that are to be maximized. In the second example sketched
in Figure 3.5, we instead minimize two functions f3 and f4 that map a two-dimensional
problem space X2 ⊂ R2 to the real numbers R. Both functions have two global minima: the
lowest values of f3 are attained at x̌1 and x̌2 , whereas f4 becomes minimal at x̌3 and x̌4 . It should be noted
that x̌1 6= x̌2 6= x̌3 6= x̌4 .

Example E3.8 (Independent Objectives).


Assume that the problem space is the set of three-dimensional real vectors R3 and that the
two objectives f1 and f2 are subject to minimization:
f1 (x) = √((x1 − 5)² + (x2 + 1)²)                     (3.11)
f2 (x) = (x3 − sin x3 )²                              (3.12)
This problem can be decomposed into two separate problems, since f1 can be optimized on
the first two vector elements x1 , x2 alone and f2 can be optimized by considering only x3
while the values of x1 and x2 do not matter. After solving the two separate optimization
problems, their solutions can be combined to a full vector x⋆ = (x⋆1 , x⋆2 , x⋆3 )ᵀ by taking the
solutions x⋆1 and x⋆2 from the optimization of f1 and x⋆3 from the optimization process solving
f2 .
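The decomposition of Example E3.8 can be sketched with two independent coarse grid searches; here we take f2(x) = (x3 − sin x3)² as a reading of Equation 3.12, but the decomposition argument does not depend on the exact form (the grid search itself is only illustrative):

```python
import math

def f1(x1, x2):
    """First sub-objective of Equation 3.11; depends on x1 and x2 only."""
    return math.sqrt((x1 - 5.0) ** 2 + (x2 + 1.0) ** 2)

def f2(x3):
    """Second sub-objective; depends on x3 only (our reading of Eq. 3.12)."""
    return (x3 - math.sin(x3)) ** 2

grid = [i / 10.0 for i in range(-100, 101)]

# Solve the two independent problems separately ...
x1_s, x2_s = min(((a, b) for a in grid for b in grid), key=lambda p: f1(*p))
x3_s = min(grid, key=f2)

# ... and compose the partial solutions into one vector for the full problem.
x_star = (x1_s, x2_s, x3_s)
print(x_star)  # (5.0, -1.0, 0.0)
```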

Example E3.9 (Robotic Soccer Cont.).


In Example E1.10 on page 37, we mentioned that in robotic soccer, the robots must try to
decide which action to take in order to maximize the chance of scoring a goal. If decision
making in robotic soccer were based on this criterion alone, it is likely that most robots in
a team would jointly attack the goal of the opposing team, leaving their own goal
unprotected. Hence, at least one more objective is important, too: to minimize the chance
that the opposing team scores a goal.

Table 3.1: Applications and Examples of Multi-Objective Optimization.


Area References
Business-To-Business [1559]
Chemistry [306, 668]
Combinatorial Problems [366, 491, 527, 654, 690, 1034, 1429, 1440, 1442, 1553, 1696,
1756–1758, 1859, 2019, 2025, 2158, 2472, 2473, 2862, 2895];
see also Table 20.1
Computer Graphics [1661]; see also Table 20.1
Data Mining [2069, 2398, 2644, 2645]; see also Table 20.1
Databases [1696]
Decision Making [198, 320, 861, 1299, 1559, 1571, 1801]; see also Table 20.1
Digital Technology [789, 869, 1479]
Distributed Algorithms and Sys- [320, 605, 662, 663, 669, 861, 1299, 1488, 1559, 1571, 1574,
tems 1801, 1840, 2071, 2644, 2645, 2862, 2897]
Economics and Finances [320, 662, 857, 1559]
Engineering [93, 519, 547, 548, 579, 605, 669, 737, 789, 869, 953, 1000,
1479, 1488, 1508, 1574, 1622, 1717, 1840, 1873, 2059, 2111,
2158, 2644, 2645, 2862, 2897]
Games [560, 997, 2102]
Graph Theory [669, 1488, 1574, 1840, 2025, 2897]
Logistics [527, 579, 861, 1034, 1429, 1442, 1553, 1756, 1859, 2059,
2111]
Mathematics [560, 662, 857, 869, 997, 1661, 2102]
Military and Defense [1661]
Multi-Agent Systems [1661]
Physics [1000]
Security [1508, 1661]
Software [1508, 1559, 2863, 2897]; see also Table 20.1
Testing [2863]
Water Supply and Management [1268]
Wireless Communication [1717, 1796, 2158]

3.3.2 Lexicographic Optimization

Perhaps the simplest way to optimize a set f of objective functions is to solve them according to priorities [93, 860, 2291, 2820]. Without loss of generality, we could, for instance, assume that f1 is strictly more important than f2 , which, in turn, is more important than f3 , and so on. We then would first determine the optimal set X⋆(1) according to f1 . If this set

contains more than one solution, we will then retain only those elements in X⋆(1) which have the best objective values according to f2 compared to the other members of X⋆(1) and obtain X⋆(1,2) ⊆ X⋆(1) . This process is repeated until either |X⋆(1,2,... ) | = 1 or all criteria have been processed. By doing so, the multi-objective optimization problem has been reduced to n = |f | single-objective ones to be solved iteratively. Then, the problem space Xi of the ith problem equals X only for i = 1 and is Xi = X⋆(1,2,..,i−1) for all i > 1 [860].

Definition D3.10 (Lexicographically Better). A candidate solution x1 ∈ X is lexicographically better than another element x2 ∈ X (denoted by x1 <lex x2 ) if

x1 <lex x2 ⇔ (ωi fi (x1 ) < ωi fi (x2 )) ∧ (i = min {k : fk (x1 ) ≠ fk (x2 )}) (3.13)

ωj = 1 if fj should be minimized, −1 if fj should be maximized (3.14)

where the ωj in Equation 3.13 only denote whether fj should be maximized or minimized, as specified in Equation 3.14.

Since we generally consider minimization problems, the ωj will be assumed to be 1 if not


explicitly stated otherwise. If n objective functions are to be optimized, there are n! =
|Π(1..n)| possible permutations of them where Π(1..n) is the set of all permutations of the
natural numbers from 1 to n. For each such permutation, the described approach may lead
to different results. We can thus refine the previous definition of “lexicographically better”
by putting it into the context of a permutation of the objective functions:

Definition D3.11 (Lexicographically Better (Refined)). A candidate solution x1 ∈ X is lexicographically better than another element x2 ∈ X (denoted by x1 <lex,π x2 ) according to a permutation π ∈ Π(1..n) applied to the objective functions if

x1 <lex,π x2 ⇔ (ωπ[i] fπ[i] (x1 ) < ωπ[i] fπ[i] (x2 )) ∧ (i = min {k : fπ[k] (x1 ) ≠ fπ[k] (x2 )}) (3.15)

where n = len(f ) is the number of objective functions, π [i] is the ith element of the permutation π, and the ωi retain their meaning from Equation 3.14.
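The comparison of Definition D3.11 translates directly into a short routine. The sketch below uses illustrative names (it is not one of this book's listings) and assumes the objective values have already been computed into vectors:

```java
/**
 * Sketch of the lexicographic comparison from Definition D3.11.
 * Objective values are assumed precomputed into the arrays a and b.
 */
public class LexCompare {

  /**
   * Returns true if objective vector a is lexicographically better than b
   * under the priority permutation pi (omega[i] = 1 to minimize objective i,
   * -1 to maximize it).
   */
  static boolean lexBetter(double[] a, double[] b, int[] pi, double[] omega) {
    for (int k = 0; k < pi.length; k++) {
      int i = pi[k];               // objective index with the k-th highest priority
      double fa = omega[i] * a[i];
      double fb = omega[i] * b[i];
      if (fa < fb) return true;    // first differing objective decides
      if (fa > fb) return false;
    }
    return false;                  // all objective values equal: neither is better
  }

  public static void main(String[] args) {
    double[] x1 = {1.0, 3.0};
    double[] x2 = {1.0, 5.0};
    int[] pi = {0, 1};             // f1 has priority over f2
    double[] omega = {1.0, 1.0};   // both objectives minimized
    System.out.println(lexBetter(x1, x2, pi, omega)); // decided by f2, where x1 wins
  }
}
```

Changing pi reorders the priorities and can change the outcome, which is exactly why the refined definition is parameterized by the permutation π.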

Based on the <lex -relation, we can now define optimality.

Definition D3.12 (Lexicographic Optimality). A candidate solution x⋆ is lexicograph-


ically optimal in terms of a permutation π ∈ Π(1..n) applied to the objective functions (and
therefore, member of the optimal set Xπ⋆ ) if there is no lexicographically better element in
X, i. e.,
x⋆ ∈ Xπ⋆ ⇔ ∄x ∈ X : x <lex,π x⋆ (3.16)
As lexicographically global optimal set X ⋆ we consider the union of the optimal sets Xπ⋆ for
all possible permutations π of the first n natural numbers:
X⋆ = ⋃_{∀π∈Π(1..n)} Xπ⋆ (3.17)

X⋆ can be computed by solving all the |Π(1..n)| = n! different multi-objective problems arising, i. e., the n · n! single-objective problems, separately. This would lead to a more than exponential complexity in n. If X is a finite set, however, this can be achieved in polynomial time in n [860]. But which solutions would be contained in the Xπ⋆ or X⋆ if we use the lexicographical definition of optimality?
Example E3.10 (Example E3.4 Cont. Lexicographically).
In the factory example (Example E3.4), we wished to minimize the production time
(f1 , ω1 = 1), maximize the profit (f2 , ω2 = −1), minimize the running costs (f3 , ω3 = 1),
maximize a product quality measure (f4 , ω4 = −1), and minimize environmental damage
(f5 , ω5 = 1). Determining the lexicographically optimal set Xπ⋆ for the identity permutation π = (1, 2, 3, 4, 5) means to first determine all possible configurations of the factory that minimize the production time f1 . Only if there is more than one such configuration, i. e., len(X⋆(1) ) > 1, do we consider f2 . Amongst all elements in X⋆(1) , we then copy those to X⋆(1,2) which have the highest f2 -value, and so on. What we get in Xπ⋆ are configurations which perform exceptionally well in terms of the production time but sacrifice profit, running costs, and production quality, and will largely disregard any environmental issues. No trade-off between the objectives will happen at all. Therefore, the set of all lexicographically optimal solutions X⋆ will contain the elements x⋆ ∈ X which have the best objective values regarding one single objective function while largely neglecting the others.

Example E3.11 (Example E3.5 Cont. Lexicographically).


The same would be the case in the Artificial Ant problem. X⋆ , if determined lexicographically, contains
1. the smallest possible program (length 0),
2. the program which leads to the highest possible food collection, and
3. the program which guides the ant along the shortest possible path (which is again the empty program).

Example E3.12 (Example E3.6 Cont. Lexicographically).


There are exactly two lexicographically optimal points on the two functions illustrated in Figure 3.4. The first one, x⋆1 , is the single global maximum x̂1 of f1 and the other one, x⋆2 , is the single global maximum x̂2 of f2 . In other words, X⋆(1,2) = {x⋆1 } and X⋆(2,1) = {x⋆2 }. Since Π(1..2) = {(1, 2) , (2, 1)}, it follows that X⋆ = X⋆(1,2) ∪ X⋆(2,1) = {x⋆1 , x⋆2 }.

Example E3.13 (Example E3.7 Cont. Lexicographically).


The two-dimensional objective functions f3 and f4 from Figure 3.5 each have two global
minima x̌1 , x̌2 and x̌3 , x̌4 , respectively. Although it is a bit hard to see in the figure itself,
f4 (x̌1 ) = f4 (x̌2 ) and f3 (x̌3 ) = f3 (x̌4 ) because of the symmetry in the two objective functions.
This means that X⋆(3,4) = {x̌1 , x̌2 }, that X⋆(4,3) = {x̌3 , x̌4 }, and that X⋆ = X⋆(3,4) ∪ X⋆(4,3) = {x̌1 , x̌2 , x̌3 , x̌4 }.

3.3.2.1 Problems with the Lexicographic Method


From these examples it should become clear that the lexicographic approach to defining what
is optimal and what not has some advantages and some disadvantages. On the positive side,
we find that it is very easy to realize and that it allows us to treat a multi-objective optimization
problem by iteratively solving single-objective ones. On the downside, it does not perform
any trade-off between the different objective functions at all and we only find the extreme
values of the optimization criteria. In the factory example, for instance, we would surely
be interested in solutions which lead to a short production time but do not entirely ignore
any budget issues. Such differentiations, however, are not possible with the lexicographic
method.
3.3.3 Weighted Sums (Linear Aggregation)

Another very simple method to define what an optimal solution is, is to compute a weighted
sum ws(x) of all objective functions fi (x) ∈ f . Since a weighted sum is a linear aggregation of the objective values, this approach is also often referred to as linear aggregation. Each objective fi is
multiplied with a weight wi representing its importance. Using signed weights allows us to
minimize one objective while maximizing another one. We can, for instance, apply a weight
wa = 1 to an objective function fa and the weight wb = −1 to the criterion fb . By minimizing
ws(x), we then actually minimize the first and maximize the second objective function. If
we instead maximize ws(x), the effect would be inverse and fb would be minimized while
fa is maximized. Either way, multi-objective problems are reduced to single-objective ones
with this method. As illustrated in Equation 3.19, a global optimum then is a candidate
solution x⋆ ∈ X for which the weighted sum of all objective values is less or equal to those
of all other candidate solutions (if minimization of the weighted sum is assumed).
ws(x) = Σ_{i=1}^{n} wi fi (x) = Σ_{∀fi ∈f } wi fi (x) (3.18)

x⋆ ∈ X⋆ ⇔ ws(x⋆ ) ≤ ws(x) ∀x ∈ X (3.19)

With this method, we can also emulate lexicographical optimization in many cases by simply setting the weights to values of different magnitude. If the objective functions fi ∈ f all had the range fi : X ↦ [0, 10], for instance, setting the weights to wi = 10^i would effectively achieve this.
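Equation 3.18 amounts to a few lines of code. The sketch below uses illustrative names and assumes the objective values have already been computed:

```java
/** Sketch of the weighted sum ws(x) from Equation 3.18. */
public class WeightedSum {

  /** Aggregates precomputed objective values f with weights w. */
  static double ws(double[] f, double[] w) {
    double sum = 0.0;
    for (int i = 0; i < f.length; i++) {
      sum += w[i] * f[i]; // a negative weight turns minimization into maximization
    }
    return sum;
  }

  public static void main(String[] args) {
    // Minimize f_a (w = 1) while maximizing f_b (w = -1):
    System.out.println(ws(new double[]{2.0, 3.0}, new double[]{1.0, -1.0})); // -1.0
  }
}
```

Minimizing this single value then drives f_a down and f_b up simultaneously, which is exactly the sign trick described in the text.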

Example E3.14 (Example E3.6 Cont.: Weighted Sums).


Figure 3.6 demonstrates optimization with the weighted sum approach for the example given
in Example E3.6. The weights are both set to 1 = w1 = w2 . If we maximize ws1 (x), we will
thus also maximize the functions f1 and f2 . This leads to a single optimum x⋆ = x̂.


Figure 3.6: Optimization using the weighted sum approach (first example).

Example E3.15 (Example E3.7 Cont.: Weighted Sums).


The sum of the two-dimensional functions f3 and f4 from the second graphical Example E3.7
is sketched in Figure 3.7. Again, we set the weights w3 and w4 to 1. The sum ws2 , however,
is subject to minimization. The graph of ws2 has two especially deep valleys. At the bottoms
of these valleys, the two global minima x̌5 and x̌6 can be found.

Figure 3.7: Optimization using the weighted sum approach (second example).

3.3.3.1 Problems with Weighted Sums

This approach has a variety of drawbacks. One is that it cannot handle functions that rise
or fall with different speed⁵ properly. Whitley [2929] pointed out that the results of an
objective function should not be considered as a precise measure of utility. Adding multiple
such values up thus does not necessarily make sense.

Example E3.16 (Problematic Objectives for Weighted Sums).


In Figure 3.8, we have sketched the sum ws(x) of the two objective functions f1 (x) = −x² and f2 (x) = e^(x−2) . When minimizing or maximizing this sum, we will always disregard one of the two functions, depending on the interval chosen. For small x, f2 is negligible compared to f1 . For x > 5 it begins to outpace f1 which, in turn, will now become negligible. Such functions cannot be added up properly using constant weights. Even if we set w1 to the really large number 10^10 , f1 will become insignificant for all x > 40, because |−(40²) · 10^10 | / e^(40−2) ≈ 0.0005.

⁵ See Definition D12.1 on page 144 or http://en.wikipedia.org/wiki/Asymptotic_notation [accessed 2007-07-03] for related information.

Figure 3.8: A problematic constellation for the weighted sum approach.

Weighted sums are only suitable to optimize functions that at least share the same big-O
notation (see Definition D12.1 on page 144). Often, it is not obvious how the objective
functions will fall or rise. How can we, for instance, determine whether the objective max-
imizing the food piled by an Artificial Ant rises in comparison to the objective minimizing
the distance walked by the simulated insect?
And even if the shape of the objective functions and their complexity class were clear,
the question about how to set the weights w properly still remains open in most cases [689].
In the same paper, Das and Dennis [689] also show that with weighted sum approaches, not
necessarily all elements considered optimal in terms of Pareto domination (see next section)
will be found. Instead, candidate solutions hidden in concavities of the trade-off curve will
not be discovered [609, 1621, 1622] (see Example E3.17 on the next page for different shapes
of trade-off curves). Also, the candidate solutions which will finally be contained in the
result set X will not necessarily be evenly spaced [1000]. Further authors who discuss the drawbacks of the linear aggregation method include Koski [1582] and Messac and Mattson [1873].

3.3.4 Weighted Min-Max


The weighted min-max approach [1303, 1305, 1554, 1741] for multi-objective optimization
compares two candidate solutions x1 and x2 on basis of the minimum (or maximum) of
their weighted objective values. A candidate solution x1 ∈ X is preferred in comparison
with x2 ∈ X if Equation 3.20 holds
min {wi fi (x1 ) : i ∈ 1..n} < min {wi fi (x2 ) : i ∈ 1..n} (3.20)
where wi are the weights of the objective values. For objective functions subject to mini-
mization, wi > 0 is used and wi is set to a value below zero for functions to be maximized.
This approach, which is also called the Weighted Tschebychev Norm, is able to find solutions on both convex and concave trade-off surfaces (see Definition D3.15). The weight vector describes a beam in direction (1/w1 , 1/w2 , .., 1/wn )^T from the point of origin in objective space, towards and along which the optimizer should converge.
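The preference check of Equation 3.20 can be sketched as follows; the names are illustrative and the objective values are assumed precomputed:

```java
/** Sketch of the weighted min-max comparison from Equation 3.20. */
public class WeightedMinMax {

  /** Smallest weighted objective value of one candidate solution. */
  static double weightedMin(double[] f, double[] w) {
    double m = Double.POSITIVE_INFINITY;
    for (int i = 0; i < f.length; i++) {
      m = Math.min(m, w[i] * f[i]); // w[i] > 0 to minimize f_i, w[i] < 0 to maximize
    }
    return m;
  }

  /** x1 is preferred over x2 if its weighted minimum is smaller. */
  static boolean preferred(double[] f1, double[] f2, double[] w) {
    return weightedMin(f1, w) < weightedMin(f2, w);
  }

  public static void main(String[] args) {
    double[] w = {1.0, 1.0};
    System.out.println(preferred(new double[]{1.0, 3.0}, new double[]{2.0, 3.0}, w));
  }
}
```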
3.3.5 Pareto Optimization

The mathematical foundations of multi-objective optimization which considers conflicting criteria in a fair way have been laid by Edgeworth [857] and Pareto [2127] around one hundred fifty years ago [1642]. Pareto optimality⁶ became an important notion in economics, game theory, engineering, and social sciences [560, 997, 2102, 2938]. It defines the frontier of solutions that can be reached by trading off conflicting objectives in an optimal manner. From this front, a decision maker (be it a human or an algorithm) can finally choose the configurations that, in her opinion, suit best [264, 528, 952, 954, 1010, 1166, 2610]. The notion of optimality in the Pareto sense is strongly based on the definition of domination:

Definition D3.13 (Domination). An element x1 dominates (is preferred to) an element x2 (x1 ⊣ x2 ) if x1 is better than x2 in at least one objective function and not worse with respect to all other objectives. Based on the set f of objective functions, we can write:

x1 ⊣ x2 ⇔ (∀i ∈ 1..n : ωi fi (x1 ) ≤ ωi fi (x2 )) ∧ (∃j ∈ 1..n : ωj fj (x1 ) < ωj fj (x2 )) (3.21)

ωi = 1 if fi should be minimized, −1 if fi should be maximized (3.22)

We provide a Java version of a comparison operator implementing the domination relation in Listing 56.55 on page 838. Like in Equation 3.13 in the definition of the lexicographical ordering of candidate solutions, the ωi again denote whether fi should be maximized or minimized. Thus, Equation 3.22 and Equation 3.14 are the same. Since we generally assume minimization, the ωi will always be 1 if not explicitly stated otherwise in the rest of this book. Then, if a candidate solution x1 dominates another one (x2 ), f (x1 ) is partially less than f (x2 ).
The Pareto domination relation defines a strict partial order (an irreflexive, asymmetric, and transitive binary relation, see Definition D51.16 on page 645) on the space of possible objective values Y. In contrast, the weighted sum approach imposes a total order by projecting the objective space Y into the real numbers R. The use of Pareto domination in multi-objective Evolutionary Algorithms was first proposed in [1075] back in 1989 [2228].

Definition D3.14 (Pareto Optimal). An element x⋆ ∈ X is Pareto optimal (and hence,


part of the optimal set X ⋆ ) if it is not dominated by any other element in the problem space
X. X ⋆ is called the Pareto-optimal set or Pareto set.

x⋆ ∈ X⋆ ⇔ ∄x ∈ X : x ⊣ x⋆ (3.23)

The Pareto optimal set will also comprise all lexicographically optimal solutions since, per
definition, it includes all the extreme points of the objective functions. Additionally, it
provides the trade-off solutions which we longed for in Section 3.3.2.1.

Definition D3.15 (Pareto Frontier). For a given optimization problem, the Pareto
front(ier) F ⋆ ⊂ Rn is defined as the set of results the objective function vector f creates
when it is applied to all the elements of the Pareto-optimal set X ⋆ .

F ⋆ = {f (x⋆ ) : x⋆ ∈ X ⋆ } (3.24)

Example E3.17 (Shapes of Optimal Sets / Pareto Frontiers).


The Pareto front in Example E13.2 has a convex geometry, but there are other different
⁶ http://en.wikipedia.org/wiki/Pareto_efficiency [accessed 2007-07-03]

Fig. 3.9.a: Non-Convex (Concave); Fig. 3.9.b: Disconnected; Fig. 3.9.c: Linear; Fig. 3.9.d: Non-Uniformly Distributed
Figure 3.9: Examples of Pareto fronts [2915].

shapes as well. In Figure 3.9 [2915], we show some examples, including non-convex (concave),
disconnected, linear, and non-uniformly distributed Pareto fronts. Different structures of the
optimal sets pose different challenges to achieve good convergence and spread. Optimization
processes driven by linear aggregating functions will, for instance, have problems to find all
portions of Pareto fronts with non-convex geometry as shown by Das and Dennis [689].

In this book, we will refer to candidate solutions which are not dominated by any element in a reference set as non-dominated. This reference set may be the whole problem space X itself (in which case the non-dominated elements are global optima) or any subset of it.
it. Many optimization algorithms, for instance, work on populations of candidate solutions
and in such a context, a non-dominated candidate solution would be one not dominated by
any other element in the population.

Definition D3.16 (Domination Rank). We define the domination rank #dom(x, X) of


a candidate solution x relative to a reference set X as

#dom(x, X) = |{x′ : (x′ ∈ X) ∧ (x′ ⊣ x)}| (3.25)

Hence, an element x with #dom(x, X) = 0 is non-dominated within the reference set X, and a candidate solution x⋆ with domination rank zero relative to the whole problem space is a global optimum. The higher the domination rank, the worse the associated element is.
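For a finite reference set, Equation 3.25 can be computed with a straightforward double loop. The sketch below assumes minimization of all objectives and uses illustrative names (it is not the book's Listing 56.55):

```java
import java.util.Arrays;
import java.util.List;

/** Sketch: Pareto domination and the domination rank of Equation 3.25. */
public class DominationRank {

  /** a dominates b: no worse in every objective, strictly better in at least one. */
  static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;   // worse in objective i: no domination
      if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
  }

  /** #dom(x, ref): number of elements of ref that dominate x. */
  static int domRank(double[] x, List<double[]> ref) {
    int count = 0;
    for (double[] other : ref) {
      if (dominates(other, x)) count++;
    }
    return count;
  }

  public static void main(String[] args) {
    List<double[]> pop = Arrays.asList(
        new double[]{1.0, 1.0}, new double[]{3.0, 0.0}, new double[]{2.0, 2.0});
    System.out.println(domRank(pop.get(2), pop)); // (2,2) is dominated by (1,1) only
  }
}
```

Ranking a whole population this way costs O(n²·m) comparisons for n candidates and m objectives, which is why many Evolutionary Algorithms compute it once per generation during fitness assignment.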

Example E3.18 (Example E3.6 Cont.: Pareto Optimization).


In Figure 3.10, we illustrate the impact of the definition of Pareto optimality on our first
example (outlined in Example E3.6). We again assume that f1 and f2 should both be
maximized and hence, ω1 = ω2 = −1. The areas shaded with dark gray are Pareto optimal
and thus, represent the optimal set X⋆ = [x2 , x3 ] ∪ [x5 , x6 ] which contains infinitely many
elements since it is composed of intervals on the real axis. In practice, of course, our
computers can only handle finitely many elements and we would not be able to find all
optimal elements unless, of course, we could apply a method able to mathematically resolve
the objective functions and to return the intervals in a form similar to the notation we used
here. All candidate solutions not in the optimal set are dominated by at least one element
of X ⋆ .


Figure 3.10: Optimization using the Pareto Frontier approach.

The points in the area between x1 and x2 (shaded in light gray) are dominated by other
points in the same region or in [x2 , x3 ], since both functions f1 and f2 can be improved by
increasing x. In other words, f1 and f2 harmonize in [x1 , x2 ): f1 ∼[x1 ,x2 ) f2 . If we start at
the leftmost point in X (which is position x1 ), for instance, we can go one small step ∆
to the right and will find a point x1 + ∆ dominating x1 because f1 (x1 + ∆) > f1 (x1 ) and
f2 (x1 + ∆) > f2 (x1 ). We can repeat this procedure and will always find a new dominating
point until we reach x2 . x2 marks the global maximum of f2 – the point with the highest possible f2 value – which cannot be dominated by any other element of X by definition (see
Equation 3.21).
From here on, f2 will decrease for some time, but f1 keeps rising. If we now go a
small step ∆ to the right, we will find a point x2 + ∆ with f2 (x2 + ∆) < f2 (x2 ) but also
f1 (x2 + ∆) > f1 (x2 ). One objective can only get better if another one degenerates, i. e., f1
and f2 conflict: f1 ≁[x2 ,x3 ] f2 . In order to increase f1 , f2 would have to be decreased and vice versa
and hence, the new point is not dominated by x2 . Although some of the f2 (x) values of the
other points x ∈ [x1 , x2 ) may be larger than f2 (x2 + ∆), f1 (x2 + ∆) > f1 (x) holds for all of
them. This means that no point in [x1 , x2 ) can dominate any point in [x2 , x4 ] because f1
keeps rising until x4 is reached.
At x3 , however, f2 steeply falls to a very low level, lower than f2 (x5 ). Since the f1
values of the points in [x5 , x6 ] are also higher than those of the points in (x3 , x4 ], all points
in the set [x5 , x6 ] (which also contains the global maximum of f1 ) dominate those in (x3 , x4 ].
For all the points in the white area between x4 and x5 and after x6 , we can derive similar
relations. All of them are also dominated by the non-dominated regions that we have just
discussed.

Example E3.19 (Example E3.7 Cont.: Pareto Optimization).


Another method to visualize the Pareto relationship is outlined in Figure 3.11 for our second
graphical example. For a certain grid-based resolution X′2 ⊂ X2 of the problem space X2 , we
counted the number #dom(x, X′2 ) of elements that dominate each element x ∈ X′2 . Again,
the higher this number, the worse the element x is in terms of Pareto optimization. Hence,
those candidate solutions residing in the valleys of Figure 3.11 are better than those which are
part of the hills. This Pareto ranking approach is also used in many optimization algorithms
as part of the fitness assignment scheme (see Section 28.3.3 on page 275, for instance). A
non-dominated element is, as the name says, not dominated by any other candidate solution.
These elements are Pareto optimal and have a domination rank of zero. In Figure 3.11, there
are four such areas X1⋆ , X2⋆ , X3⋆ , and X4⋆ .


Figure 3.11: Optimization using the Pareto Frontier approach (second example).

If we compare Figure 3.11 with the plots of the two functions f3 and f4 in Figure 3.5, we
can see that hills in the domination space occur at positions where both f3 and f4 have high
values. Conversely, regions of the problem space where both functions have small values are
dominated by very few elements.

Besides these examples here, another illustration of the domination relation which may help
understanding Pareto optimization can be found in Section 28.3.3 on page 275 (Figure 28.4
and Table 28.1).
3.3.5.1 Problems of Pure Pareto Optimization

Besides the fact that we will usually not be able to list all Pareto optimal elements, the
complete Pareto optimal set is often also not the wanted result of the optimization process.
Usually, we are rather interested in some special areas of the Pareto front only.

Example E3.20 (Example E3.5 – Artificial Ant – Cont.).


We can again take the Artificial Ant example to visualize this problem. In Example E3.5
on page 57 we have introduced multiple conflicting criteria in this problem.

1. Maximize the amount of food piled.

2. Minimize the distance covered or the time needed to find the food.

3. Minimize the size of the program driving the ant.

If we applied pure Pareto optimization to these objectives, we may, for example, obtain:

1. A program x1 consisting of 100 instructions, allowing the ant to gather 50 food items
when walking a distance of 500 length units.

2. A program x2 consisting of 100 instructions, allowing the ant to gather 60 food items
when walking a distance of 5000 length units.

3. A program x3 consisting of 50 instructions, allowing the ant to gather 61 food items


when walking a distance of 1 000 000 length units.

4. A program x4 consisting of 10 instructions, allowing the ant to gather 1 food item


when walking a distance of 5 length units.

5. A program x5 consisting of 0 instructions, allowing the ant to gather 0 food item when
walking a distance of 0 length units.

The result of the optimization process obviously contains useless but non-dominated indi-
viduals which occupy space in the non-dominated set. A program x3 which forces the ant
to scan each and every location on the map is not feasible even if it leads to a maximum
in food gathering. Also, an empty program x5 is useless because it will lead to no food
collection at all. Between x1 and x2 , the difference in the amount of piled food is only 20%, but the ant driven by x2 has to cover ten times the distance of the ant driven by x1 . Therefore, the question of the actual utility arises in this case, too.

If such candidate solutions are created and archived, processing time is invested in evaluating
them (and we have already seen that evaluation may be a costly step of the optimization
process). Ikeda et al. [1400] show that various elements, which are apart from the true
Pareto frontier, may survive as hardly-dominated solutions (called dominance resistant, see
Section 20.2). Also, such solutions may dominate others which are not optimal but fall into
the space behind the interesting part of the Pareto front. Furthermore, memory restrictions
usually force us to limit the size of the list of non-dominated solutions X found during
the search. When this size limit is reached, some optimization algorithms use a clustering
technique to prune the optimal set while maintaining diversity. On one hand, this is good
since it will preserve a broad scan of the Pareto frontier. On the other hand, in this case a
short but dumb program is of course very different from a longer, intelligent one. Therefore,
it will be kept in the list and other solutions which differ less from each other but are more
interesting for us will be discarded.
Worse still, non-dominated elements usually have a higher probability of being explored
further. This then leads inevitably to the creation of a great proportion of useless offspring.
In the next iteration of the optimization process, these useless offspring will need a good
share of the processing time to be evaluated.
Thus, there are several reasons to force the optimization process into a wanted direction.
In ?? on page ?? you can find an illustrative discussion on the drawbacks of strict Pareto
optimization in a practical example (evolving web service compositions).

3.3.5.2 Goals of Pareto Optimization

Purshouse [2228] identifies three goals of Pareto optimization which also mirror the afore-
mentioned problems:

1. Proximity: The optimizer should obtain result sets X which come as close to the Pareto
frontier as possible.

2. Diversity: If the real set of optimal solutions is too large, the set X of solutions actually
discovered should have a good distribution in both uniformity and extent.

3. Pertinency: The set X of solutions discovered should match the interests of the decision
maker using the optimizer. There is no use in containing candidate solutions with no
actual merits for the human operator.

3.4 Constraint Handling

Often, there are feasibility limits for the parameters of the candidate solutions or the objec-
tive function values. Such regions of interest are one of the reasons for one further extension
of multi-objective optimization problems:

Definition D3.17 (Feasibility). In an optimization problem, p ≥ 0 inequality constraints g and q ≥ 0 equality constraints h may be imposed in addition to the objective functions. Then, a candidate solution x is feasible if and only if it fulfills all constraints:

isFeasible(x) ⇔ (gi (x) ≥ 0 ∀i ∈ 1..p) ∧ (hj (x) = 0 ∀j ∈ 1..q) (3.26)

A candidate solution x for which isFeasible(x) does not hold is called infeasible. Obviously,
only a feasible individual can be a solution, i. e., an optimum, for a given optimization
problem. Constraint handling and the notation of feasibility can be combined with all the
methods for single-objective and multi-objective optimization previously discussed. Com-
prehensive reviews on techniques for such problems have been provided by Michalewicz
[1885], Michalewicz and Schoenauer [1890], and Coello Coello et al. [594, 599] in the con-
text of Evolutionary Computation. In Table 3.2, we give some examples for constraints in
optimization problems.

Table 3.2: Applications and Examples of Constraint Optimization.


Area References
Combinatorial Problems [237, 1161, 1735, 1822, 1871, 1921, 2036, 2072, 2658, 3034]
Computer Graphics [2487, 2488]
Databases [1735, 1822, 2658]
Decision Making [1801]
Distributed Algorithms and Sys- [1801, 3034]
tems
Economics and Finances [530, 531, 3034]
Engineering [500, 519, 530, 547, 1881, 3070]
Function Optimization [1732]
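Equation 3.26 maps directly to a predicate. In the sketch below the names are illustrative, and the equality constraints are tested against a small tolerance eps, since floating-point values rarely hit 0 exactly (the definition itself demands exact equality):

```java
/** Sketch of the feasibility predicate from Equation 3.26. */
public class Feasibility {

  /**
   * g holds the values g_i(x) (required to be >= 0), h the values h_j(x)
   * (required to be = 0, tested here against the tolerance eps).
   */
  static boolean isFeasible(double[] g, double[] h, double eps) {
    for (double gi : g) {
      if (gi < 0.0) return false;           // violated inequality constraint
    }
    for (double hj : h) {
      if (Math.abs(hj) > eps) return false; // violated equality constraint
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isFeasible(new double[]{0.5}, new double[]{0.0}, 1e-9));
    System.out.println(isFeasible(new double[]{-0.1}, new double[]{0.0}, 1e-9));
  }
}
```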

3.4.1 Genome and Phenome-based Approaches

One approach to ensure that constraints are always obeyed is to ensure that only feasible
candidate solutions can occur during the optimization process.

1. Coding: This can be done as part of the genotype-phenotype mapping by using so-
called decoders. An example for a decoder is given in [2472, 2473]; see also [2228].

2. Repairing: Infeasible candidate solutions could be repaired, i. e., turned into feasible
ones [2228, 3041] as part of the genotype-phenotype mapping as well.

3.4.2 Death Penalty

Probably the easiest way of dealing with constraints in objective and fitness space is to simply
reject all infeasible candidate solutions right away and not considering them any further in
the optimization process. This death penalty [1885, 1888] can only work in problems where
the feasible regions are very large. If the problem space contains only few or non-continuously
distributed feasible elements, such an approach will cause the optimization process to stagnate because most effort is spent on untargeted sampling in the hope of finding feasible candidate solutions. Once this has happened, the transition to other feasible elements may be too
complicated and the search only finds local optima. Additionally, if infeasible elements are
directly discarded, the information which could be gained from them is disposed with them.

3.4.3 Penalty Functions

Maybe one of the most popular approach for dealing with constraints, especially in the
area of single-objective optimization, goes back to Courant [647] who introduced the idea of
penalty functions in 1943. Here, the constraints are combined with the objective function f,
resulting in a new function f′ which is then actually optimized. The basic idea is that this
combination is done in a way which ensures that an infeasible candidate solution always has
a worse f′ -value than a feasible one with the same objective values. In [647], this is achieved
by defining f′ as f′ (x) = f(x) + v [h(x)]² . Various similar approaches exist. Carroll [500, 501], for instance, chose a penalty function of the form f′ (x) = f(x) + v Σ_{i=1}^{p} [gi (x)]^{−1} in order to
ensure that the function g always stays larger than zero.
There are practically no limits for the ways in which a penalty for infeasibility can be
integrated into the objective functions. Several researchers suggest dynamic penalties which
incorporate the index of the current iteration of the optimizer [1458, 2072] or adaptive penal-
ties which additionally utilize population statistics [237, 1161, 2487]. Thorough discussions
on penalty functions have been contributed by Fiacco and McCormick [908] and Smith and
Coit [2526].
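Courant's scheme translates into a few lines of code; the following sketch (the helper name `penalized` and the example functions f and h are our own illustration, not from the book) combines an objective with an equality constraint h(x) = 0:

```java
import java.util.function.DoubleUnaryOperator;

/** Courant-style penalty: f'(x) = f(x) + v * [h(x)]^2 for an equality constraint h(x) = 0. */
public class PenaltyFunction {

  /** Combine objective f and constraint h into the penalized objective f'. */
  public static DoubleUnaryOperator penalized(DoubleUnaryOperator f, DoubleUnaryOperator h, double v) {
    return x -> f.applyAsDouble(x) + v * Math.pow(h.applyAsDouble(x), 2.0);
  }

  public static void main(String[] args) {
    DoubleUnaryOperator f = x -> x;       // minimize f(x) = x ...
    DoubleUnaryOperator h = x -> x - 3.0; // ... subject to h(x) = x - 3 = 0
    DoubleUnaryOperator fPrime = penalized(f, h, 100.0);
    // the infeasible point x = 0 now scores far worse than the feasible point x = 3
    System.out.println(fPrime.applyAsDouble(0.0)); // 0 + 100 * 9 = 900.0
    System.out.println(fPrime.applyAsDouble(3.0)); // 3 + 100 * 0 = 3.0
  }
}
```

The weight v controls how strongly infeasibility is punished; dynamic and adaptive variants mentioned above change v during the run.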
72 3 OPTIMA: WHAT DOES GOOD MEAN?
3.4.4 Constraints as Additional Objectives

Another idea for handling constraints would be to consider them as new objective functions.
If g(x) ≥ 0 must hold, for instance, we can transform this to a new objective function which
is subject to minimization.
f≥ (x) = −min {g(x) , 0} (3.27)
As long as g(x) ≥ 0, the constraint is met. f≥ will be zero for all candidate solutions which fulfill this requirement, since there is no use in maximizing g beyond 0. However, if g becomes negative, the minimum term in f≥ becomes negative as well. The minus sign in front of it then turns it positive: the larger the absolute value of the negative g, the higher the value of f≥. Since f≥ is subject to minimization, we thus apply a force in the direction of meeting the constraint. No optimization pressure is applied as long as the constraint is met. An equality constraint h can similarly be transformed to a function f= (x) (again subject to minimization).

f=(x) = max{|h(x)|, ε} (3.28)

Minimizing f= drives the absolute value of h toward zero. If h is continuous and real-valued, it may approach 0 arbitrarily closely but never actually reach it. Then, defining a small cut-off value ε (for instance ε = 10⁻⁶) may help the optimizer to satisfy the constraint. An approach similar to the idea of interpreting constraints as objectives is Deb’s Goal Programming method [736, 739]. Constraint violations and objective values are ranked together in a part of the MOEA by Ray et al. [2273, 749].
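Equation 3.27 and Equation 3.28 can be coded directly; the sketch below (class and method names are ours) uses the equivalent form max{−g(x), 0} for f≥ to avoid Java's negative zero:

```java
/** Constraints as objectives: Eq. 3.27 (inequality g(x) >= 0) and Eq. 3.28 (equality h(x) = 0). */
public class ConstraintObjectives {

  /** f>=(x) = -min{g(x), 0}, written as max{-g(x), 0}: zero while g(x) >= 0 holds. */
  public static double fGeq(double g) {
    return Math.max(-g, 0.0);
  }

  /** f=(x) = max{|h(x)|, eps}: |h(x)| with a small cut-off eps for continuous h. */
  public static double fEq(double h, double eps) {
    return Math.max(Math.abs(h), eps);
  }

  public static void main(String[] args) {
    System.out.println(fGeq(2.0));       // constraint met: no optimization pressure
    System.out.println(fGeq(-1.5));      // violated by 1.5: pressure toward feasibility
    System.out.println(fEq(1e-9, 1e-6)); // |h| below the cut-off counts as satisfied
  }
}
```

Both functions are zero (or ε) exactly where the corresponding constraint is satisfied, so they can be handed to any multi-objective minimizer alongside the original objectives.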

3.4.5 The Method Of Inequalities

General inequality constraints can also be processed according to the Method of Inequalities (MOI) introduced by Zakian [3050, 3052, 3054] in his seminal work on computer-aided control systems design (CACSD) [2404, 2924, 3070]. In the MOI, an area of interest is specified in form of a goal range [r̆i, r̂i] for each objective function fi.
Pohlheim [2190] outlines how this approach can be applied to the set f of objective
functions and combined with Pareto optimization: Based on the inequalities, three categories
of candidate solutions can be defined and each element x ∈ X belongs to one of them:

1. It fulfills all of the goals, i. e.,

r̆i ≤ fi(x) ≤ r̂i ∀i ∈ 1..|f| (3.29)

2. It fulfills some (but not all) of the goals, i. e.,

(∃i ∈ 1..|f| : r̆i ≤ fi(x) ≤ r̂i) ∧ (∃j ∈ 1..|f| : (fj(x) < r̆j) ∨ (fj(x) > r̂j)) (3.30)

3. It fulfills none of the goals, i. e.,

(fi(x) < r̆i) ∨ (fi(x) > r̂i) ∀i ∈ 1..|f| (3.31)

Using these groups, a new comparison mechanism is created:

1. The candidate solutions which fulfill all goals are preferred over all other individuals that fulfill only some or none of the goals.
2. The candidate solutions which are not able to fulfill any of the goals succumb to those which fulfill at least some goals.
3. Only the solutions that are in the same group are compared on the basis of the Pareto domination relation.
By doing so, the optimization process will be driven into the direction of the interesting
part of the Pareto frontier. Less effort will be spent on creating and evaluating individuals
in parts of the problem space that most probably do not contain any valid solution.
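The three categories can be computed mechanically from the goal ranges; the following sketch (class and method names are ours, with lo and hi standing for the bounds r̆i and r̂i) returns 1, 2, or 3 according to Equation 3.29 to Equation 3.31:

```java
/** MOI classification: which of the three categories does a candidate belong to? */
public class MoiClassifier {

  /** Returns 1 if all goals are met, 2 if some are, 3 if none are (Eq. 3.29 to 3.31). */
  public static int moiClass(double[] f, double[] lo, double[] hi) {
    int met = 0;
    for (int i = 0; i < f.length; i++) {
      if (lo[i] <= f[i] && f[i] <= hi[i]) {
        met++; // objective i lies inside its goal range
      }
    }
    if (met == f.length) return 1;
    return met > 0 ? 2 : 3;
  }

  public static void main(String[] args) {
    double[] lo = {0.0, 0.0};
    double[] hi = {1.0, 1.0};
    System.out.println(moiClass(new double[] {0.5, 0.5}, lo, hi)); // 1: all goals met
    System.out.println(moiClass(new double[] {0.5, 2.0}, lo, hi)); // 2: some goals met
    System.out.println(moiClass(new double[] {5.0, 2.0}, lo, hi)); // 3: no goal met
  }
}
```

Candidates in a lower-numbered class are preferred; only candidates within the same class would then be compared via Pareto domination.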

Example E3.21 (Example E3.6 Cont.: MOI).


In Figure 3.12, we apply the Pareto-based Method of Inequalities to the graphical Example E3.6. We impose the same goal ranges on both objectives: r̆1 = r̆2 and r̂1 = r̂2. By doing so, the second non-dominated region from the Pareto example Figure 3.10 suddenly becomes infeasible, since f1 rises over r̂1. Also, the greater part of the first optimal area from this example is now infeasible because f2 drops under r̆2. In the whole domain X of the optimization problem, only the regions [x1, x2] and [x3, x4] fulfill all the target criteria. To these elements, Pareto comparisons are applied. It turns out that the elements in [x3, x4] dominate all the elements in [x1, x2] since they provide higher values in f1 for the same values in f2. If we scan through [x3, x4] from left to right, we see that f1 rises while f2 degenerates, which is why the elements in this area cannot dominate each other and hence, are all optimal.
[Figure 3.12 plots y = f1(x) and y = f2(x) over x ∈ X together with the common goal range [r̆1, r̂1] = [r̆2, r̂2]; only the regions [x1, x2] and [x3, x4] meet all goals.]
Figure 3.12: Optimization using the Pareto-based Method of Inequalities approach (first
example).

Example E3.22 (Example E3.7 Cont.: MOI).


In Figure 3.13 we apply the Pareto-based Method of Inequalities to our second graphical example (Example E3.7). For illustration purposes, we define two different ranges of interest [r̆3, r̂3] and [r̆4, r̂4] on f3 and f4, as sketched in Fig. 3.13.a.
Like we did in the second example for Pareto optimization, we want to plot the quality of
the elements in the problem space. Therefore, we first assign a number c ∈ {1, 2, 3} to each
of its elements in Fig. 3.13.b. This number corresponds to the classes to which the elements
belong, i. e., 1 means that a candidate solution fulfills all inequalities, for an element of class
2, at least some of the constraints hold, and the elements in class 3 fail all requirements.
Based on this class division, we can then perform a modified Pareto counting where each candidate solution dominates all the elements in higher classes (Fig. 3.13.c). The result is that multiple single optima x⋆1, x⋆2, x⋆3, etc., and even a set of adjacent, non-dominated elements X⋆9 occur. These elements are, again, situated at the bottom of the illustrated landscape whereas the worst candidate solutions reside on hill tops.

Fig. 3.13.a: The ranges [r̆3, r̂3] and [r̆4, r̂4] applied to f3 and f4.
Fig. 3.13.b: The Pareto-based Method of Inequalities class division.
Fig. 3.13.c: The Pareto-based Method of Inequalities ranking.

Figure 3.13: Optimization using the Pareto-based Method of Inequalities approach (second example).

A good overview on techniques for the Method of Inequalities is given by Whidborne et al.
[2924].

3.4.6 Constrained-Domination

The idea of constrained-domination is very similar to the Method of Inequalities but natively
applies to optimization problems as specified in Definition D3.17. Deb et al. [749] define
this principle which can easily be incorporated into the dominance comparison between any
two candidate solutions. Here, a candidate solution x1 ∈ X is preferred in comparison to an element x2 ∈ X [547, 749, 1297] if

1. x1 is feasible while x2 is not,

2. x1 and x2 both are infeasible but x1 has a smaller overall constraint violation, or

3. x1 and x2 are both feasible but x1 dominates x2 .
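This three-step rule can be written as a single comparator; in the sketch below (names are ours), each candidate is given by its objective vector, assumed subject to minimization, and an overall constraint violation where 0 means feasible:

```java
/** Deb's constrained-domination as a comparator: a negative result means x1 is preferred. */
public class ConstrainedDomination {

  /** Pareto domination for minimization: does objective vector a dominate b? */
  static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false; // worse in one objective: no domination
      if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
  }

  /** f1, f2: objective vectors; v1, v2: overall constraint violations (0 = feasible). */
  public static int compare(double[] f1, double v1, double[] f2, double v2) {
    if (v1 == 0.0 && v2 > 0.0) return -1; // 1. only x1 is feasible
    if (v2 == 0.0 && v1 > 0.0) return 1;  //    only x2 is feasible
    if (v1 > 0.0 && v2 > 0.0) {
      return Double.compare(v1, v2);      // 2. both infeasible: smaller violation wins
    }
    if (dominates(f1, f2)) return -1;     // 3. both feasible: plain Pareto domination
    if (dominates(f2, f1)) return 1;
    return 0;
  }

  public static void main(String[] args) {
    // a feasible candidate beats an infeasible one regardless of its objective values
    System.out.println(compare(new double[] {9, 9}, 0.0, new double[] {1, 1}, 0.5)); // -1
  }
}
```

Note that this comparator never weighs objective values against constraint violations, which is what makes it easy to drop into existing dominance-based algorithms.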

3.4.7 Limitations and Other Methods

Other approaches for incorporating constraints into optimization are Goal Attainment [951,
2951] and Goal Programming7 [530, 531]. Especially interesting in our context are methods
which have been integrated into Evolutionary Algorithms [736, 739, 2190, 2393, 2662], such
as the popular Goal Attainment approach by Fonseca and Fleming [951] which is similar to
the Pareto-MOI we have adopted from Pohlheim [2190]. Again, an overview on this subject
is given by Coello Coello et al. in [599].

3.5 Unifying Approaches

3.5.1 External Decision Maker

All approaches for defining what optima are and how constraints should be considered which we discussed until now were rather specific and bound to certain mathematical constructs. The more general concept of an External Decision Maker which (or who) decides which candidate solutions prevail has been introduced by Fonseca and Fleming [952, 954].
One of the ideas behind “externalizing” the assessment process on what is good and
what is bad is that Pareto optimization as well as, for instance, the Method of Inequalities,
impose rigid, immutable partial orders8 on the candidate solutions. The first drawback of
such a partial order is that elements may exist which neither succeed nor precede each other. As we have seen in the examples discussed in Section 3.3.5, there can, for instance, be two candidate solutions x1, x2 ∈ X with neither x1 ⊣ x2 nor x2 ⊣ x1. Since there can be arbitrarily many such elements, the optimization process may run into a situation where its set of currently discovered “best” candidate solutions X grows quickly, which leads to high memory consumption and declining search speed.
The second drawback of only using rigid partial orders is that they make no statement about the quality relations within X. There is no “best of the best”. We cannot decide which of two candidate solutions not dominating each other is the most interesting one for further
7 http://en.wikipedia.org/wiki/Goal_programming [accessed 2007-07-03]
8 A definition of partial order relations is specified in Definition D51.16 on page 645.
investigation. Also, the plain Pareto relation does not allow us to dispose of the “weakest” element in X in order to save memory, since all elements in X are alike.
One example for introducing a quality measure in addition to the Pareto relation is the Pareto ranking which we already used in Example E3.19 and which we will discuss later in Section 28.3.3 on page 275. Here, the utility of a candidate solution is defined by the number of other candidate solutions it is dominated by. Into this number, additional density information can be incorporated which deems areas of X that have only sparsely been sampled more interesting than those which have already been investigated extensively.
While such ordering methods are good default approaches able to direct the search into the direction of the Pareto frontier and to deliver a broad scan of it, they neglect the fact that the user of the optimization most often is not interested in the whole optimal set but has preferences for certain regions of interest [955]. This region would exclude, for example, the infeasible (but Pareto optimal) programs for the Artificial Ant as discussed in Section 3.3.5.1. What the user wants is a detailed scan of these areas, which often cannot be delivered by pure Pareto optimization.

[Figure 3.14: the decision maker (DM) receives a priori knowledge and the acquired objective values from the optimizer (EA) and supplies utility/cost values; the optimizer delivers the results.]

Figure 3.14: An external decision maker providing an Evolutionary Algorithm with utility
values.

Here, the External Decision Maker comes into play as an expression of the user’s preferences [949], as illustrated in Figure 3.14. The task of this decision maker is to provide a cost function u : Rⁿ ↦ R (or utility function, if the underlying optimizer is maximizing) which maps the space of objective values (Rⁿ) to the real numbers R. Since there is a total order defined on the real numbers, this process is another way of resolving the “incomparability situation”. The structure of the decision making process u can freely be defined and may incorporate any of the previously mentioned methods. u could, for example, be reduced to computing a weighted sum of the objective values, to performing an implicit Pareto ranking, or to comparing individuals based on pre-specified goal vectors. Furthermore, it may even incorporate forms of artificial intelligence, other forms of multi-criterion Decision Making, and even interaction with the user (similar to Example E2.6 on page 48, for instance). This technique allows focusing the search onto solutions which are not only good in the Pareto sense, but also feasible and interesting from the viewpoint of the user.
Fonseca and Fleming make a clear distinction between fitness and cost values. Cost values
have some meaning outside the optimization process and are based on user preferences.
Fitness values on the other hand are an internal construct of the search with no meaning
outside the optimizer (see Definition D5.1 on page 94 for more details). If External Decision
Makers are applied in Evolutionary Algorithms or other search paradigms based on fitness
measures, these will be computed using the values of the cost function instead of the objective
functions [949, 950, 956].
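As a minimal illustration (the class name and the choice of a weighted sum as the decision rule are our own, not Fonseca and Fleming's), such a cost function u can be represented as a plain function object over objective vectors:

```java
import java.util.function.ToDoubleFunction;

/** An External Decision Maker sketched as a cost function u: R^n -> R. */
public class DecisionMaker {

  /** A simple DM expressing the user's preferences as a weighted sum of objectives. */
  public static ToDoubleFunction<double[]> weightedSumDm(double[] weights) {
    return objectives -> {
      double cost = 0.0;
      for (int i = 0; i < weights.length; i++) {
        cost += weights[i] * objectives[i];
      }
      return cost; // the total order on R resolves any incomparability
    };
  }

  public static void main(String[] args) {
    ToDoubleFunction<double[]> u = weightedSumDm(new double[] {1.0, 1.0});
    // two mutually non-dominating objective vectors become comparable via their cost
    double c1 = u.applyAsDouble(new double[] {1.0, 3.0}); // cost 4.0
    double c2 = u.applyAsDouble(new double[] {2.0, 1.0}); // cost 3.0
    System.out.println(c1 > c2); // the DM prefers the second candidate
  }
}
```

Any of the mechanisms above, from goal vectors to interactive queries, could stand behind the same `ToDoubleFunction<double[]>` signature.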

3.5.2 Prevalence Optimization

We have now discussed various ways to define what optima in terms of multi-objective optimization are and to steer the search process into their direction. Let us subsume all of them in a general approach. From the concept of Pareto optimization to the Method of Inequalities, the need to compare elements of the problem space in terms of their quality as solutions for a given problem winds like a red thread through this matter. Even the weighted sum approach and the External Decision Maker do nothing else than mapping multi-dimensional vectors to the real numbers in order to make them comparable.
If we compare two candidate solutions x1 and x2, either x1 is better than x2, vice versa, or the result is undecidable. In the latter case, we assume that both are of equal quality. Hence, there are three possible relations between two elements of the problem space. These relations can be expressed with a comparator function cmpf.

Definition D3.18 (Comparator Function). A comparator function cmp : A² ↦ R maps all pairs of elements (a1, a2) ∈ A² to the real numbers R according to a (strict) partial order9 R:

R(a1, a2) ⇔ (cmp(a1, a2) < 0) ∧ (cmp(a2, a1) > 0) ∀a1, a2 ∈ A (3.32)
¬R(a1, a2) ∧ ¬R(a2, a1) ⇔ cmp(a1, a2) = cmp(a2, a1) = 0 ∀a1, a2 ∈ A (3.33)
R(a1, a2) ⇒ ¬R(a2, a1) ∀a1, a2 ∈ A (3.34)
cmp(a, a) = 0 ∀a ∈ A (3.35)

We provide a Java interface for the functionality of such comparator functions in Listing 55.15 on page 719. From the defining equations, many features of “cmp” can be deduced. It is, for instance, transitive, i. e., cmp(a1, a2) < 0 ∧ cmp(a2, a3) < 0 ⇒ cmp(a1, a3) < 0. Provided with the knowledge of the objective functions f ∈ f, such a comparator function cmpf can be imposed on the problem spaces of our optimization problems:

Definition D3.19 (Prevalence Comparator Function). A prevalence comparator function cmpf : X² ↦ R maps all pairs (x1, x2) ∈ X² of candidate solutions to the real numbers R according to Definition D3.18.

The subscript f in cmpf illustrates that the comparator has access to all the values of the
objective functions in addition to the problem space elements which are its parameters. As
shortcut for this comparator function, we introduce the prevalence notation as follows:

Definition D3.20 (Prevalence). An element x1 prevails over an element x2 (x1 ≺ x2) if the application-dependent prevalence comparator function cmpf(x1, x2) ∈ R returns a value less than 0.

x1 ≺ x2 ⇔ cmpf(x1, x2) < 0 ∀x1, x2 ∈ X (3.36)
(x1 ≺ x2) ∧ (x2 ≺ x3) ⇒ x1 ≺ x3 ∀x1, x2, x3 ∈ X (3.37)

It is easy to see that we can fit lexicographic orderings, Pareto domination relations, the Method of Inequalities-based comparisons, as well as the weighted sum combination of objective values into this notation. Together with the fitness assignment strategies for Evolutionary Algorithms which will be introduced later in this book (see Section 28.3 on page 274), it covers many of the most sophisticated multi-objective techniques proposed, for instance, in [952, 1531, 2662]. By replacing the Pareto approach with prevalence comparisons, all the optimization algorithms (especially many of the evolutionary techniques) relying on domination relations can be used in their original form while offering the additional ability of scanning special regions of interest of the optimal frontier.
9 Partial orders are introduced in Definition D51.15 on page 645.
Since the comparator function cmpf and the prevalence relation impose a partial order
on the problem space X like the domination relation does, we can construct the optimal set
X ⋆ containing globally optimal elements x⋆ in a way very similar to Equation 3.23:

x⋆ ∈ X⋆ ⇔ ¬∃x ∈ X : x ≠ x⋆ ∧ x ≺ x⋆ (3.38)

For illustration purposes, we will exercise the prevalence approach on the examples of lexico-
graphic optimality (see Section 3.3.2) and define the comparator function cmpf ,lex . For the
weighted sum method with the weights wi (see Section 3.3.3) we similarly provide cmpf ,ws
and the domination-based Pareto optimization with the objective directions ωi as defined
in Section 3.3.5 can be realized with cmpf ,⊣ .


cmpf,lex(x1, x2) = −1 if ∃π ∈ Π(1..|f|) : (x1 <lex,π x2) ∧ ¬∃π ∈ Π(1..|f|) : (x2 <lex,π x1);
                    1 if ∃π ∈ Π(1..|f|) : (x2 <lex,π x1) ∧ ¬∃π ∈ Π(1..|f|) : (x1 <lex,π x2);
                    0 otherwise (3.39)

cmpf,ws(x1, x2) = Σ_{i=1}^{|f|} (wi fi(x1) − wi fi(x2)) ≡ ws(x1) − ws(x2) (3.40)

cmpf,⊣(x1, x2) = −1 if x1 ⊣ x2; 1 if x2 ⊣ x1; 0 otherwise (3.41)
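Equation 3.40 and Equation 3.41 implement directly; the sketch below (class and method names are ours, minimization assumed) mirrors the comparator interface mentioned above:

```java
/** Prevalence comparators cmp_ws (Eq. 3.40) and the domination-based cmp (Eq. 3.41). */
public class PrevalenceComparators {

  /** Weighted-sum comparator: ws(x1) - ws(x2); negative means x1 prevails. */
  public static double cmpWs(double[] f1, double[] f2, double[] w) {
    double d = 0.0;
    for (int i = 0; i < w.length; i++) {
      d += w[i] * (f1[i] - f2[i]);
    }
    return d;
  }

  /** Domination comparator: -1 if x1 dominates, 1 if x2 dominates, 0 otherwise. */
  public static int cmpDom(double[] f1, double[] f2) {
    boolean better1 = false, better2 = false; // strictly better somewhere (minimization)
    for (int i = 0; i < f1.length; i++) {
      if (f1[i] < f2[i]) better1 = true;
      if (f2[i] < f1[i]) better2 = true;
    }
    if (better1 && !better2) return -1;
    if (better2 && !better1) return 1;
    return 0; // equal or mutually non-dominating
  }

  public static void main(String[] args) {
    System.out.println(cmpDom(new double[] {1, 1}, new double[] {2, 2})); // -1: x1 prevails
    System.out.println(cmpWs(new double[] {1, 1}, new double[] {2, 2}, new double[] {1, 1})); // -2.0
  }
}
```

Swapping one of these comparators for another changes which regions of the frontier the search concentrates on without touching the optimization algorithm itself.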

Example E3.23 (Example E3.5 – Artificial Ant – Cont.).


With the prevalence comparator, we can also easily solve the problem stated in Sec-
tion 3.3.5.1 by no longer encouraging the evolution of useless programs for Artificial Ants
while retaining the benefits of Pareto optimization. The comparator function can simply
be defined in a way that infeasible candidate solutions will always be prevailed by useful
programs. It therefore may incorporate the knowledge on the importance of the objective
functions. Let f1 be the objective function with an output proportional to the food piled, f2
denote the distance covered in order to find the food, and f3 be the program length. Equa-
tion 3.42 demonstrates one possible comparator function for the Artificial Ant problem.


cmpf,ant(x1, x2) = −1 if (f1(x1) > 0 ∧ f1(x2) = 0) ∨ (f2(x1) > 0 ∧ f2(x2) = 0) ∨ (f3(x1) > 0 ∧ f1(x2) = 0);
                    1 if (f1(x2) > 0 ∧ f1(x1) = 0) ∨ (f2(x2) > 0 ∧ f2(x1) = 0) ∨ (f3(x2) > 0 ∧ f1(x1) = 0);
                    cmpf,⊣(x1, x2) otherwise (3.42)

Later in this book, we will discuss some of the most popular optimization strategies. Al-
though they are usually implemented based on Pareto optimization, we will always introduce
them using prevalence.
Tasks T3

13. In Example E1.2 on page 26, we stated the layout of circuits as a combinatorial opti-
mization problem. Name at least three criteria which could be subject to optimization
or constraints in this scenario.
[6 points]

14. The job shop scheduling problem was outlined in Example E1.3 on page 26. The most basic goal when solving job shop problems is to meet all production deadlines. Would you formulate this goal as a constraint or rather as an objective function? Why?
[4 points]

15. Let us consider the university life of a student as an optimization problem with the goal of obtaining good grades in all the subjects she attends. How could we formulate a

(a) single objective function for that purpose, or


(b) a set of objectives which can be optimized with, for example, the Pareto approach.

Additionally, phrase at least two constraints in daily life which, for example, prevent
the student from learning 24 hours per day.
[10 points]

16. Consider a bi-objective optimization problem with X = R as problem space and the two
objective functions f1 (x) = e|x−3| and f2 (x) = |x − 100| both subject to minimization.
What is your opinion about the viability of solving this problem with the weighted
sum approach with both weights set to 1?
[5 points]

17. If we perform Pareto optimization of n = |f| objective functions fi : i ∈ 1..n where each of the objectives can take on m different values, there can be at most mⁿ⁻¹ non-dominated candidate solutions at a time. Sketch configurations of mⁿ⁻¹ non-dominated candidate solutions in diagrams similar to Figure 28.4 on page 276 for n ∈ {1, 2} and m ∈ {1, 2, 3, 4}. Outline why the sentence also holds for n > 2.
[10 points]

18. In your opinion, is it good if most of the candidate solutions under investigation
by a Pareto-based optimization algorithm are non-dominated? For answering this
question, consider an optimization process that decides which individual should be
used for further investigation solely on basis of its objective values and, hence, the
Pareto dominance relation. Consider the extreme case that all points known to such
an algorithm are non-dominated to answer the question.
Based on your answer, make a statement about the expected impact of the fact dis-
cussed in Task 17 on the quality of solutions that a Pareto-based optimizer can deliver
for rising dimensionality of the objective space, i. e., for increasing values of n = |f |.
[10 points]

19. Use Equation 3.1 to Equation 3.5 from page 52 to determine all extrema (optima) for

(a) f (x) = 3x5 + 2x2 − x,


(b) g(x) = 2x2 + x, and
(c) h(x) = sin x.
[15 points]
Chapter 4
Search Space and Operators: How can we find it?

4.1 The Search Space

In Section 2.2, we gave various examples for optimization problems, such as from simple
real-valued problems (Example E2.1), antenna design (Example E2.3), and tasks in aircraft
development (Example E2.5). Obviously, the problem spaces X of these problems are quite
different. A solution of the stone’s throw task, for instance, was a real number of meters per
second. An antenna can be described by its blueprint which is structured very differently
from the blueprints of aircraft wings or rotors.
Optimization algorithms usually start out with a set of either predefined or automatically generated elements from X which are far from being optimal (otherwise, optimization would not make much sense). From these initial solutions, new elements from X are explored in the hope of improving the solution quality. In order to find these new candidate solutions,
search operators are used which modify or combine the elements already known.
Since the problem spaces of the three aforementioned examples are different, this would
mean that different sets of search operators are required, too. It would, however, be cumber-
some to develop search operations time and again for each new problem space we encounter.
Such an approach would not only be error-prone, it would also make it very hard to formulate
general laws and to consolidate findings.
Instead, we often reuse well-known search spaces for many different problems. After the problem space is defined, we will always look for ways to map it to such a search space for which search operators are already available. This is not always possible, but for most of the problems we will encounter, it will save a lot of effort.

Example E4.1 (Search Spaces for Example E2.1, E2.3, and E2.5).
The stone’s throw as well as many antenna and wing blueprints, for instance, can all be represented as vectors of real numbers. For the stone’s throw, these vectors would have length one (Fig. 4.1.a). An antenna consisting of n segments could be encoded as a vector of length 3n, where the element at index i specifies a length and the elements at i + 1 and i + 2 are angles denoting into which direction the segment is bent, as sketched in Fig. 4.1.b. The aircraft wing could be represented as a vector of length 3n, too, where the element at index i is the segment width and i + 1 and i + 2 denote its disposition and height (Fig. 4.1.c).

Fig. 4.1.a: Stone’s throw. Fig. 4.1.b: Antenna design. Fig. 4.1.c: Wing design.

Figure 4.1: Examples for real-coding candidate solutions.

Definition D4.1 (Search Space). The search space G (genome) of an optimization prob-
lem is the set of all elements g which can be processed by the search operations.

In dependence on Genetic Algorithms, we often refer to the search space synonymously as genome1. Genetic Algorithms (see Chapter 29) are optimization algorithms inspired by biological evolution. The term genome was coined by the German biologist Winkler [2958] as a portmanteau of the words gene and chromosome [1701].2 In biology, the genome is the whole hereditary information of an organism and the genotype represents the hereditary information of an individual. Since in the context of optimization we refer to the search space as genome, we call its elements genotypes.

Definition D4.2 (Genotype). The elements g ∈ G of the search space G of a given


optimization problem are called the genotypes.

As said, a biological genotype encodes the complete hereditary information of an individual. Likewise, a genotype in optimization encodes a complete candidate solution. If G ≠ X, we distinguish genotypes g ∈ G and phenotypes x ∈ X. In Example E4.1, the genomes all are vectors of real numbers but the problem spaces are either a velocity, an antenna design, or a blueprint of a wing.
The elements of the search space rarely are unstructured aggregations. Instead, they
often consist of distinguishable parts, hierarchical units, or well-typed data structures. The
same goes for the chromosomes in biology. They consist of genes 3 , segments of nucleic acid,
which contain the information necessary to produce RNA strings in a controlled manner and
encode phenotypical or metabolical properties. A fish, for instance, may have a gene for the
color of its scales. This gene could have two possible “values” called alleles 4 , determining
whether the scales will be brown or gray. The Genetic Algorithm community has adopted
this notation long ago and we use it for specifying the detailed structure of the genotypes.

Definition D4.3 (Gene). The distinguishable units of information in genotypes g ∈ G


which encode properties of the candidate solutions x ∈ X are called genes.

1 http://en.wikipedia.org/wiki/Genome [accessed 2007-07-15]


2 The words gene, genotype, and phenotype have, in turn, been introduced [2957] by the Danish biologist
Johannsen [1448].
3 http://en.wikipedia.org/wiki/Gene [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Allele [accessed 2007-07-03]
Definition D4.4 (Allele). An allele is a value of a specific gene.

Definition D4.5 (Locus). The locus5 is the position where a specific gene can be found
in a genotype [634].

Example E4.2 (Small Binary Genome).


Genetic Algorithms are a widely-researched optimization approach which, in its original
form, utilizes bit strings as genomes. Assume that we wished to apply such a plain GA to
the minimization of a function f : (0..3 × 0..3) 7→ R such as the trivial f(x = (x[0], x[1])) =
x[0] − x[1] : x[0] ∈ X0 , x[1] ∈ X1 .

[Figure 4.2 sketches the genome G of all four-bit strings, one genotype g ∈ G with the allele “01” of the first gene at locus 0 and the allele “11” of the second gene at locus 1, the genotype-phenotype mapping x = gpm(g), and the phenotype x in the problem space X = X0 × X1.]
Figure 4.2: The relation of genome, genes, and the problem space.

Figure 4.2 illustrates how this can be achieved by using two bits for each of x[0] and x[1]. The first gene resides at locus 0 and encodes x[0] whereas the two bits at locus 1 stand for x[1]. Four bits suffice for encoding the two natural numbers in X0 = X1 = 0..3 and thus, the whole candidate solution (phenotype) x. With this trivial encoding, the Genetic Algorithm can be applied to the problem and will quickly find the solution x = (0, 3).
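The encoding of Example E4.2 can be sketched as a genotype-phenotype mapping over boolean arrays (class and method names are our own illustration):

```java
/** Genotype-phenotype mapping for Example E4.2: a 4-bit genome encodes (x[0], x[1]) in 0..3. */
public class BitGenomeGpm {

  /** Decode: the gene at locus 0 is bits 0-1 (x[0]), the gene at locus 1 is bits 2-3 (x[1]). */
  public static int[] gpm(boolean[] g) {
    int x0 = (g[0] ? 2 : 0) + (g[1] ? 1 : 0);
    int x1 = (g[2] ? 2 : 0) + (g[3] ? 1 : 0);
    return new int[] {x0, x1};
  }

  /** The objective from the example: f(x) = x[0] - x[1], minimal at x = (0, 3). */
  public static int f(int[] x) {
    return x[0] - x[1];
  }

  public static void main(String[] args) {
    boolean[] best = {false, false, true, true}; // genotype "0011"
    int[] x = gpm(best);
    System.out.println("x = (" + x[0] + ", " + x[1] + "), f = " + f(x)); // x = (0, 3), f = -3
  }
}
```

The search operators of the GA only ever see the four-bit genotypes; the objective f is always evaluated on the decoded phenotype.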

4.2 The Search Operations

Definition D4.6 (Search Operation). A search operation “searchOp” takes a fixed number n ∈ N0 of elements g1..gn from the search space G as parameters and returns another element from the search space.

searchOp : Gn ↦ G (4.1)

Search operations are used by an optimization algorithm in order to explore the search space. n is an operator-specific value, its arity6. An operation with n = 0, i. e., a nullary operator, receives no parameters and may, for instance, create randomly configured genotypes (see, for instance, Listing 55.2 on page 708). An operator with n = 1 could randomly modify
5 http://en.wikipedia.org/wiki/Locus_%28genetics%29 [accessed 2007-07-03]
6 http://en.wikipedia.org/wiki/Arity [accessed 2008-02-15]
its parameter – the mutation operator in Evolutionary Algorithms is an example for this
behavior.
The key idea of all search operations which take at least one parameter (n > 0) (see
Listing 55.3) is that
1. it is possible to take a genotype g of a candidate solution x = gpm(g) with good
objective values f (x) and to
2. modify this genotype g to get a new one g ′ = searchOp(g, . . .) which
3. can be translated to a phenotype x′ = gpm(g ′ ) with similar or maybe even better
objective values f (x′ ).
This assumption is one of the most basic concepts of iterative optimization algorithms. It is known as causality or locality and is discussed in-depth in Section 14.2 on page 161 and Section 24.9 on page 218. If it does not hold, then applying a small change to a genotype leads to large changes in its utility. Instead of modifying a genotype, we could then just as well create a new, random one directly.
Many operations which have n > 1 try to combine the genotypes they receive as input. This is done in the hope that positive characteristics of them can be joined in an even better offspring genotype. An example for such operations is the crossover operator in Genetic Algorithms. Differential Evolution utilizes an operator with n = 3, i. e., one which takes three arguments. Here, the idea often is that good traits can be accumulated and united step by step by the optimization process. This Building Block Hypothesis is discussed in detail in Section 29.5.5 on page 346, where also criticism of this point of view is provided.
Search operations often are randomized, i. e., they are not functions in the mathematical
sense. Instead, the return values of two successive invocations of one search operation with
exactly the same parameters may be different. Since optimization processes quite often use
more than a single search operation at a time, we define the set Op.

Definition D4.7 (Search Operation Set). Op is the set of search operations searchOp ∈ Op which are available to an optimization process. Op(g1, g2, .., gn) denotes the application of any n-ary search operation in Op to the genotypes g1, g2, · · · ∈ G.

∀g1, g2, .., gn ∈ G: Op(g1, g2, .., gn) ∈ G if an n-ary operation exists in Op, and Op(g1, g2, .., gn) = ∅ otherwise. (4.2)

It should be noted that, if the n-ary search operation from Op which was applied is a random-
ized operator, obviously also the behavior of Op(. . .) is randomized as well. Furthermore,
if more than one n-ary operation is contained in Op, the optimization algorithm using Op
usually performs some form of decision on which operation to apply. This decision may
either be random or follow a specific schedule and the decision making process may even
change during the course of the optimization process. Writing Op(g) in order to denote the
application of a unary search operation from Op to the genotype g ∈ G is thus a simplified
expression which encompasses both, possible randomness in the search operations and in
the decision process on which operation to apply.
With Opk(g1, g2, . . .) we denote k successive applications of (possibly different) search operators (with respect to their arities). The kth application receives all possible results of the (k−1)th application of the search operators as parameters, i. e., is defined recursively as

Opk(g1, g2, . . .) = ⋃_{∀G∈Π(P(Opk−1(g1,g2,...)))} Op(G) (4.3)

where Op0(g1, g2, . . .) = {g1, g2, . . .}, P(G) is the power set of a set G, and Π(A) is the set of all possible permutations of the elements of A.
If the parameters are omitted, i. e., just Opk is written, this chain has to start with
a search operation of zero arity. In the style of Badea and Stanciu [177] and Skubch
[2516, 2517], we now can define:
Definition D4.8 (Completeness). A set Op of search operations “searchOp” is complete
if and only if every point g1 in the search space G can be reached from every other point
g2 ∈ G by applying only operations searchOp ∈ Op.
∀g1 , g2 ∈ G ⇒ ∃k ∈ N1 : g1 ∈ Opk (g2 ) (4.4)

Definition D4.9 (Weak Completeness). A set Op of search operations “searchOp” is
weakly complete if every point g in the search space G can be reached by applying only
operations searchOp ∈ Op.

∀g ∈ G ⇒ ∃k ∈ N1 : g ∈ Opk      (4.5)

A weakly complete set of search operations usually contains at least one operation without
parameters, i. e., of zero arity. If at least one such operation exists and this operation can
potentially return all possible genotypes, then the set of search operations is already weakly
complete. However, nullary operations are often only used in the beginning of the search
process (see, for instance, the Hill Climbing Algorithm 26.1 on page 230). If Op achieved its
weak completeness only because of a nullary search operation, the initialization of the algorithm
would determine which solutions can be found and which cannot. If the set of search operations
is complete in the sense of Definition D4.8, the search process, in theory, can find any solution
regardless of the initialization.
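For a small finite search space, completeness in the sense of Definition D4.8 can be verified directly by iterating the construction of Equation 4.3 until a fixed point is reached. A hedged Python sketch (our own helper names) for the single-bit-flip operation on G = {0, 1}n:

```python
from itertools import product

def single_bit_flips(g):
    """The set of all possible results of one single-bit-flip application,
    i.e., the support of the randomized unary operation at g."""
    return {g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))}

def reachable(g, op_support):
    """All genotypes reachable from g by some number k of applications,
    i.e., the union of Op^k({g}) over all k, computed as a fixed point."""
    seen, frontier = {g}, {g}
    while frontier:
        frontier = {h for f in frontier for h in op_support(f)} - seen
        seen |= frontier
    return seen

n = 3
G = set(product((0, 1), repeat=n))  # G = {0, 1}^n
is_complete = all(reachable(g, single_bit_flips) == G for g in G)
# is_complete == True: every point can be reached from every other point,
# so the single-bit-flip operation alone makes Op complete on {0, 1}^n.
```

The same check with the empty operation set, or with an operation that never changes a certain bit, would report incompleteness.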
From the opposite point of view, this means that if the set of search operations is not
complete, there are elements in the search space which cannot be reached from a random
starting point. If it is not even weakly complete, parts of the search space can never be
discovered. Then, the optimization process is probably not able to explore the problem
space adequately and may fail to find optimal or satisfactorily good solutions.

Definition D4.10 (Adjacency (Search Space)). A point g2 is ε-adjacent to a point g1
in the search space G if it can be reached by applying a single search operation “searchOp”
to g1 with a probability of more than ε ∈ [0, 1). Notice that the adjacency relation is not
necessarily symmetric.

adjacentε (g2 , g1 ) ⇔ P (Op(g1 ) = g2 ) > ε      (4.6)

The probability parameter ε is defined here in order to allow us to differentiate between
probable and improbable transitions in the search space. Normally, we will use the adjacency
relation with ε = 0 and abbreviate the notation with adjacent(g2 , g1 ) ≡ adjacent0 (g2 , g1 ).
In the case of Genetic Algorithms with, for instance, single-bit flip mutation operators
(see Section 29.3.2.1 on page 330), adjacent(g2 , g1 ) would be true for all bit strings g2
which have a Hamming distance of no more than one from g1 , i. e., which differ by at
most one bit from g1 . However, if we assume a real-valued search space G = R and a
mutation operator which adds a normally distributed random value to a genotype, then
adjacent(g2 , g1 ) would always be true, regardless of their distance. The adjacency relation
with ε = 0 thus has no meaning in this context. If we set ε to a very small number such
as 1 · 10−4 , we can get a meaningful impression of how likely it is to discover a point g2
when starting from g1 . Furthermore, this definition allows us to reason about limits such as
limε→0 adjacentε (g2 , g1 ).
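The ε-adjacency relation becomes concrete when the transition probabilities of an operator are known. As an illustration (our own, not from the text): a mutation that flips each bit of a string of length n independently with probability p reaches g2 from g1 with probability p^d (1 − p)^(n−d), where d is the Hamming distance.

```python
def transition_prob(g1, g2, p=0.1):
    """P(Op(g1) = g2) for a mutation flipping each bit with probability p."""
    d = sum(a != b for a, b in zip(g1, g2))  # Hamming distance
    return (p ** d) * ((1 - p) ** (len(g1) - d))

def adjacent(g2, g1, eps=0.0, p=0.1):
    """adjacent_eps(g2, g1): g2 reachable in one step with probability > eps."""
    return transition_prob(g1, g2, p) > eps

g1 = (0, 0, 0, 0)
adjacent((1, 0, 0, 0), g1)             # True: probability 0.1 * 0.9**3 > 0
adjacent((1, 0, 0, 0), g1, eps=1e-4)   # True: about 0.073 > 1e-4
adjacent((1, 1, 1, 1), g1, eps=1e-2)   # False: 0.1**4 = 1e-4 is not > 1e-2
```

With ε = 0, every genotype is adjacent to every other one under this operator, mirroring the Gaussian-mutation observation above; only ε > 0 separates probable from improbable transitions.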

4.3 The Connection between Search and Problem Space


If the search space differs from the problem space, a translation between them is furthermore
required. In Example E4.2, we would need to transform the binary strings processed by the
Genetic Algorithm to tuples of natural numbers which can be processed by the objective
functions.
Definition D4.11 (Genotype-Phenotype Mapping). The genotype-phenotype map-
ping (GPM, or ontogenic mapping [2134]) gpm : G 7→ X is a left-total7 binary relation
which maps the elements of the search space G to elements in the problem space X. In
Section 28.2, you can find a thorough discussion of different types of genotype-phenotype
mappings used in Evolutionary Algorithms.

∀g ∈ G ∃x ∈ X : gpm(g) = x (4.7)

In Listing 55.7 on page 711, you can find a Java interface which resembles Equation 4.7. The
only hard criterion we impose on genotype-phenotype mappings in this book is left-totality,
i. e., that they map each element of the search space to at least one candidate solution. The
GPM may be a functional relation if it is deterministic. Nevertheless, it is possible to create
mappings which involve random numbers and hence, cannot be considered to be functions
in the mathematical sense of Section 51.10 on page 646. In this case, we would rewrite
Equation 4.7 as Equation 4.8:

∀g ∈ G ∃x ∈ X : P (gpm(g) = x) > 0 (4.8)

Genotype-phenotype mappings should further be surjective [2247], i. e., relate at least one
genotype to each element of the problem space. Otherwise, some candidate solutions can
never be found and evaluated by the optimization algorithm even if the search operations
are complete. Then, there is no guarantee that the solution of a given problem can
be discovered at all. If a genotype-phenotype mapping is injective, which means that it
assigns distinct phenotypes to distinct elements of the search space, we say that it is free from
redundancy. There are different forms of redundancy, some are considered to be harmful for
the optimization process, others have positive influence8 . Most often, GPMs are not bijective
(since they are neither necessarily injective nor surjective). Nevertheless, if a genotype-
phenotype mapping is bijective, we can construct an inverse mapping gpm−1 : X 7→ G.

gpm−1 (x) = g ⇔ gpm(g) = x ∀x ∈ X, g ∈ G (4.9)

Based on the genotype-phenotype mapping and the definition of adjacency in the search
space given earlier in this section, we can also define a neighboring relation for the problem
space, which, of course, is also not necessarily symmetric.

Definition D4.12 (Neighboring (Problem Space)). Under the condition that the candidate
solution x1 ∈ X was obtained from the element g1 ∈ G by applying the genotype-phenotype
mapping, i. e., x1 = gpm(g1 ), a point x2 is ε-neighboring to x1 if it can be reached
by applying a single search operation and a subsequent genotype-phenotype mapping to g1
with a probability of more than ε ∈ [0, 1). Equation 4.10 provides a general definition of the
neighboring relation. If the genotype-phenotype mapping is deterministic (which usually is
the case), this definition boils down to Equation 4.11.
 
 
neighboringε (x2 , x1 ) ⇔ [ Σ∀g2 : adjacent0 (g2 ,g1 ) P (Op(g1 ) = g2 ) · P (x2 = gpm(g2 ) | Op(g1 ) = g2 ) ] > ε, where x1 = gpm(g1 )      (4.10)

neighboringε (x2 , x1 ) ⇔ [ Σ∀g2 : adjacent0 (g2 ,g1 ) ∧ x2 = gpm(g2 ) P (Op(g1 ) = g2 ) ] > ε, where x1 = gpm(g1 )      (4.11)

7 See Equation 51.29 on page 644 to Point 5 for an outline of the properties of binary relations.
8 See Section 16.5 on page 174 for more information.
The condition x1 = gpm(g1 ) is relevant if the genotype-phenotype mapping “gpm” is not
injective (see Equation 51.31 on page 644), i. e., if more than one genotype can map to
the same phenotype. Then, the set of ε-neighbors of a candidate solution x1 depends
on how it was discovered, i. e., on the genotype g1 it was obtained from, as can be seen in
Example E4.3. Example E5.1 on page 96 gives an account of how the search operations and,
thus, the definition of neighborhood, influence the performance of an optimization algorithm.

Example E4.3 (Neighborhoods under Non-Injective GPMs).


As an example for neighborhoods under non-injective genotype-phenotype mappings, let

[Figure: the problem space X = {0, 1, 2}, the search space G = B3 = {0, 1}3 of bit strings
g = (g1 , g2 , g3 ), and the genotype-phenotype mapping gpm(g) = (g1 + g2 ) ∗ g3 ; the
genotypes gA = 010 and gB = 110 are highlighted.]
Figure 4.3: Neighborhoods under non-injective genotype-phenotype mappings.

us imagine that we have a very simple optimization problem where the problem space
X only consists of the three numbers 0, 1, and 2, as sketched in Figure 4.3. Imagine
further that we would use the bit strings of length three as search space, i. e., G = B3 =
{0, 1}3 . As genotype-phenotype mapping gpm : G 7→ X, we would apply the function
x = gpm(g) = (g1 + g2 ) ∗ g3 . The two genotypes gA = 010 and gB = 110 both map to 0
since (0 + 1) ∗ 0 = (1 + 1) ∗ 0 = 0.
If the only search operation available were a single-bit flip mutation, then all genotypes
which have a Hamming distance of 1 are adjacent in the search space. This means
that gA = 010 is adjacent to 110, 000, and 011 while gB = 110 is adjacent to 010, 100, and
111.
The candidate solution x = 0 is neighboring to 0 and 1 if it was obtained by applying the
genotype-phenotype mapping to gA = 010. With one search step, gpm(110) = gpm(000) =
0 and gpm(011) = 1 can be reached. However, if x = 0 was the result of applying the
genotype-phenotype mapping to gB = 110, its neighborhood is {0, 2}, since gpm(010) =
gpm(100) = 0 and gpm(111) = 2.
Hence, the neighborhood relation under non-injective genotype-phenotype mappings de-
pends on the genotypes from which the phenotypes under investigation originated. For
injective genotype-phenotype mappings, this is not the case.
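The neighborhoods of Example E4.3 can be enumerated mechanically; a short Python sketch of the example's setting:

```python
def gpm(g):
    """The non-injective genotype-phenotype mapping x = (g1 + g2) * g3."""
    return (g[0] + g[1]) * g[2]

def one_bit_neighbors(g):
    """All genotypes adjacent to g under single-bit-flip mutation."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

def neighborhood(g):
    """Phenotypes neighboring gpm(g), given it originated from genotype g."""
    return {gpm(h) for h in one_bit_neighbors(g)}

gA, gB = (0, 1, 0), (1, 1, 0)
print(gpm(gA), gpm(gB))   # 0 0 -- both genotypes map to the same phenotype
print(neighborhood(gA))   # {0, 1}
print(neighborhood(gB))   # {0, 2} -- same phenotype, different neighborhood
```

The two print calls at the end reproduce exactly the asymmetry discussed in the example: the phenotype x = 0 has neighborhood {0, 1} when it came from gA and {0, 2} when it came from gB.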

If the search space G and the problem space X are the same (G = X) and the genotype-
phenotype mapping is the identity mapping, it follows that g2 = x2 and the sum in Equa-
tion 4.11 breaks down to P (Op(g1 ) = g2 ). Then the neighboring relation (defined on the
problem space) equals the adjacency relation (defined on the search space), as can be seen
when taking a look at Equation 4.6. Since the identity mapping is injective, the condition
x1 = gpm(g1 ) is always true, too.

(G = X) ∧ (gpm(g) = g ∀g ∈ G) ⇔ neighboring ≡ adjacent (4.12)

In the following, we will use neighboringε to denote an ε-neighborhood relation, implicitly
assuming the dependency on the search space or that the GPM is injective. Furthermore,
with neighboring(x1 , x2 ), we denote that x1 and x2 are ε = 0-neighbors.

4.4 Local Optima


We now have the means to define the term local optimum more precisely. From Exam-
ple E4.4, it becomes clear that such a definition is not as trivial as the definitions of global
optima given in Chapter 3.

Example E4.4 (Types of Local Optima).


If we assume minimization, the global optimum x⋆ of the objective function given can easily

[Figure: a single objective function f(x) over x ∈ X, with a non-optimal plateau XN , a
locally optimal plateau Xl⋆ , and the global optimum x⋆ .]
Figure 4.4: Minima of a single objective function.

be found. Obviously, every global optimum is also a local optimum: in the case of
single-objective minimization, every global minimum is also a local minimum.
In order to define what a local minimum is, as a first step, we could say “A (local)
minimum x̌l ∈ X of one (objective) function f : X 7→ R is an input element with f(x̌l ) < f(x)
for all x neighboring to x̌l .” However, such a definition would disregard locally optimal
plateaus, such as the set Xl⋆ of candidate solutions sketched in Figure 4.4.
If we simply exchange the “<” in f(x̌l ) < f(x) with a “≤” and obtain f(x̌l ) ≤ f(x), however,
we would classify the elements in XN as local optima as well. This would contradict the
intuition, since they are clearly worse than the elements on the right side of XN . Hence, a
more fine-grained definition is necessary.

Similar to Handl et al. [1178], let us start by creating a definition of areas in the problem
space based on the specification of ε-neighboring (Definition D4.12).

Definition D4.13 (Connection Set). A set X ⊆ X is ε-connected if all of its elements
can be reached from each other with consecutive search steps, each of which having at least
probability ε ∈ [0, 1), i. e., if Equation 4.13 holds.

x1 ≠ x2 ⇒ neighboringε (x1 , x2 ) ∀x1 , x2 ∈ X      (4.13)


4.4.1 Local Optima of single-objective Problems

Definition D4.14 (ε-Local Minimal Set). An ε-local minimal set X̌l ⊆ X under objective
function f and problem space X is a connected set for which Equation 4.14 holds.

neighboringε (x, x̌l ) ⇒ [(x ∈ X̌l ) ∧ (f(x) = f(x̌l ))] ∨ [(x ∉ X̌l ) ∧ (f(x̌l ) < f(x))] ∀x̌l ∈ X̌l , x ∈ X      (4.14)

An ε-local minimum x̌l is an element of an ε-local minimal set. We will use the terms local
minimum set and local minimum to denote scenarios where ε = 0.

Definition D4.15 (ε-Local Maximal Set). An ε-local maximal set X̂l ⊆ X under objective
function f and problem space X is a connected set for which Equation 4.15 holds.

neighboringε (x, x̂l ) ⇒ [(x ∈ X̂l ) ∧ (f(x) = f(x̂l ))] ∨ [(x ∉ X̂l ) ∧ (f(x̂l ) > f(x))] ∀x̂l ∈ X̂l , x ∈ X      (4.15)

Again, an ε-local maximum x̂l is an element of an ε-local maximal set and we will use the
terms local maximum set and local maximum to indicate scenarios where ε = 0.

Definition D4.16 (ε-Local Optimal Set (Single-Objective Optimization)). An ε-local
optimal set Xl⋆ ⊆ X is either an ε-local minimum set or an ε-local maximum set, depending
on the definition of the optimization problem.

Similar to the case of local maxima and minima, we will again use the terms ε-local optimum,
local optimum set, and local optimum.
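For a finite, sampled objective function, the plateau-aware definition can be applied directly. A Python sketch (our own construction; ε = 0, with the neighbors of a point being its adjacent indices) that extracts all local minimum sets while rejecting plateaus that lie on a slope, such as XN in Figure 4.4:

```python
def local_minimal_sets(f_values):
    """Return all 0-local minimal sets of a sampled 1-D objective function.
    A maximal run of equal values [i..j] is locally minimal iff every
    neighbor outside the run has a strictly larger objective value."""
    n = len(f_values)
    result, i = [], 0
    while i < n:
        j = i
        while j + 1 < n and f_values[j + 1] == f_values[i]:
            j += 1                                   # extend the plateau
        left_ok = i == 0 or f_values[i - 1] > f_values[i]
        right_ok = j == n - 1 or f_values[j + 1] > f_values[i]
        if left_ok and right_ok:
            result.append(list(range(i, j + 1)))
        i = j + 1
    return result

# indices 1..2 form a plateau on a slope (like X_N): correctly rejected;
# indices 5..6 form a locally minimal plateau (like X_l*);
# index 8 holds the global minimum.
f = [6, 4, 4, 3, 5, 2, 2, 5, 1, 7]
print(local_minimal_sets(f))  # [[3], [5, 6], [8]]
```

Swapping the two `>` comparisons for `<` yields the corresponding check for ε-local maximal sets.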

4.4.2 Local Optima of Multi-Objective Problems

As a basis for the definition of local optima in multi-objective problems, we use the prevalence
relations and the concepts provided by Handl et al. [1178]. In Section 3.5.2, we showed
that these encompass the other multi-objective concepts as well. Therefore, in the following
definition, “prevails” can as well be read as “dominates”, which is probably the most common
use.

Definition D4.17 (ε-Local Optimal Set (Multi-Objective Optimization)). An ε-local
optimal set Xl⋆ ⊆ X under the prevalence operator “≺” and problem space X is a connected
set for which Equation 4.16 holds.

neighboringε (x, x⋆l ) ⇒ [(x ∈ Xl⋆ ) ∧ ¬(x ≺ x⋆l )] ∨ [(x ∉ Xl⋆ ) ∧ (x⋆l ≺ x)] ∀x⋆l ∈ Xl⋆ , x ∈ X      (4.16)

Again, we use the term ε-local optimum to denote candidate solutions which are part of
ε-local optimal sets. If ε is omitted in the name, ε = 0 is assumed. See, for example, Exam-
ple E13.4 on page 158 for an example of local optima in multi-objective problems.
4.5 Further Definitions
In order to ease the discussions of different Global Optimization algorithms, we furthermore
define the data structure individual, which is the combination of the genotype and phenotype of
an individual.

Definition D4.18 (Individual). An individual p is a tuple p = (p.g, p.x) ∈ G × X of an


element p.g in the search space G and the corresponding element p.x = gpm(p.g) in the
problem space X.

Besides this basic individual structure, many practical realizations of optimization algo-
rithms use extended variants of such a data structure to store additional information like
the objective and fitness values. Then, we will consider individuals as tuples in G × X × W,
where W is the space of the additional information – W = Rn × R, for instance. Such en-
dogenous information [302] often represents local optimization strategy parameters. It then
usually coexists with global (exogenous) configuration settings. In the algorithm definitions
later in this book, we will often access the phenotypes p.x without explicitly referring to
the genotype-phenotype mapping “gpm”, since “gpm” is already implicitly included in the
relation of p.x and p.g according to Definition D4.18. Again relating to the notation used
in Genetic Algorithms, we will refer to lists of such individual records as populations.

Definition D4.19 (Population). A population pop is a list consisting of ps individuals
which are jointly under investigation during an optimization process.

pop ∈ (G × X)ps ⇒ (∀p = (p.g, p.x) ∈ pop ⇒ p.x = gpm(p.g))      (4.17)
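The two data structures can be sketched as follows (the book's reference implementation in Chapter 55 is in Java; the Python names here are our own), including the extended record with additional information W:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

def gpm(g):
    """Example genotype-phenotype mapping; here simply the identity (G = X)."""
    return g

@dataclass
class Individual:
    """p = (p.g, p.x), optionally extended by further information W,
    such as objective and fitness values."""
    g: Any                             # p.g: the genotype, an element of G
    x: Any = None                      # p.x: the phenotype, gpm(p.g)
    objective: Optional[float] = None  # extended record: objective value
    fitness: Optional[float] = None    # extended record: fitness value

    def __post_init__(self):
        if self.x is None:
            self.x = gpm(self.g)       # maintain the invariant p.x = gpm(p.g)

# A population is just a list of ps such individual records:
pop: List[Individual] = [Individual(g=v) for v in (3, 1, 2)]
```

Constructing each `Individual` applies the genotype-phenotype mapping once, so later accesses to p.x never need to refer to “gpm” explicitly, just as in the algorithm definitions of this book.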
Tasks T4

20. Assume that we are car salesmen and search for an optimal configuration for a car.
We have a problem space X = X1 × X2 × X3 × X4 where

• X1 = {with climate control, without climate control},


• X2 = {with sun roof, without sun roof},
• X3 = {5 gears, 6 gears, automatic 5 gears, automatic 6 gears}, and
• X4 = {red, blue, green, silver, white, black, yellow}.

Define a proper search space for this problem along with a unary search operation and
a genotype-phenotype mapping.
[10 points]

21. Define a nullary search operation which creates new random points in the n-dimensional
search space G = {0, 1, 2}n . A nullary search operation has no parameters
(see Section 4.2). {0, 1, 2}n denotes a search space which consists of strings of the
length n where each element of the string is from the set {0, 1, 2}. An example of
such a string for n = 5 is (1, 2, 1, 1, 0). A nullary search operation here would thus just
create a new, random string of this type.
[5 points]

22. Define a unary search operation which randomly modifies a point in the n-dimensional
search space G = {0, 1, 2}n . See Task 21 for a description of the search space. A unary
search operation for G = {0, 1, 2}n has one parameter, a string of length n where
each element is from the set {0, 1, 2}. A unary search operation takes such a string
g ∈ G and modifies it. It returns a new string g ′ ∈ G which is slightly different from g.
This modification has to be done in a random way, for example by randomly adding to,
subtracting from, or permutating the elements of g.
[5 points]

23. Define a possible unary search operation which modifies a point g ∈ R2 and which
could be used for minimizing a black box objective function f : R2 7→ R.
[10 points]

24. When trying to find the minimum of a black box objective function f : R3 7→ R, we
could use the same search space and problem space G = X = R3 . Instead, we could
use G = R2 and apply a genotype-phenotype mapping gpm : R2 7→ R3 . For example,
we could set gpm(g = (g [0], g [1])T ) = (g [0], g [1], g [0] + g [1])T . By this approach, our
optimization algorithm would only face a two-dimensional problem instead of a three-
dimensional one. What do you think about this idea? Is it good?
[5 points]

25. Discuss some basic assumptions usually underlying the design of unary and binary
search operations.
[5 points]

26. Define a search space and search operations for the Traveling Salesman Problem as
discussed in Example E2.2 on page 45.
[10 points]

27. We can define a search space G = (1..n)n for a Traveling Salesman Problem with n
cities. This search space would clearly be numerical since G ⊆ Nn1 ⊆ Rn . Discuss
whether it makes sense to use a pure numerical optimization algorithm (with
operations similar to the one you defined in Task 23, for example) to find solutions to
the Traveling Salesman Problem in this search space.
[5 points]
28. Assume that you want to solve an optimization problem defined over an n-dimensional
vector of natural numbers where each element of the vector stems from the natural
interval9 a..b. In other words, the problem space X is a subset of the n-dimensional
vectors of natural numbers (i. e., X ⊆ Nn0 ); more specifically: X = (a..b)n . You have an
optimization algorithm available which is suitable for continuous optimization problems
defined over m-dimensional real vectors where each vector element stems from the real
interval10 [c, d], i. e., which deals with search spaces G ⊆ Rm ; more specifically
G = [c, d]m . m, c, and d can be chosen freely. n, a, and b are fixed constants that are
specific to the optimization problem you have.
How can you solve your optimization problem with the available algorithm? Provide an
implementation of the necessary module in a programming language of your choice.
The implementation should obey the interface definitions in Chapter 55. If you
choose a programming language different from Java, you should give the methods
names similar to those used in that section and write comments relating your code to
the interfaces given there.
[10 points]

9 The natural interval a..b contains only the natural numbers from a to b. For example 1..5 = {1, 2, 3, 4, 5}

and π ∉ 1..4 .
10 The real interval [c, d] contains all the real numbers between c and d. For example π ∈ [1, 4] and

c..d ⊆ [c, d].


Chapter 5
Fitness and Problem Landscape: How does the
Optimizer see it?

5.1 Fitness Landscapes

A very powerful metaphor in Global Optimization is the fitness landscape1 . Like many other
abstractions in optimization, fitness landscapes have been developed and extensively been
researched by evolutionary biologists [701, 1030, 1500, 2985]. Basically, they are visualiza-
tions of the relationship between the genotypes or phenotypes in a given population and
their corresponding reproduction probability. The idea of such visualizations goes back to
Wright [2985], who used level contour diagrams in order to outline the effects of selection,
mutation, and crossover on the capabilities of populations to escape locally optimal config-
urations. Similar abstractions arise in many other areas [2598], like in physics of disordered
systems such as spin glasses [311, 1879], for instance.
In Chapter 28, we will discuss Evolutionary Algorithms, which are optimization methods
inspired by natural evolution. The Evolutionary Algorithm research community has widely
adopted fitness landscapes as the relation between individuals and their objective values [865,
1912]. Langdon and Poli [1673]2 explain that fitness landscapes can be imagined as a view
on a countryside from far above. Here, the height of each point is analogous to its objective
value. An optimizer can then be considered as a short-sighted hiker who tries to find the
lowest valley or the highest hilltop. Starting from a random point on the map, she wants to
reach this goal by walking the minimum distance.
Evolutionary Algorithms were initially developed as single-objective optimization meth-
ods. Then, the objective values were directly used as fitness and the “reproduction prob-
ability”, i. e., the chance of a candidate solution for being subject to further investigation,
was proportional to them. In multi-objective optimization applications with more sophisti-
cated fitness assignment and selection processes, this simple approach does not reflect the
biological metaphor correctly anymore.
In the context of this book, we therefore deviate from this view. Since it would possibly
be confusing for the reader if we used a different definition for fitness landscapes than the
rest of the world, we introduce the new term problem landscape and keep using the term
fitness landscape in the traditional manner. In Figure 11.1 on page 140, you can find some
examples for fitness landscapes.

1 http://en.wikipedia.org/wiki/Fitness_landscape [accessed 2007-07-03]


2 This part of [1673] is also online available at http://www.cs.ucl.ac.uk/staff/W.Langdon/FOGP/

intro_pic/landscape.html [accessed 2008-02-15].


5.2 Fitness as a Relative Measure of Utility

When performing a multi-objective optimization, i. e., n = |f | > 1, the utility of a candidate
solution is characterized by a vector in Rn . In Section 3.3 on page 55, we have seen how such
vectors can be compared directly in a consistent way. Such comparisons, however, only state
whether one candidate solution is better than another one or not, but give no information
on how much it is better or worse and how interesting it is for further investigation. Often,
such a scalar, relative measure is needed.
In many optimization techniques, especially in Evolutionary Algorithms, the partial or-
ders created by Pareto or prevalence comparators are therefore mapped to the positive real
numbers R+ . For each candidate solution, this single number represents its fitness as solu-
tion for the given optimization problem. The process of computing such a fitness value is
often not solely depending on the absolute objective values of the candidate solutions but
also on those of the other individuals known. It could, for instance, be the position of a
candidate solution in the list of investigated elements sorted according to the Pareto relation.
Hence, fitness values often only have a meaning inside the optimization process [949] and
may change in each iteration of the optimizer, even if the objective values stay constant.
This concept is similar to the heuristic functions in many deterministic search algorithms
which approximate how many modifications are likely necessary to an element in order to
reach a feasible solution.

Definition D5.1 (Fitness). The fitness3 value v : G × X 7→ R+ maps an individual p to a
positive real value v(p) denoting its utility as solution or its priority in the subsequent steps
of the optimization process.

The fitness mapping v is usually built by incorporating the information of a set of indi-
viduals, i. e., a population pop. This allows the search process to compute the fitness as
a relative utility rating. In multi-objective optimization, a fitness assignment procedure
v = assignFitness(pop, cmpf ) as defined in Definition D28.6 on page 274 is typically used
to construct v based on the set pop of all individuals which are currently under investiga-
tion and a comparison criterion cmpf (see Definition D3.20 on page 77). This procedure is
usually repeated in each iteration t of the optimizer, and different fitness assignments
vt=1 , vt=2 , . . . result.
We can detect a similarity between a fitness function v and a heuristic function φ as
defined in Definition D1.14 on page 34 and the more precise Definition D46.5 on page 518.
Both of them give indicators about the utility of p which, most often, are only valid and make
sense only within the optimization process. Both measures reflect some ranking criterion
used to pick the point(s) in the search space which should be investigated further in the next
iteration. The difference between fitness functions and heuristics is that heuristics usually
are based on some insights about the concrete structure of the optimization problem whereas
a fitness assignment process obtains its information primarily from the objective values of
the individuals. They are thus more abstract and general. Also, as mentioned before, the
fitness can change during the optimization process and is often a relative measure, whereas
heuristic values for a fixed point in the search space are usually constant and independent
from the structure of the other candidate solutions under investigation.
Although it is a bit counter-intuitive that a fitness function is built repeatedly and is not
a constant mapping, this is indeed the case in most multi-objective optimization situations.
In practical implementations, the individual records are often extended to also store the
fitness values. If we refer to v(p) multiple times in an algorithm for the same p and pop,
this should be understood as access to such a cached value which is actually computed only
once. This is indeed a good implementation decision, since some fitness assignment processes
3 http://en.wikipedia.org/wiki/Fitness_(genetic_algorithm) [accessed 2008-08-10]
 
have a complexity of O(len(pop)2 ) or higher, because they may involve comparing each
individual with every other one in pop as well as computing density information or clustering.
The term fitness has been borrowed from biology4 [2136, 2542] by the Evolutionary
Algorithms community. As already mentioned, when the first applications of Genetic Algorithms
were developed, the focus was mainly on single-objective optimization. Back then, they
called this single function fitness function and thus set objective value ≡ fitness value. This
point of view is obsolete in principle, yet you will find many contemporary publications that
use this notion. This is partly due to the fact that in simple problems with only one objective
function, the old approach of using the objective values directly as fitness, i. e., v(p) = f(p.x),
can sometimes actually be applied. In multi-objective optimization processes, this is not
possible and fitness assignment processes like those which we are going to elaborate on
in Section 28.3 on page 274 are used instead.
In the context of this book, fitness is subject to minimization, i. e., elements with smaller
fitness are “better” than those with higher fitness. Although this definition differs from the
biological perception of fitness, it complies with the idea that optimization algorithms are to
find the minima of mathematical functions (if nothing else has been stated). It makes fitness
and objective values compatible in that it allows us to consider the fitness of a multi-objective
problem to be the objective value of a single-objective one.
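As a hedged sketch of such a relative, population-dependent fitness (a simple dominance-count assignment, one representative of the family of schemes elaborated in Section 28.3; minimization throughout):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def assign_fitness(objectives):
    """v(p): the number of individuals in pop dominating p (smaller = better).
    The fitness of each individual depends on the whole population and must
    be recomputed whenever pop changes, even if the objective values do not."""
    return [sum(dominates(o2, o1) for o2 in objectives) for o1 in objectives]

pop_objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(assign_fitness(pop_objs))  # [0, 0, 1, 0] -- only (3, 3) is dominated
```

Note that adding or removing a single individual can change every fitness value, which is exactly the relative, iteration-dependent character described above; this pairwise scheme also exhibits the O(len(pop)²) cost mentioned earlier.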

5.3 Problem Landscapes


The fitness steers the search into areas of the search space considered promising to contain
optima. The quality of an optimization process is largely defined by how fast it finds these
points. Since most of the algorithms discussed in this book are randomized, meaning that
two consecutive runs of the same configuration may lead to totally different results, we can
define a quality measure based on probabilities.

Definition D5.2 (Problem Landscape). The problem landscape Φ : X × N1 7→ [0, 1]
maps all the points x in a finite problem space X to the cumulative probability of reaching
them until (inclusively) the τ th evaluation of a candidate solution (for a certain configuration
of an optimization algorithm).

Φ(x, τ ) = P (x has been visited until the τ th evaluation) ∀x ∈ X, τ ∈ N1      (5.1)

This definition of problem landscape is very similar to the performance measure definition
used by Wolpert and Macready [2963, 2964] in their No Free Lunch Theorem (see Chapter 23
on page 209). In our understanding, problem landscapes are not only closer to the original
meaning of fitness landscapes in biology, they also have another advantage. According to
this definition, all entities involved in an optimization process over a finite problem space
directly influence the problem landscape. The choice of the search operations in the search
space G, the way the initial elements are picked, the genotype-phenotype mapping, the
objective functions, the fitness assignment process, and the way individuals are selected for
further exploration all have impact on the probability to discover certain candidate solutions
– the problem landscape Φ.
Problem landscapes are essentially some form of cumulative distribution functions
(see Definition D53.18 on page 658). As such, they can be considered as the counterparts
of the Markov chain models for the search state used in convergence proofs such as those
in [2043] which resemble probability mass/density functions. On the basis of its cumulative
character, we can furthermore make the following assumptions about Φ(x, τ ):
Φ(x, τ1 ) ≤ Φ(x, τ2 ) ∀τ1 < τ2 ∧ x ∈ X, τ1 , τ2 ∈ N1 (5.2)
0 ≤ Φ(x, τ ) ≤ 1 ∀x ∈ X, τ ∈ N1 (5.3)
4 http://en.wikipedia.org/wiki/Fitness_%28biology%29 [accessed 2008-02-22]
Example E5.1 (Influence of Search Operations on Φ).
Let us now give a simple example for problem landscapes and how they are influenced by
the optimization algorithm applied to them. Figure 5.1 illustrates one objective function,
defined over a finite subset X of the two-dimensional real plane, which we are going to
optimize. In this example, we use the problem space X also as search space G, i. e., the
[Figure: an objective function f(x) with a local optimum x⋆l and the global optimum x⋆ ,
separated by a hill of bad fitness.]
Figure 5.1: An example optimization problem.

genotype-phenotype mapping is the identity mapping. For optimization, we will use a very
simple Hill Climbing algorithm5 , which initially draws one candidate solution randomly with
uniform probability from X. In each iteration, it creates a new candidate solution from
the current one by using a unary search operation. The old and the new candidate are
compared, and the better one is kept. Hence, we do not need to differentiate between fitness
and objective values. In the example, better means has lower fitness.
In Figure 5.1, we can spot one local optimum x⋆l and one global optimum x⋆ . Between
them, there is a hill, an area of very bad fitness. The rest of the problem space exhibits a
small gradient into the direction of its center. The optimization algorithm will likely follow
this gradient and sooner or later discover x⋆l or x⋆ . The chances of discovering x⋆l first are
higher, since it is closer to the center of X and thus has a bigger basin of attraction.
With this setting, we have recorded the traces of two experiments with 1.3 million runs of
the optimizer (8000 iterations each). From these records, we can approximate the problem
landscapes very well.
In the first experiment, depicted in Figure 5.2, the Hill Climbing optimizer used a search
operation searchOp1 : X 7→ X which creates a new candidate solution according to a (dis-
cretized) normal distribution around the old one. For the second experiment series, we
divided X into a regular lattice and used an operator searchOp2 : X 7→ X which produces
candidate solutions that are direct neighbors of the old ones in this lattice. The problem
landscape Φ produced by this operator is shown in Figure 5.3. Both operators are complete,
since each point in the search space can be reached from every other point by applying them.

searchOp1 (x) ≡ (x1 + randomNorm(0, 1) , x2 + randomNorm(0, 1)) (5.4)
searchOp2 (x) ≡ (x1 + randomUni[−1, 1) , x2 + randomUni[−1, 1)) (5.5)

In both experiments, the probabilities of the elements in the problem space of being discov-
ered are very low, near zero, and rather uniformly distributed in the first few iterations.
To be precise: since our problem space is a 36 × 36 lattice, this probability is exactly 1/36²
in the first iteration. Starting with roughly the tenth evaluation, small peaks begin to form
around the places where the optima are located. These peaks then begin to grow.
5 Hill Climbing algorithms are discussed thoroughly in Chapter 26.

[Figure 5.2 comprises twelve surface plots of Φ(x, τ ) over X for τ ∈ {1, 2, 5, 10, 50, 100, 500, 1000, 2000, 4000, 6000, 8000} (Fig. 5.2.a to Fig. 5.2.l).]

Figure 5.2: The problem landscape of the example problem derived with searchOp1 .

[Figure 5.3 comprises twelve surface plots of Φ(x, τ ) over X for τ ∈ {1, 2, 5, 10, 50, 100, 500, 1000, 2000, 4000, 6000, 8000} (Fig. 5.3.a to Fig. 5.3.l).]

Figure 5.3: The problem landscape of the example problem derived with searchOp2 .
Since the local optimum x⋆l resides at the center of a large basin and the gradients point
straight into its direction, it has a higher probability of being found than the global optimum
x⋆ . The difference between the two search operators tested becomes obvious starting with
approximately the 2000th iteration. In the process with the operator utilizing the normal
distribution, the Φ value of the global optimum rises farther and farther, finally surpassing
that of the local optimum. Even if the optimizer gets trapped in the local optimum, it will
still eventually discover the global optimum, and if we ran this experiment longer, the
corresponding probability would converge to 1. The reason for this is that with the
normal distribution, all points in the search space have a non-zero probability of being
found from all other points in the search space. In other words, all elements of the search
space are adjacent.
The operator based on the uniform distribution is only able to create candidate solutions
in the direct neighborhood of the known points. Hence, if an optimizer gets trapped in the
local optimum, it can never escape. If it arrives at the global optimum, it will never discover
the local one. In Fig. 5.3.l, we can see that Φ(x⋆l , 8000) ≈ 0.7 and Φ(x⋆ , 8000) ≈ 0.3. In
other words, only one of the two points will be the result of the optimization process whereas
the other optimum remains undiscovered.
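The empirical estimation of Φ used in this example can be sketched in a few lines of Java. The following program is an illustrative reconstruction, not the code used for the actual experiments: the concrete objective function, the positions of the two optima on the 36 × 36 lattice, and all run parameters are our own assumptions. It runs the simple Hill Climber many times with either operator and records in which lattice cell each run ends, yielding an empirical estimate of Φ(x, τ ).

```java
import java.util.Random;

/** Empirical estimate of the problem landscape Phi(x, tau) for a simple
 *  Hill Climber on a 36x36 lattice. The objective function and all
 *  parameters are illustrative assumptions, not the book's exact setup. */
public class ProblemLandscapeDemo {
  static final int N = 36;           // the lattice is N x N
  static final Random RND = new Random(42);

  /** A function with a broad local basin and a narrower global basin. */
  static double f(int x1, int x2) {
    double dl = Math.hypot(x1 - 18, x2 - 18); // local optimum at the center
    double dg = Math.hypot(x1 - 2, x2 - 2);   // global optimum near a corner
    return Math.min(0.5 + 0.05 * dl, 0.05 * dg);
  }

  /** searchOp1: discretized normal distribution around the old point. */
  static int[] searchOp1(int[] x) {
    return clamp(new int[] { x[0] + (int) Math.round(RND.nextGaussian()),
                             x[1] + (int) Math.round(RND.nextGaussian()) });
  }

  /** searchOp2: uniform step to a direct lattice neighbor. */
  static int[] searchOp2(int[] x) {
    return clamp(new int[] { x[0] + RND.nextInt(3) - 1,
                             x[1] + RND.nextInt(3) - 1 });
  }

  static int[] clamp(int[] x) {
    x[0] = Math.max(0, Math.min(N - 1, x[0]));
    x[1] = Math.max(0, Math.min(N - 1, x[1]));
    return x;
  }

  /** Run the Hill Climber 'runs' times for 'tau' evaluations each and
   *  return, per lattice cell, the fraction of runs whose best point
   *  ended up there: an empirical estimate of Phi(x, tau). */
  static double[][] estimatePhi(boolean useNormalOp, int runs, int tau) {
    double[][] phi = new double[N][N];
    for (int r = 0; r < runs; r++) {
      int[] best = { RND.nextInt(N), RND.nextInt(N) }; // uniform start
      for (int t = 1; t < tau; t++) {
        int[] cand = useNormalOp ? searchOp1(best) : searchOp2(best);
        if (f(cand[0], cand[1]) < f(best[0], best[1])) best = cand;
      }
      phi[best[0]][best[1]] += 1.0 / runs;
    }
    return phi;
  }

  public static void main(String[] args) {
    double[][] p1 = estimatePhi(true, 2000, 2000);
    double[][] p2 = estimatePhi(false, 2000, 2000);
    System.out.printf("normal op:   Phi(global)=%.2f Phi(local)=%.2f%n",
        p1[2][2], p1[18][18]);
    System.out.printf("neighbor op: Phi(global)=%.2f Phi(local)=%.2f%n",
        p2[2][2], p2[18][18]);
  }
}
```

With the neighbor-based operator, the probability mass concentrates entirely on the two attractors, split according to the sizes of their basins, which qualitatively matches Fig. 5.3.l.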

From Example E5.1 we can draw four conclusions:

1. Optimization algorithms discover good elements with higher probability than elements
with bad characteristics, given the problem is defined in a way which allows this to
happen.

2. The success of optimization depends very much on the way the search is conducted,
i. e., on the definition of the neighborhoods of the candidate solutions (see Definition D4.12
on page 86).
3. It also depends on the time (or the number of iterations) the optimizer is allowed to use.

4. Hill Climbing algorithms are not Global Optimization algorithms, since they have no
general means of preventing getting stuck at local optima.
Tasks T5

29. What are the similarities and differences between fitness values and traditional
heuristic values?
[10 points]

30. For many (single-objective) applications, a fitness assignment process which assigns
the rank of an individual p in a population pop according to the objective value f(p.x)
of its phenotype p.x as its fitness v(p) is highly useful. Specify an algorithm for such
an assignment process which sorts the individuals in a list (population pop) according
to their objective value and assigns the index of an individual in this sorted list as its
fitness.
Should the individuals be sorted in ascending or descending order if f is subject to
maximization and the fitness v is subject to minimization?
[10 points]
Chapter 6
The Structure of Optimization: Putting it together.

After discussing the single entities involved in optimization, it is time to put them together
into the general structure common to all optimization processes, into an overall picture. This
structure consists of the previously introduced well-defined spaces and sets as well as the
mappings between them. One example of this structure of optimization processes is given in
Figure 6.1, using a Genetic Algorithm which encodes the coordinates of points in a plane
into bit strings as an illustration. The same structure was already used in Example E4.2.

6.1 Involved Spaces and Sets

At the lowest level, there is the search space G inside which the search operations navigate.
Optimization algorithms can usually only process a subset of this space at a time. The
elements of G are mapped to the problem space X by the genotype-phenotype mapping
“gpm”. In many cases, “gpm” is the identity mapping (if G = X) or a rather trivial, injective
transformation. Together with the elements under investigation in the search space, the
corresponding candidate solutions form the population pop of the optimization algorithm.
A population may contain the same individual record multiple times if the algorithm
permits this. Some optimization methods only process one candidate solution at a time.
In this case, the size of pop is one and a single variable instead of a list data structure is
used to hold the element’s data. Having determined the corresponding phenotypes of the
genotypes produced by the search operations, the objective function(s) can be evaluated.
In the single-objective case, a single value f(p.x) determines the utility of p.x. In case
of a multi-objective optimization, on the other hand, a vector f (p.x) of objective values
is computed for each candidate solution p.x in the population. Based on these objective
values, the optimizer may compute a real fitness v(p) denoting how promising an individual
p is for further investigation, i. e., as parameter for the search operations, relative to the
other known candidate solutions in the population pop. Either based on such a fitness value,
directly on the objective values, or on other criteria, the optimizer will then decide how to
proceed, how to generate new points in G.
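For the bit-string encoding sketched in Figure 6.1, such a genotype-phenotype mapping could be as simple as the following Java sketch. The choice of two bits per coordinate is our reading of the figure and purely illustrative:

```java
/** Illustrative genotype-phenotype mapping in the spirit of Figure 6.1:
 *  a 4-bit genotype encodes a point (x1, x2) on a 4x4 grid, using two
 *  bits per coordinate. */
public class BitStringGpm {

  /** Map a genotype like "0111" to its phenotype (x1, x2). */
  static int[] gpm(String genotype) {
    int x1 = Integer.parseInt(genotype.substring(0, 2), 2); // first two bits
    int x2 = Integer.parseInt(genotype.substring(2, 4), 2); // last two bits
    return new int[] { x1, x2 };
  }

  public static void main(String[] args) {
    int[] x = gpm("0111");
    System.out.println("0111 -> (" + x[0] + ", " + x[1] + ")");
  }
}
```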

[Figure 6.1 juxtaposes the involved spaces (left) with the involved sets and elements (right): the search space G of bit-string genotypes, connected by the genotype-phenotype mapping to the problem space X of coordinate pairs; the objective function(s) f : X 7→ Y mapping into the objective space Y ⊆ Rn ; and a fitness assignment process yielding fitness values v : G × X 7→ R+ in the fitness space R+ . On the right, the population Pop ∈ (G × X)^ps of genotypes and corresponding phenotypes is shown, together with the optimal solutions in X. Note that fitness and heuristic values (normally) only have a meaning in the context of a population or a set of candidate solutions.]

Figure 6.1: Spaces, Sets, and Elements involved in an optimization process.


6.2 Optimization Problems
An optimization problem is the task we wish to solve with an optimization algorithm. In
Definition D1.1 and D1.2 on page 22, we gave two different views on how optimization
problems can be defined. Here, we will give a more precise discussion which will be the basis
of our considerations later in this book. An optimization problem, in the context of this
book, includes the definition of the possible solution structures (X), the means to determine
their utilities (f ), and a way to compare two candidate solutions (cmpf ).

Definition D6.1 (Optimization Problem (Mathematical View)). An optimization


problem “OptProb” is a triplet OptProb = (X, f , cmpf ) consisting of a problem space X, a
set f = {f1 , f2 , . . . } of n = |f | objective functions fi : X 7→ R ∀i ∈ 1..n, and a comparator
function cmpf : X × X 7→ R which allows us to compare two candidate solutions x1 , x2 ∈ X
based on their objective values f (x1 ) and f (x2 ) (and/or constraint satisfaction properties or
other possible features).

Many optimization algorithms are general enough to work on different search spaces and
utilize various search operations. Others can only be applied to a distinct class of search
spaces or have fixed operators. Each optimization algorithm is defined by its specification
that states which operations have to be carried out in which order. Definition D6.2 provides
us with a mathematical expression for one specific application of an optimization algorithm
to a specific optimization problem instance.

Definition D6.2 (Optimization Setup). The setup “OptSetup” of an application of an


optimization algorithm is the five-tuple OptSetup = (OptProb, G, Op, gpm, Θ) of the opti-
mization problem OptProb to be solved, the search space G in which the search operations
searchOp ∈ Op navigate and which is translated to the problem space X with the genotype-
phenotype mapping gpm, and the algorithm-specific parameter configuration Θ (such as size
limits for populations, time limits for the optimization process, or starting seeds for random
number generators).

This setup, together with the specification of the algorithm, entirely defines the behavior of
the optimizer “OptAlgo”.

Definition D6.3 (Optimization Algorithm). An optimization algorithm is a mapping


OptAlgo : OptSetup 7→ Φ × X∗ of an optimization setup “OptSetup” to a problem landscape
Φ as well as a mapping to the optimization result x ∈ X∗ , an arbitrarily sized subset of the
problem space.

Apart from this very mathematical definition, we provide a Java interface which unites the
functionality and features of optimization algorithms in Section 55.1.5 on page 713. Here,
the functionality (finding a list of good individuals) and the properties (the objective func-
tion(s), the search operations, the genotype-phenotype mapping, the termination criterion)
characterize an optimizer. This perspective comes closer to the practitioner’s point of view
whereas Definition D6.3 gives a good summary of the theoretical idea of optimization.
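To illustrate how Definitions D6.1 to D6.3 fit together in code, the following Java sketch passes the components of an optimization setup — the search operations on G, the genotype-phenotype mapping, the comparator cmpf , and a budget from Θ — to a generic single-individual optimizer. All names and types here are simplified illustrations, not the book's actual interfaces from Section 55.1.5:

```java
import java.util.Comparator;
import java.util.Random;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.function.UnaryOperator;

/** Illustrative rendering of Definitions D6.1 to D6.3: the pieces of an
 *  optimization setup -- nullary and unary search operations on G, the
 *  genotype-phenotype mapping gpm: G -> X, the comparator cmp_f on X,
 *  and an evaluation budget as part of Theta -- fed to a generic
 *  single-individual optimizer. */
public class OptSetupSketch {

  static <G, X> X optimize(Supplier<G> create, UnaryOperator<G> searchOp,
                           Function<G, X> gpm, Comparator<X> cmpF, int tau) {
    G g = create.get();                 // draw the initial genotype
    X best = gpm.apply(g);
    for (int t = 1; t < tau; t++) {
      G g2 = searchOp.apply(g);         // unary search operation
      X x2 = gpm.apply(g2);             // genotype-phenotype mapping
      if (cmpF.compare(x2, best) < 0) { // keep the better candidate
        g = g2;
        best = x2;
      }
    }
    return best;
  }

  public static void main(String[] args) {
    // Example setup: G = X = R, gpm = identity, f(x) = (x - 3)^2.
    Random rnd = new Random(1);
    Supplier<Double> create = () -> 10 * rnd.nextDouble();
    UnaryOperator<Double> searchOp = g -> g + 0.1 * rnd.nextGaussian();
    Comparator<Double> cmpF =
        Comparator.comparingDouble(v -> (v - 3) * (v - 3));
    double best = optimize(create, searchOp, g -> g, cmpF, 10_000);
    System.out.printf("best x = %.3f%n", best);
  }
}
```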
If an optimization algorithm can effectively be applied to an optimization problem, it
will find at least one local optimum x⋆l if granted infinite processing time and if such an
optimum exists. Usual prerequisites for effective problem solving are a weakly complete
set of search operations Op, a surjective genotype-phenotype mapping “gpm”, and that the
objective functions f do provide more information than needle-in-a-haystack configurations
(see Section 16.7 on page 175).
∃x⋆l ∈ X : limτ →∞ Φ(x⋆l , τ ) = 1 (6.1)

From an abstract perspective, an optimization algorithm is characterized by


1. the way it assigns fitness to the individuals,
2. the ways it selects them for further investigation,
3. the way it applies the search operations, and
4. the way it builds and treats its state information.
The best optimization algorithm for a given setup OptSetup is the one with the highest values
of Φ(x⋆ , τ ) for the optimal elements x⋆ in the problem space with the lowest corresponding
values of τ . Therefore, finding the best optimization algorithm for a given optimization
problem is, itself, a multi-objective optimization problem.
For a perfect optimization algorithm (given an optimization setup with weakly complete
search operations, a surjective genotype-phenotype mapping, and reasonable objective func-
tion(s)), Equation 6.2 would hold. However, it is more than questionable whether such an
algorithm can actually be built (see Chapter 23).

∀x1 , x2 ∈ X : x1 ≺ x2 ⇒ limτ →∞ Φ(x1 , τ ) > limτ →∞ Φ(x2 , τ ) (6.2)

6.3 Other General Features


There are some further common semantics and operations that are shared by most optimiza-
tion algorithms. Many of them, for instance, start out by randomly creating some initial
individuals which are then refined iteratively. Optimization processes which are not allowed
to run infinitely have to find out when to terminate. In this section we define and discuss
general abstractions for such similarities.

6.3.1 Gradient Descent

Definition D6.4 (Gradient). A gradient1 of a scalar mathematical function (scalar field)


f : Rn 7→ R is a vector function (vector field) which points into the direction of the greatest
increase of the scalar field. It is denoted by ∇f or grad(f ).
limh→0 ||f (x + h) − f (x) − grad(f )(x) · h|| / ||h|| = 0 (6.3)

In other words, if we have a differentiable smooth function with low total variation2 [2224],
the gradient would be the perfect hint for an optimizer, giving the best direction into which
the search should be steered. What many optimization algorithms which treat objective
functions as black boxes do is to use rough estimations of the gradient for this purpose. The
Downhill Simplex (see Chapter 43 on page 501), for example, uses a polytope consisting of
n + 1 points in G = Rn for approximating the direction with the steepest descent.
In most cases, we cannot directly differentiate the objective functions with the Nabla op-
erator3 ∇f because its exact mathematical characteristics are not known or too complicated.
Also, the search space G is often not a vector space over the real numbers R.
Thus, samples of the search space are used to approximate the gradient. If we compare
two elements g1 and g2 via their corresponding phenotypes x1 = gpm(g1 ) and x2 = gpm(g2 )
and find x1 ≺ x2 , we can assume that there is some sort of gradient facing upwards from
x1 to x2 and hence, from g1 to g2 . When descending this gradient, we can hope to find an
x3 with x3 ≺ x1 and finally the global minimum.
1 http://en.wikipedia.org/wiki/Gradient [accessed 2007-11-06]
2 http://en.wikipedia.org/wiki/Total_variation [accessed 2009-07-01]
3 http://en.wikipedia.org/wiki/Del [accessed 2008-02-15]
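When the objective function can at least be evaluated at arbitrary real-valued points, a rough gradient estimate can be obtained with central finite differences — a standard numerical technique, shown here as an illustrative Java sketch (the step size h is an arbitrary choice):

```java
import java.util.function.Function;

/** Central finite-difference approximation of grad(f) for a black-box
 *  f: R^n -> R, a stand-in for cases where the Nabla operator cannot be
 *  applied analytically. The step size h is an illustrative choice. */
public class NumericalGradient {

  static double[] grad(Function<double[], Double> f, double[] x, double h) {
    double[] g = new double[x.length];
    for (int i = 0; i < x.length; i++) {
      double[] lo = x.clone(), hi = x.clone();
      lo[i] -= h;
      hi[i] += h;
      g[i] = (f.apply(hi) - f.apply(lo)) / (2 * h); // central difference
    }
    return g;
  }

  public static void main(String[] args) {
    // f(x) = x1^2 + 3 x2, hence grad f = (2 x1, 3).
    Function<double[], Double> f = x -> x[0] * x[0] + 3 * x[1];
    double[] g = grad(f, new double[] { 2, 5 }, 1e-6);
    System.out.printf("grad = (%.3f, %.3f)%n", g[0], g[1]);
  }
}
```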
6.3.2 Iterations

Global optimization algorithms often proceed iteratively and step by step evaluate candi-
date solutions in order to approach the optima. We distinguish between evaluations τ and
iterations t.

Definition D6.5 (Evaluation). The value τ ∈ N0 denotes the number of candidate solu-
tions for which the set of objective functions f has been evaluated.

Definition D6.6 (Iteration). An iteration4 refers to one round in a loop of an algorithm.


It is one repetition of a specific sequence of instructions inside of the algorithm.

Algorithms are referred to as iterative if most of their work is done by cyclic repetition of
one main loop. In the context of this book, an iterative optimization algorithm starts with
the first step t = 0. The value t ∈ N0 is the index of the iteration currently performed by
the algorithm and t + 1 refers to the following step. One example of an iterative algorithm
is Algorithm 6.1. In some optimization algorithms, such as Genetic Algorithms,
iterations are referred to as generations.
There often exists a well-defined relation between the number of performed evaluations
τ of candidate solutions and the index of the current iteration t in an optimization pro-
cess: Many Global Optimization algorithms create and evaluate exactly a certain number
of individuals per generation, so that, for instance, τ = ps · t holds for a method which
evaluates a population of fixed size ps in each iteration.

6.3.3 Termination Criterion



The termination criterion terminationCriterion() is a function with access to all the infor-
mation accumulated by an optimization process, including the number of performed steps
t, the objective values of the best individuals, and the time elapsed since the start of the
process. With terminationCriterion(), the optimizers determine when they have to halt.

Definition D6.7 (Termination Criterion). When the termination criterion function


terminationCriterion() ∈ {true, false} evaluates to true, the optimization process will
stop and return its results.

In Listing 55.9 on page 712, you can find a Java interface resembling Definition D6.7. Some
possible criteria that can be used to decide whether an optimizer should terminate or not
are [2160, 2622, 3088, 3089]:

1. The user may grant the optimization algorithm a maximum computation time. If this
time has been exceeded, the optimizer should stop. Here we should note that the time
needed for single individuals may vary, and so will the times needed for iterations.
Hence, this time threshold can sometimes not be abided by exactly and may lead to
different total numbers of iterations during multiple runs of the same algorithm.
2. Instead of specifying a time limit, an upper limit for the number of iterations t̆ or
individual evaluations τ̆ may be specified. Such criteria are most interesting for the
researcher, since she often wants to know whether a qualitatively interesting solution
can be found for a given problem using at most a predefined number of objective
function evaluations. In Listing 56.53 on page 836, we give a Java implementation of
the terminationCriterion() operation which can be used for that purpose.
4 http://en.wikipedia.org/wiki/Iteration [accessed 2007-07-03]
3. An optimization process may be stopped when no significant improvement in the
solution quality was detected for a specified number of iterations. Then, the process
most probably has converged to a (hopefully good) solution and will most likely not
be able to make further progress.
4. If we optimize something like a decision maker or classifier based on a sample data
set, we will normally divide this data into a training and a test set. The training set is
used to guide the optimization process whereas the test set is used to verify its results.
We can compare the performance of our solution when fed with the training set to
its properties if fed with the test set. This comparison helps us detect when most
probably no further generalization can be achieved by the optimizer and we should
terminate the process.
5. Obviously, we can terminate an optimization process if it has already yielded a suffi-
ciently good solution.
In practical applications, we can apply any combination of the criteria above in order to
determine when to halt. Algorithm 1.1 on page 40 is an example of a badly designed termi-
nation criterion: the algorithm will not halt until a globally optimal solution is found. From
its structure, however, it is clear that it is very prone to premature convergence (see Chap-
ter 13 on page 151) and thus may get stuck at a local optimum. Then, it will run forever
even if an optimal solution exists. This example also shows another thing: The structure
of the termination criterion decides whether a probabilistic optimization technique is a Las
Vegas or Monte Carlo algorithm (see Definition D12.2 and D12.3). How the termination
criterion may be tested in an iterative algorithm is illustrated in Algorithm 6.1.

Algorithm 6.1: Example Iterative Algorithm


Input: [implicit] terminationCriterion: the termination criterion
Data: t: the iteration counter
1 begin
2 t ←− 0
// initialize the data of the algorithm

3 while ¬terminationCriterion() do
// perform one iteration - here happens the magic
4 t ←− t + 1

6.3.4 Minimization

The original form of many optimization algorithms was developed for single-objective op-
timization. Such algorithms may be used for both minimization and maximization. As
previously mentioned, we will present them as minimization processes, since this is the most
commonly used notation, without loss of generality. An algorithm that maximizes a func-
tion f may be transformed into a minimizing one by using −f instead.
Note that using the prevalence comparisons as introduced in Section 3.5.2 on page 76,
multi-objective optimization processes can be transformed into single-objective minimization
processes. Therefore x1 ≺ x2 ⇔ cmpf (x1 , x2 ) < 0.
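This transformation is a one-liner in code; the following Java sketch is purely illustrative:

```java
import java.util.function.DoubleUnaryOperator;

/** Turning a maximization problem into a minimization problem by
 *  negating the objective function, as described above. */
public class MaxToMin {

  static DoubleUnaryOperator negate(DoubleUnaryOperator f) {
    return x -> -f.applyAsDouble(x);
  }

  public static void main(String[] args) {
    DoubleUnaryOperator f = x -> 1 - x * x; // maximum at x = 0
    DoubleUnaryOperator g = negate(f);      // minimum at x = 0
    System.out.println(g.applyAsDouble(0.0));
  }
}
```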

6.3.5 Modeling and Simulating

While there are a lot of problems where the objective functions are mathematical expressions
that can directly be computed, there exist problem classes far away from such simple function
optimization that require complex models and simulations.
Definition D6.8 (Model). A model5 is an abstraction or approximation of a system that
allows us to reason and to deduce properties of the system.

Models are often simplifications or idealizations of real-world issues. They are defined by
leaving out facts that probably have only minor impact on the conclusions drawn from
them. In the area of Global Optimization, we often need two types of abstractions:

1. The models of the potential solutions shape the problem space X. Examples are

(a) programs in Genetic Programming, for example for the Artificial Ant problem,
(b) construction plans of a skyscraper,
(c) distributed algorithms represented as programs for Genetic Programming,
(d) construction plans of a turbine,
(e) circuit diagrams for logical circuits, and so on.

2. Models of the environment in which we can test and explore the properties of the
potential solutions, like

(a) a map on which the Artificial Ant will move driven by an optimized program,
(b) an abstraction from the environment in which the skyscraper will be built, with
wind blowing from several directions,
(c) a model of the network in which the evolved distributed algorithms can run,
(d) a physical model of air which blows through the turbine,
(e) the model of an energy source and of the other pins which will be attached to the
circuit, together with the possible voltages on these pins.

Models themselves are rather static structures of descriptions and formulas. Deriving con-
crete results (objective values) from them is often complicated. It often makes more sense
to bring the construction plan of a skyscraper to life in a simulation. Then we can test the
influence of various wind forces and directions on the building structure and approximate
the properties which define the objective values.

Definition D6.9 (Simulation). A simulation6 is the computational realization of a model.


Whereas a model describes abstract connections between the properties of a system, a
simulation realizes these connections.

Simulations are executable, live representations of models that can be as meaningful as real
experiments. They allow us to reason if a model makes sense or not and how certain objects
behave in the context of a model.

5 http://en.wikipedia.org/wiki/Model_%28abstract%29 [accessed 2007-07-03]


6 http://en.wikipedia.org/wiki/Simulation [accessed 2007-07-03]
Chapter 7
Solving an Optimization Problem

Before considering the following steps, always first check if a dedicated algorithm exists for
the problem (see Section 1.1.3).

Figure 7.1: Use dedicated algorithms!

There are many highly efficient algorithms for solving special problems. For finding
the shortest paths from one source node to the other nodes in a network, for example, one
would never use a metaheuristic but Dijkstra’s algorithm [796]. For computing the maximum
flow in a flow network, the algorithm by Dinitz [800] beats any probabilistic optimization
method. The key to winning with metaheuristics is to know when to apply them and when not.
The first step before using them is always an internet or literature search in order to check
for alternatives.
If this search did not lead to the discovery of a suitable method to solve the problem,
a randomized or traditional optimization technique can be used. Therefore, the following
steps should be performed in the order they are listed below and sketched in Figure 7.2.
0. Check for possible alternatives to using optimization algorithms!
1. Define the data structures for the possible solutions, the problem space X (see Sec-
tion 2.1).
2. Define the objective functions f ∈ f which are subject to optimization (see Section 2.2).
3. Define the equality constraints h and inequality constraints g, if needed (see Sec-
tion 3.4).
4. Define a comparator function cmpf (x1 , x2 ) able to decide which one of two candidate
solutions x1 and x2 is better (based on f , and the constraints hi and gi , see Sec-
tion 3.5.2). In the unconstrained, single-objective case which is most common, this
function will just select the one individual with the lower objective value.
5. Decide upon the optimization algorithm to be used for the optimization problem (see
Section 1.3).
6. Choose an appropriate search space G (see Section 4.1). This decision may be directly
implied by the choice in Point 5.

[Figure 7.2 sketches this process as a flowchart: check whether dedicated algorithms exist (if so, use them); determine the data structure for solutions (X); define the optimization criteria and constraints (f : X 7→ Rn , v); select the optimization algorithm (EA? CMA-ES? DE?); choose the search space, search operations, and mapping to the problem space (G, searchOp, gpm); configure the optimization algorithm (population size, crossover rate, . . . ); run small-scale test experiments (a few runs per configuration); if the results are satisfying, perform the real experiment consisting of enough runs to get statistically significant results (e.g., 30 runs for each configuration and problem instance); evaluate them statistically (e.g., “Our approach beats approach xyz with 2% probability to err in a two-tailed Mann-Whitney test.”); if these results are satisfying too, enter the production stage and integrate the optimizer into the real application.]

Figure 7.2: The basic process of solving an optimization problem.
7. Define a genotype-phenotype mapping “gpm” between the search space G and the
problem space X (see Section 4.3).

8. Define the set searchOperations of search operations searchOp ∈ Op navigating in


the search space (see Section 4.2). This decision may again be directly implied by the
choice in Point 5.
9. Configure the optimization algorithm, i. e., set all its parameters to values which look
reasonable.

10. Run a small-scale test with the algorithm and the settings and check whether they
approximately meet the expectations. If not, go back to Point 9, Point 5, or even Point 1,
depending on how bad the results are. Otherwise, continue at Point 11.

11. Conduct experiments with some different configurations and perform many runs for
each configuration. Obtain enough results to make statistical comparisons between
the configurations and with other algorithms (see, e.g., Section 53.7). If the results are
bad, again consider going back to a previous step.

12. Now either the results of the optimizer can be used and published or the developed
optimization system can become part of an application (see Part V and [2912] as an
example), depending on the goal of the work.
Tasks T7

31. Why is it important to always look for traditional dedicated and deterministic alterna-
tives before trying to solve an optimization problem with a metaheuristic optimization
algorithm?
[5 points]

32. Why is a single experiment with a randomized optimization algorithm not enough to
show that this algorithm is good (or bad) for solving a given optimization task? Why
do we need comprehensive experiments instead, as pointed out in Point 11 on the
previous page?
[5 points]
Chapter 8
Baseline Search Patterns

Metaheuristic optimization algorithms trade the guaranteed optimality of their results for
a shorter runtime. This means that their results can be (but not necessarily are) worse than
the global optima of a given optimization problem. The results are acceptable as long as the
user is satisfied with them. An interesting question, however, is: When is an optimization
algorithm suitable for a given problem? The answer to this question is strongly related to
how well it can utilize the information it gains during the optimization process in order to
find promising regions in the search space.
The primary information source for any optimizer is the (set of) objective functions.
Hence, it makes sense to declare an optimization algorithm as effective if it can outperform
primitive approaches which do not utilize the information gained from evaluating candidate
solutions with objective functions, such as the random sampling and random walk methods
which will be introduced in this chapter. The comparison criterion should be the solution
quality in terms of objective values.

Definition D8.1 (Effectiveness). An optimization algorithm is effective if it can (statis-


tically significantly) outperform random sampling and random walks in terms of solution
quality according to the objective functions.

A third baseline algorithm is exhaustive enumeration, the testing of all possible solutions.
Like the previous two, it does not utilize the information gained from the objective functions.
It can be used as a yardstick in terms of optimization speed.

Definition D8.2 (Efficiency). An optimization algorithm is efficient if it can meet the


criterion of effectiveness (Definition D8.1) with a lower-than-exponential algorithmic com-
plexity (see Section 12.1 on page 143), i. e., within fewer steps than required by exhaustive
enumeration.

8.1 Random Sampling


Random sampling is the most basic search approach. It is uninformed, i. e., does not make
any use of the information it can obtain from evaluating candidate solutions with the objec-
tive functions. As sketched in Algorithm 8.1 and implemented in Listing 56.4 on page 729,
it keeps creating genotypes of random configuration until the termination criterion is met.
In every iteration, it therefore utilizes the nullary search operation “create” which builds
a genotype p.g ∈ G with a random configuration and maps this genotype to a phenotype
p.x ∈ X with the genotype-phenotype mapping “gpm”. It checks whether the new pheno-
type is better than the best candidate solution discovered so far (x). If so, it is stored in
x.

Algorithm 8.1: x ←− randomSampling(f)

  Input: f: the objective/fitness function
  Data: p: the current individual
  Output: x: the result, i. e., the best candidate solution discovered
1 begin
2   x ←− ∅
3   repeat
4     p.g ←− create()
5     p.x ←− gpm(p.g)
6     if (x = ∅) ∨ (f(p.x) < f(x)) then
7       x ←− p.x
8   until terminationCriterion()
9   return x

Notice that x is the only variable used for information aggregation and that there is
no form of feedback. The search makes no use of the knowledge about good structures of
candidate solutions nor does it further investigate a genotype which has produced a good
phenotype. Instead, if “create” samples the search space G in a uniform, unbiased way,
random sampling will be completely unbiased as well.
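The structure of Algorithm 8.1 can be illustrated with a compact sketch, here in Python rather than in the Java of the accompanying listings. The bit-string toy problem and the helper names create, gpm, f, and max_steps are illustrative assumptions, not part of the algorithm itself:

```python
import random

def random_sampling(f, create, gpm, max_steps):
    """Algorithm 8.1: keep the best of max_steps randomly created solutions."""
    best = None                       # x: best candidate solution found so far
    for _ in range(max_steps):        # stands in for terminationCriterion()
        g = create()                  # nullary search operation: random genotype
        x = gpm(g)                    # genotype-phenotype mapping
        if best is None or f(x) < f(best):
            best = x                  # aggregation happens only in this variable
    return best

# Toy problem (illustrative): genotypes are 8-bit lists, gpm is the identity,
# and f counts the number of ones (so the optimum is the all-zero string).
create = lambda: [random.randint(0, 1) for _ in range(8)]
gpm = lambda g: g
f = sum

random.seed(42)
best = random_sampling(f, create, gpm, max_steps=200)
```

Note that the loop body never feeds information back into create: every sample is drawn independently of all previous ones.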
This unbiasedness and ignorance of available information turns random sampling into
one of the baseline algorithms used for checking whether an optimization method is suitable
for a given problem. If an optimization algorithm does not perform better than random
sampling on a given optimization problem, we can deduce that (a) it is not able to utilize the
information gained from the objective function(s) efficiently (see Chapters 14 and 16), (b) its
search operations cannot modify existing genotypes in a useful manner (see Section 14.2),
or (c) the information given by the objective functions is largely misleading (see Chapter 15).
It is well known from the No Free Lunch Theorem (see Chapter 23) that, averaged over the set
of all possible optimization problems, no optimization algorithm can beat random sampling.
However, for specific families of problems and for most practically relevant problems in
general, outperforming random sampling is not too complicated. Still, a comparison to this
primitive algorithm is necessary whenever a new, not yet analyzed optimization task is to
be solved.

8.2 Random Walks

Like random sampling, random walks1 [899, 1302, 2143] (sometimes also called drunkard’s
walks) can be considered as uninformed, undirected search methods. They form the
second of the two baseline algorithms to which optimization methods must be compared
for specific problems in order to test their utility. We specify the random walk algorithm in
Algorithm 8.2 and implement it in Listing 56.5 on page 732.
While random sampling algorithms make use of no information gathered so far, random
walks base their next “search move” on the most recently investigated genotype p.g. Instead
of sampling the search space uniformly as random sampling does, a random walk starts at one
(possibly random) point. In each iteration, it uses a unary search operation (in Algorithm 8.2
simply denoted as searchOp(p.g)) to move to the next point.
Because of this structure, random walks are quite similar in their structure to basic
optimization algorithms such as Hill Climbing (see Chapter 26). The difference is that
optimization algorithms make use of the information they gain by evaluating the candidate
1 http://en.wikipedia.org/wiki/Random_walk [accessed 2010-08-26]

Algorithm 8.2: x ←− randomWalk(f)

  Input: f: the objective/fitness function
  Data: p: the current individual
  Output: x: the result, i. e., the best candidate solution discovered
 1 begin
 2   x ←− ∅
 3   p.g ←− create()
 4   repeat
 5     p.x ←− gpm(p.g)
 6     if (x = ∅) ∨ (f(p.x) < f(x)) then
 7       x ←− p.x
 8     p.g ←− searchOp(p.g)
 9   until terminationCriterion()
10   return x

solutions with the objective functions. In any useful configuration for optimization problems
which are not too ill-defined, they should thus be able to outperform random walks and
random sampling.
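The difference to random sampling becomes clearer in code. The following Python sketch of Algorithm 8.2 (the bit-flip move, the helper names, and the toy problem are illustrative assumptions) records the best solution seen, but the next move never depends on the objective values:

```python
import random

def random_walk(f, create, search_op, gpm, max_steps):
    """Algorithm 8.2: remember the best solution seen along an undirected walk."""
    g = create()                      # random starting point in the search space
    best = None
    for _ in range(max_steps):
        x = gpm(g)
        if best is None or f(x) < f(best):
            best = x
        g = search_op(g)              # unary move; note that f is NOT consulted here
    return best

# Toy problem (illustrative): walk over 8-bit lists by flipping one random bit.
def flip_one_bit(g):
    g = list(g)
    g[random.randrange(len(g))] ^= 1
    return g

random.seed(1)
best = random_walk(f=sum, create=lambda: [1] * 8,
                   search_op=flip_one_bit, gpm=lambda g: g, max_steps=100)
```

Turning the unconditional move in the last line of the loop into a conditional one (accept only if better) would already yield a simple Hill Climber.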
In cases where they cannot, applying one of these two uninformed, primitive strategies is
advised. Worse yet, optimization methods always introduce some bias into the search
process, since they try to steer toward the regions of the search space where they expect a
fitness improvement. If the problems are deceptive enough (see Chapter 15), random walks
or random sampling may therefore even outperform them. Circumstances where random walks can
be the search algorithms of choice are, for example,

1. If a search algorithm encounters a state explosion because there are too many states
and transcending to them would consume too much memory.
2. In certain cases of online search it is not possible to apply systematic approaches
like breadth-first search or depth-first search. If the environment, for instance, is only
partially observable and each state transition represents an immediate interaction with
this environment, we may not be able to navigate to past states again. One example
for such a case is discussed in the work of Skubch [2516, 2517] about reasoning agents.

Random walks are often used in optimization theory for determining features of a fitness
landscape. Measures that can be derived mathematically from a walk model include esti-
mates for the number of steps needed until a certain configuration is found, the chance to
find a certain configuration at all, and the average difference of the objective values of two
consecutive populations. From practically running random walks, some information about
the search space can be extracted. Skubch [2516, 2517], for instance, uses the number of
encounters of certain state transitions during the walk in order to successively build a heuristic
function.
Figure 8.1 illustrates some examples for random walks. The subfigures 8.1.a to 8.1.c
each illustrate three random walks in which axis-parallel steps of length 1 are taken. The
dimensions range from 1 to 3 from the left to the right. The same goes for Fig. 8.1.d to
Fig. 8.1.f, each of which displaying three random walks with Gaussian (standard-normally)
distributed step widths and arbitrary step directions.

[Figure 8.1: Some examples for random walks. Fig. 8.1.a: 1d, step length 1; Fig. 8.1.b: 2d, axis-parallel, step length 1; Fig. 8.1.c: 3d, axis-parallel, step length 1; Fig. 8.1.d: 1d, Gaussian step length; Fig. 8.1.e: 2d, arbitrary angle, Gaussian step length; Fig. 8.1.f: 3d, arbitrary angle, Gaussian step length.]

8.2.1 Adaptive Walks

An adaptive walk is a theoretical optimization method which, like a random walk, usually
works on a population of size 1. It starts at a random location in the search space and proceeds
by changing (or mutating) its single individual. For this modification, three methods
are available:
1. One-mutant change: The optimization process chooses a single new individual from
the set of “one-mutant change” neighbors. These neighbors differ from the current
candidate solution in only one property, i. e., are in its ε = 0-neighborhood. If the new
individual is better, it replaces its ancestor, otherwise it is discarded.
2. Greedy dynamics: The optimization process chooses a single new individual from the
set of “one-mutant change” neighbors. If it is not better than the current candidate
solution, the search continues until a better one has been found or all neighbors have
been enumerated. The major difference to the previous form is the number of steps
that are needed per improvement.
3. Fitter Dynamics: The optimization process enumerates all one-mutant neighbors of
the current candidate solution and transcends to the best one.
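The first of these methods, the one-mutant change, can be sketched as follows (Python; the bit-flip neighbor, the helper names, and the toy problem are illustrative assumptions, not the book's definitions):

```python
import random

def adaptive_walk(f, start, neighbor, max_steps):
    """One-mutant adaptive walk: accept a random neighbor only if it is better."""
    x = start
    for _ in range(max_steps):
        y = neighbor(x)               # one randomly chosen "one-mutant" neighbor
        if f(y) < f(x):               # replace the ancestor only on improvement
            x = y                     # otherwise the neighbor is discarded
    return x

# Toy problem (illustrative): minimize the number of ones in an 8-bit list.
def flip_one_bit(g):
    g = list(g)
    g[random.randrange(len(g))] ^= 1
    return g

random.seed(7)
result = adaptive_walk(f=sum, start=[1] * 8, neighbor=flip_one_bit, max_steps=500)
```

The greedy-dynamics and fitter-dynamics variants differ only in how the neighbor is chosen: by drawing until an improvement is found, or by enumerating all one-mutant neighbors and taking the best.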
From these elaborations, it becomes clear that adaptive walks are very similar to Hill Climb-
ing and Random Optimization. The major difference is that an adaptive walk is a theoreti-
cal construct that, very much like random walks, helps us to determine properties of fitness
landscapes whereas the other two are practical realizations of optimization algorithms.
Adaptive walks are a very common construct in evolutionary biology. Biological
populations evolve over very long periods of time, so their genetic compositions are
assumed to be relatively converged [79, 1057]. The dynamics of such populations in near-
equilibrium states with low mutation rates can be approximated with one-mutant adaptive
walks [79, 1057, 2528].

8.3 Exhaustive Enumeration


Exhaustive enumeration is an optimization method which is just as primitive as random
walks. It can be applied to any finite search space and basically consists of testing every single
possible solution for its utility, returning the best candidate solution discovered after all have
been tested. If the minimum possible objective value y̆ is known for a given minimization
problem, the algorithm can also terminate sooner.

Algorithm 8.3: x ←− exhaustiveSearch(f, y̆)

  Input: f: the objective/fitness function
  Input: y̆: the lowest possible objective value, −∞ if unknown
  Data: x′: the current candidate solution
  Data: t: the iteration counter
  Output: x: the result, i. e., the best candidate solution discovered
1 begin
2   x ←− gpm(G[0])
3   t ←− 1
4   while (t < |G|) ∧ (f(x) > y̆) do
5     x′ ←− gpm(G[t])
6     if f(x′) < f(x) then x ←− x′
7     t ←− t + 1
8   return x

Algorithm 8.3 contains a definition of the exhaustive search algorithm. It is based on the
assumption that the search space G can be traversed in a linear order, where the tth element
in the search space can be accessed as G[t]. The second parameter of the algorithm is the
lower limit y̆ of the objective value. If this limit is known, the algorithm may terminate
before traversing the complete search space if it discovers a global optimum with f(x) = y̆
earlier. If no lower limit for f is known, y̆ = −∞ can be provided.
It is interesting to see that for many hard problems such as NP-complete ones (Sec-
tion 12.2.2), no algorithms are known which can generally be faster than exhaustive enu-
meration. For an input size of n, algorithms for such problems need O(2^n) iterations.
Exhaustive search requires exactly the same complexity. If G = B^n, i. e., the bit strings of
length n, the algorithm will need exactly 2^n steps if no lower limit y̆ for the value of f is
known.
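Under the same illustrative assumptions as before (Python, bit strings as genotypes, identity genotype-phenotype mapping), the early-termination behavior of Algorithm 8.3 can be sketched as follows; itertools.product enumerates G = B^n in a fixed linear order:

```python
from itertools import product

def exhaustive_search(f, gpm, genotypes, lower_bound=float("-inf")):
    """Algorithm 8.3: test every genotype; stop early if a known optimum is hit."""
    best = None
    for g in genotypes:               # traverses G in a fixed linear order
        x = gpm(g)
        if best is None or f(x) < f(best):
            best = x
        if f(best) <= lower_bound:    # global optimum reached: terminate early
            break
    return best

# Toy problem (illustrative): G = B^4 enumerated via itertools.product,
# gpm is the identity, f counts ones, and the known optimum value is 0.
best = exhaustive_search(f=sum, gpm=lambda g: g,
                         genotypes=product((0, 1), repeat=4), lower_bound=0)
```

Without the lower_bound argument, the loop visits all 2^n genotypes, matching the worst-case complexity discussed above.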
Tasks T8

33. What are the similarities and differences between random walks, random sampling,
and adaptive walks?
[10 points]

34. Why should we consider an optimization algorithm as ineffective for a given problem
if it does not perform better than random sampling or a random walk on the problem?
[5 points]

35. Is it possible that an optimization algorithm such as, for example, Algorithm 1.1 on
page 40 may give us even worse results than a random walk or random sampling? Give
reasons. If yes, try to find a problem (problem space, objective function) where this
could be the case.
[5 points]

36. Implement random sampling for the bin packing problem as a comparison algorithm
for Algorithm 1.1 on page 40 and test it on the same problem instances used to
verify Task 4 on page 42. Discuss your results.
[10 points]

37. Compare your answers to Task 35 and 36. Is the experiment in Task 36 suitable
to validate your hypothesis? If not, can you find some instances of the bin packing
problem where it can be used to verify your idea? If so, apply both Algorithm 1.1
on page 40 and your random sampling implementation to these instances. Did the
experiments support your hypothesis?
[15 points]

38. Can we implement an exhaustive search pattern to find the minimum of a mathe-
matical function f defined over the real interval (−1, 1)? Answer this question both
from

(a) a theoretical-mathematical perspective in an idealized execution environment
(i. e., with infinite precision) and
(b) from a practical point of view based on real-world computers and requirements
(i. e., under the assumption that the available precision of floating point numbers
and of any given production process is limited).
[10 points]

39. Implement an exhaustive-search based optimization procedure for the bin packing
problem as a comparison algorithm for Algorithm 1.1 on page 40 and test it on the
same problem instances used to verify Task 4 on page 42 and 36. Discuss your results.
[10 points]

40. Implement an exhaustive search pattern to solve the instance of the Traveling Salesman
Problem given in Example E2.2 on page 45.
[10 points]
Chapter 9
Forma Analysis

The Schema Theorem has been stated for Genetic Algorithms by Holland [1250] in his seminal
work [711, 1250, 1253]. In this section, we are going to discuss it in the more general version
from Weicker [2879] as introduced by Radcliffe and Surry [2241–2243, 2246, 2634].

Definition D9.1 (Property). A property φ is a function which maps individuals, genotypes, or phenotypes to a set of possible property values.

Each individual p in the population pop of an optimization algorithm is characterized by
its properties φ. Optimizers focus mainly on the phenotypical properties since these are
evaluated by the objective functions. In the forma analysis, the properties of the genotypes
are considered as well, since they influence the features of the phenotypes via the genotype-
phenotype mapping. With Example E9.1, we want to clarify how possible properties could
look like.

Example E9.1 (Examples for Properties).


A rather structural property φ1 of formulas z : R ↦ R in symbolic regression1 would be
whether it contains the mathematical expression x + 1 or not: φ1 : X ↦ {true, false}.
We can also declare a behavioral property φ2 which is true if |z(0) − 1| ≤ 0.1 holds, i. e., if
the result of z is close to the value 1 for the input 0, and false otherwise: φ2 (z) ≡ |z(0) − 1| ≤
0.1 ∀z ∈ X.
Assume that the optimizer uses a binary search space G = Bn and that the formulas
z are decoded from there to trees that represent mathematical expression by a genotype-
phenotype mapping. A genotypical property φ3 then would be whether a certain sequence of bits
occurs in the genotype p.g, and another, phenotypical, property φ4 could be the number of nodes in
the phenotype p.x, for instance.
If we try to solve a graph-coloring problem, for example, a property φ5 : X 7→
{black, white, gray} could denote the color of a specific vertex q as illustrated in Figure 9.1.
In general, we can consider properties φi to be some sort of functions that map the
individuals to property values. φ1 and φ2 both map the space of mathematical functions to
the set B = {true, false} whereas φ5 maps the space of all possible colorings for the given
graph to the set {white, gray, black} and property φ4 maps trees to the natural numbers.

1 More information on symbolic regression can be found in Section 49.1 on page 531.

[Figure 9.1: A graph coloring-based example for properties and formae: a population of graph colorings Pop ⊆ X (G = X) is partitioned by the color of a specific vertex q into the formae Aφ5 =black, Aφ5 =white, and Aφ5 =gray.]

On the basis of the properties φi we can define equivalence relations2 ∼φi :

p1 ∼φi p2 ⇔ φi (p1 ) = φi (p2 ) ∀p1 , p2 ∈ G × X (9.1)

Obviously, for any two candidate solutions x1 and x2 , either x1 ∼φi x2 or x1 ≁φi x2 holds.
These relations divide the search space into equivalence classes Aφi =v .

Definition D9.2 (Forma). An equivalence class Aφi =v that contains all the individuals
sharing the same characteristic v in terms of the property φi is called a forma [2241] or
predicate [2826].

Aφi =v = {∀p ∈ G × X : φi (p) = v} (9.2)


∀p1 , p2 ∈ Aφi =v ⇒ p1 ∼φi p2 (9.3)

The number of formae induced by a property, i. e., the number of its different characteristics,
is called its precision [2241]. Two formae Aφi =v and Aφj =w are said to be compatible, written
as Aφi =v ⊲⊳ Aφj =w , if there can exist at least one individual which is an instance of both.

Aφi =v ⊲⊳ Aφj =w ⇔ Aφi =v ∩ Aφj =w ≠ ∅ (9.4)


Aφi =v ⊲⊳ Aφj =w ⇔ ∃ p ∈ G × X : p ∈ Aφi =v ∧ p ∈ Aφj =w (9.5)
Aφi =v ⊲⊳ Aφi =w ⇒ w = v (9.6)

Of course, two different formae of the same property φi , i. e., two different characteristics of
φi, are always incompatible.

Example E9.2 (Examples for Formae (Example E9.1 Cont.)).


The precision of φ1 and φ2 in Example E9.1 is 2, for φ5 it is 3. We can define another
property φ6 ≡ z(0) denoting the value a mathematical function takes for the input 0. This
property would have an uncountably infinite precision.
2 See the definition of equivalence classes in Section 51.9 on page 646.

[Figure 9.2: Example for formae in symbolic regression: the functions z1 (x) = x + 1, z2 (x) = x^2 + 1.1, z3 (x) = x + 2, z4 (x) = 2(x + 1), z5 (x) = tan x, z6 (x) = (sin x)(x + 1), z7 (x) = (cos x)(x + 1), and z8 (x) = (tan x) + 1 are partitioned into formae such as Aφ1 =true/false and Aφ2 =true/false by the equivalence relations ∼φ1 , ∼φ2 , and ∼φ6 .]

In our initial symbolic regression example Aφ1 =true ̸⊲⊳ Aφ1 =false holds, since it is not
possible that a function z contains a term x + 1 and at the same time does not contain
it. All formae of the properties φ1 and φ2 on the other hand are compatible: Aφ1 =false ⊲⊳
Aφ2 =false , Aφ1 =false ⊲⊳ Aφ2 =true , Aφ1 =true ⊲⊳ Aφ2 =false , and Aφ1 =true ⊲⊳ Aφ2 =true all
hold. If we take φ6 into consideration, we will find that there exist some formae compatible
with some of φ2 and some that are not, like Aφ2 =true ⊲⊳ Aφ6 =1 and Aφ2 =false ⊲⊳ Aφ6 =2 , but
Aφ2 =true ̸⊲⊳ Aφ6 =0 and Aφ2 =false ̸⊲⊳ Aφ6 =0.95 .
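These compatibility statements can be checked mechanically. The following sketch (Python; representing a few of the example functions as plain callables and formae as sets is an illustrative encoding, not the book's formalism) partitions a small population by a property and tests compatibility as non-empty intersection:

```python
import math

# A few of the example functions (phenotypes) from Example E9.2 / Figure 9.2.
population = {
    "z1": lambda x: x + 1,
    "z3": lambda x: x + 2,
    "z5": lambda x: math.tan(x),
    "z6": lambda x: math.sin(x) * (x + 1),
}

# Property phi2: is |z(0) - 1| <= 0.1?  Property phi6: the value z(0) itself.
phi2 = lambda z: abs(z(0) - 1) <= 0.1
phi6 = lambda z: z(0)

def forma(prop, value):
    """A_{prop=value}: all individuals whose property equals the given value."""
    return {name for name, z in population.items() if prop(z) == value}

def compatible(forma_a, forma_b):
    """Two formae are compatible iff at least one individual belongs to both."""
    return bool(forma_a & forma_b)

assert compatible(forma(phi2, True), forma(phi6, 1))      # A_phi2=true with A_phi6=1
assert compatible(forma(phi2, False), forma(phi6, 2))     # A_phi2=false with A_phi6=2
assert not compatible(forma(phi2, True), forma(phi6, 0))  # incompatible pair
```

The assertions mirror the compatibility claims made in the text for φ2 and φ6.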

The discussion of formae and their dependencies stems from the Evolutionary Algorithm
community and there especially from the supporters of the Building Block Hypothesis.
The idea is that an optimization algorithm should first discover formae which have a good
influence on the overall fitness of the candidate solutions. The hope is that there are many
compatible ones under these formae that can gradually be combined in the search process.
In this text we have defined formae and the corresponding terms on the basis of individ-
uals p which are records that assign an element of the problem spaces p.x ∈ X to an element
of the search space p.g ∈ G. Generally, we will relax this notation and also discuss forma
directly in the context of the search space G or problem space X, when appropriate.
Tasks T9

41. Define at least three reasonable properties on the individuals, genotypes, or phenotypes
of the Traveling Salesman Problem as introduced in Example E2.2 on page 45 and
further discussed in Task 26 on page 91.
[6 points]

42. List the relevant formae for each of the properties you have defined based on your
definitions in Task 41 and in Task 26 on page 91.
[12 points]

43. List all the members of your formae given in Task 42 for the Traveling Salesman Problem
instance provided in Table 2.1 on page 46.
[10 points]
Chapter 10
General Information on Optimization

10.1 Applications and Examples

Table 10.1: Applications and Examples of Optimization.


Area References
Art see Tables 28.4, 29.1, and 31.1
Astronomy see Table 28.4
Business-To-Business see Table 3.1
Chemistry [623]; see also Tables 3.1, 22.1, 28.4, 29.1, 30.1, 31.1, 32.1,
40.1, and 43.1
Cloud Computing [2301, 2790, 3022, 3047]
Combinatorial Problems [5, 17, 115, 118, 130, 131, 178, 213, 241, 250, 251, 265, 315,
431, 432, 463, 475, 497, 533, 568, 571, 576, 578, 626–628,
671, 680, 682, 783, 806, 822–825, 832, 868, 1008, 1026, 1027,
1042, 1089, 1091, 1096, 1124, 1239, 1264, 1280, 1308, 1313,
1472, 1497, 1501, 1502, 1533, 1585, 1625, 1694, 1723, 1724,
1834, 1850, 1851, 1879, 1998, 2000, 2066, 2124, 2131, 2132,
2172, 2219, 2223, 2248, 2256, 2257, 2261, 2262, 2288–2290,
2330, 2422, 2449, 2508, 2511, 2512, 2667, 2673, 2692, 2717,
2812, 2813, 2817, 2885, 2900, 2990, 3016, 3017, 3047]; see
also Tables 3.1, 3.2, 18.1, 20.1, 22.1, 25.1, 26.1, 27.1, 28.4,
29.1, 30.1, 31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 38.1, 39.1, 40.1,
41.1, 42.1, 46.1, 46.3, 47.1, and 48.1
Computer Graphics [45, 327, 450, 844, 1224, 2506]; see also Tables 3.1, 3.2, 20.1,
27.1, 28.4, 29.1, 30.1, 31.1, 33.1, and 36.1
Control [1922]; see also Tables 28.4, 29.1, 31.1, 32.1, and 35.1
Cooperation and Teamwork [2238]; see also Tables 22.1, 28.4, 29.1, 31.1, and 37.1
Data Compression see Table 31.1
Data Mining [37, 40, 92, 115, 126, 189, 203, 217, 277, 281, 305, 310, 406,
450, 538, 656, 766, 805, 849, 889, 930, 961, 968, 975, 983,
991, 1035, 1036, 1100, 1168, 1176, 1194, 1259, 1425, 1503,
1959, 2070, 2088, 2202, 2234, 2272, 2312, 2319, 2357, 2459,
2506, 2602, 2671, 2788, 2790, 2824, 2839, 2849, 2939, 2961,
2999, 3047, 3066, 3077, 3078, 3103]; see also Tables 3.1, 20.1,
21.1, 22.1, 27.1, 28.4, 29.1, 31.1, 32.1, 34.1, 35.1, 36.1, 37.1,
39.1, and 46.3
Databases [115, 130, 131, 213, 533, 568, 571, 783, 991, 1096, 1308, 1585,
2692, 2824, 2971, 3016, 3017]; see also Tables 3.1, 3.2, 21.1,
26.1, 27.1, 28.4, 29.1, 31.1, 35.1, 36.1, 37.1, 39.1, 40.1, 46.3,
and 47.1
Decision Making [1194]; see also Tables 3.1, 3.2, 20.1, and 31.1
Digital Technology see Tables 3.1, 26.1, 28.4, 29.1, 30.1, 31.1, 34.1, and 35.1
Distributed Algorithms and Systems [127, 204, 226, 330–333, 492, 495, 506, 533, 565, 576, 617,
1497, 1561, 1645, 1647, 1650, 1690, 1777, 1850, 1851, 1998,
2000, 2066, 2070, 2172, 2202, 2238, 2256, 2257, 2301, 2319,
2330, 2375, 2511, 2512, 2602, 2638, 2716, 2790, 2839, 2878,
2990, 3017, 3022, 3058]; see also Tables 3.1, 3.2, 18.1, 21.1,
22.1, 25.1, 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 32.1, 33.1, 34.1,
35.1, 36.1, 37.1, 39.1, 40.1, 41.1, 46.1, 46.3, and 47.1
E-Learning [506, 565, 2319]; see also Tables 28.4, 29.1, 37.1, and 39.1
Economics and Finances [204, 281, 310, 559, 1194, 1998, 2257, 2301, 2330, 2878, 3022];
see also Tables 3.1, 3.2, 27.1, 28.4, 29.1, 31.1, 33.1, 35.1, 36.1,
37.1, 39.1, 40.1, 46.1, and 46.3
Engineering [189, 199, 312, 1101, 1194, 1219, 1227, 1431, 1690, 1777, 2088,
2375, 2459, 2716, 2839, 2878, 3004, 3047, 3077, 3078]; see also
Tables 3.1, 3.2, 18.1, 21.1, 22.1, 26.1, 27.1, 28.4, 29.1, 30.1,
31.1, 32.1, 33.1, 34.1, 35.1, 36.1, 37.1, 39.1, 40.1, 41.1, 46.3,
and 47.1
Face Recognition [471]
Function Optimization [72, 613, 1136, 1137, 1188, 1942, 2333, 2708]; see also Tables
3.2, 21.1, 26.1, 27.1, 28.4, 29.1, 30.1, 32.1, 33.1, 34.1, 35.1,
36.1, 39.1, 40.1, 41.1, and 43.1
Games [1954, 2383–2385]; see also Tables 3.1, 28.4, 29.1, 31.1, and
32.1
Graph Theory [241, 576, 628, 650, 1219, 1263, 1472, 1625, 1690, 2172, 2375,
2716, 2839]; see also Tables 3.1, 18.1, 21.1, 22.1, 25.1, 26.1,
27.1, 28.4, 29.1, 30.1, 31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 38.1,
39.1, 40.1, 41.1, 46.1, 46.3, and 47.1
Healthcare [45, 3103]; see also Tables 28.4, 29.1, 31.1, 35.1, 36.1, and
37.1
Image Synthesis [1637]; see also Tables 18.1 and 28.4
Logistics [118, 178, 241, 250, 251, 265, 495, 497, 578, 627, 680, 806,
824, 825, 832, 868, 1008, 1027, 1089, 1091, 1124, 1313, 1501,
1625, 1694, 1723, 1724, 1777, 2131, 2132, 2248, 2261, 2262,
2288–2290, 2508, 2673, 2717, 2812, 2813, 2817, 3047]; see
also Tables 3.1, 18.1, 22.1, 25.1, 26.1, 27.1, 28.4, 29.1, 30.1,
33.1, 34.1, 36.1, 37.1, 38.1, 39.1, 40.1, 41.1, 46.3, 47.1, and
48.1
Mathematics [405, 766, 930, 1020, 2200, 2349, 2357, 2418, 2572, 2885, 2939,
3078]; see also Tables 3.1, 18.1, 26.1, 27.1, 28.4, 29.1, 30.1,
31.1, 32.1, 33.1, 34.1, and 35.1
Military and Defense see Tables 3.1, 26.1, 27.1, 28.4, and 34.1
Motor Control see Tables 18.1, 22.1, 28.4, 33.1, 34.1, 36.1, and 39.1
Multi-Agent Systems [1818, 2081, 2667, 2919]; see also Tables 3.1, 18.1, 22.1, 28.4,
29.1, 31.1, 35.1, and 37.1
Multiplayer Games [127]; see also Table 31.1
News Industry [1425]
Nutrition [2971]
Operating Systems [2900]
Physics [1879]; see also Tables 3.1, 27.1, 28.4, 29.1, 31.1, 32.1, and
34.1
Police and Justice [471]
Prediction [405]; see also Tables 28.4, 29.1, 30.1, 31.1, and 35.1
Security [2459, 2878, 3022]; see also Tables 3.1, 22.1, 26.1, 27.1, 28.4,
29.1, 31.1, 35.1, 36.1, 37.1, 39.1, 40.1, and 46.3
Shape Synthesis see Table 26.1
Software [822, 823, 827, 1008, 1027, 1168, 1651, 1850, 1851, 2000, 2066,
2131, 2132, 2330, 2961, 2971, 3058]; see also Tables 3.1, 18.1,
20.1, 21.1, 22.1, 25.1, 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 33.1,
34.1, 36.1, 37.1, 40.1, 46.1, and 46.3
Sorting see Tables 28.4 and 31.1
Telecommunication see Tables 28.4, 37.1, and 40.1
Testing [1168, 1651]; see also Tables 3.1, 28.4, and 31.3
Theorem Proving and Automatic Verification see Tables 29.1, 40.1, and 46.3
Water Supply and Management see Tables 3.1 and 28.4
Wireless Communication [1219]; see also Tables 3.1, 18.1, 22.1, 26.1, 27.1, 28.4, 29.1,
31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 39.1, 40.1, and 46.3

10.2 Books
1. Handbook of Evolutionary Computation [171]
2. New Ideas in Optimization [638]
3. Scalable Optimization via Probabilistic Modeling – From Algorithms to Applications [2158]
4. Handbook of Metaheuristics [1068]
5. New Optimization Techniques in Engineering [2083]
6. Metaheuristics for Multiobjective Optimisation [1014]
7. Selected Papers from the Workshop on Pattern Directed Inference Systems [2869]
8. Applications of Multi-Objective Evolutionary Algorithms [597]
9. Handbook of Applied Optimization [2125]
10. The Traveling Salesman Problem: A Computational Study [118]
11. Intelligent Systems for Automated Learning and Adaptation: Emerging Trends and Applica-
tions [561]
12. Optimization and Operations Research [773]
13. Readings in Artificial Intelligence: A Collection of Articles [2872]
14. Machine Learning: Principles and Techniques [974]
15. Experimental Methods for the Analysis of Optimization Algorithms [228]
16. Evolutionary Multiobjective Optimization – Theoretical Advances and Applications [8]
17. Evolutionary Algorithms for Solving Multi-Objective Problems [599]
18. Pareto Optimality, Game Theory and Equilibria [560]
19. Pattern Classification [844]
20. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [1694]
21. Parallel Evolutionary Computations [2015]
22. Introduction to Global Optimization [2126]
23. Global Optimization Algorithms – Theory and Application [2892]
24. Computational Intelligence in Control [1922]
25. Introduction to Operations Research [580]
26. Management Models and Industrial Applications of Linear Programming [530]
27. Efficient and Accurate Parallel Genetic Algorithms [477]
28. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
29. Nonlinear Programming: Sequential Unconstrained Minimization Techniques [908]
30. Nonlinear Programming: Sequential Unconstrained Minimization Techniques [909]
31. Global Optimization: Deterministic Approaches [1271]
32. Soft Computing: Methodologies and Applications [1242]
33. Handbook of Global Optimization [1272]
34. New Learning Paradigms in Soft Computing [1430]
35. How to Solve It: Modern Heuristics [1887]
36. Numerical Optimization [2040]
37. Artificial Intelligence: A Modern Approach [2356]
38. Electromagnetic Optimization by Genetic Algorithms [2250]
39. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems [2685]
40. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations
[2961]
41. Multi-Objective Swarm Intelligent Systems: Theory & Experiences [2016]
42. Evolutionary Optimization [2394]
43. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
44. Multi-Objective Optimization in Computational Intelligence: Theory and Practice [438]
45. An Operations Research Approach to the Economic Optimization of a Kraft Pulping Process
[500]
46. Fundamental Methods of Mathematical Economics [559]
47. Einführung in die Künstliche Intelligenz [797]
48. Stochastic and Global Optimization [850]
49. Machine Learning Applications in Expert Systems and Information Retrieval [975]
50. Nonlinear and Dynamics Programming [1162]
51. Introduction to Operations Research [1232]
52. Introduction to Stochastic Search and Optimization [2561]
53. Nonlinear Multiobjective Optimization [1893]
54. Heuristics: Intelligent Search Strategies for Computer Problem Solving [2140]
55. Global Optimization with Non-Convex Constraints [2623]
56. The Nature of Statistical Learning Theory [2788]
57. Advances in Greedy Algorithms [247]
58. Multiobjective Decision Making Theory and Methodology [528]
59. Multiobjective Optimization: Principles and Case Studies [609]
60. Multi-Objective Optimization Using Evolutionary Algorithms [740]
61. Multicriteria Optimization [860]
62. Multiple Criteria Optimization: Theory, Computation and Application [2610]
63. Global Optimization [2714]
64. Soft Computing [760]

10.3 Conferences and Workshops

Table 10.2: Conferences and Workshops on Optimization.



IJCAI: International Joint Conference on Artificial Intelligence


History: 2009/07: Pasadena, CA, USA, see [373]
2007/01: Hyderabad, India, see [2797]
2005/07: Edinburgh, Scotland, UK, see [1474]
2003/08: Acapulco, México, see [1118, 1485]
2001/08: Seattle, WA, USA, see [2012]
1999/07: Stockholm, Sweden, see [730, 731, 2296]
1997/08: Nagoya, Japan, see [1946, 1947, 2260]
1995/08: Montréal, QC, Canada, see [1836, 1861, 2919]
1993/08: Chambéry, France, see [182, 183, 927, 2259]
1991/08: Sydney, NSW, Australia, see [831, 1987, 1988, 2122]
1989/08: Detroit, MI, USA, see [2593, 2594]
1987/08: Milano, Italy, see [1846, 1847]
1985/08: Los Angeles, CA, USA, see [1469, 1470]
1983/08: Karlsruhe, Germany, see [448, 449]
1981/08: Vancouver, BC, Canada, see [1212, 1213]
1979/08: Tokyo, Japan, see [2709, 2710]
1977/08: Cambridge, MA, USA, see [2281, 2282]
1975/09: Tbilisi, Georgia, USSR, see [2678]
1973/08: Stanford, CA, USA, see [2039]
1971/09: London, UK, see [630]
1969/05: Washington, DC, USA, see [2841]
AAAI: Annual National Conference on Artificial Intelligence
History: 2011/08: San Francisco, CA, USA, see [3]
2010/07: Atlanta, GA, USA, see [2]
2008/06: Chicago, IL, USA, see [979]
2007/07: Vancouver, BC, Canada, see [1262]
2006/07: Boston, MA, USA, see [1055]
2005/07: Pittsburgh, PA, USA, see [1484]
2004/07: San José, CA, USA, see [1849]
2002/07: Edmonton, Alberta, Canada, see [759]
2000/07: Austin, TX, USA, see [1504]
1999/07: Orlando, FL, USA, see [1222]
1998/07: Madison, WI, USA, see [1965]
1997/07: Providence, RI, USA, see [1634, 1954]
1996/08: Portland, OR, USA, see [586, 2207]
1994/07: Seattle, WA, USA, see [1214]
1993/07: Washington, DC, USA, see [913]
1992/07: San José, CA, USA, see [2639]
1991/07: Anaheim, CA, USA, see [732]
1990/07: Boston, MA, USA, see [793]
1988/08: St. Paul, MN, USA, see [1916]
1987/07: Seattle, WA, USA, see [960]
1986/08: Philadelphia, PA, USA, see [1512, 1513]
1984/08: Austin, TX, USA, see [381]
1983/08: Washington, DC, USA, see [1041]
1982/08: Pittsburgh, PA, USA, see [2845]
1980/08: Stanford, CA, USA, see [197]
HIS: International Conference on Hybrid Intelligent Systems
History: 2011/12: Melaka, Malaysia, see [14]
2010/08: Atlanta, GA, USA, see [1377]
2009/08: Shěnyáng, Liáonı́ng, China, see [1375]
2008/10: Barcelona, Catalonia, Spain, see [2993]
2007/09: Kaiserslautern, Germany, see [1572]
2006/12: Auckland, New Zealand, see [1340]
2005/11: Rio de Janeiro, RJ, Brazil, see [2014]
2004/12: Kitakyushu, Japan, see [1421]
2003/12: Melbourne, VIC, Australia, see [10]
2002/12: Santiago, Chile, see [9]
2001/12: Adelaide, SA, Australia, see [7]
IAAI: Conference on Innovative Applications of Artificial Intelligence
History: 2011/08: San Francisco, CA, USA, see [4]
2010/07: Atlanta, GA, USA, see [2361]
2009/07: Pasadena, CA, USA, see [1164]
2006/07: Boston, MA, USA, see [1055]
2005/07: Pittsburgh, PA, USA, see [1484]
2004/07: San José, CA, USA, see [1849]
2003/08: Acapulco, México, see [2304]
2001/04: Seattle, WA, USA, see [1238]
2000/07: Austin, TX, USA, see [1504]
1999/07: Orlando, FL, USA, see [1222]
1998/07: Madison, WI, USA, see [1965]
1997/07: Providence, RI, USA, see [1634]
1996/08: Portland, OR, USA, see [586]
1995/08: Montréal, QC, Canada, see [51]
1994/08: Seattle, WA, USA, see [462]
1993/07: Washington, DC, USA, see [1]
1992/07: San José, CA, USA, see [2439]
1991/06: Anaheim, CA, USA, see [2533]
1990/05: Washington, DC, USA, see [2269]
1989/03: Stanford, CA, USA, see [2428]
ICCI: IEEE International Conference on Cognitive Informatics
History: 2011/08: Banff, AB, Canada, see [200]
2010/07: Běijı̄ng, China, see [2630]
2009/06: Kowloon, Hong Kong / Xiānggǎng, China, see [164]
2008/08: Stanford, CA, USA, see [2857]
2007/08: Lake Tahoe, CA, USA, see [3065]
2006/07: Běijı̄ng, China, see [3029]
2005/08: Irvine, CA, USA, see [1339]
2004/08: Victoria, Canada, see [524]
2003/08: London, UK, see [2133]
2002/08: Calgary, AB, Canada, see [1337]
ICML: International Conference on Machine Learning
History: 2011/06: Seattle, WA, USA, see [2442]
2010/06: Haifa, Israel, see [1163]
2009/06: Montréal, QC, Canada, see [681]
2008/07: Helsinki, Finland, see [603]
2007/06: Corvallis, OR, USA, see [1047]
2006/06: Pittsburgh, PA, USA, see [602]
2005/08: Bonn, North Rhine-Westphalia, Germany, see [724]
2004/07: Banff, AB, Canada, see [426]
2003/08: Washington, DC, USA, see [895]
2002/07: Sydney, NSW, Australia, see [2382]
2001/06: Williamstown, MA, USA, see [427]
2000/06: Stanford, CA, USA, see [1676]
1999/06: Bled, Slovenia, see [401]
1998/07: Madison, WI, USA, see [2471]
1997/07: Nashville, TN, USA, see [926]
1996/07: Bari, Italy, see [2367]
1995/07: Tahoe City, CA, USA, see [2221]
1994/07: New Brunswick, NJ, USA, see [601]
1993/06: Amherst, MA, USA, see [1945]
1992/07: Aberdeen, Scotland, UK, see [2520]
1991/06: Evanston, IL, USA, see [313]
1990/06: Austin, TX, USA, see [2206]
1989/06: Ithaca, NY, USA, see [2447]
1988/06: Ann Arbor, MI, USA, see [1659, 1660]
1987: Irvine, CA, USA, see [1415]
1985: Skytop, PA, USA, see [2519]
1983: Monticello, IL, USA, see [1936]
1980: Pittsburgh, PA, USA, see [2183]
MCDM: Multiple Criteria Decision Making
History: 2011/06: Jyväskylä, Finland, see [1895]
2009/06: Chéngdū, Sı̀chuān, China, see [2750]
2008/06: Auckland, New Zealand, see [861]
2006/06: Chania, Crete, Greece, see [3102]
2004/08: Whistler, BC, Canada, see [2875]
2002/02: Semmering, Austria, see [1798]
2000/06: Ankara, Turkey, see [1562]
1998/06: Charlottesville, VA, USA, see [1166]
1997/01: Cape Town, South Africa, see [2612]
1995/06: Hagen, North Rhine-Westphalia, Germany, see [891]
1994/08: Coimbra, Portugal, see [592]
1992/07: Taipei, Taiwan, see [2744]
1990/08: Fairfax, VA, USA, see [2551]
1988/08: Manchester, UK, see [1759]
1986/08: Kyoto, Japan, see [2410]
1984/06: Cleveland, OH, USA, see [1165]
1982/08: Mons, Brussels, Belgium, see [1190]
1980/08: Newark, DE, USA, see [1960]
1979/08: Hagen/Königswinter, West Germany, see [890]
1977/08: Buffalo, NY, USA, see [3094]
1975/05: Jouy-en-Josas, France, see [2701]
SEAL: International Conference on Simulated Evolution and Learning
History: 2012/12: Hanoi, Vietnam, see [1179]
2010/12: Kanpur, Uttar Pradesh, India, see [2585]
2008/12: Melbourne, VIC, Australia, see [1729]
2006/10: Héféi, Ānhuı̄, China, see [2856]
2004/10: Busan, South Korea, see [2123]
2002/11: Singapore, see [2661]
2000/10: Nagoya, Japan, see [1991]
1998/11: Canberra, Australia, see [1852]
1996/11: Taejon, South Korea, see [2577]
AI: Australian Joint Conference on Artificial Intelligence
History: 2010/12: Adelaide, SA, Australia, see [33]
2006/12: Hobart, Australia, see [2406]
2005/12: Sydney, NSW, Australia, see [3072]
2004/12: Cairns, Australia, see [2871]
AISB: Artificial Intelligence and Simulation of Behaviour
History: 2008/04: Aberdeen, Scotland, UK, see [1147]
2007/04: Newcastle upon Tyne, UK, see [2549]
2005/04: Bristol, UK and Hatfield, England, UK, see [2547, 2548]
2003/03: Aberystwyth, Wales, UK and Leeds, UK, see [2545, 2546]
2002/04: London, UK, see [2544]
2001/03: Heslington, York, UK, see [2759]
2000/04: Birmingham, UK, see [2748]
1997/04: Manchester, UK, see [637]
1996/04: Brighton, UK, see [938]
1995/04: Sheffield, UK, see [937]
1994/04: Leeds, UK, see [936]
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
History: 2010/12: Boston, MA, USA, see [367]
2009/12: Avignon, France, see [1275]
2008/11: Awaji City, Hyogo, Japan, see [153]
2007/12: Budapest, Hungary, see [1342]
2006/12: Cavalese, Italy, see [509]
ICARIS: International Conference on Artificial Immune Systems
History: 2011/07: Cambridge, UK, see [1203]
2010/07: Edinburgh, Scotland, UK, see [858]
2009/08: York, UK, see [2363]
2008/08: Phuket, Thailand, see [274]
2007/08: Santos, Brazil, see [707]
2006/09: Oeiras, Portugal, see [282]
2005/08: Banff, AB, Canada, see [1424]
2004/09: Catania, Sicily, Italy, see [2035]
2003/09: Edinburgh, Scotland, UK, see [2706]
: Canterbury, Kent, UK, see [2705]
MICAI: Mexican International Conference on Artificial Intelligence
History: 2010/11: Pachuca, Mexico, see [2583]
2009/11: Guanajuato, México, see [38]
2007/11: Aguascalientes, México, see [1037]
2005/11: Monterrey, México, see [1039]
2004/04: Mexico City, México, see [1929]
2002/04: Mérida, Yucatán, México, see [598]
2000/04: Acapulco, México, see [470]
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
History: 2011/10: Cluj-Napoca, Romania, see [2576]
2010/05: Granada, Spain, see [1102]
2008/11: Puerto de la Cruz, Tenerife, Spain, see [1620]
2007/11: Catania, Sicily, Italy, see [1619]
2006/06: Granada, Spain, see [2159]
SMC: International Conference on Systems, Man, and Cybernetics
History: 2014/10: San Diego, CA, USA, see [2386]
2013/10: Edinburgh, Scotland, UK, see [859]
2012/10: Seoul, South Korea, see [2454]
2011/10: Anchorage, AK, USA, see [91]
2009/10: San Antonio, TX, USA, see [1352]
2001/10: Tucson, AZ, USA, see [1366]
1999/10: Tokyo, Japan, see [1331]
1998/10: La Jolla, CA, USA, see [1364]
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
History: 2010/11: see [1029]
2009/11: see [1015]
2008/11: see [1857]
2007/10: see [151]
2006/09: see [2980]
2005: see [2979]
2004: see [2978]
2003: see [2977]
2002: see [2976]
2001: see [2975]
2000: see [2974]
1999/09: Nagoya, Japan, see [1993]
1998/08: Nagoya, Japan, see [1992]
1997/06: see [535]
1996/08: Nagoya, Japan, see [1001]
AIMS: Artificial Intelligence in Mobile System
History: 2003/10: Seattle, WA, USA, see [1624]
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
History: 2008/10: Cergy-Pontoise, France, see [3049]
2005/05: Muroran, Japan, see [11]
ECML: European Conference on Machine Learning
History: 2007/09: Warsaw, Poland, see [2866]
2006/09: Berlin, Germany, see [278]
2005/10: Porto, Portugal, see [2209]
2004/09: Pisa, Italy, see [2182]
2003/09: Cavtat, Dubrovnik, Croatia, see [512]
2002/08: Helsinki, Finland, see [1221]
2001/09: Freiburg, Germany, see [992]
2000/05: Barcelona, Catalonia, Spain, see [215]
1998/04: Chemnitz, Sachsen, Germany, see [537]
1997/04: Prague, Czech Republic, see [2781]
1995/04: Heraclion, Crete, Greece, see [1223]
1994/04: Catania, Sicily, Italy, see [507]
1993/04: Vienna, Austria, see [2809]
EPIA: Portuguese Conference on Artificial Intelligence
History: 2011/10: Lisbon, Portugal, see [1744]
2009/10: Aveiro, Portugal, see [1774]
2007/12: Guimarães, Portugal, see [2026]
2005/12: Covilhã, Portugal, see [275]
EUFIT: European Congress on Intelligent Techniques and Soft Computing
History: 1999/09: Aachen, North Rhine-Westphalia, Germany, see [2807]
1998/09: Aachen, North Rhine-Westphalia, Germany, see [2806]
1997/09: Aachen, North Rhine-Westphalia, Germany, see [3091]
1996/09: Aachen, North Rhine-Westphalia, Germany, see [877]
1995/08: Aachen, North Rhine-Westphalia, Germany, see [2805]
1994/09: Aachen, North Rhine-Westphalia, Germany, see [2803]
1993/09: Aachen, North Rhine-Westphalia, Germany, see [2804]
EngOpt: International Conference on Engineering Optimization
History: 2010/09: Lisbon, Portugal, see [1405]
2008/06: Rio de Janeiro, RJ, Brazil, see [1227]
GEWS: Grammatical Evolution Workshop
See Table 31.2 on page 385.
HAIS: International Workshop on Hybrid Artificial Intelligence Systems
History: 2011/05: Warsaw, Poland, see [2867]
2009/06: Salamanca, Spain, see [2749]
2008/09: Burgos, Spain, see [632]
2007/11: Salamanca, Spain, see [631]
2006/10: Ribeirão Preto, SP, Brazil, see [184]
ICAART: International Conference on Agents and Artificial Intelligence
History: 2011/01: Rome, Italy, see [1404]
2010/01: València, Spain, see [917]
2009/01: Porto, Portugal, see [916]
ICTAI: International Conference on Tools with Artificial Intelligence
History: 2003/11: Sacramento, CA, USA, see [1367]
IEA/AIE: International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems
History: 2004/05: Ottawa, ON, Canada, see [2088]
ISA: International Workshop on Intelligent Systems and Applications
History: 2011/05: Wǔhàn, Húběi, China, see [1380]
2010/05: Wǔhàn, Húběi, China, see [2846]
2009/05: Wǔhàn, Húběi, China, see [2847]
ISICA: International Symposium on Advances in Computation and Intelligence
History: 2007/09: Wǔhàn, Húběi, China, see [1490]
LION: Learning and Intelligent OptimizatioN
History: 2011/01: Rome, Italy, see [2318]
2010/01: Venice, Italy, see [2581]
2009/01: Trento, Italy, see [2625]
2007/02: Andalo, Trento, Italy, see [1273, 1826]
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
History: 2010/09: St Andrews, Scotland, UK, see [780]
2009/09: Lisbon, Portugal, see [779]
2008/09: Sydney, NSW, Australia, see [2011]
2007/09: Providence, RI, USA, see [2225]
2006/09: Nantes, France, see [583]
2005/10: Barcelona, Catalonia, Spain, see [1860]
2004/09: Toronto, ON, Canada, see [2715]
SoCPaR: International Conference on SOft Computing and PAttern Recognition
History: 2011/10: Dàlián, Liáonı́ng, China, see [1381]
2010/12: Cergy-Pontoise, France, see [515]
2009/12: Malacca, Malaysia, see [1224]
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
History: 1992/09: Tutzing, Germany, see [2742]
1989: Germany, see [245]
1986/07: Neubiberg, Bavaria, Germany, see [244]
1983/06: Neubiberg, Bavaria, Germany, see [243]
AINS: Annual Symposium on Autonomous Intelligent Networks and Systems
History: 2003/06: Menlo Park, CA, USA, see [513]
ALIO/EURO: Workshop on Applied Combinatorial Optimization
History: 2011/05: Porto, Portugal, see [138]
2008/12: Buenos Aires, Argentina, see [137]
2005/10: Paris, France, see [136]
2002/11: Pucón, Chile, see [135]
1999/11: Erice, Italy, see [134]
1996/11: Valparaı́so, Chile, see [133]
1989/08: Rio de Janeiro, RJ, Brazil, see [132]
ANZIIS: Australia and New Zealand Conference on Intelligent Information Systems
History: 1994/11: Brisbane, QLD, Australia, see [1359]
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
History: 2002/07: Banff, AB, Canada, see [1714]
EDM: International Conference on Educational Data Mining
History: 2009/07: Córdoba, Spain, see [217]
EWSL: European Working Session on Learning
Continued as ECML in 1993.
History: 1991/03: Porto, Portugal, see [1560]
1988/10: Glasgow, Scotland, UK, see [1061]
1987/05: Bled, Yugoslavia (now Slovenia), see [322]
1986/02: Orsay, France, see [2097]
FLAIRS: Florida Artificial Intelligence Research Symposium
History: 1994/05: Pensacola Beach, FL, USA, see [1360]
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
See Table 29.2 on page 355.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
History: 2006/12: Vienna, Austria, see [2024]
ICMLC: International Conference on Machine Learning and Cybernetics
History: 2005/08: Guǎngzhōu, Guǎngdōng, China, see [2263]
IICAI: Indian International Conference on Artificial Intelligence
History: 2011/12: Tumkur, India, see [2736]
2009/12: Tumkur, India, see [2217]
2007/12: Pune, India, see [2216]
2005/12: Pune, India, see [2215]
2003/12: Hyderabad, India, see [2214]
ISIS: International Symposium on Advanced Intelligent Systems
History: 2007/09: Sokcho, Korea, see [1579]
MCDA: International Summer School on Multicriteria Decision Aid
History: 1998/07: Monte Estoril, Lisbon, Portugal, see [198]
PAKDD: Pacific-Asia Conference on Knowledge Discovery and Data Mining
History: 2011/05: Shēnzhèn, Guǎngdōng, China, see [1294]
STeP: Finnish Artificial Intelligence Conference
History: 2008/08: Espoo, Finland, see [2255]
TAAI: Conference on Technologies and Applications of Artificial Intelligence
History: 2010/11: Hsinchu, Taiwan, see [1283]
2009/10: Wufeng Township, Taichung County, Taiwan, see [529]
2008/11: Chiao-hsi Shiang, I-lan County, Taiwan, see [2659]
2007/11: Douliou, Yunlin, Taiwan, see [2006]
2006/12: Kaohsiung, Taiwan, see [1493]
2005/12: Kaohsiung, Taiwan, see [2004]
2004/11: Wenshan District, Taipei City, Taiwan, see [2003]
TAI: IEEE Conference on Tools for Artificial Intelligence
History: 1990/11: Herndon, VA, USA, see [1358]
KAEiOG: National Conference on Evolutionary Algorithms and Global Optimization (Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna)
History: 1997/09: Rytro, Poland, see [2362]
Machine Intelligence Workshop
History: 1977: Santa Cruz, CA, USA, see [874]
TAINN: Turkish Symposium on Artificial Intelligence and Neural Networks (Türk Yapay Zeka ve Yapay Sinir Ağları Sempozyumu)
History: 1993/06: Istanbul, Turkey, see [1643]
AGI: Conference on Artificial General Intelligence
History: 2011/08: Mountain View, CA, USA, see [2587]
2010/03: Lugano, Switzerland, see [1786]
2009/03: Arlington, VA, USA, see [661]
2008/03: Memphis, TN, USA, see [1414]
AICS: Irish Conference on Artificial Intelligence and Cognitive Science
History: 2007/08: Dublin, Ireland, see [764]
AIIA: Congress of the Italian Association for Artificial Intelligence (AI*IA) on Trends in Artificial
Intelligence
History: 1991/10: Palermo, Italy, see [124]
ECML PKDD: European Conference on Machine Learning and European Conference on Principles
and Practice of Knowledge Discovery in Databases
History: 2009/09: Bled, Slovenia, see [321]
2008/09: Antwerp, Belgium, see [114]
SEA: International Symposium on Experimental Algorithms
History: 2011/05: Chania, Crete, Greece, see [2098]
2010/05: Ischa Island, Naples, Italy, see [2584]

10.4 Journals
1. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics published by
IEEE Systems, Man, and Cybernetics Society
2. Information Sciences – Informatics and Computer Science Intelligent Systems Applications:
An International Journal published by Elsevier Science Publishers B.V.
3. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
4. Machine Learning published by Kluwer Academic Publishers and Springer Netherlands
5. Applied Soft Computing published by Elsevier Science Publishers B.V.
6. Soft Computing – A Fusion of Foundations, Methodologies and Applications published by
Springer-Verlag
7. Computers & Operations Research published by Elsevier Science Publishers B.V. and Perg-
amon Press
8. Artificial Intelligence published by Elsevier Science Publishers B.V.
9. Operations Research published by HighWire Press (Stanford University) and Institute for
Operations Research and the Management Sciences (INFORMS)
10. Annals of Operations Research published by J. C. Baltzer AG, Science Publishers and Springer
Netherlands
11. International Journal of Computational Intelligence and Applications (IJCIA) published by
Imperial College Press Co. and World Scientific Publishing Co.
12. ORSA Journal on Computing published by Operations Research Society of America (ORSA)
13. Management Science published by HighWire Press (Stanford University) and Institute for
Operations Research and the Management Sciences (INFORMS)
14. Journal of Artificial Intelligence Research (JAIR) published by AAAI Press and AI Access
Foundation, Inc.
15. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
16. The Journal of the Operational Research Society (JORS) published by Operations Research
Society and Palgrave Macmillan Ltd.
17. Data Mining and Knowledge Discovery published by Springer Netherlands
18. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
19. Mathematics of Operations Research (MOR) published by Institute for Operations Research
and the Management Sciences (INFORMS)
20. Journal of Global Optimization published by Springer Netherlands
21. AIAA Journal published by American Institute of Aeronautics and Astronautics
22. Journal of Artificial Evolution and Applications published by Hindawi Publishing Corporation
23. Applied Intelligence – The International Journal of Artificial Intelligence, Neural Networks,
and Complex Problem-Solving Technologies published by Springer Netherlands
24. Artificial Intelligence Review – An International Science and Engineering Journal published
by Kluwer Academic Publishers and Springer Netherlands
25. Annals of Mathematics and Artificial Intelligence published by Springer Netherlands
26. IEEE Transactions on Knowledge and Data Engineering published by IEEE Computer Soci-
ety Press
27. IEEE Intelligent Systems Magazine published by IEEE Computer Society Press
28. ACM SIGART Bulletin published by ACM Press
29. Journal of Machine Learning Research (JMLR) published by MIT Press
30. Discrete Optimization published by Elsevier Science Publishers B.V.
31. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
32. Artificial Intelligence in Engineering published by Elsevier Science Publishers B.V.
33. Engineering Optimization published by Informa plc and Taylor and Francis LLC
34. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
35. International Journal of Organizational and Collective Intelligence (IJOCI) published by Idea
Group Publishing (Idea Group Inc., IGI Global)
36. Journal of Intelligent Information Systems published by Springer-Verlag GmbH
37. International Journal of Knowledge-Based and Intelligent Engineering Systems published by
IOS Press
38. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) published by
IEEE Computer Society
39. Optimization Methods and Software published by Taylor and Francis LLC
40. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
41. Structural and Multidisciplinary Optimization published by Springer-Verlag GmbH
42. OR/MS Today published by Institute for Operations Research and the Management Sciences
(INFORMS) and Lionheart Publishing, Inc.
43. AI Magazine published by AAAI Press
44. Computers in Industry – An International, Application Oriented Research Journal published
by Elsevier Science Publishers B.V.
45. International Journal of Approximate Reasoning published by Elsevier Science Publishers B.V.
46. International Journal of Intelligent Systems published by Wiley Interscience
47. International Journal of Production Research published by Taylor and Francis LLC
48. International Transactions in Operational Research published by Blackwell Publishing Ltd
49. INFORMS Journal on Computing (JOC) published by HighWire Press (Stanford University)
and Institute for Operations Research and the Management Sciences (INFORMS)
50. OR Spectrum – Quantitative Approaches in Management published by Springer-Verlag
51. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization pub-
lished by SIAM Activity Group on Optimization – SIAG/OPT
52. Künstliche Intelligenz (KI) published by Böttcher IT Verlag
Part II

Difficulties in Optimization
Chapter 11
Introduction

The classification of optimization algorithms in Section 1.3 and the table of contents of this
book enumerate a wide variety of optimization algorithms. Yet, the approaches introduced
here represent only a fraction of the actual number of available methods. It is a justified
question to ask why there are so many different approaches and why this variety is needed. One
possible answer is simply that there are so many different kinds of optimization tasks.
Each of them puts different obstacles in the way of the optimizers and comes with its own
characteristic difficulties.
In this part we want to discuss the most important of these complications, the major
problems that may be encountered during optimization [2915]. You may wish to read this
part after reading through some of the later parts on specific optimization algorithms or
after solving your first optimization task. You will then realize that many of the problems
you encountered are discussed here in depth.
Some of the subjects in the following text concern Global Optimization in general (multi-
modality and overfitting, for instance); others apply especially to nature-inspired approaches
like Genetic Algorithms (epistasis and neutrality, for example). Sometimes, neglecting a
single one of these aspects during the development of the optimization process can render
all invested effort useless, even if highly efficient optimization techniques are applied.
By giving clear definitions and comprehensive introductions to these topics, we want to raise
the awareness of scientists and practitioners in industry and hope to help them use
optimization algorithms more efficiently.
In Figure 11.1, we have sketched a set of different types of fitness landscapes (see Sec-
tion 5.1) which we are going to discuss. The objective values in the figure are subject to
minimization and the small bubbles represent candidate solutions under investigation. An
arrow from one bubble to another means that the second individual is found by applying
one search operation to the first one.
Before we go into more detail about what makes these landscapes difficult, we should
establish what the term difficulty means in the context of optimization. The degree of
difficulty of solving a certain problem with a dedicated algorithm is closely related to the
problem's computational complexity. We will outline this in the next section and show that
for many problems, there is no algorithm which can solve them exactly in reasonable time.
As already stated, optimization algorithms are guided by objective functions. A function
is difficult from a mathematical perspective in this context if it is not continuous, not
differentiable, or if it has multiple maxima and minima. This understanding of difficulty
comes very close to the intuitive sketches in Figure 11.1.
In many real world applications of metaheuristic optimization, the characteristics of the
objective functions are not known in advance. The problems are usually very complex and it
is only rarely possible to derive boundaries for the performance or the runtime of optimizers
in advance, let alone exact estimates with mathematical precision.
In such cases, experience and general rules of thumb are often the best guides available
[Figure 11.1: Different possible properties of fitness landscapes (minimization). All plots show objective values f(x) over x. Sub-figures: Fig. 11.1.a: Best Case; Fig. 11.1.b: Low Variation; Fig. 11.1.c: Multi-Modal (multiple (local) optima); Fig. 11.1.d: Rugged (no useful gradient information); Fig. 11.1.e: Deceptive (region with misleading gradient information); Fig. 11.1.f: Neutral (neutral area); Fig. 11.1.g: Needle-In-A-Haystack (an isolated optimum inside a neutral area or an area without much information); Fig. 11.1.h: Nightmare.]
(see also Chapter 24). Empirical results based on models obtained from related research
areas such as biology are sometimes helpful as well. In this part, we discuss many such models
and rules, providing a better understanding of when the application of a metaheuristic is
feasible and when it is not, as well as of how to avoid defining problems in a way that makes
them difficult.
Chapter 12
Problem Hardness

Most of the algorithms we will discuss in this book, in particular the metaheuristics, usually
deliver approximations as results and cannot guarantee to always find the globally optimal
solution. In principle, however, we are able to find the global optima of all given problems
(limited, of course, to the precision of the available computers). The easiest way to do
so would be exhaustive enumeration (see Section 8.3): simply test all possible values from
the problem space and return the best one.
This method will obviously take extremely long, so our goal is to find an algorithm
which is better. More precisely, we want to find the best algorithm for solving a given class
of problems. This means that we need some sort of metric with which we can compare the
efficiency of algorithms [2877, 2940].

12.1 Algorithmic Complexity


The most important measures obtained by analyzing an algorithm1 are the time that it takes
to produce the desired outcome and the storage space needed for internal data [635]. We
call these the time complexity and the space complexity. The time complexity
denotes how many steps an algorithm needs until it returns its result. The space complexity
determines how much memory an algorithm consumes at most in one run. Of course,
both measures depend on the input values passed to the algorithm.
If we have an algorithm that should decide whether a given number is prime or not, the
number of steps needed to find that out will differ if the sizes of the inputs differ.
For example, it is easy to see that telling whether 7 is prime or not can be done in much
shorter time than verifying the same property for 2^32582657 − 1. The nature of the input
may influence the runtime and space requirements of an algorithm as well. We immediately
know that 1 000 000 000 000 000 is not prime, though it is a big number, whereas the answer
in case of the smaller number 100 000 007 is less clear.2 Therefore, often best, average, and
worst-case complexities exist for given input sizes.
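To make this input dependence concrete, consider the following sketch (our own illustration, not code from this book): a naive trial-division primality test whose step count we record. The even number 10^15 is rejected after a single division, while the prime 100 000 007 forces the loop to run all the way up to its square root.

```python
def is_prime(n):
    """Naive trial-division primality test that also reports
    how many trial divisions were performed (its 'time')."""
    if n < 2:
        return False, 0
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return False, steps   # composite: an early divisor ends the search
        d += 1
    return True, steps            # prime: the loop ran up to sqrt(n)

print(is_prime(7))                       # tiny input: settled after one division
print(is_prime(1_000_000_000_000_000))   # huge but even: one division suffices
print(is_prime(100_000_007))             # smaller, but prime: ~10 000 divisions
```

The same algorithm thus exhibits very different best- and worst-case behavior for inputs of comparable size, which is exactly why the three complexity notions are distinguished.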
In order to compare the efficiency of algorithms, some approximative notations have been
introduced [1555, 1556, 2510]. As we just have seen, the basic criterion influencing the time
and space requirements of an algorithm applied to a problem is the size of the inputs, i. e.,
the number of bits needed to represent the number from which we want to find out whether
it is prime or not, the number of elements in a list to be sorted, the number of nodes or
edges in a graph to be 3-colored, and so on. We can describe this dependency as a function
of this size.
In real systems, however, knowledge of the exact dependency is not needed. If we,
for example, know that sorting n data elements with the Quicksort algorithm3 [1240, 1557]
1 http://en.wikipedia.org/wiki/Analysis_of_algorithms [accessed 2007-07-03]
2 100 000 007 is indeed a prime, in case you're wondering; so is 2^32582657 − 1.
3 http://en.wikipedia.org/wiki/Quicksort [accessed 2007-07-03]
takes on average about n log n steps, this is sufficient, even if the correct
number is 2n ln n ≈ 1.39 n log n.
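This average-case behavior can be checked empirically. The snippet below (an illustrative sketch, not code from this book) counts the pivot comparisons of a simple Quicksort on random data and relates them to the 1.39 n log2 n estimate; the ratio slowly approaches 1 from below as n grows.

```python
import math
import random

def quicksort(lst, counter):
    """Simple Quicksort; counter[0] accumulates the number of
    pivot comparisons (the usual cost measure for sorting)."""
    if len(lst) <= 1:
        return lst
    pivot, rest = lst[0], lst[1:]
    counter[0] += len(rest)  # each remaining element is compared to the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

random.seed(1)
for n in (1_000, 10_000, 100_000):
    counter = [0]
    data = [random.random() for _ in range(n)]
    assert quicksort(data, counter) == sorted(data)
    predicted = 1.39 * n * math.log2(n)  # ~ 2 n ln n, the known average
    print(n, counter[0], round(counter[0] / predicted, 2))
```

The exact constants do not matter for the discussion that follows; only the n log n shape of the curve does.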
The Big-O-family notations introduced by Bachmann [163] and made popular by Landau
[1664] allow us to group functions together that rise at approximately the same speed.

Definition D12.1 (Big-O notation). The big-O notation4 is a mathematical notation
used to describe the asymptotic upper bound of functions.

f(x) ∈ O(g(x)) ⇔ ∃ x0, m ∈ R : m > 0 ∧ |f(x)| ≤ m · |g(x)| ∀ x > x0        (12.1)

In other words, a function f(x) is in O of another function g(x) if and only if there exist a
real number x0 and a constant, positive factor m such that the absolute value of f(x) is smaller
than (or equal to) m times the absolute value of g(x) for all x greater than x0. f(x)
thus cannot grow (considerably) faster than g(x). For example, f(x) = x^3 + x^2 + x + 1 ∈ O(x^3),
since for m = 5 and x0 = 2 it holds that 5x^3 > x^3 + x^2 + x + 1 ∀ x ≥ 2.
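The witness pair (m, x0) from such a claim can be checked mechanically. The snippet below (our own small illustration, not code from this book) verifies m = 5, x0 = 2 for f(x) = x^3 + x^2 + x + 1 ∈ O(x^3) over a sampled range.

```python
def f(x):
    return x**3 + x**2 + x + 1

def g(x):
    return x**3

# Witnesses from the text: m = 5, x0 = 2.
m, x0 = 5, 2
# |f(x)| <= m * |g(x)| must hold for every x > x0; we sample integers.
assert all(abs(f(x)) <= m * abs(g(x)) for x in range(x0 + 1, 10_000))
# The bound need not hold for small x: at x = 1, f(1) = 4 > 1 = g(1),
# which is exactly why the factor m and the threshold x0 are part of
# the definition.
print("f is in O(g) with witnesses m=5, x0=2 (sampled check)")
```

Of course, a finite sample is evidence rather than a proof; the proof is the simple algebraic argument given above.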
In terms of algorithmic complexity, we specify the number of steps or memory units an
algorithm needs in dependence on the size of its inputs in the big-O notation. More generally,
this specification is based on key features of the input data. For a graph-processing
algorithm, these may be the number of edges |E| and the number of vertices |V| of the graph
to be processed, and the algorithmic complexity may be O(|V|^2 + |E|·sqrt(|V|)). Some
examples for single-parametric functions can be found in Example E12.1.

Example E12.1 (Some examples of the big-O notation).

O(1) — examples: f1(x) = 2^222, f2(x) = sin x. Algorithms that have constant runtime for all inputs are in O(1).

O(log n) — examples: f3(x) = log x; f4(x) = f4(x/2) + 1 with f4(x < 1) = 0. Logarithmic complexity is often a feature of algorithms that run on binary trees or of search algorithms on ordered sets. Notice that O(log n) implies that only part of the input of the algorithm is read/regarded, since the input has length n and we only perform m log n steps.

O(n) — examples: f5(x) = 23n + 4, f6(x) = n/2. Algorithms in O(n) access and process their input a constant number of times. This is, for example, the case when searching in a linked list.

O(n log n) — example: f7(x) = 23x + x log 7x. Many sorting algorithms like Quicksort and Mergesort are in O(n log n).

O(n^2) — examples: f8(x) = 34x^2, f9(x) = Σ_{i=0}^{x+3} (x − 2). Some sorting algorithms like selection sort have this complexity. For many problems, O(n^2) solutions are acceptably good.

O(n^i) with i > 1, i ∈ R — example: f10(x) = x^5 − x^2. The general polynomial complexity. In this group we find many algorithms that work on graphs.

O(2^n) — example: f11(x) = 23 · 2^x. Algorithms with exponential complexity perform slowly and become infeasible with increasing input size. For many hard problems, only algorithms of this class are known. Their solutions can otherwise only be approximated by means of randomized global optimization techniques.
[Figure 12.1: Illustration of the rising speed of some functions (x, x^2, x^4, x^8, x^10, 1.1^x, 2^x, e^x, and x!), gleaned from [2365]. Horizontal guide lines mark, among others, the number of milliseconds per day and the number of picoseconds since the big bang.]

In Example E12.1, we give some examples for the big-O notation and name kinds of algo-
rithms which have such complexities. The rising behavior of some functions is illustrated in
Figure 12.1. From this figure it becomes clear that problems for which only algorithms with
high complexities such as O(2^n) exist quickly become intractable even for small input sizes
n.
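The gulf between polynomial and exponential growth can also be tabulated directly; this short script (illustrative only, not code from this book) prints the step counts several complexity classes require as n doubles.

```python
import math

# Step counts for representative complexity classes as the input grows.
print(f"{'n':>6} {'n log n':>10} {'n^2':>12} {'2^n':>42}")
for n in (8, 16, 32, 64, 128):
    print(f"{n:>6} {round(n * math.log2(n)):>10} {n ** 2:>12} {2 ** n:>42}")
# n^2 stays modest, but 2^n exceeds 10^38 already at n = 128: an
# exponential-time algorithm is hopeless even for such tiny inputs.
```

At, say, 10^9 steps per second, 2^128 steps would take vastly longer than the age of the universe, while 128^2 steps finish instantly.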

12.2 Complexity Classes


While the big-O notation is used to state how long a specific algorithm needs (at least/on
average/at most) for input with certain features, the computational complexity5 of a problem
is bounded by the best algorithm known for it. In other words, it makes a statement about
how many resources are necessary for solving a given problem or, from the opposite point
of view, tells us whether we can solve a problem with the given available resources.

12.2.1 Turing Machines

The amount of resources required at least to solve a problem depends on the problem
itself and on the machine used for solving it. Among the most basic computer models are
deterministic Turing machines6 (DTMs) [2737], as sketched in Fig. 12.2.a. They consist of
a tape of infinite length which is divided into distinct cells, each capable of holding exactly
one symbol from a finite alphabet. A Turing machine has a head which is always located
at one of these cells and can read and write the symbols. The machine has a finite set of
possible states and is driven by rules. Based on the machine's current state and the symbol
on the tape under its head, these rules decide which symbol should be written and whether
the head moves one unit to the left or right. With this simple machine, any of our computers
and, hence, also the programs executed by them, can be simulated.7
4 http://en.wikipedia.org/wiki/Big_O_notation [accessed 2007-07-03]
5 http://en.wikipedia.org/wiki/Computational_complexity_theory [accessed 2010-06-24]
6 http://en.wikipedia.org/wiki/Turing_machine [accessed 2010-06-24]
7 Another computational model, RAM (Random Access Machine), is quite close to today’s computer architectures and can be simulated with Turing machines efficiently [1804] and vice versa.
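The rule-driven operation described above can be simulated in a few lines. The sketch below uses our own rule encoding and a made-up example machine (it simply appends a 1 to a unary number); it is meant only to illustrate the model, not to reproduce the machine of Figure 12.2:

```python
# Minimal deterministic Turing machine simulator (illustrative sketch).
# Rules map (state, symbol) -> (symbol to write, head motion, next state).

BLANK = " "

# Example machine: scan right over a block of 1s, append one more 1, halt.
RULES = {
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", BLANK): ("1", +1, "halt"),
}

def run_dtm(tape, rules=RULES, state="scan", halt="halt"):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, BLANK)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).rstrip(BLANK)

print(run_dtm("111"))  # -> 1111
```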
State  Tape Symbol  ⇒  Print  Motion  Next State
A      0            ⇒  1      Right   A
A      1            ⇒  0      Right   B
B      0            ⇒  1      Right   B
B      1            ⇒  1      Left    C
C      1            ⇒  0      Left    C
C      0            ⇒  1      Right   A

[The figure additionally sketches the machine’s tape; only the rule table is reproduced here.]
Fig. 12.2.a: A deterministic Turing machine.

State  Tape Symbol  ⇒  Print  Motion  Next State
A      0            ⇒  1      Right   A
A      1            ⇒  0      Right   B
A      1            ⇒  0      Right   A
B      0            ⇒  1      Right   B
B      1            ⇒  1      Left    C
C      1            ⇒  0      Left    C
C      0            ⇒  1      Right   A
C      0            ⇒  1      Left    A

[The pairs of rules for (A, 1) and (C, 0) make this machine non-deterministic; the figure additionally sketches the resulting alternative tape contents.]
Fig. 12.2.b: A non-deterministic Turing machine.

Figure 12.2: Turing machines.

Like DTMs, non-deterministic Turing machines8 (NTMs, see Fig. 12.2.b) are thought
experiments rather than real computers. They are different from DTMs in that they may
have more than one rule for a state/input pair. Whenever there is an ambiguous situation
where n > 1 rules could be applied, the NTM can be thought of as “forking” into n independent
NTMs running in parallel. The computation of the overall system stops when any of these
reaches a final, accepting state. NTMs can be simulated with DTMs by performing a
breadth-first search on the possible computation steps of the NTM until it finds an accepting
state. The number of steps required for this simulation, however, grows exponentially with
the length of the shortest accepting path of the NTM [1483].
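The breadth-first simulation of an NTM by a deterministic program can be sketched as follows. The tiny machine used here – it non-deterministically decides whether to accept when it reads a 1 – is a hypothetical example, not the machine of Fig. 12.2.b:

```python
from collections import deque

BLANK = " "

# Non-deterministic rules: (state, symbol) -> list of possible
# (symbol to write, head motion, next state) triples.
NTM_RULES = {
    ("guess", "0"): [("0", +1, "guess")],
    ("guess", "1"): [("1", +1, "guess"), ("1", 0, "accept")],
}

def ntm_accepts(tape, rules=NTM_RULES, start="guess", accept="accept",
                max_steps=10_000):
    # Breadth-first search over configurations (state, head position, tape),
    # i.e., over all "forked" machine copies; the cost grows exponentially
    # with the length of the shortest accepting path.
    queue = deque([(start, 0, tuple(tape))])
    for _ in range(max_steps):
        if not queue:
            return False
        state, head, cells = queue.popleft()
        if state == accept:
            return True
        symbol = cells[head] if 0 <= head < len(cells) else BLANK
        for write, move, nxt in rules.get((state, symbol), []):
            new_cells = list(cells)
            if 0 <= head < len(new_cells):
                new_cells[head] = write
            queue.append((nxt, head + move, tuple(new_cells)))
    return False

print(ntm_accepts("0010"), ntm_accepts("000"))  # -> True False
```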

12.2.2 P, N P, Hardness, and Completeness

The set of all problems which can be solved by a deterministic Turing machine within
polynomial time is called P [1025]. This means that for the problems within this class,
there exists at least one algorithm (for a DTM) with an algorithmic complexity in O(np )
(where n is the input size and p ∈ N1 is some natural number).
Algorithms designed for Turing machines can be translated to algorithms for normal PCs.
Since it is quite easy to simulate a DTM (with finite tape) with an off-the-shelf computer and
vice versa, these are also the problems that can be solved within polynomial time by existing,
real computers. These are the problems which are said to be exactly solvable in a feasible
way [359, 1092].
N P, on the other hand, includes all problems which can be solved by a non-deterministic
Turing machine in a runtime rising polynomially with the input size. It is the set of all problems
for which solutions can be checked in polynomial time [1549].
Clearly, these include all the problems from P, since DTMs can be considered as a
subset of NTMs and if a deterministic Turing machine can solve something in polynomial
time, a non-deterministic one can as well. The other way around, i.e., the question whether
N P ⊆ P and, hence, P = N P, is much more complicated and, so far, there exists neither a
proof for nor against it [1549]. However, it is strongly assumed that problems in N P need
super-polynomial, exponential, time to be solved by deterministic Turing machines and
therefore also by computers as we know and use them.
8 http://en.wikipedia.org/wiki/Non-deterministic_Turing_machine [accessed 2010-06-24]
If problem A can be transformed in a way so that we can use an algorithm for problem
B to solve A, we say A reduces to B. This means that A is no more difficult than B. A
problem is hard for a complexity class if every other problem in this class can be reduced
to it. For classes larger than P, reductions which take no more than polynomial time in the
input size are usually used. The problems which are hard for N P are called N P-hard. A
problem which is in a complexity class C and hard for C is called complete for C, hence the
name N P-complete.

12.3 The Problem: Many Real-World Tasks are N P-hard

From a practical perspective, problems in P are usually friendly. Everything with an algorithmic complexity in O(n5) or less can usually be solved more or less efficiently if n does
not get too large. Searching and sorting algorithms, with complexities between O(log n) and
O(n log n), are good examples of such problems which, in most cases, are not bothersome.
However, many real-world problems are N P-hard or, without formal proof, can be as-
sumed to be at least as bad as that. So far, no polynomial-time algorithm for DTMs has
been discovered for this class of problems and the runtime of the algorithms we have for
them increases exponentially with the input size.
In 1962, Bremermann [407] pointed out that “No data processing system whether artificial
or living can process more than 2 · 1047 bits per second per gram of its mass”. This is an
absolute limit to computation power which we are very unlikely to ever be able to unleash.
Yet, even with this power, if the number of possible states of a system grows exponentially
with its size, any solution attempt becomes infeasible quickly. Minsky [1907], for instance,
estimates the number of all possible move sequences in chess to be around 1 · 10120 .
The Traveling Salesman Problem (TSP), which we initially introduced in Example E2.2 on
page 45, is a classical example of N P-complete combinatorial optimization problems.
Since many Vehicle Routing Problems (VRPs) [2912, 2914] are basically iterated variants
of the TSP, they fall at least into the same complexity class. This means that until now,
there exists no algorithm with a less-than exponential complexity which can be used to find
optimal transportation plans for a logistics company. In other words, as soon as a logistics
company has to satisfy transportation orders of more than a handful of customers, it will
likely not be able to serve them in an optimal way.
The Knapsack problem [463], i. e., the question of how to fit items (of different weight
and value) into a weight-limited container so that the total value of the items in the
container is maximized, is N P-complete as well. Like the TSP, it occurs in
a variety of practical applications.
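Why such problems become intractable can be seen from the obvious exact approach: enumerating all 2^n item subsets. The following sketch (with made-up instance data) is exact but exponential in the number of items:

```python
from itertools import combinations

def knapsack_exact(items, capacity):
    # items: list of (weight, value) pairs. Enumerates all 2**n subsets --
    # exact, but exponential in the number of items n.
    best_value, best_subset = 0, ()
    n = len(items)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(items[i][0] for i in subset)
            value = sum(items[i][1] for i in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Made-up instance: three items as (weight, value), capacity 6.
print(knapsack_exact([(3, 4), (4, 5), (2, 3)], 6))  # -> (8, (1, 2))
```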
The first known N P-complete problem is the satisfiability problem (SAT) [626]. Here,
the problem is to decide whether or not there is an assignment of values to a set of Boolean
variables x1 , x2 , . . . , xn which makes a given Boolean expression based on these variables,
parentheses, ∧, ∨, and ¬ evaluate to true.
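What places SAT in N P is that a given assignment can be verified in time linear in the formula length. A sketch of such a verifier for formulas in conjunctive normal form, using our own encoding (literal k stands for xk, −k for ¬xk):

```python
def check_cnf(clauses, assignment):
    # clauses: list of clauses, each a list of integer literals where k
    # stands for x_k and -k for "not x_k" (1-based). assignment maps each
    # variable index to a bool. One pass over the formula suffices, so
    # the check runs in time linear in the formula length.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3):
formula = [[1, -2], [2, 3]]
print(check_cnf(formula, {1: True, 2: False, 3: True}))   # -> True
print(check_cnf(formula, {1: False, 2: False, 3: False})) # -> False
```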
These few examples and the wide area of practical application problems which can be
reduced to them clearly show one thing: Unless a proof of P = N P is found at some point
in time, there will be no way to solve a variety of problems both exactly and efficiently.

12.4 Countermeasures

Solving a problem exactly to optimality is not always possible, as we have learned in the
previous section. If we have to deal with N P-complete problems, we will therefore have
to give up on some of the features which we expect an algorithm or a solution to have. A
randomized algorithm includes at least one instruction that acts on the basis of random
decisions, as stated in Definition D1.13 on page 33. In other words, as opposed to deterministic
algorithms9 which always produce the same results when given the same inputs, a
randomized algorithm violates the constraint of determinism. Randomized algorithms are also often
called probabilistic algorithms [1281, 1282, 1919, 1966, 1967]. There are two general classes
of randomized algorithms: Las Vegas and Monte Carlo algorithms.

Definition D12.2 (Las Vegas Algorithm). A Las Vegas algorithm10 is a randomized algorithm that never returns a wrong result [141, 1281, 1282, 1966]. It either returns the correct result, reports a failure, or does not return at all.

If a Las Vegas algorithm returns, its outcome is deterministic (but not the algorithm itself).
The termination, however, cannot be guaranteed. There usually exists an expected runtime
limit for such algorithms – their actual execution however may take arbitrarily long. In
summary, a Las Vegas algorithm terminates with a positive probability and is (partially)
correct.

Definition D12.3 (Monte Carlo Algorithm). A Monte Carlo algorithm11 always ter-
minates. Its result, however, can be either correct or incorrect [1281, 1282, 1966].

In contrast to Las Vegas algorithms, Monte Carlo algorithms always terminate but are
(partially) correct only with a positive probability. As pointed out in Section 1.3, they are
usually the way to go to tackle complex problems.
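The two classes can be contrasted on a toy search task; the random-probing scheme below is purely illustrative:

```python
import random

def las_vegas_find(seq, target, rng=random):
    # Las Vegas sketch: probe random positions until the target is found.
    # The returned answer is always correct; only the runtime is random.
    while True:
        i = rng.randrange(len(seq))
        if seq[i] == target:
            return i

def monte_carlo_find(seq, target, trials, rng=random):
    # Monte Carlo sketch: probe a fixed number of random positions, then
    # give up. Always terminates, but may wrongly report failure (-1).
    for _ in range(trials):
        i = rng.randrange(len(seq))
        if seq[i] == target:
            return i
    return -1

data = [7, 1, 7, 7, 7]
print(data[las_vegas_find(data, 1)])        # always prints 1
print(monte_carlo_find(data, 1, trials=2))  # index 1, or -1 on failure
```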

9 http://en.wikipedia.org/wiki/Deterministic_computation [accessed 2007-07-03]


10 http://en.wikipedia.org/wiki/Las_Vegas_algorithm [accessed 2007-07-03]
11 http://en.wikipedia.org/wiki/Monte_carlo_algorithm [accessed 2007-07-03]
Tasks T12

44. Find all big-O relationships between the following functions fi : N1 → R:



• f1 (x) = 9 x
• f2 (x) = log x
• f3 (x) = 7
• f4 (x) = (1.5)x
• f5 (x) = 99/x
• f6 (x) = ex
• f7 (x) = 10x2
• f8 (x) = (2 + (−1)x )x
• f9 (x) = 2 ln x
• f10 (x) = (1 + (−1)x )x2 + x
[10 points]

45. Show why the statement “The runtime of the algorithm is at least in O(n2)” makes
no sense.
[5 points]

46. What is the difference between a deterministic Turing machine (DTM) and non-
deterministic one (NTM)?
[10 points]

47. Write down the states and rules of a deterministic Turing machine which inverts a
sequence of zeros and ones on the tape while moving its head to the right and that
stops when encountering a blank symbol.
[5 points]

48. Use literature or the internet to find at least three more N P-complete problems. Define
them as optimization problems by specifying proper problem spaces X and objective
functions f which are subject to minimization.
[15 points]
Chapter 13
Unsatisfying Convergence

13.1 Introduction

Definition D13.1 (Convergence). An optimization algorithm has converged (a) if it cannot reach new candidate solutions anymore or (b) if it keeps on producing candidate solutions from a “small”1 subset of the problem space.

Optimization processes will usually converge at some point in time. In nature, a similar phe-
nomenon can be observed according to Koza [1602]: The niche preemption principle states
that a niche in a natural environment tends to become dominated by a single species [1812].
Linear convergence to the global optimum x⋆ with respect to an objective function f is
defined in Equation 13.1, where t is the iteration index of the optimization algorithm and
x(t) is the best candidate solution discovered until iteration t [1976]. This is basically the
best behavior of an optimization process we could wish for.

|f(x(t + 1)) − f(x⋆ )| ≤ c |f(x(t)) − f(x⋆ )| , c ∈ [0, 1) (13.1)
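Equation 13.1 can be checked on a logged optimization run; the sketch below estimates the contraction factors from a made-up trace of best-so-far objective values:

```python
def contraction_factors(f_best, f_star):
    # Estimates c(t) = |f(x(t+1)) - f(x*)| / |f(x(t)) - f(x*)| from
    # Equation 13.1; linear convergence means all factors stay below 1.
    # f_best: best-so-far objective values per iteration (made-up data).
    errors = [abs(f - f_star) for f in f_best]
    return [errors[t + 1] / errors[t] for t in range(len(errors) - 1)]

# Hypothetical run that halves its error in every iteration (c = 0.5):
trace = [8.0, 4.0, 2.0, 1.0, 0.5]
print(contraction_factors(trace, f_star=0.0))  # -> [0.5, 0.5, 0.5, 0.5]
```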

One of the problems in Global Optimization (and basically, also in nature) is that it is often
not possible to determine whether the best solution currently known is situated on local or
a global optimum and thus, if convergence is acceptable. In other words, it is usually not
clear whether the optimization process can be stopped, whether it should concentrate on
refining the current optimum, or whether it should examine other parts of the search space
instead. This can, of course, only become cumbersome if there are multiple (local) optima,
i. e., the problem is multi-modal [1266, 2267] as depicted in Fig. 11.1.c on page 140.

Definition D13.2 (Multi-Modality). A mathematical function is multi-modal if it has multiple (local or global) maxima or minima [711, 2474, 3090]. A set of objective functions (or a vector function) f is multi-modal if it has multiple (local or global) optima – depending on the definition of “optimum” in the context of the corresponding optimization problem.

The existence of multiple global optima itself is not problematic and the discovery of only
a subset of them can still be considered as successful in many cases. The occurrence of
numerous local optima, however, is more complicated.
1 according to a suitable metric like numbers of modifications or mutations which need to be applied to a given solution in order to leave this subset


13.2 The Problems

There are three basic problems which are related to the convergence of an optimization algo-
rithm: premature, non-uniform, and domino convergence. The first one is most important
in optimization, but the latter ones may also cause large inconvenience.

13.2.1 Premature Convergence

Definition D13.3 (Premature Convergence). An optimization process has prematurely converged to a local optimum if it is no longer able to explore other parts of the search space than the area currently being examined and there exists another region that contains a superior solution [2417, 2760].

The goal of optimization is, of course, to find the global optima. Premature convergence,
basically, is the convergence of an optimization process to a local optimum. A prematurely
converged optimization process hence provides solutions which are below the possible or
anticipated quality.

Example E13.1 (Premature Convergence).


Figure 13.1 illustrates two examples for premature convergence. Although a higher peak
exists in the maximization problem pictured in Fig. 13.1.a, the optimization algorithm solely
focuses on the local optimum to the right. The same situation can be observed in the
minimization task shown in Fig. 13.1.b where the optimization process gets trapped in the
local minimum on the left hand side.
global optimum
local optimum
objective values f(x)

x
Fig. 13.1.a: Example 1: Maximization Fig. 13.1.b: Example 2: Minimization

Figure 13.1: Premature convergence in the objective space.

13.2.2 Non-Uniform Convergence


Definition D13.4 (Non-Uniform Convergence). In optimization problems with in-
finitely many possible (globally) optimal solutions, an optimization process has non-
uniformly converged if the solutions that it provides only represent a subset of the optimal
characteristics.

This definition is rather fuzzy, but it becomes much clearer when taking a look at
Example E13.2. The goal of optimization in problems with infinitely many optima is often
not to discover only one or two optima. Instead, usually we want to find a set X of solutions
which has a good spread over all possible optimal features.

Example E13.2 (Convergence in a Bi-Objective Problem).


Assume that we wish to solve a bi-objective problem with f = (f1 , f2 ). Then, the goal is to
find a set X of candidate solutions which approximates the set X ⋆ of globally (Pareto) optimal
solutions as well as possible. Two issues arise: First, the optimization process should
converge to the true Pareto front and return solutions as close to it as possible. Second,
these solutions should be uniformly spread along this front.

[Figure 13.2 shows three fronts in the f1 –f2 objective space, each comparing the front returned by the optimizer with the true Pareto front.]

Fig. 13.2.a: Bad Convergence and Good Spread. Fig. 13.2.b: Good Convergence and Bad Spread. Fig. 13.2.c: Good Convergence and Spread.

Figure 13.2: Pareto front approximation sets [2915].

Let us examine the three fronts included in Figure 13.2 [2915]. The first picture
(Fig. 13.2.a) shows a result X having a very good spread (or diversity) of solutions, but
the points are far away from the true Pareto front. Such results are not attractive because
they do not provide Pareto-optimal solutions and we would consider the convergence to be
premature in this case. The second example (Fig. 13.2.b) contains a set of solutions which
are very close to the true Pareto front but cover it only partially, so the decision maker
could lose important trade-off options. Finally, the front depicted in Fig. 13.2.c has the two
desirable properties of good convergence and spread.
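The dominance relation underlying this example can be sketched in a few lines (minimization is assumed; the sample objective vectors are made up):

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective and
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # Reduce a set of objective vectors to its non-dominated subset,
    # i.e., the front the optimizer would return.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical bi-objective values (f1, f2), both to be minimized:
front = non_dominated([(1, 9), (3, 3), (9, 1), (5, 5), (6, 6)])
print(sorted(front))  # -> [(1, 9), (3, 3), (9, 1)]
```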

13.2.3 Domino Convergence

The phenomenon of domino convergence has been brought to attention by Rudnick [2347]
who studied it in the context of his BinInt problem [2347, 2698] which is discussed in
Section 50.2.5.2. In principle, domino convergence occurs when the candidate solutions
have features which contribute to significantly different degrees to the total fitness. If these
features are encoded in separate genes (or building blocks) in the genotypes, they are likely to
be treated with different priorities, at least in randomized or heuristic optimization methods.
Building blocks with a very strong positive influence on the objective values, for instance,
will quickly be adopted by the optimization process (i. e., “converge”). During this time, the
alleles of genes with a smaller contribution are ignored. They do not come into play until
the optimal alleles of the more “important” blocks have been accumulated. Rudnick [2347]
called this sequential convergence phenomenon domino convergence due to its resemblance
to a row of falling domino stones [2698].
Let us consider the application of a Genetic Algorithm, an optimization method working
on bit strings, in such a scenario. Mutation operators from time to time destroy building
blocks with strong positive influence which are then reconstructed by the search. If this
happens with a high enough frequency, the optimization process will never get to optimize
the less salient blocks because repairing and rediscovering those with higher importance
takes precedence. Thus, the mutation rate of the EA limits the probability of finding the
global optima in such a situation.
In the worst case, the contributions of the less salient genes may almost look like noise
and they are not optimized at all. Such a situation is also an instance of premature conver-
gence, since the global optimum which would involve optimal configurations of all building
blocks will not be discovered. In this situation, restarting the optimization process will not
help because it will turn out the same way with very high probability. Example problems
which are often likely to exhibit domino convergence are the Royal Road and the aforemen-
tioned BinInt problem, which you can find discussed in Section 50.2.4 and Section 50.2.5.2,
respectively.
There are some relations between domino convergence and the Siren Pitfall in feature
selection for classification in data mining [961]. Here, if one feature is highly useful in a hard
sub-problem of the classification task, its utility may still be overshadowed by mediocre
features contributing to easy sub-tasks.

13.3 One Cause: Loss of Diversity

In biology, diversity is the variety and abundance of organisms at a given place and
time [1813, 2112]. Much of the beauty and efficiency of natural ecosystems is based on
a dazzling array of species interacting in manifold ways. Diversification is also a good strat-
egy utilized by investors in the economy in order to increase their wealth.
In population-based Global Optimization algorithms, maintaining a set of diverse candi-
date solutions is very important as well. Losing diversity means approaching a state where all
the candidate solutions under investigation are similar to each other. Another term for this
state is convergence. Discussions about how diversity can be measured have been provided
by Cousins [648], Magurran [1813], Morrison [1956], Morrison and De Jong [1958], Paenke
et al. [2112], Routledge [2344], and Burke et al. [454, 458].

Example E13.3 (Loss of Diversity → Premature Convergence).


Figure 13.3 illustrates two possible ways in which a population-based optimization method
may progress. Both processes start with a set of random candidate solutions denoted as
bubbles in the fitness landscape already used in Fig. 13.1.a in their search for the global
maximum. The upper algorithm preserves a reasonable diversity in the set of elements
under investigation although, initially, the best candidate solutions are located in the
proximity of the local optimum. By doing so, it also manages to explore the area near the global
optimum and eventually, discovers both peaks. The process illustrated in the lower part of
the picture instead solely concentrates on the area where the best candidate solutions were
sampled in the beginning and quickly climbs up the local optimum. Reaching a state of
premature convergence, it cannot escape from there anymore.

[Figure 13.3 sketches two optimization runs starting from the same initial situation: preservation of diversity → convergence to the global optimum (top); loss of diversity → premature convergence (bottom).]

Figure 13.3: Loss of Diversity leads to Premature Convergence.

Preserving diversity is directly linked with maintaining a good balance between exploitation
and exploration [2112] and has been studied by researchers from many domains, such as

1. Genetic Algorithms [238, 2063, 2321, 2322],

2. Evolutionary Algorithms [365, 366, 1692, 1964, 2503, 2573],

3. Genetic Programming [392, 456, 458, 709, 1156, 1157],

4. Tabu Search [1067, 1069], and

5. Particle Swarm Optimization [2943].

13.3.1 Exploration vs. Exploitation

The operations which create new solutions from existing ones have a very large impact on
the speed of convergence and the diversity of the populations [884, 2535]. The step size in
Evolution Strategy is a good example of this issue: setting it properly is very important and
leads over to the “exploration versus exploitation” problem [1250]. The dilemma of deciding
which of the two principles to apply to which degree at a certain stage of optimization is
sketched in Figure 13.4 and can be observed in many areas of Global Optimization.2

Definition D13.5 (Exploration). In the context of optimization, exploration means finding new points in areas of the search space which have not been investigated before.
2 More or less synonymously to exploitation and exploration, the terms intensification and diversification have been introduced by Glover [1067, 1069] in the context of Tabu Search.
[Figure 13.4 sketches the initial situation – a good candidate solution has been found – and the two possible reactions: exploitation searches the close proximity of the good candidate, while exploration also searches the distant areas.]
Figure 13.4: Exploration versus Exploitation

Since computers have only limited memory, already evaluated candidate solutions usually
have to be discarded in order to accommodate the new ones. Exploration is a metaphor
for the procedure which allows search operations to find novel and maybe better solution
structures. Such operators (like mutation in Evolutionary Algorithms) have a high chance
of creating inferior solutions by destroying good building blocks but also a small chance of
finding totally new, superior traits (which, however, is not guaranteed at all).

Definition D13.6 (Exploitation). Exploitation is the process of improving and combining the traits of the currently known solutions.

The crossover operator in Evolutionary Algorithms, for instance, is considered to be an
operator for exploitation. Exploitation operations often incorporate small changes into
already tested individuals leading to new, very similar candidate solutions or try to merge
building blocks of different, promising individuals. They usually have the disadvantage that
other, possibly better, solutions located in distant areas of the problem space will not be
discovered.
Almost all components of optimization strategies can either be used for increasing ex-
ploitation or in favor of exploration. Unary search operations that improve an existing
solution in small steps can often be built, hence being exploitation operators. They can also
be implemented in a way that introduces much randomness into the individuals, effectively
making them exploration operators. Selection operations3 in Evolutionary Computation
choose a set of the most promising candidate solutions which will be investigated in the
next iteration of the optimizers. They can either return a small group of best individuals
(exploitation) or a wide range of existing candidate solutions (exploration).
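For a unary operator, the shift between the two roles is often just a matter of its step-size parameter. The following sketch shows a hypothetical Gaussian mutation on real vectors – small steps exploit the neighborhood of the parent, large steps explore distant regions:

```python
import random

def mutate(x, step, rng=random):
    # Unary search operator: perturb each component of the real vector x
    # with Gaussian noise of standard deviation `step`. A small step size
    # exploits the neighborhood of x; a large one explores distant regions.
    return [xi + rng.gauss(0.0, step) for xi in x]

rng = random.Random(42)
parent = [0.0, 0.0]
exploit = mutate(parent, step=0.01, rng=rng)  # offspring close to parent
explore = mutate(parent, step=10.0, rng=rng)  # offspring possibly far away
print(exploit, explore)
```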
Optimization algorithms that favor exploitation over exploration have higher conver-
gence speed but run the risk of not finding the optimal solution and may get stuck at a local
optimum. Then again, algorithms which perform excessive exploration may never improve
their candidate solutions well enough to find the global optimum or it may take them very
long to discover it “by accident”. A good example for this dilemma is the Simulated An-
nealing algorithm discussed in Chapter 27 on page 243. It is often modified to a form called
simulated quenching which focuses on exploitation but loses the guaranteed convergence to
the optimum. Generally, optimization algorithms should employ at least one search opera-
tion of explorative character and at least one which is able to exploit good solutions further.
There exists a vast body of research on the trade-off between exploration and exploitation
that optimization algorithms have to face [87, 741, 864, 885, 1253, 1986].

3 Selection will be discussed in Section 28.4 on page 285.


13.4 Countermeasures
There is no general approach which can prevent premature convergence. The probability
that an optimization process gets caught in a local optimum depends on the characteristics
of the problem to be solved and the parameter settings and features of the optimization
algorithms applied [2350, 2722].

13.4.1 Search Operator Design

The most basic measure to decrease the probability of premature convergence is to make
sure that the search operations are complete (see Definition D4.8 on page 85), i. e., to make
sure that they can at least theoretically reach every point in the search space from every
other point. Then, it is (at least theoretically) possible to escape arbitrary local optima.
A prime example for bad operator design is Algorithm 1.1 on page 40 which cannot escape
any local optima. The two different operators given in Equation 5.4 and Equation 5.5 on 96
are another example for the influence of the search operation design on the vulnerability of
the optimization process to local convergence.

13.4.2 Restarting

A very crude and yet, sometimes effective measure is restarting the optimization process at
randomly chosen points in time. One example of this method is given by GRASPs, Greedy
Randomized Adaptive Search Procedures [902, 906] (see Chapter 42 on page 499), which continuously
restart the process of creating an initial solution and refining it with local search. Still, such
approaches are likely to fail in domino convergence situations. Increasing the proportion of
exploration operations may also reduce the chance of premature convergence.

13.4.3 Low Selection Pressure and Population Size

Generally, the higher the chance that candidate solutions with bad fitness are investigated
instead of being discarded in favor of the seemingly better ones, the lower the chance of
getting stuck at a local optimum. This is the exact idea which distinguishes Simulated
Annealing from Hill Climbing (see Chapter 27). It is known that Simulated Annealing can
find the global optimum, whereas simple Hill Climbers are likely to prematurely converge.
In an EA, too, using low selection pressure decreases the chance of premature convergence.
However, such an approach also decreases the speed with which good solutions are exploited.
Simulated Annealing, for instance, takes extremely long if a guaranteed convergence to the
global optimum is required.
Increasing the population size of a population-based optimizer may be useful as well,
since larger populations can maintain more individuals (and hence, also be more diverse).
However, the idea that larger populations will always lead to better optimization results
does not always hold, as shown by van Nimwegen and Crutchfield [2778].

13.4.4 Sharing, Niching, and Clearing

Instead of increasing the population size, it thus makes sense to “gain more” from the smaller
populations. In order to extend the duration of the evolution in Evolutionary Algorithms,
many methods have been devised for steering the search away from areas which have already
been frequently sampled. In steady-state EAs (see Paragraph 28.1.4.2.1), it is common to
remove duplicate genotypes from the population [2324].
More generally, the exploration capabilities of an optimizer can be improved by integrating
density metrics into the fitness assignment process. The most popular of such approaches
are sharing and niching (see Section 28.3.4, [735, 742, 1080, 1085, 1250, 1899, 2391]). The
Strength Pareto Algorithms, which are widely accepted to be highly efficient, use another
idea: they adapt the number of individuals that one candidate solution dominates as density
measure [3097, 3100].
Pétrowski [2167, 2168] introduced the related clearing approach which you can find
discussed in Section 28.4.7, where individuals are grouped according to their distance in the
phenotypic or genotypic space and all but a certain number of individuals from each group
just receive the worst possible fitness. A similar method aiming for convergence prevention
is introduced in Section 28.4.7, where individuals close to each other in objective space are
deleted.

13.4.5 Self-Adaptation

Another approach against premature convergence is to introduce the capability of self-adaptation, allowing the optimization algorithm to change its strategies or to modify its
parameters depending on its current state. Such behaviors, however, are often implemented
not in order to prevent premature convergence but to speed up the optimization process
(which may lead to premature convergence to local optima) [2351–2353].

13.4.6 Multi-Objectivization

Recently, the idea of using helper objectives has emerged. Here, usually a single-objective
problem is transformed into a multi-objective one by adding new objective functions [1429,
1440, 1442, 1553, 1757, 2025]. Brockhoff et al. [425] proved that in some cases, such changes
can speed up the optimization process. The new objectives are often derived from the main
objective by decomposition [1178] or from certain characteristics of the problem [1429]. They
are then optimized together with the original objective function with some multi-objective
technique, say an MOEA (see Definition D28.2). Problems where the main objective is
some sort of sum of different components contributing to the fitness are especially suitable
for that purpose, since such a component naturally lends itself as helper objective. It has
been shown that the original objective function has at least as many local optima as the
total number of all local optima present in the decomposed objectives for sum-of-parts
multi-objectivizations [1178].

Example E13.4 (Sum-of-Parts Multi-Objectivization).


In Figure 13.5, we provide an example of multi-objectivization of a sum-of-parts objective
function f defined on the real interval X = G = [0, 1]. The original (single) objective f
(Fig. 13.5.a) can be divided into the two objective functions f1 and f2 with f(x) = f1 (x)+f2 (x)
illustrated in Fig. 13.5.b.
If we assume minimization, f has five locally optimal sets according to Definition D4.14
on page 89: The sets X1⋆ and X5⋆ are globally optimal since they contain the elements with
lowest function value. x⋆l,2 , x⋆l,3 , and x⋆l,4 are three sets each containing exactly one locally
minimal point.
We now split up f to f1 and f2 and optimize either f ≡ (f1 , f2 ) or f ≡ (f, f1 , f2 ) together.
If we apply Pareto-based optimization (see Section 3.3.5 on page 65) and use the Defini-
tion D4.17 on page 89 of local optima (which is also used in [1178]), we find that there are
only three locally optimal regions remaining: A⋆1 , A⋆2 , and A⋆3 .
A⋆1 , for example, is a continuous region which borders to elements it dominates. If we
go a small step to the left from the left border of A⋆1 , we will find an element with the same
value of f1 but with worse values in both f2 and f. The same is true for its right border. A⋆3
shows exactly the same behavior.
Any step to the left or right within A⋆2 leads to an improvement of either f1 or f2 and a
degeneration of the other function. Therefore, no element within A⋆2 dominates any other
element in A⋆2 . However, at the borders of A⋆2 , f1 becomes constant while f2 keeps growing,

[Figure 13.5: two plots over x ∈ X = [0, 1]]
Fig. 13.5.a: The original objective f(x) = f1 (x) + f2 (x) and its local optima. Fig. 13.5.b: The
helper objectives f1 and f2 and the local optima in the objective space.

Figure 13.5: Multi-Objectivization à la Sum-of-Parts.

i. e., getting worse. The neighboring elements outside of A⋆2 are thus dominated by the
elements on its borders.
Multi-objectivization may thus help here to overcome the local optima, since there are
fewer locally optimal regions. However, a look at A⋆2 shows us that it contains x⋆l,2 , x⋆l,3 , and
x⋆l,4 and – since it is continuous – also uncountably infinitely many other points. In this
case, it would thus not be clear a priori whether multi-objectivization is beneficial or not.
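The decomposition and dominance comparison used in this example can be sketched in a few lines. The component functions f1 and f2 below are hypothetical stand-ins, not the functions plotted in Figure 13.5; only the structure f(x) = f1(x) + f2(x) and the Pareto comparison matter here.

```python
# Sketch of sum-of-parts multi-objectivization (minimization assumed).
# f1 and f2 are illustrative component functions, not those of Figure 13.5.
import math

def f1(x):
    return (1.0 - x) ** 2                   # smooth component, falls towards x = 1

def f2(x):
    return 0.5 * (1.0 + math.sin(8.0 * x))  # oscillating component

def f(x):
    return f1(x) + f2(x)                    # the original single objective

def dominates(a, b):
    """True iff a Pareto-dominates b under minimization of (f1, f2)."""
    fa, fb = (f1(a), f2(a)), (f1(b), f2(b))
    return all(u <= v for u, v in zip(fa, fb)) and any(u < v for u, v in zip(fa, fb))

# Where f1 falls and f2 rises, neighboring points trade one component against
# the other and thus do not dominate each other -- a candidate for a region
# like A*2 in the example:
print(dominates(0.05, 0.06), dominates(0.06, 0.05))  # False False
```

A whole interval of such mutually non-dominated points forms a locally optimal region in the Pareto sense.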
Tasks T13

49. Sketch one possible objective function for a single-objective optimization problem
which has infinitely many solutions. Copy this graph three times and paint six points
into each copy which represent:

(a) the possible results of a possible prematurely converged optimization process,


(b) the possible results of a possible correctly converged optimization process with
non-uniform spread (convergence),
(c) the possible results of a possible optimization process which has converged nicely.
[10 points]

50. Discuss the dilemma of exploration versus exploitation. What are the advantages and
disadvantages of these two general search principles?
[5 points]

51. Discuss why the Hill Climbing optimization algorithm (Chapter 26) is particularly
prone to premature convergence and how Simulated Annealing (Chapter 27) mitigates
the cause of this vulnerability.
[5 points]
Chapter 14
Ruggedness and Weak Causality

[Figure 14.1: four example landscapes, left to right (landscape difficulty increases): unimodal,
multimodal, somewhat rugged, very rugged]

Figure 14.1: The landscape difficulty increases with increasing ruggedness.

14.1 The Problem: Ruggedness

Optimization algorithms generally depend on some form of gradient in the objective or fitness
space. The objective functions should be continuous and exhibit low total variation1 , so the
optimizer can descend the gradient easily. If the objective functions become more unsteady
or fluctuating, i. e., go up and down often, it becomes more complicated for the optimization
process to find the right directions in which to proceed, as sketched in Figure 14.1. The more
rugged a function gets, the harder it becomes to optimize. In short, one could say ruggedness
is multi-modality plus steep ascents and descents in the fitness landscape. Examples of
rugged landscapes are Kauffman’s NK fitness landscape (see Section 50.2.1), the p-Spin
model discussed in Section 50.2.2, Bergman and Feldman’s jagged fitness landscape [276],
and the sketch in Fig. 11.1.d on page 140.

14.2 One Cause: Weak Causality

During an optimization process, new points in the search space are created by the search
operations. Generally we can assume that the genotypes which are the input of the search
operations correspond to phenotypes which have previously been selected. Usually, the
better or the more promising an individual is, the higher are its chances of being selected for
further investigation. Reversing this statement suggests that individuals which are passed
to the search operations are likely to have a good fitness. Since the fitness of a candidate
solution depends on its properties, it can be assumed that the features of these individuals
1 http://en.wikipedia.org/wiki/Total_variation [accessed 2008-04-23]
are not so bad either. It should thus be possible for the optimizer to introduce slight changes
to their properties in order to find out whether they can be improved any further2 . Normally,
such exploitive modifications should also lead to small changes in the objective values and
hence, in the fitness of the candidate solution.

Definition D14.1 (Strong Causality). Strong causality (locality) means that small
changes in the properties of an object also lead to small changes in its behavior [2278,
2279, 2329].

This principle (proposed by Rechenberg [2278, 2279]) should not only hold for the search
spaces and operations designed for optimization (see Equation 24.5 on page 218 in Sec-
tion 24.9), but applies to natural genomes as well. The offspring resulting from sexual
reproduction of two fish, for instance, has a different genotype than its parents. Yet, it is
far more probable that these variations manifest in a unique color pattern of the scales, for
example, instead of leading to a totally different creature.
Apart from this straightforward, informal explanation here, causality has been investi-
gated thoroughly in different fields of optimization, such as Evolution Strategy [835, 2278],
structure evolution [1760, 1761], Genetic Programming [835, 1396, 2327, 2329], genotype-
phenotype mappings [2451], search operators [835], and Evolutionary Algorithms in gen-
eral [835, 2338, 2599].
Sendhoff et al. [2451, 2452] introduce a very nice definition for causality which combines
the search space G, the (unary) search operations searchOp1 ∈ Op, the genotype-phenotype
mapping “gpm” and the problem space X with the objective functions f . First, a distance
measure “dist caus” based on the stochastic nature of the unary search operation searchOp1
is defined.3
 
dist caus(g1 , g2 ) = − log [(1/Pid ) · P (g2 = searchOp1 (g1 ))] (14.1)
Pid = P (g = searchOp1 (g)) (14.2)
In Equation 14.1, g1 and g2 are two elements of the search space G, P (g2 = searchOp1 (g1 ))
is the probability that g2 is the result of one application of the search operation searchOp1 to
g1 , and Pid is the probability that searchOp1 maps an element to itself. dist caus(g1 , g2 )
falls with rising probability that g1 is mapped to g2 . Sendhoff et al. [2451, 2452] then define
strong causality as the state where Equation 14.3 holds:
∀g1 , g2 , g3 ∈ G : ||f (gpm(g1 )) − f (gpm(g2 ))||2 < ||f (gpm(g1 )) − f (gpm(g3 ))||2
⇒ dist caus(g1 , g2 ) < dist caus(g1 , g3 )
⇒ P (g2 = searchOp1 (g1 )) > P (g3 = searchOp1 (g1 )) (14.3)
In other words, strong causality means that the search operations have a higher chance to
produce points g2 which (map to candidate solutions which) are closer in objective space
to their inputs g1 than to create elements g3 which (map to candidate solutions which) are
farther away.
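For a small discrete search space, the distance measure of Equation 14.1 can be computed directly from the operator's transition probabilities. The sketch below assumes a bit-string space whose unary operator flips each bit independently; the length n and flip rate p are illustrative choices, not values from the text.

```python
# Sketch of the causality distance of Eq. 14.1 for bit strings under an
# independent per-bit flip operator; n and p are hypothetical parameters.
import math
from itertools import product

n, p = 3, 0.1                       # genotype length and per-bit flip probability
G = list(product((0, 1), repeat=n))

def trans_prob(g1, g2):
    """P(g2 = searchOp1(g1)): each bit flips independently with probability p."""
    d = sum(a != b for a, b in zip(g1, g2))  # Hamming distance
    return (p ** d) * ((1.0 - p) ** (n - d))

P_id = trans_prob(G[0], G[0])       # probability of a neutral self-mapping

def dist_caus(g1, g2):
    return -math.log(trans_prob(g1, g2) / P_id)

# Less probable transitions are "farther away": flipping two bits costs more
# than flipping one (here each flipped bit adds log((1-p)/p) = log 9).
print(dist_caus((0, 0, 0), (0, 0, 1)) < dist_caus((0, 0, 0), (0, 1, 1)))  # True
```

For this operator the measure reduces to the Hamming distance scaled by log((1−p)/p), which matches the intuition that rarely reached genotypes lie far away.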
In fitness landscapes with weak (low) causality, small changes in the candidate solutions
often lead to large changes in the objective values, i. e., ruggedness. It then becomes harder
to decide which region of the problem space to explore and the optimizer cannot find reliable
gradient information to follow. A small modification of a very bad candidate solution may
then lead to a new local optimum and the best candidate solution currently known may be
surrounded by points that are inferior to all other tested individuals.
The lower the causality of an optimization problem, the more rugged its fitness landscape
is, which leads to a degeneration of the performance of the optimizer [1563]. This does not
necessarily mean that it is impossible to find good solutions, but it may take very long to
do so.
2 We have already mentioned this under the subject of exploitation.
3 This measure has the slight drawback that it is undefined for P (g2 = searchOp1 (g1 )) = 0 or Pid = 0.
14.3 Fitness Landscape Measures
As measures for the ruggedness of a fitness landscape (or their general difficulty), many
different metrics have been proposed. Wedge and Kell [2874] and Altenberg [80] provide
nice lists of them in their work4 , which we summarize here:
1. Weinberger [2881] introduced the autocorrelation function and the correlation length
of random walks.
2. The correlation of the search operators was used by Manderick et al. [1821] in con-
junction with the autocorrelation [2267].
3. Jones and Forrest [1467, 1468] proposed the fitness distance correlation (FDC), the
correlation of the fitness of an individual and its distance to the global optimum. This
measure has been extended by researchers such as Clergue et al. [590, 2784].
4. The probability that search operations create offspring fitter than their parents, as
defined by Rechenberg [2278] and Beyer [295] (and called evolvability by Altenberg
[77]), will be discussed in Section 16.2 on page 172 in depth.
5. Simulation dynamics have been researched by Altenberg [77] and Grefenstette [1130].
6. Another interesting metric is the fitness variance of formae (Radcliffe and Surry [2246])
and schemas (Reeves and Wright [2284]).
7. The error threshold method from theoretical biology [867, 2057] has been adopted by
Ochoa et al. [2062] for Evolutionary Algorithms. It is the “critical mutation rate beyond
which structures obtained by the evolutionary process are destroyed by mutation
more frequently than selection can reproduce them” [2062].
8. The negative slope coefficient (NSC) by Vanneschi et al. [2785, 2786] may be considered
as an extension of Altenberg’s evolvability measure.
9. Davidor [691] uses the epistatic variance as a measure of utility of a certain represen-
tation in Genetic Algorithms. We discuss the issue of epistasis in Chapter 17.
10. The genotype-fitness correlation (GFC) of Wedge and Kell [2874] is a new measure for
ruggedness in fitness landscape and has been shown to be a good guide for determining
optimal population sizes in Genetic Programming.
11. Rana [2267] defined the Static-φ and the Basin-δ metrics based on the notion of
schemas in binary-coded search spaces.

14.3.1 Autocorrelation and Correlation Length

As an example, let us take a look at the autocorrelation function as well as the correlation
length of random walks [2881]. Here we borrow its definition from Vérel et al. [2802]:

Definition D14.2 (Autocorrelation Function). Given a random walk (xi , xi+1 , . . . ),


the autocorrelation function ρ of an objective function f is the autocorrelation function
of the time series (f(xi ) , f(xi+1 ) , . . . ).

ρ(k, f) = (E [f(xi ) f(xi+k )] − E [f(xi )] E [f(xi+k )]) / D2 [f(xi )] (14.4)

where E [f(xi )] and D2 [f(xi )] are the expected value and the variance of f(xi ).
4 Especially the one of Wedge and Kell [2874] is beautiful and far more detailed than this summary here.
The correlation length τ = −1/log ρ(1, f) measures how quickly the autocorrelation function
decreases and summarizes the ruggedness of the fitness landscape: the larger the correlation
length, the lower the total variation of the landscape. From the works of Kinnear, Jr [1540]
and Lipsitch [1743], however, we also know that correlation measures do not always fully
represent the hardness of a problem landscape.
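The autocorrelation ρ(k, f) and the correlation length τ can be estimated empirically by sampling a random walk, along the lines of Definition D14.2. The objective function, step size, and walk length below are arbitrary illustrative choices, not examples from the text.

```python
# Sketch: estimating rho(k, f) (Eq. 14.4) and the correlation length
# tau = -1 / log rho(1, f) from a random walk; f and the walk parameters
# are illustrative assumptions.
import math
import random

def f(x):
    return x * x + 0.1 * math.sin(40.0 * x)  # a mildly rugged test objective

def random_walk(steps, step_size=0.01):
    x, xs = 0.5, []
    for _ in range(steps):
        x = min(1.0, max(0.0, x + random.uniform(-step_size, step_size)))
        xs.append(x)
    return xs

def autocorrelation(series, k):
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    cov = sum((series[i] - mean) * (series[i + k] - mean) for i in range(n - k)) / n
    return cov / var

random.seed(1)
values = [f(x) for x in random_walk(10000)]
rho1 = autocorrelation(values, 1)
tau = -1.0 / math.log(rho1)  # correlation length
print(rho1, tau)  # rho1 close to 1, so tau is large: a smooth-looking landscape
```

Making the objective more rugged (e.g., increasing the amplitude or frequency of the sine term) lowers ρ(1, f) and shrinks τ.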

14.4 Countermeasures

To the knowledge of the author, no viable method which can directly mitigate the effects
of rugged fitness landscapes exists. In population-based approaches, using large population
sizes and applying methods to increase the diversity can reduce the influence of ruggedness,
but only up to a certain degree. Utilizing Lamarckian evolution [722, 2933] or the Baldwin
effect [191, 1235, 1236, 2933], i. e., incorporating a local search into the optimization pro-
cess, may further help to smoothen out the fitness landscape [1144] (see Section 36.2 and
Section 36.3, respectively).
Weak causality is often a home-made problem because it results to some extent from
the choice of the solution representation and search operations. We pointed out that exploration
operations are important for lowering the risk of premature convergence. Exploitation
operators are just as important for refining solutions to a certain degree. In order to
apply optimization algorithms in an efficient manner, it is necessary to find representations
which allow for iterative modifications with bounded influence on the objective values, i. e.,
exploitation. In Chapter 24, we present some further rules-of-thumb for search space and
operation design.
Tasks T14

52. Investigate the Weierstraß Function5 . Is it unimodal, smooth, multimodal, or even
rugged?
[5 points]

53. Why is causality important in metaheuristic optimization? Why are problems with
low causality normally harder than problems with strong causality?
[5 points]

54. Define three objective functions by yourself: a uni-modal one (f1 : R 7→ R+ ), a simple
multi-modal one (f2 : R 7→ R+ ), and a rugged one (f3 : R 7→ R+ ). All three functions
should have the same global minimum (optimum) 0 with objective value 0. Try to solve
all three function with a trivial implementation of Hill Climbing (see Algorithm 26.1 on
page 230) by applying it 30 times to each function. Note your observations regarding
the problem hardness.
[15 points]

5 http://en.wikipedia.org/wiki/Weierstrass_function [accessed 2010-09-14]


Chapter 15
Deceptiveness

15.1 Introduction
Especially annoying fitness landscapes show deceptiveness (or deceptivity). The gradient
of deceptive objective functions leads the optimizer away from the optima, as illustrated in
Figure 15.1 and Fig. 11.1.e on page 140.

[Figure 15.1: four example landscapes, left to right (landscape difficulty increases): unimodal,
unimodal with neutrality, multimodal (center and limits deceptive), very deceptive]

Figure 15.1: Increasing landscape difficulty caused by deceptivity.

The term deceptiveness [288] is mainly used in the Genetic Algorithm1 community in the
context of the Schema Theorem [1076, 1077]. Schemas describe certain areas (hyperplanes)
in the search space. If an optimization algorithm has discovered an area with a better
average fitness compared to other regions, it will focus on exploring this region based on
the assumption that highly fit areas are likely to contain the true optimum. Objective
functions where this is not the case are called deceptive [288, 1075, 1739]. Examples for
deceptiveness are the ND fitness landscapes outlined in Section 50.2.3, trap functions (see
Section 50.2.3.1), and the fully deceptive problems given by Goldberg et al. [743, 1076, 1083]
and [288].

15.2 The Problem

Definition D15.1 (Deceptiveness). An objective function f : X 7→ R is deceptive if a Hill


Climbing algorithm (see Chapter 26 on page 229) would be steered in a direction leading
away from all global optima in large parts of the search space.
1 We are going to discuss Genetic Algorithms in Chapter 29 on page 325 and the Schema Theorem

in Section 29.5 on page 341.


For Definition D15.1, it is usually also assumed that (a) the search space is also the problem
space (G = X), (b) the genotype-phenotype mapping is the identity mapping, and (c) the
search operations do not violate the causality (see Definition D14.1 on page 162). Still,
Definition D15.1 remains a bit blurry since the term “large parts of the search space” is
intuitive but not precise. However, it reveals the basic problem caused by deceptiveness.
If the information accumulated by an optimizer actually guides it away from the optimum,
search algorithms will perform worse than a random walk or an exhaustive enumeration
method. This issue has been known for a long time [1914, 1915, 2696, 2868] and has been
subsumed under the No Free Lunch Theorem which we will discuss in Chapter 23.
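A classic illustration of Definition D15.1 is a trap function over bit strings (cf. the trap functions of Section 50.2.3.1): everywhere except at the all-ones optimum, the objective value rewards removing ones, so a hill climber is steered towards the all-zeros string. The function and parameters below are an illustrative sketch, not an example taken from the text.

```python
# Sketch of a deceptive trap function (minimization): the gradient leads a
# hill climber towards all zeros, while the global optimum is all ones.
import random

N = 10

def trap(bits):
    u = sum(bits)                  # number of ones
    return 0 if u == N else u + 1  # f = 0 only at all-ones; otherwise rises with u

def hill_climb(steps=1000, seed=0):
    rnd = random.Random(seed)
    x = [rnd.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        y = list(x)
        y[rnd.randrange(N)] ^= 1   # flip one random bit
        if trap(y) <= trap(x):     # accept non-worsening moves
            x = y
    return x

result = hill_climb()
print(sum(result))  # almost always 0: the climber is deceived away from the optimum
```

Since every bit flip changes the number of ones by one and the objective strictly increases with u below N, the climber can only descend towards all zeros unless it happens to start next to the optimum.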

15.3 Countermeasures

Solving deceptive optimization tasks perfectly involves sampling many individuals with very
bad features and low fitness. This contradicts the basic ideas of metaheuristics and thus,
there are no efficient countermeasures against deceptivity. Using large population sizes,
maintaining a very high diversity, and utilizing linkage learning (see Section 17.3.3) are,
maybe, the only approaches which can provide at least a small chance of finding good
solutions.
A completely deceptive fitness landscape would be even worse than the needle-in-a-
haystack problems outlined in Section 16.7 [? ]. Exhaustive enumeration of the search
space or searching by using random walks may be the only possible way to go in extremely
deceptive cases. However, the question what kind of practical problem would have such a
feature arises. The solution of a completely deceptive problem would be the one that, when
looking at all possible solutions, seems to be the most improbable one. However, when
actually testing it, this worst-looking point in the problem space turns out to be the best-
possible solution. Such kinds of problems are, as far as the author of this book is concerned,
rather unlikely and rare.
Tasks T15

55. Why is deceptiveness fatal for metaheuristic optimization? Why are non-deceptive
problems usually easier than deceptive ones?
[5 points]

56. Construct two different deceptive objective functions f1 , f2 : R 7→ R+ by yourself. Like


the three functions from Task 54 on page 165, they should have the global minimum
(optimum) 0 with objective value 0. Try to solve these functions with a trivial imple-
mentation of Hill Climbing (see Algorithm 26.1 on page 230) by applying it 30 times
to both of them. Note your observations regarding the problem hardness and compare
them to your results from Task 54.
[7 points]
Chapter 16
Neutrality and Redundancy

16.1 Introduction

[Figure 16.1: four example landscapes, left to right (landscape difficulty increases): unimodal,
not much gradient distant from the optimum, with some neutrality, very neutral]

Figure 16.1: Landscape difficulty caused by neutrality.

Definition D16.1 (Neutrality). We consider the outcome of the application of a search


operation to an element of the search space as neutral if it yields no change in the objective
values [219, 2287].

It is challenging for optimization algorithms if the best candidate solution currently known
is situated on a plane of the fitness landscape, i. e., all adjacent candidate solutions have
the same objective values. As illustrated in Figure 16.1 and Fig. 11.1.f on page 140, an
optimizer then cannot find any gradient information and thus, no direction in which to
proceed in a systematic manner. From its point of view, each search operation will yield
identical individuals. Furthermore, optimization algorithms usually maintain a list of the
best individuals found, which will then overflow eventually or require pruning.
The degree of neutrality ν is defined as the fraction of neutral results among all possible
products of the search operations applied to a specific genotype [219]. We can generalize
this measure to areas G in the search space G by averaging over all their elements. Regions
where ν is close to one are considered as neutral.
∀g1 ∈ G ⇒ ν (g1 ) = |{g2 : P (g2 = Op(g1 )) > 0 ∧ f (gpm(g2 )) = f (gpm(g1 ))}| / |{g2 : P (g2 = Op(g1 )) > 0}| (16.1)
∀G ⊆ G ⇒ ν (G) = (1/|G|) Σg∈G ν (g) (16.2)
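Equation 16.1 can be evaluated directly for a small discrete example. The sketch below assumes bit-string genotypes, an identity genotype-phenotype mapping, a one-bit-flip operator, and an objective in which only the first two bits matter; all of these are illustrative assumptions.

```python
# Sketch: degree of neutrality nu(g) per Eq. 16.1 for a one-bit-flip operator.
def gpm(g):
    return g           # identity genotype-phenotype mapping (assumption)

def f(x):
    return sum(x[:2])  # only the first two bits matter; the rest is neutral

def neighbors(g):
    """All genotypes reachable by one application of the one-bit-flip operator."""
    for i in range(len(g)):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

def nu(g):
    nbrs = list(neighbors(g))
    neutral = [h for h in nbrs if f(gpm(h)) == f(gpm(g))]
    return len(neutral) / len(nbrs)

print(nu((0, 0, 0, 0, 0, 0)))  # 4 of the 6 single-bit flips are neutral: 2/3
```

Averaging ν over a set of genotypes gives the region-wise measure of Equation 16.2.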
16.2 Evolvability

Another metaphor in Global Optimization which has been borrowed from biological sys-
tems [1289] is evolvability1 [699]. Wagner [2833, 2834] points out that this word has two
uses in biology: According to Kirschner and Gerhart [1543], a biological system is evolv-
able if it is able to generate heritable, selectable phenotypic variations. Such properties
can then be spread by natural selection and changed during the course of evolution. In its
second sense, a system is evolvable if it can acquire new characteristics via genetic change
that help the organism(s) to survive and to reproduce. Theories about how the ability
of generating adaptive variants has evolved have been proposed by Riedl [2305], Wagner
and Altenberg [78, 2835], Bonner [357], and Conrad [624], amongst others. The idea of
evolvability can be adopted for Global Optimization as follows:

Definition D16.2 (Evolvability). The evolvability of an optimization process in its cur-


rent state defines how likely the search operations will lead to candidate solutions with new
(and eventually, better) objectives values.

The direct probability of success [295, 2278], i. e., the chance that search operators produce
offspring fitter than their parents, is also sometimes referred to as evolvability in the context
of Evolutionary Algorithms [77, 80].

16.3 Neutrality: Problematic and Beneficial

The link between evolvability and neutrality has been discussed by many researchers [2834,
3039]. The evolvability of neutral parts of a fitness landscape depends on the optimization
algorithm used. It is especially low for Hill Climbing and similar approaches, since the search
operations cannot directly provide improvements or even changes. The optimization process
then degenerates to a random walk, as illustrated in Fig. 11.1.f on page 140. The work of
Beaudoin et al. [242] on the ND fitness landscapes2 shows that neutrality may “destroy”
useful information such as correlation.
Researchers in molecular evolution, on the other hand, found indications that the major-
ity of mutations in biology have no selective influence [971, 1315] and that the transformation
from genotypes to phenotypes is a many-to-one mapping. Wagner [2834] states that neutral-
ity in natural genomes is beneficial if it concerns only a subset of the properties peculiar to
the offspring of a candidate solution while allowing meaningful modifications of the others.
Toussaint and Igel [2719] even go as far as declaring it a necessity for self-adaptation.
The theory of punctuated equilibria 3 , introduced in biology by Eldredge and Gould
[875, 876], states that species experience long periods of evolutionary inactivity which are
interrupted by sudden, localized, and rapid phenotypic evolutions [185].4 It is assumed that
the populations explore neutral layers5 during the time of stasis until, suddenly, a relevant
change in a genotype leads to a better adapted phenotype [2780] which then reproduces
quickly. Similar phenomena can be observed in and are utilized by EAs [604, 1838].
“Uh?”, you may think, “How does this fit together?” The key to differentiating between
“good” and “bad” neutrality is its degree ν in relation to the number of possible solutions
maintained by the optimization algorithms. Smith et al. [2538] have used illustrative ex-
amples similar to Figure 16.2 showing that a certain amount of neutral reproductions can
foster the progress of optimization. In Fig. 16.2.a, basically the same scenario of premature
1 http://en.wikipedia.org/wiki/Evolvability [accessed 2007-07-03]
2 See Section 50.2.3 on page 557 for a detailed elaboration on the ND fitness landscape.
3 http://en.wikipedia.org/wiki/Punctuated_equilibrium [accessed 2008-07-01]
4 A very similar idea is utilized in the Extremal Optimization method discussed in Chapter 41.
5 Or neutral networks, as discussed in Section 16.4.
convergence as in Fig. 13.1.a on page 152 is depicted. The optimizer is drawn to a local opti-
mum from which it cannot escape anymore. Fig. 16.2.b shows that a little shot of neutrality
could form a bridge to the global optimum. The optimizer now has a chance to escape the
smaller peak if it is able to find and follow that bridge, i. e., the evolvability of the system
has increased. If this bridge gets wider, as sketched in Fig. 16.2.c, the chance of finding the
global optimum increases as well. Of course, if the bridge gets too wide, the optimization
process may end up in a scenario like in Fig. 11.1.f on page 140 where it cannot find any
direction. Furthermore, in this scenario we expect the neutral bridge to lead to somewhere
useful, which is not necessarily the case in reality.

[Figure 16.2: three sketches of a fitness landscape with a local and a global optimum]
Fig. 16.2.a: Premature Convergence. Fig. 16.2.b: Small Neutral Bridge. Fig. 16.2.c: Wide
Neutral Bridge.

Figure 16.2: Possible positive influence of neutrality.

Recently, the idea of utilizing the processes of molecular6 and evolutionary7 biology as
complement to Darwinian evolution for optimization gains interest [212]. Scientists like
Hu and Banzhaf [1286, 1287, 1289] began to study the application of metrics such as the
evolution rate of gene sequences [2973, 3021] to Evolutionary Algorithms. Here, the degree
of neutrality (synonymous vs. non-synonymous changes) seems to play an important role.
Examples for neutrality in fitness landscapes are the ND family (see Section 50.2.3),
the NKp and NKq models (discussed in Section 50.2.1.5), and the Royal Road (see Sec-
tion 50.2.4). Another common instance of neutrality is bloat in Genetic Programming,
which is outlined in ?? on page ??.

16.4 Neutral Networks


From the idea of neutral bridges between different parts of the search space as sketched by
Smith et al. [2538], we can derive the concept of neutral networks.

Definition D16.3 (Neutral Network). Neutral networks are equivalence classes K of


elements of the search space G which map to elements of the problem space X with the
same objective values and are connected by chains of applications of the search operators
Op [219].

∀g1 , g2 ∈ G : g1 ∈ K (g2 ) ⊆ G ⇔ ∃k ∈ N0 : P (g2 = Opk (g1 )) > 0 ∧
f (gpm(g1 )) = f (gpm(g2 )) (16.3)
6 http://en.wikipedia.org/wiki/Molecular_biology [accessed 2008-07-20]
7 http://en.wikipedia.org/wiki/Evolutionary_biology [accessed 2008-07-20]
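The equivalence classes K of Equation 16.3 can be computed for a toy space as connected components of an equal-fitness graph: start at a genotype and flood-fill over search-operator neighbors with the same objective value. The space, operator, and objective below are illustrative assumptions.

```python
# Sketch: a neutral network (Eq. 16.3) as the connected component of
# equal-fitness genotypes under a one-bit-flip operator; f is illustrative.
from itertools import product

n = 4
G = list(product((0, 1), repeat=n))

def f(g):
    return sum(g) // 2  # many genotypes share each objective value

def neighbors(g):
    for i in range(n):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

def neutral_network(g):
    """Flood-fill the set of genotypes connected to g with equal objective value."""
    seen, stack = {g}, [g]
    while stack:
        cur = stack.pop()
        for h in neighbors(cur):
            if h not in seen and f(h) == f(g):
                seen.add(h)
                stack.append(h)
    return seen

K = neutral_network((0, 0, 0, 0))
print(len(K))  # 5: the all-zeros string plus the four strings with one bit set
```

Here the chain of operator applications required by Equation 16.3 corresponds to a path in the flood-filled component.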
Barnett [219] states that a neutral network has the constant innovation property if

1. the rate of discovery of innovations keeps constant for a reasonably large amount of
applications of the search operations [1314], and

2. if this rate is comparable with that of an unconstrained random walk.

Networks with this property may prove very helpful if they connect the optima in the fitness
landscape. Stewart [2611] utilizes neutral networks and the idea of punctuated equilibria
in his extrema selection, a Genetic Algorithm variant that focuses on exploring individuals
which are far away from the centroid of the set of currently investigated candidate solutions
(but still have good objective values). Then again, Barnett [218] showed that populations
in Genetic Algorithms tend to dwell in neutral networks of high dimensions of neutrality
regardless of their objective values, which (obviously) cannot be considered advantageous.
The convergence on neutral networks has furthermore been studied by Bornberg-Bauer
and Chan [361], van Nimwegen et al. [2778, 2779], and Wilke [2942]. Their results show that
the topology of neutral networks strongly determines the distribution of genotypes on them.
Generally, the genotypes are “drawn” to the solutions with the highest degree of neutrality
ν on the neutral network (Beaudoin et al. [242]).

16.5 Redundancy: Problematic and Beneficial

Definition D16.4 (Redundancy). Redundancy in the context of Global Optimization is


a feature of the genotype-phenotype mapping and means that multiple genotypes map to
the same phenotype, i. e., the genotype-phenotype mapping is not injective.

∃g1 , g2 : g1 6= g2 ∧ gpm(g1 ) = gpm(g2 ) (16.4)

The role of redundancy in the genome is as controversial as that of neutrality [2880].


There exist many accounts of its positive influence on the optimization process. Ship-
man et al. [2458, 2480], for instance, tried to mimic desirable evolutionary properties of
RNA folding [1315]. They developed redundant genotype-phenotype mappings using voting
(both, via uniform redundancy and via a non-trivial approach), Turing machine-like binary
instructions, Cellular automata, and random Boolean networks [1500]. Except for the trivial
voting mechanism based on uniform redundancy, the mappings induced neutral networks
which proved beneficial for exploring the problem space. Especially the last approach pro-
vided particularly good results [2458, 2480]. Possibly converse effects like epistasis (see
Chapter 17) arising from the new genotype-phenotype mappings have not been considered
in this study.
Redundancy can have a strong impact on the explorability of the problem space. When
utilizing a one-to-one mapping, the translation of a slightly modified genotype will always
result in a different phenotype. If there exists a many-to-one mapping between genotypes
and phenotypes, the search operations can create offspring genotypes different from the
parent which still translate to the same phenotype. The optimizer may now walk along a
path through this neutral network. If many genotypes along this path can be modified to
different offspring, many new candidate solutions can be reached [2480]. One example for
beneficial redundancy is the extradimensional bypass idea discussed in Section 24.15.1.
The experiments of Shipman et al. [2479, 2481] additionally indicate that neutrality
in the genotype-phenotype mapping can have positive effects. In the Cartesian Genetic
Programming method, neutrality is explicitly introduced in order to increase the evolvability
(see ?? on page ??) [2792, 3042].
Yet, Rothlauf [2338] and Shackleton et al. [2458] show that simple uniform redundancy
is not necessarily beneficial for the optimization process and may even slow it down. Ronald
et al. [2324] point out that the optimization may be slowed down if the population of the
individuals under investigation contains many isomorphic genotypes, i. e., genotypes which
encode the same phenotype. If this isomorphism can be identified and removed, a significant
speedup may be gained [2324]. We can thus conclude that there is no use in introducing
encodings which, for instance, represent each phenotypic bit with two bits in the genotype
where 00 and 01 map to 0 and 10 and 11 map to 1. Another example for this issue is given
in Fig. 24.1.b on page 220.
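The uniformly redundant encoding just described (two genotypic bits per phenotypic bit, with 00 and 01 mapping to 0 and 10 and 11 mapping to 1) can be written down directly; the sketch below is an illustration of why such redundancy is useless.

```python
# Sketch of a uniformly redundant genotype-phenotype mapping:
# each pair of genotype bits encodes one phenotype bit, and only the first
# bit of each pair matters (00/01 -> 0, 10/11 -> 1).
def gpm(genotype):
    """Map an even-length bit tuple to a phenotype half its length."""
    return tuple(genotype[i] for i in range(0, len(genotype), 2))

# Two different genotypes encode the same phenotype; the second bit of each
# pair is entirely neutral, adding redundancy without any benefit.
g1 = (1, 0, 0, 1)
g2 = (1, 1, 0, 0)
print(gpm(g1), gpm(g2))  # (1, 0) (1, 0)
```

The mapping is many-to-one in the sense of Definition D16.4, yet flipping a neutral bit never opens a path to a new phenotype, which is exactly why uniform redundancy does not help the search.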

16.6 Summary

Different from ruggedness which is always bad for optimization algorithms, neutrality has
aspects that may further as well as hinder the process of finding good solutions. Generally
we can state that degrees of neutrality ν very close to 1 degenerate optimization processes
to random walks. Some forms of neutral networks accompanied by low (nonzero) values of
ν can improve the evolvability and hence, increase the chance of finding good solutions.
Adverse forms of neutrality are often caused by bad design of the search space or
genotype-phenotype mapping. Uniform redundancy in the genome should be avoided where
possible and the amount of neutrality in the search space should generally be limited.

16.7 Needle-In-A-Haystack

Besides fully deceptive problems, one of the worst cases of fitness landscapes is the needle-
in-a-haystack (NIAH) problem [? ] sketched in Fig. 11.1.g on page 140, where the optimum
occurs as an isolated spike [1078] in a plane. In other words, small instances of extreme
ruggedness combine with a general lack of information in the fitness landscape. Such problems
are extremely hard to solve and the optimization processes often will converge prematurely
or take very long to find the global optimum. An example for such fitness landscapes is
the all-or-nothing property often inherent to Genetic Programming of algorithms [2729], as
discussed in ?? on page ??.
Tasks T16

57. Describe the possible positive and negative aspects of neutrality in the fitness land-
scape.
[10 points]

58. Construct two different objective functions f1 , f2 : R → R+ which have neutral parts
in the fitness landscape. Like the three functions from Task 54 on page 165 and the
two from Task 56 on page 169, they should have the global minimum (optimum) 0
with objective value 0. Try to solve these functions with a trivial implementation of
Hill Climbing (see Algorithm 26.1 on page 230) by applying it 30 times to both of
them. Note your observations regarding the problem hardness and compare them to
your results from Task 54.
[7 points]
Chapter 17
Epistasis, Pleiotropy, and Separability

17.1 Introduction

17.1.1 Epistasis and Pleiotropy

In biology, epistasis 1 is defined as a form of interaction between different genes [2170].


The term was coined by Bateson [231] and originally meant that one gene suppresses the
phenotypical expression of another gene. In the context of statistical genetics, epistasis was
initially called “epistacy” by Fisher [928]. According to Lush [1799], the interaction between
genes is epistatic if the effect on the fitness of altering one gene depends on the allelic state
of other genes.

Definition D17.1 (Epistasis). In optimization, epistasis is the dependency of the con-


tribution of one gene to the value of the objective functions on the allelic state of other
genes. [79, 692, 2007]

We speak of minimal epistasis when every gene is independent of every other gene. Then,
the optimization process equals finding the best value for each gene and can most efficiently
be carried out by a simple greedy search (see Section 46.3.1) [692]. A problem is maximally
epistatic when no proper subset of genes is independent of any other gene [2007, 2562]. Our
understanding of epistasis and its effects comes close to another biological phenomenon:

Definition D17.2 (Pleiotropy). Pleiotropy 2 denotes that a single gene is responsible for
multiple phenotypical traits [1288, 2944].

Like epistasis, pleiotropy can sometimes lead to unexpected improvements but often is harm-
ful for the evolutionary system [77]. Both phenomena may easily intertwine. If one gene
epistatically influences, for instance, two others which are responsible for distinct phenotyp-
ical traits, it has both epistatic and pleiotropic effects. We will therefore consider pleiotropy
and epistasis together and when discussing the effects of the latter, also implicitly refer to
the former.

Example E17.1 (Pleiotropy, Epistasis, and a Dinosaur).


In Figure 17.1 we illustrate a fictional dinosaur along with a snippet of its fictional genome
1 http://en.wikipedia.org/wiki/Epistasis [accessed 2008-05-31]
2 http://en.wikipedia.org/wiki/Pleiotropy [accessed 2008-03-02]

[Figure omitted: a genome of four genes. Gene 1 determines the color, gene 2 the length of hind legs and forelegs, gene 3 the length of the forelegs, and gene 4 the shape of the bone armor.]

Figure 17.1: Pleiotropy and epistasis in a dinosaur’s genome.

consisting of four genes. Gene 1 influences the color of the creature and is neither pleiotropic
nor has any epistatic relations. Gene 2, however, exhibits pleiotropy since it determines the
length of the hind legs and forelegs. At the same time, it is epistatically connected with
gene 3 which too, influences the length of the forelegs – maybe preventing them from looking
exactly like the hind legs. The fourth gene is again pleiotropic by determining the shape of
the bone armor on the top of the dinosaur’s skull.

Epistasis can be considered as the higher-order or non-main effects in a model predicting


fitness values based on interactions of the genes of the genotypes from the point of view of
Design of Experiments (DOE) [2285].

17.1.2 Separability
In the area of optimizing over continuous problem spaces (such as X ⊆ Rn ), epistasis and
pleiotropy are closely related to the term separability. Separability is a feature of the objective
functions of an optimization problem [2668]. A function of n variables is separable if it can
be rewritten as a sum of n functions of just one variable [1162, 2099]. Hence, the decision
variables involved in the problem can be optimized independently of each other, i. e., are
minimally epistatic, and the problem is said to be separable according to the formal definition
given by Auger et al. [147, 1181] as follows:

Definition D17.3 (Separability). A function f : X^n → R (where usually X ⊆ R) is
separable if and only if

arg min_{(x1 ,..,xn )} f(x1 , .., xn ) = (arg min_{x1} f(x1 , ..) , .., arg min_{xn} f(.., xn ))    (17.1)

Otherwise, f(x) is called a nonseparable function.
If a function f(x) is separable, the parameters x1 , .., xn forming the candidate solution x ∈ X
are called independent. A separable problem is decomposable, as already discussed under
the topic of epistasis.
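For a separable function, Definition D17.3 licenses optimizing one decision variable at a time. The following sketch (hypothetical Python, not from the book's source code) does exactly that for a separable sum of parabolas; because the variables are independent, the coordinate-wise result is also the joint minimum:

```python
# Hypothetical sketch: coordinate-wise minimization of a separable
# function. Because f is a sum of one-dimensional terms, the best value
# for each variable can be found independently of all the others.

def f(x):
    """Separable objective: f(x) = (x1-3)^2 + (x2+1)^2 + (x3-0.5)^2."""
    c = (3.0, -1.0, 0.5)
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def coordinate_minimize(f, n, grid):
    """Optimize each of the n variables over a value grid, one at a time.
    This is only guaranteed to find the joint optimum if f is separable."""
    x = [0.0] * n
    for i in range(n):
        # vary coordinate i alone, keeping all other coordinates fixed
        x[i] = min(grid, key=lambda v: f(x[:i] + [v] + x[i + 1:]))
    return x
```

On a grid of half-integer steps this recovers the exact minimizer (3.0, −1.0, 0.5) with objective value 0; for a nonseparable f, the same procedure may stop far away from the joint optimum.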

Definition D17.4 (Nonseparability). A function f : X^n → R (where usually X ⊆ R)
is m-nonseparable if at most m of its parameters xi are not independent. A nonseparable
function f(x) is called fully-nonseparable if any two of its parameters xi are not independent.

The higher the degree of nonseparability, the more difficult a function usually becomes for
optimization [147, 1181, 2635]. Often, the term nonseparable is used in the sense
of fully-nonseparable. In other words, in between separable and fully nonseparable problems,
a variety of partially separable problems exists. Definition D17.5 is, hence, an alternative
for Definition D17.4.

Definition D17.5 (Partial Separability). An objective function f : X^n → R+ is called
partially separable if it can be expressed as

f(x) = Σ_{i=1}^{M} f_i( x_{I_{i,1}} , .., x_{I_{i,|I_i|}} )    (17.2)

where each of the M component functions f_i : X^{|I_i|} → R depends only on a subset
I_i ⊆ {1..n} of the n decision variables [612, 613, 1136, 1137].
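Equation 17.2 can be made concrete with a small sketch (hypothetical Python; all names are chosen for illustration): a function of n = 3 variables composed of M = 2 component functions whose zero-based index sets I1 = {0, 1} and I2 = {1, 2} overlap in one shared variable.

```python
# Hypothetical sketch of a partially separable function in the sense of
# Equation 17.2: n = 3 decision variables, M = 2 component functions
# over the overlapping (zero-based) index sets I1 = {0, 1}, I2 = {1, 2}.

def g1(x0, x1):          # component f_1, depends on I1 = {0, 1}
    return (x0 - x1) ** 2

def g2(x1, x2):          # component f_2, depends on I2 = {1, 2}
    return (x1 + x2) ** 2

COMPONENTS = [(g1, (0, 1)), (g2, (1, 2))]

def f(x):
    """f(x) = sum over the components, each seeing only its subset I_i."""
    return sum(gi(*(x[j] for j in idx)) for gi, idx in COMPONENTS)
```

The shared variable x1 couples the two components, so f is not fully separable; the sparse interaction structure is exactly what decomposition-based optimizers try to exploit.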

17.1.3 Examples

Examples of problems with a high degree of epistasis are Kauffman’s NK fitness land-
scape [1499, 1501] (Section 50.2.1), the p-Spin model [86] (Section 50.2.2), and the tun-
able model by Weise et al. [2910] (Section 50.2.7). Nonseparable objective functions often
used for benchmarking are, for instance, the Griewank function [1106, 2934, 3026] (see Sec-
tion 50.3.1.11) and Rosenbrock’s Function [3026] (see Section 50.3.1.5). Partially separable
test problems are given in, for example, [2670, 2708].
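For reference, the two continuous benchmarks just mentioned are easy to state. The snippet below (hypothetical Python; the formulas are the ones commonly given in the benchmarking literature, not taken from the book's sources) contrasts the fully separable sphere function with Griewank's function, whose cosine product couples all variables:

```python
import math

# Sphere function: fully separable, every x_i contributes an
# independent additive term.
def sphere(x):
    return sum(xi ** 2 for xi in x)

# Griewank's function: the cosine product couples all decision
# variables, which makes the function nonseparable.
def griewank(x):
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i))
                  for i, xi in enumerate(x, start=1))
    return 1.0 + s - p
```

Both functions have their global minimum 0 at the origin, but only the sphere function can be minimized one coordinate at a time without loss.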

17.2 The Problem

As sketched in Figure 17.2, epistasis has a strong influence on many of the previously
discussed problematic features. If one gene can “turn off” or affect the expression of other
genes, a modification of this gene will lead to a large change in the features of the phenotype.
Hence, the causality will be weakened and ruggedness ensues in the fitness landscape. It also
becomes harder to define search operations with exploitive character. Moreover, subsequent
changes to the “deactivated” genes may have no influence on the phenotype at all, which
would then increase the degree of neutrality in the search space. Bowers [375] found that
representations and genotypes with low pleiotropy lead to better and more robust solutions.
Epistasis and pleiotropy are mainly aspects of the way in which objective functions f ∈ f ,
the genome G, and the genotype-phenotype mapping are defined and interact. They should
be avoided where possible.

[Figure omitted: high epistasis is depicted as a cause of ruggedness, multi-modality, neutrality, weak causality, and needle-in-a-haystack situations.]

Figure 17.2: The influence of epistasis on the fitness landscape.

Example E17.2 (Epistasis in Binary Genomes).


Consider the following three objective functions f1 to f3 , all subject to maximization. All three
are based on a problem space consisting of all bit strings of length five, XB = B^5 = {0, 1}^5 .

f1 (x) = Σ_{i=0}^{4} x[i]    (17.3)

f2 (x) = (x[0] ∗ x[1]) + (x[2] ∗ x[3]) + x[4]    (17.4)

f3 (x) = Π_{i=0}^{4} x[i]    (17.5)

Furthermore, let the search space be identical to the problem space, i. e., G = X. In the
optimization problem defined by XB and f1 , the elements of the candidate solutions are
totally independent and can be optimized separately. Thus, f1 is minimally epistatic. In f2 ,
only the first two (x[0], x[1]) and second two (x[2], x[3]) genes of each genotype influence
each other. Both groups are independent from each other and from x[4]. Hence, the problem
has some epistasis. f3 is a needle-in-a-haystack problem where all five genes directly influence
each other. A single zero in a candidate solution deactivates all other genes and the optimum
– the string containing all ones – is the only point in the search space with a non-zero objective
value.
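The three functions are easy to implement; the sketch below (hypothetical Python, not from the book's sources) makes the needle-in-a-haystack character of f3 tangible: a single 0 anywhere in the string forces the objective value down to 0.

```python
# The three objective functions of Example E17.2 over bit strings of
# length five (hypothetical Python sketch; all subject to maximization).

def f1(x):
    """Minimally epistatic: every bit contributes independently."""
    return sum(x)

def f2(x):
    """Some epistasis: bits interact only within the pairs (x[0], x[1])
    and (x[2], x[3]); x[4] is independent of everything else."""
    return x[0] * x[1] + x[2] * x[3] + x[4]

def f3(x):
    """Maximally epistatic needle in a haystack: a single 0
    'deactivates' all other genes."""
    p = 1
    for bit in x:
        p *= bit
    return p
```

Under f1, flipping any single bit changes the objective value by exactly 1; under f3, flipping a bit of a non-optimal string usually changes nothing at all.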

Example E17.3 (Epistasis in Genetic Programming).


Epistasis can be observed especially often in Genetic Programming. In this area of Evolutionary
Computation the problem spaces are sets of all programs in (usually) domain-specific
languages and we try to find candidate solutions which solve a given problem “algorithmi-
cally”.
1 int gcd(int a, int b) { //skeleton code
2 int t; //skeleton code
3 while(b!=0) { // ------------------
4 t = b; //
5 b = a % b; // Evolved Code
6 a = t; //
7 } // ------------------
8 return a; //skeleton code
9 } //skeleton code
Listing 17.1: A program computing the greatest common divisor.
Let us assume that we wish to evolve a program computing the greatest common divisor of
two natural numbers a, b ∈ N1 in a C-style representation (X = C). Listing 17.1 would be
a program which fulfills this task. As search space G we could, for instance, use all trees
where the inner nodes are chosen from while, =, !=, ==, %, +, -, each accepting two child
nodes, and the leaf nodes are either a, b, t, 1, or 0. The evolved trees could be pasted by
the genotype-phenotype mapping into some skeleton code as indicated in Listing 17.1.
The genes of this representation are the nodes of the trees. A unary search operator
(mutation) could exchange a node inside a genotype with a randomly created one and a
binary search operator (crossover) could swap subtrees between two genotypes. Such a
mutation operator could, for instance, replace the “t = b” in Line 4 of Listing 17.1 with
a “t = a”. Although only a single gene of the program has been modified, its behavior
has now changed completely. Without stating any specific objective functions, it should be
clear that the new program will have very different (functional) objective values. Changing
the instruction has epistatic impact on the behavior of the rest of the program – the loop
declared in Line 3, for instance, may now become an endless loop for many inputs a and b.
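The epistatic effect of that single-node mutation can be reproduced directly (hypothetical Python transcriptions of Listing 17.1 and its mutant; the original is C-style, but the behavior carries over):

```python
# Hypothetical Python transcription of Listing 17.1 and of the mutant
# in which the single node "t = b" has been replaced by "t = a".

def gcd(a, b):
    while b != 0:
        t = b            # the gene that will be mutated
        b = a % b
        a = t
    return a

def gcd_mutated(a, b):
    """Same program with 't = b' replaced by 't = a': the variable a is
    now overwritten with its own old value and therefore never changes,
    so the function simply returns its first argument for b > 0."""
    while b != 0:
        t = a
        b = a % b
        a = t
    return a
```

For a = 18, b = 12 the original returns 6 while the mutant returns 18: modifying one gene changed the program's result on essentially every input, which is exactly the epistatic effect described above.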

Example E17.4 (Epistasis in Combinatorial Optimization: Bin Packing).


In Example E1.1 on page 25 we introduced bin packing as a typical combinatorial optimiza-
tion problem. Later, in Task 67 on page 238 we propose a solution approach to a variant of
this problem. As discussed in this task and sketched in Figure 17.3, we suggest representing
a solution to bin packing as a permutation of the n objects to be packed into the bin and
to set G = X = Π(n). In other words, a candidate solution describes the order in which
the objects have to be placed into the bin and is processed from left to right. Whenever
the current object does not fit into the current bin anymore, a new bin is “opened”. An
implementation of this idea is given in Section 57.1.2.1 and an example output of a test
program applying Hill Climbing, Simulated Annealing, a random walk, and the random
sampling algorithm is listed in Listing 57.16 on page 903.
As can be seen in that listing of results, neither Hill Climbing nor Simulated Annealing
can come very close to the known optimal solutions. One reason for this performance is
that the permutation representation for bin packing exhibits some epistasis. In Figure 17.3,
we sketch how the exchange of two genes near the beginning of a candidate solution x1 can
lead to a new solution x2 with a very different behavior: Due to the sequential packing of
objects into bins according to the permutation described by the candidate solutions, object
groupings way after the actually swapped objects may also be broken down. In other words,
the genes at the beginning of the representation have a strong influence on the behavior at
its end. Interestingly (and obviously), this influence is not mutual: the genes at the end of
the permutation have no influence on the genes preceding them.
Suggestions for a better encoding, an extension of the permutation encoding with so-
called separators marking the beginning and ending of a bin, have been made by Jones and
Beltramo [1460] and Stawowy [2604]. Khuri et al. [1534] suggest a method which works
completely without permutations: Each gene of a string of length n stands for one of the
n objects and its (numerical) allele denotes the bin into which the object belongs. Though
this solution allows for infeasible solutions, it does not suffer from the epistasis phenomenon
discussed in this example.
[Figure omitted: 12 objects (with identifiers 0 to 11) and sizes a = (2, 3, 3, 3, 4, 5, 5, 7, 8, 9, 11, 12) are to be packed into bins of size b = 14. Swapping the first and the fourth element of the permutation turns g1 = x1 = (10, 11, 6, 0, 2, 9, 4, 8, 5, 7, 1, 3) into g2 = x2 = (0, 11, 6, 10, 2, 9, 4, 8, 5, 7, 1, 3). The first candidate solution x1 requires 6 bins; x2 , a slightly modified version of x1 , requires 7 bins and exhibits a very different behavior.]

Figure 17.3: Epistasis in the permutation-representation for bin packing.
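The decoding scheme behind Figure 17.3 can be sketched in a few lines (hypothetical Python; the actual implementation is the one in Section 57.1.2.1, and this simplification need not reproduce its bin counts exactly). The decoder walks through the permutation and opens a new bin whenever the current object no longer fits:

```python
# Hypothetical sketch of the sequential permutation decoding for bin
# packing used in Figure 17.3.

SIZES = (2, 3, 3, 3, 4, 5, 5, 7, 8, 9, 11, 12)   # objects 0..11
BIN_SIZE = 14

def decode(perm, sizes=SIZES, b=BIN_SIZE):
    """Pack objects in permutation order; open a new bin whenever the
    current object does not fit into the current bin anymore."""
    bins, load = [[]], 0
    for obj in perm:
        if load + sizes[obj] > b:    # current bin is full for this object
            bins.append([])
            load = 0
        bins[-1].append(obj)
        load += sizes[obj]
    return bins

x1 = (10, 11, 6, 0, 2, 9, 4, 8, 5, 7, 1, 3)
x2 = (0, 11, 6, 10, 2, 9, 4, 8, 5, 7, 1, 3)      # first and fourth swapped
```

Running the decoder shows the epistatic effect: although only the first and fourth gene were exchanged, the contents of the first three bins change completely, because every placement decision depends on the load left behind by all earlier genes.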

Generally, epistasis and conflicting objectives in multi-objective optimization should be dis-


tinguished from each other. Epistasis as well as pleiotropy is a property of the influence of
the editable elements (the genes) of the genotypes on the phenotypes. Objective functions
can conflict without the involvement of any of these phenomena. We can, for example, define
two objective functions f1 (x) = x and f2 (x) = −x which are clearly contradicting regardless
of whether they both are subject to maximization or minimization. Nevertheless, if the can-
didate solutions x and the genotypes are simple real numbers and the genotype-phenotype
mapping is an identity mapping, neither epistatic nor pleiotropic effects can occur.
Naudts and Verschoren [2008] have shown for the special case of length-two binary string
genomes that deceptiveness does not occur in situations with low epistasis and also that
objective functions with high epistasis are not necessarily deceptive. [239] states that
deceptiveness is a special case of epistatic interaction (while we would rather say that it is
a phenomenon caused by epistasis). Another discussion about different shapes of fitness
landscapes under the influence of epistasis is given by Beerenwinkel et al. [248].

17.3 Countermeasures
We have shown that epistasis is a root cause for multiple problematic features of optimization
tasks. General countermeasures against epistasis can be divided into two groups. The
symptoms of epistasis can be mitigated with the same methods which increase the chance
of finding good solutions in the presence of ruggedness or neutrality.

17.3.1 Choice of the Representation


Epistasis itself is a feature which results from the choice of the search space structure, the
search operations, the genotype-phenotype mapping, and the structure of the problem space.
Avoiding epistatic effects should be a major concern during their design.

Example E17.5 (Bad Representation Design: Algorithm 1.1).


Algorithm 1.1 on page 40 is a good example for bad search space design. There, the
assignment of n objects to k bins in order to solve the bin packing problem is represented
as a permutation of the first n natural numbers. The genotype-phenotype mapping starts
to take numbers from the start of the genotype sequentially and places the corresponding
objects into the first bin. If the object identified by the current gene does not fit into the
bin anymore, the GPM continues with the next bin.
Representing an assignment of objects to bins like this has one drawback: if the genes at
lower loci, i. e., close to the beginning of the genotype, are modified, this may change many
subsequent assignments. If a large object is swapped from the back to an early bin, the
other objects may be shifted backwards and many of them will land in different bins than
before. The meaning of the gene at locus i in the genotype depends on the allelic states of
all genes at loci smaller than i. Hence, this representation is quite epistatic.
Interestingly, using the representation suggested in Example E1.1 on page 25 and in
Equation 1.2 directly as search space would not only save us the work of implementing a
genotype-phenotype mapping, it would also exhibit significantly lower epistasis.

Choosing the problem space and the genotype-phenotype mapping efficiently can lead to a
great improvement in the quality of the solutions produced by the optimization process [2888,
2905]. Introducing specialized search operations can achieve the same [2478].
We outline some general rules for representation design in Chapter 24. Considering these
suggestions may further improve the performance of an optimizer.

17.3.2 Parameter Tweaking

Using larger populations and favoring explorative search operations seems to be helpful in
epistatic problems since this should increase the diversity. Shinkai et al. [2478] showed that
applying (a) higher selection pressure, i. e., increasing the chance to pick the best candidate
solutions for further investigation instead of the weaker ones, and (b) extinctive selection,
i. e., only working with the newest produced set of candidate solutions while discarding
the ones from which they were derived, can also increase the reliability of an optimizer to
find good solutions. These two concepts are slightly contradicting, so careful adjustment of
the algorithm settings appears to be vital in epistatic environments. Shinkai et al. [2478]
observed that higher selection pressure also leads to earlier convergence, a fact that we pointed
out in Chapter 13.

17.3.3 Linkage Learning

According to Winter et al. [2960], linkage is “the tendency for alleles of different genes to
be passed together from one generation to the next” in genetics. This usually indicates that
these genes are closely located in the same chromosome. In the context of Evolutionary
Algorithms, this notion is not useful since identifying spatially close elements inside the
genotypes g ∈ G is trivial. Instead, we are interested in alleles of different genes which have
a joint effect on the fitness [1980, 1981].
Identifying these linked genes, i. e., learning their epistatic interaction, is very helpful
for the optimization process. Such knowledge can be used to protect building blocks3 from
being destroyed by the search operations (such as crossover in Genetic Algorithms), for
instance. Finding approaches for linkage learning has become an especially popular discipline
3 See Section 29.5.5 for information on the Building Block Hypothesis.
in the area of Evolutionary Algorithms with binary [552, 1196, 1981] and real [754] genomes.
Two important methods from this area are the messy GA (mGA, see Section 29.6) by
Goldberg et al. [1083] and the Bayesian Optimization Algorithm (BOA) [483, 2151]. Module
acquisition [108] may be considered as such an effort.

17.3.4 Cooperative Coevolution

Variable Interaction Learning (VIL) techniques can be used to discover the relations be-
tween decision variables, as introduced by Chen et al. [551] and discussed in the context of
cooperative coevolution in Section 21.2.3 on page 204.
Tasks T17

59. What is the difference between epistasis and pleiotropy?


[5 points]

60. Why does epistasis cause ruggedness in the fitness landscape?


[5 points]

61. Why can epistasis cause neutrality in the fitness landscape?


[5 points]

62. Why does pleiotropy cause ruggedness in the fitness landscape?


[5 points]

63. How are epistasis and separability related with each other?
[5 points]
Chapter 18
Noise and Robustness

18.1 Introduction – Noise

In the context of optimization, three types of noise can be distinguished. The first form is
(i) noise in the training data used as the basis for learning or for the objective functions [3019]. In
many applications of machine learning or optimization where a model for a given system is
to be learned, data samples including the input of the system and its measured response are
used for training.

Example E18.1 (Noise in the Training Data).


Some typical examples of situations where training data is the basis for the objective function
evaluation are

1. the usage of Global Optimization for building classifiers (for example for predicting
buying behavior using data gathered in a customer survey for training),

2. the usage of simulations for determining the objective values in Genetic Programming
(here, the simulated scenarios correspond to training cases), and

3. the fitting of mathematical functions to (x, y)-data samples (with artificial neural
networks or symbolic regression, for instance).

Since no measurement device is 100% accurate and there are always random errors, noise is
present in such optimization problems.

Besides inexactnesses and fluctuations in the input data of the optimization process, pertur-
bations are also likely to occur during the application of its results. This category subsumes
the other two types of noise: perturbations that may arise from (ii) inaccuracies in the pro-
cess of realizing the solutions and (iii) environmentally induced perturbations during the
applications of the products.

Example E18.2 (Creating the perfect Tire).


This issue can be illustrated by using the process of developing the perfect tire for a car
as an example. As input for the optimizer, all sorts of material coefficients and geometric
constants measured from all known types of wheels and rubber could be available. Since
these constants have been measured or calculated from measurements, they include a certain
degree of noise and imprecision (i).
The result of the optimization process will be the best tire construction plan discovered
during its course and it will likely incorporate different materials and structures. We would
hope that the tires created according to the plan will not fall apart if, accidentally, an extra
0.0001% of a specific rubber component is used (ii). During the optimization process,
the behavior of many construction plans will be simulated in order to find out about their
utility. When actually manufactured, the tires should not behave unexpectedly when used in
scenarios different from those simulated (iii) and should instead be applicable in all driving
situations likely to occur.

The effects of noise in optimization have been studied by various researchers; Miller and
Goldberg [1897, 1898], Lee and Wong [1703], and Gurin and Rastrigin [1155] are some of
them. Many Global Optimization algorithms and theoretical results have been proposed
which can deal with noise. Some of them are, for instance, specialized
1. Genetic Algorithms [932, 1544, 2389, 2390, 2733, 2734],
2. Evolution Strategies [168, 294, 1172], and
3. Particle Swarm Optimization [1175, 2119] approaches.

18.2 The Problem: Need for Robustness


The goal of Global Optimization is to find the global optima of the objective functions. While
this is fully true from a theoretical point of view, it may not suffice in practice. Optimization
problems are normally used to find good parameters or designs for components or plans to
be put into action by human beings or machines. As we have already pointed out, there will
always be noise and perturbations in practical realizations of the results of optimization.
There is no process in the world that is 100% accurate and the optimized parameters,
designs, and plans have to tolerate a certain degree of imprecision.

Definition D18.1 (Robustness). A system in engineering or biology is robust if it is able


to function properly in the face of genetic or environmental perturbations [1288, 2833].

Therefore, a local optimum (or even a non-optimal element) for which slight disturbances
only lead to gentle performance degenerations is usually favored over a global optimum
located in a highly rugged area of the fitness landscape [395]. In other words, local optima in
regions of the fitness landscape with strong causality are sometimes better than global optima
with weak causality. Of course, the level of this acceptability is application-dependent.
Figure 18.1 illustrates the issue of local optima which are robust vs. global optima which
are not. Table 18.1 gives some examples on problems which require robust optimization.

Example E18.3 (Critical Robustness).


When optimizing the control parameters of an airplane or a nuclear power plant, the global
optimum is certainly not used if a slight perturbation can have hazardous effects on the
system [2734].

Example E18.4 (Robustness in Manufacturing).


Wiesmann et al. [2936, 2937] bring up the topic of manufacturing tolerances in multilayer
optical coatings. It is no use to find optimal configurations if they only perform optimal
when manufactured to a precision which is either impossible or too hard to achieve on a
constant basis.

Example E18.5 (Dynamic Robustness in Road-Salting).


The optimization of the decision process on which roads should be precautionary salted for
areas with marginal winter climate is an example of the need for dynamic robustness. The
global optimum of this problem is likely to depend on the daily (or even current) weather
forecast and may therefore be constantly changing. Handa et al. [1177] point out that it is
practically infeasible to let road workers follow a constantly changing plan and circumvent
this problem by incorporating multiple road temperature settings in the objective function
evaluation.

Example E18.6 (Robustness in Nature).


Tsutsui et al. [2733, 2734] found a nice analogy in nature: The phenotypic characteristics
of an individual are described by its genetic code. During the interpretation of this code,
perturbations like abnormal temperature, nutritional imbalances, injuries, illnesses and so
on may occur. If the phenotypic features emerging under these influences have low fitness,
the organism cannot survive and procreate. Thus, even a species with good genetic material
will die out if its phenotypic features become too sensitive to perturbations. Species robust
against them, on the other hand, will survive and evolve.

[Figure omitted: an objective function f(x) with a robust local optimum in a region of strong causality and a global optimum in a highly rugged area.]

Figure 18.1: A robust local optimum vs. an “unstable” global optimum.

Table 18.1: Applications and Examples of robust methods.

Area                                References
Combinatorial Problems              [644, 1177]
Distributed Algorithms and Systems  [52, 1401, 2898]
Engineering                         [52, 644, 1134, 1135, 2936, 2937]
Graph Theory                        [52, 63, 67, 644, 1401]
Image Synthesis                     [375]
Logistics                           [1177]
Mathematics                         [2898]
Motor Control                       [489]
Multi-Agent Systems                 [2925]
Software                            [1401]
Wireless Communication              [63, 67, 644]

18.3 Countermeasures

For the special case where the phenome is a real vector space (X ⊆ Rn ), several approaches
for dealing with the need for robustness have been developed. Inspired by Taguchi
methods1 [2647], possible disturbances are represented by a vector δ = (δ1 , δ2 , .., δn )^T , δi ∈ R,
in the method suggested by Greiner [1134, 1135]. If the distributions and influences
of the δi are known, the objective function f(x) : x ∈ X can be rewritten as
f̃(x, δ) [2937]. In the special case where δ is normally distributed, this can be simplified
to f̃((x1 + δ1 , x2 + δ2 , .., xn + δn )^T). It would then make sense to sample the probability
distribution of δ a number of t times and to use the mean values of f̃(x, δ) for each objective
function evaluation during the optimization process. In cases where the optimal value y ⋆
of the objective function f is known, Equation 18.1 can be minimized. This approach is
also used in the work of Wiesmann et al. [2936, 2937] and basically turns the optimization
algorithm into something like a maximum likelihood estimator (see Section 53.6.2 and Equation 53.257 on page 695).

f′(x) = (1/t) · Σ_{i=1}^{t} ( y ⋆ − f̃(x, δi ) )²    (18.1)
This method corresponds to using multiple, different training scenarios during the objective
function evaluation in situations where X ⊄ Rn . By adding random noise and artificial
perturbations to the training cases, the chance of obtaining robust solutions which are
stable when applied or realized under noisy conditions can be increased.

1 http://en.wikipedia.org/wiki/Taguchi_methods [accessed 2008-07-19]
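The sampling scheme of Equation 18.1 translates into a short routine (hypothetical Python; the Gaussian disturbance model and the values of t and σ are illustrative assumptions, not prescribed by the book):

```python
import random

def robust_objective(f, x, y_star, t=100, sigma=0.1, rng=random):
    """Estimate f'(x) = (1/t) * sum_{i=1}^{t} (y_star - f(x + delta_i))^2
    by sampling t normally distributed disturbance vectors delta_i."""
    total = 0.0
    for _ in range(t):
        # perturb every decision variable independently by delta_i
        perturbed = [xi + rng.gauss(0.0, sigma) for xi in x]
        total += (y_star - f(perturbed)) ** 2
    return total / t
```

A candidate sitting in a flat region of the fitness landscape scores close to 0, while a candidate whose quality collapses under small perturbations is penalized even if its undisturbed objective value is optimal, which is precisely the preference for robust optima discussed above.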


Chapter 19
Overfitting and Oversimplification

In all scenarios where optimizers evaluate some of the objective values of the candidate
solutions by using training data, two additional phenomena with negative influence can be
observed: overfitting and (what we call) oversimplification. This is especially true if the
solutions represent functional dependencies inside the training data St .

Example E19.1 (Function Approximation).


Let us assume that St ⊆ S is the training data sample from a space S of possible data.
Let us further assume that this space is the cross product S = A × B of a space A of
input variables and a space B containing the values of the variables whose dependencies on
the inputs from A should be learned. The structure of each single element s[i] of the training
data sample St = (s[0], s[1], .., s[n − 1]) is s[i] = (a[i], b[i]) : a[i] ∈ A, b[i] ∈ B. The candidate
solutions would actually be functions x : A → B and the goal is to find the optimal function x⋆
which exactly represents the (previously unknown) dependencies between A and B as they
occur in St , i. e., a function with x⋆ (a[i]) ≈ b[i] ∀i ∈ 0..n − 1.
Artificial neural networks, (normal) regression, and symbolic regression are the most
common techniques to solve such problems, but other techniques such as Particle Swarm
Optimization, Evolution Strategies, and Differential Evolution can be used too. Then, the
candidate solutions may, for instance, be polynomials x(a) = x0 + x1 a + x2 a² + . . . or
otherwise structured functions and only the coefficients x0 , x1 , x2 , . . . would be optimized.
When trying to find good candidate solutions x : A → B, possible objective functions (subject
to minimization) are fE (x) = Σ_{i=0}^{n−1} |x(a[i]) − b[i]| or fSE (x) = Σ_{i=0}^{n−1} (x(a[i]) − b[i])² .
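Both error measures can be written down directly; the sketch below (hypothetical Python, names chosen for illustration) evaluates a polynomial candidate solution against a training sample:

```python
# Hypothetical sketch of the two error objectives from Example E19.1
# for a polynomial candidate x(a) = x0 + x1*a + x2*a^2 + ... evaluated
# on a training sample St = ((a[0], b[0]), .., (a[n-1], b[n-1])).

def poly(coeffs, a):
    """Evaluate x(a) = x0 + x1*a + x2*a^2 + ... via Horner's scheme."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * a + c
    return result

def f_E(coeffs, samples):
    """Sum of absolute errors over the training sample St."""
    return sum(abs(poly(coeffs, a) - b) for a, b in samples)

def f_SE(coeffs, samples):
    """Sum of squared errors over the training sample St."""
    return sum((poly(coeffs, a) - b) ** 2 for a, b in samples)
```

For samples drawn from b = 2a + 1, the exact coefficient vector (1, 2) drives both objectives to 0; every other coefficient vector is penalized, more strongly by fSE than by fE for large deviations.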

19.1 Overfitting

19.1.1 The Problem

Overfitting1 is the emergence of an overly complicated candidate solution in an optimization


process resulting from the effort to provide the best results for as much of the available
training data as possible [792, 1040, 2397, 2531].

Definition D19.1 (Overfitting). A candidate solution x̀ ∈ X optimized based on a set


of training data St is considered to be overfitted if a less complicated, alternative solution
x⋆ ∈ X exists which has a smaller (or equal) error for the set of all possible (maybe even
infinitely many), available, or (theoretically) producible data samples S ∈ S.
1 http://en.wikipedia.org/wiki/Overfitting [accessed 2007-07-03]
Hence, overfitting also stands for the creation of solutions which are not robust. Notice
that x⋆ may, however, have a larger error in the training data St than x̀. The phenomenon
of overfitting is best known and can often be encountered in the field of artificial neural
networks or in curve fitting2 [1697, 1742, 2334, 2396, 2684] as described in Example E19.1.

Example E19.2 (Example E19.1 Cont.).


There exists exactly one polynomial3 of degree n − 1 that fits such training data and passes through all of its points.4 Hence, when only polynomial regression is performed, there is exactly one perfectly fitting function of minimal degree. Nevertheless, there will also be an infinite number of polynomials with a degree higher than n − 1 that match the sample data perfectly. Such results would be considered overfitted.
In Figure 19.1, we have sketched this problem. The function x⋆ (a) = b for A = B = R
shown in Fig. 19.1.b has been sampled three times, as sketched in Fig. 19.1.a. There exists
no polynomial of degree two or less that fits these samples other than x⋆. Optimizers, however, could also find overfitted polynomials of a higher degree, such as x̀ in Fig. 19.1.c, which also match the data. Here, x̀ plays the role of the overly complicated solution which will perform as well as the simpler model x⋆ when tested with the training sets only, but will fail to deliver good results for all other input data.

Figure 19.1: Overfitting due to complexity (Fig. 19.1.a: three sample points of a linear function; Fig. 19.1.b: the optimal solution x⋆; Fig. 19.1.c: the overly complex variant x̀).

A very common cause for overfitting is noise in the sample data. As we have already pointed
out, there exists no measurement device for physical processes which delivers perfect results
without error. Surveys that represent the opinions of people on a certain topic or randomized
simulations will exhibit variations from the true interdependencies of the observed entities,
too. Hence, data samples based on measurements will always contain some noise.

Example E19.3 (Overfitting caused by Noise).


In Figure 19.2 we have sketched how such noise may lead to overfitted results. Fig. 19.2.a
illustrates a simple physical process obeying some quadratic equation. This process has been
measured using some technical equipment and the n = 100 = |S| noisy samples depicted in
Fig. 19.2.b have been obtained. Fig. 19.2.c shows a function x̀ resulting from optimization
that fits the data perfectly. It could, for instance, be a polynomial of degree 99 which goes
right through all the points and thus, has an error of zero. Hence, both fE(x̀) and fSE(x̀) from Example E19.1 would be zero. Although it is a perfect match to the measurements,
this complicated solution does not accurately represent the physical law that produced the
sample data and will not deliver precise results for new, different inputs.
2 We will discuss overfitting in conjunction with Genetic Programming-based symbolic regression in Sec-

tion 49.1 on page 531.


3 http://en.wikipedia.org/wiki/Polynomial [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Polynomial_interpolation [accessed 2008-03-01]

Figure 19.2: Fitting noise (Fig. 19.2.a: the original physical process; Fig. 19.2.b: the measured training data; Fig. 19.2.c: the overfitted result x̀).
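To make this concrete, the following sketch reproduces the effect with NumPy (the polynomial degree, noise level, and sample counts are illustrative choices, not the values of the book's experiment): a high-degree polynomial interpolating the noisy samples achieves a near-zero error on the training data, yet generalizes worse than the simple quadratic model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy measurements of a quadratic "physical process" b = a^2.
a_train = np.linspace(-1.0, 1.0, 12)
b_train = a_train**2 + rng.normal(scale=0.1, size=a_train.size)

def f_se(model, a, b):
    # The squared-error objective f_SE from the text.
    return float(np.sum((model(a) - b) ** 2))

# Overfitted candidate: one coefficient per sample, so it interpolates the noise.
x_over = np.polynomial.Polynomial.fit(a_train, b_train, deg=11)
# Simpler candidate with the structure of the true process.
x_star = np.polynomial.Polynomial.fit(a_train, b_train, deg=2)

# Fresh, noise-free inputs from the same process expose the loss of generality.
a_test = np.linspace(-1.0, 1.0, 500)
b_test = a_test**2

print(f_se(x_over, a_train, b_train))  # near zero: a perfect match to the noise
print(f_se(x_star, a_train, b_train))  # small, but larger than x_over's
print(f_se(x_over, a_test, b_test))    # much larger than x_star's test error
print(f_se(x_star, a_test, b_test))
```

The training error alone would thus prefer x̀ (here `x_over`), while the unseen test inputs reveal that the simpler model is the more robust solution.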

From the examples we can see that the major problem that results from overfitted solutions
is the loss of generality.

Definition D19.2 (Generality). A solution of an optimization process is general if it is


not only valid for the sample inputs s[0], s[1], .., s[n − 1] ∈ St which were used for training
during the optimization process, but also for different inputs s ∉ St if such inputs exist.

19.1.2 Countermeasures

There exist multiple techniques which can be utilized to prevent overfitting to a certain degree. Applying several of them together is usually the most effective way to achieve good results.

19.1.2.1 Restricting the Problem Space

A very simple approach is to restrict the problem space X in a way that only solutions up
to a given maximum complexity can be found. In terms of polynomial function fitting, this
could mean limiting the maximum degree of the polynomials to be tested, for example.

19.1.2.2 Complement Optimization Criteria

Furthermore, the functional objective functions (such as those defined in Example E19.1)
which solely concentrate on the error of the candidate solutions should be augmented by
penalty terms and non-functional criteria which favor small and simple models [792, 1510].

19.1.2.3 Using much Training Data

Large sets of sample data, although slowing down the optimization process, may improve the
generalization capabilities of the derived solutions. If arbitrarily many training datasets or
training scenarios can be generated, there are two approaches which work against overfitting:
1. The first method is to use a new set of (randomized) scenarios for each evaluation of each candidate solution. The resulting objective values then may differ significantly even if the same individual is evaluated twice in a row, introducing incoherence and ruggedness into the fitness landscape.
2. At the beginning of each iteration of the optimizer, a new set of (randomized) scenarios
is generated which is used for all individual evaluations during that iteration. This
method leads to objective values which can be compared without bias. They can be
made even more comparable if the objective functions are always normalized into some
fixed interval, say [0, 1].

In both cases it is helpful to use more than one training sample or scenario per evaluation and to set the resulting objective value to the average (or, better, the median) of the outcomes. Otherwise, the fluctuations of the objective values between the iterations will be very large, making it hard for the optimizers to follow a stable gradient for multiple steps.
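The second approach can be sketched as follows (the names and the quadratic stand-in process are hypothetical; a real setting would draw scenarios from the system being modeled): one new scenario set is drawn per iteration, shared by all evaluations in that iteration, and the objective averages over all scenarios.

```python
import random

rng = random.Random(3)

def true_process(a):
    # Hidden system being modeled; a stand-in for a real measurement source.
    return a * a

def make_scenarios(k):
    # A fresh set of k randomized, noisy training scenarios.
    scenarios = []
    for _ in range(k):
        a = rng.uniform(-1.0, 1.0)
        scenarios.append((a, true_process(a) + rng.gauss(0.0, 0.1)))
    return scenarios

def objective(model, scenarios):
    # Averaging over all scenarios keeps the values between iterations comparable.
    return sum((model(a) - b) ** 2 for a, b in scenarios) / len(scenarios)

model = lambda a: a * a  # stand-in candidate solution being evaluated

for iteration in range(3):
    scenarios = make_scenarios(25)       # one scenario set per iteration ...
    value = objective(model, scenarios)  # ... shared by all evaluations in it
    print(round(value, 4))               # fluctuates mildly around the noise level
```

With 25 scenarios per set, the per-iteration fluctuation of the objective stays small compared to using a single sample, as the paragraph above suggests.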

19.1.2.4 Limiting Runtime

Another simple method to prevent overfitting is to limit the runtime of the optimizers [2397].
It is commonly assumed that learning processes normally first find relatively general solutions
which subsequently begin to overfit because the noise “is learned”, too.

19.1.2.5 Decay of Learning Rate

For the same reason, some algorithms allow the rate at which the candidate solutions are modified to decrease over time. Such a decay of the learning rate makes overfitting less likely.

19.1.2.6 Dividing Data into Training and Test Sets

If only one finite set of data samples is available for training/optimization, it is common
practice to separate it into a set of training data St and a set of test cases Sc . During the
optimization process, only the training data is used. The resulting solutions x are tested
with the test cases afterwards. If their behavior is significantly worse when applied to Sc
than when applied to St , they are probably overfitted.
The same approach can be used to detect when the optimization process should be
stopped. The best known candidate solutions can be checked with the test cases in each
iteration without influencing their objective values which solely depend on the training data.
If their performance on the test cases begins to decrease, there are no benefits in letting the
optimization process continue any further.
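A minimal sketch of this practice, assuming polynomial models and a quadratic ground truth as in the earlier examples (the split ratio, degrees, and seed are arbitrary): the data is divided into St and Sc, the optimization (here simply least-squares fitting) only ever sees St, and Sc is used solely to judge which resulting solution generalizes best.

```python
import numpy as np

rng = np.random.default_rng(7)

# Noisy samples of an (unknown) quadratic process.
a = np.linspace(-1.0, 1.0, 40)
b = a**2 + rng.normal(scale=0.1, size=a.size)

# Separate the data into a training set S_t and a test set S_c.
idx = rng.permutation(a.size)
train, test = idx[:28], idx[28:]

def sse(p, ix):
    # Squared-error objective of polynomial coefficients p on a data subset.
    return float(np.sum((np.polyval(p, a[ix]) - b[ix]) ** 2))

# Optimize on S_t only; use S_c to pick the solution that generalizes best.
best_deg, best_err = None, float("inf")
for deg in range(0, 15):
    p = np.polyfit(a[train], b[train], deg)
    err = sse(p, test)
    if err < best_err:
        best_deg, best_err = deg, err

print(best_deg)  # a low degree near the true one; high degrees overfit S_t
```

The same loop doubles as a stopping rule: once the test error stops improving while the training error keeps shrinking, continuing the optimization brings no benefit.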

19.2 Oversimplification

19.2.1 The Problem

With the term oversimplification (or overgeneralization) we refer to the opposite of overfit-
ting. Whereas overfitting denotes the emergence of overly complicated candidate solutions,
oversimplified solutions are not complicated enough. Although they represent the training
samples used during the optimization process seemingly well, they are rough overgeneral-
izations which fail to provide good results for cases not part of the training.
A common cause for oversimplification is sketched in Figure 19.3: The training sets only
represent a fraction of the possible inputs. As this is normally the case, one should always
be aware that such an incomplete coverage may fail to represent some of the dependencies
and characteristics of the data, which then may lead to oversimplified solutions. Another
possible reason for oversimplification is that ruggedness, deceptiveness, too much neutrality,
or high epistasis in the fitness landscape may lead to premature convergence and prevent
the optimizer from surpassing a certain quality of the candidate solutions. It then cannot
adapt them completely even if the training data perfectly represents the sampled process. A
third possible cause is that a problem space could have been chosen which does not include
the correct solution.

Example E19.4 (Oversimplified Polynomial).


Fig. 19.3.a shows a cubic function. Since it is a polynomial of degree three, four sample points are needed for its unique identification. Maybe not knowing this, only three samples have been provided in Fig. 19.3.b. By doing so, some vital characteristics of the function are lost. Fig. 19.3.c depicts a quadratic function – the polynomial of the lowest degree that fits exactly to these samples. Although it is a perfect match, this function does not touch any other point on the original cubic curve and behaves totally differently in the lower parameter range.
However, even if we had included point P in our training data, it is still possible that the
optimization process could yield Fig. 19.3.c as a result. Having training data that correctly
represents the sampled system does not mean that the optimizer is able to find a correct
solution with perfect fitness – the other, previously discussed problematic phenomena can
prevent it from doing so. Furthermore, if it was not known that the system which was to be
modeled by the optimization process can best be represented by a polynomial of the third
degree, one could have limited the problem space X to polynomials of degree two and less.
Then, the result would likely again be something like Fig. 19.3.c, regardless of how many
training samples are used.

Figure 19.3: Oversimplification (Fig. 19.3.a: the “real system” and the points describing it; Fig. 19.3.b: the sampled training data; Fig. 19.3.c: the oversimplified result x̀).

19.2.2 Countermeasures

In order to counter oversimplification, its causes have to be mitigated.

19.2.2.1 Using many Training Cases

It is not always possible to have training scenarios which cover the complete input space A. If, for instance, a program which takes a 32-bit integer number as input is synthesized with Genetic Programming, we would need 2^32 = 4 294 967 296 samples. By using multiple scenarios for each
individual evaluation, the chance of missing important aspects is decreased. These scenarios
can be replaced with new, randomly created ones in each generation, in order to decrease
this chance even more.
19.2.2.2 Using a larger Problem Space

The problem space, i. e., the representation of the candidate solutions, should further be
chosen in a way which allows constructing a correct solution to the problem defined. Then
again, releasing too many constraints on the solution structure increases the risk of overfit-
ting and thus, careful proceeding is recommended.
Chapter 20
Dimensionality (Objective Functions)

20.1 Introduction

The most common way to define optima in multi-objective problems (MOPs, see Defini-
tion D3.6 on page 55) is to use the Pareto domination relation discussed in Section 3.3.5
on page 65. A candidate solution x1 ∈ X is said to dominate another candidate solution
x2 ∈ X (x1 ⊣ x2 ) if and only if its corresponding vector of objective values f (x1 ) is (partially) less than the one of x2 , i. e., ∀i ∈ 1..n : fi (x1 ) ≤ fi (x2 ) and ∃i ∈ 1..n : fi (x1 ) < fi (x2 )
(in minimization problems, see Definition D3.13). The domination rank #dom(x, X) of a
candidate solution x ∈ X is the number of elements in a reference set X ⊆ X it is dominated
by (Definition D3.16), a non-dominated element (relative to X) has a domination rank of
#dom(x, X) = 0, and the solutions considered to be global optima are non-dominated for
the whole problem space X (#dom(x, X) = 0, Definition D3.14). These are the elements we
would like to find with optimization or, in the case of probabilistic metaheuristics, at least
wish to approximate as closely as possible.
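The domination relation and the domination rank translate directly into code; a minimal sketch for the minimization case (the sample objective vectors are made up for illustration):

```python
from typing import Sequence

def dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    # x1 ⊣ x2: f1 is no worse in every objective and strictly better in at
    # least one (minimization).
    return all(a <= b for a, b in zip(f1, f2)) and \
           any(a < b for a, b in zip(f1, f2))

def domination_rank(f, reference):
    # #dom(x, X): the number of elements of the reference set that dominate x.
    return sum(1 for g in reference if dominates(g, f))

pop = [(1.0, 2.0), (2.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
print([domination_rank(f, pop) for f in pop])  # → [0, 0, 2, 3]
```

The first two vectors are mutually non-dominated (rank 0) and would be the candidates an optimizer keeps, while (3.0, 3.0) is dominated by every other element.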

Definition D20.1 (Dimension of MOP). We refer to the number n = |f | of objective


functions f1 , f2 , .., fn ∈ f of an optimization problem as its dimension (or dimensionality).

Many studies in the literature consider mainly bi-objective MOPs [1237, 1530]. As a consequence, many algorithms are designed to deal with that kind of problem. However, MOPs
having a higher number of objective functions are common in practice – sometimes the
number of objectives reaches double figures [599] – leading to the so-called many-objective
optimization [2228, 2915]. This term has been coined by the Operations Research commu-
nity to denote problems with more than two or three objective functions [894, 3099]. It is
currently a hot research topic with the first conference on evolutionary multi-objective opti-
mization held in 2001 [3099]. Table 20.1 gives some examples for many-objective situations.

Table 20.1: Applications and Examples of Many-Objective Optimization.


Area References
Combinatorial Problems [1417, 1437]
Computer Graphics [1002, 1577, 1893, 2060]
Data Mining [2213]
Decision Making [1893]
Software [2213]

20.2 The Problem: Many-Objective Optimization

When the dimension of the MOPs increases, the majority of candidate solutions are non-
dominated. As a consequence, traditional nature-inspired algorithms have to be redesigned:
According to Hughes [1303] and on the basis of the results provided by Purshouse and Fleming [2231], it can be assumed that purely Pareto-based optimization approaches (maybe with added diversity preservation methods) will not perform well in problems with four or more
objectives. Results from the application of such algorithms (NSGA-II [749], SPEA-2 [3100],
PESA [639]) to two or three objectives cannot simply be extrapolated to higher numbers
of optimization criteria [1530]. Kang et al. [1491] consider Pareto optimality as unfair
and imperfect in many-objective problems. Hughes [1303] indicates that the following two
hypotheses may hold:

1. An optimizer that produces an entire Pareto set in one run is better than generating the
Pareto set through many single-objective optimizations using an aggregation approach
such as weighted min-max (see Section 3.3.4) if the number of objective function
evaluations is fixed.

2. Optimizers that use Pareto ranking based methods to sort the population will be very
effective for small numbers of objectives, but not perform as effectively for many-
objective optimization in comparison with methods based on other approaches.

The results of Jaszkiewicz [1437] and Ishibuchi et al. [1418] further demonstrate the degener-
ation of the performance of the traditional multi-objective metaheuristics in many-objective
problems in comparison with single-objective approaches. Ikeda et al. [1400] show that various elements far away from the true Pareto frontier may survive as hardly-dominated solutions, which leads to a decrease in the probability of producing new candidate solutions that dominate existing ones. This phenomenon is called dominance resistance. Deb et al. [750]
recognize this problem of redundant solutions and incorporate an example function provok-
ing it into their test function suite for real-valued, multi-objective optimization.
In addition to these algorithm-side limitations, Bouyssou [374] suggests that, based on, for instance, the limited capabilities of the human mind [1900], an operator will not be able to make efficient decisions if more than a dozen objectives are involved. Visualizing the
solutions in a human-understandable way becomes more complex with the rising number of
dimensions, too [1420].

Example E20.1 (Non-Dominated Elements in Random Samples).


Deb [740] showed that the number of non-dominated elements in random samples increases
quickly with the dimension. The hyper-surface of the Pareto frontier may increase exponen-
tially with the number of objective functions [1420]. Like Ishibuchi et al. [1420], we want to
illustrate this issue with an experiment.
Imagine that a population-based optimization approach was used to solve a many-
objective problem with n = |f | objective functions. At startup, the algorithm will fill
the initial population pop with ps randomly created individuals p. If we assume the problem
space X to be continuous and the creation process sufficiently random, no two individuals
in pop will be alike. If the objective functions are locally invertible and sufficiently independent, we can assume that there will be no two candidate solutions in pop for which any of the objectives takes on the same value.
Figure 20.1: Approximated distribution of the domination rank in random samples for several population sizes (Fig. 20.1.a: ps = 5; Fig. 20.1.b: ps = 10; Fig. 20.1.c: ps = 50; Fig. 20.1.d: ps = 100; Fig. 20.1.e: ps = 500; Fig. 20.1.f: ps = 1000; Fig. 20.1.g: ps = 3000; each panel plots P( #dom = k | ps, n) over the rank k and the number of objectives n).
The distribution P ( #dom = k| ps, n) of the probability that the domination rank “#dom”
of a randomly selected individual from the initial, randomly created population is exactly
k ∈ 0..ps − 1 depends on the population size ps and the number of objective functions n. We
have approximated this probability distribution with experiments where we determined the
domination rank of ps n-dimensional vectors where each element is drawn from a uniform
distribution for several values of n and ps, spanning from n = 2 to n = 50 and ps = 3 to
ps = 3600. (For n = 1, no experiments were performed since there, each domination rank from 0 to ps − 1 occurs exactly once in the population.)
Figure 20.1 and Figure 20.2 illustrate the results of these experiments, the arithmetic
means of 100 000 runs for each configuration. In Figure 20.1, the approximated distribu-
tion of P ( #dom = k| ps, n) is illustrated for various n. The value of P ( #dom = 0| ps, n)
quickly approaches one for rising n. We limited the z-axis to 0.3 so the behavior of the
other ranks k > 0, which quickly becomes negligible, can still be seen. For fixed n and
ps, P ( #dom = k| ps, n) drops rapidly with k, its shape roughly resembling an exponential distribution (see Section 53.4.3). For a fixed k and ps, P ( #dom = k| ps, n) takes on roughly
the shape of a Poisson distribution (see Section 53.3.2) along the n-axis, i. e., it rises for
some time and then falls again. With rising ps, the maximum of this distribution shifts
to the right, meaning that an algorithm with a higher population size may be able to deal
with more objective functions. Still, even for ps = 3000, around 64% of the population are
already non-dominated in this experiment.
Figure 20.2: The proportion of non-dominated candidate solutions P( #dom = 0 | ps, n) for several population sizes ps and dimensionalities n.

The fraction of non-dominated elements in the random populations is illustrated in Figure 20.2. We find that it again seems to rise exponentially with n. The population size ps in Figure 20.2 seems to have only an approximately logarithmic positive influence: if we list
the population sizes required to keep the fraction of non-dominated candidate solutions at the
same level of P ( #dom = 0| ps = 5, n = 2) ≈ 0.457, we find P ( #dom = 0| ps = 12, n = 3) ≈
0.466, P ( #dom = 0| ps = 35, n = 4) ≈ 0.447, P ( #dom = 0| ps = 90, n = 5) ≈ 0.455,
P ( #dom = 0| ps = 250, n = 6) ≈ 0.452, P ( #dom = 0| ps = 650, n = 7) ≈ 0.460, and
P ( #dom = 0| ps = 1800, n = 8) ≈ 0.459. An extremely coarse rule of thumb for this ex-
periment would hence be that around 0.6en individuals are required in the population to
hold the proportion of non-dominated at around 46%.
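A small-scale version of this experiment can be reproduced in a few lines (the run count and seed here are arbitrary and far below the 100 000 runs per configuration used above):

```python
import random

def dominates(f1, f2):
    # Pareto domination for minimization.
    return all(x <= y for x, y in zip(f1, f2)) and \
           any(x < y for x, y in zip(f1, f2))

def nondominated_fraction(ps, n, runs=200, seed=0):
    # Monte Carlo estimate of P(#dom = 0 | ps, n) for a population of ps
    # uniformly random n-dimensional objective vectors.
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        pop = [tuple(rng.random() for _ in range(n)) for _ in range(ps)]
        total += sum(1 for f in pop if not any(dominates(g, f) for g in pop))
    return total / (runs * ps)

# The fraction of non-dominated individuals rises quickly with n:
for n in (2, 4, 8):
    print(n, round(nondominated_fraction(5, n), 2))
```

For ps = 5 and n = 2, the estimate lands near the value P( #dom = 0 | ps = 5, n = 2) ≈ 0.457 reported above, and it approaches one as n grows.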
To sum things up, the increasing dimensionality of the objective space leads to three main
problems [1420]:

1. The performance of traditional approaches based solely on Pareto comparisons deteriorates.
2. The utility of the solutions cannot be understood by the human operator anymore.
3. The number of possible Pareto-optimal solutions may increase exponentially.

Finally, it should be noted that there also exists interesting research where objectives are added to a single-objective problem to make it easier [1553, 2025], which you can find summarized in Section 13.4.6. Indeed, Brockhoff et al. [425] showed that in some cases adding objective functions may be beneficial (while in others it is not).

20.3 Countermeasures

Various countermeasures have been proposed against the problem of dimensionality. Nice surveys on contemporary evolutionary approaches to many-objective optimization, on the difficulties arising in many-objective optimization, and on benchmark problems have been provided by Hiroyasu et al. [1237], Ishibuchi et al. [1420], and López Jaimes and Coello Coello [1775]. In the following, we list some of the approaches for many-objective optimization, mostly utilizing the information provided in [1237, 1420].

20.3.1 Increasing Population Size

The most trivial measure, however, has not been introduced in these studies: increasing
the population size. Well, from Example E20.1 we know that this will work only for a
few objective functions and we have to throw in exponentially more individuals in order
to fight the influence of many objectives. For five or six objective functions, we may still
be able to obtain good results if we do this – for the price of many additional objective
function evaluations. If we follow the rough approximation equation from the example, it
would already take a population size of around ps = 100 000 in order to keep the fraction of
non-dominated candidate solutions below 0.5 (in the scenario used there, that is). Hence,
increasing the population size can only get us so far.

20.3.2 Increasing Selection Pressure

A very simple idea for improving the way multi-objective approaches scale with increasing
dimensionality is to increase the exploration pressure into the direction of the Pareto frontier.
Ishibuchi et al. [1420] distinguish approaches which modify the definition of domination in order to reduce the number of non-dominated candidate solutions in the population [2403] and methods which assign different ranks to non-dominated solutions [636, 829, 830, 1576, 1635, 2629]. Farina and Amato [894] propose to rely on fuzzy Pareto methods instead of pure Pareto comparisons.

20.3.3 Indicator Function-based Approaches

Ishibuchi et al. [1420] point further out that fitness assignment methods could be used which
are not based on Pareto dominance. According to them, one approach is using indicator
functions such as those involving hypervolume metrics [1419, 2838].
20.3.4 Scalarizing Approaches

Another fitness assignment method is the use of scalarizing functions [1420], which treat many-objective problems with single-objective-style methods. Advanced single-objective optimization techniques like those developed by Hughes [1303] could be used instead of Pareto comparisons. Wagner et al. [2837, 2838] support the results and assumptions of Hughes [1303] in terms of single-objective methods performing better in many-objective problems than traditional multi-objective Evolutionary Algorithms such as NSGA-II [749] or SPEA 2 [3100], but also refute the general idea that no Pareto-based method can match their performance. They also show that more recent approaches [880, 2010, 3096] which favor more than just distribution aspects can perform very well. Other scalarizing methods can be found in [1304, 1416, 1417, 1437].

20.3.5 Limiting the Search Area in the Objective Space

Hiroyasu et al. [1237] point out that the search area in the objective space can be lim-
ited. This leads to a decrease of the number of non-dominated points [1420] and can be
achieved by either incorporating preference information [746, 933, 2695] or by reducing the
dimensionality [422–424, 745, 2411].

20.3.6 Visualization Methods

Approaches for visualizing solutions of many-objective problems in order to make them


more comprehensible have been provided by Furuhashi and Yoshikawa [1002], Köppen and
Yoshida [1577], Obayashi and Sasaki [2060] and in [1893].
Chapter 21
Scale (Decision Variables)

In Chapter 20, we discussed how an increasing number of objective functions can threaten
the performance of optimization algorithms. We referred to this problem as dimensionality,
i. e., the dimension of the objective space. However, there is another space in optimization
whose dimension can make an optimization problem difficult: the search space G. The
“curse of dimensionality” 1 of the search space stands for the exponential increase of its
volume with the number of decision variables [260, 261]. In order to better distinguish
between the dimensionality of the search and the objective space, we will refer to the former
one as scale.
In Example E21.1, we give an example for the growth of the search space with the scale
of a problem and in Table 21.1, some large-scale optimization problems are listed.

Example E21.1 (Small-scale versus large-scale).


The difference between small-scale and large-scale problems can best be illustrated using discrete or continuous vector-based search spaces as an example. If we search, for instance, in the natural interval G = (1..10), there are ten points which could be the optimal solution. When G = (1..10)^2, there exist one hundred possible results and for G = (1..10)^3, it is already one thousand. In other words, if the number of points in G is n, then the number of points in G^m is n^m.

Table 21.1: Applications and Examples of Large-Scale Optimization.


Area References
Data Mining [162, 994, 1748]
Databases [162]
Distributed Algorithms and Sys- [1748, 3031]
tems
Engineering [2860]
Function Optimization [551, 1927, 2130, 2486, 3020]
Graph Theory [3031]
Software [3031]

21.1 The Problem


Especially in combinatorial optimization, the issue of scale has been considered for a long
time. Here, the computational complexity (see Section 12.1) denotes how many algorithm
1 http://en.wikipedia.org/wiki/Curse_of_dimensionality [accessed 2009-10-20]
steps are needed to solve a problem which consists of n parameters. As sketched in Fig-
ure 12.1 on page 145, the time requirements to solve problems which require step numbers
growing exponentially with n quickly exceed any feasible bound. Here, especially N P-hard
problems should be mentioned, since they always exhibit this feature.
Besides combinatorial problems, numerical problems suffer the curse of dimensionality
as well.

21.2 Countermeasures

21.2.1 Parallelization and Distribution

When facing large-scale problems, the problematic symptom is the high runtime requirement. Thus, any measure to use as much computational power as available can be a remedy. Obviously, there (currently) exists no way to solve large-scale N P-hard problems exactly within feasible time. With more computers, cores, or hardware, at most a linear improvement of runtime can be achieved. However, for problems residing in the gray area between
feasible and infeasible, distributed computing [1747] may be the method of choice.

21.2.2 Genetic Representation and Development

The best way to solve a large-scale problem is to solve it as a small-scale problem. For some
optimization tasks, it is possible to choose a search space which has a smaller size than
the problem space (|G| < |X|). Indirect genotype-phenotype mappings can link the spaces
together, as we will discuss later in Section 28.2.2.
Here, one option is to use generative mappings (see Section 28.2.2.1) which construct a complex phenotype step by step by translating a genotype according to some rules. Grammatical
Evolution, for instance, unfolds a symbol according to a BNF grammar with rules identified
in a genotype. This recursive process can basically lead to arbitrarily complex phenotypes.
Another approach is to apply a developmental, ontogenic mapping (see Section 28.2.2.2)
that uses feedback from simulations or objective functions in this process.

Example E21.2 (Dimension Reduction via Developmental Mapping).


Instead of optimizing (maybe up to a million) points on the surface of an airplane’s wing
(see Example E2.5 and E4.1) in order to find an optimal shape, the weights of an artificial
neural network used to move surface points to a beneficial configuration could be optimized.
The second task would involve significantly fewer variables, since an artificial neural network
with usually less than 1000 or even 100 weights could be applied. Of course, the number
of possible solutions which can be discovered by this approach decreases (it is at most |G|).
However, the problem can become tangible and good results may be obtained in reasonable
time.

21.2.3 Exploiting Separability

Sometimes, parts of candidate solutions are independent from each other and can be opti-
mized more or less separately. In such a case of low epistasis (see Chapter 17), a large-scale
problem can be divided into several problems of smaller-scale which can be optimized sepa-
rately. If solving a problem of scale n takes 2n algorithm steps, solving two problems of scale
0.5n will clearly lead to a great runtime reduction. Such a reduction may even be worth
sacrificing some solution quality. If the optimization problem at hand exhibits low epistasis
or is separable, such a sacrifice may even be unnecessary.
Coevolution has been shown to be an efficient approach in combinatorial optimization [1311].
If extended with a cooperative component, i. e., to Cooperative Coevolution [2210, 2211],
it can efficiently exploit separability in numerical problems and lead to better results [551,
3020].
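A toy illustration of this idea (the separable objective function and its optimum are made up): an exhaustive joint search over G = (1..10)^2 costs |G|^2 = 100 objective evaluations, while exploiting separability reduces this to 2|G| = 20.

```python
def f(x):
    # A separable objective: f(x) = f1(x1) + f2(x2), the parts are independent.
    return (x[0] - 3) ** 2 + (x[1] - 7) ** 2

domain = range(1, 11)  # the interval 1..10 from Example E21.1

# Joint exhaustive search: 10 * 10 = 100 objective evaluations.
best_joint = min(((a, b) for a in domain for b in domain), key=f)

# Separate searches, exploiting separability: only 10 + 10 evaluations.
best_x1 = min(domain, key=lambda a: (a - 3) ** 2)
best_x2 = min(domain, key=lambda b: (b - 7) ** 2)

print(best_joint)          # → (3, 7)
print((best_x1, best_x2))  # → (3, 7)
```

Both searches find the same optimum, but the decomposed variant scales linearly instead of exponentially with the number of decision variables – exactly the saving described above, provided epistasis between the parts is low.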

21.2.4 Combination of Techniques

Generally, it can be a good idea to concurrently use sets of algorithms [1689] or portfolios [2161] working on the same or different populations. This way, the different strengths
of the optimization methods can be combined. In the beginning, for instance, an algorithm
with good convergence speed may be granted more runtime. Later, the focus can shift
towards methods which can retain diversity and are not prone to premature convergence.
Alternatively, a sequential approach can be performed, which starts with one algorithm and
switches to another one when no further improvements are found [? ]. This way, first an
interesting area in the search space can be found which then can be investigated in more
detail.
Chapter 22
Dynamically Changing Fitness Landscape

It should also be mentioned that there exist problems with dynamically changing fitness
landscapes [396, 397, 400, 1957, 2302]. The task of an optimization algorithm is then to
provide candidate solutions with momentarily optimal objective values for each point in time.
Here we have the problem that an optimum in iteration t will possibly not be an optimum in
iteration t + 1 anymore. Problems with dynamic characteristics can, for example, be tackled
with special forms [3019] of

1. Evolutionary Algorithms [123, 398, 1955, 1956, 2723, 2941],


2. Genetic Algorithms [1070, 1544, 1948–1950],
3. Particle Swarm Optimization [316, 498, 499, 1728, 2118],
4. Differential Evolution [1866, 2988], and
5. Ant Colony Optimization [1148, 1149, 1730].

The moving peaks benchmarks by Branke [396, 397] and Morrison and De Jong [1957] are
good examples for dynamically changing fitness landscapes. You can find them discussed
in ?? on page ??. In Table 22.1, we give some examples for applications of dynamic opti-
mization.

Table 22.1: Applications and Examples of Dynamic Optimization.


Area                               References
Chemistry                          [1909, 2731, 3007]
Combinatorial Problems             [400, 644, 1148, 1446, 1486, 1730, 2120, 2144, 2499]
Cooperation and Teamwork           [1996, 1997, 2424, 2502]
Data Mining                        [3018]
Distributed Algorithms and Systems [353, 786, 962, 964–966, 1401, 1733, 1840, 1909, 1935, 1982, 1995, 2387, 2425, 2694, 2731, 2791, 3007, 3032, 3106]
Engineering                        [556, 644, 786, 1486, 1733, 1840, 1909, 1982, 2144, 2425, 2499, 2694, 2731, 3007]
Graph Theory                       [353, 556, 644, 786, 962, 964–966, 1401, 1486, 1733, 1840, 1909, 1935, 1982, 1995, 2144, 2387, 2425, 2502, 2694, 2731, 2791, 3032, 3106]
Logistics                          [1148, 1446, 1730, 2120]
Motor Control                      [489]
Multi-Agent Systems                [2144, 2925]
Security                           [1486]
Software                           [1401, 1486, 1995, 3032]
Wireless Communication             [556, 644, 1486, 2144, 2502]
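The core idea of a dynamically changing fitness landscape can be sketched with a minimal moving-peaks-style class. This is a simplified illustration in the spirit of the benchmarks by Branke and by Morrison and De Jong; the parameter names and default values are ours, not those of the original benchmark implementations.

```python
import random

class MovingPeaks:
    """Minimal sketch of a moving-peaks landscape: cone-shaped peaks
    whose centers drift at every time step, so the optimum of
    iteration t may no longer be the optimum of iteration t + 1."""
    def __init__(self, n_peaks=3, dim=2, shift=0.5, seed=42):
        self.rng = random.Random(seed)
        self.dim, self.shift = dim, shift
        self.centers = [[self.rng.uniform(0, 10) for _ in range(dim)]
                        for _ in range(n_peaks)]
        self.heights = [self.rng.uniform(3, 10) for _ in range(n_peaks)]

    def value(self, x):
        # objective: height of the closest peak minus the distance to it
        return max(h - sum((xi - ci) ** 2
                           for xi, ci in zip(x, c)) ** 0.5
                   for c, h in zip(self.centers, self.heights))

    def step(self):
        # each time step, every peak center drifts by a random offset
        for c in self.centers:
            for i in range(self.dim):
                c[i] += self.rng.uniform(-self.shift, self.shift)

landscape = MovingPeaks()
before = landscape.value([5.0, 5.0])
landscape.step()                # the landscape has changed ...
after = landscape.value([5.0, 5.0])   # ... so the same point scores differently
```

An optimizer applied to such a problem must track the moving optimum over time instead of converging once and stopping.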
Chapter 23
The No Free Lunch Theorem

By now, we know the most important difficulties that can be encountered when applying an
optimization algorithm to a given problem. Furthermore, we have seen that it is arguable
what an optimum actually is if multiple criteria are optimized at once. The fact that there
is most likely no optimization method that can outperform all others on all problems can,
thus, easily be accepted. Instead, there exists a variety of optimization methods specialized
in solving different types of problems. There are also algorithms which deliver good results
for many different problem classes, but may be outperformed by highly specialized methods
in each of them. These facts have been formalized by Wolpert and Macready [2963, 2964]
in their No Free Lunch Theorems1 (NFL) for search and optimization algorithms.

23.1 Initial Definitions


Wolpert and Macready [2964] consider single-objective optimization over a finite domain and define an optimization problem φ(g) ≡ f(gpm(g)) as a mapping of a search space G to the objective space R.2 Since this definition subsumes the problem space and the genotype-phenotype mapping, only skipping the possible search operations, it is very similar to our Definition D6.1 on page 103. They further call a time-ordered set d_m of m distinct visited points in G × R a “sample” of size m and write d_m ≡ {(d_m^g(1), d_m^y(1)), (d_m^g(2), d_m^y(2)), ..., (d_m^g(m), d_m^y(m))}. d_m^g(i) is the genotype and d_m^y(i) the corresponding objective value visited at time step i. Then, the set D_m = (G × R)^m is the space of all possible samples of length m and D = ∪_{m≥0} D_m is the set of all samples of arbitrary size.
An optimization algorithm a can now be considered to be a mapping of the previously visited points in the search space (i. e., a sample) to the next point to be visited. Formally, this means a : D ↦ G. Without loss of generality, Wolpert and Macready [2964] only regard unique visits and thus define a : d ∈ D ↦ g : g ∉ d.
Performance measures Ψ can be defined independently of the optimization algorithms, based only on the values of the objective function visited in the samples d_m. If the objective function is subject to minimization, Ψ(d_m^y) = min {d_m^y(i) : i = 1..m} would be an appropriate measure.
Often, only parts of the optimization problem φ are known. If the minima of the objective
function f were already identified beforehand, for instance, its optimization would be useless.
Since the behavior in wide areas of φ is not obvious, it makes sense to define a probability
P (φ) that we are actually dealing with φ and no other problem. Wolpert and Macready
[2964] use the handy example of the Traveling Salesman Problem in order to illustrate this
1 http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization [accessed 2008-

03-28]
2 Notice that we have partly utilized our own notations here in order to be consistent throughout the

book.
issue. Each distinct TSP produces a different structure of φ. Yet, we would use the same
optimization algorithm a for all problems of this class without knowing the exact shape of φ.
This corresponds to the assumption that there is a set of very similar optimization problems
which we may encounter here although their exact structure is not known. We act as if there
was a probability distribution over all possible problems which is non-zero for the TSP-like
ones and zero for all others.

23.2 The Theorem


The performance of an algorithm a iterated m times on an optimization problem φ can then be defined as P(d_m^y | φ, m, a), i. e., the conditional probability of finding a particular sample d_m^y. Notice that this measure is very similar to the value of the problem landscape Φ(x, τ) introduced in Definition D5.2 on page 95, which is the cumulative probability that the optimizer has visited the element x ∈ X until (inclusively) the τth evaluation of the objective function(s).
Wolpert and Macready [2964] prove that the sum of such probabilities over all possible optimization problems φ on finite domains is always identical for all optimization algorithms. For two optimizers a1 and a2, this means that

    Σ_{∀φ} P(d_m^y | φ, m, a1) = Σ_{∀φ} P(d_m^y | φ, m, a2)    (23.1)

Hence, the average over all φ of P(d_m^y | φ, m, a) is independent of a.
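The theorem can be checked exhaustively on a toy-sized domain. The sketch below (our own construction, not from the original paper) enumerates all 16 objective functions f : {0,1,2,3} → {0,1}, runs two different deterministic non-revisiting algorithms for m = 2 steps each, and confirms that the histograms of the resulting samples d_m^y over all problems are identical:

```python
from itertools import product
from collections import Counter

X = range(4)          # finite search space G
Y = (0, 1)            # finite objective space

def run(algorithm, f, m):
    """Trace m distinct evaluations; `algorithm` maps the sample
    gathered so far to the next unvisited point."""
    sample = []
    for _ in range(m):
        g = algorithm(sample)
        sample.append((g, f[g]))
    return tuple(y for _, y in sample)

def a1(sample):
    # fixed-order enumeration: always the leftmost unvisited point
    visited = {g for g, _ in sample}
    return next(g for g in X if g not in visited)

def a2(sample):
    # adaptive rule: after seeing the best value 0, jump to the highest
    # unvisited index; otherwise scan from the left
    visited = {g for g, _ in sample}
    order = X if not (sample and sample[-1][1] == 0) else reversed(X)
    return next(g for g in order if g not in visited)

m = 2
h1 = Counter(run(a1, dict(enumerate(f)), m) for f in product(Y, repeat=len(X)))
h2 = Counter(run(a2, dict(enumerate(f)), m) for f in product(Y, repeat=len(X)))
assert h1 == h2   # identical sample distributions over all 16 problems
```

Although a1 and a2 behave quite differently on individual problems, averaged over the whole problem set they are indistinguishable, exactly as Equation 23.1 states.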

23.3 Implications
From this theorem, we can immediately follow that, in order to outperform a1 in one opti-
mization problem, a2 will necessarily perform worse in another. Figure 23.1 visualizes this
issue. It shows that general optimization approaches like Evolutionary Algorithms can solve
a variety of problem classes with reasonable performance. In this figure, we have chosen a
performance measure Φ subject to maximization, i. e., the higher its values, the faster the
problem will be solved. Hill Climbing approaches, for instance, will be much faster than
Evolutionary Algorithms if the objective functions are smooth and monotonic, that is, on a
smaller set of optimization tasks. Greedy search methods will perform fast on all problems
with matroid3 structure. Evolutionary Algorithms will most often still be able to solve these
problems, it just takes them longer to do so. The performance of Hill Climbing and greedy
approaches degenerates in other classes of optimization tasks as a trade-off for their high
utility in their “area of expertise”.
One interpretation of the No Free Lunch Theorem is that it is impossible for any opti-
mization algorithm to outperform random walks or exhaustive enumerations on all possible
problems. For every problem where a given method leads to good results, we can construct
a problem where the same method has exactly the opposite effect (see Chapter 15). As
a matter of fact, doing so is even a common practice to find weaknesses of optimization
algorithms and to compare them with each other, see Section 50.2.6, for example.

3 http://en.wikipedia.org/wiki/Matroid [accessed 2008-03-28]



[Figure: a very crude sketch plotting performance over all possible optimization problems for a random walk or exhaustive enumeration, a general optimization algorithm (an EA, for instance), and two specialized optimization algorithms (a hill climber and a depth-first search, for instance).]
Figure 23.1: A visualization of the No Free Lunch Theorem.

Example E23.1 (Fitness Landscapes for Minimization and the NFL).


Let us assume that the Hill Climbing algorithm is applied to the objective functions given

[Figure: two one-dimensional fitness landscapes plotting objective values f(x) over x, divided into sections I–III and IV–V, with arrows marking where following the downward gradient is the right or the wrong decision. Fig. 23.2.a: A landscape good for Hill Climbing. Fig. 23.2.b: A landscape bad for Hill Climbing.]

Figure 23.2: Fitness Landscapes and the NFL.

in Figure 23.2 which are both subject to minimization. We divided both fitness landscapes
into sections labeled with roman numerals. A Hill Climbing algorithm will generate points
in the proximity of the currently best solution and move to these points if they are
better. Hence, in both problems it will always follow the downward gradient in every
section. In Fig. 23.2.a, this is the right decision in sections II and III because the global
optimum is located at the border between them, but wrong in the deceptive section I. In
Fig. 23.2.b, exactly the opposite is the case: Here, a gradient descent is only a good idea
in section IV (which corresponds to I in Fig. 23.2.a), whereas it leads away from the global
optimum in section V, which has the same extent as sections II and III in Fig. 23.2.a.
Hence, our Hill Climbing algorithm would exhibit very different performance when applied to
Fig. 23.2.a and Fig. 23.2.b. Depending on the search operations available, it may even be
outpaced by a random walk on Fig. 23.2.b on average. Since we may not know the nature of
the objective functions before the optimization starts, this may become problematic. However, Fig. 23.2.b is deceptive in the whole section V, i. e., most of its problem space, which
is not the case to such an extent in many practical problems.
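The contrast described in this example can be reproduced with a few lines of code. The two discrete landscapes below are our own stand-ins for Fig. 23.2.a and Fig. 23.2.b: one is a single basin, the other hides the global optimum behind a barrier so that the gradient misleads the search.

```python
def hill_climb(f, start, steps=200):
    """±1 neighborhood hill climber on {0, ..., 99}, minimization."""
    x = start
    for _ in range(steps):
        nbrs = [n for n in (x - 1, x + 1) if 0 <= n <= 99]
        best = min(nbrs, key=f)
        if f(best) < f(x):
            x = best
        else:
            break                  # local optimum reached
    return x

# Landscape good for hill climbing: one basin, the gradient points home.
f_easy = lambda x: abs(x - 70)

# Deceptive landscape: the gradient leads to a local optimum at x = 89,
# while the global optimum at x = 99 hides behind a high barrier.
def f_dec(x):
    if x == 99:
        return -200
    if x >= 90:
        return 50
    return -x

assert hill_climb(f_easy, 10) == 70
assert hill_climb(f_dec, 10) == 89   # trapped, never sees x = 99
```

On the first landscape the hill climber always succeeds; on the second it reliably converges to the deceptive local optimum, which a random walk would at least occasionally escape.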

Another interpretation is that every useful optimization algorithm utilizes some form of
problem-specific knowledge. Radcliffe [2244] basically states that without such knowledge,
search algorithms cannot exceed the performance of simple enumerations. Incorporating
knowledge starts with relying on simple assumptions like “if x is a good candidate solution,
then we can expect other good candidate solutions in its vicinity”, i. e., strong causality.
The more correct problem-specific knowledge is integrated into the algorithm
structure, the better the algorithm will perform. On the other hand, knowledge correct
for one class of problems is, quite possibly, misleading for another class. In reality, we use
optimizers to solve a given set of problems and are not interested in their performance when
(wrongly) applied to other classes.

Example E23.2 (Arbitrary Landscapes and Exhaustive Enumeration).


An even simpler thought experiment than Example E23.1 on optimization thus goes as
follows: If we consider an optimization problem of truly arbitrary structure, then any element
in X could be the global optimum. In an arbitrary (or random) fitness landscape where the
objective value of the global optimum is unknown, it is impossible to be sure that the best
candidate solution found so far is the global optimum until all candidate solutions have
been evaluated. In this case, a complete enumeration or uninformed search is the optimal
approach. In most randomized search procedures, we cannot fully prevent that a certain
candidate solution is discovered and evaluated more than once, at least without keeping all
previously discovered elements in memory which is infeasible in most cases. Hence, especially
when approaching full coverage of the problem space, these methods will waste objective
function evaluations and thus, become inferior.
The number of arbitrary fitness landscapes should be at least as high as the number
of “optimization-favorable” ones: Assuming a favorable landscape with a single global op-
timum, there should be at least one position to which the optimum could be moved in
order to create a fitness landscape where any informed search is outperformed by exhaustive
enumeration or a random walk.
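The waste caused by revisits can be quantified with the classic coupon-collector argument: blind uniform sampling with replacement needs about n·H_n ≈ n ln n evaluations to cover all n points, whereas an exhaustive enumeration needs exactly n. The following simulation (our own illustration, with arbitrarily chosen n and trial count) makes this concrete:

```python
import random

def evals_until_full_coverage(n, rng):
    """Sample uniformly with replacement until every point has been
    seen; return the number of objective-function evaluations spent."""
    seen, evals = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        evals += 1
    return evals

rng = random.Random(7)
n = 100
trials = [evals_until_full_coverage(n, rng) for _ in range(200)]
avg = sum(trials) / len(trials)
# exhaustive enumeration needs exactly n = 100 evaluations; blind
# random sampling needs about n * H_n, i.e., roughly five times more
```

For n = 100 the average lands near 100·H_100 ≈ 519 evaluations, illustrating why memoryless random search becomes inferior when approaching full coverage of the problem space.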

The rough meaning of the NFL is that all black-box optimization methods perform equally
well over the complete set of all optimization problems [2074]. In practice, we do not want
to apply an optimizer to all possible problems but to only some, restricted classes. In terms
of these classes, we can make statements about which optimizer performs better.
Today, there exists a wide range of work on No Free Lunch Theorems for many different
aspects of machine learning. The website http://www.no-free-lunch.org/4 gives
a good overview about them. Further summaries, extensions, and criticisms have been
provided by Köppen et al. [1578], Droste et al. [837, 838, 839, 840], Oltean [2074], and
Igel and Toussaint [1397, 1398]. Radcliffe and Surry [2247] discuss the NFL in the context
of Evolutionary Algorithms and the representations used as search spaces. The No Free
Lunch Theorem is furthermore closely related to the Ugly Duckling Theorem5 proposed by
Watanabe [2868] for classification and pattern recognition.
4 accessed: 2008-03-28
5 http://en.wikipedia.org/wiki/Ugly_duckling_theorem [accessed 2008-08-22]
23.4 Infinite and Continuous Domains
Recently, Auger and Teytaud [145, 146] showed that the No Free Lunch Theorem holds only
in a weaker form for countably infinite domains. For continuous domains (i. e., uncountably
infinite ones), it does not hold. This means that, for a given performance measure and a limit
on the maximally allowed objective function evaluations, it can be possible to construct an
optimal algorithm for a certain family of objective functions [145, 146].
A factor that should be considered when thinking about these issues is that real
computers can only work with finite search spaces, since they have finite memory.
In the end, even floating point numbers are finite in value and precision. Hence, there is
always a difference between the behavior of the algorithmic, mathematical definition of an
optimization method and its practical implementation for a given computing environment.
Unfortunately, the author of this book finds himself unable to state with certainty whether
this means that the NFL does hold for realistic continuous optimization or whether the
proofs by Auger and Teytaud [145, 146] carry over to actual programs.

23.5 Conclusions

So far, the subject of this chapter was the question about what makes optimization prob-
lems hard, especially for metaheuristic approaches. We have discussed numerous different
phenomena which can affect the optimization process and lead to disappointing results.
If an optimization process has converged prematurely, it has been trapped in a non-
optimal region of the search space from which it cannot “escape” anymore (Chapter 13).
Ruggedness (Chapter 14) and deceptiveness (Chapter 15) in the fitness landscape, often
caused by epistatic effects (Chapter 17), can misguide the search into such a region. Neu-
trality and redundancy (Chapter 16) can either slow down optimization because the appli-
cation of the search operations does not lead to a gain in information or may also contribute
positively by creating neutral networks from which the search space can be explored and
local optima can be escaped from. Noise is present in virtually all practical optimization
problems. The solutions that are derived for them should be robust (Chapter 18). Also,
they should neither be too general (oversimplification, Section 19.2) nor too specifically
aligned only to the training data (overfitting, Section 19.1). Furthermore, many practical
problems are multi-objective, i. e., involve the optimization of more than one criterion at
once (partially discussed in Section 3.3), or concern objectives which may change over time
(Chapter 22).
The No Free Lunch Theorem argues that it is not possible to develop the one optimization
algorithm, the problem-solving machine which can provide us with near-optimal solutions
in short time for every possible optimization task (at least in finite domains). Even though
this proof does not directly hold for purely continuous or infinite search spaces, this must
sound very depressing for everybody new to this subject.
Actually, quite the opposite is the case, at least from the point of view of a researcher.
The No Free Lunch Theorem means that there will always be new ideas, new approaches
which will lead to better optimization algorithms to solve a given problem. Instead of being
doomed to obsolescence, it is far more likely that most of the currently known optimization
methods have at least one niche, one area where they are excellent. It also means that it
is very likely that the “puzzle of optimization algorithms” will never be completed. There
will always be a chance that an inspiring moment, an observation in nature, for instance,
may lead to the invention of a new optimization algorithm which performs better in some
problem areas than all currently known ones.

[Figure: a jigsaw puzzle of optimization algorithms, with pieces for EDA, Branch & Bound, Dynamic Programming, Evolutionary Algorithms (GA, GP, ES, DE, EP, ...), A* Search, Extremal Optimization, RFD, Memetic Algorithms, IDDFS, Simulated Annealing, ACO, Hill Climbing, LCS, Tabu Search, PSO, Downhill Simplex, and Random Optimization.]
Figure 23.3: The puzzle of optimization algorithms.


Chapter 24
Lessons Learned: Designing Good Encodings

Most Global Optimization algorithms share the premise that solutions to problems are either
(a) elements of a somewhat continuous space that can be approximated stepwise or (b) that
they can be composed of smaller modules which already have good attributes even when occurring separately. Especially in the second case, the design of the search space (or genome)
G and the genotype-phenotype mapping “gpm” is vital for the success of the optimization
process. It determines to what degree these expected features can be exploited by defining
how the properties and the behavior of the candidate solutions are encoded and how the
search operations influence them. In this chapter, we will first discuss a general theory
on how properties of individuals can be defined, classified, and how they are related. We
will then outline some general rules for the design of the search space G, the search opera-
tions searchOp ∈ Op, and the genotype-phenotype mapping “gpm”. These rules represent
the lessons learned from our discussion of the possible problematic aspects of optimization
tasks. In software engineering, there exists a variety of design patterns1 that describe good
practice and experience values. Utilizing these patterns will help the software engineer to
create well-organized, extensible, and maintainable applications.
Whenever we want to solve a problem with Global Optimization algorithms, we need to
define the structure of a genome. The individual representation along with the genotype-
phenotype mapping is a vital part of Genetic Algorithms and has major impact on the
chance of finding good solutions. Like in software engineering, there exist basic patterns
and rules of thumb on how to design them properly.
Based on our previous discussions, we can now provide some general best practices for the
design of encodings from different perspectives. These principles can lead to finding better
solutions or higher optimization speed if considered in the design phase [1075, 2032, 2338].
We will outline suggestions given by different researchers transposed into the more general
vocabulary of formae (introduced in Chapter 9 on page 119).

24.1 Compact Representation


The authors of [1075] and Ronald [2323] state that the representations of the formae in the search space
should be as short as possible. The number of different alleles and the lengths of the different
genes should be as small as possible. In other words, the redundancy in the genomes should
be minimal. In Section 16.5 on page 174, we discussed that uniform redundancy slows
down the optimization process.

24.2 Unbiased Representation


The search space G should be unbiased in the sense that all phenotypes are
1 http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29 [accessed 2007-08-12]
represented by the same number of genotypes [2032, 2113, 2115]. This property allows to
efficiently select an unbiased starting population which gives the optimizer the chance of
reaching all parts of the problem space.

∀x1 , x2 ∈ X ⇒ |{g ∈ G : x1 = gpm(g)}| ≈ |{g ∈ G : x2 = gpm(g)}| (24.1)

Beyer and Schwefel [300, 302] furthermore also call for unbiased variation operators. The
unary mutation operations in Evolution Strategy, for example, should not give preference to
any specific change of the genotypes. The reason for this is that the selection operation, i. e.,
the method which chooses the candidate solutions, already exploits fitness information and
guides the search into one direction. The variation operators thus should not do this job as
well but instead try to increase the diversity. They should perform exploration as opposed
to the exploitation performed by selection.
Beyer and Schwefel [302] therefore again suggest using normal distributions to mutate
real-valued vectors if G ⊆ Rn, and Rudolph [2348] proposes to use geometrically distributed
random numbers to offset genotype vectors in G ⊆ Zn.
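An unbiased unary variation operator for real-valued genotypes is easy to write down. The sketch below (function names are ours) adds zero-mean Gaussian noise and then verifies empirically that the operator introduces no drift, i. e., that the mean offspring coincides with the parent:

```python
import random

def mutate(g, sigma=0.1, rng=random):
    """Unbiased unary variation for G ⊆ R^n: add zero-mean Gaussian
    noise, so no direction of change is preferred."""
    return [gi + rng.gauss(0.0, sigma) for gi in g]

rng = random.Random(0)
parent = [1.0, -2.0, 0.5]
offspring = [mutate(parent, 0.1, rng) for _ in range(10_000)]

# the mean offspring equals the parent: the operator exploits no
# fitness information and gives no preference to any direction
mean = [sum(o[i] for o in offspring) / len(offspring) for i in range(3)]
```

Exploitation of fitness information is thus left entirely to selection, while this operator contributes pure exploration.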

24.3 Surjective GPM

Palmer [2113], Palmer and Kershenbaum [2115] and Della Croce et al. [765] state that a
good search space G and genotype-phenotype mapping “gpm” should be able to repre-
sent all possible phenotypes and solution structures, i. e., be surjective (see Section 51.7 on
page 644) [2032].
∀x ∈ X ⇒ ∃g ∈ G : x = gpm(g) (24.2)

24.4 Injective GPM

In the spirit of Section 24.1 and with the goal to lower redundancy, the number of genotypes
which map to the same phenotype should be low [2322–2324]. In Section 16.5 we provided a
thorough discussion of redundancy in the genotypes and its demerits (but also its possible
advantages). Combining the idea of achieving low uniform redundancy with the principle
stated in Section 24.3, Palmer and Kershenbaum [2113, 2115] conclude that the GPM should
ideally be both simple and bijective.
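Whether a given GPM is injective or redundant can be checked directly by counting how many genotypes map to each phenotype. The toy encodings below are our own examples: the standard binary encoding of 4-bit strings is bijective, while a unary "count the ones" encoding is heavily and non-uniformly redundant:

```python
from collections import Counter
from itertools import product

def redundancy(gpm, genotypes):
    """How many genotypes map to each phenotype (cf. Equation 24.1)."""
    return Counter(gpm(g) for g in genotypes)

G = list(product((0, 1), repeat=4))        # all 16 4-bit genotypes

# bijective GPM: standard binary encoding of 0..15
binary = lambda g: sum(b << i for i, b in enumerate(g))
# redundant GPM: unary "count the ones" encoding of 0..4
unary = lambda g: sum(g)

assert set(redundancy(binary, G).values()) == {1}   # injective
counts = redundancy(unary, G)   # 0:1, 1:4, 2:6, 3:4, 4:1 -- uneven
```

The unary encoding violates both this rule and the unbiasedness rule of Section 24.2: phenotype 2 is represented six times as often as phenotypes 0 and 4.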

24.5 Consistent GPM

The genotype-phenotype mapping should always yield valid phenotypes [2032, 2113, 2115].
The meaning of valid in this context is that if the problem space X is the set of all possible
trees, only trees should be encoded in the genome. If we use the R3 as problem space,
no vectors with fewer or more elements than three should be produced by the genotype-
phenotype mapping. Anything else would not make sense anyway, so this is a pretty basic
rule. This form of validity does not imply that the individuals are also correct solutions in
terms of the objective functions.
Constraints on what a good solution is can, on one hand, be realized by defining them ex-
plicitly and making them available to the optimization process (see Section 3.4 on page 70).
On the other hand, they can also be implemented by defining the encoding, search opera-
tions, and genotype-phenotype mapping in a way which ensures that all phenotypes created
are feasible by default [1888, 2323, 2912, 2914].
24.6 Formae Inheritance and Preservation

The search operations should be able to preserve the (good) formae and (good) configurations
of multiple formae [978, 2242, 2323, 2879]. Good formae should not easily get lost during
the exploration of the search space.
From a binary search operation like recombination (e.g., crossover in GAs), we would
expect that, if its two parameters g1 and g2 are members of a forma A, the resulting element
will also be an instance of A.

∀g1 , g2 ∈ A ⊆ G ⇒ searchOp(g1 , g2 ) ∈ A (24.3)

If we furthermore can assume that all instances of all formae A with minimal precision
(A ∈ mini) of an individual are inherited by at least one parent, the binary reproduction
operation is considered pure [2242, 2879].

∀g3 = searchOp(g1 , g2 ) ∈ G, ∀A ∈ mini : g3 ∈ A ⇒ g1 ∈ A ∨ g2 ∈ A (24.4)

If this is the case, all properties of a genotype g3 which is a combination of two others (g1 , g2 )
can be traced back to at least one of its parents. Otherwise, “searchOp” also performs an
implicit unary search step, a mutation in Genetic Algorithm, for instance. Such properties,
although discussed here for binary search operations only, can be extended to arbitrary n-ary
operators.
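Uniform crossover on bit strings is a simple operator that satisfies both properties above, and this can be checked programmatically. In the sketch below (our own illustration), the forma is a schema fixing bit 0 to 1 and bit 4 to 0; since both parents are instances, the child must be one too (Equation 24.3), and every allele of the child is traceable to a parent (Equation 24.4):

```python
import random

def uniform_crossover(g1, g2, rng=random):
    """Binary recombination that is pure in the sense of Equation 24.4:
    every gene of the child is inherited from one of the two parents."""
    return [rng.choice(pair) for pair in zip(g1, g2)]

rng = random.Random(3)
p1 = [1, 0, 1, 1, 0, 0]
p2 = [1, 1, 1, 0, 0, 1]
child = uniform_crossover(p1, p2, rng)

# forma "bit 0 is 1 and bit 4 is 0": both parents are instances,
# so the child must be an instance as well (Equation 24.3)
assert child[0] == 1 and child[4] == 0
# purity: each allele can be traced back to at least one parent
assert all(c in (a, b) for c, a, b in zip(child, p1, p2))
```

Because the operator never invents alleles, any forma shared by both parents survives recombination with certainty.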

24.7 Formae in Genotypic Space Aligned with Phenotypic Formae

The optimization algorithms find new elements in the search space G by applying the search
operations searchOp ∈ Op. These operations can only create, modify, or combine genotypical
formae since they usually have no information about the problem space. Most mathematical
models dealing with the propagation of formae like the Building Block Hypothesis and the
Schema Theorem2 thus focus on the search space and show that highly fit genotypical formae
will more probably be investigated further than those of low utility. Our goal, however, is to
find highly fit formae in the problem space X. Such properties can only be created, modified,
and combined by the search operations if they correspond to genotypical formae. According
to Radcliffe [2242] and Weicker [2879], a good genotype-phenotype mapping should provide
this feature.
It furthermore becomes clear that useful separate properties in phenotypic space can only
be combined by the search operations properly if they are represented by separate formae
in genotypic space too.

24.8 Compatibility of Formae

Formae of different properties should be compatible [2879]. Compatible formae in phenotypic space should also be compatible in genotypic space. This leads to a low level of
epistasis [240, 2323] (see Chapter 17 on page 177) and hence will increase the chance of
success of the reproduction operations. The representations of different, compatible phenotypic formae should not influence each other [1075]. This will lead to a representation with
minimal epistasis.
2 See Section 29.5 for more information on the Schema Theorem.
24.9 Representation with Causality
Besides low epistasis, the search space, GPM, and search operations together should possess
strong causality (see Section 14.2). Generally, the search operations should produce small
changes in the objective values. This would, simplified, mean that:

∀x1, x2 ∈ X, g ∈ G : x1 = gpm(g) ∧ x2 = gpm(searchOp(g)) ⇒ f(x2) ≈ f(x1)    (24.5)

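Whether a representation exhibits strong causality can be estimated empirically: apply mutations of a given strength many times and record the average change in objective value. In the sketch below (our own measurement procedure, on an arbitrarily chosen smooth test function), small genotypic steps indeed produce proportionally small objective steps:

```python
import random

def avg_fitness_jump(f, x, sigma, rng, trials=5000):
    """Average |f(x') - f(x)| for mutations x' drawn around x; under
    strong causality this shrinks with the mutation strength sigma."""
    fx = f(x)
    return sum(abs(f(x + rng.gauss(0, sigma)) - fx)
               for _ in range(trials)) / trials

f = lambda x: x * x           # smooth landscape: strong causality
rng = random.Random(11)
small = avg_fitness_jump(f, 2.0, 0.01, rng)
large = avg_fitness_jump(f, 2.0, 1.0, rng)
# small genotypic steps lead to small objective-value changes
```

On a rugged or highly epistatic landscape, the same measurement would show large objective jumps even for tiny mutations, signaling weak causality.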
24.10 Combinations of Formae


If genotypes g1 , g2 , . . . which are instances of different but compatible formae A1 ⊲⊳ A2 ⊲⊳ . . .
are combined by a binary (or n-ary) search operation, the resulting genotype g should be an
instance of both properties, i. e., the combination of compatible formae should be a forma
itself [2879].

∀g1 ∈ A1, g2 ∈ A2, · · · ⇒ searchOp(g1, g2, . . .) ∈ A1 ∩ A2 ∩ . . . (≠ ∅)    (24.6)

If this principle holds for many individuals and formae, useful properties can be combined by
the optimization process step by step, narrowing down the precision of the arising, most interesting
formae more and more. This should lead the search to the most promising regions of the
search space.

24.11 Reachability of Formae


Beyer and Schwefel [300, 302] suggest that a good representation has the feature of reach-
ability that any other (finite) individual state can be reached from any parent state within
a finite number of (mutation) steps. In other words, the set of search operations should be
complete (see Definition D4.8 on page 85). Beyer and Schwefel state that this property is
necessary for proving global convergence.
The set of available search operations Op should thus include at least one (unary) search
operation which is theoretically able to reach all possible formae within finite steps. If the
binary search operations in Op all are pure, this unary operator is the only one (apart
from creation operations) able to introduce new formae which are not yet present in the
population. Hence, it should be able to find any given forma [2879].
Generally, we just need to be able to generate all formae, not necessarily all genotypes,
with such an operator, if formae can be arbitrarily combined by the available n-ary search or
recombination operations.
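For bit-string genomes, single-bit-flip mutation is a concrete example of a complete unary operator: any genotype can be reached from any other in at most as many steps as the genome is long. The sketch below (our own illustration) constructs such a finite path explicitly by flipping exactly the positions in which two genotypes differ:

```python
def flip(g, i):
    """Unary search operation: flip bit i of genotype g."""
    return g[:i] + (1 - g[i],) + g[i + 1:]

def path(g_from, g_to):
    """A finite mutation path between any two genotypes: flip exactly
    the positions in which they differ (Hamming-distance many steps)."""
    g, steps = g_from, []
    for i, (a, b) in enumerate(zip(g_from, g_to)):
        if a != b:
            g = flip(g, i)
            steps.append(g)
    return steps

start, goal = (0, 0, 0, 0, 0), (1, 0, 1, 1, 0)
steps = path(start, goal)
assert steps[-1] == goal        # goal reached ...
assert len(steps) == 3          # ... in Hamming-distance many flips
```

Since the Hamming distance between any two genotypes of length n is at most n, every state is reachable within finitely many mutation steps, which is exactly the reachability property required above.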

24.12 Influence of Formae


One rule which, in my opinion, was missing in the lists given by Radcliffe [2242] and Weicker
[2879] is that the absolute contributions of the single formae to the overall objective values
of a candidate solution should not be too different. Let us divide the phenotypic formae
into those with positive and those with negative or neutral contribution and let us, for
simplification purposes, assume that those with positive contribution can be arbitrarily
combined. If one of the positive formae has a contribution with an absolute value much
lower than those of the other positive formae, we will trip into the problem of domino
convergence discussed in Section 13.2.3 on page 153.
Then, the search will first discover the building blocks of higher value. This, itself, is
not a problem. However, as we have already pointed out in Section 13.2.3, if the search is
stochastic and performs exploration steps, chances are that alleles of higher importance get
destroyed during this process and have to be rediscovered. The values of the less salient
formae would then play no role. Thus, the chance of finding them strongly depends on how
frequent the destruction of important formae takes place.
Ideally, we would therefore design the genome and phenome in a way that the different
characteristics of the candidate solution all influence the objective values to a similar degree.
Then, the chance of finding good formae increases.

24.13 Scalability of Search Operations


The scalability requirement outlined by Beyer and Schwefel [300, 302] states that the strength
of the variation operator (usually a unary search operation such as mutation) should be
tunable in order to adapt to the optimization process. In the initial phase of the search,
usually larger search steps are performed in order to obtain a coarse idea in which general
direction the optimization process should proceed. The closer the focus gets to a certain local
or global optimum, the smaller should the search steps get in order to balance the genotypic
features in a fine-grained manner. Ideally, the optimization algorithm should achieve such
behavior in a self-adaptive way in reaction to the structure of the problem it is solving.
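A classic self-adaptive realization of this idea is Rechenberg's 1/5th success rule for a (1+1)-Evolution Strategy. The sketch below is a minimal illustration; the adaptation interval of 20 steps and the scaling factor 1.5 are our own illustrative choices, not prescribed constants:

```python
import random

def one_fifth_es(f, x, sigma=1.0, iters=300, rng=random):
    """(1+1)-ES with the 1/5th success rule: the mutation strength
    sigma grows when more than 1/5 of recent steps succeed and shrinks
    otherwise -- coarse steps early, fine steps near the optimum."""
    fx, successes = f(x), 0
    for t in range(1, iters + 1):
        y = x + rng.gauss(0.0, sigma)
        fy = f(y)
        if fy < fx:                    # minimization: accept improvement
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                # adapt sigma every 20 steps
            sigma *= 1.5 if successes > 4 else 1 / 1.5
            successes = 0
    return x, fx, sigma

rng = random.Random(5)
x, fx, sigma = one_fifth_es(lambda v: v * v, 10.0, rng=rng)
```

Early on, frequent successes keep the step size large; as the search closes in on the optimum, the success rate drops and sigma shrinks, yielding exactly the coarse-to-fine behavior described above.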

24.14 Appropriate Abstraction


The problem should be represented at an appropriate level of abstraction [240, 2323]. The
complexity of the optimization problem should evenly be distributed between the search
space G, the search operations searchOp ∈ Op, the genotype-phenotype mapping “gpm”,
and the problem space X. Sometimes, using a non-trivial encoding which shifts complexity
to the genotype-phenotype mapping and the search operations can improve the performance
of an optimizer and increase the causality [240, 2912, 2914].

24.15 Indirect Mapping


If a direct mapping between genotypes and phenotypes is not possible, a suitable develop-
mental approach should be applied [2323]. Such indirect genotype-phenotype mappings, as
we will discuss in Section 28.2.2 on page 272, are especially useful if large-scale and robust
structure designs should be constructed by the optimization process.

24.15.1 Extradimensional Bypass: Example for Good Complexity

Minimal-sized genomes are not always the best approach. An interesting aspect of genome
design supporting this claim is inspired by the works of the theoretical biologist Conrad
[621, 622, 623, 625]. According to his extradimensional bypass principle, it is possible to
transform a rugged fitness landscape with isolated peaks into one with connected saddle
points by increasing the dimensionality of the search space [496, 544]. In [625] he states that
the chance of isolated peaks in randomly created fitness landscapes decreases when their
dimensionality grows.
This partly contradicts Section 24.1 which states that genomes should be as compact
as possible. Conrad [625] does not suggest that nature includes useless sequences in the
genome, but rather genes which allow for either
1. new phenotypical characteristics or
2. redundancy providing new degrees of freedom for the evolution of a species.
In some cases, such an increase in freedom makes more than up for the additional “costs”
arising from the enlargement of the search space. The extradimensional bypass can be
considered as an example of positive neutrality (see Chapter 16).

[Figure 24.1: two surface plots of f(x) over the one-dimensional search space G (front plane)
and over its two-dimensional extension G′; each panel marks a local optimum and the global
optimum.]
Fig. 24.1.a: Useful increase of dimensionality. Fig. 24.1.b: Useless increase of dimensionality.

Figure 24.1: Examples for an increase of the dimensionality of a search space G (1d) to G′
(2d).

In Fig. 24.1.a, an example for the extradimensional bypass (similar to Fig. 6 in [355])
is sketched. The original problem had a one-dimensional search space G corresponding to
the horizontal axis up front. As can be seen in the plane in the foreground, the objective
function had two peaks: a local optimum on the left and a global optimum on the right,
separated by a larger valley. When the optimization process began climbing up the local
optimum, it was very unlikely that it could ever escape this hill and reach the global one.
Increasing the search space to two dimensions (G′ ), however, opened up a pathway
between them. The two isolated peaks became saddle points on a longer ridge. The global
optimum is now reachable from all points on the local optimum.
Generally, increasing the dimension of the search space only makes sense if the added
dimension has a non-trivial influence on the objective functions. Simply adding a useless new
dimension (as done in Fig. 24.1.b) would be an example of the sort of uniform redundancy
which we already know (see Section 16.5) is not beneficial. Then again, adding useful new
dimensions may be hard or impossible to achieve in most practical applications.
A good example of this issue is given by Bongard and Paul [355], who used an EA to
evolve a neural network for the motion control of a bipedal robot. They performed runs
where the evolution had control over some morphological aspects and runs where it had
not. The ability to change the leg width of the robots, for instance, comes at the expense
of an increase in the dimension of the search space. Hence, one would expect the
optimization to perform worse. Instead, in one series of experiments, the results were
much better with the extended search space. The runs did not converge to one particular
leg shape but to a wide range of different structures. This led to the assumption that the
morphology itself was not so much the target of the optimization; rather, the ability to change
it transformed the fitness landscape into a structure more navigable by the evolution. In some
other experimental runs performed by Bongard and Paul [355], this phenomenon could not
be observed, most likely because
1. the robot configuration led to a problem of too high complexity, i. e., ruggedness in
the fitness landscape and/or
2. the increase in dimensionality this time was too large to be compensated by the gain
of evolvability.
Further examples for possible benefits of “gradually complexifying” the search space are
given by Malkin in his doctoral thesis [1818].
Part III

Metaheuristic Optimization Algorithms


Chapter 25
Introduction

In Definition D1.15 on page 36 we stated that a metaheuristic is a black-box optimization
technique which combines information about the objective functions, gained by sampling the
search space, in a way which (hopefully) leads to the discovery of candidate solutions with
high utility. In the taxonomy given in Section 1.3, these algorithms form the majority, since
they are the focus of this book.
25.1 General Information on Metaheuristics

25.1.1 Applications and Examples

Table 25.1: Applications and Examples of Metaheuristics.


Area References
Art see Tables 28.4, 29.1, and 31.1
Astronomy see Table 28.4
Chemistry see Tables 28.4, 29.1, 30.1, 31.1, 32.1, 40.1, and 43.1
Combinatorial Problems [342, 651, 1033, 1034, 2283, 2992]; see also Tables 26.1, 27.1,
28.4, 29.1, 30.1, 31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 38.1, 39.1,
40.1, 41.1, and 42.1
Computer Graphics see Tables 27.1, 28.4, 29.1, 30.1, 31.1, 33.1, and 36.1
Control see Tables 28.4, 29.1, 31.1, 32.1, and 35.1
Cooperation and Teamwork see Tables 28.4, 29.1, 31.1, and 37.1
Data Compression see Table 31.1
Data Mining see Tables 27.1, 28.4, 29.1, 31.1, 32.1, 34.1, 35.1, 36.1, 37.1,
and 39.1
Databases see Tables 26.1, 27.1, 28.4, 29.1, 31.1, 35.1, 36.1, 37.1, 39.1,
and 40.1
Decision Making see Table 31.1
Digital Technology see Tables 26.1, 28.4, 29.1, 30.1, 31.1, 34.1, and 35.1
Distributed Algorithms and Systems [3001, 3031]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1,
32.1, 33.1, 34.1, 35.1, 36.1, 37.1, 39.1, 40.1, and 41.1
E-Learning see Tables 28.4, 29.1, 37.1, and 39.1
Economics and Finances see Tables 27.1, 28.4, 29.1, 31.1, 33.1, 35.1, 36.1, 37.1, 39.1,
and 40.1
Engineering see Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 32.1, 33.1, 34.1,
35.1, 36.1, 37.1, 39.1, 40.1, and 41.1
Function Optimization see Tables 26.1, 27.1, 28.4, 29.1, 30.1, 32.1, 33.1, 34.1, 35.1,
36.1, 39.1, 40.1, 41.1, and 43.1
Games see Tables 28.4, 29.1, 31.1, and 32.1
Graph Theory [3001, 3031]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1,
33.1, 34.1, 35.1, 36.1, 37.1, 38.1, 39.1, 40.1, and 41.1
Healthcare see Tables 28.4, 29.1, 31.1, 35.1, 36.1, and 37.1
Image Synthesis see Table 28.4
Logistics [651, 1033, 1034]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1,
33.1, 34.1, 36.1, 37.1, 38.1, 39.1, 40.1, and 41.1
Mathematics see Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 32.1, 33.1, 34.1,
and 35.1
Military and Defense see Tables 26.1, 27.1, 28.4, and 34.1
Motor Control see Tables 28.4, 33.1, 34.1, 36.1, and 39.1
Multi-Agent Systems see Tables 28.4, 29.1, 31.1, 35.1, and 37.1
Multiplayer Games see Table 31.1
Physics see Tables 27.1, 28.4, 29.1, 31.1, 32.1, and 34.1
Prediction see Tables 28.4, 29.1, 30.1, 31.1, and 35.1
Security see Tables 26.1, 27.1, 28.4, 29.1, 31.1, 35.1, 36.1, 37.1, 39.1,
and 40.1
Shape Synthesis see Table 26.1
Software [467, 3031]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1,
33.1, 34.1, 36.1, 37.1, and 40.1
Sorting see Tables 28.4 and 31.1
Telecommunication see Tables 28.4, 37.1, and 40.1
Testing see Tables 28.4 and 31.3
Theorem Proving and Automatic see Tables 29.1 and 40.1
Verification
Water Supply and Management see Table 28.4
Wireless Communication see Tables 26.1, 27.1, 28.4, 29.1, 31.1, 33.1, 34.1, 35.1, 36.1,
37.1, 39.1, and 40.1

25.1.2 Books
1. Metaheuristics for Scheduling in Industrial and Manufacturing Applications [2992]
2. Handbook of Metaheuristics [1068]
3. Fleet Management and Logistics [651]
4. Metaheuristics for Multiobjective Optimisation [1014]
5. Large-Scale Network Parameter Configuration using On-line Simulation Framework [3031]
6. Modern Heuristic Techniques for Combinatorial Problems [2283]
7. How to Solve It: Modern Heuristics [1887]
8. Search Methodologies – Introductory Tutorials in Optimization and Decision Support Tech-
niques [453]
9. Handbook of Approximation Algorithms and Metaheuristics [1103]

25.1.3 Conferences and Workshops

Table 25.2: Conferences and Workshops on Metaheuristics.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
ICARIS: International Conference on Artificial Immune Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
History: 2009/07: Hamburg, Germany, see [1968]
2008/10: Atizapán de Zaragoza, México, see [1038]
2007/06: Montréal, QC, Canada, see [1937]
2005/08: Vienna, Austria, see [2810]
2003/08: Kyoto, Japan, see [1321]
2001/07: Porto, Portugal, see [2294]
1999/07: Angra dos Reis, Rio de Janeiro, Brazil, see [2298]
1997/07: Sophia-Antipolis, France, see [2828]
1995/07: Breckenridge, CO, USA, see [2104]
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
GEWS: Grammatical Evolution Workshop
See Table 31.2 on page 385.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
History: 2012/10: Port El Kantaoui, Sousse, Tunisia, see [2205]
2010/10: Djerba Island, Tunisia, see [873]
2008/10: Hammamet, Tunisia, see [1171]
2006/11: Hammamet, Tunisia, see [1170]
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

25.1.4 Journals
1. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
2. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
3. The Journal of the Operational Research Society (JORS) published by Operations Research
Society and Palgrave Macmillan Ltd.
4. Journal of Heuristics published by Springer Netherlands
5. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
6. Journal of Global Optimization published by Springer Netherlands
7. Discrete Optimization published by Elsevier Science Publishers B.V.
8. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
9. Engineering Optimization published by Informa plc and Taylor and Francis LLC
10. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
11. Optimization Methods and Software published by Taylor and Francis LLC
12. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
13. Structural and Multidisciplinary Optimization published by Springer-Verlag GmbH
14. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization pub-
lished by SIAM Activity Group on Optimization – SIAG/OPT
Chapter 26
Hill Climbing

26.1 Introduction

Definition D26.1 (Local Search). A local search1 algorithm solves an optimization


problem by iteratively moving from one candidate solution to another in the problem space
until a termination criterion is met [2356].

In other words, a local search algorithm in each iteration either keeps the current candidate
solution or moves to one element from its neighborhood as defined in Definition D4.12 on
page 86. Local search algorithms have two major advantages: They use very little memory
(normally only a constant amount) and are often able to find solutions in large or infinite
search spaces. These advantages come, of course, with large trade-offs in processing time
and the risk of premature convergence.
Hill Climbing2 (HC) [2356] is one of the oldest and simplest local search and optimization
algorithms for single objective functions f. In Algorithm 1.1 on page 40, we gave an
example for a simple and unstructured Hill Climbing algorithm. Here we will generalize
from Algorithm 1.1 to a more basic Monte Carlo Hill Climbing algorithm structure.

26.2 Algorithm

The Hill Climbing procedure usually starts either from a random point in the search space
or from a point chosen according to some problem-specific criterion (visualized with the
operator create() in Algorithm 26.1). Then, in a loop, the currently known best individual p⋆
is modified (with the mutate() operator) in order to produce one offspring pnew . If this new
individual is better than its parent, it replaces it. Otherwise, it is discarded. This procedure
is repeated until the termination criterion terminationCriterion() is met. In Listing 56.6 on
page 735, we provide a Java implementation which resembles Algorithm 26.1.
In this sense, Hill Climbing is similar to an Evolutionary Algorithm (see Chapter 28)
with a population size of 1 and truncation selection. As a matter of fact, it resembles a
(1 + 1) Evolution Strategy. In Algorithm 30.7 on page 369, we provide such an optimization
algorithm for G ⊆ Rn which additionally adapts the search step size.
1 http://en.wikipedia.org/wiki/Local_search_(optimization) [accessed 2010-07-24]
2 http://en.wikipedia.org/wiki/Hill_climbing [accessed 2007-07-03]

Algorithm 26.1: x ←− hillClimbing(f)


Input: f: the objective function subject to minimization
Input: [implicit] terminationCriterion: the termination criterion
Data: pnew : the new element created
Data: p⋆ : the (currently) best individual
Output: x: the best element found
1 begin
2   p⋆.g ←− create()
3   p⋆.x ←− gpm(p⋆.g)
4   while ¬terminationCriterion() do
5     pnew .g ←− mutate(p⋆.g)
6     pnew .x ←− gpm(pnew .g)
7     if f(pnew .x) ≤ f(p⋆.x) then p⋆ ←− pnew
8   return p⋆.x

Although the search space G and the problem space X are most often the same in Hill
Climbing, we distinguish them in Algorithm 26.1 for the sake of generality. It should further
be noted that Hill Climbing can be implemented in a deterministic manner if, for each
genotype g1 ∈ G, the set of adjacent points {g2 ∈ G : adjacent(g2 , g1 )} is finite.
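Besides the framework implementation in Listing 56.6, the loop of Algorithm 26.1 can be sketched in a few self-contained Java lines. The sphere objective, the step size of 0.1, and the fixed step budget are illustrative assumptions, not part of the algorithm; here G = X = R^n, so the gpm is the identity:

```java
import java.util.Random;

/** Minimal single-objective Hill Climbing sketch following Algorithm 26.1. */
public class HillClimber {
    /** The objective function f, subject to minimization (illustrative choice). */
    static double sphere(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;
        return s;
    }

    /** The mutate() operator: perturb one randomly chosen coordinate. */
    static double[] mutate(double[] x, Random rnd) {
        double[] y = x.clone();
        y[rnd.nextInt(y.length)] += 0.1 * rnd.nextGaussian();
        return y;
    }

    /** Run for a fixed number of steps (the termination criterion). */
    public static double[] climb(int n, int steps, long seed) {
        Random rnd = new Random(seed);
        double[] best = new double[n];
        for (int i = 0; i < n; i++) best[i] = -10 + 20 * rnd.nextDouble(); // create()
        for (int t = 0; t < steps; t++) {
            double[] cand = mutate(best, rnd);
            if (sphere(cand) <= sphere(best)) best = cand; // keep non-worsening offspring
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("f(best) = " + sphere(climb(5, 50000, 42L)));
    }
}
```

Note how the loop never accepts a worsening step; this greediness is exactly what causes the premature convergence discussed in Section 26.4.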

26.3 Multi-Objective Hill Climbing

As illustrated in Algorithm 26.2, we can easily extend Hill Climbing algorithms with a
support for multi-objective optimization by using some of the methods from Section 3.5.2
on page 76 and some which we later will discuss in the context of Evolutionary Algorithms.
This extended approach will then return a set X of the best candidate solutions found instead
of a single point x from the problem space as done in Algorithm 26.1.
The set of currently known best individuals, archive, may contain more than one element.
In each round, we pick one individual from the archive at random. We then use this individ-
ual to create the next point in the problem space with the “mutate” operator. The method
updateOptimalSet(archive, pnew , cmpf ) checks whether the new candidate solution is prevailed
by any element in archive. If not, it will enter the archive, and all individuals from archive
which now are dominated are removed. Since an optimization problem may potentially have
infinitely many global optima, we subsequently have to ensure that the archive does not grow
too big: pruneOptimalSet(archive, as) makes sure that it contains at most as individuals and,
if necessary, deletes some individuals. In the case of Algorithm 26.2, the archive can grow to
at most as + 1 individuals before the pruning, so at most one individual will be deleted.
When the loop ends because the termination criterion terminationCriterion() was met, we
extract all the phenotypes from the archived individuals (extractPhenotypes(archive)) and
return them as the set X. An example implementation of this method in Java can be found
in Listing 56.7 on page 737.
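The archive maintenance can be sketched as follows. This is a simplified Java stand-in for updateOptimalSet and pruneOptimalSet, not the book's actual implementation: it assumes all objectives are minimized and uses a deliberately crude drop-the-surplus pruning rule, whereas better pruning strategies are discussed elsewhere in this book:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the Pareto archive maintenance used in Algorithm 26.2. */
public class ParetoArchive {
    /** true iff a dominates b: a is no worse in every objective and better in one. */
    static boolean dominates(double[] a, double[] b) {
        boolean strictly = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictly = true;
        }
        return strictly;
    }

    /** Insert cand unless it is dominated; remove members cand dominates. */
    public static List<double[]> update(List<double[]> archive, double[] cand) {
        for (double[] p : archive) if (dominates(p, cand)) return archive;
        List<double[]> out = new ArrayList<>();
        for (double[] p : archive) if (!dominates(cand, p)) out.add(p);
        out.add(cand);
        return out;
    }

    /** Cap the archive at as entries (crude rule: drop the newest surplus). */
    public static List<double[]> prune(List<double[]> archive, int as) {
        while (archive.size() > as) archive.remove(archive.size() - 1);
        return archive;
    }

    public static void main(String[] args) {
        List<double[]> a = new ArrayList<>();
        a = update(a, new double[] {1, 5});
        a = update(a, new double[] {5, 1}); // incomparable with (1,5): both stay
        a = update(a, new double[] {0, 0}); // dominates both: archive shrinks
        System.out.println(a.size()); // 1
    }
}
```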

Algorithm 26.2: X ←− hillClimbingMO(OptProb, as)


Input: OptProb: a tuple (X, f , cmpf ) of the problem space X, the objective functions
f , and a comparator function cmpf
Input: as: the maximum archive size
Input: [implicit] terminationCriterion: the termination criterion
Data: pold : the individual used as parent for the next point
Data: pnew : the new individual generated
Data: archive: the set of best individuals known
Output: X: the set of the best elements found
1 begin
2   archive ←− ()
3   pnew .g ←− create()
4   pnew .x ←− gpm(pnew .g)
5   while ¬terminationCriterion() do
      // check whether the new individual belongs into the archive
6     archive ←− updateOptimalSet(archive, pnew , cmpf )
      // if the archive is too big, delete one individual
7     archive ←− pruneOptimalSet(archive, as)
      // pick a random element from the archive
8     pold ←− archive[⌊randomUni[0, len(archive))⌋]
9     pnew .g ←− mutate(pold .g)
10    pnew .x ←− gpm(pnew .g)
11  return extractPhenotypes(archive)

26.4 Problems in Hill Climbing

The major problem of Hill Climbing is premature convergence. We introduced this phe-
nomenon in Chapter 13 on page 151: The optimization gets stuck at a local optimum and
cannot discover the global optimum anymore.
Both previously specified versions of the algorithm are very likely to fall prey to this
problem, since they always move towards improvements in the objective values and thus
cannot reach any candidate solution which is situated behind an inferior point. They will
only follow a path of candidate solutions if it monotonically3 improves the objective func-
tion(s). The function used in Example E5.1 on page 96 is, for example, already problematic
for Hill Climbing.
As previously mentioned, Hill Climbing in this form is a local search rather than a Global
Optimization algorithm. By making a few slight modifications, however, it can become a
valuable technique capable of performing Global Optimization:

1. A tabu-list which stores the elements recently evaluated can be added. By preventing
the algorithm from visiting them again, a better exploration of the problem space X can
be enforced. This technique is used in Tabu Search which is discussed in Chapter 40
on page 489.

2. A way of preventing premature convergence is to not always move to the better
candidate solution in each step. Simulated Annealing introduces a heuristic based
on the physical model of cooling molten metal to decide whether an inferior
offspring should replace its parent or not. This approach is described in Chapter 27
on page 243.
3 http://en.wikipedia.org/wiki/Monotonicity [accessed 2007-07-03]
3. The Dynamic Hill Climbing approach by Yuret and de la Maza [3048] uses the last
two visited points to compute unit vectors. With this technique, the directions are
adjusted according to the structure of the problem space and a new coordinate frame
is created which points more likely into the right direction.

4. Randomly restarting the search after a certain number of steps is a crude but efficient
method to explore wide ranges of the problem space with Hill Climbing. It is outlined
in Section 26.5.

5. Using a reproduction scheme that does not necessarily generate candidate solutions
directly neighboring p⋆.x, as done in Random Optimization, an optimization approach
defined in Chapter 44 on page 505, may prove even more efficient.
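The acceptance heuristic mentioned in Point 2 can be sketched briefly; the following Java snippet uses the standard Metropolis criterion that Chapter 27 will discuss in detail, with purely illustrative parameter values:

```java
import java.util.Random;

/**
 * Metropolis acceptance criterion as used in Simulated Annealing (Chapter 27):
 * inferior candidates are accepted with probability exp(-(fNew - fOld) / T),
 * so at high temperature T the search can leave local optima that would trap
 * a plain Hill Climber.
 */
public class Acceptance {
    public static boolean accept(double fOld, double fNew, double temperature, Random rnd) {
        if (fNew <= fOld) return true;                    // improvements always pass
        double p = Math.exp((fOld - fNew) / temperature); // worse moves pass sometimes
        return rnd.nextDouble() < p;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        // at high temperature even a clearly worse candidate is often accepted
        int taken = 0;
        for (int i = 0; i < 1000; i++) if (accept(1.0, 2.0, 10.0, rnd)) taken++;
        System.out.println(taken); // roughly exp(-0.1) * 1000
    }
}
```

As the temperature approaches zero, the acceptance probability for inferior candidates vanishes and the behavior degenerates to the greedy rule of Algorithm 26.1.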

26.5 Hill Climbing With Random Restarts

Hill Climbing with random restarts is also called Stochastic Hill Climbing (SH) or Stochastic
gradient descent 4 [844, 2561]. In the following, we will outline how random restarts can be
implemented into Hill Climbing and serve as a measure against premature convergence.
We will further combine this approach directly with the multi-objective Hill Climbing ap-
proach defined in Algorithm 26.2. The new algorithm incorporates two archives for optimal
solutions: archive1 , the overall optimal set, and archive2 , the set of the best individuals in
the current run. We additionally define the restart criterion shouldRestart, which is eval-
uated in every iteration and determines whether or not the algorithm should be restarted.
shouldRestart could, for example, count the iterations performed or check whether
any improvement was produced in the last ten iterations. After each single run, archive2 is
incorporated into archive1 , from which we extract and return the problem space elements
at the end of the Hill Climbing process, as defined in Algorithm 26.3.

4 http://en.wikipedia.org/wiki/Stochastic_gradient_descent [accessed 2007-07-03]
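Reduced to the single-objective case, the restart idea can be sketched as follows in Java. The Rastrigin-style objective, the step size, and the run length are illustrative assumptions; the point is only that each run starts from a fresh random point while the best result across all runs is preserved:

```java
import java.util.Random;

/** Single-objective Hill Climbing with random restarts (a sketch of the idea
 *  behind Algorithm 26.3, without the multi-objective archives). */
public class RestartClimber {
    /** A multimodal objective (Rastrigin-style): many local optima, minimum 0. */
    static double f(double[] x) {
        double s = 0;
        for (double v : x) s += v * v + 10 * (1 - Math.cos(2 * Math.PI * v));
        return s;
    }

    /** Best objective value found over `restarts` independent runs. */
    public static double best(int n, int restarts, int runLength, long seed) {
        Random rnd = new Random(seed);
        double bestF = Double.POSITIVE_INFINITY;
        for (int r = 0; r < restarts; r++) {
            double[] cur = new double[n];
            for (int i = 0; i < n; i++) cur[i] = -5 + 10 * rnd.nextDouble(); // restart
            double curF = f(cur);
            for (int t = 0; t < runLength; t++) {
                double[] cand = cur.clone();
                cand[rnd.nextInt(n)] += 0.05 * rnd.nextGaussian();
                double candF = f(cand);
                if (candF <= curF) { cur = cand; curF = candF; }
            }
            bestF = Math.min(bestF, curF); // incorporate this run's result
        }
        return bestF;
    }

    public static void main(String[] args) {
        System.out.println(best(2, 1, 5000, 7L) + " vs " + best(2, 20, 5000, 7L));
    }
}
```

Since the first of the twenty runs replays exactly the draws of the single run under the same seed, the restarted variant can never end up worse than the single run here; on a multimodal landscape it is usually substantially better.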



Algorithm 26.3: X ←− hillClimbingMORR(OptProb, as)


Input: OptProb: a tuple (X, f , cmpf ) of the problem space X, the objective functions
f , and a comparator function cmpf
Input: as: the maximum archive size
Input: [implicit] terminationCriterion: the termination criterion
Input: [implicit] shouldRestart: the restart criterion
Data: pold : the individual used as parent for the next point
Data: pnew : the new individual generated
Data: archive1 , archive2 : the sets of best individuals known
Output: X: the set of the best elements found
1 begin
2   archive1 ←− ()
3   while ¬terminationCriterion() do
4     archive2 ←− ()
5     pnew .g ←− create()
6     pnew .x ←− gpm(pnew .g)
7     while ¬(shouldRestart() ∨ terminationCriterion()) do
        // check whether the new individual belongs into the archive
8       archive2 ←− updateOptimalSet(archive2 , pnew , cmpf )
        // if the archive is too big, delete one individual
9       archive2 ←− pruneOptimalSet(archive2 , as)
        // pick a random element from the archive
10      pold ←− archive2 [⌊randomUni[0, len(archive2 ))⌋]
11      pnew .g ←− mutate(pold .g)
12      pnew .x ←− gpm(pnew .g)
      // check whether good individuals were found in this run
13    archive1 ←− updateOptimalSetN(archive1 , archive2 , cmpf )
      // if the archive is too big, delete some individuals
14    archive1 ←− pruneOptimalSet(archive1 , as)
15  return extractPhenotypes(archive1 )
26.6 General Information on Hill Climbing

26.6.1 Applications and Examples

Table 26.1: Applications and Examples of Hill Climbing.


Area References
Combinatorial Problems [16, 502, 775, 1261, 1486, 1532, 1553, 1558, 1781, 1971, 2656,
2663, 2745, 3027]; see also Table 26.3
Databases [1781, 2745]
Digital Technology [1600, 1665]
Distributed Algorithms and Systems [15, 410, 775, 1532, 1558, 2058, 2165, 2286, 2315, 2554, 2656,
2663, 2903, 2994, 3027, 3085, 3106]
Engineering [410, 1486, 1600, 1665, 2058, 2286, 2656, 2663]
Function Optimization [1927, 3048]
Graph Theory [15, 410, 775, 1486, 1532, 1558, 2058, 2165, 2237, 2286, 2315,
2554, 2596, 2656, 2663, 3027, 3085, 3106]
Logistics [1553, 1971]
Mathematics [1674]
Military and Defense [775]
Security [1486, 2165, 2554]
Shape Synthesis [887]
Software [1486, 1665, 2554, 2994]
Wireless Communication [1486, 2237, 2596]

26.6.2 Books
1. Artificial Intelligence: A Modern Approach [2356]
2. Introduction to Stochastic Search and Optimization [2561]
3. Evolution and Optimum Seeking [2437]

26.6.3 Conferences and Workshops

Table 26.2: Conferences and Workshops on Hill Climbing.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

26.6.4 Journals
1. Journal of Heuristics published by Springer Netherlands

26.7 Raindrop Method


Only five years ago, Bettinger and Zhu [290, 3083] contributed a new search heuristic for
constrained optimization: the Raindrop Method, which they used for forest planning prob-
lems. In the original description of the algorithm, the search and problem space are identical
(G = X). The algorithm is based on precise knowledge of the components of the candidate
solutions and on how their interaction influences the validity of the constraints. It works as
follows:
1. The Raindrop Method starts out with a single, valid candidate solution x⋆ (i. e., one
that violates none of the constraints). This candidate may be found with a random
search process or provided by a human operator.
2. Create a copy x of x⋆ . Set the iteration counter τ to a user-defined maximum value
maxτ of iterations for modifying and correcting x that are allowed without improve-
ments before reverting to x⋆ .
3. Perturb x by randomly modifying one of its components. Let us refer to this randomly
selected component as s. This modification may lead to constraint violations.
4. If no constraint was violated, continue at Point 11, otherwise proceed as follows.
5. Set a distance value d to 0.
6. Create a list L of the components of x that lead to constraint violations. Here we
make use of the knowledge of the interaction of components and constraints.
7. From L, we pick the component c which is physically closest to s, that is, the component
with the minimum distance dist(c, s). In the original application of the Raindrop
Method, physical closeness was well-defined because the candidate
solutions were basically two-dimensional maps. For applications different from forest
planning, appropriate definitions for the distance measure have to be supplied.
8. Set d = dist(c, s).
9. Find the next best value for c which does not introduce any new constraint violations
in components f which are at least as close to s, i. e., with dist(f, s) ≤ dist(c, s). This
change may, however, cause constraint violations in components farther away from s.
If no such change is possible, go to Point 13. Otherwise, modify the component c in
x.

10. Go back to step Point 4.

11. If x is better than x⋆ , that is, x ≺ x⋆ , set x⋆ = x. Otherwise, decrease the iteration
counter τ .

12. If the termination criterion has not yet been met, go back to step Point 3 if τ > 0 and
to Point 2 if τ = 0.

13. Return x⋆ to the user.

The iteration counter τ here is used to allow the search to explore solutions more distant
from the current optimum x⋆ . The higher the initial value maxτ specified by the user, the
more iterations without improvement are allowed before reverting x to x⋆ . The name Raindrop
Method comes from the fact that the constraint violations caused by the perturbation of
the valid solution radiate away from the modified component s like waves on a water surface
radiate away from the point where a raindrop hits.

26.7.1 General Information on Raindrop Method

26.7.1.1 Applications and Examples

Table 26.3: Applications and Examples of Raindrop Method.


Area References
Combinatorial Problems [290, 3083]

26.7.1.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]

26.7.1.3 Conferences and Workshops

Table 26.4: Conferences and Workshops on Raindrop Method.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

26.7.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Tasks T26

64. Use the implementation of Hill Climbing given in Listing 56.6 on page 735 (or an
equivalent own one as in Task 65) in order to find approximations of the global minima
of
f1 (x) = 0.5 − (1/(2n)) · Σ_{i=1..n} cos(ln(1 + |xi |))  (26.1)

f2 (x) = 0.5 − 0.5 · cos(ln(1 + Σ_{i=1..n} |xi |))  (26.2)

f3 (x) = 0.01 · max_{i=1..n} xi ^2  (26.3)

in the real interval [−10, 10]n .

(a) Run ten experiments for n = 2, n = 5, and n = 10 each. Note all parameter
settings of the algorithm you used (the number of algorithm steps, operators,
etc.), the results, the vectors discovered, and their objective values. [20 points]
(b) Compute the median (see Section 53.2.6 on page 665), the arithmetic mean (see
Section 53.2.2, especially Definition D53.32 on page 661) and the standard devi-
ation (see Section 53.2.3, especially Section 53.2.3.2 on page 663) of the achieved
objective values in the ten runs. [10 points]
(c) The three objective functions are designed so that their value is always in [0, 1]
in the interval [−10, 10]n . Try to order the optimization problems according to
the median of the achieved objective values. Which problems seem to be the
hardest? What is the influence of the dimension n on the achieved objective
values? [10 points]
(d) If you compare f1 and f2 , you find striking similarities. Does one of the functions
seem to be harder? If so, which one, and what, in your opinion, makes it harder?
[5 points]
[45 points]

65. Implement the Hill Climbing procedure Algorithm 26.1 on page 230 in a programming
language different from Java. You can specialize your implementation for only solving
continuous numerical problems, i. e., you do not need to follow the general, interface-
based approach we adhere to in Listing 56.6 on page 735. Apply your implementation
to the Sphere function (see Section 50.3.1.1 on page 581 and Listing 57.1 on page 859)
for n = 10 dimensions in order to test its functionality.
[20 points]

66. Extend the Hill Climbing algorithm implementation given in Listing 56.6 on page 735
to Hill Climbing with restarts, i. e., to a Hill Climbing which restarts the search pro-
cedure every n steps (while preserving the results of the optimization process in an
additional variable). You may use the discussion in Section 26.5 on page 232 as refer-
ence. However, you do not need to provide the multi-objective optimization capabilities
of Algorithm 26.3 on page 233. Instead, only extend the single-objective Hill Climbing
method in Listing 56.6 on page 735 or provide a completely new implementation of
Hill Climbing with restarts in a programming language of your choice. Test your im-
plementation on one of the previously discussed benchmark functions for real-valued,
continuous optimization (see, e.g., Task 64).
[25 points]

67. Let us now solve a modified version of the bin packing problem from Example E1.1
on page 25. In Table 26.5, we give four datasets from Data Set 1 in [2422]. Each
dataset stands for one bin packing problem and provides the bin size b, the number
n of objects to be put into the bins, and the sizes ai of the objects (with i ∈ 1..n).
The problem space be the set of all possible assignments of objects to bins, where the
number k of bins is not provided. For each of the four bin packing problems, find
a candidate solution x which contains an assignment of objects to as few as possible
bins of the given size b. In other words: For each problem, we do know the size b of
the bin, the number n of objects, and the object sizes a. We want to find out how
many bins we need at least to store all the objects (k ∈ N1 , the objective, subject to
minimization) and how we can store (x ∈ X) the objects in these bins.

Name [2422] b n ai k⋆
N1C1W1 A 100 50 [50, 100, 99, 99, 96, 96, 92, 92, 91, 88, 87, 86, 85, 76, 74, 25
72, 69, 67, 67, 62, 61, 56, 52, 51, 49, 46, 44, 42, 40, 40,
33, 33, 30, 30, 29, 28, 28, 27, 25, 24, 23, 22, 21, 20, 17,
14, 13, 11, 10, 7, 7, 3]
N1C2W2 D 120 50 [50, 120, 99, 98, 98, 97, 96, 90, 88, 86, 82, 82, 80, 79, 76, 24
76, 76, 74, 69, 67, 66, 64, 62, 59, 55, 52, 51, 51, 50, 49,
44, 43, 41, 41, 41, 41, 41, 37, 35, 33, 32, 32, 31, 31, 31,
30, 29, 23, 23, 22, 20, 20]
N2C1W1 A 100 100 [100, 100, 99, 97, 95, 95, 94, 92, 91, 89, 86, 86, 85, 84, 80, 48
80, 80, 80, 80, 79, 76, 76, 75, 74, 73, 71, 71, 69, 65, 64,
64, 64, 63, 63, 62, 60, 59, 58, 57, 54, 53, 52, 51, 50, 48,
48, 48, 46, 44, 43, 43, 43, 43, 42, 41, 40, 40, 39, 38, 38,
38, 38, 37, 37, 37, 37, 36, 35, 34, 33, 32, 30, 29, 28, 26,
26, 26, 24, 23, 22, 21, 21, 19, 18, 17, 16, 16, 15, 14, 13,
12, 12, 11, 9, 9, 8, 8, 7, 6, 6, 5, 1]
N2C3W2 C 150 100 [100, 150, 100, 99, 99, 98, 97, 97, 97, 96, 96, 95, 95, 95, 41
94, 93, 93, 93, 92, 91, 89, 88, 87, 86, 84, 84, 83, 83, 82,
81, 81, 81, 78, 78, 75, 74, 73, 72, 72, 71, 70, 68, 67, 66,
65, 64, 63, 63, 62, 60, 60, 59, 59, 58, 57, 56, 56, 55, 54,
51, 49, 49, 48, 47, 47, 46, 45, 45, 45, 45, 44, 44, 44, 44,
43, 41, 41, 40, 39, 39, 39, 37, 37, 37, 35, 35, 34, 32, 31,
31, 30, 28, 26, 25, 24, 24, 23, 23, 22, 21, 20, 20]

Table 26.5: Bin Packing test data from [2422], Data Set 1.

Solve this optimization problem with Hill Climbing. It is not the goal to solve it to
optimality, but to get solutions with Hill Climbing which are as good as possible. For
the sake of completeness and in order to allow comparisons, we provide the lowest
known number k ⋆ of bins for the problems given by Scholl and Klein [2422] in the last
column of Table 26.5.
In order to solve this optimization problem, you may proceed as discussed in Chap-
ter 7 on page 109, except that you have to use Hill Climbing. Below, we give some
suggestions – but you do not necessarily need to follow them:
(a) A possible problem space X for n objects would be the set of all permutations
of the first n natural numbers (X = Π(1..n)). A candidate solution x ∈ X, i. e.,
such a permutation, could describe the order in which the objects have to be put
into bins. If a bin is filled to its capacity (or the current object does not fit into
the bin anymore), a new bin is needed. This process is repeated until all objects
have been packed into bins.
(b) For the objective function, there exist many different choices.
i. As objective function f1 , the number k(x) of bins required for storing the
objects according to a permutation x ∈ X could be used. This value can
easily be computed as discussed in our outline of the problem space X above.
You can find an example implementation of this function in Listing 57.10 on
page 892.
ii. Alternatively, we could also sum up the squares of the free space left over in
each bin as f2 . It is easy to see that if we put each object into a single bin,
the free capacity left over is maximal. If there was a way to perfectly fill each
bin with objects, the free capacity would become 0. Such a configuration
would also require the least number of bins, since using fewer bins would
mean to not store all objects. By summing up the squares of the free spaces,
we punish bins with lots of free space more severely. In other words, we
reward solutions where the bins are filled equally. Naturally, one would “feel”
that such solutions are “good”. However, this approach may prevent us
from finding the optimum if it is a solution where the objects are distributed
unevenly. We implement this objective function as Listing 57.11 on page 893.
iii. Also, a combination of the above would be possible, such as
f3 (x) = f1 (x) − 1/(1 + f2 (x)), for example. An example implementation of this function is given
as Listing 57.12 on page 894.
iv. Then again, we could consider the total number of bins required but ad-
ditionally try to minimize the space lastWaste(x) left over in the last bin,
i. e., set f4 (x) = f1 (x) + 1/(1 + lastWaste(x)). The idea here would be that
the dominating factor of the objective should still be the number of bins re-
quired. But, if two solutions require the same amount of bins, we prefer the
one where more space is free in the last bin. If this waste in the last bin is
increased more and more, this means that the volume of objects in that bin
is reduced. It may eventually become 0 which would equal leaving the bin
empty. The empty bin would not be used and hence, we would be saving one
bin. An implementation of such a function for the suggested problem space
is given at Listing 57.13 on page 896.
v. Finally, instead of only considering the remaining empty space in the last
bin, we could consider the maximum empty space over all bins. The idea
is pretty similar: If we have a bin which is almost empty, maybe we can
make it disappear with just a few moves (object swaps). Hence, we could
prefer solutions where there is one bin with much free space. In order to find
such solutions, we go over all the bins required by the solution, and check how
much space is free in them. We then take the one bin with the most free space
mf and add this space as a factor to the bin count in the objective function,
similar to as we did in the previous function: f5 (x) = f1 (x) + 1/(1 + mf(x)).
An implementation of this function for the suggested problem space is given
at Listing 57.14 on page 897.
(c) If the suggested problem space is used, it could also be used as search space,
i. e., G = X. Then, a simple unary search operation (mutation) would swap the
elements of the permutation such as those provided in Listing 56.38 on page 806
and Listing 56.39. A creation operation could draw a permutation according to
the uniform distribution as shown in Listing 56.37.
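These suggestions can be sketched in Java as follows. This is an illustrative stand-in for the book's Listings 57.10 and 56.37 to 56.39, not their actual code: the class and method names are our own choices, and the permutation runs over 0-based object indices rather than 1..n.

```java
import java.util.Random;

// Sketch of the suggested representation for bin packing: a permutation of
// object indices is decoded by packing the objects in that order, opening a
// new bin whenever the current object no longer fits.
public class BinPackingOps {

  // Objective f1: count the bins needed when packing in permutation order.
  public static int countBins(int[] permutation, int[] sizes, int binSize) {
    int bins = 0; // bins opened so far
    int free = 0; // remaining capacity of the current bin
    for (int index : permutation) {
      if (sizes[index] > free) { // object does not fit: open a new bin
        bins++;
        free = binSize;
      }
      free -= sizes[index];
    }
    return bins;
  }

  // Creation: draw a permutation uniformly via a Fisher-Yates shuffle.
  public static int[] create(int n, Random random) {
    int[] perm = new int[n];
    for (int i = 0; i < n; i++) perm[i] = i;
    for (int i = n - 1; i > 0; i--) {
      int j = random.nextInt(i + 1);
      int tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
    }
    return perm;
  }

  // Unary search operation: swap two distinct positions (requires n >= 2).
  public static int[] mutate(int[] parent, Random random) {
    int[] child = parent.clone(); // never modify the parent in place
    int i = random.nextInt(child.length);
    int j = random.nextInt(child.length);
    while (j == i) j = random.nextInt(child.length);
    int tmp = child[i]; child[i] = child[j]; child[j] = tmp;
    return child;
  }
}
```

Packing the sizes {50, 30, 40, 20, 60} in index order into bins of size b = 100, for instance, needs three bins (50 + 30, then 40 + 20, then 60).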

Use the Hill Climbing Algorithm 26.1 given in Listing 56.6 on page 735 for solving the
task and compare its results with the results produced by equivalent random walks
and random sampling (see Chapter 8 and Section 56.1.2 on page 729). For your
experiments, you may use the program given in Listing 57.15 on page 899.

(a) For each algorithm or objective function you test, perform at least 30 independent
runs to get reliable results (Remember: our Hill Climbing algorithm is randomized
and can produce different results in each run.)
(b) For each of the four problems, write down the best assignment of objects to bins
that you have found (along with the number of bins required). You may add your
own ideas as well.
(c) Discuss which of the objective functions and algorithm or algorithm configuration
performed best and give reasons.
(d) Also provide all your source code.
[40 points]
Chapter 27
Simulated Annealing

27.1 Introduction

In 1953, Metropolis et al. [1874] developed a Monte Carlo method for “calculating the
properties of any substance which may be considered as composed of interacting individual
molecules”. With this new “Metropolis” procedure stemming from statistical mechanics,
the manner in which metal crystals reconfigure and reach equilibriums in the process of
annealing can be simulated, for example. This inspired Kirkpatrick et al. [1541] to develop
the Simulated Annealing1 (SA) algorithm for Global Optimization in the early 1980s and
to apply it to various combinatorial optimization problems. Independently around the same
time, Černý [516] employed a similar approach to the Traveling Salesman Problem. Even
earlier, Jacobs et al. [1426, 1427] used a similar method to optimize program code. Simulated
Annealing is a general-purpose optimization algorithm that can be applied to arbitrary
search and problem spaces. Like simple Hill Climbing, Simulated Annealing only needs a
single initial individual as starting point and a unary search operation and thus belongs to
the family of local search algorithms.

27.1.1 Metropolis Simulation

In metallurgy and material science, annealing2 [1307] is a heat treatment of material


with the goal of altering properties such as its hardness. Metal crystals have small defects,
dislocations of atoms which weaken the overall structure as sketched in Figure 27.1. It
should be noted that Figure 27.1 is a very rough sketch and the following discussion is very
rough and simplified, too.3 In order to decrease the defects, the metal is heated to, for instance,
0.4 times its melting temperature, i. e., well below its melting point but usually already a
few hundred degrees Celsius. Adding heat means adding energy to the atoms in the metal.
Due to the added energy, the electrons are freed from the atoms in the metal (which now
become ions). Also, the higher energy of the ions makes them move and increases their
diffusion rate. The ions may now leave their current position and assume better positions.
During this process, they may also temporarily reside in configurations of higher energy, for
example if two ions get close to each other. The movement decreases as the temperature
of the metal sinks. The structure of the crystal is reformed as the material cools down and
1 http://en.wikipedia.org/wiki/Simulated_annealing [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Annealing_(metallurgy) [accessed 2008-09-19]
3 I assume that a physicist or chemist would beat me for these simplifications. If you are a physicist or
chemist and have suggestions for improving/correcting the discussion, please drop me an email.
Figure 27.1: A sketch of annealing in metallurgy as role model for Simulated Annealing. [The figure shows a metal structure with defects passing, via increased movement of the atoms/ions, into a low-energy structure with fewer defects, while the temperature T falls from Ts = getTemperature(1) toward 0 K = getTemperature(∞) over time t.]

approaches its equilibrium state. During this process, the dislocations in the metal crystal
can disappear. When annealing metal, the initial temperature must not be too low and the
cooling must be done sufficiently slowly so as to avoid to stop the ions too early and to
prevent the system from getting stuck in a meta-stable non-crystalline state representing a
local minimum of energy.
In the Metropolis simulation of the annealing process, each set of positions pos of all
atoms of a system is weighted by its Boltzmann probability factor e^(−E(pos)/(kB ∗T )), where E(pos) is
the energy of the configuration pos, T is the temperature measured in Kelvin, and kB is the
Boltzmann’s constant4 :
kB ≈ 1.380 6505 · 10−23 J/K (27.1)
The Metropolis procedure is a model of this physical process which could be used to simulate
a collection of atoms in thermodynamic equilibrium at a given temperature. A new nearby
geometry posi+1 was generated as a random displacement from the current geometry posi
of the atoms in each iteration. The energy of the resulting new geometry is computed and
∆E, the energetic difference between the current and the new geometry, was determined.
The probability that this new geometry is accepted, P (∆E) is defined in Equation 27.3.

∆E = E(posi+1 ) − E(posi ) (27.2)


P (∆E) = { e^(−∆E/(kB ∗T )) if ∆E > 0; 1 otherwise } (27.3)

Systems in chemistry and physics always “strive” to attain low-energy states, i. e., the lower
the energy, the more stable is the system. Thus, if the new nearby geometry has a lower
energy level, the change of atom positions is accepted in the simulation. Otherwise, a
uniformly distributed random number r = randomUni[0, 1) is drawn and the step will only
be realized if r is less or equal the Boltzmann probability factor, i. e., r ≤ P (∆E). At high
temperatures T, this factor is very close to 1, leading to the acceptance of many uphill steps.
4 http://en.wikipedia.org/wiki/Boltzmann%27s_constant [accessed 2007-07-03]
As the temperature falls, the proportion of steps accepted which would increase the energy
level decreases. Now the system will not escape local regions anymore and (hopefully) comes
to a rest in the global energy minimum at temperature T = 0K.

27.2 Algorithm

The abstraction of the Metropolis simulation method in order to allow arbitrary problem
spaces is straightforward – the energy computation E(posi ) is replaced by an objective
function f or maybe even by the result v of a fitness assignment process. Algorithm 27.1
illustrates the basic course of Simulated Annealing which you can find implemented in List-
ing 56.8 on page 740. Without loss of generality, we reuse the definitions from Evolutionary
Algorithms for the search operations and set Op = {create, mutate}.

Algorithm 27.1: x ←− simulatedAnnealing(f)

Input: f: the objective function to be minimized
Data: pnew : the newly generated individual
Data: pcur : the point currently investigated in problem space
Data: T: the temperature of the system which is decreased over time
Data: t: the current time index
Data: ∆E: the energy (objective value) difference of pcur .x and pnew .x
Output: x: the best element found

 1 begin
 2   pcur .g ←− create()
 3   pcur .x ←− gpm(pcur .g)
 4   x ←− pcur .x
 5   t ←− 1
 6   while ¬terminationCriterion() do
 7     T ←− getTemperature(t)
 8     pnew .g ←− mutate(pcur .g)
 9     pnew .x ←− gpm(pnew .g)
10     ∆E ←− f(pnew .x) − f(pcur .x)
11     if ∆E ≤ 0 then
12       pcur ←− pnew
13       if f(pcur .x) < f(x) then x ←− pcur .x
14     else
15       if randomUni[0, 1) < e^(−∆E/T) then pcur ←− pnew
16     t ←− t + 1
17   return x

Let us discuss in a very simplified way how this algorithm works. During the course of
the optimization process, the temperature T decreases (see Section 27.3). If the temperature
is high in the beginning, this means that, different from a Hill Climbing process, Simulated
Annealing accepts also many worse individuals. Better candidate solutions are always ac-
cepted. In summary, the chance that the search will transcend to a new candidate solution,
regardless whether it is better or worse, is high. In its behavior, Simulated Annealing is
then a bit similar to a random walk. As the temperature decreases, the chance that worse
individuals are accepted decreases. If T approaches 0, this probability becomes 0 as well
and Simulated Annealing becomes a pure Hill Climbing algorithm. The rationale of having
an algorithm that initially behaves like a random walk and then becomes a Hill Climber is
to make use of the benefits of the two extreme cases:
A random walk is a maximally-explorative and minimally-exploitive algorithm. It can
never get stuck at a local optimum and will explore all areas in the search space with
the same probability. Yet, it cannot exploit a good solution, i. e., is unable to make small
modifications to an individual in a targeted way in order to check if there are better solutions
nearby.
A Hill Climber, on the other hand, is minimally-explorative and maximally-exploitive. It
always tries to modify and improve on the current solution and never investigates far-away
areas of the search space.
Simulated Annealing combines both properties, it performs a lot of exploration in order
to discover interesting areas of the search space. The hope is that, during this time, it will
discover the basin of attraction of the global optimum. The more the optimization process
becomes like a Hill Climber, the less likely will it move away from a good solution to a worse
one and will instead concentrate on exploiting the solution.
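As a concrete illustration, Algorithm 27.1 can be condensed as follows when specialized to minimizing a real-valued function over double[] vectors, so that the genotype-phenotype mapping becomes the identity. The functional interfaces and the hard-coded exponential schedule are assumptions of this sketch, not the book's actual implementation (Listing 56.8).

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

// A condensed Simulated Annealing loop following Algorithm 27.1: better
// candidates are always accepted, worse ones with probability
// exp(-deltaE / T) under a temperature that falls over time.
public class SimulatedAnnealing {
  public static double[] minimize(ToDoubleFunction<double[]> f,
                                  UnaryOperator<double[]> mutate,
                                  double[] start, double startTemp,
                                  double epsilon, int maxSteps, Random random) {
    double[] current = start, best = start;
    double fCurrent = f.applyAsDouble(current), fBest = fCurrent;
    for (int t = 1; t <= maxSteps; t++) {
      // exponential schedule, an arbitrary choice for this sketch
      double temperature = Math.pow(1.0 - epsilon, t) * startTemp;
      double[] candidate = mutate.apply(current);
      double fCandidate = f.applyAsDouble(candidate);
      double deltaE = fCandidate - fCurrent;
      if (deltaE <= 0 || random.nextDouble() < Math.exp(-deltaE / temperature)) {
        current = candidate;
        fCurrent = fCandidate;
        if (fCurrent < fBest) { best = current; fBest = fCurrent; } // keep best ever seen
      }
    }
    return best;
  }
}
```

Applied to, e.g., the Sphere function with a small Gaussian mutation, the returned point is never worse than the starting point, since the best solution encountered is preserved in a separate variable just as x is in Algorithm 27.1.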

27.2.1 No Boltzmann’s Constant

Notice the difference between Line 15 in Algorithm 27.1 and Equation 27.3 on page 244:
Boltzmann’s constant has been removed. The reason is that with Simulated Annealing, we
do not want to simulate a physical process but instead, aim to solve an optimization problem.
The nominal value of Boltzmann’s constant, given in Equation 27.1 (something around
1.4 · 10−23 ), has quite a heavy impact on the Boltzmann probability. In a combinatorial
problem, where the differences in the objective values between two candidate solutions are
often discrete, this becomes obvious: In order to achieve an acceptance probability of 0.001
for a solution which is 1 worse than the best known one, one would need a temperature T
of:
P = 0.001 = e^(−1/(kB ∗T )) (27.4)
ln 0.001 = −1/(kB ∗ T) (27.5)
−1/ ln 0.001 = kB ∗ T (27.6)
−1/(kB ∗ ln 0.001) = T (27.7)
T ≈ 1.05 · 10^22 K (27.8)

Making such a setting, i. e., using a nominal value of the scale of 1 · 1022 or more, for such
a problem is not obvious and hard to justify. Leaving the constant kB away leads to
P = 0.001 = e^(−1/T ) (27.9)
ln 0.001 = −1/T (27.10)
−1/ ln 0.001 = T (27.11)
T ≈ 0.145 (27.12)

In other words, a temperature of around 0.15 would be sufficient to achieve an acceptance


probability of around 0.001. Such a setting is much easier to grasp and allows the algo-
rithm user to relate objective values, objective value differences, and probabilities in a more
straightforward way. Since “physical correctness” is not a goal in optimization (while find-
ing good solutions in an easy way is), kB is thus eliminated from the computations in the
Simulated Annealing algorithm.
If you take a close look on Equation 27.8 and Equation 27.12, you will spot another
difference: Equation 27.8 contains the unit K whereas Equation 27.12 does not. The reason
is that the absence of kB in Equation 27.12 removes the physical context and hence, using
the unit Kelvin is no longer appropriate. Technically, we should also no longer speak about
temperature, but for the sake of simplicity and compliance with literature, we will keep that
name during our discussion of Simulated Annealing.
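The numbers above are easy to verify with a few lines of code; the small helper below is our own illustration of the kB-free acceptance rule, not part of the book's sources:

```java
// The acceptance rule of Algorithm 27.1 (without Boltzmann's constant)
// and the temperature needed for a target acceptance probability.
public class Acceptance {
  // Probability that a candidate worse by deltaE is accepted at temperature T.
  public static double probability(double deltaE, double temperature) {
    if (deltaE <= 0) return 1.0; // improvements are always accepted
    return Math.exp(-deltaE / temperature);
  }

  // Temperature at which a candidate worse by deltaE > 0 is accepted with
  // probability 0 < p < 1, obtained by solving p = exp(-deltaE / T) for T.
  public static double temperatureFor(double deltaE, double p) {
    return -deltaE / Math.log(p);
  }

  public static void main(String[] args) {
    // Reproduces Equation 27.12: T for deltaE = 1 and p = 0.001.
    System.out.println(temperatureFor(1.0, 0.001)); // about 0.1448
  }
}
```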

27.2.2 Convergence

It has been shown that Simulated Annealing algorithms with appropriate cooling strategies
will asymptotically converge to the global optimum [1403, 2043]. The probability that
the individual pcur as used in Algorithm 27.1 represents a global optimum approaches 1
for t → ∞. This feature is closely related to the property ergodicity5 of an optimization
algorithm:

Definition D27.1 (Ergodicity). An optimization algorithm is ergodic if its result x ∈ X


(or its set of results X ⊆ X) does not depend on the point(s) in the search space at which
the search process is started.

Nolte and Schrader [2043] and van Laarhoven and Aarts [2776] provide lists of the most
important works on the issue of guaranteed convergence to the global optimum. Nolte and
Schrader [2043] further list the research providing deterministic, non-infinite boundaries
for the asymptotic convergence by Anily and Federgruen [111], Gidas [1054], Nolte and
Schrader [2042], and Mitra et al. [1917]. In the same paper, they introduce a significantly
lower bound. They conclude that, in general, pcur .x in the Simulated Annealing process is
a globally optimal candidate solution with a high probability after a number of iterations
which exceeds the cardinality of the problem space. This is a boundary which is, well, slow
from the viewpoint of a practitioner [1403]: In other words, it would be faster to enumerate
all possible candidate solutions in order to find the global optimum with certainty than
applying Simulated Annealing. Indeed, this makes sense: If this was not the case, Simulated
Annealing could be used to solve NP-complete problems in sub-exponential time...
This does not mean that Simulated Annealing is generally slow. It only needs that much
time if we insist on the convergence to the global optimum. Speeding up the cooling process
will result in a faster search, but voids the guaranteed convergence on the other hand. Such
speeded-up algorithms are called Simulated Quenching (SQ) [1058, 1402, 2405].
Quenching6 , too, is a process in metallurgy. Here, a work piece is first heated and then
cooled rapidly, often by holding it into a water bath. This way, steel objects can be hardened,
for example.

27.3 Temperature Scheduling

Definition D27.2 (Temperature Schedule). The temperature schedule defines how the
temperature parameter T in the Simulated Annealing process (as defined in Algorithm 27.1)
is set. The operator getTemperature : N1 → R+ maps the current iteration index t to a
(positive) real temperature value T.

getTemperature(t) ∈ (0, +∞) ∀t ∈ N1 (27.13)

5 http://en.wikipedia.org/wiki/Ergodicity [accessed 2010-09-26]


6 http://en.wikipedia.org/wiki/Quenching [accessed 2010-10-11]
Equation 27.13 is provided as a Java interface in Listing 55.12 on page 717. As already
mentioned, the temperature schedule has major influence on the probability that the Sim-
ulated Annealing algorithm will find the global optimum. For “getTemperature”, a few
general rules hold. All schedules start with a temperature Ts which is greater than zero.

getTemperature(1) = Ts > 0 (27.14)

If the number of iterations t approaches infinity, the temperature should approach 0. If the
temperature sinks too fast, the ergodicity of the search may be lost and the global optimum
may not be discovered. In order to avoid that the resulting Simulated Quenching algorithm
gets stuck at a local optimum, random restarting can be applied, which already has been
discussed in the context of Hill Climbing in Section 26.5 on page 232.

lim t→+∞ getTemperature(t) = 0 (27.15)

There exists a wide range of methods to determine this temperature schedule. Miki et al.
[1896], for example, used Genetic Algorithms for this purpose. The most established methods
are listed below and implemented in Paragraph 56.1.3.2.1 and some examples of them are
sketched in Figure 27.2.

Figure 27.2: Different temperature schedules for Simulated Annealing. [The figure plots, over t = 0..100 iterations, how the temperature falls from Ts toward 0 under linear scaling (i.e., polynomial with α = 1), polynomial scaling with α = 2, logarithmic scaling, and exponential scaling with ǫ = 0.025 and ǫ = 0.05.]

27.3.1 Logarithmic Scheduling

Scale the temperature logarithmically where Ts is set to a value larger than the greatest
difference of the objective value of a local minimum and its best neighboring candidate
solution. This value may, of course, be estimated since it usually is not known. [1167, 2043]
An implementation of this method is given in Listing 56.9 on page 743.

getTemperatureln (t) = { Ts if t < e; Ts / ln t otherwise } (27.16)

27.3.2 Exponential Scheduling

Reduce T to (1 − ǫ)T in each step (or every m steps, see Equation 27.19), where the exact value
of 0 < ǫ < 1 is determined by experiment [2808]. Such an exponential cooling strategy will
turn Simulated Annealing into Simulated Quenching [1403]. This strategy is implemented
in Listing 56.10 on page 744.

getTemperatureexp (t) = (1 − ǫ)^t ∗ Ts (27.17)

27.3.3 Polynomial and Linear Scheduling


T is polynomially scaled between Ts and 0 for t̆ iterations according to Equation 27.18 [2808].
α is a constant, maybe 1, 2, or 4, that depends on the positions and objective values of the
local minima. Large values of α will spend more iterations at lower temperatures. For α = 1,
the schedule proceeds in linear steps.

getTemperaturepoly (t) = (1 − t/t̆)^α ∗ Ts (27.18)

27.3.4 Adaptive Scheduling

Instead of basing the temperature solely on the iteration index t, other information may be
used for self-adaptation as well. After every m moves, the temperature T can be set to β
times ∆Ec = f(pcur .x) − f(x), where β is an experimentally determined constant, f(pcur .x)
is the objective value of the currently examined candidate solution pcur .x, and f(x) is the
objective value of the best phenotype x found so far. Since ∆Ec may be 0, we limit the
temperature change to a maximum of T ∗ γ with 0 < γ < 1. [2808]

27.3.5 Larger Step Widths

If the temperature reduction should not take place continuously but only every m ∈ N1
iterations, t′ computed according to Equation 27.19 can be used in the previous formulas,
i. e., getTemperature(t′ ) instead of getTemperature(t).

t′ = m⌊t/m⌋ (27.19)
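The schedules of this section can be summarized in a short sketch. The class and method names below are our own, not the book's interface from Listing 55.12, and the total number of iterations for the polynomial schedule is passed explicitly as maxT:

```java
// Temperature schedules from Section 27.3: all values fall toward 0
// as the iteration index t grows.
public class Schedules {
  // Equation 27.16: logarithmic scheduling.
  public static double logarithmic(int t, double ts) {
    return (t < Math.E) ? ts : ts / Math.log(t);
  }

  // Equation 27.17: exponential scheduling, turning SA into Simulated Quenching.
  public static double exponential(int t, double ts, double epsilon) {
    return Math.pow(1.0 - epsilon, t) * ts;
  }

  // Equation 27.18: polynomial scheduling over maxT iterations;
  // alpha = 1 gives the linear schedule.
  public static double polynomial(int t, int maxT, double ts, double alpha) {
    return Math.pow(1.0 - ((double) t) / maxT, alpha) * ts;
  }

  // Equation 27.19: index transformation so that the temperature only
  // changes every m iterations.
  public static int steppedIndex(int t, int m) {
    return m * (t / m); // integer division implements the floor
  }
}
```

For example, polynomial(50, 100, 10.0, 1.0) yields half the start temperature, and steppedIndex(7, 5) maps iteration 7 back to index 5 so that iterations 5 to 9 share one temperature.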
27.4 General Information on Simulated Annealing

27.4.1 Applications and Examples

Table 27.1: Applications and Examples of Simulated Annealing.


Area References
Combinatorial Problems [190, 308, 344, 382, 516, 667, 771, 1403, 1454, 1455, 1486,
1541, 1550, 1896, 2043, 2105, 2171, 2640, 2770, 2901, 2966]
Computer Graphics [433, 1043, 1403, 2707]
Data Mining [1403]
Databases [771, 2171]
Distributed Algorithms and Systems [616, 1926, 2058, 2068, 2631, 2632, 2901, 2994, 3002]
Economics and Finances [1060, 1403]
Engineering [437, 1058, 1059, 1403, 1486, 1541, 1550, 1926, 2058, 2250,
2405, 3002]
Function Optimization [1018, 1071, 2180, 2486]
Graph Theory [63, 344, 616, 1454, 1455, 1486, 1926, 2058, 2068, 2237, 2631,
2632, 2793, 3002]
Logistics [190, 382, 516, 667, 1403, 1550, 1896, 2043, 2770, 2966]
Mathematics [111, 227, 1043, 1674, 2180]
Military and Defense [1403]
Physics [1403]
Security [1486]
Software [1486, 1550, 2901, 2994]
Wireless Communication [63, 154, 1486, 2237, 2250, 2793]

27.4.2 Books
1. Genetic Algorithms and Simulated Annealing [694]
2. Simulated Annealing: Theory and Applications [2776]
3. Electromagnetic Optimization by Genetic Algorithms [2250]
4. Facts, Conjectures, and Improvements for Simulated Annealing [2369]
5. Simulated Annealing for VLSI Design [2968]
6. Introduction to Stochastic Search and Optimization [2561]
7. Simulated Annealing [2966]

27.4.3 Conferences and Workshops

Table 27.2: Conferences and Workshops on Simulated Annealing.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

27.4.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Tasks T27

68. Perform the same experiments as in Task 64 on page 238 with Simulated Annealing.
Therefore, use the Java Simulated Annealing implementation given in Listing 56.8 on
page 740 or develop an equivalent own implementation.
[45 points]

69. Compare your results from Task 64 on page 238 and Task 68. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for both tasks? Try to give reasons
for your results.
[10 points]

70. Perform the same experiments as in Task 67 on page 238 with Simulated Annealing and
Simulated Quenching. Therefore, use the Java Simulated Annealing implementation
given in Listing 56.8 on page 740 or develop an equivalent own implementation.
[25 points]

71. Compare your results from Task 67 on page 238 and Task 70. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for both tasks? Try to give reasons
for your results. Put your explanations into the context of the discussions provided
in Example E17.4 on page 181.
[10 points]

72. Try solving the bin packing problem again, this time by using the representation given
by Khuri, Schütz, and Heitkötter in [1534]. Use both, Simulated Annealing and Hill
Climbing to evaluate the performance of the new representation similar to what we
do in Section 57.1.2.1 on page 887. Compare the results with the results listed in that
section and obtained by you in Task 70. Which method is better? Provide reasons for
your observation.
[50 points]

73. Compute the probabilities that a new solution with ∆E1 = 0, ∆E2 = 0.001, ∆E3 =
0.01, ∆E4 = 0.1, ∆E5 = 1.0, ∆E6 = 10.0, and ∆E7 = 100 is accepted by the Simulated
Annealing algorithm at temperatures T1 = 0, T2 = 1, T3 = 10, and T4 = 100. Use
the same formulas used in Algorithm 27.1 (consider the discussion in Section 27.2.1).
[15 points]

74. Compute the temperatures needed so that solutions with ∆E1 = 0, ∆E2 = 0.001,
∆E3 = 0.01, ∆E4 = 0.1, ∆E5 = 1.0, ∆E6 = 10.0, and ∆E7 = 100 are accepted by the
Simulated Annealing algorithm with probabilities P1 = 0, P2 = 0.1, P3 = 0.25, and
P4 = 0.5. Base your calculations on the formulas used in Algorithm 27.1 and consider
the discussion in Section 27.2.1.
[15 points]
Chapter 28
Evolutionary Algorithms

28.1 Introduction

Definition D28.1 (Evolutionary Algorithm). Evolutionary Algorithms1 (EAs) are


population-based metaheuristic optimization algorithms which use mechanisms inspired by
biological evolution, such as mutation, crossover, natural selection, and survival of the
fittest, in order to refine a set of candidate solutions iteratively in a cycle. [167, 171, 172]

Evolutionary Algorithms have not been invented in a canonical form. Instead, they are a
large family of different optimization methods which have, to a certain degree, been de-
veloped independently by different researchers. Information on their historic development
is thus largely distributed amongst the chapters of this book which deal with the specific
members of this family of algorithms.
Like other metaheuristics, all EAs share the “black box” optimization character and
make only few assumptions about the underlying objective functions. Their consistent and
high performance in many different problem domains (see, e.g., Table 28.4 on page 314) and
the appealing idea of copying natural evolution for optimization made them one of the most
popular metaheuristics.
As in most metaheuristic optimization algorithms, we can distinguish between single-
objective and multi-objective variants. The latter means that multiple, possibly conflict-
ing criteria are optimized as discussed in Section 3.3. The general area of Evolutionary
Computation that deals with multi-objective optimization is called EMOO, evolutionary
multi-objective optimization and the multi-objective EAs are called, well, multi-objective
Evolutionary Algorithms.

Definition D28.2 (Multi-Objective Evolutionary Algorithm). A multi-objective Evo-


lutionary Algorithm (MOEA) is an Evolutionary Algorithm suitable to optimize multiple
criteria at once [595, 596, 737, 740, 954, 1964, 2783].

In this section, we will first outline the basic cycle in which Evolutionary Algorithms proceed
in Section 28.1.1. EAs are inspired by natural evolution, so we will compare the natural
paragons of the single steps of this cycle in detail in Section 28.1.2. Later in this chapter,
we will outline possible algorithms which can be used for the processes in EAs and finally
give some information on conferences, journals, and books on EAs as well as their applications.
1 http://en.wikipedia.org/wiki/Artificial_evolution [accessed 2007-07-03]
28.1.1 The Basic Cycle of EAs

All evolutionary algorithms proceed in principle according to the scheme illustrated in Figure 28.1, which is a specialization of the basic cycle of metaheuristics given in Figure 1.8 on page 35. We define the basic EA for single-objective optimization in Algorithm 28.1 and the multi-objective version in Algorithm 28.2.

Figure 28.1: The basic cycle of Evolutionary Algorithms. [The cycle consists of: Initial Population (create an initial population of random individuals), GPM (apply the genotype-phenotype mapping and obtain the phenotypes), Evaluation (compute the objective values of the candidate solutions), Fitness Assignment (use the objective values to determine fitness values), Selection (select the fittest individuals for reproduction), and Reproduction (create new individuals from the mating pool by crossover and mutation).]
1. In the first generation (t = 1), a population pop of ps individuals p is created. The
function createPop(ps) produces this population. It depends on its implementation
whether the individuals p have random genomes p.g (the usual case) or whether the
population is seeded (see Paragraph 28.1.2.2.2).
2. The genotypes p.g are translated to phenotypes p.x (see Section 28.1.2.3). This
genotype-phenotype mapping may either directly be performed to all individuals
(performGPM) in the population or can be a part of the process of evaluating the
objective functions.
3. The values of the objective functions f ∈ f are evaluated for each candidate solu-
tion p.x in pop (and usually stored in the individual records) with the operation
“computeObjectives”. This evaluation may incorporate complicated simulations and
calculations. In single-objective Evolutionary Algorithms, f = {f} and, hence, |f | = 1
holds.
4. With the objective functions, the utilities of the different features of the candidate
solutions p.x have been determined and a scalar fitness value v(p) can now be assigned
to each of them (see Section 28.1.2.5). In single-objective Evolutionary Algorithms,
v ≡ f holds.
5. A subsequent selection process selection(pop, v, mps) (or selection(pop, f, mps), if v ≡ f,
as is the case in single-objective Evolutionary Algorithms) filters out the candidate
solutions with bad fitness and allows those with good fitness to enter the mating pool
with a higher probability (see Section 28.1.2.6).
6. In the reproduction phase, offspring are derived from the genotypes p.g of the selected
individuals p ∈ mate by applying the search operations searchOp ∈ Op (which are
28.1. INTRODUCTION 255
called reproduction operations in the context of EAs, see Section 28.1.2.7). There usu-
ally are two different reproduction operations: mutation, which modifies one genotype,
and crossover, which combines two genotypes to a new one. Whether the whole popu-
lation is replaced by the offspring or whether they are integrated into the population as
well as which individuals recombined with each other depends on the implementation
of the operation “reproducePop”. We highlight the transition between the generations
in Algorithm 28.1 and 28.2 by incrementing a generation counter t and annotating the
populations with it, i. e., pop(t).

7. If the terminationCriterion is met, the evolution stops here. Otherwise, the algo-
rithm continues at Point 2.

Algorithm 28.1: X ←− simpleEA(f, ps, mps)

Input: f: the objective/fitness function
Input: ps: the population size
Input: mps: the mating pool size
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found
1 begin
2   t ←− 1
3   pop(t = 1) ←− createPop(ps)
4   continue ←− true
5   while continue do
6     pop(t) ←− performGPM(pop(t), gpm)
7     pop(t) ←− computeObjectives(pop(t), f)
8     if terminationCriterion then
9       continue ←− false
10    else
11      mate ←− selection(pop(t), f, mps)
12      t ←− t + 1
13      pop(t) ←− reproducePop(mate, ps)
14  return extractPhenotypes(extractBest(pop(t), f))


Algorithm 28.1 is strongly simplified. The termination criterion terminationCriterion,
for example, is usually invoked after each and every evaluation of the objective functions.
Often, we have a limit in terms of objective function evaluations (see, for instance, Sec-
tion 6.3.3). Then, it is not sufficient to check terminationCriterion after each generation,
since a generation involves computing the objective values of multiple individuals. A more
precise version of the basic Evolutionary Algorithm scheme (with steady-state population
treatment) is specified in Listing 56.11 on page 746.
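Under the stated simplifications, the generational cycle of Algorithm 28.1 might be sketched in Python roughly as follows. This is an illustrative sketch, not the book's accompanying source code: it assumes bit-string genomes with an identity genotype-phenotype mapping, truncation selection, and a fixed generation limit in place of a general terminationCriterion; all names (simple_ea, etc.) are our own.

```python
import random

def simple_ea(f, ps, mps, n, generations=100, seed=1):
    """Minimal generational EA for bit-string genomes, minimizing f.

    f: objective function, ps: population size, mps: mating pool size,
    n: genome length. The genotype-phenotype mapping is the identity,
    so genotype and phenotype coincide here.
    """
    rnd = random.Random(seed)
    # createPop(ps): random initial genotypes
    pop = [[rnd.randint(0, 1) for _ in range(n)] for _ in range(ps)]
    for _ in range(generations):
        # selection(pop, f, mps): truncation selection into the mating pool
        mate = sorted(pop, key=f)[:mps]
        # reproducePop(mate, ps): crossover plus mutation until ps offspring exist
        pop = []
        while len(pop) < ps:
            a, b = rnd.sample(mate, 2)
            cut = rnd.randrange(1, n)       # single-point crossover
            child = a[:cut] + b[cut:]
            i = rnd.randrange(n)            # flip one bit as mutation
            child[i] = 1 - child[i]
            pop.append(child)
    return min(pop, key=f)                  # extractBest on the final generation

# Example: minimize the number of zero bits (the OneMax problem)
best = simple_ea(lambda x: x.count(0), ps=20, mps=5, n=16)
```

Note that, exactly as criticized above, this sketch returns the best individual of the last generation rather than maintaining an archive of the best candidate solutions found so far.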
Also, the return value(s) of an Evolutionary Algorithm often are not the best individual(s)
from the last generation, but the best candidate solution(s) discovered so far. This, however,
demands maintaining an archive, which can become a complicated issue, especially in the case
of multi-objective or multi-modal optimization.
An example implementation of Algorithm 28.2 is given in Listing 56.12 on page 750.

Algorithm 28.2: X ←− simpleMOEA(OptProb, ps, mps)

Input: OptProb: a tuple (X, f, cmpf) of the problem space X, the objective functions f, and a comparator function cmpf
Input: ps: the population size
Input: mps: the mating pool size
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: v: the fitness function resulting from the fitness assignment process
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found
1 begin
2   t ←− 1
3   pop(t = 1) ←− createPop(ps)
4   continue ←− true
5   while continue do
6     pop(t) ←− performGPM(pop(t), gpm)
7     pop(t) ←− computeObjectives(pop(t), f)
8     if terminationCriterion then
9       continue ←− false
10    else
11      v ←− assignFitness(pop(t), cmpf)
12      mate ←− selection(pop(t), v, mps)
13      t ←− t + 1
14      pop(t) ←− reproducePop(mate, ps)
15  return extractPhenotypes(extractBest(pop(t), cmpf))

28.1.2 Biological and Artificial Evolution

The concepts of Evolutionary Algorithms are strongly related to the way in which the development of species in nature could proceed. We will outline the historic ideas behind evolution and put the steps of a multi-objective Evolutionary Algorithm into this context.

28.1.2.1 The Basic Principles from Nature

In 1859, Darwin [684] published his book “On the Origin of Species”2 in which he identified
the principles of natural selection and survival of the fittest as driving forces behind the
biological evolution. His theory can be condensed into ten observations and deductions
[684, 1845, 2938]:
1. The individuals of a species possess great fertility and produce more offspring than
can grow into adulthood.
2. Under the absence of external influences (like natural disasters, human beings, etc.),
the population size of a species roughly remains constant.
3. Again, if no external influences occur, the food resources are limited but stable over
time.
4. Since the individuals compete for these limited resources, a struggle for survival ensues.
5. Especially in sexually reproducing species, no two individuals are equal.
6. Some of the variations between the individuals will affect their fitness and hence, their
ability to survive.
7. Some of these variations are inheritable.
8. Individuals which are less fit are also less likely to reproduce. The fittest individuals
will survive and produce offspring more probably.
9. Individuals that survive and reproduce will likely pass on their traits to their offspring.
10. A species will slowly change and adapt more and more to a given environment during
this process which may finally even result in new species.
These historic ideas themselves remain valid to a large degree even today. However, some
of Darwin’s specific conclusions can be challenged based on data and methods which simply
were not available at his time. Point 10, the assumption that change happens slowly over
time, for instance, became controversial as fossils indicate that evolution seems to be “at rest”
for longer times which alternate with short periods of sudden, fast developments [473, 2778]
(see Section 41.1.2 on page 493).

28.1.2.2 Initialization

28.1.2.2.1 Biology

Until now, it is not clear how life started on earth. It is assumed that before the first organisms
emerged, an abiogenesis3, a chemical evolution, took place in which the chemical com-
ponents of life formed. This process was likely divided into three steps: (1) In the first step,
simple organic molecules such as alcohols, acids, purines4 (such as adenine and guanine),
and pyrimidines5 (such as thymine, cytosine, and uracil) were synthesized from inorganic
2 http://en.wikipedia.org/wiki/The_Origin_of_Species [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Abiogenesis [accessed 2009-08-05]
4 http://en.wikipedia.org/wiki/Purine [accessed 2009-08-05]
5 http://en.wikipedia.org/wiki/Pyrimidine [accessed 2009-08-05]
substances. (2) Then, simple organic molecules were combined to the basic components of
complex organic molecules such as monosaccharides, amino acids6, pyrroles, fatty acids, and
nucleotides7. (3) Finally, complex organic molecules emerged. Many different theories exist
on how this could have taken place [176, 1300, 1698, 1713, 1837, 2084, 2091, 2432]. An impor-
tant milestone in the research in this area is likely the Miller-Urey experiment8 [1904, 1905],
which showed that amino acids can form from inorganic substances under conditions such
as those expected to have been present in the early time of the earth. The first single-celled life
most probably occurred in the Eoarchean9, the earth age beginning 3.8 billion years ago,
marking the onset of evolution.

28.1.2.2.2 Evolutionary Algorithms

Usually, when an Evolutionary Algorithm starts up, there exists no information about what
is good or what is bad. Basically, only random genes are coupled together as individuals
in the initial population pop(t = 1). This form of initialization can be considered to be
similar to the early steps of chemical and biological evolution, where it was unclear which
substances or primitive life forms would be successful and nature experimented with many
different reactions and concepts.
Besides random initialization, Evolutionary Algorithms can also be seeded [1126, 1139,
1471]. If seeding is applied, a set of individuals which already are adapted to the optimization
criteria is inserted into the initial population. Sometimes, the entire first population consists
of such individuals with good characteristics only. The candidate solutions used for this may
either have been obtained manually or by a previous optimization step [1737, 2019].
The goal of seeding is to achieve faster convergence and better results in the evolutionary
process. However, it also raises the danger of premature convergence (see Chapter 13
on page 151), since the already-adapted candidate solutions will likely be much more com-
petitive than the random ones and thus, may wipe them out of the population. Care must
thus be taken in order to achieve the wanted performance improvement.
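A seeded population initialization can be sketched as follows. The function name, signature, and bit-string representation are illustrative assumptions, not the book's API: the point is simply that known-good genotypes are copied into the initial population before the remainder is filled randomly.

```python
import random

def create_pop(ps, n, seeds=(), rnd=None):
    """Sketch of a seeded createPop: copy known-good genotypes into the
    initial population and fill the remainder with random bit strings."""
    rnd = rnd or random.Random(7)
    pop = [list(s) for s in seeds][:ps]     # seeded individuals first
    while len(pop) < ps:                    # pad with random genotypes
        pop.append([rnd.randint(0, 1) for _ in range(n)])
    return pop

# Two manually obtained solutions seed a population of six individuals.
pop = create_pop(ps=6, n=4, seeds=[[1, 1, 1, 1], [1, 1, 0, 1]])
```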

28.1.2.3 Genotype-Phenotype Mapping

28.1.2.3.1 Biology

TODO
Example E28.1 (Evolution of Fish).
In the following, let us discuss the evolution on the example of fish. We can consider a fish as
an instance of its heredity information, as one possible realization of the blueprint encoded
in its DNA.

28.1.2.3.2 Evolutionary Algorithms

At the beginning of every generation in an EA, each genotype p.g is instantiated as a new phenotype p.x = gpm(p.g). Genotype-phenotype mappings are discussed in detail in Section 28.2.
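As a simple illustration of such a mapping, consider decoding a bit-string genotype into a real-valued phenotype. The interval bounds and the binary encoding are arbitrary assumptions for this sketch; Section 28.2 discusses the actual design space of genotype-phenotype mappings.

```python
def gpm(g, lo=-5.0, hi=5.0):
    """Decode a bit-string genotype g into a real-valued phenotype in
    [lo, hi] by reading g as a binary-encoded integer."""
    value = int("".join(map(str, g)), 2)
    return lo + (hi - lo) * value / (2 ** len(g) - 1)

x = gpm([1, 1, 1, 1])   # the all-ones genotype decodes to hi = 5.0
```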
6 http://en.wikipedia.org/wiki/Amino_acid [accessed 2009-08-05]
7 http://en.wikipedia.org/wiki/Nucleotide [accessed 2009-08-05]
8 http://en.wikipedia.org/wiki/MillerUrey_experiment [accessed 2009-08-05]
9 http://en.wikipedia.org/wiki/Eoarchean [accessed 2007-07-03]
28.1.2.4 Objective Values

28.1.2.4.1 Biology

There are different criteria which determine whether an organism can survive in its environ-
ment. Often, individuals are good in some characteristics but rarely reach perfection in all
possible aspects.

Example E28.2 (Evolution of Fish (Example E28.1) Cont.).


The survival of the genes of the fish depends on how good the fish performs in the ocean.
Its fitness, however, is not only determined by one single feature of the phenotype like its
size. Although a bigger fish will have better chances to survive, size alone does not help if
it is too slow to catch any prey. Also its energy consumption in terms of calories per time
unit which are necessary to sustain its life functions should be low so it does not need to eat
all the time. Other factors influencing the fitness positively are formae like sharp teeth and
colors that blend into the environment so it cannot be seen too easily by its predators such
as sharks. If its camouflage is too good on the other hand, how will it find potential mating
partners? And if it is big, it will usually have higher energy consumption. So there may be
conflicts between the desired properties.

28.1.2.4.2 Evolutionary Algorithms

To sum it up, we could consider the life of the fish as the evaluation process of its genotype
in an environment where good qualities in one aspect can turn out as drawbacks from other
perspectives. In multi-objective Evolutionary Algorithms, this is exactly the same. For each
problem that we want to solve, we can specify one or multiple so-called objective functions
f ∈ f . An objective function f represents one feature that we are interested in.

Example E28.3 (Evolution of a Car).


Let us assume that we want to evolve a car (a pretty weird assumption, but let’s stick with
it). The genotype p.g ∈ G would be the construction plan and the phenotype p.x ∈ X the
real car, or at least a simulation of it. One objective function fa would definitely be safety.
For the sake of our children and their children, the car should also be environment-friendly,
so that’s our second objective function fb . Furthermore, a cheap price fc , fast speed fd , and
a cool design fe would be good. That makes five objective functions from which for example
the second and the fourth are contradictory (fb ≁ fd ).

28.1.2.5 Fitness

28.1.2.5.1 Biology

As stated before, different characteristics determine the fitness of an individual. The biolog-
ical fitness is a scalar value that describes the number of offspring of an individual [2542].
Example E28.4 (Evolution of Fish (Example E28.1) Cont.).
In Example E28.1, the fish genome is instantiated and proved its physical properties in
several different categories in Example E28.2. Whether one specific fish is fit or not, however,
is still not easy to say: fitness is always relative; it depends on the environment. The fitness
of the fish is not only defined by its own characteristics but also results from the features of
the other fish in the population (as well as from the abilities of its prey and predators). If
one fish can beat another one in all categories, i. e., is bigger, stronger, smarter, and so on,
it is better adapted to its environment and thus, will have a higher chance to survive. This
relation is transitive but only forms a partial order since a fish which is strong but not very
clever and a fish that is clever but not strong maybe have the same probability to reproduce
and hence, are not directly comparable.
It is not clear whether a weak fish with a clever behavioral pattern is worse or better
than a really strong but less cunning one. Both traits are useful in the evolutionary process
and maybe, one fish of the first kind will sometimes mate with one of the latter and produce
an offspring which is both, intelligent and powerful.

Example E28.5 (Computer Science and Sports).


Fitness depends on the environment. In my department, I may be considered to be of average
fitness. If I took a stroll to the department of sports science, however, my relative physical
suitability would decrease. Yet, my relative computer science skills would become much better.

28.1.2.5.2 Evolutionary Algorithms


The fitness assignment process in multi-objective Evolutionary Algorithms computes one
scalar fitness value for each individual p in a population pop by building a fitness function
v (see Definition D5.1 on page 94). Instead of being an a posteriori measure of the number
of offspring of an individual, it rather describes the relative quality of a candidate solution,
i. e., its ability to produce offspring [634, 2987].
Optimization criteria may contradict, so the basic situation in optimization very much
corresponds to the one in biology. One of the most popular methods for computing the fitness
is called Pareto ranking10 . It performs comparisons similar to the ones we just discussed.
Some popular fitness assignment processes are given in Section 28.3.
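The pairwise comparison underlying Pareto ranking can be made concrete with a small sketch. This naive version (quadratic in the population size, minimization assumed, helper names our own) only illustrates the idea; the actual fitness assignment algorithms are the subject of Section 28.3.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b, i.e., a is no worse
    in every objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_rank(vectors):
    """Assign each candidate the number of others dominating it;
    rank 0 marks the non-dominated (Pareto-optimal) candidates."""
    return [sum(dominates(b, a) for b in vectors) for a in vectors]

# (1,5), (2,2), and (3,1) are mutually non-dominated; (4,4) is dominated
# by both (2,2) and (3,1), so it receives rank 2.
ranks = pareto_rank([(1, 5), (2, 2), (3, 1), (4, 4)])
```

The resulting ranks play the role of the scalar fitness v(p): lower ranks mean better candidates, just as the strong-but-dull fish and the clever-but-weak fish above are incomparable and hence share the best rank.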

28.1.2.6 Selection

28.1.2.6.1 Biology
Selection in biology has many different facets [2945]. Let us again assume fitness to be a
measure for the expected number of offspring, as done in Paragraph 28.1.2.5.1. How fit an
individual is then does not necessarily determine directly if it can produce offspring and, if
so, how many. There are two more aspects which have influence on this: the environment
(including the rest of the population) and the potential mating partners. These factors are
the driving forces behind the three selection steps in natural selection 11 : (1) environmental
selection 12 (or survival selection and ecological selection) which determines whether an in-
dividual survives to reach fertility [684], (2) sexual selection, i. e., the choice of the mating
partner(s) [657, 2300], and (3) parental selection, i. e., possible decisions of the parents against
a pregnancy or a newborn [2300].
10 Pareto comparisons are discussed in Section 3.3.5 on page 65 and Pareto ranking is introduced in Section 28.3.3.
11 http://en.wikipedia.org/wiki/Natural_selection [accessed 2010-08-03]
12 http://en.wikipedia.org/wiki/Ecological_selection [accessed 2010-08-03]
Example E28.6 (Evolution of Fish (Example E28.1) Cont.).
An intelligent fish may be eaten by a shark, a strong one can die from a disease, and even
the most perfect fish could die from a lightning strike before ever reproducing. The process
of selection is always stochastic, without guarantees – even a fish that is small, slow, and
lacks any sophisticated behavior might survive and could produce even more offspring than
a highly fit one. The degree to which a fish is adapted to its environment therefore can
be considered as some sort of basic probability which determines its expected chances to
survive environmental selection. This is the form of selection described by Darwin [684] in
his seminal book “On the Origin of Species”.
The sexual or mating selection describes the fact that survival alone does not guarantee
reproduction: At least for sexually reproducing species, two individuals are involved in this
process. Hence, even a fish which is perfectly adapted to its environment will have low
chances to create offspring if it is unable to attract female interest. Sexual selection is
considered to be responsible for many of nature’s beauty, such as the colorful wings of a
butterfly or the tail of a peacock [657, 2300].

Example E28.7 (Computer Science and Sports (Example E28.5) Cont.).

For today’s noble computer scientist, survival selection is not the threatening factor. Further-
more, our relative fitness is on par with the fitness of the guys from the sports department,
at least in terms of the trade-off between theoretical and physical abilities. Yet, surprisingly,
sexual selection is less friendly to some of us. Indeed, parallels can be drawn between human
behavior and peafowl, which prefer mating partners with colorful feathers over others
regardless of their further qualities.

The third selection factor, parental selection, may lead to examples of reinforcement of certain phys-
ical characteristics which are much more extreme than Example E28.7. Rich Harris [2300]
argues that after the fertilization, i. e., before or after birth, parents may make a choice
for or against the life of their offspring. If the offspring does or is likely to exhibit certain
unwanted traits, this decision may be negative. A negative decision may also be caused by
environmental or economic circumstances. It may, however, also be overruled again
after birth if the offspring turns out to possess especially favorable characteris-
tics. Rich Harris [2300], for instance, argues that parents in prehistoric European and Asian
cultures may this way have actively selected for hairlessness and pale skin.

28.1.2.6.2 Evolutionary Algorithms

In Evolutionary Algorithms, survival selection is always applied in order to steer the op-
timization process towards good areas in the search space. From time to time, also a
subsequent sexual selection is used. For survival selection, the so-called selection algorithms
mate = selection(pop, v, mps) determine the mps individuals from the population pop which
should enter the mating pool mate based on their scalar fitness v. selection(pop, v, mps)
usually picks the fittest individuals and places them into mate with the highest probability.
The oldest selection scheme is called Roulette wheel and will be introduced in Section 28.4.3
on page 290 in detail. In the original version of this algorithm (intended for fitness maxi-
mization), the chance of an individual p to reproduce is proportional to its fitness v(p).
More information on selection algorithms is given in Section 28.4.
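The roulette wheel idea from this original, maximizing variant can be sketched as follows. This is a naive linear-walk version with illustrative names; the detailed algorithm and its variants follow in Section 28.4.3.

```python
import random

def roulette_wheel(pop, v, mps, rnd=None):
    """Fitness-proportionate selection for maximization: each individual p
    is drawn with probability v(p) / total fitness (v must be non-negative)."""
    rnd = rnd or random.Random(42)
    total = sum(v(p) for p in pop)
    mate = []
    for _ in range(mps):
        r = rnd.uniform(0, total)   # spin the wheel
        acc = 0.0
        for p in pop:               # walk the slots until r is covered
            acc += v(p)
            if acc >= r:
                mate.append(p)
                break
    return mate

# "c" occupies three quarters of the wheel; "b" (fitness 0) occupies none of it.
mate = roulette_wheel(["a", "b", "c"], {"a": 1.0, "b": 0.0, "c": 3.0}.get, mps=5)
```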

28.1.2.7 Reproduction
28.1.2.7.1 Biology

Last but not least, there is the reproduction. For this purpose, two approaches exist in
nature: the sexual and the asexual method.

Example E28.8 (Evolution of Fish (Example E28.1) Cont.).


Fish reproduce sexually. Whenever a female fish and a male fish mate, their genes will
be recombined by crossover. Furthermore, mutations may take place which slightly al-
ter the DNA. Most often, the mutations affect the characteristics of resulting larva only
slightly [2303]. Since fit fish produce offspring with higher probability, there is a good
chance that the next generation will contain at least some individuals that have combined
good traits from their parents and perform even better than them.

Asexually reproducing species basically create clones of themselves, a process during which,
again, mutations may take place. Regardless of how they were produced, the new generation
of offspring now enters the struggle for survival and the cycle of evolution starts in another
round.

28.1.2.7.2 Evolutionary Algorithms

In Evolutionary Algorithms, individuals usually do not have such a “gender”. Yet, the
concept of recombination, i. e., of binary search operations, is one of the central ideas of this
metaheuristic. Each individual from the mating pool can potentially be recombined with
every other one. In the car example, this means that we could copy the design of the engine
of one car and place it into the construction plan of the body of another car.
The concept of mutation is present in EAs as well. Parts of the genotype (and hence,
properties of the phenotype) can randomly be changed with a unary search operation. This
way, new construction plans for new cars are generated. The common definitions for repro-
duction operations in Evolutionary Algorithms are given in Section 28.5.
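For bit-string genotypes, the two classic reproduction operations can be sketched as follows (an illustrative sketch under the assumption of fixed-length bit strings; the general definitions follow in Section 28.5):

```python
import random

def mutate(g, rnd):
    """Unary reproduction operation: flip one randomly chosen locus."""
    child = list(g)                  # never modify the parent in place
    i = rnd.randrange(len(child))
    child[i] = 1 - child[i]
    return child

def crossover(a, b, rnd):
    """Binary reproduction operation: single-point crossover."""
    cut = rnd.randrange(1, len(a))   # cut point between 1 and len(a) - 1
    return a[:cut] + b[cut:]

rnd = random.Random(3)
child = crossover([0, 0, 0, 0], [1, 1, 1, 1], rnd)   # prefix of a, suffix of b
mutant = mutate([0, 0, 0, 0], rnd)                   # differs in exactly one bit
```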

28.1.2.8 Does the Natural Paragon Fit?

At this point it should be mentioned that the direct reference to Darwinian evolution in
Evolutionary Algorithms is somewhat controversial. The strongest argument against this
reference is that Evolutionary Algorithms introduce a change in semantics by being goal-
driven [2772] while the natural evolution is not.
Paterson [2134] further points out that “neither GAs [Genetic Algorithms] nor GP [Ge-
netic Programming] are concerned with the evolution of new species, nor do they use natural
selection.” On the other hand, nobody would claim that the idea of selection has not been
borrowed from nature although many additions and modifications have been introduced in
favor for better algorithmic performance.
An argument against close correspondence between EAs and natural evolution concerns
the development of different species. It depends on the definition of species: According to
Wikipedia – The Free Encyclopedia [2938], “a species is a class of organisms which are very
similar in many aspects such as appearance, physiology, and genetics.” In principle, there
is some elbowroom for us and we may indeed consider different solutions to a single
problem in Evolutionary Algorithms as members of different species – especially if the
binary search operation crossover/recombination applied to their genomes cannot produce
another valid candidate solution.
In Genetic Algorithms, the crossover and mutation operations may be considered to be
strongly simplified models of their natural counterparts. In many advanced EAs there may
be, however, operations which work entirely differently. Differential Evolution, for instance,
features a ternary search operation (see Chapter 33). Another example is [2912, 2914], where
the search operations directly modify the phenotypes (G = X) according to heuristics and
intelligent decisions.
Furthermore, although the concept of fitness13 in nature is controversial [2542], it is often
considered as an a posteriori measurement. It then defines the ratio between the numbers
of occurrences of a genotype in a population after and before selection or the number of
offspring an individual has in relation to the number of offspring of another individual. In
Evolutionary Algorithms, fitness is an a priori quantity denoting a value that determines
the expected number of instances of a genotype that should survive the selection process.
However, one could conclude that biological fitness is just an approximation of this a priori
quantity, which arose due to the hardness (if not impossibility) of measuring it directly.
My personal opinion (which may as well be wrong) is that the citation of Darwin here is
well motivated since there are close parallels between Darwinian evolution and Evolutionary
Algorithms. Nevertheless, natural and artificial evolution are still two different things and
phenomena observed in either of the two do not necessarily carry over to the other.

28.1.2.9 Formae and Implicit Parallelism

Let us review our introductory fish example in terms of forma analysis. Fish can, for instance,
be characterized by the properties “clever” and “strong”. For the sake of simplicity, let us
assume that both properties may be true or false for a single individual. They hence
define two formae each. A third property can be the color, for which many different possible
variations exist. Some of them may be good in terms of camouflage, others may be good in
terms of finding mating partners. Now a fish can be clever and strong at the same time, as
well as weak and green. Here, a living fish allows nature to evaluate the utility of at least
three different formae.
This issue has first been stated by Holland [1250] for Genetic Algorithms and is termed
implicit parallelism (or intrinsic parallelism). Since then, it has been studied by many dif-
ferent researchers [284, 1128, 1133, 2827]. If the search space and the genotype-phenotype
mapping are properly designed, implicit parallelism in conjunction with the crossover/re-
combination operations is one of the reasons why Evolutionary Algorithms are such a suc-
cessful class of optimization algorithms. The Schema Theorem discussed in Section 29.5.3
on page 342 gives a further explanation for this form of parallel evaluation.

28.1.3 Historical Classification

The family of Evolutionary Algorithms historically encompasses five members, as illustrated in Figure 28.2. We will only enumerate them here in short. In-depth discussions will follow in the corresponding chapters.

1. Genetic Algorithms (GAs) are introduced in Chapter 29 on page 325. GAs subsume
all Evolutionary Algorithms which have string-based search spaces G in which they
mainly perform combinatorial search.

2. The set of Evolutionary Algorithms which explore the space of real vectors X ⊆ Rn ,
often using more advanced mathematical operations, is called Evolution Strategies (ES,
see Chapter 30 on page 359). Closely related is Differential Evolution, a numerical
optimizer using a ternary search operation.
13 http://en.wikipedia.org/wiki/Fitness_(biology) [accessed 2008-08-10]
3. For Genetic Programming (GP), which will be elaborated on in Chapter 31 on page 379,
we can provide two definitions: On one hand, GP includes all Evolutionary Algorithms
that grow programs, algorithms, and the like. On the other hand, also all EAs that
evolve tree-shaped individuals are instances of Genetic Programming.

4. Learning Classifier Systems (LCS), discussed in Chapter 35 on page 457, are learning
approaches that assign output values to given input values. They often internally use
a Genetic Algorithm to find new rules for this mapping.

5. Evolutionary Programming (EP, see Chapter 32 on page 413) approaches usually fall
into the same category as Evolution Strategies, they optimize vectors of real numbers.
Historically, such approaches have also been used to derive algorithm-like structures
such as finite state machines.

[Figure 28.2 depicts the family of Evolutionary Algorithms: Genetic Algorithms, Evolutionary Programming, Evolution Strategy (with the closely related Differential Evolution), Learning Classifier Systems, and Genetic Programming (comprising, among others, GGGP, LGP, and SGP).]

Figure 28.2: The family of Evolutionary Algorithms.

The early research [718] in Genetic Algorithms (see Section 29.1 on page 325), Genetic
Programming (see Section 31.1.1 on page 379), and Evolutionary Programming (see ??
on page ??) dates back to the 1950s and 60s. Besides the pioneering work listed in these
sections, at least one other important early contribution should not go unmentioned here: the
Evolutionary Operation (EVOP) approach introduced by Box and Draper [376, 377] in the
late 1950s. The idea of EVOP was to apply a continuous and systematic scheme of small
changes in the control variables of a process. The effects of these modifications are evaluated
and the process is slowly shifted into the direction of improvement. This idea was never
realized as a computer algorithm, but Spendley et al. [2572] used it as the basis for their simplex
method, which then served as progenitor of the Downhill Simplex algorithm14 by Nelder and
Mead [2017] [718, 1719]. Satterthwaite’s REVOP [2407, 2408], a randomized Evolutionary
Operation approach, however, was rejected at this time [718].
Like in Section 1.3 on page 31, we here classified different algorithms according to their
semantics, in other words, corresponding to their specific search and problem spaces and
the search operations [634]. All five major approaches can be realized with the basic scheme
defined in Algorithm 28.1. To this simple structure, there exist many general improve-
ments and extensions. Often, these do not directly concern the search or problem spaces.
14 We discuss Nelder and Mead’s Downhill Simplex [2017] optimization method in Chapter 43 on page 501.
Then, they also can be applied to all members of the EA family alike. Later in this chap-
ter, we will discuss the major components of many of today’s most efficient Evolutionary
Algorithms [593]. Some of the distinctive features of these EAs are:
1. The population size and the number of populations used.
2. The method of selecting the individuals for reproduction.
3. The way the offspring is included into the population(s).

28.1.4 Populations in Evolutionary Algorithms


Evolutionary Algorithms try to simulate processes known from biological evolution in order
to solve optimization problems. Species in biology consist of many individuals which live
together in a population. The individuals in a population compete but also mate with
each other, generation for generation. In EAs, it is quite similar: the population pop ∈
(G × X)^ps is a set of ps individuals p ∈ G × X where each individual p possesses heredity
information (its genotype p.g) and a physical representation (the phenotype p.x).
There exist various ways in which an Evolutionary Algorithm can process its population.
Especially interesting is how the population pop(t + 1) of the next generation is formed
from (1) the individuals selected from pop(t) which are collected in the mating pool mate
and (2) the new offspring derived from them.

28.1.4.1 Extinctive/Generational EAs


If the new population only contains the offspring, we speak of extinctive selection [2015,
2478]. Extinctive selection can be compared with ecosystems of small single-cell organisms
(protozoa15 ) which reproduce in an asexual, fissiparous16 manner, i. e., by cell division. In this
case, of course, the elders will not be present in the next generation. Other comparisons can
partly be drawn to sexually reproducing octopi, where the female dies after protecting
the eggs until the larvae hatch, or to the black widow spider where the female devours
the male after the insemination. Especially in the area of Genetic Algorithms, extinctive
strategies are also known as generational algorithms.

Definition D28.3 (Generational). In Evolutionary Algorithms that are generational [2226],
the next generation will only contain the offspring of the current one and no parent
individuals will be preserved.

Extinctive Evolutionary Algorithms can further be divided into left and right selec-
tion [2987]. In left extinctive selections, the best individuals are not allowed to reproduce in
order to prevent premature convergence of the optimization process. Conversely, the worst
individuals are not permitted to breed in right extinctive selection schemes in order to reduce
the selective pressure since they would otherwise scatter the fitness too much [2987].

28.1.4.2 Preservative EAs


In algorithms that apply a preservative selection scheme, the population is a combination of
the previous population and its offspring [169, 1461, 2335, 2772]. The biological metaphor for
such algorithms is that the lifespan of many organisms exceeds a single generation. Hence,
parent and child individuals compete with each other for survival.
Even in preservative strategies, it is not granted that the best individuals will always
survive, unless truncation selection (as introduced in Section 28.4.2 on page 289) is applied.
In general, most selection methods are randomized. Even if they pick the best candidate
solutions with the highest probabilities, they may also select worse individuals.
15 http://en.wikipedia.org/wiki/Protozoa [accessed 2008-03-12]
16 http://en.wikipedia.org/wiki/Binary_fission [accessed 2008-03-12]
28.1.4.2.1 Steady-State Evolutionary Algorithms
Steady-state Evolutionary Algorithms [519, 2041, 2270, 2316, 2641], abbreviated by SSEA,
are preservative Evolutionary Algorithms which usually do not allow inferior individuals to
replace better ones in the population. Therefore, the binary search operator (crossover/re-
combination) is often applied exactly once per generation. The new offspring then replaces
the worst member of the population if and only if it has better fitness.
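The replacement policy described above can be sketched as follows (a minimal illustration, not the book's reference implementation; the helper names `mutate`, `crossover`, and `fitness` are placeholders, and fitness is subject to minimization as elsewhere in this book):

```python
import random

def steady_state_step(pop, fitness, mutate, crossover):
    """One steady-state step: create a single offspring via one recombination
    plus mutation, and let it replace the worst member of the population
    only if it is strictly better (fitness is minimized)."""
    a, b = random.sample(pop, 2)             # pick two parents
    child = mutate(crossover(a, b))          # crossover applied exactly once
    worst = max(pop, key=fitness)            # highest fitness = worst
    if fitness(child) < fitness(worst):      # inferior offspring are discarded
        pop[pop.index(worst)] = child
    return pop
```

In a steady-state EA this step is iterated many times; no generation boundary exists, which is exactly why inferior individuals never displace better ones.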
Steady-state Evolutionary Algorithms often produce better results than generational
EAs. Chafekar et al. [519], for example, introduce steady-state Evolutionary Algorithms
that are able to outperform generational NSGA-II (which you can find summarized in ??
on page ??) for some difficult problems. In experiments of Jones and Soule [1463] (primarily
focused on other issues), steady-state algorithms showed better convergence behavior in a
multi-modal landscape. Similar results have been reported by Chevreux [558] in the context
of molecule design optimization. The steady-state GENITOR has shown good behavior in
an early comparative study by Goldberg and Deb [1079]. On the other hand, with a badly
configured steady-state approach, we also run the risk of premature convergence. In our own
experiments on a real-world Vehicle Routing Problem discussed in Section 49.3 on page 536,
for instance, steady-state algorithms outperformed generational ones – but only if sharing
was applied [2912, 2914].

28.1.4.3 The µ-λ Notation


The µ-λ notation has been developed to describe the treatment of parent and offspring
individuals in Evolution Strategies (see Chapter 30) [169, 1243, 2437]. Now it is often used
in other Evolutionary Algorithms as well. Usually, this notation implies truncation selection,
i. e., the deterministic selection of the µ best individuals from a population. Then,
1. λ denotes the number of offspring (to be) created and
2. µ is the number of parent individuals, i. e., the size of the mating pool (mps = µ).
The population adaptation strategies listed below have partly been borrowed from the Ger-
man Wikipedia – The Free Encyclopedia [2938] site for Evolution Strategy17 :
28.1.4.3.1 (µ + λ)-EAs
Using the reproduction operations, λ offspring are created from the µ parent individuals.
From the joint set of offspring and parents, i. e., a temporary population of size ps =
λ+µ, only the µ fittest ones are kept and the λ worst ones are discarded [302, 1244]. (µ + λ)-
strategies are thus preservative. The naming (µ + λ) implies that only unary reproduction
operators are used. However, in practice, this part of the convention is often ignored [302].
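One (µ + λ) generation under truncation selection can be sketched like this (an illustrative Python fragment; `mutate` stands in for a unary search operation, and the deterministic parent cycling is a simplification, since real Evolution Strategies usually draw parents randomly):

```python
def mu_plus_lambda_step(parents, lam, mutate, fitness):
    """One (mu + lambda) generation: create lam offspring, then keep the
    mu best from the joint set of parents and offspring (fitness minimized).
    Parents survive whenever they are fitter than the offspring."""
    mu = len(parents)
    offspring = [mutate(parents[i % mu]) for i in range(lam)]
    joint = parents + offspring           # temporary population of size mu + lam
    joint.sort(key=fitness)               # truncation selection:
    return joint[:mu]                     # only the mu fittest are kept
```

A (µ, λ) step would differ only in sorting `offspring` alone and discarding `parents` entirely.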
28.1.4.3.2 (µ, λ)-EAs
For extinctive selection patterns, the (µ, λ)-notation is used, as introduced by Schwefel
[2436]. EAs named according to this pattern create λ ≥ µ child individuals from the µ
available genotypes. From these, they only keep the µ best individuals and discard the µ
parents as well as the λ−µ worst children [295, 2436]. Here, λ corresponds to the population
size ps.
Again, this notation stands for algorithms which employ no crossover operator but mu-
tation only. Yet, many works on (µ, λ) Evolution Strategies, still apply recombination [302].
28.1.4.3.3 (1 + 1)-EAs
The EA only keeps track of a single individual which is reproduced. The elder and the offspring
together form the population pop, i. e., ps = 2. From these two, only the better individual
will survive and enter the mating pool mate (mps = 1). This scheme basically is equivalent
to Hill Climbing introduced in Chapter 26 on page 229.
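The (1 + 1) scheme thus degenerates to the following loop (a sketch; `mutate` is assumed to be a unary search operation, and the offspring is accepted here whenever it is not worse, one common variant of the acceptance rule):

```python
def one_plus_one_ea(start, mutate, fitness, steps):
    """(1 + 1)-EA: population of parent plus one offspring (ps = 2); only
    the better of the two survives each step (fitness minimized).
    This is essentially Hill Climbing."""
    parent = start
    for _ in range(steps):
        child = mutate(parent)
        if fitness(child) <= fitness(parent):  # keep the (not worse) child
            parent = child
    return parent
```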
17 http://en.wikipedia.org/wiki/Evolutionsstrategie [accessed 2007-07-03]
28.1.4.3.4 (1, 1)-EAs

The EA creates a single offspring from a single parent. The offspring then replaces its parent.
This is equivalent to a random walk and hence, does not optimize very well, see Section 8.2
on page 114.

28.1.4.3.5 (µ + 1)-EAs

Here, the population contains µ individuals from which one is drawn randomly. This indi-
vidual is reproduced. Alternatively, two parents can be chosen and recombined [302, 2278].
From the joint set of its offspring and the current population, the least fit individual is
removed.

28.1.4.3.6 (µ/ρ, λ)-EAs

The notation (µ/ρ, λ) is a generalization of (µ, λ) used to signify the application of ρ-ary
reproduction operators. In other words, the parameter ρ is added to denote the number of
parent individuals of one offspring. Normally, only unary mutation operators (ρ = 1) are
used in Evolution Strategies, the family of EAs the µ-λ notation stems from. If (binary)
recombination is also applied (as normal in general Evolutionary Algorithms), ρ = 2 holds. A
special case of (µ/ρ, λ) algorithms is the (µ/µ, λ) Evolution Strategy [1843] which, basically,
is similar to an Estimation of Distribution Algorithm.

28.1.4.3.7 (µ/ρ + λ)-EAs

Analogously to (µ/ρ, λ)-Evolutionary Algorithms, the (µ/ρ + λ)-Evolution Strategies are


generalized (µ, λ) approaches where ρ denotes the number of parents of an offspring indi-
vidual.

28.1.4.3.8 (µ′ , λ′ (µ, λ)γ )-EAs

Geyer et al. [1044, 1045, 1046] have developed nested Evolution Strategies where λ′ off-
spring are created and isolated for γ generations from a population of the size µ′ . In each
of the γ generations, λ children are created from which the fittest µ are passed on to the
next generation. After the γ generations, the best individuals from each of the isolated
sub-populations are propagated back to the top-level population, i. e., selected. Then, the
cycle starts again with λ′ new child individuals. This nested Evolution Strategy can be
more efficient than the other approaches when applied to complex multimodal fitness envi-
ronments [1046, 2279].

28.1.4.4 Elitism

Definition D28.4 (Elitism). An elitist Evolutionary Algorithm [595, 711, 1691] ensures
that at least one copy of the best individual(s) of the current generation is propagated on
to the next generation.

The main advantage of elitism is that the algorithm is guaranteed to converge. This
means that once the basin of attraction of the global optimum has been discovered,
the Evolutionary Algorithm will converge to that optimum. On the other hand, the risk
of converging to a local optimum is also higher. Elitism is an additional feature of Global
Optimization algorithms – a special type of preservative strategy – which is often realized
by using a secondary population (called the archive) only containing the non-prevailed in-
dividuals. This population is updated at the end of each iteration. It should be noted that

Algorithm 28.3: X ←− elitistMOEA(OptProb, ps, as, mps)

Input: OptProb: a tuple (X, f , cmpf ) of the problem space X, the objective functions
       f , and a comparator function cmpf
Input: ps: the population size
Input: as: the archive size
Input: mps: the mating pool size
Data: t: the generation counter
Data: pop: the population
Data: archive: the archive of best individuals
Data: mate: the mating pool
Data: v: the fitness function resulting from the fitness assigning process
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found

 1 begin
 2   t ←− 1
 3   archive ←− ()
 4   pop(t = 1) ←− createPop(ps)
 5   continue ←− true
 6   while continue do
 7     pop(t) ←− performGPM(pop(t), gpm)
 8     pop(t) ←− computeObjectives(pop(t), f)
 9     archive ←− updateOptimalSetN(archive, pop, cmpf )
10     archive ←− pruneOptimalSet(archive, as)
11     if terminationCriterion then
12       continue ←− false
13     else
14       v ←− assignFitness(pop(t), cmpf )
15       mate ←− selection(pop, v, mps)
16       t ←− t + 1
17       pop(t) ←− reproducePop(mate, ps)
18   return extractPhenotypes(extractBest(archive, cmpf ))
elitism is usually independent of the fitness and concentrates on the true objective values of the individuals.
Such an archive-based elitism can also be combined with generational approaches. Algo-
rithm 28.3 extends Algorithm 28.2 on page 256 and the basic evolutionary cycle
given in Section 28.1.1 as follows:
1. The archive archive is the set of best individuals found by the algorithm. Initially, it is
the empty set ∅. Subsequently, it is updated with the function “updateOptimalSetN”
which inserts only the new, unprevailed elements from the population into it and also
removes archived individuals which become prevailed by those new optima. Algorithms
that realize such updating are defined in Section 28.6.1 on page 308.
2. If the archive becomes too large – it might theoretically contain uncountably many
individuals – “pruneOptimalSet” reduces it to a proper size, employing techniques like
clustering in order to preserve the element diversity. More about pruning can be found
in Section 28.6.3 on page 311.
3. You should also notice that both the fitness assignment and the selection process of
elitist Evolutionary Algorithms may take the archive as additional parameter. In
principle, such archive-based algorithms can also be used in non-elitist Evolutionary
Algorithms by simply replacing the parameter archive with ∅.
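The archive update of step 1 can be sketched as follows (a minimal Pareto-based variant of the `updateOptimalSetN` idea; objective vectors are minimized, and the book's freer prevalence comparators are replaced here by plain Pareto dominance for simplicity):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is nowhere worse and somewhere strictly better."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, pop):
    """Insert the non-dominated members of pop into the archive and remove
    archived entries that become dominated by the newcomers."""
    for cand in pop:
        if any(dominates(old, cand) for old in archive):
            continue                      # cand is dominated: skip it
        # drop archived individuals now dominated by the new optimum
        archive = [old for old in archive if not dominates(cand, old)]
        archive.append(cand)
    return archive
```

A size-bounded archive would additionally prune this set, e.g. by clustering, as step 2 describes.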

28.1.5 Configuration Parameters of Evolutionary Algorithms

Evolutionary Algorithms are very robust and efficient optimizers which can often also provide
good solutions for hard problems. However, their efficiency is strongly influenced by design
choices and configuration parameters. In Chapter 24, we gave some general rules-of-thumb
for the design of the representation used in an optimizer. Here we want to list the set of
configuration and setup choices available to the user of an EA.
[Figure omitted: a diagram grouping the configuration parameters of an Evolutionary Algorithm into the fitness assignment process, the selection algorithm, the search space and search operations, the genotype-phenotype mapping, the archive/pruning settings, and the basic parameters (population size, crossover and mutation rates).]

Figure 28.3: The configuration parameters of Evolutionary Algorithms.

Figure 28.3 illustrates these parameters. The performance and success of an evolutionary
optimization approach applied to a problem given by a set of objective functions f and a
problem space X is defined by
1. its basic parameter settings such as the population size ps or the crossover rate cr and
mutation rate mr,

2. whether it uses an archive archive of the best individuals found and, if so, the archive
size as and which pruning technology is used to prevent it from overflowing,

3. the fitness assignment process “assignFitness” and the selection algorithm “selection”,

4. the choice of the search space G and the search operations Op, and, finally

5. the genotype-phenotype mapping “gpm” connecting the search space and the problem
space.

In ??, we go more into detail on how to state the configuration of an optimization algorithm
in order to fully describe experiments and to make them reproducible.

28.2 Genotype-Phenotype Mappings

In Evolutionary Algorithms as well as in any other optimization method, the search space
G is not necessarily the same as the problem space X. In other words, the elements g
processed by the search operations searchOp ∈ Op may differ from the candidate solutions
x ∈ X which are evaluated by the objective functions f : X 7→ R. In such cases, a mapping
between G and X is necessary: the genotype-phenotype mapping (GPM). In Section 4.3 on
page 85 we gave a precise definition of the term GPM. Generally, we can distinguish four
types of genotype-phenotype mappings:

1. If G = X, the genotype-phenotype mapping is the identity mapping. The search


operations then directly work on the candidate solutions.

2. In many cases, direct mappings, i. e., simple functional transformations which directly
relate parts of the genotypes and phenotypes are used (see Section 28.2.1).

3. In generative mappings, the phenotypes are built step by step in a process directed by
the genotypes. Here, the genotypes serve as programs which direct the phenotypical
developments (see Section 28.2.2.1).

4. Finally, the fourth type of genotype-phenotype mappings additionally incorporates
feedback from the environment or the objective functions into the developmental process
(see Section 28.2.2.2).

Which kind of genotype-phenotype mapping should be used for a given optimization task
strongly depends on the type of the task at hand. For most small-scale and many large-
scale problems, direct mappings are completely sufficient. However, especially if large-scale,
possibly repetitive or recursive structures are to be created, generative or ontogenic mappings
can be the best choice [777, 2735].

28.2.1 Direct Mappings

Direct mappings are explicit functional relationships between the genotypes and the phe-
notypes [887]. Usually, each element of the phenotype is related to one element in the
genotype [2735] or to a group of genes. Also, the algorithmic complexity of these mappings
is usually not above O(n log n) where n is the length of a genotype. Based on the
work of Devert [777], we define explicit or direct mappings as follows:
Definition D28.5 (Explicit Representation). An explicit (or direct) representation is
a representation for which a computable function exists that takes as argument the size
of the genotype and gives an upper bound for the number of algorithm steps of the genotype-
phenotype mapping until the phenotype-building process has converged to a stable
phenotype.

The algorithm steps mentioned in Definition D28.5 can be, for example, the write operations
to the phenotype. In the direct mappings we discuss in the remainder of this section, usually
each element of a candidate solution is written to exactly once. If bit strings are transformed
to vectors of natural numbers, for example, each element of the numerical vector is set to
one value. The same goes when scaling real vectors: the elements of the resulting vectors are
written to once. In the following, we provide some examples for direct genotype-phenotype
mappings.

Example E28.9 (Binary Search Spaces: Gray Coding).


Bit string genomes are sometimes complemented with the application of Gray coding18 during
the genotype-phenotype mapping. If the value of a natural number changes by one, an
arbitrary number of bits may change in its binary representation: 8 has the binary
representation 1000 but 7 has 0111. In Gray code, only one bit changes: one possible
Gray code representation of 8 is 1100 and 7 then corresponds to 0100.
Using Gray code may thus be good to preserve locality (see Section 14.2) and to reduce
hidden biases [504] when dealing with bit-string encoded numbers in optimization. Collins
and Eaton [610] studied further encodings for Genetic Algorithms and found that their
E-code outperforms both Gray and direct binary coding in function optimization.
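The standard reflected binary Gray code and its inverse can be computed with simple bit operations (this is the common textbook construction, not necessarily the exact variant used in the cited studies):

```python
def to_gray(n):
    """Encode a natural number into reflected binary Gray code."""
    return n ^ (n >> 1)

def from_gray(g):
    """Decode a reflected binary Gray code back into a natural number
    by folding the bits from the most significant end downwards."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

With this encoding, 8 indeed maps to 1100 and 7 to 0100, so incrementing the decoded value flips exactly one bit of the genotype.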

Example E28.10 (Real Search Spaces: Normalization).


If bounded real-valued, continuous problem spaces X ⊆ Rn are investigated, it is possible
that the bounds of the dimensions differ significantly. Depending on the optimization algo-
rithm and search operations applied, this may deceive the optimization process to give more
emphasis to dimensions with large bounds while neglecting dimensions with smaller bounds.

Example E28.11 (Neglecting One Dimension of X).


Assume that a two-dimensional problem space X is searched for the best solution of an
optimization task. The bounds of the first dimension are x̆1 = −1 and x̂1 = 1 and the
bounds of the second dimension are x̆2 = −10^9 and x̂2 = 10^9. The only search operation
available adds random numbers normally distributed with expected value µ = 0 and
standard deviation σ = 0.01 to each dimension of the candidate vector. Then, a more or
less efficient search would only be possible in the first dimension.

A genotype-phenotype mapping which allows the search to operate on G = [0, 1]^n instead
of X = [x̆1, x̂1] × [x̆2, x̂2] × · · · × [x̆n, x̂n], where x̆i ∈ R and x̂i ∈ R are the lower and upper
bounds of the ith dimension of the problem space X, respectively, would prevent such pitfalls
from the beginning:

gpmscale(g) = ( x̆i + g[i] ∗ (x̂i − x̆i) ∀i ∈ 1 . . . n )    (28.1)
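Equation 28.1 translates directly into code (an illustrative sketch; representing the bounds as a list of (lower, upper) pairs per dimension is a convenience chosen here, not prescribed by the book):

```python
def gpm_scale(g, bounds):
    """Map a genotype g in [0, 1]^n into the problem space X by scaling
    each gene into the bounds of its dimension (Equation 28.1)."""
    return [lo + gi * (hi - lo) for gi, (lo, hi) in zip(g, bounds)]
```

With the bounds from Example E28.11, the search operations can now treat both dimensions uniformly on [0, 1] while the phenotypes still cover the full ranges.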

18 http://en.wikipedia.org/wiki/Gray_coding [accessed 2007-07-03]


28.2.2 Indirect Mappings

Different from direct mappings, the phenotype-building process in indirect mappings is
more complex [777]. The basic idea of such developmental approaches is that the phenotypes
are constructed from the genotypes in an iterative way, by stepwise applications of rules and
refinements. These refinements take into consideration some state information such as the
structure of the phenotype. The genotypes usually represent the rules and transformations
which have to be applied to the candidate solutions during the construction phase.
In nature, life begins with a single cell which divides19 time and again until a mature
individual is formed20 after the genetic information has been reproduced. The emergence
of a multi-cellular phenotype from its genotypic representation is called embryogenesis21
in biology. Many researchers have drawn their inspiration for indirect mappings from this
embryogenesis, from the developmental process which embryos undergo when developing to
a fully-functional being inside an egg or the womb of their mother.
Therefore, developmental mappings received colorful names such as artificial embryo-
geny [2601], artificial ontogeny (AO) [356], computational embryogeny [273, 1637], cellular
encoding [1143], morphogenesis [291, 1431] and artificial development [2735].
Natural embryogeny is a highly complex translation from hereditary information to phys-
ical structures. Among other things, the DNA, for instance, encodes the structural design
information of the human brain. As pointed out by Manos et al. [1829], there are only
about 30 thousand active genes in the human genome (2800 million amino acids) for over
100 trillion neural connections in our cerebrum. A huge manifold of information is hence
decoded from “data” which is of a much lower magnitude. This is possible because the same
genes can be reused in order to repeatedly create the same pattern. The layout of the light
receptors in the eye, for example, is always the same – just their wiring changes.
If we draw inspiration from such a process, we find that features such as gene reuse and
(phenotypic) modularity seem to be beneficial. Indeed, allowing for the repetition of the
same modules (cells, receptors, . . . ) encoded in the same way many times is one of the
reasons why natural embryogeny leads to the development of organisms whose number of
modules surpasses the number of genes in the DNA by far.
The inspiration we get from nature can thus lead to good ideas which, in turn, can lead
to better results in optimization. Yet, we would like to point out that being inspired by
nature does – by far – not mean to be like nature. The natural embryogenesis processes
are much more complicated than the indirect mappings applied in Evolutionary Algorithms.
The same goes for approaches which are inspired by chemical diffusion and similar processes.
Furthermore, many genotype-phenotype mappings based on gene reuse, modularity, and
feedback between the phenotypes and the environment are not related to and not inspired
by nature.
In the following, we divide indirect mappings into two categories. We distinguish GPMs
which solely use state information (such as the partly-constructed phenotype) and such that
additionally incorporate feedback from external information sources, the environment, into
the phenotype building process.

28.2.2.1 Generative Mappings: Phenotypic Interaction

In the following, we provide some examples for generative genotype-phenotype mappings.

28.2.2.1.1 Genetic Programming

19 http://en.wikipedia.org/wiki/Cell_division [accessed 2007-07-03]


20 As a matter of fact, cell division will continue until the individual dies. However, this is not important here.
21 http://en.wikipedia.org/wiki/Embryogenesis [accessed 2007-07-03]
Example E28.12 (Grammar-Guided Genetic Programming).
Genetic Programming, as we will discuss in Chapter 31, is the family of Evolutionary Algo-
rithms which deals with the evolutionary synthesis of programs. In grammar-guided Genetic
Programming, usually the grammar of the programming language in which these programs
are to be specified is given. Each gene in the genotypes of these approaches usually encodes
for the application of a single rule of the grammar, as is, for instance, the case in Gads
(see ?? on page ??) which uses an integer string genome where each integer corresponds to
the ID of a rule. This way, step by step, complex sentences can be built by unfolding the
genotype. In Grammatical Evolution (??), integer strings with genes identifying grammati-
cal productions are used as well. Depending on the un-expanded variables in the phenotype,
these genes may have different meanings. Applying the rules identified by them may intro-
duce new unexpanded variables into the phenotype. Hence, the runtime of such a mapping
is not necessarily known in advance.
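The rule-by-rule unfolding described above can be sketched in the style of Grammatical Evolution (a toy grammar and a simplified modulo mapping of genes to productions; real systems use richer grammars and wrapping rules):

```python
# Toy grammar: each nonterminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]],
}

def decode(genome, symbol="<expr>", max_steps=100):
    """Unfold the start symbol into a sentence: each gene selects a
    production (gene modulo rule count) for the leftmost nonterminal.
    The number of steps needed is not known in advance, hence the cap."""
    sentence, idx = [symbol], 0
    for _ in range(max_steps):
        nts = [i for i, s in enumerate(sentence) if s in GRAMMAR]
        if not nts:
            break                              # fully expanded phenotype
        i = nts[0]
        rules = GRAMMAR[sentence[i]]
        choice = genome[idx % len(genome)] % len(rules)
        idx += 1                               # genes are consumed in order
        sentence[i:i + 1] = rules[choice]
    return " ".join(sentence)
```

Note how the meaning of a gene depends on which nonterminal happens to be leftmost when it is read, exactly the context dependence mentioned above.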

Example E28.13 (Graph-creating Trees).


In Standard Genetic Programming approaches, the genotypes are trees. Trees are very useful
to express programs and programs can be used to create many different structures. In a
genotype-phenotype mapping, the programs may, for example, be executed to build graphs
(as in Luke and Spector’s Edge Encoding [1788] discussed here in ?? on page ??) or electrical
circuits [1608]. In the latter case, the structure of the phenotype strongly depends on how
and how often given instructions or automatically defined functions in the genotypes are
called.

28.2.2.2 Ontogenic Mappings: Environmental Interaction

Full ontogenic (or developmental [887]) mappings even go a step farther than just utilizing
state information during the transformation of the genotypes to phenotypes [777]. This
additional information can stem from the objective functions. If simulations are involved
in the computation of the objective functions, the mapping may access the full simulation
information as well.

Example E28.14 (Developmental Wing Construction).


The phenotype could, for instance, be a set of points describing the surface of an airplane’s
wing. The genotype could be a real vector holding the weights of an artificial neural network.
The artificial neural network may then represent rules for moving the surface points as feed-
back to the air pressure on them given by a simulation of the wing. These rules can describe
a distinct wing shape which develops according to the feedback from the environment.

Example E28.15 (Developmental Termite Nest Construction).


Estévez and Lipson [887] evolved rules for the construction of a termite nest. The phenotype
is the nest which consists of multiple cells. The fitness of the cell depends on how close the
temperature in each cell comes to a given target temperature. The rules which were evolved
in Estévez and Lipson’s work are directly applied to the phenotypes in the simulations.
They decide whether and where cells are added to or removed from the nest, based on the
temperature and density of the cells in the simulation.
28.3 Fitness Assignment

28.3.1 Introduction

In multi-objective optimization, each candidate solution p.x is characterized by a vector


of objective values f (p.x). The early Evolutionary Algorithms, however, were intended to
perform single-objective optimization, i. e., were capable of finding solutions for a single
objective function alone. Historically, many algorithms for survival and mating selection
base their decision on such a scalar value.
A fitness assignment process, as defined in Definition D5.1 on page 94 in Section 5.2 is
thus used to transform the information given by the vector of objective values f (p.x) for
the phenotype p.x ∈ X of an individual p ∈ G × X to a scalar fitness value v(p). This
assignment process takes place in each generation of the Evolutionary Algorithm and, as
stated in Section 5.2, may thus change in each iteration.
Including a fitness assignment step is not only beneficial in a MOEA, but can also
prove useful in a single-objective scenario: The fitness assigned to an individual may not
just reflect the absolute utility of an individual, but its suitability relative to the other
individuals in the population pop. Then, the fitness function has the character of a heuristic
function (see Definition D1.14 on page 34).
Besides such ranking data, fitness can also incorporate density/niching information. This
way, not only the quality of a candidate solution is considered, but also the overall diversity
of the population. Therefore, either the phenotypes p.x ∈ X or the genotypes p.g ∈ G may
be analyzed and compared. This can improve the chance of finding the global optima as
well as the performance of the optimization algorithm significantly. If many individuals in
the population occupy the same rank or do not dominate each other, for instance, such
information will be very helpful.
The fitness v(p.x) thus may not only depend on the candidate solution p.x itself, but
on the whole population pop of the Evolutionary Algorithm (and on the archive archive of
optimal elements, if available). In practical realizations, the fitness values are often stored
in a special member variable in the individual records. Therefore, v(p) can be considered as
a mapping that returns the value of such a variable which has previously been stored there
by a fitness assignment process assignFitness(pop, cmpf ).

Definition D28.6 (Fitness Assignment Process). A fitness assignment process “assignFitness”


uses a comparison criterion cmpf (see Definition D3.20 on page 77) to create a function
v : G × X 7→ R+ (see Definition D5.1 on page 94) which relates a scalar fitness value to each
individual p in the population pop as stated in Equation 28.2 (and in the archive archive, if
an archive is available, as stated in Equation 28.3).
v = assignFitness(pop, cmpf ) ⇒ v(p) ∈ R+ ∀p ∈ pop (28.2)
v = assignFitnessArch(pop, archive, cmpf ) ⇒ v(p) ∈ R+ ∀p ∈ pop ∪ archive (28.3)

Listing 55.14 on page 718 provides a Java interface which enables us to implement different
fitness assignment processes according to Definition D28.6. In the context of this book, we
generally minimize fitness values, i. e., the lower the fitness of a candidate solution, the
better. If no density or sharing information is included and the fitness only reflects the
strict partial order introduced by the Pareto (see Section 3.3.5) or a prevalence relation (see
Section 3.5.2), Equation 28.4 will hold. If further information such as diversity metrics are
included in the fitness, this equation may no longer be applicable.
p1 .x ≺ p2 .x ⇒ v(p1 ) < v(p2 ) ∀p1 , p2 ∈ pop ∪ archive (28.4)
28.3.2 Weighted Sum Fitness Assignment
The most primitive fitness assignment strategy would be to just use a weighted sum of
the objective values. This approach is very static and brings about the same problems as
weighted sum-based approach for defining what an optimum is discussed in Section 3.3.3 on
page 62. It makes no use of Pareto or prevalence relations. For computing the weighted sum
of the different objective values of a candidate solution, we reuse Equation 3.18 on page 62
from the weighted sum optimum definition. The weights have to be chosen in a way that
ensures that v(p) ∈ R+ holds for all individuals p.
v(p) = assignFitnessWeightedSum(pop) ⇔ (∀p ∈ pop ⇒ v(p) = ws(p.x)) (28.5)
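As a sketch, Equation 28.5 reduces to a dot product of the weight vector and the objective values of each individual (illustrative Python; individuals are represented here simply as hashable tuples, an assumption made for this example):

```python
def assign_fitness_weighted_sum(pop, objectives, weights):
    """Return a fitness map assigning to each individual the weighted sum
    of its objective values; the weights must be chosen so that the
    fitness stays non-negative (fitness is minimized)."""
    return {p: sum(w * f(p) for w, f in zip(weights, objectives))
            for p in pop}
```

Being a static scalarization, this method cannot find solutions on concave parts of a Pareto frontier, which is one of the problems discussed in Section 3.3.3.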

28.3.3 Pareto Ranking


Another simple method for computing fitness values is to let them directly reflect the Pareto
domination (or prevalence) relation. There are two ways for doing this:
1. We can assign to each individual p1 a value inversely proportional to the number of
other individuals p2 it prevails.
v1(p1) ≡ 1 / (|{p2 ∈ pop : p1.x ≺ p2.x}| + 1)    (28.6)

2. or we can assign the number of other individuals p3 it is dominated by to it.


v2(p1) ≡ |{p3 ∈ pop : p3.x ≺ p1.x}|    (28.7)

In case 1, individuals that dominate many others will here receive a lower fitness value
than those which are prevailed by many. Since fitness is subject to minimization they will
strongly be preferred. Although this idea looks appealing at first glance, it has a decisive
drawback which can also be observed in Example E28.16: It promotes individuals that reside
in crowded region of the problem space and underrates those in sparsely explored areas. If an
individual has many neighbors, it can potentially prevail many other individuals. However,
a non-prevailed candidate solution in a sparsely explored region of the search space may not
prevail any other individual and hence, receive fitness 1, the worst possible fitness under v1 .
Thus, the fitness assignment process achieves exactly the opposite of what we want.
Instead of exploring the problem space and delivering a wide scan of the frontier of best
possible candidate solutions, it will focus all effort on a small set of individuals. We will
only obtain a subset of the best solutions and it is even possible that this fitness assignment
method leads to premature convergence to a local optimum.
A much better second approach for fitness assignment was first proposed by Goldberg
[1075]. Here, the idea is to assign to each candidate solution the number of individuals
it is prevailed by [1125, 1783]. This way, the previously mentioned negative effects will not
occur and the exploration pressure is applied to a much wider area of the Pareto frontier.
This so-called Pareto ranking can be performed by first removing all non-prevailed (non-
dominated) individuals from the population and assigning the rank 0 to them (since they
are prevailed by no individual and, according to Equation 28.7, should receive v2 = 0). Then, the
same is performed with the rest of the population. The individuals only dominated by those
on rank 0 (now non-dominated) will be removed and get the rank 1. This is repeated until
all candidate solutions have a proper fitness assigned to them. Since we follow the idea of the
freer prevalence comparators instead of Pareto dominance relations, we will synonymously
refer to this approach as Prevalence ranking and define Algorithm 28.4 which is implemented
in Java in Listing 56.16 on page 759. It should be noted that this simple algorithm has
a complexity of O(ps² ∗ |f |). A more efficient approach with complexity O(ps ∗ log^(|f |−1) ps)
has been introduced by Jensen [1441] based on a divide-and-conquer strategy.
276 28 EVOLUTIONARY ALGORITHMS

Algorithm 28.4: v2 ←− assignFitnessParetoRank(pop, cmpf )


Input: pop: the population to assign fitness values to
Input: cmpf : the prevalence comparator defining the prevalence relation
Data: i, j, cnt: the counter variables
Output: v2 : a fitness function reflecting the Prevalence ranking
1 begin
2 for i ←− len(pop) − 1 down to 0 do
3 cnt ←− 0
4 p ←− pop[i]
5 for j ←− len(pop) − 1 down to 0 do
// Check whether cmpf (pop[j].x, p.x) < 0
6 if (j ≠ i) ∧ (pop[j].x ≺ p.x) then cnt ←− cnt + 1
7 v2 (p) ←− cnt
8 return v2
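Algorithm 28.4 can be sketched in Java as follows. This is a simplified, hypothetical sketch (not the book's Listing 56.16): it uses plain Pareto dominance on objectives subject to minimization instead of a general prevalence comparator cmpf.

```java
/** Sketch of Prevalence/Pareto ranking (Algorithm 28.4) for minimized objectives. */
public class ParetoRank {

  /** true iff a dominates b: a is no worse in every objective and better in at least one. */
  static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int k = 0; k < a.length; k++) {
      if (a[k] > b[k]) return false;   // worse in one objective: no domination
      if (a[k] < b[k]) strictlyBetter = true;
    }
    return strictlyBetter;
  }

  /** v2(p) = number of individuals in pop dominating p; 0 means non-dominated. */
  static int[] assignFitnessParetoRank(double[][] pop) {
    int[] v2 = new int[pop.length];
    for (int i = 0; i < pop.length; i++) {
      for (int j = 0; j < pop.length; j++) {
        if (j != i && dominates(pop[j], pop[i])) {
          v2[i]++;
        }
      }
    }
    return v2;
  }
}
```

The double loop makes the quadratic complexity mentioned above directly visible.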

Example E28.16 (Pareto Ranking Fitness Assignment).


Figure 28.4 and Table 28.1 illustrate the Pareto relations in a population of 15 individuals
and their corresponding objective values f1 and f2 , both subject to minimization.

[Figure: plot of the 15 individuals in the objective space spanned by f1 (x-axis, 0 to 10) and f2 (y-axis, 0 to 10), both subject to minimization; the individuals 1, 2, 3, and 4 lie on the Pareto frontier.]

Figure 28.4: An example scenario for Pareto ranking.


p.x dominates is dominated by v1 (p) v2 (p)
1 {5, 6, 8, 9, 14, 15} ∅ 1/7 0
2 {6, 7, 8, 9, 10, 11, 13, 14, 15} ∅ 1/10 0
3 {12, 13, 14, 15} ∅ 1/5 0
4 ∅ ∅ 1 0
5 {8, 15} {1} 1/3 1
6 {8, 9, 14, 15} {1, 2} 1/5 2
7 {9, 10, 11, 14, 15} {2} 1/6 1
8 {15} {1, 2, 5, 6} 1/2 4
9 {14, 15} {1, 2, 6, 7} 1/3 4
10 {14, 15} {2, 7} 1/3 2
11 {14, 15} {2, 7} 1/3 2
12 {13, 14, 15} {3} 1/4 1
13 {15} {2, 3, 12} 1/2 3
14 {15} {1, 2, 3, 6, 7, 9, 10, 11, 12} 1/2 9
15 ∅ {1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14} 1 13

Table 28.1: The Pareto domination relation of the individuals illustrated in Figure 28.4.

We computed the fitness values for both approaches for the individuals in the population
given in Figure 28.4 and noted them in Table 28.1. In column v1 (p), we provide the fitness
values which are inversely proportional to the number of individuals dominated by p.
A good example for the problem that approach 1 prefers crowded regions in the search
space while ignoring interesting candidate solutions in less populated regions are the four
non-prevailed individuals {1, 2, 3, 4} on the Pareto frontier. The best fitness is assigned to
the element 2, followed by individual 1. Although individual 7 is dominated (by 2), its
fitness is better than the fitness of the non-dominated element 3.
The candidate solution 4 gets the worst possible fitness 1, since it prevails no other
element. Its chances for reproduction are as low as those of individual 15, which
is dominated by all other elements except 4. Hence, both candidate solutions will most
probably not be selected and vanish in the next generation. The loss of individual 4 would
greatly decrease the diversity and further increase the focus on the crowded area near 1 and
2.
Column v2 (p) in Table 28.1 shows that all four non-prevailed individuals now have the
best possible fitness 0 if the method given as Algorithm 28.4 is applied. Therefore, the focus
is no longer centered on the overcrowded regions.
However, the region around the individuals 1 and 2 has probably already extensively
been explored, whereas the surrounding of candidate solution 4 is rather unknown. More
individuals are present in the top-left area of Figure 28.4 and at least two individuals with
the same fitness as 3 and 4 reside there. Thus, chances are still that more individuals
from this area will be selected than from the bottom-right area. A better approach to
fitness assignment should incorporate such information and put a bit more pressure into the
direction of individual 4, in order to make the Evolutionary Algorithm investigate this area
more thoroughly. Algorithm 28.4 cannot fulfill this hope.

28.3.4 Sharing Functions

Previously, we have mentioned that the drawback of Pareto ranking is that it does not
incorporate any information about whether the candidate solutions in the population reside
closely to each other or in regions of the problem space which are only sparsely covered
by individuals. Sharing, as a method for including such diversity information into the
fitness assignment process, was introduced by Holland [1250] and later refined by Deb [735],
Goldberg and Richardson [1080], and Deb and Goldberg [742] and further investigated
by [1899, 2063, 2391].

Definition D28.7 (Sharing Function). A sharing function Sh : R+ 7→ [0, 1] is a function
that maps the distance d = dist(p1 , p2 ) between two individuals p1 and p2 to the real interval
[0, 1]. The value of Shσ is 1 if d = 0 and 0 if d exceeds a specific constant value σ.

Shσ (d = dist(p1 , p2 )) = { 1 if d ≤ 0; ∈ [0, 1] if 0 < d < σ; 0 otherwise } (28.8)

Sharing functions can be employed in many different ways and are used by a variety of fitness
assignment processes [735, 1080]. Typically, the simple triangular function “Sh tri” [1268]
or one of its either convex (Sh cvexξ ) or concave (Sh ccavξ ) pendants with the power
ξ ∈ R+ , ξ > 0 are applied. Besides using different powers of the distance-σ-ratio, another
approach is the exponential sharing method “Sh exp”.

Sh triσ (d) = { 1 − d/σ if 0 ≤ d < σ; 0 otherwise } (28.9)

Sh cvexσ,ξ (d) = { (1 − d/σ)^ξ if 0 ≤ d < σ; 0 otherwise } (28.10)

Sh ccavσ,ξ (d) = { 1 − (d/σ)^ξ if 0 ≤ d < σ; 0 otherwise } (28.11)

Sh expσ,ξ (d) = { 1 if d ≤ 0; 0 if d ≥ σ; (e^(−ξd/σ) − e^(−ξ)) / (1 − e^(−ξ)) otherwise } (28.12)

For sharing, the distance of the individuals in the search space G as well as their distance
in the problem space X or the objective space Y may be used. If the candidate solutions
are real vectors in the Rn , we could use the Euclidean distance of the phenotypes of the
individuals directly, i. e., compute dist eucl(p1 .x, p2 .x). However, this only makes sense in
lower dimensional spaces (small values of n), since the Euclidean distance in Rn does not
convey much information for high n.
In Genetic Algorithms, where the search space is the set of all bit strings G = Bn
of the length n, another suitable approach would be to use the Hamming distance22
dist ham(p1 .g, p2 .g) of the genotypes. The work of Deb [735] indicates that phenotypical
sharing will often be superior to genotypical sharing.

Definition D28.8 (Niche Count). The niche count m(p, P ) [738, 1899] of an individual
p ∈ P is the sum of its sharing values with all individuals in a list P .

∀p ∈ P ⇒ m(p, P ) = Σ_{p′ ∈ P} Shσ (dist(p, p′ )) (28.13)

The niche count “m” is always greater than or equal to 1, since p ∈ P and, hence,
Shσ (dist(p, p)) = 1 is computed and added up at least once. The original sharing ap-
proach was developed for fitness assignment in single-objective optimization where only one
objective function f was subject to maximization. In this case, the objective value was sim-
ply divided by the niche count, punishing solutions in crowded regions [1899]. The goal of
22 See ?? on page ?? for more information on the Hamming distance.
sharing was to distribute the population over a number of different peaks in the fitness land-
scape, with each peak receiving a fraction of the population proportional to its height [1268].
The result of dividing the fitness by the niche count strongly depends on the height differences
of the peaks and thus, on the complexity class23 of f. On f1 ∈ O(x), for instance,
the influence of “m” is much bigger than on f2 ∈ O(e^x).
By multiplying predetermined fitness values v′ with the niche count “m”, we can use this
approach for fitness minimization in conjunction with a variety of other fitness
assignment processes, but we also inherit its shortcomings:

v(p) = v′ (p) ∗ m(p, pop) , v′ ≡ assignFitness(pop, cmpf ) (28.14)

Sharing was traditionally combined with fitness proportionate, i. e., roulette wheel selection24.
Oei et al. [2063] have shown that if the sharing function is computed using the
parental individuals of the “old” population and then naïvely combined with the more sophisticated
tournament selection25, the resulting behavior of the Evolutionary Algorithm
may be chaotic. They suggested to use the partially filled “new” population to circumvent
this problem. The layout of Evolutionary Algorithms, as defined in this book, bases the
fitness computation on the whole set of “new” individuals and assumes that their objective
values have already been completely determined. In other words, such issues simply
do not exist in multi-objective Evolutionary Algorithms as introduced here and the chaotic
behavior does not occur.
For computing the niche count “m”, O(n²) comparisons are needed. According to
Goldberg et al. [1085], sampling the population can be sufficient to approximate “m” in
order to avoid this quadratic complexity.

28.3.5 Variety Preserving Fitness Assignment

Using sharing and the niche counts naïvely leads to more or less unpredictable effects. Of
course, it promotes solutions located in sparsely populated niches, but how much their fitness
will be improved is rather unclear. Using distance measures which are not normalized can
lead to strange effects, too. Imagine two objective functions f1 and f2 . If the values of f1
span from 0 to 1 for the individuals in the population whereas those of f2 range from 0 to
10 000, the components of f1 will most often be negligible in the Euclidean distance of two
individuals in the objective space Y. Another problem is that the effect of simple sharing
on the pressure into the direction of the Pareto frontier is not obvious either or depends on
the sharing approach applied. Some methods simply add a niche count to the Pareto rank,
which may cause non-dominated individuals to have a worse fitness than any others in the
population. Other approaches scale the niche count into the interval [0, 1) before adding it,
which not only ensures that non-dominated individuals have the best fitness but also leaves
the relations between individuals at different ranks intact; this, however, does not promote
variety very much.
In some of our experiments [2888, 2912, 2914], we applied a Variety Preserving Fitness
Assignment, an approach based on Pareto ranking using prevalence comparators and well-
dosed sharing. We developed it in order to mitigate the previously mentioned side effects
and balance the evolutionary pressure between optimizing the objective functions and max-
imizing the variety inside the population. In the following, we will describe it in detail and
give its specification in Algorithm 28.5.
Before this fitness assignment process can begin, it is required that all individuals with
infinite objective values must be removed from the population pop. If such a candidate
solution is optimal, i. e., if it has negative infinitely large objectives in a minimization process,
for instance, it should receive fitness zero, since fitness is subject to minimization. If the
23 SeeChapter 12 on page 143 for a detailed introduction into complexity and the O-notation.
24 Roulette wheel selection is discussed in Section 28.4.3 on page 290.
25 You can find an outline of tournament selection in Section 28.4.4 on page 296.

Algorithm 28.5: v ←− assignFitnessVariety(pop, cmpf )


Input: pop: the population
Input: cmpf : the comparator function
Input: [implicit] f : the set of objective functions
Data: . . .: sorry, no space here, we’ll discuss this in the text
Output: v: the fitness function
1 begin
/* If needed: Remove all elements with infinite objective
values from pop and assign fitness 0 or
len(pop) + √(len(pop)) + 1 to them. Then compute the
prevalence ranks. */
2 ranks ←− createList(len(pop) , 0)
3 maxRank ←− 0
4 for i ←− len(pop) − 1 down to 0 do
5 for j ←− i − 1 down to 0 do
6 k ←− cmpf (pop[i].x, pop[j].x)
7 if k < 0 then ranks[j] ←− ranks[j] + 1
8 else if k > 0 then ranks[i] ←− ranks[i] + 1
9 if ranks[i] > maxRank then maxRank ←− ranks[i]
// determine the ranges of the objectives
10 mins ←− createList(|f | , +∞)
11 maxs ←− createList(|f | , −∞)
12 foreach p ∈ pop do
13 for i ←− |f | down to 1 do
14 if fi (p.x) < mins[i − 1] then mins[i − 1] ←− fi (p.x)
15 if fi (p.x) > maxs[i − 1] then maxs[i − 1] ←− fi (p.x)
16 rangeScales ←− createList(|f | , 1)
17 for i ←− |f | − 1 down to 0 do
18 if maxs[i] > mins[i] then rangeScales[i] ←− 1/ (maxs[i] − mins[i])
// Base a sharing value on the scaled Euclidean distance of
all elements
19 shares ←− createList(len(pop) , 0)
20 minShare ←− +∞
21 maxShare ←− −∞
22 for i ←− len(pop) − 1 down to 0 do
23 curShare ←− shares[i]
24 for j ←− i − 1 down to 0 do
25 dist ←− 0
26 for k ←− |f | down to 1 do
27 dist ←− dist + [(fk (pop[i].x) − fk (pop[j].x)) ∗ rangeScales[k − 1]]^2
28 s ←− Sh exp√|f |,16 (√dist)
29 curShare ←− curShare + s
30 shares[j] ←− shares[j] + s
31 shares[i] ←− curShare
32 if curShare < minShare then minShare ←− curShare
33 if curShare > maxShare then maxShare ←− curShare
// Finally, compute the fitness values
34 scale ←− 1/ (maxShare − minShare) if maxShare > minShare, 1 otherwise
35 for i ←− len(pop) − 1 down to 0 do
36 if ranks[i] > 0 then
37 v(pop[i]) ←− ranks[i] + √maxRank ∗ scale ∗ (shares[i] − minShare)
38 else v(pop[i]) ←− scale ∗ (shares[i] − minShare)
individual is infeasible, on the other hand, its fitness should be set to len(pop) + √(len(pop)) + 1,
which is 1 larger than every other fitness value that may be assigned by Algorithm 28.5.
In lines 2 to 9, a list ranks is created which is used to efficiently compute the Pareto rank
of every candidate solution in the population. Actually, the word prevalence rank would be
more precise in this case, since we use prevalence comparisons as introduced in Section 3.5.2.
Therefore, Variety Preserving is not limited to Pareto optimization but may also incorporate
External Decision Makers (Section 3.5.1) or the method of inequalities (Section 3.4.5).
The highest rank encountered in the population is stored in the variable maxRank.
This value may be zero if the population contains only non-prevailed elements. The lowest
rank will always be zero since the prevalence comparators cmpf define order relations which
are non-circular by definition.26 maxRank is used to determine the maximum penalty for
solutions in an overly crowded region of the search space later on.
From line 10 to 18, the maximum and the minimum values that each objective function
takes on when applied to the individuals in the population are obtained. These values are
used to store the inverse of their ranges in the array rangeScales, which will be used to
scale all distances in each dimension (objective) of the individuals into the interval [0, 1].
There are |f | objective functions in f and, hence, the maximum Euclidean distance between
two candidate solutions in the objective space (scaled to normalization) becomes √|f |. It
occurs if all the distances in the single dimensions are 1.
Between line 19 and 33, the scaled distance from every individual to every other candidate
solution in the objective space is determined. This distance is used to aggregate share values
(in the array shares). Therefore, again two nested loops are needed (lines 22 and 24). The
distance components of two individuals pop[i] and pop[j] are scaled and summarized in a
variable dist in line 27. The Euclidean distance between them is √dist, which is used to
determine a sharing value in line 28. We have decided for exponential sharing, as
introduced in Equation 28.12 on page 278, with power ξ = 16 and σ = √|f |. For every individual,
we sum up all the shares (see line 30). While doing so, the minimum and maximum total
shares are stored in the variables minShare and maxShare in lines 32 and 33.
These variables are used to scale all sharing values again into the interval [0, 1] (line 34),
so the individual in the most crowded region always has a total share of 1 and the most
remote individual always has a share of 0. Basically, two things are now known about the
individuals in pop:
1. their Pareto/Prevalence ranks, stored in the array ranks, giving information about
their relative quality according to the objective values and
2. their sharing values, held in shares, denoting how densely crowded the area around
them is.
With this information, the final fitness value of an individual p is determined as follows: If p
is non-prevailed, i. e., its rank is zero, its fitness is its scaled total share (line 38). Otherwise,
the square root of the maximum rank, √maxRank, is multiplied with the scaled share and
added to its rank (line 37). By doing so, the supremacy of non-prevailed individuals in
the population is preserved. Yet, these individuals can compete with each other based on
the crowdedness of their location in the objective space. All other candidate solutions may
degenerate in rank, but at most by the square root of the worst rank.

Example E28.17 (Variety Preserving Fitness Assignment).


Let us now apply Variety Preserving to the examples for Pareto ranking from Section 28.3.3.
In Table 28.2, we again list all the candidate solutions from Figure 28.4 on page 276, this
time with their objective values obtained with f1 and f2 corresponding to their coordinates
in the diagram. In the third column, you can find the Pareto ranks of the individuals as it
has been listed in Table 28.1 on page 277. The columns share/u and share/s correspond to
the total sharing sums of the individuals, unscaled and scaled into [0, 1].
26 In all order relations imposed on finite sets there is always at least one “smallest” element. See Section 51.8 on page 645 for more information.



[Figure: 3-D surface of the sharing potential over the objective space spanned by f1 and f2, with a steep spike at the location of each of the 15 individuals; the Pareto frontier runs along the edge with low f1 and f2 values.]
Figure 28.5: The sharing potential in the Variety Preserving Example E28.17.
x f1 f2 rank share/u share/s v(x)
1 1 7 0 0.71 0.779 0.779
2 2 4 0 0.239 0.246 0.246
3 6 2 0 0.201 0.202 0.202
4 10 1 0 0.022 0 0
5 1 8 1 0.622 0.679 3.446
6 2 7 2 0.906 1 5.606
7 3 5 1 0.531 0.576 3.077
8 2 9 4 0.314 0.33 5.191
9 3 7 4 0.719 0.789 6.845
10 4 6 2 0.592 0.645 4.325
11 5 5 2 0.363 0.386 3.39
12 7 3 1 0.346 0.366 2.321
13 8 4 3 0.217 0.221 3.797
14 7 7 9 0.094 0.081 9.292
15 9 9 13 0.025 0.004 13.01

Table 28.2: An example for Variety Preserving based on Figure 28.4.

But first things first; as already mentioned, we know the Pareto ranks of the candidate
solutions from Table 28.1, so the next step is to determine the ranges of values the objective
functions take on for the example population. These can again easily be found out from
Figure 28.4. f1 spans from 1 to 10, which leads to rangeScale[0] = 1/9. rangeScale[1] = 1/8
since the maximum of f2 is 9 and its minimum is 1. With this, we now can compute the
(dimensionally scaled) distances amongst the candidate solutions in the objective space,
i. e., the values of √dist in Algorithm 28.5, as well as the corresponding values of
the sharing function Sh exp√|f |,16 (√dist). We noted these in Table 28.3, using the upper
triangle of the table for the distances and the lower triangle for the shares.

Upper triangle: distances. Lower triangle: corresponding share values.


1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 0.391 0.836 1.25 0.125 0.111 0.334 0.274 0.222 0.356 0.51 0.833 0.863 0.667 0.923
2 0.012 0.51 0.965 0.512 0.375 0.167 0.625 0.391 0.334 0.356 0.569 0.667 0.67 0.998
3 7.7E-5 0.003 0.462 0.933 0.767 0.502 0.981 0.708 0.547 0.391 0.167 0.334 0.635 0.936
4 6.1E-7 1.8E-5 0.005 1.329 1.163 0.925 1.338 1.08 0.914 0.747 0.417 0.436 0.821 1.006
5 0.243 0.003 2.6E-5 1.8E-7 0.167 0.436 0.167 0.255 0.417 0.582 0.914 0.925 0.678 0.898
6 0.284 0.014 1.7E-4 1.8E-6 0.151 0.274 0.25 0.111 0.255 0.417 0.747 0.765 0.556 0.817
7 0.023 0.151 0.003 2.9E-5 0.007 0.045 0.512 0.25 0.167 0.222 0.51 0.569 0.51 0.833
8 0.045 0.001 1.5E-5 1.5E-7 0.151 0.059 0.003 0.274 0.436 0.601 0.933 0.914 0.609 0.778
9 0.081 0.012 3.3E-4 4.8E-6 0.056 0.284 0.059 0.045 0.167 0.334 0.669 0.67 0.444 0.712
10 0.018 0.023 0.002 3.2E-5 0.009 0.056 0.151 0.007 0.151 0.167 0.502 0.51 0.356 0.67
11 0.003 0.018 0.012 2.1E-4 0.001 0.009 0.081 0.001 0.023 0.151 0.334 0.356 0.334 0.669
12 8E-5 0.002 0.151 0.009 3.2E-5 2.1E-4 0.003 2.6E-5 0.001 0.003 0.023 0.167 0.5 0.782
13 5.7E-5 0.001 0.023 0.007 2.9E-5 1.7E-4 0.002 3.2E-5 0.001 0.003 0.018 0.151 0.391 0.635
14 0.001 0.001 0.001 9.3E-5 4.6E-4 0.002 0.003 0.001 0.007 0.018 0.023 0.003 0.012 0.334
15 2.9E-5 1.2E-5 2.5E-5 1.1E-5 3.9E-5 9.7E-5 8E-5 1.5E-4 3.1E-4 0.001 0.001 1.4E-4 0.001 0.023

Table 28.3: The distance and sharing matrix of the example from Table 28.2.

The value of the sharing function can be imagined as a scalar field, as illustrated in
Figure 28.5. In this case, each individual in the population can be considered as an electron
that will build an electrical field around it resulting in a potential. If two electrons come
close, repulsing forces occur, which is pretty much the same as what we want to do with Variety
Preserving. Unlike the electrical field, the power of the sharing potential falls exponentially,
resulting in relatively steep spikes in Figure 28.5, which gives proximity and density a heavier
influence. Electrons in atoms or on planets are limited in their movement by other influences
like gravity or nuclear forces, which are often stronger than the electromagnetic force. In
Variety Preserving, the prevalence rank plays this role – as you can see in Table 28.2, its
influence on the fitness is often dominant.
By summing up the single sharing potentials for each individual in the example, we
obtain the fifth column (share/u) of Table 28.2, the unscaled share values. Their minimum is around
0.022 and the maximum is 0.906. Therefore, we must subtract 0.022 from each of these values
and multiply the result with 1.131. By doing so, we obtain the column share/s. Finally, we
can compute the fitness values v(x) according to lines 37 and 38 in Algorithm 28.5.
The last column of Table 28.2 lists these results. All non-prevailed individuals have
retained a fitness value less than one, lower than those of any other candidate solution in
the population. However, amongst these best individuals, candidate solution 4 is strongly
preferred, since it is located in a very remote location of the objective space. Individual
1 is the least interesting non-dominated one, because it has the densest neighborhood in
Figure 28.4. In this neighborhood, the individuals 5 and 6 with the Pareto ranks 1 and 2 are
located. They are strongly penalized by the sharing process and receive the fitness values
v(5) = 3.446 and v(6) = 5.606. In other words, individual 5 becomes less interesting than
candidate solution 7, which has a worse Pareto rank. 6 is now even worse than individual 8,
although its fitness would be better by two if strict Pareto ranking was applied.
Based on these fitness values, algorithms like tournament selection (see Section 28.4.4)
or fitness proportionate approaches (discussed in Section 28.4.3) will pick elements in a
way that preserves the pressure into the direction of the Pareto frontier but also leads to a
balanced and sustainable variety in the population. The benefits of this approach have been
shown, for instance, in [2188, 2888, 2912, 2914].

28.3.6 Tournament Fitness Assignment

In tournament fitness assignment, which is a generalization of the q-level (binary) tournament
selection introduced by Weicker [2879], the fitness of each individual is computed by letting
it compete q times against r other individuals (with r = 1 as default) and counting the
number of competitions it loses. For a better understanding of the tournament metaphor
see Section 28.4.4 on page 296, where the tournament selection scheme is discussed. The
number of losses will approximate the Pareto rank of the individual, but in a more
randomized fashion. If the number of tournaments won were counted instead of the losses, the
same problems as in the first idea of Pareto ranking would be encountered.
28.4. SELECTION 285

Algorithm 28.6: v ←− assignFitnessTournament(q, r, pop, cmpf )


Input: q: the number of tournaments per individuals
Input: r: the number of other contestants per tournament, normally 1
Input: pop: the population to assign fitness values to
Input: cmpf : the comparator function providing the prevalence relation
Data: i, j, k, z: counter variables
Data: b: a Boolean variable being true as long as a tournament isn’t lost
Data: p: the individual currently examined
Output: v: the fitness function
1 begin
2 for i ←− len(pop) − 1 down to 0 do
3 z ←− q
4 p ←− pop[i]
5 for j ←− q down to 1 do
6 b ←− true
7 k ←− r
8 while (k > 0) ∧ b do
9 b ←− ¬ (pop[⌊randomUni[0, len(pop))⌋].x ≺ p.x)
10 k ←− k − 1
11 if b then z ←− z − 1
12 v(p) ←− z
13 return v
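Algorithm 28.6 can be sketched in Java for the default case r = 1 as follows. This is a hypothetical sketch: for simplicity, the prevalence relation is passed in as a precomputed Boolean matrix instead of a comparator function, and an opponent may be the individual itself, as in the algorithm.

```java
import java.util.Random;

/** Sketch of tournament fitness assignment (Algorithm 28.6) with r = 1. */
public class TournamentFitness {

  /**
   * prevails[j][i] == true iff individual j prevails over individual i.
   * Returns v(p) = number of the q tournaments that each individual loses.
   */
  static int[] assignFitnessTournament(int q, boolean[][] prevails, Random rnd) {
    int n = prevails.length;
    int[] v = new int[n];
    for (int i = 0; i < n; i++) {
      int losses = 0;
      for (int t = 0; t < q; t++) {
        int j = rnd.nextInt(n);        // draw a random opponent
        if (prevails[j][i]) losses++;  // individual i loses this tournament
      }
      v[i] = losses;
    }
    return v;
  }
}
```

A non-prevailed individual can never lose and thus always receives fitness 0, while an individual prevailed by many others will lose most of its q tournaments, approximating its Pareto rank stochastically.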

28.4 Selection

28.4.1 Introduction

Definition D28.9 (Selection). In Evolutionary Algorithms, the selection27 operation


mate = selection(pop, v, mps) chooses mps individuals according to their fitness values v
(subject to minimization) from the population pop and places them into the mating pool
mate [167, 340, 1673, 1912].

mate = selection(pop, v, mps) ⇒ ∀p ∈ mate ⇒ p ∈ pop


∀p ∈ pop ⇒ p ∈ G × X
v(p) ∈ R+ ∀p ∈ pop
(len(mate) ≥ min {len(pop) , mps}) ∧ (len(mate) ≤ mps)
(28.15)

Definition D28.10 (Mating Pool). The mating pool mate in an Evolutionary Algorithm
contains the individuals which are selected for further investigation and on which the repro-
duction operations (see Section 28.5 on page 306) will subsequently be applied.
27 http://en.wikipedia.org/wiki/Selection_%28genetic_algorithm%29 [accessed 2007-07-03]
Selection may behave in a deterministic or in a randomized manner, depending on the
algorithm chosen and its application-dependent implementation. Furthermore, elitist Evolutionary
Algorithms may incorporate an archive archive in the selection process, as outlined
in ??.
Generally, there are two classes of selection algorithms: those with replacement (annotated
with a subscript r in this book) and those without replacement (here annotated with a
subscript w) [2399]. In Figure 28.6, we present the transition from generation t to t + 1 for
both types of selection methods.

[Figure: schematic comparison of both selection schemes for a population Pop(t) of ps = 9 individuals (A to I) and a mating pool Mate(t) of size 5. With selection with replacement, individuals can enter the mating pool more than once (e.g., A and B); with selection without replacement, each individual enters at most once. After duplication, mutation, and crossover, the offspring form Pop(t+1) with ps = 9. A dashed path indicates that in some EAs (steady-state, (µ+λ) ES, . . . ) the parents compete with the offspring.]

Figure 28.6: Selection with and without replacement.

In a selection algorithm without replacement, each individual from the population pop can
enter the mating pool mate at most once in each generation.
If a candidate solution has been selected, it cannot be selected a second time in the same
generation. This process is illustrated in the bottom row of Figure 28.6.
The mating pool returned by algorithms with replacement can contain the same indi-
vidual multiple times, as sketched in the top row of Figure 28.6. The dashed gray line in
this figure stands for the possibility to include the parent generation in the next selection
process. In other words, the parents selected into mate(t) may compete with their children
in pop(t + 1) and together take part in the next environmental selection step. This is the case
in (µ + λ)-Evolution Strategies and in steady-state Evolutionary Algorithms, for example.
In extinctive/generational EAs, such as, for example, (µ, λ)-Evolution Strategies, the parent
generation simply is discarded, as discussed in Section 28.1.4.1.

mate = selectionw (pop, v, mps) ⇒ count(p, mate) = 1 ∀p ∈ mate (28.16)


mate = selectionr (pop, v, mps) ⇒ count(p, mate) ≥ 1 ∀p ∈ mate (28.17)
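The difference between the two schemes can be sketched as follows. This is a hypothetical illustration: uniform random choice stands in for an actual fitness-based selection algorithm, since only the replacement aspect (Equations 28.16 and 28.17) is of interest here.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

/** Sketch contrasting selection with and without replacement (Eq. 28.16/28.17). */
public class ReplacementDemo {

  /** With replacement: the same individual index may be drawn several times. */
  static List<Integer> withReplacement(int ps, int mps, Random rnd) {
    List<Integer> mate = new ArrayList<>();
    for (int i = 0; i < mps; i++) {
      mate.add(rnd.nextInt(ps));
    }
    return mate;
  }

  /** Without replacement: every individual enters the mating pool at most once. */
  static List<Integer> withoutReplacement(int ps, int mps, Random rnd) {
    List<Integer> all = new ArrayList<>();
    for (int i = 0; i < ps; i++) {
      all.add(i);
    }
    Collections.shuffle(all, rnd);              // pick mps distinct individuals
    return all.subList(0, Math.min(mps, ps));
  }
}
```

The without-replacement variant can never return duplicates, matching count(p, mate) = 1 in Equation 28.16, whereas the with-replacement variant may return the same index several times.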

The selection algorithms have major impact on the performance of Evolutionary Algorithms.
Their behavior has thus been subject to several detailed studies, conducted by, for instance,
Blickle and Thiele [340], Chakraborty et al. [520], Goldberg and Deb [1079], Sastry and
Goldberg [2400], and Zhong et al. [3079], just to name a few.
Usually, fitness assignment processes are carried out before selection and the selection
algorithms base their decisions solely on the fitness v of the individuals. It is possible to
rely on the prevalence or dominance relation, i. e., to write selection(pop, cmpf , mps) instead
of selection(pop, v, mps), or, in case of single-objective optimization, to use the objective
values directly (selection(pop, f, mps)) and thus, to save the costs of the fitness assignment
process. However, this can lead to the same problems that occurred in the first approach of
Pareto-ranking fitness assignment (see Point 1 in Section 28.3.3 on page 275).
As outlined in Paragraph 28.1.2.6.2, Evolutionary Algorithms generally apply survival
selection, sometimes followed by a sexual selection procedure. In this section,
we will focus on the former one.

28.4.1.1 Visualization

In the following sections, we will discuss multiple selection algorithms. In order to make
them easier to understand, we will visualize the expected number of offspring ESp of an individual,
i. e., the number of times it will take part in reproduction. If we assume uniform
sexual selection (i. e., no sexual selection), this value will be proportional to its number of
occurrences in the mating pool.

Example E28.18 (Individual Distribution after Selection).


Therefore, we will use the special case where we have a population pop of the constant size
of ps(t) = ps(t + 1) = 1000 individuals. The individuals are numbered from 0 to 999, i. e.,
p0 ..p999 . For the sake of simplicity, we assume that sexual selection from the mating pool
mate is uniform and that only a unary reproduction operation such as mutation is used.
The fitness of an individual strongly influences its chances for reproduction. We consider four
cases with fitness subject to minimization:

1. As sketched in Fig. 28.7.a, the individual pi has fitness i, i. e., v1 (p0 ) = 0, v1 (p1 ) =
1, . . . , v1 (p999 ) = 999.

2. Individual pi has fitness (i + 1)^3, i. e., v2 (p0 ) = 1, v2 (p1 ) = 8, . . . , v2 (p999 ) =
1 000 000 000, as illustrated in Fig. 28.7.b.
3. Individual pi has the fitness v3 (pi ) = 0.0001i if i < 999 and v3 (p999 ) = 1, as sketched
in Fig. 28.7.c.

4. Individual pi has the fitness v4 (pi ) = 1 + v3 (pi ), i. e., v3 shifted upwards by 1, as
sketched in Fig. 28.7.d.

[Figure 28.7: The four example fitness cases. Fig. 28.7.a: Case 1: v1 (pi ) = i. Fig. 28.7.b: Case 2: v2 (pi ) = (i + 1)^3. Fig. 28.7.c: Case 3: v3 (pi ) = 0.0001i if i < 999, 1 otherwise. Fig. 28.7.d: Case 4: v4 (pi ) = 1 + v3 (pi ).]



28.4.2 Truncation Selection

Truncation selection28 , also called deterministic selection, threshold selection, or breeding


selection, returns the k ≤ mps best elements from the list pop. The name breeding selection
stems from the fact that farmers who breed animals or crops only choose the best individuals
for reproduction, too [302]. The selected elements are copied as often as needed until the
mating pool size mps is reached. Most often k equals the mating pool size mps which usually
is significantly smaller than the population size ps.
Algorithm 28.7 realizes truncation selection with replacement (i. e., where individuals
may enter the mating pool mate more than once) and Algorithm 28.8 represents truncation
selection without replacement. Both algorithms are implemented in one single
class whose sources are provided in Listing 56.13 on page 754. They first sort the population
in ascending order according to the fitness v. For truncation selection without replacement,
k = mps and k < ps always holds since, otherwise, this approach would make no sense:
it would select all individuals. Algorithm 28.8 simply returns the best mps individuals from
the population. Algorithm 28.7, on the other hand, iterates from 0 to mps − 1 after the
sorting and inserts only the elements with indices from 0 to k − 1 into the mating pool.

Algorithm 28.7: mate ←− truncationSelectionr (k, pop, v, mps)


Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: k: cut-off rank, usually k = mps
Data: i: counter variable
Output: mate: the survivors of the truncation which now form the mating pool
1 begin
2 mate ←− ()
3 k ←− min {k, ps}
4 pop ←− sortAsc(pop, v)
5 for i ←− 0 up to mps − 1 do
6 mate ←− addItem(mate, pop[i mod k])
7 return mate

Algorithm 28.8: mate ←− truncationSelectionw (mps, pop, v)


Input: pop: the list of individuals to select from
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Output: mate: the survivors of the truncation which now form the mating pool
1 begin
2 pop ←− sortAsc(pop, v)
3 return deleteRange(pop, mps, ps − mps)
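For illustration, both truncation variants can be sketched in a few lines of Python. This sketch is an addition to the text and not part of the book's Java sources (Listing 56.13); `pop` is assumed to be a list of individuals and `v` a function returning the fitness of an individual, subject to minimization:

```python
def truncation_selection_r(k, pop, v, mps):
    """Truncation selection with replacement (cf. Algorithm 28.7):
    the k best individuals cyclically fill the mating pool of size mps."""
    k = min(k, len(pop))
    best = sorted(pop, key=v)            # ascending fitness: best first
    return [best[i % k] for i in range(mps)]

def truncation_selection_w(pop, v, mps):
    """Truncation selection without replacement (cf. Algorithm 28.8):
    simply keep the mps best individuals."""
    return sorted(pop, key=v)[:mps]
```

With k < mps, the with-replacement variant keeps cycling through the k best individuals until the mating pool is full.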

Truncation selection is typically applied in (µ + λ) and (µ, λ)
Evolution Strategies as well as in steady-state EAs. In general Evolutionary Algorithms, it
28 http://en.wikipedia.org/wiki/Truncation_selection [accessed 2007-07-03]
should be combined with a fitness assignment process that incorporates diversity information
in order to prevent premature convergence. Recently, Lässig and Hoffmann [1686] and Lässig
et al. [1687] have proved that threshold selection, a truncation selection method where a cut-
off fitness is defined after which no further individuals are selected, is the optimal selection
strategy for crossover, provided that the right cut-off value is used. In practical applications,
this value is normally not known.

Example E28.19 (Truncation Selection (Example E28.18 cntd)).


In Figure 28.8, we sketch the number of offspring for the individuals p ∈ pop from Example E28.18 if truncation selection is applied.

[Figure 28.8: The number of expected offspring ES(pi) in truncation selection, plotted for k ∈ {0.100, 0.125, 0.250, 0.500, 1.000} · ps over the individual index i and the corresponding fitness values v1 (pi )..v4 (pi ).]

As stated, k is less than or equal to the mating pool size mps. Under the assumption that no sexual selection takes place, ES(p) only
depends on k.
Under truncation selection, the diagram will look exactly the same regardless of which of
the four example fitness configurations we use. Here, only the order of the individuals and not
the numerical relation of their fitness values is important. If we set k = ps, each individual will
have one offspring on average. If k = mps/2, the top 50% of the individuals will have two offspring
each and the others none. For k = mps/10, only the best 100 of the 1000 candidate solutions
will reach the mating pool and reproduce 10 times on average.

28.4.3 Fitness Proportionate Selection


Fitness proportionate selection29 was the selection method applied in the original Genetic
Algorithms as introduced by Holland [1250] and therefore is one of the oldest selection
29 http://en.wikipedia.org/wiki/Fitness_proportionate_selection [accessed 2008-03-19]
schemes. In this Genetic Algorithm, fitness was subject to maximization. Under fitness
proportionate selection, as its name already implies, the probability P (select(p)) of an in-
dividual p ∈ pop to enter the mating pool is proportional to its fitness v(p). This relation
in its original form is defined in Equation 28.18 below: The probability P (select(p)) that
individual p is picked in each selection decision corresponds to the fraction of its fitness in
the sum of the fitness of all individuals.
P(select(p)) = v(p) / Σ_{∀p′ ∈ pop} v(p′)                         (28.18)

There exists a variety of approaches which realize such probability distributions [1079], like
stochastic remainder selection [358, 409] and stochastic universal selection [188, 1133]. The
most commonly known method is the Monte Carlo roulette wheel selection by De Jong
[711], where we imagine the individuals of a population to be placed on a roulette30 wheel
as sketched in Fig. 28.9.a. The size of the area on the wheel standing for a candidate solution
is proportional to its fitness. The wheel is spun, and the individual where it stops is placed
into the mating pool mate. This procedure is repeated until mps individuals have been
selected.
In the context of this book, fitness is subject to minimization. Here, higher fitness values
v(p) indicate unfit candidate solutions p.x whereas lower fitness denotes high utility. Fur-
thermore, the fitness values are normalized into a range of [0, 1], because otherwise, fitness
proportionate selection will handle the set of fitness values {0, 1, 2} in a different way than
{10, 11, 12}. Equation 28.22 defines the framework for such a (normalized) fitness propor-
tionate selection. It is realized in Algorithm 28.9 as a variant with and in Algorithm 28.10
without replacement.

minV = min {v(p) ∀p ∈ pop}                                        (28.19)
maxV = max {v(p) ∀p ∈ pop}                                        (28.20)
normV(p) = (maxV − v(p)) / (maxV − minV)                          (28.21)
P(select(p)) ≈ normV(p) / Σ_{∀p′ ∈ pop} normV(p′)                 (28.22)

30 http://en.wikipedia.org/wiki/Roulette [accessed 2008-03-20]



Algorithm 28.9: mate ←− rouletteWheelSelectionr (pop, v, mps)


Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Data: i: a counter variable
Data: a: a temporary store for a numerical value
Data: A: the array of fitness values
Data: min, max, sum: the minimum, maximum, and sum of the fitness values
Output: mate: the mating pool
1 begin
2 A ←− createList(ps, 0)
3 min ←− ∞
4 max ←− −∞
// Initialize fitness array and find extreme fitnesses
5 for i ←− 0 up to ps − 1 do
6 a ←− v(pop[i])
7 A[i] ←− a
8 if a < min then min ←− a
9 if a > max then max ←− a
// break situations where all individuals have the same
fitness
10 if max = min then
11 max ←− max + 1
12 min ←− min − 1
/* aggregate fitnesses: A = (normV(pop[0]),normV(pop[0]) +
normV(pop[1]),normV(pop[0]) + normV(pop[1]) + normV(pop[2]),. . . ) */
13 sum ←− 0
14 for i ←− 0 up to ps − 1 do
15 sum ←− sum + (max − A[i]) / (max − min)
16 A[i] ←− sum
/* spin the roulette wheel: the chance that a uniformly
distributed random number lands in A[i] is (inversely)
proportional to v(pop[i]) */
17 for i ←− 0 up to mps − 1 do
18 a ←− searchItemAS(randomUni[0, sum) , A)
19 if a < 0 then a ←− −a − 1
20 mate ←− addItem(mate, pop[a])
21 return mate

Algorithm 28.10: mate ←− rouletteWheelSelectionw (pop, v, mps)


Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Data: i: a counter variable
Data: a, b: temporary stores for numerical values
Data: A: the array of fitness values
Data: min, max, sum: the minimum, maximum, and sum of the fitness values
Output: mate: the mating pool
1 begin
2 A ←− createList(ps, 0)
3 min ←− ∞
4 max ←− −∞
// Initialize fitness array and find extreme fitnesses
5 for i ←− 0 up to ps − 1 do
6 a ←− v(pop[i])
7 A[i] ←− a
8 if a < min then min ←− a
9 if a > max then max ←− a
// break situations where all individuals have the same
fitness
10 if max = min then
11 max ←− max + 1
12 min ←− min − 1
/* aggregate fitnesses: A = (normV(pop[0]),normV(pop[0]) +
normV(pop[1]),normV(pop[0]) + normV(pop[1]) + normV(pop[2]),. . . ) */
13 sum ←− 0
14 for i ←− 0 up to ps − 1 do
15 sum ←− sum + (max − A[i]) / (max − min)
16 A[i] ←− sum
/* spin the roulette wheel: the chance that a uniformly
distributed random number lands in A[i] is (inversely)
proportional to v(pop[i]) */
17 for i ←− 0 up to min {mps, ps} − 1 do
18 a ←− searchItemAS(randomUni[0, sum) , A)
19 if a < 0 then a ←− −a − 1
20 if a = 0 then b ←− 0
21 else b ←− A[a − 1]
22 b ←− A[a] − b
// remove the element at index a from A
23 for j ←− a + 1 up to len(A) − 1 do
24 A[j] ←− A[j] − b
25 sum ←− sum − b
26 mate ←− addItem(mate, pop[a])
27 pop ←− deleteItem(pop, a)
28 A ←− deleteItem(A, a)
29 return mate
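The normalized roulette wheel of Algorithm 28.9 can be sketched in Python using a cumulative-sum array and binary search in place of the searchItemAS routine. This is an illustrative addition, not the book's implementation; `rng` is an assumed injectable random source:

```python
import bisect
import random

def roulette_wheel_selection_r(pop, v, mps, rng=random):
    """Fitness proportionate selection with replacement for fitness
    minimization (cf. Algorithm 28.9): lower fitness -> larger slice."""
    fitnesses = [v(p) for p in pop]
    lo, hi = min(fitnesses), max(fitnesses)
    if lo == hi:                        # all fitnesses equal: widen the range
        lo, hi = lo - 1, hi + 1
    # cumulative normalized fitness; the slice of p has size (hi - v(p))/(hi - lo)
    cum, total = [], 0.0
    for a in fitnesses:
        total += (hi - a) / (hi - lo)
        cum.append(total)
    mate = []
    for _ in range(mps):
        r = rng.uniform(0.0, total)
        idx = min(bisect.bisect_right(cum, r), len(pop) - 1)
        mate.append(pop[idx])
    return mate
```

Note that, as in the pseudocode, the worst individual receives a slice of size zero and (almost) never reproduces.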
Example E28.20 (Roulette Wheel Selection).
In Figure 28.9, we illustrate the fitness distribution on the roulette wheel for fitness maximization (Fig. 28.9.a) and normalized minimization (Fig. 28.9.b). In this example we compute the expected proportion in which each of the four elements p1 ..p4 will occur in the mating pool mate if the fitness values are v(p1 ) = 10, v(p2 ) = 20, v(p3 ) = 30, and v(p4 ) = 40.

[Figure 28.9: Examples for the idea of roulette wheel selection. Fig. 28.9.a: fitness maximization, the wheel area of p is proportional to v(p). Fig. 28.9.b: normalized fitness minimization, the wheel area of p is proportional to normV(p).]

Amongst others, Whitley [2929] points out that even fitness normalization as performed here
cannot overcome the drawbacks of fitness proportional selection methods. In order to clarify
these drawbacks, let us visualize the expected results of roulette wheel selection applied to
the special cases stated in Section 28.4.1.1.

Example E28.21 (Roulette Wheel Selection (Example E28.18 cntd)).


Figure 28.10 illustrates the expected number of occurrences ES (pi ) of an individual pi if
roulette wheel selection is applied with replacement.
In such cases, usually one parent for each offspring is selected into the mating pool, which
then is as large as the population (mps = ps = 1000). Thus, we draw a single individual
from the population pop one thousand times. Each single choice is based on the proportion of
the individual's fitness in the total fitness of all individuals, as defined in Equation 28.18 and
Equation 28.22.
In scenario 1, the fitness sum is 0 + 1 + · · · + 999 = (999 · 1000)/2 = 499500. Hence, the
relation ES(pi) = mps · i/499500 holds for fitness maximization and ES(pi) = mps · (999 − i)/499500
for minimization. As a result (sketched in Fig. 28.10.a), the fittest individuals produce (on average)
two offspring, whereas the worst candidate solutions will always vanish in this example.
For scenario 2 with v2 (pi ) = (i + 1)^3, the total fitness sum is 500500^2 ≈ 2.505 · 10^11
and ES(pi) = mps · (i + 1)^3 / (2.505 · 10^11) holds for maximization. The resulting expected values depicted
in Fig. 28.10.b are significantly different from those in Fig. 28.10.a.
The meaning of this is that the design of the objective functions (or the fitness assign-
ment process) has a very strong influence on the convergence behavior of the Evolutionary
Algorithm. This selection method only works well if the fitness of an individual is indeed
something like a proportional measure for the probability that it will produce better off-
spring.
The drawback becomes even clearer when comparing the two fitness functions v3 and
v4 . v4 is basically the same as v3, just shifted upwards by 1.

[Figure 28.10: The number of expected offspring ES(pi) in roulette wheel selection, for fitness maximization and minimization. Fig. 28.10.a: v1 (pi ) = i. Fig. 28.10.b: v2 (pi ) = (i + 1)^3. Fig. 28.10.c: v3 (pi ) = 0.0001i if i < 999, 1 otherwise. Fig. 28.10.d: v4 (pi ) = 1 + v3 (pi ).]

From a balanced selection method, we would expect that it produces similar mating pools for two functions which are
that similar. However, the result is entirely different. For v3 , the individual at index 999
is totally outstanding with its fitness of 1 (which is more than ten times the fitness of the
next-best individual). Hence, it receives many more copies in the mating pool. In the case
of v4 , however, it is only around twice as good as the next best individual and thus, only
receives around twice as many copies. From this situation, we can draw two conclusions:
1. The number of copies that an individual receives changes significantly when the vertical
shift of the fitness function changes – even if its other characteristics remain constant.
2. In cases like v3, where one individual has an outstanding fitness, the Genetic Algorithm
may degenerate into something like a Hill Climber, since this individual will occupy most of
the slots in the mating pool. If the Genetic Algorithm behaves like a Hill
Climber, its benefits, such as the ability to trace more than one possible solution at a
time, disappear.

Roulette wheel selection performs badly compared to other schemes like tournament
selection or ranking selection [1079]. It is mainly included here for the sake of completeness
and because it is easy to understand and suitable for educational purposes.

28.4.4 Tournament Selection

Tournament selection31 , proposed by Wetzel [2923] and studied by Brindle [409], is one
of the most popular and effective selection schemes. Its features are well-known and have
been analyzed by a variety of researchers such as Blickle and Thiele [339, 340], Lee et al.
[1707], Miller and Goldberg [1898], Sastry and Goldberg [2400], and Oei et al. [2063]. In
tournament selection, k elements are picked from the population pop and compared with
each other in a tournament. The winner of this competition will then enter the mating pool
mate. Although being a simple selection strategy, it is very powerful and therefore used in
many practical applications [81, 96, 461, 1880, 2912, 2914].

Example E28.22 (Binary Tournament Selection).


As an example, consider a tournament selection (with replacement) with a tournament size of
two [2931]. For each single tournament, the contestants are chosen randomly according to a
uniform distribution and the winners are allowed to enter the mating pool. If we assume
that the mating pool will contain about as many individuals as the population, each
individual will, on average, participate in two tournaments. The best candidate solution of
the population will win all the contests it takes part in and thus, again on average, contribute
approximately two copies to the mating pool. The median individual of the population is
better than 50% of its challengers but will also lose against 50%. Therefore, it will enter the
mating pool roughly one time on average. The worst individual in the population will lose
all its challenges to other candidate solutions and can only score even if competing against
itself, which will happen with probability (1/mps)^2. It will not be able to reproduce in the
average case because mps · (1/mps)^2 = 1/mps < 1 ∀mps > 1.

Example E28.23 (Tournament Selection (Example E28.18 cntd)).


For visualization purposes, let us go back to Example E28.18 with a population of 1000
individuals p0 ..p999 and mps = ps = 1000. Again, we assume that each individual has a
unique fitness value of v1 (pi ) = i, v2 (pi ) = (i + 1)^3, v3 (pi ) = 0.0001i if i < 999, 1 otherwise,
and v4 (pi ) = 1 + v3 (pi ), respectively. If we apply tournament selection with replacement in
31 http://en.wikipedia.org/wiki/Tournament_selection [accessed 2007-07-03]
this special scenario, the expected number of occurrences ES (pi ) of an individual pi in the
mating pool can be computed according to Blickle and Thiele [340] as

ES(pi) = mps · (((1000 − i)/1000)^k − ((1000 − i − 1)/1000)^k)          (28.23)

[Figure 28.11: The number of expected offspring ES(pi) in tournament selection, for tournament sizes k ∈ {1, 2, 3, 4, 5, 10}.]

The absolute values of the fitness play no role in tournament selection. The only thing that
matters is whether or not the fitness of one individual is higher than the fitness of another
one, not the fitness difference itself. The expected numbers of offspring for the four example
cases from Example E28.18 are therefore all the same. Tournament selection thus gets rid of the
problematic dependency on the relative proportions of the fitness values in roulette-wheel
style selection schemes.
Figure 28.11 depicts the expected offspring numbers for different tournament sizes k ∈
{1, 2, 3, 4, 5, 10}. If k = 1, tournament selection degenerates to randomly picking individuals
and each candidate solution will occur one time in the mating pool on average. With rising
k, the selection pressure increases: individuals with good fitness values create more and
more offspring whereas the chance of worse candidate solutions to reproduce decreases.

Tournament selection with replacement (TSR) is presented in Algorithm 28.11, which has
been implemented in Listing 56.14 on page 756. Tournament selection without replacement
(TSoR) [43, 1707] can be defined in two forms. In the first variant, specified as Algo-
rithm 28.12, an individual may enter the mating pool at most once.
In Algorithm 28.13, on the other hand, a candidate solution cannot compete against itself.

Algorithm 28.11: mate ←− tournamentSelectionr (k, pop, v, mps)


Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] k: the tournament size
Data: i, j, a, b: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2 mate ←− ()
3 for i ←− 1 up to mps do
4 a ←− ⌊randomUni[0, ps)⌋
5 for j ←− 1 up to k − 1 do
6 b ←− ⌊randomUni[0, ps)⌋
7 if v(pop[b]) < v(pop[a]) then a ←− b
8 mate ←− addItem(mate, pop[a])
9 return mate
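Algorithm 28.11 translates almost directly into Python. The following sketch (an illustrative addition, not the book's reference code) assumes fitness minimization and an injectable random source `rng`:

```python
import random

def tournament_selection_r(k, pop, v, mps, rng=random):
    """Deterministic tournament selection with replacement
    (cf. Algorithm 28.11), fitness subject to minimization."""
    mate = []
    for _ in range(mps):
        best = rng.randrange(len(pop))
        for _ in range(k - 1):
            challenger = rng.randrange(len(pop))
            if v(pop[challenger]) < v(pop[best]):
                best = challenger
        mate.append(pop[best])
    return mate
```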

Algorithm 28.12: mate ←− tournamentSelectionw (k, pop, v, mps)


Input: pop: the list of individuals to select from
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] k: the tournament size
Data: i, j, a, b: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2 mate ←− ()
3 pop ←− sortAsc(pop, v)
4 for i ←− 1 up to min {len(pop) , mps} do
5 a ←− ⌊randomUni[0, len(pop))⌋
6 for j ←− 1 up to min {len(pop) , k} − 1 do
7 b ←− ⌊randomUni[0, len(pop))⌋
8 if v(pop[b]) < v(pop[a]) then a ←− b
9 mate ←− addItem(mate, pop[a])
10 pop ←− deleteItem(pop, a)
11 return mate

Algorithm 28.13: mate ←− tournamentSelectionw (k, pop, v, mps)


Input: pop: the list of individuals to select from
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] k: the tournament size
Data: A: the list of contestants per tournament
Data: a: the tournament winner
Data: i, j: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2 mate ←− ()
3 pop ←− sortAsc(pop, v)
4 for i ←− 1 up to mps do
5 A ←− ()
6 for j ←− 1 up to min{k, len(pop)} do
7 repeat
8 a ←− ⌊randomUni[0, len(pop))⌋
9 until searchItemUS(a, A) < 0
10 A ←− addItem(A, a)
11 a ←− min A
12 mate ←− addItem(mate, pop[a])
13 return mate

The algorithms specified here should, more precisely, be called deterministic tour-
nament selection algorithms, since the winner of the k contestants that take part in each
tournament enters the mating pool. In the non-deterministic variants, this is not necessarily
the case. There, a probability η is defined. The best individual in the tournament is selected
with probability η, the second best with probability η(1 − η), the third best with probability
η(1 − η)^2, and so on: the ith best individual in a tournament enters the mating pool with
probability η(1 − η)^(i−1). Algorithm 28.14 realizes this behavior for a tournament selection with
replacement. Notice that it becomes equivalent to Algorithm 28.11 on the preceding page
if η is set to 1. Besides the algorithms discussed here, a set of additional tournament-based
selection methods has been introduced by Lee et al. [1707].

Algorithm 28.14: mate ←− tournamentSelectionNDr (η, k, pop, v, mps)


Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] η: the selection probability, η ∈ [0, 1]
Input: [implicit] k: the tournament size
Data: A: the set of tournament contestants
Data: i, j: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2 mate ←− ()
3 pop ←− sortAsc(pop, v)
4 for i ←− 1 up to mps do
5 A ←− ()
6 for j ←− 1 up to k do
7 A ←− addItem(A, ⌊randomUni[0, ps)⌋)
8 A ←− sortAsc(A, cmp(a1 , a2 ) ≡ (a1 − a2 ))
9 for j ←− 0 up to len(A) − 1 do
10 if (randomUni[0, 1) ≤ η) ∨ (j ≥ len(A) − 1) then
11 mate ←− addItem(mate, pop[A[j]])
12 j ←− ∞

13 return mate
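The non-deterministic variant can be sketched as follows. This is an illustrative Python rendering of Algorithm 28.14 (an addition, not the book's code) under the assumption that sorting the population lets contestant indices double as fitness ranks; the fallback to the last contestant mirrors the j ≥ len(A) − 1 condition:

```python
import random

def tournament_selection_nd_r(eta, k, pop, v, mps, rng=random):
    """Non-deterministic tournament selection with replacement
    (cf. Algorithm 28.14): the i-th best contestant of each tournament
    wins with probability eta * (1 - eta)**i; if no earlier contestant
    is chosen, the last (worst) one wins."""
    pop = sorted(pop, key=v)              # ascending fitness: index = rank
    mate = []
    for _ in range(mps):
        contestants = sorted(rng.randrange(len(pop)) for _ in range(k))
        winner = contestants[-1]          # fallback: worst contestant
        for idx in contestants:
            if rng.random() < eta:
                winner = idx
                break
        mate.append(pop[winner])
    return mate
```

With eta = 1.0 the best contestant always wins, reproducing the deterministic Algorithm 28.11.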

28.4.5 Linear Ranking Selection

Linear ranking selection [340], introduced by Baker [187] and more thoroughly discussed by
Blickle and Thiele [340], Whitley [2929], and Goldberg and Deb [1079] is another approach
for circumventing the problems of fitness proportionate selection methods. In ranking se-
lection [187, 1133, 2929], the probability of an individual to be selected is proportional to
its position (rank) in the sorted list of all individuals in the population. Using the rank
smoothes out larger differences of the fitness values and emphasizes small ones. Generally,
we can consider the conventional ranking selection method as the application of a fitness
assignment process setting the rank as fitness (which can be achieved with Pareto ranking,
for instance) and a subsequent fitness proportional selection.
In its original definition discussed in [340], linear ranking selection was used to maximize
fitness. Here, we will discuss it again for fitness minimization. In linear ranking selection with
replacement, the probability P (select(pi )) that the individual at rank i ∈ 0..ps − 1 is picked in
each selection decision is given in Equation 28.24 below. Notice that each individual always
receives a different rank: even if two individuals have the same fitness, they must be
assigned to different ranks [340].

 
P(select(pi)) = (1/ps) · (η− + (η+ − η−) · (ps − i − 1)/(ps − 1))        (28.24)

In Equation 28.24, the best individual (at rank 0) will have the probability η+/ps of being
picked while the worst individual (rank ps − 1) is chosen with chance η−/ps:
 
P(select(p0)) = (1/ps) · (η− + (η+ − η−) · (ps − 0 − 1)/(ps − 1)) = (1/ps) · (η− + η+ − η−) = η+/ps   (28.25)
P(select(pps−1)) = (1/ps) · (η− + (η+ − η−) · (ps − (ps − 1) − 1)/(ps − 1))                           (28.26)
                = (1/ps) · (η− + (η+ − η−) · 0/(ps − 1)) = η−/ps                                      (28.27)

It should be noted that all probabilities must add up to one, i. e.:

Σ_{i=0}^{ps−1} P(select(pi)) = 1                                                        (28.28)

1 = Σ_{i=0}^{ps−1} (1/ps) · (η− + (η+ − η−) · (ps − i − 1)/(ps − 1))                    (28.29)
  = (1/ps) · Σ_{i=0}^{ps−1} (η− + (η+ − η−) · (ps − i − 1)/(ps − 1))                    (28.30)
  = (1/ps) · [ps · η− + Σ_{i=0}^{ps−1} (η+ − η−) · (ps − i − 1)/(ps − 1)]               (28.31)
  = (1/ps) · [ps · η− + (η+ − η−) · Σ_{i=0}^{ps−1} (ps − i − 1)/(ps − 1)]               (28.32)
  = (1/ps) · [ps · η− + ((η+ − η−)/(ps − 1)) · Σ_{i=0}^{ps−1} (ps − i − 1)]             (28.33)
  = (1/ps) · [ps · η− + ((η+ − η−)/(ps − 1)) · (ps · (ps − 1) − Σ_{i=0}^{ps−1} i)]      (28.34)
  = (1/ps) · [ps · η− + ((η+ − η−)/(ps − 1)) · (ps · (ps − 1) − (1/2) · ps · (ps − 1))] (28.35)
  = (1/ps) · [ps · η− + ((η+ − η−)/(ps − 1)) · (1/2) · ps · (ps − 1)]                   (28.36)
  = (1/ps) · [ps · η− + (1/2) · ps · (η+ − η−)]                                         (28.37)
  = (1/ps) · [(1/2) · ps · η− + (1/2) · ps · η+]                                        (28.38)

1 = (1/2) · η− + (1/2) · η+                                                             (28.39)
1 − (1/2) · η− = (1/2) · η+                                                             (28.40)
2 − η− = η+                                                                             (28.41)

The parameters η − and η + must be chosen in a way that 2 − η − = η + and, obviously,


0 < η − and 0 < η + must hold as well. If the selection algorithm should furthermore make
sense, i. e., prefer individuals with a lower fitness rank over those with a higher fitness rank
(again, fitness is subject to minimization), 0 ≤ η − < η + ≤ 2 is true.
Koza [1602] determines the slope of the linear ranking selection probability function by
a factor rm in his implementation of this selection scheme. According to [340], this factor
can be transformed to η − and η + by the following equations.
η− = 2/(rm + 1)                                                 (28.42)
η+ = 2rm/(rm + 1)                                               (28.43)
2 − η− = η+  ⇒  2 − 2/(rm + 1) = 2rm/(rm + 1)                   (28.44)
2(rm + 1) − 2 = 2rm                                             (28.45)
2rm = 2rm   q.e.d.                                              (28.46)

As mentioned, we can simply emulate this scheme by (1) first building a secondary fit-
ness function v′ based on the ranks of the individuals pi according to the original fitness v
and Equation 28.24 and then (2) applying fitness proportionate selection as given in Algo-
rithm 28.9 on page 292. We therefore only need to set v′ (pi ) = 1 − P (select(pi )) (since Al-
gorithm 28.9 on page 292 minimizes fitness).
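A quick numerical check of Equation 28.24 can be sketched in Python (an illustrative addition; η+ is derived as 2 − η− per Equation 28.41). It confirms the endpoint probabilities η+/ps and η−/ps and that all probabilities sum to one:

```python
def linear_ranking_probs(ps, eta_minus):
    """Selection probabilities of Equation 28.24 for ranks 0..ps-1
    (rank 0 = best individual, fitness subject to minimization)."""
    eta_plus = 2.0 - eta_minus          # required by Equation 28.41
    return [(eta_minus + (eta_plus - eta_minus) * (ps - i - 1) / (ps - 1)) / ps
            for i in range(ps)]
```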

28.4.6 Random Selection

Random selection is a selection method which simply fills the mating pool with randomly
picked individuals from the population. Obviously, using this selection scheme for sur-
vival/environmental selection, i. e., to determine which individuals are allowed to produce
offspring (see Section 28.1.2.6 on page 260), makes little sense. This would turn the Evolu-
tionary Algorithm into some sort of parallel random walk, since it means disregarding all
fitness information.
However, after an actual mating pool has been filled, it may make sense to use random
selection to choose the parents for the different individuals, especially in cases where there
are many parents such as in Evolution Strategies which are introduced in Chapter 30 on
page 359.

Algorithm 28.15: mate ←− randomSelectionr (pop, mps)


Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: mps: the number of individuals to be placed into the mating pool mate
Data: i: counter variable
Output: mate: the randomly picked parents
1 begin
2 mate ←− ()
3 for i ←− 1 up to mps do
4 mate ←− addItem(mate, pop[⌊randomUni[0, ps)⌋])
5 return mate

Algorithm 28.15 provides a specification of random selection with replacement. An


example implementation of this algorithm is given in Listing 56.15 on page 758.

28.4.7 Clearing and Simple Convergence Prevention (SCP)

In our experiments, especially in Genetic Programming [2888] and problems with discrete
objective functions [2912, 2914], we often use a very simple mechanism to prevent premature
convergence. Premature convergence, as outlined in Chapter 13, means that the optimization
process gets stuck at a local optimum and loses the ability to investigate other areas in the
search space, in particular the area where the global optimum resides. Our SCP method
is neither a fitness nor a selection algorithm, but instead more of a filter which is placed
after computing the objectives and before applying a fitness assignment process. Since our
method actively deletes individuals from the population, we placed it into this section.
The idea is simple: the more similar individuals exist in the population, the more likely
the optimization process has converged. It is not clear whether it converged to a global
optimum or to a local one. If it got stuck at a local optimum, the fraction of the population
which resides at that spot should be limited. Then, some individuals necessarily will be
located somewhere else and, if they survive, may find a path away from the local optimum
towards a different area in the search space. In case the optimizer indeed converged to
the global optimum, such a measure does not cause problems because, in the end,
better points than the global optimum cannot be discovered anyway.

28.4.7.1 Clearing

The first one to apply an explicit limitation of the number of individuals in an area of
the search space was Pétrowski [2167, 2168] whose clearing approach is applied in each
generation and works as specified in Algorithm 28.16 where fitness is subject to minimization.
Basically, clearing divides the population of an EA into several sub-populations according

Algorithm 28.16: v′ ←− clearingFilter(pop, σ, k, v)


Input: pop: the list of individuals to apply clearing to
Input: σ: the clearing radius
Input: k: the niche capacity
Input: [implicit] v: the fitness values
Input: [implicit] dist: a distance measure in the genome or phenome
Data: n: the current number of winners
Data: i, j: counter variables
Data: P : the population sorted according to v
Output: v′ : the new fitness assignment
1 begin
2 v′ ←− v
3 P ←− sortAsc(pop, v)
4 for i ←− 0 up to len(P ) − 1 do
5 if v(P [i]) < ∞ then
6 n ←− 1
7 for j ←− i + 1 up to len(P ) − 1 do
8 if (v(P [j]) < ∞) ∧ (dist(P [i], P [j]) < σ) then
9 if n < k then n ←− n + 1
10 else v′ (P [j]) ←− ∞

11 return v′

to a distance measure “dist” applied in the genotypic (G) or phenotypic space (X) in each
generation. The individuals of each sub-population have at most the distance σ to the
fittest individual in this niche. Then, the fitness of all but the k best individuals in such a
sub-population is set to the worst possible value. This effectively prevents that a niche can
get too crowded. Sareni and Krähenbühl [2391] showed that this method is very promising.
Singh and Deb [2503] suggest a modified clearing approach which shifts individuals that
would be cleared farther away and reevaluates their fitness.
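Clearing as specified in Algorithm 28.16 can be sketched in Python as follows. This is an illustrative addition, not the book's implementation; `dist` and `v` are assumed to be user-supplied, and individuals are keyed by object identity for simplicity:

```python
import math

def clearing_filter(pop, sigma, k, v, dist):
    """Clearing (cf. Algorithm 28.16), fitness subject to minimization:
    in every niche of radius sigma around a locally best individual,
    only the k fittest keep their fitness; the rest are set to infinity."""
    new_v = {id(p): v(p) for p in pop}
    ordered = sorted(pop, key=v)
    for i, p in enumerate(ordered):
        if new_v[id(p)] == math.inf:     # already cleared: skip
            continue
        winners = 1
        for q in ordered[i + 1:]:
            if new_v[id(q)] != math.inf and dist(p, q) < sigma:
                if winners < k:
                    winners += 1
                else:
                    new_v[id(q)] = math.inf
    return new_v
```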
28.4.7.2 SCP
We modify this approach in two respects: We measure similarity not in the form of a distance
in the search space G or the problem space X, but in the objective space Y ⊆ R^|f|. All
individuals are compared with each other. If two have exactly the same objective values,
one of them is thrown away with probability c ∈ [0, 1] and does not take part in any further
comparisons. This way, we weed out similar individuals without making any assumptions
about G or X and make room in the population and mating pool for a wider diversity of
candidate solutions. For c = 0, this prevention mechanism is turned off; for c = 1, all
remaining individuals will have different objective values.
Although this approach is very simple, the results of our experiments were often
significantly better with this convergence prevention method turned on than without
it [2188, 2888, 2912, 2914]. Additionally, in none of our experiments, the outcomes were
influenced negatively by this filter, which makes it even more robust than other methods for
convergence prevention like sharing or variety preserving. Algorithm 28.17 specifies how our
simple mechanism works. It has to be applied after the evaluation of the objective values of
the individuals in the population and before any fitness assignment or selection takes place.

Algorithm 28.17: P ←− scpFilter(pop, c, f )


Input: pop: the list of individuals to apply convergence prevention to
Input: c: the convergence prevention probability, c ∈ [0, 1]
Input: [implicit] f : the set of objective function(s)
Data: i, j: counter variables
Data: p: the individual checked in this generation
Output: P : the pruned population
1 begin
2 P ←− ()
3 for i ←− 0 up to len(pop) − 1 do
4 p ←− pop[i]
5 for j ←− len(P ) − 1 down to 0 do
6 if f (p.x) = f (P [j].x) then
7 if randomUni[0, 1) < c then
8 P ←− deleteItem(P, j)

9 P ←− addItem(P, p)
10 return P
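A minimal Python sketch of this filter follows; it is a hypothetical rendering (our naming, not the book's code), where `objectives` maps an individual to its tuple of objective values.

```python
import random

def scp_filter(pop, c, objectives, rng=random):
    """Sketch of Algorithm 28.17: simple convergence prevention (SCP).

    Whenever a new individual has exactly the same objective values as an
    already kept one, the kept copy is discarded with probability c.
    """
    kept = []
    for p in pop:
        for j in range(len(kept) - 1, -1, -1):  # scan the kept list backwards
            if objectives(p) == objectives(kept[j]):
                if rng.random() < c:
                    del kept[j]  # weed out the equal-valued copy
        kept.append(p)
    return kept
```

For c = 0 the filter is a no-op; for c = 1 each objective vector survives exactly once in the returned population.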

If an individual p occurs n times in the population or if there are n individuals with
exactly the same objective values, Algorithm 28.17 cuts down the expected number of their
occurrences ES(p) to

ES(p) = Σ_{i=1}^{n} (1 − c)^{i−1} = Σ_{i=0}^{n−1} (1 − c)^{i} = ((1 − c)^n − 1)/(−c) = (1 − (1 − c)^n)/c    (28.47)

In Figure 28.12, we sketch the expected number of remaining instances of the individual p
after this pruning process if it occurred n times in the population before Algorithm 28.17
was applied.
From Equation 28.47 it follows that even a population of infinite size which has fully converged
to one single value will probably not contain more than 1/c copies of this individual
32 The exactly-the-same-criterion makes sense in combinatorial optimization and many Genetic Programming
problems but may easily be replaced with a limit imposed on the Euclidean distance in real-valued
optimization problems, for instance.
33 instead of defining a fixed threshold k

Figure 28.12: The expected numbers of occurrences ES(p) for different values of n and c (curves plotted for c = 0.1, 0.2, 0.3, 0.5, and 0.7).

after the simple convergence prevention has been applied. This threshold is also visible in
Figure 28.12.
lim_{n→∞} ES(p) = lim_{n→∞} (1 − (1 − c)^n)/c = (1 − 0)/c = 1/c    (28.48)
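The geometric series of Equation 28.47, its closed form, and the 1/c threshold of Equation 28.48 can be checked numerically; the following small sketch (our own naming) does exactly that.

```python
def expected_survivors(n, c):
    """Explicit sum of Equation 28.47: sum of the survival
    probabilities (1-c)**(i-1) over the n equal-valued copies."""
    return sum((1.0 - c) ** (i - 1) for i in range(1, n + 1))

def closed_form(n, c):
    """Closed form of Equation 28.47: (1 - (1-c)**n) / c."""
    return (1.0 - (1.0 - c) ** n) / c
```

Both expressions agree for all n and c, and for growing n the value approaches the threshold 1/c stated in Equation 28.48.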

28.4.7.3 Discussion

In Pétrowski’s clearing approach [2167], the maximum number of individuals which can
survive in a niche was a fixed constant k and, if less than k individuals resided in a niche,
none of them would be affected. Different from that, in SCP, an expected value of the
number of individuals allowed in a niche is specified via the probability c and may be
either exceeded or undercut.
Another difference of the approaches arises from the space in which the distance is
computed: Whereas clearing prevents the EA from concentrating too much on a certain area
in the search or problem space, SCP stops it from keeping too many individuals with equal
utility. The former approach works against premature convergence to a certain solution
structure while the latter forces the EA to “keep track” of a trail to candidate solutions
with worse fitness which may later evolve to good individuals with traits different from
the currently exploited ones. Furthermore, since our method does not need to access any
information apart from the objective values, it is independent of the search and problem
space and can, once implemented, be combined with any kind of population-based optimizer.
Which of the two approaches is better has not yet been determined in comparative ex-
periments and is part of our future work. At the present moment, we assume that clearing
should be more suitable in real-valued search or problem spaces, whereas our own experiments
so far only show that SCP performs very well in combinatorial
problems [2188, 2914] and Genetic Programming [2888].
28.5 Reproduction
An optimization algorithm uses the information gathered up to step t for creating the can-
didate solutions to be evaluated in step t + 1. There exist different methods to do so. In
Evolutionary Algorithms, the aggregated information corresponds to the population pop
and the archive archive of best individuals, if such an archive is maintained. The search
operations searchOp ∈ Op used in the Evolutionary Algorithm family are called reproduction
operations, inspired by the biological procreation mechanisms34 of mother nature [2303].
There are four basic operations:
1. Creation simply creates a new genotype without any ancestors or heritage.
2. Duplication directly copies an existing genotype without any modification.
3. Mutation in Evolutionary Algorithms corresponds to small, random variations in the
genotype of an individual, exactly like natural mutation35 .
4. Like in sexual reproduction, recombination 36 combines two parental genotypes to a
new genotype including traits from both elders.
In the following, we will discuss these operations in detail and provide general definitions
for them. When an Evolutionary Algorithm starts, no information about the search space
has been gathered yet. Hence, unless seeding is applied (see Paragraph 28.1.2.2.2), we cannot
use existing candidate solutions to derive new ones and search operations with an arity higher
than zero cannot be applied. Creation is thus used to fill the initial population pop(t = 1).

Definition D28.11 (Creation). The creation operation create is used to produce a new
genotype g ∈ G with a random configuration.

g = create ⇒ g ∈ G (28.49)

For creating the initial population of the size ps, we define the function createPop(ps) in
Algorithm 28.18.

Algorithm 28.18: pop ←− createPop(ps)


Input: ps: the number of individuals in the new population
Input: [implicit] create: the creation operator
Data: i: a counter variable
Output: pop: the new population of randomly created individuals (len(pop) = ps)
1 begin
2 pop ←− ()
3 for i ←− 0 up to ps − 1 do
4 p ←− ∅

5 p.g ←− create
6 pop ←− addItem(pop, p)
7 return pop

Duplication is just a placeholder for copying an element of the search space, i. e., it is
what occurs when neither mutation nor recombination is applied. It is useful to increase
the share of a given type of individual in a population.

34 http://en.wikipedia.org/wiki/Reproduction [accessed 2007-07-03]


35 http://en.wikipedia.org/wiki/Mutation [accessed 2007-07-03]
36 http://en.wikipedia.org/wiki/Sexual_reproduction [accessed 2008-03-17]
Definition D28.12 (Duplication). The duplication operation duplicate : G 7→ G is used
to create an exact copy of an existing genotype g ∈ G.

g = duplicate(g) ∀g ∈ G (28.50)

In most EAs, a unary search operation, mutation, is provided. This operation takes one
genotype as argument and modifies it. The way this modification is performed is application-
dependent. It may happen in a randomized or in a deterministic fashion. Together with
duplication, mutation corresponds to the asexual reproduction in nature.

Definition D28.13 (Mutation). The mutation operation mutate : G 7→ G is used to


create a new genotype gn ∈ G by modifying an existing one.

gn = mutate(g) : g ∈ G ⇒ gn ∈ G (28.51)

Like mutation, the recombination is a search operation inspired by natural reproduction. Its
role model is sexual reproduction. The goal of implementing a recombination operation is to
provide the search process with some means to join beneficial features of different candidate
solutions. This may happen in a deterministic or randomized fashion.
Notice that the term recombination is more general than crossover since it stands for
arbitrary search operations that combine the traits of two individuals. Crossover, however,
is only used if the elements of the search space G are linear representations. Then, it stands
for exchanging parts of these so-called strings.

Definition D28.14 (Recombination). The recombination37 operation recombine : G ×


G 7→ G is used to create a new genotype gn ∈ G by combining the features of two existing
ones.
gn = recombine(ga , gb ) : ga , gb ∈ G ⇒ gn ∈ G (28.52)

It is not unusual to chain the application of search operators, i. e., to perform something
like mutate(recombine(g1 , g2 )). The four operators are altogether used to reproduce whole
populations of individuals.

Definition D28.15 (reproducePop). The population reproduction operation pop =


reproducePop(mate, ps) is used to create a new population pop of ps individuals by applying
the reproduction operations to the mating pool mate.

In Algorithm 28.19, we give a specification on how Evolutionary Algorithms usually would


apply the reproduction operations defined so far to create a new population of ps individuals.
This procedure applies mutation and crossover at specific rates mr ∈ [0, 1] and cr ∈ [0, 1],
respectively.

37 http://en.wikipedia.org/wiki/Recombination [accessed 2007-07-03]



Algorithm 28.19: pop ←− reproducePopEA(mate, ps, mr, cr)


Input: mate: the mating pool
Input: [implicit] mps: the mating pool size
Input: ps: the number of individuals in the new population
Input: mr: the mutation rate
Input: cr: the crossover rate
Input: [implicit] mutate, recombine: the mutation and crossover operators
Data: i, j: counter variables
Output: pop: the new population (len(pop) = ps)
1 begin
2 pop ←− ()
3 for i ←− 0 up to ps − 1 do
4 p ←− duplicate(mate[i mod mps])
5 if (randomUni[0, 1) < cr) ∧ (mps > 1) then
6 repeat
7 j ←− ⌊randomUni[0, mps)⌋
8 until j ≠ (i mod mps)
9 p.g ←− recombine(p.g, mate[j].g)
10 if randomUni[0, 1) < mr then p.g ←− mutate(p.g)
11 pop ←− addItem(pop, p)
12 return pop
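The reproduction loop can be sketched in Python as follows; this is a hypothetical sketch (our naming) that operates on plain genotypes rather than individual records, with `mutate` and `recombine` supplied by the caller. Note how the recombination partner is drawn until it differs from the current parent.

```python
import random

def reproduce_pop_ea(mate, ps, mr, cr, mutate, recombine, rng=random):
    """Sketch of Algorithm 28.19: create ps offspring from the mating pool.

    mate      -- mating pool (plain genotypes here, for simplicity)
    ps        -- size of the new population
    mr, cr    -- mutation rate and crossover rate, both in [0, 1]
    mutate    -- unary search operation
    recombine -- binary search operation
    """
    mps = len(mate)
    pop = []
    for i in range(ps):
        g = mate[i % mps]            # duplication: start from a parent
        if rng.random() < cr and mps > 1:
            j = i % mps
            while j == i % mps:      # draw a second, different parent
                j = rng.randrange(mps)
            g = recombine(g, mate[j])
        if rng.random() < mr:
            g = mutate(g)
        pop.append(g)
    return pop
```

With mr = cr = 0 the loop degenerates to cyclic duplication of the mating pool, which is a convenient sanity check.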

28.6 Maintaining a Set of Non-Dominated/Non-Prevailed


Individuals

Most multi-objective optimization algorithms return a set X of best solutions x discovered


instead of a single individual. Many optimization techniques also internally keep track of
the set of best candidate solutions encountered during the search process. In Simulated An-
nealing, for instance, it is quite possible to discover an optimal element x⋆ and subsequently
depart from it to a local optimum x⋆l . Therefore, optimizers normally carry a list of the
non-prevailed candidate solutions ever visited with them.
In scenarios where the search space G differs from the problem space X it often makes
more sense to store the list of individual records P instead of just keeping a set X of the
best phenotypes x. Since the elements of the search space are no longer required at the
end of the optimization process, we define the simple operation “extractPhenotypes” which
extracts them from a set of individuals P .

∀x ∈ extractPhenotypes(P ) ⇒ ∃p ∈ P : x = p.x (28.53)

28.6.1 Updating the Optimal Set

Whenever a new individual p is created, the set P holding the best individuals known so far
may change. It is possible that the new candidate solution must be included in the optimal
set or even prevails some of the phenotypes already contained therein which then must be
removed.

Definition D28.16 (updateOptimalSet). The function “updateOptimalSet” updates a


set of elements Pold with the new individual pnew . It uses knowledge of the prevalence
relation ≺ and the corresponding comparator function cmpf .
Pnew = updateOptimalSet(Pold , pnew , cmpf ) with Pold , Pnew ⊆ G × X, pnew ∈ G × X :
∀p1 ∈ Pold ∄p2 ∈ Pold : p2 .x ≺ p1 .x ⇒ Pnew ⊆ Pold ∪ {pnew } ∧
∀p1 ∈ Pnew ∄p2 ∈ Pnew : p2 .x ≺ p1 .x
(28.54)

We define two equivalent approaches in Algorithm 28.20 and Algorithm 28.21 which perform
the necessary operations. Algorithm 28.20 creates a new, empty optimal set and successively
inserts non-prevailed elements whereas Algorithm 28.21 removes all elements which are pre-
vailed by the new individual pnew from the old set Pold .

Algorithm 28.20: Pnew ←− updateOptimalSet(Pold , pnew , cmpf )


Input: Pold : a set of non-prevailed individuals as known before the creation of pnew
Input: pnew : a new individual to be checked
Input: cmpf : the comparator function representing the dominance/prevalence
relation
Output: Pnew : the set updated with the knowledge of pnew
1 begin
2 Pnew ←− ∅
3 foreach pold ∈ Pold do
4 if ¬ (pnew .x ≺ pold .x) then
5 Pnew ←− Pnew ∪ {pold }
6 if pold .x ≺ pnew .x then return Pold
7 return Pnew ∪ {pnew }

Algorithm 28.21: Pnew ←− updateOptimalSet(Pold , pnew , cmpf ) (2nd Version)


Input: Pold : a set of non-prevailed individuals as known before the creation of pnew
Input: pnew : a new individual to be checked
Input: cmpf : the comparator function representing the dominance/prevalence
relation
Output: Pnew : the set updated with the knowledge of pnew
1 begin
2 Pnew ←− Pold
3 foreach pold ∈ Pold do
4 if pnew .x ≺ pold .x then
5 Pnew ←− Pnew \ {pold }
6 else if pold .x ≺ pnew .x then
7 return Pold
8 return Pnew ∪ {pnew }
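The second variant translates naturally to Python; in this sketch (our own naming, not the book's code) `dominates(a, b)` stands for the prevalence comparison a ≺ b, e.g., Pareto dominance under minimization.

```python
def update_optimal_set(archive, p_new, dominates):
    """Sketch of Algorithm 28.21: update a non-prevailed archive with p_new.

    archive   -- list of mutually non-prevailed candidate solutions
    p_new     -- the newly created candidate
    dominates -- dominates(a, b) is True iff a prevails (dominates) b
    """
    kept = []
    for p in archive:
        if dominates(p, p_new):
            return archive           # p_new is prevailed: archive unchanged
        if not dominates(p_new, p):
            kept.append(p)           # p is not prevailed by p_new: keep it
    kept.append(p_new)
    return kept
```

For two-objective minimization a suitable comparator is `lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b`.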

Especially in the case of Evolutionary Algorithms, not a single new element is created
in each generation but a set P . Let us define the operation “updateOptimalSetN” for this
purpose. This operation can easily be realized by iteratively applying “updateOptimalSet”,
as shown in Algorithm 28.22.

28.6.2 Obtaining Non-Prevailed Elements


The function “updateOptimalSet” helps an optimizer to build and maintain a list of optimal

Algorithm 28.22: Pnew ←− updateOptimalSetN(Pold , P, cmpf )


Input: Pold : the old set of non-prevailed individuals
Input: P : the set of new individuals to be checked for non-prevailedness
Input: cmpf : the comparator function representing the dominance/prevalence
relation
Data: p: an individual from P
Output: Pnew : the updated set
1 begin
2 Pnew ←− Pold
3 foreach p ∈ P do Pnew ←− updateOptimalSet(Pnew , p, cmpf )
4 return Pnew

individuals. When the optimization process finishes, “extractPhenotypes” can then be
used to obtain the non-prevailed elements of the problem space and return them to the
user. However, not all optimization methods maintain a non-prevailed set all the time.
When they terminate, they have to extract all optimal elements from the set of individuals
pop currently known.

Definition D28.17 (extractBest). The function “extractBest” extracts a set P


of non-prevailed (non-dominated) individuals from any given set of individuals pop according
to the comparator cmpf .

∀P ⊆ pop ⊆ G × X, P = extractBest(pop, cmpf ) ⇒ ∀p1 ∈ P ∄p2 ∈ pop : p2 .x ≺ p1 .x    (28.55)

Algorithm 28.23 defines one possible realization of “extractBest”. By the way, this approach
could also be used for updating an optimal set.

updateOptimalSet(Pold , pnew , cmpf ) ≡ extractBest(Pold ∪ {pnew } , cmpf ) (28.56)

Algorithm 28.23: P ←− extractBest(pop, cmpf )


Input: pop: the list to extract the non-prevailed individuals from
Data: pany , pchk : candidate solutions tested for supremacy
Data: i, j: counter variables
Output: P : the non-prevailed subset extracted from pop
1 begin
2 P ←− pop
3 for i ←− len(P ) − 1 down to 1 do
4 for j ←− i − 1 down to 0 do
5 if P [i] ≺ P [j] then
6 P ←− deleteItem(P, j)
7 i ←− i − 1
8 else if P [j] ≺ P [i] then
9 P ←− deleteItem(P, i)

10 return listToSet(P )
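A compact Python counterpart of the extraction idea (ours; quadratic in the population size, like Algorithm 28.23, but filtering instead of deleting in place):

```python
def extract_best(pop, dominates):
    """Sketch of "extractBest": all individuals of pop that no other
    individual of pop prevails (dominates)."""
    return [p for p in pop if not any(dominates(q, p) for q in pop)]
```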
28.6.3 Pruning the Optimal Set

In some optimization problems, there may be very many, if not infinitely many, optimal
individuals. The set X of the best discovered candidate solutions computed by the optimization
algorithms, however, cannot grow infinitely because we only have limited memory. The same
holds for the archives archive of non-dominated individuals maintained by several MOEAs.
Therefore, we need to perform an action called pruning which reduces the size of the set of
non-prevailed individuals to a given limit k [605, 1959, 2645].
There exists a variety of possible pruning operations. Morse [1959] and Taboada and Coit
[2644], for instance, suggest to use clustering algorithms38 for this purpose. In principle, also
any combination of the fitness assignment and selection schemes discussed in Chapter 28
would do. It is very important that the loss of generality during a pruning operation is
minimized. Fieldsend et al. [912], for instance, point out that if the extreme elements of
the optimal frontier are lost, the resulting set may only represent a small fraction of the
optima that could have been found without pruning. Instead of working on the set X of best
discovered candidate solutions x, we again base our approach on a set of individuals P and
we define:

Definition D28.18 (pruneOptimalSet). The pruning operation “pruneOptimalSet” re-


duces the size of a set Pold of individuals to fit to a given upper boundary k.

∀Pnew ⊆ Pold ⊆ G × X, k ∈ N1 : Pnew = pruneOptimalSet(Pold , k) ⇒ |Pnew | ≤ k    (28.57)

28.6.3.1 Pruning via Clustering

Algorithm 28.24 uses clustering to provide the functionality specified in this definition and
thereby realizes the idea of Morse [1959] and Taboada and Coit [2644]. Basically, any given
clustering algorithm could be used as replacement for “cluster” – see ?? on page ?? for more
information on clustering.

Algorithm 28.24: Pnew ←− pruneOptimalSetc (Pold , k)


Input: Pold : the set to be pruned
Input: k: the maximum size allowed for the set (k > 0)
Input: [implicit] cluster: the clustering algorithm to be used
Input: [implicit] nucleus: the function used to determine the nuclei of the clusters
Data: B: the set of clusters obtained by the clustering algorithm
Data: b: a single cluster b ∈ B
Output: Pnew : the pruned set
1 begin
// obtain k clusters
2 B ←− cluster(Pold )
3 Pnew ←− ∅
4 foreach b ∈ B do Pnew ←− Pnew ∪ nucleus(b)
5 return Pnew
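The following Python sketch instantiates Algorithm 28.24 with a deliberately simple stand-in for "cluster" and "nucleus" (all of it our own choice, not prescribed by the algorithm): seeds are chosen farthest-first, every point joins its nearest seed, and each cluster is represented by the member closest to the cluster mean.

```python
def prune_via_clustering(points, k):
    """Prune a set of objective vectors to at most k representatives
    (cf. Algorithm 28.24, with a simple farthest-first clustering)."""
    if len(points) <= k:
        return list(points)

    def d2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    seeds = [points[0]]
    while len(seeds) < k:                      # farthest-first seeding
        seeds.append(max(points, key=lambda p: min(d2(p, s) for s in seeds)))

    clusters = [[] for _ in seeds]
    for p in points:                           # assign to the nearest seed
        clusters[min(range(k), key=lambda i: d2(p, seeds[i]))].append(p)

    pruned = []
    for c in clusters:                         # nucleus: member nearest the mean
        mean = tuple(sum(xs) / len(c) for xs in zip(*c))
        pruned.append(min(c, key=lambda p: d2(p, mean)))
    return pruned
```

Any proper clustering algorithm (k-means, hierarchical clustering, etc.) can replace this stand-in without changing the surrounding logic.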

38 You can find a discussion of clustering algorithms in ??.


28.6.3.2 Adaptive Grid Archiving

Let us discuss the adaptive grid archiving algorithm (AGA) as example for a more sophis-
ticated approach to prune the set of non-dominated / non-prevailed individuals. AGA has
been introduced for the Evolutionary Algorithm PAES39 by Knowles and Corne [1552] and
uses the objective values (computed by the set of objective functions f ) directly. Hence, it
can treat the individuals as |f |-dimensional vectors where each dimension corresponds to one
objective function f ∈ f . This |f |-dimensional objective space Y is divided in a grid with d
divisions in each dimension. Its span in each dimension is defined by the corresponding min-
imum and maximum objective values. The individuals with the minimum/maximum values
are always preserved. This circumvents the phenomenon of narrowing down the optimal
set described by Fieldsend et al. [912] and distinguishes the AGA approach from clustering-
based methods. Hence, it is not possible to define maximum set sizes k which are smaller
than 2|f |. If individuals need to be removed from the set because it became too large, the
AGA approach removes those that reside in regions which are the most crowded.
The original sources outline the algorithm basically with descriptions and definitions.
Here, we introduce a more or less trivial specification in Algorithm 28.25 on the next
page and Algorithm 28.26 on page 314.
The function “divideAGA” is internally used to perform the grid division. It transforms
the objective values of each individual to grid coordinates stored in the array lst. Further-
more, “divideAGA” also counts the number of individuals that reside in the same coordi-
nates for each individual and makes it available in cnt. It ensures the preservation of border
individuals by assigning a negative cnt value to them. This basically disables their later
disposal by the pruning algorithm “pruneOptimalSetAGA” since “pruneOptimalSetAGA”
deletes the individuals from the set Pold that have largest cnt values first.

39 PAES is discussed in ?? on page ??



Algorithm 28.25: (Pl , lst, cnt) ←− divideAGA(Pold , d)


Input: Pold : the set to be pruned
Input: d: the number of divisions to be performed per dimension
Input: [implicit] f : the set of objective functions
Data: i, j: counter variables
Data: mini, maxi, mul: temporary stores
Output: (Pl , lst, cnt): a tuple containing the list representation Pl of Pold , a list lst
assigning grid coordinates to the elements of Pl and a list cnt containing
the number of elements in those grid locations
1 begin
2 mini ←− createList(|f |, ∞)
3 maxi ←− createList(|f |, −∞)
4 for i ←− |f | down to 1 do
5 mini[i − 1] ←− min{fi (p.x) ∀p ∈ Pold }
6 maxi[i − 1] ←− max{fi (p.x) ∀p ∈ Pold }
7 mul ←− createList(|f |, 0)
8 for i ←− |f | − 1 down to 0 do
9 if maxi[i] ≠ mini[i] then
10 mul[i] ←− d/(maxi[i] − mini[i])
11 else
12 maxi[i] ←− maxi[i] + 1
13 mini[i] ←− mini[i] − 1
14 Pl ←− setToList(Pold )
15 lst ←− createList(len(Pl ) , ∅)
16 cnt ←− createList(len(Pl ) , 0)
17 for i ←− len(Pl ) − 1 down to 0 do
18 lst[i] ←− createList(|f |, 0)
19 for j ←− 1 up to |f | do
20 if (fj (Pl [i]) ≤ mini[j − 1]) ∨ (fj (Pl [i]) ≥ maxi[j − 1]) then
21 cnt[i] ←− cnt[i] − 2
22 lst[i][j − 1] ←− ⌊(fj (Pl [i]) − mini[j − 1]) ∗ mul[j − 1]⌋
23 if cnt[i] > 0 then
24 for j ←− i + 1 up to len(Pl ) − 1 do
25 if lst[i] = lst[j] then
26 cnt[i] ←− cnt[i] + 1
27 if cnt[j] > 0 then cnt[j] ←− cnt[j] + 1

28 return (Pl , lst, cnt)



Algorithm 28.26: Pnew ←− pruneOptimalSetAGA(Pold , d, k)


Input: Pold : the set to be pruned
Input: d: the number of divisions to be performed per dimension
Input: k: the maximum size allowed for the set (k ≥ 2|f |)
Input: [implicit] f : the set of objective functions
Data: i: a counter variable
Data: Pl : the list representation of Pold
Data: lst: a list assigning grid coordinates to the elements of Pl
Data: cnt: the number of elements in the grid locations defined in lst
Output: Pnew : the pruned set
1 begin
2 if len(Pold ) ≤ k then return Pold
3 (Pl , lst, cnt) ←− divideAGA(Pold , d)
4 while len(Pl ) > k do
5 idx ←− 0
6 for i ←− len(Pl ) − 1 down to 1 do
7 if cnt[i] > cnt[idx] then idx ←− i
8 for i ←− len(Pl ) − 1 down to 0 do
9 if (lst[i] = lst[idx]) ∧ (cnt[i] > 0) then cnt[i] ←− cnt[i] − 1
10 Pl ←− deleteItem(Pl , idx)
11 cnt ←− deleteItem(cnt, idx)
12 lst ←− deleteItem(lst, idx)
13 return listToSet(Pl )
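The core of the AGA idea can be condensed into a short Python sketch. This is a considerably simplified, hypothetical version (our naming and structure): occupancy counts are recomputed after each removal rather than maintained incrementally, and boundary individuals, i.e., those carrying a minimum or maximum objective value, are never removed. It assumes k is at least the number of such boundary individuals, in line with the k ≥ 2|f| requirement above.

```python
def prune_aga(archive, objectives, d, k):
    """Simplified sketch of adaptive grid archiving (Algorithms 28.25/28.26).

    archive    -- list of candidate solutions
    objectives -- objectives(p) -> tuple of objective values
    d          -- grid divisions per dimension
    k          -- maximum archive size
    """
    if len(archive) <= k:
        return list(archive)
    ys = [objectives(p) for p in archive]
    m = len(ys[0])
    lo = [min(y[i] for y in ys) for i in range(m)]
    hi = [max(y[i] for y in ys) for i in range(m)]

    def cell(y):  # grid coordinates of an objective vector
        return tuple(min(d - 1, int((y[i] - lo[i]) / (hi[i] - lo[i]) * d))
                     if hi[i] > lo[i] else 0 for i in range(m))

    def is_boundary(y):  # carries an extreme objective value: protected
        return any(y[i] == lo[i] or y[i] == hi[i] for i in range(m))

    idx = list(range(len(archive)))
    while len(idx) > k:
        counts = {}
        for i in idx:
            counts[cell(ys[i])] = counts.get(cell(ys[i]), 0) + 1
        # remove a non-boundary individual from the most crowded cell
        victim = max((i for i in idx if not is_boundary(ys[i])),
                     key=lambda i: counts[cell(ys[i])])
        idx.remove(victim)
    return [archive[i] for i in idx]
```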

28.7 General Information on Evolutionary Algorithms

28.7.1 Applications and Examples

Table 28.4: Applications and Examples of Evolutionary Algorithms.


Area References
Art [1174, 2320, 2445]; see also Tables 29.1 and 31.1
Astronomy [480]
Chemistry [587, 678, 2676]; see also Tables 29.1, 30.1, 31.1, and 32.1
Combinatorial Problems [308, 354, 360, 371, 400, 444, 445, 451, 457, 491, 564, 567,
619, 644, 649, 690, 802, 807, 808, 810, 812, 820, 1011–1013,
1117, 1145, 1148, 1149, 1177, 1290, 1311, 1417, 1429, 1437,
1486, 1532, 1618, 1653, 1696, 1730, 1803, 1822–1825, 1830,
1835, 1859, 1871, 1884, 1961, 2025, 2055, 2105, 2144, 2162,
2188, 2189, 2245, 2335, 2448, 2499, 2535, 2633, 2663, 2669,
2772, 2811, 2862, 3027, 3064, 3071, 3101]; see also Tables
29.1, 30.1, 31.1, 33.1, 34.1, and 35.1
Computer Graphics [1452, 1577, 1661, 2096, 2487, 2488, 2651]; see also Tables
29.1, 30.1, 31.1, and 33.1
Control [1710]; see also Tables 29.1, 31.1, 32.1, and 35.1
Cooperation and Teamwork [1996, 2424]; see also Tables 29.1 and 31.1
Data Compression see Table 31.1
Data Mining [144, 360, 480, 1048, 1746, 2213, 2503, 2644, 2645, 2676,
3018]; see also Tables 29.1, 31.1, 32.1, 34.1, and 35.1
Databases [371, 567, 1145, 1696, 1822, 1824, 1825, 2633, 3064, 3071];
see also Tables 29.1, 31.1, and 35.1
Decision Making see Table 31.1
Digital Technology [1479]; see also Tables 29.1, 30.1, 31.1, 34.1, and 35.1
Distributed Algorithms and Systems [89, 112, 125, 156, 202, 328, 352–354, 404, 439, 545, 605,
663, 669, 702, 705, 786, 811, 843, 871, 882, 963, 1006, 1007,
1094, 1225, 1292, 1298, 1532, 1574, 1633, 1718, 1733, 1749,
1800, 1835, 1838, 1863, 1883, 1995, 2020, 2058, 2071, 2100,
2387, 2425, 2440, 2448, 2470, 2493, 2628, 2644, 2645, 2663,
2694, 2791, 2862, 2864, 2893, 2903, 3027, 3061, 3085]; see
also Tables 29.1, 30.1, 31.1, 32.1, 33.1, 34.1, and 35.1
E-Learning [1158, 1298, 2628, 2763]; see also Table 29.1
Economics and Finances [549, 585, 1718, 1825, 2622]; see also Tables 29.1, 31.1, 33.1,
and 35.1
Engineering [112, 156, 267, 517, 535, 567, 579, 605, 644, 669, 677, 777,
786, 787, 853, 871, 882, 953, 1094, 1225, 1310, 1323, 1479,
1486, 1508, 1574, 1710, 1717, 1733, 1746, 1749, 1830, 1835,
1838, 1883, 2058, 2059, 2083, 2111, 2144, 2189, 2320, 2364,
2425, 2448, 2468, 2499, 2535, 2644, 2645, 2663, 2677, 2693,
2694, 2743, 2799, 2858, 2861, 2862, 2864]; see also Tables
29.1, 30.1, 31.1, 32.1, 33.1, 34.1, and 35.1
Function Optimization [742, 795, 853, 855, 1018, 1523, 1727, 1927, 1986, 2099, 2210,
2799]; see also Tables 29.1, 30.1, 32.1, 33.1, 34.1, and 35.1
Games [2350]; see also Tables 29.1, 31.1, and 32.1
Graph Theory [39, 63, 67, 89, 112, 202, 328, 329, 353, 354, 439, 644, 669,
786, 804, 843, 882, 963, 1007, 1094, 1225, 1290, 1486, 1532,
1574, 1733, 1749, 1800, 1835, 1863, 1883, 1934, 1995, 2020,
2025, 2058, 2100, 2144, 2189, 2237, 2387, 2425, 2440, 2448,
2470, 2493, 2663, 2677, 2694, 2772, 2791, 2864, 3027, 3085];
see also Tables 29.1, 30.1, 31.1, 33.1, 34.1, and 35.1
Healthcare [567, 2021, 2537]; see also Tables 29.1, 31.1, and 35.1
Image Synthesis [291, 777, 2445, 2651]
Logistics [354, 444, 445, 451, 517, 579, 619, 802, 808, 810, 812, 820,
1011, 1013, 1148, 1149, 1177, 1429, 1653, 1730, 1803, 1823,
1830, 1859, 1869, 1884, 1961, 2059, 2111, 2162, 2188, 2189,
2245, 2669, 2811, 2912]; see also Tables 29.1, 30.1, 33.1, and
34.1
Mathematics [376, 377, 550, 1217, 1661, 2350, 2407, 2677]; see also Tables
29.1, 30.1, 31.1, 32.1, 33.1, 34.1, and 35.1
Military and Defense [439, 1661, 1872]; see also Table 34.1
Motor Control [489]; see also Tables 33.1 and 34.1
Multi-Agent Systems [967, 1661, 2144, 2268, 2693, 2766]; see also Tables 29.1, 31.1,
and 35.1
Multiplayer Games see Table 31.1
Physics [407, 678]; see also Tables 29.1, 31.1, 32.1, and 34.1
Prediction [144, 1045]; see also Tables 29.1, 30.1, 31.1, and 35.1
Security [404, 540, 871, 1486, 1507, 1508, 1661, 1749]; see also Tables
29.1, 31.1, and 35.1
Software [703, 842, 1486, 1508, 1791, 1934, 1995, 2213, 2677, 2863,
2887]; see also Tables 29.1, 30.1, 31.1, 33.1, and 34.1
Sorting [1233]; see also Table 31.1
Telecommunication [820]
Testing [2863]; see also Table 31.3
Theorem Proving and Automatic Verification see Table 29.1
Water Supply and Management [1268, 1986]
Wireless Communication [39, 63, 67, 68, 155, 644, 787, 1290, 1486, 1717, 1934, 2144,
2189, 2237, 2364, 2677]; see also Tables 29.1, 31.1, 33.1, 34.1,
and 35.1

28.7.2 Books
1. Handbook of Evolutionary Computation [171]
2. Variants of Evolutionary Algorithms for Real-World Applications [567]
3. New Ideas in Optimization [638]
4. Multiobjective Problem Solving from Nature – From Concepts to Applications [1554]
5. Natural Intelligence for Scheduling, Planning and Packing Problems [564]
6. Advances in Evolutionary Computing – Theory and Applications [1049]
7. Bio-inspired Algorithms for the Vehicle Routing Problem [2162]
8. Evolutionary Computation 1: Basic Algorithms and Operators [174]
9. Nature-Inspired Algorithms for Optimisation [562]
10. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
11. Hybrid Evolutionary Algorithms [1141]
12. Success in Evolutionary Computation [3014]
13. Computational Intelligence in Telecommunications Networks [2144]
14. Evolutionary Design by Computers [272]
15. Evolutionary Computation in Economics and Finance [549]
16. Parameter Setting in Evolutionary Algorithms [1753]
17. Evolutionary Computation in Dynamic and Uncertain Environments [3019]
18. Applications of Multi-Objective Evolutionary Algorithms [597]
19. Introduction to Evolutionary Computing [865]
20. Nature-Inspired Informatics for Intelligent Applications and Knowledge Discovery: Implica-
tions in Business, Science and Engineering [563]
21. Evolutionary Computation [847]
22. Evolutionary Computation [863]
23. Evolutionary Multiobjective Optimization – Theoretical Advances and Applications [8]
24. Evolutionary Computation 2: Advanced Algorithms and Operators [175]
25. Evolutionary Algorithms for Solving Multi-Objective Problems [599]
26. Genetic and Evolutionary Computation for Image Processing and Analysis [466]
27. Evolutionary Computation: The Fossil Record [939]
28. Swarm Intelligence: Collective, Adaptive [1524]
29. Parallel Evolutionary Computations [2015]
30. Evolutionary Computation in Practice [3043]
31. Design by Evolution: Advances in Evolutionary Design [1234]
32. New Learning Paradigms in Soft Computing [1430]
33. Designing Evolutionary Algorithms for Dynamic Environments [1956]
34. How to Solve It: Modern Heuristics [1887]
35. Electromagnetic Optimization by Genetic Algorithms [2250]
36. Advances in Metaheuristics for Hard Optimization [2485]
37. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems [2685]
38. The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music [2320]
39. Evolutionary Optimization [2394]
40. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
41. Multi-Objective Optimization in Computational Intelligence: Theory and Practice [438]
42. Computer Simulation in Genetics [659]
43. Evolutionary Computation: A Unified Approach [715]
44. Genetic Algorithms in Search, Optimization, and Machine Learning [1075]
45. Evolutionary Computation in Data Mining [1048]
46. Introduction to Stochastic Search and Optimization [2561]
47. Advances in Evolutionary Algorithms [1581]
48. Self-Adaptive Heuristics for Evolutionary Computation [1617]
49. Representations for Genetic and Evolutionary Algorithms [2338]
50. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
51. Search Methodologies – Introductory Tutorials in Optimization and Decision Support Tech-
niques [453]
52. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
53. Evolutionäre Algorithmen [2879]
54. Advances in Evolutionary Algorithms – Theory, Design and Practice [41]
55. Evolutionary Algorithms in Molecular Design [587]
56. Multi-Objective Optimization Using Evolutionary Algorithms [740]
57. Evolutionary Dynamics in Time-Dependent Environments [2941]
58. Evolutionary Computation: Theory and Applications [3025]
59. Evolutionary Computation [2987]

28.7.3 Conferences and Workshops

Table 28.5: Conferences and Workshops on Evolutionary Algorithms.

GECCO: Genetic and Evolutionary Computation Conference


History: 2011/07: Dublin, Ireland, see [841]
2010/07: Portland, OR, USA, see [30, 31]
2009/07: Montréal, QC, Canada, see [2342, 2343]
2008/07: Atlanta, GA, USA, see [585, 1519, 1872, 2268, 2537]
2007/07: London, UK, see [905, 2579, 2699, 2700]
2006/07: Seattle, WA, USA, see [1516, 2441]
2005/06: Washington, DC, USA, see [27, 304, 2337, 2339]
2004/06: Seattle, WA, USA, see [751, 752, 1515, 2079, 2280]
2003/07: Chicago, IL, USA, see [484, 485, 2578]
2002/07: New York, NY, USA, see [224, 478, 1675, 1790, 2077]
2001/07: San Francisco, CA, USA, see [1104, 2570]
2000/07: Las Vegas, NV, USA, see [1452, 2155, 2928, 2935, 2986]
1999/07: Orlando, FL, USA, see [211, 482, 2093, 2502]
CEC: IEEE Conference on Evolutionary Computation
History: 2011/06: New Orleans, LA, USA, see [2027]
2010/07: Barcelona, Catalonia, Spain, see [1354]
2009/05: Trondheim, Norway, see [1350]
2008/06: Hong Kong (Xiānggǎng), China, see [1889]
2007/09: Singapore, see [1343]
2006/07: Vancouver, BC, Canada, see [3033]
2005/09: Edinburgh, Scotland, UK, see [641]
2004/06: Portland, OR, USA, see [1369]
2003/12: Canberra, Australia, see [2395]
2002/05: Honolulu, HI, USA, see [944]
2001/05: Gangnam-gu, Seoul, Korea, see [1334]
2000/07: La Jolla, CA, USA, see [1333]
1999/07: Washington, DC, USA, see [110]
1998/05: Anchorage, AK, USA, see [2496]
1997/04: Indianapolis, IN, USA, see [173]
1996/05: Nagoya, Japan, see [1445]
1995/11: Perth, WA, Australia, see [1361]
1994/06: Orlando, FL, USA, see [1891]
EvoWorkshops: Real-World Applications of Evolutionary Computing
History: 2011/04: Torino, Italy, see [788]
2009/04: Tübingen, Germany, see [1052]
2008/05: Naples, Italy, see [1051]
2007/04: València, Spain, see [1050]
2006/04: Budapest, Hungary, see [2341]
2005/03: Lausanne, Switzerland, see [2340]
2004/04: Coimbra, Portugal, see [2254]
2003/04: Colchester, Essex, UK, see [2253]
2002/04: Kinsale, Ireland, see [465]
2001/04: Lake Como, Milan, Italy, see [343]
2000/04: Edinburgh, Scotland, UK, see [464]
1999/05: Göteborg, Sweden, see [2197]
1998/04: Paris, France, see [1310]
PPSN: Conference on Parallel Problem Solving from Nature
History: 2012/09: Taormina, Italy, see [2675]
2010/09: Kraków, Poland, see [2412, 2413]
2008/09: Birmingham, UK and Dortmund, North Rhine-Westphalia, Germany, see
[2354, 3028]
2006/09: Reykjavik, Iceland, see [2355]
2002/09: Granada, Spain, see [1870]
2000/09: Paris, France, see [2421]
1998/09: Amsterdam, The Netherlands, see [866]
1996/09: Berlin, Germany, see [2818]
1994/10: Jerusalem, Israel, see [693]
1992/09: Brussels, Belgium, see [1827]
1990/10: Dortmund, North Rhine-Westphalia, Germany, see [2438]
WCCI: IEEE World Congress on Computational Intelligence
History: 2012/06: Brisbane, QLD, Australia, see [1411]
2010/07: Barcelona, Catalonia, Spain, see [1356]
2008/06: Hong Kong (Xiānggǎng), China, see [3104]
2006/07: Vancouver, BC, Canada, see [1341]
2002/05: Honolulu, HI, USA, see [1338]
1998/05: Anchorage, AK, USA, see [1330]
1994/06: Orlando, FL, USA, see [1324]
EMO: International Conference on Evolutionary Multi-Criterion Optimization
History: 2011/04: Ouro Preto, MG, Brazil, see [2654]
2009/04: Nantes, France, see [862]
2007/03: Matsushima, Sendai, Japan, see [2061]
2005/03: Guanajuato, México, see [600]
2003/04: Faro, Portugal, see [957]
2001/03: Zürich, Switzerland, see [3099]
AE/EA: Artificial Evolution
History: 2011/10: Angers, France, see [2586]
2009/10: Strasbourg, France, see [608]
2007/10: Tours, France, see [1928]
2005/10: Lille, France, see [2657]
2003/10: Marseilles, France, see [1736]
2001/10: Le Creusot, France, see [606]
1999/11: Dunkerque, France, see [948]
1997/10: Nîmes, France, see [1191]
1995/09: Brest, France, see [75]
1994/09: Toulouse, France, see [74]
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
History: 2011/04: Ljubljana, Slovenia, see [2589]
2009/04: Kuopio, Finland, see [1564]
2007/04: Warsaw, Poland, see [257, 258]
2005/03: Coimbra, Portugal, see [2297]
2003/04: Roanne, France, see [2142]
2001/04: Prague, Czech Republic, see [1641]
1999/04: Portorož, Slovenia, see [801]
1997/04: Vienna, Austria, see [2527]
1995/04: Alès, France, see [2141]
1993/04: Innsbruck, Austria, see [69]
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
History: 2012/05: Chóngqı̀ng, China, see [574]
2011/07: Shànghǎi, China, see [1379]
2010/08: Yāntái, Shāndōng, China, see [1355]
2009/08: Tiānjı̄n, China, see [2848]
2008/10: Jı̀’nán, Shāndōng, China, see [1150]
2007/08: Hǎikǒu, Hǎinán, China, see [1711]
2006/09: Xı̄’ān, Shǎnxı̄, China, see [1443, 1444]
2005/08: Chángshā, Húnán, China, see [2850–2852]
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
History: 2012/05: Bohinj, Slovenia, see [349]
2010/05: Ljubljana, Slovenia, see [921]
2008/10: Ljubljana, Slovenia, see [920]
2006/10: Ljubljana, Slovenia, see [919]
2004/10: Ljubljana, Slovenia, see [918]
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
History: 2005/07: Salt Lake City, UT, USA, see [2379]
2003/09: Cary, NC, USA, see [546]
2002/03: Research Triangle Park, NC, USA, see [508]
2000/02: Atlantic City, NJ, USA, see [2853]
1998/10: Research Triangle Park, NC, USA, see [2293]
1997/03: Research Triangle Park, NC, USA, see [2475]
ICARIS: International Conference on Artificial Immune Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
History: 2006/11: Sydney, NSW, Australia, see [1923]
2005/11: Vienna, Austria, see [1370]
CIS: International Conference on Computational Intelligence and Security
History: 2009/12: Běijı̄ng, China, see [1393]
2008/12: Sūzhōu, Jiāngsū, China, see [1392]
2007/12: Harbin, Heilongjiang, China, see [1390]
2006/11: Guǎngzhōu, Guǎngdōng, China, see [557]
2005/12: Xı̄’ān, Shǎnxı̄, China, see [1192, 1193]
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
History: 2009/04: Tübingen, Germany, see [645]
2008/03: Naples, Italy, see [2775]
2007/04: València, Spain, see [646]
2006/04: Budapest, Hungary, see [1116]
2005/03: Lausanne, Switzerland, see [2252]
2004/04: Coimbra, Portugal, see [1115]
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
History: 2009/06: Shànghǎi, China, see [3000]
GEM: International Conference on Genetic and Evolutionary Methods
History: 2011/07: Las Vegas, NV, USA, see [2552]
2010/07: Las Vegas, NV, USA, see [119]
2009/07: Las Vegas, NV, USA, see [121]
2008/06: Las Vegas, NV, USA, see [120]
2007/06: Las Vegas, NV, USA, see [122]
GEWS: Grammatical Evolution Workshop
See Table 31.2 on page 385.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
History: 1993—1994: Australia, see [3023]
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
History: 2001/11: Dunedin, New Zealand, see [1498]
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
History: 2011/04: Torino, Italy, see [2588]
FOCI: IEEE Symposium on Foundations of Computational Intelligence
History: 2007/04: Honolulu, HI, USA, see [1865]
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
History: 2010/10: València, Spain, see [914]
2009/10: Madeira, Portugal, see [1809]
IWNICA: International Workshop on Nature Inspired Computation and Applications
History: 2011/04: Héféi, Ānhuı̄, China, see [2754]
2010/10: Héféi, Ānhuı̄, China, see [2758]
2008/05: Héféi, Ānhuı̄, China, see [2757]
2006/10: Héféi, Ānhuı̄, China, see [1122]
2004/10: Héféi, Ānhuı̄, China, see [2755, 2756]
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
History: 2011/11: Salamanca, Spain, see [2368]
2010/12: Kitakyushu, Japan, see [1545]
2009/12: Coimbatore, India, see [12]
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
History: 2010/09: Vienna, Austria, see [2311]
2007/07: London, UK, see [905]
2005/06: Oslo, Norway, see [479]
EvoRobots: International Symposium on Evolutionary Robotics
History: 2001/10: Tokyo, Japan, see [1099]
ICEC: International Conference on Evolutionary Computation
History: 2010/10: València, Spain, see [1276]
2009/10: Madeira, Portugal, see [2325]
IWACI: International Workshop on Advanced Computational Intelligence
History: 2010/08: Sūzhōu, Jiāngsū, China, see [2995]
2009/10: Mexico City, México, see [1875]
2008/06: Macau, China, see [2752]
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
History: 2010/12: Chennai, India, see [2595]
SSCI: IEEE Symposium Series on Computational Intelligence
Conference series which contains many other symposia and workshops.
History: 2011/04: Paris, France, see [1357]
2009/03: Nashville, TN, USA, see [1353]
2007/04: Honolulu, HI, USA, see [1345]

28.7.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Evolutionary Computation published by MIT Press
3. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics published by
IEEE Systems, Man, and Cybernetics Society
4. International Journal of Computational Intelligence and Applications (IJCIA) published by
Imperial College Press Co. and World Scientific Publishing Co.
5. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
6. Natural Computing: An International Journal published by Kluwer Academic Publishers and
Springer Netherlands
7. Journal of Heuristics published by Springer Netherlands
8. Journal of Artificial Evolution and Applications published by Hindawi Publishing Corporation
9. International Journal of Computational Intelligence Research (IJCIR) published by Research
India Publications
10. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) pub-
lished by Idea Group Publishing (Idea Group Inc., IGI Global)
11. IEEE Computational Intelligence Magazine published by IEEE Computational Intelligence
Society
12. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
13. Computational Intelligence published by Wiley Periodicals, Inc.
14. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Tasks T28

75. Implement a general steady-state Evolutionary Algorithm. You may use Listing 56.11
as blueprint and create a new class derived from EABase (given in ?? on page ??) which
provides the steady-state functionality described in Paragraph 28.1.4.2.1 on page 266.
[20 points]

76. Implement a general Evolutionary Algorithm with archive-based elitism. You may use
Listing 56.11 as blueprint and create a new class derived from EABase (given in ?? on
page ??) which provides the archive-based elitism described in Section 28.1.4.4 on
page 267.
[20 points]

77. Implement the linear ranking selection scheme given in Section 28.4.5 on page 300
for your version of Evolutionary Algorithms. You may use Listing 56.14 as blueprint and
implement the interface ISelectionAlgorithm given in Listing 55.13 in a new class.
[20 points]
Chapter 29
Genetic Algorithms

29.1 Introduction
Genetic Algorithms1 (GAs) are a subclass of Evolutionary Algorithms where (a) the
genotypes g of the search space G are strings of primitive elements (usually all of the
same type) such as bits, integers, or real numbers, and (b) search operations such as
mutation and crossover directly modify these genotypes. Because of the simple struc-
ture of the search spaces of Genetic Algorithms, often a non-identity mapping is used
as genotype-phenotype mapping to translate the elements g ∈ G to candidate solutions
x ∈ X [1075, 1220, 1250, 2931].
Genetic Algorithms are the original prototype of Evolutionary Algorithms and adhere to
the description given in Section 28.1.1. They provide search operators which try to closely
copy sexual and asexual reproduction schemes from nature. In such “sexual” search opera-
tions, the genotypes of the two parents will recombine. In asexual reproduction,
mutations are the only changes that occur.

29.1.1 History

The roots of Genetic Algorithms go back to the mid-1950s, when biologists like Barricelli
[220, 221, 222, 223] and the computer scientist Fraser [941, 988, 989] began to apply
computer-aided simulations in order to gain more insight into genetic processes and
natural evolution and selection. Bremermann [407] and Bledsoe [323, 324, 325, 326] used
evolutionary approaches based on binary string genomes for solving inequalities, for func-
tion optimization, and for determining the weights in neural networks in the early 1960s.
At the end of that decade, important research on such search spaces was contributed by
Bagley [180] (who introduced the term Genetic Algorithm), Rosenberg [2331], Cavicchio, Jr.
[510, 511], and Frantz [985] – all under the advice of Holland at the University of Michigan.
As a result of Holland’s work [1247–1250], Genetic Algorithms were formalized as a new
approach for problem solving and finally became widely recognized and popular. Just as
important was the work of De Jong [711] on benchmarking GAs for function optimization
from 1975, preceded by Hollstien’s PhD thesis [1258]. Today, there are many applications
in science, economy, and research and development [564, 2232] that can be tackled with
Genetic Algorithms.
It should further be mentioned that, because of the close relation to biology and since
Genetic Algorithms were originally applied to single-objective optimization, the objective
functions f here are often referred to as fitness functions. This is a historically grown
1 http://en.wikipedia.org/wiki/Genetic_algorithm [accessed 2007-07-03]
misnaming which should not be mixed up with the fitness assignment processes discussed
in Section 28.3 on page 274 and the fitness values v used in the context of this book.

29.2 Genomes in Genetic Algorithms


Most of the terminology which we have defined in Part I and used throughout this book stems
from the GA sector. The search spaces G of Genetic Algorithms, for instance, are referred
to as genomes and their elements are called genotypes because of their historical development.
[Figure 29.1 sketches a DNA sequence: a strand of base pairs (thymine, cytosine, adenine,
and guanine, joined by hydrogen bonds to a phosphate–deoxyribose backbone) and, as its
rough simplification in Genetic Algorithms, the search space G of all bit strings of a given
length, in which genotypes g ∈ G are specific bit strings.]

Figure 29.1: A Sketch of a DNA Sequence.

Genotypes in nature encompass the whole hereditary information of an organism encoded
in the DNA, as illustrated in Figure 29.1. The DNA is a string of base pairs that encodes
the phenotypical characteristics of the creature it belongs to. Like their natural proto-
types, the genomes in Genetic Algorithms are strings, linear sequences of certain data
types [1075, 1253, 1912]. Because of the linear structure, these genotypes are also often
called chromosomes. Originally, Genetic Algorithms worked on bit strings of fixed length,
as sketched in Figure 29.1. Today, a string chromosome can either be a fixed-length tuple
(Equation 29.1) or a variable-length list (Equation 29.3) of elements.

29.2.1 Classification According to Length


In the first case, the loci i of the genes g [i] are constant and hence, each gene can have a
fixed meaning. Then, the tuples may contain elements of different types Gi .
G = {∀ (g [0], g [1], .., g [n − 1]) : g [i] ∈ Gi ∀i ∈ 0..n − 1} (29.1)
G = G0 × G1 × .. × Gn−1 (29.2)
This is not given in variable-length string genomes. Here, the positions of the genes may
shift when the reproduction operations are applied. Thus, all elements of such genotypes
must have the same type GT .
G = {∀lists g : g [i] ∈ GT ∀0 ≤ i < len(g)} (29.3)

G = GT (29.4)

29.2.2 Classification According to Element Type


The most common string chromosomes are (1) bit strings, (2) vectors of integer numbers,
and (3) vectors of real numbers. Genetic Algorithms with numeric vector genomes, i. e.,
where G ⊆ Rn , are called real-coded [1509]. GAs with bit string genome are referred to as
binary-coded if no re-coding genotype-phenotype mapping is applied. If, for example, Gray
coding is used as discussed in Example E28.9, we would refer to the GA as Gray-coded.
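The conversion between standard binary and reflected Gray code can be sketched in a few lines. The following is a minimal illustration (the class and method names are our own); it exploits the fact that each Gray bit is the XOR of two adjacent binary bits, so that neighboring integers always differ in exactly one Gray bit:

```java
public class GrayCode {

  // Binary to reflected Gray code: each Gray bit is the XOR
  // of the binary bit and its left neighbor.
  static int toGray(int b) {
    return b ^ (b >>> 1);
  }

  // Gray code back to binary: XOR in the value shifted by
  // 1, 2, 4, ... bits to undo the encoding.
  static int fromGray(int g) {
    int b = g;
    for (int shift = 1; shift < Integer.SIZE; shift <<= 1) {
      b ^= (b >>> shift);
    }
    return b;
  }

  public static void main(String[] args) {
    // Show that consecutive integers are one-bit neighbors in Gray code.
    for (int i = 0; i < 8; i++) {
      int diff = toGray(i) ^ toGray(i + 1);
      System.out.println(i + " -> " + Integer.toBinaryString(toGray(i))
          + " (one-bit step to successor: " + (Integer.bitCount(diff) == 1) + ")");
    }
  }
}
```

This single-bit-neighbor property is exactly why Gray coding can improve the locality of bit-flip mutation.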

29.2.3 Classification According to Meaning of Loci


Let us shortly recapitulate the structure of the elements g of the search space G given in
Section 4.1. A gene (see Definition D4.3 on page 82) is the basic informational unit in a
genotype g. In biology, a gene is a segment of nucleic acid that contains the information
necessary to produce a functional RNA product in a controlled manner. An allele (see
Definition D4.4) is a value of a specific gene in nature and in EAs alike. The locus (see
Definition D4.5) is the position where a specific gene can be found in a chromosome.
As stated in Section 29.2.1, in fixed-length chromosomes there may be a specific meaning
assigned to each locus in the genotype (if direct genotype-phenotype mappings are applied
as outlined in Section 28.2.1). In Example E17.1 on page 177, for instance, gene 1 stands
for the color of the dinosaur’s neck whereas gene 4 codes the shape of its head. Although
this was an example from nature, in a fixed-length string genome, each locus may have a
direct connection to an element of the phenotype.
In many applications, this connection of locus to meaning is loose. When, for example,
solving the Traveling Salesman Problem given in Example E2.2 on page 45, each gene may
be an integer number identifying a city within China and the genotype, a permutation of
these identifiers, would represent the order in which they are visited. Here, the genes do not
code for distinct and different features of the phenotypes. Instead, the phenotypic features
have more uniform meanings. Yet, the sequence of the genes is still important.
Messy genomes (see Section 29.6) were introduced to improve locality by linkage learn-
ing: Here, the genes do not only code for a specific phenotypic element, but also for the
position in the phenotype where this element should be placed.
Finally, there exist indirect genotype-phenotype mappings (Section 28.2.2) and encod-
ings where there is no relation between the location of a gene and its meaning or interpre-
tation. A good example for the latter are Algorithmic Chemistries which we will discuss
in ?? on page ??. In their use in linear Genetic Programming developed by Lasarczyk and
Banzhaf [207, 1684, 1685], each gene stands for an instruction. Unlike normal computer pro-
grams, the Algorithmic Chemistry programs in execution are not interpreted sequentially.
Instead, at each step, the processor randomly chooses one instruction from the program.
Hence, the meaning of a gene (an instruction) here only depends on its allelic state (its
value) and not on its position (its locus) inside the chromosome.

29.2.4 Introns
Besides the functional genes and their alleles, there are also parts of natural genomes which
have no (obvious) function [1072, 2870]. The American biochemist Gilbert [1056] coined the
term intron 2 for such parts. In some representations in Evolutionary Algorithms, similar
structures can be observed, especially in variable-length genomes.

Definition D29.1 (Intron). An intron is a part of a genotype g ∈ G that does not con-
tribute to the phenotype x = gpm(g), i. e., plays no role during the genotype-phenotype
mapping.
2 http://en.wikipedia.org/wiki/Intron [accessed 2007-07-05]
Biological introns have often been thought of as junk DNA or “old code”, i. e., parts of the
genome that were translated to proteins in the evolutionary past but are not used anymore.
Today, however, many researchers assume that introns may not be as useless as initially
thought [660]. Instead, they seem to provide support for efficient splicing, for instance.
Similarly, in some cases, the role of introns in Genetic Algorithms is mysterious as well.
They represent a form of redundancy which may have positive as well as negative effects, as
outlined in Section 16.5 on page 174 and ??.

29.3 Fixed-Length String Chromosomes


Especially widespread in Genetic Algorithms are search spaces based on fixed-length chro-
mosomes. The properties of their reproduction operations such as crossover and mutation
are well known and an extensive body of research on them is available [1075, 1253].

29.3.1 Creation: Nullary Reproduction

29.3.1.1 Fully-Random Genotypes

The creation of random fixed-length string individuals means to create a new tuple of the
structure defined by the genome and initialize it with random values. In reference to Equa-
tion 29.1 on page 326, we could roughly describe this process with Equation 29.5 for fixed-
length genomes where the elementary types Gi from G0 × G1 × .. × Gn−1 = G are finite sets
and their j th elements can be accessed with an index Gi [j]. Although creation is a nullary
reproduction operation, i. e., one which does not receive any genotypes g ∈ G as parameter,
in Equation 29.5 we provide n, the dimension of the search space, as well as the search space
G to the operation in order to keep the definition general. In Algorithm 29.1, we show how
this general equation can be implemented for fixed-length bit-string genomes, i. e., we define
an algorithm createGAFLBin(n) ≡ createGAFL(n, Bn ) which creates bit strings g ∈ G of
length n (G = Bn ).

createGAFL(n, G) ≡ (g [1], g [2], .., g [n]) : g [i] = Gi [⌊randomUni[0, len(Gi ))⌋] ∀i ∈ 1..n (29.5)

Algorithm 29.1: g ←− createGAFLBin(n)


Input: n: the length of the individuals
Data: i ∈ 1..n: the running index
Data: v ∈ B: a randomly drawn bit value
Output: g: a new bit string of length n with random contents
1 begin
2 i ←− n
3 g ←− ()
4 while i > 0 do
5 if randomUni[0, 1) < 0.5 then v ←− 0
6 else v ←− 1
7 i ←− i − 1
8 g ←− addItem(g, v)
9 return g
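A direct translation of createGAFLBin into code might look as follows. This is only a minimal sketch (names are illustrative), representing the bit string as a boolean array and drawing one uniformly distributed bit per locus:

```java
import java.util.Random;

public class CreateBitString {

  // Create a random bit string of length n: one uniformly
  // drawn bit per locus (cf. Algorithm 29.1).
  static boolean[] createGAFLBin(int n, Random rnd) {
    boolean[] g = new boolean[n];
    for (int i = 0; i < n; i++) {
      g[i] = rnd.nextBoolean();
    }
    return g;
  }

  public static void main(String[] args) {
    boolean[] g = createGAFLBin(18, new Random());
    StringBuilder sb = new StringBuilder();
    for (boolean bit : g) {
      sb.append(bit ? '1' : '0');
    }
    System.out.println(sb);
  }
}
```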
29.3.1.2 Permutations

Besides on arbitrary vectors over some vector space (be it Bn , Nn0 , or Rn ), Genetic Algorithms
can also work on permutations, as defined in Section 51.6 on page 643. If G is the space of all
possible permutations of the n natural numbers from 0 to n − 1, i. e., G = Π(0..n − 1), we
can create a new genotype g ∈ G using the algorithm Algorithm 29.2 which is implemented
in Listing 56.37 on page 804.

Algorithm 29.2: g ←− createGAPerm(n)


Input: n: the number of elements to permutate
Data: tmp: a temporary list
Data: i, j: indices into tmp and g
Output: g: the new random permutation of the numbers 0..n − 1
1 begin
2 tmp ←− (0, 1, . . . , n − 2, n − 1)
3 i ←− n
4 g ←− ()
5 while i > 0 do
6 j ←− ⌊randomUni[0, i)⌋
7 i ←− i − 1
8 g ←− addItem(g, tmp[j])
9 tmp[j] ←− tmp[i]
10 return g
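Algorithm 29.2 can be implemented, for instance, by drawing without replacement from a shrinking pool of unused numbers, just as the pseudocode does with tmp. The sketch below (names are our own) mirrors that structure:

```java
import java.util.Random;

public class CreatePermutation {

  // Create a uniformly random permutation of 0..n-1 by drawing
  // without replacement from a shrinking pool (cf. Algorithm 29.2).
  static int[] createGAPerm(int n, Random rnd) {
    int[] tmp = new int[n];
    for (int k = 0; k < n; k++) {
      tmp[k] = k; // pool of not-yet-used numbers
    }
    int[] g = new int[n];
    int i = n;
    int pos = 0;
    while (i > 0) {
      int j = rnd.nextInt(i); // pick an unused element
      i--;
      g[pos++] = tmp[j];
      tmp[j] = tmp[i]; // move the last unused element into the gap
    }
    return g;
  }

  public static void main(String[] args) {
    System.out.println(java.util.Arrays.toString(createGAPerm(10, new Random())));
  }
}
```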

From a more general perspective, we can consider the problem space X to be the space
X = Π(S) of all permutations of the elements si : i ∈ 0..n − 1 of a given set S. Usually,
of course, S = 0..n − 1 and si = i ∀i ∈ 0..n − 1. The genotypes g in the search space
G are vectors of natural numbers from 0 to n − 1. The mapping of these vectors to the
permutations can be done implicitly or explicitly according to the following Equation 29.6.

gpmΠ (g, S) = x : x[i] = sg[i] ∀i ∈ 0..n − 1 (29.6)

In other words, the gene at locus i in the genotype g ∈ G defines which element s from the
set S should appear at position i in the permutation x ∈ X = Π(S). More precisely, the
allelic value g [i] of the gene is the index of the element sg[i] ∈ S which will appear at that
position in the phenotype x.
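The explicit mapping gpmΠ from Equation 29.6 might be realized as follows; the city names serve only as a hypothetical example set S, and all identifiers are our own:

```java
public class PermutationGPM {

  // Explicit genotype-phenotype mapping gpm_Pi: the gene at locus i
  // selects which element of S appears at position i (Equation 29.6).
  static String[] gpmPi(int[] g, String[] s) {
    String[] x = new String[g.length];
    for (int i = 0; i < g.length; i++) {
      x[i] = s[g[i]]; // allele g[i] is an index into S
    }
    return x;
  }

  public static void main(String[] args) {
    String[] cities = {"Beijing", "Shanghai", "Guangzhou", "Chengdu"};
    int[] genotype = {2, 0, 3, 1}; // visiting order as indices into S
    System.out.println(String.join(" -> ", gpmPi(genotype, cities)));
  }
}
```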
29.3.2 Mutation: Unary Reproduction

Mutation is an important method for preserving the diversity of the candidate solutions by
introducing small, random changes into them.

Fig. 29.2.a: Single-gene mutation. Fig. 29.2.b: Consecutive multi-gene mutation (i).
Fig. 29.2.c: Uniform multi-gene mutation (ii). Fig. 29.2.d: Complete mutation.

Figure 29.2: Value-altering mutation of string chromosomes.
Figure 29.2: Value-altering mutation of string chromosomes.

29.3.2.1 Single-Point Mutation

In fixed-length string chromosomes, this can be achieved by randomly modifying the value
(allele) of a gene, as illustrated in Fig. 29.2.a. If the genome consists of bit strings, this
single-point mutation is called single-bit flip mutation. In this case, this operation creates a
neighborhood where all points which have a Hamming distance of exactly one are adjacent
(see Definition D4.10 on page 85).

adjacentε (g2 , g1 ) = dist ham(g2 , g1 ) = 1 (29.7)

29.3.2.2 Multi-Point Mutation

It is also possible to perform a multi-point mutation where 0 < n ≤ len(gp ) locations in
the genotype gp are changed at once (Fig. 29.2.b and Fig. 29.2.c). If parts of the genotype
interact epistatically (see Chapter 17), such a multi-point mutation may be the only way to
escape local optima.
In this case, the method sketched in Fig. 29.2.c is far more efficient than the one in
Fig. 29.2.b. This becomes clear when assuming that the search space was the bit strings of
length n, i. e., G = Bn . The former method, illustrated in Fig. 29.2.c, creates a neighborhood
as defined in Equation 29.8: depending on the probability ε(l) of modifying exactly 0 <
l ≤ n elements, all individuals in Hamming distance of exactly l become adjacent (with a
probability of exactly ε(l)). The method sketched in Fig. 29.2.b, on the other hand,
can only put strings into adjacency which have a Hamming distance of l and where the
modified bits additionally form a consecutive sequence. Of course, similar considerations
would hold for any kind of string-based search space.

adjacentε(l) (g2 , g1 ) = dist ham(g2 , g1 ) = l (29.8)

If a multi-point mutation operator is applied, the number of bits to be modified at once


can be drawn from a (bounded) exponential distribution (see Section 53.4.3 on page 682).
This would increase the probability ε(l) that only few genes are changed (l is low), which
makes sense since one of the basic assumptions in Genetic Algorithms is that there exists
at least some locality ( Definition D14.1 on page 162) in the search space, i. e., that some
small changes in the genotype also lead to small changes in the phenotype. However, bigger
modifications (l → n in search spaces of dimension n) would still be possible and happen
from time to time with such a choice.

Algorithm 29.3: g ←− mutateGAFLBinRand(gp , n)


Input: gp : the parent individual
Input: n ∈ 1..len(gp ): the highest possible number of bits to change
Data: i, j: temporary variables
Data: q: the first len(gp ) natural numbers
Output: g: the mutated copy of gp with some bits toggled
1 begin
2 g ←− gp
3 q ←− (0, 1, 2, .., len(g) − 1)
4 repeat
5 i ←− ⌊randomUni[0, len(q))⌋
6 j ←− q [i]
7 q ←− deleteItem(q, j)
8 g [j] ←− ¬g [j]
9 until (len(q) = 0) ∨ (randomUni[0, 1) < 0.5)
10 return g

In Algorithm 29.3, we provide an implementation of the mutation operation sketched in


Fig. 29.2.c which modifies a roughly exponentially distributed number of genes (bits) in a
genotype from a bit-string based search space (G ⊆ Bn ). It does so by randomly selecting
loci and toggling the alleles (values) of the genes (bits) at these positions. It stops if it has
toggled all bits (i. e., all loci have been used) or a random number uniformly distributed
in [0, 1) becomes lower than 0.5. The latter condition has the effect that only one gene is
modified with probability 0.5. With probability (1 − 0.5) ∗ 0.5 = 0.25, exactly two genes are
changed. With probability (1 − 0.5)2 ∗ 0.5 = 0.125, the operator modifies three genes and
so on.
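The operator of Algorithm 29.3, with its roughly exponentially distributed number of toggled bits, could be sketched like this (a minimal illustration under our own names, using a list of not-yet-used loci in place of q):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class MultiBitMutation {

  // Flip a roughly exponentially distributed number of distinct bits:
  // after every flip, stop with probability 0.5 (cf. Algorithm 29.3).
  static boolean[] mutate(boolean[] parent, Random rnd) {
    boolean[] g = parent.clone();
    List<Integer> loci = new ArrayList<>();
    for (int k = 0; k < g.length; k++) {
      loci.add(k); // loci not yet mutated
    }
    do {
      int i = rnd.nextInt(loci.size());
      int j = loci.remove(i); // pick a fresh locus, never reuse it
      g[j] = !g[j];           // toggle the allele at that locus
    } while (!loci.isEmpty() && rnd.nextDouble() >= 0.5);
    return g;
  }

  public static void main(String[] args) {
    boolean[] p = new boolean[16]; // all-zero parent
    boolean[] c = mutate(p, new Random());
    int flips = 0;
    for (int k = 0; k < p.length; k++) {
      if (p[k] != c[k]) flips++;
    }
    System.out.println("bits flipped: " + flips);
  }
}
```

With this stopping rule, one bit is flipped with probability 0.5, two bits with probability 0.25, three with 0.125, and so on, as described above.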

29.3.2.3 Vector Mutation

In real-coded genomes, it is common to instead modify all genes as sketched in Fig. 29.2.d.
This would make little sense in a binary-coded search space, since the inversion of all bits
means to completely discard all information of the parent individual.

29.3.2.4 Mutation: Binary and Real Genomes

Changing a single gene in binary-coded chromosomes, i. e., encodings where each gene is a
single bit, means that the value at a given locus is simply inverted as sketched in Fig. 29.3.a.
This operation is referred to as bit-flip mutation.
For real-encoded genomes, modifying an element g [i] can be done by replacing it with a
number drawn from a normal distribution (see Section 53.4.2) with expected value g [i] and
some standard deviation σ, i. e., g [i]new ∼ N(g [i], σ 2 ), which we illustrate in Fig. 29.3.b and
outline in detail in Section 30.4.1 on page 364.
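Such a normally distributed mutation of a single gene might be sketched as follows (class and parameter names are our own; the step size σ is left to the caller):

```java
import java.util.Random;

public class GaussianMutation {

  // Replace one randomly chosen gene with a normally distributed
  // value centered on the old allele: g[i]' ~ N(g[i], sigma^2).
  static double[] mutate(double[] parent, double sigma, Random rnd) {
    double[] g = parent.clone();
    int i = rnd.nextInt(g.length);            // pick one locus
    g[i] = g[i] + sigma * rnd.nextGaussian(); // shift by N(0, sigma^2)
    return g;
  }

  public static void main(String[] args) {
    double[] p = {1.0, 2.0, 3.0};
    System.out.println(java.util.Arrays.toString(mutate(p, 0.1, new Random())));
  }
}
```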

Fig. 29.3.a: Mutation in binary genomes. Fig. 29.3.b: Mutation in real genomes, where the
new allele g[i]′ is drawn from a normal distribution centered at the old allele g[i] (the axis
marks g[i] − σ and g[i] + σ delimit one standard deviation).

Figure 29.3: Mutation in binary- and real-coded GAs


29.3.3 Permutation: Unary Reproduction

The permutation operation is an alternative mutation method where the alleles of two genes
are exchanged, as sketched in Figure 29.4. This, of course, only makes sense if all genes have
similar data types and the location of a gene has influence on its meaning. Permutation is,
for instance, useful when solving problems that involve finding an optimal sequence of items,
like the Traveling Salesman Problem as mentioned in Example E2.2. Here, a genotype g
could encode the sequence in which the cities are visited. Exchanging two alleles then equals
switching two cities in the route.

Figure 29.4: Permutation applied to a string chromosome.

29.3.3.1 Single Permutation

In Algorithm 29.4 which is implemented in Listing 56.38 on page 806, we provide a sim-
ple unary search operation which exchanges the alleles of two genes, exactly as sketched
in Figure 29.4. If the genotype was a proper permutation obeying the conditions given
in Definition D51.13 on page 643, these conditions will hold for the new offspring as well.

Algorithm 29.4: g ←− mutateSinglePerm(gp )


Input: gp : the parental genotype
Data: i, j: indices into g
Data: t: a temporary variable
Output: g: the new, permuted genotype
1 begin
2 g ←− gp
3 i ←− randomUni[0, len(g))
4 repeat
5 j ←− randomUni[0, len(g))
until i ≠ j
7 t ←− g [i]
8 g [i] ←− g [j]
9 g [j] ←− t
10 return g
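Algorithm 29.4 translates almost line by line into Python (a sketch under the assumption that genotypes are plain lists representing permutations):

```python
import random

def mutate_single_perm(parent):
    """Exchange the alleles of two distinct, randomly chosen genes."""
    g = list(parent)              # copy the parental genotype
    i = random.randrange(len(g))
    j = i
    while j == i:                 # repeat until i != j
        j = random.randrange(len(g))
    g[i], g[j] = g[j], g[i]       # swap the two alleles
    return g
```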

29.3.3.2 Multiple Permutations

Different from Algorithm 29.4, Algorithm 29.5 (implemented in Listing 56.39 on page 808)
repeatedly exchanges elements. The number of swaps is roughly exponentially (more
precisely, geometrically) distributed (see Section 53.4.3): after each swap, the loop continues
with probability 0.5. This operation also preserves the validity of
the conditions for permutations given in Definition D51.13 on page 643.

Algorithm 29.5: g ←− mutateMultiPerm(gp )


Input: gp : the parental genotype
Data: i, j: indices into g
Data: t: a temporary variable
Output: g: the new, permuted genotype
1 begin
2 g ←− gp
3 repeat
4 i ←− ⌊randomUni[0, len(g))⌋
5 repeat
6 j ←− ⌊randomUni[0, len(g))⌋
until i ≠ j
8 t ←− g [i]
9 g [i] ←− g [j]
10 g [j] ←− t
11 until randomUni[0, 1) ≥ 0.5
12 return g
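Algorithm 29.5 can be sketched analogously (again assuming list genotypes; the stop condition mirrors the repeat-until loop of the pseudocode):

```python
import random

def mutate_multi_perm(parent):
    """Repeatedly swap two distinct genes; after each swap the loop
    continues with probability 0.5 (geometric number of swaps)."""
    g = list(parent)
    while True:
        i = random.randrange(len(g))
        j = i
        while j == i:
            j = random.randrange(len(g))
        g[i], g[j] = g[j], g[i]
        if random.random() >= 0.5:   # stop condition of Algorithm 29.5
            break
    return g
```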
29.3.4 Crossover: Binary Reproduction

There is an incredibly wide variety of crossover operators for Genetic Algorithms. In his
2006 book [1159], Gwiazda alone mentions 11 standard, 66 binary-coded, and 89 real-coded
crossover operators. Here, we can only discuss a small fraction of those.
Amongst all Evolutionary Algorithms, some of the recombination operations which
probably come closest to the natural paragon can be found in Genetic Algorithms. Fig-
ure 29.5 outlines the most basic methods to recombine two string chromosomes, the so-called
crossover, which is performed by swapping parts of two genotypes. Although these operators
still cannot nearly resemble natural crossover, they at least mirror the school book view
of it.


Fig. 29.5.a: Single-point Fig. 29.5.b: Two-point Fig. 29.5.c: Multi-point Fig. 29.5.d: Uniform
Crossover (SPX, 1-PX). Crossover (TPX, 2-PX). Crossover (MPX, k-PX). Crossover (UX).

Figure 29.5: Crossover (recombination) operators for fixed-length string genomes.

29.3.4.1 Single-Point Crossover (SPX, 1-PX)

When performing single-point crossover (SPX3 , 1-PX, [1159]), both parental chromosomes
are split at a randomly determined crossover point. Subsequently, a new child genotype is
created by appending the second part of the second parent to the first part of the first parent,
as illustrated in Fig. 29.5.a. With crossover, it is possible to create two offspring at
once from the two parents; the second offspring is shown in parentheses in the figure. Doing
so is, however, not mandatory.

29.3.4.2 Two-Point Crossover (TPX)

In two-point crossover (TPX, 2-PX, sketched in Fig. 29.5.b), both parental genotypes are
split at two points and a new offspring is created by using parts number one and three from
the first, and the middle part from the second parent chromosome. The second possible
offspring of this operation is again displayed in parentheses.

29.3.4.3 Multi-Point Crossover (MPX, k-PX)

In Algorithm 29.6, the generalized form of this technique is specified: the k-point crossover
operation (k-PX, [1159]), also called multi-point crossover (MPX). The operator takes the
two parent genotypes gp1 and gp2 from an n-dimensional search space as well as the number
0 < k < n of crossover points as parameter.
By setting the parameter k to 1 or 2, single-point and two-point crossover can be emulated,
and with k = n − 1, the operator becomes similar to uniform crossover (see Section 29.3.4.4).
It is furthermore possible to choose k randomly from 1..n − 1 for each application of the
operator, maybe by picking it from a bounded exponential distribution. This way, many
different recombination operations can be performed with the same algorithm.
3 This abbreviation is also used for simplex crossover, see ??.

Algorithm 29.6: g ←− recombineMPX(gp1 , gp2 , k)


Input: gp1 , gp2 ∈ G: the parental genotypes
Input: [implicit] n: the dimension of the search space G
Input: k: the number of crossover points
Data: i: a counter variable
Data: cp: a randomly chosen crossover point candidate
Data: CP : the list of crossover points
Data: s, e: the segment start and end indices
Data: b: a Boolean variable for choosing the segment
Data: gu ∈ G: the currently used parent
Output: g ∈ G: a new genotype
1 begin
// find k crossover points in 1..n − 1
2 CP ←− (n)
3 for i ←− 1 up to k do
4 repeat
5 cp ←− ⌊randomUni[0, n − 1)⌋ + 1
until cp ∉ CP
7 CP ←− addItem(CP, cp)
8 CP ←− sortAsc(CP, <)
// perform the crossover by copying sub-strings
9 g ←− ()
10 s ←− 0
11 b ←− true
12 gu ←− gp1
13 for i ←− 0 up to k do
14 e ←− CP [i]
15 if b then gu ←− gp1
16 else gu ←− gp2
// add substring of length e − s from gu , starting at index s, to g
17 g ←− appendList(g, subList(gu , s, e − s))
18 b ←− ¬b
19 s ←− e
20 return g
In Algorithm 29.6, first k different crossover points cp are picked according to the uniform
distribution and the offspring is initialized as an empty list. Then, the algorithm iterates
through the sorted list CP of crossover points and copies the string sub-sequences between
each two crossover points. Depending on whether it is in an odd (b = true) or even
(b = false) iteration, either a portion from the first parent gp1 or the second parent gp2
is copied. Notice that n, the dimension of the search space, is also part of the list CP,
since we need to copy the sub-sequence between the last crossover point and the end of one
of the parent genotypes as well.
An example for the application of this algorithm is illustrated in Fig. 29.5.c. For fixed-
length strings, the crossover points for both parents are always identical, whereas for variable-
length strings (discussed in Section 29.4 and sketched in Fig. 29.8.c on page 341), the algo-
rithm could be redesigned to use different crossover points for the two parents.
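A compact Python sketch of Algorithm 29.6 (equal-length list genotypes assumed; `random.sample` replaces the rejection loop for drawing distinct crossover points):

```python
import random

def multi_point_crossover(p1, p2, k):
    """k-point crossover: choose k distinct crossover points in 1..n-1 and
    copy the segments between them alternately from the two parents."""
    n = len(p1)
    points = sorted(random.sample(range(1, n), k)) + [n]  # n closes the last segment
    child, start, use_first = [], 0, True
    for end in points:
        source = p1 if use_first else p2
        child += source[start:end]     # copy segment [start, end)
        use_first = not use_first
        start = end
    return child
```

With k = 1 this reduces to single-point crossover; with k = n − 1 every locus alternates between the parents.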

29.3.4.4 Uniform Crossover (UX)

Uniform crossover (UX [1159, 2563], sketched in Fig. 29.5.d, specified in Algorithm 29.7, and
exemplarily implemented in Listing 56.36 on page 803) follows another scheme: Here, for
each locus in the offspring chromosome, the allelic value is chosen with the same probability
(50%) from the corresponding locus of either the first or the second parent.
Dominant (discrete) recombination, which is introduced in Section 30.3.1 on page 362, is a
generalization of UX to a ρ-ary operator, i. e., to an operator which creates one child from ρ
parents.

Algorithm 29.7: g ←− recombineUX(gp1 , gp2 )


Input: gp1 , gp2 ∈ G: the parental genotypes
Data: i: a counter variable
Output: g ∈ G: a new genotype
1 begin
2 g ←− gp1
3 for i ←− 0 up to len(g) − 1 do
4 if randomUni[0, 1) < 0.5 then g [i] ←− gp2 [i]
5 return g
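Algorithm 29.7 in Python (a sketch; the child starts as a copy of the first parent and, per locus, takes the second parent's allele with probability 0.5):

```python
import random

def uniform_crossover(p1, p2):
    """Uniform crossover: per locus, keep p1's allele or take p2's,
    each with probability 0.5 (Algorithm 29.7)."""
    child = list(p1)
    for i in range(len(child)):
        if random.random() < 0.5:
            child[i] = p2[i]
    return child
```

Note that loci where both parents agree are always preserved — a property that the Genetic Repair discussion in Section 29.5.6 builds on.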

29.3.4.5 (Weighted) Average Crossover

The previously discussed crossover methods have primarily been developed for bit string
genomes G = Bn. If they combine two strings g1 and g2, the offspring will necessarily be
located on one of the corners of the hypercube spanned by both parents [2099], as sketched
in Figure 29.6. In binary-coded Genetic Algorithms, this indeed makes sense.
For real-valued genomes, we would like to explore the volume inside the hypercube as
well. Therefore, we can use a (weighted) average of the two parent vectors as offspring. One
way to do so is Equation 29.9 for search spaces G ⊆ Rn. In this equation, each element of
the two parents g1 and g2 is combined to gnew by a weighted average. For pure average
crossover, the weight γ is always set to γ = 0.5. If the average is randomly weighted,
as weight γ we can use

1. a new random number ru uniformly distributed in (0, 1),


2. 0.5 ± (0.5 · (ru )^n ), where n is a natural number, in order to prefer points which are
somewhat centered between the original coordinates; for ±, either + or − is chosen
randomly (as done in Listing 56.29 on page 792, for example), or

Figure 29.6: The points sampled by different crossover operators.

3. an approximately normally distributed random number with mean µ = 0.5, again
within the bounds (0, 1), drawn anew for each locus i.

gnew [i] = γ g1 [i] + (1 − γ) g2 [i]  ∀i ∈ 1..n   (29.9)
The intermediate recombination operator discussed in Section 30.3.2 on page 362 is an
extension of average crossover to ρ parents.
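Equation 29.9 can be sketched in Python as follows (real-valued list genotypes; when no weight is given, a fresh uniform weight is drawn per call, corresponding to variant 1 above):

```python
import random

def average_crossover(p1, p2, gamma=None):
    """Weighted average crossover (Equation 29.9):
    g_new[i] = gamma * p1[i] + (1 - gamma) * p2[i] for all loci i.
    With gamma=None, a uniform weight from (0, 1) is drawn."""
    g = random.random() if gamma is None else gamma
    return [g * a + (1 - g) * b for a, b in zip(p1, p2)]
```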

29.3.4.6 Homologous Crossover (HRO, HX)

According to Banzhaf et al. [209], natural crossover is very restricted and usually exchanges
only genes that express the same functionality and are located at the same positions (loci)
on the chromosomes.

Definition D29.2 (Homology). In genetics, homology 4 of protein-coding DNA sequences


means that they code for the same protein which may indicate common functionality. Ho-
mologous chromosomes5 are either chromosomes in a biological cell that pair during meiosis
or non-identical chromosomes which code for the same functional feature by containing
similar genes in different allelic states.

In other words, homologous genetic material is very similar and in nature, only such material
is exchanged in sexual reproduction. Park et al. [2128] propose that a similar approach
should be useful in Genetic Algorithms. As basis for their analysis, they take multi-point
crossover (MPX, k-PX) as introduced here in Algorithm 29.6. In MPX, k > 0 crossover
points are chosen randomly and subsequences between two points (as well as the start of the
string and the first crossover point and the last crossover point and the end of the parent
strings) are exchanged.
Park et al. [2128] point out that such an exchange is likely to destroy schemas which
extend over crossover points while leaving only those completely embedded in a sequence
subject to exchange intact. They thus make exchange of genetic information conditional:
4 http://en.wikipedia.org/wiki/Homology_(biology) [accessed 2008-06-17]
5 http://en.wikipedia.org/wiki/Homologous_chromosome [accessed 2008-06-17]
their HRO (Homologous Recombination Operator, HX) always chooses the subsequence
from the first parent gp1 if

1. it would be the turn of gp1 anyway (b = true in Algorithm 29.6 on Line 15), if
2. the exchanged sequence would be too short, i. e., its length would be below a given
threshold w, or if
3. the sequences are not similar enough. As measure for difference, the Hamming
distance “dist ham” divided by the sequence length (e − s) is used. If this value is above
a threshold u, again the sequence from gp1 is used.

Homologous crossover can be implemented by changing the condition in the alternative in
Line 15 in Algorithm 29.6 to Equation 29.10.

b ∨ ((e − s) < w) ∨ ( dist ham(subList(gp1 , s, e − s) , subList(gp2 , s, e − s)) / (e − s) ≥ u )   (29.10)

Besides the schema-preservation which was the reason for the invention of HRO, another
argument for the measured better performance of homologous crossover [2128] would be the
discussion on similarity extraction given in Section 29.5.6 on page 346.
The reader should be informed that in Point 3 of the enumeration and in Equation 29.10,
we deviate from the suggestions given in [1159, 2128]. There, combining the two subse-
quences with xor and then counting the number of 1-bits is suggested as similarity measure.
Actually, like the Hamming distance it is a dissimilarity measure. Therefore, in Equa-
tion 29.10, we permit an information exchange between the parental chromosomes only if
it is below a threshold u while using the genes from the first parent if it is above or equal
the threshold. Either the quoted works just contain a small typo regarding that issue or the
author of this book misunderstood the concept of homology. . .
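Our reading of Equation 29.10 can be sketched as a predicate (true means the segment is kept from the first parent; illustrative only, following the author's variant rather than [1159, 2128]):

```python
def use_first_parent(b, p1, p2, s, e, w, u):
    """Equation 29.10: keep gp1's segment if it is gp1's turn anyway (b),
    if the segment is shorter than w, or if the normalized Hamming
    distance of the two segments is at least u."""
    seg1, seg2 = p1[s:e], p2[s:e]
    hamming = sum(a != c for a, c in zip(seg1, seg2))
    return b or (e - s) < w or hamming / (e - s) >= u
```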

29.3.4.7 Random Crossover

In his 1995 paper [1466], Jones introduces an experiment to test whether a crossover operator
is efficient or not. His random crossover or headless chicken crossover [101] (g3 , g4 ) ←−
recombineR (g1 , g2 ) uses an internal crossover operation recombinei and a genotype creation
routine “create”. It receives two parent individuals g1 and g2 and produces two offspring
g3 and g4 . However, it does not perform “real” crossover. Instead, it internally creates two
new (random) genotypes g5 , g6 and uses the internal operator recombinei to recombine each
of them with one of the parents.

(g3 , g4 ) = recombineR (g1 , g2 ) = ( recombinei (g1 , create()) , recombinei (g2 , create()) )   (29.11)

If the performance of a Genetic Algorithm which uses recombinei as crossover operator is not
better than that of one using recombineR (which internally applies recombinei ), then the
efficiency of recombinei is questionable. In the experiments of Jones [1466] with
recombinei = TPX, two-point crossover outperformed random crossover only in problems
with well-defined building blocks, while not leading to better results in problems without
this feature.
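The headless chicken wrapper is short enough to state directly (a sketch; `recombine` and `create` stand for the internal crossover operator and the genotype creation routine):

```python
def random_crossover(p1, p2, recombine, create):
    """Jones' headless chicken test: recombine each parent not with the
    other parent but with a freshly created random genotype."""
    return recombine(p1, create()), recombine(p2, create())
```

Comparing a Genetic Algorithm that uses `recombine` directly against one using `random_crossover` reveals whether `recombine` actually exploits parental building blocks.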

29.4 Variable-Length String Chromosomes

Variable-length genomes for Genetic Algorithms were first proposed by Smith in his PhD
thesis [2536], where he introduced a new variant of classifier systems (see Chapter 35) with
the goal of evolving programs for playing poker [2240, 2536].
29.4.1 Creation: Nullary Reproduction

Variable-length strings can be created by first randomly drawing a length l > 0 and then
creating a list of that length filled with random elements. l could be drawn from a bounded
exponential distribution.

createGAVL (GT ) = createGAFL (l, GT^l ) where l ∼ exp   (29.12)

29.4.2 Insertion and Deletion: Unary Reproduction

If the string chromosomes are of variable length, the set of mutation operations introduced
in Section 29.3 can be extended by two additional methods. First, we could insert a couple

Fig. 29.7.a: Insertion of random genes. Fig. 29.7.b: Deletion of genes.

Figure 29.7: Search operators for variable-length strings (additional to those from Sec-
tion 29.3.2 and Section 29.3.3).

of genes with randomly chosen alleles at any given position into a chromosome, as sketched
in Fig. 29.7.a. Second, this operation can be reversed by deleting elements from the string,
as shown in Fig. 29.7.b. It should be noted that both insertion and deletion are also
implicitly performed by crossover: recombining two identical strings with each other
can, for example, lead to deletion of genes, and the crossover of different strings may turn out
as an insertion of new genes into an individual.
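Both operators can be sketched in a few lines of Python (list genotypes; `random_allele` stands for a routine producing one random allele and is our assumption):

```python
import random

def insert_mutation(parent, random_allele):
    """Insert one gene with a randomly chosen allele at a random position."""
    g = list(parent)
    g.insert(random.randrange(len(g) + 1), random_allele())
    return g

def delete_mutation(parent):
    """Delete one randomly chosen gene, keeping at least one gene."""
    g = list(parent)
    if len(g) > 1:
        del g[random.randrange(len(g))]
    return g
```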

29.4.3 Crossover: Binary Reproduction

For variable-length string chromosomes, the same crossover operations are available as for
fixed-length strings except that the strings are no longer necessarily split at the same loci.
The lengths of the new strings resulting from such a cut and splice operation may differ from
the lengths of the parents, as sketched in Figure 29.8. A special case of this type of recom-
bination is the homologous crossover, where only genes at the same loci and with similar
content are exchanged, as discussed in Section 29.3.4.6 on page 338 and in Section 31.4.4.1
on page 407.

Fig. 29.8.a: Single-Point Crossover. Fig. 29.8.b: Two-Point Crossover. Fig. 29.8.c: Multi-Point Crossover.

Figure 29.8: Crossover of variable-length string chromosomes.

29.5 Schema Theorem


The Schema Theorem is a special instance of forma analysis (discussed in Chapter 9 on
page 119) for Genetic Algorithms. As a matter of fact, it is older than its generalization and
was first stated by Holland back in 1975 [711, 1250, 1253]. Here, we will first introduce the
basic concepts of schemata, masks, and wildcards before going into detail about the Schema
Theorem itself, its criticism, and the related Building Block Hypothesis.

29.5.1 Schemata and Masks

Assume that the genotypes g in the search space G of Genetic Algorithms are strings of
a fixed-length l over an alphabet6 Σ, i. e., G = Σl . Normally, Σ is the binary alphabet
Σ = {true, false} = {0, 1}. From forma analysis, we know that properties can be
defined on the genotypic or the phenotypic space. For fixed-length string genomes, we can
consider the values at certain loci as properties of a genotype. There are two basic principles
for defining such properties: masks and don't care symbols.

Definition D29.3 (Mask). For a fixed-length string genome G = Σl , the set of all geno-
typic masks M (l) is the power set7 of the valid loci M (l) = P (0..l − 1) [2879]. Every mask
mi ∈ M (l) defines a property φi and an equivalence relation:

g ∼φi h ⇔ g [j] = h[j] ∀j ∈ mi (29.13)

Definition D29.4 (Order). The order order(mi ) of the mask mi is the number of loci
defined by it:
order(mi ) = |mi | (29.14)

Definition D29.5 (Defined Length). The defined length δ (mi ) of a mask mi is the max-
imum distance between two indices in the mask:

δ (mi ) = max {|j − k| ∀j, k ∈ mi } (29.15)

A mask contains the indices of all elements in a string that are interesting in terms of the
property it defines.

Definition D29.6 (Schema). A forma defined on a string genome concerning the values
of the characters at specified loci is called a schema [558, 1250].

6 Alphabets and related concepts are defined in ?? on page ??.


7 The power set you can find described in Definition D51.9 on page 642.
29.5.2 Wildcards

The second method of specifying such schemata is to use don’t care symbols (wildcards)
to create “blueprints” H of their member individuals. Therefore, we place the don’t care
symbol * at all irrelevant positions and the characterizing values of the property at the
others.

H (j) = { g [j] if j ∈ mi ; ∗ otherwise }  ∀j ∈ 0..l − 1   (29.16)

H (j) ∈ Σ ∪ {∗}  ∀j ∈ 0..l − 1   (29.17)

Schemas correspond to masks and thus, definitions like the defined length and order can
easily be transported into their context.

Example E29.1 (Masks and Schemas on Bit Strings).


Assume we have bit strings of length l = 3 as genotypes (G = B3 ). The set of valid
masks M (3) is then M (3) = {{0} , {1} , {2} , {0, 1} , {0, 2} , {1, 2} , {0, 1, 2}}. The mask m1 =
{0, 1}, for example, specifies that the values at the loci 0 and 1 of a genotype denote the
value of a property φ1 and the value of the bit at locus 2 is irrelevant. Therefore,
it defines four formae Aφ1 =(0,0) = {(0, 0, 0) , (0, 0, 1)}, Aφ1 =(0,1) = {(0, 1, 0) , (0, 1, 1)},
Aφ1 =(1,0) = {(1, 0, 0) , (1, 0, 1)}, and Aφ1 =(1,1) = {(1, 1, 0) , (1, 1, 1)}.


Figure 29.9: An example for schemata in a three bit genome.

These formae can be defined with masks as well: Aφ1 =(0,0) ≡ H1 = (0, 0, ∗), Aφ1 =(0,1) ≡
H2 = (0, 1, ∗), Aφ1 =(1,0) ≡ H3 = (1, 0, ∗), and Aφ1 =(1,1) ≡ H4 = (1, 1, ∗). These schemata
mark hyperplanes in the search space G, as illustrated in Figure 29.9 for the three bit
genome.
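As an illustration, schema matching and the measures from Definitions D29.4 and D29.5 can be sketched in Python (the string `'*'` stands in for the don't care symbol; the helper names are ours):

```python
def matches(schema, genotype):
    """True if the genotype is an instance of the schema blueprint."""
    return all(s == '*' or s == g for s, g in zip(schema, genotype))

def order(schema):
    """Order (Definition D29.4): the number of non-wildcard loci."""
    return sum(s != '*' for s in schema)

def defined_length(schema):
    """Defined length (Definition D29.5): maximum distance between
    two defined loci (0 for schemata with a single defined locus)."""
    defined = [i for i, s in enumerate(schema) if s != '*']
    return max(defined) - min(defined) if defined else 0
```

For example, the schema H1 = (0, 0, ∗) from Example E29.1 matches (0, 0, 0) and (0, 0, 1) but no other genotype in B3.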

29.5.3 Holland’s Schema Theorem

The Schema Theorem8 was defined by Holland [1250] for Genetic Algorithms which use
fitness-proportionate selection (see Section 28.4.3 on page 290) where fitness is subject to
8 http://en.wikipedia.org/wiki/Holland%27s_Schema_Theorem [accessed 2007-07-29]
maximization [711, 1253]. Let us first only consider selection, fitness proportionate selec-
tion for fitness maximization and with replacement in particular, and ignore reproduction
operations such as mutation and crossover.
For fitness proportionate selection in this case, Equation 28.18 on page 291 defines the
probability to pick an individual in a single selection choice according to its fitness. We here
print this equation in a slightly adapted form. If the genotype of pH,i is an instance of the
schema H, its probability of being selected with this method equals its fitness divided by
the sum of the fitness values of all individuals p′ in the population pop(t) at generation t.

P (select(pH,i )) = v(pH,i ) / Σ_{∀p′ ∈ pop(t)} v(p′)   (29.18)

The chance to select any instance of schema H, here denoted as P (select(pH )), is thus
simply the sum of the probabilities of selecting any of the count(H, pop(t)) instances of this
schema in the current population:
P (select(pH )) = Σ_{i=1}^{count(H,pop(t))} P (select(pH,i )) = ( Σ_{i=1}^{count(H,pop(t))} v(pH,i ) ) / ( Σ_{∀p′ ∈ pop(t)} v(p′) )   (29.19)

We can rewrite this equation to represent the arithmetic mean vt (H) of the fitness of all
instances of the schema in the population:
vt (H) = (1 / count(H, pop(t))) · Σ_{i=1}^{count(H,pop(t))} v(pH,i )   (29.20)

Σ_{i=1}^{count(H,pop(t))} v(pH,i ) = count(H, pop(t)) · vt (H)   (29.21)

P (select(pH )) = count(H, pop(t)) · vt (H) / Σ_{j=0}^{ps−1} v(pj )   (29.22)

We can do a similar thing to compute the mean fitness vt of all the ps individuals p′ in the
population of generation t:
vt = (1 / ps) · Σ_{∀p′ ∈ pop(t)} v(p′)   (29.23)

Σ_{∀p′ ∈ pop(t)} v(p′) = ps · vt   (29.24)

P (select(pH )) = count(H, pop(t)) · vt (H) / (ps · vt )   (29.25)
In order to fill the new population pop(t + 1) of generation t + 1, we need to select ps
individuals. In other words, we have to repeat the decision given in Equation 29.25 exactly ps
times. The expected value E [count(H, pop(t + 1))] of the number count(H, pop(t + 1)) of
instances of the schema H selected into pop(t + 1) of generation t + 1 then evaluates to:

E [count(H, pop(t + 1))] = ps · P (select(pH ))   (29.26)
 = ps · count(H, pop(t)) · vt (H) / (ps · vt )   (29.27)
 = count(H, pop(t)) · vt (H) / vt   (29.28)
From this equation we can see that schemas with better (higher) fitness are likely to be
present to a higher proportion in the next generation. Also, we can expect that schemas
with lower average fitness are likely to have fewer instances after selection. The next thing to
consider in the Schema Theorem are the reproduction operations. Each individual application
of a search operation may either:
1. Leave the schema untouched when reproducing an individual: If the parent was an
instance of the schema, then the child is as well. If the parent was not an instance of the
schema, the child will be neither.
2. Destroy an occurrence of the schema: The parent individual was an instance of the
schema, but the child is not.
3. Create a new instance of the schema, i. e., the parent was not a schema instance, but
the child is.
Holland [1250] was only interested in finding a lower bound for the expected value and thus
considered only the first two cases. If ζ is the probability that an instance of the schema is
destroyed, Equation 29.28 changes to Equation 29.29.
E [count(H, pop(t + 1))] = (1 − ζ) · count(H, pop(t)) · vt (H) / vt   (29.29)
If we apply single-point crossover to an individual which is already a member of the schema
H, this can lead to the loss of the schema only if the crossover point separates the bits which
are not declared as * in H. In a search space G = Bl , we have l − 1 possible crossover points:
between loci 0 and 1, between 1 and 2, . . . , and between l − 2 and l − 1. Amongst them, only
δ (H) points are dangerous. The defined length δ (H) of H is the number of possible crossover
points lying within the schema (see, for example, Figure 29.10 on page 347). Furthermore,
in order to actually destroy the schema instance, we would have to pick a second parent
which has at least one wrong value in the genes it provides for the non-don’t care points it
covers after crossover. We denote this probability as Pdiff and, since we want
to find a lower bound, simply assume Pdiff = 1, i. e., the worst possible luck when choosing
the parent. Under these circumstances ζc , the probability to destroy an instance of H with
crossover, becomes Equation 29.30.
ζc = Pdiff · δ (H) / (l − 1) ≤ δ (H) / (l − 1)   (29.30)
For single-point mutation, on the other hand, solely the number of non-don’t care points in
the schema H is interesting, the order order(H). Here, amongst the l loci we could hit (with
uniform probability), exactly order(H) will lead to the destruction of the schema instance.
In other words, the probability ζm to do this with mutation is given in Equation 29.31.
ζm = order(H) / l   (29.31)
What we see is that both reproduction operations favor short schemas over long ones. The
lower the defined length of a schema and the lower its order, the higher is its chance to
survive mutation and crossover. Finally, we now can put Equation 29.30 and Equation 29.31
together. If we assume that either mutation is applied (with probability mr), crossover
is applied (with probability cr), or neither operation is used and the individual survives
without modification, we get Equation 29.32 and, by substituting it into Equation 29.29,
Equation 29.33.
ζ ≤ mr · ζm + cr · ζc = mr · order(H) / l + cr · δ (H) / (l − 1)   (29.32)

E [count(H, pop(t + 1))] ≥ (1 − mr · order(H) / l − cr · δ (H) / (l − 1)) · count(H, pop(t)) · vt (H) / vt   (29.33)
The fact that Equation 29.29 and Equation 29.33 hold for any possible schema at any
given point in time accounts for what is called implicit parallelism and introduced in Sec-
tion 28.1.2.9.
In Equation 29.29, we can already see that the number of instances of a schema is likely
to increase as long as
1 < (vt (H) / vt ) · (1 − ζ)   (29.34)

1 < vt (H) · (1 − ζ) / vt   (29.35)

1 > (1 / vt (H)) · vt / (1 − ζ)   (29.36)

vt (H) > vt / (1 − ζ)   (29.37)
If ζ was 0, i. e., if instances of the schema H would never get lost, this just means that the
schema fitness vt (H) must be higher than the average population fitness vt . For non-zero
ζs, the schema fitness must be higher in order to compensate for the destroyed instances
appropriately.
If the relation of vt (H) to vt would remain constant for multiple generations (and of such
a quality that leads to an increase of instances), the number of schema instances would grow
exponentially (since it would multiply with a constant factor in each generation).
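The lower bound of Equation 29.33 is easy to evaluate numerically. The following sketch uses made-up values to show that a short, low-order schema with above-average fitness is expected to gain instances while a long, high-order one is not (function name and numbers are illustrative):

```python
def expected_instances(count_h, v_h, v_avg, order_h, delta_h, l, mr, cr):
    """Lower bound of Equation 29.33 on the expected number of instances
    of schema H in the next generation, given its current count, its mean
    fitness v_h, the population mean fitness v_avg, its order and defined
    length, the string length l, and the mutation/crossover rates mr, cr."""
    survival = 1.0 - mr * order_h / l - cr * delta_h / (l - 1)
    return survival * count_h * v_h / v_avg
```

With l = 20, mr = 0.1, cr = 0.8, a schema with 10 instances and relative fitness 1.2 is expected to gain instances if its order is 2 and its defined length 3, but to shrink drastically at order 10 and defined length 19.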

29.5.4 Criticism of the Schema Theorem

From Equation 29.29 or Equation 29.33, one could assume that short, highly fit schemas
would spread exponentially in the population since their number would multiply with a cer-
tain factor in every generation. This deduction is however a very optimistic assumption and
not generally true, maybe even misleading. If a highly fit schema has many offspring with
good fitness, this will also improve the overall fitness of the population. Hence, the proba-
bilities in Equation 29.29 will shift over time. Generally, the Schema Theorem represents a
lower bound that will only hold for one generation [2931]. Trying to derive predictions for
more than one or two generations using the Schema Theorem as is will lead to deceptive or
wrong results [1129, 1133].
Furthermore, the population of a Genetic Algorithm only represents a sample of limited
size of the search space G. This limits the reproduction of the schemata and also makes
statements about probabilities in general more complicated. Since we only have samples
of the schemata H, we cannot be sure if vt (H) really represents the average fitness of all
the members of the schema. That is why we annotate it with t instead of writing v(H).
Thus, even reproduction operators which preserve the instances of the schema may lead to
a decrease of vt+... (H) over time. It is also possible that parts of the population already have
converged and other members of a schema will not be explored anymore, so we do not get
further information about its real utility.
Additionally, we cannot know if it is really good if one specific schema spreads fast, even
if it is very fit. Remember that we have already discussed the exploration versus exploitation
topic and the importance of diversity in Section 13.3.1 on page 155.
Another issue is that we implicitly assume that most schemata are compatible and can
be combined, i. e., that there is low interaction between different genes. This is also not
generally valid: Epistatic effects, for instance, can lead to schema incompatibilities. The
expressiveness of masks and blueprints is limited, and it can be argued that there are
properties which we cannot specify with them. Take, for example, the set D3 of numbers
divisible by three, D3 = {3, 6, 9, 12, . . . }. Representing them as binary strings will lead
to D3 = {0011, 0110, 1001, 1100, . . . } if we have a bit-string genome of the length 4.
Obviously, we cannot seize these genotypes in a schema using the discussed approach. They
may, however, be gathered in a forma. Yet, the Schema Theorem cannot hold for such
a forma since the probability ζ of destruction may differ from instance to instance.
Finally, we mentioned that the Schema Theorem gives rise to implicit parallelism since it
holds for any possible schema at the same time. However, it should be noted that (a) only a
small subset of schemas is usually actually instantiated within the population and (b) only a
few samples are usually available per schema (if any). Generally, this assumption (like many
others) about the Schema Theorem only makes sense if the population size is assumed to
be infinite and the sampling of the population and the schema is really uniform.

29.5.5 The Building Block Hypothesis

According to Harik [1196], the substructure of a genotype which allows it to match a
schema is called a building block. The Building Block Hypothesis (BBH), proposed by
Goldberg [1075] and Holland [1250], is based on two assumptions:

1. When a Genetic Algorithm solves a problem, there exist some low-order, low-defining
length schemata with above-average fitness (the so-called building blocks).

2. These schemata are combined step by step by the Genetic Algorithm in order to form
larger and better strings. By using the building blocks instead of testing any possible
binary configuration, Genetic Algorithms efficiently decrease the complexity of the
problem. [1075]

Although it seems as if the Building Block Hypothesis is supported by the Schema Theorem,
this cannot be verified easily. Experiments that were originally intended to prove this theory
often did not work out as planned [1913]. Also consider the criticisms of the Schema Theorem
mentioned in the previous section, which turn higher abstractions like the Building
Block Hypothesis into a touchy subject. In general, there exists much criticism of the Building
Block Hypothesis and, although it is a very nice model, it cannot yet be considered as
sufficiently proven.

29.5.6 Genetic Repair and Similarity Extraction

The GR Hypothesis by Beyer [298], based on the Genetic Repair (GR) Effect [296], takes
a counter position to the Building Block Hypothesis [302]. It states that not the different
schemas from the parent individuals are combined to form the child; instead, the
similarities are inherited.
Let us take a look at uniform crossover (UX) for bit-string based search spaces, as
introduced in Section 29.3.4.4 on page 337. Here, the value for each locus of the child
individual gc is chosen randomly from the values at the same locus in one of the two parents
gp,a and gp,b . This means that if the parents have the same value at locus i, then the
child will inherit this value too: gp,a [i] = gp,b [i] ⇒ gc [i] = gp,a [i] = gp,b [i]. If the two parents
disagree in one locus j, however, the value at that locus in gc is either 1 or 0 – both with
a probability of 50%. In other words, uniform crossover preserves the positions where the
parents are equal and randomly fills up the remaining genes.
If a sequence of genes carries the same alleles in both parents, there are three possible
reasons: (a) it emerged randomly, which has a probability of 2^−l for length-l sequences
and is thus very unlikely, (b) both parents inherited the sequence, or (c) a combination
of inherited sequences of length m less than l and a random coincidence of l − m equal bits
(resulting, for example, from mutation). The latter two cases, the inheritance of the sequence
or subsequence, imply that the sequence (or parts thereof) was previously selected, i. e.,
seemingly has good fitness. Uniform crossover preserves these components. The remaining
bits are uninteresting, maybe even harmful, so they can be replaced. In UX, this is done
with maximum entropy, in a maximally random fashion [302].
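This preserve-the-agreements, randomize-the-rest behavior can be sketched in Java as follows (a minimal illustration of ours; the class and method names are not taken from the book's listings):

```java
import java.util.Random;

/** A minimal sketch of uniform crossover (UX) on bit strings. */
public class UniformCrossoverSketch {

  /** Copies each gene of the child from a randomly chosen parent. */
  static boolean[] uniformCrossover(boolean[] pa, boolean[] pb, Random rand) {
    boolean[] child = new boolean[pa.length];
    for (int i = 0; i < pa.length; i++) {
      // where the parents agree, this always copies the common allele;
      // where they disagree, either allele is chosen with probability 0.5
      child[i] = rand.nextBoolean() ? pa[i] : pb[i];
    }
    return child;
  }

  public static void main(String[] args) {
    boolean[] pa = {true, true, false, false};
    boolean[] pb = {true, false, false, true};
    boolean[] c = uniformCrossover(pa, pb, new Random());
    // loci 0 and 2 are equal in both parents and are therefore preserved
    System.out.println(c[0] == pa[0] && c[2] == pa[2]); // prints "true"
  }
}
```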
Hence, the idea of Genetic Repair is that useful gene sequences, which are built up step by
step by mutation (or by the variation resulting from crossover), are aggregated. Useless
sequences, however, are dampened or destroyed by the same operator. This theory has been
explained for bit-string based as well as real-valued vector based search spaces alike [298,
302].

29.6 The Messy Genetic Algorithm


According to the schema theorem specified in Equation 29.29 and Equation 29.33, a schema
is likely to spread in the population if it has above-average fitness, is short (i. e., has a low
defined length), and is of low order [180]. Thus, according to Equation 29.33, of two schemas
with the same average fitness and order, the one with the lesser defined length will be propagated
to more offspring, since it is less likely to be destroyed by crossover. Therefore, placing
dependent genes close to each other would be a good search space design approach, since it
would allow good building blocks to proliferate faster. These building blocks, however, are
not known at design time – otherwise the problem would already be solved. Hence, it is not
generally possible to devise such a design.
The messy Genetic Algorithms (mGAs) developed by Goldberg et al. [1083] use a coding
scheme which is intended to allow the Genetic Algorithm to re-arrange genes at runtime.
Because of this feature, it can place the genes of a building block spatially close together.
This method of linkage learning may thus increase the probability that building blocks, i. e.,
sets of epistatically linked genes, are preserved during crossover operations, as sketched in
Figure 29.10. It thus mitigates the effects of epistasis as discussed in Section 17.3.3.

[Figure: two linked genes placed far apart in the genome are destroyed in 6 out of 9 cases
by single-point crossover; after rearranging them to adjacent loci, they are destroyed in
only 1 out of 9 cases.]

Figure 29.10: Two linked genes and their destruction probability under single-point
crossover.

29.6.1 Representation
The idea behind the genomes used in messy GAs goes back to the work of Bagley
[180] from 1967, who first introduced a representation where the ordering of the
genes was not fixed. Instead, each gene is a tuple (φ, γ) of its position (lo-
cus) φ and value (allele) γ. For instance, the bit string 000111 can
be represented as g1 = ((0, 0) , (1, 0) , (2, 0) , (3, 1) , (4, 1) , (5, 1)) but also as g2 =
((5, 1) , (1, 0) , (3, 1) , (2, 0) , (0, 0) , (4, 1)), where both genotypes map to the same phenotype,
i. e., gpm(g1 ) = gpm(g2 ).

29.6.2 Reproduction Operations

29.6.2.1 Inversion: Unary Reproduction


The inversion operator reverses the order of genes between two randomly chosen loci [180,
1196]. With this operation, any particular ordering can be produced in a relatively small
number of steps. Figure 29.11 illustrates, for example, how the possible building block com-
ponents (1, 0), (3, 0), (4, 1), and (6, 1) can be brought together in two steps. Nevertheless,
the effects of the inversion operation were rather disappointing [180, 985].

(0,0) (1,0) (2,0) (3,0) (4,1) (5,1) (6,1) (7,1)   first inversion (reversing positions 2 to 4) →

(0,0) (1,0) (4,1) (3,0) (2,0) (5,1) (6,1) (7,1)   second inversion (reversing positions 1 to 5) →

(0,0) (5,1) (2,0) (3,0) (4,1) (1,0) (6,1) (7,1)

Figure 29.11: An example for two subsequent applications of the inversion operation [1196].
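Since each gene carries its own locus, inversion amounts to reversing a sub-range of the gene list. A minimal sketch (names are ours, not from the book's listings):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** A sketch of the inversion operator on a list of (locus, allele) genes. */
public class InversionSketch {

  /** Reverses the gene order between index i (inclusive) and j (exclusive);
   *  subList returns a view, so the reversal happens in place. */
  static <T> void invert(List<T> genome, int i, int j) {
    Collections.reverse(genome.subList(i, j));
  }

  public static void main(String[] args) {
    List<String> g = new ArrayList<>(Arrays.asList(
        "(0,0)", "(1,0)", "(2,0)", "(3,0)", "(4,1)", "(5,1)", "(6,1)", "(7,1)"));
    invert(g, 2, 5); // the first inversion of Figure 29.11: positions 2..4
    System.out.println(g); // prints "[(0,0), (1,0), (4,1), (3,0), (2,0), (5,1), (6,1), (7,1)]"
  }
}
```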

29.6.2.2 Cut: Unary Reproduction

The cut operator splits a genotype g into two parts with probability pc = (len(g) − 1) pK , where
pK is a bitwise probability and len(g) is the length of the genotype [1551]. With pK = 0.1, the
genotype g1 = ((0, 0) , (1, 0) , (2, 0) , (3, 1) , (4, 1) , (5, 1)) has a cut probability of pc = (6 − 1)∗0.1 = 0.5.
A cut at position 4 would lead to g3 = ((0, 0) , (1, 0) , (2, 0) , (3, 1)) and g4 = ((4, 1) , (5, 1)).

29.6.2.3 Splice: Binary Reproduction

The splice operator joins two genotypes with a predefined probability ps by simply attach-
ing one to the other [1551]. Splicing g2 = ((5, 1) , (1, 0) , (3, 1) , (2, 0) , (0, 0) , (4, 1)) and g4 =
((4, 1) , (5, 1)), for instance, leads to g5 = ((5, 1) , (1, 0) , (3, 1) , (2, 0) , (0, 0) , (4, 1) , (4, 1) , (5, 1)).
In summary, the application of two cut operations and a subsequent splice operation to two
genotypes has roughly the same effect as a single-point crossover operator on variable-length
string chromosomes (Section 29.4.3).
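Both operators can be sketched on gene lists as follows (hypothetical names of ours; whether the operators fire at all, i. e., the probabilities pc and ps , is decided by the caller):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A sketch of the messy-GA cut and splice operators on gene lists. */
public class CutSpliceSketch {

  /** Cuts g at a uniformly chosen point between two genes, returning both
   *  pieces. Whether a cut happens at all is decided beforehand with the
   *  probability p_c = (len(g) - 1) p_K. */
  static <T> List<List<T>> cut(List<T> g, Random rand) {
    int pos = 1 + rand.nextInt(g.size() - 1);
    List<List<T>> pieces = new ArrayList<>();
    pieces.add(new ArrayList<>(g.subList(0, pos)));
    pieces.add(new ArrayList<>(g.subList(pos, g.size())));
    return pieces;
  }

  /** Splices two genotypes by simply attaching b to the end of a. */
  static <T> List<T> splice(List<T> a, List<T> b) {
    List<T> joined = new ArrayList<>(a);
    joined.addAll(b);
    return joined;
  }

  public static void main(String[] args) {
    List<Integer> g = java.util.Arrays.asList(10, 11, 12, 13);
    List<List<Integer>> pieces = cut(g, new Random());
    // splicing the two cut pieces back together restores the genotype
    System.out.println(splice(pieces.get(0), pieces.get(1))); // prints "[10, 11, 12, 13]"
  }
}
```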

29.6.3 Overspecification and Underspecification

The genotypes in messy GAs have a variable length, and the cut and splice operators
can lead to genotypes being over- or underspecified. If we assume a three-bit genome,
the genotype g6 = ((2, 0) , (0, 0) , (2, 1) , (1, 0)) is overspecified, since it contains two (in this
example, different) alleles for the third gene (at locus 2). g7 = ((2, 0) , (0, 0)), in turn, is
underspecified, since it does not contain any value for the gene in the middle (at locus 1).
Dealing with overspecification is rather simple [847, 1551]: the genes are processed from
left to right during the genotype-phenotype mapping, and the first allele found for a specific
locus wins. In other words, g6 from above codes for 000 and the second value for locus 2
is discarded. The loci left open during the interpretation of underspecified genotypes are filled
with values from a template string [1551]. If this string were 000, g7 would code for 000, too.
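The first-allele-wins rule and the template fill can be sketched as follows (a hypothetical decode routine of ours; bits are stored as ints for brevity):

```java
import java.util.Arrays;

/** A sketch of the messy-GA genotype-phenotype mapping: the first allele
 *  found for a locus wins; unspecified loci are filled from a template. */
public class MessyGpmSketch {

  /** genes[k] = {locus, allele}; the template supplies missing loci. */
  static int[] decode(int[][] genes, int[] template) {
    int[] phenotype = template.clone();   // underspecified loci keep template values
    boolean[] seen = new boolean[template.length];
    for (int[] gene : genes) {            // left-to-right scan
      int locus = gene[0];
      if (!seen[locus]) {                 // the first allele for a locus wins;
        seen[locus] = true;               // later duplicates are discarded
        phenotype[locus] = gene[1];
      }
    }
    return phenotype;
  }

  public static void main(String[] args) {
    int[][] g6 = {{2, 0}, {0, 0}, {2, 1}, {1, 0}}; // overspecified at locus 2
    int[][] g7 = {{2, 0}, {0, 0}};                 // underspecified at locus 1
    int[] template = {0, 0, 0};
    System.out.println(Arrays.toString(decode(g6, template))); // prints "[0, 0, 0]"
    System.out.println(Arrays.toString(decode(g7, template))); // prints "[0, 0, 0]"
  }
}
```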

29.6.4 The Process

In a simple Genetic Algorithm, building blocks are identified and recombined simultaneously,
which leads to a race between recombination and selection [1196]. In the messy GA [1083,
1084], this race is avoided by separating the evolutionary process into two stages:

1. In the primordial phase, building blocks are identified. In the original conception of
the messy GA, all possible building blocks of a particular order k are generated. Via
selection, the best ones are identified and spread in the population.
2. These building blocks are recombined with the cut and splice operators in the subse-
quent juxtapositional phase.

Because of its complexity, the original mGA needed a bootstrap phase in order to identify the
order-k building blocks, which required identifying the order-(k − 1) blocks first. This
bootstrapping was done by applying the primordial and juxtapositional phases for all orders
from 1 to k − 1. This process was later improved by using a probabilistically complete
initialization algorithm [1087] instead.

29.7 Random Keys

In Section 29.3.1.2 and 29.3.3, we introduce operators for processing strings which directly
represent permutations of the elements from a given set S = {s0 , s1 , .., sn−1 }. This direct
representation is achieved by using a search space G which contains the n-dimensional vectors
over the natural numbers from 0 to n − 1, i. e., G = [0..n − 1]n . The gene g [i] at locus i of
the genotype g ∈ G hence has a value j = g [i] ∈ 0..n − 1 and specifies that element sj should be
at position i in the permutation, as defined in the genotype-phenotype mapping introduced
in Equation 29.6 on page 329.
It is quite clear that every number in 0..n − 1 must occur exactly once in each genotype.
Performing a simple single-point crossover on two different genotypes is hence not feasi-
ble. The two genotypes g1 = (0, 1, 2, 4, 3) and g2 = (3, 1, 2, 4, 0) could lead to the offspring
(0, 1, 2, 4, 0) or (3, 1, 2, 4, 3), both of which contain one number twice while missing an-
other one. We can therefore apply neither Algorithm 29.6 nor Algorithm 29.7 to such genotypes.
The Random Keys encoding defined by Bean [236] is a simple way to circumvent
this problem. It introduces a genotype-phenotype mapping which translates n-dimensional
vectors of real numbers (G ⊆ Rn ) to permutations of length n. Each gene g [i] of a genotype
g with i ∈ 0..n − 1 is thus a real number or, most commonly, an element of the subset [0, 1]
of the real numbers. The gene at locus i stands for the element si . The permutation of these
elements, i. e., the position at which the element si appears in the phenotype, is defined by
the position of the gene g [i] in the sorted list of all genes. In Algorithm 29.8, we present
an algorithm which performs the Random Keys genotype-phenotype mapping. An example
implementation of the algorithm in Java is given in Listing 56.52 on page 834.

Algorithm 29.8: x ←− gpmRK (g, S)


Input: g ∈ G ⊆ Rn : the genotype
Input: S = {s0 , s1 , .., sn−1 }: The set of elements in the permutation. Often S ⊂ N0
and si = i
Data: i: a counter variable
Data: j: the position index of the allele of the ith gene
Data: g ′ : a temporary genotype
Output: x ∈ X = Π(S): the corresponding phenotype
1 begin
2 g ′ ←− ()
3 x ←− ()
4 for i ←− 0 up to len(g) − 1 do
5 j ←− searchItemAS(g [i], g ′ )
6 if j < 0 then j ←− ((−j) − 1)
7 g ′ ←− insertItem(g ′ , j, g [i])
8 x ←− insertItem(x, j, sj )
9 return x
In other words, the genotype g3 = (0.2, 0.45, 0.17, 0.84, 0.59) would be translated to the
phenotype x3 = (s2 , s0 , s1 , s4 , s3 ), since 0.17 (i. e., g3 [2]) is the smallest number and hence,
s2 comes first. The next smallest number is 0.2, the value of the gene at locus 0. Hence, s0
will occur at the second place of the phenotype, and so on. Finally, the gene at locus 3 has
the largest allele (g3 [3] = 0.84) and thus, s3 is the last element in the permutation.
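A compact way to realize this mapping, as an alternative to the searchItemAS/insertItem formulation of Algorithm 29.8, is to sort the loci by their keys (a sketch of ours, not the book's Listing 56.52):

```java
import java.util.Arrays;
import java.util.Comparator;

/** A compact sketch of the Random Keys genotype-phenotype mapping:
 *  element s_i is placed at the rank of its key g[i] in the sorted key list. */
public class RandomKeysSketch {

  /** Returns the permutation of the indices 0..n-1 encoded by the keys. */
  static int[] decode(double[] keys) {
    Integer[] loci = new Integer[keys.length];
    for (int i = 0; i < loci.length; i++) loci[i] = i;
    // sort the loci by their key values; Arrays.sort is stable for objects,
    // so ties are broken by the original locus order
    Arrays.sort(loci, Comparator.comparingDouble((Integer i) -> keys[i]));
    int[] permutation = new int[keys.length];
    for (int i = 0; i < loci.length; i++) permutation[i] = loci[i];
    return permutation;
  }

  public static void main(String[] args) {
    double[] g3 = {0.2, 0.45, 0.17, 0.84, 0.59}; // the example from the text
    System.out.println(Arrays.toString(decode(g3))); // prints "[2, 0, 1, 4, 3]"
  }
}
```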
With this genotype-phenotype mapping, we can translate a combinatorial problem where
a given set of elements must be ordered into a numerical optimization problem defined over
a continuous domain. Bean [236] used the Random Keys method to solve Vehicle Routing
Problems, Traveling Salesman Problems, quadratic assignment problems, and scheduling and
resource allocation problems. Snyder and Daskin [2540] combine a Genetic Algorithm, the
Random Keys encoding, and a local search method to likewise tackle generalized Traveling
Salesman Problems.
29.8 General Information on Genetic Algorithms

29.8.1 Applications and Examples

Table 29.1: Applications and Examples of Genetic Algorithms.


Area References
Art [1953, 2497, 2529]
Chemistry [214, 558, 668, 2729]
Combinatorial Problems [16, 41, 65, 81, 190, 236, 237, 240, 382, 402, 429, 436, 527,
573, 629, 640, 654, 765, 978, 1126, 1161, 1270, 1440, 1442,
1446, 1447, 1460, 1471, 1534, 1538, 1558, 1581, 1686, 1706,
1709, 1737, 1756, 1757, 1779–1781, 1896, 1969, 1971, 2019,
2075, 2267, 2324, 2338, 2373, 2374, 2472, 2473, 2540, 2656,
2658, 2686, 2713, 2732, 2795, 2796, 2865, 2901, 2923, 3024,
3044, 3062, 3084]
Computer Graphics [56, 466, 511, 541, 757, 2732]
Control [541, 1074]
Cooperation and Teamwork [2922]
Data Mining [633, 683, 717, 911, 973, 994, 1081, 1082, 1105, 1241, 1487,
1581, 1906, 1910, 2147, 2148, 2168, 2643, 2819, 2894, 2902,
3076]
Databases [640, 1269, 1270, 1706, 1779–1781, 2658, 2795, 2796, 2865,
3015, 3044, 3062, 3063]
Digital Technology [240, 575, 789]
Distributed Algorithms and Systems [15, 71, 335, 410, 541, 629, 721, 769, 872, 896, 1291, 1488,
1537, 1538, 1558, 1636, 1657, 1750, 1840, 1935, 1982, 2068,
2113, 2114, 2177, 2338, 2370, 2498, 2656, 2720, 2729, 2796,
2901, 2909, 2922, 2950, 2989, 3002, 3076]
E-Learning [71, 721, 1284, 3076]
Economics and Finances [1060, 1906]
Engineering [56, 240, 303, 355, 410, 519, 532, 556, 575, 640, 757, 769, 789,
1211, 1269, 1450, 1488, 1494, 1636, 1640, 1657, 1704, 1705,
1725, 1763–1765, 1767–1770, 1829, 1840, 1894, 1982, 2018,
2163, 2178, 2232, 2250, 2270, 2498, 2656, 2729, 2732, 2814,
2959, 3002, 3055, 3056]
Function Optimization [103, 169, 409, 610, 711, 713, 714, 735, 1080, 1258, 1709,
1841, 2128, 2324, 2933, 2981, 3079]
Games [998]
Graph Theory [15, 410, 556, 575, 629, 640, 769, 872, 896, 1284, 1450, 1471,
1488, 1558, 1636, 1657, 1750, 1767–1770, 1840, 1935, 1982,
2068, 2113, 2114, 2177, 2338, 2370, 2372, 2374, 2498, 2655,
2656, 2729, 2814, 2950, 3002]
Healthcare [2022]
Logistics [41, 65, 81, 190, 236, 382, 402, 436, 527, 978, 1126, 1442,
1446, 1447, 1581, 1704, 1705, 1709, 1756, 1896, 1971, 2075,
2270, 2324, 2540, 2686, 2713, 2720, 3024, 3084]
Mathematics [240, 520, 652, 998, 2284, 2285, 2416]
Multi-Agent Systems [1494]
Physics [532, 575]
Prediction [1285]
Security [1725]
Software [620, 652, 790, 998, 2901]
Theorem Proving and Automatic Verification [640]
Wireless Communication [154, 532, 556, 575, 640, 679, 826, 1284, 1450, 1494, 1535,
1767–1770, 1796, 2250, 2371, 2372, 2374, 2655, 2814]

29.8.2 Books

1. Handbook of Evolutionary Computation [171]


2. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163]
3. Evolutionary Computation 1: Basic Algorithms and Operators [174]
4. Genetic Algorithms and Genetic Programming at Stanford [1601]
5. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
6. Genetic Algorithms and Simulated Annealing [694]
7. Handbook of Genetic Algorithms [695]
8. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications
to Biology, Control, and Artificial Intelligence [1250]
9. Practical Handbook of Genetic Algorithms: New Frontiers [522]
10. Genetic and Evolutionary Computation for Image Processing and Analysis [466]
11. Evolutionary Computation: The Fossil Record [939]
12. Genetic Algorithms for Applied CAD Problems [1640]
13. Extending the Scalability of Linkage Learning Genetic Algorithms – Theory & Practice [552]
14. Efficient and Accurate Parallel Genetic Algorithms [477]
15. Electromagnetic Optimization by Genetic Algorithms [2250]
16. Genetic Fuzzy Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases [634]
17. Genetic Algorithms Reference – Volume 1 – Crossover for Single-Objective Numerical Opti-
mization Problems [1159]
18. Foundations of Learning Classifier Systems [443]
19. Practical Handbook of Genetic Algorithms: Complex Coding Systems [523]
20. Evolutionary Computation: A Unified Approach [715]
21. Genetic Algorithms in Search, Optimization, and Machine Learning [1075]
22. Introduction to Stochastic Search and Optimization [2561]
23. Advances in Evolutionary Algorithms [1581]
24. Representations for Genetic and Evolutionary Algorithms [2338]
25. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
26. Evaluation of the Effectiveness of Genetic Algorithms in Combinatorial Optimization [2923]
27. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
28. Evolutionäre Algorithmen [2879]
29. Genetic Algorithm and its Application (Yíchuán Suànfǎ Jí Qí Yìngyòng) [541]
30. Advances in Evolutionary Algorithms – Theory, Design and Practice [41]
31. Cellular Genetic Algorithms [66]
32. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications
to Biology, Control, and Artificial Intelligence [1252]
33. An Introduction to Genetic Algorithms [1912]
34. Genetic Algorithms + Data Structures = Evolution Programs [1886]
35. Practical Handbook of Genetic Algorithms: Applications [521]
29.8.3 Conferences and Workshops

Table 29.2: Conferences and Workshops on Genetic Algorithms.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
ICGA: International Conference on Genetic Algorithms and their Applications
History: 1997/07: East Lansing, MI, USA, see [166]
1995/07: Pittsburgh, PA, USA, see [883]
1993/07: Urbana-Champaign, IL, USA, see [969]
1991/07: San Diego, CA, USA, see [254]
1989/06: Fairfax, VA, USA, see [2414]
1987/07: Cambridge, MA, USA, see [1132]
1985/06: Pittsburgh, PA, USA, see [1131]
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
FOGA: Workshop on Foundations of Genetic Algorithms
History: 2011/01: Schwarzenberg, Austria, see [301]
2009/01: Orlando, FL, USA, see [29]
2005/01: Aizu-Wakamatsu City, Japan, see [2983]
2002/09: Torremolinos, Spain, see [719]
2001/07: Charlottesville, VA, USA, see [2565]
1998/07: Madison, WI, USA, see [208]
1996/08: San Diego, CA, USA, see [256]
1994/07: Estes Park, CO, USA, see [2932]
1992/07: Vail, CO, USA, see [2927]
1990/07: Bloomington, IN, USA, see [2562]
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
EUROGEN: Evolutionary Methods for Design Optimization and Control
History: 2007/06: Jyväskylä, Finland, see [2751]
2005/09: Munich, Bavaria, Germany, see [2419]
2003/09: Barcelona, Catalonia, Spain, see [216]
2001/09: Athens, Greece, see [1053]
1999/05: Jyväskylä, Finland, see [1894]
1997/11: Trieste, Italy, see [2232]
1995/12: Las Palmas de Gran Canaria, Spain, see [2959]
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MENDEL: International Mendel Conference on Genetic Algorithms
History: 2009/06: Brno, Czech Republic, see [411, 412]
2007/09: Prague, Czech Republic, see [2107]
2006/05: Brno, Czech Republic, see [413]
2005/06: Brno, Czech Republic, see [421]
2004/06: Brno, Czech Republic, see [420]
2003/06: Brno, Czech Republic, see [1842]
2002/06: Brno, Czech Republic, see [419]
2001/06: Brno, Czech Republic, see [2522]
2000/06: Brno, Czech Republic, see [2106]
1998/06: Brno, Czech Republic, see [417]
1997/06: Brno, Czech Republic, see [416, 418]
1996/06: Brno, Czech Republic, see [415]
1995/09: Brno, Czech Republic, see [414]
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GALESIA: International Conference on Genetic Algorithms in Engineering Systems
History: 1997/09: Glasgow, Scotland, UK, see [3056]
1995/09: Sheffield, UK, see [3055]
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
COGANN: International Workshop on Combinations of Genetic Algorithms and Neural Networks
History: 1992/06: Baltimore, MD, USA, see [2415]
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
Later continued as NWGA.
History: 1995/06: Vaasa, Finland, see [59]
1994/03: Vaasa, Finland, see [58]
1992/11: Espoo, Finland, see [57]
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NWGA: Nordic Workshop on Genetic Algorithms
History: 1998/07: Trondheim, Norway, see [62]
1997/08: Helsinki, Finland, see [61]
1996/08: Vaasa, Finland, see [60]
1995/06: Vaasa, Finland, see [59]
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

29.8.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T29

78. Perform the same experiments as in Task 64 on page 238 (and 68) with a Genetic
Algorithm. Use a bit-string based search space and utilize an appropriate genotype-
phenotype mapping.
[40 points]

79. Perform the same experiments as in Task 64 on page 238 (and 68 and 78) with a
Genetic Algorithm. Use a real-vector based search space. Implement an appropriate
crossover operator.
[40 points]

80. Compare your results from Task 64 on page 238, 68, and 78. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for all three tasks? Try to give
reasons for your results.
[10 points]

81. Perform the same experiments as in Task 67 on page 238 (and 70) with a Genetic
Algorithm. If you use a permutation-based search space, you may re-use the nullary
and unary search operations we have already developed for this task. Yet, you need
to define a recombination operation which does not violate the permutation character
of the genotypes.
[40 points]

82. Compare your results from Task 67 on page 238, 70 and 81. What are the differences?
What are the similarities? Did you use different experimental settings? If so, what
happens if you use the same experimental settings for both tasks? Try to give reasons
for your results. Put your explanations into the context of the discussions provided
in Example E17.4 on page 181.
[10 points]

83. Try solving the bin packing problem again, this time by reproducing the experiments
given by Khuri, Schütz, and Heitkötter in [1534]. Compare the results with the results
listed in that section and obtained by you in Task 70. Which method is better?
Provide reasons for your observation.
[50 points]

84. In Paragraph 56.2.1.2.1 on page 801, we provide a Java implementation of some of the
standard operators of Genetic Algorithms based on arrays of boolean. Obviously, using
boolean[] is a bit inefficient, since it wastes memory: each boolean will take up at least
1 byte, probably even 4, 8, or maybe even 16 bytes. Since it only represents a single bit,
i. e., one eighth of a byte, it would be more economical to represent bit strings as arrays
of int or long and to access specific bits via bit arithmetic (e.g., &, |, ˆ, <<, or >>>).
Please re-implement the operators from Paragraph 56.2.1.2.1 on page 801 in a more
efficient way, by using either byte[], int[], or long[], or a specific class such as
java.util.BitSet for storing the bit strings instead of boolean[].
[30 points]
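As a hint for this task (a sketch of ours, not the reference solution), single bits in a packed long[] can be accessed as follows:

```java
/** A sketch of bit-string storage in a packed long[] instead of boolean[]. */
public class PackedBits {
  final long[] words;
  final int length;

  PackedBits(int length) {
    this.length = length;
    this.words = new long[(length + 63) >>> 6]; // 64 bits per long word
  }

  /** Reads bit i: select the word, shift the bit to position 0, mask it. */
  boolean get(int i) {
    return ((words[i >>> 6] >>> (i & 63)) & 1L) != 0L;
  }

  /** Writes bit i by OR-ing a mask in or AND-ing its complement out. */
  void set(int i, boolean value) {
    if (value) words[i >>> 6] |= (1L << (i & 63));
    else       words[i >>> 6] &= ~(1L << (i & 63));
  }

  /** Toggles bit i – the core of a bit-flip mutation operator. */
  void flip(int i) {
    words[i >>> 6] ^= (1L << (i & 63));
  }

  public static void main(String[] args) {
    PackedBits b = new PackedBits(128);
    b.set(100, true);
    System.out.println(b.get(100)); // prints "true"
  }
}
```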
Chapter 30
Evolution Strategies

30.1 Introduction
Evolution Strategies1 (ES), introduced by Rechenberg [2277, 2278, 2279] and Schwefel [2433,
2434, 2435], are a heuristic optimization technique based on the ideas of adaptation and
evolution, a special form of Evolutionary Algorithms [170, 300, 302, 1220, 2277–2279, 2437].
The first Evolution Strategies were quite similar to Hill Climbing methods. They did not
yet focus on evolving real-valued, numerical vectors and only featured two rules for finding
good solutions: (a) slightly modify one candidate solution and (b) keep it only if it
is at least as good as its parent. Since there was only one parent and one offspring in this
form of Evolution Strategy, this early method more or less resembles the (1 + 1) population
discussed in Paragraph 28.1.4.3.3 on page 266. From there on, (µ + λ) and (µ, λ) Evolution
Strategies were developed – a step into a research direction which was not very welcome
back in the 1960s [302].
The search spaces of today's Evolution Strategies normally are subsets of the Rn , but
bit strings or integer strings are common as well [302]. Here, we will focus mainly on the
former, the real-vector based search spaces. In this case, the problem space is often identical
with the search space, i. e., X = G ⊆ Rn . ESs normally follow one of the population handling
methods classified in the µ-λ notation given in Section 28.1.4.3.
Mutation and selection are the primary operators, while recombination is
used less often in Evolution Strategies. Most often, normally distributed random numbers
are used for mutation. The parameter of the mutation is the standard deviation of these
random numbers. The Evolution Strategy may either
1. maintain a single standard deviation parameter σ and use identical normal dis-
tributions for generating the random numbers added to each element of the solution
vectors,
2. use a separate standard deviation (from a standard deviation vector σ) for each element
of the genotypes, i. e., create the random numbers for mutation from different normal
distributions in order to account for different strengths and resolutions of the decision
variables (see, for instance, Example E28.11), or
3. use a complete covariance matrix C for creating random vectors distributed in a hy-
perellipse, thus also taking into account pairwise interactions between the elements
of the solution vectors.
The standard deviations are governed by self-adaptation [1185, 1617, 1878] and may result
from a stochastic analysis of the elements in the population [1182–1184, 1436]. They are often
1 http://en.wikipedia.org/wiki/Evolution_strategy [accessed 2007-07-03]
treated as endogenous strategy parameters which can be stored directly in the individual
records p and evolve along with them [302]. The idea of self-adaptation can be justified
with the following thoughts. Assume we have a Hill Climbing algorithm which uses a fixed
mutation step size σ to modify the (single) genotype it currently has under investigation.
1. If a large value is used for σ, the Hill Climber will initially find interesting
areas in the search space quickly. However, due to its large step size, it will often jump into
and out of smaller, highly fit areas without being able to investigate them thoroughly.
2. If a small value is used for σ, the Hill Climber will progress very slowly. However, once
it finds the basin of attraction of an optimum, it will be able to trace that optimum
and deliver good solutions.
It thus makes sense to adapt the step width of the mutation operation: starting with a larger
step size, we may reduce it while approaching the global optimum, as we will set out in
Section 30.5. The parameters µ and λ, on the other hand, are called exogenous parameters
and are usually kept constant during the evolutionary optimization process [302].
We can describe the idea of endogenous parameters for governing the search step width
as follows. Evolution Strategies are Evolutionary Algorithms which usually add the step
length vector as endogenous information to the individuals. This vector undergoes selec-
tion, recombination, and mutation together with the individual. The idea is that the EA
selects individuals which are good. By doing so, it also selects the step widths that led to
their creation. These step widths can be considered good since they were responsible for
creating good individuals. Bad step widths, on the other hand, will lead to the creation of
inferior candidate solutions and thus perish together with them during selection. In other
words, the adaptation of the step length of the search operations is governed by the same
selection and reproduction mechanisms which the Evolutionary Algorithm applies to find
good solutions. Since we expect these mechanisms to work for this purpose, it makes sense
to assume that they can also do well in governing the adaptation of the search steps. We can
further expect that initially, larger steps are favored by the evolution, which help the algo-
rithm to explore wide areas of the search space. Later, when the basin(s) of attraction of
the optimum/optima are reached, only small search steps can lead to improvements in the
objective values of the candidate solutions. Hence, individuals created by search operations
which perform small modifications will be selected. These individuals will hold endogenous
information which leads to small step sizes. Mutation and recombination will lead to the
construction of endogenous information which favors smaller and smaller search steps – the
optimization process converges.
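The self-adaptation scheme sketched above can be condensed into a tiny (1, λ) Evolution Strategy on the sphere function. All names and parameter values below are illustrative choices of ours, not taken from the book's reference implementation; for robustness, the sketch additionally remembers the best-ever solution, which a pure comma strategy would not do:

```java
import java.util.Arrays;
import java.util.Random;

/** A sketch of sigma-self-adaptation in a (1, lambda) Evolution Strategy
 *  on the sphere function. */
public class SelfAdaptiveES {
  static final Random RAND = new Random(42);

  /** f(x) = sum of x_i^2; the global optimum is the zero vector. */
  static double sphere(double[] x) {
    double s = 0.0;
    for (double v : x) s += v * v;
    return s;
  }

  static double[] optimize(int n, int lambda, int generations) {
    double[] parent = new double[n];
    Arrays.fill(parent, 5.0);               // start away from the optimum
    double parentSigma = 1.0;               // endogenous strategy parameter
    double tau = 1.0 / Math.sqrt(n);        // log-normal learning rate
    double[] bestEver = parent.clone();     // we additionally remember the
    double bestEverF = sphere(parent);      // best-ever candidate solution

    for (int g = 0; g < generations; g++) {
      double[] best = null;
      double bestSigma = 0.0, bestF = Double.POSITIVE_INFINITY;
      for (int k = 0; k < lambda; k++) {
        // 1. mutate the endogenous step size first ...
        double sigma = parentSigma * Math.exp(tau * RAND.nextGaussian());
        // 2. ... then use it to mutate the object variables
        double[] child = parent.clone();
        for (int i = 0; i < n; i++) child[i] += sigma * RAND.nextGaussian();
        double f = sphere(child);
        if (f < bestF) { bestF = f; best = child; bestSigma = sigma; }
      }
      parent = best;                        // comma selection: the best of the
      parentSigma = bestSigma;              // lambda offspring survives and
      if (bestF < bestEverF) {              // carries its own sigma along
        bestEverF = bestF;
        bestEver = best.clone();
      }
    }
    return bestEver;
  }

  public static void main(String[] args) {
    double[] x = optimize(5, 10, 200);
    System.out.println(sphere(x)); // typically a value close to zero
  }
}
```

Typically, σ first stays large while the search races toward the basin of attraction and then shrinks as only small steps still yield improvements, which mirrors the convergence argument above.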

30.2 Algorithm
In Algorithm 30.1 (and in the corresponding Java implementation in Listing 56.17 on
page 760), we present the basic single-objective Evolution Strategy algorithm as specified
by Beyer and Schwefel [302]. The algorithm can realize the two basic strategies (µ/ρ, λ)
(Paragraph 28.1.4.3.6) and (µ/ρ + λ) (Paragraph 28.1.4.3.7). These general strategies also
encompass the (µ, λ) ≡ (µ/1, λ) and (µ + λ) ≡ (µ/1 + λ) population treatments.
The algorithm starts with the creation of a population of µ random individuals in Line 3,
maybe by using a procedure similar to the Java code given in Listing 56.24. In each
generation, first a genotype-phenotype mapping may be performed (Line 6) and then the
objective function is evaluated (Line 7).
In every generation (except the first one with t = 1), there are λ individuals in the
population pop. We therefore have to select µ of them into the mating pool mate. The
way in which this selection is performed depends on the type of the Evolution Strategy.
In a (µ/ρ, λ) strategy, only the individuals of generation t in pop(t) are considered (Line 13). If
the Evolution Strategy is a (µ/ρ + λ) strategy, then also the previously selected individuals

Algorithm 30.1: X ←− generalES(f, µ, λ, ρ)


Input: f: the objective/fitness function
Input: µ: the mating pool size
Input: λ: the number of offspring
Input: ρ: the number of parents
Input: [implicit] strategy: the population strategy, either (µ/ρ, λ) or (µ/ρ + λ)
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: P : the pool of ρ parents for a single offspring
Data: pn : a single offspring
Data: g ′ , w′ : temporary variables
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found
1 begin
2 t ←− 1
3 pop(t = 1) ←− createPop(µ)
4 continue ←− true
5 while continue do
6 pop(t) ←− performGPM(pop(t) , gpm)
7 pop(t) ←− computeObjectives(pop(t) , f)
8 if terminationCriterion then
9 continue ←− false
10 else
11 if t > 1 then
12 if strategy = (µ/ρ, λ) then
13 mate(t) ←− truncationSelectionw (pop(t) , f, µ)
14 else
15 mate(t) ←− truncationSelectionw (pop(t) ∪ mate(t − 1) , f, µ)
16 else
17 mate(t) ←− pop(t)
18 pop(t + 1) ←− ∅
19 for i ←− 1 up to λ do
20 P ←− selectionw (mate(t) , f, ρ)
21 pn ←− ∅
22 g ′ ←− recombineG (p.g ∀p ∈ P )
23 w′ ←− recombineW (p.w ∀p ∈ P )
24 pn .w ←− mutateW (w′ )
25 pn .g ←− mutateG (g ′ , pn .w)
26 pop(t + 1) ←− addItem(pop(t + 1) , pn )
27 t ←− t + 1
28 return extractPhenotypes(extractBest(pop(t) ∪ pop(t − 1) ∪ . . . , f))
from mate(t − 1) can be selected. In either case, the selection method chosen for this is
usually truncation selection without replacement, i. e., each candidate solution can be selected at
most once.
After the mating pool mate has been filled, the λ new individuals can be built. In a
(µ/ρ +, λ) strategy, there are ρ parents for each offspring. For each of the λ offspring pn to
be constructed, we therefore first select ρ parents into a parent pool P in Line 20. This
selection is usually a uniform random process which does not consider the objective values.
Furthermore, again a selection method without replacement is generally used.
The genotype pn .g of the offspring pn is then created by recombining the genotypes
of all of the parents with the operator “recombineG ” (Line 22). This genotype, a vector,
may, for instance, be the arithmetic mean of all parent genotype vectors. The endogenous
information pn .w ∈ W to be stored in pn is created in a similar fashion with the operator
“recombineW ” in Line 23. pn .w could be the covariance matrix of all parent genotype vectors.
In the next step (Line 24), this information is modified, i. e., mutated. This mutated
information record is then used to mutate the newly created genotype in Line 25 with
the operator “mutate”. Here, a random number normally distributed according to the
covariance stored in pn .w could be added to pn .g, for example. The order of first mutating
the information record and then the genotype is important for the selection round in the
following generation, since we want information records which create good individuals to be
selected. However, if the information was mutated after the genotype was created, it would
not be clear whether its features are still good – after mutation, it might have become bad.
Finally, the new individual can be added to the next population pop(t + 1) in Line 26 and a
new generation can begin. The algorithm should, during all the iterations, keep track of the
best candidate solutions found so far. In single-objective optimization, most often a single
result will be returned, but even then, a whole set X of equally good candidate solutions
may be discovered. The optimizer can maintain such a set during the generations by using
an archive and pruning techniques. For the sake of simplicity, we sketched that the best
solutions found in all populations over all generations would be returned in Line 28 which
symbolizes such bookkeeping. Of course, no one would implement it like this in reality, but
in order to keep the algorithm definition a bit smaller and to not have too many variables,
we chose this notation.
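To illustrate the environmental selection step described above, the following Java sketch contrasts the comma and plus strategies. It is a simplified sketch assuming genotypes are plain double[] vectors; the class and method names are our own, not those of the book's Listing 56.17.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.ToDoubleFunction;

/** Sketch of the environmental selection step of Algorithm 30.1 (names are ours). */
public class ESSelection {

  /** Truncation selection without replacement: keep the mu best (smallest f) individuals. */
  public static List<double[]> truncationSelection(
      List<double[]> pool, ToDoubleFunction<double[]> f, int mu) {
    List<double[]> sorted = new ArrayList<>(pool);
    sorted.sort(Comparator.comparingDouble(f)); // minimization: best first
    return new ArrayList<>(sorted.subList(0, mu));
  }

  /** Fills the mating pool: the comma strategy uses only pop(t), the plus strategy
   *  additionally considers the previous mating pool mate(t-1). */
  public static List<double[]> selectMatingPool(List<double[]> pop, List<double[]> prevMate,
      ToDoubleFunction<double[]> f, int mu, boolean plus) {
    List<double[]> pool = new ArrayList<>(pop);
    if (plus && prevMate != null) {
      pool.addAll(prevMate); // (mu/rho + lambda): parents may survive
    }
    return truncationSelection(pool, f, mu);
  }
}
```

Under the plus strategy, a parent that is better than all offspring stays in the mating pool; under the comma strategy it is discarded after one generation.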

30.3 Recombination
The standard recombination operators in Evolution Strategies basically resemble generaliza-
tions of uniform crossover and average crossover to ρ ≥ 1 parents for (µ/ρ +, λ) strategies.
If ρ > 2, we speak of multi-recombination.

30.3.1 Dominant Recombination


Dominant (discrete) recombination, as specified in Algorithm 30.2 and implemented in List-
ing 56.32 on page 798, is a generalization of uniform crossover (see Section 29.3.4.4 on
page 337) to ρ parents. Like in uniform crossover, for the ith locus in the child chro-
mosome g′ , we randomly pick an allele from the same locus from any of the ρ parents
gp,1 , gp,2 , . . . , gp,ρ .

30.3.2 Intermediate Recombination


The intermediate recombination operator can be considered to be an extension of average
crossover (discussed in Section 29.3.4.5 on page 337) to ρ parents. Here, the value of the ith
gene of the offspring g′ is the arithmetic mean of the alleles of genes at the same locus in
the parents. We specify this procedure as Algorithm 30.3 and implement it in Listing 56.33
on page 799.

Algorithm 30.2: g′ ←− recombineDiscrete(P )


Input: P : the list of parent individuals, len(P ) = ρ
Data: i: a counter variable
Data: p: a parent individual
Output: g′ : the offspring of the parents
1 begin
2 for i ←− 0 up to n − 1 do
3 p ←− P [⌊randomUni[0, ρ)⌋]
4 g′ [i] ←− p.g[i]
5 return g′

Algorithm 30.3: g′ ←− recombineIntermediate(P )


Input: P : the list of parent individuals, len(P ) = ρ
Data: i, j: counter variables
Data: s: the sum of the values
Data: p: a parent individual
Output: g′ : the offspring of the parents
1 begin
2 for i ←− 0 up to n − 1 do
3 s ←− 0
4 for j ←− 0 up to ρ − 1 do
5 p ←− P [j]
6 s ←− s + p.g[i]
7 g′ [i] ←− s/ρ
8 return g′
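The two recombination operators can be sketched in Java as follows. This is a simplified sketch that treats genotypes as plain double[] vectors; the class and method names are our own, not those of Listings 56.32 and 56.33.

```java
import java.util.Random;

/** Sketch of dominant (discrete) and intermediate recombination, cf. Alg. 30.2 and 30.3. */
public class ESRecombination {

  /** Dominant recombination: each gene is copied from a uniformly chosen parent. */
  public static double[] recombineDiscrete(double[][] parents, Random rnd) {
    int n = parents[0].length;
    double[] child = new double[n];
    for (int i = 0; i < n; i++) {
      child[i] = parents[rnd.nextInt(parents.length)][i]; // allele i from a random parent
    }
    return child;
  }

  /** Intermediate recombination: each gene is the arithmetic mean over all rho parents. */
  public static double[] recombineIntermediate(double[][] parents) {
    int n = parents[0].length;
    double[] child = new double[n];
    for (int i = 0; i < n; i++) {
      double s = 0.0;
      for (double[] p : parents) {
        s += p[i];
      }
      child[i] = s / parents.length; // divide by rho, the number of parents
    }
    return child;
  }
}
```

Note that each allele of the dominantly recombined child must occur at the same locus in at least one parent, while the intermediate child usually lies strictly inside the convex hull of the parents.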
30.4 Mutation

30.4.1 Using Normal Distributions

If Evolution Strategies are applied to real-valued (vector-based) search spaces G = Rn, (re-
combined intermediate) genotypes g′ are often mutated by using normally distributed ran-
dom numbers ∼ N(µ, σ²).

Figure 30.1: Sampling Results from Normal Distributions – (A) the selected parents and the
recombined genotype g′, (B) sampling with a single normal distribution, (C) sampling with
n univariate normal distributions, and (D) sampling with a multivariate normal distribution.

Normal distributions have the advantage that (a) small changes
are preferred, since values close to the expected value µ are the most likely to be drawn, but (b) basically, all numbers have
a non-zero probability of being drawn, i. e., all points in the search space are adjacent with
ε = 0 (see Definition D4.10 on page 85) and, finally, (c) the dispersion of the distribution can
easily be increased/decreased by increasing/decreasing the standard deviation σ. A uni-
variate normal distribution thus has two parameters, the mean µ and the (scalar) standard
deviation σ.
If a real-valued, intermediate genotype g′ ∈ R is mutated, we want the possible results
to be distributed around this genotype. Therefore, we have two equivalent options: either
we draw the numbers around the mean µ = g′, i. e., use randomNorm(g′, σ²), or we add a
normal distribution with mean 0 to the genotype, i. e., compute g′ + randomNorm(0, σ²). For the
multi-dimensional case, where the genotypes are vectors g′ ∈ Rn, the procedure is exactly
the same.
For the parameter σ of the normal distribution, however, things are a little bit more
complicated. This value is usually stored as an endogenous strategy parameter and
adapted over time.
Basically, there exist three simple methods for defining the second parameter of the
normal distribution(s) for mutation. They are illustrated exemplarily in Figure 30.1. In this
figure, we assume that the new genotype g′ is the mean vector of a set of selected parent
individuals (A) and we assume that the standard deviations are derived from these parents
as well. We illustrate the two-dimensional case where the parents which have been selected
based on their fitness reside in a (rotated) ellipse. Such a situation, where the selected
individuals form a shape in the search space which is not aligned with the main axes, is
quite common.

30.4.1.1 Single Normal Distribution

The simplest way to mutate a vector g′ ∈ Rn with a normal distribution is given in Algorithm 30.4
and implemented in Listing 56.25 on page 786. Here, the endogenous information
is a scalar number w = σ. For each dimension i ∈ 0..n − 1, a random value normally distributed
according to N(0, σ²) is drawn and added to g′[i].

Algorithm 30.4: g ←− mutateG,σ (g′, w = σ)

Input: g′ ∈ Rn : the input vector
Input: σ ∈ R: the standard deviation of the mutation
Data: i: a counter variable
Output: g ∈ Rn : the mutated version of g′
1 begin
2 for i ←− 0 up to n − 1 do
3 g[i] ←− g′[i] + σ · randomNorm(0, 1)
// Line 3 ≡ g[i] ←− randomNorm(g′[i], σ²)
4 return g
As illustrated in part (B) of Figure 30.1, this operation creates an unbiased distribution
of possible genotypes g. The mutants are symmetrically distributed around g′ . The dis-
tribution is isotropic, i. e., the surfaces of constant density form concentric spheres around
g′ .
The advantage of this mutation operator is that it only needs a single, scalar endogenous
parameter, w = σ. The drawback is that the resulting genotype distribution is always
isotropic and cannot reflect different scales along the axes. In Example E28.11 on page 271
we pointed out that the influence of the genes of a genotype on the objective values, i. e.,
the influence of the dimensions of the search space, may differ largely. Furthermore, it may
even change depending on which area of the search space is currently investigated by the
optimization process.
Figure 30.1 is the result of an experiment where we illustrate the distribution of a sampled
point cloud. For the normal distribution in part (B), we chose a standard deviation which
equals the mean of the standard deviations along both axes G0 and G1 .

30.4.1.2 n Univariate Normal Distributions

By using n different standard deviations, i. e., a vector σ, we can scale the cloud of points
along the axes of the coordinate system, as can be seen in part (C) of Figure 30.1. Algorithm 30.5
illustrates how this can be done and Listing 56.25 on page 786 again provides a
suitable implementation of the idea. Instead of sampling all dimensions from the same normal
distribution N(0, σ²), we now use n distinct univariate normal distributions N(0, σ[i]²)
with i ∈ 0..n − 1. For Figure 30.1 part (C), we sampled points with the standard deviations
of the parent points along the axes.

Algorithm 30.5: g ←− mutateG,σ (g′, w = σ)

Input: g′ ∈ Rn : the input vector
Input: σ ∈ Rn : the vector of standard deviations of the mutation
Data: i: a counter variable
Output: g ∈ Rn : the mutated version of g′
1 begin
2 for i ←− 0 up to n − 1 do
3 g[i] ←− g′[i] + σ[i] · randomNorm(0, 1)
// Line 3 ≡ g[i] ←− randomNorm(g′[i], σ[i]²)
4 return g
The endogenous parameter w = σ now has n dimensions as well. Even though the
problem of axis scale has been resolved, we still cannot sample an arbitrarily rotated cloud
of points – the main axes of the iso-density ellipsoids of the distribution are still always
parallel to the main axes of the coordinate system.
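Both variants can be sketched in Java as follows; they differ only in whether one σ is shared by all dimensions or each dimension carries its own. The class and method names are our own, not those of the book's Listing 56.25.

```java
import java.util.Random;

/** Sketch of Gaussian mutation with a single shared sigma (cf. Alg. 30.4) and with
 *  one sigma per dimension (cf. Alg. 30.5); names are ours. */
public class ESMutation {

  /** Isotropic mutation: g[i] = g'[i] + sigma * N(0,1) for every dimension i. */
  public static double[] mutateIsotropic(double[] g, double sigma, Random rnd) {
    double[] out = new double[g.length];
    for (int i = 0; i < g.length; i++) {
      out[i] = g[i] + sigma * rnd.nextGaussian();
    }
    return out;
  }

  /** Axis-parallel mutation: g[i] = g'[i] + sigma[i] * N(0,1), one strength per axis. */
  public static double[] mutateAxisParallel(double[] g, double[] sigma, Random rnd) {
    double[] out = new double[g.length];
    for (int i = 0; i < g.length; i++) {
      out[i] = g[i] + sigma[i] * rnd.nextGaussian();
    }
    return out;
  }
}
```

Setting a component of the σ vector to zero freezes the corresponding dimension, which makes the different axis scales directly visible.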

30.4.1.3 n-dimensional Multivariate Normal Distribution

Algorithm 30.6 shows how to mutate vectors g′ by adding numbers which
follow normal distributions with rotated iso-density ellipsoids. With a matrix
M, we scale and rotate standard-normally distributed random vectors t =
(randomNorm(0, 1), randomNorm(0, 1), . . . , randomNorm(0, 1))ᵀ. The result of the multiplication
Mt is then shifted by g′.
We are thus able to, for instance, sample points with distributions similar to those of the selected
parents, as sketched in Figure 30.1, part (D). The advantage of this sampling method is
evident: we can create points which are distributed similarly to a selected cloud of
parents and thus better represent the information gathered so far.

Algorithm 30.6: g ←− mutateG,M (g′, w = M)

Input: g′ ∈ Rn : the input vector
Input: M ∈ Rn×n : a scaling and rotation matrix
Data: i, j: counter variables
Data: t: a temporary vector
Output: g ∈ Rn : the mutated version of g′
1 begin
2 for i ←− 0 up to n − 1 do
3 t[i] ←− randomNorm(0, 1)
4 g ←− g′
// g ←− g + Mt
5 for i ←− 0 up to n − 1 do
6 for j ←− 0 up to n − 1 do
7 g[i] ←− g[i] + M[i, j] · t[j]
8 return g
There are also three disadvantages: (a) First, the size of the endogenous parameter w
increases to n(n + 1)/2 if G ⊆ Rn : the main diagonal of M has n elements, and the matrix elements
above and below the diagonal are the same and thus need to be stored only once. (b) The
second problem is that sampling now has a complexity of O(n²), as can easily be seen in
Algorithm 30.6. (c) Finally, obtaining M is another interesting question. As it turns out,
computing M has a higher complexity than using it for sampling. The matrix M can be
constructed from the covariance matrix C of the individuals selected for becoming parents
P. We therefore utilize Definition D53.41 on page 664, i. e., apply Equation 30.1, where ḡ[i]
denotes the mean of the values at locus i over all parents in P:

C[i, j] = (1/|P|) · ∑_{p∈P} (p.g[i] − ḡ[i]) (p.g[j] − ḡ[j])        (30.1)

We then can set, for example, the rotation and scaling matrix M to be the matrix of all
the normalized and scaled (n-dimensional) Eigenvectors Λi (C) : i ∈ 1..n of the covariance
matrix C. These vectors are normalized by dividing them by their length, i. e., by ||Λi (C)||₂.
Then, they are scaled with the square roots of the corresponding Eigenvalues λi (C). By
multiplying a vector of n independent standard normal distributions with the matrix of
normalized Eigenvectors, it is rotated to be aligned to the covariance matrix C. By scaling
the Eigenvectors with the square roots √λi (C) of the corresponding Eigenvalues, we stretch
the standard normal distributions from spheres to ellipsoids with the same shape as described
by the covariance matrix.

M = ( (√λi (C) / ||Λi (C)||₂) · Λi (C)   for i from 1 to n )        (30.2)

If C is a diagonal matrix, M with M[i, i] = √C[i, i] ∀i ∈ 0..n − 1 (and zeros elsewhere)
can be used. Otherwise, computing the Eigenvectors and Eigenvalues is only easy for dimensions
n ∈ {1, 2, 3} but becomes computationally intense for higher dimensions [1097].
If this last mutation operator is indeed applied with a matrix M directly derived from the
covariance matrix C estimated from the parent individuals, the Evolution Strategy basically
becomes a real-valued Estimation of Distribution Algorithm as introduced in Section 34.5
on page 442.
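For illustration, the correlated mutation g = g′ + Mt can be sketched in Java as follows. As a hedged alternative to the Eigen-decomposition described above, this sketch obtains M as the Cholesky factor of C, which equally satisfies MMᵀ = C and is simpler to compute; the class and method names are our own.

```java
import java.util.Random;

/** Sketch of correlated mutation g = g' + M t with t ~ N(0, I); here M is taken as
 *  the Cholesky factor of the covariance matrix C (an alternative to Eq. 30.2). */
public class ESCorrelatedMutation {

  /** Lower-triangular Cholesky factor M of a symmetric positive-definite matrix C,
   *  i.e., M * M^T = C. */
  public static double[][] cholesky(double[][] c) {
    int n = c.length;
    double[][] m = new double[n][n];
    for (int i = 0; i < n; i++) {
      for (int j = 0; j <= i; j++) {
        double s = c[i][j];
        for (int k = 0; k < j; k++) {
          s -= m[i][k] * m[j][k];
        }
        m[i][j] = (i == j) ? Math.sqrt(s) : s / m[j][j];
      }
    }
    return m;
  }

  /** Correlated mutation: draw a standard-normal vector t and add M t to g'. */
  public static double[] mutate(double[] g, double[][] m, Random rnd) {
    int n = g.length;
    double[] t = new double[n];
    for (int i = 0; i < n; i++) {
      t[i] = rnd.nextGaussian();
    }
    double[] out = g.clone();
    for (int i = 0; i < n; i++) {
      for (int j = 0; j < n; j++) {
        out[i] += m[i][j] * t[j]; // g <- g + Mt
      }
    }
    return out;
  }
}
```

Since Mt has covariance MMᵀ = C, the sampled offspring follow the rotated, scaled ellipsoids of the parent distribution, as in part (D) of Figure 30.1.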

30.5 Parameter Adaptation

In Section 30.4, we discussed different methods to mutate genotypes g′. The mutation
operators randomly pick points in the search space which are distributed around g′. The
distribution chosen for this purpose depends on endogenous strategy parameters such as the
scalar σ, the vector σ, and the matrix M.
One of the general, basic principles of metaheuristics is that the optimization process
should converge to areas of the search space which contain highly-fit individuals. At the
beginning, the best a metaheuristic algorithm can do is to uniformly sample the search
space. The more information it acquires by evaluating the candidate solutions, the smaller
the interesting area should become.
From this perspective, it makes sense to initially sample a wide area around the inter-
mediate genotypes g′ with mutation. Step by step, however, as the information gathered
increases, the dispersion of the mutation results should decrease. This change in dispersion
can only be achieved by a change in the parameters controlling it – the endogenous strategy
parameters of the ES. Thus, there must be a change in these parameters as well. Such a
change of the parameter values is called adaptation [302].
30.5.1 The 1/5th Rule

One of the most well-known adaptation rules for Evolution Strategies is the 1/5th Rule
which adapts the single parameter σ of the isotropic mutation operator introduced in Sec-
tion 30.4.1.1. Since σ defines the dispersion of the mutation results, we can refer to it as
mutation strength.
Beyer [300] and Rechenberg [2278] considered two key performance metrics in their works
on (1 + 1)-ESs: the success probability PS of the mutation operation and progress rate ϕ de-
scribing the expected distance gain towards the optimum. Like Beyer [300] and Rechenberg
[2278], let us discuss the behavior of these parameters on the Sphere function fRSO1 discussed
in Section 50.3.1.1 on page 581 for n → ∞.
fRSO1 (x = g) = ∑_{i=1}^{n} xi²   with G = X ⊆ Rn        (30.3)

The isotropic mutation operator samples points in a sphere around the intermediate genotype
g′. Since we consider the Sphere function as benchmark, g′ will always lie on an iso-fitness
sphere. All points which have a lower Euclidean distance to the coordinate center 0 than
g′ have a better objective value and all points with a larger distance have a worse objective
value.
Both PS and ϕ depend on the value of σ. If σ is very small, the chance PS of sampling
a point g with a smaller norm ||g||₂ than g′ approaches 0.5. However, the expected distance ϕ
gained towards the optimum then also becomes zero.

lim_{σ→0} PS = 0.5        (30.4)
lim_{σ→0} ϕ = 0        (30.5)

If σ → +∞, on the other hand, the success probability PS becomes zero since mutation will
likely jump over the iso-fitness sphere of g′ even if it samples towards the right direction.
Since PS is zero, the expected distance gain towards the optimum also disappears.

lim_{σ→+∞} PS = 0        (30.6)
lim_{σ→+∞} ϕ = 0        (30.7)

In between these two extreme cases, i. e., for 0 < σ < +∞, lies an area where both ϕ > 0
and 0 < PS < 0.5. The goal of adaptation is to have a high expected progress towards the
global optimum, i. e., to maximize ϕ. If ϕ is large, the Evolution Strategy will converge
quickly to the global optimum. σ is the parameter we can tune in order to achieve this goal.
PS can be estimated as feedback from the optimizer: the estimate of PS is the number of
times a mutated individual replaced its parent in the population divided by the total number of
performed mutations. Beyer and Schwefel [302] and Rechenberg [2278] found theoretically
that ϕ becomes maximal for values of PS in [0.18, 0.28] for different mathematical benchmark
functions. For larger as well as for lower values of PS , ϕ begins to decline. As a compromise,
they conceived the 1/5th Rule:

Definition D30.1 (1/5th Rule). In order to obtain nearly optimal (local) performance
of the (1 + 1)-ES with isotropic mutation in real-valued search spaces, tune the mutation
strength σ in such a way that the (estimated) success rate PS is about 1/5 [302].

PS is monotonically decreasing with rising σ, from lim_{σ→0} PS = 0.5 to lim_{σ→+∞} PS = 0.


This means if success probability PS is larger than 0.2, we need to increase the mutation
strength σ in order to maintain a near-optimal progress towards the optimum. If, on the

Algorithm 30.7: x ←− (1+1)-ES1/5 (f, L, a, σ0 )


Input: f: the objective/fitness function
Input: L > 1: the generation limit for adaptation
Input: a ∈ (0, 1): the adaptation value
Input: σ0 > 0: the initial mutation strength
Data: t: the generation counter
Data: p⋆ : the current individual
Data: pn : a single offspring
Data: PS ∈ [0, 1]: the estimated success probability of mutation
Data: continue: a Boolean variable used for termination detection
Output: x: the best element found
1 begin
2 t ←− 1
3 s ←− 0
4 σ ←− σ0 
5 p⋆.g ←− create
6 p⋆.x ←− gpm(p⋆.g)
7 pn ←− p⋆
8 continue ←− true
9 while continue do
10 pn .x ←− gpm(pn .g) 
11 if terminationCriterion then
12 continue ←− false
13 else
14 if f(pn .x) < f(p⋆.x) then
15 p⋆ ←− pn
16 s ←− s + 1
17 if (t mod L) = 0 then
18 PS ←− s/L
19 if PS < 0.2 then σ ←− σ ∗ a
20 else if PS > 0.2 then σ ←− σ/a
21 s ←− 0
22 pn .g ←− mutateG,σ (p⋆.g, σ)
23 t ←− t + 1
24 return p⋆.x
other hand, the fraction of accepted mutations falls below 0.2, the step size is too large and
σ must be reduced.
In Algorithm 30.7 we present a (1 + 1)-Evolution Strategy (1+1)-ES which implements
the 1/5th Rule. The algorithm estimates the success probability PS every L generations. If it
falls below 0.2, i. e., if the mutation steps are too long, the mutation strength σ is multiplied
with a factor a ∈ (0, 1), i. e., reduced. If the mutations are too weak and PS > 0.2, the
reverse operation is performed and the mutation strength is divided by a.
Beyer and Schwefel [302] propose that if the dimension n of the search space G ⊆ Rn is
sufficiently large (n ≥ 30), setting L = n would be a good choice. Schwefel [2435] suggests
keeping a in the range 0.85 < a < 1 for good results.
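As an illustration, a (1+1)-ES with the 1/5th Rule on the Sphere function from Equation 30.3 can be sketched in Java roughly as follows; the class and parameter names are our own, not those of the book's listings.

```java
import java.util.Random;

/** Sketch of a (1+1)-ES with 1/5th Rule adaptation (cf. Alg. 30.7), minimizing
 *  the Sphere function; names are ours. */
public class OneFifthRuleES {

  /** Sphere benchmark: f(x) = sum of x_i^2, minimal at the origin. */
  public static double sphere(double[] x) {
    double s = 0.0;
    for (double v : x) {
      s += v * v;
    }
    return s;
  }

  /** Runs the (1+1)-ES for a fixed evaluation budget; adapts sigma every L generations. */
  public static double[] run(double[] start, double sigma0, int L, double a,
      int budget, Random rnd) {
    double[] best = start.clone();
    double fBest = sphere(best);
    double sigma = sigma0;
    int successes = 0;
    for (int t = 1; t <= budget; t++) {
      double[] cand = best.clone();
      for (int i = 0; i < cand.length; i++) {
        cand[i] += sigma * rnd.nextGaussian(); // isotropic mutation
      }
      double fCand = sphere(cand);
      if (fCand < fBest) { // offspring replaces parent: a success
        best = cand;
        fBest = fCand;
        successes++;
      }
      if (t % L == 0) { // estimate P_S and adapt sigma
        double ps = successes / (double) L;
        if (ps < 0.2) {
          sigma *= a; // steps too long: shrink (a in (0,1))
        } else if (ps > 0.2) {
          sigma /= a; // steps too short: grow
        }
        successes = 0;
      }
    }
    return best;
  }
}
```

With, e.g., L = 10 and a = 0.9, the success-rate feedback steadily shrinks σ as the search approaches the optimum.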

30.5.2 Endogenous Parameters


In the introductory Section 30.1 as well as in the beginning of this section, we stated that
some of the control parameters of Evolution Strategies, such as the mutation strength, are
endogenous. So far, we gave little evidence why this actually is the case, what implications
this design has, and what it is good for. If we look back at the 1/5th Rule, we find
three drawbacks.
1. First, it only covers one special case. It is suitable for (1 + 1)-Evolution Strategies and
requires a certain level of causality in the search space. If the fitness landscape is too
rugged (see Chapter 14 on page 161), this adaptation mechanism may be misled by
the information gained from sampling.
2. Second, it adapts only one single strategy parameter σ and thus cannot support
more sophisticated mutation operators such as those discussed in Section 30.4.1.2 and
30.4.1.3.
3. Finally, the adapted parameter holds for the complete population. It does not allow the
optimization algorithm to trace down different local or global optima with potentially
different topology. (Which is not possible with a (1 + 1)-ES anyway, but could be done
with an extension of the rule to (µ + λ)-Evolution Strategies.)
In order to find a more general way to perform self-adaptation, the control parameters are
considered to be endogenous information of each individual in the population. This step
leads to three decisive changes in the adaptation process:
1. Instead of a global parameter set, the parameters are now local,
2. undergo selection together with the individuals owning them, and
3. are processed by reproduction operations.
As can be seen in the basic Evolution Strategy algorithm given in Algorithm 30.1 on page 361,
the endogenous strategy parameters p.w are responsible for the final individual creation step
(mutation). They are part of the individual p whose genotype p.g they helped building. If
the genotype p.g has a corresponding phenotype p.x = gpm(p.g) with a good objective
value f(p.x), it has a high chance of surviving the selection step. In this case, it will take
the endogenous parameters p.w which led to its creation with it. On the other hand, if p.x
has bad characteristics, it will perish and doom its genotype p.g and the control parameters
p.w as well. Mutation settings which were involved in the successful production of
good offspring will thus be selected and may lead to the creation of more good candidate
solutions.

30.5.2.1 Recombining the Mutation Strength (Vectors)


Beyer and Schwefel [302] state that mutating the strategy parameters can lead to fluctuations
which slow down the optimization process. The similarity-preserving effect of intermediate
recombination (see Section 30.3.2 on page 362 and Section 29.5.6 on page 346) can work
against this phenomenon, especially if many parents are used. For recombining the mutation
strengths, Beyer and Schwefel [302] hence propose to use Algorithm 30.3 on page 363 as well.
30.5. PARAMETER ADAPTATION 371
30.5.2.2 Mutating the Mutation Strength σ

The mutation of the single strategy parameter σ, the mutation strength, can be achieved
by multiplying it with a positive number ν. If 0 < ν < 1, the strength decreases and if ν > 1, the
mutation will become more volatile. The interesting question is how to create this number.
Rechenberg’s Two-Point Rule [2279] sets ν = α with 50% probability and to 1/α otherwise.
α > 1 is a fixed exogenous strategy parameter. We can then define the operation mutateW,σ
(for the recombined mutation strength σ′) given in Line 24 of Algorithm 30.1 on page 361
according to Equation 30.8.

mutateW,σ (σ′) = α·σ′ if randomUni[0, 1) < 0.5, and σ′/α otherwise        (30.8)

α = 1 + τ        (30.9)

According to Beyer [297], nearly optimal behavior of the Evolution Strategy can be achieved
if α is set according to Equation 30.9. The meaning of τ will be discussed below and values
for it will be given in Equation 30.11. The interesting thing about Equation 30.8 is that it
creates an extremely limited neighborhood of σ-values if we extend the concept of adjacency
from Definition D4.10 on page 85 to the space of endogenous information W. As a matter of fact,
only the values σ0 · α^i : i ∈ Z can be reached at all if the mutation strength is initialized
with σ0. Only the direct predecessor and successor of an element in this series can be found
in one step by mutation.
The ε = 0 adjacency for the strategy parameter can be extended to the whole R+ by
setting the multiplication parameter ν to e raised to the power of a normally distributed
number. Since all real numbers are ε = 0-adjacent under a normal distribution, all values
in R+ \ {0} are adjacent under eN (0,1) .

mutateW,σ (σ′) = σ′ · e^(τ·randomNorm(0,1))        (30.10)

τ ∝ 1/√n        (30.11)

The exogenous parameter τ is used to scale the standard deviation of the random numbers.
Beyer and Schwefel [302] suggest that it should be set to τ = 1/√n for problems with G ⊆ Rn. If the
fitness landscape is multimodal, τ = 1/√(2n) can be used.
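Both σ-mutation rules can be sketched in Java as follows; the class and method names are our own.

```java
import java.util.Random;

/** Sketch of the two sigma-mutation rules of Section 30.5.2.2 (names are ours). */
public class SigmaMutation {

  /** Rechenberg's Two-Point Rule: multiply sigma by alpha or 1/alpha with equal
   *  probability (alpha > 1), cf. Equation 30.8. */
  public static double twoPoint(double sigma, double alpha, Random rnd) {
    return rnd.nextDouble() < 0.5 ? alpha * sigma : sigma / alpha;
  }

  /** Log-normal rule: sigma' = sigma * exp(tau * N(0,1)); every value in R+ \ {0}
   *  is reachable, cf. Equation 30.10. */
  public static double logNormal(double sigma, double tau, Random rnd) {
    return sigma * Math.exp(tau * rnd.nextGaussian());
  }
}
```

The two-point rule can only produce the discrete ladder σ0·α^i, whereas the log-normal rule can reach any positive mutation strength.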

30.5.2.3 Mutating the Strength Vector σ

Algorithm 30.8 presents a way to mutate vectors σ′ which represent axis-parallel mutation
strengths. σ′ may be the result of recombining several parents. In Algorithm 30.8,
we first compute a general scaling factor ν which is log-normally distributed. Each field i of
the new mutation strength vector σ then is the old value σ′[i] multiplied with another log-
normally distributed random number and with ν. This idea has exemplarily been implemented
in Listing 56.26 on page 787.

Algorithm 30.8: σ ←− mutateW,σ (σ′)

Input: σ′ ∈ Rn : the old mutation strength vector
Data: i: a counter variable
Output: σ ∈ Rn : the new mutation strength vector
1 begin
2 ν ←− e^(τ0·randomNorm(0,1))
3 σ ←− 0
4 for i ←− 0 up to n − 1 do
5 σ[i] ←− ν · e^(τ·randomNorm(0,1)) · σ′[i]
6 return σ
τ0 = c/√(2n)        (30.12)

τ = c/√(2√n)        (30.13)

For σ ∈ Rn, the settings in Equation 30.12 and Equation 30.13 are recommended by Beyer
and Schwefel [302]. For a (10, 100)-Evolution Strategy, Schwefel [2437] further suggests to
use c = 1.
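Algorithm 30.8 might be sketched in Java as follows, with τ0 and τ passed in (e.g., set according to Equations 30.12 and 30.13); the class and method names are our own, not those of Listing 56.26.

```java
import java.util.Random;

/** Sketch of Algorithm 30.8: mutating a vector of mutation strengths (names are ours). */
public class StrengthVectorMutation {

  /** sigma[i] = nu * exp(tau * N(0,1)) * sigmaOld[i], where nu = exp(tau0 * N(0,1))
   *  is a log-normal scaling factor shared by all dimensions. */
  public static double[] mutate(double[] sigmaOld, double tau0, double tau, Random rnd) {
    double nu = Math.exp(tau0 * rnd.nextGaussian()); // shared log-normal factor
    double[] sigma = new double[sigmaOld.length];
    for (int i = 0; i < sigma.length; i++) {
      sigma[i] = nu * Math.exp(tau * rnd.nextGaussian()) * sigmaOld[i];
    }
    return sigma;
  }
}
```

The shared factor ν rescales the whole strength vector at once, while the per-dimension factors let the individual axis strengths drift apart; both factors are always positive, so the strengths stay positive.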
30.6 General Information on Evolution Strategies

30.6.1 Applications and Examples

Table 30.1: Applications and Examples of Evolution Strategies.


Area References
Chemistry [1573]
Combinatorial Problems [292, 299, 1617, 2604]
Computer Graphics [3087]
Digital Technology [3042]
Distributed Algorithms and Sys- [2683]
tems
Engineering [303, 778, 1135, 1586, 1881, 1894, 2163, 2232, 2266, 2279,
2433, 2434, 2683, 2936, 2937, 2959, 3042, 3087]
Function Optimization [169, 3087]
Graph Theory [2683]
Logistics [292, 299, 1586, 1617, 2279]
Mathematics [227, 302, 520, 778, 1182, 1183, 1186, 1187, 1436, 2279]
Prediction [1044]
Software [2279]

30.6.2 Books
1. Handbook of Evolutionary Computation [171]
2. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163]
3. Self-Adaptive Heuristics for Evolutionary Computation [1617]
4. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evo-
lution [2278]
5. Evolutionsstrategie ’94 [2279]
6. Evolution and Optimum Seeking [2437]
7. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
8. The Theory of Evolution Strategies [300]
9. Cybernetic Solution Path of an Experimental Problem [2277]
10. Numerical Optimization of Computer Models [2436]

30.6.3 Conferences and Workshops

Table 30.2: Conferences and Workshops on Evolution Strategies.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
EUROGEN: Evolutionary Methods for Design Optimization and Control
See Table 29.2 on page 354.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

30.6.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T30

85. Implement the simple Evolution Strategy Algorithm 30.7 on page 369 in a program-
ming language of your choice.
[40 points]

86. How does the parameter adaptation method in Algorithm 30.7 on page 369 differ from
the method applied in Algorithm 30.1 on page 361? What is the general difference be-
tween endogenous and exogenous parameters and the underlying ideas of adaptation?
[10 points]

87. How do the parameters λ, µ, and ρ of an Evolution Strategy relate to the population
size ps and the mating pool size mps used in general Evolutionary Algorithms? You
may use Section 28.1.4 on page 265 as a reference.
[10 points]

88. How does the n-ary domination recombination algorithm in Evolution Strategies
(see Section 30.3.1 on page 362) relate to the uniform crossover operator (see Sec-
tion 29.3.4.4 on page 337) in Genetic Algorithms? What are the similarities? What
are the differences?
[10 points]

89. Perform the same experiments as in Task 67 on page 238 (and Tasks 70 and 81 on page 357) with an Evolution Strategy using the Random Keys encoding given in Section 29.7 on page 349. In Section 57.1.2.1 on page 887, you can find the necessary
code for that. How does the Evolution Strategy perform in comparison with the other
algorithms and previous experimental results? What conclusions can we draw about
the utility of the Random Keys genotype-phenotype mapping?
[10 points]

90. Apply an Evolution Strategy using the Random Keys encoding given in Section 29.7
on page 349 to the Traveling Salesman Problem instance att48 taken from [2288] and
listed in Paragraph 57.1.2.2.1.
As already discussed in Example E2.2 on page 45, a solution to an n-city Traveling
Salesman Problem can be represented by a permutation of length n − 1. Such per-
mutations can be created and optimized in a way very similar to what we already did
with the bin packing problem a few times. Only the objective function needs to be
re-defined. In addition to the Evolution Strategy, also apply a simple Hill Climber to the same problem, using a method similar to what we did in Task 67 on page 238
and a representation according to Section 29.3.1.2 on page 329.
How does the Evolution Strategy perform in comparison with the Hill Climber on
this problem? What conclusions can we draw about the utility of the Random Keys
genotype-phenotype mapping? How close do the algorithms come to the optimal,
shortest path for this problem which has the length 10 628?
[50 points]
Chapter 31
Genetic Programming

31.1 Introduction

The term Genetic Programming1 (GP) [1220, 1602, 2199] has two possible mean-
ings. (a) First, it is often used to subsume all Evolutionary Algorithms that have tree data
structures as genotypes. (b) Second, it can also be defined as the set of all Evolutionary
Algorithms that breed programs, algorithms, and similar constructs. In this chapter, we
focus on the latter definition which still includes discussing tree-shaped genomes.

[Figure: a running Program (Process) computes output from input; samples of inputs and outputs are known, and the program itself is to be found with Genetic Programming]

Figure 31.1: Genetic Programming in the context of the IPO model.

The well-known input-processing-output model from computer science states that a run-
ning instance of a program uses its input information to compute and return output data.
In Genetic Programming, usually some inputs and corresponding output data samples are
known, can be produced, or simulated. The goal then is to find a program that connects
them or that exhibits some kind of desired behavior according to the specified situations, as
sketched in Figure 31.1.

31.1.1 History

The history of Genetic Programming [102] goes back to the early days of computer science
as sketched in Figure 31.2. In 1957, Friedberg [995] left the first footprints in this area by
using a learning algorithm to stepwise improve a program. The program was represented as
a sequence of instructions for a theoretical computer called Herman [995, 996]. Friedberg
did not use an evolutionary, population-based approach for searching the programs – the
idea of Evolutionary Computation had not been developed yet at that time, as you can see
1 http://en.wikipedia.org/wiki/Genetic_programming [accessed 2007-07-03]

Figure 31.2: Some important works on Genetic Programming illustrated on a time line: First Steps (Friedberg, 1958; Samuel, 1959); Evolutionary Programming (Fogel et al., 1966); Tree-based GP (Forsyth, 1981); Standard GP (Koza, 1988); Strongly-typed GP (Montana, 1993); RBGP (Weise, 2007); Linear GP (Nordin, 1994; Brameier/Banzhaf, 2001); String-to-Tree Mappings (Cramer, 1985); Gene Expression Programming (Ferreira, 2001); Grammars (Antonisse, 1990); GADS 1/2 (Paterson/Livesey, 1995); Grammatical Evolution (Ryan et al., 1995); LOGENPRO (Wong/Leung, 1995); TAG3P (Nguyen, 2004); PDGP (Poli, 1996); Cartesian GP (Miller/Thomson, 1998).

in Section 29.1.1 on page 325. Also, the computational capacity of the computers of that era was very limited and would not have allowed for experiments on program synthesis driven by a (population-based) Evolutionary Algorithm.
Around the same time, Samuel applied machine learning to the game of checkers and, by
doing so, created the world’s first self-learning program. In the future development section
of his 1959 paper [2383], he suggested that effort could be spent on allowing the (checkers) program to learn scoring polynomials – an activity which would be similar to symbolic regression. Yet, in his 1967 follow-up work [2385], he could not report any progress on this issue.
The Evolutionary Programming approach for evolving Finite State Machines by Fogel et al. [945] (see also Chapter 32 on page 413) dates back to 1966. In order to build predictors,
different forms of mutation (but no crossover) were used for creating offspring from successful
individuals.
Fourteen years later, the next generation of scientists began to look for ways to evolve
programs. New results were reported by Smith [2536] in his PhD thesis in 1980. Forsyth
[972] evolved trees denoting fully bracketed Boolean expressions for classification problems
in 1981 [972, 973, 975].
The mid-1980s were a very productive period for the development of Genetic Program-
ming. Cramer [652] applied a Genetic Algorithm in order to evolve a program written in
a subset of the programming language PL in 1985. This GA used a string of integers as
genome and employed a genotype-phenotype mapping that recursively transformed them
into program trees. We discuss this approach in ?? on page ??.
At the same time, the undergraduate student Schmidhuber [2420] also used a Genetic Al-
gorithm to evolve programs at the Siemens AG. He re-implemented his approach in Prolog
at the TU Munich in 1987 [790, 2420]. Hicklin [1230] and Fujiki [999] implemented reproduction operations for manipulating the if-then clauses of LISP programs consisting of single COND-statements. With this approach, Fujiki and Dickinson [998] evolved strategies for playing the iterated prisoner's dilemma game. Bickel and Bickel [309] evolved sets of rules which were represented as trees using tree-based mutation and crossover operators.
Genetic Programming became fully accepted at the end of this productive decade mainly
because of the work of Koza [1590, 1591]. Koza studied many benchmark applications of
Genetic Programming, such as learning of Boolean functions [1592, 1596], the Artificial Ant
problem (as discussed in ?? on page ??) [1594, 1595, 1602], and symbolic regression [1596,
1602], a method for obtaining mathematical expressions that match given data samples
(see Section 49.1 on page 531). Koza formalized (and patented [1590, 1598]) the idea of
employing genomes purely based on tree data structures rather than string chromosomes as
used in Genetic Algorithms. We will discuss this form of Genetic Programming in-depth in
Section 31.3.
In the first half of the 1990s, researchers such as Banzhaf [205], Perkis [2164], Openshaw
and Turton [2085, 2086], and Crepeau [653], on the other hand, began to evolve programs in
a linear shape. Similar to programs written in machine code, string genomes used in linear Genetic Programming (LGP) consist of single, simple commands. Each gene identifies an instruction and its parameters. Linear Genetic Programming is discussed in Section 31.4.
Also in the first half of the 1990s, Genetic Programming systems which are driven by a given grammar were developed. Here, the candidate solutions are sentences in a language defined by, for example, an EBNF. Antonisse [113], Stefanski [2605], Roston [2336], and Mizoguchi et al. [1920] were the first ones to delve into grammar-guided Genetic Programming (GGGP, G3P) which is outlined in Section 31.5.
31.2 General Information on Genetic Programming

31.2.1 Applications and Examples

Table 31.1: Applications and Examples of Genetic Programming.


Area References
Art [1234, 1449, 1453, 2746]
Chemistry [487, 798, 799, 958, 1684, 1685, 1909, 2729–2731, 2889, 3007,
3009]; see also Table 31.3
Combinatorial Problems [2338]
Computer Graphics [97, 581, 1195, 1591, 1596, 1726, 2646]; see also Table 31.3
Control [2541, 2566]; see also Tables 31.3 and 31.4
Cooperation and Teamwork [98, 1215, 1317, 1320, 1789, 1805, 2239, 2502]
Data Compression [336]; see also Table 31.3
Data Mining [76, 99, 350, 351, 481, 972, 973, 993, 994, 1021, 1022, 1422,
1593, 1597, 1721, 1726, 1867, 1985, 2109, 2513, 2728, 2819,
2854, 2855, 2996, 3080]; see also Tables 31.3 and 31.4
Databases [993, 2514, 2904]
Decision Making [2728]
Digital Technology [336, 337, 455, 581, 704, 869, 870, 1478, 1591, 1592, 1600,
1602, 1665, 1787, 2560, 2954]; see also Tables 31.3, 31.4, and
31.5
Distributed Algorithms and Systems [52, 76, 150, 543, 614, 615, 658, 798, 799, 1120, 1215, 1317, 1320, 1401, 1456, 1542, 1785, 1805, 1909, 2239, 2338, 2377, 2500, 2501, 2729–2731, 2904, 2905, 3007, 3009]; see also Table 31.3
Economics and Finances [455, 846, 1021, 1022, 1721, 1805, 2728, 2854]; see also Table 31.4
Engineering [52, 98, 271, 303, 337, 455, 581, 704, 869, 1120, 1195, 1234,
1317, 1320, 1456, 1478, 1542, 1591, 1592, 1600, 1602, 1607,
1608, 1613–1615, 1665, 1769, 1785, 1787, 1894, 1909, 1932,
2090, 2500, 2501, 2513, 2514, 2560, 2566, 2567, 2729, 2731,
2954, 3007]; see also Tables 31.3, 31.4, and 31.5
Games [998, 2328]
Graph Theory [52, 76, 1120, 1216, 1401, 1456, 1542, 1615, 1769, 1785, 1909,
2338, 2500–2502, 2729, 2731]; see also Table 31.3
Healthcare [350, 351]; see also Table 31.3
Mathematics [149, 455, 542, 581, 652, 846, 869, 924, 998, 1157, 1456, 1514,
1596, 1602, 1658, 1674, 1772, 1787, 1855, 1856, 1932, 2251,
2377, 2532, 2997]; see also Tables 31.3, 31.4, and 31.5
Multi-Agent Systems [98, 269, 270, 1317, 1320, 1789, 1805, 2239, 2240]
Multiplayer Games [98, 581]
Physics [487, 2568]
Prediction [351, 1021, 1721, 2728]; see also Table 31.3
Security [76, 658, 1785, 2090, 2513, 2514]; see also Table 31.3
Software [652, 704, 772, 790, 972, 998, 999, 1120, 1230, 1401, 1423,
1591, 1623, 1665, 2681, 2682, 2897, 2970]; see also Tables
31.3 and 31.4
Sorting [1610]; see also Table 31.5
Testing see Table 31.3
Wireless Communication [1615, 1769, 2502]; see also Table 31.3

31.2.2 Books
1. Advances in Genetic Programming II [106]
2. Handbook of Evolutionary Computation [171]
3. Advances in Genetic Programming [2569]
4. Genetic Algorithms and Genetic Programming at Stanford [1601]
5. Genetic Programming IV: Routine Human-Competitive Machine Intelligence [1615]
6. Dynamic, Genetic, and Chaotic Programming: The Sixth-Generation [2559]
7. Genetic Programming: An Introduction – On the Automatic Evolution of Computer Pro-
grams and Its Applications [209]
8. Design by Evolution: Advances in Evolutionary Design [1234]
9. Linear Genetic Programming [394]
10. A Field Guide to Genetic Programming [2199]
11. Representations for Genetic and Evolutionary Algorithms [2338]
12. Automatic Quantum Computer Programming – A Genetic Programming Approach [2568]
13. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
14. Evolutionäre Algorithmen [2879]
15. Foundations of Genetic Programming [1673]
16. Genetic Programming II: Automatic Discovery of Reusable Programs [1599]

31.2.3 Conferences and Workshops

Table 31.2: Conferences and Workshops on Genetic Programming.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
EuroGP: European Workshop on Genetic Programming
History: 2011/04: Torino, Italy, see [2492]
2010/04: Istanbul, Turkey, see [886]
2009/04: Tübingen, Germany, see [2787]
2008/03: Naples, Italy, see [2080]
2007/04: València, Spain, see [856]
2006/04: Budapest, Hungary, see [607]
2005/03: Lausanne, Switzerland, see [1518]
2004/04: Coimbra, Portugal, see [1517]
2003/04: Colchester, Essex, UK, see [2360]
2002/04: Kinsale, Ireland, see [977]
2001/04: Lake Como, Milan, Italy, see [1903]
2000/04: Edinburgh, Scotland, UK, see [2198]
1999/05: Göteborg, Sweden, see [2196]
1998/04: Paris, France, see [210, 2195]
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
GP: Annual Conference of Genetic Programming
History: 1998/06: Madison, WI, USA, see [1605, 1612]
1997/07: Stanford, CA, USA, see [1604, 1611]
1996/07: Stanford, CA, USA, see [1603, 1609]
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
GPTP: Genetic Programming Theory and Practice
History: 2011/05: Ann Arbor, MI, USA, see [2753]
2010/05: Ann Arbor, MI, USA, see [2310]
2009/05: Ann Arbor, MI, USA, see [2309]
2008/05: Ann Arbor, MI, USA, see [2308]
2007/05: Ann Arbor, MI, USA, see [2575]
2006/05: Ann Arbor, MI, USA, see [2307]
2004/05: Ann Arbor, MI, USA, see [2089]
2003/05: Ann Arbor, MI, USA, see [2306]
1995/07: Tahoe City, CA, USA, see [2326]
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
GEWS: Grammatical Evolution Workshop
History: 2004/06: Seattle, WA, USA, see [2079]
2003/07: Chicago, IL, USA, see [2078]
2002/07: New York, NY, USA, see [2077]
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

31.2.4 Journals

1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands

31.3 (Standard) Tree Genomes

31.3.1 Introduction

Tree-based Genetic Programming (TGP), usually referred to as Standard Genetic Programming (SGP), is the most widespread Genetic Programming variant, both for historical reasons and because of its efficiency in many problem domains. Here, the genotypes are tree data structures. Generally, a tree can represent a rule set [1867], a mathematical expression [1602], a decision tree [1597], or even the blueprint of an electrical circuit [1608].
31.3.1.1 Function and Terminal Nodes

In tree-based Genetic Programming, there usually is a semantic distinction between inner nodes and leaf nodes of the genotypes. In Standard Genetic Programming, every node is characterized by a type i which defines how many child nodes it can have. We distinguish between terminal and non-terminal nodes (or node types).

Definition D31.1 (Terminal Node Types). The set Σ contains the node types i which
do not allow their instances t to have child nodes. The nodes t with typeOf (t) ∈ i ∧ i ∈ Σ
are thus always leaf nodes.

typeOf (t) ∈ i ∧ i ∈ Σ ⇒ numChildren(t) = 0 (31.1)

Definition D31.2 (Non-Terminal Node Types). Each node type i from the set N of non-terminal (function) node types in a Genetic Programming system has a definite number k ∈ N1 of children (k ≥ 1). The nodes t which belong to a non-terminal node type typeOf(t) ∈ i ∧ i ∈ N thus are always inner nodes of a tree.

typeOf(t) ∈ i ∧ i ∈ N ⇒ (numChildren(t) = k) ∧ k > 0 (31.2)
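These two definitions can be mirrored in a few lines of Java. The sketch below is a toy model whose class and symbol names are our own assumptions, not the book's Listings 56.48–56.50: a node type stores its arity k, and a type is terminal exactly if k = 0.

```java
// Toy model of Definitions D31.1 and D31.2: a node type carries its arity k.
// Terminal types (the set Sigma) have k = 0; non-terminal (function) types
// (the set N) have k > 0. Names are illustrative assumptions.
import java.util.List;

final class NodeType {
    final String symbol;
    final int arity; // k: the number of children every instance node must have

    NodeType(String symbol, int arity) {
        this.symbol = symbol;
        this.arity = arity;
    }

    boolean isTerminal() {
        return arity == 0; // numChildren(t) = 0 for all instances t
    }
}

public class NodeTypeDemo {
    public static void main(String[] args) {
        List<NodeType> sigma = List.of(new NodeType("x", 0), new NodeType("1", 0));
        List<NodeType> n = List.of(new NodeType("+", 2), new NodeType("sin", 1));
        System.out.println(sigma.stream().allMatch(NodeType::isTerminal)); // true
        System.out.println(n.stream().noneMatch(NodeType::isTerminal));    // true
    }
}
```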

In Section 56.3.1 on page 824, we provide simple Java implementations for basic tree node classes (Listing 56.48). In this implementation, each node belongs to a specific type (Listing 56.49). Such a node type does not only describe the number of children that its instance nodes can have. In our example implementation, we go one step further and also define the possible types of children a node can have by specifying a type set (Listing 56.50) for each possible child node location. We thus provide a fully-fledged Strongly-Typed Genetic Programming (STGP) implementation in the style of Montana [1930, 1931, 1932].

Example E31.1 (GP of Mathematical Expressions: Symbolic Regression).


Figure 31.3: The formula e^(sin x) + 3·√|x| expressed as a tree.

Trees also provide a very intuitive representation for mathematical functions. This method
is called symbolic regression and is discussed in detail in Section 49.1 on page 531, but it can serve here as a good example for the abilities of Genetic Programming. SGP was initially used by Koza to evolve such formulas. Here, an inner node stands for a mathematical operation and its child nodes are the parameters of the operation. Leaf nodes then are terminal symbols like numbers or variables. With

1. a terminal set of Σ = {x, e, R}, where x is a variable, e is Euler's number, and R stands for a random-valued constant, and
2. a function set of N = {+, −, ∗, /, sin, cos, a^b}, where a^b means "a to the power of b",
we can already express and evolve a large number of mathematical formulas such as the one given in Figure 31.3. In this figure, the root node is a + operation. Since trees naturally comply with the infix notation2, this + would be the last operation which is evaluated when computing the value of the formula. Before this final step, first the value of its left child (e^(sin x)) and then the value of the right child (3 ∗ |x|^0.5) will be determined. Evaluating a tree-encoded function requires a depth-first traversal of the tree where the value of a node can be computed after the values of all of its children have been calculated.
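The depth-first evaluation just described can be sketched in Java. The node classes below are illustrative assumptions rather than the book's Listings from Section 56.3.1; the tree assembled in main encodes e^(sin x) + 3·√|x| from Figure 31.3.

```java
// Depth-first evaluation of a tree-encoded formula: a node's value is computed
// after the values of all of its children have been computed.
import java.util.function.DoubleBinaryOperator;
import java.util.function.DoubleUnaryOperator;

abstract class Node {
    abstract double eval(double x); // x: the value bound to the variable terminal
}

final class Terminal extends Node {
    private final DoubleUnaryOperator value; // maps x to a constant or to x itself
    Terminal(DoubleUnaryOperator value) { this.value = value; }
    double eval(double x) { return value.applyAsDouble(x); }
}

final class Unary extends Node {
    private final DoubleUnaryOperator op;
    private final Node child;
    Unary(DoubleUnaryOperator op, Node child) { this.op = op; this.child = child; }
    double eval(double x) { return op.applyAsDouble(child.eval(x)); } // child first
}

final class Binary extends Node {
    private final DoubleBinaryOperator op;
    private final Node left, right;
    Binary(DoubleBinaryOperator op, Node left, Node right) {
        this.op = op; this.left = left; this.right = right;
    }
    double eval(double x) { // evaluate both children, then apply this operator
        return op.applyAsDouble(left.eval(x), right.eval(x));
    }
}

public class TreeEvalDemo {
    public static void main(String[] args) {
        // e^(sin x) + 3 * |x|^0.5
        Node tree = new Binary(Double::sum,
            new Unary(Math::exp, new Unary(Math::sin, new Terminal(x -> x))),
            new Binary((a, b) -> a * b,
                new Terminal(x -> 3.0),
                new Unary(v -> Math.pow(Math.abs(v), 0.5), new Terminal(x -> x))));
        double x = 2.0;
        double direct = Math.exp(Math.sin(x)) + 3.0 * Math.pow(Math.abs(x), 0.5);
        System.out.println(tree.eval(x) == direct); // true
    }
}
```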

31.3.1.2 Similarity to High-Level Programming Languages

Trees are very (and deceptively) close to the natural structure of algorithms and programs.
The syntax of most of the high-level programming languages, for example, leads to a certain
hierarchy of modules and alternatives. Not only does this form normally constitute a tree –
compilers even use tree representations internally.

Pop = createPop(s)
Input: s: the size of the population to be created
Data: i: a counter variable
Output: Pop: the new, random population

Algorithm:
1 begin
2   Pop ←− ()
3   i ←− s
4   while i > 0 do
5     Pop ←− appendList(Pop, create())
6     i ←− i − 1
7   return Pop
8 end

Program (Schematic Java, High-Level Language):
List<IIndividual> createPop(int s) {
  List<IIndividual> Xpop = new ArrayList<IIndividual>(s);
  for (int i = s; i > 0; i--) {
    Xpop.add(create());
  }
  return Xpop;
}

[Abstract Syntax Tree Representation omitted]

Figure 31.4: The AST representation of algorithms/programs.

When reading the source code of a program, compilers first split it into tokens3 . Then
2 http://en.wikipedia.org/wiki/Infix_notation [accessed 2010-08-21]
3 http://en.wikipedia.org/wiki/Lexical_analysis [accessed 2007-07-03]
they parse4 these tokens. Finally, an abstract syntax tree5 (AST) is created [1278, 1462].
The internal nodes of ASTs are labeled by operators and the leaf nodes contain the operands
of these operators. In principle, we can illustrate almost every6 program or algorithm as such
an AST (see Figure 31.4). Tree-based Genetic Programming directly evolves individuals in
this form.
It should, however, be pointed out that Genetic Programming is not suitable to evolve
programs such as the one illustrated in Figure 31.4. As we will discuss in ??, programs in
representations which we know from our everyday life as students or programmers exhibit
too much epistasis. Synthesizing them is thus not possible with GP.

31.3.1.3 Structure and Human-Readability

Another interesting aspect of the tree genome is that it has no natural role model. While Ge-
netic Algorithms match their direct biological metaphor, the DNA, to some degree, Genetic
Programming introduces completely new characteristics and traits.
Genetic Programming is one of the few techniques that are able to learn structures
of potentially unbounded complexity. It can be considered as more general than Genetic
Algorithms because it makes fewer assumptions about the shape of the possible solutions.
Furthermore, it often offers white-box solutions that are human-interpretable. Many other
learning algorithms like artificial neural networks, for example, usually generate black-box
outputs, which are highly complicated if not impossible to fully grasp [1853].

31.3.2 Creation: Nullary Reproduction

Before the evolutionary process in Standard Genetic Programming can begin, an initial
population composed of randomized individuals is needed. In Genetic Algorithms, a set
of random bit strings is created for this purpose. In Genetic Programming, random trees
instead of such one-dimensional sequences are constructed.
Normally, there is a maximum depth d̂ specified that the tree individuals are not allowed to surpass. Then, the creation operation will return only trees where the path between the root and the most distant leaf node is not longer than d̂. There are three basic ways for realizing the create and "createPop" operations for trees which can be distinguished according to the depth of the produced individuals.

31.3.2.1 Full

The full method illustrated in Figure 31.5 creates trees where each (non-backtracking)

Figure 31.5: Tree creation by the full method.

4 http://en.wikipedia.org/wiki/Parse_tree [accessed 2007-07-03]


5 http://en.wikipedia.org/wiki/Abstract_syntax_tree [accessed 2007-07-03]
6 Excluding such algorithms and programs that contain jumps (the infamous “goto”) that would produce
crossing lines in the flowchart (http://en.wikipedia.org/wiki/Flowchart [accessed 2007-07-03]).
 
Algorithm 31.1: g ←− createGPFull(d̂, N, Σ)

Input: N: the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: j: a counter variable
Data: i ∈ N ∪ Σ: the selected tree node type
Output: g ∈ G: a new genotype

1 begin
2   if d̂ > 1 then
3     i ←− N[⌊randomUni[0, len(N))⌋]
4     g ←− instantiate(i)
5     for j ←− 1 up to numChildren(i) do
6       g ←− addChild(g, createGPFull(d̂ − 1, N, Σ))
7   else
8     i ←− Σ[⌊randomUni[0, len(Σ))⌋]
9     g ←− instantiate(i)
10  return g

path from the root to the leaf nodes has exactly the length d̂. Algorithm 31.1 illustrates this process: For each node of a depth below d̂, the type is chosen from the non-terminal symbols (the functions N). A chain of function nodes is recursively constructed until the maximum depth minus one. If we reach d̂, of course, only leaf nodes (with types from Σ) can be attached.

31.3.2.2 Grow

The grow method, depicted in Figure 31.6 and specified in Algorithm 31.2, also creates trees

Figure 31.6: Tree creation by the grow method.

where each (non-backtracking) path from the root to the leaf nodes is not longer than d̂ but may be shorter. This is achieved by deciding randomly for each node if it should be a leaf or not when it is attached to the tree. Of course, to nodes of the depth d̂ − 1, only leaf nodes can be attached.

31.3.2.3 Ramped Half-and-Half

Koza [1602] additionally introduced a mixture method called ramped half-and-half defined here in Algorithm 31.3. For each tree to be created, this algorithm draws a number r uniformly distributed between 2 and d̂ (r = ⌊randomUni[2, d̂ + 1)⌋). Now either full or

 
Algorithm 31.2: g ←− createGPGrow(d̂, N, Σ)

Input: N: the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: j: a counter variable
Data: V = N ∪ Σ: the joint list of function and terminal node types
Data: i ∈ V: the selected tree node type
Output: g: a new genotype

1 begin
2   if d̂ > 1 then
3     V ←− N ∪ Σ
4     i ←− V[⌊randomUni[0, len(V))⌋]
5     g ←− instantiate(i)
6     for j ←− 1 up to numChildren(i) do
7       g ←− addChild(g, createGPGrow(d̂ − 1, N, Σ))
8   else
9     i ←− Σ[⌊randomUni[0, len(Σ))⌋]
10    g ←− instantiate(i)
11  return g

 
Algorithm 31.3: g ←− createGPRamped(d̂, N, Σ)

Input: N: the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: r: the maximum depth chosen for the genotype
Output: g ∈ G: a new genotype

1 begin
2   r ←− ⌊randomUni[2, d̂ + 1)⌋
3   if randomUni[0, 1) < 0.5 then return createGPGrow(r, N, Σ)
4   else return createGPFull(r, N, Σ)
grow is chosen to finally create a tree with the maximum depth r (in place of d̂). This
method is often preferred since it produces an especially wide range of different tree depths
and shapes and thus provides a great initial diversity. The Ramped Half-and-Half algorithm
for Strongly-Typed Genetic Programming is implemented in Listing 56.42 on page 814.
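The full, grow, and ramped half-and-half schemes above can be sketched in a few lines of Java. This is a minimal, untyped illustration rather than the book's own Listing 56.42: the function set {+, ∗}, the terminal set {x, 1}, and all class and method names are our assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the three tree initialization methods for a tiny, untyped
// expression language. Names and node sets are illustrative assumptions.
public class TreeInit {
  static final String[] FUNCTIONS = { "+", "*" }; // N: binary function types
  static final String[] TERMINALS = { "x", "1" }; // Sigma: terminal types
  static final Random RND = new Random();

  static class Node {
    final String type;
    final List<Node> children = new ArrayList<>();
    Node(String type) { this.type = type; }
  }

  // full: every (non-backtracking) path has exactly the maximum depth d
  static Node createFull(int d) {
    if (d <= 1) return new Node(TERMINALS[RND.nextInt(TERMINALS.length)]);
    Node n = new Node(FUNCTIONS[RND.nextInt(FUNCTIONS.length)]);
    for (int j = 0; j < 2; j++) n.children.add(createFull(d - 1));
    return n;
  }

  // grow: the type is drawn from the joint list V = N + Sigma, so every
  // node may become a leaf early and paths are at most d long
  static Node createGrow(int d) {
    boolean leaf = (d <= 1)
        || RND.nextInt(FUNCTIONS.length + TERMINALS.length) >= FUNCTIONS.length;
    if (leaf) return new Node(TERMINALS[RND.nextInt(TERMINALS.length)]);
    Node n = new Node(FUNCTIONS[RND.nextInt(FUNCTIONS.length)]);
    for (int j = 0; j < 2; j++) n.children.add(createGrow(d - 1));
    return n;
  }

  // ramped half-and-half: pick a depth r uniformly from 2..d, then apply
  // either full or grow with probability 0.5 each
  static Node createRamped(int d) {
    int r = 2 + RND.nextInt(d - 1);
    return RND.nextBoolean() ? createFull(r) : createGrow(r);
  }

  static int depth(Node n) {
    int m = 0;
    for (Node c : n.children) m = Math.max(m, depth(c));
    return 1 + m;
  }
}
```

Here, createFull(d̂) always returns a tree of depth exactly d̂, while createGrow(d̂) may stop early at any node; ramped half-and-half mixes both over the depths 2, . . . , d̂, which is where its diversity of shapes comes from.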

31.3.3 Node Selection

In most of the reproduction operations for tree genomes, in mutation as well as in recom-
bination, certain nodes in the trees need to be selected. In order to apply mutation, for
instance, we first need to find the node which is to be altered. For recombination, we need
one node in each parent tree. These nodes are then exchanged. We introduce an operator
“selectNode” for choosing these nodes.

Definition D31.3 (Node Selection Operator). The operator tc = selectNode(tr )


chooses one node tc from the tree whose root is given by tr .

31.3.3.1 Uniform Selection

A good method for doing so could select all nodes tc in the tree tr with exactly the

Algorithm 31.4: tc ←− selectNodeuni (tr )


Input: tr : the (root of the) tree to select a node from
Data: b, d: two Boolean variables
Data: w: a value uniformly distributed in [0, nodeWeight(tc )]
Data: i: an index
Output: tc : the selected node
1 begin
2 b ←− true
3 tc ←− tr
4 while b do
5 w ←− ⌊randomUni[0, nodeWeight(tc ))⌋
6 if w ≥ nodeWeight(tc ) − 1 then
7 b ←− false
8 else
9 i ←− numChildren(tc ) − 1
10 while i ≥ 0 do
11 w ←− w − nodeWeight(tc .children[i])
12 if w < 0 then
13 tc ←− tc .children[i]
14 i ←− −1
15 else
16 i ←− i − 1

17 return tc

same probability as done by the method selectNodeuni given in Algorithm 31.4. There,
P (selectNodeuni (tr ) = tc ) = P (selectNodeuni (tr ) = tn ) for all nodes tc and tn in the tree
with root tr .
In order to achieve such behavior, we first define the weight nodeWeight(tn ) of a tree
node tn to be the total number of nodes in the subtree with tn as root, i. e., the tree which
contains tn itself, its children, grandchildren, grand-grandchildren, and so on.
nodeWeight(t) = 1 + ∑_{i=0}^{numChildren(t)−1} nodeWeight(t.children[i])    (31.3)

Thus, the weight of the root of a tree is the number of all nodes in the tree and the weight of
each of the leaves is exactly 1. In selectNodeuni , the probability for a node of being selected
in a tree with root tr is thus 1/nodeWeight(tr ). We provide an example implementation of
such a node selection method in Listing 56.41 on page 811.
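The weight-guided descent of Algorithm 31.4 can be sketched as follows. The Node class and method names are illustrative assumptions, not the implementation from Listing 56.41.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of uniform node selection (Algorithm 31.4): descend along subtree
// weights so that every node is returned with probability 1/nodeWeight(root).
public class NodeSelect {
  static class Node {
    final List<Node> children = new ArrayList<>();
  }

  // nodeWeight(t): total number of nodes in the subtree rooted at t
  // (Equation 31.3); a leaf has weight 1
  static int nodeWeight(Node t) {
    int w = 1;
    for (Node c : t.children) w += nodeWeight(c);
    return w;
  }

  static Node selectNodeUni(Node root, Random rnd) {
    Node t = root;
    while (true) {
      int w = rnd.nextInt(nodeWeight(t));   // uniform in [0, nodeWeight(t))
      if (w >= nodeWeight(t) - 1) return t; // the node itself was hit: 1/weight
      for (Node c : t.children) {           // otherwise descend into the child
        w -= nodeWeight(c);                 // whose weight range contains w
        if (w < 0) { t = c; break; }
      }
    }
  }
}
```

At a node of weight W the node itself is chosen with probability 1/W and each child subtree with probability proportional to its weight; by induction, every node in the whole tree ends up with the same probability 1/nodeWeight(root).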
A tree descent with probabilities different from those defined here may lead to
unbalanced node selection probability distributions. Then, the reproduction operators will
prefer accessing some parts of the trees while very rarely altering the other regions. We
could, for example, descend the tree by starting at the root tr and return the current
node with probability 0.5 or recursively go to one of its children (also with 50% probability).
Then, the root tr would have a 50 : 50 chance of being the starting point of the reproduction
operation. Its direct children each have at most probability 0.5²/numChildren(tr ), and their
children would have a base multiplier of 0.5³ for their probabilities, and so on. Hence, the
leaves would almost never take an active part in reproduction.
We could also choose other probabilities which strongly prefer going down to the children
of the tree, but then, the nodes near to the root will most likely be left untouched during
reproduction. Often, this approach is favored by selection methods, although leaves in
different branches of the tree are not chosen with the same probabilities if the branches
differ in depth. When applying Algorithm 31.4 on the other hand, there exist no regions in
the trees that have lower selection probabilities than others.

31.3.4 Unary Reproduction Operations

31.3.4.1 Mutation: Unary Reproduction

Tree genotypes may undergo small variations during the reproduction process in the Evo-
lutionary Algorithm. Such a mutation is usually defined as the random selection of a node
in the tree, removing this node and all of its children, and finally replacing it with another
node [1602]. Algorithm 31.5 shows one possible way to realize such a mutation. First, a
copy g ∈ G from the original genotype gp ∈ G is created. Then, a random parent node t
is selected from g. One child of this node is replaced with a newly created subtree. For
creating this tree, we pick the ramped half-and-half method with a maximum depth similar
to the one of the child which we replace by it. The possible outcome of this algorithm
is illustrated in Fig. 31.7.a and an example implementation in Java for Strongly-Typed
Genetic Programming is given in Listing 56.43 on page 815.
Furthermore, in the special case that the tree nodes may have a variable number of chil-
dren, two additional operations for mutation become available: (a) insertions of new nodes
or small trees (Fig. 31.7.b), and (b) the deletion of nodes, as illustrated in Fig. 31.7.c. The
effects of insertion and deletion can, of course, also be achieved with replacement.

Algorithm 31.5: g ←− mutateGP(gp )


Input: gp ∈ G: the parent genotype
Input: [implicit] N : the set of function types
Input: [implicit] Σ: the set of terminal types
Data: t: the selected tree node
Data: j: the selected child index
Output: g ∈ G: a new genotype
1 begin
2 g ←− gp
3 repeat
4 t ←− selectNode(g)
5 until numChildren(t) > 0
6 j ←− ⌊randomUni[0, numChildren(t))⌋
7 t.children[j] ←− createGPRamped(max {2, treeDepth(t.children[j])} , N, Σ)
8 return g

Fig. 31.7.a: Subtree replacement. Fig. 31.7.b: Subtree insertions. Fig. 31.7.c: Subtree deletion.

Figure 31.7: Possible tree mutation operations.


Example E31.2 (Mutation – Example E31.1 Cont.).
Based on Example E31.1 on page 387, Figure 31.8 shows one possible application of Algorithm 31.5
to a mathematical expression encoded as a tree. The initial genotype represents
the mathematical formula (11 + x) ∗ (3 + |x|). From this tree, the portion (11 + x) is re-
placed with a randomly created subtree which resembles the expression x/12. The resulting
genotype then stands for the formula (x/12) ∗ (3 + |x|).

Figure 31.8: Mutation of a mathematical expression represented as a tree.

31.3.4.2 Permutation: Unary Reproduction

The tree permutation operation illustrated in Figure 31.9 resembles the permutation oper-
ation of string genomes or the inversion used in messy GA (Section 29.6.2.1, [1602]). Like
mutation, it is used to reproduce one single tree. It first selects an internal node of the
parental tree. The child nodes attached to that node are then shuffled randomly, i. e., per-
mutated. If the tree represents a mathematical formula and the operation represented by
the node is commutative, this has no direct effect. The main goal of this operation is to
re-arrange the nodes in highly fit subtrees in order to make them less fragile for other oper-
ations such as recombination. The effects of this operation are questionable and most often,
it is not applied [1602].

Figure 31.9: Tree permutation – (asexually) shuffling subtrees.


31.3.4.3 Editing: Unary Reproduction

Editing is to trees in Genetic Programming what simplifying is to formulas in mathematics.


Editing a tree means to create a new offspring tree which is more efficient (e.g., in size)
but equivalent to its parent in terms of functional aspects. It is thus a very domain-specific
operation.

Example E31.3 (Editing – Example E31.1 Cont.).


Let us take, for instance, the mathematical formula x = b + (7 − 4) + (1 ∗ a) sketched in
Figure 31.10.

Figure 31.10: Tree editing – (asexual) optimization.

This expression clearly can be written in a shorter way by replacing (7 − 4)
with 3 and (1 ∗ a) with a. By doing so, we improve its readability and also decrease the
computational time needed to evaluate the expression for concrete values of a and b. Similar
measures can often be applied to algorithms and program code.
The positive aspect of editing here is that it reduces the number of nodes in the tree by
removing useless expressions. This makes it easier for recombination operations to pick
“important” building blocks. At the same time, the expression (7 − 4) is now less likely to
be destroyed by the reproduction processes since it is replaced by the single terminal node
3.
A negative aspect would be if (in our example) a fitter expression was (7 − (4 ∗ a)) and a
is a variable close to 1. Then, transforming (7 − 4) into 3 prevents an evolutionary transition
to the fitter expression.

Besides the aspects mentioned in Example E31.3, editing also reduces the diversity in the
genome which could degrade the performance by decreasing the variety of structures avail-
able. In Koza’s experiments, Genetic Programming with and without editing showed equal
performance, so the overall benefits of this operation are not fully clear [1602].

31.3.4.4 Encapsulation: Unary Reproduction

The idea behind the encapsulation operation is to identify potentially useful subtrees and to
turn them into atomic building blocks as sketched in Figure 31.11. To put it plainly, we create
new terminal symbols that (internally hidden) are trees with multiple nodes. This way,
they will no longer be subject to potential damage by other reproduction operations. The
new terminal may spread throughout the population in the further course of the evolution.
According to Koza, this operation has no substantial effect but may be useful in special
applications like the evolution of artificial neural networks [1602].

31.3.4.5 Wrapping: Unary Reproduction

Applying the wrapping operation means to first select an arbitrary node n in the tree. A new
non-terminal node m is created outside of the tree. In m, at least one child node position is

Figure 31.11: An example for tree encapsulation.

Figure 31.12: An example for tree wrapping.

left unoccupied. Then, n (and all its potential child nodes) is cut from the original tree and
appended to m by plugging it into the free spot. Finally, m is hung into the tree position
that formerly was occupied by n.
The purpose of this reproduction method illustrated in Figure 31.12 is to allow modifi-
cations of non-terminal nodes that have a high probability of being useful. Simple mutation
would, for example, cut n from the tree or replace it with another expression. This will
always change the meaning of the whole subtree below n dramatically.

Example E31.4 (Wrapping – Example E31.1 Cont.).


If we evolve mathematical expressions, a candidate solution could be the formula (b + a) ∗ 5.
A simple mutation operator could replace the node a with a randomly created subtree such
as (√a ∗ 7), arriving at (b + (√a ∗ 7)) ∗ 5, which is quite different from (b + a) ∗ 5. Wrapping
may, however, place a into an expression such as (a + 2). The result, (b + (a + 2)) ∗ 5, is
closer to the original formula.

The idea of wrapping is to introduce a search operation with high locality which allows
smaller changes to genotypes. The wrapping operation is used by the author in his exper-
iments; so far, I have not seen another source where it is used. Then again, the idea
of this operation is quite simple and especially appealing for the evolution of executable
structures such as mathematical formulas, so it is likely used in similar ways by a variety of
researchers.

31.3.4.6 Lifting: Unary Reproduction

While wrapping allows nodes to be inserted in non-terminal positions with a small change of
the tree's semantics, lifting is able to remove them in the same way. It is the inverse operation
to wrapping, which becomes obvious when comparing Figure 31.12 and Figure 31.13.
Lifting begins with selecting an arbitrary inner node n of the tree. This node then
replaces its parent node. The parent node, including all of its child nodes (except n),
is removed from the tree. With lifting, a tree that represents the mathematical formula
(b + a) ∗ 3 can be transformed to b ∗ 3 in a single step.

Figure 31.13: An example for tree lifting.

Lifting is used by the author in his experiments with Genetic Programming (see for
example ?? on page ??). Again, so far I have not seen a similar operation defined explicitly
in other works, but because of its simplicity, it would be strange if it had not been used by
various researchers in the past.

31.3.5 Recombination: Binary Reproduction

The mating process in nature – the recombination of the genotypes of two individuals – is
also copied in tree-based Genetic Programming. Exchanging parts of genotypes is not much
different in tree-based representations than in string-based ones.

31.3.5.1 Subtree Crossover

Applying the default subtree exchange recombination operator to two parental trees means
to swap subtrees between them as defined in Algorithm 31.6 and illustrated in Figure 31.14.
Therefore, single subtrees are selected randomly from each of the parents and subsequently
are cut out and reinserted in the partner genotype.

Algorithm 31.6: g ←− recombineGP(gp1 , gp2 )


Input: gp1 , gp2 ∈ G: the parental genotypes
Data: t: the selected tree node
Data: j: the selected child index
Output: g ∈ G: a new genotype
1 begin
2 g ←− gp1
3 repeat
4 t ←− selectNode(g)
5 until numChildren(t) > 0
6 j ←− ⌊randomUni[0, numChildren(t))⌋
7 t.children[j] ←− selectNode(gp2 )
8 return g

If a depth restriction is imposed on the genome, both the mutation and the recombi-
nation operation have to respect it: the new trees they create must not exceed the maximum depth. An
example implementation of this operation in Java for Strongly-Typed Genetic Programming
is given in Listing 56.44 on page 817.
The intent of using the recombination operation in Genetic Programming is the same as
in Genetic Algorithms. Over many generations, successful building blocks – for example a
highly fit expression in a mathematical formula – should spread throughout the population
and be combined with good genes of different candidate solutions.
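Algorithm 31.6 can be sketched roughly as below. The Node class and helper names are our assumptions; unlike the pseudocode, this sketch copies the donor subtree so that the second parent stays unmodified, and it assumes the first parent contains at least one inner node.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of subtree exchange recombination (Algorithm 31.6): copy the first
// parent, pick a random inner node, and replace one of its children with a
// copy of a randomly chosen subtree of the second parent.
public class SubtreeCrossover {
  static class Node {
    final String type;
    final List<Node> children = new ArrayList<>();
    Node(String type) { this.type = type; }
    Node copy() {
      Node n = new Node(type);
      for (Node c : children) n.children.add(c.copy());
      return n;
    }
  }

  // pre-order list of all nodes in the subtree rooted at t
  static List<Node> allNodes(Node t) {
    List<Node> out = new ArrayList<>();
    out.add(t);
    for (Node c : t.children) out.addAll(allNodes(c));
    return out;
  }

  // note: requires p1 to contain at least one inner node
  static Node recombine(Node p1, Node p2, Random rnd) {
    Node g = p1.copy();
    Node t;
    do { // keep selecting until an inner node (numChildren > 0) is found
      List<Node> nodes = allNodes(g);
      t = nodes.get(rnd.nextInt(nodes.size()));
    } while (t.children.isEmpty());
    int j = rnd.nextInt(t.children.size());
    List<Node> donors = allNodes(p2);
    t.children.set(j, donors.get(rnd.nextInt(donors.size())).copy());
    return g;
  }
}
```

A depth restriction would be enforced on top of this sketch by rejecting (or re-drawing) offspring whose depth exceeds the limit.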

Figure 31.14: Tree recombination by exchanging subtrees.

Example E31.5 (Recombination – Example E31.1 Cont.).


Based on Example E31.1 on page 387, Figure 31.15 shows one possible application of Algorithm 31.6.
Two mathematical expressions encoded as trees are recombined by the exchange
of subtrees, i. e., mathematical expressions.

Figure 31.15: Recombination of two mathematical expressions represented as trees.
The first parent tree gp1 represents the formula (sin(3/e))^x and the second one (gp2 )
denotes (e − x) − |3|. The node which stands for the variable x in the first parent is selected
as well as the subtree e − x in the second parent. x is cut from gp1 and replaced with e − x,
yielding an offspring which encodes the formula (sin(3/e))^(e−x) .

31.3.5.2 Headless Chicken Crossover


Recombination in Standard Genetic Programming can also have a very destructive effect
on the individual fitness [2032]. Angeline [101] argues that it performs no better than
mutation and causes bloat [104]; he used Jones's headless chicken crossover [1466], discussed
in Section 29.3.4.7 on page 339.
Angeline [101] attacked Koza's [1602] claims about the efficiency of subtree crossover
by constructing two new operators:
1. In Strong Headless Chicken Crossover (SHCC), each parent tree is recombined with a
new randomly created tree and the offspring is returned. The new child will normally
contain a relatively small amount of random nodes.
2. In Weak Headless Chicken Crossover (WHCC), the parent tree is recombined with a
new randomly created tree and vice versa. One of the offspring is randomly selected and
returned. In half of the returned offspring, there will be a relatively small amount of
non-random code.

SHCC is very close to subtree mutation, since a sub-tree in the parent is replaced with a
randomly created one. It resembles a macromutation. Angeline [101] showed that SHCC
performed at least as well as, if not better than, subtree crossover on a variety of problems
and that the performance of WHCC is good, too.
His conclusion is that subtree crossover actually works similar to a macromutation oper-
ator whose contents are limited by the subtrees available in the population. This contradicts
the common (prior) belief that subtree crossover would be a building block combining fa-
cility. In problems where a constant incoming stream of new genetic material and variation
is necessary, the pure macromutation provided by SHCC thus outperforms crossover.
Interestingly, Lang [1665] had claimed a few years earlier that a Hill Climbing algorithm
with a similar macromutation could outperform Genetic Programming, but these claims
were drawn from inconclusive and ill-designed experiments. As a consequence, they could
not withstand a thorough investigation [1600]. The similarity of crossover and mutation (which also becomes
obvious when comparing Algorithm 31.5 and 31.6) in Genetic Programming was thus first
profoundly pointed out by Angeline [101].

31.3.5.3 Advanced Techniques

Several techniques have been proposed in order to mitigate the destructive effects of re-
combination in tree-based representations. In 1994, D’Haeseleer [782] obtained modest im-
provements with his strong context preserving crossover that permitted only the exchange
of subtrees that occupied the same positions in the parents. Poli and Langdon [2193, 2194]
define the similar single-point crossover for tree genomes with the same purpose: increasing
the probability of exchanging genetic material which is structural and functional akin and
thus decreasing the disruptiveness. A related approach defined by Francone et al. [982] for
linear Genetic Programming is discussed in Section 31.4.4.1 on page 407.

31.3.6 Modules

31.3.6.1 Automatically Defined Functions

The concept of automatically defined functions (ADFs) as introduced by Koza [1602] pro-
vides some sort of pre-specified modularity for Genetic Programming. Finding a way to
evolve modules and reusable building blocks is one of the key issues in using GP to derive
higher-level abstractions and solutions to more complex problems [107, 108, 1599]. If ADFs
are used, a certain structure is defined for the genome.
The root of the tree usually loses its functional responsibility and now serves only as glue
that holds the individual together and has a fixed number n of children, of which n − 1 are
automatically defined functions and one is the result-generating branch. When evaluating
the fitness of an individual, often only this first branch is taken into consideration whereas
the root and the ADFs are ignored. The result-generating branch, however, may use any of
the automatically defined functions to produce its output.

Example E31.6 (ADFs – Example E31.1 Cont.).


When ADFs are employed, typically not only their number must be specified beforehand but
also the number of arguments of each of them. How this works can maybe best be illustrated
by using the example given in ??. It stems from function approximation7, since this is the
area where many early examples of the idea of ADFs come from.
Assume that the goal of GP is to approximate a function g with the one parameter x
and that a genome is used where two functions (f0 and f1 ) are automatically defined. f0
has a single formal parameter a and f1 has two formal parameters a and b. The genotype
?? encodes the following mathematical functions:

g(x) = f1 (4, f0 (x)) ∗ (f0 (x) + 3) (31.4)


f0 (a) = a + 7 (31.5)
f1 (a, b) = (−a) ∗ b (31.6)

Hence, g(x) ≡ ((−4) ∗ (x + 7)) ∗ ((x + 7) + 3). The number of children of the function
calls in the result-generating branch must be equal to the number of the parameters of the
corresponding ADF.
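The expansion in this example can be verified numerically with a small sketch. The function names f0, f1, and g follow the text; the check itself and the sampled values of x are ours.

```java
// Numeric check of Example E31.6: the result-generating branch calls the
// two automatically defined functions, and expanding them must yield
// g(x) = ((-4)*(x+7))*((x+7)+3) for every x.
public class AdfExample {
  static double f0(double a) { return a + 7; }              // ADF with one parameter
  static double f1(double a, double b) { return (-a) * b; } // ADF with two parameters

  // the result-generating branch, calling the ADFs
  static double g(double x) { return f1(4, f0(x)) * (f0(x) + 3); }

  // the manually expanded form from the text
  static double gExpanded(double x) { return ((-4) * (x + 7)) * ((x + 7) + 3); }
}
```

Evaluating both forms over a range of inputs confirms the equivalence g(x) ≡ ((−4) ∗ (x + 7)) ∗ ((x + 7) + 3) stated above.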

Although ADFs were first introduced for symbolic regression by Koza [1602], they can also
be applied to a variety of other problems like in the evolution of agent behaviors [96, 2240],
electrical circuit design [1608], or the evolution of robotic behavior [98].

31.3.6.2 Automatically Defined Macros

Spector’s idea of automatically defined macros (ADMs) complements the ADFs of


Koza [2566, 2567]. Both concepts are very similar and only differ in the way that their
parameters are handled. The parameters in automatically defined functions are always val-
ues whereas automatically defined macros work on code expressions. This difference shows
up only when side-effects come into play.

Example E31.7 (ADFs vs. ADM).


In Figure 31.16, we have illustrated the pseudo-code of two programs – one with a function
(called ADF) and one with a macro (called ADM). Each program has a variable x which is
initially zero. The function y() has the side-effect that it increments x and returns its new
value. Both the function and the macro return a sum containing their parameter a two
times. The parameter of ADF is evaluated before ADF is invoked. Hence, x is incremented
one time and 1 is passed to ADF, which then returns 2=1+1. The parameter of the macro,
however, is the invocation of y(), not its result. Therefore, the ADM resembles two calls to
y(), resulting in x being incremented two times and in 3=1+2 being returned.

7 A very common example for function approximation, Genetic Programming-based symbolic regression,

is discussed in Section 49.1 on page 531.



Program with ADF:

    variable x=0
    subroutine y()
    begin
      x++
      return x
    end
    function ADF(param a) ≡ (a+a)
    main_program
    begin
      print("out: " + ADF(y))
    end

...roughly resembles:

    main_program
    begin
      variable temp1, temp2
      temp1 = y()
      temp2 = temp1 + temp1
      print("out: " + temp2)
    end

Produced output: "out 2"

Program with ADM:

    variable x=0
    subroutine y()
    begin
      x++
      return x
    end
    macro ADM(param a) ≡ (a+a)
    main_program
    begin
      print("out: " + ADM(y))
    end

...roughly resembles:

    main_program
    begin
      variable temp
      temp = y() + y()
      print("out: " + temp)
    end

Produced output: "out 3"

Figure 31.16: Comparison of functions and macros.
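The behavioral difference can also be reproduced in ordinary Java by modeling a macro argument as a piece of code (an IntSupplier) instead of a value. The class and method names are our assumptions, not a GP library API.

```java
import java.util.function.IntSupplier;

// Sketch of the ADF/ADM semantics from Figure 31.16. A function parameter
// is evaluated once before the call; a macro parameter is the expression
// itself, re-evaluated at every occurrence in the body.
public class AdfVsAdm {
  int x = 0;                        // shared state touched by the side effect

  int y() { return ++x; }           // side effect: increments x, returns it

  int adf(int a) { return a + a; }  // argument already evaluated: y() ran once

  int adm(IntSupplier a) {          // argument is code: y() runs at every use
    return a.getAsInt() + a.getAsInt();
  }
}
```

Calling `adf(y())` increments x once and yields 1 + 1 = 2, while `adm(this::y)` re-invokes y() at both occurrences of the parameter and yields 1 + 2 = 3, matching the figure.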



The ideas of automatically defined macros and automatically defined functions are very
close to each other. Automatically defined macros are likely to be useful in scenarios where
context-sensitive or side-effect-producing operators play important roles [2566, 2567]. In
other scenarios, there is not much difference between the application of ADFs and ADMs.
Finally, it should be mentioned that the concepts of automatically defined functions and
macros are not restricted to the standard tree genomes but are also applicable in other forms
of Genetic Programming, such as linear Genetic Programming (see Section 31.4) or PADO
(see ?? on page ??).

31.4 Linear Genetic Programming

31.4.1 Introduction

In the beginning of this chapter, it was stated that one of the two meanings of Genetic
Programming is the automated, evolutionary synthesis of programs that solve a given set of
problems. It was discussed that tree genomes are suitable to encode programs or formulas
and how the genetic operators can be applied to them.
Nevertheless, trees are not the only way for representing programs. As a matter of fact, a
computer processes programs as sequences of simple machine instructions instead.
These sequences may contain branches in form of jumps to other places in the code. Ev-
ery possible flowchart describing the behavior of a program can be translated into such a
sequence. It is therefore only natural that the first approach to automated program gener-
ation developed by Friedberg [995] at the end of the 1950s used a fixed-length instruction
sequence genome [995, 996]. The area of Genetic Programming focused on such instruction
string genomes is called linear Genetic Programming (LGP).

Example E31.8 (LGP versus SGP: Syntax).


The general idea of LGP, as opposed to SGP, is that tree-based GP is close to high-level
representations such as LISP S-expressions whereas linear Genetic Programming breeds
instruction sequences resembling programs in assembler language or machine code. This
distinction in philosophy goes as far as down to the naming conventions of the instructions
and variables: In symbolic regression in Standard Genetic Programming, for instance, func-
tion names such as +, −, ∗, and sin are common and the variables are usually called x (or
x1 , x2 , . . . ). In linear Genetic Programming for the same purpose, the instructions would
usually be named like ADD, SUB, MUL, and SIN and instead of variables, the data is stored in reg-
isters such as R[0], R[1], . . . An ADD instruction in LGP could have, for example, three
parameters: the two input registers and the register to store the result of the computation.
In Standard Genetic Programming, on the other hand, + would have two child nodes and
return the result of the addition to the calling node without explicitly storing it somewhere.
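A register machine along the lines of this naming convention can be interpreted with a very small loop. The opcode set and instruction layout below are illustrative assumptions, not a specific LGP system:

```java
// Minimal register-machine sketch: each instruction names an opcode, two
// input registers, and the output register that stores the result.
public class LinearGP {
  enum Op { ADD, SUB, MUL }

  static class Insn {
    final Op op; final int in1, in2, out;
    Insn(Op op, int in1, int in2, int out) {
      this.op = op; this.in1 = in1; this.in2 = in2; this.out = out;
    }
  }

  // Straight-line execution: only the current instruction index is needed
  // as interpreter state, i.e., O(1) bookkeeping instead of a recursive
  // depth-first tree traversal.
  static double[] run(Insn[] program, double[] registers) {
    double[] r = registers.clone();
    for (Insn i : program) {
      switch (i.op) {
        case ADD: r[i.out] = r[i.in1] + r[i.in2]; break;
        case SUB: r[i.out] = r[i.in1] - r[i.in2]; break;
        case MUL: r[i.out] = r[i.in1] * r[i.in2]; break;
      }
    }
    return r;
  }
}
```

For instance, the two-instruction program ADD R[0],R[1]→R[2]; MUL R[2],R[0]→R[2] computes (R[0] + R[1]) ∗ R[0] without any tree traversal.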

In linear Genetic Programming, instruction strings (or bit/integer strings encoding them)
are the center of the evolution and contain the program code directly. This distinguishes it
from many grammar-guided Genetic Programming approaches like Grammatical Evolution
(see ?? on page ??), where strings are just genotypic, intermediate representations that
encode the program trees. Some of the most important early contributions to LGP come
from [2199]:
1. Banzhaf [205], who used a genotype-phenotype mapping with repair mechanisms to
translate a bit string to nested simple arithmetic instructions in 1993,
2. Perkis [2164] (1994), whose stack-based GP evaluated arithmetic expressions in Re-
verse Polish Notation (RPN),
3. Openshaw and Turton [2086] (1994), who also used Perkis's approach but had already
represented mathematical equations as fixed-length bit strings back in the 1980s [2085],
and
4. Crepeau [653], who developed a machine code GP system around an emulator for the
Z80 processor.

Besides the methods discussed in this section (and especially in Section 31.4.3), other inter-
esting approaches to linear Genetic Programming are the LGP variants developed by Eklund
[870] and Leung et al. [536, 1715] on specialized hardware, the commercial system by Foster
[976], and the MicroGP (µGP) system for test program induction by Corno et al. [642, 2592].

31.4.2 Advantages and Disadvantages

The advantage of linear Genetic Programming lies in the straightforward evaluation of the
evolved algorithms. Like a normal computer program, a (simulated) processor starts at the
beginning of the instruction string. It reads the first instruction and its operands from the
string and carries it out. Then, it moves to the second instruction and so on. The size of
the status information needed by an interpreter for tree-based programs is in Od (where d
is the depth of the tree), because a recursive depth-first traversal of the trees is required.
For processing a linear instruction sequence, the required status information is of O1 since
only the index of the currently executed instruction needs to be stored.
Because of this structure, it is also easy to limit the runtime during program evalua-
tion and even to simulate parallelism. Like Standard Genetic Programming, LGP is most
often applied with a simple function set comprised of primitive instructions and some math-
ematical operators. In SGP, such an instruction set can be extended with alternatives
(if-then-else) and loops. Similarly, in linear Genetic Programming jump instructions may
be added to allow for the evolution of a more complex control flow.
Then, the drawback arises that simply reusing the genetic operators for variable-length
string genomes (discussed in Section 29.4 on page 339), which randomly insert, delete, or
toggle bits, is no longer feasible: jump instructions are highly epistatic (see Chapter 17 and
??).

Example E31.9 (Linear Genetic Programming with Jumps).


We can visualize, for example, that the alternatives and loops which we know from high-level
programming languages are mapped to conditional and unconditional jump instructions in
machine code. These jumps target to either absolute or relative addresses inside the program.
Let us consider the insertion of a single, new command into the instruction string, maybe as
result of a mutation or recombination operation. If we do not perform any further corrections
after this insertion, it is well possible that the resulting shift of the absolute addresses of the
subsequent instructions in the program invalidates the control flow and renders the whole
program useless. This issue is illustrated in Fig. 31.17.a.
Nordin et al. [2048] point out that standard crossover is highly disruptive. Even
though the subtree crossover in tree-genomes is shown to be not very efficient either (see [101]
and Section 31.3.5.2 on page 399), in comparison, tree-based genomes are less vulnerable
in this aspect. As can be seen in Fig. 31.17.b, the insertion of an instruction into a loop
represented as a tree in Standard Genetic Programming is less critical.
... 50 01 50 01 9A 10 38 33 83 8F 03 50 ...   (before insertion)
... 50 01 50 01 9A 10 23 38 33 83 8F 03 50 ...   (after insertion: loop begin shifted)

Fig. 31.17.a: Inserting into an instruction string.

Fig. 31.17.b: Inserting in a tree representation.

Figure 31.17: The impact of insertion operations in Genetic Programming.

One approach to increase the locality in linear Genetic Programming with jumps is to use
intelligent mutation and crossover operators which preserve the control flow of the program
when inserting or deleting instructions. Such operations could, for instance, analyze the
program structure and automatically correct jump targets. Operations which are restricted
to have only minimal effect on the control flow from the start can also easily be introduced.
In Section 31.4.3.4, we briefly outline some of the work of Brameier and Banzhaf [391],
who define some interesting approaches to this issue. Section 31.4.4.1 discusses the homol-
ogous crossover operation which represents another method for decreasing the destructive
effects of reproduction in LGP.
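One such address-correcting insertion operator can be sketched as follows. The instruction format and the rule of shifting absolute targets at or behind the insertion point are simplified assumptions, not an operator from the cited works:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a control-flow preserving insertion: absolute jump targets that
// point at or behind the insertion position are shifted by one, so the
// program's semantics survive the edit (cf. Fig. 31.17.a).
public class JumpFix {
  static class Insn {
    final String op; // e.g. "ADD" or "JMP"
    int target;      // absolute target index; only meaningful for jumps
    Insn(String op, int target) { this.op = op; this.target = target; }
  }

  // insert newInsn at position pos and repair all absolute jump targets
  static List<Insn> insertPreserving(List<Insn> program, int pos, Insn newInsn) {
    List<Insn> out = new ArrayList<>();
    for (Insn i : program) {
      Insn c = new Insn(i.op, i.target);                     // copy, keep parents intact
      if (c.op.equals("JMP") && c.target >= pos) c.target++; // shifted by the insertion
      out.add(c);
    }
    out.add(pos, newInsn);
    return out;
  }
}
```

A deletion operator would decrement targets behind the removed position analogously; relative jumps would instead need adjustment only when the edit falls between the jump and its destination.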

31.4.3 Realizations and Implementations

31.4.3.1 The Compiling Genetic Programming System

One of the early linear Genetic Programming approaches was developed by Nordin [2044],
who was dissatisfied with the performance of GP systems written in an interpreted language
which, in turn, interpret the programs evolved using a tree-shaped genome. In 1994, he
published his work on a new Compiling Genetic Programming System (CGPS) written in the
C programming language8 [1525] directly manipulating individuals represented as machine
code.
CGPS evolves linear programs which accept some input data and produce output data
as result. Each candidate solution in CGPS consists of (a) a prologue for shoveling the
input data from the stack into registers, (b) a set of instructions for information process-
ing, and (c) an epilogue for terminating the function [2045]. The prologue and epilogue
were never modified by the genetic operations. As instructions for the middle part, the
8 http://en.wikipedia.org/wiki/C_(programming_language) [accessed 2008-09-16]
Genetic Programming system had arithmetical operations and bit-shift operators at its dis-
posal in [2044], but no control flow manipulation primitives like jumps or procedure calls.
These were added in [2046] along with ADFs, making this LGP approach Turing-complete.
Nordin [2044] used the classification of Swedish words as the task in the first experiments
with this new system. He found that it had approximately the same capability for grow-
ing classifiers as artificial neural networks but performed much faster. Another interesting
application of his system was the compression of images and audio data [2047].
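The prologue–body–epilogue structure of a CGPS individual can be illustrated with a small register-machine model in Python. This is only a hypothetical analogy for the sake of the explanation: CGPS manipulates actual SPARC machine code, and the instruction format and function name below are assumptions.

```python
def run_linear_program(instructions, inputs, n_registers=8):
    # "Prologue": move the input data into the first registers.
    regs = [0.0] * n_registers
    for i, x in enumerate(inputs):
        regs[i] = x
    # Body: the only part that genetic operations would modify.  Each
    # instruction is a tuple (op, dst, src1, src2) of register indices.
    ops = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}
    for op, dst, s1, s2 in instructions:
        regs[dst] = ops[op](regs[s1], regs[s2])
    # "Epilogue": terminate and hand back the designated output register.
    return regs[0]
```

In this model, only the instruction list in the middle would be subject to mutation and crossover, mirroring the untouched prologue and epilogue described above.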

31.4.3.2 Automatic Induction of Machine Code by Genetic Programming

CGPS originally evolved code for Sun SPARC processors, which belong to the RISC9
processor class. This had the advantage that all instructions have the same size. In the
Automatic Induction of Machine Code with GP system (AIM-GP, AIMGP), the successor of
CGPS, the support for multiple other architectures was added by Nordin, Banzhaf, and
Francone [2050, 2053], including Java bytecode10 and CISC11 CPUs with variable instruc-
tion widths such as Intel 80x86 processors. An interesting new application for linear Genetic
Programming tackled with AIMGP is the evolution of robot behaviors such as obstacle
avoidance and wall following [2052].

31.4.3.3 Java Bytecode Evolution

Besides AIMGP, there exist numerous other approaches to the evolution of linear Java
bytecode functions. The Java Bytecode Genetic Programming system (JBGP, also Java
Method Evolver, JME) by Lukschandl et al. [1792–1795] is written in Java. A genotype
in JBGP contains the maximum allowed stack depth together with a linear list of instruc-
tion descriptors. Each instruction descriptor holds information such as the corresponding
bytecode and a branch offset. The genotypes are transformed with a genotype-phenotype
mapping into methods of a Java class which then can be loaded into the JVM, executed,
and evaluated [1206, 1207].
In the JAPHET system of Klahold et al. [1547], the user provides an initial Java class
at startup. Classes are divided into a static and a dynamic part. The static parts, which
contain things like version information, are not affected by the reproduction operations. The
dynamic parts, containing the methods, are modified by the genetic operations, which add
new bytecode [1206, 1207].
Harvey et al. [1206, 1207] introduce byte code GP (bcGP), where the whole population
of each generation is represented by one class file. Like in AIMGP, each individual is a linear
sequence of Java bytecode and is surrounded by a prologue and epilogue. Furthermore, by
adding buffer space, each individual has the same size and, thus, the whole population can
be kept inside a byte array of a fixed size, too.
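The fixed-size packing of bcGP can be sketched roughly as follows. The helper names and the use of 0x00 (which happens to be the JVM nop opcode) as buffer space are illustrative assumptions, not details taken from [1206, 1207].

```python
NOP = 0x00  # placeholder "buffer" bytecode; the JVM nop opcode is also 0x00

def pad_individual(code, slot_size):
    """Pad one bytecode sequence with buffer space to a fixed slot size."""
    if len(code) > slot_size:
        raise ValueError("individual exceeds its fixed slot")
    return bytes(code) + bytes([NOP] * (slot_size - len(code)))

def pack_population(individuals, slot_size):
    """Concatenate equally sized slots into one flat byte array,
    so the whole population has a fixed total size."""
    return b"".join(pad_individual(ind, slot_size) for ind in individuals)
```

Because every individual occupies a slot of identical length, the i-th program always starts at offset i · slot_size, which is what makes a single fixed-size byte array per population possible.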

31.4.3.4 Brameier and Banzhaf: LGP with Implicit Intron Removal

In the Genetic Programming system developed by Brameier and Banzhaf [391] based on
former experience with AIMGP, an individual is represented as a linear sequence of simple
C instructions as outlined in the example Listing 31.1 (a slightly modified version of the
example given in [391]). Due to reproduction operations such as mutation and crossover,
linearly encoded programs may contain introns, i. e., instructions not influencing the result
(see Definition D29.1 and ??). Given that the program defined in Listing 31.1 stores its
outputs in the registers v[0] and v[1], all the lines marked with (I) do not contribute to
the overall functional fitness.
9 http://en.wikipedia.org/wiki/Reduced_Instruction_Set_Computing [accessed 2008-09-16]
10 http://en.wikipedia.org/wiki/Bytecode [accessed 2008-09-16]
11 http://en.wikipedia.org/wiki/Complex_Instruction_Set_Computing [accessed 2008-09-16]

1 void ind(double[8] v) {
2 ...
3 v[0] = v[5] + 73;
4 v[7] = v[0] - 59; (I)
5 if(v[1] > 0)
6 if(v[5] > 23)
7 v[4] = v[2] * v[1];
8 v[2] = v[5] + v[4]; (I)
9 v[6] = v[0] * 25; (I)
10 v[6] = v[4] - 4;
11 v[1] = sin(v[6]);
12 if(v[0] > v[1]) (I)
13 v[3] = v[5] * v[5]; (I)
14 v[7] = v[6] * 2;
15 v[5] = v[7] + 115; (I)
16 if(v[1] <= v[6])
17 v[1] = sin(v[7]);
18 }
Listing 31.1: A genotype of an individual in Brameier and Banzhaf’s LGP system.

Brameier and Banzhaf [391] introduce an algorithm which removes these introns during
the genotype-phenotype mapping, before the fitness evaluation. Basically, this approach can
be considered as an editing operation (see Section 31.3.4.3 on page 396) for linear Genetic
Programming. The GP system based on this approach was successfully tested with several
classification tasks [390–392], function approximation and Boolean function synthesis [393].
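The core of such an intron-detecting genotype-phenotype mapping is a backward sweep over the program which keeps track of the registers that can still influence the output. The following Python sketch is a strongly simplified model: it ignores branches entirely, representing each instruction only by the register it writes and the registers it reads, so it captures only the data-flow part of Brameier and Banzhaf's algorithm.

```python
def effective_instructions(program, output_registers):
    """Mark structurally effective instructions by a backward sweep.

    `program` is a list of (dst, sources) pairs, where `dst` is the
    register an instruction writes and `sources` is the set of registers
    it reads.  An instruction is effective if its target register is
    still needed by some later effective instruction (or is an output).
    """
    needed = set(output_registers)
    effective = [False] * len(program)
    for i in range(len(program) - 1, -1, -1):
        dst, sources = program[i]
        if dst in needed:
            effective[i] = True
            needed.discard(dst)   # this instruction defines dst ...
            needed.update(sources)  # ... but needs its own source registers
    return effective
```

In the small example below, the write to register 6 at index 2 is overwritten at index 3 before ever being read, so it is reported as an intron; this matches the intuition behind line 9 of Listing 31.1, where v[6] = v[0] * 25 is immediately overwritten.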
In his doctoral dissertation, Brameier [388] elaborates that, because of jump and call
instructions, the control flow in linear Genetic Programming resembles a graph rather than a tree.
In the earlier work of Brameier and Banzhaf [391] mentioned just a few lines ago, introns were
only excluded by the genotype-phenotype mapping but preserved in the genotypes because
they were expected to make the programs robust against variations. In [388], Brameier
concludes that such implicit introns representing unreachable or ineffective code have no
real protective effect but reduce the efficiency of the reproduction operations and, thus,
should be avoided or at least minimized. Instead, the concept of explicitly defined
introns (EDIs) proposed by Nordin et al. [2051] is utilized in the form of nop-like
instructions in order to decrease the destructive effect of crossover. Brameier finds that
introducing EDIs decreases the proportion of introns arising from unreachable or ineffective
code and leads to better results. In comparison with standard tree-based GP, his linear
Genetic Programming approach performed better during experiments with classification,
regression, and Boolean function evolution benchmarks.

31.4.4 Recombination

31.4.4.1 Sticky (Homologous) Crossover

In Section 29.3.4.6 on page 338, we discussed homologous crossover for Genetic Algorithms.
Normal crossover operations like MPX may have a disruptive effect on building blocks. The
idea of homologous crossover is that, similar to nature [209], only genes which
express the same functionality and are located at the same loci are exchanged between
parental genotypes. It is assumed that such a recombination operator would have a lower
disruptiveness than plain MPX.
The programs which evolve in linear Genetic Programming are encoded in string chromosomes,
usually of variable length. Hence, a homologous crossover mechanism could here
be beneficial as well.
Francone et al. [982, 2050] introduce a sticky crossover operator which resembles ho-
mology by allowing the exchange of instructions between two genotypes (programs) only if
they reside at the same loci. In sticky crossover, first a sequence of code in the first parent
genotype is chosen. This sequence is then swapped with the sequence at exactly the same
position in the second parent.
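A minimal Python sketch of this idea could look as follows. All names are illustrative, and the way the segment bounds are drawn here may well differ from the actual operator of Francone et al. [982, 2050].

```python
import random

def sticky_crossover(parent1, parent2, rng=random):
    """Swap a code segment between two programs at identical loci.

    Only positions which exist in both (possibly differently long)
    parents are eligible, so every exchanged instruction keeps its
    locus -- the "sticky" restriction described above.
    """
    limit = min(len(parent1), len(parent2))
    start = rng.randrange(limit)
    end = rng.randrange(start, limit) + 1  # half-open segment [start, end)
    child1 = parent1[:start] + parent2[start:end] + parent1[end:]
    child2 = parent2[:start] + parent1[start:end] + parent2[end:]
    return child1, child2
```

Because instructions never change their position, each child keeps exactly the length of its respective parent.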

31.4.4.2 Page-based LGP

An approach similar to Francone et al.’s sticky crossover is the Page-based linear Genetic
Programming by Heywood and Zincir-Heywood [1229]. Here, programs are described as
sequences of pages. Each page includes the same number z of instructions.
Crossover exchanges exactly a single page between the parents but, different from the
homologous operators, is not locally bound. In other words, the third page of the first
parent may be swapped with the fifth page of the second parent. Since exactly one page is
exchanged, the length of the programs remains the same and the number of their instructions
does not change. This form of crossover helps to identify and spread building blocks and
is thus less destructive.
Heywood and Zincir-Heywood [1229] also devise a dynamic page size adaptation method
for their crossover operator. Instead of keeping the page size z constant throughout the
evolution, an upper limit z̆ for this size is defined. The Genetic Programming process starts
with a working page size of z = 1. If a fitness plateau is reached, i. e., no further improvement
of the best fitness can be found for some time, z is increased to the next divisor of z̆. In
other words, if z̆ = 12, z would go through 1, 2, 3, 4, 6, and 12. If z = z̆ and the GP process
stagnates again, this iteration starts all over again at z = 1. The idea of such an approach
is that initially, the building blocks in the system are likely to be simple and small. Over
time, longer useful code sequences will be discovered, calling for a larger page size in order
to be preserved.
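Both the page-wise exchange and the divisor schedule for the page size can be sketched as follows. This is a hypothetical Python model: the stagnation detection that would trigger next_page_size is left out, and the name z_max for the upper page-size limit is an assumption.

```python
import random

def page_crossover(parent1, parent2, z, rng=random):
    """Exchange exactly one page of z instructions between two programs."""
    i = rng.randrange(len(parent1) // z) * z  # page start in parent 1
    j = rng.randrange(len(parent2) // z) * z  # page start in parent 2
    child1 = parent1[:i] + parent2[j:j + z] + parent1[i + z:]
    child2 = parent2[:j] + parent1[i:i + z] + parent2[j + z:]
    return child1, child2

def next_page_size(z, z_max):
    """Advance the working page size to the next divisor of z_max,
    restarting at 1 once the maximum has been reached."""
    if z >= z_max:
        return 1
    z += 1
    while z_max % z != 0:
        z += 1
    return z
```

With z_max = 12, repeated calls to next_page_size move the working page size through 2, 3, 4, 6, and 12 before wrapping around to 1, as in the example above.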

31.4.5 General Information on Linear Genetic Programming

31.4.5.1 Applications and Examples

Table 31.3: Applications and Examples of Linear Genetic Programming.


Area References
Chemistry [207, 1671, 2899]
Computer Graphics [1671]
Control [2052]
Data Compression [2047]
Data Mining [388–392, 982, 1207, 1229, 1671, 2044]
Digital Technology [392, 393, 1229, 2050, 2164]
Distributed Algorithms and Sys- [947, 1180, 1793, 1977, 2555, 2888, 2896, 2898, 2899, 2916]
tems
Engineering [392, 393, 947, 1180, 1229, 1671, 1682, 1763–1767, 1793, 1977,
2050, 2052, 2164, 2555, 2896]
Graph Theory [947, 1180, 1682, 1767, 1793, 1977, 2555, 2896]
Healthcare [388, 391]
Mathematics [390, 392, 1206, 1207, 1229, 1301, 1715, 2164, 2888, 2891,
2898]
Prediction [205, 390, 1671, 2086]
Security [947, 1180, 1682, 1977, 2555]
Software [976, 1206, 1207, 1547, 1669, 1671, 1792, 1793, 2046, 2050,
2053, 2916]
Testing [642, 2592]
Wireless Communication [1682, 1767]

31.4.5.2 Books
1. Advances in Genetic Programming [2569]
2. Genetic Programming: An Introduction – On the Automatic Evolution of Computer Pro-
grams and Its Applications [209]
3. Linear Genetic Programming [394]
4. Evolutionary Program Induction of Binary Machine Code and its Applications [2045]

31.5 Grammars in Genetic Programming

31.5.1 General Information on Grammar-Guided Genetic Programming

31.5.1.1 Applications and Examples

Table 31.4: Applications and Examples of Grammar-Guided Genetic Programming.


Area References
Control [2336]
Data Mining [2969]
Digital Technology [2032]
Economics and Finances [379, 380]
Engineering [1920, 2032, 2336]
Mathematics [2032–2034, 2359]
Software [1920]

31.5.1.2 Books
1. Data Mining Using Grammar Based Genetic Programming and Applications [2969]

31.6 Graph-based Approaches

31.6.1 General Information on Graph-based Genetic Programming

31.6.1.1 Applications and Examples

Table 31.5: Applications and Examples of Graph-based Genetic Programming.


Area References
Digital Technology [2792, 3042]
Engineering [2792, 3042]
Mathematics [2482]
Sorting [2482]
Tasks T31

91. Use Genetic Programming to perform symbolic regression (see Section 49.1 on page 531
and Example E31.1 on page 387) on the data given in Listing 31.2. Which mathemat-
ical function φ with b = φ(a) relates the values of a to the values of b? You can use
the files from Section 57.2.1.1 on page 905 and the other sources provided with this
book to construct your optimization process.
1 a b
2 =============================================
3 0.008830073673861905 0.008752332965198615
4 0.018694563401923325 0.018347254459257438
5 0.05645469964408556 0.05332752398796371
6 0.0766136391753689 0.0708938021131889
7 0.14129105849778467 0.12226631044115759
8 0.34977791509449296 0.241542619578675
9 0.35201052061616 0.24247840720311337
10 0.36438318566633743 0.2475456173823204
11 0.3893919925542615 0.2571847111870792
12 0.45985453110101393 0.2802156496978409
13 0.5294420401690347 0.29744194372200355
14 0.545693694874572 0.30073566724237905
15 0.5794622522099054 0.30675072726587754
16 0.5854806945538868 0.3077087939606112
17 0.587999637880455 0.30809978088344814
18 0.6134151841062928 0.31172078461868413
19 0.6305855626214137 0.3138417822317378
20 0.6451020098521425 0.315436888525947
21 0.6595800013242682 0.3168517955439009
22 0.6622167712763556 0.3170909127233424
23 0.7470926929713728 0.32191168989382
24 0.7513516563209482 0.3220146762426389
25 0.7571482262199194 0.32213477030839227
26 0.760834074066516 0.32219920416927167
27 0.799102390437177 0.32233694519750067
28 0.8823767703419055 0.31955612939384237
29 0.8839162433815917 0.31946827608018985
30 0.9278045504520468 0.3164575071414033
31 0.9482259113015682 0.3147394303666216
32 0.9483969161261779 0.31472423502162195
Listing 31.2: The data for Task 91

[50 points]

92. Use any of the optimization algorithms for real-valued problem spaces to match the
polynomial g₅a⁵ + g₄a⁴ + g₃a³ + g₂a² + g₁a + g₀ to the data given in Listing 31.2
from Task 91. Optimize the vector g = (g₀, g₁, . . . , g₅)ᵀ by considering it as genotype
in the six-dimensional search space G ⊆ ℝ⁶. Use an objective function which could
be roughly similar to Listing 57.20 on page 908. Do not use any tree-related data
structures.
[50 points]

93. Do the same as in Task 92, but instead of the polynomial, try to fit the function
g₂a² + g₄a + sin(g₃a + g₂) + g₁ · e^(a·g₀).
[20 points]

94. Combine what you have learned from Task 91, 92, and 93: Develop a system which in
a first optimization run synthesizes a formula F via symbolic regression and Genetic
Programming. In a second step, this system should discover all nodes which represent
ephemeral random constants (see Definition D49.3) and optimize their values with a
numerical optimization procedure, while keeping the structure of the formula F intact.
In other words, first discover a good formula structure, and then adapt it in the best
possible way to the available data. Both steps should be performed by an automatic
procedure and provide you with the final result without needing manual interaction.
Repeat the experiments multiple times. Try to find functions that fit different data
created by yourself. What conclusions can you draw about the result?
[50 points]

95. Replace the Evolutionary Algorithm as the underlying optimization method for Genetic
Programming with Simulated Annealing or a Hill Climber and apply the resulting
system to Task 91 or 94. How does the result change? What are your conclusions about
GP, especially in terms of the binary reproduction operation crossover (which is not
used in Simulated Annealing or Hill Climbing)?
[20 points]

96. Instead of a tree-based representation for programs and formulas, as used in Task 91
to 95, implement a linear Genetic Programming (LGP) system which provides mathe-
matical operations in a style similar to assembler code. Repeat the experiment defined
in Task 91 with this LGP system and compare the performance with the tree-based,
Standard Genetic Programming approach.
[70 points]
Chapter 32
Evolutionary Programming
32.1 General Information on Evolutionary Programming

32.1.1 Applications and Examples

Table 32.1: Applications and Examples of Evolutionary Programming.


Area References
Chemistry [848]
Control [1536]
Data Mining [2392]
Distributed Algorithms and Sys- [2711]
tems
Engineering [303, 1536, 1894, 2018]
Function Optimization [169, 2711]
Games [940]
Mathematics [1702]
Physics [848]

32.1.2 Books
1. Evolutionary Computation: The Fossil Record [939]
2. Artificial Intelligence through Simulated Evolution [945]
3. Blondie24: Playing at the Edge of AI [940]
4. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
5. Genetic Algorithms + Data Structures = Evolution Programs [1886]

32.1.3 Conferences and Workshops

Table 32.2: Conferences and Workshops on Evolutionary Programming.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
EP: Annual Conference on Evolutionary Programming
History: 1998/05: San Diego, CA, USA, see [2208]
1997/04: Indianapolis, IN, USA, see [109]
1996/02: San Diego, CA, USA, see [946]
1995/03: San Diego, CA, USA, see [1848]
1994/02: San Diego, CA, USA, see [2444]
1993/02: La Jolla, CA, USA, see [943]
1992/02: La Jolla, CA, USA, see [942]
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

32.1.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Chapter 33
Differential Evolution

33.1 Introduction

Differential Evolution1 (strategy) (DE, DES) is a method for mathematical optimization
of multidimensional functions [286, 408, 903, 1662, 1866, 1882, 2220, 2620]. Developed by
Storn and Price [2621], the DE technique was originally invented in order to solve the Chebyshev
polynomial fitting problem. It has proven to be a very reliable optimization strategy for
many different tasks whose parameters can be encoded in real vectors.
In numerical optimization, very little knowledge about what is good or bad exists at the
beginning. Points are usually sampled uniformly distributed in the search space. The
search operations need to perform steps with larger sizes in order to move towards the
regions with the best fitness. As the basins of attraction of the optima are reached, smaller step
sizes are required to refine the solutions. The closer the optimization algorithm gets to an
optimum, the smaller the steps should become in order to approach it more and more closely.
Differential Evolution is an Evolutionary Algorithm which uses the assumption that
selection will make the population converge to the basin of attraction of an optimum as
basis for self-adaption. As the points in the population approach an optimum (or better:
when only those points close to the optimum survive selection), their distances to each
other get smaller. This distance information is used to derive a proper step width for the
search operation. In other words, initially large inter-point distances lead to larger search
steps. As the search progresses, the search steps get smaller, hence allowing the optimization
process to converge to the optimum. Vice versa, the convergence to an optimum leads to
smaller search steps. With this method, Differential Evolution does not need to use explicit
endogenous information in the sense of Evolution Strategies (see Section 30.5.2), instead it
uses the information stored in the whole population.

33.2 Ternary Recombination

The essential idea behind Differential Evolution is the way in which the (ternary) recombina-
tion operator “recombineDE” is defined for creating new genotypes. It takes three parental
individuals g1 , g2 , and g3 as parameters. The difference between g1 and g2 in the search
space G is computed, weighted with a weight F ∈ R and added to the third parent g3 .

recombineDE(g1 , g2 , g3 ) = g3 + F (g1 − g2 ) (33.1)


1 http://en.wikipedia.org/wiki/Differential_evolution [accessed 2007-07-03]
Except for determining F , no additional probability distribution has to be used in the
basic Differential Evolution scheme. There exists a common nomenclature for Differential
Evolution algorithms: DE/A/p/B where

1. DE stands for Differential Evolution,

2. A is the way the individuals are selected to compute the differential information,
3. p is the number of individual pairs used for computing the difference information used
in the recombination operation, and
4. B identifies the recombination operator to be used.
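Equation 33.1 and the role of the B component can be sketched as follows in Python. The "bin" (binomial) crossover shown as an example of B is one common choice in the literature; the function names and parameter defaults are illustrative assumptions, not prescribed by the text.

```python
import random

def recombine_de(g1, g2, g3, f=0.5):
    """Equation 33.1: add the weighted difference of g1 and g2 to g3."""
    return [x3 + f * (x1 - x2) for x1, x2, x3 in zip(g1, g2, g3)]

def binomial_crossover(base, donor, cr=0.9, rng=random):
    """A "bin"-style recombination: each gene is taken from the donor
    vector with rate cr, and at least one gene is always taken from
    the donor so that the child differs from the base vector."""
    forced = rng.randrange(len(base))
    return [d if (i == forced or rng.random() < cr) else b
            for i, (b, d) in enumerate(zip(base, donor))]
```

In a DE/rand/1/bin-style algorithm, g1, g2, and g3 would be chosen randomly from the population (the A component), and the result of recombine_de would then be crossed with the current individual via binomial_crossover.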

33.2.1 Advanced Operators

The algorithm is completely self-organizing. This classical reproduction strategy has been
complemented with new ideas like triangle mutation and alternations with weighted directed
strategies. Gao and Wang [1017] emphasize the close similarities between the reproduction
operators of Differential Evolution and the search step of the Downhill Simplex. Thus, it
is only logical to combine or to compare the two methods as we show in ?? on page ??.
Further improvements to the basic Differential Evolution scheme have been contributed, for
instance, by Kaelo and Ali. Their DERL and DELB algorithms outperformed standard
DE [1475–1477] on the test benchmark compiled by Ali et al. [72].
33.3 General Information on Differential Evolution

33.3.1 Applications and Examples

Table 33.1: Applications and Examples of Differential Evolution.


Area References
Combinatorial Problems [286, 527]
Computer Graphics [2707]
Distributed Algorithms and Sys- [2315]
tems
Economics and Finances [1060]
Engineering [2860]
Function Optimization [408, 551, 1017, 1476, 2130, 2619, 2621, 3020]
Graph Theory [2315, 2793]
Logistics [527]
Mathematics [2620]
Motor Control [490]
Software [2620]
Wireless Communication [2793]

33.3.2 Books
1. Differential Evolution – A Practical Approach to Global Optimization [2220]
2. Differential Evolution – In Search of Solutions [903]

33.3.3 Conferences and Workshops

Table 33.2: Conferences and Workshops on Differential Evolution.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

33.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T33

97. In Paragraph 56.2.1.1.4 on page 794, we give the implementation of the ternary search
operations of the simple Differential Evolution algorithm, which in turn, you can find
implemented in Listing 56.19 on page 771. However, as discussed by Mezura-Montes
et al. [1882], Differential Evolution does not necessarily take only two points in the
search space to compute the difference information. Instead, it may take p point pairs.
In this case, the search operators are no longer ternary but (1 + 2p)-ary. Re-implement
the operators and Differential Evolution algorithm for this general case. You may use
a programming language of your choice.
[50 points]

98. In Task 90 on page 377, an Evolution Strategy was applied to an instance of the Trav-
eling Salesman Problem and its results were compared to what we obtain with a simple
Hill Climber that uses the straightforward integer-string permutation representation
discussed in Section 29.3.1.2 on page 329. Repeat this experiment, but now test some
Differential Evolution variants instead of Evolution Strategy. Compare your results
with those obtained with the other two algorithms in Task 90.
[50 points]
Chapter 34
Estimation Of Distribution Algorithms

34.1 Introduction

Traditional Evolutionary Algorithms are based directly on the natural paragon of Darwinian
evolution. The candidate solutions are considered to be equivalent to living organisms in
nature, which compete for survival (selection) and reproduce if successful. Estimation of
Distribution Algorithms1 (EDAs, [126, 1683, 1974, 2153, 2158]) are different. Instead of
improving possible solutions step by step, they try to learn what a perfect solution should
look like.
This is done from a stochastic point of view: Initially, nothing is known about where
the optimal solution is located in the search space, so each point which could be investigated
is equally likely to be the optimum. With every candidate solution which has actually been
evaluated with the objective functions, more information is gained. In Genetic Algorithms
or Genetic Programming, the population begins to move to certain, promising regions. From
the stochastic point of view, this corresponds to the assumption that the probability that
the optimum is contained in these regions is estimated to be high. The spatially denser the
population becomes, the higher this probability is assumed to be.
EDAs abandon the biological example of repeated sexual and asexual reproduction [2153]
and instead, try to estimate the probability distribution M directly.
In case of an Estimation of Distribution Algorithm solving a continuous optimization
problem, initially, the probability model M(t = 0) at time step t = 0 may be estimated with
a uniform distribution. Step by step, the variance D2 M of the distribution corresponding to
the parameters of M should decrease and, in case that there exists a single, global optimum
x⋆ , the goal is that D2 [M(t)] = 0 and E [M(t)] = x⋆ are approached for t → ∞ (ideally,
earlier) [1197].
In Figure 34.1 we sketch the information held in a population of, for instance, an Evolution
Strategy in Fig. 34.1.a to Fig. 34.1.c and the information held in a model of an Estimation of
Distribution Algorithm in Fig. 34.1.d to Fig. 34.1.f. In the first row of the figure, from the left
to the right, it can be seen how the population of the ES converges. The model used in a
real-valued EDA may converge similarly. In this case, its parameters may shrink down and
become smaller and smaller, thus decreasing the number of likely sampled genotypes.

1 http://en.wikipedia.org/wiki/Estimation_of_distribution_algorithm [accessed 2009-08-20]



Fig. 34.1.a: Population 1    Fig. 34.1.b: Population 3    Fig. 34.1.c: Population 4
[figure: the top row shows scatter plots of the shrinking ES population; the bottom row shows the corresponding model as a normal distribution with means m1, m2 and standard deviations s1, s2]
Fig. 34.1.d: Model 1    Fig. 34.1.e: Model 3    Fig. 34.1.f: Model 4

Figure 34.1: Comparison between information in a population and in a model.

34.2 General Information on Estimation Of Distribution


Algorithms

34.2.1 Applications and Examples

Table 34.1: Applications and Examples of Estimation Of Distribution Algorithms.


Area References
Combinatorial Problems [366, 534, 1683, 2158]
Data Mining [42, 1683, 1784, 1910, 2069, 2398, 2917]
Digital Technology [3093]
Distributed Algorithms and Sys- [1435, 2455, 2456, 2631, 2632]
tems
Engineering [117, 1435, 2158, 2860, 3093]
Function Optimization [2038]
Graph Theory [534, 1435, 2455, 2456, 2631, 2632, 2793, 2794]
Logistics [1121, 1683]
Mathematics [2038]
Military and Defense [3046]
Motor Control [664]
Physics [2187]
Software [117]
Wireless Communication [534, 2158, 2793, 2794]

34.2.2 Books
1. Scalable Optimization via Probabilistic Modeling – From Algorithms to Applications [2158]
2. Estimation of Distribution Algorithms – A New Tool for Evolutionary Computation [1683]
3. Towards a New Evolutionary Computation – Advances on Estimation of Distribution Algo-
rithms [1782]
4. Hierarchical Bayesian Optimization Algorithm – Toward a New Generation of Evolutionary
Algorithms [2145]

34.2.3 Conferences and Workshops

Table 34.2: Conferences and Workshops on Estimation Of Distribution Algorithms.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
OBUPM: Optimization by Building and Using Probabilistic Models Workshop
History: 2001/07: San Francisco, CA, USA, see [2149]
2000/07: Las Vegas, NV, USA, see [2155]
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

34.2.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands

34.3 Algorithm

In Algorithm 34.1, we define the basic structure “PlainEDA” of Estimation of Distribution


Algorithms and in Algorithm 34.2, the more general structure “EDA MO” is given. The
problem spaces X of the EDAs are usually the real vectors Rn or bit strings Bn and they
also search in these spaces (G ≡ X), as we assume in Algorithm 34.2. Both algorithms begin
by creating an initial population pop (via “createPop”). “createPop” might just produce
a set of individuals representing random points in the search space or may as well create the
points systematically following some grid.
After this is done, the actual loop of the EDA begins. First, we wish to compute the
objective values of the candidate solutions. In the general case, where the search space G
is not necessarily the same as the problem space X, we first need to derive the points p.x
which belong to the initially created elements p.g in the search space for all p ∈ pop. This

Algorithm 34.1: x ←− PlainEDA(f, ps, mps)


Input: f: the objective function
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] terminationCriterion: the termination criterion
Data: pop: the population
Data: mate: the matingPool
Data: M: the stochastic model
Data: continue: a variable holding the termination criterion
Output: x: the best individuals
1 begin
2 pop ←− createPop(ps)
3 M ←− initial model, uniform distribution
4 x ←− pop[0]
5 continue ←− true
6 while continue do
7 pop ←− computeObjectives(pop, f)
8 x ←− extractBest(pop ∪ {x} , f)[0]
9 if terminationCriterion then
10 continue ←− f alse
11 else
12 mate ←− selection(pop, f, mps)
13 M ←− buildModel(mate, M)
14 pop ←− sampleModel(M, ps)
15 return x

Algorithm 34.2: X ←− EDA MO(OptProb, ps, mps)


Input: OptProb: a tuple (X, f , cmpf ) of the problem space X, the objective functions
f , and a comparator function cmpf
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] gpm: the genotype-phenotype
 mapping
Input: [implicit] terminationCriterion : the termination criterion
Data: pop: the population
Data: mate: the matingPool
Data: v: the fitness function
Data: M: the stochastic model
Data: continue: a variable holding the termination criterion
Data: archive: the archive with the best individuals
Output: X: the set of best candidate solutions discovered
1 begin
2 pop ←− createPop(ps)
3 M ←− initial model, uniform distribution
4 archive ←− ∅
5 continue ←− true
6 while continue do
7 pop ←− performGPM(pop, gpm)
8 pop ←− computeObjectives(pop, f )
9 archive ←− extractBest(pop ∪ archive, cmpf )

10 if terminationCriterion then
11 continue ←− f alse
12 else
13 v ←− assignFitness(pop, cmpf )
14 mate ←− selection(pop, v, mps)
15 M ←− buildModel(mate, M)
16 pop ←− appendList(pop, sampleModel(M, ps))
17 return extractPhenotypes(archive)
would be done with a genotype-phenotype mapping gpm : G ↦ X whose application we
here denote as “performGPM” in the generalized algorithm variant Algorithm 34.2. This
procedure allows us to use, for instance, an EDA for bit strings Bn while actually optimizing
antenna designs [3046].
Based on the candidate solutions p.x, we can determine the objective values (here
sketched as applying “computeObjectives” to pop). Normally, we deal with a single objective
function f (Algorithm 34.1), but generally, multi-objective optimization may be performed
as well. In the latter case outlined in Algorithm 34.2, the set f contains more than one
function f ∈ f . Then, an additional fitness assignment process “assignFitness” may be used
which breaks down the objective values to a single scalar fitness v(p) for each individual p
denoting its performance in relation to the other individuals in the population pop. In the
usual, single-objective case, this is not required.
After the objective values are known, we can check whether we discovered a new candidate
solution with better performance than the best one known until now (x, Algorithm 34.1).
In the general case where |f | > 1, there may be more than one candidate solution which
can be considered as optimal from the perspective of the current knowledge, which is, for
instance, not dominated. We use the operation “extractBest” in both algorithm variants
to sketch this process. Notice that in the latter case, archive pruning strategies may be
necessary in order to prevent the archive of best individuals from overflowing.
Based on the objective values f(p.x) (Algorithm 34.1) or the fitness (Algorithm 34.2), a
selection step “selection” chooses mps of the best individuals from pop and puts them into
the mating pool mate. This may be done with any given selection algorithm, but truncation
or fitness proportional selection are often preferred in EDAs. Notice that until now, the
EDA proceeds exactly like any normal Evolutionary Algorithm.
In the next step, however, a probabilistic model M representing the traits of the elements in
mate is computed with an EDA-specific procedure “buildModel”. This operation may also
involve previously accumulated knowledge such as models created in past generations, which
is illustrated by providing it with the previous model as parameter.
Instead of creating a new population based on the elements in mate by, for instance, applying
mutation and crossover operators as would be done in Genetic Algorithms or Genetic
Programming, the model M is used for this purpose. M represents a specific probability
distribution which can be sampled by using a random number generator that creates elements
following the parameters defined in M. We therefore define here the model-type-specific
“sampleModel” operation whose results can then either replace the old population completely
or (as in the case of Algorithm 34.2) be merged into it.
With the population of new individuals, the cycle starts all over again, unless the
termination criterion terminationCriterion has been met. In the end, the single best
candidate solution discovered is returned in the simple case Algorithm 34.1 or the archive
of best solutions in the general case Algorithm 34.2.
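The cycle just described can be sketched in Java as a toy: a one-dimensional continuous problem whose model M is a single normal distribution, similar in spirit to the real-valued EDAs of Section 34.5. This is only an illustration of the loop of Algorithm 34.1 (sample, evaluate, select, rebuild the model); all class and method names are ours, not those of the book's listings.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

/** Minimal sketch of the EDA cycle of Algorithm 34.1 for a continuous
 *  one-dimensional problem. The model M is a normal distribution (mu, sigma);
 *  "buildModel" refits mu to the mean of the selected points and shrinks sigma.
 *  All identifiers are illustrative and not taken from the book's listings. */
public class PlainEdaSketch {

  static double f(double x) { return (x - 3.0) * (x - 3.0); } // objective

  public static double optimize(long seed) {
    Random rng = new Random(seed);
    final int ps = 50, mps = 10;     // population size, mating pool size
    double mu = 0.0, sigma = 10.0;   // initial model: broad distribution
    double best = mu;

    for (int gen = 0; gen < 200; gen++) {            // termination: fixed budget
      Double[] pop = new Double[ps];
      for (int i = 0; i < ps; i++) {                 // sampleModel(M, ps)
        pop[i] = mu + sigma * rng.nextGaussian();
      }
      Arrays.sort(pop, Comparator.comparingDouble(PlainEdaSketch::f));
      if (f(pop[0]) < f(best)) best = pop[0];        // extractBest
      double sum = 0.0;                              // truncation selection and
      for (int i = 0; i < mps; i++) sum += pop[i];   // buildModel: refit mu
      mu = sum / mps;
      sigma *= 0.9;                                  // let the model contract
    }
    return best;
  }

  public static void main(String[] args) {
    System.out.println("best x = " + optimize(42L)); // should be close to 3
  }
}
```

As the generations pass, the model contracts around the optimum x⋆ = 3, mirroring the shrinking distributions sketched in Figure 34.1.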

34.4 EDAs Searching Bit Strings


Most of the research on EDAs is performed in areas where the search spaces G are bit strings
g ∈ G = Bn of a fixed length n.

34.4.1 UMDA: Univariate Marginal Distribution Algorithm

Maybe the simplest EDA for binary search spaces is the Univariate Marginal Distribution
Algorithm (UMDA) [1973, 1974]. We define it here as “UMDA” in Algorithm 34.3 and
provide a simple example implementation of its modules in Java on top of our basic EDA
(see Listing 56.20) in Paragraph 56.1.4.4.1 on page 781.
The UMDA constructs a model M storing one real number M[i] corresponding to the
estimated probability that the bit at index i in the genotype p⋆ .g belonging to the optimal

1 Select individuals into the mating pool. Illustrated: the genotypes in the mating pool:
      g1 = (1, 0, 0, 0, 1)    g6 = (1, 0, 0, 1, 1)
      g2 = (1, 0, 1, 1, 1)    g7 = (1, 0, 1, 1, 1)
      g3 = (1, 1, 1, 0, 1)    g8 = (1, 1, 1, 1, 1)
      g4 = (1, 0, 1, 1, 0)    g9 = (0, 1, 1, 1, 0)
      g5 = (0, 1, 0, 0, 0)    g10 = (1, 0, 1, 0, 1)

2 Model building: count how often each locus is 0 and how often it is 1:
      #0: 2 6 3 4 3
      #1: 8 4 7 6 7

3 Model: M = (0.8, 0.4, 0.7, 0.6, 0.7), the relative frequencies of the 1s

4 Rules for sampling: P(g[j] = 1) = M[j] and P(g[j] = 0) = 1 − M[j] for each locus j

5 Model sampling: draw one uniform random number randUni[0, 1) per locus

6 New genotype: e.g., g = (1, 0, 0, 1, 1)

7 Repeat steps 5/6 ps (population size) times, then go back to step 1.

Figure 34.2: A rough sketch of the UMDA process.



Algorithm 34.3: x ←− UMDA(f, ps, mps)


Input: f: the objective function
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] terminationCriterion: the termination criterion
Input: [implicit] n: the number of bits per genotype
Data: pop: the population
Data: mate: the matingPool
Data: M: the stochastic model
Data: continue: a variable holding the termination criterion
Output: x: the best individual
1 begin
2 M ←− (0.5, 0.5, . . . , 0.5)
// see Listing 56.21 on page 781
3 x ←− null
4 continue ←− true
5 while continue do
6 pop ←− sampleModelUMDA(M, ps)
7 pop ←− performGPM(pop, gpm)
8 pop ←− computeObjectives(pop, f)
9 mate ←− rouletteWheelSelection(pop, f, mps)
10 if x = null then x ←− extractBest(mate, f)[0]
11 else x ←− extractBest(mate ∪ {x} , f)[0]
12 if terminationCriterion then continue ←− f alse
13 else M ←− buildModelUMDA(mate)
14 return x
candidate solution p⋆ .x is 1. Initially, each element of the vector M is 0.5. The model
is sampled in order to fill the population with individuals. For this purpose, real-valued
random numbers randomUni[0, 1), uniformly distributed between (inclusively) 0 and
(exclusively) 1, are used. For the jth bit in a new genotype p.g, such a number is drawn
and the bit becomes 1 if it is less than or equal to M[j], and 0 otherwise. A rough sketch
of the UMDA process is given in Figure 34.2. We illustrate this process in Algorithm 34.4
as “sampleModelUMDA” and implement it in Listing 56.23 on page 783.

Algorithm 34.4: pop ←− sampleModelUMDA(M, ps)


Input: M: the model
Input: ps: the target population size
Input: [implicit] n: the number of bits per genotype
Data: i, j: counter variables
Data: p: an individual record and its corresponding genotype p.g
Output: pop: the new population
1 begin
2 pop ←− ()
3 for i ←− 1 up to ps do
4 p.g ←− ()
5 for j ←− 0 up to n − 1 do
6 if randomUni[0, 1) ≤ M[j] then p.g ←− addItem(p.g, 1)
7 else p.g ←− addItem(p.g, 0)
8 pop ←− addItem(pop, p)
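A minimal Java sketch of this sampling step follows. The identifiers are ours, and the code assumes, as stated in the text, that M[j] stores the estimated probability of a 1 at locus j.

```java
import java.util.Random;

/** Illustrative sketch of "sampleModelUMDA": fill a population of ps bit-string
 *  genotypes from a model M, where M[j] is the estimated probability that bit j
 *  is 1 in the optimum. Names are ours, not the book's. */
public class UmdaSampler {

  public static int[][] sampleModel(double[] M, int ps, Random rng) {
    int n = M.length;
    int[][] pop = new int[ps][n];
    for (int i = 0; i < ps; i++) {
      for (int j = 0; j < n; j++) {
        // draw randomUni[0,1); emit a 1 with probability M[j]
        pop[i][j] = (rng.nextDouble() < M[j]) ? 1 : 0;
      }
    }
    return pop;
  }

  public static void main(String[] args) {
    double[] M = {0.0, 1.0, 0.5};   // two degenerate loci, one undecided locus
    int[][] pop = sampleModel(M, 4, new Random(1));
    for (int[] g : pop) System.out.println(java.util.Arrays.toString(g));
  }
}
```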

A genotype-phenotype mapping may be performed which transforms the genotypes p.g


of the individuals in the population pop to their corresponding phenotypes p.x if needed,
as is, for instance, the case in circuit design [3093]. Then, the objective values of the
candidate solutions p.x can be determined. UMDA prescribes no specific selection scheme.
The original papers [1973] discuss fitness proportionate and tournament selection, but setups
with truncation selection are used as well [3093]. With any of these operators (we chose the
fitness proportionate variant in Algorithm 34.3), mps individuals are copied into the mating
pool.

Algorithm 34.5: M ←− buildModelUMDA(mate)


Input: mate: the selected individuals
Input: [implicit] n: the number of bits per genotype
Data: i, j, c: counter variables
Data: g: an element from the search space g ∈ G
Output: M: the new model
1 begin
2 M ←− (0, 0, . . . , 0)
3 for j ←− 0 up to n − 1 do
4 c ←− 0
5 for i ←− 0 up to len(mate) − 1 do
6 g ←− mate[i].g
7 if g [j] = 1 then c ←− c + 1
8 M[j] ←− c/len(mate)

The model building process Algorithm 34.5 of the UMDA is very simple and you can
find a Java example implementation in Listing 56.22 on page 782. For the bit at index j,
the number c of times it was 1 is counted over all genotypes g stored in the individual records
in the mating pool mate. c divided by the number of individuals in the mating pool hence is
the relative frequency h(p.x[j] = 1, len(mate)) and serves as the estimate of the probability
that the locus j should be 1 in the optimal solution. With the new probability vector, the
cycle starts again until the termination criterion terminationCriterion is met.
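The frequency count of Algorithm 34.5 can be sketched in Java as follows; the identifiers are ours and the mating pool is simplified to a plain array of genotypes.

```java
/** Illustrative sketch of "buildModelUMDA" (Algorithm 34.5): M[j] becomes the
 *  relative frequency of 1s at locus j over the genotypes in the mating pool.
 *  Identifiers are ours, not the book's. */
public class UmdaModelBuilder {

  public static double[] buildModel(int[][] mate) {
    int n = mate[0].length;
    double[] M = new double[n];
    for (int j = 0; j < n; j++) {
      int c = 0;                       // count how often locus j is 1
      for (int[] g : mate) {
        if (g[j] == 1) c++;
      }
      M[j] = (double) c / mate.length; // relative frequency h(..= 1, len(mate))
    }
    return M;
  }

  public static void main(String[] args) {
    int[][] mate = {{1, 0}, {1, 1}, {1, 0}, {1, 1}};
    System.out.println(java.util.Arrays.toString(buildModel(mate))); // [1.0, 0.5]
  }
}
```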

34.4.2 PBIL: Population-Based Incremental Learning

One of the first steps into EDA research was the population-based incremental learning
algorithm2 PBIL published in 1994 by Baluja [193], [194, 195]. Here we define it as Algo-
rithm 34.6 in the version given in [195].

Algorithm 34.6: x ←− PBIL(f, ps, mps)


Input: f: the objective function
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] n: the number of bits per genotype
Input: [implicit] terminationCriterion: the termination criterion
Data: pop: the population
Data: mate: the matingPool
Data: M: the stochastic model
Data: continue: a variable holding the termination criterion
Output: x: the best individual
1 begin
2 M ←− (0.5, 0.5, . . . , 0.5)
3 x ←− null
4 continue ←− true
5 while continue do
6 pop ←− sampleModelUMDA(M, ps)
7 pop ←− performGPM(pop, gpm)
8 pop ←− computeObjectives(pop, f)
9 mate ←− truncationSelection(pop, f, mps)
10 if x = null then x ←− extractBest(mate, f) [0]
11 else x ←− extractBest(mate ∪ {x} , f) [0]

12 if terminationCriterion then continue ←− f alse
13 else M ←− buildModelPBIL(mate, M)
14 return x

The structure of the models in PBIL is exactly the same as in the UMDA: for each
bit in the genotypes, a real number denotes the estimated probability that this bit should
be 1 in the optimal genotype. The model is therefore sampled like in the UMDA and the
algorithm itself also proceeds pretty much in the same way.
The main difference is the model building step presented as Algorithm 34.7. Here, a new
model M is created by merging the current model M′ and the information provided by the
mating pool mate. The learning rate λ defines how much each bit in the genotypes g of the
selected individuals p ∈ mate contributes to the new model. In PBIL, truncation selection
is used, which allows only the best mps individuals to enter the mating pool.
It should be noted that in the original version of the algorithm given in [193], a simple
mutation was applied to the models, too. Here, the single probabilities were randomly
2 http://en.wikipedia.org/wiki/Population-based_incremental_learning [accessed 2009-08-20]

Algorithm 34.7: M ←− buildModelPBIL(mate, M′ )


Input: mate: the selected individuals
Input: M′ : the old model
Input: [implicit] λ: the learning rate
Input: [implicit] n: the number of bits per genotype
Data: i: a counter variable
Data: g: an element from the search space g ∈ G
Output: MT : a temporary model
Output: M: the new model
1 begin
2 MT ←− buildModelUMDA(mate)
3 for i ←− 0 up to n − 1 do
4 M[i] ←− (1 − λ) ∗ M′ [i] + λ ∗ MT [i]

changed by a very small number. Also, the mating pool size was one, i. e., only the best
individual had influence on M. Similar algorithms have been introduced by Kvasnicka et al.
[1652] (Hill Climbing with Learning, HCwL) and Mühlenbein [1973] (Incremental Univariate
Marginal Distribution Algorithm, IUMDA).
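A compact Java sketch of the PBIL update rule of Algorithm 34.7 (our names; the UMDA-style frequency model MT of the mating pool is passed in precomputed):

```java
/** Illustrative sketch of the PBIL model update (Algorithm 34.7): the new model
 *  blends the old one with the UMDA-style frequency model of the mating pool,
 *  weighted by the learning rate lambda. Identifiers are ours. */
public class PbilUpdate {

  public static double[] buildModelPBIL(double[] oldM, double[] freqM,
                                        double lambda) {
    double[] M = new double[oldM.length];
    for (int i = 0; i < M.length; i++) {
      // M[i] = (1 - lambda) * M'[i] + lambda * MT[i]
      M[i] = (1.0 - lambda) * oldM[i] + lambda * freqM[i];
    }
    return M;
  }

  public static void main(String[] args) {
    double[] oldM = {0.5, 0.5};
    double[] freqM = {1.0, 0.0};   // e.g., bit frequencies from the mating pool
    System.out.println(java.util.Arrays.toString(
        buildModelPBIL(oldM, freqM, 0.1)));
  }
}
```

With a small λ, the model drifts only slowly towards the current mating pool, which preserves information accumulated in earlier generations.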

34.4.3 cGA: Compact Genetic Algorithm

The compact Genetic Algorithm (cGA) by Harik et al. [1198, 1199] again uses a vector of
(real-valued) probabilities to represent the estimation of the solutions with the best features
according to the objective function f. In Algorithm 34.8 we provide an outline of this

Algorithm 34.8: x ←− cGA(f, ps)


Input: f: the objective function
Input: ps: the population size
Input: [implicit] n: the number of bits per genotype
Data: mate: the matingPool
Data: M: the stochastic model
Output: x: the best individual
1 begin
2 M ←− (0.5, 0.5, . . . , 0.5)
3 while ¬terminationCriterionCGA(M) do
4 mate ←− sampleModelUMDA(M, 2)
5 mate ←− performGPM(mate, gpm)
6 mate ←− computeObjectives(mate, f)
7 M ←− buildModelCGA(mate, M, ps)
8 return transformModelCompGA(M)

algorithm which modifies this vector M so that there is a direct relation between it and the
population it represents [2153]. The idea of the cGA is the following: Assume that we have
a normal Genetic Algorithm and two individuals p1 and p2 compete in a binary tournament.
In the single-objective case, usually one of them will win and reproduce whereas the other
vanishes. Assume further that p1 prevails in the competition, i. e., p1 .x ≺ p2 .x, because
f(p1 .x) < f(p2 .x) in case of minimization, for instance. Then, the absolute frequency of its
corresponding genotype p1 .g in the population will increase by one. For a population of size
ps, this means that its relative frequency increases by 1/ps. This obviously also holds for every
single gene in p1 .g, since the individual is copied as a whole.
The cGA tries to emulate this situation by creating exactly two new genotypes
in each iteration by using the same sampling method already utilized in the UMDA
(sampleModelUMDA(M, 2), see Algorithm 34.4). These two individuals then compete with
each other and the genes of the winner then are used to update the model M as sketched in
Algorithm 34.9.

Algorithm 34.9: M ←− buildModelCGA(mate, M′ , ps)


Input: mate: the two individuals
Input: M′ : the old model
Input: ps: the simulated population size
Input: [implicit] n: the number of bits per genotype
Data: p1 , p2 : the individual variables
Data: i: counter variable
Output: M: the new model
1 begin
2 M ←− M′
3 p1 ←− mate[0]
4 p2 ←− mate[1]
5 if p2 .x ≺ p1 .x then
6 p1 ←− p2
7 p2 ←− mate[0]
8 for i ←− n − 1 down to 0 do
9 if p1 .g [i] ≠ p2 .g [i] then
10 if p1 .g [i] = 1 then M[i] ←− M′ [i] + 1/ps
11 else M[i] ←− M′ [i] − 1/ps

The optimization process has converged if all elements of the model M have become either 1
or 0. Then, only genotypes which exactly equal the model can be sampled and the vector
will not change anymore. Hence, as termination criterion “terminationCriterionCGA” we
simply need to check whether this convergence has already taken place or not.

Algorithm 34.10: (true, false) ←− terminationCriterionCGA(M)


Input: M: the model
Input: [implicit] n: the number of bits per genotype
Data: i: a counter
Output: (true, false): whether the cGA should terminate
1 begin
2 for i ←− n − 1 down to 0 do
3 if (M[i] > 0) ∧ (M[i] < 1) then return false
4 return true

The result of the optimization process is the phenotype gpm(g) corresponding to the
genotype g ≡ M defined by the model M in the converged state. The trivial way to produce
this element is specified here as Algorithm 34.11.
Further theoretical results concerning the efficiency of the cGA can be found in [833,

Algorithm 34.11: x ←− transformModelCompGA(M)


Input: M: the model
Input: [implicit] n: the number of bits per genotype
Data: i: a counter
Data: g: the genotype, g ∈ G
Output: x: the optimization result, x ∈ X
1 begin
2 g ←− ()
3 for i ←− 0 up to n − 1 do
4 if M[i] < 0.5 then g ←− addItem(g, 0)
5 else g ←− addItem(g, 1)
6 return gpm(g)

834, 2271]. Because of its compact structure – only one real vector and two bit strings need
to be held in memory – the cGA is especially suitable for implementation in hardware [117].
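The update and convergence test of the cGA can be sketched in Java as follows. The code assumes the P(bit = 1) reading of M used in the surrounding text and in Algorithm 34.11; all identifiers are ours, not the book's.

```java
/** Illustrative sketch of one cGA iteration step (cf. Algorithms 34.9 and
 *  34.10): the winner of a binary tournament shifts the probability vector M
 *  by 1/ps at every locus where it differs from the loser. M[i] is read here
 *  as the probability of a 1. Identifiers are ours. */
public class CgaStep {

  public static void update(double[] M, int[] winner, int[] loser, int ps) {
    for (int i = 0; i < M.length; i++) {
      if (winner[i] != loser[i]) {
        M[i] += (winner[i] == 1) ? 1.0 / ps : -1.0 / ps;
      }
    }
  }

  /** Convergence check: every entry of M has reached 0 or 1. */
  public static boolean converged(double[] M) {
    for (double m : M) {
      if (m > 0.0 && m < 1.0) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    double[] M = {0.5, 0.5, 0.5};
    update(M, new int[]{1, 0, 1}, new int[]{1, 1, 0}, 10);
    System.out.println(java.util.Arrays.toString(M)); // [0.5, 0.4, 0.6]
  }
}
```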
34.5 EDAs Searching Real Vectors
A variety of Estimation of Distribution Algorithms has been developed for finding optimal
real vectors from the Rn , where n is the number of elements of the genotypes. Mühlenbein
et al. [1976], for instance, introduce one of the first real-valued EDAs and in the following,
we discuss multiple such approaches.

34.5.1 SHCLVND: Stochastic Hill Climbing with Learning by Vectors of


Normal Distribution

The Stochastic Hill Climbing with Learning by Vectors of Normal Distribution (SHCLVND)
by Rudlof and Köppen [2346] is a kind of real-valued version of the PBIL. For search spaces

Algorithm 34.12: x ←− SHCLVND(f, ps, mps)


Input: f: the objective function
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] terminationCriterion: the termination criterion
Input: [implicit] n: the number of elements of the genotypes
Input: [implicit] Ǧ, Ĝ: the minimum and maximum vectors delimiting the problem
space G = [Ǧ[0], Ĝ[0]] × [Ǧ[1], Ĝ[1]] × . . .
Input: [implicit] rangeToStdDev: a factor for converting the range to the initial
standard deviations σ, rangeToStdDev = 0.5 in [2346]
Data: pop: the population
Data: mate: the matingPool
Data: M = (µ, σ): the stochastic model
Data: continue: a variable holding the termination criterion
Output: x: the best individual
1 begin
2 M.µ ←− (1/2) ∗ (Ǧ + Ĝ)
3 M.σ ←− rangeToStdDev ∗ (Ĝ − Ǧ)
4 x ←− null
5 continue ←− true
6 while continue do
7 pop ←− sampleModelSHCLVND(M, ps)
8 pop ←− performGPM(pop, gpm)
9 pop ←− computeObjectives(pop, f)
10 mate ←− truncationSelection(pop, f, mps)
11 if x = null then x ←− extractBest(mate, f) [0]
12 else x ←− extractBest(mate
 ∪ {x} , f) [0]
13 if terminationCriterion then continue ←− f alse
14 else M ←− buildModelSHCLVND(mate, M)
15 return x

G ⊆ Rn , it builds models M which consist of a vector of expected values M.µ and a vector
of standard deviations M.σ. We introduce it here as Algorithm 34.12.
Initially, the expected value vector M.µ is set to the point right in the middle of the search
space which is delimited by the vector of minimum values Ǧ and the vector of maximum
values Ĝ. Notice that these limits are soft and genotypes may be created which lie outside
the limits (due to the unconstrained normal distribution used for sampling). The vector of
standard deviations M.σ is initialized by multiplying a constant rangeToStdDev with the
range spanned by Ĝ − Ǧ. The constant rangeToStdDev is set to rangeToStdDev = 0.5 in
the original paper by Rudlof and Köppen [2346].

Algorithm 34.13: pop ←− sampleModelSHCLVND(M, ps)


Input: M = (µ, σ): the model
Input: ps: the target population size
Input: [implicit] n: the number of elements of the genotypes
Data: i, j: counter variables
Data: p: an individual record and its corresponding genotype p.g
Data: µ: the vector of estimated expected values
Data: σ: the vector of standard deviations
Output: pop: the new population
1 begin
2 pop ←− ()
3 µ ←− M.µ
4 σ ←− M.σ
5 for i ←− 1 up to ps do
6 p.g ←− ()
7 for j ←− 0 up to n − 1 do
8 p.g ←− addItem(p.g, randomNorm(µ[j], σ [j]))
9 pop ←− addItem(pop, p)

Algorithm 34.13 illustrates how the models M = (µ, σ) are sampled in order to create
new points in the search space. For each locus j of the genotypes, one univariate normally
distributed random number randomNorm(µ[j], σ [j]) is created, according to the value µ[j]
denoting the expected location of the optimum in the jth dimension and the prescribed
standard deviation σ [j].

Algorithm 34.14: M ←− buildModelSHCLVND(mate, M′ )


Input: mate: the selected individuals
Input: M′ : the old model
Input: [implicit] n: the number of elements of the genotypes
Input: [implicit] λ: the learning rate
Input: [implicit] σred ∈ [0, 1]: the standard deviation reduction rate
Data: g: the arithmetical mean of the selected vectors
Output: M: the new model
1 begin
len(mate)−1
1 X
2 g ←− mate[i].g
len(mate) i=0

3 M.µ ←− (1 − λ) ∗ M′ .µ + λ ∗ g
4 M.σ ←− σred ∗ M′ .σ
5 return M

After sampling a population pop of ps individuals, performing a genotype-phenotype


mapping “gpm” if necessary, and computing the objective values for the individual records
in the “computeObjectives” step, mps elements are selected using truncation selection –
exactly like in the PBIL. These mps best elements are then used to update the model, a
process here defined as “buildModelSHCLVND” in Algorithm 34.14. Like PBIL’s model
updating step Algorithm 34.7, “buildModelSHCLVND” uses a learning rate λ for updating
the expected value vector model component M.µ which defines how much this vector is
shifted into the direction of the mean vector g of the genotypes p.g of the individuals p in
the mating pool mate.
The standard deviation vector M.σ model component is not learned. Instead, it is simply
multiplied with a simple reduction factor σred ∈ [0, 1]. In the case where all elements of M.σ
are the same, Rudlof and Köppen [2346] assume a goal value σG the standard deviations
are to take on after t̂ generations of the algorithm and set

σred = σG^(1/t̂) (34.1)

which, with t̂ = 2500 generations and σG = 1/1000, yields σred ≈ 0.997 241 in their experiment [2346].
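The interplay of Algorithm 34.13 and Algorithm 34.14 can be sketched in a few lines of Python. The function and variable names below are ours and the parameter values merely illustrative; the sketch assumes genotypes are plain lists of floats.

```python
import random

def init_model(g_min, g_max, range_to_std=0.5):
    """Start in the middle of the search space; sigma spans the range."""
    mu = [(lo + hi) / 2.0 for lo, hi in zip(g_min, g_max)]
    sigma = [range_to_std * (hi - lo) for lo, hi in zip(g_min, g_max)]
    return mu, sigma

def sample_model(mu, sigma, ps, rng=random):
    """Algorithm 34.13: draw ps genotypes from independent normals."""
    return [[rng.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(ps)]

def build_model(mate, mu, sigma, lam=0.1, sigma_red=0.997241):
    """Algorithm 34.14: shift mu towards the mean of the selected
    genotypes and shrink sigma by the constant reduction factor."""
    n = len(mu)
    mean = [sum(g[j] for g in mate) / len(mate) for j in range(n)]
    mu = [(1 - lam) * m + lam * gm for m, gm in zip(mu, mean)]
    sigma = [sigma_red * s for s in sigma]
    return mu, sigma
```

Note how the model never stores the population itself: only the center and spread of the normal distributions survive from one generation to the next.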

34.5.2 PBILC : Continuous PBIL

Sebag and Ducoulombier [2443] suggest a similar extension of the PBIL to continuous search
spaces: PBILC . For updating the center vector M.µ of the estimated distribution, they
consider the genotypes gb,1 and gb,2 of the two best and the genotype gw of the worst
individual discovered in the current generation. These vectors are combined in a way similar
to the reproduction operator used in Differential Evolution [2220, 2621] and here listed in
Equation 34.2.
M.µ = (1 − λ) ∗ M′ .µ + λ ∗ (gb,1 + gb,2 − gw ) (34.2)
For updating M.σ, they suggest four possibilities:

1. to set it to a constant value, which would, however, either prevent the search from
becoming very specific or slow it down significantly, depending on the value,

2. to allow evolution to adjust M.σ itself in a way similar to Evolution Strategy,

3. to set each of its elements to the variance found in the corresponding loci of the K
best genotypes of the current generation,

4. or to adjust each of its elements to the variance found in the corresponding loci of the
K best genotypes of the current generation using the same learning mechanism as for
the center vector.
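The center update of Equation 34.2 can be sketched as follows. The vectors are plain Python lists and the function name is ours; note that, unlike a convex combination, the term gb,1 + gb,2 − gw may leave the region spanned by the parents, which is exactly the Differential-Evolution-like extrapolation intended here.

```python
def update_center_pbilc(mu_old, g_b1, g_b2, g_w, lam=0.05):
    """Equation 34.2 (sketch): pull the center towards the two best
    genotypes and away from the worst one."""
    return [(1 - lam) * m + lam * (b1 + b2 - w)
            for m, b1, b2, w in zip(mu_old, g_b1, g_b2, g_w)]
```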

34.5.3 Real-Coded PBIL

The real-coded PBIL by Servet et al. [2455, 2456] follows a different approach. Here, the
model M consists of three vectors: the upper and lower boundary vectors M.Ĝ and M.Ǧ
limiting the current region of interest, and the vector M.P denoting for each locus
j the probability that a good genotype is located in the upper half of the interval
[M.Ǧ[j], M.Ĝ[j]]. We
define this real-coded PBIL as Algorithm 34.15 which basically proceeds almost exactly like
the PBIL for binary search spaces (see Algorithm 34.6). In the startup phase, the boundary
vectors M.Ǧ and M.Ĝ of the model M are initialized to the problem-specific boundaries of
the search space and the M.P is set to (0.5, 0.5, . . . ).
The model can be sampled using, for instance, independent uniformly distributed random
numbers in the interval prescribed by [M.Ǧ[j], M.Ĝ[j]] for the jth gene from any of the n
loci, as done in Algorithm 34.16.
After a possible genotype-phenotype mapping, determining the objective values, and a
truncation selection step, the model M is to be updated with Algorithm 34.17. First, the
vector M.P is updated by merging it with the information of whether good traits can be found in

Algorithm 34.15: x ←− RCPBIL(f, ps, mps)


Input: f: the objective function
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] terminationCriterion: the termination criterion
Input: [implicit] n: the number of elements of the genotypes
Input: [implicit] Ǧ, Ĝ: the minimum and maximum vectors delimiting the problem
space G = [Ǧ[0], Ĝ[0]] × [Ǧ[1], Ĝ[1]] × . . .
Data: pop: the population
Data: mate: the mating pool
Data: M = (P, Ǧ, Ĝ): the stochastic model
Data: continue: a variable holding the termination criterion
Output: x: the best individual
1 begin
2 M.Ǧ ←− Ǧ
3 M.Ĝ ←− Ĝ
4 M.P ←− (0.5, 0.5, . . . , 0.5)
5 x ←− null
6 continue ←− true
7 while continue do
8 pop ←− sampleModelRCPBIL(M, ps)
9 pop ←− performGPM(pop, gpm)
10 pop ←− computeObjectives(pop, f)
11 mate ←− truncationSelection(pop, f, mps)
12 if x = null then x ←− extractBest(mate, f) [0]
13 else x ←− extractBest(mate ∪ {x} , f) [0]
14 if terminationCriterion then continue ←− false
15 else M ←− buildModelRCPBIL(mate, M)
16 return x

Algorithm 34.16: pop ←− sampleModelRCPBIL(M, ps)


Input: M: the model
Input: ps: the target population size
Input: [implicit] n: the number of elements of the genotypes
Data: i, j: counter variables
Data: p: an individual record and its corresponding genotype p.g
Output: pop: the new population
1 begin
2 pop ←− ()
3 for i ←− 1 up to ps do
4 p.g ←− ()
5 for j ←− 0 up to n − 1 do
6 p.g ←− addItem(p.g, randomUni[M.Ǧ[j], M.Ĝ[j]))
7 pop ←− addItem(pop, p)

Algorithm 34.17: M ←− buildModelRCPBIL(mate, M′ )


Input: mate: the selected individuals
Input: M′ : the old model
Input: [implicit] n: the number of elements of the genotypes
Input: [implicit] λ: the learning rate
Data: i, j, c: internal variables
Output: M: the new model
1 begin
2 M ←− M′
3 for i ←− 0 up to len(mate) − 1 do
4 g ←− mate[i].g
5 for j ←− 0 up to n − 1 do
6 if g [j] > (M.Ĝ[j] + M.Ǧ[j])/2 then c ←− 1
7 else c ←− 0
8 M.P [j] ←− (1 − λ) ∗ M.P [j] + λ ∗ c
9 for j ←− 0 up to n − 1 do
10 if M.P [j] ≥ 0.9 then
11 M.Ǧ[j] ←− (M.Ĝ[j] + M.Ǧ[j])/2
12 M.P [j] ←− 0.5
13 else
14 if M.P [j] ≤ 0.1 then
15 M.Ĝ[j] ←− (M.Ĝ[j] + M.Ǧ[j])/2
16 M.P [j] ←− 0.5

17 return M
the upper or lower halves of the regions of interest for each locus. In the case that the upper
half of one dimension j of the region of interest is very promising, i. e., M.P [j] ≥ 0.9,
the lower boundary M.Ǧ[j] of the region in this dimension is moved up to the middle
(M.Ĝ[j] + M.Ǧ[j])/2 and M.P [j] is reset to 0.5. On the other hand, if the lower half of
dimension j of the region is very interesting and M.P [j] ≤ 0.1, the same adjustment is
done with the upper boundary. In this respect, the real-coded PBIL works similar to a
(randomized) multi-dimensional binary search.
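The update of Algorithm 34.17, including the interval-halving step, can be sketched in Python. Names and parameter values are ours; the boundary vectors are mutated in place for brevity.

```python
def update_model_rcpbil(mate, P, lo, hi, lam=0.1):
    """Sketch of Algorithm 34.17: learn, per locus, whether good genotypes
    fall into the upper half of the region of interest, and halve the
    region once the evidence is strong enough."""
    n = len(P)
    for g in mate:
        for j in range(n):
            # c = 1 if this good genotype lies in the upper half of locus j
            c = 1.0 if g[j] > 0.5 * (hi[j] + lo[j]) else 0.0
            P[j] = (1 - lam) * P[j] + lam * c
    for j in range(n):
        if P[j] >= 0.9:            # good genotypes sit in the upper half
            lo[j] = 0.5 * (hi[j] + lo[j])
            P[j] = 0.5
        elif P[j] <= 0.1:          # ... or in the lower half
            hi[j] = 0.5 * (hi[j] + lo[j])
            P[j] = 0.5
    return P, lo, hi
```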

34.6 EDAs Searching Trees

Estimation of Distribution Algorithms can not only be used for evolving bit strings or real
vectors, but are also applicable to tree-based search spaces, i. e., for Genetic Programming as
discussed in Chapter 31 and [1602, 2199].

34.6.1 PIPE: Probabilistic Incremental Program Evolution

Salustowicz and Schmidhuber [2380, 2381] define such an EDA for learning (program) trees:
PIPE: Probabilistic Incremental Program Evolution. Here, we provide this GP approach
as Algorithm 34.18. Let us assume a Genetic Programming system would construct trees
consisting of inner nodes, each belonging to one of the types in the function set N and leaf
nodes, each belonging to one of the nodes in the terminal set Σ.
The function set could, for instance, be N = {+, −, ∗, /, sin} and the terminal set could
be Σ = {X, R}, where X stands for an input variable of the expression to be evolved and
R is a real constant. Programs are n-ary trees where n is the number of arguments of the
function with the most parameters in N . In the example configuration, n would be two
since, for instance, arityOf (∗) = 2. Let V = N ∪ Σ be the set of all possible node types.
Salustowicz and Schmidhuber [2380, 2381] represent the probability distribution of pos-
sible trees with a so-called Probabilistic Prototype Tree (PPT). Each node N of such a tree
consists of a vector function N .P : V 7→ [0, 1] which assigns a probability N .P (i) to each
node type i such that ∑∀i∈V N .P (i) = 1 holds for all the tree nodes. Furthermore, it
also contains one value N .R ∈ [0, 1] which stands for the constant value that the termi-
nal R takes on if it was selected. Each inner node N of a prototype tree has n children
N .children[0] to N .children[n − 1] which, again, are prototype tree nodes. The models M
are the complete prototype trees or, more precisely, their roots (which, due to the children
structure N .children, hold the complete tree data).
The initial prototype tree consists only of a single node as created by the function
“newModelNodePIPE” in Algorithm 34.19. Salustowicz and Schmidhuber [2381] prescribe
a start probability PΣ for creating terminal symbols which they set to PΣ = 0.8 in their
experiments. All the |Σ| terminal symbols are given the same probability for creation and the
remaining share of the probability (1 − PΣ ) is evenly distributed amongst the |N | function
node types. The random constant N .R is set to a value uniformly distributed in [0, 1].
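The initialization of a prototype tree node (Algorithm 34.19) can be sketched as follows. We represent a node as a plain dict; this representation and all names are ours, not the book's notation.

```python
import random

def new_ppt_node(functions, terminals, p_terminal=0.8, rng=random):
    """Sketch of Algorithm 34.19: a fresh Probabilistic Prototype Tree
    node. Terminals share the probability p_terminal evenly; functions
    share the remaining 1 - p_terminal."""
    probs = {t: p_terminal / len(terminals) for t in terminals}
    probs.update({f: (1 - p_terminal) / len(functions) for f in functions})
    return {"P": probs,                  # probability per node type
            "R": rng.uniform(0.0, 1.0),  # value of the constant terminal R
            "children": []}              # child nodes, attached on demand
```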
A new population pop is created by sampling the prototype tree as prescribed in Algo-
rithm 34.20. The method uses a procedure “buildTreePIPE” defined in Algorithm 34.21
which recursively builds and attaches child nodes to a new genotype. As previously pointed
out, the model tree M initially only consists of a single node. In PIPE, model nodes are
dynamically attached when required. This is done by “buildTreePIPE”, which therefore
returns a tuple (N, t) consisting of the updated node N of the prototype tree and the new
node t which was inserted in the genotype. In the top-level called instance of the function,
N corresponds to the updated model M and t is the genotype p.g of a new individual.
“buildTreePIPE” works as follows: First, a node type i is chosen from both, the function
set N and the terminal set Σ according to the probability distribution defined in the input
prototype tree node N ′ which, in the top-level instance of the method, corresponds to the

Algorithm 34.18: x ←− PIPE(f, ps)


Input: f: the objective function
Input: ps: the population size
Input: [implicit] terminationCriterion: the termination criterion
Input: [implicit] N : the set of function types
Input: [implicit] Σ: the set of terminal types
Input: [implicit] PΣ : the probability for choosing a terminal symbol
Input: [implicit] PE : the probability of performing elitist learning
Data: pop: the population
Data: mate: the mating pool (of size len(mate) = 1)
Data: M: the stochastic model
Data: p⋆ : the best individual
Data: continue: a variable holding the termination criterion
Output: x: the best candidate solution
1 begin
2 M ←− newModelNodePIPE(N, Σ, PΣ )
3 p⋆ ←− null
4 continue ←− true
5 while continue do
6 if (p⋆ ≠ null) ∧ (randomUni[0, 1) < PE ) then
7 M ←− buildModelPIPE({p⋆ } , M, p⋆ )
8 else
9 pop ←− sampleModelPIPE(M, ps)
10 pop ←− performGPM(pop, gpm)
11 pop ←− computeObjectives(pop, f)
12 mate ←− truncationSelection(pop, f, 1)
13 if p⋆ = null then p⋆ ←− extractBest(mate, f) [0]
14 else p⋆ ←− extractBest(mate ∪ {p⋆ } , f) [0]
15 if terminationCriterion then continue ←− false
16 else M ←− buildModelPIPE(mate, M, p⋆ )
17 return p⋆ .x

Algorithm 34.19: N ←− newModelNodePIPE(N, Σ, PΣ )


Input: N : the set of function types
Input: Σ: the set of terminal types
Input: PΣ : the probability for choosing a terminal symbol
Data: i: a node type variable
Output: N : a new model node with its initial configuration
1 begin
2 N .R ←− randomUni[0, 1)
3 foreach i ∈ Σ do

4 N .P (i) ←− PΣ / |Σ|
5 foreach i ∈ N do
6 N .P (i) ←− (1 − PΣ ) / |N |
7 N .children ←− ∅

Algorithm 34.20: pop ←− sampleModelPIPE(M, ps)


Input: M: the model
Input: ps: the target population size
Data: j: counter variable
Data: p: an individual record and its corresponding genotype p.g
Output: pop: the new population
1 begin
2 pop ←− ()
3 for j ←− 1 up to ps do
4 (M, p.g) ←− buildTreePIPE(M)
5 pop ←− addItem(pop, p)

current model. The selected type i is then instantiated as new node t to be inserted into
the genotype.
If i happens to be the type for real constants (which is kind of a problem-specific node
type and may be left away in implementations for other domains), the constant value t.val
is either set to a random value uniformly distributed in [0, 1) or to the constant value N .R
held in the prototype node, depending on whether the probability of choosing the type R
surpassed a given threshold PR . Salustowicz and Schmidhuber [2381] set PR to 0.3 in their
experiments.
If t is a function node, all necessary child nodes are created by recursively calling the
same procedure. In the case that for a required child node no corresponding node exists in
the prototype tree, this node is again created via Algorithm 34.19.
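The roulette-wheel choice of a node type (lines 3–9 of Algorithm 34.21) can be sketched as follows; the dict-based representation and all names are ours.

```python
import random

def choose_node_type(probs, rng=random):
    """Sketch of lines 3-9 of Algorithm 34.21: roulette-wheel selection
    of a node type according to the distribution stored in a prototype
    node. `probs` maps node types to probabilities summing to one."""
    p = rng.uniform(0.0, 1.0)
    types = list(probs)
    j = 0
    # walk the wheel until the accumulated probability covers p;
    # the guard on j protects against floating-point round-off
    while p > probs[types[j]] and j < len(types) - 1:
        p -= probs[types[j]]
        j += 1
    return types[j]
```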
Updating the model in PIPE means to adapt it towards the direction of the genotype of
the best individual p⋆ found so far. This is done by first computing the recursively defined
probability P ( p⋆ .g| M) that this individual would occur given the current model M, as noted
in Equation 34.3.
P (t |N ) = N .P (typeOf (t)) ∗ { ∏i=0…numChildren(t)−1 P (t.children[i] |N .children[i] ) if numChildren(t) > 0; 1 otherwise } (34.3)

Pgoal (p.g |M, x ) = P (p.g |M ) + (1 − P (p.g |M )) ∗ λ ∗ (ε + f(x)) / (ε + f(gpm(p.g))) (34.4)

Based on a learning rate λ, the target probability Pgoal ( p⋆ .g| M, x) that this tree should
receive after the model was updated is derived. Here, the objective value of the best
candidate solution x discovered so far is also taken into consideration. The relation of the
fitness of best individual p⋆ of the current generation and this elitist fitness is weighted by a
factor ε. In their experiments, Salustowicz and Schmidhuber [2381] set λ = 0.01 and ε = 1.
In Algorithm 34.22, we show how a probability-increasing function “increasePropPIPE”
(Algorithm 34.23) for the best tree of the current generation is repeatedly invoked in order
to shift its probability towards the target probability Pgoal . This function again works
recursively. It also sets the constant values stored in the prototype nodes to the constant values
used in the best tree and ensures that all probabilities remain normalized. A constant c (set
to 0.1 in the original work) is used to trade-off between fast increase of the tree probabilities
and the precision in reaching Pgoal .
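The two equations can be sketched in Python, reusing the dict-based tree representation from the earlier sketches (names and representation are ours). `tree_probability` implements Equation 34.3; `target_probability` implements Equation 34.4, assuming minimization as in the original PIPE setting.

```python
def tree_probability(t, N):
    """Equation 34.3 (sketch): probability that model node N generates
    tree node t, recursing over the children; the empty product for a
    leaf equals one."""
    p = N["P"][t["type"]]
    for child, n_child in zip(t["children"], N["children"]):
        p *= tree_probability(child, n_child)
    return p

def target_probability(p_tree, f_elitist, f_best, lam=0.01, eps=1.0):
    """Equation 34.4 (sketch): the probability the best tree should
    reach after the model update."""
    return p_tree + (1 - p_tree) * lam * (eps + f_elitist) / (eps + f_best)
```

The model update then simply calls `increasePropPIPE`-style adaptation in a loop until `tree_probability` exceeds the value returned by `target_probability`, as in Algorithm 34.22.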
The prototype trees are pruned with Algorithm 34.24 after learning which removes paths
which are not reached when expanding a prototype node with a probability of at least PP
set to 0.999 999 in [2381].
Algorithm 34.18 performs either elitist learning towards the best individual p⋆ discovered
during all generations with probability PE (0.2 in the original paper) or normal learning.
Also, the original work [2381] features a mutation operator which changes the probabilities

Algorithm 34.21: (N, t) ←− buildTreePIPE(N ′ )


Input: N ′ : the tree node
Input: [implicit] V : the available symbols
Input: [implicit] PR : the threshold probability value to re-use the random constant
Input: [implicit] i: the node type chosen
Input: [implicit] N : the set of function types
Input: [implicit] Σ: the set terminal types
Input: [implicit] PΣ : the probability for choosing a terminal symbol
Data: p: the probability value
Data: j: a counter variable
Data: N ′ : the current sub node
Output: N : the updated model
Output: t′ : the new sub tree
Output: t: the newly created tree node
1 begin
2 N ←− N ′
3 p ←− randomUni[0, 1)
4 V ←− setToList(N ∪ Σ)
5 j ←− 0
6 while p > N .P (V [j]) do
7 p ←− p − N .P (V [j])
8 j ←− j + 1
9 i ←− V [j]
10 t ←− instantiate(i)
11 if i = R then
12 if N .P (V [j]) > PR then t.val ←− N .R
13 else t.val ←− randomUni[0, 1)
14 for j ←− 0 up to arityOf (i) − 1 do
15 if numChildren(N ) ≤ j then
16 N ′ ←− newModelNodePIPE(N, Σ, PΣ )
17 N ←− addChild(N, N ′ )
18 else
19 N ′ ←− N .children[j]
20 (N ′ , t′ ) ←− buildTreePIPE(N ′ )
21 N .children[j] ←− N ′
22 t ←− addChild(t, t′ )
23 return (N, t)

Algorithm 34.22: M ←− buildModelPIPE(mate, M′ , x)


Input: mate: the individuals selected for mating
Input: M′ : the old model
Input: x: the best individual found so far
Data: Pgoal : a variable holding the target probability
Output: M: the new model
1 begin
2 p ←− mate[0]
3 Pgoal ←− Pgoal (p.g |M′ , x )
4 M ←− M′
5 while P (p.g |M ) < Pgoal do
6 M ←− increasePropPIPE(M, p.g)
7 return pruneTreePIPE(M)

Algorithm 34.23: N ←− increasePropPIPE(N ′ , t)


Input: N ′ : the prototype tree node
Input: t: the corresponding tree node
Data: ic , i: tree node types
Data: j: a counter variable
Data: s: the probability sum before normalization
Output: N : the updated prototype tree node
1 begin
2 N ←− N ′
3 ic ←− typeOf (t)
4 if ic = R then
5 N .R ←− t.val
6 N .P (ic ) ←− N .P (ic ) + c ∗ λ ∗ (1 − N .P (ic ))
7 s ←− ∑∀i∈N ∪Σ N .P (i)
8 foreach i ∈ N ∪ Σ do
9 N .P (i) ←− N .P (i) / s
10 for j ←− 0 up to numChildren(t) − 1 do
11 N .children[j] ←− increasePropPIPE(N .children[j], t.children[j])
12 return N

Algorithm 34.24: N ←− pruneTreePIPE(N ′ )


Input: N ′ : the original node
Output: N : the pruned node
1 begin
2 N ←− N ′
3 P̂ ←− max {N .P (i) : i ∈ N ∪ Σ}
4 if P̂ ≥ PP then
5 a ←− max {arityOf (i) : i ∈ N ∪ Σ ∧ N .P (i) = P̂}
6 for j ←− a up to numChildren(N ) − 1 do
7 N ←− deleteChild(N, a)
8 for j ←− 0 up to a − 1 do
9 N .children[j] ←− pruneTreePIPE(N .children[j])
10 return N

in the prototype trees with a given probability (and normalizes them again).

34.7 Difficulties

34.7.1 Diversity

Estimation of Distribution Algorithms evolve a model of how a perfect solution should


probably look. The central idea is that such a model is defined by several parameters
which will converge and the span of possible values which can be sampled from it will become
smaller and smaller over time. As already stated, if everything goes well, at one point the
model should have become so specific that only the single, global optimum can be sampled
from it. If everything goes well, that is. Now not all problems are uni-modal, i. e., have a
single optimum. In fact, as we already pointed out in Chapter 13 and Section 3.2, many
problems are multi-modal, with many local or global optima.
Then, the danger arises that the EDA too quickly loses diversity [2843] and converges
towards a local optimum. Evolutionary Algorithms can preserve diversity in their population
by allowing candidate solutions with worse objective values to survive if they are located in
different areas of the search space. In EDAs, this is not possible. If we use such individuals
during the model updating process, the only thing we achieve is that the area where new
points are sampled stays very large, the search process does not converge, and many objective
function evaluations are wasted on candidate solutions with little chance of being good.

34.7.1.1 Model Mutation

A very simple way for increasing the diversity in EDAs is mutating the model itself from
time to time. By randomly modifying its parameters, new individuals will be sampled a bit
more distant from the configurations which are currently assumed to perform best. If the
newly explored parts of the search space are inferior, chances are that the EDA finds back
the previous parameters. If they are better, on the other hand, the EDA managed to escape
a local optimum. This approach is relatively old and has already been applied in PBIL [193]
and PIPE [2380, 2381].
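One way to realize such a model mutation for a PBIL-style probability vector is sketched below. The operator and its parameter values are illustrative choices of ours, not the exact scheme of [193]: each entry is occasionally nudged towards a random target, which slightly widens the sampled region.

```python
import random

def mutate_model_pbil(P, p_mut=0.02, shift=0.05, rng=random):
    """Sketch of model mutation: with probability p_mut, shift a
    probability entry a little towards a random value so that sampling
    stays a bit more explorative; the result is clipped to [0, 1]."""
    out = []
    for p in P:
        if rng.random() < p_mut:
            p = (1 - shift) * p + shift * rng.random()
        out.append(min(1.0, max(0.0, p)))
    return out
```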
34.7.1.2 Sampling Mutation

But diverse individuals can also be created without permanently mutating the model. In-
stead, a model mutation may be applied which only affects the sampling of one single
genotype and is reverted thereafter, i. e., which has very temporary characteristics. This
way, the danger that a good model may get lost is circumvented.
Such an operator is the Sampling Mutation by Wallin and Ryan [2843, 2844] who tem-
porarily invert the probability distribution describing one variable of the model. For a
PBIL-style model vector M, for instance, we could temporarily set its ith element to
1 − M[i] during the sampling of one individual.
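A sketch of this idea for a bit-string model follows; the function name is ours. The inversion exists only inside the loop body, so the model vector P itself is never modified.

```python
import random

def sample_with_sampling_mutation(P, i, rng=random):
    """Sketch of the sampling mutation: invert the probability of locus i
    only while drawing this one genotype; the model stays untouched."""
    bits = []
    for j, p in enumerate(P):
        q = 1.0 - p if j == i else p   # temporary inversion at locus i
        bits.append(1 if rng.random() < q else 0)
    return bits
```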

34.7.1.3 Clustering in the Objective Space

In their Evolutionary Bayesian Classifier-based Optimization Algorithms (EBCOAs),


Miquélez et al. [1910] chose yet another approach. Before the model building phase, the
population pop is divided into a fixed number |K| of classes. This is achieved by dividing the
population into equal-sized groups of individuals, from the fittest to the least fit one [1910].
Each individual p ∈ pop is assigned a label k(p). Some of the classes Ki ∈ K : Ki = {p ∈ pop : k(p) = i}
are discarded to facilitate learning and only |C| ≤ |K| classes are selected. Using only a
(strict) subset of classes for learning can be justified because it emphasizes the differences
between the classes and reduces noise [1910, 2843].
The class variables (which indirectly represent the fitness) tagged to the individuals are
treated as part of the individuals and incorporated into the model building process as well.
In other words, the models M also try to estimate the distribution of the classes (additionally
to the parameters of the genotypes). In [1910] and [2843], Bayesian networks are learned as
models.
For the sampling process, Miquélez et al. [1910] suggest sampling a number m(i) of
individuals for a class i in the next generation proportional to the sum of the objective
values that the individuals achieved in the last generation:
m(i) ∝ ∑p∈pop:k(p)=i f(p.x) (34.5)
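A small helper can turn the proportionality of Equation 34.5 into concrete sample counts. The rounding scheme (give the leftover samples to the class with the largest fitness sum) is our choice, and we assume maximization here.

```python
def class_sample_counts(fitness_sums, total):
    """Sketch of Equation 34.5: allot `total` samples of the next
    generation to the classes proportionally to their summed objective
    values (assuming maximization; rounding scheme is ours)."""
    s = sum(fitness_sums)
    counts = [int(total * f / s) for f in fitness_sums]
    # hand any rounding leftover to the fittest class
    counts[fitness_sums.index(max(fitness_sums))] += total - sum(counts)
    return counts
```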

Wallin and Ryan [2843] further combine such fitness clustering with sampling mutation.
Pelikan et al. [2157], Sastry and Goldberg [2398] extend the method for multiple objective
functions. They define a multi-objective version of their hBOA algorithm as a combination
of hBOA, NSGA-II, and clustering in the objective space.
Lu and Yao [1784] introduce a basic multi-model EDA scheme for real-valued optimiza-
tion. This scheme utilizes clustering in a way similar to our work presented here. However,
they focus mainly on numerical optimization whereas we introduce a general framework
together with one possible instantiation for numerical optimization. Platel et al. [2187] pro-
pose a quantum-inspired evolutionary algorithm which is an EDA for bit-string base search
spaces. This algorithm utilizes a structured population, similar to the use of demes in EAs,
which makes it a multi-model EDA. Gallagher et al. [1009] extend the PBIL algorithm to
real-valued optimization by using an Adaptive Gaussian mixture model density estimator.
This approach can deal with multi-modal problems but is also more complex than multi-
model algorithms which utilize clustering.
Niemczyk and Weise [2038] finally introduce a general multi-model EDA which unites
research on Estimation of Distribution Algorithms and Evolutionary Algorithms: They sug-
gest dividing the population into clusters and deriving one model for each of them. These models
may then be recombined and sampled. In the case that one model is created for each se-
lected individual and additional models result from recombination, the algorithm behaves
exactly like a standard EA. If, on the other hand, only one single cluster is created and no
model recombination is performed, the algorithm equals a traditional EDA. Niemczyk and
Weise [2038] point out that this scheme applies to arbitrary search and problem spaces and
clustering methods. They instantiate it for a continuous search space, but it could as well
be used for bit-string based or tree-based genomes.

34.7.1.4 Clustering in the Search Space

Symmetric optimization problems can be especially challenging for EDAs since optima with
similar objective values but located in different areas of the search space may hinder them
from converging. Pelikan and Goldberg [2147, 2148] first applied clustering the search space
of general EAs and EDAs in order to alleviate this problem. In a Genetic Algorithm, they
would cluster the mating pool (i. e., the selected parents) according to the genotypes. Each
cluster will then be processed separately and yield its own offspring. The recombination
operator only uses parents which stem from the same cluster. The number of new genotypes
per cluster is proportional to its average fitness or its size. This method hence facilitates
niching. In their work, Pelikan and Goldberg [2147, 2148] use k-means clustering [1503, 1808]
both in a simple Genetic Algorithm and in UMDA, and found that it helps to achieve better
results in the presence of symmetry.
Bosman and Thierens [363, 364] also apply k-means clustering, as well as the randomized
leader algorithm and expectation maximization, after selection. For each of the clusters,
again a separate model is created. These are combined to an overall distribution via a
weighted sum approach, where the weight of a cluster is proportional to its size.
Lu and Yao [1784] apply the heuristic competitive learning algorithm RPCL to cluster the
populations in their Clustering and Estimation of Gaussian Network Algorithms (CEGNA)
for continuous optimization problems. RPCL has the advantage that it can select the number
of clusters automatically [2999]. In CEGNA, after selection, the mating pool is clustered
into k clusters where both, the clusters and k, are determined by RPCL. For each of the
clusters, an independent model Mi : i ∈ 1..k is derived. In the sampling step, Ni new
genotypes are sampled for each cluster i where Ni is proportional to the average fitness of
the individuals in the cluster. The experimental results [1784] show that the new algorithm
performs well on multi-modal problems.
Other EDA approaches utilizing clustering have been provided in [42, 486, 2069]. Cluster-
ing has also been incorporated in EAs as, for example, niching method, by various research
groups [1081, 1241].
In [2357], Ruzgas et al. show that a multi-modal problem can be reduced to a set of
uni-modal ones and the quality of estimation procedures increases significantly if applying
estimators first to each cluster and later combining the results. Although their work was
not related to EDAs, it is another account for the successful integration of clustering into
distribution estimation.

34.7.2 Epistasis
Tasks T34

99. Implement the Royal Road function given in Section 50.2.4 on page 558 for genotypes
of lengths 64, 512, and 2048. Apply both, the UMDA (see Section 34.4.1) and a simple
Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments
15 times. Describe your findings.
[25 points]

100. Implement the OneMax function given in Section 50.2.5.1 on page 562 for genotypes
of lengths 64, 512, and 2048. Apply both, the UMDA (see Section 34.4.1) and a simple
Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments
15 times. Describe your findings.
[25 points]

101. Implement the BinInt function given in Section 50.2.5.2 on page 562 for genotypes
of lengths 16, 32, and 48. Apply both, the UMDA (see Section 34.4.1) and a simple
Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments
15 times. Describe your findings.
[25 points]

102. Implement the NK Fitness Landscape given in Section 50.2.1 on page 553 for N = 12
and 16 and K = 4 and K = 6 and random neighbors. Apply both, the UMDA (see
Section 34.4.1) and a simple Genetic Algorithm (see Section 29.3) to the four resulting
functions. Repeat the experiments 15 times. Describe your findings.
[50 points]

103. Implement any of the real-valued Estimation of Distribution Algorithms defined in Sec-
tion 34.5 on page 442 and test them on suitable benchmark problems (such as those
given in Task 64 on page 238).
[50 points]
Chapter 35
Learning Classifier Systems

35.1 Introduction

In the late 1970s, Holland, the father of Genetic Algorithms, also invented the concept of
classifier systems (CS) [1251, 1254, 1256]. These systems are a special case of production
systems [696, 697] and consist of four major parts:

1. a set of interacting production rules, called classifiers,


2. a performance algorithm which directs the actions of the system in the environment,
3. a learning algorithm which keeps track on the success of each classifier and distributes
rewards, and
4. a Genetic Algorithm which modifies the set of classifiers so that variants of good classi-
fiers persist and new, potentially better ones are created in an efficient manner [1255].

Over time, classifier systems have undergone some name changes. In 1986, reinforcement
learning was added to the approach and the name changed to Learning Classifier Systems1
(LCS) [1220, 2534]. Learning Classifier Systems are sometimes subsumed under a machine
learning paradigm called evolutionary reinforcement learning (ERL) [1220] or Evolutionary
Algorithms for Reinforcement Learning (EARLs) [1951].

35.2 Families of Learning Classifier Systems

The exact definition of Learning Classifier Systems [1257, 1588, 1677, 2534] still seems con-
tentious and there exist many different implementations. There are, for example, versions
without message list where the action part of the rules does not encode messages but direct
output signals. The importance of the role of Genetic Algorithms in conjunction with the
reinforcement learning component is also not quite clear. There are scientists who emphasize
more the role of the learning components [2956] and others who tend to grant the Genetic
Algorithms a higher weight [655, 1256].
The families of Learning Classifier Systems have been listed and discussed by Brownlee
[430] elaborately. Here we will just summarize their differences in short. De Jong [712, 713]
and Grefenstette [1127] divide Learning Classifier Systems into two main types, depending
on how the Genetic Algorithm acts: The Pitt approach originated at the University of
Pittsburgh with the LS-1 system developed by Smith [2536]. It was then developed further
and applied by Bacardit i Peñarroya [158, 159], De Jong and Spears [716], De Jong et al.
1 http://en.wikipedia.org/wiki/Learning_classifier_system [accessed 2007-07-03]
[717], Spears and De Jong [2564], and Bacardit i Peñarroya and Krasnogor [161]. Pittsburgh-
style Learning Classifier Systems work on a population of separate classifier systems, which
are combined and reproduced by the Genetic Algorithm.
The original idea of Holland and Reitman [1256] were Michigan-style LCSs, where the
whole population itself is considered as classifier system. They focus on selecting the best
rules in this rule set [706, 1074, 1751]. Wilson [2952, 2953] developed two subtypes of
Michigan-style LCS:

1. ZCS systems have no message list and use fitness sharing [440, 442, 591, 2952] in a
Q-learning-like reinforcement learning approach called QBB.

2. ZCS systems were later somewhat superseded by XCS systems in which the Bucket
Brigade Algorithm has fully been replaced by Q-learning. Furthermore, the credit
assignment is based on the accuracy (usefulness) of the classifiers. The Genetic Algo-
rithm is applied to sub-populations containing only classifiers which apply to the same
situations. [1587, 1681, 2953–2955]
35.3 General Information on Learning Classifier Systems

35.3.1 Applications and Examples

Table 35.1: Applications and Examples of Learning Classifier Systems.


Area References
Combinatorial Problems [2158]
Control [1074]
Data Mining [53, 158, 160, 162, 279, 633, 675, 676, 706, 717, 904, 994,
1527, 1587, 2095, 2461, 2674, 2894, 2902]
Databases [162]
Digital Technology [706, 1587, 2954]
Distributed Algorithms and Systems [1218, 2461, 2767]
Economics and Finances [440]
Engineering [53, 706, 1218, 1587, 2158, 2767, 2954]
Function Optimization [713]
Graph Theory [1218, 2461, 2767]
Healthcare [1527, 2674]
Mathematics [591]
Multi-Agent Systems [440]
Prediction [1587]
Security [53, 1218, 2461]
Wireless Communication [2158]

35.3.2 Books
1. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
2. Foundations of Learning Classifier Systems [443]
3. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
4. Anticipatory Learning Classifier Systems [459]
5. Applications of Learning Classifier Systems [441]

35.3.3 Conferences and Workshops

Table 35.2: Conferences and Workshops on Learning Classifier Systems.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICML: International Conference on Machine Learning
See Table 10.2 on page 129.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
IWLCS: International Workshop on Learning Classifier Systems
History: 2007: London, UK, see [1589, 2579]
2006/07: Seattle, WA, USA, see [2441]
2005/06: Washington, DC, USA, see [27]
2004/06: Seattle, WA, USA, see [2280]
2003/07: Chicago, IL, USA, see [2578]
2002/09: Granada, Spain, see [1680]
2001/07: San Francisco, CA, USA, see [1679]
2000/09: Paris, France, see [1678]
1999/07: Orlando, FL, USA, see [2094]
1992: Houston, TX, USA, see [2002, 2534]
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
ECML: European Conference on Machine Learning
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EWSL: European Working Session on Learning
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
ICMLC: International Conference on Machine Learning and Cybernetics
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
TAAI: Conference on Technologies and Applications of Artificial Intelligence
See Table 10.2 on page 134.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
ECML PKDD: European Conference on Machine Learning and European Conference on Principles
and Practice of Knowledge Discovery in Databases
See Table 10.2 on page 135.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

35.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Chapter 36
Memetic and Hybrid Algorithms

Starting with the research contributed by Bosworth et al. [368] in 1972, Bethke [288] in 1980,
and Brady [382] in 1985, there is a long tradition of hybridizing Evolutionary Algorithms
with other optimization methods such as Hill Climbing, Simulated Annealing, or Tabu
Search [2507]. A comprehensive review on this topic has been provided by Grosan and
Abraham [1139, 1141].
Hybridization approaches are not limited to GAs as “basis” algorithm. In ?? for example,
we have already listed a wide variety of approaches to combine the Downhill Simplex with
population-based optimization methods spanning from Genetic Algorithms to Differential
Evolution and Particle Swarm Optimization. Many of these approaches can be subsumed
under the umbrella term Memetic Algorithms1 (MAs).

36.1 Memetic Algorithms

The principle of Genetic Algorithms is to simulate natural evolution in order to solve
optimization problems. In nature, the features of phenotypes are encoded in the genes
of their genotypes. The term Memetic Algorithm was coined by Moscato [1961, 1962] as
an allegory for simulating social evolution where behavioral patterns are passed on in memes2.
A meme has been defined by Dawkins [700] as a “unit of imitation in cultural transmission”.
With the simulation of social cooperation and competition, optimization problems can be
solved efficiently.
Moscato [1961] uses the example of the Chinese martial art Kung-Fu, which has developed
over many generations of masters teaching their students certain sequences of movements,
the so-called forms. Each form is composed of a set of elementary aggressive and defensive
patterns. These undecomposable sub-movements can be interpreted as memes. New memes
are rarely introduced and only few amongst the masters of the art have the ability to do
so. Being far from random, such modifications involve a lot of problem-specific knowledge
and almost always result in improvements. Furthermore, only the best of the population
of Kung-Fu practitioners can become masters and teach disciples. Kung-Fu fighters can
determine their fitness by evaluating their performance or by competing with each other in
tournaments.
Based on this analogy, Moscato [1961] creates an example algorithm for solving the Trav-
eling Salesman Problem [118, 1694] by using the three principles of

1. intelligent improvement based on local search with problem-specific operators,


2. competition in form of a selection procedure, and
3. cooperation in form of a problem-specific crossover operator.
1 http://en.wikipedia.org/wiki/Memetic_algorithm [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Meme [accessed 2008-09-10]
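The three principles above can be combined into a generic Memetic Algorithm skeleton. The following minimal Python sketch applies them to a toy integer-vector problem; all operator choices here (a greedy ±1 local search, uniform crossover, truncation selection) are illustrative assumptions of ours, not Moscato's concrete Traveling Salesman operators.

```python
import random

def memetic_minimize(f, dim=5, pop_size=10, generations=30, seed=1):
    """Minimal Memetic Algorithm on integer vectors (toy setup)."""
    rnd = random.Random(seed)

    def local_search(x):                      # principle 1: improvement
        improved, x = True, list(x)
        while improved:
            improved = False
            for i in range(dim):
                for step in (-1, 1):
                    y = list(x)
                    y[i] += step
                    if f(y) < f(x):
                        x, improved = y, True
        return x

    def crossover(a, b):                      # principle 3: cooperation
        return [rnd.choice(p) for p in zip(a, b)]

    pop = [[rnd.randint(-10, 10) for _ in range(dim)]
           for _ in range(pop_size)]
    pop = [local_search(x) for x in pop]      # refine every individual
    for _ in range(generations):
        offspring = [crossover(rnd.choice(pop), rnd.choice(pop))
                     for _ in range(pop_size)]
        offspring = [local_search(x) for x in offspring]
        # principle 2: competition via truncation selection
        pop = sorted(pop + offspring, key=f)[:pop_size]
    return pop[0]

best = memetic_minimize(lambda x: sum((v - 3) ** 2 for v in x))
print(best)  # converges to [3, 3, 3, 3, 3]
```

The characteristic trait is that every individual is refined by local search before it competes, so the evolutionary operators effectively only ever see local optima.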
Interesting research work directly focusing on Memetic Algorithms has been contributed,
amongst many others, by Moscato et al. in [329, 451, 1260, 1961, 1963, 2055], Radcliffe and
Surry [2245], Digalakis and Margaritis [794, 795], and Krasnogor and Smith [1618]. Further
early works on Genetic Algorithm hybridization are Ackley [19] (1987), Goldberg [1075]
(1989), Gorges-Schleuter [1107] (1989), Mühlenbein [1969, 1970, 1972] (1989), Brown et al.
[429] (1989) and Davis [695] (1991).
The definition of Memetic Algorithms given by Moscato [1961] is relatively general and
encompasses many different approaches. Even though Memetic Algorithms are a metaphor
based on social evolution, there also exist two theories in natural evolution which fit
the same idea of hybridizing Evolutionary Algorithms with other search methods [2112].
Lamarckism and the Baldwin effect are both concerned with phenotypic changes in living
creatures and their influence on the fitness and adaptation of species.

36.2 Lamarckian Evolution

Lamarckian evolution3 is a model of evolution that was accepted by science before the discovery
of genetics. Superseding the early ideas of Erasmus Darwin [686] (the grandfather of Charles
Darwin), Lamarck [722] laid the foundations of the theory later known as Lamarckism with
his book Philosophie Zoologique, published in 1809. Lamarckism has two basic principles:

1. Individuals can attain new, beneficial characteristics during their lifetime and lose
unused abilities.
2. They pass their traits (including those acquired during their life) on to their offspring.

While the first concept is obviously correct, the second one contradicts the state of knowledge
in modern biology. This does not decrease the merits of de Monet, Chevalier de Lamarck,
who provided an early idea about how evolution could proceed. In his era, things like genes
and DNA simply had not been discovered yet. Weismann [2918] was the first to argue
that the heredity information of higher organisms is separated from the somatic cells and,
thus, could not be influenced by them [2739]. In nature, no phenotype-genotype mapping
can take place.
Lamarckian evolution can be “included” in Evolutionary Algorithms by performing a local search
starting with each new individual resulting from applications of the reproduction operations.
This search can be thought of as training or learning and its results are coded back into the
genotypes g ∈ G [2933]. Therefore, this local optimization approach usually works directly
in the search space G. Here, algorithms such as greedy search, Hill Climbing, Simulated
Annealing, or Tabu Search can be utilized, but simply modifying the genotypes randomly
and remembering the best results is also possible.
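As a sketch of this write-back scheme, assume bit-string genotypes and a simple greedy bit-flip hill climber as the local search; the function name and the concrete search procedure are illustrative assumptions of ours.

```python
def lamarckian_refine(genotype, f):
    """Lamarckian step in an EA: a greedy bit-flip local search improves
    the individual, and the result is coded back into the genotype, so
    the acquired traits are inherited by its offspring."""
    improved = True
    while improved:
        improved = False
        for i in range(len(genotype)):
            before = f(genotype)
            genotype[i] ^= 1          # try flipping bit i in place
            if f(genotype) < before:  # keep strict improvements
                improved = True
            else:
                genotype[i] ^= 1      # revert the flip

g = [0, 1, 0, 0, 1, 0]
lamarckian_refine(g, lambda x: x.count(0))  # minimize the number of zeros
print(g)  # [1, 1, 1, 1, 1, 1]
```

The decisive detail is that the search mutates the genotype itself: whatever the local search finds is what gets passed on.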

36.3 Baldwin Effect

The Baldwin effect4 [2495, 2740, 2830, 2831], first proposed by Baldwin [191, 192], Morgan
[1943, 1944], and Osborn [2101] in 1896, is an evolution theory which remains controversial
until today [710, 2179]. Suzuki and Arita [2637] describe it as a “possible scenario of in-
teractions between evolution and learning caused by balances between benefit and cost of
learning” [2873]. Learning is a rather local phenomenon, normally involving only single
individuals, whereas evolution usually takes place in the global scale of a population. The
Baldwin effect combines both in two steps [2739]:
3 http://en.wikipedia.org/wiki/Lamarckism [accessed 2008-09-10]
4 http://en.wikipedia.org/wiki/Baldwin_effect [accessed 2008-09-10]
1. First, lifelong learning gives the individuals the chance to adapt to their environment
or even to change their phenotype. This phenotypic plasticity 5 may help the creatures
to increase their fitness and, hence, their probability to produce more offspring. Dif-
ferent from Lamarckian evolution, the abilities attained this way do not influence the
genotypes nor are inherited.

2. In the second phase, evolution step by step generates individuals which can learn
these abilities faster and easier and, finally, will have encoded them in their genome.
Genotypical traits then replace the learning (or phenotypic adaptation) process and
serve as an energy-saving shortcut to the beneficial traits. This process is called genetic
assimilation 6 [2829–2832].

Fig. 36.1.a: The influence of the learning capabilities of individuals on the fitness
landscape (f(gpm(g)) over g ∈ G, with and without learning), extension of the figure
in [1235, 1236].

Fig. 36.1.b: The positive and negative influence of the learning capabilities of
individuals on the fitness landscape (case 1: neutralized landscape, case 2: increased
gradient, case 3: decreased gradient, case 4: smoothed-out landscape), extension of
the figure in [2637].

Figure 36.1: The Baldwin effect.

Hinton and Nowlan [1235, 1236] performed early experiments on the Baldwin effect with
Genetic Algorithms [253, 1208]. They found that the evolutionary interaction with learning
smoothens the fitness landscape [1144] and illustrated this effect on the graphical example of
a needle-in-a-haystack problem. Mayley [1844] used experiments on Kauffman’s NK fitness
landscapes [1501] (see Section 50.2.1) to show that the Baldwin effect can also have a negative
influence. We use Figure 36.1 to sketch both effects that can occur when the fitness of an
5 http://en.wikipedia.org/wiki/Phenotypic_plasticity [accessed 2008-09-10]
6 http://en.wikipedia.org/wiki/Genetic_assimilation [accessed 2008-09-10]
individual is computed by performing a local search around it, returning the objective values
of the best candidate solution found (without preserving this optimized phenotype).
Whereas learning adds gradient information in regions of the search space which are
distant from local or global optima (case 2 in Fig. 36.1.b), it decreases the information in
their near proximity (called the hiding effect [1457, 1844], case 3 in Fig. 36.1.b). Similarly, while
using the Baldwin effect in the described way may smoothen out rugged fitness landscapes (case
4), it can also reduce weaker gradient information (case 1). Suzuki and Arita [2637] found
that the Baldwin effect decreases the evolution speed in their rugged experimental fitness
landscape, but also led to significantly better results in the long term.
One interpretation of these issues is that learning capabilities help individuals to survive
in adverse conditions since they may find good abilities by learning and phenotypic adap-
tation. On the other hand, it makes little difference whether an individual learns
certain abilities or whether it was already born with them when it can exercise them at the
same level of perfection. Thus, the selection pressure furthering the inclusion of good traits
in the heredity information decreases if a life form can learn or adapt its phenotypes.
Like Lamarckian evolution, the Baldwin effect can also be added to Evolutionary Al-
gorithms by performing a local search starting at each new offspring individual. Different
from Lamarckism, the abilities and characteristics attained by this process only influence
the objective values of an individual and are not coded back to the genotypes. Hence, it
plays no role whether the search takes place in the search space G or in the problem space
X. The best objective values f (p′.x) found in a search around individual p become its own
objective values, but the modified variant p′ of p actually scoring them is discarded [2933].
Nevertheless, the implementer will store these individuals somewhere else if they were
the best candidate solutions ever found. She must furthermore ensure that the user will be
provided with the correct objective values of the final set of candidate solutions resulting
from the optimization process (f (p.x), not f (p′.x)).
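A Baldwinian evaluation can use the same kind of greedy bit-flip search a Lamarckian scheme would, but only the resulting objective value is kept while the genotype stays untouched. The following Python sketch is illustrative; the function name and the concrete local search are our own assumptions.

```python
def baldwinian_fitness(genotype, f):
    """Baldwin effect in an EA: evaluate the individual by the best
    objective value a local search around it can reach, but discard the
    improved variant -- the genotype itself is not modified."""
    x = list(genotype)                # work on a copy: no write-back
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            before = f(x)
            x[i] ^= 1                 # try flipping bit i on the copy
            if f(x) < before:
                improved = True
            else:
                x[i] ^= 1             # revert the flip
    return f(x)                       # the "learned" objective value

g = [0, 1, 0, 0, 1, 0]
fit = baldwinian_fitness(g, lambda x: x.count(0))
print(fit, g)  # 0 [0, 1, 0, 0, 1, 0] -- genotype unchanged
```

Comparing this with a Lamarckian variant, the only structural difference is working on a copy and returning just the objective value: learning shapes selection, but nothing learned is inherited.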
36.4 General Information on Memetic Algorithms

36.4.1 Applications and Examples

Table 36.1: Applications and Examples of Memetic Algorithms.


Area References
Combinatorial Problems [451, 457, 619, 649, 1486, 1581, 1618, 1859, 1961, 2055, 2245,
2374, 2669, 3071]
Computer Graphics [2707]
Data Mining [1581]
Databases [3071]
Distributed Algorithms and Systems [769, 872, 1007, 1435, 2020, 2897]
Economics and Finances [1060]
Engineering [769, 1435, 1486, 2897]
Function Optimization [795, 1927]
Graph Theory [63, 329, 769, 872, 1007, 1435, 1486, 2020, 2237, 2374, 2897]
Healthcare [2021, 2022]
Logistics [451, 619, 1581, 1859, 1961, 2245, 2669]
Motor Control [489, 490]
Security [1486]
Software [1486, 2897]
Wireless Communication [63, 1486, 2237, 2374]

36.4.2 Books
1. Recent Advances in Memetic Algorithms [1205]
2. Hybrid Evolutionary Algorithms [1141]
3. Advances in Evolutionary Algorithms [1581]

36.4.3 Conferences and Workshops

Table 36.2: Conferences and Workshops on Memetic Algorithms.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
WOMA: International Workshop on Memetic Algorithms
History: 2009/03: Nashville, TN, USA, see [1394]

36.4.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
3. Memetic Computing published by Springer-Verlag GmbH
Chapter 37
Ant Colony Optimization

37.1 Introduction

Inspired by the research done by Deneubourg et al. [767, 768, 1111] on real ants and probably
by the simulation experiments by Stickland et al. [2613], Dorigo et al. [810] developed the
Ant Colony Optimization1 (ACO) Algorithm in 1996 for problems that can be reduced to
finding optimal paths in graphs [807, 811, 819, 1820, 1823]. Ant Colony Optimization is
based on the metaphor of ants seeking food. In order to do so, an ant will leave the anthill
and begin to wander into a random direction. While the little insect paces around, it lays
a trail of pheromone. Thus, after the ant has found some food, it can track its way back.
By doing so, it distributes another layer of pheromone on the path. An ant that senses
the pheromone will follow its trail with a certain probability. Each ant that finds the food
will excrete some pheromone on the path. Over time, the pheromone density of the path
will increase and more and more ants will follow it to the food and back. The higher the
pheromone density, the more likely an ant is to stay on a trail. However, the pheromones
evaporate after some time. Once all the food has been collected, they are no longer renewed and the
path will disappear after a while. Now, the ants will head to new, random locations.
This process of distributing and tracking pheromones is one form of stigmergy2 and was
first described by Grassé [1123]. Today, we subsume many different ways of communication
by modifying the environment under this term, which can be divided into two groups:
sematectonic and sign-based [2425]. According to Wilson [2948], we call modifications in the
environment due to a task-related action, which lead other entities involved in this task
to change their behavior, sematectonic stigmergy. If an ant drops a ball of mud somewhere,
this may cause other ants to place mud balls at the same location. Step by step, these
effects can cumulatively lead to the growth of complex structures. Sematectonic stigmergy
has been simulated on computer systems by, for instance, Théraulaz and Bonabeau [2693]
and with robotic systems by Werfel and Nagpal [2431, 2920, 2921].
The second form, sign-based stigmergy, is not directly task-related. It has been attained
evolutionarily by social insects, which use a wide range of pheromones and hormones for
communication. Computer simulations for sign-based stigmergy were first performed by
Stickland et al. [2613] in 1992.
The sign-based stigmergy is copied by Ant Colony Optimization [810], where optimiza-
tion problems are visualized as (directed) graphs. First, a set of ants performs randomized
walks through the graphs. Proportional to the goodness of the solutions denoted by the
paths, pheromones are laid out, i. e., the probability to walk into the direction of the paths
1 http://en.wikipedia.org/wiki/Ant_colony_optimization [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Stigmergy [accessed 2007-07-03]
is shifted. The ants run again through the graph, following the previously distributed
pheromone. However, they will not exactly follow these paths. Instead, they may deviate
from these routes by taking other turns at junctions, since their walk is still randomized.
The pheromones modify the probability distributions.
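A minimal, hedged sketch of this scheme for a shortest-path toy problem: ants walk the graph following pheromone-weighted probabilities, pheromone evaporates each round, and shorter paths receive a larger deposit. All names and parameter values below are illustrative assumptions, not a reference ACO implementation.

```python
import random

def aco_shortest_path(graph, source, target, ants=20, rounds=30,
                      evaporation=0.5, seed=3):
    """Toy Ant Colony Optimization: randomized walks biased by pheromone."""
    rnd = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone per edge
    best = None
    for _ in range(rounds):
        paths = []
        for _ in range(ants):
            node, path = source, [source]
            while node != target:
                nxt = graph[node]
                weights = [tau[(node, v)] for v in nxt]    # biased choice
                node = rnd.choices(nxt, weights=weights)[0]
                path.append(node)
            paths.append(path)
            if best is None or len(path) < len(best):
                best = path
        for e in tau:                                      # evaporation
            tau[e] *= evaporation
        for path in paths:                                 # deposit: shorter
            for e in zip(path, path[1:]):                  # paths get more
                tau[e] += 1.0 / len(path)
    return best

# Tiny acyclic graph with a short (a-b-d) and a long (a-c-e-d) route.
g = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "e": ["d"], "d": []}
best_path = aco_shortest_path(g, "a", "d")
print(best_path)  # ['a', 'b', 'd']
```

Evaporation is what lets the colony forget: without it, an early but suboptimal path could accumulate pheromone forever and trap all subsequent ants.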
The intention of Ant Colony Optimization is solving combinatorial problems and not
simulating ants realistically [810]. The ants in ACO thus differ from “real” ones in three major
aspects:

1. they have some memory,


2. they are not completely blind, and
3. they are simulated in an environment where time is discrete.

It is interesting to note that even real-vector optimization problems can be mapped to graph
problems, as introduced by Korošec and Šilc [1580]. Thanks to such ideas, the applicability
of Ant Colony Optimization is greatly increased.
37.2 General Information on Ant Colony Optimization

37.2.1 Applications and Examples

Table 37.1: Applications and Examples of Ant Colony Optimization.


Area References
Combinatorial Problems [308, 354, 371, 444, 445, 802, 807, 808, 810, 812, 820, 1011–
1013, 1145, 1148, 1149, 1486, 1730, 1822–1825, 1871, 2212,
2811, 3101]
Cooperation and Teamwork [2424]
Data Mining [2674]
Databases [371, 1145, 1822, 1824, 1825]
Distributed Algorithms and Sys- [328, 353, 354, 786, 811, 843, 871, 963, 1006, 1292, 1633,
tems 1733, 1800, 1863, 2387, 2425, 2493, 2694]
E-Learning [1158, 2763]
Economics and Finances [1060, 1825]
Engineering [786, 787, 871, 1486, 1717, 1733, 2083, 2425, 2694]
Graph Theory [63, 328, 353, 354, 786, 843, 963, 1486, 1733, 1800, 1863,
2387, 2425, 2493, 2694]
Healthcare [2674]
Logistics [354, 444, 445, 802, 808, 810, 812, 820, 1011, 1013, 1148,
1149, 1730, 1823, 2212, 2811]
Multi-Agent Systems [967]
Security [540, 871, 1486]
Software [1486]
Telecommunication [820]
Wireless Communication [63, 787, 1486, 1717]

37.2.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
2. New Optimization Techniques in Engineering [2083]
3. Swarm Intelligent Systems [2013]
4. Swarm Intelligence: From Natural to Artificial Systems [353]
5. Swarm Intelligence: Collective, Adaptive [1524]
6. Swarm Intelligence – Introduction and Applications [341]
7. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
8. Advances in Metaheuristics for Hard Optimization [2485]
9. Ant Colony Optimization [809]
10. Fundamentals of Computational Swarm Intelligence [881]

37.2.3 Conferences and Workshops

Table 37.2: Conferences and Workshops on Ant Colony Optimization.

GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
IAAI: Conference on Innovative Applications of Artificial Intelligence
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
ANTS: International Workshop on Ant Colony Optimization and Swarm Intelligence
History: 2010/09: Brussels, Belgium, see [434]
2008/09: Brussels, Belgium, see [2580]
2006/09: Brussels, Belgium, see [821]
2004/09: Brussels, Belgium, see [818]
2002/09: Brussels, Belgium, see [816]
2000/09: Brussels, Belgium, see [814, 815]
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
IEA/AIE: International Conference on Industrial and Engineering Applications of Artificial Intel-
ligence and Expert Systems
See Table 10.2 on page 133.
ISA: International Workshop on Intelligent Systems and Applications
See Table 10.2 on page 133.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
See Table 29.2 on page 355.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
ICSI: International Conference on Swarm Intelligence
See Table 39.2 on page 485.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
TAAI: Conference on Technologies and Applications of Artificial Intelligence
See Table 10.2 on page 134.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SIS: IEEE Swarm Intelligence Symposium
See Table 39.2 on page 486.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

37.2.4 Journals
1. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Chapter 38
River Formation Dynamics

River Formation Dynamics (RFD) is a heuristic optimization method recently developed by
Basalo et al. [229, 230]. It is inspired by the way water forms rivers by eroding the ground
and depositing sediments. In its structure, it is very close to Ant Colony Optimization. In
Ant Colony Optimization, paths through a graph are searched by attaching attributes (the
pheromones) to its edges. The pheromones are laid out by ants (+) and vaporize as time
goes by (-). In River Formation Dynamics, the heights above sea level are the attributes
of the vertices of the graph. On this landscape, rain begins to fall. Forced by gravity, the
drops flow downhill and try to reach the sea. The altitudes of the points in the graph are
decreased by erosion (-) when water flows over them and increased by sedimentation (+)
if drops end up in a dead end, vaporize, and leave the material which they have eroded
somewhere else behind. Sedimentation punishes inefficient paths: drops reaching a node
surrounded only by nodes of higher altitude will increase its height more and more until it
reaches the level of its neighbors and is no longer a dead end. While flowing over the
map, the probability that a drop takes a certain edge depends on the gradient of the down slope.
This gradient, in turn, depends on the difference in altitude of the nodes it connects and
their distance (i. e., the cost function). Initially, all nodes have the same altitude except for
the destination node which is a hole. New drops are inserted in the origin node and flow
over the landscape, reinforce promising paths, and either reach the destination or vaporize
in dead ends.
Unlike in ACO, cycles cannot occur in RFD because the water always flows downhill.
Of course, rivers in nature may fork and reunite, too. But, unlike ACO, River Formation
Dynamics implicitly creates direction information in its resulting graphs. If this information
is considered to be part of the solution, then cycles are impossible. If it is stripped away,
cycles may occur.
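The drop dynamics described above can be sketched in a few lines of Python. This is a toy illustration, not the original formulation of [229, 230]: the adjacency-dictionary graph representation, the erosion and sediment constants, and the function name are all assumptions of this sketch.

```python
import random

def rfd_step(graph, altitude, origin, dest, erosion=0.1, sediment=1.0):
    """Send one drop from origin; erode its path or deposit sediment.

    graph: dict mapping node -> list of neighbor nodes
    altitude: dict mapping node -> current altitude (dest is the "hole")
    Returns the path if the drop reached dest, else None (dead end).
    """
    path, node = [origin], origin
    while node != dest:
        # candidate moves: neighbors strictly below the current node,
        # weighted by the gradient (difference in altitude)
        downhill = [(n, altitude[node] - altitude[n])
                    for n in graph[node] if altitude[n] < altitude[node]]
        if not downhill:                      # dead end: the drop vaporizes
            altitude[node] += sediment        # and sedimentation raises the node
            return None
        nodes, grads = zip(*downhill)
        node = random.choices(nodes, weights=grads)[0]
        path.append(node)
    for n in path[:-1]:                       # erosion reinforces the found path
        altitude[n] = max(altitude[n] - erosion, 0.0)
    return path
```

Because each move goes strictly downhill, a single drop can never cycle, mirroring the argument made above; repeated calls let sediment build a gradient from the origin toward the destination hole.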
38.1 General Information on River Formation Dynamics

38.1.1 Applications and Examples

Table 38.1: Applications and Examples of River Formation Dynamics.


Area References
Combinatorial Problems [229, 230]
Graph Theory [230]
Logistics [229]

38.1.2 Books
1. Ant Colony Optimization [809]
2. Fundamentals of Computational Swarm Intelligence [881]
3. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
4. Swarm Intelligence – Introduction and Applications [341]
5. Swarm Intelligence: Collective, Adaptive [1524]
6. Swarm Intelligence: From Natural to Artificial Systems [353]
7. Swarm Intelligent Systems [2013]
8. Nature-Inspired Algorithms for Optimisation [562]
9. New Optimization Techniques in Engineering [2083]

38.1.3 Conferences and Workshops

Table 38.2: Conferences and Workshops on River Formation Dynamics.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

38.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 39
Particle Swarm Optimization

39.1 Introduction

Particle Swarm Optimization1 (PSO), developed by Eberhart and Kennedy [853, 855, 1523]
in 1995, is a form of swarm intelligence in which the behavior of a biological social system
like a flock of birds or a school of fish [2129] is simulated. When a swarm looks for food, its
individuals will spread in the environment and move around independently. Each individual
has a degree of freedom or randomness in its movements which enables it to find food
accumulations. So, sooner or later, one of them will find something digestible and, being
social, announces this to its neighbors. These can then approach the source of food, too.
Particle Swarm Optimization has been discussed, improved, and refined by many researchers
such as Cai et al. [468], Gao and Duan [1018], Venter and Sobieszczanski-Sobieski [2799], and
Gao and Ren [1019]. Comparisons with other evolutionary approaches have been provided
by Eberhart and Shi [854] and Angeline [103].
With Particle Swarm Optimization, a swarm of particles (individuals) in an n-dimensional
search space G is simulated, where each particle p has a position p.g ∈ G ⊆ Rn and a
velocity p.v ∈ Rn . The position p.g corresponds to the genotypes, and, in most cases,
also to the candidate solutions, i. e., p.x = p.g, since most often the problem space X
is also the Rn and X = G. However, this is not necessarily the case and generally, we
can introduce any form of genotype-phenotype mapping in Particle Swarm Optimization.
The velocity vector p.v of an individual p determines in which direction the search will
continue and whether it has an explorative (high velocity) or an exploitative (low velocity) character.
It represents endogenous parameters which we discussed under the subject of Evolution
Strategy in Chapter 30. Whereas the endogenous information in Evolution Strategy is used
for an undirected mutation, the velocity in Particle Swarm Optimization is used to perform
a directed modification of the genotypes (particles’ positions).

39.2 The Particle Swarm Optimization Algorithm

In the initialization phase of Particle Swarm Optimization, the positions and velocities of all
individuals are randomly initialized. In each step, first the velocity of a particle is updated
and then its position. Therefore, each particle p has another endogenous parameter: a
memory holding its best position best(p) ∈ G.
1 http://en.wikipedia.org/wiki/Particle_swarm_optimization [accessed 2007-07-03]
39.2.1 Communication with Neighbors – Social Interaction

In order to realize the social component of natural swarms, a particle furthermore has
a set of topological neighbors N(p). This set could be pre-defined for each particle at
startup. Alternatively, it could contain adjacent particles within a specific perimeter, i. e.,
all individuals which are no further away from p.g than a given distance δ according to a
certain distance measure2 “dist”. Using the Euclidean distance measure “dist eucl” specified
in ?? on page ?? we get:

∀ p, q ∈ pop : q ∈ N(p) ⇔ dist eucl(p.g, q.g) ≤ δ (39.1)

Each particle can communicate with its neighbors, so the best position found so far by any
element in N(p) is known to all of them as best(N(p)). The best position ever visited by
any individual in the population (which the optimization algorithm always keeps track of)
is referred to as best(pop).
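For illustration, Equation 39.1 can be implemented directly. The dictionary-based particle representation and the function names below are assumptions of this sketch, not part of the formal definition.

```python
import math

def dist_eucl(a, b):
    """Euclidean distance between two position vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def neighbors(p, pop, delta):
    """N(p): all particles within distance delta of p's position.

    Note that p itself satisfies the condition (distance 0 <= delta).
    """
    return [q for q in pop if dist_eucl(p["g"], q["g"]) <= delta]
```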

39.2.2 Particle Update

One of the basic methods to conduct Particle Swarm Optimization is to use either best(N(p))
or best(pop) for adjusting the velocity of the particle p. If the globally best position is
utilized, the algorithm will converge quickly but maybe prematurely, i. e., may not find the
global optimum. If, on the other hand, neighborhood communication and best(N(p)) is
used, the convergence speed drops but the global optimum is more likely to be found.
The search operation q = psoUpdate(p, pop) applied in Particle Swarm Optimization
creates a new particle q to replace an existing one (p) by incorporating its genotype p.g and
its velocity p.v. We distinguish local updating (Equation 39.3) and global updating (Equa-
tion 39.2), which additionally uses the data from the whole population pop. “psoUpdate”
thus fulfills one of these two equations and Equation 39.4, which shows how the ith compo-
nents of the corresponding vectors are computed.

q.vi = p.vi + (randomUni[0, ci ) ∗ (best(p) .gi − p.gi ))
            + (randomUni[0, di ) ∗ (best(pop) .gi − p.gi ))      (39.2)
q.vi = p.vi + (randomUni[0, ci ) ∗ (best(p) .gi − p.gi ))
            + (randomUni[0, di ) ∗ (best(N(p)) .gi − p.gi ))     (39.3)
q.gi = p.gi + p.vi                                               (39.4)

The learning rate vectors c and d have a strong influence on the convergence speed of Par-
ticle Swarm Optimization. The search space G (and thus, also the values of p.g) is normally
confined by minimum and maximum boundaries. For the absolute values of the velocity,
normally maximum thresholds also exist. Thus, real implementations of “psoUpdate” have
to check and refine their results before the utility of the candidate solutions is evaluated.

39.2.3 Basic Procedure

Algorithm 39.1 illustrates Particle Swarm Optimization in its basic form, using the update
procedure described above. Like Hill Climbing, this algorithm can easily be generalized for
multi-objective optimization and for returning sets of optimal solutions (compare with Sec-
tion 26.3 on page 230).

2 See ?? on page ?? for more information on distance measures.



Algorithm 39.1: x⋆ ←− PSO(f, ps)

Input: f: the function to optimize
Input: ps: the population size
Data: pop: the particle population
Data: i: a counter variable
Output: x⋆ : the best value found
1 begin
2   pop ←− createPop(ps)
3   while ¬terminationCriterion do
4     for i ←− 0 up to len(pop) − 1 do
5       pop[i] ←− psoUpdate(pop[i], pop)
6   return best(pop) .x
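A compact, runnable sketch of Algorithm 39.1 with global updating (Equations 39.2 and 39.4) could look as follows. The learning rates c and d, the velocity threshold v_max, and the boundary handling are illustrative choices rather than prescribed values.

```python
import random

def pso(f, n, ps, steps, lo=-5.0, hi=5.0, c=1.5, d=1.5, v_max=1.0):
    """Minimize f over [lo, hi]^n with ps particles (global updating)."""
    pop = [{"g": [random.uniform(lo, hi) for _ in range(n)],
            "v": [0.0] * n} for _ in range(ps)]
    for p in pop:
        p["best"] = list(p["g"])                     # best(p): particle memory
    g_best = list(min((p["best"] for p in pop), key=f))   # best(pop)
    for _ in range(steps):
        for p in pop:
            for i in range(n):
                # Equation 39.2: pull toward best(p) and best(pop)
                p["v"][i] += (random.uniform(0, c) * (p["best"][i] - p["g"][i])
                              + random.uniform(0, d) * (g_best[i] - p["g"][i]))
                p["v"][i] = max(-v_max, min(v_max, p["v"][i]))  # clamp velocity
                # Equation 39.4: position update, confined to [lo, hi]
                p["g"][i] = max(lo, min(hi, p["g"][i] + p["v"][i]))
            if f(p["g"]) < f(p["best"]):
                p["best"] = list(p["g"])
                if f(p["best"]) < f(g_best):
                    g_best = list(p["best"])
    return g_best
```

For instance, `pso(lambda x: sum(v * v for v in x), 3, 20, 150)` minimizes the three-dimensional sphere function.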

39.3 General Information on Particle Swarm Optimization

39.3.1 Applications and Examples

Table 39.1: Applications and Examples of Particle Swarm Optimization.


Area References
Combinatorial Problems [527, 1581, 1780, 1781, 1835, 2212, 2633]
Data Mining [1581, 3018]
Databases [1780, 1781, 2633]
Distributed Algorithms and Sys- [156, 202, 404, 721, 1298, 1835, 2440, 2628, 3061]
tems
E-Learning [721, 1298, 2628]
Economics and Finances [1060]
Engineering [156, 853, 1835, 2799, 2861]
Function Optimization [103, 853, 855, 1018, 1523, 1727, 2799]
Graph Theory [202, 1835, 2440]
Logistics [527, 1581, 2212]
Motor Control [490]
Security [404]
Wireless Communication [68, 155]

39.3.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
2. Swarm Intelligent Systems [2013]
3. Swarm Intelligence: Collective, Adaptive [1524]
4. Parallel Evolutionary Computations [2015]
5. Swarm Intelligence – Introduction and Applications [341]
6. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
7. Particle Swarm Optimization [589]
8. Fundamentals of Computational Swarm Intelligence [881]
9. Advances in Evolutionary Algorithms [1581]

39.3.3 Conferences and Workshops

Table 39.2: Conferences and Workshops on Particle Swarm Optimization.



GECCO: Genetic and Evolutionary Computation Conference


See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
IAAI: Conference on Innovative Applications of Artificial Intelligence
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
ANTS: International Workshop on Ant Colony Optimization and Swarm Intelligence
See Table 37.2 on page 474.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
IEA/AIE: International Conference on Industrial and Engineering Applications of Artificial Intel-
ligence and Expert Systems
See Table 10.2 on page 133.
ISA: International Workshop on Intelligent Systems and Applications
See Table 10.2 on page 133.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
See Table 29.2 on page 355.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
ICSI: International Conference on Swarm Intelligence
History: 2010/06: Běijı̄ng, China, see [2665, 2666]
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
TAAI: Conference on Technologies and Applications of Artificial Intelligence
See Table 10.2 on page 134.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SIS: IEEE Swarm Intelligence Symposium
History: 2003/04: Indianapolis, IN, USA, see [1387]
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.

39.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Evolutionary Computation published by MIT Press
3. Proceedings of the National Academy of Science of the United States of America (PNAS)
published by National Academy of Sciences
4. Journal of Theoretical Biology published by Academic Press Professional, Inc. and Elsevier
Science Publishers B.V.
5. Natural Computing: An International Journal published by Kluwer Academic Publishers and
Springer Netherlands
6. Biosystems published by Elsevier Science Ireland Ltd. and North-Holland Scientific Publish-
ers Ltd.
7. Australian Journal of Biological Science (AJBS)
8. BioData Mining published by BioMed Central Ltd.
9. International Journal of Computational Intelligence Research (IJCIR) published by Research
India Publications
10. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) pub-
lished by Idea Group Publishing (Idea Group Inc., IGI Global)
11. IEEE Computational Intelligence Magazine published by IEEE Computational Intelligence
Society
12. Trends in Ecology and Evolution (TREE) published by Cell Press and Elsevier Science Pub-
lishers B.V.
13. Acta Biotheoretica published by Springer Netherlands
14. Bulletin of the American Iris Society published by American Iris Society (AIS)
15. Behavioral Ecology and Sociobiology published by Springer-Verlag
16. FEBS Letters published by Elsevier Science Publishers B.V. and Federation of European
Biochemical Societies (FEBS)
17. Naturwissenschaften – The Science of Nature published by Springer-Verlag
18. International Journal of Biometric and Bioinformatics (IJBB) published by Computer Sci-
ence Journals Sdn BhD
19. Journal of Experimental Biology (JEB) published by The Company of Biologists Ltd. (COB)
20. Bulletin of Mathematical Biology published by Springer New York
21. Computational Intelligence published by Wiley Periodicals, Inc.
22. Journal of Heredity published by American Genetic Association
23. Nature Genetics published by Nature Publishing Group (NPG)
24. The American Naturalist published by University of Chicago Press for The American Society
of Naturalists
25. Philosophical Transactions of the Royal Society B: Biological Sciences published by the
Royal Society
26. Astrobiology published by Mary Ann Liebert, Inc.
27. Current Biology published by Elsevier Science Publishers B.V.
28. Quarterly Review of Biology (QRB) published by Stony Brook University
29. Proceedings of the Royal Society B: Biological Sciences published by Royal Society Publishing
30. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Chapter 40
Tabu Search

Tabu Search1 (TS) was developed by Glover [1065, 1069] in the mid-1980s based on
foundations from the 1970s [1064]. Some of the basic ideas were introduced by Hansen
[1189] and further contributions in terms of formalizing this method have been made by
Glover [1066, 1067], and de Werra and Hertz [729] (as summarized by Hertz et al. [1228] in
their tutorial on Tabu Search) as well as by Battiti and Tecchiolli [232] and Cvijović and
Klinowski [665].
According to Hertz et al. [1228], Tabu Search can be considered as a metaheuristic variant
of neighborhood search as introduced in Definition D46.3 on page 512. The word “tabu”2
stems from Polynesia and describes a sacred place or object. Things that are tabu must be
left alone and may not be visited or touched.

1 http://en.wikipedia.org/wiki/Tabu_search [accessed 2007-07-03]


2 http://en.wikipedia.org/wiki/Tapu_%28Polynesian_culture%29 [accessed 2008-03-27]
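Although the formal treatment follows later, the core idea — neighborhood search in which recently performed moves are temporarily tabu — can be sketched on a toy problem. Everything below (maximizing over bit strings, the tenure value, all names) is an illustrative assumption of this sketch, not the formal definition.

```python
import random

def tabu_search(f, n, steps, tenure=3):
    """Maximize f over bit strings of length n with a simple tabu list."""
    x = [random.randint(0, 1) for _ in range(n)]
    best = list(x)
    tabu = {}                                     # bit index -> expiry step
    for t in range(steps):
        candidates = [i for i in range(n) if tabu.get(i, -1) < t]
        # best admissible move: flip the non-tabu bit with the highest gain
        i = max(candidates, key=lambda j: f(x[:j] + [1 - x[j]] + x[j + 1:]))
        x[i] = 1 - x[i]
        tabu[i] = t + tenure                      # the reverse move becomes tabu
        if f(x) > f(best):
            best = list(x)
    return best
```

The tabu list forces the search to leave local optima (the best admissible move may be worsening), while the separately stored `best` retains the best solution ever visited.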
40.1 General Information on Tabu Search

40.1.1 Applications and Examples

Table 40.1: Applications and Examples of Tabu Search.


Area References
Chemistry [2680]
Combinatorial Problems [83, 190, 232, 308, 403, 457, 640, 1066, 1069, 1189, 1228,
1486, 1737, 1858, 2075, 2105, 2423, 2491, 2648, 2649]
Databases [640]
Distributed Algorithms and Sys- [2173, 3002]
tems
Economics and Finances [1060]
Engineering [640, 1486, 2173, 3002, 3075]
Function Optimization [232, 665, 2484]
Graph Theory [63, 640, 1228, 1486, 2173, 2175, 2235, 2236, 2597, 3002]
Logistics [83, 190, 403, 1069, 1858, 2075, 2491, 2648, 2649]
Security [1486]
Software [1486]
Telecommunication [1067]
Theorem Proofing and Automatic [640]
Verification
Wireless Communication [63, 154, 640, 1486, 2175, 2235, 2236, 2597]

40.1.2 Books
1. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
2. How to Solve It: Modern Heuristics [1887]

40.1.3 Conferences and Workshops

Table 40.2: Conferences and Workshops on Tabu Search.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

40.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 41
Extremal Optimization

41.1 Introduction

Different from Simulated Annealing, which is an optimization method based on the metaphor
of thermal equilibria from physics, the Extremal Optimization1 (EO) algorithm developed
by Boettcher and Percus [344–348] is inspired by ideas of non-equilibrium physics. Especially
important in this context is the property of self-organized criticality2 (SOC) [186, 1439].

41.1.1 Self-Organized Criticality

The theory of SOC states that large interactive systems evolve to a state where a change
in a single one of their elements may lead to avalanches or domino effects that can reach
any other element in the system. The probability distribution of the number of elements n
involved in these avalanches is proportional to n^−τ (τ > 0). Hence, changes involving only
few elements are most likely, but even avalanches involving the whole system are possible
with a non-zero probability. [728]

41.1.2 The Bak-Sneppen model of Evolution

The Bak-Sneppen model3 of evolution [185] exhibits self-organized criticality and was the
inspiration for Extremal Optimization. Rather than focusing on single species, this model
considers a whole ecosystem and the co-evolution of many different species.
In the model, each species is represented only by a real fitness value between 0 and 1. In
each iteration, the species with the lowest fitness is mutated. The model does not include any
representation for genomes; instead, mutation changes the fitness of the species directly by
replacing it with a random value uniformly distributed in [0, 1]. In nature, this corresponds
to the process where one species has developed further or was replaced by another one.
Under this rule alone, mutation (i. e., development) would become less likely the more the fitness
increases. Fitness can also be viewed as a barrier: New characteristics must be at least as
fit as the current ones to proliferate. In an ecosystem however, no species lives alone but
depends on others, on its successors and predecessors in the food chain, for instance. Bak
and Sneppen [185] consider this by arranging the species in a one dimensional line. If one
species is mutated, the fitness values of its successor and predecessor in that line are also
1 http://en.wikipedia.org/wiki/Extremal_optimization [accessed 2008-08-24]
2 http://en.wikipedia.org/wiki/Self-organized_criticality [accessed 2008-08-23]
3 http://en.wikipedia.org/wiki/Bak-Sneppen_model [accessed 2010-12-06]
set to random values. In nature, the development of one species can foster the development
of others and this way, even highly fit species may become able to (re-)adapt.
After a certain number of iterations, the species in simulations based on this model reach
a highly-correlated state of self-organized criticality where all of them have a fitness above
a certain threshold. This state is very similar to the idea of punctuated equilibria from
evolutionary biology and groups of species enter a state of passivity lasting multiple cycles.
Sooner or later, this state is interrupted because mutations occurring nearby undermine
their fitness. The resulting fluctuations may propagate like avalanches through the whole
ecosystem. Thus, such non-equilibrium systems exhibit a state of high adaptability without
limiting the scale of change towards better states [344].
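A minimal simulation of the Bak-Sneppen model can be written as follows; the ring arrangement via index wrap-around and the parameter names are assumptions of this sketch.

```python
import random

def bak_sneppen(n, steps):
    """Species on a ring; repeatedly replace the least-fit species and
    its two neighbors with fresh uniform random fitness values."""
    fitness = [random.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)   # weakest species mutates
        for j in (i - 1, i, (i + 1) % n):            # avalanche hits neighbors
            fitness[j] = random.random()
    return fitness
```

After sufficiently many iterations, most fitness values settle above a critical threshold, so the average fitness lies well above the 0.5 expected for the initial uniform values — the critical state described above.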

41.2 Extremal Optimization and Generalized Extremal Optimization
Boettcher and Percus [345] want to utilize this phenomenology to obtain near-optimal solu-
tions for optimization problems. In Extremal Optimization, the search spaces G are always
spaces of structured tuples g = (g [1], g [2], . . . , g [n]). Extremal Optimization works on a single
individual and requires some means to determine the contributions of its genes to the overall
fitness.
Extremal Optimization was originally applied to a graph bi-partitioning problem [345],
where the n points of a graph had to be divided into two groups, each of size n/2. The objective
was to minimize the number of edges connecting the two groups. Search and problem space
can be considered as identical, and a solution candidate x = gpm(g) = g consisted of n
genes, each of which stands for one point of the graph and denotes the Boolean decision
to which set it belongs. Analogously to the Bak-Sneppen model, each such gene g [i] had its
own fitness contribution λ(g [i]): the ratio of its outgoing edges connected to nodes from the
same set in relation to its total edge number. The higher this value, the better, but notice
that f(x = g) ≠ Σ_{i=1}^{n} λ(g [i]), since f(g) corresponds to the number of edges crossing the cut.
In general, the Extremal Optimization algorithm proceeds as follows:
1. Create an initial individual p with a random genotype p.g and set the currently best
known solution candidate x⋆ to its phenotype: x⋆ = p.x.
2. Sort all genes p.g [i] of p.g in a list in ascending order according to their fitness contri-
bution λ(p.g [i]).
3. Then, the gene p.g [i] with the lowest fitness contribution is selected from this list
and modified randomly, leading to a new individual p and a new solution candidate
x = p.x = gpm(p.g).
4. If p.x is better than x⋆ , i. e., p.x ≺ x⋆ , set x⋆ = p.x.
5. If the termination criterion has not yet been met, continue at Point 2.
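The five-step loop above can be sketched in Python. The book states its algorithms in pseudocode, so this rendering is ours: the names `contrib` (the fitness contribution λ), `mutate_gene`, and the assumption of minimization are our choices, and both functions must be supplied by the user for the concrete problem.

```python
def extremal_optimization(init, contrib, mutate_gene, f, steps=1000):
    """Basic EO sketch: always mutate the gene with the worst
    fitness contribution, keeping the best solution seen so far."""
    g = list(init)                   # genotype g = (g[1], ..., g[n])
    best, best_f = list(g), f(g)     # x* and its objective value
    for _ in range(steps):
        # rank genes by fitness contribution; pick the worst one
        worst = min(range(len(g)), key=lambda i: contrib(g, i))
        g[worst] = mutate_gene(g[worst])   # randomly modify weakest gene
        fg = f(g)
        if fg < best_f:                    # minimization: smaller is better
            best, best_f = list(g), fg
    return best, best_f
```

For the graph bi-partitioning example, `contrib(g, i)` would return the fraction of node i's edges that stay within its own set.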
Instead of always picking the weakest part of g, Boettcher and Percus [346] selected the
gene(s) to be modified randomly in order to prevent the method from getting stuck in local
optima. In their work, the probability of a gene at list index j for being drawn is proportional
to j −τ . This variation was called τ -EO and showed superior performance compared to the
simple Extremal Optimization. In the graph partitioning problem on which Boettcher and
Percus [346] have worked, two genes from different sets needed to be drawn this way in each
step, since always two nodes had to be swapped in order to keep the size of the sub-graphs
constant. Values of τ in 1.3 . . . 1.6 have been reported to produce good results [346].
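The rank-proportional selection of τ-EO can be sketched as follows. The function name is ours, and the default τ = 1.45 is simply a value from the reported range 1.3 . . . 1.6, not one prescribed by the text:

```python
import random

def tau_eo_pick(n, tau=1.45):
    """Pick a list index j in 1..n with probability proportional to
    j**(-tau). Index 1 is the gene with the worst fitness
    contribution; larger tau concentrates the choice on it."""
    weights = [j ** (-tau) for j in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights)[0]
```

With τ → ∞ this degenerates to plain EO (always pick the worst gene); with τ = 0 it becomes uniform random selection.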
The major problem a user is confronted with in Extremal Optimization is how to de-
termine the fitness contributions λ(p.g [i]) of the elements p.g [i] of the genotypes p.g of the
solution candidates p.x. Boettcher and Percus [347] point out themselves that the “drawback
to EO is that a general definition of fitness for individual variables may prove ambiguous or
even impossible” [728]. de Sousa and Ramos [725, 727, 728] therefore propose an extension
to EO, called Generalized Extremal Optimization (GEO), for fixed-length binary
genomes G = Bⁿ. For each gene (bit) p.g[i] in the element p.g of the search space currently
examined, the following procedure is performed:

1. Create a copy g ′ of p.g.

2. Toggle bit i in g ′ .

3. Set λ(p.g [i]) to −f(gpm(g ′ )) for maximization and to f(gpm(g ′ )) in case of minimiza-
tion.4

By doing so, λ(p.g[i]) becomes a measure for how well adapted the gene is. If f is subject to
maximization, high values of f(gpm(g′)) (corresponding to low λ(p.g[i])) indicate that gene i
has a low fitness and should be mutated. For minimization, low values of f(gpm(g′))
indicate that mutating gene i would yield a high improvement in the objective value.

4 In the original work of de Sousa et al. [728], f(x⋆ ) is subtracted from this value. Since we rank the genes,

this has basically no influence and is omitted here.
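Steps 1–3 above can be rendered as a short Python sketch. The names are ours, and the simplification from footnote 4 (not subtracting f(x⋆), since only the ranking matters) is applied:

```python
def geo_contributions(g, f, gpm=lambda g: g, maximize=False):
    """Compute lambda(g[i]) for every bit of a binary genotype g:
    for each bit i, evaluate a copy of g with bit i toggled and use
    its objective value (negated for maximization) as contribution."""
    lam = []
    for i in range(len(g)):
        gp = list(g)
        gp[i] = 1 - gp[i]       # step 2: toggle bit i in the copy
        y = f(gpm(gp))          # evaluate the mutated copy
        lam.append(-y if maximize else y)
    return lam
```

For a minimization problem, the gene with the lowest contribution is exactly the one whose flip improves the objective value the most.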


41.3 General Information on Extremal Optimization

41.3.1 Applications and Examples

Table 41.1: Applications and Examples of Extremal Optimization.


Area References
Combinatorial Problems [344–348, 1835, 2036, 2895]
Distributed Algorithms and Sys- [708, 1835]
tems
Engineering [547, 548, 726–728, 1835]
Function Optimization [725, 727, 1773]
Graph Theory [344–348, 1835]
Logistics [345, 727]

41.3.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]

41.3.3 Conferences and Workshops

Table 41.2: Conferences and Workshops on Extremal Optimization.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

41.3.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 42
GRASPs
42.1 General Information on GRASPs

42.1.1 Applications and Examples

Table 42.1: Applications and Examples of GRASPs.


Area References
Combinatorial Problems [2105]

42.1.2 Conferences and Workshops

Table 42.2: Conferences and Workshops on GRASPs.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

42.1.3 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 43
Downhill Simplex (Nelder and Mead)

[1864]

43.1 Introduction

The Downhill Simplex1 (or Nelder-Mead method or amoeba algorithm2 ) published by Nelder
and Mead [2017] in 1965 is a single-objective optimization approach for searching the space
of n-dimensional real vectors (G ⊆ Rn ) [1655, 2073]. Historically, it is closely related to the
simplex extension by Spendley et al. [2572] to the Evolutionary Operation method mentioned
in Section 28.1.3 on page 264 [1719]. Since it only uses the values of the objective functions
without any derivative information (explicit or implicit), it falls into the general class of
direct search methods [2724, 2984], as most of the optimization approaches discussed in this
book do.
Downhill Simplex optimization uses n + 1 points in the Rn . These points form a poly-
tope3 , a generalization of a polygon, in the n-dimensional space – a line segment in R1 , a
triangle in R2 , a tetrahedron in R3 , and so on. Nondegenerated simplexes, i. e., those where
the set of edges adjacent to any vertex forms a basis in the Rn , have one important feature:
The result of replacing a vertex with its reflection through the opposite face is again, a
nondegenerated simplex (see ??). The goal of Downhill Simplex optimization is to replace
the best vertex of the simplex with an even better one or to ascertain that it is a candidate
for the global optimum [1719]. Therefore, its other points are constantly flipped around in
an intelligent manner as we will outline in ??.
Like Hill Climbing approaches, the Downhill Simplex may not converge to the global
minimum and can get stuck at local optima [1655, 1854, 2712]. Random restarts (as in Hill
Climbing With Random Restarts discussed in Section 26.5 on page 232) can be helpful here.
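A compact sketch of the method may look as follows. The rendering is ours: the reflection, expansion, contraction and shrink coefficients are the conventional defaults (α = 1, γ = 2, ρ = 0.5, σ = 0.5), not values prescribed by this text, and a fixed iteration budget stands in for a proper convergence test:

```python
def nelder_mead(f, x0, step=0.5, iters=200,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Downhill Simplex sketch: keep n+1 vertices in R^n and
    replace the worst vertex via reflection, expansion, contraction,
    or shrink the whole simplex toward the best vertex."""
    n = len(x0)
    simplex = [list(x0)]             # initial simplex: x0 plus one
    for i in range(n):               # perturbed point per dimension
        v = list(x0)
        v[i] += step
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # centroid of all vertices except the worst one
        c = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [c[i] + alpha * (c[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):        # try to expand further outward
            exp = [c[i] + gamma * (refl[i] - c[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl       # accept the plain reflection
        else:                        # contract toward the simplex
            cont = [c[i] + rho * (worst[i] - c[i]) for i in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                    # shrink everything toward best
                simplex = [best] + [
                    [best[i] + sigma * (v[i] - best[i]) for i in range(n)]
                    for v in simplex[1:]]
    return min(simplex, key=f)
```

On a smooth unimodal function such as the sphere function this converges quickly; on multimodal functions it is subject to the local-optimum problem discussed above, which is why random restarts are often combined with it.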

1 http://en.wikipedia.org/wiki/Nelder-Mead_method [accessed 2008-06-14]


2 In the book Numerical Recipes in C++ by Vetterling et al. [2808], this optimization method is called
“amoeba algorithm”.
3 http://en.wikipedia.org/wiki/Polytope [accessed 2008-06-14]
43.2 General Information on Downhill Simplex

43.2.1 Applications and Examples

Table 43.1: Applications and Examples of Downhill Simplex.


Area References
Chemistry [214, 1573]
Function Optimization [1017, 1655, 1854, 2017, 2073, 2712]

43.2.2 Conferences and Workshops

Table 43.2: Conferences and Workshops on Downhill Simplex.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

43.2.3 Journals
1. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
2. Discrete Optimization published by Elsevier Science Publishers B.V.
3. Engineering Optimization published by Informa plc and Taylor and Francis LLC
4. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
5. Journal of Global Optimization published by Springer Netherlands
6. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
7. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
8. Journal of the Operational Research Society (JORS) published by Palgrave Macmillan Ltd.
9. Optimization Methods and Software published by Taylor and Francis LLC
10. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
11. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
Chapter 44
Random Optimization
44.1 General Information on Random Optimization

44.1.1 Applications and Examples

44.1.2 Conferences and Workshops

Table 44.1: Conferences and Workshops on Random Optimization.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

44.1.3 Journals
1. Journal of Heuristics published by Springer Netherlands
Part IV

Non-Metaheuristic Optimization Algorithms


Chapter 45
Introduction

One of the most fundamental principles in our world is the search for optimal states. It begins
in the microcosm in physics, where atom bonds1 minimize the energy of electrons [2137].
When molecules combine to solid bodies during the process of freezing, they assume crystal
structures which come close to energy-optimal configurations. Such a natural optimization
process, of course, is not driven by any higher intention but the result from the laws of
physics.
The same goes for the biological principle of survival of the fittest [2571] which, together
with the biological evolution [684], leads to better adaptation of the species to their envi-
ronment. Here, a local optimum is a well-adapted species that dominates its niche. We, the
Homo sapiens, have reached this level and so have ants, bears, cockroaches, lions, and many
other creatures.
As long as humankind exists, we strive for perfection in many areas. We want to reach
a maximum degree of happiness with the least amount of effort. In our economy, profit and
sales must be maximized and costs should be as low as possible. Therefore, optimization
is one of the oldest sciences which even extends into daily life [2023]. Here, we also can
observe the phenomenon of trade-offs between different criteria which are also common in
many other problem domains: One person may choose to work hard with the goal of securing
stability and pursuing wealth for her future life, accepting very little spare time in its current
stage. Another person can, deliberately, choose a lax life full of joy in the current phase
but rather insecure in the long term. Similarly, most optimization problems in real-world
applications involve multiple, usually conflicting, objectives.
Furthermore, many optimization problems involve certain constraints. The guy with the
relaxed lifestyle, for instance, regardless of how lazy he might become, cannot avoid dragging
himself to the supermarket from time to time in order to acquire food. The eager beaver, on
the other hand, will not be able to keep up with 18 hours of work for seven days per week in
the long run without damaging her health considerably and thus, rendering the goal of an
enjoyable future unattainable. Likewise, the parameters of the solutions of an optimization
problem may be constrained as well.
Every task which has the goal of approaching certain configurations considered as optimal
in the context of pre-defined criteria can be viewed as an optimization problem. If something
is important, general, and abstract enough, there is always a mathematical discipline dealing
with it. Global Optimization2 [609, 2126, 2181, 2714] is the branch of applied mathematics
and numerical analysis that focuses on, well, optimization. Closely related and overlapping
areas are mathematical economics 3 [559] and operations research 4 [580, 1232].
In this book, we will first discuss the things which constitute an optimization problem
and which are the basic components of each optimization process (in the remainder of this
1 http://en.wikipedia.org/wiki/Chemical_bond [accessed 2007-07-12]
2 http://en.wikipedia.org/wiki/Global_optimization [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Mathematical_economics [accessed 2009-06-16]
4 http://en.wikipedia.org/wiki/Operations_research [accessed 2009-06-19]
chapter). In Part II, we take a look on all the possible aspects which can make an optimiza-
tion task difficult. Understanding them will allow you to anticipate possible problems and
to prevent their occurrence from the start. In the remainder of the first part of this book,
we discuss a variety of different optimization methods. Part V then gives many examples
of applications for which optimization algorithms can be used. Part VI is a collection of
definitions and knowledge which can be useful to understand the terms and formulas used
in the rest of the book. It is provided in an effort to make the book stand-alone and to allow
students to understand it even if they have little mathematical or theoretical background.
Chapter 46
State Space Search

46.1 Introduction
State space search (SSS) strategies1 are not directly counted among the optimization algorithms.
In Global Optimization, objective functions f ∈ f are optimized. Normally, all elements of
the problem space X are valid and only differ in their utility as solution. Optimization is
about finding the elements with the best utility. State space search, instead, is based on
a criterion isGoal : X 7→ B which determines whether an element of the problem space is
a valid solution or not. The purpose of the search process is to find elements x for which
isGoal(x) is true. [635, 797, 2356]

46.1.1 The Baseline: A Binary Goal Criterion

Definition D46.1 (isGoal). The function isGoal : X 7→ B is the target predicate of state
space search algorithms which states whether a given candidate solution x ∈ X is a valid
solution (by returning true) or not (by returning false).

To build such an “isGoal” criterion for State space search strategies for Global Optimization
based on objective functions f , we could, for example, specify a threshold y̌i for each objective
function fi ∈ f . If we assume minimization, isGoal(x) of a candidate solution x can be defined
as true if and only if the values of all objective functions for x are below or equal to the
corresponding thresholds, as stated in Equation 46.1.

isGoal(x) = true if fi (x) ≤ y̌i ∀i ∈ 1.. |f |, and false otherwise    (46.1)
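Equation 46.1 translates directly into a small predicate factory; the rendering and names are ours, and all objectives are assumed to be subject to minimization:

```python
def make_is_goal(objectives, thresholds):
    """Build the Boolean goal predicate of Equation 46.1: a candidate
    x is a goal state iff every objective f_i(x) stays below or at
    its threshold y_i (minimization assumed for all objectives)."""
    def is_goal(x):
        return all(f(x) <= t for f, t in zip(objectives, thresholds))
    return is_goal
```

Any multi-objective optimization problem can thus be turned into a state space search problem once acceptable thresholds are fixed.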

46.1.2 State Space and Neighborhood Search

The search space G is referred to as state space in state space search and the genotypes
g ∈ G are called states. Furthermore, G is often identical to the problem space (G = X).
Most state space search can only be applied if the search space is enumerable. One feature
of the search algorithms introduced here is that they usually are deterministic. This means
that they will yield the same results in each run (when applied to the same problem, that
is). Jones [1467] provides a further comparison between SSS and Global Optimization.
1 http://en.wikipedia.org/wiki/State_space_search [accessed 2007-08-06]
46.1.2.1 The Neighborhood Expansion Operation
Instead of the basic search operations as introduced in Section 4.2, state space search al-
gorithms often use an operator which enumerates the neighborhood of a state. Here, we
call this operation “expand” and specify it in Definition D46.2. Hertz et al. [1228] refer to
optimization methods that base their search on such an operation, i. e., to state space search
approaches, as neighborhood search.

Definition D46.2 (expand). The operator expand : G 7→ P (G) receives one element g
from the search space G as input and computes a set G of elements which can be reached
from it. Based on Equation 4.3 on page 84, we can define “expand” in the context of
optimization according to Equation 46.2.
expand(g) = Op1 (g) (46.2)

Definition D46.3 (Neighborhood Search). Neighborhood search methods are itera-


tive procedures in which a neighborhood expand(g) is defined for each feasible solution
x = gpm(g), and the next solution x′ = gpm(g ′ ) is searched among the solutions in
g ′ ∈ expand(g). [1228]

46.1.2.2 Features of the “expand” Operator


“expand” is the exploration operation of state space search algorithms. Different from the
mutation operator of Evolutionary Algorithms, it usually is deterministic and returns a set
of genotypes instead of a single one. Applying it to the same element g will thus usually
yield the same set G.
The realization of “expand” has severe impact on the performance of search algorithms.
An efficient implementation, for example, should not include states in the returned set that
have already been visited before by the search. If the same elements are returned, the same
candidate solutions and all of their children will possibly be evaluated multiple times, which
would be useless and time consuming.
Another problem may occur if there are two elements g1 , g2 ∈ G with g1 ∈ expand(g2 )
and g2 ∈ expand(g1 ). Especially the uninformed state space search algorithms which
we discuss in the next section may get trapped in an endless loop. Thus, visiting a genotype
twice should always be avoided in state space search. Often, it is possible to design the search
operations in a way preventing this from the start. Otherwise, tabu lists should be used, as
done in the previously discussed Tabu Search algorithm (see Chapter 40 on page 489).

46.1.2.3 Expansion to Individual Lists


Since we want to keep our algorithm definitions as general as possible, we will keep the
notation of individuals p that encompass a genotype p.g ∈ G and a phenotype p.x ∈ X.
Therefore, we need an expansion operator that returns a set of individuals P rather than
a set G of elements of the search space. We therefore define the operation “expandP” in
Algorithm 46.1.

46.1.3 The Search Space as Graph


In Figure 46.1 we sketch the search space G as a graph spanned by the search operations
“searchOp” (combined to the set Op used by the “expand” operator). The genotypes gi are
the vertexes in such a graph. A directed edge from a genotype gi to another genotype gj
denotes that gj can be reached from gi with one search step, i. e., gj ∈ expand(gi ). In order
to apply (uninformed) search operations efficiently, this graph should be acyclic, meaning
that if there exists a path from gi to gj along the directed edges, there should not be a path
back from gj to gi , as we discussed before.

Algorithm 46.1: P ←− expandP(g)


Input: g ∈ G: the element of the search space G to be expanded
Data: i: a counter variable
Data: p: an individual record
Output: P : the list of individual records resulting from the expansion
1 begin
2 G ←− expand(g)
3 P ←− ()
4 for i ←− 0 up to len(G) − 1 do
5 p ←− ∅
6 p.g ←− G[i]
7 p.x ←− gpm(p.g)
8 P ←− addItem(P, p)
9 return P
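Algorithm 46.1 has a direct rendering in Python if we model individual records as small dictionaries holding p.g and p.x (this representation is our choice):

```python
def expand_p(g, expand, gpm):
    """Algorithm 46.1: wrap each genotype produced by expand(g) into
    an individual record carrying genotype p.g and phenotype p.x."""
    return [{"g": h, "x": gpm(h)} for h in expand(g)]
```

The search algorithms below can then work on these records without re-invoking the genotype-phenotype mapping for states already expanded.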

[Figure 46.1 here shows a search space sketched as an acyclic graph of genotypes g1 , . . . , g18 . A directed edge gi → gj means gj = searchOp(gi ) or, more generally, gj ∈ Op1 (gi ) ⇒ gj ∈ expand(gi ). Nodes ga with isGoal(gpm(ga )) = false are drawn white, nodes gb with isGoal(gpm(gb )) = true are shaded.]
Figure 46.1: An example for a possible search space sketched as acyclic graph.
46.1.4 Key Efficiency Features

Many state space search methods fall into the category of local search algorithms as given
in Definition D26.1 on page 229. For all state space search strategies, we can define four
criteria that tell if they are suitable for a given problem or not [797].

1. Completeness. Does the search algorithm guarantee to find a solution (given that
there exists one)? (Do not mix up with the completeness of search operations specified
in Definition D4.8 on page 85.)

2. Time Consumption. How much time will the strategy need to find a solution?

3. Memory Consumption. How much memory will the algorithm need to store inter-
mediate steps? Together with time consumption this property is closely related to
complexity theory, as discussed in ?? on page ??.

4. Optimality. Will the algorithm find an optimal solution if there exist multiple correct
solutions?

46.2 Uninformed Search

The optimization algorithms that we have considered up to now always use some sort of
utility measure to decide in which direction to search for optima: The objective functions
allow us to make fine distinctions between different individuals.
The only exceptions, the only uninformed optimization methods we discussed so far, are
the trivial random walk and sampling as well as exhaustive search algorithms (see Chapter 8
on page 113). Under some circumstances, however, only the criterion “isGoal” is given as a
form of Boolean objective function.

Definition D46.4 (Uninformed Search). An uninformed search algorithm is an opti-


mization algorithm which does not incorporate any utility information about the candidate
solutions into its search decisions. It only applies a Boolean criterion “isGoal” which distin-
guishes between phenotypes that are valid solutions from those which are not.

If the objective functions or utility criteria are of Boolean nature, the methods previously
discussed will not be able to estimate and descend some form of gradient. In the best case,
they will degenerate to random walks which still have some chances to find an optimum. In
the worst case, they will converge to some area in the search space and cease to explore new
solutions soon.
Here, uninformed search strategies2 are a viable alternative, since they do not require or
take into consideration any knowledge about the special nature of the problem (apart from
the knowledge represented by the “expand” operation, of course). Such algorithms are very
general and can be applied to a wide variety of problems with small search spaces.
Their common drawback is that they are not suitable if a deep search in a large search
space is required. Without the incorporation of information in form of heuristic functions,
for example, the search may take very long and quickly becomes infeasible [635, 797, 2356].

2 http://en.wikipedia.org/wiki/Uninformed_search [accessed 2007-08-07]


46.2.1 Breadth-First Search

The uninformed state space search algorithms usually start at one initial location in the search
space. Breadth-first search3 (BFS) begins with expanding the neighborhood of the corre-
sponding root candidate solution. All of the states derived from this expansion are visited
and then all their children, and so on. In general, first all states in depth d are expanded
before considering any state in depth d + 1.
BFS is complete, since it will always find a solution if there exists one. If so, it will
also find the solution that can be reached from the root state with the smallest number
of expansion steps. Hence, breadth-first search is also optimal if the number of expansion
steps needed from the origin to a state is a monotone function of the objective value of
a solution, i. e., if depth(p1 .g) > depth(p2 .g) ⇒ f(p1 .x) > f(p2 .x) where depth(p.g) is the
number of “expand” steps from the root state r ∈ G to p.g.

Algorithm 46.2: X ←− BFS(r, isGoal)


Input: r ∈ G: the root state to start the expansion at
Input: isGoal : X 7→ B: an operator that checks whether a state is a goal state or not
Data: p: the state currently processed
Data: P : the queue of states to explore
Output: X ⊆ X: the set with the (single) solution found, or ∅
1 begin
2 P ←− (p = (p.g = r, p.x = gpm(r)))
3 while len(P ) > 0 do
4 p ←− P [0]
5 P ←− deleteItem(P, 0)
6 if isGoal(p.x) then return {p.x}
7 P ←− appendList(P, expandP(p.g))
8 return ∅

Algorithm 46.2 illustrates how breadth-first search works. The algorithm is initialized
with a root state r ∈ G which marks the starting point of the search. BFS uses a state list
P which initially only contains this root individual. In a loop, the first element p is removed
from this list. If the goal predicate isGoal(p.x) evaluates to true, p.x is a goal state and
we can return a set X = {p.x} containing it as the solution. Otherwise, we expand p.g and
append the newly found individuals to the end of queue P . If no solution can be found, this
process will continue until the whole accessible search space has been enumerated and P
becomes empty. Then, an empty set ∅ is returned in place of X, because there is no element
x in the (accessible part of the) problem space X for which isGoal(x) becomes true.
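A Python rendering of Algorithm 46.2 may look as follows; individuals are modeled as dictionaries (this representation is ours). As discussed above, no visited set is kept, so the sketch terminates only if the accessible space is finite and acyclic or a goal state exists:

```python
from collections import deque

def bfs(root, expand, gpm, is_goal):
    """Algorithm 46.2: explore the state space level by level using a
    FIFO queue; return the solution set {x} or the empty set."""
    queue = deque([{"g": root, "x": gpm(root)}])
    while queue:
        p = queue.popleft()              # take the oldest state first
        if is_goal(p["x"]):
            return {p["x"]}
        for h in expand(p["g"]):         # append expansions to the end
            queue.append({"g": h, "x": gpm(h)})
    return set()
```

Using `deque.popleft` instead of deleting from the front of a plain list keeps each dequeue operation in O(1).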
In order to examine the space and time complexity of BFS, we assume a hypothetical
state space GH where the expansion of each state g ∈ GH will always return a set of
len(expand(g)) = b new states. In depth 0, we only have one state, the root state r. In
depth 1, there are b states, and in depth 2 we can expand each of them to, again, b new
states which makes b^2, and so on. The total number of states up to depth d is given in
Equation 46.3:

1 + b + b^2 + · · · + b^d = (b^(d+1) − 1) / (b − 1) ∈ O(b^d)    (46.3)

Thus, the space and time complexity are both in O(b^d) and rise exponentially with d. In
the worst case, all nodes in depth d need to be stored, in the best case only those of depth
d − 1.
3 http://en.wikipedia.org/wiki/Breadth-first_search [accessed 2007-08-06]
46.2.2 Depth-First Search
Depth-first search4 (DFS) is very similar to BFS. From the algorithmic point of view, the
only difference is that it uses a stack instead of a queue as internal storage for states (compare
line 4 in Algorithm 46.3 with line 4 in Algorithm 46.2). Here, always the last element
of the set of expanded states is considered next. Thus, instead of searching level by level
in the breadth as BFS does, DFS searches in depth which – believe it or not – is the reason
for its name. Depth-first search advances in depth until the current state cannot be
expanded any further, i. e., expand(p.g) = (). Then, the search steps up one level again. If the whole
search space has been browsed and no solution is found, ∅ is returned.

Algorithm 46.3: X ←− DFS(r, isGoal)


Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X 7→ B: an operator that checks whether a state is a goal state or not
Data: p: the state currently processed
Data: P : the stack of states to explore
Output: X ⊆ X: the set with the (single) solution found, or ∅
1 begin
2 P ←− (p = (p.g = r, p.x = gpm(r)))
3 while len(P ) > 0 do
4 p ←− P [len(P ) − 1]
5 P ←− deleteItem(P, len(P ) − 1)
6 if isGoal(p.x) then return {p.x}
7 P ←− appendList(P, expandP(p.g))
8 return ∅

The memory consumption of the DFS grows linearly with the search depth, because
in depth d, at most d ∗ b states are held in memory (if we again assume the hypothetical
search space GH from the previous section). If we assume a maximum depth m, the time
complexity is b^m in the worst case, where the solution is the last child state in the path
explored last. If m is very large or infinite, a depth-first search may take very long to
discover a solution or will not find it at all, since it may get stuck in a “wrong” branch of the
state space which can maybe be followed (expanded) for many steps. Also, it may discover
a solution at a deep expansion level while another one may exist at a shallower level in another,
unexplored branch of the search graph. Hence, depth-first search is neither complete nor
optimal.
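The corresponding rendering of Algorithm 46.3 (again with our dictionary representation of individuals) differs from the BFS sketch only in its LIFO discipline:

```python
def dfs(root, expand, gpm, is_goal):
    """Algorithm 46.3: identical to BFS except that the most recently
    expanded state is taken next, so the search dives into depth."""
    stack = [{"g": root, "x": gpm(root)}]
    while stack:
        p = stack.pop()                  # take the newest state first
        if is_goal(p["x"]):
            return {p["x"]}
        for h in expand(p["g"]):
            stack.append({"g": h, "x": gpm(h)})
    return set()
```

Since `stack.pop()` removes the last element, the last child appended is explored first; the order among siblings is reversed compared to the expansion order, which does not affect completeness on finite spaces.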

46.2.3 Depth-Limited Search


The depth-limited search5 [2356] is a depth-first search that only proceeds up to a given
maximum depth d. In other words, it does not examine candidate solutions that are more
than d “expand”-operations away from the root state r, as outlined in Algorithm 46.4 in
a recursive form. The time complexity now becomes b^d and the memory complexity is in
O(b ∗ d). Of course, the depth-limited search can neither be complete nor optimal. However,
if the maximum depth of the possible solutions in the search tree is known, DLS can be
sufficient.
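The recursive form of Algorithm 46.4 can be sketched as follows (rendering and names ours):

```python
def dls(g, expand, gpm, is_goal, d):
    """Algorithm 46.4: recursive depth-first search that is cut off
    after d expansion steps below the given root state g."""
    x = gpm(g)
    if is_goal(x):
        return {x}
    if d > 0:                        # only descend while budget remains
        for h in expand(g):
            found = dls(h, expand, gpm, is_goal, d - 1)
            if found:
                return found
    return set()
```

A goal state more than d expansion steps away from the root is simply never reached, which is why completeness is lost.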

46.2.4 Iteratively Deepening Depth-First Search


The iteratively deepening depth-first search6 (IDDFS, [2356]), defined in Algorithm 46.5,
4 http://en.wikipedia.org/wiki/Depth-first_search [accessed 2007-08-06]
5 http://en.wikipedia.org/wiki/Depth-limited_search [accessed 2007-08-07]
6 http://en.wikipedia.org/wiki/IDDFS [accessed 2007-08-08]

Algorithm 46.4: X ←− DLS(r, isGoal, d)


Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X 7→ B: an operator that checks whether a state is a goal state or not
Input: d: the (remaining) allowed depth steps
Input: x ∈ X: a phenotype
Input: g ∈ G: a genotype
Data: p: the state currently processed
Output: X ⋆ : the solutions found, or ∅
1 begin
2 x ←− gpm(r)
3 if isGoal(x) then return {x}
4 if d > 0 then
5 foreach g ∈ expand(r) do
6 X ←− DLS(g, isGoal, d − 1)
7 if len(X) > 0 then return X
8 return ∅

iteratively runs a depth-limited search with stepwise increasing maximum depths d. In each
iteration, it visits the states in the same order as the depth-first search. Since the maximum
depth is always incremented by one, each iteration explores one new level, in terms of
distance in "expand"-operations from the root. This effectively leads to a form of
breadth-first search.

Algorithm 46.5: x ←− IDDFS(r, isGoal)


Input: r: the root individual to start the expansion at
Input: isGoal : X 7→ B: an operator that checks whether a state is a goal state or not
Data: d: the current depth limit
Output: X: the set with the (single) solutions found, or ∅
1 begin
2 d ←− 0
3 repeat
4 X ←− DLS(r, isGoal, d)
5 d ←− d + 1
6 until len(X) > 0
7 return X

IDDFS thus unites the advantages of BFS and DFS: It is complete and optimal, but its
memory consumption only rises linearly, in O(d ∗ b). The time consumption, of course, is
still in O(b^d). IDDFS is the best uninformed search strategy and can be applied to large
search spaces with unknown depth of the solution.
Algorithm 46.5 is intended for (countably) infinitely large search spaces. In real
systems, there is a maximum d after which the whole space would have been explored, and
the algorithm should return ∅ if no solution was found.
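The loop of Algorithm 46.5 together with the real-system depth cap mentioned above can be sketched as follows. This is a hypothetical, self-contained sketch: the inner dls helper is a plain depth-limited DFS, states serve as both genotypes and phenotypes, and expand and is_goal are problem-specific.

```python
def iddfs(root, expand, is_goal, max_depth=None):
    """Iteratively deepening depth-first search (cf. Algorithm 46.5)."""

    def dls(state, d):
        # inner depth-limited DFS: descend at most d expand-steps
        if is_goal(state):
            return state
        if d > 0:
            for child in expand(state):
                found = dls(child, d - 1)
                if found is not None:
                    return found
        return None

    d = 0
    while max_depth is None or d <= max_depth:
        found = dls(root, d)      # re-search from scratch with limit d
        if found is not None:
            return found          # shallowest goal found first
        d += 1
    return None  # space explored up to max_depth without finding a goal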

46.2.5 General Information on Uninformed Search

46.2.5.1 Applications and Examples



Table 46.1: Applications and Examples of Uninformed Search.


Area References
Combinatorial Problems [1999, 2065, 3013]
Distributed Algorithms and Sys- [49, 50, 225, 334, 335, 1146, 1295, 1569, 1570, 1816, 1999,
tems 2065, 2265, 2909, 2998, 3011, 3013, 3067, 3073, 3074]
Economics and Finances [49, 50]
Graph Theory [49, 225, 3013]
Software [1569, 1570, 2265]

46.2.5.2 Conferences and Workshops

Table 46.2: Conferences and Workshops on Uninformed Search.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.

46.3 Informed Search

In an informed search7 , a heuristic function φ helps to decide which states are to be expanded
next. If the heuristic is good, informed search algorithms may dramatically outperform
uninformed strategies [1887, 2140, 2276]. Based on Definition D1.14 on page 34, we now
provide the more specific Definition D46.5.

Definition D46.5 (Heuristic Function). Heuristic functions φ : G × X 7→ R+ are problem
domain dependent functions which map the individuals (comprised of genotypes p.g ∈ G
and phenotypes p.x ∈ X) to the positive real numbers R+. All heuristics are zero for the
elements which are (optimal) solutions, i. e.,

∀p ∈ G × X (p.x ∈ X) : isGoal(p.x) ⇒ φ(p) = 0   ∀ heuristics φ : G × X 7→ R+   (46.4)

7 http://en.wikipedia.org/wiki/Search_algorithms#Informed_search [accessed 2007-08-08]

There are two possible meanings of the values returned by a heuristic function φ:
1. In the above sense, the value φ(p.x) of a heuristic function for an individual p is
higher the more "expand"-steps p.g probably (or approximately) is away from a
valid solution. Hence, the heuristic function represents the distance of an individual
to a solution of the optimization problem.
2. The heuristic function can also represent an objective function in some way. Suppose
that we know the minimal value y̌ of an objective function f, or at least a value
from which on all solutions are feasible. If this is the case, we could set φ(p.x) =
max {0, f(p.x) − y̌}, assuming that f is subject to minimization. Now the value of the
heuristic function is smaller the closer an individual is to a possibly correct
solution, and Equation 46.4 still holds. In other words, a heuristic function may also
represent the distance to a solution in objective space.
Of course, both meanings are often closely related, since states that are close to each other
in problem space are probably also close to each other in objective space (the opposite does
not necessarily hold).

Definition D46.6 (Best-First Search). A best-first search8 [2140] is a search algorithm
that incorporates a heuristic function φ in a way which ensures that promising
individuals p with low estimation values φ(p.x) are evaluated before other states q that
receive higher values φ(q.x) > φ(p.x).

46.3.1 Greedy Search

A greedy search9 is a best-first search where the currently known candidate solution with
the lowest heuristic value is investigated next. The greedy algorithm internally sorts the list
of currently known states in descending order according to the heuristic function φ. Thus,
the elements with the best (lowest) heuristic values are at the end of the list, which can then
be used as a stack. The greedy search as specified in Algorithm 46.6 works like a
depth-first search on this stack and thus also shares most of the properties of the DFS. It
is neither complete nor optimal and its worst-case time consumption is b^m. On the other
hand, like breadth-first search, its worst-case memory consumption is also b^m.
Greedy search algorithms always find the best possible solutions very efficiently if the
problem forms a matroid10 [2110]. In other situations, however, greedy search can produce
very bad results and should therefore be used with extreme care [201].
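In practice, the sort-then-pop scheme of Algorithm 46.6 is usually replaced by a priority queue, which yields the same expansion order without re-sorting the whole list in every iteration. A hypothetical Python sketch, with expand and phi problem-specific and states used directly as both genotype and phenotype:

```python
import heapq

def greedy_search(root, expand, is_goal, phi):
    """Greedy best-first search: always expand the state with the lowest
    heuristic value phi. A min-heap replaces the book's sorted list/stack.
    """
    counter = 0                              # tie-breaker so heapq never compares states
    frontier = [(phi(root), counter, root)]
    while frontier:
        _, _, state = heapq.heappop(frontier)   # most promising state first
        if is_goal(state):
            return state
        for child in expand(state):
            counter += 1
            heapq.heappush(frontier, (phi(child), counter, child))
    return None                              # frontier exhausted, no goal found
```

The heap makes each insertion and removal O(log n) instead of the O(n log n) full sort per iteration implied by sortDsc.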

46.3.2 A⋆ Search

A⋆ search11 (pronounced "A-star search") is a best-first search that uses an estimation
function φ⋆ : G × X 7→ R+ which is the sum of a heuristic function φ(p), estimating the
costs needed to get from p to a valid solution, and a function g : G × X 7→ R+ that computes
the costs for reaching p.

φ⋆ (p) = g (p) + φ(p)   (46.5)
8 http://en.wikipedia.org/wiki/Best-first_search [accessed 2007-09-25]
9 http://en.wikipedia.org/wiki/Greedy_search [accessed 2007-08-08]
10 http://en.wikipedia.org/wiki/Matroid [accessed 2010-09-19]
11 http://en.wikipedia.org/wiki/A%2A_search [accessed 2007-08-09]

Algorithm 46.6: X ←− greedySearch(r, isGoal, φ)


Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X 7→ B: an operator that checks whether a state is a goal state or not
Input: φ : G × X 7→ R+ : the heuristic function
Data: p: the state currently processed
Data: P : the queue of states to explore
Output: X: the solution states found, or ∅
1 begin
2 P ←− (p = (p.g = r, p.x = gpm(r)))
3 while len(P ) > 0 do
4 P ←− sortDsc(P, φ)
5 p ←− P [len(P ) − 1]
6 P ←− deleteItem(P, len(P ) − 1)
7 if isGoal(p.x) then return {p.x}
8 P ←− appendList(P, expandP(p.g))
9 return ∅

A⋆ search proceeds exactly like the greedy search outlined in Algorithm 46.6, if φ⋆ is used
instead of the plain φ. For a given start state r ∈ G, an isGoal operator isGoal : X 7→ B,
and an estimation function φ⋆ according to Equation 46.5, Equation 46.6 holds.

A⋆ Search(r, isGoal, φ⋆ ) = greedySearch(r, isGoal, φ⋆ ) (46.6)

An A⋆ search will definitely find a solution if there exists one, i. e., it is complete.

Definition D46.7 (Admissible Heuristic Function). A heuristic function φ : G × X 7→ R+
is admissible if it never overestimates the minimal costs for reaching a goal state.

Definition D46.8 (Monotonic Heuristic Function). A heuristic function φ : G × X 7→ R+
is monotonic12 if it never overestimates the costs for getting from one state to its successor.

φ(p) ≤ g (q) − g (p) + φ(q)   ∀q.g ∈ expand(p.g)   (46.7)

An A⋆ search is optimal if the heuristic function φ used is admissible. Optimal in this case
means that no other search algorithm can find the same solution with fewer expansion steps
when using the same heuristic. If "expand" is implemented in a way that prevents a state
from being visited more than once, φ also needs to be monotonic in order for the search to
be optimal [1204].

46.3.3 General Information on Informed Search

46.3.3.1 Applications and Examples

Table 46.3: Applications and Examples of Informed Search.

12 see Definition D51.21 on page 647


Area References
Combinatorial Problems [36, 54, 116, 201, 233, 266, 570, 640, 644, 687, 733, 762, 901,
1119, 1153, 1154, 1201, 1486, 1639, 1735, 1803, 1990, 2067,
2117, 2283, 2462, 2483, 2653, 2765, 2769, 2795, 2865, 3027]
Data Mining [116]
Databases [36, 116, 233, 570, 640, 762, 1119, 1153, 1154, 1201, 1639,
1735, 1814, 1990, 2117, 2462, 2483, 2765, 2795, 2865, 3012]
Distributed Algorithms and Sys- [48–50, 54, 233, 334, 335, 687, 761, 1473, 1574, 1797, 2064,
tems 2067, 2117, 2264, 2903, 2909, 3027, 3067, 3074]
Economics and Finances [49, 50, 1060, 2067, 2264]
Engineering [640, 644, 901, 1486, 1574]
Graph Theory [49, 201, 640, 644, 687, 1486, 1574, 2596, 2653, 3027]
Logistics [201, 733, 901, 1803, 2769]
Security [1486]
Software [1486]
Theorem Proving and Automatic [640]
Verification
Wireless Communication [640, 644, 1486, 2596]

46.3.3.2 Conferences and Workshops

Table 46.4: Conferences and Workshops on Informed Search.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
Chapter 47
Branch And Bound
47.1 General Information on Branch And Bound

47.1.1 Applications and Examples

Table 47.1: Applications and Examples of Branch And Bound.


Area References
Combinatorial Problems [148, 1095, 1546, 2423]
Databases [1095]
Distributed Algorithms and Sys- [202]
tems
Engineering [18]
Graph Theory [202]
Logistics [148, 1546]

47.1.2 Books
1. Introduction to Global Optimization [2126]
2. Global Optimization: Deterministic Approaches [1271]

47.1.3 Conferences and Workshops

Table 47.2: Conferences and Workshops on Branch And Bound.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
Chapter 48
Cutting-Plane Method
48.1 General Information on Cutting-Plane Method

48.1.1 Applications and Examples

Table 48.1: Applications and Examples of Cutting-Plane Method.


Area References
Combinatorial Problems [252]
Logistics [252]

48.1.2 Books
1. Nonlinear Programming: Analysis and Methods [152]

48.1.3 Conferences and Workshops

Table 48.2: Conferences and Workshops on Cutting-Plane Method.

HIS: International Conference on Hybrid Intelligent Systems


See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization – Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
Part V

Applications
Chapter 49
Real-World Problems

49.1 Symbolic Regression


In this section, we want to discuss symbolic regression in a bit more detail, after already
having shed some light on it in Example E31.1 on page 387. In statistics, regression analysis
examines the unknown relation ϕ : Rm 7→ R of a dependent variable y ∈ R to specified
independent variables x ∈ Rm. Since ϕ is not known, the goal is to find a reasonably good
approximation ψ⋆.

Definition D49.1 (Regression). Regression1 [828, 980, 1548] is a statistical technique used
to predict the value of a variable which depends on one or more independent variables.

The result of the regression process is a function ψ⋆ : Rm 7→ R that relates the m independent
variables (subsumed in the vector x) to one dependent variable y ≈ ψ⋆(x). The function ψ⋆
is the best estimator chosen from a set Ψ of candidate functions ψ : Rm 7→ R. Regression is
strongly related to the estimation theory outlined in Section 53.6 on page 691. In most cases,
like linear2 or nonlinear3 regression, the mathematical model of the candidate functions is
not completely free. Instead, we pick a specific one from an array of parametric functions
by finding the best values for the parameters.

Definition D49.2 (Symbolic Regression). Symbolic regression [149, 846, 1514, 1602,
2251, 2377, 2997] is a general approach to regression where regression functions can be
constructed by combining elements of a set of mathematical expressions, variables and con-
stants.

Symbolic regression is thus not limited to determining the optimal values for the parameter
set of a certain array of functions. It is much more general than polynomial fitting
methods.

49.1.1 Genetic Programming: Genome for Symbolic Regression

One of the most widespread methods to perform symbolic regression is to apply Genetic
Programming. Here, the candidate functions are constructed and refined by an evolutionary
process. In the following we will discuss the genotypes (which are also the phenotypes) of the
1 http://en.wikipedia.org/wiki/Regression_analysis [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Linear_regression [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Nonlinear_regression [accessed 2007-07-03]
evolution as well as the objective functions that drive it. As illustrated in Figure 49.1, the
candidate solutions, i. e., the candidate functions, are represented by a tree of mathematical
expressions where the leaf nodes are either constants or the fields of the independent variable
vector x.

[Figure 49.1: An example genotype of symbolic regression with x = x ∈ R1: the mathematical expression 3*x*x + cos(1/x) and its Genetic Programming tree representation.]

The set Ψ of functions ψ that can possible be evolved is limited by the set of expressions
Σ available to the evolutionary process.

Σ = {+, −, ∗, /, exp, ln, sin, cos, max, min, . . . } (49.1)
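As an illustration, a candidate function over a small subset of Σ could be encoded as a nested tuple and evaluated recursively. This is a hypothetical representation, not prescribed by the text; in particular, the protected division that returns 1.0 on a zero denominator is one common GP convention for keeping evaluation total.

```python
import math

# A candidate function is a nested tuple (operator, child, ...); leaves are
# either the variable name "x" or a numeric constant.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else 1.0,  # protected division
    "cos": math.cos,
}

def evaluate(tree, x):
    """Evaluate an expression tree at the point x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return float(tree)
    op, *children = tree
    return OPS[op](*(evaluate(c, x) for c in children))

# The tree of Figure 49.1: 3*x*x + cos(1/x)
tree = ("+", ("*", ("*", 3, "x"), "x"), ("cos", ("/", 1, "x")))
```

Crossover and mutation then operate on subtrees of such tuples, which constrains the reachable set Ψ exactly as described above.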

Another aspect that influences the possible results of the symbolic regression is the concept
of constants. In general, constants are not really needed since they can be constructed
indirectly via expressions. The constant 2.5, for example, equals the expression
x/(x+x) + ln(x∗x)/ln(x). The evolution of such artificial constants, however, takes rather long.
Koza [1602] has therefore introduced the concept of ephemeral random constants.

Definition D49.3 (Ephemeral Random Constants). In symbolic regression, an ephemeral
random constant is a leaf node which represents a constant real value in a formula. This
value is chosen randomly at its creation and then remains unchanged, i. e., constant.

If a new individual is created and a leaf in its expression tree is chosen to be an ephemeral
random constant, a random number is drawn from a uniform distribution over a reasonable
interval. For each new constant leaf, a new constant is created independently. The values
of the constant leaves remain unchanged and can be moved around and copied by crossover
operations.
According to Koza's idea, ephemeral random constants remain unchanged during the
evolutionary process. In our work, it has proven practicable to extend his approach by
providing a mutation operation that changes the value c of a constant leaf of an individual.
A good policy for doing so is replacing the old constant value cold by a new one cnew
which is a normally distributed random number with the expected value cold (see ?? on
page ??):

cnew = randomNorm(cold, σ²)   (49.2)
σ² = e^(−randomUni[0,10)) ∗ |cold|   (49.3)
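Equations 49.2 and 49.3 translate directly into a small Python sketch (mutate_constant is a hypothetical name). Note that Python's random.gauss expects the standard deviation, i.e., the square root of σ²:

```python
import math
import random

def mutate_constant(c_old):
    """Mutate an ephemeral random constant as in Equations 49.2/49.3:
    draw c_new normally distributed around c_old, with a variance that
    is a randomly scaled-down fraction of |c_old|.
    """
    sigma_sq = math.exp(-random.uniform(0.0, 10.0)) * abs(c_old)
    # random.gauss takes the standard deviation, hence the square root
    return random.gauss(c_old, math.sqrt(sigma_sq))
```

Because the exponent is drawn from [0, 10), most mutations are tiny refinements of c_old, while a few explore values up to roughly one standard deviation of |c_old| away.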

Notice that the other reproduction operators for tree genomes have been discussed in detail
in Section 31.3 on page 386.

49.1.2 Sample Data, Quality, and Estimation Theory

In the following elaborations, we will reuse some terms that we have applied in our discussion
on likelihood in Section 53.6.2 on page 693, in order to find out which measures will
make good objective functions for symbolic regression problems. Again, we are given a finite
set of sample data A containing n = |A| pairs (xi, yi), where the vectors xi ∈ Rm are
known inputs to an unknown function ϕ : Rm 7→ R and the scalars yi are its observed
outputs (possibly contaminated with noise and measurement errors subsumed in the term
ηi, see Equation 53.242 on page 693). Furthermore, we can access a (possibly infinitely large)
set Ψ of functions ψ : Rm 7→ R ∈ Ψ which are possible estimators of ϕ. For the inputs xi,
the results of these functions ψ deviate from the yi by the estimation error ε (see Definition D53.56 on
page 692).

yi = ϕ(xi ) + ηi ∀i ∈ [0..n − 1] (49.4)


yi = ψ (xi ) + εi (ψ) ∀ψ ∈ Ψ, i ∈ [0..n − 1] (49.5)

In order to guide the evolution of estimators (in other words, for driving the regression
process), we need an objective function that furthers candidate solutions that represent the
sample data A and thus, resemble the function ϕ, closely. Let us call this “driving force”
quality function.

Definition D49.4 (Quality Function). The quality function f(ψ, A) defines the quality
of the approximation of a function ϕ by a function ψ. The smaller the value of the quality
function, the more precise is the approximation of ϕ by ψ in the context of the sample
data A.

Under the conditions that the measurement errors ηi are uncorrelated and all normally
distributed with an expected value of zero and the same variance (see Equation 53.243, Equation 53.244, and Equation 53.245 on page 693), we have shown in Section 53.6.2 that the best
estimators minimize the mean square error (MSE; see Equation 53.258 on page 695, Definition D53.62 on page 696, and Definition D53.59 on page 692). Thus, if the source of the
values yi complies with these conditions, at least in a simplified, theoretical manner, or even
is a real measurement process, the square error is the quality function to choose.

fσ≠0 (ψ, A) = Σ_{i=0}^{len(A)−1} (yi − ψ(xi))²   (49.6)

While this is normally true, there is one exception to the rule: the case where the values
yi are no measurements but direct results from ϕ and η = 0. A common example for this
situation is if we apply symbolic regression in order to discover functional identities [1602,
2033, 2359] (see also Example E49.1 on the next page). Different from normal regression
analysis or estimation, we then know ϕ exactly and want to find another function ψ⋆ that
is an equivalent form of ϕ. Therefore, we use ϕ to create the sample data set A
beforehand, carefully selecting characteristic points xi. Thus, the noise and the measurement
errors ηi all become zero. If we still regarded them as normally distributed, their variance
s² would be zero, too.
The proof for the statement that minimizing the square errors maximizes the likelihood
is based on the transition from Equation 53.253 to Equation 53.254 on page 695, where we
cancel divisions by s². This is not possible if σ becomes zero. Hence, we may or may not select
metrics different from the square error as quality function. Its feature of punishing larger
deviations more strongly than small ones, however, is attractive even if the measurement errors
become zero. Another metric which can be used as quality function in these circumstances
is the sum of the absolute values of the estimation errors:

fσ=0 (ψ, A) = Σ_{i=0}^{len(A)−1} |yi − ψ(xi)|   (49.7)
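Both quality functions are straightforward to state in Python. A minimal sketch, where psi is a candidate estimator and samples is the list of (xi, yi) pairs from A (sse and sae are hypothetical names):

```python
def sse(psi, samples):
    """Sum of squared estimation errors (cf. Equation 49.6), the quality
    function of choice when the yi are noisy measurements."""
    return sum((y - psi(x)) ** 2 for x, y in samples)

def sae(psi, samples):
    """Sum of absolute estimation errors (cf. Equation 49.7), usable when
    the samples are exact, i.e., all noise terms are zero."""
    return sum(abs(y - psi(x)) for x, y in samples)
```

Either function can directly serve as the (minimized) objective driving the Genetic Programming run.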
Example E49.1 (An Example and the Phenomenon of Overfitting).
If multi-objective optimization can be applied, the quality function should be complemented
by an objective function that puts pressure in the direction of smaller estimations ψ. In
symbolic regression by Genetic Programming, the problem of code bloat (discussed in ??
on page ??) is eminent. Here, functions do not only grow large because they include useless
expressions (like (x∗x+x)/x − x − 1). A large function may consist of functional expressions only,
but instead of really representing or approximating ϕ, it degenerates to just some sort of
misfit decision table. This phenomenon is called overfitting and has initially been discussed
in Section 19.1 on page 191.
Let us, for example, assume we want to find a function similar to Equation 49.8. Of
course, we would hope to find something like Equation 49.9.

y = ϕ(x) = x² + 2x + 1   (49.8)
y = ψ⋆(x) = (x + 1)² = (x + 1)(x + 1)   (49.9)

i    xi     yi = ϕ(xi)   ψ2⋆(xi)
0    −5     16           15.59
1    −4.9   15.21        15.40
2    0.1    1.21         1.11
3    2.9    15.21        15.61
4    3      16           16
5    3.1    16.81        16.48
6    4.9    34.81        34.54
7    5      36           36.02
8    5.1    37.21        37.56

Table 49.1: Sample Data A = {(xi, yi) : i ∈ [0..8]} for Equation 49.8


For testing purposes, we randomly choose the nine sample data points listed in Table 49.1.
As a result of Genetic Programming based symbolic regression, we may obtain something like
Equation 49.10, outlined in Figure 49.2, which represents the data points quite precisely but
has nothing to do with the original form of our equation.

ψ2⋆ (x) = (((((0.934911896352446 * 0.258746335682841) - (x * ((x / ((x -


0.763517999368926) + ( 0.0452368900127981 - 0.947318140392111))) / ((x - (x + x)) +
(0.331546588012695 * (x + x)))))) + 0.763517999368926) + ((x - ((( 0.934911896352446 *
((0.934911896352446 / x) / (x + 0.947390132934724))) + (((x * 0.235903629190878) * (x -
0.331546588012695)) + ((x * x) + x))) / x)) * ((((x - (x * (0.258746335682841 /
0.455160839551232))) / (0.0452368900127981 - 0.763517999368926)) * x) *
(0.763517999368926 * 0.947318140392111)))) - (((((x - (x * (0.258746335682841 /
0.455160839551232))) / (0.0452368900127981 - 0.763517999368926)) * 0.763517999368926)
* x) + (x - (x * (0.258746335682841 * 0.934911896352446))))) 49.10

We obtained both functions ψ1⋆ (in its second form) and ψ2⋆ using the symbolic regression
applet of Hannes Planatscher, which can be found at http://www.potschi.de/sr/ [accessed 2007-07-03]4.
It needs to be said that the first (wanted) result occurred far more often than
absurd variations like ψ2⋆. But indeed, there are some factors which further the evolution of
such eyesores:
1. If only few sample data points are provided, the set of prospective functions that have
a low estimation error becomes larger. Therefore, chances are that symbolic regression
provides results that only match those points but differ in all other points significantly
from ϕ.
4 Another good applet for symbolic regression can be found at http://alphard.ethz.ch/gerber/approx/default.html [accessed 2007-07-03]



[Figure 49.2: ϕ(x), the evolved ψ1⋆(x) ≡ ϕ(x), and ψ2⋆(x).]

2. If the sample data points are not chosen wisely, their expressiveness is low. We, for
instance, chose 4.9, 5, and 5.1 as well as 2.9, 3, and 3.1, which form two groups with
members very close to each other. Therefore, a curve that approximately hits these
two clouds is automatically rated with a good quality value.

3. A small population size decreases the diversity and furthers "incest" between similar
candidate solutions. Due to a lower rate of exploration, often only a local minimum of
the quality value will be reached.

4. Allowing functions of large depth and putting low pressure against bloat (see ?? on
page ??) leads to uncontrolled function growth. The real laws ϕ that we want to
approximate with symbolic regression usually do not consist of more than 40 expressions.
This is valid for most physical, mathematical, or financial equations. Therefore, the
evolution of large functions is counterproductive in those cases.

Although we made some of these mistakes intentionally, there are many situations where it
is hard to determine good parameter sets and restrictions for the evolution and they occur
accidentally.

49.1.3 Limits of Symbolic Regression

Often, we cannot obtain an optimal approximation of ϕ, especially if ϕ cannot be represented
by the basic expressions available to the regression process. One of these situations
has already been discussed before: the case where ϕ has no closed arithmetical expression.
Another possibility is that the regression method tries to generate a polynomial that
approximates ϕ, but ϕ contains different expressions like sin or e^x, or polynomials of
an order higher than available. Yet another problem is that the values yi are often not
results computed by ϕ directly but could, for example, be measurements taken from some
physical entity, where we want to use regression to determine the interrelations between this
entity and some known parameters. Then, the measurements will be biased by noise and
systematic measurement errors. In this situation, f(ψ⋆, A) will be greater than zero even
after a successful regression.
49.2 Data Mining

Definition D49.5 (Data Mining). Data mining5 can be defined as the nontrivial extraction
of implicit, previously unknown, and potentially useful information from data [991], and as
the science of extracting useful information from large data sets or databases [1176].

Today, gigantic amounts of data are collected in the web, in medical databases, by enterprise
resource planning (ERP) and customer relationship management (CRM) systems in corporations,
in web shops, by administrative and governmental bodies, and in science projects.
These data sets are far too large to be incorporated directly into a decision making process
or to be understood as-is by a human being. Instead, automated approaches have to be
applied that extract the relevant information, find underlying rules and patterns, or
detect time-dependent changes. Data mining subsumes the methods and techniques capable
of performing this task. It is very closely related to estimation theory in stochastics (discussed
in Section 53.6 on page 691) – the simplest summaries of a data set are still the arithmetic
mean or the median of its elements. Data mining is also strongly related to artificial
intelligence [797, 2356], which includes (machine) learning algorithms that can generalize the
given information. Some of the most widespread and most common data mining techniques
are:
1. artificial neural networks (ANN) [310, 314],
2. support vector machines (SVM) [450, 2773, 2788, 2849],
3. logistic regression [37],
4. decision trees [281, 406, 2234, 2961],
5. Genetic Programming for synthesizing classifiers [481, 993, 1985, 2855],
6. Learning Classifier Systems as introduced in Chapter 35 on page 457, and
7. naïve Bayes Classifiers [805, 2312].

49.2.1 Classification
Broadly speaking, the task of classification is to assign a class k from a set of possible
classes K to vectors (data samples) a = (a[0], a[1], . . . , a[n − 1]) consisting of n attribute
values a[i] ∈ Ai. In supervised approaches, the starting point is a training set At containing
training samples at ∈ At for which the corresponding classes class(at) ∈ K are already
known.
A data mining algorithm is supposed to learn a relation (called classifier) C : A0 × A1 ×
· · · × An−1 7→ K which can map such attribute vectors a to a corresponding class k ∈ K. The
training set At usually contains only a small subset of the possible attribute vectors and may
even include contradicting samples (at,1 = at,2 : at,1, at,2 ∈ At ∧ class(at,1) ≠ class(at,2)).
The better the classifier C is, the more often it correctly classifies attribute vectors.
An overfitted classifier has no generalization capability: it learns only the exact relations
provided in the training samples but is unable to classify samples not included in At with
reasonable precision. In order to test whether a classifier C is overfitted, the complete
available data A is not used for learning. Instead, A is divided into the training set At and the
test set Ac. If C's precision on Ac is much worse than on At, it is overfitted and should not
be used in a practical application.
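The split-and-compare procedure just described can be sketched in Python. A minimal, hypothetical sketch (split_train_test and accuracy are illustrative names, not from the text); samples are (attribute vector, class) pairs:

```python
import random

def split_train_test(data, test_fraction=0.3, seed=0):
    """Split the available samples A into a training set A_t and a test
    set A_c, as suggested for detecting overfitted classifiers."""
    rng = random.Random(seed)
    shuffled = data[:]            # copy so the caller's list stays untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[cut:], shuffled[:cut]   # (training set A_t, test set A_c)

def accuracy(classifier, samples):
    """Fraction of attribute vectors the classifier assigns correctly."""
    return sum(1 for a, k in samples if classifier(a) == k) / len(samples)
```

A classifier whose accuracy on A_c is markedly below its accuracy on A_t would, by the criterion above, be flagged as overfitted.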

49.3 Freight Transportation Planning

5 http://en.wikipedia.org/wiki/Data_mining [accessed 2007-07-03]


49.3.1 Introduction

In this section, we outline a real-world application of Evolutionary Computation to freight


transportation planning which has been developed together with a set of partners in the
research project in.west and published in a set of scientific papers [2188, 2189, 2912–2914],
most prominently in the chapter Solving Real-World Vehicle Routing Problems with Evo-
lutionary Algorithms [2912] of Natural Intelligence for Scheduling, Planning and Packing
Problems [564], doi:10.1007/978-3-642-04039-9_2.

[Figure: plot of the freight traffic volume in 10^9 t·km (scale 0 to 1200) over the years 2005, 2020, 2035, and 2050.]

Figure 49.3: The freight traffic on German roads in billion tons*kilometer.

According to the German Federal Ministry of Economics and Technology [447], the freight
traffic volume on German roads will have doubled by 2050 as illustrated in Figure 49.3.
Reasons for this development are the effects of globalization as well as the central location
of the country in Europe. With the steadily increasing freight traffic resulting from trade
inside the European Union and global import and export [2248], transportation and logistics
become more important [2768, 2825]. Thus, a need for intelligent solutions for the strategic
planning of logistics becomes apparent [447]. Such a planning process can be considered as
a multi-objective optimization problem which has the goals [2607, 2914] of increasing the
profit of the logistics companies by

1. ensuring on-time collection and delivery of all parcels,


2. utilizing all available means of transportation (rail, trucks) efficiently, i. e., decreasing
the total transportation distances by using the capacity of the vehicles to the fullest,
while
3. reducing the CO2 production in order to become more environment-friendly.

Fortunately, the last point is a side-effect of the others. By reducing the total distance
covered and by transporting a larger fraction of the freight via (inexpensive) trains, not only
the driver’s work hours and the costs are decreased, but also the CO2 production declines.
Efficient freight planning is not a static procedure. Although it involves building an
overall plan on how to deliver orders, it should also be able to dynamically react to unforeseen
problems such as traffic jams or accidents. This reaction should lead to a local adaptation of
the plan and re-routing of all involved freight vehicles whereas parts of the plan concerning
geographically distant and uninvolved objects are supposed to stay unchanged.
In the literature, the creation of freight plans is known as the Vehicle Routing Prob-
lem [497, 1091, 2162]. In this section, we present an approach to Vehicle Routing for
real-world scenarios: the freight transportation planning component of the in.west system.
in.west, or “Intelligente Wechselbrücksteuerung” in full, is a joint research project of DHL,
Deutsche Post AG, Micromata, BIBA, and OHB Teledata funded by the German Federal
Ministry of Economics and Technology.6
In the following section, we discuss different flavors of the Vehicle Routing Problem and
the general requirements of logistics departments which specify the framework for our freight
planning component. These specific conditions rendered the related approaches outlined in
Section 49.3.3 infeasible for our situation. In Section 49.3.4, we present an Evolutionary
6 See http://www.inwest.org/ [accessed 2008-10-29].
Algorithm for multi-objective, real-world freight planning problems [2189]. The problem-
specific representation of the candidate solutions and the intelligent search operators working
on them are introduced, as well as the objective functions derived from the requirements.
Our approach has been tested in many different scenarios and the experimental results are
summarized in Section 49.3.5. The freight transportation planning component described
in this chapter is only one part of the holistic in.west approach to logistics which will be
outlined in Section 49.3.6. Finally, we conclude with a discussion of the results and future
work in Section 49.3.7.

49.3.2 Vehicle Routing in Theory and Practice

49.3.2.1 Vehicle Routing Problems

The Vehicle Routing Problem (VRP) is one of the most famous combinatorial optimization
problems. In simple terms, the goal is to determine a set of routes that can satisfy several
geographically scattered customers' demands while minimizing the overall costs [2162].
Usually, a fleet of vehicles located in one depot is supposed to fulfill these requests. The
original version of the VRP was proposed by Dantzig and Ramser [680] in 1959, who
addressed the calculation of a set of optimal routes for a fleet of gasoline delivery trucks.
As described next, a large number of variants of the VRP exist, adding different constraints
to the original definition. Within the scope of in.west, we first identified all the
restrictions of real-world Vehicle Routing Problems that occur in companies like DHL and then
analyzed available approaches from the literature.
The Capacitated Vehicle Routing Problem (CVRP), for example, is similar to the classical
Vehicle Routing Problem with the additional constraint that every vehicle must have the
same capacity. A fixed fleet of delivery vehicles must service known customers’ demands
of a single commodity from a common depot at minimum transit costs [806, 2212, 2262].
The Distance Vehicle Routing Problem (DVRP) is a VRP extended with the additional
constraint on the maximum total distance traveled by each vehicle. In addition, Multiple
Depot Vehicle Routing Problems (MDVRP) have several depots from which customers can
be supplied. Therefore, the MDVRP requires the assignment of customers to depots. A
fleet of vehicles is based at each depot. Each vehicle then starts at its corresponding depot,
services the customers assigned to that depot, and returns.
Typically, the planning period for a classical VRP is a single day. Different from this
approach are Periodic Vehicle Routing Problems (PVRP), where the planning period is ex-
tended to a specific number of days and customers have to be served several times with
commodities. In practice, Vehicle Routing Problems with Backhauls (VRPB), where
customers can return some commodities [2212], are very common. In this variant, all deliveries
for each route must be completed before any pickups are made. Then, it also becomes necessary
to take into account that the goods which customers return to the deliverer must fit into
the vehicle.
The Vehicle Routing Problem with Pick-up and Delivery (VRPPD) is a capacitated
Vehicle Routing Problem where each customer can be supplied with commodities as well
as return commodities to the deliverer. Finally, the Vehicle Routing Problem with Time
Windows (VRPTW) is similar to the classical Vehicle Routing Problem with the additional
restriction that time windows (intervals) are defined in which the customers have to be
supplied [2212]. Figure 49.4 shows the hierarchy of Vehicle Routing Problem variants and
also the problems which are relevant in the in.west case.

[Figure: hierarchy of VRP variants. Starting from the classical VRP, multi-depot constraints lead to the MDVRP, periodic planning to the PVRP, distance constraints to the DVRP, capacity constraints to the CVRP, and time windows to the VRPTW; distance and capacity constraints combine into the DCVRP, backhaul constraints yield the VRPB, and loading/unloading constraints yield the VRPPD and VRPSPD. The real-world problem in DHL/in.west touches aspects of all of these variants.]

Figure 49.4: Different flavors of the VRP and their relation to the in.west system.

49.3.2.2 Model of a Real-World Situation

As it becomes obvious from Figure 49.4, the situation in logistics companies is relatively
complicated and involves many different aspects of Vehicle Routing. The basic unit of freight
considered in this work is a swap body b, a standardized container (C 745, EN 284 [2679])
with a dimension of roughly 7.5m × 2.6m × 2.7m and special appliances for easy exchange
between transportation vehicles or railway carriages. Logistics companies like DHL usually
own up to one thousand such containers. We refer to the union of all swap bodies as the set
B.
We furthermore define the union of all possible means of transportation as the set F . All
trucks tr ∈ F can carry at most a certain maximum number v̂ (tr) of swap bodies at once.
Commonly and also in the case of DHL, this limit is v̂ (tr) = 2. The maximum load of trains
z ∈ F , on the other hand, is often more variable and usually ranges somewhere between
30 and 60 (v̂ (z) ∈ [30..60]). Trains have fixed routes, departure, and arrival times whereas
freight trucks can move freely on the map. In many companies, trucks must perform cyclic
tours, i.e., return to their point of departure by the end of the day, in order to allow the
drivers to return home.

The clients and the depots of the logistics companies together form more than one
thousand locations from which freight may be collected or to which it may be delivered.
We will refer to the set of all these locations as L. Each transportation order has a fixed
time window [ťs, t̂s] in which it must be collected from its source ls ∈ L. From there, it has
to be carried to its destination location ld ∈ L where it must arrive within a time window
[ťd, t̂d]. An order furthermore has a volume v which we assume to be an integer multiple
of the capacity of a swap body. Hence, a transportation order o can fully be described by
the tuple o = ⟨ls, ld, [ťs, t̂s], [ťd, t̂d], v⟩. In our approach, orders which require more than one
(v > 1) swap body will be split up into multiple orders requiring one swap body (v = 1)
each.
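As an illustration (the names are ours, not the in.west data model), the order tuple and the v > 1 splitting rule can be written as:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Order:
    """A transportation order o = <ls, ld, [ts_min, ts_max], [td_min, td_max], v>."""
    source: str    # ls: collection location
    dest: str      # ld: delivery location
    pickup: tuple    # collection time window at the source
    delivery: tuple  # arrival time window at the destination
    volume: int    # v: volume in swap-body capacities

def split_order(o):
    """Orders with v > 1 are split into v orders requiring one swap body each."""
    return [replace(o, volume=1) for _ in range(o.volume)]

big = Order("Essen", "Hannover", (9, 11), (14, 16), 3)
parts = split_order(big)
```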
Logistics companies usually have to service up to a few thousand such orders per day.
The express unit of the project partner DHL, for instance, delivered between 100 and 3000
per day in 2007, depending on the day of the week as well as national holidays etc.
The result of the planning process is a set x of tours. Each single tour ξ is described by
a tuple ξ = ⟨ls, ld, f, ť, t̂, b, o⟩. ls and ld are the start and destination locations and ť and
t̂ are the departure and arrival time of the vehicle f ∈ F . On this tour, f carries the set
b = {b1 , b2 , . . . } of swap bodies which, in turn, contain the orders o = {o1 , o2 , . . . }. It is
assumed that, for each truck, there is at least one corresponding truck driver and that the
same holds for all trains.
Tours are the smallest unit of freight transportation. Usually, multiple tours are combined
for a delivery: First, a truck tr may need to drive from the depot in Dortmund to
Bochum to pick up an unused swap body sb (ξ1 = ⟨Dortmund, Bochum, tr, 9am, 10am, ∅, ∅⟩).
In a subsequent tour ξ2 = ⟨Bochum, Essen, tr, 10.05am, 11am, {sb}, ∅⟩, it carries
the empty swap body sb to a customer in Essen. There, the order
o is loaded into sb and then transported to its destination o.ld = Hannover
(ξ3 = ⟨Essen, Hannover, tr, 11.30am, 4pm, {sb}, {o}⟩).
Obviously, the set x must be physically sound. It must, for instance, not contain
any two temporally intersecting tours ξ1, ξ2 (ξ1.ť < ξ2.t̂ ∧ ξ2.ť < ξ1.t̂) involving the same vehicle
(ξ1.f = ξ2.f), swap bodies (ξ1.b ∩ ξ2.b ≠ ∅), or orders (ξ1.o ∩ ξ2.o ≠ ∅). Also, it must be
ensured that all objects involved in a tour ξ reside at ξ.ls at time ξ.ť. Furthermore, the capacity
limits of all involved means of transportation must be respected, i. e., 0 ≤ |ξ.b| ≤ v̂(ξ.f). If
some of the freight is carried by trains, the fixed halting locations of the trains as well as
their assigned departure and arrival times must be considered. The same goes for laws re-
stricting the maximum amount of time a truck driver is allowed to drive without breaks and
constraints imposed by the company’s policies such as the aforementioned cyclic character
of truck tours. Only plans for which all these conditions hold can be considered as correct.
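The soundness conditions above translate into a simple pairwise check; the dictionary-based tour encoding and function names here are deliberately simplified illustrations, not the in.west representation:

```python
def tours_conflict(t1, t2):
    """Two tours conflict if their time spans intersect and they share a
    vehicle, a swap body, or an order (cf. the conditions above)."""
    overlap = t1["dep"] < t2["arr"] and t2["dep"] < t1["arr"]
    shared = (t1["vehicle"] == t2["vehicle"]
              or t1["bodies"] & t2["bodies"]
              or t1["orders"] & t2["orders"])
    return overlap and bool(shared)

def plan_is_sound(plan, capacity):
    """Check the capacity limit 0 <= |b| <= v(f) and pairwise conflicts."""
    for t in plan:
        if not (0 <= len(t["bodies"]) <= capacity[t["vehicle"]]):
            return False
    return not any(tours_conflict(a, b)
                   for i, a in enumerate(plan) for b in plan[i + 1:])

cap = {"truck1": 2, "truck2": 2}
# Two consecutive (non-overlapping) tours of the same truck: sound.
plan = [
    {"vehicle": "truck1", "dep": 9, "arr": 10, "bodies": {"sb1"}, "orders": set()},
    {"vehicle": "truck1", "dep": 10, "arr": 11, "bodies": {"sb1"}, "orders": {"o1"}},
]
```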
From the perspective of the planning system’s user, runtime constraints are of the same
importance: Ideally, the optimization process should not exceed one day. Even the best
results become useless if their computation takes longer than the time span from receiving
the orders to the day where they actually have to be delivered.
Experience has shown that hiring external carriers for a small fraction of the freight
can often significantly reduce the number of tours to be carried out by the company's own
vehicles and the corresponding total distance to be covered, if the organization's existing
capacities are already utilized to their limits. Therefore, a good transportation planning
system should also be able to make suggestions on opportunities for such an on-demand
outsourcing. An example for this issue is illustrated in Fig. 49.7.b.
The framework introduced in this section holds for practical scenarios in logistics companies
like DHL and Deutsche Post. It poses a hard challenge for research, since it involves
multiple intertwined optimization problems and combines several aspects even surpassing
the complexity of the most difficult Vehicle Routing Problems known from the literature.

49.3.3 Related Work


The approaches discussed in the literature on freight transportation planning can roughly be
divided into two basic families: exact and stochastic or metaheuristic methods (see Part III
and Part IV). The exact approaches are usually only able to solve small instances of Vehicle
Routing Problems – i. e., those with very limited numbers of orders, customers, or locations
– and therefore cannot be applied in most real-world situations. Using them in scenarios
with many constraints further complicates the problem [2162].
Heuristic methods are reliable and efficient approaches to address Vehicle Routing Prob-
lems of larger scale. Despite the growing problem dimension, they are still able to provide
high quality approximate solutions in a reasonable time. This makes them more attractive
than exact methods for practical applications. In over 40 years of research, a large number
of heuristics have been proposed for VRPs.
Especially, the metaheuristic optimization methods have received more and more at-
tention. Well-known members of this family of algorithms which have been applied to
Vehicle Routing and freight transportation planning are Tabu Search [83, 178, 403, 1065],
Simulated Annealing [667, 2770], Ant Systems [445, 802], and particularly Evolutionary
Algorithms [64, 402, 1446, 2686, 3084].
Ombuki-Berman and Hanshar [2075], for example, proposed a Genetic Algorithm (GA)
for the Multiple Depot Vehicle Routing Problem. To this end, they adopted an indirect and
adaptive inter-depot mutation exchange strategy, coupled with capacity and route-length
restrictions.
Machado et al. [1803] used a basic Vehicle Routing Problem to compare a standard
evolutionary approach with a coevolutionary method. They showed that the inclusion of a
heuristic method into evolutionary techniques significantly improves the results. Instead of
using additional heuristics, knowledge of the problem domain is incorporated into the search
operations in our work.
A cellular and thus decentralized GA for solving the Capacitated Vehicle Routing Problem
was presented by Alba Torres and Dorronsoro Díaz [64, 65]. This method has a high
performance in terms of the quality of the solutions found and the number of function evalu-
ations needed. Decentralization is a good basis for distributing EAs, a method for speeding
up the evolution which we will consider in our future work.
These methods perform a single-objective optimization enriched with problem-specific
constraints. The size of the problems tackled is roughly around a few hundred customers
and below 1000 orders. This is the case in most of the test sets available. Examples of such
benchmarks are the datasets by Augerat et al. [148], Van Breedam [2770], Golden et al.
[1090], Christofides et al. [577], and Taillard [2648] which are publicly available at [825,
2120, 2261]. Using these (partly artificial) benchmarks in our work was not possible since
the framework conditions in in.west are very different. Therefore, we could not perform a
direct comparison of our system with the other approaches mentioned.
To our knowledge, the problem most similar to the practical situation specified in Sec-
tion 49.3.2.2 is the Multiple Depot Vehicle Routing Problem with Pickup, Delivery and In-
termediary Depots (MDVRPPDID) defined by Sigurjónsson [2491]. This problem, however,
does not consider orders and freight containers as different objects. Instead, each container
has a source and a target destination and corresponds to one order. Also, all vehicles have
the same capacity of one container which is not the case in our system where trucks can
usually transport two containers and trains have much higher capacities. The Tabu Search
approach developed by Sigurjónsson [2491] is similar to our method in that it incorporates
domain knowledge in the solution structure and search operations. However, it also al-
lows infeasible intermediate solutions which we rule out in Section 49.3.4. It was tested
on datasets with up to 16 depots, 40 vehicles, and 100 containers, which is more than an
order of magnitude smaller than the problem dimensions the in.west system has to deal with.
Confessore et al. [619] define a Genetic Algorithm for the Capacitated Vehicle Routing
Problem with Time Windows (CVRPTW, see Figure 49.4) for real-world scenarios with a
heterogeneous vehicle fleet with different capacities, multi-dimensional capacity constraints,
order/vehicle, item/vehicle, and item/item compatibility constraints. In in.west, the hetero-
geneity of the vehicles is taken a step further in the sense that trains have totally different
characteristics in terms of the degrees of freedom regarding the tour times and end points.
Furthermore, in in.west, orders are not assigned to vehicles but to containers which, in turn,
are assigned to trucks and trains.
The general idea of using Evolutionary Algorithms and their hybrids for Vehicle Routing
Problems has proven to be very efficient [2212]. The quality of solutions produced by
evolutionary or genetic methods is often higher than that obtained by classic heuristics.
Potvin [2212] pointed out that Evolutionary Algorithms can also outperform widely used
metaheuristics like Tabu Search on classic problems. He also states that other approaches
like artificial neural networks have more or less been abandoned by now in the area of VRPs
due to their poor performance on off-the-shelf computer platforms.
In many of the publications listed in this section, it is indicated that metaheuristics work
best when a good share of domain knowledge is incorporated. This holds not only for Vehicle
Routing, but also in virtually every other application of Global Optimization [2244, 2892,
2915]. Nevertheless, such knowledge is generally used as an extension, as a method to tweak
generic operators and methods. In this work, we have placed problem-specific knowledge at
the center of the approach.

49.3.4 Evolutionary Approach

Evolutionary Algorithms in general are discussed in-depth in Chapter 28. Here we just focus
on the methods that we introduced to solve the transportation planning problem at hand.
We base our approach on an EA with new, intelligent operators and a complex solution
representation.

49.3.4.1 Search Space

When analyzing the problem structure outlined in Section 49.3.2.2, it becomes obvious
that standard encodings such as binary [1075] or integer strings, matrices, or real vectors
cannot be used in the context of this very general logistics planning task. Although it might
be possible to create a genotype-phenotype mapping capable of translating an integer string
into a tuple ξ representing a valid tour, trying to encode a set x of a variable number of such
tours in an integer string is not feasible. First, there are many substructures involved in a
tour which have variable length such as the sets of orders o and swap bodies b. Second, it
would be practically impossible to ensure the required physical soundness of the tours given
that the reproduction operations would randomly modify the integer strings.
In our work, we adhered to the premise that all candidate solutions must represent
correct solutions according to the specification given in Section 49.3.2.2 and none of the
search operations are allowed to violate this correctness. A candidate solution x ∈ X does
not necessarily contain a complete plan which manages to deliver all orders. Instead, partial
solutions (again as demanded in Section 49.3.2.2) are admitted, too.

[Figure: UML class diagram. A Phenotype (x) aggregates Tours (ξ); each Tour carries startLocationID (ls), endLocationID (ld), startTime (ť), endTime (t̂), vehicleID (f), swapBodyIDs[] (b, at most v̂(f) entries), and orderIDs[] (o). Orders (o) carry their own startLocationID (ls), endLocationID (ld), and the time windows [ťs, t̂s] and [ťd, t̂d]. Locations (l ∈ L), SwapBodies (b ∈ B), and Vehicles (f ∈ F) are referenced by ID.]

Figure 49.5: The structure of the phenotypes x.

In order to achieve such a behavior, it is clear that all reproduction operations in the
Evolutionary Algorithm must have access to the complete set x of tuples ξ. Only then, they
can check whether the modifications to be applied may impair the correctness of the plans.
Therefore, the phenotypes are not encoded at all, but instead, they are the plan objects in
their native representations as illustrated in Figure 49.5.
This figure holds the UML specification of the phenotypes in our planning system. The
exactly same data structures are also used by the in.west middleware and graphical user
interface. The location IDs (startLocationID, endLocationID) of the Orders and Tours are
indices into a database. They are also used to obtain distances and times of travel between
locations from a sparse distance matrix which can be updated asynchronously from different
information sources. The orderIDs, swapBodyIDs, and vehicleIDs are indices into a database
as well.

49.3.4.2 Search Operations

By using this explicit representation, the search operations have full access to all the infor-
mation in the freight plans. Standard crossover and mutation operators are, however, no
longer applicable. Instead, intelligent operators have to be introduced which respect the
correctness of the candidate solutions.
For the in.west planning system, three crossover and sixteen mutation operations have
been defined, each dealing with a specific constellation in the phenotypes and performing one
distinct type of modification. During the evolution, individuals to be mutated are processed
by a randomly picked operator. If the operator is not applicable because the individual does
not belong to the corresponding constellation, another operator is tried. This is repeated,
until either the individual is modified or all operators were tested. Two individuals to be
combined with crossover are processed by a randomly selected operator as well.
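The operator-selection loop just described can be sketched as follows; the two toy operators are hypothetical stand-ins for the sixteen real mutators, which work on complete freight plans:

```python
import random

def mutate(individual, operators, rng=random.Random(1)):
    """Try randomly picked mutation operators until one is applicable.
    Each operator returns a modified copy, or None if its precondition
    (the 'constellation' it handles) does not hold for this individual."""
    untried = list(operators)
    rng.shuffle(untried)
    for op in untried:
        result = op(individual)
        if result is not None:
            return result
    return individual  # no operator applicable: individual stays unchanged

# Two toy operators on a list-of-tours stand-in.
def drop_last_tour(plan):
    return plan[:-1] if len(plan) > 1 else None

def duplicate_first_tour(plan):
    return [plan[0]] + plan if plan else None

mutated = mutate(["t1", "t2"], [drop_last_tour, duplicate_first_tour])
```

Note how an empty plan passes through unchanged because neither operator's precondition holds, mirroring the "all operators were tested" exit of the loop above.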

[Figure: four schematic route maps over locations A, B, C, and D illustrating mutations. Fig. 49.6.a: Add an order. Fig. 49.6.b: Append an order. Fig. 49.6.c: Incorporate an order. Fig. 49.6.d: Create a freight exchange.]

Figure 49.6: Some mutation operators from the freight planning EA.

Obviously, we cannot give detailed specifications of all twenty genetic operations [2188]
(including the initial individual creation) in this chapter. Instead, we will outline the
mutation operators sketched in Figure 49.6 as examples.
The first operator (Fig. 49.6.a) is applicable if there is at least one order which would
not be delivered if the plan in the input phenotype x was carried out. This operator chooses
randomly from all available means of transportation. Available in this context means “not
involved in another tour for the time between the start and end times of the order”. The
freight transporters closer to the source of the order are picked with higher probability.
Then, a swap body is allocated in the same manner. This process leads to between one and
three new tours being added to the phenotype. If the transportation vehicle is a truck, a
fourth tour is added which allows it to travel back to its starting point. This step is optional
and is applied only if the system is configured to send all trucks back “home” after the end
of their routes, as it is the case in DHL.
Fig. 49.6.b illustrates one operator which tries to include an additional order o into an
already existing set of related tours. If the truck driving these tours has space for another
swap body, at least one free swap body b is available, and picking up o and b as well as
delivering o is possible without violating the time constraints of the other transportation
orders already involved in the set of tours, the order is included and the corresponding new
tours are added to the plan.
The mutator sketched in Fig. 49.6.c does the same if an additional order can be included
in already existing tours because of available capacities in swap bodies. Such spare capacities
occur from time to time since the containers are not left at the customers’ locations after
unloading the goods but transported back to the depots. For all operators which add new
orders, swap bodies, or tours to the candidate solutions, inverse operations which remove
these elements are provided, too.
One exceptional operator is the “truck-meets-truck” mechanism. Often, two trucks are
carrying out deliveries in opposite directions (B → D and D → B in Fig. 49.6.d). The
operator tries to find a location C which is close to both, B and D. If the time windows of
the orders allow it, the two involved trucks can meet at this halting point C and exchange
their freight. This way, the total distance that they have to drive can almost be halved from
4 ∗ BD to 2 ∗ BC + 2 ∗ CD where BC + CD ≈ BD.
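A quick check of this arithmetic with invented distances (in km):

```python
def exchange_saving(bd, bc, cd):
    """Total distance of two opposite round trips between B and D without an
    exchange (4*BD) versus with a meeting point C (2*BC + 2*CD)."""
    return 4 * bd, 2 * bc + 2 * cd

# C lies roughly halfway, so BC + CD is only slightly longer than BD.
before, after = exchange_saving(bd=100.0, bc=52.0, cd=51.0)
```

Here the driven distance drops from 400 km to 206 km, i.e., it is almost halved.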
The first recombination operator used in the in.west system copies all tours from the
first parent and then adds all tours from the second parent in a way that does not lead to a
violation of the solution’s correctness. In this process, tours which belong together such as
those created by the first mutator mentioned are kept together. A second crossover method
tries to find sets of tours in the first parent which intersect with similar sets in the second
parent and joins them into an offspring plan in the same way the truck-meets-truck mutator
combines tours.

49.3.4.3 Objective Functions

The freight transportation planning process run by the Evolutionary Algorithm is driven
by a set f of three objective functions (f = {f1 , f2 , f3 }). These functions, all subject to
minimization, are based on the requirements stated in Section 49.3.1 and are combined via
Pareto comparisons (see Section 3.3.5 on page 65 and [593, 599]) in the fitness assignment
processes.

49.3.4.3.1 f1 : Order Delivery

One of the most important aspects of freight planning is to deliver as many orders as possible.
Therefore, the first objective function f1 (x) returns the number of orders which will not be
delivered in a timely manner if the plan x was carried out. The optimum of f1 is zero.
Human operators need to hire external carriers for orders which cannot be delivered (due
to insufficient resources, for instance).

49.3.4.3.2 f2 : Kilometers Driven

By using a sparse distance matrix stored in memory, the second objective function
determines the total distance covered by all vehicles involved. Minimizing this distance will lead
to less fuel consumption and thus, lower costs and less CO2 production. The global optimum
of this function is not known a priori and may never be discovered by the optimization
process.

49.3.4.3.3 f3 : Full Utilization of the Capacities

The third objective function minimizes the spare capacities of the vehicles involved in tours.
In other words, it considers the total volume left empty in the swap bodies on the road and
the unused swap body slots of the trucks and trains. f2 does not consider whether trucks
are driving tours empty or loaded with empty containers. These aspects are handled by f3
which again has the optimum zero.
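The three criteria are combined by Pareto comparison rather than a weighted sum; a minimal dominance check for objective vectors (f1(x), f2(x), f3(x)), with invented objective values:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse
    in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Hypothetical objective vectors (f1: undelivered orders, f2: km, f3: spare capacity).
plan_a = (0, 18500.0, 12)
plan_b = (0, 19109.0, 15)
plan_c = (2, 18000.0, 10)
```

plan_a dominates plan_b (no worse anywhere, strictly better in f2 and f3), while plan_a and plan_c are mutually non-dominated and would both survive a Pareto-based fitness assignment.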
49.3.5 Experiments

Because of the special requirements of the in.west project and the many constraints im-
posed on the corresponding optimization problem, the experimental results cannot be di-
rectly compared with other works. As we have shown in our discussion of related work
in Section 49.3.3, none of the approaches in the Vehicle Routing literature are sufficiently
similar to this scenario.
Hence, it was especially important to evaluate our freight planning system rigorously. We
have therefore carried out a series of tests according to the full factorial design of experiments
paradigm [378, 3030]. These experiments (which we will discuss in Section 49.3.5.1) are based
on a single, real-world set of orders. The results of additional experiments performed with
different datasets are outlined in Section 49.3.5.2. All data used have been reconstructed
from the actual order database of the project partner DHL, one of the largest logistics
companies worldwide. This database is also the yardstick with which we have measured the
utility of our system.
The experiments were conducted using a simplified distance matrix for both the EA
and the original plans. Since the original plans did not involve trains, we also deactivated the
mutation operators which incorporate train tours into candidate solutions – otherwise
the results would have been incomparable. Legal aspects like statutory idle periods of the
truck drivers have not been considered in the reproduction operators either. However, only
plans not violating these constraints were considered in the experimental evaluation.

49.3.5.1 Full Factorial Tests

Evolutionary Algorithms have a wide variety of parameters, ranging from the choice of sub-
algorithms (like those computing a fitness value from the vectors of objective values for
each individual) to the mutation rate determining the fraction of the selected candidate
solutions which are to undergo mutation. The performance of an EA strongly depends on
the configuration of these parameters. Different optimization problems usually benefit from
different configurations, and a setting that finds optimal solutions in one application
may lead to premature convergence to a local optimum in other scenarios. Because of the
novelty of the presented approach for transportation planning, performing a large number
of experiments with different settings of the EA was necessary in order to find the optimal
configuration to be utilized in the in.west system in practice.
We, therefore, decided to conduct a full factorial experimental series, i. e., one where all
possible combinations of settings of a set of configuration parameters are tested. As basis
for this series, we used a test case consisting of 183 orders reconstructed from one day in
December 2007. The original freight plan xo for these orders contained 159 tours which
covered a total distance of d = f2 (xo ) = 19 109 km. The capacity of the vehicles involved
was filled to 65.5%. The parameters examined in these experiments are listed in Table 49.2.
Param. Setting and Meaning

ss: In every generation of the EA, new individuals are created by the reproduction
operations. The parent individuals in the population are then either discarded
(generational, ss = 0) or compete with their offspring (steady-state, ss = 1).

el: Elitist Evolutionary Algorithms keep an additional archive preserving the best
candidate solutions found (el = 1). Using elitism ensures that these candidate
solutions cannot be lost due to the randomness of the selection process. Turning
off this feature (el = 0) may allow the EA to escape local optima easier.

ps: Allowing the EA to work with populations consisting of many individuals increases
its chance of finding good solutions but also increases its runtime. Three different
population sizes were tested: ps ∈ {200, 500, 1000}.

fa: Either simple Pareto-Ranking [599] (fa = 0) or an extended assignment process
(fa = 1, called variety preserving in [2892]) with sharing was applied. Sharing
[742, 1250] decreases the fitness of individuals which are very similar to others
in the population in order to force the EA to explore many different areas in the
search space.

c: The simple convergence prevention (SCP) method proposed in Section 28.4.7.2
on page 304 was either used (c = 0.3) or not (c = 0). SCP is a clearing approach
[2167, 2391] applied in the objective space which discards candidate solutions
with equal objective values with probability c.

mr/cr: Different settings for the mutation rate mr ∈ {0.6, 0.8} and the crossover rate
cr ∈ {0.2, 0.4} were tested. These rates do not necessarily sum up to 1, since
individuals resulting from recombination may undergo mutation as well.

Table 49.2: The configurations used in the full-factorial experiments.
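The parameter domains above multiply out mechanically; the following Python sketch (parameter names taken from Table 49.2, the EA runner itself is not part of the sketch) enumerates all resulting configurations:

```python
from itertools import product

# Parameter domains from Table 49.2.
DOMAINS = {
    "ss": (0, 1),            # generational vs. steady-state
    "el": (0, 1),            # elitism off/on
    "ps": (200, 500, 1000),  # population size
    "fa": (0, 1),            # Pareto ranking vs. variety preserving
    "c":  (0.0, 0.3),        # SCP rejection probability
    "mr": (0.6, 0.8),        # mutation rate
    "cr": (0.2, 0.4),        # crossover rate
}

def configurations():
    """Yield every combination of the parameter settings as a dict."""
    names = list(DOMAINS)
    for values in product(*(DOMAINS[n] for n in names)):
        yield dict(zip(names, values))

configs = list(configurations())
assert len(configs) == 192   # 2 * 2 * 3 * 2 * 2 * 2 * 2 combinations
```

Each of these 192 configurations was then run repeatedly, as described in the text.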

These settings were varied in the experiments and each of the 192 possible configurations
was tested ten times. All runs utilized a tournament selection scheme with five contestants
and were granted 10 000 generations. The measurements collected are listed in Table 49.3.

Meas. Meaning
ar The number of runs which found plans that completely covered all orders.
at The median number of generations needed by these runs until such plans were
found.
gr The number of runs which managed to find such plans which additionally were
at least as good as the original freight plans.
gt The median number of generations needed by these runs in order to find such
plans.
et The median number of generations after which f2 did not improve by more than
1%, i. e., the point where the experiments could have been stopped without a
significant loss in the quality of the results.
eτ The median number of individual evaluations until this point.
d The median value of f2 , i. e., the median distance covered.

Table 49.3: The measurements taken during the experiments.


# mr cr cp el ps ss fa ar at gr gt et eτ d
1. 0.8 0.4 0.3 1 1000 1 1 10 341 10 609 3078 3 078 500 15 883 km
2. 0.6 0.2 0.3 0 1000 1 1 10 502 10 770 5746 5 746 500 15 908 km
3. 0.8 0.2 0.3 1 1000 1 1 10 360 10 626 4831 4 831 000 15 929 km
4. 0.6 0.4 0.3 0 1000 1 1 10 468 10 736 5934 5 934 000 15 970 km
5. 0.6 0.2 0.3 1 1000 1 1 10 429 10 713 6236 6 236 500 15 971 km
6. 0.8 0.2 0.3 0 1000 1 1 10 375 10 674 5466 5 466 000 16 003 km
7. 0.8 0.4 0.3 1 1000 1 0 10 370 10 610 5691 5 691 500 16 008 km
8. 0.8 0.2 0.3 0 1000 0 1 10 222 10 450 6186 6 186 500 16 018 km
9. 0.8 0.4 0 0 1000 0 1 10 220 10 463 4880 4 880 000 16 060 km
10. 0.8 0.2 0 1 1000 0 0 10 277 10 506 2862 2 862 500 16 071 km
11. 0.8 0.4 0.3 0 1000 1 0 10 412 10 734 5604 5 604 000 16 085 km
12. 0.8 0.2 0.3 1 1000 0 1 10 214 10 442 4770 4 770 500 16 093 km
13. 0.8 0.2 0.3 1 1000 1 0 10 468 10 673 4970 4 970 500 16 100 km
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
15. 0.8 0.2 0 0 200 1 0 10 1286 2 6756 6773 1 354 700 20 236 km
16. 0.6 0.2 0 0 500 1 0 10 1546 1 9279 9279 4 639 500 19 529 km
17. 0.8 0.4 0.3 0 200 0 0 10 993 0 ∅ ∅ ∅ 19 891 km
18. 0.8 0.4 0 0 200 0 0 10 721 0 ∅ ∅ ∅ 20 352 km
19. 0.6 0.2 0 0 200 1 0 10 6094 0 ∅ ∅ ∅ 23 709 km
20. 0.6 0.4 0 0 1000 1 0 0 ∅ 0 ∅ ∅ ∅ ∞
21. 0.8 0.4 0 0 1000 1 0 3 6191 0 ∅ ∅ ∅ ∞
22. 0.8 0.4 0 0 500 1 0 4 5598 0 ∅ ∅ ∅ ∞
23. 0.6 0.4 0 0 200 0 0 3 2847 0 ∅ ∅ ∅ ∞
24. 0.6 0.4 0 0 200 1 0 0 ∅ 0 ∅ ∅ ∅ ∞
25. 0.8 0.4 0 0 200 1 0 0 ∅ 0 ∅ ∅ ∅ ∞
26. 0.6 0.4 0 0 500 1 0 0 ∅ 0 ∅ ∅ ∅ ∞

Table 49.4: The best and the worst evaluation results in the full-factorial tests.

Table 49.4 contains the thirteen best and the twelve worst configurations, sorted accord-
ing to gr, d, and eτ . The best configuration managed to reduce the distance to be covered
by over 3000 km (17%) consistently. Even the configuration ranked 170 (not in Table 49.4)
saved almost 1100 km in median. In total, 172 out of the 192 test series managed to surpass
the original plans for the orders in the dataset in all of ten runs and only ten configurations
were unable to achieve this goal at all.
The experiments indicate that a combination of the highest tested population size
(ps = 1000), steady-state and elitist population treatment, SCP with rejection probability
c = 0.3, a sharing-based fitness assignment process, a mutation rate of 80%, and a crossover
rate of 40% is able to produce the best results. We additionally applied significance tests –
the sign test [2490] and Wilcoxon’s signed rank test [2490, 2939] – in order to check whether
there also are settings of single parameters which generally have positive influence. On
a significance level of α = 0.02, we considered a tendency only if both (two-tailed) tests
agreed. Applying the convergence prevention mechanism (SCP) [2892], larger population
sizes, variety preserving fitness assignment [2892], elitism, and higher mutation and lower
crossover rates have significantly positive influence in general.
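The sign test applied above can be sketched with stdlib means only. The toy version below (two-tailed, ties dropped; Wilcoxon's signed rank test, which additionally weights the magnitudes of the differences, is omitted) compares two paired samples of run results — the numbers are purely illustrative:

```python
from math import comb

def sign_test_p(a, b):
    """Two-tailed sign test: p-value under the null hypothesis that the
    paired samples a and b stem from distributions with equal medians."""
    wins = sum(1 for x, y in zip(a, b) if x < y)    # a better (shorter distance)
    losses = sum(1 for x, y in zip(a, b) if x > y)  # b better; ties are dropped
    n, k = wins + losses, min(wins, losses)
    # two-tailed: 2 * P(X <= k) for X ~ Binomial(n, 1/2)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Ten paired runs, e.g., with and without elitism (illustrative distances in km):
with_el = [15883, 15908, 15929, 15970, 15971, 16003, 16008, 16018, 16060, 16071]
without_el = [16085, 16093, 16100, 16236, 16529, 16891, 17352, 17709, 18236, 19891]
assert sign_test_p(with_el, without_el) < 0.02   # significant at alpha = 0.02
```

If all ten pairs favor one side, the two-tailed p-value is 2/2^10 ≈ 0.002, well below the α = 0.02 level used in the text.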
Interestingly, the steady-state configurations lost in the significance tests against the
generational ones, although the seven best-performing settings were steady-state. Here the
utility of full factorial tests becomes obvious: steady-state population handling performed
very well if (and only if) sharing and the SCP mechanism were applied, too. In the other
cases, it led to premature convergence.
This behavior shows the following: transportation planning is a multimodal optimization
problem with a probably rugged fitness landscape [2915] or with local optima which are many
search steps (applications of reproduction operators) apart. Hence, applying steady-state
EAs for Vehicle Routing Problems similar to the one described here can be beneficial, but
only if diversity-preserving fitness assignment or selection algorithms are used in conjunction.
Only then, the probability of premature convergence is kept low enough and different local
optima and distant areas of the search space are explored sufficiently.

49.3.5.2 Tests with Multiple Datasets

We ran experiments with many other order datasets for which the actual freight plans used
by the project partners were available. In all scenarios, our approach yielded an improvement
which was never below 1%, usually above 5%, and for some days even exceeding 15%.

Figure 49.7: Two examples for the freight plan evolution (f2, i. e., the total distance in km,
plotted over the generations together with the degree of order satisfaction and the original
plan as reference). Fig. 49.7.a: For 642 orders (14% better). Fig. 49.7.b: For 1016 orders
(3/10% better).
Figure 49.7 illustrates the best f2 -values (the total kilometers) of the individuals with the
most orders satisfied in the population for two typical example evolutions.
In both diagrams, the total distance first increases as the number of orders delivered by
the candidate solutions rises due to the pressure from f1 . At some point, plans which are
able to deliver all orders evolved and f1 is satisfied (minimized). Now, its corresponding
dimension of the objective space begins to collapse, the influence of f2 intensifies, and the
total distances of the plans decrease. Soon afterwards, the efficiency of the original plans is
surpassed. Finally, the populations of the EAs converge to a Pareto frontier and no further
improvements occur. In Fig. 49.7.a, this limit was 54 993 km, an improvement of more than
8800 km or 13.8% compared to the original distance of 63 812 km.
Each point in the graph of f2 in the diagrams represents one point in the Pareto frontier
of the corresponding generation. Fig. 49.7.b illustrates one additional graph for f2 : the best
plans which can be created when at most 1% of the orders are outsourced. Compared to the
transportation plan including assignments for all orders which had a length of 79 464 km,
these plans could reduce the distance to 74 436 km, i. e., another 7% of the overall distance
could be saved. Thus, in this case, an overall reduction of around 7575 km is achieved in
comparison to the original plan, which had a length of 82 013 km.

49.3.5.3 Time Consumption

One run of the algorithm (prototypically implemented in Java) for the dataset used in
the full factorial tests (Section 49.3.5.1) took around three hours. For sets with around
1000 orders it still easily fulfills the requirement of delivering results within 24 hours. Runs
with close to 2000 orders, however, take longer than one week (in our experiments, we used
larger population sizes for them). Here, it should be pointed out that these measurements
were taken on a single dual-core 2.6 GHz machine, which is only a fraction of the capacity
available in the dedicated data centers of the project partners. It is well known that EAs can
be efficiently parallelized and distributed over clusters [1791, 2897]. The final implementation
of the in.west system will incorporate distribution mechanisms and thus be able to deliver
results in time for all situations.

49.3.6 Holistic Approach to Logistics

Logistics involve many aspects and the planning of efficient routes is only one of them.
Customers and legislation [2768], for instance, require traceability of the production data
and thus, also of the goods on the road. Experts require more linkage between the traffic
carriers and efficient distribution of traffic to cargo rail and water transportation.

Figure 49.8: An overview of the in.west system: swap bodies equipped with sensor nodes
communicate their satellite-based location via a middleware to the transportation planner
and its graphical user interface.

Technologies like telematics are regarded as the enablers of smart logistics which optimize
the physical value chain. Only a holistic approach combining intelligent planning and such
technologies can solve the challenges [2248, 2607] faced by logistics companies. Therefore, the
focus of the in.west project was to combine software, telematics, and business optimization
approaches into one flexible and adaptive system sketched in Figure 49.8.
Data from telematics become input to the optimization system which suggests trans-
portation plans to the human operator who selects or modifies these suggestions. The final
plans then, in turn, are used to configure the telematic units. A combination of both allows
the customers to track their deliveries and the operator to react to unforeseen situations.
In such situations, traffic jams or accidents, for instance, the optimization component can
again be used to make ad-hoc suggestions for resolutions.

49.3.6.1 The Project in.west

The prototype introduced in this chapter was provided in the context of the BMWi-promoted
project in.west and will be evaluated in a field test in the third quarter of 2009. The analyses
take place in the area of freight transportation in the courier, express, and parcel services
business segment of DHL, the market leader in this business. Today, the transport volume
of this company constitutes a substantial portion of the traffic volume of the traffic carriers
road, ship, and rail. Hence, a significant decrease in the freight traffic caused by DHL might
lead to a noticeable reduction in the freight traffic volume on German roads.
The goal of this project is to achieve this decrease by utilizing information and com-
munication technologies on swap bodies, new approaches of planning, and novel control
processes. The main objective is to design a decision support tool to assist the operator
with suggestions for traffic reduction and a smart swap body telematic unit.
The requirements for the in.west software are manifold, since the needs of both the
operators and the customers are to be satisfied. From their perspective, for example, a
constant documentation of the transportation processes is necessary. The operators require
that this documentation start with the selection, movement, and employment of the swap
bodies. From the view of the customers, only tracking and tracing of the containers on the
road must be supported. The web-based user interfaces we provide furthermore allow a
spontaneous check of the load condition, the status of the container, and information about
the carrier.

49.3.6.2 Smart Information and Communication Technology

All the information required by customers and operators on the status of the freight first
has to be obtained by the software system. Therefore, in.west also features a hardware
development project with the goal of designing a telematic unit. A device called YellowBox
(illustrated in Figure 49.9) was developed which enables swap bodies to transmit real-time
geo-positioning data of the containers to a software system. The basic functions of the device
are location, communication, and identification. The data received from the YellowBox is
processed by the software for planning and controlling the swap body in the logistic network.

Figure 49.9: The YellowBox – a mobile sensor node.

The YellowBox consists of a main board, a location unit (GPS), a communication unit
(GSM/GPRS), and a processor unit. Digital input and output interfaces ensure the
scalability of the device, e.g., for load volume monitoring. Swap bodies do not offer reliable
power sources for technical equipment like telematic systems. Thus, the YellowBox has
been designed as a sensor node which runs on battery power [2189, 2912].
One of the most crucial criteria for the application of these sensor nodes in practice
was a long battery duration and thus, low power consumption. The YellowBox is therefore
turned on and off for certain time intervals. The software system automatically configures
the device with the route along which it will be transported (and which has been evolved
by the planner). In pre-defined time intervals, it can thus check whether or not it is "on the
right track". Only if location deviations above a certain threshold are detected will it notify
the middleware. Also, if more than one swap body is to be transported by the same vehicle,
only one of them needs to perform this notification. With these approaches, communication
– one of the functions with the highest energy consumption – is effectively minimized.

49.3.7 Conclusions

In this section, we presented the in.west freight planning component which utilizes an Evolu-
tionary Algorithm with intelligent reproduction operations for general transportation plan-
ning problems. The approach was tested rigorously on real-world data from the in.west
partners and achieved excellent results. It has been integrated as a constituting part of the
holistic in.west logistics software system.
We presented this work at the EvoTRANSLOG’09 workshop [1052] in Tübingen [2914].
One point of the very fruitful discussion there was the question why we did not utilize
heuristics to create some initial solutions for the EA. We intentionally left this for our
future work for two reasons: First, we fear that creating such initial solutions may lead
to a decrease of diversity in the population. In Section 49.3.5.1 we showed that diversity
is a key to finding good solutions to this class of problems. Second, as can be seen in the
diagrams provided in Figure 49.7, finding initial solutions in which all orders are assigned to
routes is not the time-consuming part of the EA – optimizing them to plans with a low
total distance is. Hence, incorporating measures for distribution and efficient parallelization
may be a more promising addition to our approach. If a cluster of, for instance, twenty
computers is available, we can assume that distribution according to client-server or island
model schemes [1108, 1838, 2897] will allow us to decrease the runtime to roughly one tenth
of the current value. Nevertheless, testing the utility of heuristics for creating the initial
population is both interesting and promising.
The runtime issues of our system for very large datasets could be resolved by
parallelization and distribution. Another interesting feature for future work would be to
allow for online updates of the distance matrix, which is used both to compute f2 and to
determine the time a truck needs to travel from one location to another. The system would
then be capable of (a) performing the planning for all orders of one day in advance,
and (b) updating smaller portions of the plans online if traffic jams occur. It should be noted
that even with such a system, the human operator cannot be replaced. There are always
constraints and pieces of information which cannot be employed in even the most advanced
automated optimization process. Hence, the solutions generated by our transportation
planner are suggestions rather than doctrines. They are displayed to the operator and she
may modify them according to her needs.
Chapter 50
Benchmarks

50.1 Introduction
In this chapter, we discuss some benchmark and toy problems which are used to demonstrate
the utility of Global Optimization algorithms. Such problems usually have no direct real-
world application but are well understood, widely researched, and can be used
1. to measure the speed and the ability of optimization algorithms to find good solutions,
as done for example by Borgulya [360], Liu et al. [1747], Luke and Panait [1787], and
Yang et al. [3020],
2. as benchmark for comparing different optimization approaches, as done by Purshouse
and Fleming [2229], Zitzler et al. [3098], Yang et al. [3020], for instance,
3. to derive theoretical results, as done for example by Jansen and Wegener [1433],
4. as basis to verify theories, as used for instance by Burke et al. [455] and Langdon and
Poli [1672],
5. as playground to test new ideas, research, and developments,
6. as easy-to-understand examples to discuss features and problems of optimization (as
done here in Example E3.5 on page 57, for instance), and
7. for demonstration purposes, since they normally are interesting, funny, and can be
visualized in a nice manner.

50.2 Bit-String based Problem Spaces

50.2.1 Kauffman’s NK Fitness Landscapes

The ideas of fitness landscapes1 and epistasis2 came originally from evolutionary biology
and later were adopted by Evolutionary Computation theorists. It is thus not surprising
that biologists also contributed much to the research of both. In the late 1980s, Kauffman
1 Fitness landscapes have been introduced in Section 5.1 on page 93.
2 Epistasis is discussed in Chapter 17 on page 177.
[1499] defined the NK fitness landscape [1499, 1501, 1502], a family of objective functions
with tunable epistasis, in an effort to investigate the links between epistasis and ruggedness.
The problem space and also the search space of this problem are bit strings of the length
N, i. e., G = X = B^N. Only one single objective function is used and referred to as fitness
function F_{N,K}: B^N ↦ R^+. Each gene x_i contributes one value f_i: B^{K+1} ↦ [0, 1] ⊂ R^+
to the fitness function, which is defined as the average of all of these N contributions. The
fitness f_i of a gene x_i is determined by its allele and the alleles at K other loci x_{i1}, x_{i2}, .., x_{iK}
with i_{1..K} ∈ [0, N − 1] \ {i} ⊂ N_0, called its neighbors.
F_{N,K}(x) = (1/N) · Σ_{i=0}^{N−1} f_i(x_i, x_{i1}, x_{i2}, .., x_{iK})    (50.1)

Whenever the value of a gene changes, all the fitness values of the genes to whose neighbor set
it belongs will change too – to values uncorrelated to their previous state. While N describes
the basic problem complexity, the intensity of this epistatic effect can be controlled with the
parameter K ∈ [0..N − 1]: If K = 0, there is no epistasis at all, but for K = N − 1 the epistasis
is maximized and the fitness contribution of each gene depends on all other genes. Two
different models are defined for choosing the K neighbors: adjacent neighbors, where the K
nearest other genes influence the fitness of a gene, or random neighbors, where K other genes
are randomly chosen.
The single functions f_i can be implemented by a table of length 2^{K+1} which is indexed
by the (binary encoded) number represented by the gene x_i and its neighbors. These tables
contain a fitness value for each possible value of a gene and its neighbors. They can be filled
by sampling a uniform distribution in [0, 1) (or any other random distribution).
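A minimal Python sketch of this table-based scheme (the random-neighbors variant; names follow the text, the implementation details are our own) could look like this:

```python
import random

class NKLandscape:
    """NK landscape with random neighbors: every gene i is assigned K
    neighbor loci and a table of 2**(K+1) random contributions in [0, 1)."""

    def __init__(self, N, K, seed=None):
        rng = random.Random(seed)
        self.N, self.K = N, K
        # K randomly chosen neighbor loci per gene, excluding the gene itself
        self.neighbors = [rng.sample([j for j in range(N) if j != i], K)
                          for i in range(N)]
        # one contribution table of length 2**(K+1) per gene
        self.tables = [[rng.random() for _ in range(2 ** (K + 1))]
                       for _ in range(N)]

    def evaluate(self, x):
        """F_{N,K}(x): the average of the N gene contributions f_i."""
        total = 0.0
        for i in range(self.N):
            idx = x[i]                    # index the table with the bits of
            for j in self.neighbors[i]:   # gene i and of its K neighbors
                idx = (idx << 1) | x[j]
            total += self.tables[i][idx]
        return total / self.N

land = NKLandscape(N=16, K=2, seed=42)
x = [0, 1] * 8
f = land.evaluate(x)
assert 0.0 <= f < 1.0   # an average of values drawn from [0, 1)
```

For K = 0 every gene contributes independently, so flipping one bit changes exactly one of the N summands; for K = N − 1 every flip re-indexes all N tables.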
We may also consider the fi to be single objective functions that are combined to a
fitness value FN,K by a weighted sum approach, as discussed in Section 3.3.3. Then, the
nature of NK problems will probably lead to another well known aspect of multi-objective
optimization: conflicting criteria. An improvement in one objective may very well lead to
degeneration in another one.
The properties of the NK landscapes have intensely been studied in the past and the
most significant results from Kauffman [1500], Weinberger [2882], and Fontana et al. [959]
will be discussed here. We therefore borrow from the summaries provided by Altenberg [79]
and Platel et al. [2186]. Further information can be found in [572, 573, 1016, 2515, 2982]. An
analysis of the behavior of Estimation of Distribution Algorithms and Genetic Algorithms
in NK landscapes has been provided by Pelikan [2146].

50.2.1.1 K=0

For K = 0, the fitness function is not epistatic. Hence, all genes can be optimized separately
and we have the classical additive multi-locus model.

1. There is a single optimum x⋆ which is globally attractive, i. e., which can and will be
found by any (reasonable) optimization process regardless of the initial configuration.

2. For each individual x ≠ x⋆, there exists a fitter neighbor.

3. An adaptive walk3 from any point in the search space will proceed by reducing the
Hamming distance to the global optimum by 1 in each step (if each mutation only
affects one single gene). The number of better neighbors equals the Hamming distance
to the global optimum. Hence, the estimated number of steps of such a walk is N/2.

4. The fitness of direct neighbors is highly correlated since they share N − 1 components.

3 See Section 8.2.1 on page 115 for a discussion of adaptive walks.
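These properties are easy to verify empirically. The sketch below (Python; the K = 0 landscape is built directly from N independent per-gene tables, a simplification of the class structure used in the text) performs such an adaptive walk by greedy single-bit flips and always ends in the unique global optimum:

```python
import random

def adaptive_walk(x, evaluate):
    """Greedy adaptive walk: flip single bits while a fitter neighbor exists."""
    steps = 0
    while True:
        best, best_f = None, evaluate(x)
        for i in range(len(x)):   # inspect all Hamming-distance-1 neighbors
            x[i] ^= 1
            f = evaluate(x)
            x[i] ^= 1
            if f > best_f:
                best, best_f = i, f
        if best is None:
            return x, steps
        x[best] ^= 1              # move to the fittest neighbor
        steps += 1

# A K = 0 landscape: gene i contributes t[i][x[i]], independently of all others.
rng = random.Random(7)
N = 10
t = [[rng.random(), rng.random()] for _ in range(N)]
evaluate = lambda x: sum(t[i][x[i]] for i in range(N)) / N
optimum = [0 if t[i][0] > t[i][1] else 1 for i in range(N)]

x, steps = adaptive_walk([0] * N, evaluate)
assert x == optimum            # the single optimum is globally attractive
assert steps == sum(optimum)   # one step per initially wrong bit
```

Since each step fixes exactly one wrong bit, a walk from a uniformly random start takes N/2 steps in expectation, matching property 3 above.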


50.2.1.2 K =N −1

For K = N − 1, the fitness function equals a random assignment of fitness to each point of
the search space.
1. The probability that a genotype is a local optimum is 1/(N + 1).

2. The expected total number of local optima is thus 2^N/(N + 1).

3. The average distance between local optima is approximately 2 ln (N − 1).

4. The expected length of adaptive walks is approximately ln (N − 1).

5. The expected number of mutants to be tested in an adaptive walk before reaching a
local optimum is Σ_{i=0}^{log_2(N−1)−1} 2^i.

6. With increasing N, the expected fitness of local optima reached by an adaptive walk
from a random initial configuration decreases towards the mean fitness F_{N,K} = 1/2 of the
search space. This is called the complexity catastrophe [1500].

For K = N − 1, the work of Flyvbjerg and Lautrup [935] is of further interest.

50.2.1.3 Intermediate K

1. For small K, the best local optima share many common alleles. As K increases, this
correlation diminishes. This degeneration proceeds faster for the random neighbors
method than for the nearest neighbors approach.

2. For larger K, the fitness of the local optima approaches a normal distribution with mean
m and variance s approximately

   m = µ + σ·√(2 ln (K + 1)/(K + 1))    (50.2)

   s = (K + 1)σ²/(N (K + 1 + 2(K + 2) ln (K + 1)))    (50.3)

where µ is the expected value of the f_i and σ² is their variance.

3. The mean distance between local optima, roughly twice the length of an adaptive walk,
is approximately N·log_2(K + 1)/(2(K + 1)).

4. The autocorrelation function4 ρ(k, F_{N,K}) and the correlation length τ are:

   ρ(k, F_{N,K}) = (1 − (K + 1)/N)^k    (50.4)

   τ = −1/ln (1 − (K + 1)/N)    (50.5)

4 See Definition D14.2 on page 163 for more information on autocorrelation.


50.2.1.4 Computational Complexity

Altenberg [79] nicely summarizes the four most important theorems about the computational
complexity of optimization of NK fitness landscapes. These theorems have been proven using
different algorithms introduced by Weinberger [2883] and Thompson and Wright [2703].

1. The NK optimization problem with adjacent neighbors is solvable in O(2^K · N) steps
and thus in P [2883].

2. The NK optimization problem with random neighbors is NP-complete for K ≥ 2
[2703, 2883].

3. The NK optimization problem with random neighbors and K = 1 is solvable in
polynomial time [2703].

50.2.1.5 Adding Neutrality – NKp, NKq, and Technological Landscapes

As we have discussed in Section 16.1, natural genomes exhibit a certain degree of neutral-
ity. Therefore, researchers have proposed extensions for the NK landscape which introduce
neutrality, too [1031, 1032]. Two of them, the NKp and NKq landscapes, achieve this by
altering the contributions f_i of the single genes to the total fitness. In the following, assume
that there are N tables, each with 2^{K+1} entries representing these contributions.
The NKp landscape devised by Barnett [219] achieves neutrality by setting a certain
number of entries in each table to zero. Hence, the corresponding allele combinations do
not contribute to the overall fitness of an individual. If a mutation leads to a transition
from one such zero configuration to another one, it is effectively neutral. The parameter p
denotes the probability that a certain allele combination does not contribute to the fitness.
As proven by Reidys and Stadler [2287], the ruggedness of the NKp landscape does not vary
for different values of p. Barnett [218] proved that the degree of neutrality in this landscape
depends on p.
Newman and Engelhardt [2028] follow a similar approach with their NKq model. Here,
the fitness contributions f_i are integers drawn from the range [0, q) and the total fitness of a
candidate solution is normalized by multiplying it with 1/(q − 1). A mutation is neutral when
the new allelic combination resulting from it leads to the same contribution as the old
one. In NKq landscapes, the neutrality decreases with rising values of q. In [1032], you can
find a thorough discussion of the NK, the NKp, and the NKq fitness landscapes.
With their technological landscapes, Lobo et al. [1754] approach the same idea from
the other side: they discretize the continuous total fitness function F_{N,K}. The parameter M
of their technological landscapes corresponds to a number of bins [0, 1/M), [1/M, 2/M), . . . ,
into which the fitness values are sorted.
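The binning itself is a one-liner. The sketch below (our own illustration, assuming the total fitness is already normalized to [0, 1)) shows how a coarser resolution M renders nearby fitness values neutral:

```python
def discretize(F, M):
    """Map a continuous fitness F in [0, 1) to the bin [k/M, (k+1)/M)."""
    return int(F * M) / M

# Two nearby fitness values stay distinct under a fine resolution ...
assert discretize(0.42, 100) != discretize(0.43, 100)
# ... but become neutral (identical) once the landscape is coarsely binned.
assert discretize(0.42, 5) == discretize(0.43, 5)
```

Smaller values of M collapse more raw fitness values into the same bin, which is exactly why neutrality grows as M shrinks.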

50.2.2 The p-Spin Model

Motivated by the wish to research the models for the origin of biological information
by Anderson [94, 2317] and Tsallis and Ferreira [2725], Amitrano et al. [86] developed
the p-spin model. This model is an alternative to the NK fitness landscape for tunable
ruggedness [2884]. Other than the previous models, it includes a complete definition of all
genetic operations, which will be discussed in this section.
The p-spin model works with a fixed population size ps of individuals of an also fixed
length N. There is no distinction between genotypes and phenotypes, in other words, G = X.
Each gene of an individual x is a binary variable which can take on the values −1 and 1.

G = {−1, 1}^N,  x_i[j] ∈ {−1, 1} ∀ i ∈ [1..ps], j ∈ [0..N − 1]    (50.6)
On the 2^N possible genotypic configurations, a space with the topology of an N-dimensional
hypercube is defined where neighboring individuals differ in exactly one element. On this
genome, the Hamming distance "dist_ham" can be defined as

dist_ham(x1, x2) = (1/2) · Σ_{i=0}^{N−1} (1 − x1[i]·x2[i])    (50.7)

Two configurations are said to be ν mutations away from each other if they have the Ham-
ming distance ν. Mutation in this model is applied to a fraction of the N ∗ ps genes in the
population. These genes are chosen randomly and their state is changed, i. e., x[i] → −x[i].
The objective function f_K (which is called fitness function in this context) is subject to
maximization, i. e., individuals with a higher value of f_K are less likely to be removed from
the population. For every subset z of exactly K genes of a genotype, one contribution A(z)
is added to the fitness:

f_K(x) = Σ_{∀z ∈ P([0..N−1]) ∧ |z|=K} A(x[z[0]], x[z[1]], . . . , x[z[K − 1]])    (50.8)

A(z) = a_z · z[0] · z[1] · · · z[K − 1] is the product of an evenly distributed random number a_z
and the elements of z. For K = 2, f_2 can be written as f_2(x) = Σ_{i=0}^{N−1} Σ_{j=0}^{i−1} a_{ij} x[i]x[j], which
corresponds to the spin-glass [311, 1879] function first mentioned by Anderson [94] in this
context. With rising values of K, this fitness landscape becomes more rugged. Its correlation
length τ is approximately N/2K, as discussed thoroughly by Weinberger and Stadler [2884].
For selection, Amitrano et al. [86] suggest to use the measure P_D(x) defined by Rokhsar
et al. [2317] as follows:

P_D(x) = 1/(1 + e^{β(f_K(x) − H_0)})    (50.9)

where the coefficient β is a sharpness parameter and H_0 is a threshold value. For β → ∞,
all individuals x with f_K(x) < H_0 will die and for β = 0, the death probability is always 1/2.
The individuals which have died are then replaced with copies of the survivors.
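Equation 50.9 and the Hamming distance of Equation 50.7 translate directly into code; a small Python sketch (parameter names as in the text):

```python
from math import exp

def dist_ham(x1, x2):
    """Hamming distance on {-1, 1} strings (Equation 50.7)."""
    return sum(1 - a * b for a, b in zip(x1, x2)) // 2

def death_probability(f, H0, beta):
    """P_D(x) of Equation 50.9 for an individual with fitness f."""
    return 1.0 / (1.0 + exp(beta * (f - H0)))

assert dist_ham([1, -1, 1], [1, 1, -1]) == 2
assert death_probability(0.7, 0.5, 0.0) == 0.5       # beta = 0: always 1/2
assert death_probability(0.3, 0.5, 1000.0) > 0.999   # f < H0, sharp beta: dies
assert death_probability(0.7, 0.5, 1000.0) < 0.001   # f > H0: survives
```

The assertions mirror the limiting cases named in the text: for β → ∞ every individual below the threshold H_0 dies, while β = 0 makes survival a fair coin flip.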

50.2.3 The ND Family of Fitness Landscapes


The ND family of fitness landscapes has been developed by Beaudoin et al. [242] in order
to provide a model problem with tunable neutrality.
The degree of neutrality ν is defined as the number (or, better, the fraction of) neutral
neighbors (i. e., those with same fitness) of a candidate solution, as specified in Equation 16.1
on page 171. The populations of optimization processes residing on a neutral network
(see Section 16.4 on page 173) tend to converge into the direction of the individual which
has the highest degree of neutrality on it. Therefore, Beaudoin et al. [242] create a landscape
with a predefined neutral degree distribution.
The search space is again the set of all binary strings of the length N, G = X = B^N.
Thus, a genotype has at least 0 and at most N neighbors at Hamming distance 1 that
have the same fitness. The array D has the length N + 1 and the element D[i] represents
the fraction of genotypes in the population that have i neutral neighbors.
Beaudoin et al. [242] provide an algorithm that divides the search space into neutral
networks according to the values in D. Since this approach cannot exactly realize the
distribution defined by D, the degrees of neutrality of the single individuals are subsequently
refined with a Simulated Annealing algorithm. The objective (fitness) function is created in
form of a complete table mapping X ↦ R. All members of a neutral network then receive
the same, random fitness.
If it is ensured that all members in a neutral network always have the same fitness,
its actual value can be modified without changing the topology of the network. Tunable
deceptiveness is achieved by setting the fitness values according to the Trap Functions [19,
1467].
50.2.3.1 Trap Functions
Trap functions f_{b,r,x⋆}: B^N ↦ R are subject to maximization based on the Hamming distance
to a pre-defined global optimum x⋆ . They build a second, local optimum in form of a hill
with a gradient pointing away from the global optimum. This trap is parameterized with
two values, b and r, where b corresponds to the width of the attractive basins and r to their
relative importance.
f_{b,r,x⋆}(x) = { 1 − dist_ham(x, x⋆)/(N·b)                  if (1/N)·dist_ham(x, x⋆) < b
             { r·((1/N)·dist_ham(x, x⋆) − b)/(1 − b)     otherwise        (50.10)

Equation 50.11 shows a similar “Trap” function defined by Ackley [19] where u(x) is the
number of ones in the bit string x of length n and z = ⌊3n/4⌋ [1467]. The objective function
f(x) is subject to maximization is sketched in Figure 50.1.

f(x) = { (8n/z) (z − u(x))            if u(x) ≤ z
       { (10n/(n − z)) (u(x) − z)     otherwise                    (50.11)

Figure 50.1: Ackley’s “Trap” function [19, 1467], plotted over u(x): the global optimum has only a small basin of attraction, whereas the deceptive local optimum has a large one.
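Equation 50.11 is equally straightforward to implement; the sketch below (the function name is ours) again takes a sequence of 0/1 values:

```python
def ackley_trap(x):
    # Equation 50.11: u(x) is the number of ones, z = floor(3n/4);
    # the global optimum is the all-ones string, the deceptive local
    # optimum the all-zeros string
    n = len(x)
    z = (3 * n) // 4
    u = sum(x)
    if u <= z:
        return (8 * n / z) * (z - u)
    return (10 * n / (n - z)) * (u - z)
```

For n = 8 (z = 6), the all-ones string scores (10·8/2)·2 = 80 while the all-zeros string only reaches (8·8/6)·6 = 64.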

50.2.4 The Royal Road


The Royal Road functions developed by Mitchell et al. [1913] and presented first at the
Fifth International Conference on Genetic Algorithms in July 1993 are a set of special fitness
landscapes for Genetic Algorithms [970, 1464, 1913, 2233, 2780]. Their problem space X and
search space G are fixed-length bit strings. The Royal Road functions are closely related
to the Schema Theorem5 and the Building Block Hypothesis6 and were used to study the
way in which highly fit schemas are discovered. They therefore define a set of schemas
S = {s1 , s2 , . . . , sn } and an objective function (here referred to as fitness function), subject
to maximization, as

f(x) = Σ_{∀s∈S} c(s) σ(s, x)                    (50.12)
where x ∈ X is a bit string, c(s) is a value assigned to the schema s and σ(s, x) is defined as

σ(s, x) = { 1 if x is an instance of s
          { 0 otherwise                    (50.13)
In the original version, c(s) is the order of the schema s, i. e., c(s) ≡ order(s), and S is
specified as follows (where * stands for the don’t care symbol as usual).
5 See Section 29.5 on page 341 for more details.
6 The Building Block Hypothesis is elaborated on in Section 29.5.5 on page 346
50.2. BIT-STRING BASED PROBLEM SPACES 559
1 s1 = 11111111********************************************************; c(s1 ) = 8
2 s2 = ********11111111************************************************; c(s2 ) = 8
3 s3 = ****************11111111****************************************; c(s3 ) = 8
4 s4 = ************************11111111********************************; c(s4 ) = 8
5 s5 = ********************************11111111************************; c(s5 ) = 8
6 s6 = ****************************************11111111****************; c(s6 ) = 8
7 s7 = ************************************************11111111********; c(s7 ) = 8
8 s8 = ********************************************************11111111; c(s8 ) = 8
9 s9 = 1111111111111111************************************************; c(s9 ) = 16
10 s10 = ****************1111111111111111********************************; c(s10 ) = 16
11 s11 = ********************************1111111111111111****************; c(s11 ) = 16
12 s12 = ************************************************1111111111111111; c(s12 ) = 16
13 s13 = 11111111111111111111111111111111********************************; c(s13 ) = 32
14 s14 = ********************************11111111111111111111111111111111; c(s14 ) = 32
15 s15 = 1111111111111111111111111111111111111111111111111111111111111111; c(s15 ) = 64
Listing 50.1: An example Royal Road function.

By the way, from this example, we can easily see that a fraction of all mutation and crossover
operations applied to most of the candidate solutions will fall into the don’t care areas. Such
modifications will not yield any fitness change and therefore are neutral.
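Equation 50.12 with c(s) ≡ order(s) is easy to implement; the sketch below (identifiers are ours) uses a hypothetical mini-instance with schemas of length 8 instead of the 64-bit schemas of Listing 50.1:

```python
def order(schema):
    # order of a schema = number of fixed (non-'*') positions
    return sum(1 for ch in schema if ch != '*')

def sigma(schema, x):
    # Equation 50.13: 1 if the bit string x is an instance of the schema
    return int(all(s == '*' or s == ch for s, ch in zip(schema, x)))

def royal_road(x, schemas):
    # Equation 50.12 with c(s) = order(s)
    return sum(order(s) * sigma(s, x) for s in schemas)

# a small example instance with n = 8:
S = ["1111****", "****1111", "11111111"]
royal_road("11110000", S)  # -> 4
royal_road("11111111", S)  # -> 16
```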
The Royal Road functions provide certain, predefined stepping stones (i. e., building
blocks) which (theoretically) can be combined by the Genetic Algorithm to successively
create schemas of higher fitness and order. Mitchell et al. [1913] performed several tests
with their Royal Road functions. These tests revealed or confirmed that

1. Crossover is a useful reproduction operation in this scenario. Genetic Algorithms


which apply this operation clearly outperform Hill Climbing approaches solely based
on mutation.

2. In the spirit of the Building Block Hypothesis, one would expect that the intermediate
   steps (for instance order 32 and 16) of the Royal Road functions would help the Genetic
   Algorithm to reach the optimum. The experiments of Mitchell et al. [1913] showed the
   exact opposite: leaving them out speeds up the evolution significantly. The reason is
   that the fitness difference between the intermediate steps and the low-order schemas is
   high enough that the first discovered instance of an intermediate step leads the GA to
   converge to it and to wipe out the low-order schemas. The other parts of this intermediate
   solution play no role and may allow many zeros to hitchhike along.

Especially this last point gives us another insight on how we should construct genomes: the
fitness of combinations of good low-order schemas should not be too high, so that other good
low-order schemas do not go extinct when the combinations emerge. Otherwise, the phenomenon of domino
convergence researched by Rudnick [2347] and outlined in Section 13.2.3 and Section 50.2.5.2
may occur.

50.2.4.1 Variable-Length Representation

The original Royal Road problems can be defined for binary string genomes of any given
length n, as long as n is fixed. A Royal Road benchmark for variable-length genomes has
been defined by Platel et al. [2185].
The problem space XΣ of the VLR (variable-length representation) Royal Road problem
is based on an alphabet Σ with N = |Σ| letters. The fitness of an individual x ∈ XΣ is
determined by whether or not consecutive building blocks of the length b of the letters l ∈ Σ
are present. This presence can be defined as

Bb (x, l) = { 1 if ∃i : (0 ≤ i ≤ len(x) − b) ∧ (x[i + j] = l ∀j : 0 ≤ j < b)
            { 0 otherwise                                                        (50.14)

where 1. b ≥ 1 is the length of the building blocks,


2. Σ is the alphabet with N = |Σ| letters,
3. l is a letter in Σ,
4. x ∈ XΣ is a candidate solution, and
5. x[k] is the k th locus of x.
Bb (x, l) is 1 if a building block, an uninterrupted sequence of the letter l, of at least length
b, is present in x. Of course, if len(x) < b this cannot be the case and Bb (x, l) will be zero.
We can now define the functional objective function fΣb : XΣ 7→ [0, 1] which is subject
to maximization as
fΣb (x) = (1/N) Σ_{i=1}^{N} Bb (x, Σ[i])                    (50.15)
An optimal individual x⋆ solving the VLR Royal Road problem is thus a string that includes
building blocks of length b for all letters l ∈ Σ. Notice that the position of these blocks plays
no role. The set Xb⋆ of all such optima with fΣb (x⋆ ) = 1 is then

Xb⋆ ≡ {x⋆ ∈ XΣ : Bb (x⋆ , l) = 1 ∀l ∈ Σ} (50.16)

Such an optimum x⋆ for b = 3 and Σ = {A, T, G, C} is

x⋆ = AAAGTGGGTAATTTTCCCTCCC (50.17)

The relevant building blocks of x⋆ are written in bold face. As can easily be seen, their
location plays no role, only their presence is important. Furthermore, multiple occurrences of
building blocks (like the second CCC) do not contribute to the fitness. The fitness landscape
has been designed in a way ensuring that fitness degeneration by crossover can only occur
if the crossover points are located inside building blocks and not by block translocation or
concatenation. In other words, there is no inter-block epistasis.
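Equations 50.14 and 50.15 can be sketched as follows (function names are ours); the example reproduces the optimum of Equation 50.17:

```python
def B(x, letter, b):
    # Equation 50.14: 1 if x contains an uninterrupted run of at least
    # b copies of `letter`, 0 otherwise (always 0 when len(x) < b)
    return int(any(all(c == letter for c in x[i:i + b])
                   for i in range(len(x) - b + 1)))

def f_vlr(x, alphabet, b):
    # Equation 50.15: the fraction of letters of the alphabet for which
    # a building block of length b is present in x
    return sum(B(x, l, b) for l in alphabet) / len(alphabet)

f_vlr("AAAGTGGGTAATTTTCCCTCCC", "ATGC", 3)  # -> 1.0, the optimum of Eq. 50.17
```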

50.2.4.2 Epistatic Road

Platel et al. [2186] combined their previous work on the VLR Royal Road with Kauffman’s
NK landscapes and introduced the Epistatic Road. The original NK landscape works on
binary representation of the fixed length N . To each locus i in the representation, one
fitness function fi is assigned denoting its contribution to the overall fitness. fi however is
not exclusively computed using the allele at the ith locus but also depends on the alleles of
K other loci, its neighbors.
The VLR Royal Road uses a genome based on the alphabet Σ with N = len(Σ) letters.
It defines the function Bb (x, l) which returns 1 if a building block of length b containing only
the character l is present in x and 0 otherwise. Because of the fixed size of the alphabet
Σ, there exist exactly N such functions. Hence, the variable-length representation can be
translated to a fixed-length, binary one by simply concatenating them:

Bb (x, Σ[0]) Bb (x, Σ[1]) . . . Bb (x, Σ[N − 1]) (50.18)

Now we can define a NK landscape for the Epistatic Road by substituting the Bb (x, l)
into Equation 50.1 on page 554:
FN,K,b (x) = (1/N) Σ_{i=0}^{N−1} fi (Bb (x, Σ[i]), Bb (x, Σ[i1 ]), . . . , Bb (x, Σ[iK ]))                    (50.19)

The only thing left is to ensure that the end of the road, i. e., the presence of all N building
blocks, also is the optimum of FN,K,b . This is done by exhaustively searching the space BN
and defining the fi in a way that Bb (x, l) = 1 ∀ l ∈ Σ ⇒ FN,K,b (x) = 1.
50.2.4.3 Royal Trees

An analogue of the Royal Road for Genetic Programming has been specified by Punch et al.
[2227]. This Royal Tree problem specifies a series of functions A, B, C, . . . with increasing
arity, i. e., A has one argument, B has two arguments, C has three, and so on. Additionally,
a set of terminal nodes x, y, z is defined.

Figure 50.2: The perfect Royal Trees (Fig. 50.2.a: the perfect A-level tree, Fig. 50.2.b: the perfect B-level tree, Fig. 50.2.c: the perfect C-level tree).

For the first three levels, the perfect trees are shown in Figure 50.2. An optimal A-level
tree consists of an A node with an x leaf attached to it. The perfect level-B tree has a B as
root with two perfect level-A trees as children. A node labeled with C having three children
which all are optimal B-level trees is the optimum at C-level, and so on.
The objective function, subject to maximization, is computed recursively. The raw fitness
of a node is the weighted sum of the fitness of its children. If the child is a perfect tree at the
appropriate level, a perfect C tree beneath a D-node, for instance, its fitness is multiplied
with the constant FullBonus, which normally has the value 2. If the child is not a perfect
tree, but has the correct root, the weight is PartialBonus (usually 1). If it is otherwise
incorrect, its fitness is multiplied with Penalty, which is 1/3 per default. If the whole tree is
a perfect tree, its raw fitness is finally multiplied with CompleteBonus, which normally is
also 2. The value of an x leaf is 1.
From Punch et al. [2227], we can furthermore borrow three examples for this fitness
assignment and outline them in Figure 50.3. A tree which represents a perfect A level has
the score CompleteBonus ∗ FullBonus ∗ 1 = 2 ∗ 2 ∗ 1 = 4. A complete and perfect tree at
level B receives CompleteBonus ∗ (FullBonus ∗ 4 + FullBonus ∗ 4) = 2 ∗ (2 ∗ 4 + 2 ∗ 4) = 32. At
level C, this makes CompleteBonus ∗ (FullBonus ∗ 32 + FullBonus ∗ 32 + FullBonus ∗ 32) =
2 ∗ (2 ∗ 32 + 2 ∗ 32 + 2 ∗ 32) = 384.

Figure 50.3: Example fitness evaluations of Royal Trees: a perfect C-level tree (Fig. 50.3.a, score 384) and two imperfect C-level trees (Fig. 50.3.b and Fig. 50.3.c) whose incorrect subtrees enter the weighted sum only with PartialBonus or Penalty and which therefore score considerably lower.
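The recursive scoring just described can be sketched as follows. The nested-tuple tree encoding and all identifiers are our own choice; the child's complete fitness (including its CompleteBonus) enters the weighted sum, which reproduces the example scores 4, 32, and 384 from above:

```python
FULL, PARTIAL, PENALTY, COMPLETE = 2.0, 1.0, 1.0 / 3.0, 2.0
ARITY = {'A': 1, 'B': 2, 'C': 3}

def expected_child(label):
    # an A node expects x leaves, a B node A children, a C node B children
    return 'x' if label == 'A' else chr(ord(label) - 1)

def perfect(tree):
    # a perfect A-tree is (A x), a perfect B-tree is a B over two perfect
    # A-trees, and so on
    label, children = tree[0], tree[1:]
    if label == 'x':
        return False
    want = expected_child(label)
    return (len(children) == ARITY[label] and
            all(c[0] == want and (want == 'x' or perfect(c))
                for c in children))

def fitness(tree):
    # weighted sum of the children's fitness values, times CompleteBonus
    # if the whole (sub)tree is perfect; an x leaf is worth 1
    label, children = tree[0], tree[1:]
    if label == 'x':
        return 1.0
    want = expected_child(label)
    total = 0.0
    for c in children:
        if c[0] == want and (want == 'x' or perfect(c)):
            total += FULL * fitness(c)      # perfect child at the right level
        elif c[0] == want:
            total += PARTIAL * fitness(c)   # correct root, imperfect subtree
        else:
            total += PENALTY * fitness(c)   # incorrect root
    return (COMPLETE if perfect(tree) else 1.0) * total

a = ('A', ('x',))
b = ('B', a, a)
c = ('C', b, b, b)
fitness(a), fitness(b), fitness(c)  # -> (4.0, 32.0, 384.0)
```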


50.2.4.4 Other Derived Problems

Storch and Wegener [2616, 2617, 2618] used their Real Royal Road to show that there
exist problems where crossover helps improving the performance of Evolutionary Algorithms.
Naudts et al. [2009] have contributed generalized Royal Road functions in order
to study epistasis.

50.2.5 OneMax and BinInt

The OneMax and BinInt problems are two very simple models for measuring the convergence
of Genetic Algorithms.

50.2.5.1 The OneMax Problem

The task in the OneMax (or BitCount) problem is to find a binary string of length n
consisting of all ones. The search and problem space are both the fixed-length bit strings
G = X = Bn . Each gene (bit) has two alleles 0 and 1 which also contribute exactly this
value to the total fitness, i. e.,
f(x) = Σ_{i=0}^{n−1} x[i],   ∀x ∈ X                    (50.20)

For the OneMax problem, an extensive body of research has been provided by Ackley
[19], Bäck [165], Blickle and Thiele [338], Miller and Goldberg [1898], Mühlenbein and
Schlierkamp-Voosen [1975], Thierens and Goldberg [2697], and Wilson and Kaur [2947].

50.2.5.2 The BinInt Problem

The BinInt problem devised by Rudnick [2347] also uses the bit strings of the length n as
search and problem space (G = X = Bn ). It is something like a perverted version of the
OneMax problem, with the objective function defined as
f(x) = Σ_{i=0}^{n−1} 2^{n−i−1} x[i],   x[i] ∈ {0, 1} ∀i ∈ [0..n − 1]                    (50.21)

Since the bit at index i has a higher contribution to the fitness than all bits at higher
indices together, the comparison between two candidate solutions x1 and x2 is won by the
lexicographically bigger one. Thierens et al. [2698] give the example x1 = (1, 1, 1, 1, 0, 0, 0, 0)
and x2 = (1, 1, 0, 0, 1, 1, 0, 0), where the first deviating bit (underlined, at index 2) fully
determines the outcome of the comparison of the two.
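Both objective functions are one-liners; the sketch below (identifiers are ours) also reproduces the comparison example of Thierens et al. [2698]:

```python
def onemax(x):
    # Equation 50.20: the number of ones in the bit string
    return sum(x)

def binint(x):
    # Equation 50.21: the bit string read as an unsigned binary number,
    # most significant bit first
    n = len(x)
    return sum(2 ** (n - i - 1) * x[i] for i in range(n))

x1 = [1, 1, 1, 1, 0, 0, 0, 0]
x2 = [1, 1, 0, 0, 1, 1, 0, 0]
onemax(x1) == onemax(x2)  # -> True: both strings contain four ones
binint(x1), binint(x2)    # -> (240, 204): the first deviating bit decides
```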
We can expect that the bits with high contribution (high salience) will converge quickly
whereas the other genes with lower salience are only pressured by selection when all oth-
ers have already been fully converged. Rudnick [2347] called this sequential convergence
phenomenon domino convergence due to its resemblance with a row of falling domino
stones [2698] (see Section 13.2.3). Generally, he showed that first, the highly salient genes
converge (i. e., take on the correct values in the majority of the population). Then, step
by step, the building blocks of lower significance can converge, too. Another result of Rud-
nick’s work is that mutation may stall the convergence because it can disturb the highly
significant genes, which then counters the effect of the selection pressure on the less salient
ones. Then, it becomes much less likely that the majority of the population will have the
best alleles in these genes. This somehow dovetails with the idea of error thresholds from
theoretical biology [867, 2057] which we have mentioned in Section 14.3. It also explains
some of the experimental results obtained with the Royal Road problem from Section 50.2.4.
The BinInt problem was used in the studies of Sastry and Goldberg [2401, 2402].
One of the maybe most important conclusions from the behavior of GAs applied to the
BinInt problem is that applying a Genetic Algorithm to a numerical problem (X ⊆ Rn )
while encoding the candidate solutions as binary strings (G ⊆ Bn ) in a straightforward
manner will likely produce suboptimal solutions. Schraudolph and Belew [2429], for instance,
recognized this problem and suggested a Dynamic Parameter Encoding (DPE) where the
values encoded by the genes of the bit string are readjusted over time: Initially, optimization
takes place on a rather coarse-grained scale. After the optimum on this scale has been
approximated, the focus is shifted to a finer interval and the genotypes are re-encoded to fit
into this interval. In their experiments, this method works better than the direct encoding.

50.2.6 Long Path Problems

The long path problems have been designed by Horn et al. [1267] in order to construct a
unimodal, non-deceptive problem without noise which Hill Climbing algorithms still can
only solve in exponential time. The idea is to wind a path with increasing fitness through
the search space so that any two adjacent points on the path are no further away than one
search step and any two points not adjacent on the path are away at least two search steps.
All points which are not on the path should guide the search to its origin.
The problem space and search space in their concrete realization is the space of the
binary strings G = X = Bl of the fixed, odd length l. The objective function flp (x) is
subject to maximization. It is furthermore assumed that the search operations in Hill
Climbing algorithms alter at most one bit per search step, from which it follows that
two adjacent points on the path have a Hamming distance of one and two non-adjacent
points differ in at least two bits.
The simplest instance of the long path problems that Horn et al. [1267] define is the
Root2path Pl . Paths of this type are constructed by iteratively increasing the search space
dimension. Starting with P1 = (0, 1), the path Pl+2 is constructed from two identical copies
Pla = Plb of the path Pl as follows. First, we prepend 00 to all elements of the path Pla and 11 to
all elements of the path Plb . For l = 1, this makes P1a = (000, 001) and P1b = (110, 111).
Obviously, two adjacent elements on Pla or on Plb still have a Hamming distance of one, whereas
each element from Pla differs in at least two bits from each element on Plb . Then, a bridge
element Bl is created that equals the last element of Pla but has 01 as its first two bits,
i. e., B1 = 011. Now the sequence of the elements in Plb is reversed and Pla , Bl , and the
reversed Plb are concatenated. Hence, P3 = (000, 001, 011, 111, 110). Due to this recursive
structure of the path construction, the path length increases exponentially with l (for odd
l):

len(Pl+2 ) = 2 ∗ len(Pl ) + 1 (50.22)


len(Pl ) = 3 ∗ 2^{(l−1)/2} − 1                    (50.23)
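The recursive construction can be sketched directly (identifiers are ours); the example verifies Equation 50.23 and the unit Hamming distance of adjacent path elements for l = 5:

```python
def root2path(l):
    # Recursive construction of the Root2path P_l for odd l >= 1
    if l == 1:
        return ["0", "1"]
    p = root2path(l - 2)
    a = ["00" + s for s in p]       # P_l^a
    b = ["11" + s for s in p]       # P_l^b
    bridge = "01" + p[-1]           # B_l: last element of P^a with prefix 01
    return a + [bridge] + b[::-1]   # P^a, the bridge, then the reversed P^b

p5 = root2path(5)
len(p5)  # -> 11 = 3 * 2**((5 - 1) // 2) - 1
all(sum(c != d for c, d in zip(u, v)) == 1
    for u, v in zip(p5, p5[1:]))  # -> True: neighbors differ in one bit
```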

The basic fitness of a candidate solution is 0 if it is not on the path and its (zero-based)
position on the path if it is part of the path. The total number of points in the
space Bl is 2l and thus, the fraction occupied by the path is approximately 3 ∗ 2^{−(l+1)/2}, i. e.,
decreases exponentially. In order to avoid that the long path problem becomes a needle-in-a-
haystack problem7 , Horn et al. [1267] assign to all off-path points xo a fitness that leads the
search algorithm to the path’s origin. Since the first point of the path is always the string
00...0 containing only zeros, subtracting the number of ones from l, i. e., flp (xo ) = l − count1(xo ),
is the method of choice. To all points on the path, l is added to the basic fitness, making
them superior to all other candidate solutions.

7 See Section 16.7 for more information on needle-in-a-haystack problems.


Figure 50.4: The Root2path for l = 3, drawn over the three-dimensional hypercube: the path 000, 001, 011, 111, 110 receives the objective values flp = 3, 4, 5, 6, 7, while the off-path points 010 and 100 (flp = 2) and 101 (flp = 1) guide the search back towards the path’s origin.

P1 = (0 , 1 )
P3 = (000, 001, 011, 111, 110)
P5 = (00000, 00001, 00011, 00111, 00110, 01110, 11110, 11111, 11011, 11001, 11000)
P7 = (0000000, 0000001, 0000011, 0000111, 0000110, 0001110, 0011110, 0011111,
0011011, 0011001, 0011000, 0111000, 1111000, 1111001, 1111011, 1111111,
1111110, 1101110, 1100110, 1100111, 1100011, 1100001, 1100000)
P9 = (000000000, 000000001, 000000011, 000000111, 000000110, 000001110, 000011110,
000011111, 000011011, 000011001, 000011000, 000111000, 001111000, 001111001,
001111011, 001111111, 001111110, 001101110, 001100110, 001100111, 001100011,
001100001, 001100000, 011100000, 111100000, 111100001, 111100011, 111100111,
111100110, 111101110, 111111110, 111111111, 111111011, 111111001, 111111000,
110111000, 110011000, 110011001, 110011011, 110011111, 110011110, 110001110,
110000110, 110000111, 110000011, 110000001, 110000000)
P11 = (00000000000, 00000000001, 00000000011, 00000000111, 00000000110,
00000001110, 00000011110, 00000011111, 00000011011, 00000011001,
00000011000, 00000111000, 00001111000, 00001111001, 00001111011,
00001111111, 00001111110, 00001101110, 00001100110, 00001100111,
00001100011, 00001100001, 00001100000, 00011100000, 00111100000,
00111100001, 00111100011, 00111100111, 00111100110, 00111101110,
00111111110, 00111111111, 00111111011, 00111111001, 00111111000,
00110111000, 00110011000, 00110011001, 00110011011, 00110011111,
00110011110, 00110001110, 00110000110, 00110000111, 00110000011,
00110000001, 00110000000, 01110000000, 11110000000, 11110000001,
11110000011, 11110000111, 11110000110, 11110001110, 11110011110,
11110011111, 11110011011, 11110011001, 11110011000, 11110111000,
11111111000, 11111111001, 11111111011, 11111111111, 11111111110,
11111101110, 11111100110, 11111100111, 11111100011, 11111100001,
11111100000, 11011100000, 11001100000, 11001100001, 11001100011,
11001100111, 11001100110, 11001101110, 11001111110, 11001111111,
11001111011, 11001111001, 11001111000, 11000111000, 11000011000,
11000011001, 11000011011, 11000011111, 11000011110, 11000001110,
11000000110, 11000000111, 11000000011, 11000000001, 11000000000)
Table 50.1: Some long Root2paths for l from 1 to 11 with underlined bridge elements.

Algorithm 50.1: r ←− flp (x)

Input: x: the candidate solution with an odd length
Data: s: the current position in x
Data: sign: the sign of the next position
Data: isOnPath: true if and only if x is on the path
Output: r: the objective value

 1 begin
 2   sign ←− 1
 3   s ←− len(x) − 1
 4   r ←− 0
 5   isOnPath ←− true
 6   while (s ≥ 0) ∧ isOnPath do
 7     if s = 0 then
 8       if x[0] = 1 then r ←− r + sign
 9     else
10       sub ←− subList(x, s − 1, 2)
11       if sub = 11 then
12         r ←− r + sign ∗ (3 ∗ 2^{s/2} − 2)
13         sign ←− −sign
14       else if sub ≠ 00 then
15         if (x[s] = 0) ∧ (x[s − 1] = 1) ∧ (x[s − 2] = 1) then
16           if (s = 2) ∨ [(x[s − 3] = 1) ∧ (count1(subList(x, 0, s − 3)) = 0)] then
17             r ←− r + sign ∗ (3 ∗ 2^{s/2 − 1} − 1)
18             s ←− −1    // the bridge element determines the position completely
19           else isOnPath ←− false
20         else isOnPath ←− false
21     s ←− s − 2
22   if isOnPath then r ←− r + len(x)
23   else r ←− len(x) − count1(x)

Some examples for the construction of Root2paths can be found in Table 50.1 and the
path for l = 3 is illustrated in Figure 50.4. In Algorithm 50.1 we try to outline how the
objective value of a candidate solution x can be computed online. Here, please notice two
things: First, this algorithm deviates from the one introduced by Horn et al. [1267] – we
tried to resolve the tail recursion and also added some minor changes. Another algorithm
for determining flp is given by Rudolph [2349]. The second thing to realize is that for small
l, we would not use the algorithm during the individual evaluation but rather a lookup
table. Each candidate solution could directly be used as index for this table which contains
the objective values. For l = 20, for example, a table with entries of the size of 4B would
consume 4MiB which is acceptable on today’s computers.
The experiments of Horn et al. [1267] showed that Hill Climbing methods that only
concentrate on sampling the neighborhood of the currently known best solution perform very
poorly on long path problems, whereas Genetic Algorithms which combine different candidate
solutions via crossover easily find the correct solution. Rudolph [2349] shows that a GA does so
in polynomial expected time. He also extends this idea to long k-paths in [2350]. Droste et al.
[836] and Garnier and Kallel [1028] analyze this path and find that also (1+1)-EAs can have
exponential expected runtime on such unimodal functions. Höhn and Reeves [1246] further
elaborate on why long path problems are hard for Genetic Algorithms.
It should be mentioned that the Root2paths constructed according to the method de-
scribed in this section here do not have the maximum length possible for long paths. Horn
et al. [1267] also introduce Fibonacci paths which are longer than the Root2paths. The
problem of finding maximum length paths in a l-dimensional hypercube is known as the
snake-in-the-box 8 problem [682, 1505] which was first described by Kautz [1505] in the late
1950s. It is a very hard problem suffering from combinatorial explosion and currently, max-
imum snake lengths are only known for small values of l.

50.2.7 Tunable Model for Problematic Phenomena

What is a good model problem? Which model fits best to our purposes? These questions
should be asked whenever we apply a benchmark, whenever we want to use something for
testing the ability of a Global Optimization approach. The mathematical functions intro-
duced in ??, for instance, are good for testing special mathematical reproduction operations
like those used in Evolution Strategies and for testing the capability of an Evolutionary Algorithm
for estimating the Pareto frontier in multi-objective optimization. Kauffman’s NK fitness
landscape (discussed in Section 50.2.1) was intended to be a tool for exploring the relation
of ruggedness and epistasis in fitness landscapes but can prove very useful for finding out
how capable a Global Optimization algorithm is of dealing with problems exhibiting these
phenomena. In Section 50.2.4, we outlined the Royal Road functions, which were used to
investigate the ability of Genetic Algorithms to combine different useful formae and to test
the Building Block Hypothesis. The Artificial Ant (??) and the GCD problem from ?? are
tests for the ability of Genetic Programming to learn algorithms.
All these benchmarks and toy problems focus on specific aspects of Global Optimization
and will exhibit different degrees of the problematic properties of optimization problems to
which we had devoted Part II:

1. premature convergence and multimodality (Chapter 13),


2. ruggedness (Chapter 14),
3. deceptiveness (Chapter 15),
4. neutrality and redundancy (Chapter 16),
5. overfitting and oversimplification (Section 19.1), and
6. dynamically changing objective functions (Chapter 22).

With the exception of the NK fitness landscape, it remains unclear to which degree these
phenomena occur in the test problems. How much intrinsic epistasis does the Artificial Ant
or the GCD problem exhibit? What is the quantity of neutrality inherent in the Royal Road for
variable-length representations? Are the mathematical test functions rugged and, if so, to
which degree? All these problems are useful test instances for Global Optimization. They have
not been designed to give us answers to questions like: Which fitness assignment process
can be useful when an optimization problem exhibits weak causality and thus has a rugged
fitness landscape? How does a certain selection algorithm influence the ability of a Genetic
Algorithm to deal with neutrality? Only Kauffman’s NK landscape provides such answers,
but only for epistasis. By fine-tuning its N and K parameters, we can generate problems
with different degrees of epistasis. Applying a Genetic Algorithm to these problems then
allows us to draw conclusions on its expected performance when being fed with highly or
weakly epistatic real-world problems.
In this section, a new model problem is defined that exhibits ruggedness, epistasis, neu-
trality, multi-objectivity, overfitting, and oversimplification features in a controllable manner
(Niemczyk [2037], Weise et al. [2910]). Each of them is introduced as a distinct filter compo-
nent which can separately be activated, deactivated, and fine-tuned. This model provides
a perfect test bed for optimization algorithms and their configuration settings. Based on a
8 http://en.wikipedia.org/wiki/Snake-in-the-box [accessed 2008-08-13]
rough estimation of the structure of the fitness landscape of a given problem, tests can be
run very fast using the model as a benchmark for the settings of an optimization algorithm.
Thus, we could, for instance, determine a priori whether increasing the population size of
an Evolutionary Algorithm over an approximated limit is likely to provide a gain.
With it, we also can evaluate the behavior of an optimization method in the presence of
various problematic aspects, like epistasis or neutrality. This way, strengths and weaknesses
of different evolutionary approaches could be explored in a systematic manner. Additionally,
it is also well suited for theoretical analysis because of its simplicity. The layers of the model,
sketched using an example in Figure 50.5, are specified in the following.

Figure 50.5: An example for the fitness landscape model: (1) a genotype g ∈ G; (2) the neutrality reduction u2 shortens it by the factor µ = 2; (3) the epistasis mapping e4 is applied blockwise (with e2 for the insufficient bits at the end); (4) the result is split into m = 2 phenotypes x1 , x2 of length n = 6, padded where necessary; (5) these are evaluated against the tc = 5 training cases T1 = *10001, T2 = 0101*0, T3 = 0*0111, T4 = 011*01, T5 = 11*101 (ε = 1, o = 1), yielding f1,1,5 (x1 ) = 16 and f1,1,5 (x2 ) = 14; (6) the ruggedness permutation r57 (γ = 57, q = 25) finally maps these values to 17 and 15.


50.2.7.1 Model Definition

The basic optimization task in this model is to find a bit string x⋆ of a predefined length
n = len(x⋆ ) consisting of alternating zeros and ones in the space of all possible bit strings
X = B∗ , as sketched in box 1 in Figure 50.5. The tuning parameter for the problem size is
n ∈ N1 .
Although the model is based on finding optimal bit strings, it is not intended to be a
benchmark for Genetic Algorithms. Instead, the purpose of the model is to provide problems
with specific fitness landscape features to the optimization algorithm. An optimization algo-
rithm initially creates a set of random candidate solutions with a nullary creation operator.
It evaluates these solutions and selects promising solutions. To these individuals, unary and
binary reproduction operations are applied. New candidate solutions result, which subse-
quently again undergo selection and reproduction. Notice that the nature of the nullary,
unary, and binary operations – and hence, the data structure used to represent the candi-
date solutions – is not important from the perspective of the optimization algorithm. The
algorithm makes its selection decisions purely based on the obtained objective values and
also may use this information to decide how often mutation and recombination should be
applied. In other words, the behavior of a general optimization algorithm, such as an EA,
is purely based on the fitness landscape features, and not on the precise data structures
representing genotypes and phenotypes.
From this perspective, a Genetic Algorithm is just an instantiation of an Evolutionary
Algorithm with search operations for bit strings. Genetic Programming is just an EA for
trees. If we can find a good strategy with which a Genetic Algorithm can solve a highly
epistatic problem and this strategy does not involve changes inside the reproduction op-
erators, then this strategy is likely to hold for problems in Genetic Programming as well.
Hence, our benchmark model – though working on bit-string based genomes – is well suited
for providing insights which are helpful for other search spaces as well. It was used,
for example, in order to find good setups for large-scale experimental studies on GP, where
each experiment was very costly and direct parameter tuning was not possible [2888].

x⋆ = 0101010101010 . . . 01 (50.24)

50.2.7.1.1 Overfitting and Oversimplification (box 5)

Searching this optimal string could be done by comparing each genotype g with x⋆ . Therefore
we would use the Hamming distance9 [1173] dist ham(a, b), which defines the difference
between two bit strings a and b of equal length as the number of bits in which they differ
(see ??).
Instead of doing this directly, we test the candidate solution against tc training samples
T1 , T2 , .., Ttc . These samples are modified versions of the perfect string x⋆ .
As outlined in Section 19.1 on page 191, we can distinguish between overfitting and
oversimplification. The latter is often caused by incompleteness of the training cases and
the former can originate from noise in the training data. Both forms can be expressed in
terms of our model by the objective function fε,o,tc (based on a slightly extended version of
the Hamming distance dist∗Ham ) which is subject to minimization.

dist∗Ham (a, b) = |{i : (a[i] ≠ b[i]) ∧ (b[i] ≠ ∗) ∧ (0 ≤ i < |a|)}|                    (50.25)


fε,o,tc (x) = Σ_{i=1}^{tc} dist∗Ham (x, Ti ),   fε,o,tc (x) ∈ [0, f̂] ∀x ∈ X                    (50.26)

In the case of oversimplification, the perfect solution x⋆ will always reach a perfect score in
all training cases. There may be incorrect solutions reaching this value in some cases too,
9 ?? on page ?? includes the specification of the Hamming distance.
because some of the facets of the problem are hidden. We take this into consideration by
placing o don’t care symbols (*) uniformly distributed into the training cases. The values of
the candidate solutions at their loci have no influence on the fitness.
When overfitting is enabled, the perfect solution will not reach the optimal score in any
training case because of the noise present. Incorrect solutions may score better in some
cases and even outperform the real solution if the noise level is high. Noise is introduced
in the training cases by toggling ε of the remaining n − o bits, again following a uniform
distribution. An optimization algorithm can find a correct solution only if there are more
training samples with correctly defined values for each locus than with wrong or don’t care
values.
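Assuming that candidate solutions and training cases are represented as char arrays over '0', '1', and the don't care symbol '*', the extended distance dist∗Ham and the objective fε,o,tc can be sketched as follows (distHamStar and objective are illustrative names, not a reference implementation):

```java
/** Extended Hamming distance dist*Ham: the number of positions in
 * which x differs from the training case t, where positions holding
 * the don't care symbol '*' in t are ignored. */
public static final int distHamStar(final char[] x, final char[] t) {
  int i, d;

  d = 0;
  for (i = (x.length - 1); i >= 0; i--) {
    if ((t[i] != '*') && (x[i] != t[i])) {
      d++;
    }
  }
  return d;
}

/** Objective f_{eps,o,tc}: the sum of dist*Ham over all tc training
 * cases; 0 is optimal, the maximum f-hat depends on the cases. */
public static final int objective(final char[] x, final char[][] trainingCases) {
  int f;

  f = 0;
  for (final char[] t : trainingCases) {
    f += distHamStar(x, t);
  }
  return f;
}
```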
The optimal objective value is zero and the maximum f̂ of fε,o,tc is limited by the upper
boundary f̂ ≤ (n − o)tc. Its exact value depends on the training cases. For each bit index i,
we have to take into account whether a zero or a one in the phenotype would create larger
errors:
count(i, val) = |{j ∈ 1..tc : Tj [i] = val}|   (50.27)

e(i) = { count(i, 1)   if count(i, 1) ≥ count(i, 0)
       { count(i, 0)   otherwise                      (50.28)

f̂ = ∑_{i=1}^{n} e(i)   (50.29)

This process is illustrated as box 5 of Figure 50.5.


50.2.7.1.2 Neutrality (box 2)
We can create a well-defined amount of neutrality during the genotype-phenotype mapping
by applying a transformation uµ that shortens the candidate solutions by a factor µ, as
sketched in box 2 of Figure 50.5. The ith bit in uµ (g) is defined as 0 if and only if the
majority of the µ bits starting at locus i ∗ µ in g is also 0, and as 1 otherwise. The default
value 1 set in draw situations has, on average, no effect on the fitness since the target solution
x⋆ is defined as a sequence of alternating zeros and ones. If the length of a genotype g is
not a multiple of µ, the remaining len(g) mod µ bits are ignored. The tunable parameter
for the neutrality in our model is µ. If µ is set to 1, no additional neutrality is modeled.
It should be noted that, since our objective functions are based on the Hamming distance,
the number of possible objective values is always lower than the number of possible genotypes.
Therefore, there always exists some inherent neutrality in the model in addition to the one
modeled with the redundancy reduction.
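A minimal sketch of this majority-based reduction uµ, assuming genotypes are boolean arrays (the method name uMu is illustrative):

```java
/** Neutrality reduction u_mu: each block of mu consecutive bits of g
 * collapses to its majority bit; draws yield 1; the trailing
 * (g.length mod mu) bits are ignored. */
public static final boolean[] uMu(final boolean[] g, final int mu) {
  final boolean[] out = new boolean[g.length / mu];
  int i, j, ones;

  for (i = 0; i < out.length; i++) {
    ones = 0;
    for (j = 0; j < mu; j++) {
      if (g[(i * mu) + j]) {
        ones++;
      }
    }
    out[i] = ((2 * ones) >= mu); // the default value 1 wins draws
  }
  return out;
}
```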
50.2.7.1.3 Epistasis (box 3)
Epistasis in general means that a slight change in one gene of a genotype influences some
other genes. We can introduce epistasis in our model as part of the genotype-phenotype
mapping and apply it after the neutrality transformation. We therefore define a bijective
function eη that
translates a bit string z of length η to a bit string eη (z) of the same length. Assume we have
two bit strings z1 and z2 which differ only in one single locus, i. e., their Hamming distance
is one. eη introduces epistasis by exhibiting the following property:

dist ham(z1 , z2 ) = 1 ⇒ dist ham(eη (z1 ) , eη (z2 )) ≥ η − 1 ∀z1 , z2 ∈ Bη   (50.30)
The meaning of Equation 50.30 is that a change of one bit in a genotype g leads to the change
of at least η − 1 bits in the corresponding mapping eη (g). This, as well as the demand for
bijectivity, is provided if we define eη as follows:
eη (z) [i] = { ⊕ z [j] over all j with 0 ≤ j < η ∧ j ≠ (i − 1) mod η   if 0 ≤ z < 2^(η−1) , i. e., z [η − 1] = 0
            { ¬eη (z − 2^(η−1)) [i]                                    otherwise                                 (50.31)
570 50 BENCHMARKS
In other words, for all strings z ∈ Bη which have the most significant bit (MSB) not set,
the eη transformation is performed bitwise. The ith bit in eη (z) equals the exclusive or
combination of all but one bit in z. Hence, each bit in z influences the value of η − 1 bits in
eη (z). For all z with 1 in the MSB, eη (z) is simply set to the negated eη transformation of z
with the MSB cleared (the value of the MSB is 2η−1 ). This division in e is needed in order
to ensure its bijectiveness. This and the compliance with Equation 50.30 can be shown with
a rather lengthy proof omitted here.
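Treating an η-bit block z as an integer in 0..2^η − 1 (bit i of the integer being z[i]), the transformation can be sketched as follows; this is an illustrative helper, not the reference implementation:

```java
/** Epistasis transform e_eta on an eta-bit block z (as an integer).
 * If the MSB of z is clear, output bit i is the XOR of all input bits
 * except bit (i-1) mod eta; if the MSB is set, the result is the
 * bitwise complement (within eta bits) of e_eta(z - 2^(eta-1)). */
public static final int eEta(final int z, final int eta) {
  final int msb = 1 << (eta - 1);
  int i, j, x, out;

  if ((z & msb) != 0) { // MSB set: negate the transform of z with MSB cleared
    return (~eEta(z ^ msb, eta)) & ((1 << eta) - 1);
  }
  out = 0;
  for (i = 0; i < eta; i++) {
    x = 0;
    for (j = 0; j < eta; j++) {
      if (j != (((i - 1) + eta) % eta)) {
        x ^= ((z >>> j) & 1); // XOR of all but one input bit
      }
    }
    out |= (x << i);
  }
  return out;
}
```

Evaluating this sketch for η = 4 reproduces the example mapping of Figure 50.6, e.g., 0001 → 1101 and 1111 → 1110.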
In order to introduce this model of epistasis in genotypes of arbitrary length, we divide
them into blocks of the length η and transform each of them separately with eη . If the length
of a given genotype g is no multiple of η, the remaining len(g) mod η bits at the end will be
transformed with the function elen(g) mod η instead of eη , as outlined in box 3 of Figure 50.5.
It may also be an interesting fact that the eη transformations are a special case of the NK
landscape discussed in Section 50.2.1 with N = η and K ≈ η − 2.

z → e4 (z):   0000 → 0000   0001 → 1101   0010 → 1011   0100 → 0111   1000 → 1111
              1111 → 1110   0111 → 0001   1011 → 1001   1101 → 0101   1110 → 0011
              0011 → 0110   0101 → 1010   0110 → 1100   1001 → 0010   1010 → 0100
              1100 → 1000
(inputs z with Hamming distance 1 map to outputs e4 (z) with Hamming distance ≥ η − 1 = 3)
Figure 50.6: An example for the epistasis mapping z → e4 (z).

The tunable parameter η for the epistasis ranges from 2 to n ∗ m, the product of the
basic problem length n and the number of objectives m (see next section). If it is set to a
value smaller than 3, no additional epistasis is introduced. Figure 50.6 outlines the mapping
for η = 4.
Again it should be noted that, besides the explicit epistasis introduced via this trans-
formation, implicit epistasis can occur through the neutrality and ruggedness (see ??) map-
pings. It is not possible to study these three effects completely separately, but with our
model, well-dosed degrees of epistasis, neutrality, and ruggedness can be generated separately
or jointly.

50.2.7.1.4 Multi-Objectivity (box 4)

A multi-objective problem with m criteria can easily be created by interleaving m instances
of the benchmark problem with each other and introducing separate objective functions
for each of them. Instead of just dividing the genotype g into m blocks, each standing for
one objective, we scatter the objectives as illustrated in box 4 of Figure 50.5. The bits for
the first objective function comprise x1 = (g [0], g [m], g [2m], . . . ), those used by the second
objective function are x2 = (g [1], g [m + 1], g [2m + 1], . . . ). Notice that no bit in g is used by
more than one objective function. Superfluous bits (beyond index nm − 1) are ignored. If g
is too short, the missing bits in the phenotypes are replaced with the complement from x⋆ ,
i. e., if one objective misses the last bit (index n − 1), it is padded by ¬x⋆ [n − 1] which will
worsen the objective by 1 on average.
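For genotypes that are long enough, this scattering can be sketched as follows (the padding of missing bits with the complement of x⋆ is omitted for brevity; interleave is an illustrative name):

```java
/** Scatter a genotype g into m interleaved objective strings of
 * length n: objective k (0-based) uses the bits
 * g[k], g[k+m], g[k+2m], ...; g must hold at least n*m bits here. */
public static final boolean[][] interleave(final boolean[] g, final int m, final int n) {
  final boolean[][] x = new boolean[m][n];
  int k, i;

  for (k = 0; k < m; k++) {
    for (i = 0; i < n; i++) {
      x[k][i] = g[k + (i * m)]; // no bit of g is used twice
    }
  }
  return x;
}
```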
Because of the interleaving, the objectives will begin to conflict if epistasis (η > 2) is
applied, similar to NK landscapes. Changing one bit in the genotype will change the outcome
of at most min {η, m} objectives. Some of them may improve while others may worsen.
A non-functional objective function minimizing the length of the genotypes is added if
variable-length genomes are used during the evolution. If fixed-length genomes are used,
they can be designed such that the blocks for the single objectives always have the right
length.
50.2.7.1.5 Ruggedness (box 6)

There are two (possibly interacting) sources of ruggedness in a fitness landscape. The first
one, epistasis, has already been modeled and discussed. The other source concerns the
objective functions themselves, the nature of the problem. We will introduce this type
of ruggedness a posteriori by artificially lowering the causality of the problem space. We
therefore shuffle the objective values with a permutation r : [0, f̂] 7→ [0, f̂], where f̂ is the
abbreviation for the maximum possible objective value, as defined in Equation 50.29.
Before we do that, let us shortly outline what makes a function rugged. Ruggedness
is obviously the opposite of smoothness and causality. In a smooth objective function, the
objective values of the candidate solutions neighboring in problem space are also neighboring.
In our original problem with o = 0, ε = 0, and tc = 1 for instance, two individuals differing
in one bit will also differ by one in their objective values. We can write down the list of
objective values which the candidate solutions take on when they are stepwise improved from
the worst to the best possible configuration as (f̂, f̂ − 1, .., 2, 1, 0). If we exchange two of the
values in this list, we will create some artificial ruggedness. A measure for the ruggedness
of such a permutation r is ∆(r):

∆(r) = ∑_{i=0}^{f̂−1} |r[i] − r[i + 1]|   (50.32)

The original sequence of objective values has the minimum value ∆ˇ = f̂ and the maximum
possible value is ∆ˆ = f̂(f̂ + 1)/2. There exists at least one permutation for each ∆ value in
∆ˇ..∆ˆ. We can hence define the permutation rγ which is applied after the objective values
are computed and which has the following features:
are computed and which has the following features:

1. It is bijective (since it is a permutation).

2. It must preserve the optimal value, i. e., rγ [0] = 0.

3. ∆(rγ ) = ∆ˇ + γ.

With γ ∈ [0, ∆ˆ − ∆ˇ], we can fine-tune the ruggedness. For γ = 0, no ruggedness
is introduced. For a given f̂, we can compute the permutations rγ with the procedure
buildRPermutation(γ, f̂) defined in Algorithm 50.2.

[Plots of the permutations rγ over f: ∆(r0 ) = 5 (γ = 0, identity); ∆(r1 ) = 6, ∆(r2 ) = 7,
∆(r3 ) = 8, ∆(r4 ) = 9 (γ = 1..4, rugged); ∆(r5 ) = 10, ∆(r6 ) = 11, ∆(r7 ) = 12 (γ = 5..7,
deceptive); ∆(r8 ) = 13, ∆(r9 ) = 14, ∆(r10 ) = 15 (γ = 8..10, rugged)]
Figure 50.7: An example for rγ with γ = 0..10 and f̂ = 5.



 
Algorithm 50.2: rγ ←− buildRPermutation(γ, f̂)

Input: γ: the γ value
Input: f̂: the maximum objective value
Data: i, j, d, tmp: temporary variables
Data: k, start, r: parameters of the subalgorithm
Output: rγ : the permutation rγ

1  begin
2    Subalgorithm r ←− permutate(k, r, start)
3    begin
4      if k > 0 then
5        if k ≤ (f̂ − start) then
6          r ←− permutate(k − 1, r, start)
7          tmp ←− r[f̂]
8          r[f̂] ←− r[f̂ − k]
9          r[f̂ − k] ←− tmp
10       else
11         i ←− ⌊(start + 1)/2⌋
12         if (start mod 2) = 1 then
13           i ←− f̂ + 1 − i
14           d ←− −1
15         else
16           d ←− 1
17         for j ←− start up to f̂ do
18           r[j] ←− i
19           i ←− i + d
20         r ←− permutate(k − f̂ + start, r, start + 1)
21   r ←− (0, 1, 2, .., f̂ − 1, f̂)
22   return permutate(γ, r, 1)
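A Java sketch of buildRPermutation (illustrative, not the reference implementation); the branch condition and the direction of the base-pattern fill are chosen such that the permutations shown in Figure 50.7 for f̂ = 5 are reproduced:

```java
/** Build the ruggedness permutation r_gamma of {0, ..., fHat};
 * gamma ranges from 0 (identity) to fHat*(fHat+1)/2 - fHat. */
public static final int[] buildRPermutation(final int gamma, final int fHat) {
  final int[] r = new int[fHat + 1];
  for (int i = 0; i <= fHat; i++) {
    r[i] = i; // start with the identity mapping
  }
  permutate(gamma, r, 1, fHat);
  return r;
}

private static final void permutate(final int k, final int[] r,
    final int start, final int fHat) {
  if (k <= 0) {
    return;
  }
  if (k <= (fHat - start)) {
    // k successive swaps of r[fHat] with r[fHat-1], ..., r[fHat-k]
    permutate(k - 1, r, start, fHat);
    final int tmp = r[fHat];
    r[fHat] = r[fHat - k];
    r[fHat - k] = tmp;
  } else {
    // lay down the next, more deceptive base pattern and recurse
    int i = (start + 1) / 2, d;
    if ((start % 2) != 0) {
      i = (fHat + 1) - i;
      d = -1; // odd level: fill downwards from the high values
    } else {
      d = 1; // even level: fill upwards from the low values
    }
    for (int j = start; j <= fHat; j++) {
      r[j] = i;
      i += d;
    }
    permutate(k - (fHat - start), r, start + 1, fHat);
  }
}
```

For f̂ = 5 this yields, e.g., r1 = (0, 1, 2, 3, 5, 4) with ∆ = 6 and r10 = (0, 5, 1, 4, 2, 3) with the maximum ∆ = 15, matching Figure 50.7.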
Figure 50.7 outlines all ruggedness permutations rγ for an objective function which can
range from 0 to f̂ = 5. As can be seen, the permutations scramble the objective function more
and more with rising γ and reduce its gradient information. The ruggedness transformation
is sketched in box 6 of Figure 50.5.

50.2.7.2 Experimental Validation

In this section, we will use a selection of the experimental results obtained with our model in
order to validate the correctness of the approach.10 In our experiments, we applied variable-
length bit string genomes (see Section 29.4) with lengths between 1 and 8000 bits. We apply
a plain Genetic Algorithm to the default benchmark function set f = {fε,o,tc , fnf }, where
fnf is the non-functional length criterion fnf (x) = len(x). Pareto ranking (Section 28.3.3) is
used for fitness assignment and Tournament selection (see Section 28.4.4) with a tournament
size k = 5 for selection. In a population of ps = 1000 individuals, we perform single-point
crossover at a crossover rate of cr = 80% and apply single-bit mutation to mr = 20% of the
genotypes. The genotypes are mapped to phenotypes as discussed in Section 50.2.7.1. Each
run performs exactly maxt = 1001 generations. For each of the experiment-specific settings
discussed later, at least 50 runs have been performed.

50.2.7.2.1 Basic Complexity

In the experiments, we distinguish between success and perfection. Success means finding
individuals x of optimal functional fitness, i. e., fε,o,tc (x) = 0. Multiple such successful
strings may exist, since superfluous bits at the end of genotypes do not influence their
functional objective. We will refer to the number of generations needed to find a successful
individual as success generations. The perfect string x⋆ has no useless bits; it is the shortest
possible solution with fε,o,tc = 0 and, hence, also optimal in the non-functional length
criterion. In our experiments, we measure:

1. the fraction s/r of experimental runs that turned out successful,

2. the minimum number of generations needed by the fastest (successful) experimental run
to find a successful individual,

3. the average number st of generations needed by the (successful) experimental runs to
find a successful individual,

4. the maximum number of generations needed by the slowest (successful) experimental run
to find a successful individual, and

5. the average number pt of generations needed by the experimental runs to find a perfect,
neither overfitted nor oversimplified, individual.

In Figure 50.8, we have plotted the minimum, average, and maximum number of the
success generations for values of n ranging from 8 to 800. As illustrated, the
problem hardness increases steadily with rising string length n. Trimming down the solution
strings to the perfect length becomes more and more complicated with growing n. This is
likely because the fraction at the end of the strings where the trimming is to be performed
shrinks in comparison with their overall length.

50.2.7.2.2 Ruggedness

In Figure 50.9, we plotted the average success generations st with n = 80 and different
ruggedness settings γ. Interestingly, the gray original curve behaves very strangely. It is
divided into alternating solvable and unsolvable11 problems. The unsolvable ranges of γ
10 More experimental results and more elaborate discussions can be found in the bachelor’s thesis of

Niemczyk [2037].
11 We call a problem unsolvable if it has not been solved within 1000 generations.

[Plot of the number of generations over n (0..700): the four curves show, from top to
bottom, pt as well as the maximum, average, and minimum number of success generations]
Figure 50.8: The basic problem hardness.

[Plot of st at n = 80 over the ruggedness (0..3000): the gray curve over the original γ is
interrupted by deceptive gaps at the deceptiveness/ruggedness folds; the black curve over
the rearranged γ′ is continuous]
Figure 50.9: Experimental results for the ruggedness.
correspond to gaps in the curve. With rising γ, the solvable problems require more and
more generations until they are solved. After a certain (earlier) γ threshold value, the
unsolvable sections become solvable. From there on, they become simpler with rising γ. At
some point, the two parts of the curve meet.
 
Algorithm 50.3: γ ←− translate(γ′, f̂)

Input: γ′: the raw γ value
Input: f̂: the maximum value of fε,o,tc
Data: i, j, k, l: some temporary variables
Output: γ: the translated γ value

1  begin
2    l ←− f̂(f̂ − 1)/2
3    i ←− ⌊f̂/2⌋ ∗ ⌊(f̂ + 1)/2⌋
4    if γ′ ≤ i then
5      j ←− ⌈(f̂ + 2)/2 − sqrt(f̂²/4 + 1 − γ′)⌉
6      k ←− γ′ − j(f̂ + 2) + j² + f̂
7      return k + 2(j(f̂ + 2) − j² − f̂) − j
8    else
9      j ←− ⌈(f̂ mod 2 + 1)/2 + sqrt((1 − f̂ mod 2)/4 + γ′ − 1 − i)⌉
10     k ←− γ′ − (j − f̂ mod 2)(j − 1) − 1 − i
11     return l − k − 2j² + j − (f̂ mod 2)(−2j + 1)

The reason for this behavior is rooted in the way that we construct the ruggedness
mapping r and illustrates the close relation between ruggedness and deceptiveness. Algo-
rithm 50.2 is a greedy algorithm which alternates between creating groups of mappings that
are mainly rugged and ones that are mainly deceptive. In Figure 50.7 for instance, from
γ = 5 to γ = 7, the permutations exhibit a high degree of deceptiveness whilst just being
rugged before and after that range. Thus, it seems to be a good idea to rearrange these
sections of the ruggedness mapping. The identity mapping should still come first, followed
by the purely rugged mappings ordered by their ∆-values. Then, the permutations should
gradually change from rugged to deceptive and the last mapping should be the most de-
ceptive one (γ = 10 in Figure 50.7). The black curve in Figure 50.9 depicts the results of
rearranging the γ-values with Algorithm 50.3. This algorithm maps deceptive gaps to higher
γ-values and, by doing so, makes the resulting curve continuous.12
Fig. 50.10.a sketches the average success generations for the rearranged ruggedness prob-
lem for multiple values of n and γ ′ . Depending on the basic problem size n, the problem
hardness increases steeply with rising values of γ ′ .
In Algorithm 50.2 and Algorithm 50.3, we use the maximum value of the functional
objective function (abbreviated with f̂) in order to build and to rearrange the ruggedness
12 This is a deviation from our original idea, but this idea did not consider deceptiveness.

[Surface plots of st and of the fraction 1 − s/r of failed runs over n (0..160) and the
scaled ruggedness resp. deceptiveness dial (0..8)]
Fig. 50.10.a: Unscaled ruggedness st plot. Fig. 50.10.b: Scaled ruggedness st plot.
Fig. 50.10.c: Scaled deceptiveness – average success generations st. Fig. 50.10.d: Scaled
deceptiveness – failed runs.

Figure 50.10: Experiments with ruggedness and deceptiveness.


permutations r. Since this value depends on the basic problem length n, the number of
different permutations and thus, the range of the γ ′ values will too. The length of the
lines in direction of the γ ′ axis in Fig. 50.10.a thus increases with n. We introduce two
additional scaling functions for ruggedness and deceptiveness with a parameter g spanning
from zero to ten, regardless of n. Only one of these functions can be used at a time, de-
pending on whether experiments should be run for rugged (Equation 50.33) or for deceptive

(Equation 50.34) problems. For scaling, we use the highest γ value which maps to rugged
mappings, γr′ = ⌊0.5f̂⌋ ∗ ⌈0.5f̂⌉, and the minimum and maximum ruggedness values according
to Equation 50.32.

rugged: γ′ = round(0.1g ∗ γr′ )   (50.33)

deceptive: γ′ = { 0                                      if g ≤ 0
                { γr′ + round(0.1g ∗ (∆ˆ − ∆ˇ − γr′ ))   otherwise   (50.34)
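Assuming an integer maximum objective value f̂, the two scaling functions of Equations 50.33 and 50.34 can be sketched as follows (the method and variable names are illustrative):

```java
/** gammaR = floor(0.5*fHat) * ceil(0.5*fHat): the highest raw gamma
 * that still maps to a purely rugged permutation. */
private static final long gammaR(final long fHat) {
  return (fHat / 2) * ((fHat + 1) / 2); // integer division floors
}

/** Ruggedness scale (Equation 50.33): dial g in [0, 10] -> raw gamma'. */
public static final long gammaRugged(final double g, final long fHat) {
  return Math.round(0.1d * g * gammaR(fHat));
}

/** Deceptiveness scale (Equation 50.34): dial g in [0, 10] -> raw gamma'. */
public static final long gammaDeceptive(final double g, final long fHat) {
  if (g <= 0d) {
    return 0L;
  }
  final long deltaMin = fHat;                      // Delta-check
  final long deltaMax = (fHat * (fHat + 1)) / 2;   // Delta-hat
  return gammaR(fHat)
      + Math.round(0.1d * g * (deltaMax - deltaMin - gammaR(fHat)));
}
```

For f̂ = 5, γr′ = 2 ∗ 3 = 6, so the rugged dial spans γ′ = 0..6 and the deceptive dial continues up to ∆ˆ − ∆ˇ = 10.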

When using this scaling mechanism, the curves resulting from experiments with different
n-values can be compared more easily: Fig. 50.10.b based on the scale from Equation 50.33,
for instance, shows much clearer how the problem difficulty rises with increasing ruggedness
than Fig. 50.10.a. We also can spot some irregularities which always occur at about the
same degree of ruggedness, near g ≈ 9.5, and which we will investigate in the future.
The experiments with the deceptiveness scale Equation 50.34 show the tremendous effect
of deceptiveness in the fitness landscape. Not only does the problem hardness rise steeply
with g (Fig. 50.10.c); after a certain threshold, the Evolutionary Algorithm becomes unable to
solve the model problem at all (in 1000 generations), and the fraction of failed experiments
in Fig. 50.10.d jumps to 100% (since the fraction s/r of solved ones goes to zero).

50.2.7.2.3 Epistasis

Fig. 50.11.a illustrates the relation between problem size n, the epistasis factor η, and

[Surface plots of st and of the fraction 1 − s/r of failed runs over the problem length n
(0..160) and the epistasis factor η (0..50)]
Fig. 50.11.a: Epistasis η and problem length n. Fig. 50.11.b: Epistasis η and problem
length n: failed runs.
Figure 50.11: Experiments with epistasis.

the average success generations. Although rising epistasis makes the problems harder, the
complexity does not rise as smoothly as in the previous experiments. The cause for this is
likely the presence of crossover – if mutation was allowed solely, the impact of epistasis would
most likely be more intense. Another interesting fact is that experimental settings with odd
values of η tend to be much more complex than those with even ones. This relation becomes
even more obvious in Fig. 50.11.b, where the proportion of failed runs, i. e., those which
were not able to solve the problem in less than 1000 generations, is plotted. A high plateau
for greater values of η is cut by deep valleys at positions where η = 2 + 2i ∀i ∈ N1 . This
phenomenon has thoroughly been discussed by Niemczyk [2037] and can be excluded from
the experiments by applying the scaling mechanism with parameter y ∈ [0, 10] as defined in
Equation 50.35:

epistasis: η = { 2            if y ≤ 0
               { 41           if y ≥ 10
               { 2⌊2y⌋ + 1    otherwise   (50.35)
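Equation 50.35 translates directly into code (etaScaled is an illustrative name):

```java
/** Epistasis scaling (Equation 50.35): map the dial y in [0, 10]
 * to an odd block size eta, avoiding the "easier" even eta values. */
public static final int etaScaled(final double y) {
  if (y <= 0d) {
    return 2; // no additional epistasis
  }
  if (y >= 10d) {
    return 41;
  }
  return (2 * ((int) Math.floor(2d * y))) + 1; // always odd
}
```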

50.2.7.2.4 Neutrality

Figure 50.12 illustrates the average number of generations st needed to grow an individual

[Surface plot of st over the problem length n (0..160) and the neutrality parameter µ
(0..50)]
Figure 50.12: The results from experiments with neutrality.

with optimal functional fitness for different values of the neutrality parameter µ. Until
µ ≈ 10, the problem hardness increases rapidly. For larger degrees of redundancy, only
minor increments in st can be observed. The reason for this strange behavior seems to be the
crossover operation. Niemczyk [2037] shows that a lower crossover rate makes experiments
involving the neutrality filter of the model problem very hard. We recommend using only
µ-values in the range from zero to eleven for testing the capability of optimization methods
of dealing with neutrality.

50.2.7.2.5 Epistasis and Neutrality

Our model problem consists of independent filters for the properties that may influence the
hardness of an optimization task. It is especially interesting to find out whether these filters
can be combined arbitrarily, i. e., if they are indeed free of interaction. In the ideal case, st
of an experiment with n = 80, µ = 8, and η = 0 added to the st of an experiment for n = 80,
µ = 0, and η = 4 should roughly equal the st of an experiment with n = 80, µ = 8, and η = 4.
In Figure 50.13, we have sketched these expected values (Fig. 50.13.a) and the results of the
corresponding real experiments (Fig. 50.13.b). In fact, these two diagrams are very similar.
The small valleys caused by the “easier” values of η (see Paragraph 50.2.7.2.3) occur in both
charts. The only difference is a stronger influence of the degree of neutrality.

50.2.7.2.6 Ruggedness and Epistasis

It is a well-known fact that epistasis leads to ruggedness, since it violates the causality as
discussed in Chapter 17. Combining the ruggedness and the epistasis filter therefore leads
to stronger interactions. In Fig. 50.14.b, the influence of ruggedness seems to be amplified
by the presence of epistasis when compared with the estimated results shown in Fig. 50.14.a.
Apart from this increase in problem hardness, the model problem behaves as expected. The
characteristic valleys stemming from the epistasis filter are clearly visible, for instance.

[Surface plots of st over the neutrality µ (1..9) and the epistasis η (1..9)]
Fig. 50.13.a: The expected results. Fig. 50.13.b: The experimentally obtained results.

Figure 50.13: Expectation and reality: Experiments involving both epistasis and neutrality

[Surface plots of st over the scaled ruggedness (0..10) and the epistasis η (3..9)]
Fig. 50.14.a: The expected results. Fig. 50.14.b: The experimentally obtained results.

Figure 50.14: Expectation and reality: Experiments involving both ruggedness and epistasis
50.2.7.3 Summary

In summary, this model problem has proven to be a viable approach for simulating prob-
lematic phenomena in optimization. It is

1. functional, i. e., allows us to simulate many problematic features,

2. tunable – each filter can be tuned independently,

3. easy to understand,

4. fast – it allows for very fast fitness computation, and

5. easily extensible – each filter can be replaced with other approaches for simulating the
same feature.

Niemczyk [2037] has written a stand-alone Java class implementing the model
problem which is provided at http://www.it-weise.de/documents/files/
TunableModel.java [accessed 2008-05-17]. This class allows setting the parameters discussed
in this section and provides methods for determining the objective values of individuals in
the form of byte arrays. In the future, some strange behaviors (like the irregularities in the
ruggedness filter and the gaps in epistasis) of the model need to be revisited, explained, and,
if possible, removed.

50.3 Real Problem Spaces

Mathematical benchmark functions are especially interesting for testing and comparing
techniques based on real vectors (X ⊆ Rn ) like plain Evolution Strategy (see Chapter 30
on page 359), Differential Evolution (see Chapter 33 on page 419), and Particle Swarm
Optimization (see Chapter 39 on page 481). However, they only require such vectors as
candidate solutions, i. e., elements of the problem space X. Hence, techniques with differ-
ent search spaces G, like Genetic Algorithms, can also be applied to them, given that a
genotype-phenotype mapping gpm : G 7→ X ⊆ Rn is provided accordingly.
The optima or the Pareto frontier of benchmark functions often have already been de-
termined theoretically. When applying an optimization algorithm to the functions, we are
interested in the number of candidate solutions which they need to process in order to find
the optima and how close we can get to them. They also give us a great opportunity to
find out about the influence of parameters like population size, the choice of the selection
algorithm, or the efficiency of reproduction operations.

50.3.1 Single-Objective Optimization

In this section, we list some of the most important benchmark functions for scenarios in-
volving only a single optimization criterion. This, however, does not mean that the search
space has only a single dimension – even a single-objective optimization can take place in
n-dimensional space Rn .
50.3. REAL PROBLEM SPACES 581
50.3.1.1 Sphere

The sphere functions listed by Suganthan et al. [2626] (given as F1 by De Jong [711] and
in [2934], and as f1 in [3020, 3026]), defined here in Table 50.2 as fRSO1 , are a very simple mea-
sure of the efficiency of optimization methods. They have, for instance, been used by Rechenberg
[2278] for testing his Evolution Strategy approach. Figure 50.15 illustrates fRSO1 for
one-dimensional and two-dimensional problem spaces X, the code snippet provided in
Listing 50.2 shows how the sphere function can be computed, and Table 50.3 lists some
results from the literature obtained by applying optimizers to the sphere function.
function:     fRSO1 (x) = ∑_{i=1}^{n} xi ^2   50.36
domain:       X = [−v, v]^n , v ∈ {5.12, 10, 100}   50.37
optimization: minimization
optimum:      x⋆ = (0, 0, .., 0)^T , fRSO1 (x⋆ ) = 0   50.38
separable:    yes
multimodal:   no
Table 50.2: Properties of the sphere function.

[Plots of fRSO1 : a 1d curve and a 2d surface over x ∈ [−100, 100]^n , with the optimum x⋆
at the origin]
Fig. 50.15.a: 1d Fig. 50.15.b: 2d

Figure 50.15: Examples for the sphere function (v = 100).

public static final double computeSphere(final double[] ind) {
  int i;
  double res;

  res = 0;
  for (i = (ind.length - 1); i >= 0; i--) {
    res += (ind[i] * ind[i]);
  }

  return res;
}
Listing 50.2: A Java code snippet computing the sphere function.

Table 50.3: Results obtained for the sphere function with τ function evaluations.
n, v        τ           fRSO1 (x)    Algorithm   Class   Reference
500, 100    2.5 · 10^6  2.41E-11     SaNSDE      DE      [3020]
500, 100    2.5 · 10^6  4.90E-8      FEPCC       EA      [3020]
500, 100    2.5 · 10^6  2.28E-21     DECC-O      DE      [3020]
500, 100    2.5 · 10^6  6.33E-27     DECC-G      DE      [3020]
1000, 100   5 · 10^6    6.97         SaNSDE      DE      [3020]
1000, 100   5 · 10^6    5.40E-8      FEPCC       EA      [3020]
1000, 100   5 · 10^6    1.77E-20     DECC-O      DE      [3020]
1000, 100   5 · 10^6    2.17E-25     DECC-G      DE      [3020]
50, 10      2.5 · 10^5  1.38E-18     LSRS        HC      [1140]
50, 10      1 · 10^7    1.534E1      GA                  [1140]
50, 10      1 · 10^7    1.0016E2     PSO                 [1140]
100, 10     2.5 · 10^5  6.94E-16     LSRS        HC      [1140]
100, 10     1 · 10^7    6.483E1      GA                  [1140]
100, 10     1 · 10^7    2.492E2      PSO                 [1140]
500, 10     2.5 · 10^6  9E-16        LSRS        HC      [1140]
500, 10     1 · 10^7    1.2E3        GA                  [1140]
500, 10     1 · 10^7    2.18035E3    PSO                 [1140]
1000, 10    2.5 · 10^6  1.25E-18     LSRS        HC      [1140]
1000, 10    1 · 10^7    3.45E3       GA                  [1140]
1000, 10    1 · 10^7    5.5E3        PSO                 [1140]
2000, 10    2.5 · 10^6  2.35E-33     LSRS        HC      [1140]
50.3.1.2 Schwefel’s Problem 2.22 (“Schwefel Function”)

This function is defined as f2 in [3026]. Here, we name it fRSO2 and list its features in
Table 50.4. In Figure 50.16, we sketch its one- and two-dimensional versions, Listing 50.3
contains a Java snippet showing how it can be computed, and Table 50.5 provides some
results for this function from the literature.
function     fRSO2 (x) = ∑_{i=1}^{n} |xi | + ∏_{i=1}^{n} |xi |   50.39
domain       X = [−v, v]^n , v ∈ {10}   50.40
optimization minimization
optimum      x⋆ = (0, 0, .., 0)^T , fRSO2 (x⋆ ) = 0   50.41
separable    yes
multimodal   no

Table 50.4: Properties of Schwefel’s problem 2.22.

[Plots of fRSO2 : a 1d curve and a 2d surface over x ∈ [−10, 10]^n , with the optimum x⋆
at the origin]
Fig. 50.16.a: 1d Fig. 50.16.b: 2d

Figure 50.16: Examples for Schwefel’s problem 2.22.

public static final double computeSchwefel222(final double[] ind) {
  int i;
  double r1, r2, x;

  r1 = 1d;
  r2 = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    x = Math.abs(ind[i]);
    r1 *= x;
    r2 += x;
  }

  return (r1 + r2);
}
Listing 50.3: A Java code snippet computing Schwefel’s problem 2.22.

Table 50.5: Results obtained for Schwefel’s problem 2.22 with τ function evaluations.
n, v        τ           fRSO2 (x)   Algorithm   Class   Reference
500, 10     2.5 · 10^6  5.27E-2     SaNSDE      DE      [3020]
500, 10     2.5 · 10^6  1.3E-3      FEPCC       EA      [3020]
500, 10     2.5 · 10^6  3.77E-10    DECC-O      DE      [3020]
500, 10     2.5 · 10^6  5.95E-15    DECC-G      DE      [3020]
500, 10     2.5 · 10^6  4.08E-19    LSRS        HC      [1140]
500, 10     1 · 10^7    6.0051E2    GA                  [1140]
500, 10     1 · 10^7    8.9119E2    PSO                 [1140]
1000, 10    5 · 10^6    1.24        SaNSDE      DE      [3020]
1000, 10    5 · 10^6    2.6E-3      FEPCC       EA      [3020]
1000, 10    5 · 10^6    +∞          DECC-O      DE      [3020]
1000, 10    5 · 10^6    5.37E-14    DECC-G      DE      [3020]
1000, 10    2.5 · 10^6  1.12E-17    LSRS        HC      [1140]
1000, 10    1 · 10^7    1.492E3     GA                  [1140]
1000, 10    1 · 10^7    4.1E47      PSO                 [1140]
50, 10      2.5 · 10^5  1.91E-11    LSRS        HC      [1140]
50, 10      1 · 10^7    1.346E1     GA                  [1140]
50, 10      1 · 10^7    4.68E1      PSO                 [1140]
100, 10     2.5 · 10^5  3.98E-10    LSRS        HC      [1140]
100, 10     1 · 10^7    4.007E1     GA                  [1140]
100, 10     1 · 10^7    1.0486E2    PSO                 [1140]
2000, 10    2.5 · 10^6  3.08E-17    LSRS        HC      [1140]
50.3.1.3 Schwefel’s Problem 1.2 (Quadric Function)

This function is defined as f3 in [3026] and is here introduced as fRSO3 in Table 50.6. In
Figure 50.17, we sketch this function for one and two dimensions, in Listing 50.4, we give a
Java code snippet showing how it can be computed, and Table 50.7 contains some results
from the literature.
 2
n
X Xi
function fRSO3 (x) =  xi  50.42
i=1 j=1
n
domain X = [−v, v] , v ∈ {10, 100} 50.43
optimization minimization
T
optimum x⋆ = (0, 0, .., 0) , fRSO3 (x⋆ ) = 0 50.44
separable no
multimodal no

Table 50.6: Properties of Schwefel’s problem 1.2.

[Plots of fRSO3 : a 1d curve and a 2d surface over x ∈ [−100, 100]^n , with the optimum x⋆
at the origin]
Fig. 50.17.a: 1d Fig. 50.17.b: 2d

Figure 50.17: Examples for Schwefel’s problem 1.2.

public static final double computeSchwefel12(final double[] ind) {
  int i, j;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = 0d;
    for (j = 0; j <= i; j++) {
      d += ind[j];
    }
    res += (d * d);
  }

  return res;
}
Listing 50.4: A Java code snippet computing Schwefel’s problem 1.2.

Table 50.7: Results obtained for Schwefel’s problem 1.2 with τ function evaluations.
n, v        τ           fRSO3 (x)   Algorithm   Class   Reference
500, 100    2.5 · 10^6  2.03E-8     SaNSDE      DE      [3020]
500, 100    2.5 · 10^6  –           FEPCC       EA      [3020]
500, 100    2.5 · 10^6  2.93E-19    DECC-O      DE      [3020]
500, 100    2.5 · 10^6  6.17E-25    DECC-G      DE      [3020]
1000, 100   5 · 10^6    6.43E1      SaNSDE      DE      [3020]
1000, 100   5 · 10^6    –           FEPCC       EA      [3020]
1000, 100   5 · 10^6    8.69E-18    DECC-O      DE      [3020]
1000, 100   5 · 10^6    3.71E-23    DECC-G      DE      [3020]
50, 10      2.5 · 10^5  6.27E-19    LSRS        HC      [1140]
50, 10      1 · 10^7    6.113E3     GA                  [1140]
50, 10      1 · 10^7    7.81E4      PSO                 [1140]
100, 10     2.5 · 10^5  1.15E-15    LSRS        HC      [1140]
100, 10     1 · 10^7    2.22E5      GA                  [1140]
100, 10     1 · 10^7    7.81E5      PSO                 [1140]
500, 10     2.5 · 10^6  4.31E-11    LSRS        HC      [1140]
500, 10     1 · 10^7    1.25E8      GA                  [1140]
500, 10     1 · 10^7    1.47E8      PSO                 [1140]
1000, 10    2.5 · 10^6  1.38E-29    LSRS        HC      [1140]
1000, 10    1 · 10^7    6.28E8      GA                  [1140]
1000, 10    1 · 10^7    1.61E6      PSO                 [1140]
2000, 10    2.5 · 10^6  1.69E-7     LSRS        HC      [1140]
50.3.1.4 Schwefel’s Problem 2.21

This function is defined as f4 in [3026] and is here introduced as fRSO4 in Table 50.8. In
Figure 50.18, we sketch this function for one and two dimensions, in Listing 50.5, we give a
Java code snippet showing how it can be computed, and Table 50.9 contains some results
from literature.

function fRSO4 (x) = maxi {|xi | : i ∈ 1..n} 50.45


n
domain X = [−v, v] , v ∈ {100} 50.46
optimization minimization
T
optimum x⋆ = (0, 0, .., 0) , fRSO4 (x⋆ ) = 0 50.47
separable yes
multimodal no

Table 50.8: Properties of Schwefel’s problem 2.21.

Fig. 50.18.a: 1d Fig. 50.18.b: 2d

Figure 50.18: Examples for Schwefel’s problem 2.21.

1 public static final double computeSchwefel221(final double[] ind) {


2 int i;
3 double res, d;
4
5 res = 0d;
6 for (i = (ind.length - 1); i >= 0; i--) {
7 d = ind[i];
8 if (d < 0d) {
9 d = -d;
10 }
11 if (d > res) {
12 res = d;
13 }
14 }
15
16 return res;
17 }
Listing 50.5: A Java code snippet computing Schwefel’s problem 2.21.

Table 50.9: Results obtained for Schwefel’s problem 2.21 with τ function evaluations.
n, v τ fRSO4 (x) Algorithm Class Reference
500, 100 2.5 · 106 4.07E1 SaNSDE DE [3020]
500, 100 2.5 · 106 9E-5 FEPCC EA [3020]
500, 100 2.5 · 106 6.01E1 DECC-O DE [3020]
500, 100 2.5 · 106 4.58E-5 DECC-G DE [3020]
1000, 100 5 · 106 4.99E1 SaNSDE DE [3020]
1000, 100 5 · 106 8.5E-5 FEPCC EA [3020]
1000, 100 5 · 106 7.92E1 DECC-O DE [3020]
1000, 100 5 · 106 1.01E1 DECC-G DE [3020]
50.3.1.5 Generalized Rosenbrock’s Function

Originally proposed by Rosenbrock [2333] and also known under the name “banana valley
function” [1841], this function is defined as f5 in [3026] and, in [2934], the two-dimensional
variant is given as F2 . The interesting feature of Rosenbrock’s function is that it is unimodal
for up to two dimensions but becomes multimodal for higher ones [2464]. Here, we introduce it
as fRSO5 in Table 50.10. In Figure 50.19, we sketch this function for two dimensions, in
Listing 50.6, we give a Java code snippet showing how it can be computed, and Table 50.11
contains some results from the literature.

function      fRSO5(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + (x_i - 1)^2 \right]      (50.48)
domain        X = [v1, v2]^n, v1 ∈ {−5, −30}, v2 ∈ {10, 30}      (50.49)
optimization  minimization
optimum       x⋆ = (1, 1, .., 1)^T, fRSO5(x⋆) = 0      (50.50)
separable     no
multimodal    no for n ≤ 2, yes otherwise

Table 50.10: Properties of the generalized Rosenbrock’s function.


Figure 50.19: The 2d-example for the generalized Rosenbrock’s function.

1 public static final double computeGenRosenbrock(final double[] ind) {


2 int i;
3 double res, ox, x, z;
4
5 res = 0d;
6 i = (ind.length - 1);
7 x = ind[i];
8 for (--i; i >= 0; i--) {
9 ox = x;
10 x = ind[i];
11
12     z = (ox - (x * x)); // x_{i+1} - x_i^2
13     res += (100d * (z * z));
14     z = (x - 1d);
15     res += (z * z); // (x_i - 1)^2
16 }
17
18 return res;
19 }
Listing 50.6: A Java code snippet computing the generalized Rosenbrock’s function.

Table 50.11: Results obtained for the generalized Rosenbrock’s function with τ function
evaluations.
n, X τ fRSO5 (x) Algorithm Class Reference
500, [−30, 30] 2.5 · 106 1.33E3 SaNSDE DE [3020]
500, [−30, 30] 2.5 · 106 – FEPCC EA [3020]
500, [−30, 30] 2.5 · 106 6.64E2 DECC-O DE [3020]
500, [−30, 30] 2.5 · 106 4.92E2 DECC-G DE [3020]

1000, [−30, 30] 5 · 106 3.31E3 SaNSDE DE [3020]


1000, [−30, 30] 5 · 106 – FEPCC EA [3020]
1000, [−30, 30] 5 · 106 1.48E3 DECC-O DE [3020]
1000, [−30, 30] 5 · 106 9.87E2 DECC-G DE [3020]
50, [−5, 10] 2.5 · 105 1.38E-18 LSRS HC [1140]
50, [−5, 10] 1 · 107 5.534E1 GA [1140]
50, [−5, 10] 1 · 107 5.9E4 PSO [1140]
100, [−5, 10] 2.5 · 105 6.94E-16 LSRS HC [1140]
100, [−5, 10] 1 · 107 2.74E4 GA [1140]
100, [−5, 10] 1 · 107 2.19E5 PSO [1140]
500, [−5, 10] 2.5 · 106 2.61E-11 LSRS HC [1140]
500, [−5, 10] 1 · 107 5.6662E2 GA [1140]
500, [−5, 10] 1 · 107 2.56E6 PSO [1140]
1000, [−5, 10] 2.5 · 106 7.41E-27 LSRS HC [1140]
1000, [−5, 10] 1 · 107 1.12E3 GA [1140]
1000, [−5, 10] 1 · 107 6.58E6 PSO [1140]
2000, [−5, 10] 2.5 · 106 1.48E-26 LSRS HC [1140]
50.3.1.6 Step Function

This function is defined as f6 in [3026] and a similar, five-dimensional function is provided


as F3 in [2934]. Here, the function is introduced as fRSO6 in Table 50.12. In Figure 50.20, we
sketch this function for one dimension, in Listing 50.7, we give a Java code snippet showing
how it can be computed, and Table 50.13 contains some results from literature.
function      fRSO6(x) = \sum_{i=1}^{n} \lfloor x_i + 0.5 \rfloor^2      (50.51)
domain        X = [−v, v]^n, v ∈ {10, 100}      (50.52)
optimization  minimization
optimum       X⋆ = [−0.5, 0.5)^n, fRSO6(x⋆) = 0 ∀ x⋆ ∈ X⋆      (50.53)
separable     yes
multimodal    no

Table 50.12: Properties of the step function.


Figure 50.20: The 1d-example for the step function (v = 10).

1 public static final double computeStepFunction(final double[] ind) {


2 int i;
3 double res, d;
4
5 res = 0d;
6 for (i = (ind.length - 1); i >= 0; i--) {
7 d = Math.floor(ind[i] + 0.5d);
8 res += (d * d);
9 }
10
11 return res;
12 }
Listing 50.7: A Java code snippet computing the step function.

Table 50.13: Results obtained for the step function with τ function evaluations.
n, v τ fRSO6 (x) Algorithm Class Reference
500, 100 2.5 · 106 3.12E2 SaNSDE DE [3020]
500, 100 2.5 · 106 0 FEPCC EA [3020]
500, 100 2.5 · 106 0 DECC-O DE [3020]
500, 100 2.5 · 106 0 DECC-G DE [3020]

1000, 100 5 · 106 3.93E3 SaNSDE DE [3020]


1000, 100 5 · 106 0 FEPCC EA [3020]
1000, 100 5 · 106 0 DECC-O DE [3020]
1000, 100 5 · 106 0 DECC-G DE [3020]
50.3.1.7 Quartic Function with Noise

This function is defined as f7 in [3026]. Whitley et al. [2934] define a similar function as F4
but use normally distributed noise instead of a uniformly distributed one. Here, the quartic
function with noise is introduced as fRSO7 in Table 50.14. In Figure 50.21, we sketch this
function for one and two dimensions, in Listing 50.8, we give a Java code snippet showing
how it can be computed, and Table 50.15 contains some results from literature.
function      fRSO7(x) = \sum_{i=1}^{n} i \cdot x_i^4 + randomUni[0, 1)      (50.54)
domain        X = [−v, v]^n, v ∈ {500}      (50.55)
optimization  minimization
optimum       x⋆ = (0, 0, .., 0)^T, fRSO7(x⋆) = 0      (50.56)
separable     yes
multimodal    no

Table 50.14: Properties of the quartic function with noise.

Fig. 50.21.a: 1d Fig. 50.21.b: 2d

Figure 50.21: Examples for the quartic function with noise.

1 private static final Random RAND = new Random();


2
3 public static final double computeQuarticN(final double[] ind) {
4 int i;
5 double res, d;
6
7 res = 0d;
8 for (i = (ind.length - 1); i >= 0; i--) {
9 d = ind[i];
10 d *= d;
11 d *= d;
12       res += ((i + 1) * d); // the index i in Equation 50.54 is 1-based
13 }
14
15 return res + (RAND.nextDouble());
16 }
Listing 50.8: A Java code snippet computing the quartic function with noise.

Table 50.15: Results obtained for the quartic function with noise with τ function evaluations.
n, v τ fRSO7 (x) Algorithm Class Reference
500, 500 2.5 · 106 1.28 SaNSDE DE [3020]
500, 500 2.5 · 106 – FEPCC EA [3020]
500, 500 2.5 · 106 1.04E1 DECC-O DE [3020]
500, 500 2.5 · 106 1.5E-3 DECC-G DE [3020]
1000, 500 5 · 106 1.18E1 SaNSDE DE [3020]
1000, 500 5 · 106 – FEPCC EA [3020]
1000, 500 5 · 106 2.21E1 DECC-O DE [3020]
1000, 500 5 · 106 8.4E-3 DECC-G DE [3020]
50.3.1.8 Generalized Schwefel’s Problem 2.26

This function is defined as f8 in [3026] and as F7 in [2934]. Here, we name it fRSO8 and list
its features in Table 50.16. In Figure 50.22, we sketch its one- and two-dimensional versions,
Listing 50.9 contains a Java snippet showing how it can be computed, and Table 50.17
provides some results for this function from the literature.
function      fRSO8(x) = -\sum_{i=1}^{n} x_i \sin\left( \sqrt{|x_i|} \right)      (50.57)
domain        X = [−v, v]^n, v ∈ {500}      (50.58)
optimization  minimization
optimum       v = 500, x⋆ = (420.9687, .., 420.9687)^T, fRSO8(x⋆) ≈ −418.9829 · n      (50.59)
separable     yes
multimodal    yes

Table 50.16: Properties of Schwefel’s problem 2.26.

Fig. 50.22.a: 1d Fig. 50.22.b: 2d

Figure 50.22: Examples for Schwefel’s problem 2.26.

1 public static final double computeSchwefel226(final double[] ind) {


2 int i;
3 double res, d;
4
5 res = 0d;
6 for (i = (ind.length - 1); i >= 0; i--) {
7 d = ind[i];
8 res += ((-d) * Math.sin(Math.sqrt(Math.abs(d))));
9 }
10
11 return res;
12 }
Listing 50.9: A Java code snippet computing Schwefel’s problem 2.26.

Table 50.17: Results obtained for Schwefel’s problem 2.26 with τ function evaluations.
n, v τ fRSO8 (x) Algorithm Class Reference
500, 500 2.5 · 106 -201796.5 SaNSDE DE [3020]
500, 500 2.5 · 106 -209316.4 FEPCC EA [3020]
500, 500 2.5 · 106 -209491.0 DECC-O DE [3020]
500, 500 2.5 · 106 -209491.0 DECC-G DE [3020]
1000, 500 5 · 106 -372991.0 SaNSDE DE [3020]
1000, 500 5 · 106 -418622.6 FEPCC EA [3020]
1000, 500 5 · 106 -418983 DECC-O DE [3020]
1000, 500 5 · 106 -418983 DECC-G DE [3020]
50.3.1.9 Generalized Rastrigin’s Function

This function is defined as f9 in [3026] and as F6 in [2934]. In the context of this work, we
refer to it as fRSO9 in Table 50.18. In Figure 50.23, we sketch this function for one and two
dimensions, in Listing 50.10, we give a Java code snippet showing how it can be computed,
and Table 50.19 contains some results from literature.
function      fRSO9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]      (50.60)
domain        X = [−v, v]^n, v ∈ {5.12}      (50.61)
optimization  minimization
optimum       x⋆ = (0, 0, .., 0)^T, fRSO9(x⋆) = 0      (50.62)
separable     yes
multimodal    yes

Table 50.18: Properties of the generalized Rastrigin’s function.

Fig. 50.23.a: 1d Fig. 50.23.b: 2d

Figure 50.23: Examples for the generalized Rastrigin’s function.

1 private static final double PI2 = Math.PI+Math.PI;


2
3 public static final double computeGenRastrigin(final double[] ind) {
4 int i;
5 double res, d;
6
7     res = 0d;
8 for (i = (ind.length - 1); i >= 0; i--) {
9 d = ind[i];
10 res += ((d * d) - (10d * Math.cos(PI2 * d)) + 10d);
11 }
12
13 return res;
14 }
Listing 50.10: A Java code snippet computing the generalized Rastrigin’s function.

Table 50.19: Results obtained for the generalized Rastrigin’s function with τ function eval-
uations.
n, v τ fRSO9 (x) Algorithm Class Reference
500, 5.12 2.5 · 106 2.84E2 SaNSDE DE [3020]
500, 5.12 2.5 · 106 1.43E-1 FEPCC EA [3020]
500, 5.12 2.5 · 106 1.76E1 DECC-O DE [3020]
500, 5.12 2.5 · 106 0 DECC-G DE [3020]
1000, 5.12 5 · 106 8.69E2 SaNSDE DE [3020]
1000, 5.12 5 · 106 3.13E-1 FEPCC EA [3020]
1000, 5.12 5 · 106 3.12E1 DECC-O DE [3020]
1000, 5.12 5 · 106 3.55E-16 DECC-G DE [3020]
50.3.1.10 Ackley’s Function
Ackley’s function is defined as f10 in [3026] and is here introduced as fRSO10 in Table 50.20.
In Figure 50.24, we sketch this function for one and two dimensions, in Listing 50.11, we
give a Java code snippet showing how it can be computed, and Table 50.21 contains some
results from literature.
function      fRSO10(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e      (50.63)
domain        X = [−v, v]^n, v ∈ {10, 32}      (50.64)
optimization  minimization
optimum       x⋆ = (0, 0, .., 0)^T, fRSO10(x⋆) = 0      (50.65)
separable     yes
multimodal    yes

Table 50.20: Properties of Ackley’s function.

Fig. 50.24.a: 1d Fig. 50.24.b: 2d

Figure 50.24: Examples for Ackley’s function.

1 private static final double PI2 = (Math.PI + Math.PI);


2
3 public static final double computeAckley(final double[] ind) {
4 int i;
5 final int dim;
6 double r1, r2, d;
7
8 r1 = 0d;
9 r2 = 0d;
10 dim = ind.length;
11 for (i = (dim - 1); i >= 0; i--) {
12 d = ind[i];
13 r1 += (d * d);
14 r2 += Math.cos(PI2 * d);
15 }
16
17 r1 /= dim;
18 r2 /= dim;
19
20 return ((-20d * Math.exp(-0.2 * Math.sqrt(r1))) - Math.exp(r2) + 20d + Math.E);
21 }
Listing 50.11: A Java code snippet computing Ackley’s function.

Table 50.21: Results obtained for Ackley’s function with τ function evaluations.
n, v τ fRSO10 (x) Algorithm Class Reference
500, 32 2.5 · 106 7.88 SaNSDE DE [3020]
500, 32 2.5 · 106 5.7E-4 FEPCC EA [3020]
500, 32 2.5 · 106 1.86E-11 DECC-O DE [3020]
500, 32 2.5 · 106 9.13E-14 DECC-G DE [3020]
1000, 32 5 · 106 1.12E1 SaNSDE DE [3020]
1000, 32 5 · 106 9.5E-4 FEPCC EA [3020]
1000, 32 5 · 106 4.39E-11 DECC-O DE [3020]
1000, 32 5 · 106 2.22E-13 DECC-G DE [3020]

50, 10 2.5 · 105 0 LSRS HC [1140]


50, 10 1 · 107 3.38 GA [1140]
50, 10 1 · 107 6.055 PSO [1140]
100, 10 2.5 · 105 0 LSRS HC [1140]
100, 10 1 · 107 4.43 GA [1140]
100, 10 1 · 107 6.78 PSO [1140]
500, 10 2.5 · 106 0 LSRS HC [1140]
500, 10 1 · 107 7.21 GA [1140]
500, 10 1 · 107 8.14 PSO [1140]
1000, 10 2.5 · 106 1.3E-18 LSRS HC [1140]
1000, 10 1 · 107 7.87 GA [1140]
1000, 10 1 · 107 9.02 PSO [1140]
2000, 10 2.5 · 106 0 LSRS HC [1140]
50.3.1.11 Generalized Griewank Function

The generalized Griewank function (also called Griewangk function [1106, 2934]) is defined
as f11 in [3026] and as F8 in [2934]. Locatelli [1755] showed that this function becomes
easier to optimize with increasing dimension n, and Cho et al. [569] found a way to derive
the number of (local) minima of this function (which grows exponentially with n). Here, we
name the function fRSO11 and list its features in Table 50.22. In Figure 50.25, we sketch its one-
and two-dimensional versions, Listing 50.12 contains a Java snippet showing how it can be
computed, and Table 50.23 provides some results for this function from the literature.

function      fRSO11(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1      (50.66)
domain        X = [−v, v]^n, v ∈ {600}      (50.67)
optimization  minimization
optimum       x⋆ = (0, 0, .., 0)^T, fRSO11(x⋆) = 0      (50.68)
separable     no
multimodal    yes

Table 50.22: Properties of the generalized Griewank function.

Fig. 50.25.a: 1d Fig. 50.25.b: 2d

Figure 50.25: Examples for the generalized Griewank function.

1 public static final double computeGenGriewank(final double[] ind) {


2 int i;
3 double r1, r2, d;
4
5 r1 = 0d;
6 r2 = 1d;
7 for (i = (ind.length - 1); i >= 0; i--) {
8 d = ind[i];
9 r1 += (d * d);
10 r2 *= (Math.cos(d / Math.sqrt(i + 1)));
11 }
12
13 return (1d + (r1 / 4000d) - r2);
14 }
Listing 50.12: A Java code snippet computing the generalized Griewank function.
Table 50.23: Results obtained for the generalized Griewank function with τ function evalu-
ations.
n, v τ fRSO11 (x) Algorithm Class Reference
500, 600 2.5 · 106 1.82E-1 SaNSDE DE [3020]
500, 600 2.5 · 106 2.9E-2 FEPCC EA [3020]
500, 600 2.5 · 106 5.02E-16 DECC-O DE [3020]
500, 600 2.5 · 106 4.4E-16 DECC-G DE [3020]
1000, 600 5 · 106 4.8E-1 SaNSDE DE [3020]
1000, 600 5 · 106 2.5E-2 FEPCC EA [3020]
1000, 600 5 · 106 2.04E-15 DECC-O DE [3020]
1000, 600 5 · 106 1.01E-15 DECC-G DE [3020]
50.3.1.12 Generalized Penalized Function 1

This function is defined as f12 in [3026] and is here introduced as fRSO12 in Table 50.24. In
Figure 50.26, we sketch this function for one and two dimensions, in Listing 50.13, we give a
Java code snippet showing how it can be computed, and Table 50.25 contains some results
from literature.
function      fRSO12(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)      (50.69)

              u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & \text{if } x_i > a \\ 0 & \text{if } -a \leq x_i \leq a \\ k (-x_i - a)^m & \text{otherwise} \end{cases}      (50.70)

              y_i = 1 + \frac{x_i + 1}{4}      (50.71)
domain        X = [−v, v]^n, v ∈ {50}      (50.72)
optimization  minimization
optimum       x⋆ = (−1, −1, .., −1)^T, fRSO12(x⋆) = 0      (50.73)
separable     yes
multimodal    yes

Table 50.24: Properties of the generalized penalized function 1.

Fig. 50.26.a: 1d Fig. 50.26.b: 2d

Figure 50.26: Examples for the generalized penalized function 1.

1 private static final double y(final double d) {


2 return (1d + 0.25 * (d + 1d));
3 }
4
5 private static final double u(final double x, final int a, final int k,
6 final int m) {
7 if (x > a) {
8 return (k * Math.pow(x - a, m));
9 }
10 if (x < (-a)) {
11 return (k * Math.pow(-x - a, m));
12 }
13 return 0d;
14 }
15
16 public static final double computePenalizedA(final double[] ind) {
17 int i;
18 double r1, y, ly, s;
19
20 r1 = Math.sin(Math.PI * y(ind[0]));
21 r1 *= r1;
22 r1 *= 10d;
23
24 i = (ind.length - 1);
25 y = y(ind[i]);
26 r1 += ((y - 1) * (y - 1));
27
28 for (--i; i >= 0; i--) {
29 ly = y;
30 y = y(ind[i]);
31 s = Math.sin(Math.PI * ly);
32
33 r1 += ((y - 1) * (y - 1)) * (1d + (10d * s * s));
34 }
35
36 r1 *= Math.PI;
37 i = ind.length;
38 r1 /= i;
39
40 for (--i; i >= 0; i--) {
41 r1 += u(ind[i], 10, 100, 4);
42 }
43
44 return r1;
45 }
Listing 50.13: A Java code snippet computing the generalized penalized function 1.

Table 50.25: Results obtained for the generalized penalized function 1 with τ function
evaluations.
n, v τ fRSO12 (x) Algorithm Class Reference
500, 50 2.5 · 106 2.96 SaNSDE DE [3020]
500, 50 2.5 · 106 – FEPCC EA [3020]
500, 50 2.5 · 106 2.17E-25 DECC-O DE [3020]
500, 50 2.5 · 106 4.29E-21 DECC-G DE [3020]
1000, 50 5 · 106 2.96 SaNSDE DE [3020]
1000, 50 5 · 106 – FEPCC EA [3020]
1000, 50 5 · 106 1.08E-24 DECC-O DE [3020]
1000, 50 5 · 106 6.89E-25 DECC-G DE [3020]
50.3.1.13 Generalized Penalized Function 2

This function is defined as f13 in [3026] and is here introduced as fRSO13 in Table 50.26. In
Figure 50.27, we sketch this function for one and two dimensions, in Listing 50.14, we give a
Java code snippet showing how it can be computed, and Table 50.27 contains some results
from literature.
function      fRSO13(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)      (50.74)

              see Equation 50.70 for u(x_i, a, k, m)
domain        X = [−v, v]^n, v ∈ {50}      (50.75)
optimization  minimization
optimum       x⋆ = (1, 1, .., 1)^T, fRSO13(x⋆) = 0      (50.76)
separable     yes
multimodal    yes

Table 50.26: Properties of the generalized penalized function 2.

Fig. 50.27.a: 1d Fig. 50.27.b: 2d

Figure 50.27: Examples for the generalized penalized function 2.

1 private static final double PI2 = (Math.PI + Math.PI);


2 private static final double PI3 = (PI2 + Math.PI);
3
4 private static final double u(final double x, final int a, final int k,
5 final int m) {
6 if (x > a) {
7 return (k * Math.pow(x - a, m));
8 }
9 if (x < (-a)) {
10 return (k * Math.pow(-x - a, m));
11 }
12 return 0d;
13 }
14
15 public static final double computePenalizedB(final double[] ind) {
16   double res, x, ox, xm1, s;
17   int i;
18
19   i = (ind.length - 1);
20   x = ind[i];
21
22   xm1 = (x - 1d);
23   s = Math.sin(PI2 * x);
24   res = ((xm1 * xm1) * (1d + (s * s)));
25
26   for (--i; i >= 0; i--) {
27     ox = x;
28     x = ind[i];
29
30     s = Math.sin(PI3 * ox);
31     xm1 = (x - 1d);
32     res += ((xm1 * xm1) * (1d + (s * s)));
33   }
34
35   s = Math.sin(PI3 * x);
36   res += (s * s);
37   res *= 0.1d;
38
39   for (i = (ind.length - 1); i >= 0; i--) {
40     res += u(ind[i], 5, 100, 4);
41   }
42
43   return res;
44 }
Listing 50.14: A Java code snippet computing the generalized penalized function 2.

Table 50.27: Results obtained for the generalized penalized function 2 with τ function
evaluations.
n, v τ fRSO13 (x) Algorithm Class Reference
500, 50 2.5 · 106 1.89E2 SaNSDE DE [3020]
500, 50 2.5 · 106 – FEPCC EA [3020]
500, 50 2.5 · 106 5.04E-23 DECC-O DE [3020]
500, 50 2.5 · 106 5.34E-18 DECC-G DE [3020]
1000, 50 5 · 106 7.41E2 SaNSDE DE [3020]
1000, 50 5 · 106 – FEPCC EA [3020]
1000, 50 5 · 106 4.82E-22 DECC-O DE [3020]
1000, 50 5 · 106 2.55E-21 DECC-G DE [3020]
50.3.1.14 Levy’s Function

This function is defined in [1140].


function      fRSO14(x) = \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_i + 1) \right] + (y_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right]      (50.77)

              y_i = 1 + \frac{x_i - 1}{4}      (50.78)
domain        X = [−v, v]^n, v ∈ {10}      (50.79)
optimization  minimization
optimum       x⋆ = (1, 1, .., 1)^T, fRSO14(x⋆) = 0      (50.80)
separable     yes
multimodal    yes

Table 50.28: Properties of Levy’s function.

Fig. 50.28.a: 1d Fig. 50.28.b: 2d

Figure 50.28: Examples for Levy’s function.

Table 50.29: Results obtained for Levy’s function with τ function evaluations.
n, v τ fRSO14 (x) Algorithm Class Reference
50, 10 2.5 · 105 2.9E-39 LSRS HC [1140]
50, 10 1 · 107 3.27E-1 GA [1140]
50, 10 1 · 107 1.888E1 PSO [1140]
100, 10 2.5 · 105 2.9E-39 LSRS HC [1140]
100, 10 1 · 107 6.509E-1 GA [1140]
100, 10 1 · 107 5.572E1 PSO [1140]
500, 10 2.5 · 106 2.9E-39 LSRS HC [1140]
500, 10 1 · 107 3.33 GA [1140]
500, 10 1 · 107 5.4602E2 PSO [1140]
1000, 10 2.5 · 106 2.9E-39 LSRS HC [1140]
1000, 10 1 · 107 7.17 GA [1140]
1000, 10 1 · 107 1.62E3 PSO [1140]
2000, 10 2.5 · 106 2.9E-39 LSRS HC [1140]
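For Levy's function, no listing is given in this section. The following sketch computes Equation 50.77 under the assumption y_i = 1 + (x_i − 1)/4, which makes the stated optimum x⋆ = (1, .., 1) yield fRSO14(x⋆) = 0; the method name computeLevy is our own choice and not taken from [1140].

```java
public static final double computeLevy(final double[] ind) {
  final int n = ind.length;
  double res, y, s;
  int i;

  // first term: sin^2(pi * y1)
  y = 1d + (0.25d * (ind[0] - 1d));
  s = Math.sin(Math.PI * y);
  res = (s * s);

  // sum for i = 1..n-1: (y_i - 1)^2 * (1 + 10 * sin^2(pi * y_i + 1))
  for (i = 0; i < (n - 1); i++) {
    y = 1d + (0.25d * (ind[i] - 1d));
    s = Math.sin((Math.PI * y) + 1d);
    res += (((y - 1d) * (y - 1d)) * (1d + (10d * s * s)));
  }

  // last term: (y_n - 1)^2 * (1 + sin^2(2 * pi * x_n))
  y = 1d + (0.25d * (ind[n - 1] - 1d));
  s = Math.sin((Math.PI + Math.PI) * ind[n - 1]);
  res += (((y - 1d) * (y - 1d)) * (1d + (s * s)));

  return res;
}
```

At the optimum x⋆ = (1, 1, .., 1), all y_i equal 1 and every term vanishes (up to floating-point rounding of sin(π)).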
50.3.1.15 Shifted Sphere

The sphere function given in Section 50.3.1.1 is shifted in X by a vector o and in the
objective space by a bias fbias . This function is defined as F1 in [2626, 2627] and [2668].
Here, we specify it as fRSO15 in Table 50.30 and provide a Java snippet showing how it can
be computed in Listing 50.15 before listing some results from the literature in Table 50.31.
function:      fRSO15(x) = \sum_{i=1}^{n} z_i^2 + f_{bias}      (50.81)

               z_i = x_i - o_i      (50.82)
domain:        X = [−100, 100]^n      (50.83)
optimization:  minimization
optimum:       x⋆ = (o_1, o_2, .., o_n)^T, fRSO15(x⋆) = f_{bias}      (50.84)
separable:     yes
multimodal:    no
shifted:       yes
rotated:       no
scalable:      no

Table 50.30: Properties of the shifted sphere function.

1 private static final double[] DATA = new double[] {


2 // shift data oi , see Tang et al. [2668]
3 };
4
5 public static final double computeShiftedSphere(final double[] ind) {
6 int i;
7 double res, x;
8
9 res = 0;
10 for (i = (ind.length - 1); i >= 0; i--) {
11 x = (ind[i] - DATA[i]);
12 res += (x * x);
13 }
14
15 return res + -450.0d; // bias fbias = −450, see [2668]
16 }
Listing 50.15: A Java code snippet computing the shifted sphere function.

Table 50.31: Results obtained for the shifted sphere function with τ function evaluations.
n, fbias τ fRSO15 (x) Algorithm Class Reference
500, 0 2.5 · 106 2.61E-11 SaNSDE DE [3020]
500, 0 2.5 · 106 1.04E-12 DECC-O DE [3020]
500, 0 2.5 · 106 3.71E-13 DECC-G DE [3020]
1000, 0 5 · 106 1.17 SaNSDE DE [3020]
1000, 0 5 · 106 3.66E-8 DECC-O DE [3020]
1000, 0 5 · 106 6.84E-13 DECC-G DE [3020]
50.3.1.16 Shifted Rotated High Conditioned Elliptic Function

The high conditioned elliptic function is shifted in X by a vector o, rotated by the orthogonal
matrix M, and shifted in the objective space by a bias fbias. This function is defined as F3 in [2626, 2627].
function:      fRSO16(x) = \sum_{i=1}^{n} \left( 10^6 \right)^{\frac{i-1}{n-1}} z_i^2 + f_{bias}      (50.85)

               z = (x - o) * M      (50.86)
domain:        X = [−100, 100]^n      (50.87)
optimization:  minimization
optimum:       x⋆ = (o_1, o_2, .., o_n)^T, fRSO16(x⋆) = f_{bias}      (50.88)
separable:     no
multimodal:    no
shifted:       yes
rotated:       yes
scalable:      no

Table 50.32: Properties of the shifted and rotated elliptic function.

Table 50.33: Results obtained for the shifted and rotated elliptic function with τ function
evaluations.
n, fbias τ fRSO16 (x) Algorithm Class Reference
500, ? 2.5 · 106 6.88E8 SaNSDE DE [3020]
500, ? 2.5 · 106 4.78E8 DECC-O DE [3020]
500, ? 2.5 · 106 3.06E8 DECC-G DE [3020]
1000, ? 5 · 106 2.34E9 SaNSDE DE [3020]
1000, ? 5 · 106 1.08E9 DECC-O DE [3020]
1000, ? 5 · 106 8.11E8 DECC-G DE [3020]
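No listing accompanies this function, so the following sketch shows how fRSO16 could be computed. The shift vector o, the rotation matrix m (corresponding to M), and the bias fBias are passed as parameters here, because the actual data must be taken from the material accompanying [2626, 2627]; the sketch assumes n ≥ 2 row vectors, i.e., z = (x − o) * M.

```java
public static final double computeShiftedRotElliptic(final double[] ind,
    final double[] o, final double[][] m, final double fBias) {
  final int n = ind.length;
  final double[] z = new double[n];
  double res, d;
  int i, j;

  // z = (x - o) * M, treating x, o, and z as row vectors
  for (j = 0; j < n; j++) {
    d = 0d;
    for (i = 0; i < n; i++) {
      d += ((ind[i] - o[i]) * m[i][j]);
    }
    z[j] = d;
  }

  // sum of (10^6)^((i-1)/(n-1)) * z_i^2 (the loop index is 0-based)
  res = 0d;
  for (i = 0; i < n; i++) {
    d = z[i];
    res += (Math.pow(1e6, ((double) i) / (n - 1)) * d * d);
  }

  return res + fBias;
}
```

With o = 0 and M the identity matrix, the function reduces to the plain high conditioned elliptic function plus fBias.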
50.3.1.17 Shifted Schwefel’s Problem 2.21

Schwefel’s problem 2.21 given in Section 50.3.1.4 is shifted in X by a vector o and in the
objective space by a bias fbias = −450. This function is named F2 in [2668]. In Table 50.34,
we define it as fRSO17 and in Listing 50.16, we provide a Java snippet showing how it can be
computed.

function:      fRSO17(x) = \max_i \{ |z_i| \} + f_{bias}      (50.89)

               z_i = x_i - o_i      (50.90)
domain:        X = [−100, 100]^n      (50.91)
optimization:  minimization
optimum:       x⋆ = (o_1, o_2, .., o_n)^T, fRSO17(x⋆) = f_{bias}      (50.92)
separable:     no
multimodal:    no
shifted:       yes
rotated:       no
scalable:      yes

Table 50.34: Properties of shifted Schwefel’s problem 2.21.

1 private static final double[] DATA = new double[] {


2 // shift data oi , see material accompanying Tang et al. [2668]
3 };
4
5 public static final double computeShiftedSchwefel221(final double[] ind) {
6 int i;
7 double res, d;
8
9 res = 0d;
10     for (i = (ind.length - 1); i >= 0; i--) {
11 d = ind[i] - DATA[i];
12 if (d < 0d) {
13 d = -d;
14 }
15 if (d > res) {
16 res = d;
17 }
18 }
19
20 return res + -450.0d; // bias fbias = −450, see [2668]
21 }
Listing 50.16: A Java code snippet computing shifted Schwefel’s problem 2.21.
50.3.1.18 Schwefel’s Problem 2.21 with Optimum on Bounds

This function is defined as F5 in [2626, 2627]. In Equation 50.93, we define it as fRSO18. In
the table, A is an n × n matrix and its ith row is Ai. det(A) ≠ 0 holds, and A is initialized
with random numbers from [−500, 500]. Bi is a scalar computed by multiplying Ai with the
optimum vector o, i. e., Bi = Ai ∗ o. o is initialized with random
numbers from [−100, 100].

function:      fRSO18(x) = \max_i \{ |A_i x - B_i| \} + f_{bias}      (50.93)
domain:        X = [−100, 100]^n      (50.94)
optimization:  minimization
optimum:       x⋆ = (o_1, o_2, .., o_n)^T, fRSO18(x⋆) = f_{bias}      (50.95)
separable:     yes
multimodal:    no
shifted:       yes
rotated:       no
scalable:      no

Table 50.35: Properties of Schwefel’s modified problem 2.21 with optimum on the bounds.

Table 50.36: Results obtained for Schwefel’s modified problem 2.21 with optimum on the
bounds with τ function evaluations.
n, fbias τ fRSO18 (x) Algorithm Class Reference
500, 0 2.5 · 106 4.96E5 SaNSDE DE [3020]
500, 0 2.5 · 106 2.4E5 DECC-O DE [3020]
500, 0 2.5 · 106 1.15E5 DECC-G DE [3020]
1000, 0 5 · 106 5.03E5 SaNSDE DE [3020]
1000, 0 5 · 106 3.73E5 DECC-O DE [3020]
1000, 0 5 · 106 2.2E5 DECC-G DE [3020]
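No listing is given for this function, so the following sketch shows one way to compute fRSO18. The matrix a (corresponding to A), the vector b (corresponding to B), and the bias fBias are parameters here, because A and o must be initialized randomly as described above; following the F5 definition in [2626, 2627], the absolute value |A_i x − B_i| is taken inside the maximum.

```java
public static final double computeSchwefel221Bounds(final double[] ind,
    final double[][] a, final double[] b, final double fBias) {
  double res, d;
  int i, j;

  res = 0d;
  for (i = 0; i < a.length; i++) {
    // compute A_i x - B_i for the i-th row of A
    d = -b[i];
    for (j = 0; j < ind.length; j++) {
      d += (a[i][j] * ind[j]);
    }
    // take the absolute value and keep the maximum
    if (d < 0d) {
      d = -d;
    }
    if (d > res) {
      res = d;
    }
  }

  return res + fBias;
}
```

At x = o, every A_i x equals B_i by construction, so the maximum is 0 and the function value is fBias.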
50.3.1.19 Shifted Rosenbrock’s Function

The generalized Rosenbrock function given in Section 50.3.1.5 is shifted in X by a vector o
and in the objective space by a bias fbias. This function is defined as F6 in [2626, 2627] and as
F3 in [2668]. Here, we specify it as fRSO19 in Table 50.37 and provide a Java snippet showing
how it can be computed in Listing 50.17 before listing some results from the literature in
Table 50.38.

function:      fRSO19(x) = \sum_{i=1}^{n-1} \left[ 100 \left( z_i^2 - z_{i+1} \right)^2 + (z_i - 1)^2 \right] + f_{bias}      (50.96)

               z_i = x_i - o_i + 1      (50.97)
domain:        X = [−100, 100]^n      (50.98)
optimization:  minimization
optimum:       x⋆ = (o_1, o_2, .., o_n)^T, fRSO19(x⋆) = f_{bias}      (50.99)
separable:     no
multimodal:    yes
shifted:       yes
rotated:       no
scalable:      yes

Table 50.37: Properties of the shifted Rosenbrock’s function.

1 private static final double[] DATA = new double[] {


2 // shift data oi , see material accompanying Tang et al. [2668]
3 };
4
5 public static final double computeShiftedRosenbrock(final double[] ind) {
6 int i;
7 double res, ox, x, z;
8
9 res = 0d;
10 i = (ind.length - 1);
11 x = ind[i] - DATA[i] + 1;
12 for (--i; i >= 0; i--) {
13 ox = x;
14 x = ind[i] - DATA[i] + 1;
15
16     z = ((x * x) - ox); // z_i^2 - z_{i+1}
17     res += (100d * (z * z));
18     z = (x - 1d);
19     res += (z * z); // (z_i - 1)^2
20 }
21
22 return res + 390.0d; // bias fbias = 390, see [2668]
23 }
Listing 50.17: A Java code snippet computing the shifted Rosenbrock’s function.

Table 50.38: Results obtained for the shifted Rosenbrock’s function with τ function evalua-
tions.
n, fbias τ fRSO19 (x) Algorithm Class Reference
500, ? 2.5 · 106 2.71E3 SaNSDE DE [3020]
500, ? 2.5 · 106 1.71E3 DECC-O DE [3020]
500, ? 2.5 · 106 1.56E3 DECC-G DE [3020]

1000, ? 5 · 106 1.35E4 SaNSDE DE [3020]


1000, ? 5 · 106 3.13E3 DECC-O DE [3020]
1000, ? 5 · 106 2.22E3 DECC-G DE [3020]
50.3.1.20 Shifted Ackley’s Function

Ackley’s function provided in Section 50.3.1.10 is shifted in X by a vector o and in the
objective space by a bias fbias = −140. This function is defined as F6 in [2668]. Here,
we specify it as fRSO20 in Table 50.39 and provide a Java snippet showing how it can be
computed in Listing 50.18.
function      fRSO20(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} z_i^2 } \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi z_i) \right) + 20 + e + f_{bias}      (50.100)

              z_i = x_i - o_i      (50.101)
domain        X = [−v, v]^n, v ∈ {10, 32}      (50.102)
optimization  minimization
optimum       x⋆ = (o_1, o_2, .., o_n)^T, fRSO20(x⋆) = f_{bias}      (50.103)
separable     yes
multimodal    yes
shifted       yes
rotated       no
scalable      yes

Table 50.39: Properties of shifted Ackley’s function.

1 private static final double PI2 = (Math.PI + Math.PI);


2
3 private static final double[] DATA = new double[] {
4 // shift data oi , see material accompanying Tang et al. [2668]
5 };
6
7
8 public static final double computeShiftedAckley(final double[] ind) {
9 int i;
10 final int dim;
11 double r1, r2, d;
12
13 r1 = 0d;
14 r2 = 0d;
15 dim = ind.length;
16 for (i = (dim - 1); i >= 0; i--) {
17 d = ind[i] - DATA[i];
18 r1 += (d * d);
19 r2 += Math.cos(PI2 * d);
20 }
21
22 r1 /= dim;
23 r2 /= dim;
24
25 return ((-20d * Math.exp(-0.2 * Math.sqrt(r1))) -
26 Math.exp(r2) + 20d + Math.E) +
27 -140.0d; // bias fbias = −140, see [2668]
28 }
Listing 50.18: A Java code snippet computing shifted Ackley’s function.
50.3.1.21 Shifted Rotated Ackley’s Function

Ackley’s function given in Section 50.3.1.10 is shifted in X by a vector o, rotated with
a matrix M, and shifted in the objective space by a bias fbias. This function is defined as
F8 in [2626, 2627].
function      fRSO21(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} z_i^2 } \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi z_i) \right) + 20 + e + f_{bias}      (50.104)

              z = (x - o) * M      (50.105)
domain        X = [−32, 32]^n      (50.106)
optimization  minimization
optimum       x⋆ = (o_1, o_2, .., o_n)^T, fRSO21(x⋆) = f_{bias}      (50.107)
separable     no
multimodal    yes
shifted       yes
rotated       yes
scalable      yes

Table 50.40: Properties of the shifted and rotated Ackley’s function.

Table 50.41: Results obtained for the shifted and rotated Ackley’s function with τ function
evaluations.
n, fbias τ fRSO21 (x) Algorithm Class Reference
500, ? 2.5 · 106 2.15E1 SaNSDE DE [3020]
500, ? 2.5 · 106 2.14E1 DECC-O DE [3020]
500, ? 2.5 · 106 2.16E1 DECC-G DE [3020]
1000, ? 5 · 106 2.16E1 SaNSDE DE [3020]
1000, ? 5 · 106 2.14E1 DECC-O DE [3020]
1000, ? 5 · 106 2.16E1 DECC-G DE [3020]
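Since no listing accompanies this function, the following sketch shows how fRSO21 could be computed. As for the elliptic function, the shift vector o, the rotation matrix m, and the bias fBias are parameters, because the actual data must be taken from the material accompanying [2626, 2627]; the sketch assumes z = (x − o) * M with row vectors and includes the +e term so that fRSO21(x⋆) = fbias.

```java
public static final double computeShiftedRotAckley(final double[] ind,
    final double[] o, final double[][] m, final double fBias) {
  final int n = ind.length;
  final double[] z = new double[n];
  double r1, r2, d;
  int i, j;

  // z = (x - o) * M, treating x, o, and z as row vectors
  for (j = 0; j < n; j++) {
    d = 0d;
    for (i = 0; i < n; i++) {
      d += ((ind[i] - o[i]) * m[i][j]);
    }
    z[j] = d;
  }

  // the two sums of Ackley's function, computed over z
  r1 = 0d;
  r2 = 0d;
  for (i = 0; i < n; i++) {
    d = z[i];
    r1 += (d * d);
    r2 += Math.cos((Math.PI + Math.PI) * d);
  }
  r1 /= n;
  r2 /= n;

  return (-20d * Math.exp(-0.2d * Math.sqrt(r1))) - Math.exp(r2)
      + 20d + Math.E + fBias;
}
```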
50.3.1.22 Shifted Rastrigin’s Function

The generalized Rastrigin’s function given in Section 50.3.1.9 is shifted in X by a vector o
and in the objective space by a bias fbias = −330. This function is defined as F9 in [2626,
2627] and as F4 in [2668]. Here, we specify it as fRSO22 in Table 50.42 and provide a Java
snippet showing how it can be computed in Listing 50.19 before listing some results from
the literature in Table 50.43.
function:      fRSO22(x) = \sum_{i=1}^{n} \left[ z_i^2 - 10 \cos(2 \pi z_i) + 10 \right] + f_{bias}      (50.108)

               z_i = x_i - o_i      (50.109)
domain:        X = [−5, 5]^n      (50.110)
optimization:  minimization
optimum:       x⋆ = (o_1, o_2, .., o_n)^T, fRSO22(x⋆) = f_{bias}      (50.111)
separable:     yes
multimodal:    yes
shifted:       yes
rotated:       no
scalable:      yes

Table 50.42: Properties of the shifted Rastrigin’s function.

1 private static final double PI2 = Math.PI+Math.PI;
2
3 private static final double[] DATA = new double[] {
4 // shift data oi , see material accompanying Tang et al. [2668]
5 };
6
7 public static final double computeShiftedRastrigin(final double[] ind) {
8 int i;
9 double res, d;
10
11 res = 0d;
12 for (i = (ind.length - 1); i >= 0; i--) {
13 d = ind[i] - DATA[i];
14 res += ((d * d) - (10d * Math.cos(PI2 * d)) + 10d);
15 }
16
17
18 return res + -330.0d; // bias fbias = −330, see [2668]
19 }
Listing 50.19: A Java code snippet computing the shifted Rastrigin’s function.

Table 50.43: Results obtained for the shifted Rastrigin’s function with τ function evaluations.
n, fbias τ fRSO22 (x) Algorithm Class Reference
500, ? 2.5 · 106 6.6E2 SaNSDE DE [3020]
500, ? 2.5 · 106 8.66 DECC-O DE [3020]
500, ? 2.5 · 106 4.5E2 DECC-G DE [3020]
1000, ? 5 · 106 3.2E3 SaNSDE DE [3020]
1000, ? 5 · 106 8.96E1 DECC-O DE [3020]
1000, ? 5 · 106 6.32E2 DECC-G DE [3020]
50.3.1.23 Shifted and Rotated Rastrigin’s Function

The generalized Rastrigin’s function given in Section 50.3.1.9 is shifted in X by a vector o, rotated by a matrix M, and shifted in the objective space by a bias fbias . This function is defined as F10 in [2626, 2627].
function: fRSO23 (x) = Σ_{i=1..n} (zi² − 10 cos(2πzi) + 10) + fbias (50.112)
z = (x − o) ∗ M (50.113)
domain: X = [−5, 5]^n (50.114)
optimization: minimization
optimum: x⋆ = (o1, o2, .., on)^T, fRSO23 (x⋆) = fbias (50.115)
separable: no
multimodal: yes
shifted: yes
rotated: yes
scalable: yes

Table 50.44: Properties of the shifted and rotated Rastrigin’s function.
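As for the rotated Ackley variant, no snippet is printed in the text. A sketch of our own (with o and M as parameters that must be loaded from the data accompanying [2626, 2627]) could look as follows; z = (x − o) ∗ M follows Equation 50.113.

```java
// Sketch only: computes f_RSO23 for a given shift vector o, rotation matrix m,
// and bias fBias; o and m must be loaded from the data of [2626, 2627].
public class ShiftedRotatedRastrigin {

  public static double compute(final double[] x, final double[] o,
      final double[][] m, final double fBias) {
    final int n = x.length;
    final double[] z = new double[n];

    // z = (x - o) * M, treating (x - o) as a row vector (Equation 50.113)
    for (int i = 0; i < n; i++) {
      double s = 0d;
      for (int j = 0; j < n; j++) {
        s += (x[j] - o[j]) * m[j][i];
      }
      z[i] = s;
    }

    double res = 0d;
    for (int i = 0; i < n; i++) {
      res += ((z[i] * z[i]) - (10d * Math.cos(2d * Math.PI * z[i])) + 10d);
    }
    return res + fBias;
  }
}
```

At x⋆ = o (with M the identity), z = 0 and the result equals fbias, matching Equation 50.115.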

Table 50.45: Results obtained for the shifted and rotated Rastrigin’s function with τ function
evaluations.
n, fbias τ fRSO23 (x) Algorithm Class Reference
500, ? 2.5 · 106 6.97E3 SaNSDE DE [3020]
500, ? 2.5 · 106 1.5E4 DECC-O DE [3020]
500, ? 2.5 · 106 5.33E3 DECC-G DE [3020]
1000, ? 5 · 106 1.27E4 SaNSDE DE [3020]
1000, ? 5 · 106 3.18E4 DECC-O DE [3020]
1000, ? 5 · 106 9.73E3 DECC-G DE [3020]
50.3.1.24 Schwefel’s Problem 2.13

This function is defined as F13 in [2626, 2627]. A and B are n × n matrices, aij and bij are integer random numbers from −100..100, and the αj in α = (α1, α2, .., αn)^T are random numbers in [−π, π].
function: fRSO24 (x) = Σ_{i=1..n} (Ai − Bi(x))² + fbias (50.116)
Ai = Σ_{j=1..n} (aij sin(αj) + bij cos(αj)) (50.117)
Bi(x) = Σ_{j=1..n} (aij sin(xj) + bij cos(xj)) (50.118)
domain: X = [−π, π]^n (50.119)
optimization: minimization
optimum: x⋆ = (α1, α2, .., αn)^T, fRSO24 (x⋆) = fbias (50.120)
separable: no
multimodal: yes
shifted: yes
rotated: no
scalable: yes

Table 50.46: Properties of Schwefel’s problem 2.13.
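A sketch of our own (not the original benchmark code) computing fRSO24 from given matrices A = (aij), B = (bij) and the angle vector α could look as follows; the matrices and angles must come from the data accompanying [2626, 2627].

```java
// Sketch only: computes f_RSO24 (Schwefel's problem 2.13) from the matrices
// a, b, the angle vector alpha, and the bias fBias (all from [2626, 2627]).
public class Schwefel213 {

  public static double compute(final double[] x, final double[][] a,
      final double[][] b, final double[] alpha, final double fBias) {
    final int n = x.length;
    double res = 0d;

    for (int i = 0; i < n; i++) {
      double ai = 0d, bi = 0d; // A_i and B_i(x), Equations 50.117 and 50.118
      for (int j = 0; j < n; j++) {
        ai += (a[i][j] * Math.sin(alpha[j])) + (b[i][j] * Math.cos(alpha[j]));
        bi += (a[i][j] * Math.sin(x[j])) + (b[i][j] * Math.cos(x[j]));
      }
      res += ((ai - bi) * (ai - bi));
    }
    return res + fBias;
  }
}
```

At x⋆ = α, each Ai equals Bi(x⋆), so the sum vanishes and the result is fbias, as stated in Equation 50.120.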

Table 50.47: Results obtained for Schwefel’s problem 2.13 with τ function evaluations.
n, v τ fRSO24 (x) Algorithm Class Reference
500, 100 2.5 · 106 2.53E2 SaNSDE DE [3020]
500, 100 2.5 · 106 2.81E1 DECC-O DE [3020]
500, 100 2.5 · 106 2.09E2 DECC-G DE [3020]
1000, 100 5 · 106 6.61E2 SaNSDE DE [3020]
1000, 100 5 · 106 7.52E1 DECC-O DE [3020]
1000, 100 5 · 106 3.56E2 DECC-G DE [3020]
50.3.1.25 Shifted Griewank Function

The Griewank function provided in Section 50.3.1.11 is shifted in X by a vector o and in the objective space by a bias fbias = −180. This function is defined as F5 in [2668]. Here, we specify it as fRSO25 in Table 50.48 and provide a Java snippet showing how it can be computed in Listing 50.20.
function: fRSO25 (x) = (1/4000) Σ_{i=1..n} zi² − Π_{i=1..n} cos(zi/√i) + 1 + fbias (50.121)
zi = xi − oi (50.122)
domain: X = [−600, 600]^n (50.123)
optimization: minimization
optimum: x⋆ = (o1, o2, .., on)^T, fRSO25 (x⋆) = fbias (50.124)
separable: no
multimodal: yes
shifted: yes
rotated: no
scalable: yes

Table 50.48: Properties of the shifted Griewank function.

1 private static final double[] DATA = new double[] {
2 // shift data oi , see material accompanying Tang et al. [2668]
3 };
4
5 public static final double computeShiftedGriewank(final double[] ind) {
6 int i;
7 double r1, r2, d;
8
9 r1 = 0d;
10 r2 = 1d;
11 for (i = (ind.length - 1); i >= 0; i--) {
12 d = ind[i] - DATA[i];
13 r1 += (d * d);
14 r2 *= (Math.cos(d / Math.sqrt(i + 1)));
15 }
16
17 return (1d + (r1 / 4000d) - r2) + -180d; // bias fbias = −180, see [2668]
18 }
Listing 50.20: A Java code snippet computing the shifted Griewank function.

50.4 Close-to-Real Vehicle Routing Problem Benchmark

50.4.1 Introduction

Logistics is one of the most important services in our economy. The transportation of goods, parcels, and mail from one place to another is what connects different businesses and people. Without such transportation, basically all economic processes in an industrial society would break down.
Logistics is omnipresent. But is it efficient? We already discussed that the Travel-
ing Salesman Problem (see Example E2.2 on page 45) is N P-complete in Section 12.3 on
page 147. Solving the TSP means finding an efficient travel plan for a single salesman. Plan-
ning for a whole fleet of vehicles, a set of orders, under various constraints and conditions,
622 50 BENCHMARKS
seems to be at least as complex and, from an algorithmic and implementation perspective,
is probably much harder. Hence, most logistics companies today still rely on manual plan-
ning. Knowing that, we can be pretty sure that they can benefit greatly from solving such
Vehicle Routing Problems [497, 1091, 2162] in an optimal manner which reduces costs, travel
distances, and CO2 emissions.
In Section 49.3 on page 536, we discuss a real-world application of Evolutionary Algo-
rithms in the area of logistics: the search for optimal goods transportation plans, a problem
which belongs to the class of Vehicle Routing Problems (VRP). Section 49.3, you can find a
detailed description and model gleaned from the situation in one of the divisions of a large
logistics company (Section 49.3.2) as well as an in-depth discussion of related research work
on Vehicle Routing Problems (Section 49.3.3). Here, we want to introduce a benchmark
generator for a general class of VRPs which are close to the situation in today’s parcel
companies (if not even closer than the work introduced in Section 49.3).

[Figure 50.29 (illustration): Truck, driver, and container are in the depot. The orders are picked up at the customers’ locations within given time windows. Another container is picked up in another depot. The orders are delivered within their time windows, and the truck can then return home to the depot.]

Figure 50.29: A close-to-real-world Vehicle Routing Problem


50.4.1.1 Overview on the Involved Objects

A real-world logistics company knows five kinds of objects: orders, containers, trucks,
drivers, and locations. With these objects, the situation faced by the planner can be com-
pletely described. In Figure 50.29 we sketch how these objects interact and describe this
interaction below in detail.

1. Orders are transportation tasks given by customers. Each order is characterized by a
pickup and a delivery location as well as a pickup and delivery time window. The order
must be picked up by a truck within the pickup time window at the pickup location and
delivered within the delivery time window to the delivery location. See Section 50.4.2.2
on page 626 for more information.

2. Orders are placed into containers which are provided by the logistics company. A
container can either be empty or contain exactly one order. We assume that each
order fits into exactly one container. If an order was larger than one container, we
can imagine that it will be split up into n single orders without loss of generality. A
customer has to pay for at least one container. If the orders are smaller, the company
still uses (and charges for) one full container. Containers reside at depots and, after
all deliveries are performed, must be transported back to a depot (not necessarily their
starting depot). For more information, see Section 50.4.2.4.

3. Trucks are the vehicles that transport the containers. The company can own different
types of trucks with different capacities. The capacity limit per truck is usually between
one and three containers. A truck can either be empty or transport at most as many
containers as its capacity allows. A truck is driven always by one driver, but some
trucks allow that another driver can travel along (i.e., have a higher number of seats).
Trucks also reside at depots and, after they have finished their tours, must arrive at
a depot again (not necessarily their starting depot). A truck cannot drive without a
driver. See Section 50.4.2.3 on page 627 for more information.

4. Drivers work for the logistics company and drive the trucks. Each driver has a “home
depot” to which she needs to return after the orders have been delivered. For more
information, see Section 50.4.2.5 on page 628.

5. Locations are places on the map. The customers reside at locations and deliveries take
place between locations. Depots are special locations: Here, the logistics company
stores its containers, employs its drivers, and parks its trucks. Trucks pick up orders at
given locations and deliver them to other locations. Also, trucks may meet at locations
and exchange their freight. See Section 50.4.2.1 on page 625 for more information.

50.4.1.2 Overview on the Optimization Goals and Constraints

Basically, a logistics company has two goals:

1. Pick up and deliver all orders within their time windows and, hence, to fulfill its con-
tracts with the customer. Every missed time window or undelivered order is considered
as a delivery error. The number of delivery errors must be minimized.

2. Orders should be delivered at the lowest possible costs. Hence, the costs should be
minimized.

While trying to achieve these goals, it obviously has to obey the laws of physics and logic.
The following constraints do always hold:
624 50 BENCHMARKS
1. No truck, container, order, or driver can be at more than one place at one point in
time. A driver cannot drive a truck from A to B while simultaneously driving another
truck from C to D. A container cannot be used to transport order O from C to D
while, at the same time, being carried by another truck from C to E. And so on.

2. Trucks, containers, orders, and drivers can only leave from places where they actually
reside: A truck which is parked at depot A cannot leave depot B (without driving
there first). A truck driver who is located at depot C cannot get into a truck which
is parked in depot D (without driving there first). A container located at depot E
cannot be loaded into a truck residing in depot F (without being driven there first).
And so on.

3. Driving from one location to another takes time. A truck cannot get from location A
to a distant location B within 0s. The time needed for getting from A to B depends
on (1) the distance between A and B and (2) the maximum speed of the truck. A
truck cannot go from one location to another faster than what the (Newtonian) laws
of physics [2029] allow.

4. If an object (truck, driver, order, container) is moved from location A to location B,
it must reside at location A before the move and will reside at location B after arrival
(i. e., after the time has passed which the truck needs to get there). An object can be
involved in at most one transportation move at any given time.

5. Objects can only be moved by trucks and trucks are driven by drivers: (a) A truck can-
not move without a driver. (b) A driver cannot move without a truck. (c) A container
cannot move without a truck. (d) An order cannot move without a container.

6. The capacity limits of the involved objects (trucks, containers) must be obeyed. A
truck can only carry a finite weight and a container can only contain a finite volume
(here: one order).

7. Orders can only be picked up during the pickup time interval and only be delivered
during the delivery time interval.

8. No new objects appear during the planning process. No new trucks, drivers, or con-
tainers can somehow be “created” and no new orders come in.

9. The features of the involved objects do not change during the planning process. A
truck cannot become faster or slower and the capacity limits of the involved objects
cannot be increased or decreased.

For additional information, please see Section 50.4.4 on page 634.

50.4.1.3 Overview on the Input Data

The input data of an optimization algorithm for this benchmark problem comprises the
sets of objects and their features describing a complete starting situation for the planning
process.

1. The orders with their IDs, pickup and destination locations, and time windows.

2. The IDs of the containers and the corresponding starting locations (depots).

3. The trucks with their IDs, starting locations (depots), speed and capacity limits as well as their costs (per hour-kilometer).

4. The drivers with their IDs, home depots, and salary (per day).
50.4. CLOSE-TO-REAL VEHICLE ROUTING PROBLEM BENCHMARK 625
5. The distance matrix13 containing the distances between the different locations. Notice
that, in the real world, these matrices are slightly asymmetric, meaning that the
distance from location A to B may be (slightly) different from the distance between
B and A. The distances are given in m.

50.4.1.4 Overview on the Optimization Task

The optimizer then must find a plan which involves the input objects shortly listed in Sec-
tion 50.4.1.3. Such a plan, which can be constructed by any of the optimization algorithms
discussed in Part III and Part IV, has the following features:

1. It assigns trucks, containers, and drivers to moves. A move is one atomic logistic action which is denoted by (a) a truck, (b) a (possibly empty) set of orders to be transported by that truck, (c) a non-empty set of truck drivers (contains at least one driver), (d) a (possibly empty) set of containers which will contain the orders, (e) a starting location, (f) a starting time, and (g) a destination location.
Notice that the arrival time at the destination is determined by the distance between
the start and destination location, the truck’s average speed, and the starting time.
See Section 50.4.3.1 on page 632 for more information.

2. A plan may consist of an arbitrary number of such moves. See Section 50.4.3.2 on
page 634 for more information.

3. This plan must fulfill all the constraints given in Section 50.4.1.2 while minimizing the
two objectives, cost and delivery errors.

50.4.2 Involved Objects

50.4.2.1 Locations

A location is a place where a truck can drive to. Each location l = (id, isDepot) has the
following features:

1. A unique identifier id which is a string that starts with an uppercase L followed by six
decimal digits. Example: L000021.

2. A Boolean value isDepot which is true if the location denotes a depot or false if it is
a normal place (such as a customer’s location). Trucks, containers, and drivers always
reside at depots at the beginning of a planning period and must return to depots at
the end of a plan. Example: true.

In our benchmark Java implementation given in Section 57.3, locations are represented by
the class Location given in Listing 57.24 on page 914. This class allows creating and reading
textual representations of the location data. Hence, using our benchmark is not bound to
using the Java language. A list of locations can be given in the form of Listing 50.21.
13 http://en.wikipedia.org/wiki/Distance_matrix [accessed 2010-11-09]

Listing 50.21: The location data of the first example problem instance.
1 %id is_depot
2 L000000 true
3 L000001 true
4 L000002 true
5 L000003 true
6 L000004 true
7 L000005 false
8 L000006 false
9 L000007 false
10 L000008 false
11 L000009 true
12 L000010 false
13 L000011 false
14 L000012 false
15 L000013 false
16 L000014 false
17 L000015 true
18 L000016 false
19 L000017 false
20 L000018 false
21 L000019 true
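Since the representation is plain, line-oriented text, it can also be read without our Java classes. The following sketch is our own helper, independent of the Location class in Listing 57.24; it parses lines of the format shown in Listing 50.21 into a map from location ID to the isDepot flag.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: read location lines of the form "L000005 false"; lines starting
// with '%' are comments (cf. Listing 50.21).
public class LocationReader {

  public static Map<String, Boolean> parse(final Iterable<String> lines) {
    final Map<String, Boolean> result = new LinkedHashMap<>();
    for (final String line : lines) {
      final String t = line.trim();
      if (t.isEmpty() || t.startsWith("%")) {
        continue; // skip comments and blank lines
      }
      final String[] f = t.split("\\s+");
      result.put(f[0], Boolean.valueOf(f[1])); // id -> isDepot
    }
    return result;
  }
}
```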

50.4.2.2 Orders

An order o = (id, ps , pe , pl , ds , de , dl ) is a delivery task which is booked by a customer from
the logistics company. A customer basically says: “Take order o from location pl to location
dl . Pick up the order between the times ps and pe and deliver it within the time window
[ds , de ]”. An order is hence described by the following data elements:
1. The order ID id. Order IDs start with an uppercase O followed by a six-digit decimal
number. The order IDs form a strict sequence of numbers. The order ID gives no
information about the pickup or delivery time of an order. Example: O000004.
2. The pickup window start time ps ∈ N0 . A positive natural number which denotes the
start of the pickup window in milliseconds, measured from the beginning of the plan-
ning period. These numbers have long precision, i. e., are 64bit numbers. Example:
2 700 000.
3. The pickup window end time pe ∈ N0 . A positive natural number which denotes the
end of the pickup window in milliseconds, measured from the beginning of the planning
period. These numbers have long precision, i. e., are 64bit numbers. It always holds
that pe > ps . Example: 5 400 000.
4. The pickup location pl is where the order has to be picked up. An order can only be
picked up during its pickup time window, i. e., at times t with ps ≤ t ≤ pe . The pickup
location is denoted with a location identifier. Example: L000016. All pick-ups outside
the pickup window are delivery errors.
5. The delivery window start time ds ∈ N0 . A positive natural number which denotes
the start of the delivery window in milliseconds, measured from the beginning of
the planning period. These numbers have long precision, i. e., are 64bit numbers.
Example: 2 700 000.
6. The delivery window end time de ∈ N0 . A positive natural number which denotes
the end of the delivery window in milliseconds, measured from the beginning of the
planning period. These numbers have long precision, i. e., are 64bit numbers. It always
holds that de > ds . Notice that the pickup and delivery window may overlap, but it
will always hold that ds ≥ ps . Example: 7 200 000.

Listing 50.22: The order data of the first example problem instance.
1 %id pickup_window_start pickup_window_end pickup_location delivery_window_start
delivery_window_end delivery_location
2 O000000 2700000 5400000 L000016 2700000 7200000 L000000
3 O000001 1800000 5400000 L000016 2700000 7200000 L000000
4 O000002 4500000 6300000 L000010 5400000 8100000 L000006
5 O000003 4500000 7200000 L000006 4500000 8100000 L000000
6 O000004 1800000 2700000 L000008 2700000 3600000 L000011
7 O000005 0 3600000 L000011 0 5400000 L000018
8 O000006 1800000 3600000 L000018 2700000 5400000 L000005
9 O000007 2700000 5400000 L000005 2700000 6300000 L000013
10 O000008 2700000 7200000 L000010 2700000 9900000 L000008
11 O000009 3600000 5400000 L000002 4500000 7200000 L000004

7. The delivery location dl is where the order has to be delivered to. An order can only
be delivered during its delivery time window, i. e., at times t with ds ≤ t ≤ de . The
delivery location is denoted with a location identifier. Example: L000000. If a truck
arrives at a delivery location before ds , it has to wait until ds before it can deliver the
order. All deliveries outside the delivery window are delivery errors.

In Listing 57.25 on page 915, we provide a Java class to hold the order information. This
class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.22.
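The time window rules above can be captured in a few helper methods. This is our own sketch, not part of the benchmark classes; all times are in milliseconds from the beginning of the planning period, as defined above.

```java
// Sketch: time window checks for an order. A pickup at time t outside
// [ps, pe] or a delivery outside [ds, de] counts as a delivery error.
public class TimeWindows {

  public static boolean isPickupError(final long t, final long ps,
      final long pe) {
    return (t < ps) || (t > pe);
  }

  public static boolean isDeliveryError(final long t, final long ds,
      final long de) {
    return (t < ds) || (t > de);
  }

  // a truck arriving before ds must wait until ds before it can deliver
  public static long earliestDelivery(final long arrival, final long ds) {
    return Math.max(arrival, ds);
  }
}
```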

50.4.2.3 Trucks

Trucks transport orders from one location to another. Each truck t =
(id, sl , maxC, maxD, costHKm, avgSpeed) is characterized by

1. A unique identifier id which is a string that starts with an uppercase T followed by six
decimal digits. Example: T000003.

2. A starting location sl , a depot, where the truck is parked. This location is denoted by
a location identifier. Example: L000015.

3. The maximum number maxC ∈ N1 of containers that this truck can transport. This number, usually 0 < maxC ≤ 3, can be different for each truck. A truck cannot carry more than maxC containers. Example: 2.

4. The maximum number maxD ∈ N1 of drivers which can be in the truck. A truck must
be driven by at least one driver. It may, however, be possible that a truck has more
than one seat, so that two truck drivers can drive to their home depot together on
their return trip, for example. Example: 2

5. The costs per hour-kilometer costHKm in USD-cent (or RMB-fen). If the truck t drives
for m kilometers needing n hours, the total costs of this move are n ∗ m ∗ costHKm.
These costs include, for example, fuel consumption and material/maintenance costs
for the truck. These costs, of course, may be different for different trucks. Example:
170.

6. The average speed avgSpeed in km/h with which the truck can move. Of course, this
speed can again be different from truck to truck, some larger trucks may be slower,
smaller trucks may be faster. The same goes for older and newer trucks. Example:
45.

Listing 50.23: The truck data of the first example problem instance.
1 %id start_location max_containers max_drivers costs_per_h_km avg_speed_km_h
2 T000000 L000000 3 2 230 50
3 T000001 L000019 2 1 185 50
4 T000002 L000015 3 1 215 30
5 T000003 L000002 3 1 160 50
6 T000004 L000004 1 1 185 50
7 T000005 L000015 1 2 170 50
8 T000006 L000001 2 1 145 50
9 T000007 L000019 2 1 230 40
10 T000008 L000019 2 1 145 45
11 T000009 L000003 2 2 200 55
12 T000010 L000002 3 1 270 45
13 T000011 L000003 2 1 230 40
14 T000012 L000009 3 1 160 50

A truck will always be located at a depot at the beginning of a planning period and must
also be located at a depot after the plan was executed. The end depot may, however, be a
different one from the start depot.
In Listing 57.26 on page 918, we provide a Java class to hold the truck information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.23.

50.4.2.4 Containers

Containers c = (id, sl ) are simple objects which are used to package the orders for trans-
portation. They are characterized by

1. The container ID id. Container IDs start with an uppercase C followed by a six-digit
decimal number. The container IDs form a strict sequence of numbers. Example:
C000003.

2. The start location sl – the depot where the container resides. sl is a location ID.
Example: L000000.

A container will always be located at a depot at the beginning of a planning period and
must also be located at a depot after the plan was executed. The end depot may, however,
be a different one from the start depot.
In Listing 57.27 on page 920, we provide a Java class to hold the container information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.24.

50.4.2.5 Drivers

Drivers d = (id, sl , costs) are needed to drive the trucks. They are characterized by

1. The driver ID id. Driver IDs start with an uppercase D followed by a six-digit decimal
number. The driver IDs form a strict sequence of numbers. Example: D000007.

2. The home location sl – the depot where the driver starts her work. sl is a location
ID. Example: L000002.

3. The costs per day costs denote the amount in USD-cent (or RMB-fen) that has to be paid for each day on which the driver is working. Even if the driver only works for one millisecond of the day, the full amount must be paid.

Listing 50.24: The container data of the first example problem instance.
1 %id start
2 C000000 L000000
3 C000001 L000000
4 C000002 L000019
5 C000003 L000015
6 C000004 L000019
7 C000005 L000019
8 C000006 L000019
9 C000007 L000001
10 C000008 L000001
11 C000009 L000004
12 C000010 L000000
13 C000011 L000019
14 C000012 L000009
15 C000013 L000000

Listing 50.25: The driver data of the first example problem instance.
1 %id home costs_per_day
2 D000000 L000000 7000
3 D000001 L000000 11000
4 D000002 L000019 9000
5 D000003 L000015 10000
6 D000004 L000003 11000
7 D000005 L000009 7000
8 D000006 L000001 10000
9 D000007 L000019 7000
10 D000008 L000019 8000
11 D000009 L000009 11000
12 D000010 L000019 9000
13 D000011 L000001 11000
14 D000012 L000009 10000
15 D000013 L000009 11000

A driver will always be located at a depot at the beginning of a planning period and must
also be located at exactly this depot after the plan was executed.
In Listing 57.28 on page 921, we provide a Java class to hold the driver information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.25.

50.4.2.6 Distance Matrix

The asymmetric distance matrix ∆ contains the distances between the different locations in
meters. If there are n locations, the distance matrix is an n × n matrix.
The element ∆i,j at position i, j ∈ N0 denotes the distance between the locations i and
j. The distance between L000009 and L000017 would be found at ∆9,17 . Notice that this
may be different from the distance between L000017 and L000009 which can be found at
∆17,9 . Since the resolution of the distance matrix is meters, it is easy to understand that
these values may slightly differ. If highways are used, the on-ramps to and the exits from the highway may be at slightly different positions.
The time T (t, i, j) in milliseconds that a truck t needs to drive from location i to location
j can be computed as specified in Equation 50.125.

T(t, i, j) = (∆i,j / (1000 ∗ t.avgSpeed)) ∗ 3600 ∗ 1000 = (∆i,j / t.avgSpeed) ∗ 3600 (50.125)
The driving costs C(t, i, j) in USD-cent (or RMB-fen) consumed by a truck t when driving
from location i to location j can be computed as specified in Equation 50.126.

C(t, i, j) = (T(t, i, j) / 3 600 000) ∗ (∆i,j / 1000) ∗ t.costsHKm (50.126)

These costs do not include the costs that have to be spent for the drivers. In Listing 57.29
on page 923, we provide a Java class to hold the distance information. This class can
produce and read a textual representation of the data mentioned above. An example for
this representation is given in Listing 50.26.
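Equation 50.125 and Equation 50.126 translate directly into code. The following sketch uses method names of our own choosing; distances are given in meters, speeds in km/h, and costs in cents per hour-kilometer, as defined above.

```java
// Sketch of Equations 50.125 and 50.126; method and parameter names are ours.
public class TravelMath {

  // travel time in milliseconds for distance distM (in m) at avgSpeed (km/h)
  public static double travelTimeMs(final double distM,
      final double avgSpeed) {
    return (distM / avgSpeed) * 3600d; // Equation 50.125
  }

  // driving cost in cents: hours * kilometers * costHKm (Equation 50.126)
  public static double drivingCost(final double distM, final double avgSpeed,
      final double costHKm) {
    return (travelTimeMs(distM, avgSpeed) / 3600000d)
        * (distM / 1000d) * costHKm;
  }
}
```

For example, a truck with avgSpeed = 50 km/h and costHKm = 170 covering 50 000 m needs one hour (3 600 000 ms) and causes driving costs of 1 ∗ 50 ∗ 170 = 8500 cents.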
Listing 50.26: The distance data of the first example problem instance.
1 0 15700 10500 16450 5600 6750 5000 4900 5200 14100 14000 11850 13150 14200 13700 12800 14450 18050 12350 4550
2 15550 0 17050 22950 12150 13300 11550 11450 11750 20600 20550 18350 19650 20650 20200 19300 20900 24600 18800 11050
3 10450 17050 0 17950 7100 8150 6450 6350 6700 15500 15500 13300 14500 15550 15150 14150 15800 19500 13700 5950
4 16400 23100 18050 0 12150 13300 11550 12400 11750 6500 2450 4650 3500 4050 4500 5350 7050 10750 5100 12000
5 5550 12250 7100 12150 0 1150 650 1450 400 9750 9700 7500 8850 9800 9400 8400 10100 13750 7950 1150
6 6650 13300 8200 13300 1150 0 1700 2600 1550 10850 10850 8600 9900 10950 10500 9550 11200 14900 9100 2200
7 4950 11650 6500 11650 650 1750 0 850 200 9200 9100 6900 8250 9300 8900 7850 9500 13200 7400 450
8 4800 11500 6400 12300 1450 2600 900 0 1050 9950 9900 7700 8950 10050 9600 8600 10250 13900 8100 350
9 5200 11850 6700 11800 400 1550 200 1050 0 9400 9300 7150 8550 9550 9150 8100 9700 13450 7650 750
10 13950 20600 15500 6500 9750 10850 9150 9850 9350 0 4050 2300 3600 2400 2300 1300 1500 5250 1850 9500
11 13950 20600 15500 2450 9750 10850 9150 9850 9300 4050 0 2250 1050 1700 1950 2950 4650 8300 2700 9500
12 11800 18400 13350 4700 7550 8700 6900 7700 7150 2250 2250 0 1300 2350 1950 900 2600 6300 450 7300
13 13050 19700 14650 3500 8800 10000 8200 8950 8400 3600 1050 1300 0 2800 3000 2200 3900 7600 1750 8600
14 14000 20700 15600 4050 9800 10950 9250 10000 9450 2400 1700 2350 2750 0 450 1450 3100 6750 1900 9550
15 13650 20250 15150 4400 9400 10500 8850 9550 9050 2350 1950 1950 3000 450 0 1000 2650 6350 1500 9150
16 12700 19300 14250 5400 8450 9550 7850 8600 8100 1300 2950 950 2250 1450 1000 0 1700 5350 550 8200
17 14350 21050 15900 7000 10100 11250 9500 10300 9750 1500 4600 2600 3900 3100 2650 1700 0 3650 2200 9950
18 18050 24650 19550 10750 13750 14850 13200 13900 13400 5200 8300 6250 7550 6750 6300 5400 3650 0 5850 13550
19 12150 18900 13700 5100 7950 9050 7350 8100 7550 1850 2700 450 1750 1900 1500 550 2200 5850 0 7750
20 4450 11100 6000 11950 1100 2200 450 350 650 9550 9500 7300 8550 9700 9200 8200 9850 13550 7750 0

50.4.3 Problem Space

50.4.3.1 Move: The Atomic Action

A candidate solution for this optimization problem is a list of atomic transportation actions
which we here call moves. A move m = (t, o, d, c, sl , st , dl ) is a tuple consisting of the
following elements.

1. A truck t denoted by a truck identifier. Example: T000000.

2. A (possibly empty) set o of orders to be transported by the truck t. Notice that 0 ≤ |o| ≤ |c|. Example: (O000002).

3. A non-empty set d of drivers, one of which will drive the truck (whereas the others, if
any, just use the truck for transportation). Notice that 0 < |d| ≤ t.maxD. Example:
(D000000, D000001).

4. A (possibly empty) set c of containers to be transported by the truck t. Notice that 0 ≤ |c| ≤ t.maxC. Example: (C000001, C000000).

5. A starting location sl from where the truck will start to drive. Example: L000010.

6. A starting time st at which the truck will leave the starting location (measured in
milliseconds from the beginning of the planning period). Example: 5 691 600.

7. A destination location dl ≠ sl where the truck will drive to. Example: L000006.

It is assumed that truck t will arrive at m.dl at time m.st + T (t, m.sl , m.dl ) (see Equa-
tion 50.125), hence it is not necessary to store an arrival time in the tuple m. One part of
the costs of the move can be computed according to Equation 50.126. The other part must be determined by adding the driver costs (see Section 50.4.2.5).
In Listing 57.30 on page 925, we provide a Java class to hold the information of a single
move. This class can produce and read a textual representation of the data mentioned above.
An example for this representation is given in Listing 50.27.
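To illustrate how the textual move representation can be processed in other environments, the following sketch parses a single line of the format shown in Listing 50.27. This is our own simplified reader, not the Move class from Listing 57.30.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: parses one line of the move format shown in Listing 50.27.
public class MoveParser {

  public static final class Move {
    public final String truck, from, to;
    public final List<String> orders, drivers, containers;
    public final long startTime;

    Move(final String truck, final List<String> orders,
        final List<String> drivers, final List<String> containers,
        final String from, final long startTime, final String to) {
      this.truck = truck;
      this.orders = orders;
      this.drivers = drivers;
      this.containers = containers;
      this.from = from;
      this.startTime = startTime;
      this.to = to;
    }
  }

  // parse a parenthesized, comma-separated set such as "(D000000,D000001)"
  private static List<String> parseSet(final String s) {
    final String inner = s.substring(1, s.length() - 1);
    return inner.isEmpty() ? Collections.<String>emptyList()
        : Arrays.asList(inner.split(","));
  }

  public static Move parse(final String line) {
    final String[] f = line.trim().split("\\s+");
    return new Move(f[0], parseSet(f[1]), parseSet(f[2]), parseSet(f[3]),
        f[4], Long.parseLong(f[5]), f[6]);
  }
}
```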
Listing 50.27: The move data of the solution to the first example problem instance.
1 %truck orders drivers containers from start time to
2 T000000 () (D000000,D000001) (C000001,C000000) L000000 2610000 L000016
3 T000000 (O000001,O000000) (D000000,D000001) (C000001,C000000) L000016 3650400 L000000
4 T000000 () (D000000,D000001) (C000001,C000000) L000000 4683600 L000010
5 T000000 (O000002) (D000000,D000001) (C000001,C000000) L000010 5691600 L000006
6 T000000 (O000003) (D000000,D000001) (C000001,C000000) L000006 6350400 L000000
7 T000001 () (D000002) (C000002) L000019 2520000 L000008
8 T000001 (O000004) (D000002) (C000002) L000008 2566800 L000011
9 T000001 (O000005) (D000002) (C000002) L000011 3081600 L000018
10 T000001 (O000006) (D000002) (C000002) L000018 3114000 L000005
11 T000001 (O000007) (D000002) (C000002) L000005 3765600 L000013
12 T000001 () (D000002) (C000002) L000013 4554000 L000010
13 T000001 (O000008) (D000002) (C000002) L000010 4676400 L000008
14 T000001 () (D000002) (C000002) L000008 5346000 L000019
15 T000002 () (D000003) (C000003) L000015 3060000 L000002
16 T000002 (O000009) (D000003) (C000003) L000002 4770000 L000004
17 T000002 () (D000003) (C000003) L000004 5619600 L000015


1 Total Errors : 4
2 Total Costs : 63863
3 Total Time Error : 0
4 Container Errors : 0
5 Driver Errors : 0
6 Order Errors : 4
7 Truck Errors : 0
8 Pickup Time Error : 0
9 Delivery Time Error : 0
10 Truck Costs : 5863
11 Driver Costs : 58000
Listing 50.28: An example output of the features of an example plan (with errors).

50.4.3.2 Plans: The Candidate Solutions

In Listing 50.27, we do not display a single move. Instead, a set of multiple moves is
presented. Here, we refer to such sets of moves as plans x = (m1 , m2 , . . . ). Trucks first drive
to the pickup locations of the orders. They pick up the orders, deliver them, and then drive
back. Sometimes, a truck may not drive back directly but could pick up another order on
the way.
The problem space X thus consists of all possible plans x, i. e., of lists of all possible
moves m. It is the goal of the optimization process to construct good plans.
The solutions to an instance of the presented type of Vehicle Routing Problem follow
exactly the format given in Listing 50.27. They are textual representations of the candidate
solutions x and, hence, can be created using an arbitrary synthesis/optimization method
implemented in an arbitrary programming language.

50.4.4 Objectives and Constraints

The objectives of solving the (benchmarking) real-world Vehicle Routing Problem are quite
simple, as already stated in Section 50.4.1.2 on page 623: (a) We want to deliver all orders
in time (b) at the least possible costs. In addition to these objectives, we have to consider
constraints: the synthesized plans must be physically sound.
Evaluating a candidate solution of this kind of problem is relatively complex. Basically,
we can do this as follows:
1. Initialize an internal data structure that keeps track of the positions and arrival times
of all objects by placing all objects at their starting locations.
2. Sort the list x ∈ X of moves m ∈ x according to the starting times of the moves.
3. For each move, starting with the earliest one, do:
(a) Compute the costs of the move and add them to an internal cost counter.
(b) Update the positions and arrival times of all involved objects in each step.
(c) After each step, check if any physical or time window constraint has been violated.
If so, count the number of errors.
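The evaluation steps above can be sketched as a small Java program. Note that this is only an illustrative sketch and not the benchmark's actual Compute class from Listing 57.31: the Move class below, with its truck, start, end, and cost fields, is a hypothetical simplification, and the only constraint checked is that a truck must finish one move before starting the next.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified sketch of the evaluation loop above (NOT the benchmark's Compute class). */
public class PlanEvaluation {

  /** A simplified move: truck id, start/end time, and cost (hypothetical fields). */
  public static class Move {
    public final String truck;
    public final long start, end, cost;

    public Move(String truck, long start, long end, long cost) {
      this.truck = truck;
      this.start = start;
      this.end = end;
      this.cost = cost;
    }
  }

  /** Step 3(a): sum the costs of all moves in the plan. */
  public static long totalCost(List<Move> plan) {
    long cost = 0;
    for (Move m : plan) {
      cost += m.cost;
    }
    return cost;
  }

  /** Counts one (of many possible) constraint violations: a truck starting before its previous move ended. */
  public static int countErrors(List<Move> plan) {
    // Step 2: sort the moves according to their starting times.
    List<Move> moves = new ArrayList<>(plan);
    moves.sort(Comparator.comparingLong(m -> m.start));

    // Step 1: internal data structure tracking each truck's arrival time.
    Map<String, Long> arrival = new HashMap<>();
    int errors = 0;
    for (Move m : moves) { // step 3: process the moves, earliest first
      Long busyUntil = arrival.get(m.truck);
      if (busyUntil != null && busyUntil > m.start) {
        errors++; // step 3(c): physical constraint violated
      }
      arrival.put(m.truck, m.end); // step 3(b): update the arrival time
    }
    return errors;
  }
}
```

For instance, two moves of the same truck with overlapping time spans would be counted as one error.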
The feature values obtained by such a computation can be used by the objective functions.
There is no clear rule on how to design the objective functions – we already showed in Task 67
on page 238 that often, the same optimization criterion can be expressed in different ways.
Hence, we just provide a class Compute which determines some key features of a candidate
solution (plan) x in Listing 57.31 on page 931. This class can evaluate a plan and generate
outputs such as the one in Listing 50.28. The class Compute allows us to process the moves
of a plan x step-by-step and to check, in each step, if new errors occurred or how the total
costs changed. This would also allow for a successive generation of solutions.
Notice that, if you wish to use the evaluation capabilities of this class without imple-
menting your solution in Java, you can still produce plans as presented in Listing 50.27 on
page 633, parse them in with Java, and then apply Compute to the parsed list.

50.4.5 Framework

As already outlined in the previous sections, we provide a complete framework for this
benchmark problem and list the most important source files in Section 57.3 on page 914.
In addition to the listed classes representing the essential objects involved in the optimization
process (see Section 50.4.2 and Section 50.4.3), we also provide an instance generator which
can create datasets such as those given as examples in the two mentioned sections.
With this instance generator, we create six benchmark instances which differ in the
number of involved objects. Naturally, we would expect that smaller problem instances
(such as case01 which was used as example in our discussion on the benchmark objects)
are easier to solve. Solutions to the larger problem instances should be much harder to
discover. For each of the benchmark sets, we provide a baseline solution. This solution
is not necessarily the best possible solution to the benchmark dataset, but it fulfills all
constraints. As a matter of fact, the best solutions to the benchmark datasets are unknown and
yet to be discovered.
These benchmark datasets can be loaded via a function loadInstance of the class
InstanceLoader. Notice that each of the classes Container, Driver, Location, Order, Truck,
and DistanceMatrix contains a static variable which allows accessing the list of instances of
that class as defined in the benchmark dataset. The class Driver, for example, has a static
variable called DRIVERS which lists all of its instances. Container has a similar variable called
CONTAINERS.
Part VI

Background
Chapter 51
Set Theory

Set theory1 [763, 1169, 2615] is an important part of mathematics. Numerous
other disciplines like algebra, analysis, and topology are based on it. Since set theory is not
the topic of this book but a mere prerequisite, this chapter (and the ones to follow) will
just briefly introduce it. We make heavy use of informal definitions and, in some cases, even
rough simplifications. More information on the topics discussed can be retrieved from the
literature references and, again, from [2938].
Set theory can be divided into naïve set theory2 and axiomatic set theory3 . The first
approach, the naïve set theory, is inconsistent and is therefore not considered in this book.

Definition D51.1 (Set). A set is a collection of objects considered as a whole4 . The


objects of a set are called elements or members. They can be anything, from numbers and
vectors, to complex data structures, algorithms, or even other sets. Sets are conventionally
denoted with capital letters, A, B, C, etc. while their elements are usually referred to with
small letters a, b, c.

51.1 Set Membership

The expression a ∈ A means that the element a is a member of the set A while b ∉ A means
that b is not a member of A. There are three common forms to define sets:

1. With their elements in braces: A = {1, 2, 3} defines a set A containing the three
elements 1, 2, and 3. {1, 1, 2, 3} = {1, 2, 3} since the curly braces only denote set
membership.

2. The same set can be specified using logical operators to describe its elements: ∀a ∈
N1 : (a ≥ 1) ∧ (a < 4) ⇔ a ∈ A.

3. A shortcut for the previous form is to denote the logical expression in braces, like
A = {(a ≥ 1) ∧ (a < 4), a ∈ N1 }.

The cardinality of a set A is written as |A| and stands for the count of elements in the
set.
1 http://en.wikipedia.org/wiki/Set_theory [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Naive_set_theory [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Axiomatic_set_theory [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Set_%28mathematics%29 [accessed 2007-07-03]
51.2 Special Sets

Special sets used in the context of this book are

1. The empty set ∅ = {} contains no elements (|∅| = 0).

2. The natural numbers5 N1 without zero include all whole numbers bigger than 0. (N1 =
{1, 2, 3, . . . })

3. The natural numbers including zero (N0 ) comprise all whole numbers bigger than or
equal to 0. (N0 = {0, 1, 2, 3, . . . })

4. Z is the set of all integers, positive and negative. (Z = {. . . , −2, −1, 0, 1, 2, . . . })


5. The rational numbers6 Q are defined as Q = {a/b : a, b ∈ Z, b ≠ 0}.

6. The real numbers7 R include all rational and irrational numbers (such as √2).

7. R+ denotes the positive real numbers including 0: (R+ = [0, +∞)).

8. C is the set of complex numbers8 . When needed in the context of this book, we
abbreviate the imaginary unit with i. The real and imaginary parts of a complex
number z are denoted as Re z and Im z, respectively.

9. We define B = {0, 1} as the set of values a Boolean variable may take on, where
0 ≡ false and 1 ≡ true.

N1 ⊂ N0 ⊂ Z ⊂ Q ⊂ R ⊂ C (51.1)
N1 ⊂ N0 ⊂ R+ ⊂ R ⊂ C (51.2)

For these numerical sets, special subsets, so called intervals, can be specified. [1, 5) is a set
which contains all the numbers starting from (including) 1 up to (exclusively) 5. (1, 5] on
the other hand contains all numbers bigger than 1 and up to inclusively 5. In order to avoid
ambiguities, such sets will always be used in a context where it is clear whether the numbers
in the set are natural or real.

51.3 Relations between Sets

Between two sets A and B, the following subset relations may exist: If all elements of A are
also elements of B, A ⊆ B holds.

(∀a ∈ A ⇒ a ∈ B) ⇔ A ⊆ B (51.3)

Hence, {1, 2, 3} ⊆ {1, 2, 3, 4} and {x, y, z} ⊆ {x, y, z} but not {α, β, γ} ⊆ {β, γ, δ}. If
additionally there is at least one element b in B which is not in A, then A ⊂ B.

(A ⊆ B) ∧ (∃b ∈ B : ¬ (b ∈ A)) ⇔ A ⊂ B (51.4)

Thus, {1, 2, 3} ⊂ {1, 2, 3, 4} but not {x, y, z} ⊂ {x, y, z}. The set relations can be negated by
crossing them out, i. e., A ⊄ B means ¬ (A ⊂ B) and A ⊈ B denotes ¬ (A ⊆ B), respectively.
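With java.util.Set, the two subset relations can be checked directly. The following sketch is our own illustration (not part of the book's framework) and mirrors Equation 51.3 and Equation 51.4:

```java
import java.util.Set;

/** Illustrative checks for the subset relations of Equation 51.3 and Equation 51.4. */
public class SubsetRelations {

  /** A ⊆ B: every element of A is also an element of B. */
  public static <T> boolean isSubset(Set<T> a, Set<T> b) {
    return b.containsAll(a);
  }

  /** A ⊂ B: A ⊆ B and B contains at least one element that is not in A. */
  public static <T> boolean isProperSubset(Set<T> a, Set<T> b) {
    // Since A ⊆ B already holds, |B| > |A| implies an element of B outside of A.
    return b.containsAll(a) && (b.size() > a.size());
  }
}
```

For example, {1, 2, 3} is a proper subset of {1, 2, 3, 4}, but {x, y, z} is not a proper subset of itself.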

Figure 51.1: Set operations performed on sets A and B inside a set A.

51.4 Operations on Sets

Let us now define the possible unary and binary operations on sets, some of which are
illustrated in Figure 51.1.

Definition D51.2 (Set Union). The union9 C of two sets A and B is written as A ∪ B
and contains all the objects that are element of at least one of the sets.
C = A ∪ B ⇔ ((c ∈ A) ∨ (c ∈ B) ⇔ (c ∈ C)) (51.5)
A∪B = B∪A (51.6)
A∪∅ = A (51.7)
A∪A = A (51.8)
A ⊆ A∪B (51.9)

Definition D51.3 (Set Intersection). The intersection10 D of two sets A and B, denoted
by A∩B, contains all the objects that are elements of both of the sets. If A∩B = ∅, meaning
that A and B have no elements in common, they are called disjoint.
D = A ∩ B ⇔ ((d ∈ A) ∧ (d ∈ B) ⇔ (d ∈ D)) (51.10)
A∩B = B∩A (51.11)
A∩∅ = ∅ (51.12)
A∩A = A (51.13)
A∩B ⊆ A (51.14)
5 http://en.wikipedia.org/wiki/Natural_numbers [accessed 2008-01-28]
6 http://en.wikipedia.org/wiki/Rational_number [accessed 2008-01-28]
7 http://en.wikipedia.org/wiki/Real_numbers [accessed 2008-01-28]
8 http://en.wikipedia.org/wiki/Complex_number [accessed 2008-01-29]
9 http://en.wikipedia.org/wiki/Union_%28set_theory%29 [accessed 2007-07-03]
10 http://en.wikipedia.org/wiki/Intersection_%28set_theory%29 [accessed 2007-07-03]
Definition D51.4 (Set Difference). The difference E of two sets A and B, A\B, contains
the objects that are element of A but not of B.

E = A \ B ⇔ ((e ∈ A) ∧ (e ∉ B) ⇔ (e ∈ E)) (51.15)


A\∅ = A (51.16)
∅\A = ∅ (51.17)
A\A = ∅ (51.18)
A\B ⊆ A (51.19)

Definition D51.5 (Set Complement). The complementary set Ā of the set A in a set 𝔸
includes all the elements which are in 𝔸 but are not elements of A:

A ⊆ 𝔸 ⇒ Ā = 𝔸 \ A (51.20)
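The union, intersection, and difference operations map directly onto the bulk operations of java.util.Set. The following sketch is our own illustration; it copies the first operand so that, in the spirit of this chapter, the inputs remain unmodified:

```java
import java.util.HashSet;
import java.util.Set;

/** The set operations of Definitions D51.2 to D51.4, sketched with java.util.Set. */
public class SetOperations {

  /** A ∪ B: all objects that are elements of at least one of the sets (Equation 51.5). */
  public static <T> Set<T> union(Set<T> a, Set<T> b) {
    Set<T> c = new HashSet<>(a);
    c.addAll(b);
    return c;
  }

  /** A ∩ B: all objects that are elements of both sets (Equation 51.10). */
  public static <T> Set<T> intersection(Set<T> a, Set<T> b) {
    Set<T> d = new HashSet<>(a);
    d.retainAll(b);
    return d;
  }

  /** A \ B: all objects that are elements of A but not of B (Equation 51.15). */
  public static <T> Set<T> difference(Set<T> a, Set<T> b) {
    Set<T> e = new HashSet<>(a);
    e.removeAll(b);
    return e;
  }
}
```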

Definition D51.6 (Cartesian Product). The Cartesian product11 P of two sets A and
B, denoted P = A × B is the set of all ordered pairs (a, b) whose first component is an
element from A and the second is an element of B.

P = A × B ⇔ P = {(a, b) : a ∈ A, b ∈ B} (51.21)
A^n = A × A × . . . × A (n times) (51.22)

Definition D51.7 (Countable Set). A set S is called countable12 if there exists an


injective function13 which maps it to the natural numbers (which are countable), i. e.,
∃f : S ↦ N0 .

Definition D51.8 (Uncountable Set). A set is uncountable if it is not countable, i. e.,


no such function exists for the set.

N0 , Z, Q, and B are countable, R, R+ , and C are not.

Definition D51.9 (Power Set). The power set14 P (A) of the set A is the set of all subsets
of A.
∀p : p ∈ P (A) ⇔ p ⊆ A (51.23)

11 http://en.wikipedia.org/wiki/Cartesian_product [accessed 2007-07-03]


12 http://en.wikipedia.org/wiki/Countable_set [accessed 2007-07-03]
13 see definition of function on page 646
14 http://en.wikipedia.org/wiki/Axiom_of_power_set [accessed 2007-07-03]
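A finite set A with |A| = n elements has 2^n subsets. The following illustrative Java sketch (our own, not part of the book's framework) enumerates the power set P(A) by interpreting each integer from 0 to 2^n − 1 as a bit mask that selects the elements of one subset:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Enumerates the power set P(A) of a small finite set A (Definition D51.9). */
public class PowerSet {

  public static <T> Set<Set<T>> powerSet(Set<T> a) {
    List<T> elements = new ArrayList<>(a);
    Set<Set<T>> result = new HashSet<>();
    // Each bit mask from 0 to 2^n - 1 selects exactly one subset of A.
    for (int mask = 0; mask < (1 << elements.size()); mask++) {
      Set<T> subset = new HashSet<>();
      for (int i = 0; i < elements.size(); i++) {
        if ((mask & (1 << i)) != 0) { // bit i set: element i belongs to this subset
          subset.add(elements.get(i));
        }
      }
      result.add(subset);
    }
    return result;
  }
}
```

For A = {1, 2, 3}, the method returns all 2³ = 8 subsets, from the empty set ∅ up to A itself.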
51.5 Tuples

Definition D51.10 (Type). A type is a set of values that a variable, constant, function,
or similar entity may take on.

We can, for instance, specify the type T = {1, 2, 3}. A variable x which is an instance of
this type then can take on the values 1, 2, or 3.

Definition D51.11 (Tuple). A tuple15 is an ordered, finite sequence of elements, where


each element is an instance of a certain type.

To each position i in a tuple t, a type Ti is assigned. The element t[i] at a position i must
then be an element of Ti . Unlike sets, tuples may contain the same element more than
once.
In the context of this book, we define tuples with parenthesis like (a, b, c) whereas sets
are specified using braces {a, b, c}. Since every item of a tuple may be of a different type,
(Monday, 23, {a, b, c}) is a valid tuple.

Definition D51.12 (Tuple Type). To formalize this relation, we define the tuple type T .
T specifies the basic sets for the elements of the tuples. If a tuple t meets the constraints
imposed to its values by T , we can write t ∈ T which means that the tuple t is an instance
of T .

T = (T0 , T1 , . . . , Tn−1 ) , n ∈ N1 (51.24)

t = (t[0], t[1], . . . , t[n − 1]) ∈ T ⇔ (t[i] ∈ Ti ∀i : 0 ≤ i < n) ∧ len(t) = len(T ) (51.25)

51.6 Permutations

A permutation π 16 is a special tuple. A permutation is an arrangement of n elements where


each element is unique.

Definition D51.13 (Permutation). Without loss of generality, we assume that a permu-


tation π of length n is a vector of natural numbers of that length for which the following
conditions hold:

π [i] ∈ 0..n − 1 ∀i ∈ 0..n − 1 (51.26)

∀i, j ∈ 0..n − 1 ⇒ (π [i] = π [j] ⇒ i = j) (51.27)
∀p ∈ 0..n − 1 ∃i ∈ 0..n − 1 : π [i] = p (51.28)

Let the permutation space Π(n) be the space of all permutations π ∈ Π(n) for which the
conditions from Definition D51.13 hold.
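The three conditions of Definition D51.13 can be verified in a single pass over an integer vector. The following sketch is our own illustration, not part of the book's framework:

```java
/** Checks the permutation conditions of Definition D51.13 for an int vector. */
public class PermutationCheck {

  /** Returns true iff pi is a permutation of 0..n-1 (Equations 51.26 to 51.28). */
  public static boolean isPermutation(int[] pi) {
    boolean[] seen = new boolean[pi.length];
    for (int v : pi) {
      if (v < 0 || v >= pi.length) {
        return false; // value out of 0..n-1: violates Equation 51.26
      }
      if (seen[v]) {
        return false; // duplicate value: violates Equation 51.27
      }
      seen[v] = true;
    }
    // Every value 0..n-1 now occurs exactly once, satisfying Equation 51.28.
    return true;
  }
}
```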
15 http://en.wikipedia.org/wiki/Tuple [accessed 2007-07-03]
16 http://en.wikipedia.org/wiki/Permutations [accessed 2008-01-31]
51.7 Binary Relations

Definition D51.14 (Binary Relation). A binary17 relation18 R is defined as an ordered


triplet (A, B, P ) where A and B are arbitrary sets, and P is a subset of the Cartesian product
A × B (see Equation 51.21). The sets A and B are called the domain and codomain of the
relation and P is called its graph.
The statement (a, b) ∈ P (a ∈ A ∧ b ∈ B) is read “a is R-related to b” and is written as
R(a, b). The order of the elements in each pair of P is important: If a ≠ b, then R(a, b) and
R(b, a) both can be true or false independently of each other.

Some types and possible properties of binary relations are listed below and illustrated in
Figure 51.2. A binary relation can be [923]:

Figure 51.2: Properties of a binary relation R with domain A and codomain B.

1. Left-total if
∀a ∈ A ∃b ∈ B : R(a, b) (51.29)

2. Surjective19 or right-total if

∀b ∈ B ∃a ∈ A : R(a, b) (51.30)

3. Injective20 if
∀a1 , a2 ∈ A, b ∈ B : R(a1 , b) ∧ R(a2 , b) ⇒ a1 = a2 (51.31)
17 http://en.wikipedia.org/wiki/Binary_relation [accessed 2007-07-03]
18 http://en.wikipedia.org/wiki/Relation_%28mathematics%29 [accessed 2007-07-03]
19 http://en.wikipedia.org/wiki/Surjective [accessed 2007-07-03]
20 http://en.wikipedia.org/wiki/Injective [accessed 2007-07-03]
4. Functional if
∀a ∈ A, b1 , b2 ∈ B : R(a, b1 ) ∧ R(a, b2 ) ⇒ b1 = b2 (51.32)

5. Bijective21 if it is left-total, right-total and functional.


6. Transitive22 if

∀a ∈ A, ∀b ∈ B, ∀c ∈ A ∩ B : R(a, c) ∧ R(c, b) ⇒ R(a, b) (51.33)

51.8 Order Relations

All of us have learned the meaning and the importance of order since the earliest years
in school. The alphabet is ordered, the natural numbers are ordered, the marks we can score
in school are ordered, and so on. As a matter of fact, we come into contact with orders even way
before entering school by learning to distinguish things according to their size, for instance.
Order relations23 are binary relations which are used to express the order amongst the
elements of a set A. Since order relations are imposed on single sets, both their domain and
their codomain are the same (A, in this case). For such relations, we can define an additional
number of properties which can be used to characterize and distinguish the different types
of order relations:

1. Antisymmetry:
R(a1 , a2 ) ∧ R(a2 , a1 ) ⇒ a1 = a2 ∀a1 , a2 ∈ A (51.34)

2. Asymmetry
R(a1 , a2 ) ⇒ ¬R(a2 , a1 ) ∀a1 , a2 ∈ A (51.35)

3. Reflexiveness
R(a, a) ∀a ∈ A (51.36)

4. Irreflexiveness
∄a ∈ A : R(a, a) (51.37)

All order relations are transitive24 , and either antisymmetric or asymmetric and either
reflexive or irreflexive:

Definition D51.15 (Partial Order). A binary relation R defines a (non-strict, reflexive)


partial order if and only if it is reflexive, antisymmetric, and transitive.

The ≤ and ≥ operators, for instance, represent non-strict partial orders on the set of the
complex numbers C. Partial orders that correspond to the > and < comparators are called
strict. The Pareto dominance relation introduced in Definition D3.13 on page 65 is another
example for such a strict partial order.

Definition D51.16 (Strict Partial Order). A binary relation R defines a strict (or ir-
reflexive) partial order if it is irreflexive, asymmetric, and transitive.

21 http://en.wikipedia.org/wiki/Bijective [accessed 2007-07-03]


22 http://en.wikipedia.org/wiki/Transitive_relation [accessed 2007-07-03]
23 http://en.wikipedia.org/wiki/Order_relation [accessed 2007-07-03]
24 See Equation 51.33 for the definition of transitivity.
Definition D51.17 (Total Order). A total order25 (or linear order, simple order) R im-
posed on the set A is a partial order which is complete/total.

R(a1 , a2 ) ∨ R(a2 , a1 ) ∀a1 , a2 ∈ A (51.38)

The real numbers R for example are totally ordered whereas on the set of complex
numbers C, only (strict or reflexive) partial (non-total) orders can be defined since it is
continuous in two dimensions.

51.9 Equivalence Relations

Another important class of relations are equivalence relations26 [2774, 2842] which are
often abbreviated with ≡ or ∼, i. e., a1 ≡ a2 and a1 ∼ a2 mean R(a1 , a2 ) for the equivalence
relation R imposed on the set A and a1 , a2 ∈ A. Unlike order relations, equivalence relations
are symmetric, i. e.,
R(a1 , a2 ) ⇒ R(a2 , a1 ) ∀a1 , a2 ∈ A (51.39)

Definition D51.18 (Equivalence Relation). The binary relation R defines an equiva-


lence relation on the set A if and only if it is reflexive, symmetric, and transitive.

Definition D51.19 (Equivalence Class). If an equivalence relation R is defined on a


set A, the subset A′ ⊆ A of A is an equivalence class27 if and only if ∀a1 , a2 ∈ A′ ⇒
R(a1 , a2 ) (a1 ∼ a2 ).

51.10 Functions

Definition D51.20 (Function). A function f : X ↦ Y is a binary relation with the


property that for each element x of the domain28 X there is no more than one element y in
the codomain Y such that x is related to y (by f ). This uniquely determined element y is
denoted by f (x). In other words, a function is a functional binary relation (see Point 4 on
the previous page) and we can write:

∀x ∈ X, y1 , y2 ∈ Y : f (x, y1 ) ∧ f (x, y2 ) ⇒ y1 = y2 (51.40)

A function maps each element of X to one element in Y . The domain X is the set of possible
input values of f and the codomain Y is the set of its possible outputs. The set of all actual
outputs {f (x) : x ∈ X} is called the range. This distinction between range and codomain can
be made obvious with a small example. The sine function can be defined as a mapping from
the real numbers to the real numbers sin : R ↦ R, making R its codomain. Its actual range,
however, is just the real interval [−1, 1].
25 http://en.wikipedia.org/wiki/Total_order [accessed 2007-07-03]
26 http://en.wikipedia.org/wiki/Equivalence_relation [accessed 2007-07-28]
27 http://en.wikipedia.org/wiki/Equivalence_class [accessed 2007-07-28]
28 http://en.wikipedia.org/wiki/Domain_%28mathematics%29 [accessed 2007-07-03]
51.10.1 Monotonicity

Functions are monotone, i. e., have the property of monotonicity29 , if they preserve orders.
In the following definitions, we assume that two order relations RX and RY are defined on
the domain X and codomain Y of a function f , respectively. These orders are expressed
with the <, ≤, and ≥ operators. In the common case that X = Y or X ⊆ R and Y ⊆ R,
RX = RY usually holds and both correspond to the intuitive order of the real numbers.

Definition D51.21 (Monotonically Increasing). A function f : X ↦ Y is called
monotonic, monotonically increasing, increasing, or non-decreasing, if and only if Equation 51.41
holds (according to the order relations defined on X and Y ).
holds (according to the order relations defined on X and Y ).

∀x1 , x2 ∈ X : x1 < x2 ⇒ f (x1 ) ≤ f (x2 ) (51.41)

Definition D51.22 (Monotonically Decreasing). A function f : X ↦ Y is called
monotonically decreasing, decreasing, or non-increasing, if and only if Equation 51.42 holds
(according to the order relations defined on X and Y ).
(according to the order relations defined on X and Y ).

∀x1 , x2 ∈ X : x1 < x2 ⇒ f (x1 ) ≥ f (x2 ) (51.42)

51.11 Lists

Definition D51.23 (List). Lists30 are abstract data types which can be regarded as spe-
cial tuples. They are sequences where every item is of the same type.

Unlike our discussion of set theory, the following text about the data structure list is
strictly local to this book and not to be understood as a general mathematical theory. All
the functions and operations defined here are only given in order to allow for a clear and
well-defined notation in the other parts of the book. When specifying population-oriented
optimization algorithms, for instance, we will use lists rather than sets for representing the
populations.
The definitions provided here are not founded upon any related work in particular. We
introduce functions that will add elements to or remove elements from lists; that sort lists
or search within them. All list functions are free of side effects31 , i. e., they do not change
their parameters. Instead, the result of a function processing a list l is a new list l′ which
is the modified version of l, whereas l is left untouched.
We do not assume any specific form of list implementation such as linear arrays or linked
lists32 . Instead, we leave the actual implementation of the methods discussed in the following
completely open and try to specify their features axiomatically.
Like tuples, lists can be defined using parenthesis in this book. The single elements of a
list are accessed by their index written in brackets ((a, b, c) [1] = b). The first element of a
list is located at index 0 and the last element has the index n − 1 where n is the number of
elements in the list: n = len((a, b, c)) = 3. The empty list is abbreviated with ().

29 http://en.wikipedia.org/wiki/Monotonic_function [accessed 2007-08-08]


30 http://en.wikipedia.org/wiki/List_%28computing%29 [accessed 2007-07-03]
31 http://en.wikipedia.org/wiki/Side_effect_(computer_science) [accessed 2009-06-13]
32 http://en.wikipedia.org/wiki/Linked_list [accessed 2009-06-13]
Definition D51.24 (createList). The l = createList(n, q) method creates a new list l of
the length n filled with the item q.

l = createList(n, q) ⇔ len(l) = n ∧ ∀0 ≤ i < n ⇒ l[i] = q (51.43)

Definition D51.25 (insertItem). The function m = insertItem(l, i, q) creates a new list


m by inserting one element q in a list l at the index i : 0 ≤ i ≤ len(l). By doing so, it shifts
all elements located at index i and above to the right by one position.

m = insertItem(l, i, q) ⇔ len(m) = len(l) + 1 ∧ m[i] = q ∧


∀j : 0 ≤ j < i ⇒ m[j] = l[j] ∧
∀j : i ≤ j < len(l) ⇒ m[j + 1] = l[j] (51.44)

Definition D51.26 (addItem). The addItem function is a shortcut for inserting one item
at the end of a list:
addItem(l, q) ≡ insertItem(l, len(l) , q) (51.45)

Definition D51.27 (deleteItem). The function m = deleteItem(l, i) creates a new list m


by removing the element at index i : 0 ≤ i < len(l) from the list l where len(l) ≥ i + 1 must
hold.

m = deleteItem(l, i) ⇔ len(m) = len(l) − 1 ∧


∀j : 0 ≤ j < i ⇒ m[j] = l[j]
∀j : i < j < len(l) ⇒ m[j − 1] = l[j] (51.46)
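The side-effect-free semantics of these functions can be illustrated in Java by always copying the input list before modifying the copy. The following sketch is our own illustration of createList, insertItem, addItem, and deleteItem according to Equations 51.43 to 51.46; it is not part of the book's framework:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Side-effect-free sketches of createList, insertItem, addItem, and deleteItem. */
public class ListFunctions {

  /** createList(n, q): a new list of length n filled with the item q (Equation 51.43). */
  public static <T> List<T> createList(int n, T q) {
    return new ArrayList<>(Collections.nCopies(n, q));
  }

  /** insertItem(l, i, q): a copy of l with q inserted at index i (Equation 51.44). */
  public static <T> List<T> insertItem(List<T> l, int i, T q) {
    List<T> m = new ArrayList<>(l); // l itself is never modified
    m.add(i, q); // shifts the elements at indices i and above to the right
    return m;
  }

  /** addItem(l, q) ≡ insertItem(l, len(l), q) (Equation 51.45). */
  public static <T> List<T> addItem(List<T> l, T q) {
    return insertItem(l, l.size(), q);
  }

  /** deleteItem(l, i): a copy of l with the element at index i removed (Equation 51.46). */
  public static <T> List<T> deleteItem(List<T> l, int i) {
    List<T> m = new ArrayList<>(l);
    m.remove(i);
    return m;
  }
}
```

After m = insertItem(l, 1, q), the original list l is unchanged, exactly as required by the definitions above.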

Definition D51.28 (deleteRange). The method m = deleteRange(l, i, c) creates a new


list m by removing c elements beginning at index i : 0 ≤ i < len(l) from the list l. len(l) ≥
i + c and c ≥ 0 must both hold.

m = deleteRange(l, i, c) ⇔ len(m) = len(l) − c ∧


∀j : 0 ≤ j < i ⇒ m[j] = l[j] ∧
∀j : i + c ≤ j < len(l) ⇒ m[j − c] = l[j] (51.47)

Definition D51.29 (appendList). The function appendList(l1 , l2 ) is a shortcut for


adding all the elements of a list l2 to a list l1 . We define it recursively as:

appendList(l1 , l2 ) ≡ { l1 , if len(l2 ) = 0;
  appendList(addItem(l1 , l2 [0]) , deleteItem(l2 , 0)) , otherwise } (51.48)

Definition D51.30 (count). The function count(x, l) returns the number of occurrences
of the element x in the list l.

count(x, l) = |{i ∈ 0..len(l) − 1 : l[i] = x}| (51.49)


Definition D51.31 (subList). The method subList(l, i, c) extracts c elements from the
list l beginning at index i and returns them as a new list. Both 0 ≤ i < len(l) and
len(l) − i ≥ c must hold.
subList(l, i, c) ≡ deleteRange(deleteRange(l, 0, i) , c, len(l) − i − c) (51.50)

Definition D51.32 (reverseList). The method m = reverseList(l) creates a list m which


contains the same elements as the list l but in reverse order, i. e.,
m = reverseList(l) ⇔ len(m) = len(l) ∧ (i ∈ 0..len(l) − 1 ⇒ m[i] = l[len(l) − i − 1]) (51.51)

51.11.1 Sorting

Definition D51.33 (Sorted List). A list l is considered as being sorted, if l[i] ≤ l[j] ↔
i ≤ j holds for each i, j ∈ 0..len(l) − 1 according to a natural (or explicitly specified) order
relation defined on the type of the elements of the list. Thus, we can define the predicate
“isSorted” as:
isSorted(l) ⇔ (i ∈ 1..len(l) − 1 ⇒ l[i − 1] ≤ l[i]) (51.52)

The concept of comparator functions has been introduced in Definition D3.18 on page 77.
cmp(u1 , u2 ) returns a negative value if u1 is smaller than u2 , a positive number if u1 is
greater than u2 , and 0 if both are equal. Comparator functions are very versatile; they form
the foundation of the sorting mechanisms of the Java framework [1109, 1110], for instance.
In Global Optimization, they are perfectly suited to represent the Pareto dominance or
prevalence relations discussed in Section 3.3.5 on page 65 and Section 3.5.2. Here, we use
them to express the orders according to which a list can be sorted. Hence, we can redefine
Equation 51.52 to:
isSorted(l, cmp) ⇔ (i ∈ 1..len(l) − 1 ⇒ cmp(l[i − 1], l[i]) < 0) (51.53)
Sorting algorithms33 transform unsorted lists into sorted ones. Here, we will not explicitly
discuss these algorithms and, instead, only specify prototype interfaces to them. Generally,
a list U can be sorted with O(len(U ) log len(U )) time complexity. For concrete examples
of sorting algorithms, see [635, 1557, 2446]. We define the functions sortAsc(U, cmp) and
sortDsc(U, cmp) to sort a list U using a comparator function cmp(u1 , u2 ) in ascending
or descending order, respectively. In other words, their parameter U is an arbitrary
(possibly unsorted) list and the return value is a list containing the same elements sorted
according to cmp.
S = sortAsc(U, cmp) ⇒ (∀u : count(u, U ) = count(u, S)) ∧ isSorted(S, cmp) (51.54)
The result of S = sortDsc(U, cmp) is sorted reversely:
S = sortDsc(U, cmp) ⇒ (∀u : count(u, U ) = count(u, S)) ∧ isSorted(reverseList(S) , cmp) (51.55)
Sorting according to a specific function f of only one parameter can easily be performed
by building the comparator cmp(u1 , u2 ) ≡ (f (u1 ) − f (u2 )). Thus, we will furthermore
synonymously use the sorting predicate also with unary functions f .
sort(U, f ) ≡ sort(U, cmp(u1 , u2 ) ≡ (f (u1 ) − f (u2 ))) (51.56)
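In Java, such comparator-based sorting is directly supported: java.util.List.sort accepts a java.util.Comparator, and a descending sort simply uses the reversed comparator. The following sketch (our own illustration) mirrors sortAsc and sortDsc without modifying the input list:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Side-effect-free sketches of sortAsc and sortDsc using a comparator function cmp. */
public class Sorting {

  /** sortAsc(U, cmp): a copy of u sorted in ascending order according to cmp. */
  public static <T> List<T> sortAsc(List<T> u, Comparator<T> cmp) {
    List<T> s = new ArrayList<>(u); // u is left untouched
    s.sort(cmp);
    return s;
  }

  /** sortDsc(U, cmp): a copy of u sorted in descending order according to cmp. */
  public static <T> List<T> sortDsc(List<T> u, Comparator<T> cmp) {
    return sortAsc(u, cmp.reversed());
  }
}
```

A comparator built from a unary function f as in Equation 51.56 corresponds to Comparator.comparingInt(f) in the Java framework.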

33 http://en.wikipedia.org/wiki/Sorting_algorithm [accessed 2007-07-03]


51.11.2 Searching

Searching an element u in an unsorted list U means walking through it until either the
element is found or the end of the whole list has been reached, which corresponds to complexity
O(len(U )). We therefore define the operation searchItemUS(u, U ) which returns either the
index of the searched element u in the list or −1 if u ∉ U .

searchItemUS(u, U ) = { i : U [i] = u, if u ∈ U ; −1, otherwise } (51.57)

Searching an element s in a sorted list S usually means to perform a fast binary search34
returning the index of the element if it is contained in S. For the case that s is not contained
in S, we again specify an extension similar to the one used in Java: If s ∉ S, a search
operation for sorted lists returns a negative number indicating the position where the element
could be inserted into the list without violating its order. The function searchItemAS(s, S)
searches in a sorted list with elements in ascending order, searchItemDS(s, S) searches in a
descending sorted list. Searching in a sorted list is done in O(log len(S)) time. For concrete
algorithm examples, again see [635, 1557, 2446].

searchItemAS(s, S) = { i : S [i] = s, if s ∈ S;
  (−i − 1) : (∀j : 0 ≤ j < i ⇒ S [j] ≤ s) ∧ (∀j : i ≤ j < len(S) ⇒ S [j] > s), otherwise } (51.58)

searchItemDS(s, S) = { i : S [i] = s, if s ∈ S;
  (−i − 1) : (∀j : 0 ≤ j < i ⇒ S [j] ≥ s) ∧ (∀j : i ≤ j < len(S) ⇒ S [j] < s), otherwise } (51.59)
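The negative return value convention of Equation 51.58 is exactly the one used by Java's built-in Arrays.binarySearch, which returns (−(insertion point) − 1) if the key is not contained in the array. Hence, for arrays of int, searchItemAS can be sketched as a simple delegation:

```java
import java.util.Arrays;

/** searchItemAS via binary search, matching the return convention of Equation 51.58. */
public class SortedSearch {

  /** Index of key in the ascending array s, or (-i - 1) where i is the insertion point. */
  public static int searchItemAS(int[] s, int key) {
    return Arrays.binarySearch(s, key); // runs in O(log len(S)) time
  }
}
```

For S = (1, 3, 5), searching 3 yields index 1, while searching 4 yields −3, since 4 would have to be inserted at index 2.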

Definition D51.34 (removeItem). The function removeItem(l, q) finds one occurrence


of an element q in a list l by using the appropriate search algorithm and returns a new list
m where it is deleted. For finding the element, we denote any of the previously defined
search operators with searchItem(q, l).

m = removeItem(l, q) ⇔ { m = l, if searchItem(q, l) < 0;
  m = deleteItem(l, searchItem(q, l)) , otherwise } (51.60)

51.11.3 Transformations between Sets and Lists

We can further define transformations between sets and lists which will implicitly be used
when needed in this book. It should be noted that setToList is not the inverse function of
listToSet.

B = setToList(set A) ⇒ ∀a ∈ A ∃i : B [i] = a ∧
∀i ∈ 0..len(B) − 1 ⇒ B [i] ∈ A ∧
len(setToList(A)) = |A| (51.61)

A = listToSet(list B) ⇒ ∀i ∈ 0..len(B) − 1 ⇒ B [i] ∈ A ∧


∀a ∈ A ∃i ∈ 0..len(B) − 1 : B [i] = a ∧
|listToSet(B)| ≤ len(B) (51.62)
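Both transformations map onto Java collection constructors, as the following illustrative sketch shows. Note how converting a list that contains duplicates into a set collapses them, which is why |listToSet(B)| ≤ len(B) in Equation 51.62:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Sketches of setToList and listToSet (Equations 51.61 and 51.62). */
public class SetListTransform {

  /** setToList(A): a list containing each element of A exactly once, in some order. */
  public static <T> List<T> setToList(Set<T> a) {
    return new ArrayList<>(a);
  }

  /** listToSet(B): the set of distinct elements of B; duplicates collapse into one. */
  public static <T> Set<T> listToSet(List<T> b) {
    return new HashSet<>(b);
  }
}
```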

34 http://en.wikipedia.org/wiki/Binary_search [accessed 2007-07-03]


Chapter 52
Graph Theory

Graph theory1 [791, 2741] is an area of mathematics which investigates the properties
of graphs, relations between graphs, and algorithms applied to graphs.

Definition D52.1 (Graph). A graph2 G = (V, E) is a tuple consisting of a set of vertices
(or nodes) V and a set of edges (or connections) E which connect the vertices.

We distinguish between finite graphs, where the set of vertices is finite, and infinite
graphs, where this is not the case. The set of edges E defines a binary relation (see
Section 51.7) on the vertices v ∈ V . If the edges of a graph have no orientation, the graph
is called undirected. Then, E can either be considered as a symmetric relation, i. e.,
(v1 , v2 ) ∈ E ⇔ (v2 , v1 ) ∈ E, or the single edges are defined as 2-multisets, i. e., {v1 , v2 }
instead of (v1 , v2 ) and (v2 , v1 ). The edges in directed graphs have an orientation and, in
sketches, are often represented by arrows whereas undirected edges are usually denoted by
simple lines between the nodes v.

Definition D52.2 (Path). A path3 p = (p0 , p1 , . . . ) in a graph G = (V, E) is a sequence


of vertices such that from each vertex there is an edge to the next vertex in the path:

p is a path in G ⇔ ∀i : 0 ≤ i < len(p) ⇒ pi ∈ V (52.1)

∀i : 0 ≤ i ≤ len(p) − 2 ⇒ (pi , pi+1 ) ∈ E (52.2)

A cycle is a path where the start and end vertex are the same, i. e., p0 = plen(p)−1 .
A graph is a weighted graph if values (weights) w are assigned to its edges e ∈ E. The
weight might stand for a distance or a cost. The total weight of a path p in such a graph then
evaluates to the sum of the weights of its edges:

w(p) = Σ_{i=0}^{len(p)−2} w((pi, pi+1)) (52.3)
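Equation 52.3 translates directly into a program. The following Python sketch (our illustration, not taken from the book; the dictionary-based representation is just one possible encoding) stores an undirected weighted graph as a map from 2-element edge sets to weights and sums the edge weights along a path:

```python
# Sketch: an undirected weighted graph as a dictionary that maps 2-element
# frozensets {v1, v2} to edge weights w, plus the total path weight of
# Equation 52.3 as the sum of the weights w((p_i, p_{i+1})) along the path.

def path_weight(weights, p):
    """Total weight of the path p = (p0, p1, ...)."""
    total = 0
    for i in range(len(p) - 1):
        edge = frozenset((p[i], p[i + 1]))
        if edge not in weights:
            raise ValueError(f"({p[i]}, {p[i + 1]}) is not an edge")
        total += weights[edge]
    return total

weights = {
    frozenset(("a", "b")): 2,
    frozenset(("b", "c")): 3,
    frozenset(("a", "c")): 7,
}

print(path_weight(weights, ("a", "b", "c")))  # 5: cheaper than the direct edge
```

With such a weight function, shortest-path algorithms on weighted graphs only ever need the two operations shown here: edge lookup and weight accumulation.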

1 http://en.wikipedia.org/wiki/Graph_theory [accessed 2009-06-15]


2 http://en.wikipedia.org/wiki/Graph_(mathematics) [accessed 2009-06-15]
3 http://en.wikipedia.org/wiki/Path_(graph_theory) [accessed 2009-06-15]
Chapter 53
Stochastic Theory and Statistics

In this chapter we give a rough introduction to stochastic theory1 [1438, 1693, 2292], which
subsumes
1. probability2 theory3 , the mathematical study of phenomena characterized by random-
ness or uncertainty, and
2. statistics4 , the art of collecting, analyzing, interpreting, and presenting data.

53.1 Probability

Probability theory is used to determine the likelihood of the occurrence of an event under
ideal mathematical conditions [1481, 1482]. Let us start this short summary with some very
basic definitions.

Definition D53.1 (Random Experiment). Random experiments can be repeated arbitrarily
often; their results cannot be predicted.

Definition D53.2 (Elementary Event). The possible outcomes of random experiments


are called elementary events or samples ω.

Definition D53.3 (Sample Space). The set of all possible outcomes (elementary events,
samples) ω of a random experiment is the sample space Ω.

Example E53.1 (Dice Throw Sample Space).


When throwing a die5, for example, the sample space will be Ω = {ω1 , ω2 , ω3 , ω4 , ω5 , ω6 },
where ωi means that the number i was thrown.

1 http://en.wikipedia.org/wiki/Stochastic [accessed 2007-07-03]


2 http://en.wikipedia.org/wiki/Probability [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Probability_theory [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Statistics [accessed 2007-07-03]
5 Throwing a die is discussed extensively as an example for stochastic theory in Section 53.5 on page 689.
654 53 STOCHASTIC THEORY AND STATISTICS
Definition D53.4 (Random Event). A random event A is a subset of the sample space
Ω (A ⊆ Ω). If any ω ∈ A occurs, then A occurs too.

Definition D53.5 (Certain Event). The certain event is the random event which will always
occur in each repetition of a random experiment. Therefore, it is equal to the whole sample
space Ω.

Definition D53.6 (Impossible Event). The impossible event will never occur in any rep-
etition of a random experiment, it is defined as ∅.

Definition D53.7 (Conflicting Events). Two conflicting events A1 and A2 can never
occur together in a random experiment. Therefore, A1 ∩ A2 = ∅.

53.1.1 Probability as defined by Bernoulli (1713)


In some idealized situations, like throwing ideal coins or ideal dice, all elementary events of
the sample space have the same probability [720].
P(ω) = 1/|Ω| ∀ω ∈ Ω (53.1)
Equation 53.1 is also called the Laplace-assumption. If it holds, the probability of an event
A can be defined as:
P(A) = (number of possible events in favor of A) / (number of possible events) = |A| / |Ω| (53.2)
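Under the Laplace assumption, computing P(A) thus reduces to counting. A small Python illustration (our sketch, not from the book; exact fractions avoid rounding) for an ideal die and the event "an even number is thrown":

```python
# Equation 53.2 under the Laplace assumption: for an ideal die every
# elementary event has probability 1/|Omega|, so P(A) = |A|/|Omega|.
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}            # sample space of an ideal die
A = {x for x in omega if x % 2 == 0}  # event: an even number is thrown

P_A = Fraction(len(A), len(omega))
print(P_A)  # 1/2
```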

53.1.2 Combinatorics
For many random experiments of this type, we can use combinatorial6 approaches in order
to determine the number of possible outcomes. Therefore, we briefly outline the
mathematical concepts of factorial numbers, combinations, and permutations.7

Definition D53.8 (Factorial). The factorial8 n! of a number n ∈ N0 is the product of
n and all natural numbers (except 0) smaller than it. It is a specialization of the Gamma
function for positive integer numbers, see ?? on page ??.

n! = ∏_{i=1}^{n} i (53.3)
0! = 1 (53.4)

In combinatorial mathematics9, we often want to know in how many ways we can arrange
n ∈ N1 elements from a set Ω with M = |Ω| ≥ n elements. We can distinguish between
combinations, where the order of the elements in the arrangement plays no role, and permutations,
where it is important. (a, b, c) and (c, b, a), for instance, denote the same combination
but different permutations of the elements {a, b, c}. We furthermore distinguish between
arrangements where each element of Ω can occur at most once (without repetition) and
arrangements where the same element may occur multiple times (with repetition).
6 http://en.wikipedia.org/wiki/Combinatorics [accessed 2007-07-03]
7 http://en.wikipedia.org/wiki/Combinations_and_permutations [accessed 2007-07-03]
8 http://en.wikipedia.org/wiki/Factorial [accessed 2007-07-03]
9 http://en.wikipedia.org/wiki/Combinatorics [accessed 2008-01-31]
53.1.2.1 Combinations
The number of possible combinations10 C(M, n) of n ∈ N1 elements out of a set Ω with
M = |Ω| ≥ n elements without repetition is

C(M, n) = (M over n) = M! / (n! (M − n)!) (53.5)
(M over n) = (M/n) ∗ ((M − 1)/(n − 1)) ∗ ((M − 2)/(n − 2)) ∗ .. ∗ ((M − n + 1)/1) (53.6)
C(M + 1, n) = C(M, n) + C(M, n − 1) = (M + 1 over n) = (M over n) + (M over n − 1) (53.7)

If the elements of Ω may repeatedly occur in the arrangements, the number of possible
combinations becomes

(M + n − 1)! / (n! (M − 1)!) = (M + n − 1 over n) = (M + n − 1 over M − 1) = C(M + n − 1, n) = C(M + n − 1, M − 1)
(53.8)

53.1.2.2 Permutations
The number of possible permutations (see Section 51.6) Perm(M, n) of n ∈ N1 elements
out of a set Ω with M = |Ω| ≥ n elements without repetition is

Perm(M, n) = (M)_n = M! / (M − n)! (53.9)

If an element from Ω can occur more than once in the arrangements, the number of possible
permutations is

M^n (53.10)
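These four counts can be cross-checked with Python's standard library (our sketch, not from the book; math.comb and math.perm require Python 3.8 or later):

```python
# The four arrangement counts of Section 53.1.2 for M = |Omega| and n elements.
import math

M, n = 6, 3
comb_no_rep = math.comb(M, n)          # combinations w/o repetition, C(M, n)
comb_rep = math.comb(M + n - 1, n)     # combinations with repetition
perm_no_rep = math.perm(M, n)          # permutations w/o repetition, M!/(M-n)!
perm_rep = M ** n                      # permutations with repetition

print(comb_no_rep, comb_rep, perm_no_rep, perm_rep)  # 20 56 120 216
```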

53.1.3 The Limiting Frequency Theory of von Mises


von Mises [2822] proposed that if we repeat a random experiment multiple times, the number
of occurrences of a certain event should somehow reflect its probability. The more often we
perform the experiment, the more reliable the estimates of the event probability
become. We can express this relation using the notion of frequency.

Definition D53.9 (Absolute Frequency). The number H(A, n) denoting how often an
event A occurred during n repetitions of a random experiment is its absolute frequency11 .

Definition D53.10 (Relative Frequency). The relative frequency h(A, n) of an event A


is its absolute frequency normalized to the total number of experiments n. The relative
frequency has the following properties:
h(A, n) = H(A, n) / n (53.11)
0 ≤ h(A, n) ≤ 1 (53.12)
h(Ω, n) = 1 ∀n ∈ N1 (53.13)
A ∩ B = ∅ ⇒ h(A ∪ B, n) = (H(A, n) + H(B, n)) / n = h(A, n) + h(B, n) (53.14)
10 http://en.wikipedia.org/wiki/Combination [accessed 2008-01-31]
11 http://en.wikipedia.org/wiki/Frequency_%28statistics%29 [accessed 2007-07-03]
According to von Mises [2822], the (statistical) probability P(A) of an event A is the limit of
its relative frequency h(A, n) as n approaches infinity. This is the limit of
the quotient of the number nA of elementary events favoring A and the number n of all possible
elementary events for infinitely many repetitions [2822, 2823].

P(A) = lim_{n→∞} h(A, n) = lim_{n→∞} nA/n (53.15)
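This convergence can be observed experimentally. A small Monte Carlo sketch (ours, not from the book; the seed is arbitrary) tracks the relative frequency of the event "a six is thrown", whose probability is 1/6:

```python
# Relative frequency h(A, n) approaching P(A) = 1/6 as n grows (Eq. 53.15).
import random

random.seed(1)  # fixed seed so the run is reproducible
for n in (100, 10_000, 1_000_000):
    hits = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    h = hits / n
    print(n, h)
```

The printed frequencies wander for small n and settle near 0.1667 for large n, which is exactly the behavior the limiting frequency theory postulates.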

53.1.4 The Axioms of Kolmogorov

Definition D53.11 (σ-algebra). A subset S of the power set P (Ω) of the sample space
Ω is called σ-algebra12 , if the following axioms hold:

Ω ∈ S (53.16)
∅ ∈ S (53.17)
A ∈ S ⇔ Ā ∈ S (53.18)
A ∈ S ∧ B ∈ S ⇒ (A ∪ B) ∈ S (53.19)

From these axioms others can be deduced, for example:

A ∈ S ∧ B ∈ S ⇒ Ā ∈ S ∧ B̄ ∈ S (53.20)
              ⇒ (Ā ∪ B̄) ∈ S
              ⇒ (A ∩ B) ∈ S (by Equation 53.18 and De Morgan's law) (53.21)
A ∈ S ∧ B ∈ S ⇒ (A ∩ B) ∈ S (53.22)

Definition D53.12 (Probability Space). A probability space (or random experiment)
is defined by the triplet (Ω, S, P) where
1. Ω is the sample space, a set of elementary events,
2. S is a σ-algebra defined on Ω, and
3. P is a probability measure13 which determines the probability of occurrence for
each event A ∈ S (the Kolmogorov [1565, 1566] axioms14).

Definition D53.13 (Probability). A mapping P which assigns a real number to each event
A ∈ S is called a probability measure if and only if the following axioms hold on the σ-algebra S
over Ω:

∀A ∈ S ⇒ 0 ≤ P(A) ≤ 1 (53.23)
P(Ω) = 1 (53.24)
∀ pairwise disjoint Ai ∈ S ⇒ P(⋃_{∀i} Ai) = Σ_{∀i} P(Ai) (53.25)
12 http://en.wikipedia.org/wiki/Sigma-algebra [accessed 2007-07-03]
13 http://en.wikipedia.org/wiki/Probability_measure [accessed 2007-07-03]
14 http://en.wikipedia.org/wiki/Kolmogorov_axioms [accessed 2007-07-03]
From these axioms, it can be deduced that:
P(∅) = 0 (53.26)
P(Ā) = 1 − P(A) (53.27)
P(A ∩ B̄) = P(A) − P(A ∩ B) (53.28)
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (53.29)

53.1.5 Conditional Probability

Definition D53.14 (Conditional Probability). The conditional probability15 P (A |B )


is the probability of an event A, given that another event B already occurred. P (A |B ) is
read “the probability of A, given B”.
P(A|B) = P(A ∩ B) / P(B) (53.30)
P(A ∩ B) = P(A|B) P(B) (53.31)

Definition D53.15 (Statistical Independence). Two events A and B are (statistically)


independent if and only if P (A ∩ B) = P (A) P (B) holds. If two events A and B are
statistically independent, we can deduce:
P (A ∩ B) = P (A) P (B) (53.32)
P (A |B ) = P (A) (53.33)
P (B |A ) = P (B) (53.34)
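Both the conditional probability of Equation 53.30 and the independence criterion can be verified by exhaustively enumerating a finite sample space. A Python sketch (ours, not from the book) for two ideal dice:

```python
# Conditional probability (Eq. 53.30) and statistical independence, checked
# over the 36 equally likely outcomes of throwing two ideal dice.
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))  # all (first, second) outcomes
A = {w for w in omega if w[0] + w[1] == 7}    # event: the sum is 7
B = {w for w in omega if w[0] == 3}           # event: the first die shows 3

def P(event):
    """Laplace probability of an event over the finite sample space."""
    return Fraction(len(event), len(omega))

P_A_given_B = P(A & B) / P(B)       # Equation 53.30
print(P_A_given_B)                  # 1/6
print(P(A & B) == P(A) * P(B))      # True: A and B are independent
```

That A ("sum is 7") and B ("first die shows 3") come out independent is a classic, slightly surprising example: knowing the first die does not change the chance of a total of 7.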

53.1.6 Random Variables

Definition D53.16 (Random Variable). The function X which relates the sample space
Ω to the real numbers R is called a random variable16 in the probability space (Ω, S, P).

X : Ω → R (53.35)

Using such a random variable, we can replace the sample space Ω with the new sample space
ΩX . Furthermore, the σ-algebra S can be replaced with a σ-algebra SX , which consists of
subsets of ΩX instead of Ω. Last but not least, we replace the probability measure P which
relates the ω ∈ Ω to the interval [0, 1] by a new probability measure PX which relates the
real numbers R to this interval.

Definition D53.17 (Probability Space of a Random Variable). If X : Ω → R is a random
variable, then the probability space of X is defined as the triplet

(ΩX , SX , PX ) (53.36)

One example for such a new probability measure would be the probability that a random
variable X takes on a real value which is smaller or equal a value x:
PX (X ≤ x) = P ({ω : ω ∈ Ω ∧ X (ω) ≤ x}) (53.37)

15 http://en.wikipedia.org/wiki/Conditional_probability [accessed 2007-07-03]


16 http://en.wikipedia.org/wiki/Random_variable [accessed 2007-07-03]
53.1.7 Cumulative Distribution Function

Definition D53.18 (Cumulative Distribution Function). If X is a random variable of
a probability space (ΩX = R, SX , PX ), we call the function F : R → [0, 1] the (cumulative)
distribution function17 (CDF) of the random variable X if it fulfills Equation 53.38, which
combines the definition of the random variable with the definition of its probability space:

FX(x) ≡ PX(X ≤ x) ≡ P({ω : ω ∈ Ω ∧ X(ω) ≤ x}) (53.38)

A cumulative distribution function F has the following properties:


1. FX(x) is normalized:

lim_{x→−∞} FX(x) = 0 (impossible event), lim_{x→+∞} FX(x) = 1 (certain event) (53.39)

2. FX(x) is monotonically18 growing:

FX(x1) ≤ FX(x2) ∀x1 ≤ x2 (53.40)

3. FX(x) is (right-sided) continuous19:

lim_{h→0⁺} FX(x + h) = FX(x) (53.41)

4. The probability that the random variable X takes on values in the half-open interval
x0 < X ≤ x1 can be computed using the CDF:

P(x0 < X ≤ x1) = FX(x1) − FX(x0) (53.42)

5. The probability that the random variable X takes on the value of a single real
number x is:

P(X = x) = FX(x) − lim_{h→0⁺} FX(x − h) (53.43)
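For a discrete random variable such as an ideal die, these properties are easy to verify numerically. The following sketch (ours, not from the book) implements the staircase-shaped CDF and checks the half-open interval rule FX(x1) − FX(x0):

```python
# The discrete CDF of an ideal die: F(x) = P(X <= x) = floor(x)/6 on [1, 6),
# clamped to 0 below and 1 above; a "staircase" function.
from fractions import Fraction
import math

def F(x):
    """CDF of an ideal die."""
    if x < 1:
        return Fraction(0)
    if x >= 6:
        return Fraction(1)
    return Fraction(math.floor(x), 6)

print(F(3))         # 1/2
print(F(4) - F(2))  # 1/3, i.e. the probability of {3, 4}
```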

We can further distinguish between sample spaces Ω which contain at most countably infinitely
many elements and those which are continuums. Hence, there are discrete20 and continuous21
random variables:

Definition D53.19 (Discrete Random Variable). A random variable X (and its probability
measure PX respectively) is called discrete if it takes on at most countably infinitely
many values. Its cumulative distribution function FX therefore has a staircase
shape.

Definition D53.20 (Continuous Random Variable). A random variable X (and its
probability measure PX respectively) is called continuous if it can take on uncountably
infinitely many values and its cumulative distribution function FX is also continuous.

17 http://en.wikipedia.org/wiki/Cumulative_distribution_function [accessed 2007-07-03]


18 http://en.wikipedia.org/wiki/Monotonicity [accessed 2007-07-03]
19 http://en.wikipedia.org/wiki/Continuous_function [accessed 2007-07-03]
20 http://en.wikipedia.org/wiki/Discrete_random_variable [accessed 2007-07-03]
21 http://en.wikipedia.org/wiki/Continuous_probability_distribution [accessed 2007-07-03]
53.1.8 Probability Mass Function

Definition D53.21 (Probability Mass Function). The probability mass function22
(PMF) f is defined for discrete distributions only and assigns a probability to each value a
discrete random variable X can take on.

f : Z → [0, 1] : fX(x) := PX(X = x) (53.44)

We specify the relation between the PMF and its corresponding (discrete) CDF in Equation
53.45 and Equation 53.46. We further define the probability of an event A in Equation
53.47 using the PMF.

PX(X ≤ x) = FX(x) = Σ_{i=−∞}^{x} fX(i) (53.45)
PX(X = x) = fX(x) = FX(x) − FX(x − 1) (53.46)
PX(A) = Σ_{∀a∈A} fX(a) (53.47)
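The PMF/CDF relations above can be checked mechanically. A Python sketch (ours, not from the book; exact fractions keep the sums precise) for the PMF of an ideal die:

```python
# PMF/CDF relation for a discrete random variable (Eqs. 53.45 and 53.46),
# using an ideal die with f_X(x) = 1/6 for x in 1..6.
from fractions import Fraction

f = {x: Fraction(1, 6) for x in range(1, 7)}  # the PMF

def F(x):
    # Eq. 53.45: the discrete CDF is the running sum of the PMF
    return sum((f[i] for i in f if i <= x), Fraction(0))

# Eq. 53.46: each PMF value is the difference of consecutive CDF values
assert all(F(x) - F(x - 1) == f[x] for x in f)
print(F(6))  # 1, the certain event
```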

53.1.9 Probability Density Function

The probability density function23 (PDF) is the counterpart of the PMF for continuous
distributions. The PDF does not represent the probabilities of the single values of a random
variable. Since a continuous random variable can take on uncountably many values, each
distinct value itself has probability 0.

Example E53.2 (PDF: No Probability!).
If we, for instance, picture the current temperature outside as a (continuous) random variable,
the probability that it takes on the value 18 for 18◦ C is zero. It will never be exactly 18◦ C
outside; we can at most state with a certain probability that the temperature lies in an
interval such as [17.999◦ C, 18.001◦ C].

Definition D53.22 (Probability Density Function). If a random variable X is continuous,
its probability density function f is defined as

f : R → [0, ∞) : FX(x) = ∫_{−∞}^{x} fX(ξ) dξ ∀x ∈ R (53.48)
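Since the CDF of a continuous random variable is the integral of its PDF, it can be approximated by numerical integration. A sketch (ours, not from the book; midpoint rule over the uniform density on [0, 1)):

```python
# Approximating F_X(x) = integral of f_X up to x (Eq. 53.48) numerically
# for the continuous uniform density f(x) = 1 on [0, 1).

def f(x):
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def F(x, steps=10_000):
    lo = -1.0                        # the density is zero below 0, so -1 suffices
    h = (x - lo) / steps
    # midpoint rule: sum f at interval midpoints times the step width
    return sum(f(lo + (k + 0.5) * h) for k in range(steps)) * h

print(round(F(0.25), 3), round(F(1.0), 3))
```

For this density F(x) = x on [0, 1], so the printed approximations should sit very close to 0.25 and 1.0.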

53.2 Parameters of Distributions and their Estimates


Each random variable X which conforms to a probability distribution F may have certain
properties such as a maximum and a mean value, a variance, and a value which will be taken
on by X most often. If the cumulative distribution function F of X is known, these values
can usually be computed directly from its parameters. On the other hand, it is possible that
22 http://en.wikipedia.org/wiki/Probability_mass_function [accessed 2007-07-03]
23 http://en.wikipedia.org/wiki/Probability_density_function [accessed 2007-07-03]
we only know the values A[i] which X took on during some random experiments. From this
set of sample data A, we can estimate the properties of the underlying (possibly unknown)
distribution of X using statistical methods (with a certain error, of course).
In the following, we will elaborate on the properties of a random variable X ∈ R both
from the viewpoint of knowing the PMF/PDF fX (x) and the CDF FX (x) as well as from
the statistical perspective, where only a sample A of past values of X is known. In the
latter case, we define the sample as a list A with the length n = len(A) and the elements
A[i] : i ∈ [0, n − 1].

53.2.1 Count, Min, Max and Range

The most primitive features of a random distribution are the minimum, maximum, and the
range of its values. From the statistical perspective, the number of values A[i] in the data
sample A is another primitive parameter.

Definition D53.23 (Count). n = len(A) is the number of elements in the data sample A.

The item number is only defined for data samples, not for random variables, since random
variables represent experiments which can be repeated infinitely often and thus stand for infinitely
many values. The number of items should not be mixed up with the possible number
of different values the random variable may take on. A data sample A may contain the
same value a multiple times. When throwing a die seven times, one may throw A =
(1, 4, 3, 3, 2, 6, 1), for example24.

Definition D53.24 (Minimum: Statistics). There exists no smaller element in the sam-
ple data A than the minimum sample ǎ ≡ min(A).

min(A) ≡ ǎ ∈ A : ∀a ∈ A ⇒ ǎ ≤ a (53.49)

Definition D53.25 (Minimum: Stochastic). The minimum is the lower boundary x̌ of


the random variable X (or negative infinity, if no such boundary exists).

min(X) = x̌ ⇔ FX (x̌) > 0 ∧ (¬∃x ∈ R : FX (x) > 0 ∧ x < x̌) (53.50)

Both definitions are fully compliant with Definition D3.2 on page 53, and the maximum can
be specified similarly.

Definition D53.26 (Maximum: Statistics). There exists no larger element in the sample
data A than the maximum sample â ≡ max(A).

max(A) ≡ â ∈ A : ∀a ∈ A ⇒ â ≥ a (53.51)

Definition D53.27 (Maximum: Stochastic). The maximum is the upper boundary x̂ of the
values a random variable X may take on (or positive infinity, if X is unbounded).

max(X) = x̂ ⇔ FX(x̂) = 1 ∧ (¬∃x ∈ R : FX(x) = 1 ∧ x < x̂) (53.52)

24 Throwing a dice is discussed as example for stochastic extensively in Section 53.5 on page 689.
Definition D53.28 (Range: Statistics). The range range(A) of the sample data A is the
difference of the maximum max(A) and the minimum min(A) elements of A and therefore
represents the span covered by the data.
range(A) = â − ǎ = max(A) − min(A) (53.53)

Definition D53.29 (Range: Stochastic). If a random variable X is limited in both
directions, it has a finite range range(X); otherwise, this range is infinite, too.
range(X) = x̂ − x̌ = max(X) − min(X) (53.54)

53.2.2 Expected Value and Arithmetic Mean


The expected value EX and the arithmetic mean a are basic measures for random variables and data samples
that help us to estimate the regions around which their values will likely be distributed.

Definition D53.30 (Expected Value). The expected value25 of a random variable X is


the sum of the probability of each possible outcome of the random experiment multiplied
by the outcome value.

The expected value is abbreviated by EX or µ. For discrete distributions, it can be computed
using Equation 53.55; for continuous ones, Equation 53.56 holds.

EX = Σ_{x=−∞}^{∞} x fX(x) (53.55)
EX = ∫_{−∞}^{∞} x fX(x) dx (53.56)

If the expected value EX of a random variable X is known, the following statements can
be derived for the expected values of related random variables as follows:
Y = a + X ⇒ EY = a + EX (53.57)
Z = bX ⇒ EZ = bEX (53.58)

Definition D53.31 (Sum). The sum sum(A) represents the sum of all elements in a set of data
samples A.

sum(A) = Σ_{i=0}^{n−1} A[i] (53.59)

Definition D53.32 (Arithmetic Mean). The arithmetic mean26 a is the sum of all elements
in the sample data A divided by the total number of values.

In the spirit of the limiting frequency method of von Mises [2822], it is an estimate a ≈ EX of the
expected value of the random variable X that produced the sample data A.

a = sum(A)/n = (1/n) Σ_{i=0}^{n−1} A[i] = Σ_{∀a∈A} a h(a, n) (53.60)

25 http://en.wikipedia.org/wiki/Expected_value [accessed 2007-07-03]


26 http://en.wikipedia.org/wiki/Arithmetic_mean [accessed 2007-07-03]
53.2.3 Variance and Standard Deviation

53.2.3.1 Variance

The variance27 [928] is a measure of statistical dispersion. As a stochastic entity, it illustrates
how close the outcomes of a random variable are to their expected value EX. In statistics,
it denotes the proximity of the elements of a data sample A to the arithmetic mean a of
the sample.

Definition D53.33 (Variance: Random Variable). The variance D²X ≡ var(X) ≡ σ²
of a random variable X is defined as

var(X) = D²X = E[(X − EX)²] = E[X²] − (EX)² (53.61)

The variance of a discrete random variable X can be computed using Equation 53.62; for
continuous distributions, Equation 53.63 holds.

D²X = Σ_{x=−∞}^{∞} fX(x) (x − EX)²
    = Σ_{x=−∞}^{∞} x² fX(x) − [Σ_{x=−∞}^{∞} x fX(x)]²
    = [Σ_{x=−∞}^{∞} x² fX(x)] − (EX)² (53.62)

D²X = ∫_{−∞}^{∞} (x − EX)² fX(x) dx
    = ∫_{−∞}^{∞} x² fX(x) dx − [∫_{−∞}^{∞} x fX(x) dx]²
    = [∫_{−∞}^{∞} x² fX(x) dx] − (EX)² (53.63)

If the variance D²X of a random variable X is known, we can derive the variances of some
related random variables as follows:
Y = a + X ⇒ D²Y = D²X (53.64)
Z = bX ⇒ D²Z = b² D²X (53.65)

Definition D53.34 (Sum of Squares). The function sumSqrs(A) represents the sum of
the squares of all elements in the data sample A in statistics.

sumSqrs(A) = Σ_{i=0}^{n−1} (A[i])² (53.66)

Definition D53.35 (Variance: Estimation in Statistics). The (unbiased) estimator28
s² of the variance of the random variable which produced the sample values A is computed
according to Equation 53.67. The variance is zero for all samples with (n = len(A)) ≤ 1.

s² = (1/(n − 1)) Σ_{i=0}^{n−1} (A[i] − a)² = (1/(n − 1)) (sumSqrs(A) − (sum(A))²/n) (53.67)

27 http://en.wikipedia.org/wiki/Variance [accessed 2007-07-03]


28 see Definition D53.58 on page 692
53.2.3.2 Standard Deviation etc.

Definition D53.36 (Standard Deviation). The standard deviation29 is the square root
of the variance. The standard deviation of a random variable X is abbreviated with DX
or σ; its statistical estimate is s.

DX = √(D²X) (53.68)
s = √(s²) (53.69)

The standard deviation is zero for all samples with n ≤ 1.

Definition D53.37 (Coefficient of Variation). The coefficient of variation30 cV of a
random variable X is the ratio of the standard deviation to the expected value of X. For
data samples, its estimate ĉV is defined as the ratio of the estimate of the standard
deviation to the arithmetic mean.

cV = DX/EX = σ/µ (53.70)
ĉV = s/a = (n/sum(A)) √((sumSqrs(A) − (sum(A))²/n) / (n − 1)) (53.71)
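The estimators of this section fit in a few lines of code. The following sketch (ours, not from the book) computes the arithmetic mean, both forms of the unbiased variance estimate s² from Equation 53.67, the standard deviation s, and the estimated coefficient of variation s/a:

```python
# Sample estimators: mean a, unbiased variance s^2 (both forms of Eq. 53.67),
# standard deviation s, and coefficient of variation s/a.
import math

A = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(A)

a = sum(A) / n                                      # arithmetic mean
s2_direct = sum((x - a) ** 2 for x in A) / (n - 1)  # definition form
s2_fast = (sum(x * x for x in A) - sum(A) ** 2 / n) / (n - 1)  # sumSqrs form
s = math.sqrt(s2_fast)                              # standard deviation
cv = s / a                                          # estimated coeff. of variation

print(a, s2_fast, round(cv, 4))
```

The "fast" form needs only the running sums sum(A) and sumSqrs(A), so it can be computed in a single pass over a data stream; both forms agree up to floating-point rounding.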

53.2.3.3 Covariance

Definition D53.38 (Covariance: Random Variables). The covariance31 cov(X, Y) of
two random variables X and Y is a measure for how much they are related. It exists if the
expected values EX² and EY² exist and is defined as

cov(X, Y) = E[(X − EX)(Y − EY)] (53.72)
          = E[XY] − EX ∗ EY (53.73)

If X and Y are statistically independent, then their covariance is zero, since

E[XY] = EX ∗ EY (53.75)

Furthermore, the following formulas hold for the covariance:

D²X = cov(X, X) (53.76)
D²[X + Y] = cov(X + Y, X + Y) = D²X + D²Y + 2cov(X, Y) (53.77)
D²[X − Y] = cov(X − Y, X − Y) = D²X + D²Y − 2cov(X, Y) (53.78)
cov(X, Y) = cov(Y, X) (53.79)
cov(aX, Y) = a cov(X, Y) (53.80)
cov(X + Y, Z) = cov(X, Z) + cov(Y, Z) (53.81)
cov(aX + b, cY + d) = a c cov(X, Y) (53.82)

29 http://en.wikipedia.org/wiki/Standard_deviation [accessed 2007-07-03]


30 http://en.wikipedia.org/wiki/Coefficient_of_variation [accessed 2007-07-03]
31 http://en.wikipedia.org/wiki/Covariance [accessed 2008-02-05]
Definition D53.39 (Covariance: Statistical Estimate). The covariance cov(A, B) of
two (paired) random data samples A and B with length n, elements A[i] and B[i], and means
a and b is defined as

cov(A, B) = (1/n) Σ_{i=0}^{n−1} (A[i] − a)(B[i] − b) (53.84)

Definition D53.40 (Covariance Matrix: Random Variables). The covariance ma-


trix Σ of n random variables Xi : i ∈ 1..n is an n × n matrix whose elements are defined as
Σ[i, j] = cov(Xi , Xj ) for all i, j ∈ 1..n.

Definition D53.41 (Covariance Matrix: Statistical Estimate). The covariance matrix


C of n data samples A[i] : i ∈ 1..n is an n × n matrix whose elements are defined as
C[i, j] = cov(A[i], A[j]) for all i, j ∈ 1..n.

For covariance matrices, Σ[i, j] = Σ[j, i] holds for all i, j ∈ 1..n, as well as Σ[i, i] = D²Xi for
all i ∈ 1..n (and C[i, i] = s² of A[i] for the estimated covariance matrix C).

53.2.4 Moments

Definition D53.42 (Moment). The kth moment32 µ′k(c) about a value c is defined for a
random distribution X as

µ′k(c) = E[(X − c)^k] (53.85)

It can be specified for discrete (Equation 53.86) and continuous (Equation 53.87) probability
distributions using Equation 53.55 and Equation 53.56 as follows.

µ′k(c) = Σ_{x=−∞}^{∞} fX(x) (x − c)^k (53.86)
µ′k(c) = ∫_{−∞}^{∞} fX(x) (x − c)^k dx (53.87)

Definition D53.43 (Statistical Moment). The kth statistical moment µ′k of a random
distribution is its kth moment about zero, i. e., the expected value of its values raised to the
kth power.

µ′k = µ′k(0) = E[X^k] (53.88)

Definition D53.44 (Central Moment). The kth moment about the mean (or central
moment)33 µk is the expected value of the difference between elements and their expected
value raised to the kth power.

µk = E[(X − EX)^k] (53.89)

Hence, the variance D2 X equals the second central moment µ2 .

Definition D53.45 (Standardized Moment). The kth standardized moment µσ,k is the
quotient of the kth central moment and the standard deviation raised to the kth power.

µσ,k = µk / σ^k (53.90)

32 http://en.wikipedia.org/wiki/Moment_%28mathematics%29 [accessed 2008-02-01]


33 http://en.wikipedia.org/wiki/Moment_about_the_mean [accessed 2007-07-03]
53.2.5 Skewness and Kurtosis

The two other most important moments of random distributions are the skewness γ1 and
the kurtosis γ2 with their estimates G1 and G2.

Definition D53.46 (Skewness). The skewness34 γ1 is the third standardized moment.

γ1 = µσ,3 = µ3 / σ³ (53.91)

The skewness is a measure of the asymmetry of a probability distribution. If γ1 > 0, the right
part of the distribution function is either longer or fatter (positive skew, right-skewed). If
γ1 < 0, the distribution's left part is longer or fatter. For sample data A, the skewness of the
underlying random variable is approximated with the estimator G1, where s is the estimated
standard deviation. The sample skewness is only defined for samples A with at least three
elements.

G1 = (n / ((n − 1)(n − 2))) Σ_{i=0}^{n−1} ((A[i] − a)/s)³ (53.92)

Definition D53.47 (Kurtosis). The excess kurtosis35 γ2 is a measure for the sharpness
of a distribution's peak.

γ2 = µσ,4 − 3 = µ4/σ⁴ − 3 (53.93)

A distribution with a high kurtosis has a sharper “peak” and fatter “tails”, while a distribution
with a low kurtosis has a more rounded peak with wider “shoulders”. The normal
distribution (see Section 53.4.2) has zero kurtosis.
For sample data A which represents only a subset of a greater amount of data, the sample
kurtosis can be approximated with the estimator G2, where s is the estimate of the sample's
standard deviation. The kurtosis is only defined for samples with at least four elements.

G2 = (n(n + 1) / ((n − 1)(n − 2)(n − 3))) Σ_{i=0}^{n−1} ((A[i] − a)/s)⁴ − 3(n − 1)² / ((n − 2)(n − 3)) (53.94)
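Both estimators translate directly into code. A sketch (ours, not from the book) of G1 and G2, with s the unbiased standard-deviation estimate from Section 53.2.3:

```python
# Sample skewness G1 (Eq. 53.92) and sample excess kurtosis G2 (Eq. 53.94).
import math

def g1_g2(A):
    n = len(A)
    assert n >= 4, "G1 needs at least 3 samples, G2 at least 4"
    a = sum(A) / n
    s = math.sqrt(sum((x - a) ** 2 for x in A) / (n - 1))
    z3 = sum(((x - a) / s) ** 3 for x in A)   # sum of standardized cubes
    z4 = sum(((x - a) / s) ** 4 for x in A)   # sum of standardized 4th powers
    G1 = n / ((n - 1) * (n - 2)) * z3
    G2 = n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * z4 \
         - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return G1, G2

G1, G2 = g1_g2([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(G1, G2)  # G1 > 0: the right tail (the 9) is longer
```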

53.2.6 Median, Quantiles, and Mode

Definition D53.48 (Median). The median med(X) is the value right in the middle of a
sample or distribution, dividing it into two equal halves.

P(X ≤ med(X)) ≥ 1/2 ∧ P(X ≥ med(X)) ≥ 1/2 ∧ P(X ≤ med(X)) ≤ P(X ≥ med(X)) (53.95)

The probability of drawing an element less than med(X) is equal to the probability of drawing
an element larger than med(X). We can determine the median med(X) of continuous
34 http://en.wikipedia.org/wiki/Skewness [accessed 2007-07-03]
35 http://en.wikipedia.org/wiki/Kurtosis [accessed 2008-02-01]
and discrete distributions by solving Equation 53.96 and Equation 53.97, respectively.

1/2 = ∫_{−∞}^{med(X)} fX(x) dx (53.96)
Σ_{x=−∞}^{med(X)−1} fX(x) ≤ 1/2 ≤ Σ_{x=med(X)}^{∞} fX(x) (53.97)

If a sample A has an odd element count, the median med(A) is the element in the middle;
otherwise (in a sample with an even element count there exists no single “middle” element), it is the
arithmetic mean of the two middle elements.

Example E53.3 (Median vs. Mean).
The median represents the dataset in a robust manner. If we have, for example, the
dataset A = (1, 1, 1, 1, 1, 2, 2, 2, 500 000), the arithmetic mean, biased by the large element
500 000, would be very high (≈ 55 556.8). The median, however, would be 1 and thus represents
the sample better. The median of a sample can be computed as:

As ≡ sortAsc(A, >) (53.99)
med(A) = As[(n − 1)/2] if n = len(A) is odd, (As[n/2] + As[n/2 − 1])/2 otherwise (53.100)
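The case distinction of Equation 53.100 is easy to implement. A sketch (ours, not from the book), applied to the dataset of Example E53.3:

```python
# The median rule of Eq. 53.100: middle element of the sorted sample for odd n,
# mean of the two middle elements for even n.
def median(A):
    As = sorted(A)
    n = len(As)
    if n % 2 == 1:
        return As[(n - 1) // 2]
    return (As[n // 2] + As[n // 2 - 1]) / 2

A = [1, 1, 1, 1, 1, 2, 2, 2, 500_000]  # the dataset from Example E53.3
print(median(A), sum(A) / len(A))      # median 1 vs. a mean of about 55557
```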

Quantiles36 are points taken at regular intervals from a sorted dataset (or a cumulative
distribution function).

Definition D53.49 (Quantile). The q-quantiles divide a distribution function F or data
sample A into q parts Ti of equal probability.

∀x ∈ R, i ∈ [0, q − 1] ⇒ P(x ∈ Ti) ≥ 1/q (53.101)

They can be regarded as the generalized median; or, vice versa, the median is the 2-quantile.
A sorted data sample is divided into q subsets of equal length by the q-quantiles. The cumulative
distribution function of a random variable X is divided by the q-quantiles into q
subsets of equal area. The quantiles are the boundaries between the subsets/areas. Therefore,
the kth q-quantile is the value ζ so that the probability that the random variable (or
an element of the data set) will take on a value less than ζ is at most k/q, and the probability
that it will take on a value greater than or equal to ζ is at most (q − k)/q. There exist q − 1
q-quantiles (k spans from 1 to q − 1). The kth q-quantile quantile^k_q(A) of a dataset A can
be computed as:

As ≡ sortAsc(A, >) (53.102)
t = (k ∗ n)/q (53.103)
quantile^k_q(A) = (As[t] + As[t − 1])/2 if t is an integer, As[⌊t⌋] otherwise (53.104)

For some special values of q, the quantiles have been given special names, which are listed in
Table 53.1.
36 http://en.wikipedia.org/wiki/Quantiles [accessed 2007-07-03]
q name
100 percentiles
10 deciles
9 noniles
5 quintiles
4 quartiles
2 median

Table 53.1: Special Quantiles

Definition D53.50 (Interquartile Range). The interquartile range37 is the range between
the first and the third quartile and is defined as quantile^3_4(X) − quantile^1_4(X).

Definition D53.51 (Mode). The mode38 is the value that most often occurs in a data
sample or is most frequently assumed by a random variable.

There exist unimodal distributions/samples that have one mode value and multimodal
distributions/samples with multiple modes. In [369, 2821], you can find further information on
the relation between the mode, the mean, and the skewness.

53.2.7 Entropy

The information entropy39 H(X) defined by Shannon [2465] is a measure of uncertainty
for the discrete probability mass function f of a random variable X in probability theory (see
Equation 53.105). For data samples A, it is estimated based on the relative frequencies h(a, n)
of the values a amongst the n samples in A (see Equation 53.106).

Definition D53.52 (Information Entropy). The information entropy is a measure for
the information content of a system and defined as follows:

H(X) = Σ_{x=−∞}^{∞} fX(x) log₂(1/fX(x)) = −Σ_{x=−∞}^{∞} fX(x) log₂ fX(x) (53.105)
H(A) = −Σ_{∀a∈A} h(a, n) log₂ h(a, n) (53.106)
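The sample entropy of Equation 53.106 can be estimated from the relative frequencies of the distinct values in A. A sketch (ours, not from the book):

```python
# Estimated information entropy of a data sample (Eq. 53.106), using the
# relative frequencies h(a, n) = count(a)/n of the distinct values in A.
import math
from collections import Counter

def entropy(A):
    n = len(A)
    # sum of h * log2(1/h) over the distinct values (0*log term never occurs,
    # since Counter only yields values that actually appear)
    return sum((c / n) * math.log2(n / c) for c in Counter(A).values())

print(entropy([0, 1, 0, 1]))  # 1.0 bit: a fair coin
print(entropy([4, 4, 4, 4]))  # 0.0: no uncertainty at all
```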

Definition D53.53 (Differential Entropy). The differential (also called continuous) entropy
h(X) is a generalization of the information entropy to the continuous probability density
function f of a random variable X [1699].

h(X) = −∫_{−∞}^{∞} fX(x) ln fX(x) dx (53.107)

37 http://en.wikipedia.org/wiki/Inter-quartile_range [accessed 2007-07-03]


38 http://en.wikipedia.org/wiki/Mode_%28statistics%29 [accessed 2007-07-03]
39 http://en.wikipedia.org/wiki/Information_entropy [accessed 2007-07-03]
53.2.8 The Law of Large Numbers

The law of large numbers (LLN) combines statistics and probability by showing that if an
event e with the probability P (e) = p is observed in n independent repetitions of a random
experiment, its relative frequency h(e, n) (see Definition D53.10) converges to its probability
p if n becomes larger.
In the following, assume that the A is an infinite sequence of samples from equally
distributed and pairwise independent random variables Xi with the (same) expected value
EX. The weak law of large numbers states that the mean a of the sequence A converges to
a value in (EX − ε, EX + ε) for each positive real number ε > 0, ε ∈ R+ .

lim P (|a − EX| < ε) = 1 (53.108)


n→∞

In other words, the weak law of large numbers says that the sample average will converge
to the expected value of the random experiment if the experiment is repeated many times.
According to the strong law of large numbers, the mean a of the sequence A even converges
to the expected value EX of the underlying distribution for infinitely large n:

P (lim_{n→∞} a = EX) = 1   (53.109)

The law of large numbers implies that the accumulated results of each random experiment
will approximate the underlying distribution function if repeated infinitely (under the con-
dition that there exists an invariable underlying distribution function).
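The weak law can be observed directly in simulation. A small sketch (our own code, not from the book) estimates the relative frequency h(e, n) of an event with p = 0.5 for growing n:

```python
import random

def relative_frequency(p, n, rng):
    """h(e, n): fraction of n independent trials in which an
    event of probability p occurred."""
    return sum(1 for _ in range(n) if rng.random() < p) / n

rng = random.Random(1)
for n in (10, 1000, 100000):
    # |h(e, n) - p| shrinks (in probability) as n grows
    print(n, relative_frequency(0.5, n, rng))
```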
Many random processes follow well-known probability distributions in their characteristics,
some of which we will discuss here. Parts of the information provided in this and the
following section have been obtained from Wikipedia – The Free Encyclopedia [2938].

53.3 Some Discrete Distributions

In this section, we discuss some of the common discrete distributions. Discrete probability
distributions assign probabilities to the elements of a finite (or, at most, countably infinite)
set of discrete events/outcomes of a random experiment.

53.3.1 Discrete Uniform Distribution


Parameter      Definition
parameters     a, b ∈ Z, a ≤ b                                                   53.110
|Ω|            |Ω| = r = range = b − a + 1                                       53.111
PMF            P (X = x) = fX(x) = 1/r if a ≤ x ≤ b, x ∈ Z; 0 otherwise          53.112
CDF            P (X ≤ x) = FX(x) = 0 if x < a; ⌊x − a + 1⌋/r if a ≤ x ≤ b;
               1 otherwise                                                       53.113
mean           EX = (a + b)/2                                                    53.114
median         med = (a + b)/2                                                   53.115
mode           mode = ∅                                                          53.116
variance       D²X = (r² − 1)/12                                                 53.117
skewness       γ1 = 0                                                            53.118
kurtosis       γ2 = −6(r² + 1)/(5(r² − 1))                                       53.119
entropy        H(X) = ln r                                                       53.120
mgf            MX(t) = (e^(at) − e^((b+1)t))/(r(1 − e^t))                        53.121
char. func.    ϕX(t) = (e^(iat) − e^(i(b+1)t))/(r(1 − e^(it)))                   53.122

Table 53.2: Parameters of the discrete uniform distribution.

The uniform distribution exists in a discrete40 as well as in a continuous form. In this
section we discuss the discrete form, whereas the continuous form is elaborated on
in Section 53.4.1 on page 676.
All possible outcomes ω ∈ Ω of a random experiment which obeys the uniform distribu-
tion have exactly the same probability. In the discrete uniform distribution, Ω is usually a
finite set.

Example E53.4 (Uniform Distribution: Ideal Dice and Coins).

The best example for this distribution is throwing an ideal die. This experiment has six
possible outcomes ωi, each of which has the same probability P (ωi) = 1/6. Throwing ideal
coins and drawing one element out of a set of n possible elements are other examples where
a discrete uniform distribution can be assumed.

Table 53.2 contains the characteristics of the discrete uniform distribution. In Fig. 53.1.a
you can find some example uniform probability mass functions, and in Fig. 53.1.b we have
sketched the corresponding cumulative distribution functions.
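The closed forms in Table 53.2 can be cross-checked numerically. The sketch below (our own helper names) builds the PMF of Equation 53.112 and recomputes the mean and variance by direct summation:

```python
def du_pmf(x, a, b):
    """Equation 53.112: probability 1/r on the integers a..b, 0 elsewhere."""
    r = b - a + 1
    return 1.0 / r if a <= x <= b else 0.0

a, b = 1, 6              # an ideal die
r = b - a + 1
mean = sum(x * du_pmf(x, a, b) for x in range(a, b + 1))
var = sum((x - mean) ** 2 * du_pmf(x, a, b) for x in range(a, b + 1))
print(mean, var)         # (a+b)/2 = 3.5 and (r^2-1)/12 = 35/12
```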

40 http://en.wikipedia.org/wiki/Uniform_distribution_%28discrete%29 [accessed 2007-07-03]



Fig. 53.1.a: The PMFs of some discrete uniform distributions (n1 = b1 − a1 = 10 and n2 = b2 − a2 = 5).
Fig. 53.1.b: The CDFs of the same discrete uniform distributions.

Figure 53.1: Examples for the discrete uniform distribution.


53.3.2 Poisson Distribution πλ
The Poisson distribution41 πλ [47] corresponds to the classic reference model of the telephone
switchboard. It describes a process where the number of events that occur (independently of
each other) in a certain time interval depends only on the duration of the interval and not
on its position (prehistory). Events do not have any aftermath and thus, there is no mutual
influence of non-overlapping time intervals (homogeneity). Furthermore, only the time at
which an event occurs is considered and not the duration of the event.
Example E53.5 (Telephone Switchboard: Poisson).
The most common example for the Poisson distribution is the telephone switchboard. As
already mentioned, we are only interested in the time at which an event occurs, i. e., a call
comes in, not in the length of the call. In this model, no events occur in infinitely short
time intervals.

The features of the Poisson distribution are listed in Table 53.3 and examples for its PMF
and CDF are illustrated in Fig. 53.2.a and Fig. 53.2.b.
Parameter      Definition
parameters     λ = µt > 0                                                        53.123
PMF            P (X = x) = fX(x) = ((µt)^x / x!) e^(−µt) = (λ^x / x!) e^(−λ)     53.124
CDF            P (X ≤ x) = FX(x) = Γ(⌊x + 1⌋, λ)/⌊x⌋! = ∑_{i=0}^{⌊x⌋} e^(−λ) λ^i / i!   53.125
mean           EX = µt = λ                                                       53.126
median         med ≈ ⌊λ + 1/3 − 1/(5λ)⌋                                          53.127
mode           mode = ⌊λ⌋                                                        53.128
variance       D²X = µt = λ                                                      53.129
skewness       γ1 = λ^(−1/2)                                                     53.130
kurtosis       γ2 = 1/λ                                                          53.131
entropy        H(X) = λ(1 − ln λ) + e^(−λ) ∑_{k=0}^{∞} λ^k ln(k!) / k!           53.132
mgf            MX(t) = e^(λ(e^t − 1))                                            53.133
char. func.    ϕX(t) = e^(λ(e^(it) − 1))                                         53.134

Table 53.3: Parameters of the Poisson distribution.

53.3.2.1 Poisson Process

The Poisson process43 [2539] is a process that obeys the Poisson distribution – just like the
example of the telephone switchboard mentioned before. Here, λ is expressed as the product
of the intensity µ and the time t. µ normally describes a frequency, for example µ = 1/min.
Both the expected value and the variance of the Poisson process are λ = µt. Equation 53.135
defines the probability that k events occur in a Poisson process in a time interval of length t.

P (Xt = k) = ((µt)^k / k!) e^(−µt) = (λ^k / k!) e^(−λ)   (53.135)
The probability that in a time interval [t, t + ∆t]
41 http://en.wikipedia.org/wiki/Poisson_distribution [accessed 2007-07-03]
42 The Γ in Equation 53.125 denotes the (upper) incomplete gamma function. More information on the
gamma function Γ can be found in ?? on page ??.
43 http://en.wikipedia.org/wiki/Poisson_process [accessed 2007-07-03]

Fig. 53.2.a: The PMFs of some Poisson distributions (λ ∈ {1, 3, 6, 10, 20}).
Fig. 53.2.b: The CDFs of the same Poisson distributions.

Figure 53.2: Examples for the Poisson distribution.


1. no events occur is 1 − λ∆t + o(∆t),

2. exactly one event occurs is λ∆t + o(∆t), and

3. multiple events occur is o(∆t).

Here we use an infinitesimal version of the small-o notation.44 The statement f ∈ o(ξ) ⇒
|f (x)| ≪ |ξ (x)| is normally only valid for x → ∞. In the infinitesimal variant, it holds for
x → 0. Thus, we can state that o(∆t) is much smaller than ∆t. In principle, the above
statements imply that in an infinitely small time span either no event or exactly one event
occurs, i. e., events do not arrive simultaneously:

lim_{t→0} P (Xt > 1) = 0   (53.136)

53.3.2.2 The Relation between the Poisson Process and the Exponential
Distribution

It is important to know that the (time) distance between two events of the Poisson process is
exponentially distributed (see Section 53.4.3 on page 682). If the expected number of events
to arrive per time unit in a Poisson process is EXpois, then the expected value of the time
between two events is 1/EXpois. Since this is the expected value EXexp = 1/EXpois of the
exponential distribution, its λexp-value is λexp = 1/EXexp = 1/(1/EXpois) = EXpois. Therefore,
the λexp-value of the exponential distribution equals the λpois-value of the Poisson distribution:
λexp = λpois = EXpois. In other words, the time interval between (neighboring) events of the
Poisson process is exponentially distributed with the same λ value as the Poisson process,
as illustrated in Equation 53.137.

Xi ∼ πλ ⇔ (t(Xi+1) − t(Xi)) ∼ exp(λ)  ∀i ∈ N1   (53.137)
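This connection suggests a direct way to simulate a Poisson process: draw exponentially distributed gaps and count arrivals per unit interval. A sketch (our own code, using Python's standard `random.expovariate`):

```python
import random

def poisson_counts(lam, intervals, rng):
    """Simulate a Poisson process of rate lam by summing exponentially
    distributed inter-arrival times (Equation 53.137) and counting the
    events that fall into each unit-length interval."""
    counts = [0] * intervals
    t = rng.expovariate(lam)
    while t < intervals:
        counts[int(t)] += 1
        t += rng.expovariate(lam)  # gaps are exp(lam)-distributed
    return counts

rng = random.Random(42)
counts = poisson_counts(3.0, 20000, rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(mean, 2), round(var, 2))   # both should be close to lambda = 3
```

The empirical mean and variance of the per-interval counts both approach λ, as Table 53.3 predicts.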

44 See Definition D12.1 on page 144 and ?? on page ?? for a detailed elaboration on the small-o notation.
53.3.3 Binomial Distribution B (n, p)

The binomial distribution45 B (n, p) is the probability distribution that describes the possible
numbers of successes in n independent experiments, each with success probability p. Such
experiments are called Bernoulli experiments or Bernoulli trials. For n = 1, the binomial
distribution reduces to a Bernoulli distribution46.
Table 53.4 points out some of the properties of the binomial distribution. A few
examples for PMFs and CDFs of different binomial distributions are given in Fig. 53.3.a
and Fig. 53.3.b.

Parameter      Definition
parameters     n ∈ N0, 0 ≤ p ≤ 1, p ∈ R                                          53.138
PMF            P (X = x) = fX(x) = (n choose x) p^x (1 − p)^(n−x)                53.139
CDF            P (X ≤ x) = FX(x) = ∑_{i=0}^{⌊x⌋} fX(i) = I_{1−p}(n − ⌊x⌋, 1 + ⌊x⌋)   53.140
mean           EX = np                                                           53.141
median         med is one of {⌊np⌋ − 1, ⌊np⌋, ⌊np⌋ + 1}                          53.142
mode           mode = ⌊(n + 1)p⌋                                                 53.143
variance       D²X = np(1 − p)                                                   53.144
skewness       γ1 = (1 − 2p)/√(np(1 − p))                                        53.145
kurtosis       γ2 = (1 − 6p(1 − p))/(np(1 − p))                                  53.146
entropy        H(X) = (1/2) ln (2πn e p (1 − p)) + O(1/n)                        53.147
mgf            MX(t) = (1 − p + pe^t)^n                                          53.148
char. func.    ϕX(t) = (1 − p + pe^(it))^n                                       53.149

Table 53.4: Parameters of the binomial distribution.

For n → ∞, the binomial distribution approaches a normal distribution. For large n,
B(n, p) can therefore often be approximated with the normal distribution (see Section 53.4.2)
N (np, np(1 − p)). Whether this approximation is good or not can be determined with rules
of thumb, some of which are:

np > 5 ∧ n(1 − p) > 5   (53.150)

µ ± 3σ ≈ np ± 3√(np(1 − p)) ∈ [0, n]   (53.151)

In case these rules hold, we still need to match the continuous distribution to the discrete
one. In order to do so, we add 0.5 to the x values (the so-called continuity correction),
i. e., FX,bin(x) ≈ FX,normal(x + 0.5).
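The quality of this approximation is easy to check numerically. A sketch (our own function names; `math.comb` and `math.erf` are Python standard library) compares the exact CDF of Equation 53.140 with the continuity-corrected normal CDF:

```python
import math

def binomial_cdf(x, n, p):
    """Exact P(X <= x) by summing the PMF of Equation 53.139."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(int(x) + 1))

def normal_approx_cdf(x, n, p):
    """N(np, np(1-p)) approximation with the +0.5 continuity correction."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (x + 0.5 - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 40, 0.6            # rules of thumb hold: np = 24 > 5, n(1-p) = 16 > 5
exact = binomial_cdf(25, n, p)
approx = normal_approx_cdf(25, n, p)
print(round(exact, 3), round(approx, 3))
```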

45 http://en.wikipedia.org/wiki/Binomial_distribution [accessed 2007-10-01]


46 http://en.wikipedia.org/wiki/Bernoulli_distribution [accessed 2007-10-01]
47 I_{1−p} in Equation 53.140 denotes the regularized incomplete beta function.

Fig. 53.3.a: The PMFs of some binomial distributions ((n, p) ∈ {(20, 0.3), (20, 0.6), (30, 0.6), (40, 0.6)}).
Fig. 53.3.b: The CDFs of the same binomial distributions.

Figure 53.3: Examples for the binomial distribution.


53.4 Some Continuous Distributions
In this section we will introduce some common continuous distributions. Unlike discrete
distributions, continuous distributions have an uncountably infinite set of possible
outcomes of random experiments. Thus, the PDF does not assign probabilities to single
events; only the CDF makes statements about the probability of a subset of possible
outcomes of a random experiment.

53.4.1 Continuous Uniform Distribution

After discussing the discrete uniform distribution in Section 53.3.1, we now elaborate on
its continuous form48. In a uniform distribution, all possible outcomes in a range [a, b], b > a,
have exactly the same probability. The characteristics of this distribution can be found in
Table 53.5. Examples of its probability density function are illustrated in Fig. 53.4.a, whereas
the corresponding cumulative distribution functions are outlined in Fig. 53.4.b.

Parameter      Definition
parameters     a, b ∈ R, a < b                                                   53.152
PDF            fX(x) = 1/(b − a) if x ∈ [a, b]; 0 otherwise                      53.153
CDF            P (X ≤ x) = FX(x) = 0 if x < a; (x − a)/(b − a) if x ∈ [a, b];
               1 otherwise                                                       53.154
mean           EX = (a + b)/2                                                    53.155
median         med = (a + b)/2                                                   53.156
mode           mode = ∅                                                          53.157
variance       D²X = (b − a)²/12                                                 53.158
skewness       γ1 = 0                                                            53.159
kurtosis       γ2 = −6/5                                                         53.160
entropy        h(X) = ln (b − a)                                                 53.161
mgf            MX(t) = (e^(tb) − e^(ta))/(t(b − a))                              53.162
char. func.    ϕX(t) = (e^(itb) − e^(ita))/(it(b − a))                           53.163

Table 53.5: Parameters of the continuous uniform distribution.

48 http://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29 [accessed 2007-07-03]



Fig. 53.4.a: The PDFs of some continuous uniform distributions (a = 5, b = 15 and a = 20, b = 25).
Fig. 53.4.b: The CDFs of the same continuous uniform distributions.

Figure 53.4: Some examples for the continuous uniform distribution.



53.4.2 Normal Distribution N (µ, σ²)

Many phenomena in nature, like the size of chicken eggs, noise, or errors in measurement,
can be considered as outcomes of random experiments with properties that can be
approximated by the normal distribution49 N (µ, σ²) [3060]. Its probability density
function, shown for some example values in Fig. 53.5.a, is symmetric about the expected
value µ and becomes flatter with rising standard deviation σ.
The cumulative distribution function is outlined for the same example values in Fig. 53.5.b.
Other characteristics of the normal distribution can be found in Table 53.6.

Parameter      Definition
parameters     µ ∈ R, σ ∈ R+                                                     53.164
PDF            fX(x) = (1/(σ√(2π))) e^(−(x−µ)²/(2σ²))                            53.165
CDF            P (X ≤ x) = FX(x) = (1/(σ√(2π))) ∫_{−∞}^{x} e^(−(z−µ)²/(2σ²)) dz  53.166
mean           EX = µ                                                            53.167
median         med = µ                                                           53.168
mode           mode = µ                                                          53.169
variance       D²X = σ²                                                          53.170
skewness       γ1 = 0                                                            53.171
kurtosis       γ2 = 0                                                            53.172
entropy        h(X) = ln (σ√(2πe))                                               53.173
mgf            MX(t) = e^(µt + σ²t²/2)                                           53.174
char. func.    ϕX(t) = e^(iµt − σ²t²/2)                                          53.175

Table 53.6: Parameters of the normal distribution.

53.4.2.1 Standard Normal Distribution

For the sake of simplicity, the standard normal distribution N (0, 1) with the CDF Φ(x) is
defined with µ = 0 and σ = 1. Values of this function are listed in tables. You can compute
the CDF of any normal distribution using the one of the standard normal distribution by
applying Equation 53.177.

Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^(−z²/2) dz   (53.176)

P (X ≤ x) = Φ((x − µ)/σ)   (53.177)

Some values of Φ(x) are listed in Table 53.7. For the sake of saving space by using two
dimensions, we compose the values of x as a sum of a row and column value. If you want to
look up Φ(2.13) for example, you would go to the row which starts with 2.1 and the column
of 0.03, so you’d find Φ(2.13) ≈ 0.9834.
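Instead of a table lookup, Φ can also be computed directly from the error function found in most standard libraries. A sketch (our own function names, Python's `math.erf`):

```python
import math

def Phi(x):
    """Standard normal CDF, Equation 53.176: Phi(x) = (1 + erf(x/sqrt(2)))/2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    """P(X <= x) for N(mu, sigma^2) by standardizing as in Equation 53.177."""
    return Phi((x - mu) / sigma)

print(round(Phi(2.13), 4))   # matches the table value 0.9834
```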

49 http://en.wikipedia.org/wiki/Normal_distribution [accessed 2007-07-03]



Fig. 53.5.a: The PDFs of some normal distributions (µ ∈ {0, 3, 6}, σ ∈ {1, 5}; the standard normal distribution has µ = 0, σ = 1).
Fig. 53.5.b: The CDFs of the same normal distributions.

Figure 53.5: Examples for the normal distribution.


x 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990

Table 53.7: Some values of the standardized normal distribution.

53.4.2.2 The Probit Function

The inverse of the cumulative distribution function of the standard normal distribution is
called the “probit” function. It is also often denoted as z-quantile of the standard normal
distribution.

z(y) ≡ probit(y) ≡ Φ−1 (y) (53.178)


y = Φ(x) ⇒ Φ−1 (y) = z(y) = x (53.179)

The values of the quantiles of the standard normal distribution can also be looked up in
Table 53.7 by simply reversing the previously discussed process. If we want to find the value
z(0.922), we locate the closest match in the table body. In Table 53.7, we find 0.9222, which
leads us to x = 1.4 + 0.02. Hence, z(0.922) ≈ 1.42.
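Since Φ is strictly increasing, its inverse can also be obtained by simple bisection instead of a table lookup. The following sketch (our own code) reproduces the example above:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def probit(y, tol=1e-10):
    """z(y) = Phi^-1(y), Equation 53.178, found by bisection on [-10, 10]."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(probit(0.922), 2))   # ~1.42, as read from Table 53.7
```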

53.4.2.3 Multivariate Normal Distribution

The probability density function (PDF) of the multivariate normal distribution50 [2345, 2523,
2664] is given in Equation 53.180 and Equation 53.181 for the general case (where Σ is
the covariance matrix) and in Equation 53.182 for the uncorrelated form. If the distributions,
additionally to being uncorrelated, also have the same parameters σ and µ, the probability
density function of the multivariate normal distribution can be expressed as it is done in
Equation 53.183.

fX(x) = (√|Σ^(−1)| / (2π)^(n/2)) e^(−(1/2)(x−µ)^T Σ^(−1) (x−µ))   (53.180)
      = (1 / ((2π)^(n/2) |Σ|^(1/2))) e^(−(1/2)(x−µ)^T Σ^(−1) (x−µ))   (53.181)

fX(x) = ∏_{i=1}^{n} (1/(√(2π) σi)) e^(−(xi−µi)²/(2σi²))   (53.182)

fX(x) = ∏_{i=1}^{n} (1/(√(2π) σ)) e^(−(xi−µ)²/(2σ²))
      = (1/(2πσ²))^(n/2) e^(−∑_{i=1}^{n}(xi−µ)²/(2σ²))   (53.183)

50 http://en.wikipedia.org/wiki/Multivariate_normal_distribution [accessed 2007-07-03]

53.4.2.4 Central Limit Theorem

The central limit theorem51 (CLT) states that the sum Sn = ∑_{i=1}^{n} Xi of n independent,
identically distributed random variables Xi with finite expected values E [Xi] and non-zero
variances D² [Xi] > 0 approaches a normal distribution for n → +∞ [925, 1481, 2704].
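The theorem can be illustrated with a short simulation (our own code): sums of twelve Uniform(0, 1) variables, standardized with EX = 1/2 and D²X = 1/12, already behave almost like N(0, 1) samples:

```python
import math
import random

def standardized_sum(n, rng):
    """(S_n - n*EX) / sqrt(n * D2X) for S_n a sum of n Uniform(0,1) variates."""
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2.0) / math.sqrt(n / 12.0)

rng = random.Random(7)
samples = [standardized_sum(12, rng) for _ in range(20000)]
frac = sum(1 for z in samples if z <= 1.0) / len(samples)
print(round(frac, 3))    # should be close to Phi(1.0) ~ 0.8413
```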

51 http://en.wikipedia.org/wiki/Central_limit_theorem [accessed 2008-08-19]


53.4.3 Exponential Distribution exp(λ)

The exponential distribution52 exp(λ) [781] is often used if the lifetimes of apparatuses,
the half-life periods of radioactive elements, or the time between two events in the Poisson
process (see Section 53.3.2.2 on page 673) have to be approximated. Its PDF is sketched
in Fig. 53.6.a for some example values of λ; the corresponding CDFs are illustrated in
Fig. 53.6.b. The most important characteristics of the exponential distribution can be
obtained from Table 53.8.

Parameter      Definition
parameters     λ ∈ R+                                                            53.184
PDF            fX(x) = 0 if x ≤ 0; λe^(−λx) otherwise                            53.185
CDF            P (X ≤ x) = FX(x) = 0 if x ≤ 0; 1 − e^(−λx) otherwise             53.186
mean           EX = 1/λ                                                          53.187
median         med = (ln 2)/λ                                                    53.188
mode           mode = 0                                                          53.189
variance       D²X = 1/λ²                                                        53.190
skewness       γ1 = 2                                                            53.191
kurtosis       γ2 = 6                                                            53.192
entropy        h(X) = 1 − ln λ                                                   53.193
mgf            MX(t) = (1 − t/λ)^(−1)                                            53.194
char. func.    ϕX(t) = (1 − it/λ)^(−1)                                           53.195

Table 53.8: Parameters of the exponential distribution.
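Because the CDF in Equation 53.186 inverts in closed form, exponentially distributed numbers can be drawn from uniform ones via x = −ln(1 − u)/λ. A sketch (our own code) checks the mean against Table 53.8:

```python
import math
import random

def exp_sample(lam, rng):
    """Inverse-transform sampling: solve u = 1 - e^(-lam*x) for x."""
    return -math.log(1.0 - rng.random()) / lam

rng = random.Random(3)
lam = 2.0
xs = [exp_sample(lam, rng) for _ in range(100000)]
mean = sum(xs) / len(xs)
print(round(mean, 2))    # EX = 1/lam = 0.5
```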

52 http://en.wikipedia.org/wiki/Exponential_distribution [accessed 2007-07-03]



Fig. 53.6.a: The PDFs of some exponential distributions (λ ∈ {0.3, 0.5, 1, 2, 3}).
Fig. 53.6.b: The CDFs of the same exponential distributions.

Figure 53.6: Some examples for the exponential distribution.


53.4.4 Chi-square Distribution

The chi-square (or χ²) distribution53 is a continuous probability distribution on the set of
positive real numbers. It is a so-called sample distribution which is used for the estimation
of parameters like the variance of other distributions. It also describes the sum of the
squares of independent, standard-normally distributed random variables. Its sole parameter,
n, denotes the degrees of freedom.

In Table 53.9, the characteristic parameters of the χ² distribution are outlined. A few
examples for the PDF and CDF of the χ² distribution are illustrated in Fig. 53.7.a and
Fig. 53.7.b.

Table 53.10 provides some selected values of the χ² distribution. The headline of the
table contains results of the cumulative distribution function FX(x) of a χ² distribution
with n degrees of freedom (values in the first column). The cells denote the x values
that belong to these (n, FX(x)) combinations.

Parameter      Definition
parameters     n ∈ R+, n > 0                                                     53.196
PDF            fX(x) = 0 if x ≤ 0; (2^(−n/2)/Γ(n/2)) x^(n/2 − 1) e^(−x/2) otherwise   53.197
CDF            P (X ≤ x) = FX(x) = γ(n/2, x/2)/Γ(n/2) = Pγ(n/2, x/2)             53.198
mean           EX = n                                                            53.199
median         med ≈ n − 2/3                                                     53.200
mode           mode = n − 2 if n ≥ 2                                             53.201
variance       D²X = 2n                                                          53.202
skewness       γ1 = √(8/n)                                                       53.203
kurtosis       γ2 = 12/n                                                         53.204
entropy        h(X) = n/2 + ln (2Γ(n/2)) + (1 − n/2)ψ(n/2)                       53.205
mgf            MX(t) = (1 − 2t)^(−n/2) for 2t < 1                                53.206
char. func.    ϕX(t) = (1 − 2it)^(−n/2)                                          53.207

Table 53.9: Parameters of the χ² distribution.
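The definition as a sum of squares of standard-normal variates gives a direct sanity check on Table 53.9. A simulation sketch (our own code):

```python
import random

def chi_square_sample(n, rng):
    """Sum of the squares of n independent N(0,1) variates."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n))

rng = random.Random(5)
n = 4
xs = [chi_square_sample(n, rng) for _ in range(50000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 1), round(var, 1))   # EX = n, D^2X = 2n
```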

53 http://en.wikipedia.org/wiki/Chi-square_distribution [accessed 2007-09-30]


54 γ(n, z) in Equation 53.198 is the lower incomplete gamma function and Pγ(n, z) is the regularized
gamma function.

Fig. 53.7.a: The PDFs of some χ² distributions (n ∈ {1, 2, 4, 7, 10}).
Fig. 53.7.b: The CDFs of the same χ² distributions.

Figure 53.7: Examples for the χ² distribution.


n 0.995 .99 .975 .95 .9 .1 .05 .025 .01 .005
1 – – 0.001 0.004 0.016 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 4.605 5.991 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.345 12.838
4 0.207 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.833 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.013 18.475 20.278
8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.042 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.034 8.897 10.283 11.591 13.240 29.615 32.671 35.479 38.932 41.401
22 8.643 9.542 10.982 12.338 14.041 30.813 33.924 36.781 40.289 42.796
23 9.260 10.196 11.689 13.091 14.848 32.007 35.172 38.076 41.638 44.181
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.559
25 10.520 11.524 13.120 14.611 16.473 34.382 37.652 40.646 44.314 46.928
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.808 12.879 14.573 16.151 18.114 36.741 40.113 43.195 46.963 49.645
28 12.461 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.993
29 13.121 14.256 16.047 17.708 19.768 39.087 42.557 45.722 49.588 52.336
30 13.787 14.953 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
40 20.707 22.164 24.433 26.509 29.051 51.805 55.758 59.342 63.691 66.766
50 27.991 29.707 32.357 34.764 37.689 63.167 67.505 71.420 76.154 79.490
60 35.534 37.485 40.482 43.188 46.459 74.397 79.082 83.298 88.379 91.952
70 43.275 45.442 48.758 51.739 55.329 85.527 90.531 95.023 100.425 104.215
80 51.172 53.540 57.153 60.391 64.278 96.578 101.879 106.629 112.329 116.321
90 59.196 61.754 65.647 69.126 73.291 107.565 113.145 118.136 124.116 128.299
100 67.328 70.065 74.222 77.929 82.358 118.498 124.342 129.561 135.807 140.169

Table 53.10: Some values of the χ2 distribution.


53.4.5 Student’s t-Distribution

The Student’s t-distribution55 is based on the insight that the mean of a normally dis-
tributed feature of a sample is no longer normally distributed if the variance is unknown
and has to be estimated from the data samples [929, 1112, 1113]. It has been designed by
Gosset [1112], who published it under the pseudonym Student.
The parameter n of the distribution denotes the degrees of freedom of the distribution.
If n approaches infinity, the t-distribution approaches the standard normal distribution.
The characteristic properties of Student’s t-distribution are outlined in Table 53.11
and examples for its PDF and CDF are illustrated in Fig. 53.8.a and Fig. 53.8.b.
Table 53.12 provides some selected values for the quantiles t1−α,n of the t-distribution
(one-sided confidence intervals, see Section 53.6.3 on page 696). The headline of the table
contains results of the cumulative distribution function FX (x) of a Student’s t-distribution
with n degrees of freedom (values in the first column). The cells now denote the x values
that belong to these (n, FX (x)) combinations.

Parameter      Definition
parameters     n ∈ R+, n > 0                                                     53.208
PDF            fX(x) = (Γ((n + 1)/2) / (√(nπ) Γ(n/2))) (1 + x²/n)^(−(n+1)/2)     53.209
CDF            P (X ≤ x) = FX(x) = 1/2 + x Γ((n + 1)/2) ·
               2F1(1/2, (n + 1)/2; 3/2; −x²/n) / (√(nπ) Γ(n/2))                  53.210
mean           EX = 0                                                            53.211
median         med = 0                                                           53.212
mode           mode = 0                                                          53.213
variance       D²X = n/(n − 2) for n > 2, otherwise undefined                    53.214
skewness       γ1 = 0 for n > 3                                                  53.215
kurtosis       γ2 = 6/(n − 4) for n > 4                                          53.216
entropy        h(X) = ((n + 1)/2) (ψ((n + 1)/2) − ψ(n/2)) + log (√n B(n/2, 1/2)) 53.217
mgf            undefined                                                         53.218

Table 53.11: Parameters of the Student’s t-distribution.

55 http://en.wikipedia.org/wiki/Student%27s_t-distribution [accessed 2007-09-30]


56 More information on the gamma function Γ used in Equation 53.209 and Equation 53.210 can be
found in ?? on page ??. 2F1 in Equation 53.210 stands for the hypergeometric function; ψ and B in
Equation 53.217 are the digamma and the beta function.
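The convergence to N(0, 1) for n → ∞ can be checked by evaluating Equation 53.209 directly. A sketch (our own function names; `math.lgamma` is used because `math.gamma` overflows for large n):

```python
import math

def t_pdf(x, n):
    """Student's t PDF, Equation 53.209, with the gamma ratio computed
    via log-gamma to stay finite for large n."""
    c = math.exp(math.lgamma((n + 1) / 2.0) - math.lgamma(n / 2.0))
    return c / math.sqrt(n * math.pi) * (1.0 + x * x / n) ** (-(n + 1) / 2.0)

def normal_pdf(x):
    """PDF of N(0, 1), i.e., Equation 53.165 with mu = 0 and sigma = 1."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# n = 1 has heavy tails; for large n the t PDF approaches the normal PDF:
print(round(t_pdf(2.0, 1), 4), round(t_pdf(2.0, 1000), 4), round(normal_pdf(2.0), 4))
```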

Fig. 53.8.a: The PDFs of some Student’s t-distributions (n ∈ {1, 2, 5}; for n → ∞, the t-distribution approaches the normal distribution N(0, 1)).
Fig. 53.8.b: The CDFs of the same Student’s t-distributions.

Figure 53.8: Examples for Student’s t-distribution.


n 0.75 .8 .85 .875 .9 .95 .975 .99 .995 .9975 .999 .9995
1 1.000 1.376 1.963 2.414 3.078 6.314 12.71 31.82 63.66 127.3 318.3 636.6
2 0.816 1.061 1.386 1.605 1.886 2.920 4.303 6.965 9.925 14.09 22.33 31.60
3 0.765 0.978 1.250 1.423 1.638 2.353 3.182 4.541 5.841 7.453 10.21 12.92
4 0.741 0.941 1.190 1.344 1.533 2.132 2.776 3.747 4.604 5.598 7.173 8.610
5 0.727 0.920 1.156 1.301 1.476 2.015 2.571 3.365 4.032 4.773 5.893 6.869
6 0.718 0.906 1.134 1.273 1.440 1.943 2.447 3.143 3.707 4.317 5.208 5.959
7 0.711 0.896 1.119 1.254 1.415 1.895 2.365 2.998 3.499 4.029 4.785 5.408
8 0.706 0.889 1.108 1.240 1.397 1.860 2.306 2.896 3.355 3.833 4.501 5.041
9 0.703 0.883 1.100 1.230 1.383 1.833 2.262 2.821 3.250 3.690 4.297 4.781
10 0.700 0.879 1.093 1.221 1.372 1.812 2.228 2.764 3.169 3.581 4.144 4.587
11 0.697 0.876 1.088 1.214 1.363 1.796 2.201 2.718 3.106 3.497 4.025 4.437
12 0.695 0.873 1.083 1.209 1.356 1.782 2.179 2.681 3.055 3.428 3.930 4.318
13 0.694 0.870 1.079 1.204 1.350 1.771 2.160 2.650 3.012 3.372 3.852 4.221
14 0.692 0.868 1.076 1.200 1.345 1.761 2.145 2.624 2.977 3.326 3.787 4.140
15 0.691 0.866 1.074 1.197 1.341 1.753 2.131 2.602 2.947 3.286 3.733 4.073
16 0.690 0.865 1.071 1.194 1.337 1.746 2.120 2.583 2.921 3.252 3.686 4.015
17 0.689 0.863 1.069 1.191 1.333 1.740 2.110 2.567 2.898 3.222 3.646 3.965
18 0.688 0.862 1.067 1.189 1.330 1.734 2.101 2.552 2.878 3.197 3.610 3.922
19 0.688 0.861 1.066 1.187 1.328 1.729 2.093 2.539 2.861 3.174 3.579 3.883
20 0.687 0.860 1.064 1.185 1.325 1.725 2.086 2.528 2.845 3.153 3.552 3.850
21 0.686 0.859 1.063 1.183 1.323 1.721 2.080 2.518 2.831 3.135 3.527 3.819
22 0.686 0.858 1.061 1.182 1.321 1.717 2.074 2.508 2.819 3.119 3.505 3.792
23 0.685 0.858 1.060 1.180 1.319 1.714 2.069 2.500 2.807 3.104 3.485 3.767
24 0.685 0.857 1.059 1.179 1.318 1.711 2.064 2.492 2.797 3.091 3.467 3.745
25 0.684 0.856 1.058 1.178 1.316 1.708 2.060 2.485 2.787 3.078 3.450 3.725
26 0.684 0.856 1.058 1.177 1.315 1.706 2.056 2.479 2.779 3.067 3.435 3.707
27 0.684 0.855 1.057 1.176 1.314 1.703 2.052 2.473 2.771 3.057 3.421 3.690
28 0.683 0.855 1.056 1.175 1.313 1.701 2.048 2.467 2.763 3.047 3.408 3.674
29 0.683 0.854 1.055 1.174 1.311 1.699 2.045 2.462 2.756 3.038 3.396 3.659
30 0.683 0.854 1.055 1.173 1.310 1.697 2.042 2.457 2.750 3.030 3.385 3.646
40 0.681 0.851 1.050 1.167 1.303 1.684 2.021 2.423 2.704 2.971 3.307 3.551
50 0.679 0.849 1.047 1.164 1.299 1.676 2.009 2.403 2.678 2.937 3.261 3.496
60 0.679 0.848 1.045 1.162 1.296 1.671 2.000 2.390 2.660 2.915 3.232 3.460
80 0.678 0.846 1.043 1.159 1.292 1.664 1.990 2.374 2.639 2.887 3.195 3.416
100 0.677 0.845 1.042 1.158 1.290 1.660 1.984 2.364 2.626 2.871 3.174 3.390
120 0.677 0.845 1.041 1.157 1.289 1.658 1.980 2.358 2.617 2.860 3.160 3.373
∞ 0.674 0.842 1.036 1.150 1.282 1.645 1.960 2.326 2.576 2.807 3.090 3.291

Table 53.12: Table of Student’s t-distribution with right-tail probabilities.

53.5 Example – Throwing a Dice

Let us now discuss the different parameters of a random variable using the example of
throwing a die. On a die, the numbers one to six are written, and the result of throwing it
is the number written on the side facing upwards. If a die is ideal, the numbers one to six
will show up with exactly the same probability, 1/6. The set of all possible outcomes Ω of
throwing a die is thus

Ω = {1, 2, 3, 4, 5, 6}   (53.219)

We define a random variable X : Ω 7→ R that assigns real numbers to the possible outcomes
of throwing the die in a way that the value of X matches the number on the die:

X : Ω 7→ {1, 2, 3, 4, 5, 6}   (53.220)
It is obviously a uniformly distributed discrete random variable (see Section 53.3.1 on
page 668) that can take on six states. We can now define the probability mass function PMF
and the according cumulative distribution function CDF as follows (see also Figure 53.9):

FX(x) = P (X ≤ x) = 0 if x < 1; ⌊x⌋/6 if 1 ≤ x ≤ 6; 1 otherwise   (53.221)

fX(x) = P (X = x) = 1/6 if x ∈ {1, 2, 3, 4, 5, 6}; 0 otherwise   (53.222)

Figure 53.9: The PMF and CDF of the dice throw. (Plots omitted: the CDF is a step function rising from 0 to 1 in jumps of 1/6 at the points x = 1, . . . , 6, and the PMF places mass 1/6 on each of these points.)

We can now discuss the statistical parameters of this experiment. This is a good opportunity
to compare the "real" parameters and their estimates. We therefore assume that the dice
was thrown ten times (n = 10) in an experiment. The following numbers were thrown,
as illustrated in Figure 53.10:

A = {4, 5, 3, 2, 4, 6, 4, 2, 5, 3} (53.223)

Table 53.13 outlines how the parameters of the random variable are computed. The real
values of the parameters are defined using the PMF or CDF functions, while the estimations
are based on the sample data obtained from an actual experiment.
Figure 53.10: The numbers thrown in the dice example. (Bar plot omitted.)

parameter   true value                                 estimate

count       nonexistent                                n = len(A) = 10                                               (53.224)
minimum     a = min {x : f_X(x) > 0} = 1               ã = min A = 2 ≈ a                                             (53.225)
maximum     b = max {x : f_X(x) > 0} = 6               b̃ = max A = 6 ≈ b                                            (53.226)
range       range = b − a + 1 = 6                      r̃ = b̃ − ã + 1 = 5 ≈ range                                   (53.227)
mean        EX = (a + b)/2 = 7/2 = 3.5                 ā = (1/n) Σ_{i=0}^{n−1} A[i] = 19/5 = 3.8 ≈ EX                (53.228)
median      med = (a + b)/2 = 7/2 = 3.5                As = sortAsc(A); m̂ed = (As[n/2] + As[n/2 − 1])/2 = 4 ≈ med   (53.229)
mode        mode = ∅                                   m̂ode = {4} ≈ mode                                            (53.230)
variance    D²X = σ² = (r² − 1)/12 = 35/12 ≈ 2.917     s² = (1/(n−1)) Σ_{i=0}^{n−1} (A[i] − ā)² = 26/15 ≈ 1.73 ≈ σ²  (53.231)
skewness    γ1 = 0                                     G1 ≈ 0.0876 ≈ γ1                                              (53.232)
kurtosis    γ2 = −(6/5) · (r² + 1)/(r² − 1) = −222/175 ≈ −1.269     G2 ≈ −0.7512 ≈ γ2                                (53.233)

Table 53.13: Parameters of the dice throw experiment.

As you can see, the estimations of the parameters sometimes differ significantly from their
true values. More information about estimation can be found in the following section.
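The empirical columns of Table 53.13 can be reproduced with a few lines of code. The following sketch (class and method names are our own illustration, not part of the book's source package) computes the arithmetic mean, the median, and the unbiased sample variance of the data set A from Equation 53.223:

```java
import java.util.Arrays;

/** Reproduces the sample estimates of Table 53.13 for A = {4,5,3,2,4,6,4,2,5,3}. */
public class DiceStats {

  /** the arithmetic mean: (1/n) * sum of all elements */
  public static double mean(final int[] data) {
    double sum = 0d;
    for (final int x : data) {
      sum += x;
    }
    return sum / data.length;
  }

  /** the sample median: average of the two middle elements for even n */
  public static double median(final int[] data) {
    final int[] sorted = data.clone();
    Arrays.sort(sorted);
    final int n = sorted.length;
    if ((n & 1) == 0) {
      return 0.5d * (sorted[n / 2] + sorted[(n / 2) - 1]);
    }
    return sorted[n / 2];
  }

  /** the unbiased sample variance: (1/(n-1)) * sum of squared deviations */
  public static double variance(final int[] data) {
    final double m = mean(data);
    double sum = 0d;
    for (final int x : data) {
      sum += (x - m) * (x - m);
    }
    return sum / (data.length - 1);
  }

  public static void main(final String[] args) {
    final int[] A = { 4, 5, 3, 2, 4, 6, 4, 2, 5, 3 };
    System.out.println("mean     = " + mean(A));     // 3.8, true EX = 3.5
    System.out.println("median   = " + median(A));   // 4.0, true med = 3.5
    System.out.println("variance = " + variance(A)); // about 1.733, true variance = 35/12
  }
}
```

Running the program makes the gap between the estimates and the true parameters of the ideal dice directly visible.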

53.6 Estimation Theory

53.6.1 Introduction

Estimation theory is the science of approximating the values of parameters based on mea-
surements or otherwise obtained sample data [1506, 2203, 2299, 2494, 2782]. At the center of
this branch of statistics is the quest to find good estimators which approximate the real values
of the parameters as well as possible.
Definition D53.54 (Estimator). An estimator57 θe is a rule (most often a mathematical
function) that takes a set of sample data A as input and returns an estimation of one
parameter θ of the random distribution of the process sampled with this data set.

We have already discussed some estimators in Section 53.2 – the arithmetic mean of a sample
data set (see Definition D53.32 on page 661), for example, is an estimator for the expected
value (see Definition D53.30 on page 661) and in Equation 53.67 on page 662 we introduced
an estimator for the sample variance. Obviously, the closer the results (the estimates) of an
estimator θ̃ come to the real value of the parameter θ, the better the estimator is.

Definition D53.55 (Point Estimator). We define a point estimator θe to be an estimator


which is a mathematical function θe : Rn 7→ R. This function takes the data sample A (here
considered as a real vector A ∈ Rn ) as input and returns the estimate in the form of a (real)
scalar value.

Definition D53.56 ((Estimation) Error). The absolute (estimation) error ε58 is the dif-
ference between the value returned by a point estimator θe of a parameter θ for a certain
input A and its real value. Notice that the error ε can be zero, positive, or negative.
 
ε_A(θ̃) = θ̃(A) − θ    (53.234)

In the following, we will most often not explicitly refer to the data sample A as basis of the
estimation θe anymore. We assume that it is implicitly clear that estimations are usually
based on such samples and that subscripts like the A in εA in Equation 53.234 are not
needed.
 
Definition D53.57 (Bias). The bias Bias θe of an estimator θe is the expected value of
the difference of the estimate and the real value.
Bias(θ̃) = E[θ̃ − θ] = E[ε(θ̃)]    (53.235)

Definition D53.58 (Unbiased Estimator). An unbiased estimator has a zero bias.


Bias(θ̃) = E[θ̃ − θ] = E[ε(θ̃)] = 0 ⇔ E[θ̃] = θ    (53.236)

 
Definition D53.59 (Mean Square Error). The mean square error59 MSE(θ̃) of an es-
timator θ̃ is the expected value of the square of the estimation error ε. It is also the sum of
the variance of the estimator and the square of its bias.

MSE(θ̃) = E[(θ̃ − θ)²] = E[ε(θ̃)²]    (53.237)
MSE(θ̃) = D²(θ̃) + (Bias(θ̃))²    (53.238)
57 http://en.wikipedia.org/wiki/Estimator [accessed 2007-07-03], http://mathworld.wolfram.

com/Estimator.html [accessed 2007-07-03]


58 http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics [accessed 2007-07-03]
59 http://en.wikipedia.org/wiki/Mean_squared_error [accessed 2007-07-03]
The “MSE” is a measure for how much an estimator differs from the quantity to be estimated.
Notice that the “MSE” of an unbiased estimator coincides with the variance D²(θ̃) of θ̃. For
estimating the mean square error of an estimator θ̃ from n individual estimates θ̃_i, we use
the sample mean:

M̃SE(θ̃) = (1/n) Σ_{i=1}^{n} (θ̃_i − θ̃)²    (53.239)
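The effect of bias can be made tangible with a small simulation. The sketch below (our own illustration, not taken from the book's source package) repeatedly draws samples of n = 10 die throws and averages the variance estimates obtained with the 1/n denominator and with the 1/(n−1) denominator of Equation 53.67. The former converges to ((n−1)/n)σ² ≈ 2.625 and is therefore biased, while the latter converges to the true σ² = 35/12 ≈ 2.917:

```java
import java.util.Random;

/** Monte Carlo illustration of the bias of the 1/n variance estimator. */
public class BiasDemo {

  /** average value of the chosen variance estimator over many simulated samples */
  public static double averageEstimate(final int n, final int trials,
      final boolean unbiased, final long seed) {
    final Random rand = new Random(seed);
    double total = 0d;
    for (int t = 0; t < trials; t++) {
      final int[] sample = new int[n];
      double mean = 0d;
      for (int i = 0; i < n; i++) {
        sample[i] = 1 + rand.nextInt(6); // one throw of a fair dice
        mean += sample[i];
      }
      mean /= n;
      double ss = 0d; // sum of squared deviations from the sample mean
      for (final int x : sample) {
        ss += (x - mean) * (x - mean);
      }
      total += ss / (unbiased ? (n - 1) : n);
    }
    return total / trials;
  }

  public static void main(final String[] args) {
    // true variance of a fair dice: 35/12 = 2.9166...
    System.out.println("1/(n-1) estimator: " + averageEstimate(10, 100000, true, 1L));
    System.out.println("1/n     estimator: " + averageEstimate(10, 100000, false, 1L));
  }
}
```

With 100 000 trials, the averages settle close enough to their expected values to exhibit the bias clearly.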

53.6.2 Likelihood and Maximum Likelihood Estimators

Likelihood60 is a mathematical expression complementary to probability. Whereas proba-
bility allows us to predict the outcome of a random experiment based on known parameters,
likelihood allows us to draw conclusions about unknown parameters based on observed outcomes.

Definition D53.60 (Likelihood Function). The likelihood function “L” returns a value
that is proportional to the probability of a postulated underlying law or probability distri-
bution ϕ according to an observed outcome (denoted as the vector y).

L[ϕ |y ] ∝ P ( y| ϕ) (53.240)

Notice that “L” does not necessarily represent a probability density/mass function and its
integral does not necessarily equal 1. In many sources, “L” is defined in terms of
a parameter θ instead of the function ϕ. We prefer the latter notation since it is more
general.

53.6.2.1 Observation of an Unknown Process ϕ

Assume that we are given a finite set A of n sample data points.

A = {(x0 , y0 ) , (x1 , y1 ) , .., (xn−1 , yn−1 )} , xi , yi ∈ R ∀i ∈ 0..n − 1 (53.241)

The xi are known inputs or parameters of an unknown process defined by the function
ϕ : R 7→ R. By observing the corresponding outputs of the process, we have obtained the yi
values. During our observations, we make the measurement errors61 ηi .

yi = ϕ(xi ) + ηi ∀i ∈ 0..n − 1 (53.242)

About this measurement error η we make the following assumptions:

Eη = 0    (53.243)
η ∼ N(0, σ²),  0 < σ < +∞    (53.244)
cov(η_i, η_j) = 0  ∀i, j ∈ 0..n−1, i ≠ j    (53.245)

1. The expected values of η in Equation 53.243 are all zero. Our measurement device
thus gives us, on average, unbiased results. If the expected value of η were not zero, we
could simply recalibrate our (imaginary) measurement equipment in order to subtract
Eη from all measurements and would obtain unbiased observations.

2. Furthermore, Equation 53.244 states that the ηi are normally distributed around the
zero point with an unknown, nonzero variance σ². Supposing measurement errors to
be normally distributed is quite common and a reasonably good approximation in most
60 http://en.wikipedia.org/wiki/Likelihood [accessed 2007-07-03]
61 http://en.wikipedia.org/wiki/Measurement_error [accessed 2007-07-03]
cases. The white noise62 in the transmission of signals, for example, is often modeled with
Gaussian distributed63 amplitudes. This second assumption includes, of course, the
first one: being normally distributed with N(µ = 0, σ²) implies a zero expected value
of the error.
3. With Equation 53.245, we assume that the errors ηi of the single measurements are
stochastically independent. If there existed a connection between them, it would be
part of the underlying physical law ϕ and could be incorporated in our measurement
device and again be subtracted.

53.6.2.2 Objective: Estimation


Assume that we can choose from a, possible infinite large, set of functions (estimators)
f ∈ F.
f ∈ F ⇒ f : R 7→ R (53.246)
From this set, we wish to select the function f ⋆ ∈ F which resembles ϕ the best (i. e., no
worse than all other f ∈ F ). ϕ is not necessarily an element of F , so we cannot always
presume to find a f ⋆ ≡ ϕ.
Each estimator f deviates by the estimation error ε(f) (see Definition D53.56 on
page 692) from the yi-values. The estimation error depends on f and may vary for dif-
ferent estimators.

y_i = f(x_i) + ε_i(f)  ∀i ∈ 0..n−1    (53.247)

We consider all f ∈ F to be valid estimators for ϕ and simply look for the one that “fits
best”. We can now combine Equation 53.247 with 53.242:

f(x_i) + ε_i(f) = y_i = ϕ(x_i) + η_i  ∀i ∈ 0..n−1    (53.248)
We do not know ϕ and thus cannot determine the ηi. According to the likelihood method,
we pick the function f ∈ F that would most probably have produced the outcomes yi.
In other words, we have to maximize the likelihood of the occurrence of the εi(f). The
likelihood here is defined under the assumption that the true measurement errors ηi are
normally distributed (see Equation 53.244). So what we can do is to determine the εi in
a way that their occurrence is most probable according to the distribution of the random
variable that created the ηi, i.e., N(0, σ²). In the best case, εi(f⋆) = ηi and thus, f⋆
is equivalent to ϕ, at least for the sample information A available to us.

53.6.2.3 Maximizing the Likelihood


Therefore, we can regard the εi(f) as outcomes of independent random experiments, as
uncorrelated random variables, and combine them to a multivariate normal distribution.
For ease of notation, we define ε(f) to be the vector containing all εi(f)-values.

ε(f) = (ε_1(f), ε_2(f), . . . , ε_n(f))^T    (53.249)

The probability density function of a multivariate normal distribution with independent
variables εi that have the same variance σ² looks like this (as defined in Equation 53.183 on
page 681):

f_X(ε(f)) = (1/(2πσ²))^(n/2) · e^(−Σ_{i=0}^{n−1} (ε_i(f) − µ)² / (2σ²)) = (1/(2πσ²))^(n/2) · e^(−Σ_{i=0}^{n−1} ε_i(f)² / (2σ²))    (53.250)
62 http://en.wikipedia.org/wiki/White_noise [accessed 2007-07-03]
63 http://en.wikipedia.org/wiki/Gaussian_noise [accessed 2007-07-03]

Amongst all possible vectors ε(f) : f ∈ F, we need to find the most probable one ε⋆ = ε(f⋆)
according to Equation 53.250. The function f⋆ which produces this vector will then be the
one which most probably matches ϕ.

In order to express how likely the observation of some outcomes is under a certain set of
parameters, we have defined the likelihood function “L” in Definition D53.60. Here we can
use the probability density function f_X of the normal distribution, since the maximal values
of f_X are those that are most probable to occur.

L[ε(f)|f] = f_X(ε(f)) = (1/(2πσ²))^(n/2) · e^(−Σ_{i=0}^{n−1} ε_i(f)² / (2σ²))    (53.251)

f⋆ ∈ F : L[ε(f⋆)|f⋆] = max_{∀f∈F} L[ε(f)|f]    (53.252)
                     = max_{∀f∈F} (1/(2πσ²))^(n/2) · e^(−Σ_{i=0}^{n−1} ε_i(f)² / (2σ²))    (53.253)

Finding an f⋆ which maximizes this likelihood, however, is equivalent to finding an f⋆ which
minimizes the sum of the squares of the ε-values.

f⋆ ∈ F : Σ_{i=0}^{n−1} ε_i(f⋆)² = min_{∀f∈F} Σ_{i=0}^{n−1} ε_i(f)²    (53.254)

According to Equation 53.247, we can now substitute the εi-values with the difference be-
tween the observed outcomes yi and the estimates f(xi).

Σ_{i=0}^{n−1} ε_i(f)² = Σ_{i=0}^{n−1} (y_i − f(x_i))²    (53.255)

Definition D53.61 (Maximum Likelihood Estimator). A maximum likelihood esti-
mator64 [70] f⋆ is an estimator which fits with maximum likelihood to a given set of sample
data A.

Under the particular assumption of uncorrelated error terms normally distributed around
zero, a MLE minimizes Equation 53.256.

f⋆ ∈ F : Σ_{i=0}^{n−1} (y_i − f⋆(x_i))² = min_{∀f∈F} Σ_{i=0}^{n−1} (y_i − f(x_i))²    (53.256)

Minimizing the sum of the squared differences between the observed yi and the estimates f(xi)
also minimizes their mean. With this, we have also shown that the estimator which mini-
mizes the mean square error MSE (see Definition D53.59) is the best estimator according to the
likelihood of the produced outcomes.

f⋆ ∈ F : (1/n) Σ_{i=0}^{n−1} (y_i − f⋆(x_i))² = min_{∀f∈F} (1/n) Σ_{i=0}^{n−1} (y_i − f(x_i))²    (53.257)
f⋆ ∈ F : MSE(f⋆) = min_{∀f∈F} MSE(f)    (53.258)
2
The term (yi − f (xi )) is often justified by the statement that large deviations of f from
the y-values are punished harder than smaller ones. The correct reason why we minimize
the square error, however, is that we maximize the likelihood of the resulting estimator.
64 http://en.wikipedia.org/wiki/Maximum_likelihood [accessed 2007-07-03]
At this point, one should also notice that the xi could also be replaced with vectors
xi ∈ Rm without any further implications or modifications of the equations.

In most practical cases, the set F of possible functions is defined rather narrowly. It usually
contains only one type of parameterized function, so we only have to determine the unknown
parameters in order to find f⋆. Let us consider a set of linear functions as example. If we
want to find estimators of the form F = {f : f(x) = ax + b, a, b ∈ R}, we will minimize
Equation 53.259 by determining the best possible values for a and b.

MSE(f(x)|a, b) = (1/n) Σ_{i=0}^{n−1} (ax_i + b − y_i)²    (53.259)

If we could now find a perfect estimator f⋆_p and our data were free of any measurement
error, all parts of the sum would become zero. For n > 2, this perfect estimator would be the
solution of the over-determined system of linear equations illustrated in Equation 53.260.

0 = ax_0 + b − y_0
0 = ax_1 + b − y_1
. . .                            (53.260)
0 = ax_{n−1} + b − y_{n−1}

Since it is normally not possible to obtain a perfect estimator because there are measurement
errors or other uncertainties such as unknown dependencies, the system in Equation 53.260 often
cannot be solved but only minimized.
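For the linear family f(x) = ax + b, minimizing Equation 53.259 has a well-known closed-form solution obtained by setting the partial derivatives with respect to a and b to zero (the normal equations). The following sketch (our own helper class, not part of the book's source package) computes these least-square coefficients:

```java
/** Closed-form least-square fit of f(x) = a*x + b, minimizing Equation 53.259. */
public class LeastSquares {

  /** returns {a, b} minimizing the mean square error over the points (x[i], y[i]) */
  public static double[] fit(final double[] x, final double[] y) {
    final int n = x.length;
    double sx = 0d, sy = 0d, sxx = 0d, sxy = 0d;
    for (int i = 0; i < n; i++) {
      sx += x[i];
      sy += y[i];
      sxx += x[i] * x[i];
      sxy += x[i] * y[i];
    }
    // normal equations: a = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2), b = (Sy - a*Sx) / n
    final double a = ((n * sxy) - (sx * sy)) / ((n * sxx) - (sx * sx));
    final double b = (sy - (a * sx)) / n;
    return new double[] { a, b };
  }

  public static void main(final String[] args) {
    // noise-free data from y = 2x + 1: the fit recovers a = 2 and b = 1 exactly
    final double[] x = { 0d, 1d, 2d, 3d };
    final double[] y = { 1d, 3d, 5d, 7d };
    final double[] ab = fit(x, y);
    System.out.println("a = " + ab[0] + ", b = " + ab[1]);
  }
}
```

With noisy measurements, the same code returns the coefficients of the over-determined system in Equation 53.260 that minimize the mean square error instead of solving it exactly.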

53.6.2.4 Best Linear Unbiased Estimators

The Gauss-Markov Theorem65 defines BLUEs (best linear unbiased estimators) according
to the facts just discussed:

Definition D53.62 (BLUE). In a linear model in which the measurement errors εi are
uncorrelated and all have an expected value of zero and the same variance, the best linear
unbiased estimators (BLUE) of the (unknown) coefficients are the least-square estimators [2184].

Note that, unlike the maximum likelihood derivation above, the Gauss-Markov Theorem does
not require the errors to be normally distributed: the zero-mean, equal-variance, and
uncorrelatedness assumptions (Equation 53.243 and Equation 53.245) suffice.

53.6.3 Confidence Intervals

There is a very simple principle in statistics that always holds: All estimates may as well
be wrong. There is no guarantee whatsoever that we have estimated a parameter of an
underlying distribution correctly, regardless of how many samples we have analyzed. However, if
we can assume or know the underlying distribution of the process which has been sampled,
we can compute certain intervals which include the real value of the estimated parameter
with a certain probability. Unlike point estimators, which approximate a parameter of a data
sample with a single value, confidence intervals66 (CIs) are estimates that give
upper and lower boundaries in which the true value of the parameter will be located with a
certain, predefined probability [505, 674, 931].

65 http://en.wikipedia.org/wiki/Gauss-Markov_theorem [accessed 2007-07-03], http://www.


answers.com/topic/gauss-markov-theorem [accessed 2007-07-03]
66 http://en.wikipedia.org/wiki/Confidence_interval [accessed 2007-10-01]
Definition D53.63 (Confidence Interval). A confidence interval is an estimate
providing a range in which the true value of the estimated parameter will be located with a
certain probability.

The advantage of confidence intervals is that we can directly derive the significance of the
data samples from them: the larger the intervals are, the less reliable is the sample. The
narrower the confidence intervals are for a high predefined probability, the more significant
are the conclusions that can be drawn from them.

Example E53.6 (Chickens and Confidence).


Imagine we run a farm and own 25 chickens. Each chicken lays one egg a day. We collect
all the eggs in the morning and weigh them in order to find the average weight of the eggs
produced by our farm. Assume our sample contains the values (in g):

A = {120, 121, 119, 116, 115, 122, 121, 123, 122, 120, 119, 122, 121, 120, 119, 121, 123, 117, 118, 121}    (53.261)
n = len(A) = 20    (53.262)

From these measurements, we can determine the arithmetic mean a and the sample variance
s² according to Equation 53.60 on page 661 and Equation 53.67 on page 662:

a = (1/n) Σ_{i=0}^{n−1} A[i] = 2400/20 = 120    (53.263)
s² = (1/(n−1)) Σ_{i=0}^{n−1} (A[i] − a)² = 92/19    (53.264)

The question that arises now is if the mean of 120 is significant, i. e., whether it likely
approximates the expected value of the egg weight, or if the data sample was too small to
be representative. Furthermore, we would like to know in which interval the expected value
of the egg weights will likely be located. Here, confidence intervals come into play.
First, we need to find out what the underlying distribution of the random variable
producing A as sample output is. In case of chicken eggs, we safely can assume 67 that
it is the normal distribution discussed in Section 53.4.2 on page 678. With that we can
calculate an interval which includes the unknown parameter µ (i. e., the real expected value)
with a confidence probability of γ. γ = 1 − α is the so-called confidence coefficient and α is
the probability that the real value of the estimated parameter does not lie inside the confidence
interval.

Let us compute the interval including the expected value µ of the chicken egg weights
with a probability of γ = 1 − α = 95%. Thus, α = 0.05. Therefore, we have to pick the right
formula from Section 53.6.3.1 on the following page (here it is Equation 53.277 on the next
page) and substitute in the proper values:

µ_γ ∈ [a ± t_{1−α/2,n−1} · s/√n]    (53.265)
µ_95% ∈ [120 ± t_{0.975,19} · √(92/19)/√20]    (53.266)
µ_95% ∈ [120 ± 2.093 · 0.4920]    (53.267)
µ_95% ∈ [118.97, 121.03]    (53.268)

67 Notice that such an assumption is also a possible source of error!

The value of t_{0.975,19} can easily be obtained from Table 53.12 on page 689, which contains the
respective quantiles of Student’s t-distribution discussed in Section 53.4.5 on page 687. Let
us repeat the procedure in order to find the intervals that will contain µ with probabilities
γ = 99% ⇒ α = 0.01 and γ = 90% ⇒ α = 0.1:

µ_99% ∈ [120 ± t_{0.995,19} · 0.4920]    (53.269)
µ_99% ∈ [120 ± 2.861 · 0.4920]    (53.270)
µ_99% ∈ [118.59, 121.41]    (53.271)
µ_90% ∈ [120 ± t_{0.95,19} · 0.4920]    (53.272)
µ_90% ∈ [120 ± 1.729 · 0.4920]    (53.273)
µ_90% ∈ [119.15, 120.85]    (53.274)

As you can see, the higher the confidence probability we specify, the larger become the
intervals in which the parameter is contained. We can be 99% sure that the expected
value of the egg weight is somewhere between 118.59 and 121.41. If we narrow the interval
down to [119.15, 120.85], we can only be 90% confident that the real expected value falls into it
based on the data samples which we have gathered.

53.6.3.1 Some Confidence Intervals


The following confidence intervals are two-sided, i.e., we determine a range θ̃_γ ∈
[θ̃′ − x, θ̃′ + x] that contains the parameter θ with probability γ based on the estimate
θ̃′. If you need a one-sided confidence interval such as θ̃_γ ∈ (−∞, θ̃′ + x] or θ̃_γ ∈ [θ̃′ − x, ∞),
you just need to replace 1 − α/2 with 1 − α in the equations.

53.6.3.1.1 Expected Value of a Normal Distribution N

53.6.3.1.1.1 With known variance σ² If the exact variance σ² of the distribution
underlying our data samples is known, and we have an estimate of the expected value µ by
the arithmetic mean a according to Equation 53.60 on page 661, the two-sided confidence
interval (of probability γ) for the expected value of the normal distribution is:

µ_γ ∈ [a ± z(1 − α/2) · σ/√n]    (53.276)

where z(y) ≡ probit(y) ≡ Φ⁻¹(y) is the y-quantile of the standard normal distribution
(see Section 53.4.2.2 on page 680), which can for example be looked up in Table 53.7.
53.6.3.1.1.2 With estimated sample variance s² Often, the true variance σ² of an underly-
ing distribution is not known and instead estimated with the sample variance s² according
to Equation 53.67 on page 662. The two-sided confidence interval (of probability γ) for
the expected value can then be computed using the arithmetic mean a, the estimate
of the standard deviation s = √s² of the sample, and the t_{1−α/2,n−1} quantile of Student’s
t-distribution, which can be looked up in Table 53.12 on page 689.

µ_γ ∈ [a ± t_{1−α/2,n−1} · s/√n]    (53.277)
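Equation 53.277 is straightforward to evaluate once the t-quantile has been looked up. The sketch below is our own code (not part of the book's source package); the quantile t_{0.975,19} = 2.093 is hard-coded from Table 53.12, and the data are the egg weights of Example E53.6:

```java
/** Two-sided confidence interval for the expected value, Equation 53.277. */
public class MeanCI {

  /** returns {lower, upper} = mean -/+ t * s / sqrt(n) */
  public static double[] interval(final double[] data, final double tQuantile) {
    final int n = data.length;
    double mean = 0d;
    for (final double v : data) {
      mean += v;
    }
    mean /= n;
    double ss = 0d; // sum of squared deviations
    for (final double v : data) {
      ss += (v - mean) * (v - mean);
    }
    final double s = Math.sqrt(ss / (n - 1)); // estimated standard deviation
    final double halfWidth = tQuantile * s / Math.sqrt(n);
    return new double[] { mean - halfWidth, mean + halfWidth };
  }

  public static void main(final String[] args) {
    final double[] eggs = { 120, 121, 119, 116, 115, 122, 121, 123, 122, 120,
        119, 122, 121, 120, 119, 121, 123, 117, 118, 121 };
    // t_{0.975,19} = 2.093, taken from Table 53.12, gives the 95% interval
    final double[] ci = interval(eggs, 2.093d);
    System.out.println("[" + ci[0] + ", " + ci[1] + "]"); // roughly [118.97, 121.03]
  }
}
```

Substituting t_{0.995,19} = 2.861 or t_{0.95,19} = 1.729 yields the corresponding 99% and 90% intervals.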
53.6.3.1.2 Variance of a Normal Distribution

The two-sided confidence interval (of probability γ) for the variance of a normal distribution
can be computed using the sample variance s² and the χ²(p, k)-quantiles of the χ² distribution,
which can be looked up in Table 53.10 on page 686.

σ²_γ ∈ [ (n−1)s² / χ²(1 − α/2, n−1) ,  (n−1)s² / χ²(α/2, n−1) ]    (53.278)

53.6.3.1.3 Success Probability p of a B (1, p) Binomial Distribution

The two-sided confidence interval (of probability γ) of the success probability p of a B (1, p)
binomial distribution can be computed as follows, where a is the arithmetic mean of the
0/1 outcomes:

p_γ ∈ [ n/(n + z²_{1−α/2}) · ( a + z²_{1−α/2}/(2n) ± z_{1−α/2} · √( a(1−a)/n + z²_{1−α/2}/(4n²) ) ) ]    (53.279)
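Equation 53.279 (the Wilson score interval) can be evaluated directly. The following sketch is our own code (not part of the book's source package); the quantile z_{0.975} = 1.96 and the example figures of 60 successes in n = 100 trials are illustrative choices:

```java
/** Wilson score interval for a Bernoulli success probability, Equation 53.279. */
public class WilsonCI {

  /** returns {lower, upper} for the given number of successes out of n trials */
  public static double[] interval(final int successes, final int n, final double z) {
    final double a = ((double) successes) / n; // sample mean of the 0/1 outcomes
    final double z2 = z * z;
    final double factor = n / (n + z2);
    final double center = a + (z2 / (2d * n));
    final double halfWidth = z * Math.sqrt((a * (1d - a) / n)
        + (z2 / (4d * ((double) n) * n)));
    return new double[] { factor * (center - halfWidth), factor * (center + halfWidth) };
  }

  public static void main(final String[] args) {
    // 60 successes in 100 trials, z_{0.975} = 1.96 for a 95% interval
    final double[] ci = interval(60, 100, 1.96d);
    System.out.println("[" + ci[0] + ", " + ci[1] + "]"); // roughly [0.502, 0.691]
  }
}
```

Unlike the naive a ± z√(a(1−a)/n) interval, this interval always stays inside [0, 1].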

53.6.3.1.4 Expected Value of an Unknown Distribution with Sample Variance

The two-sided confidence interval (of probability γ) of the expected value EX of an unknown
distribution with an unknown real variance D²X can be determined using the arithmetic
mean a and the sample variance s² if the sample data set contains more than n = 50
elements.

EX_γ ∈ [a ± z(1 − α/2) · s/√n]    (53.280)

53.6.3.1.5 Confidence Intervals from Tests


Many statistical tests (such as Wilcoxon’s signed rank test introduced in ??) can be
inverted in order to obtain confidence intervals [307]. The topic of statistical tests is
discussed in Section 53.7.

53.7 Statistical Tests

53.7.1 Introduction
With statistical tests [674, 1160, 1202, 1716, 2476, 2490], it is possible to find out whether
an alternative hypothesis H1 about the distribution(s) from a set of measured data A is
likely to be true. This is done by showing that the sampled data would very unlikely have
occurred if the opposite hypothesis, the null hypothesis H0 , holds.

Example E53.7 (EA Settings Test).


If we want to show, for instance, that two different settings for an Evolutionary Algorithm
will probably lead to different solution qualities (H1 ), we assume that the distributions of
the objective values of the candidate solutions returned by them are equal (H0 ). Then,
we run the two Evolutionary Algorithms multiple times and measure the outcome, i. e.,
obtain A. Based on A, we can estimate the probability α with which the two different sets
of measurements (the samples) would have occurred if H0 was true. In the case that this
probability is very low, let’s say α < 5%, H0 can be rejected (with 5% probability of making
a type 1 error) and H1 is likely to hold. Otherwise, we would expect H0 to hold and reject
H1 .

Neyman and Pearson [2030, 2031] distinguish two classes of errors68 that can be made when
performing hypothesis tests:

Definition D53.64 (Type 1 Error). A type 1 error (α error, false positive)
is the rejection of a correct null hypothesis H0, i. e., the acceptance of a wrong alternative
hypothesis H1. Type 1 errors are made with probability α.

Definition D53.65 (Type 2 Error). A type 2 error (β error, false negative)
is the acceptance of a wrong null hypothesis H0, i. e., the rejection of a correct alternative
hypothesis H1. Type 2 errors are made with probability β.

Definition D53.66 (Power). The (statistical) power69 of a statistical test is the
probability of rejecting a false null hypothesis H0. Therefore, the power equals 1 − β.

A few basic principles for testing should be mentioned before going more into detail:

1. The more samples we have, the better the quality and significance of the conclusions
that we can make by testing. An arithmetic mean runtime of 7s is certainly more
significant when derived from 1000 runs of a certain algorithm than from the
sample set A = {9s, 5s}.

2. The more assumptions that we can make about the sampled probability distribution,
the more powerful will the tests be that are available.

3. Wrong assumptions, falsely carried out measurements, or other misconduct will nullify
all results and efforts put into testing.

In the following, we will discuss multiple methods for hypothesis testing. We can, for
instance, distinguish between tests based on paired samples and those for independent pop-
ulations. In Table 53.14, we illustrate an example of the former, where pairs of
elements (a, b) are drawn from two different populations. Table 53.15 contains two indepen-
dent samples a and b with a different number of elements (na = 6 ≠ nb = 8).

68 http://en.wikipedia.org/wiki/Type_I_and_type_II_errors [accessed 2008-08-15]


69 http://en.wikipedia.org/wiki/Statistical_power [accessed 2008-08-15]
Example E53.8 (Example for Paired Data Samples).

Row    a    b    d = b − a    Sign    Rank |r|    Rank r

 1.    2   10       +8         +        13          13
 2.    3    4       +1         +         2           2
 3.    6   10       +4         +        10          10
 4.    4    6       +2         +         6           6
 5.    6   11       +5         +        11          11
 6.    5    6       +1         +         2           2
 7.    4   11       +7         +        12          12
 8.    9    6       −3         -         9          −9
 9.   10   12       +2         +         6           6
10.    8    8        0         =         –           –
11.    6    8       +2         +         6           6
12.    7    6       −1         -         2          −2
13.    4    4        0         =         –           –
14.    4    6       +2         +         6           6
15.    9    7       −2         -         6          −6
     Σai = 87   Σbi = 115   D = Σdi = 28         R = Σri = 57

med(a) = 6; ā = 5.8
med(b) = 7; b̄ = 115/15 ≈ 7.67

Table 53.14: Example for paired samples (a, b).

This example data set consists of n = na = nb = 15 tuples (a, b). The leftmost column
is the tuple index, the columns a and b contain the tuple values, and d is the inner-tuple
difference b − a. The three columns to the right contain intermediate values which will later
be used in the example calculations accompanying the descriptions of the tests.
The arithmetic mean of sample a is ā = 5.8 and that of b is b̄ = 115/15 ≈ 7.67. The
medians are med(a) = 6 and med(b) = 7. Although it looks as if a and b were different and
the values of a seem to be lower than those of sample b, we cannot confirm this
without a statistical test.
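The rank columns of Table 53.14 can be computed mechanically: take the differences d = b − a, drop the zeros, rank the absolute differences (averaging ranks over ties), and re-attach the signs. The sketch below is our own illustration, not part of the book's source package:

```java
/** Computes the sum R of the signed ranks for the paired data of Table 53.14. */
public class SignedRanks {

  /** returns the sum of the signed ranks of the nonzero differences b[i] - a[i] */
  public static double rankSum(final int[] a, final int[] b) {
    // collect the nonzero differences d = b - a
    int m = 0;
    final double[] d = new double[a.length];
    for (int i = 0; i < a.length; i++) {
      final int diff = b[i] - a[i];
      if (diff != 0) {
        d[m++] = diff;
      }
    }
    // rank |d| with average ranks for ties, then sum the signed ranks
    double sum = 0d;
    for (int i = 0; i < m; i++) {
      final double ai = Math.abs(d[i]);
      int less = 0, equal = 0;
      for (int j = 0; j < m; j++) {
        final double aj = Math.abs(d[j]);
        if (aj < ai) {
          less++;
        } else if (aj == ai) {
          equal++;
        }
      }
      // average rank of a tie group occupying ranks less+1 .. less+equal
      final double rank = less + ((equal + 1) * 0.5d);
      sum += (d[i] > 0) ? rank : -rank;
    }
    return sum;
  }

  public static void main(final String[] args) {
    final int[] a = { 2, 3, 6, 4, 6, 5, 4, 9, 10, 8, 6, 7, 4, 4, 9 };
    final int[] b = { 10, 4, 10, 6, 11, 6, 11, 6, 12, 8, 8, 6, 4, 6, 7 };
    System.out.println("R = " + rankSum(a, b)); // R = 57, as in Table 53.14
  }
}
```

The value R = 57 is the statistic used by Wilcoxon's signed rank test discussed later in this section.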
Example E53.9 (Example for Unpaired Data Samples).

Row    a    b    Rank ra    Rank rb

 1.    2          1.5
 2.         2                1.5
 3.    3          4.0
 4.    3          4.0
 5.    3          4.0
 6.    4          6.0
 7.    5          8.5
 8.         5                8.5
 9.         5                8.5
10.         5                8.5
11.         6               11.5
12.         6               11.5
13.         7               13.5
14.         7               13.5
      med(a) = 3   med(b) = 5.5   Ra = 28   Rb = 77
      na = 6       nb = 8

Table 53.15: Example for unpaired samples.

Unlike the data given in Example E53.8, Table 53.15 contains two unpaired samples a and
b consisting of na = 6 and nb = 8 values, respectively. We again provide the item index
in the leftmost column, followed by the elements of a and b in the next two columns. The
two columns to the right give the ranks of the elements in the merged sample.

The arithmetic mean of sample a is ā = 20/6 ≈ 3.33 and that of b is b̄ = 43/8 = 5.375.
The medians are med(a) = 3 and med(b) = 5.5. Again, it looks like sample a contains
the smaller values. Whether this is actually the case, or better, with which error probability
this assumption holds, we will have to check with tests.
Part VII

Implementation
Chapter 54
Introduction

In this book part, we try to provide straightforward implementations of some of the algo-
rithms discussed in this book. The latest versions of the sources of the following programs,
including documentation, are available at http://goa-taa.cvs.sourceforge.net/
viewvc/goa-taa/Book/implementation/java/sources.zip [accessed 2010-10-02]. This
URL directly retrieves the sources, binaries, and documentation from the version management
system in which the complete book is located (see page 4 for details).
In this part of the book, we will not provide much discussion. The theoretical aspects
of optimization and the algorithm structures have already been outlined at length in the
previous parts. Here, we will just give the code with some documentation (with clickable
links) which glues the implementation to these aforementioned theoretical discussions.
The implementations and program sources given here have been developed on the fly
during the course Practical Optimization Algorithm Design given by me at the School of Software
Engineering of the Suzhou Institute of Advanced Studies of the University of Science and
Technology of China (USTC) in the winter semester 2010. The goal of this course was
to solve optimization problems interactively together with my students. Some of the code
presented here is the result of the suggestions of my students and a discussion which I
tried to coarsely steer but not to control. I thus like to thank my students here for their
participation and good suggestions.

Programming Language     Java™
                         java version "1.6.0_21"
                         Java(TM) SE Runtime Environment (build 1.6.0_21-b07)
                         Java HotSpot(TM) Client VM (build 17.0-b17, mixed mode, sharing)
                         See http://java.sun.com/ [accessed 2010-09-30]
Developer Environment    Eclipse
                         Build id: 20090920-1017
                         See http://www.eclipse.org/ [accessed 2010-09-30]
Operating System         Microsoft® Windows Vista® Business
                         Version: 6.0 (Build 6002: Service Pack 2), 32 Bit Operating System
                         See http://windows.microsoft.com/ [accessed 2010-09-30]
Computer                 Toshiba Satego X200-21U
                         Intel® Core™ 2 Duo CPU T7500 @ 2.20GHz, 4GiB RAM
                         See http://www.toshiba.com/ [accessed 2010-09-30]

Table 54.1: The precise developer environment used for implementing and testing.

We use the Java programming language for the implementation, mainly because it is more
or less simple and because it is the language I am more or less fluent in. In Table 54.1, we
provide the exact description of the developer environment and testing machine used. This
may serve as a reference for verifying our implementation.
The code listed in this book is not complete. We merely provide those classes which
include the essential algorithms and skip most of the basic classes or helper classes. These
are available in the source code archive published together with this book. Here, they would
just waste space without contributing much to the understanding of the algorithms.
It is not clear how the Java language will develop, how it will be supported, or whether
the behavior of Java virtual machines will remain the same as in the past.
Hence, we cannot guarantee the functionality of our examples in newer or older versions
of Java or in different execution environments. All the code in the following sections
is put under the GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999),
which you can find in Chapter C on page 955.
Chapter 55
The Specification Package

In this package, we give interfaces providing the abstract specifications of the basic elements
of the optimization processes given in Part I. We separate the interface specifications of the
modules of optimization algorithms and the objective functions from their actual implementa-
tions, which will be provided in Chapter 56.
Also, we provide interfaces to modules used in some specific algorithm patterns, such as
the temperature schedule in Simulated Annealing (see Listing 55.12 and Chapter 27) and
selection in Evolutionary Algorithms (see Listing 55.13 and Chapter 28).

55.1 General Definitions


In this section, we discuss general interface definitions dealing with the basic abstractions
and structural elements of optimization. They are based on the definitions given in Part I.

Listing 55.1: The basic interface for everything involved in optimization.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.io.Serializable;
7
8 /**
9 * A simple basic interface common to all modules of optimization
10 * algorithms that we provide in this package. It ensures that all objects
11 * are serializable, i.e., can be written to an ObjectOutputStream and read
12 * from an ObjectInputStream, for instance. It furthermore provides some
13 * simple methods for converting an object to representative strings.
14 *
15 * @author Thomas Weise
16 */
17 public interface IOptimizationModule extends Serializable {
18
19 /**
20 * Get the name of the optimization module
21 *
22 * @param longVersion
23 * true if the long name should be returned, false if the short
24 * name should be returned
25 * @return the name of the optimization module
26 */
27 public abstract String getName(final boolean longVersion);
28
29 /**
30 * Get the full configuration which holds all the data necessary to
31 * describe this object.
32 *
33 * @param longVersion
34 * true if the long version should be returned, false if the
35 * short version should be returned
36 * @return the full configuration
37 */
38 public abstract String getConfiguration(final boolean longVersion);
39
40 /**
41 * Get the string representation of this object, i.e., the name and
42 * configuration.
43 *
44 * @param longVersion
45 * true if the long version should be returned, false if the
46 * short version should be returned
47 * @return the string version of this object
48 */
49 public abstract String toString(final boolean longVersion);
50 }

55.1.1 Search Operations

A nullary search operation takes no arguments and just creates one new, random genotype
g ∈ G.

Listing 55.2: The interface for all nullary search operations.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * This interface provides the method for a nullary search operation.
10 * Search operations are discussed in Section 4.2 on page 83.
11 *
12 * @param <G>
13 * the search space (genome, Section 4.1 on page 81)
14 * @author Thomas Weise
15 */
16 public interface INullarySearchOperation<G> extends IOptimizationModule {
17
18 /**
19 * This is the nullary search operation. It takes no argument from the
20 * search space and produces one new element in the search space. This new
21 * element is usually created randomly. For this reason, we pass a random
22 * number generator in as parameter so we can use the same random number
23 * generator in all parts of an optimization algorithm.
24 *
25 * @param r
26 * the random number generator
27 * @return a new genotype (see Definition D4.2 on page 82)
28 */
29 public abstract G create(final Random r);
30
31 }
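To make the contract above concrete, the following sketch shows a possible nullary operation for the search space G = double[n], drawing each gene uniformly from an interval. The class name and the bounds are illustrative assumptions and not part of the book's source code archive.

```java
import java.util.Random;

// A hypothetical nullary search operation for G = double[]: it creates
// one genotype with n genes drawn uniformly at random from [min, max).
class UniformVectorCreation {
  static double[] create(final int n, final double min, final double max,
      final Random r) {
    final double[] g = new double[n];
    for (int i = 0; i < n; i++) {
      g[i] = min + (r.nextDouble() * (max - min)); // uniform in [min, max)
    }
    return g;
  }
}
```

Passing the random number generator in as a parameter, as the interface requires, keeps runs reproducible when the optimizer seeds it once centrally.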

Unary search operations take one existing genotype g ∈ G and use it as a basis to create a
new, similar one g′ ∈ G.

Listing 55.3: The interface for all unary search operations.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * This interface provides the method for a unary search operation. Search
10 * operations are discussed in Section 4.2 on page 83.
11 *
12 * @param <G>
13 * the search space (genome, Section 4.1 on page 81)
14 * @author Thomas Weise
15 */
16 public interface IUnarySearchOperation<G> extends IOptimizationModule {
17
18 /**
19 * This is the unary search operation. It takes one existing genotype g
20 * (see Definition D4.2 on page 82) from the genome and produces one new
21 * element in the search space. This new element is usually a copy of the
22 * existing element g which is slightly modified in a random manner. For
23 * this purpose, we pass a random number generator in as parameter so we
24 * can use the same random number generator in all parts of an
25 * optimization algorithm.
26 *
27 * @param g
28 * the existing genotype in the search space from which a
29 * slightly modified copy should be created
30 * @param r
31 * the random number generator
32 * @return a new genotype
33 */
34 public abstract G mutate(final G g, final Random r);
35
36 }
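As an illustrative sketch (again not taken from the book's sources), a unary operation for real vectors could copy its input and perturb one randomly chosen gene with Gaussian noise; the step size sigma is an assumed parameter.

```java
import java.util.Random;

// A hypothetical unary search operation for G = double[]: it returns a
// copy of the parent genotype in which one randomly chosen gene is
// perturbed by normally distributed noise of standard deviation sigma.
class SingleGeneGaussianMutation {
  static double[] mutate(final double[] g, final double sigma,
      final Random r) {
    final double[] h = g.clone();      // never modify the parent itself
    final int i = r.nextInt(h.length); // pick the gene to change
    h[i] += sigma * r.nextGaussian();
    return h;
  }
}
```

Copying before modifying matters: the same parent may be handed to several operations during one generation.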

Binary search operations take two existing genotypes gp1, gp2 ∈ G and use them as a basis to
create a new genotype g′ ∈ G which, hopefully, unites some of the positive features of its
“parents”.
Listing 55.4: The interface for all binary search operations.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * A binary search operation (see Definition D4.6 on page 83) such
10 * as the recombination operation in Evolutionary Algorithms (see
11 * Definition D28.14 on page 307) and its specific string-based version
12 * used in Genetic Algorithms (see
13 * Section 29.3.4 on page 335).
14 *
15 * @param <G>
16 * the search space (genome, Section 4.1 on page 81)
17 * @author Thomas Weise
18 */
19 public interface IBinarySearchOperation<G> extends IOptimizationModule {
20
21 /**
22 * This is the binary search operation. It takes two existing genotypes
23 * p1 and p2 (see Definition D4.2 on page 82) from the genome and produces
24 * one new element in the search space. There are two basic assumptions
25 * about this operator: 1) Its input elements are good because they have
26 * previously been selected. 2) It is somehow possible to combine these
27 * good traits and hence, to obtain a single individual which unites them
28 * and thus, has even better overall qualities than its parents. The
29 * original underlying idea of this operation is the
30 * "Building Block Hypothesis" (see
31 * Section 29.5.5 on page 346) for which, so far, not much
32 * evidence has been found. The hypothesis
33 * "Genetic Repair and Extraction" (see Section 29.5.6 on page 346)
34 * has been developed as an alternative to explain the positive aspects
35 * of binary search operations such as recombination.
36 *
37 * @param p1
38 * the first "parent" genotype
39 * @param p2
40 * the second "parent" genotype
41 * @param r
42 * the random number generator
43 * @return a new genotype
44 */
45 public abstract G recombine(final G p1, final G p2, final Random r);
46
47 }
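A minimal sketch of such an operator for real vectors is uniform crossover, where each gene of the child is copied from one of the two parents chosen by a fair coin flip. The class name is an illustrative assumption.

```java
import java.util.Random;

// A hypothetical binary search operation for G = double[]: uniform
// crossover copies each gene from one of the two parents, selected
// independently and with equal probability per position.
class UniformCrossover {
  static double[] recombine(final double[] p1, final double[] p2,
      final Random r) {
    final double[] c = new double[p1.length];
    for (int i = 0; i < c.length; i++) {
      c[i] = r.nextBoolean() ? p1[i] : p2[i];
    }
    return c;
  }
}
```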

Ternary search operations take three existing genotypes gp1, gp2, gp3 ∈ G and use them as a
basis to create a new genotype g′ ∈ G.

Listing 55.5: The interface for all ternary search operations.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * A ternary search operation (see Definition D4.6 on page 83) such
10 * as the recombination operation in Differential Evolution (see
11 * Section 33.2 on page 419).
12 *
13 * @param <G>
14 * the search space (genome, Section 4.1 on page 81)
15 * @author Thomas Weise
16 */
17 public interface ITernarySearchOperation<G> extends IOptimizationModule {
18
19 /**
20 * This is the ternary search operation. It takes three existing
21 * genotypes p1, p2, and p3 (see Definition D4.2 on page 82) from the
22 * genome and produces one new element in the search space.
23 *
24 * @param p1
25 * the first "parent" genotype
26 * @param p2
27 * the second "parent" genotype
28 * @param p3
29 * the third parent genotype
30 * @param r
31 * the random number generator
32 * @return a new genotype
33 */
34 public abstract G ternaryRecombine(final G p1, final G p2, final G p3,
35 final Random r);
36
37 }
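The classical use of a ternary operator is the mutant-vector construction in Differential Evolution, where a scaled difference of two genotypes is added to a third. The sketch below assumes a fixed differential weight F and equal-length vectors; it illustrates the idea rather than reproducing the book's DE implementation.

```java
// A sketch of the Differential Evolution style ternary recombination:
// c = p1 + F * (p2 - p3), with F an assumed, fixed differential weight.
class DifferentialRecombination {
  static double[] ternaryRecombine(final double[] p1, final double[] p2,
      final double[] p3, final double F) {
    final double[] c = new double[p1.length];
    for (int i = 0; i < c.length; i++) {
      c[i] = p1[i] + (F * (p2[i] - p3[i])); // add the scaled difference
    }
    return c;
  }
}
```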

N-ary search operations take n existing genotypes gp1, gp2, ..., gpn ∈ G and use them as a basis
to create a new genotype g′ ∈ G which again, hopefully, unites some of the positive features
of its “parents”.
Listing 55.6: The interface for all n-ary search operations.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * This interface provides the method for an n-ary search operation. Search
10 * operations are discussed in Section 4.2 on page 83.
11 *
12 * @param <G>
13 * the search space (genome, Section 4.1 on page 81)
14 * @author Thomas Weise
15 */
16 public interface INarySearchOperation<G> extends IOptimizationModule {
17
18 /**
19 * This is an n-ary search operation. It takes n existing genotypes gi
20 * (see Definition D4.2 on page 82) from the genome and produces one new
21 * element in the search space.
22 *
23 * @param gs
24 * the existing genotypes in the search space which will be
25 * combined to a new one
26 * @param r
27 * the random number generator
28 * @return a new genotype
29 */
30 public abstract G combine(final G[] gs, final Random r);
31
32 }
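One simple n-ary operator for real vectors, sketched below under the assumption of equal-length parents, is intermediate recombination: the child is the arithmetic mean of all parents.

```java
// A hypothetical n-ary search operation for G = double[]: intermediate
// recombination returns the component-wise arithmetic mean of all
// parent vectors, i.e., the centroid of the parents.
class IntermediateRecombination {
  static double[] combine(final double[][] gs) {
    final double[] c = new double[gs[0].length];
    for (final double[] g : gs) {
      for (int i = 0; i < c.length; i++) {
        c[i] += g[i] / gs.length; // accumulate each parent's share
      }
    }
    return c;
  }
}
```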

55.1.2 Genotype-Phenotype Mapping

The genotype-phenotype mapping gpm : G ↦ X translates one genotype g ∈ G to a candidate
solution x ∈ X.

Listing 55.7: The interface common to all genotype-phenotype mappings.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * This interface provides the method for genotype-phenotype mappings as
10 * discussed in Section 4.3 on page 85.
11 *
12 * @param <G>
13 * the search space (genome, Section 4.1 on page 81)
14 * @param <X>
15 * the problem space (phenome, Section 2.1 on page 43)
16 * @author Thomas Weise
17 */
18 public interface IGPM<G, X> extends IOptimizationModule {
19
20 /**
21 * This function carries out the genotype-phenotype mapping as defined in
22 * Definition D4.11 on page 86. In other words, it translates one genotype (an
23 * element in the search space) to one element in the problem space,
24 * i.e., a phenotype.
25 *
26 * @param g
27 * the genotype ( Definition D4.2 on page 82)
28 * @param r
29 * a randomizer
30 * @return the phenotype (see Definition D2.2 on page 43)
31 * corresponding to the genotype g
32 */
33 public abstract X gpm(final G g, final Random r);
34
35 }
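A simple concrete mapping, given here as an illustrative sketch, translates a bit-string genotype G = boolean[] into an integer phenotype by reading the bits as an unsigned binary number, most significant bit first.

```java
// A hypothetical genotype-phenotype mapping from G = boolean[] to
// X = Integer: the bit string is interpreted as an unsigned binary
// number with the most significant bit first.
class BinaryToIntegerGPM {
  static int gpm(final boolean[] g) {
    int x = 0;
    for (final boolean bit : g) {
      x = (x << 1) | (bit ? 1 : 0); // shift in one bit per gene
    }
    return x;
  }
}
```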

55.1.3 Objective Function

An objective function f : X ↦ R rates a candidate solution x ∈ X by mapping it to a utility value.

Listing 55.8: The interface for objective functions.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
9 * In Section 2.2 on page 44, we introduce the concept of
10 * objective functions. An objective function is an optimization criterion
11 * which rates one candidate solution (see
12 * Definition D2.2 on page 43) from the problem space according to
13 * its utility. According to Section 6.3.4 on page 106, smaller
14 * objective values indicate better fitness, i.e., higher utility.
15 *
16 * @param <X>
17 * the problem space (phenome, Section 2.1 on page 43)
18 * @author Thomas Weise
19 */
20 public interface IObjectiveFunction<X> extends IOptimizationModule {
21
22 /**
23 * Compute the objective value, i.e., determine the utility of the
24 * solution candidate x as specified in
25 * Definition D2.3 on page 44.
26 *
27 * @param x
28 * the phenotype to be rated
29 * @param r
30 * a randomizer
31 * @return the objective value of x, the lower the better (see
32 * Section 6.3.4 on page 106)
33 */
34 public abstract double compute(final X x, final Random r);
35
36 }
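As a minimal example of such a criterion, the well-known sphere function over X = double[] sums the squares of the components; smaller values are better and the global optimum 0 lies at the origin. This sketch omits the interface plumbing and the randomizer parameter.

```java
// The sphere function as a deterministic objective f : double[] -> R.
// Smaller objective values indicate better candidate solutions.
class SphereObjective {
  static double compute(final double[] x) {
    double s = 0.0;
    for (final double v : x) {
      s += v * v; // sum of squares
    }
    return s;
  }
}
```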

55.1.4 Termination Criterion

Termination criteria are used to determine when the optimization process should finish.

Listing 55.9: The interface for termination criteria.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 /**
7 * The termination criterion is used to tell the optimization process when
8 * to stop. It is defined in Section 6.3.3 on page 105.
9 *
10 * @author Thomas Weise
11 */
12 public interface ITerminationCriterion extends IOptimizationModule {
13
14 /**
15 * The function terminationCriterion, as stated in
16 * Definition D6.7 on page 105, returns a Boolean value which
17 * is true if the optimization process should terminate and false
18 * otherwise.
19 *
20 * @return true if the optimization process should terminate, false if it
21 * can continue
22 */
23 public abstract boolean terminationCriterion();
24
25 /**
26 * Reset the termination criterion to make it usable more than once
27 */
28 public abstract void reset();
29 }
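A typical criterion grants the optimizer a fixed budget of steps. The sketch below (class name and budget semantics are our assumptions) returns true once the budget is exhausted, and reset() re-arms it for another run.

```java
// A hypothetical termination criterion which permits a fixed number of
// checks and then signals the optimization process to stop.
class MaxStepsCriterion {
  private final int max; // the budget of allowed checks
  private int remaining;

  MaxStepsCriterion(final int max) {
    this.max = max;
    this.remaining = max;
  }

  boolean terminationCriterion() {
    return (--this.remaining) < 0; // true once the budget is used up
  }

  void reset() {
    this.remaining = this.max; // make the criterion usable again
  }
}
```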

55.1.5 Optimization Algorithm

Here we provide a simple interface which unites the functionality of all single-objective
optimization algorithms discussed in this book.

Listing 55.10: An interface to all single-objective optimization algorithms.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.List;
7 import java.util.Random;
8 import java.util.concurrent.Callable;
9
10 import org.goataa.impl.utils.Individual;
11
12 /**
13 * A simple and general interface which enables us to provide the
14 * functionality of a single-objective optimization algorithm (see
15 * Section 1.3 on page 31 and
16 * Definition D6.3) in a unified way.
17 *
18 * @param <G>
19 * the search space (genome, Section 4.1 on page 81)
20 * @param <X>
21 * the problem space (phenome, Section 2.1 on page 43)
22 * @param <IT>
23 * the individual type
24 * @author Thomas Weise
25 */
26 public interface ISOOptimizationAlgorithm<G, X, IT extends Individual<G, X>>
27 extends IOptimizationModule, Callable<List<IT>>, Runnable {
28
29 /**
30 * Invoke the optimization process. This method calls the optimizer and
31 * returns the list of best individuals (see Definition D4.18 on page 90)
32 * found. Usually, only a single individual will be returned.
33 *
34 * @return computed result
35 */
36 public abstract List<IT> call();
37
38 /**
39 * Invoke the optimization process. This method calls the optimizer and
40 * returns the list of best individuals (see Definition D4.18 on page 90)
41 * found. Usually, only a single individual will be returned. Different
42 * from the parameterless call method, here a randomizer and a
43 * termination criterion are directly passed in. Also, a list to fill in
44 * the optimization results is provided. This allows recursively using
45 * the optimization algorithms.
46 *
47 * @param r
48 * the randomizer (will be used directly without setting the
49 * seed)
50 * @param term
51 * the termination criterion (will be used directly without
52 * resetting)
53 * @param result
54 * a list to which the results are to be appended
55 */
56 public abstract void call(final Random r,
57 final ITerminationCriterion term, final List<IT> result);
58
59 /**
60 * Set the maximum number of solutions which can be returned by the
61 * algorithm
62 *
63 * @param max
64 * the maximum number of solutions
65 */
66 public void setMaxSolutions(final int max);
67
68 /**
69 * Get the maximum number of solutions which can be returned by the
70 * algorithm
71 *
72 * @return the maximum number of solutions
73 */
74 public int getMaxSolutions();
75
76 /**
77 * Set the genotype-phenotype mapping (GPM, see
78 * Section 4.3 on page 85 and
79 * Listing 55.7 on page 711).
80 *
81 * @param gpm
82 * the GPM to use during the optimization process
83 */
84 public abstract void setGPM(final IGPM<G, X> gpm);
85
86 /**
87 * Get the genotype-phenotype mapping (GPM, see
88 * Section 4.3 on page 85 and
89 * Listing 55.7 on page 711).
90 *
91 * @return gpm the GPM which is used during the optimization process
92 */
93 public abstract IGPM<G, X> getGPM();
94
95 /**
96 * Set the termination criterion (see
97 * Section 6.3.3 on page 105 and
98 * Listing 55.9 on page 712).
99 *
100 * @param term
101 * the termination criterion
102 */
103 public abstract void setTerminationCriterion(
104 final ITerminationCriterion term);
105
106 /**
107 * Get the termination criterion (see
108 * Section 6.3.3 on page 105 and
109 * Listing 55.9 on page 712).
110 *
111 * @return the termination criterion
112 */
113 public abstract ITerminationCriterion getTerminationCriterion();
114
115 /**
116 * Set the objective function (see Section 2.2 on page 44
117 * and Listing 55.8 on page 712) to be used to
118 * guide the optimization process
119 *
120 * @param f
121 * the objective function
122 */
123 public abstract void setObjectiveFunction(final IObjectiveFunction<X> f);
124
125 /**
126 * Get the objective function (see Section 2.2 on page 44
127 * and Listing 55.8 on page 712) which is used
128 * to guide the optimization process
129 *
130 * @return the objective function
131 */
132 public abstract IObjectiveFunction<X> getObjectiveFunction();
133
134 /**
135 * Set the seed for the internal random number generator
136 *
137 * @param seed
138 * the seed for the random number generator
139 */
140 public abstract void setRandSeed(final long seed);
141
142 /**
143 * Get the current seed for the internal random number generator
144 *
145 * @return the current seed for the random number generator
146 */
147 public abstract long getRandSeed();
148
149 /**
150 * Set that a randomly selected seed should be used for the random number
151 * generator every time. If you set a seed with setRandSeed, the
152 * effect of this method will be nullified. Vice versa, this method
153 * nullifies the effect of a previous random seed setting via
154 * setRandSeed.
155 */
156 public abstract void useRandomRandSeed();
157
158 /**
159 * Are we currently using a random rand seed for each run of the
160 * optimizer?
161 *
162 * @return true if we are currently using a random rand seed for each run
163 * of the optimizer, false otherwise
164 */
165 public abstract boolean usesRandomRandSeed();
166
167 }

Listing 55.11: An interface to all multi-objective optimization algorithms.


1 // Copyright (c) 2011 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import org.goataa.impl.utils.MOIndividual;
7
8 /**
9 * A simple and general interface which enables us to provide the
10 * functionality of a multi-objective optimization algorithm (see
11 * Section 1.3 on page 31 and
12 * Definition D6.3) in a unified way.
13 *
14 * @param <G>
15 * the search space (genome, Section 4.1 on page 81)
16 * @param <X>
17 * the problem space (phenome, Section 2.1 on page 43)
18 * @author Thomas Weise
19 */
20 public interface IMOOptimizationAlgorithm<G, X> extends
21 ISOOptimizationAlgorithm<G, X, MOIndividual<G, X>> {
22
23 /**
24 * Get the individual comparator
25 *
26 * @return the individual comparator
27 */
28 public abstract IIndividualComparator getComparator();
29
30 /**
31 * Set the individual comparator
32 *
33 * @param pcmp
34 * the individual comparator
35 */
36 public abstract void setComparator(final IIndividualComparator pcmp);
37
38 /**
39 * Get the objective functions
40 *
41 * @return the objective functions
42 */
43 public abstract IObjectiveFunction<X>[] getObjectiveFunctions();
44
45 /**
46 * Set the objective functions
47 *
48 * @param o
49 * the objective functions
50 */
51 public abstract void setObjectiveFunctions(
52 final IObjectiveFunction<X>[] o);
53
54 }

55.2 Algorithm Specific Interfaces

The temperature schedule is used in Simulated Annealing to determine the current temperature
and hence the acceptance probability for inferior candidate solutions.
Listing 55.12: The interface for temperature schedules.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 /**
7 * The interface ITemperatureSchedule allows us to implement the different
8 * temperature scheduling methods for Simulated Annealing (see
9 * Chapter 27) as listed in
10 * Section 27.3.
11 *
12 * @author Thomas Weise
13 */
14 public interface ITemperatureSchedule extends IOptimizationModule {
15
16 /**
17 * Get the temperature to be used in the iteration t. This method
18 * implements the getTemperature function specified in
19 * Equation 27.13 on page 247.
20 *
21 * @param t
22 * the iteration
23 * @return the temperature to be used for determining whether a worse
24 * solution should be accepted or not
25 */
26 public abstract double getTemperature(final int t);
27
28 }
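One common schedule (the variants actually used in the book are listed in Section 27.3) is geometric cooling, where the temperature shrinks by a constant factor per iteration. The start temperature T0 and the cooling factor alpha below are assumed parameters.

```java
// A sketch of a geometric cooling schedule for Simulated Annealing:
// T(t) = T0 * alpha^t, with start temperature T0 and 0 < alpha < 1.
// The temperature decays towards 0, so ever fewer inferior solutions
// are accepted as the optimization process progresses.
class GeometricSchedule {
  static double getTemperature(final double T0, final double alpha,
      final int t) {
    return T0 * Math.pow(alpha, t);
  }
}
```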

55.2.1 Evolutionary Algorithms

In Evolutionary Algorithms, the selection step determines which individuals should become
the parents of the individuals in the next generation.

Listing 55.13: The interface for selection algorithms.


1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.spec;
10
11 import java.util.Random;
12
13 import org.goataa.impl.utils.Individual;
14
15 /**
16 * A selection algorithm (see Section 28.4 on page 285) receives a
17 * population of individuals (see Definition D4.18 on page 90) as input and
18 * chooses some individuals amongst them to fill a mating pool. Selection
19 * is an essential step in Evolutionary Algorithms (EAs) which you can find
20 * introduced in Chapter 28 on page 253. In each
21 * generation of an EA, a set of new individuals are created and evaluated.
22 * All the individuals which are currently processed form the population
23 * (see Definition D4.19 on page 90). Some of these individuals will
24 * further be investigated, i.e., processed by the search operations. The
25 * selection step chooses which of the individuals should be chosen for
26 * this purpose. Usually, this decision is randomized and better
27 * individuals are selected with higher probability whereas worse
28 * individuals are chosen with lower probability.
29 *
30 * @author Thomas Weise
31 */
32 public interface ISelectionAlgorithm extends IOptimizationModule {
33
34 /**
35 * Perform the selection step (see Section 28.4 on page 285), i.e.,
36 * fill the mating pool mate with individuals chosen from the population
37 * pop.
38 *
39 * @param pop
40 * the population (see Definition D4.19 on page 90)
41 * @param popStart
42 * the index of the first individual in the population
43 * @param popCount
44 * the number of individuals to select from
45 * @param mate
46 * the mating pool (see Definition D28.10 on page 285)
47 * @param mateStart
48 * the first index to which the selected individuals should be
49 * copied
50 * @param mateCount
51 * the number of individuals to be selected
52 * @param r
53 * the random number generator
54 */
55 public abstract void select(final Individual<?, ?>[] pop,
56 final int popStart, final int popCount,
57 final Individual<?, ?>[] mate, final int mateStart,
58 final int mateCount, final Random r);
59
60 }
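The idea behind such randomized, fitness-biased selection can be sketched with binary tournament selection. To stay self-contained, the example below works on a plain array of objective values (smaller is better) and returns selected indices instead of filling a pool of Individual objects.

```java
import java.util.Random;

// A sketch of binary tournament selection: each slot of the mating
// pool receives the index of the better of two individuals picked
// uniformly at random, so better individuals win more often without
// ever being guaranteed to be chosen.
class BinaryTournamentSelection {
  static int[] select(final double[] f, final int mateCount,
      final Random r) {
    final int[] mate = new int[mateCount];
    for (int i = 0; i < mateCount; i++) {
      final int a = r.nextInt(f.length);
      final int b = r.nextInt(f.length);
      mate[i] = (f[a] <= f[b]) ? a : b; // lower objective value wins
    }
    return mate;
  }
}
```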

Listing 55.14: The interface for fitness assignment processes.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 import org.goataa.impl.utils.MOIndividual;
9
10 /**
11 * A fitness assignment process translates the (vectors of) objective
12 * values used in multi-objective optimization
13 * ( Section 3.3 on page 55) of a set of individuals to
14 * single fitness values as specified in
15 * Definition D28.6 on page 274.
16 *
17 * @author Thomas Weise
18 */
19 public interface IFitnessAssignmentProcess extends IOptimizationModule {
20
21 /**
22 * Assign fitness to the individuals of a population based on their
23 * objective values as specified in
24 * Definition D28.6 on page 274.
25 *
26 * @param pop
27 * the population of individuals
28 * @param start
29 * the index of the first individual in the population
30 * @param count
31 * the number of individuals in the population
32 * @param cmp
33 * the individual comparator which tells which individual is
34 * better than another one
35 * @param r
36 * the randomizer
37 */
38 public abstract void assignFitness(final MOIndividual<?, ?>[] pop,
39 final int start, final int count, final IIndividualComparator cmp,
40 final Random r);
41
42 }
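As a toy illustration of such a process, working directly on objective-value vectors rather than on MOIndividual objects, the domination count below assigns each individual the number of others that Pareto-dominate it; lower fitness is better, and non-dominated individuals receive fitness 0.

```java
// A sketch of a Pareto-based fitness assignment: fit[i] counts how
// many individuals dominate obj[i], i.e., are no worse in every
// objective and strictly better in at least one (minimization).
class DominationCountFitness {
  static int[] assignFitness(final double[][] obj) {
    final int[] fit = new int[obj.length];
    for (int i = 0; i < obj.length; i++) {
      for (int j = 0; j < obj.length; j++) {
        if ((i != j) && dominates(obj[j], obj[i])) {
          fit[i]++;
        }
      }
    }
    return fit;
  }

  static boolean dominates(final double[] a, final double[] b) {
    boolean strictlyBetter = false;
    for (int k = 0; k < a.length; k++) {
      if (a[k] > b[k]) {
        return false; // worse in one objective: no domination
      }
      if (a[k] < b[k]) {
        strictlyBetter = true;
      }
    }
    return strictlyBetter;
  }
}
```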

Listing 55.15: The interface for comparator functions.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Comparator;
7
8 import org.goataa.impl.utils.MOIndividual;
9
10 /**
11 * A comparator interface for multi-objective individuals. During the
12 * fitness assignment processes in multi-objective optimization, we can use
13 * such a comparator to check which individuals are better and which are
14 * worst. Based on the comparison results, the fitness values can be
15 * computed which are then used during a subsequent selection process. This
16 * interface provides the functionality specified in
17 * Definition D3.18 on page 77 in
18 * Section 3.5.2.
19 *
20 * @author Thomas Weise
21 */
22 public interface IIndividualComparator extends
23 Comparator<MOIndividual<?, ?>>, IOptimizationModule {
24
25 /**
26 * Compare two individuals with each other usually by using their
27 * objective values as defined in Definition D3.18 on page 77.
28 * Warning: The fitness values (a.v, b.v) must not be used here since
29 * they are usually computed by using the comparators during the fitness
30 * assignment process.
31 *
32 * @param a
33 * the first individual
34 * @param b
35 * the second individual
36 * @return -1 if a is better than b, 1 if b is better than a, 0 if
37 * neither of them is better
38 */
39 public abstract int compare(final MOIndividual<?, ?> a,
40 final MOIndividual<?, ?> b);
41 }

55.2.2 Estimation Of Distribution Algorithms

Listing 55.16: The interface for model building and updating.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 import org.goataa.impl.utils.Individual;
9
10 /**
11 * Here we give a basic interface to the model building algorithms used in
12 * EDAs as introduced in Section 34.3 on page 431.
13 *
14 * @param <M>
15 * the model type
16 * @param <G>
17 * the search space (genome, Section 4.1 on page 81)
18 * @author Thomas Weise
19 */
20 public interface IModelBuilder<M, G> extends IOptimizationModule {
21
22 /**
23 * Build a model from a set of selected points in the search space and an
24 * old model as introduced in Section 34.3 on page 431.
25 *
26 * @param mate
27 * the mating pool of selected individuals
28 * @param start
29 * the index of the first individual in the mating pool
30 * @param count
31 * the number of individuals in the mating pool
32 * @param oldModel
33 * the old model
34 * @param r
35 * a random number generator
36 * @return the new model
37 */
38 public abstract M buildModel(final Individual<G, ?>[] mate,
39 final int start, final int count, final M oldModel, final Random r);
40
41 }

Listing 55.17: The interface for sampling points from a model.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.spec;
5
6 import java.util.Random;
7
8 /**
 9 * Here we give a basic interface for sampling models, which is a basic
10 * functionality of EDAs (see Section 34.3 on page 431).
11 *
12 * @param <M>
13 * the model type
14 * @param <G>
15 * the search space (genome, Section 4.1 on page 81)
16 * @author Thomas Weise
17 */
18 public interface IModelSampler<M, G> extends IOptimizationModule {
19
20 /**
21 * Sample one point from the model as required in Estimation of
22 * Distribution algorithms introduced in
23 * Chapter 34 on page 427.
24 *
25 * @param model
26 * the model
27 * @param r
28 * a random number generator
29 * @return a new genotype sampled from the model
30 */
31 public abstract G sampleModel(final M model, final Random r);
32
33 }
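The two interfaces fit together as sketched below for the simplest model over bit strings, in the spirit of univariate EDAs such as UMDA: the model is the per-position frequency of 1-bits in the mating pool, and sampling sets each bit to 1 with exactly that frequency. The class and method layout is an illustrative assumption, not the book's implementation.

```java
import java.util.Random;

// A sketch of a univariate bit-string model for an EDA: building
// estimates one independent Bernoulli probability per gene position,
// and sampling draws each bit according to that probability.
class UnivariateBitModel {
  static double[] buildModel(final boolean[][] mate) {
    final double[] m = new double[mate[0].length];
    for (final boolean[] g : mate) {
      for (int i = 0; i < m.length; i++) {
        if (g[i]) {
          m[i] += 1.0 / mate.length; // frequency of 1s at position i
        }
      }
    }
    return m;
  }

  static boolean[] sampleModel(final double[] m, final Random r) {
    final boolean[] g = new boolean[m.length];
    for (int i = 0; i < g.length; i++) {
      g[i] = r.nextDouble() < m[i]; // bit i is 1 with probability m[i]
    }
    return g;
  }
}
```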
Chapter 56
The Implementation Package

In this package, we implement the interfaces given in the package org.goataa.spec and discussed
in Chapter 55 on page 707.

Listing 56.1: The base class for everything involved in optimization.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl;
5
6 import org.goataa.spec.IOptimizationModule;
7
8 /**
9 * The base class for all objects involved in this implementation of
10 * optimization. This class implements the interface given in
11 * Listing 55.1 on page 707.
12 *
13 * @author Thomas Weise
14 */
15 public class OptimizationModule implements IOptimizationModule {
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /** Instantiate */
20 protected OptimizationModule() {
21 super();
22 }
23
24 /**
25 * Get the name of the optimization module
26 *
27 * @param longVersion
28 * true if the long name should be returned, false if the short
29 * name should be returned
30 * @return the name of the optimization module
31 */
32 public String getName(final boolean longVersion) {
33 return this.getClass().getSimpleName();
34 }
35
36 /**
37 * Get the full configuration which holds all the data necessary to
38 * describe this object.
39 *
40 * @param longVersion
41 * true if the long version should be returned, false if the
42 * short version should be returned
43 * @return the full configuration
44 */
45 public String getConfiguration(final boolean longVersion) {
46 return ""; //$NON-NLS-1$
47 }
48
49 /**
50 * Get the string representation of this object, i.e., the name and
51 * configuration.
52 *
53 * @param longVersion
54 * true if the long version should be returned, false if the
55 * short version should be returned
56 * @return the string version of this object
57 */
58 public String toString(final boolean longVersion) {
59 String s, t;
60 int i, j;
61 char[] ch;
62
63 s = this.getName(longVersion);
64 t = this.getConfiguration(longVersion);
65
66 if ((t == null) || ((i = t.length()) <= 0)) {
67 return s;
68 }
69
70 j = s.length();
71 ch = new char[i + j + 2];
72 s.getChars(0, j, ch, 0);
73 ch[j] = '(';
74 t.getChars(0, i, ch, ++j);
75 ch[i + j] = ')';
76 return String.valueOf(ch);
77 }
78
79 /**
80 * Obtain the string representation of this object.
81 *
82 * @return the string representation of this object.
83 */
84 @Override
85 public final String toString() {
86 return this.toString(false);
87 }
88 }

56.1 The Optimization Algorithms


In this package, we provide the implementations of the optimization algorithms discussed in
Part III and Part IV according to the specifications given in Chapter 55. Before we do that,
we first provide the baseline search patterns from Chapter 8 for the sake of completeness
and for comparison.

56.1.1 Algorithm Base Classes

With this class, we provide a default implementation of the interface for single-objective
optimization methods given in Listing 55.10 on page 713.

Listing 56.2: A base class for all single-objective optimization methods.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms;
5
6 import java.util.ArrayList;
7 import java.util.List;
8 import java.util.Random;
9
10 import org.goataa.impl.OptimizationModule;
11 import org.goataa.impl.gpms.IdentityMapping;
12 import org.goataa.impl.termination.TerminationCriterion;
13 import org.goataa.impl.utils.Individual;
14 import org.goataa.spec.IGPM;
15 import org.goataa.spec.IObjectiveFunction;
16 import org.goataa.spec.IOptimizationModule;
17 import org.goataa.spec.ISOOptimizationAlgorithm;
18 import org.goataa.spec.ITerminationCriterion;
19
20 /**
21 * The default base class for implementations of the optimization algorithm
22 * interface given in
23 * Listing 55.10 on page 713
24 *
25 * @param <G>
26 * the search space (genome, Section 4.1 on page 81)
27 * @param <X>
28 * the problem space (phenome, Section 2.1 on page 43)
29 * @param <IT>
30 * the individual type
31 * @author Thomas Weise
32 */
33 public class SOOptimizationAlgorithm<G, X, IT extends Individual<G, X>>
34 extends OptimizationModule implements
35 ISOOptimizationAlgorithm<G, X, IT> {
36 /** a constant required by Java serialization */
37 private static final long serialVersionUID = 1;
38
39 /** the seed generator */
40   private static final Random SEED = new Random(0L);
41
42 /** the genotype-phenotype mapping */
43 private IGPM<G, X> gpm;
44
45 /** the termination criterion */
46 private ITerminationCriterion tc;
47
48 /** the objective function */
49 private IObjectiveFunction<X> f;
50
51 /** the internal randomizer */
52 private final Random random;
53
54 /** the seed */
55 private long seed;
56
57 /** should we use a random seed ? */
58 private boolean randomSeed;
59
60 /** the maximum number of solutions */
61 private int maxs;
62
63 /** Create a new optimization algorithm */
64 @SuppressWarnings("unchecked")
65 protected SOOptimizationAlgorithm() {
66 super();
67 this.random = new Random();
68 this.gpm = ((IGPM) (IdentityMapping.IDENTITY_MAPPING));
69 this.tc = TerminationCriterion.NEVER_TERMINATE;
70 this.randomSeed = true;
71 this.seed = SEED.nextLong();
72 this.maxs = 1;
73 }
74
75 /** {@inheritDoc} */
76 @Override
77 public List<IT> call() {
78 final List<IT> l;
79 final ITerminationCriterion c;
80 final Random r;
81
82 l = new ArrayList<IT>();
83
84 c = this.getTerminationCriterion();
85 c.reset();
86
87 r = this.random;
88 if (this.randomSeed) {
89 this.seed = SEED.nextLong();
90 }
91 r.setSeed(this.seed);
92
93 this.call(r, c, l);
94
95 return l;
96 }
97
98 /** {@inheritDoc} */
99 @Override
100 public void call(final Random r, final ITerminationCriterion term,
101 final List<IT> result) {
102 result.clear();
103 }
104
105 /** {@inheritDoc} */
106 @Override
107 public final void setGPM(final IGPM<G, X> agpm) {
108 this.gpm = agpm;
109 }
110
111 /** {@inheritDoc} */
112 @Override
113 public final IGPM<G, X> getGPM() {
114 return this.gpm;
115 }
116
117 /** {@inheritDoc} */
118 @Override
119 public void setTerminationCriterion(final ITerminationCriterion aterm) {
120 this.tc = ((aterm != null) ? aterm
121 : TerminationCriterion.NEVER_TERMINATE);
122 }
123
124 /** {@inheritDoc} */
125 @Override
126 public ITerminationCriterion getTerminationCriterion() {
127 return this.tc;
128 }
129
130 /** {@inheritDoc} */
131 @Override
132 public void setObjectiveFunction(final IObjectiveFunction<X> af) {
133 this.f = af;
134 }
135
136 /** {@inheritDoc} */
137 @Override
138 public final IObjectiveFunction<X> getObjectiveFunction() {
139 return this.f;
140 }
141
142 /** {@inheritDoc} */
143 @Override
144 public final void setRandSeed(final long aseed) {
145 this.randomSeed = false;
146 this.seed = aseed;
147 }
148
149 /** {@inheritDoc} */
150 @Override
151 public final long getRandSeed() {
152 return this.seed;
153 }
154
155 /** {@inheritDoc} */
156 @Override
157 public final void useRandomRandSeed() {
158 this.randomSeed = true;
159 }
160
161 /** {@inheritDoc} */
162 @Override
163 public final boolean usesRandomRandSeed() {
164 return this.randomSeed;
165 }
166
167 /** {@inheritDoc} */
168 @Override
169 public void setMaxSolutions(final int max) {
170 this.maxs = ((max > 1) ? max : 1);
171 }
172
173 /** {@inheritDoc} */
174 @Override
175 public int getMaxSolutions() {
176 return this.maxs;
177 }
178
179 /** {@inheritDoc} */
180 @Override
181 @SuppressWarnings("unchecked")
182 public String getConfiguration(final boolean longVersion) {
183 StringBuilder sb;
184 IOptimizationModule o;
185
186 sb = new StringBuilder();
187
188 o = this.getGPM();
189 if (o != null) {
190 if (longVersion || (!(o instanceof IdentityMapping))) {
191         sb.append("gpm="); //$NON-NLS-1$
192 sb.append(o.toString(longVersion));
193 }
194 }
195
196 o = this.getObjectiveFunction();
197 if (o != null) {
198 if (sb.length() > 0) {
199         sb.append(',');
200       }
201       sb.append("f=");//$NON-NLS-1$
202 sb.append(o.toString(longVersion));
203 }
204
205 o = this.getTerminationCriterion();
206 if (o != null) {
207 if (sb.length() > 0) {
208         sb.append(',');
209       }
210       sb.append(longVersion ? "terminationCriterion=" : //$NON-NLS-1$
211           "tc=");//$NON-NLS-1$
212 sb.append(o.toString(longVersion));
213 }
214
215 if (sb.length() > 0) {
216       sb.append(',');
217 }
218     if (this.usesRandomRandSeed()) {
219       sb.append(longVersion ? "randomSeed" : "r");//$NON-NLS-1$//$NON-NLS-2$
220     } else {
221       sb.append(longVersion ? "fixedRandomSeed=" : //$NON-NLS-1$
222           "frs=");//$NON-NLS-1$
223       sb.append(this.getRandSeed());
224     }
225
226 return sb.toString();
227 }
228
229 /** A simple wrapper to make this algorithm Runnable */
230 public void run() {
231 this.call();
232 }
233 }
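The seed handling in this base class is worth a closer look: with a fixed seed, runs become exactly repeatable, while the static SEED generator hands out a fresh seed for every run otherwise. The pattern can be condensed into a self-contained sketch (plain Java, independent of the goataa classes; all names here are illustrative, not the original source):

```java
import java.util.Random;

/** Illustrative sketch of the seed handling in Listing 56.2 (not the goataa source). */
public class SeedPattern {
  /** generates fresh seeds whenever a random seed is requested */
  private static final Random SEED = new Random(0L);

  private final Random random = new Random();
  private boolean useRandomSeed = true;
  private long seed;

  /** fix the seed: every subsequent run becomes exactly repeatable */
  public void setRandSeed(final long s) {
    this.useRandomSeed = false;
    this.seed = s;
  }

  /** prepare the randomizer exactly like the call() method above does */
  public Random setup() {
    if (this.useRandomSeed) {
      this.seed = SEED.nextLong(); // draw a new seed for this run
    }
    this.random.setSeed(this.seed); // same seed => same pseudo-random sequence
    return this.random;
  }
}
```

After `setRandSeed(42L)`, two consecutive `setup()` calls yield identical pseudo-random sequences, which is what makes optimization experiments reproducible.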

Listing 56.3: A base class for all multi-objective optimization methods.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms;
5
6 import org.goataa.impl.comparators.Pareto;
7 import org.goataa.impl.utils.MOIndividual;
8 import org.goataa.spec.IIndividualComparator;
9 import org.goataa.spec.IObjectiveFunction;
10
11 /**
12 * An algorithm for multi-objective optimization
13 *
14 * @param <G>
15 * the search space (genome, Section 4.1 on page 81)
16 * @param <X>
17 * the problem space (phenome, Section 2.1 on page 43)
18 * @author Thomas Weise
19 */
20 public class MOOptimizationAlgorithm<G, X> extends
21 SOOptimizationAlgorithm<G, X, MOIndividual<G, X>> {
22 /** a constant required by Java serialization */
23 private static final long serialVersionUID = 1;
24
25 /** the individual comparator */
26 private IIndividualComparator cmp;
27
28 /** the objective functions */
29 private IObjectiveFunction<X>[] ofs;
30
31 /** Instantiate a new multi-objective optimization algorithm */
32 protected MOOptimizationAlgorithm() {
33 super();
34 this.cmp = Pareto.PARETO_COMPARATOR;
35 this.setMaxSolutions(20);
36 }
37
38 /**
39 * Get the individual comparator
40 *
41 * @return the individual comparator
42 */
43 public IIndividualComparator getComparator() {
44 return this.cmp;
45 }
46
47 /**
48 * Set the individual comparator
49 *
50 * @param pcmp
51 * the individual comparator
52 */
53 public void setComparator(final IIndividualComparator pcmp) {
54 this.cmp = pcmp;
55 }
56
57 /**
58 * Get the objective functions
59 *
60 * @return the objective functions
61 */
62 public IObjectiveFunction<X>[] getObjectiveFunctions() {
63 return this.ofs;
64 }
65
66 /**
67 * Set the objective functions
68 *
69 * @param o
70 * the objective functions
71 */
72 public void setObjectiveFunctions(final IObjectiveFunction<X>[] o) {
73 this.ofs = o;
74 if (o.length > 0) {
75 super.setObjectiveFunction(o[0]);
76 }
77 }
78
79 /**
80 * Set the objective function (see Section 2.2 on page 44
81 * and Listing 55.8 on page 712) to be used to
82 * guide the optimization process
83 *
84 * @param af
85 * the objective function
86 */
87 @Override
88 @SuppressWarnings("unchecked")
89 public void setObjectiveFunction(final IObjectiveFunction<X> af) {
90 this.setObjectiveFunctions(new IObjectiveFunction[] { af });
91 }
92 }
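The Pareto.PARETO_COMPARATOR used as the default comparator above implements Pareto dominance. For minimization, the rule can be sketched in a few lines (an illustration of the concept, not the goataa implementation):

```java
/**
 * Pareto dominance for minimization: a dominates b if and only if a is
 * nowhere worse and strictly better in at least one objective.
 */
public final class ParetoSketch {
  public static boolean dominates(final double[] a, final double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) {
        return false; // worse in at least one objective: no dominance
      }
      if (a[i] < b[i]) {
        strictlyBetter = true; // strictly better in at least one objective
      }
    }
    return strictlyBetter;
  }
}
```

An archive of at most maxSolutions individuals, as configured by the constructor above, then only retains candidate solutions which no other archived individual dominates.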

56.1.2 Baseline Search Patterns

Random sampling and random walks, both discussed in Chapter 8, are two baseline search
patterns which do not utilize any information from the objective functions during their
course (apart from remembering the best candidate solution found).
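The difference between the two patterns is best seen side by side: random sampling draws every point independently, while a random walk mutates the point visited last (not the best one). The following self-contained plain-Java sketch contrasts both on an arbitrary toy objective, minimizing |x - 3| (the ranges and step sizes are illustrative choices, not part of the framework):

```java
import java.util.Random;

/** Toy versions of the two baseline search patterns (illustrative, not the goataa code). */
public final class BaselineSearches {
  /** toy objective: distance to the optimum at x = 3 */
  static double f(final double x) {
    return Math.abs(x - 3d);
  }

  /** random sampling: each step creates a fresh, independent point */
  public static double randomSampling(final int steps, final Random r) {
    double best = Double.POSITIVE_INFINITY, bestX = 0d;
    for (int t = 0; t < steps; t++) {
      final double x = (r.nextDouble() * 100d) - 50d; // nullary creation in [-50, 50)
      if (f(x) < best) { // bookkeeping is the only use of the objective
        best = f(x);
        bestX = x;
      }
    }
    return bestX;
  }

  /** random walk: each step mutates the previous point, never the best one */
  public static double randomWalk(final int steps, final Random r) {
    double x = (r.nextDouble() * 100d) - 50d, bestX = x;
    for (int t = 0; t < steps; t++) {
      x += r.nextGaussian(); // unary mutation of the last point
      if (f(x) < f(bestX)) { // again, the objective only drives the bookkeeping
        bestX = x;
      }
    }
    return bestX;
  }
}
```

Neither method uses the objective value to decide where to search next; this is exactly what distinguishes them from the local search algorithms in Section 56.1.3.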

Listing 56.4: The random sampling algorithm “randomSampling” given in Algorithm 8.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.searchOperations.NullarySearchOperation;
10 import org.goataa.impl.utils.Individual;
11 import org.goataa.spec.IGPM;
12 import org.goataa.spec.INullarySearchOperation;
13 import org.goataa.spec.IObjectiveFunction;
14 import org.goataa.spec.IOptimizationModule;
15 import org.goataa.spec.ITerminationCriterion;
16
17 /**
18 * A simple implementation of the Random Sampling algorithm introduced as
19 * Algorithm 8.1 on page 114.
20 *
21 * @param <G>
22 * the search space (genome, Section 4.1 on page 81)
23 * @param <X>
24 * the problem space (phenome, Section 2.1 on page 43)
25 * @author Thomas Weise
26 */
27 public final class RandomSampling<G, X> extends
28 SOOptimizationAlgorithm<G, X, Individual<G, X>> {
29
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** the nullary search operation */
34 private INullarySearchOperation<G> o0;
35
36 /** instantiate the random sampling class */
37 @SuppressWarnings("unchecked")
38 public RandomSampling() {
39 super();
40 this.o0 = (INullarySearchOperation) (NullarySearchOperation.NULL_CREATION);
41 }
42
43 /**
44 * Set the nullary search operation to be used by this optimization
45 * algorithm (see Section 4.2 on page 83 and
46 * Listing 55.2 on page 708).
47 *
48 * @param op
49 * the nullary search operation to use
50 */
51 @SuppressWarnings("unchecked")
52 public void setNullarySearchOperation(final INullarySearchOperation<G> op) {
53 this.o0 = ((op != null) ? op
54 : ((INullarySearchOperation) (NullarySearchOperation.NULL_CREATION)));
55 }
56
57 /**
58 * Get the nullary search operation which is used by this optimization
59 * algorithm (see Section 4.2 on page 83 and
60 * Listing 55.2 on page 708).
61 *
62 * @return the nullary search operation to use
63 */
64 public final INullarySearchOperation<G> getNullarySearchOperation() {
65 return this.o0;
66 }
67
68 /**
69 * Invoke the random sampling process.
70 *
71 * @param r
72 * the randomizer (will be used directly without setting the
73 * seed)
74 * @param term
75 * the termination criterion (will be used directly without
76 * resetting)
77 * @param result
78 * a list to which the results are to be appended
79 */
80 @Override
81 public void call(final Random r, final ITerminationCriterion term,
82 final List<Individual<G, X>> result) {
83
84 result.add(RandomSampling.randomSampling(this.getObjectiveFunction(),//
85 this.getNullarySearchOperation(), //
86 this.getGPM(), term, r));
87
88 }
89
90 /**
91 * We place the complete Random Sampling method as defined in
92 * Algorithm 8.1 on page 114 into this single procedure.
93 *
94 * @param f
95 * the objective function ( Definition D2.3 on page 44)
96 * @param create
97 * the nullary search operator
98 * ( Section 4.2 on page 83) for creating the initial
99 * genotype
100 * @param gpm
101 * the genotype-phenotype mapping
102 * ( Section 4.3 on page 85)
103 * @param term
104 * the termination criterion
105 * ( Section 6.3.3 on page 105)
106 * @param r
107 * the random number generator
108 * @return the individual holding the best candidate solution
109 * ( Definition D2.2 on page 43) found
110 * @param <G>
111 * the search space ( Section 4.1 on page 81)
112 * @param <X>
113 * the problem space ( Section 2.1 on page 43)
114 */
115 public static final <G, X> Individual<G, X> randomSampling(
116 //
117 final IObjectiveFunction<X> f,
118 final INullarySearchOperation<G> create,//
119 final IGPM<G, X> gpm,//
120 final ITerminationCriterion term, final Random r) {
121
122 Individual<G, X> p, pbest;
123 int t;
124
125 p = new Individual<G, X>();
126 pbest = new Individual<G, X>();
127
128 // create the first genotype, map it to a phenotype, and evaluate it
129 p.g = create.create(r);
130 t = 1;
131
132 // check the termination criterion
133 while (!(term.terminationCriterion())) {
134 p.x = gpm.gpm(p.g, r);
135 p.v = f.compute(p.x, r);
136
137 // remember the best candidate solution
138 if ((t == 1) || (p.v < pbest.v)) {
139 pbest.assign(p);
140 }
141
142 t++;
143
144 // Instead of modifying existing genotypes as done in
145 // Listing 56.5, we create
146 // new ones in each round
147 p.g = create.create(r);
148 }
149
150 return pbest;
151 }
152
153 /**
154 * Get the full configuration which holds all the data necessary to
155 * describe this object.
156 *
157 * @param longVersion
158 * true if the long version should be returned, false if the
159 * short version should be returned
160 * @return the full configuration
161 */
162 @Override
163 public String getConfiguration(final boolean longVersion) {
164 StringBuilder sb;
165 IOptimizationModule o;
166
167 sb = new StringBuilder();
168
169 sb.append(super.getConfiguration(longVersion));
170
171 o = this.getNullarySearchOperation();
172 if (o != null) {
173 if (sb.length() > 0) {
174         sb.append(',');
175       }
176       sb.append("o0=");//$NON-NLS-1$
177 sb.append(o.toString(longVersion));
178 }
179
180 return sb.toString();
181 }
182
183 /**
184 * Get the name of the optimization module
185 *
186 * @param longVersion
187 * true if the long name should be returned, false if the short
188 * name should be returned
189 * @return the name of the optimization module
190 */
191 @Override
192 public String getName(final boolean longVersion) {
193 if (longVersion) {
194 return this.getClass().getSimpleName();
195 }
196     return "RS"; //$NON-NLS-1$
197 }
198 }

Listing 56.5: The random walk algorithm “randomWalk” given in Algorithm 8.2.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Individual;
10 import org.goataa.spec.IGPM;
11 import org.goataa.spec.INullarySearchOperation;
12 import org.goataa.spec.IObjectiveFunction;
13 import org.goataa.spec.ITerminationCriterion;
14 import org.goataa.spec.IUnarySearchOperation;
15
16 /**
17 * A simple implementation of the Random Walk algorithm introduced as
18 * Algorithm 8.2 on page 115.
19 *
20 * @param <G>
21 * the search space (genome, Section 4.1 on page 81)
22 * @param <X>
23 * the problem space (phenome, Section 2.1 on page 43)
24 * @author Thomas Weise
25 */
26 public final class RandomWalk<G, X> extends
27 LocalSearchAlgorithm<G, X, Individual<G, X>> {
28
29 /** a constant required by Java serialization */
30 private static final long serialVersionUID = 1;
31
32 /** instantiate the random walk class */
33 public RandomWalk() {
34 super();
35 }
36
37 /**
38 * Invoke the random walk.
39 *
40 * @param r
41 * the randomizer (will be used directly without setting the
42 * seed)
43 * @param term
44 * the termination criterion (will be used directly without
45 * resetting)
46 * @param result
47 * a list to which the results are to be appended
48 */
49 @Override
50 public void call(final Random r, final ITerminationCriterion term,
51 final List<Individual<G, X>> result) {
52
53 result.add(RandomWalk.randomWalk(this.getObjectiveFunction(),//
54 this.getNullarySearchOperation(), //
55 this.getUnarySearchOperation(),//
56 this.getGPM(), term, r));
57 }
58
59 /**
60 * We place the complete Random Walk method as defined in
61 * Algorithm 8.2 on page 115 into this single procedure.
62 *
63 * @param f
64 * the objective function ( Definition D2.3 on page 44)
65 * @param create
66 * the nullary search operator
67 * ( Section 4.2 on page 83) for creating the initial
68 * genotype
69 * @param mutate
70 * the unary search operator ( Section 4.2 on page 83)
71 * for modifying existing genotypes
72 * @param gpm
73 * the genotype-phenotype mapping
74 * ( Section 4.3 on page 85)
75 * @param term
76 * the termination criterion
77 * ( Section 6.3.3 on page 105)
78 * @param r
79 * the random number generator
80 * @return the individual holding the best candidate solution
81 * ( Definition D2.2 on page 43) found
82 * @param <G>
83 * the search space ( Section 4.1 on page 81)
84 * @param <X>
85 * the problem space ( Section 2.1 on page 43)
86 */
87 public static final <G, X> Individual<G, X> randomWalk(
88 //
89 final IObjectiveFunction<X> f,
90 final INullarySearchOperation<G> create,//
91 final IUnarySearchOperation<G> mutate,//
92 final IGPM<G, X> gpm,//
93 final ITerminationCriterion term, final Random r) {
94
95 Individual<G, X> p, pbest;
96 int t;
97
98 p = new Individual<G, X>();
99 pbest = new Individual<G, X>();
100
101 // create the first genotype, map it to a phenotype, and evaluate it
102 p.g = create.create(r);
103 t = 1;
104
105 // check the termination criterion
106 while (!(term.terminationCriterion())) {
107 p.x = gpm.gpm(p.g, r);
108 p.v = f.compute(p.x, r);
109
110 // remember the best candidate solution
111 if ((t == 1) || (p.v < pbest.v)) {
112 pbest.assign(p);
113 }
114
115 t++;
116
117       // modify the last point checked, map the new point to a phenotype
118       // and evaluate it - the main difference to
119       // ?? on page ?? is that we
120       // do not use the best genotype for this
121 p.g = mutate.mutate(p.g, r);
122 }
123
124 return pbest;
125 }
126
127 /**
128 * Get the name of the optimization module
129 *
130 * @param longVersion
131 * true if the long name should be returned, false if the short
132 * name should be returned
133 * @return the name of the optimization module
134 */
135 @Override
136 public String getName(final boolean longVersion) {
137 if (longVersion) {
138 return this.getClass().getSimpleName();
139 }
140     return "RW"; //$NON-NLS-1$
141 }
142 }

56.1.3 Local Search Algorithms

56.1.3.1 Hill Climbers

Here we provide the Hill Climbing algorithms discussed in Chapter 26. Hill Climbing is one
of the most basic optimization methods: a local search which keeps a single individual and
repeatedly modifies its genotype. It moves to the new, resulting candidate solution if and
only if it is better than the current one.
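Stripped of the framework types, this loop is tiny. The following self-contained sketch minimizes the arbitrary toy objective f(x) = x² with Gaussian mutation (illustration only; the step size and initial range are free choices, not prescribed by the algorithm):

```java
import java.util.Random;

/** A minimal hill climber mirroring the structure of Algorithm 26.1 (illustrative). */
public final class TinyHillClimber {
  public static double climb(final int steps, final Random r) {
    double x = (r.nextDouble() * 20d) - 10d; // nullary creation in [-10, 10)
    double fx = x * x; // toy objective: minimize f(x) = x^2
    for (int t = 0; t < steps; t++) {
      final double xNew = x + r.nextGaussian(); // unary mutation
      final double fNew = xNew * xNew;
      if (fNew < fx) { // move if and only if the new point is better
        x = xNew;
        fx = fNew;
      }
    }
    return x;
  }
}
```

Because only improvements are accepted, the climber converges quickly on this unimodal function, but on a multimodal objective it would get stuck in the nearest local optimum.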

Listing 56.6: The Hill Climbing algorithm “hillClimbing” given in Algorithm 26.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.hc;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.LocalSearchAlgorithm;
10 import org.goataa.impl.utils.Individual;
11 import org.goataa.spec.IGPM;
12 import org.goataa.spec.INullarySearchOperation;
13 import org.goataa.spec.IObjectiveFunction;
14 import org.goataa.spec.ITerminationCriterion;
15 import org.goataa.spec.IUnarySearchOperation;
16
17 /**
18 * A simple implementation of the Hill Climbing algorithm introduced as
19 * Algorithm 26.1 on page 230.
20 *
21 * @param <G>
22 * the search space (genome, Section 4.1 on page 81)
23 * @param <X>
24 * the problem space (phenome, Section 2.1 on page 43)
25 * @author Thomas Weise
26 */
27 public final class HillClimbing<G, X> extends
28 LocalSearchAlgorithm<G, X, Individual<G, X>> {
29
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** instantiate the hill climbing class */
34 public HillClimbing() {
35 super();
36 }
37
38 /**
39 * Invoke the optimization process. This method calls the optimizer and
40 * returns the list of best individuals (see Definition D4.18 on page 90)
41 * found. Usually, only a single individual will be returned. Different
42 * from the parameterless call method, here a randomizer and a
43 * termination criterion are directly passed in. Also, a list to fill in
44 * the optimization results is provided. This allows recursively using
45 * the optimization algorithms.
46 *
47 * @param r
48 * the randomizer (will be used directly without setting the
49 * seed)
50 * @param term
51 * the termination criterion (will be used directly without
52 * resetting)
53 * @param result
54 * a list to which the results are to be appended
55 */
56 @Override
57 public void call(final Random r, final ITerminationCriterion term,
58 final List<Individual<G, X>> result) {
59
60 result.add(HillClimbing.hillClimbing(this.getObjectiveFunction(),//
61 this.getNullarySearchOperation(), //
62 this.getUnarySearchOperation(),//
63 this.getGPM(), term, r));
64
65 }
66
67 /**
68 * We place the complete Hill Climbing method as defined in
69 * Algorithm 26.1 on page 230 into this single procedure.
70 *
71 * @param f
72 * the objective function ( Definition D2.3 on page 44)
73 * @param create
74 * the nullary search operator
75 * ( Section 4.2 on page 83) for creating the initial
76 * genotype
77 * @param mutate
78 * the unary search operator ( Section 4.2 on page 83)
79 * for modifying existing genotypes
80 * @param gpm
81 * the genotype-phenotype mapping
82 * ( Section 4.3 on page 85)
83 * @param term
84 * the termination criterion
85 * ( Section 6.3.3 on page 105)
86    * @param r
87    *          the random number generator
88    * @return the individual holding the best candidate solution
89    *         ( Definition D2.2 on page 43) found
90 * @param <G>
91 * the search space ( Section 4.1 on page 81)
92 * @param <X>
93 * the problem space ( Section 2.1 on page 43)
94 */
95 public static final <G, X> Individual<G, X> hillClimbing(
96 //
97 final IObjectiveFunction<X> f,
98 final INullarySearchOperation<G> create,//
99 final IUnarySearchOperation<G> mutate,//
100 final IGPM<G, X> gpm,//
101 final ITerminationCriterion term, final Random r) {
102
103 Individual<G, X> p, pnew;
104
105 p = new Individual<G, X>();
106 pnew = new Individual<G, X>();
107
108 // create the first genotype, map it to a phenotype, and evaluate it
109 p.g = create.create(r);
110 p.x = gpm.gpm(p.g, r);
111 p.v = f.compute(p.x, r);
112
113 // check the termination criterion
114 while (!(term.terminationCriterion())) {
115
116       // modify the best point known, map the new point to a phenotype and
117       // evaluate it
118 pnew.g = mutate.mutate(p.g, r);
119 pnew.x = gpm.gpm(pnew.g, r);
120 pnew.v = f.compute(pnew.x, r);
121
122 // In Algorithm 26.1 on page 230, the objective functions are
123 // evaluated here. By storing the objective values in the individual
124 // records, we avoid evaluating p.x more than once.
125 if (pnew.v < p.v) {
126 p.assign(pnew);
127 }
128 }
129
130 return p;
131 }
132
133 /**
134 * Get the name of the optimization module
135 *
136 * @param longVersion
137 * true if the long name should be returned, false if the short
138 * name should be returned
139 * @return the name of the optimization module
140 */
141 @Override
142 public String getName(final boolean longVersion) {
143 if (longVersion) {
144 return this.getClass().getSimpleName();
145 }
146     return "HC"; //$NON-NLS-1$
147 }
148 }

Listing 56.7: The Multi-Objective Hill Climbing algorithm “hillClimbingMO” given in Al-
gorithm 26.2.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.hc;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.MOLocalSearchAlgorithm;
10 import org.goataa.impl.utils.Archive;
11 import org.goataa.impl.utils.MOIndividual;
12 import org.goataa.impl.utils.Utils;
13 import org.goataa.spec.IGPM;
14 import org.goataa.spec.IIndividualComparator;
15 import org.goataa.spec.INullarySearchOperation;
16 import org.goataa.spec.IObjectiveFunction;
17 import org.goataa.spec.ITerminationCriterion;
18 import org.goataa.spec.IUnarySearchOperation;
19
20 /**
21 * An implementation of the Multi-Objective Hill Climbing algorithm
22 * introduced as Algorithm 26.2 on page 231.
23 *
24 * @param <G>
25 * the search space (genome, Section 4.1 on page 81)
26 * @param <X>
27 * the problem space (phenome, Section 2.1 on page 43)
28 * @author Thomas Weise
29 */
30 public final class MOHillClimbing<G, X> extends
31 MOLocalSearchAlgorithm<G, X> {
32
33 /** a constant required by Java serialization */
34 private static final long serialVersionUID = 1;
35
36 /** instantiate the hill climbing class */
37 public MOHillClimbing() {
38 super();
39 }
40
41 /**
42 * Invoke the multi-objective hill climber.
43 *
44 * @param r
45 * the randomizer (will be used directly without setting the
46 * seed)
47 * @param term
48 * the termination criterion (will be used directly without
49 * resetting)
50 * @param result
51 * a list to which the results are to be appended
52 */
53 @Override
54 public void call(final Random r, final ITerminationCriterion term,
55 final List<MOIndividual<G, X>> result) {
56
57 MOHillClimbing.hillClimbingMO(//
58 this.getObjectiveFunctions(),//
59 this.getComparator(),//
60 this.getNullarySearchOperation(), //
61 this.getUnarySearchOperation(),//
62 this.getGPM(), term,//
63 this.getMaxSolutions(),//
64 r, //
65 result);
66
67 }
68
69 /**
70    * We place the complete Multi-Objective Hill Climbing method as
71    * defined in Algorithm 26.2 on page 231 into this single procedure.
72 *
73 * @param f
74 * the objective functions ( Definition D2.3 on page 44)
75 * @param cmp
76 * the comparator
77 * @param create
78 * the nullary search operator
79 * ( Section 4.2 on page 83) for creating the initial
80 * genotype
81 * @param mutate
82 * the unary search operator ( Section 4.2 on page 83)
83 * for modifying existing genotypes
84 * @param gpm
85 * the genotype-phenotype mapping
86 * ( Section 4.3 on page 85)
87 * @param term
88 * the termination criterion
89 * ( Section 6.3.3 on page 105)
90    * @param maxSolutions
91    *          the maximum number of allowed solutions
92    * @param r
93    *          the random number generator
94    * @param result
95    *          the list where the best individuals
96    *          should be added to
97 * @param <G>
98 * the search space ( Section 4.1 on page 81)
99 * @param <X>
100 * the problem space ( Section 2.1 on page 43)
101 */
102 public static final <G, X> void hillClimbingMO(
103 //
104 final IObjectiveFunction<X>[] f,//
105 final IIndividualComparator cmp,//
106 final INullarySearchOperation<G> create,//
107 final IUnarySearchOperation<G> mutate,//
108 final IGPM<G, X> gpm,//
109 final ITerminationCriterion term,//
110 final int maxSolutions,//
111 final Random r,//
112 final List<MOIndividual<G, X>> result) {
113
114 MOIndividual<G, X> p, pnew;
115 Archive<G, X> arc;
116 int max;
117
118 p = new MOIndividual<G, X>(f.length);
119 pnew = new MOIndividual<G, X>(f.length);
120 arc = new Archive<G, X>(maxSolutions, cmp);
121
122 // create the first genotype, map it to a phenotype, and evaluate it
123 p.g = create.create(r);
124 p.x = gpm.gpm(p.g, r);
125 p.evaluate(f, r);
126 arc.add(p);
127
128 // check the termination criterion
129 while (!(term.terminationCriterion())) {
130 pnew = new MOIndividual<G, X>(f.length);
131
132 mutateCreate: {
133 for (max = 1000; (--max) >= 0;) {
134 // pick a random individual from the set of best individuals
135 p = arc.get(r.nextInt(arc.size()));
136
137 pnew.g = mutate.mutate(p.g, r);
138 if (!(Utils.equals(pnew.g, p.g))) {
139 break mutateCreate;
140 }
141 }
142 pnew.g = create.create(r);
143 }
144 pnew.x = gpm.gpm(pnew.g, r);
145 pnew.evaluate(f, r);
146
147 // check whether the individual should enter the archive
148 arc.add(pnew);
149 }
150
151 result.addAll(arc);
152 }
153
154 /**
155 * Get the name of the optimization module
156 *
157 * @param longVersion
158 * true if the long name should be returned, false if the short
159 * name should be returned
160 * @return the name of the optimization module
161 */
162 @Override
163 public String getName(final boolean longVersion) {
164 if (longVersion) {
165 return this.getClass().getSimpleName();
166 }
167     return "MOHC"; //$NON-NLS-1$
168 }
169 }

56.1.3.2 Simulated Annealing

In this package, we provide the implementations of the Simulated Annealing algorithm and
its modules as discussed in Chapter 27.
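At the core of the algorithm sits the Metropolis acceptance rule: improvements are always accepted, while a worsening by ΔE survives with probability e^(−ΔE/T). A minimal sketch of this rule (illustrative; minimization is assumed, and the method name is our own):

```java
import java.util.Random;

/** The Metropolis acceptance criterion used by Simulated Annealing (sketch). */
public final class MetropolisSketch {
  public static boolean accept(final double fOld, final double fNew,
      final double temperature, final Random r) {
    if (fNew <= fOld) {
      return true; // never reject an improvement
    }
    if (temperature <= 0d) {
      return false; // frozen: degenerates into plain hill climbing
    }
    // accept the worse solution with probability e^(-(fNew - fOld) / T)
    return r.nextDouble() < Math.exp((fOld - fNew) / temperature);
  }
}
```

As the temperature schedule drives T toward zero, this rule gradually turns the initially almost-random search into a hill climber.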
Listing 56.8: The Simulated Annealing algorithm “simulatedAnnealing” given in Algo-
rithm 27.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.sa;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.LocalSearchAlgorithm;
10 import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
11 import org.goataa.impl.utils.Individual;
12 import org.goataa.spec.IGPM;
13 import org.goataa.spec.INullarySearchOperation;
14 import org.goataa.spec.IObjectiveFunction;
15 import org.goataa.spec.ITemperatureSchedule;
16 import org.goataa.spec.ITerminationCriterion;
17 import org.goataa.spec.IUnarySearchOperation;
18
19 /**
20 * A simple implementation of the Simulated Annealing algorithm introduced
21 * as Algorithm 27.1 on page 245.
22 *
23 * @param <G>
24 * the search space (genome, Section 4.1 on page 81)
25 * @param <X>
26 * the problem space (phenome, Section 2.1 on page 43)
27 * @author Thomas Weise
28 */
29 public final class SimulatedAnnealing<G, X> extends
30 LocalSearchAlgorithm<G, X, Individual<G, X>> {
31
32 /** a constant required by Java serialization */
33 private static final long serialVersionUID = 1;
34
35 /** the temperature schedule */
36 private ITemperatureSchedule schedule;
37
38 /** instantiate the simulated annealing class */
39 public SimulatedAnnealing() {
40 super();
41 }
42
43 /**
44 * Set the temperature schedule (see
45 * Definition D27.2 on page 247) which will be responsible for
46 * setting the acceptance probability of inferior candidate solutions
47 * during all optimization steps.
48 *
49 * @param aschedule
50 * the temperature schedule
51 */
52 public final void setTemperatureSchedule(
53 final ITemperatureSchedule aschedule) {
54 this.schedule = aschedule;
55 }
56
57 /**
58 * Get the temperature schedule (see
59 * Definition D27.2 on page 247) which is responsible for setting
60 * the acceptance probability of inferior candidate solutions during all
61 * optimization steps.
62 *
63 * @return the temperature schedule
64 */
65 public final ITemperatureSchedule getTemperatureSchedule() {
66 return this.schedule;
67 }
68
69 /**
70 * Invoke the optimization process. This method calls the optimizer and
71 * returns the list of best individuals (see Definition D4.18 on page 90)
72 * found. Usually, only a single individual will be returned. Different
73 * from the parameterless call method, here a randomizer and a
74 * termination criterion are directly passed in. Also, a list to fill in
75 * the optimization results is provided. This allows recursively using
76 * the optimization algorithms.
77 *
78 * @param r
79 * the randomizer (will be used directly without setting the
80 * seed)
81 * @param term
82 * the termination criterion (will be used directly without
83 * resetting)
84 * @param result
85 * a list to which the results are to be appended
86 */
87 @Override
88 public void call(final Random r, final ITerminationCriterion term,
89 final List<Individual<G, X>> result) {
90
91 result.add(SimulatedAnnealing.simulatedAnnealing(//
92 this.getObjectiveFunction(),//
93 this.getNullarySearchOperation(), //
94 this.getUnarySearchOperation(),//
95 this.getGPM(), this.getTemperatureSchedule(),//
96 term, r));
97 }
98
99 /**
100 * We place the complete Simulated Annealing method as defined in
101 * Algorithm 27.1 on page 245 into this single procedure.
102 *
103 * @param f
104 * the objective function ( Definition D2.3 on page 44)
105 * @param create
106 * the nullary search operator
107 * ( Section 4.2 on page 83) for creating the initial
108 * genotype
109 * @param mutate
110 * the unary search operator ( Section 4.2 on page 83)
111 * for modifying existing genotypes
112 * @param gpm
113 * the genotype-phenotype mapping
114 * ( Section 4.3 on page 85)
115 * @param temperature
116 * the temperature schedule according to
117 * ( Definition D27.2 on page 247)
118 * @param term
119 * the termination criterion
120 * ( Section 6.3.3 on page 105)
121 * @param r
122 * the random number generator
123 * @return the best candidate solution
124 * ( Definition D2.2 on page 43) found
125 * @param <G>
126 * the search space ( Section 4.1 on page 81)
127 * @param <X>
128 * the problem space ( Section 2.1 on page 43)
129 */
130 public static final <G, X> Individual<G, X> simulatedAnnealing(
131 //
132 final IObjectiveFunction<X> f,
133 final INullarySearchOperation<G> create,//
134 final IUnarySearchOperation<G> mutate,//
135 final IGPM<G, X> gpm,//
136 final ITemperatureSchedule temperature,
137 final ITerminationCriterion term, final Random r) {
138
139 Individual<G, X> pbest, pcur, pnew;
140 int t;
141 double T, DE;
142
143 pbest = new Individual<G, X>();
144 pnew = new Individual<G, X>();
145 pcur = new Individual<G, X>();
146
147 // create the first genotype, map it to a phenotype, and evaluate it
148 pcur.g = create.create(r);
149 pcur.x = gpm.gpm(pcur.g, r);
150 pcur.v = f.compute(pcur.x, r);
151 pbest.assign(pcur);
152 t = 1;
153
154 // check the termination criterion
155 while (!(term.terminationCriterion())) {
156
157 // compute the current temperature
158 T = temperature.getTemperature(t);
159
160 // modify the current point, map the new point to a phenotype and
161 // evaluate it
162 pnew.g = mutate.mutate(pcur.g, r);
163 pnew.x = gpm.gpm(pnew.g, r);
164 pnew.v = f.compute(pnew.x, r);
165
166 // compute the energy difference according to
167 // Equation 27.2 on page 244
168 DE = pnew.v - pcur.v;
169
170 // implement Equation 27.3 on page 244
171 if (DE <= 0.0d) {
172 pcur.assign(pnew);
173
174 // remember the best candidate solution
175 if (pnew.v < pbest.v) {
176 pbest.assign(pnew);
177 }
178 } else {
179 // otherwise, use
180 if (r.nextDouble() < Math.exp(-DE / T)) {
181 pcur.assign(pnew);
182 }
183 }
184
185 t++;
186 }
187
188 return pbest;
189 }
190
191 /**
192 * Get the name of the optimization module
193 *
194 * @param longVersion
195 * true if the long name should be returned, false if the short
196 * name should be returned
197 * @return the name of the optimization module
198 */
199 @Override
200 public String getName(final boolean longVersion) {
201 ITemperatureSchedule ts;
202
203 ts = this.getTemperatureSchedule();
204
205 if (ts instanceof Exponential) {
206 return (longVersion ? "SimulatedQuenching" : "SQ"); //NON-NLS-1 //NON-NLS-2
207 }
208
209 if (longVersion) {
210 return this.getClass().getSimpleName();
211 }
212 return "SA"; //NON-NLS-1
213 }
214
215 /**
216 * Get the full configuration which holds all the data necessary to
217 * describe this object.
218 *
219 * @param longVersion
220 * true if the long version should be returned, false if the
221 * short version should be returned
222 * @return the full configuration
223 */
224 @Override
225 public String getConfiguration(final boolean longVersion) {
226 ITemperatureSchedule ts;
227 String s;
228
229 s = super.getConfiguration(longVersion);
230 ts = this.getTemperatureSchedule();
231 if (ts != null) {
232 s += ',' + ts.toString(longVersion);
233 }
234
235 return s;
236 }
237 }
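The core of the loop above is the Metropolis acceptance criterion: an improvement (DE <= 0) is always accepted, while a worsening is accepted only with probability e^(-DE/T). This decision can be exercised in isolation; the following is a minimal sketch under our own naming (`MetropolisRule` and `accept` are illustrative helpers, not part of the goataa package):

```java
import java.util.Random;

/** Stand-alone sketch of the Metropolis acceptance rule used above. */
public final class MetropolisRule {

  /** Should the step from objective value oldV to newV be accepted at temperature T? */
  public static boolean accept(final double oldV, final double newV,
      final double T, final Random r) {
    final double DE = newV - oldV; // energy difference
    if (DE <= 0.0d) {
      return true; // an improvement is never rejected
    }
    // accept a worsening only with probability exp(-DE / T)
    return r.nextDouble() < Math.exp(-DE / T);
  }

  public static void main(String[] args) {
    final Random r = new Random(1);
    // improvements are accepted regardless of temperature
    System.out.println(accept(5.0, 3.0, 0.001, r)); // true
    // as T approaches zero, worsening moves are (almost) never accepted
    System.out.println(accept(3.0, 5.0, 1e-12, r)); // false
  }
}
```

At high temperatures the rule accepts almost any step and Simulated Annealing behaves like a random walk; as T falls towards zero it degenerates into a pure hill climber.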

56.1.3.2.1 Temperature Schedules

In this package, we provide the implementations of the temperature schedules for the Simulated Annealing algorithm and its modules as discussed in Section 27.3.

Listing 56.9: The logarithmic temperature schedule as specified in Equation 27.16.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.sa.temperatureSchedules;
5
6 /**
7 * A logarithmic temperature schedule as defined in
8 * Section 27.3.1 on page 248 which implements the interface
9 * ITemperatureSchedule given in
10 * Listing 55.12 on page 717.
11 *
12 * @author Thomas Weise
13 */
14 public class Logarithmic extends TemperatureSchedule {
15
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /**
20 * Create a new logarithmic temperature schedule
21 *
22 * @param pTs
23 * the starting temperature
24 */
25 public Logarithmic(final double pTs) {
26 super(pTs);
27 }
28
29 /**
30 * Get the temperature to be used in the iteration t according to a
31 * logarithmic schedule (Section 27.3.1 on page 248,
32 * Equation 27.13 on page 247).
33 *
34 * @param t
35 * the iteration
36 * @return the temperature to be used for determining whether a worse
37 * solution should be accepted or not
38 */
39 @Override
40 public final double getTemperature(final int t) {
41 if (t < Math.E) {
42 return this.Ts;
43 }
44 return (this.Ts / Math.log(t));
45 }
46
47 /**
48 * Get the name of the optimization module
49 *
50 * @param longVersion
51 * true if the long name should be returned, false if the short
52 * name should be returned
53 * @return the name of the optimization module
54 */
55 @Override
56 public String getName(final boolean longVersion) {
57 if (longVersion) {
58 return this.getClass().getSimpleName();
59 }
60 return "log"; //NON-NLS-1
61 }
62 }

Listing 56.10: The exponential temperature schedule as specified in Equation 27.17.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.sa.temperatureSchedules;
5
6 import org.goataa.impl.utils.TextUtils;
7
8 /**
9 * An exponential temperature schedule as defined in
10 * Section 27.3.2 on page 249 which implements the interface
11 * ITemperatureSchedule given in
12 * Listing 55.12 on page 717.
13 *
14 * @author Thomas Weise
15 */
16 public class Exponential extends TemperatureSchedule {
17
18 /** a constant required by Java serialization */
19 private static final long serialVersionUID = 1;
20
21 /** the epsilon value */
22 private final double e;
23
24 /**
25 * Create a new exponential temperature schedule
26 *
27 * @param pTs
28 * the starting temperature
29 * @param pE
30 * the epsilon value
31 */
32 public Exponential(final double pTs, final double pE) {
33 super(pTs);
34 this.e = pE;
35 }
36
37 /**
38 * Get the temperature to be used in the iteration t according to an
39 * exponential schedule (Section 27.3.2 on page 249,
40 * Equation 27.13 on page 247).
41 *
42 * @param t
43 * the iteration
44 * @return the temperature to be used for determining whether a worse
45 * solution should be accepted or not
46 */
47 @Override
48 public final double getTemperature(final int t) {
49 return (Math.pow((1 - this.e), t) * this.Ts);
50 }
51
52 /**
53 * Get the name of the optimization module
54 *
55 * @param longVersion
56 * true if the long name should be returned, false if the short
57 * name should be returned
58 * @return the name of the optimization module
59 */
60 @Override
61 public String getName(final boolean longVersion) {
62 if (longVersion) {
63 return this.getClass().getSimpleName();
64 }
65 return "exp"; //NON-NLS-1
66 }
67
68 /**
69 * Get the full configuration which holds all the data necessary to
70 * describe this object.
71 *
72 * @param longVersion
73 * true if the long version should be returned, false if the
74 * short version should be returned
75 * @return the full configuration
76 */
77 @Override
78 public String getConfiguration(final boolean longVersion) {
79 return super.getConfiguration(longVersion) + ','
80 + (longVersion ? ("e=" + TextUtils.formatNumber(this.e)) : //N ON − N LS − 1
81 TextUtils.formatNumber(this.e));
82 }
83 }
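Stripped of the class scaffolding, the two schedules are simple closed-form functions of the iteration index t: the logarithmic schedule returns Ts/ln t (clamped to Ts while t < e), the exponential one (1 - ε)^t · Ts. A stand-alone sketch of just these formulas (the method names are ours, not part of the package):

```java
/** Stand-alone sketch of the two temperature schedules listed above. */
public final class Schedules {

  /** logarithmic schedule: Ts / ln(t), clamped to Ts while t < e */
  public static double logarithmic(final double Ts, final int t) {
    return (t < Math.E) ? Ts : (Ts / Math.log(t));
  }

  /** exponential schedule: (1 - eps)^t * Ts */
  public static double exponential(final double Ts, final double eps,
      final int t) {
    return Math.pow(1.0d - eps, t) * Ts;
  }

  public static void main(String[] args) {
    // both schedules fall monotonically once the clamp region is left
    System.out.println(logarithmic(10.0, 3) > logarithmic(10.0, 100)); // true
    System.out.println(exponential(10.0, 0.05, 1)
        > exponential(10.0, 0.05, 10)); // true
  }
}
```

The logarithmic schedule cools very slowly (it is the one for which convergence proofs exist), while the exponential schedule cools geometrically fast, which is why the book refers to the latter variant as "Simulated Quenching".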

56.1.4 Population-based Metaheuristics

56.1.4.1 Evolutionary Algorithms

In this package, we provide the implementations of the Evolutionary Algorithms discussed in Chapter 28.

Listing 56.11: An implementation providing the features of Algorithm 28.1 in a generational way (see Section 28.1.4.1).
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.ea;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Constants;
10 import org.goataa.impl.utils.Individual;
11 import org.goataa.spec.IBinarySearchOperation;
12 import org.goataa.spec.IGPM;
13 import org.goataa.spec.INullarySearchOperation;
14 import org.goataa.spec.IObjectiveFunction;
15 import org.goataa.spec.IOptimizationModule;
16 import org.goataa.spec.ISelectionAlgorithm;
17 import org.goataa.spec.ITerminationCriterion;
18 import org.goataa.spec.IUnarySearchOperation;
19
20 /**
21 * A straightforward implementation of the Evolutionary Algorithm given in
22 * Section 28.1.1 and specified in
23 * Algorithm 28.1 on page 255 with generational population treatment
24 * (see Section 28.1.4.1 on page 265).
25 *
26 * @param <G>
27 * the search space (genome, Section 4.1 on page 81)
28 * @param <X>
29 * the problem space (phenome, Section 2.1 on page 43)
30 * @author Thomas Weise
31 */
32 public final class SimpleGenerationalEA<G, X> extends EABase<G, X> {
33
34 /** a constant required by Java serialization */
35 private static final long serialVersionUID = 1;
36
37 /** instantiate the simple generational EA */
38 public SimpleGenerationalEA() {
39 super();
40 }
41
42 /**
43 * Perform a simple EA as defined in Algorithm 28.1 on page 255 while
44 * using a generational population handling (see
45 * Section 28.1.4.1 on page 265).
46 *
47 * @param f
48 * the objective function ( Definition D2.3 on page 44)
49 * @param create
50 * the nullary search operator
51 * ( Section 4.2 on page 83) for creating the initial
52 * genotype
53 * @param mutate
54 * the unary search operator (mutation,
55 * Section 4.2 on page 83) for modifying existing
56 * genotypes
57 * @param recombine
58 * the recombination operator ( Definition D28.14 on page 307)
59 * @param gpm
60 * the genotype-phenotype mapping
61 * ( Section 4.3 on page 85)
62 * @param sel
63 * the selection algorithm ( Section 28.4 on page 285)
64 * @param term
65 * the termination criterion
66 * ( Section 6.3.3 on page 105)
67 * @param ps
68 * the population size
69 * @param mps
70 * the mating pool size
71 * @param cr
72 * the crossover rate
73 * @param mr
74 * the mutation rate
75 * @param r
76 * the random number generator
77 * @return the individual containing the best candidate solution
78 * ( Definition D2.2 on page 43) found
79 * @param <G>
80 * the search space ( Section 4.1 on page 81)
81 * @param <X>
82 * the problem space ( Section 2.1 on page 43)
83 */
84 @SuppressWarnings("unchecked")
85 public static final <G, X> Individual<G, X> evolutionaryAlgorithm(
86 //
87 final IObjectiveFunction<X> f,
88 final INullarySearchOperation<G> create,//
89 final IUnarySearchOperation<G> mutate,//
90 final IBinarySearchOperation<G> recombine,
91 final IGPM<G, X> gpm,//
92 final ISelectionAlgorithm sel, final int ps, final int mps,
93 final double mr, final double cr, final ITerminationCriterion term,
94 final Random r) {
95
96 Individual<G, X>[] pop, mate;
97 int mateIndex, popIndex, i, j, k;
98 Individual<G, X> p, best;
99
100 best = new Individual<G, X>();
101 best.v = Constants.WORST_FITNESS;
102 pop = new Individual[ps];
103 mate = new Individual[mps];
104
105 // build the initial population of random individuals
106 for (i = pop.length; (--i) >= 0;) {
107 p = new Individual<G, X>();
108 pop[i] = p;
109 p.g = create.create(r);
110 }
111
112 // the basic loop of the Evolutionary Algorithm 28.1 on page 255
113 for (;;) {
114 // for each generation do...
115
116 // for each individual in the population
117 for (i = pop.length; (--i) >= 0;) {
118 p = pop[i];
119
120 // perform the genotype-phenotype mapping
121 p.x = gpm.gpm(p.g, r);
122 // compute the objective value
123 p.v = f.compute(p.x, r);
124
125 // is the current individual the best one so far?
126 if (p.v < best.v) {
127 best.assign(p);
128 }
129
130 // after each objective function evaluation, check if we should
131 // stop
132 if (term.terminationCriterion()) {
133 // if we should stop, return the best individual found
134 return best;
135 }
136 }
137
138 // perform the selection step
139 sel.select(pop, 0, pop.length, mate, 0, mate.length, r);
140
141 mateIndex = 0;
142 popIndex = ps;
143
144 // create cr*ps individuals with crossover
145 k = (int) (0.5 + (cr * ps));
146 for (j = 0; (j < k) && (popIndex > 0); j++) {
147 p = new Individual<G, X>();
148
149 // recombine an individual with another
150 p.g = recombine.recombine(mate[mateIndex].g,
151 mate[r.nextInt(mps)].g, r);
152 pop[--popIndex] = p;
153 mateIndex = ((mateIndex + 1) % mps);
154 }
155
156 // create mr*ps individuals with mutation
157 k = (int) (0.5 + (mr * ps));
158 for (j = 0; (j < k) && (popIndex > 0); j++) {
159 p = new Individual<G, X>();
160
161 // mutate an individual from the mating pool
162 p.g = mutate.mutate(mate[mateIndex].g, r);
163 pop[--popIndex] = p;
164 mateIndex = ((mateIndex + 1) % mps);
165 }
166
167 // copy the remaining individuals
168 for (; popIndex > 0;) {
169 pop[--popIndex] = mate[mateIndex];
170 mateIndex = ((mateIndex + 1) % mps);
171 }
172 }
173 }
174
175 /**
176 * Invoke the simple generational EA
177 *
178 * @param r
179 * the randomizer (will be used directly without setting the
180 * seed)
181 * @param term
182 * the termination criterion (will be used directly without
183 * resetting)
184 * @param result
185 * a list to which the results are to be appended
186 */
187 @Override
188 public void call(final Random r, final ITerminationCriterion term,
189 final List<Individual<G, X>> result) {
190
191 result.add(SimpleGenerationalEA.evolutionaryAlgorithm(//
192 this.getObjectiveFunction(),//
193 this.getNullarySearchOperation(), //
194 this.getUnarySearchOperation(),//
195 this.getBinarySearchOperation(),//
196 this.getGPM(), //
197 this.getSelectionAlgorithm(),//
198 this.getPopulationSize(),//
199 this.getMatingPoolSize(),//
200 this.getMutationRate(),//
201 this.getCrossoverRate(),//
202 term, //
203 r));
204 }
205
206 /**
207 * Get the name of the optimization module
208 *
209 * @param longVersion
210 * true if the long name should be returned, false if the short
211 * name should be returned
212 * @return the name of the optimization module
213 */
214 @Override
215 public String getName(final boolean longVersion) {
216 IOptimizationModule om;
217 boolean ga;
218
219 ga = false;
220
221 om = this.getNullarySearchOperation();
222 if (om != null) {
223 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //NON-NLS-1
224 ga = true;
225 }
226 } else {
227 om = this.getUnarySearchOperation();
228 if (om != null) {
229 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //NON-NLS-1
230 ga = true;
231 }
232 } else {
233 om = this.getBinarySearchOperation();
234 if (om != null) {
235 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //NON-NLS-1
236 ga = true;
237 }
238 }
239 }
240 }
241
242 if (ga) {
243 return (longVersion ? "sGeneticAlgorithm" : "sGA"); //NON-NLS-1 //NON-NLS-2
244 }
245
246 if (longVersion) {
247 return this.getClass().getSimpleName();
248 }
249 return "sEA"; //NON-NLS-1
250 }
251
252 }
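The offspring bookkeeping in the generational loop above is worth spelling out: in each generation, round(cr · ps) individuals are created by recombination, round(mr · ps) by mutation, and the remaining slots are filled with unmodified individuals from the mating pool. The rounding is done as k = (int)(0.5 + rate · ps), i.e., round half up. A small sketch of this arithmetic (the helper name is ours):

```java
/** Sketch of the offspring-count arithmetic in the generational loop above. */
public final class OffspringCounts {

  /** k = (int) (0.5 + (rate * ps)), i.e. round half up as in the listing */
  public static int count(final double rate, final int ps) {
    return (int) (0.5 + (rate * ps));
  }

  public static void main(String[] args) {
    final int ps = 100;
    final int byCrossover = count(0.3, ps); // 30 recombined offspring
    final int byMutation = count(0.6, ps); // 60 mutants
    // the remaining slots receive unmodified individuals from the mating pool
    System.out.println(ps - byCrossover - byMutation); // 10
  }
}
```

If cr + mr < 1, the leftover slots act as an implicit reproduction operator that copies selected parents unchanged into the next generation.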

Listing 56.12: An implementation providing the features of Algorithm 28.2 in a generational way.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.ea;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Archive;
10 import org.goataa.impl.utils.MOIndividual;
11 import org.goataa.spec.IBinarySearchOperation;
12 import org.goataa.spec.IFitnessAssignmentProcess;
13 import org.goataa.spec.IGPM;
14 import org.goataa.spec.IIndividualComparator;
15 import org.goataa.spec.INullarySearchOperation;
16 import org.goataa.spec.IObjectiveFunction;
17 import org.goataa.spec.IOptimizationModule;
18 import org.goataa.spec.ISelectionAlgorithm;
19 import org.goataa.spec.ITerminationCriterion;
20 import org.goataa.spec.IUnarySearchOperation;
21
22 /**
23 * A straightforward implementation of the multi-objective Evolutionary
24 * Algorithm given in Section 28.1.1 and specified in
25 * Algorithm 28.2 on page 256 with generational population treatment
26 * (see Section 28.1.4.1 on page 265).
27 *
28 * @param <G>
29 * the search space (genome, Section 4.1 on page 81)
30 * @param <X>
31 * the problem space (phenome, Section 2.1 on page 43)
32 * @author Thomas Weise
33 */
34 public class SimpleGenerationalMOEA<G, X> extends MOEABase<G, X> {
35
36 /** a constant required by Java serialization */
37 private static final long serialVersionUID = 1;
38
39 /** instantiate the simple generational EA */
40 public SimpleGenerationalMOEA() {
41 super();
42 }
43
44 /**
45 * Perform a simple multi-objective EA as defined in Algorithm 28.2 on page 256 while
46 * using a generational population handling (see
47 * Section 28.1.4.1 on page 265).
48 *
49 * @param f
50 * the objective functions ( Definition D2.3 on page 44)
51 * @param cmp
52 * the comparator
53 * @param create
54 * the nullary search operator
55 * ( Section 4.2 on page 83) for creating the initial
56 * genotype
57 * @param mutate
58 * the unary search operator (mutation,
59 * Section 4.2 on page 83) for modifying existing
60 * genotypes
61 * @param recombine
62 * the recombination operator ( Definition D28.14 on page 307)
63 * @param gpm
64 * the genotype-phenotype mapping
65 * ( Section 4.3 on page 85)
66 * @param sel
67 * the selection algorithm ( Section 28.4 on page 285)
68 * @param fa
69 * the fitness assignment process
70 * @param term
71 * the termination criterion
72 * ( Section 6.3.3 on page 105)
73 * @param ps
74 * the population size
75 * @param mps
76 * the mating pool size
77 * @param cr
78 * the crossover rate
79 * @param mr
80 * the mutation rate
81 * @param maxSolutions
82 * the maximum number of solutions
83 * @param r
84 * the random number generator
85 * @param result
86 * the list to add the best candidate solutions
87 * ( Definition D2.2 on page 43) to
88 * @param <G>
89 * the search space ( Section 4.1 on page 81)
90 * @param <X>
91 * the problem space ( Section 2.1 on page 43)
92 */
93 @SuppressWarnings("unchecked")
94 public static final <G, X> void evolutionaryAlgorithmMO(
95 //
96 final IObjectiveFunction<X>[] f,//
97 final IIndividualComparator cmp,//
98 final INullarySearchOperation<G> create,//
99 final IUnarySearchOperation<G> mutate,//
100 final IBinarySearchOperation<G> recombine, final IGPM<G, X> gpm,//
101 final ISelectionAlgorithm sel,//
102 final IFitnessAssignmentProcess fa,//
103 final int ps, final int mps,//
104 final double mr,//
105 final double cr,//
106 final ITerminationCriterion term, final int maxSolutions,//
107 final Random r,//
108 final List<MOIndividual<G, X>> result) {
109
110 MOIndividual<G, X>[] pop, mate;
111 int mateIndex, popIndex, i, j, k;
112 MOIndividual<G, X> p;
113 Archive<G, X> best;
114
115 best = new Archive<G, X>(maxSolutions, cmp);
116 pop = new MOIndividual[ps];
117 mate = new MOIndividual[mps];
118
119 // build the initial population of random individuals
120 for (i = pop.length; (--i) >= 0;) {
121 p = new MOIndividual<G, X>(f.length);
122 pop[i] = p;
123 p.g = create.create(r);
124 }
125
126 // the basic loop of the Evolutionary Algorithm 28.1 on page 255
127 for (;;) {
128 // for each generation do...
129
130 // for each individual in the population
131 for (i = pop.length; (--i) >= 0;) {
132 p = pop[i];
133
134 // perform the genotype-phenotype mapping
135 p.x = gpm.gpm(p.g, r);
136
137 // compute the objective values
138 p.evaluate(f, r);
139
140 // update the archive
141 best.add(p);
142
143 // after each objective function evaluation, check if we should
144 // stop
145 if (term.terminationCriterion()) {
146 // if we should stop, return the best individuals found
147 result.addAll(best);
148 return;
149 }
150 }
151
152 // assign the fitness
153 fa.assignFitness(pop, 0, pop.length, cmp, r);
154
155 // perform the selection step
156 sel.select(pop, 0, pop.length, mate, 0, mate.length, r);
157
158 mateIndex = 0;
159 popIndex = ps;
160
161 // create cr*ps individuals with crossover
162 k = (int) (0.5 + (cr * ps));
163 for (j = 0; (j < k) && (popIndex > 0); j++) {
164 p = new MOIndividual<G, X>(f.length);
165
166 // recombine an individual with another
167 p.g = recombine.recombine(mate[mateIndex].g,
168 mate[r.nextInt(mps)].g, r);
169 pop[--popIndex] = p;
170 mateIndex = ((mateIndex + 1) % mps);
171 }
172
173 // create mr*ps individuals with mutation
174 k = (int) (0.5 + (mr * ps));
175 for (j = 0; (j < k) && (popIndex > 0); j++) {
176 p = new MOIndividual<G, X>(f.length);
177
178 // mutate an individual from the mating pool
179 p.g = mutate.mutate(mate[mateIndex].g, r);
180 pop[--popIndex] = p;
181 mateIndex = ((mateIndex + 1) % mps);
182 }
183
184 // copy the remaining individuals
185 for (; popIndex > 0;) {
186 pop[--popIndex] = mate[mateIndex];
187 mateIndex = ((mateIndex + 1) % mps);
188 }
189 }
190 }
191
192 /**
193 * Invoke the multi-objective EA.
194 *
195 * @param r
196 * the randomizer (will be used directly without setting the
197 * seed)
198 * @param term
199 * the termination criterion (will be used directly without
200 * resetting)
201 * @param result
202 * a list to which the results are to be appended
203 */
204 @Override
205 public void call(final Random r, final ITerminationCriterion term,
206 final List<MOIndividual<G, X>> result) {
207
208 evolutionaryAlgorithmMO(//
209 this.getObjectiveFunctions(),//
210 this.getComparator(),//
211 this.getNullarySearchOperation(), //
212 this.getUnarySearchOperation(),//
213 this.getBinarySearchOperation(),//
214 this.getGPM(), //
215 this.getSelectionAlgorithm(),//
216 this.getFitnesAssignmentProcess(),//
217 this.getPopulationSize(),//
218 this.getMatingPoolSize(),//
219 this.getMutationRate(),//
220 this.getCrossoverRate(),//
221 term, //
222 this.getMaxSolutions(),//
223 r,//
224 result);
225 }
226
227 /**
228 * Get the name of the optimization module
229 *
230 * @param longVersion
231 * true if the long name should be returned, false if the short
232 * name should be returned
233 * @return the name of the optimization module
234 */
235 @Override
236 public String getName(final boolean longVersion) {
237 IOptimizationModule om;
238 boolean ga;
239
240 ga = false;
241
242 om = this.getNullarySearchOperation();
243 if (om != null) {
244 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //NON-NLS-1
245 ga = true;
246 }
247 } else {
248 om = this.getUnarySearchOperation();
249 if (om != null) {
250 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //NON-NLS-1
251 ga = true;
252 }
253 } else {
254 om = this.getBinarySearchOperation();
255 if (om != null) {
256 if (om.getClass().getCanonicalName().contains("strings.bits.")) { //NON-NLS-1
257 ga = true;
258 }
259 }
260 }
261 }
262
263 if (ga) {
264 return (longVersion ? "sMOGeneticAlgorithm" : "sMOGA"); //NON-NLS-1 //NON-NLS-2
265 }
266
267 if (longVersion) {
268 return this.getClass().getSimpleName();
269 }
270 return "sMOEA"; //NON-NLS-1
271 }
272
273 }
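The archive `best` in the listing above retains up to maxSolutions mutually non-dominated individuals according to the supplied comparator. For the common case of Pareto dominance under minimization, a vector a dominates b if it is no worse in every objective and strictly better in at least one; a stand-alone sketch of this test (our own helper, standing in for the configurable comparator actually used):

```java
/** Sketch of a Pareto-dominance test for minimization (our helper). */
public final class Dominance {

  /**
   * Does objective vector a dominate b, i.e., is a no worse in every
   * objective and strictly better in at least one?
   */
  public static boolean dominates(final double[] a, final double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) {
        return false; // worse in one objective: a cannot dominate b
      }
      if (a[i] < b[i]) {
        strictlyBetter = true;
      }
    }
    return strictlyBetter;
  }

  public static void main(String[] args) {
    System.out.println(dominates(new double[] { 1, 2 },
        new double[] { 2, 2 })); // true
    System.out.println(dominates(new double[] { 1, 3 },
        new double[] { 2, 2 })); // false: trade-off, neither dominates
  }
}
```

Candidates between which this test returns false in both directions are incomparable trade-offs, which is exactly why the MOEA returns an archive of solutions instead of a single best individual.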

56.1.4.1.1 Selection Algorithms

In this package, we provide the implementations of the selection algorithms discussed in Section 28.4.

Listing 56.13: The truncation selection method given in Algorithm 28.8 on page 289.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.impl.algorithms.ea.selection;
10
11 import java.util.Arrays;
12 import java.util.Random;
13
14 import org.goataa.impl.OptimizationModule;
15 import org.goataa.impl.utils.Constants;
16 import org.goataa.impl.utils.Individual;
17 import org.goataa.spec.ISelectionAlgorithm;
18
19 /**
20 * An implementation of the truncation selection algorithm which unites
21 * both, Algorithm 28.7 on page 289 and
22 * Algorithm 28.8 on page 289 from
23 * Section 28.4.2 in one class.
24 *
25 * @author Thomas Weise
26 */
27 public class TruncationSelection extends OptimizationModule implements
28 ISelectionAlgorithm {
29
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** the number of best individuals to select */
34 private final int k;
35
36 /** Create a new instance of the truncation selection algorithm */
37 public TruncationSelection() {
38 this(-1);
39 }
40
41 /**
42 * Create a new instance of the truncation selection algorithm
43 *
44 * @param ak
45 * the number of survivors
46 */
47 public TruncationSelection(final int ak) {
48 super();
49 this.k = ((ak > 0) ? ak : Integer.MAX_VALUE);
50 }
51
52 /**
53 * Perform the truncation selection as defined in
54 * Section 28.4.2.
55 *
56 * @param pop
57 * the population (see Definition D4.19 on page 90)
58 * @param popStart
59 * the index of the first individual in the population
60 * @param popCount
61 * the number of individuals to select from
62 * @param mate
63 * the mating pool (see Definition D28.10 on page 285)
64 * @param mateStart
65 * the first index to which the selected individuals should be
66 * copied
67 * @param mateCount
68 * the number of individuals to be selected
69 * @param r
70 * the random number generator
71 */
72 @Override
73 public void select(final Individual<?, ?>[] pop, final int popStart,
74 final int popCount, final Individual<?, ?>[] mate,
75 final int mateStart, final int mateCount, final Random r) {
76 int i, x, e;
77
78 Arrays.sort(pop, popStart, popStart + popCount,
79 Constants.FITNESS_SORT_ASC_COMPARATOR);
80
81 i = mateStart;
82 e = (i + mateCount);
83 while (i < e) {
84 x = Math.min(this.k, Math.min(e - i, popCount));
85 System.arraycopy(pop, popStart, mate, i, x);
86 i += x;
87 }
88 }
89
90 /**
91 * Get the name of the optimization module
92 *
93 * @param longVersion
94 * true if the long name should be returned, false if the short
95 * name should be returned
96 * @return the name of the optimization module
97 */
98 @Override
99 public String getName(final boolean longVersion) {
100 if (longVersion) {
101 return this.getClass().getSimpleName();
102 }
103 return "Trunc"; //NON-NLS-1
104 }
105
106 /**
107 * Get the full configuration which holds all the data necessary to
108 * describe this object.
109 *
110 * @param longVersion
111 * true if the long version should be returned, false if the
112 * short version should be returned
113 * @return the full configuration
114 */
115 @Override
116 public String getConfiguration(final boolean longVersion) {
117 if ((this.k > 0) && (this.k < Integer.MAX_VALUE)) {
118 return "k=" + String.valueOf(this.k); //NON-NLS-1
119 }
120 return super.getConfiguration(longVersion);
121 }
122 }

Listing 56.14: The tournament selection method given in Algorithm 28.11 on page 298.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.impl.algorithms.ea.selection;
10
11 import java.util.Random;
12
13 import org.goataa.impl.OptimizationModule;
14 import org.goataa.impl.utils.Individual;
15 import org.goataa.spec.ISelectionAlgorithm;
16
17 /**
18 * The tournament selection algorithm with replacement as specified in
19 * Algorithm 28.11 on page 298 in
20 * Section 28.4.4.
21 *
22 * @author Thomas Weise
23 */
24 public class TournamentSelection extends OptimizationModule implements
25 ISelectionAlgorithm {
26
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /** the tournament size */
31 private final int k;
32
33 /**
34 * Create a new instance of the tournament selection algorithm with
35 * replacement
36 *
37 * @param ik
38 * the tournament size
39 */
40 public TournamentSelection(final int ik) {
41 super();
42 this.k = ik;
43 }
44
45 /**
46 * Perform the tournament selection as defined in
47 * Algorithm 28.11 on page 298.
48 *
49 * @param pop
50 * the population (see Definition D4.19 on page 90)
51 * @param popStart
52 * the index of the first individual in the population
53 * @param popCount
54 * the number of individuals to select from
55 * @param mate
56 * the mating pool (see Definition D28.10 on page 285)
57 * @param mateStart
58 * the first index to which the selected individuals should be
59 * copied
60 * @param mateCount
61 * the number of individuals to be selected
62 * @param r
63 * the random number generator
64 */
65 @Override
66 public void select(final Individual<?, ?>[] pop, final int popStart,
67 final int popCount, final Individual<?, ?>[] mate,
68 final int mateStart, final int mateCount, final Random r) {
69 Individual<?, ?> x, y;
70 int i, j;
71
72 for (i = (mateStart + mateCount); (--i) >= mateStart;) {
73 x = pop[popStart + r.nextInt(popCount)];
74 for (j = 2; j <= this.k; j++) {
75 y = pop[popStart + r.nextInt(popCount)];
76 if (y.v < x.v) {
77 x = y;
78 }
79 }
80 mate[i] = x;
81 }
82
83 }
84
85 /**
86 * Get the name of the optimization module
87 *
88 * @param longVersion
89 * true if the long name should be returned, false if the short
90 * name should be returned
91 * @return the name of the optimization module
92 */
93 @Override
94 public String getName(final boolean longVersion) {
95 if (longVersion) {
96 return this.getClass().getSimpleName();
97 }
98 return "Tour"; //NON-NLS-1
99 }
100
101 /**
102 * Get the full configuration which holds all the data necessary to
103 * describe this object.
104 *
105 * @param longVersion
106 * true if the long version should be returned, false if the
107 * short version should be returned
108 * @return the full configuration
109 */
110 @Override
111 public String getConfiguration(final boolean longVersion) {
112 return "k=" + String.valueOf(this.k); //NON-NLS-1
113 }
114 }
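Tournament selection with replacement, as implemented above, repeatedly draws k contestants uniformly at random and copies the one with the lowest objective value into the mating pool; larger k means higher selection pressure. A stand-alone sketch over raw fitness values (a simplification of the class above, names ours):

```java
import java.util.Random;

/** Stand-alone sketch of k-ary tournament selection over raw fitness values. */
public final class Tournament {

  /** return the index of the tournament winner (minimization) */
  public static int select(final double[] fitness, final int k,
      final Random r) {
    int best = r.nextInt(fitness.length); // first contestant
    for (int j = 2; j <= k; j++) {
      final int y = r.nextInt(fitness.length); // next contestant
      if (fitness[y] < fitness[best]) {
        best = y; // keep the better (smaller) objective value
      }
    }
    return best;
  }

  public static void main(String[] args) {
    final double[] pop = { 5.0, 1.0, 3.0, 4.0 };
    final Random r = new Random(42);
    final int[] wins = new int[pop.length];
    for (int i = 0; i < 10000; i++) {
      wins[select(pop, 2, r)]++;
    }
    // the best individual (index 1, value 1.0) should win most tournaments
    System.out.println(wins[1] > wins[0] && wins[1] > wins[2]);
  }
}
```

Because each tournament is independent, even the worst individual retains a non-zero survival chance of (1/n)^k, which preserves diversity compared to truncation selection.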

Listing 56.15: The random selection method given in Algorithm 28.15 on page 302.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package org.goataa.impl.algorithms.ea.selection;
10
11 import java.util.Random;
12
13 import org.goataa.impl.utils.Individual;
14 import org.goataa.spec.ISelectionAlgorithm;
15
16 /**
17 * A selection operator which randomly selects individuals as specified in
18 * Algorithm 28.15 on page 302. This operator should mainly be used
19 * for parental selection but not for survival selection.
20 *
21 * @author Thomas Weise
22 */
23 public class RandomSelection extends SelectionAlgorithm {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** the globally shared instance of random selection */
29 public static final ISelectionAlgorithm RANDOM_SELECTION = new RandomSelection();
30
31 /** Create a new instance of the random selection algorithm */
32 protected RandomSelection() {
33 super();
34 }
35
36 /**
37 * Perform the random selection as defined in
38 * Algorithm 28.15 on page 302.
39 *
40 * @param pop
41 * the population (see Definition D4.19 on page 90)
42 * @param popStart
43 * the index of the first individual in the population
44 * @param popCount
45 * the number of individuals to select from
46 * @param mate
47 * the mating pool (see Definition D28.10 on page 285)
48 * @param mateStart
49 * the first index to which the selected individuals should be
50 * copied
51 * @param mateCount
52 * the number of individuals to be selected
53 * @param r
54 * the random number generator
55 */
56 @Override
57 public final void select(final Individual<?, ?>[] pop,
58 final int popStart, final int popCount,
59 final Individual<?, ?>[] mate, final int mateStart,
60 final int mateCount, final Random r) {
61 int i;
62
63 for (i = (mateStart + mateCount); (--i) >= mateStart;) {
64 mate[i] = pop[popStart + r.nextInt(popCount)];
65 }
66
67 }
68
69 /**
70 * Get the name of the optimization module
71 *
72 * @param longVersion
73 * true if the long name should be returned, false if the short
74 * name should be returned
75 * @return the name of the optimization module
76 */
77 @Override
78 public String getName(final boolean longVersion) {
79 if (longVersion) {
80 return this.getClass().getSimpleName();
81 }
82 return "RND"; //$NON-NLS-1$
83 }
84 }
56.1.4.1.2 Fitness Assignment Processes

This package provides different fitness assignment processes as discussed in Section 28.3 on page 274.

Listing 56.16: Pareto ranking fitness assignment.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.ea.fitnessAssignment;
5
6 import java.util.Random;
7
8 import org.goataa.impl.utils.MOIndividual;
9 import org.goataa.spec.IFitnessAssignmentProcess;
10 import org.goataa.spec.IIndividualComparator;
11
12 /**
13 * The Pareto ranking algorithm as discussed in
14 * Section 28.3.3 on page 275.
15 *
16 * @author Thomas Weise
17 */
18 public class ParetoRanking extends FitnessAssignmentProcess {
19 /** a constant required by Java serialization */
20 private static final long serialVersionUID = 1;
21
22 /** the globally shared Pareto ranking procedure */
23 public static final IFitnessAssignmentProcess PARETO_RANKING = new ParetoRanking();
24
25 /** Instantiate the Pareto ranking algorithm */
26 protected ParetoRanking() {
27 super();
28 }
29
30 /**
31 * Rank individuals according to the Pareto dominance relationship
32 * Section 28.3.3 on page 275.
33 *
34 * @param pop
35 * the population of individuals
36 * @param start
37 * the index of the first individual in the population
38 * @param count
39 * the number of individuals in the population
40 * @param cmp
41 * the individual comparator which tells which individual is
42 * better than another one
43 * @param r
44 * the randomizer
45 */
46 @Override
47 public void assignFitness(final MOIndividual<?, ?>[] pop,
48 final int start, final int count, final IIndividualComparator cmp,
49 final Random r) {
50 int i, j, res;
51 MOIndividual<?, ?> a, b;
52
53 for (i = (start + count); (--i) >= start;) {
54 pop[i].v = 1d;
55 }
56
57 // compare each individual
58 for (i = (start + count); (--i) > start;) {
59 a = pop[i];
60 // with each other individual
61 for (j = i; (--j) >= start;) {
62 b = pop[j];
63 res = cmp.compare(a, b);
64 if (res < 0) {
65 // 'a' dominates 'b' -> make fitness of 'b' bigger = worse
66 b.v++;
67 } else {
68 if (res > 0) {
69 // 'b' dominates 'a' -> make fitness of 'a' bigger = worse
70 a.v++;
71 }
72 }
73 }
74 }
75 }
76
77 }
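The core idea of the listing — a fitness of 1 plus the number of individuals that dominate the given one — can be illustrated standalone. The sketch below assumes two minimization objectives and uses hypothetical names; it is not part of the goataa framework:

```java
import java.util.Arrays;

/**
 * A standalone illustration of Pareto ranking: fitness = 1 + number of
 * individuals dominating the given one. Two minimization objectives per
 * individual; all names here are hypothetical.
 */
public class ParetoRankingDemo {

  /** does a dominate b? (no worse in all objectives, better in at least one) */
  static boolean dominates(final double[] a, final double[] b) {
    boolean better = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) {
        return false;
      }
      if (a[i] < b[i]) {
        better = true;
      }
    }
    return better;
  }

  /** assign each individual the fitness 1 + (number of its dominators) */
  static int[] rank(final double[][] pop) {
    final int[] fitness = new int[pop.length];
    Arrays.fill(fitness, 1);
    for (int i = 0; i < pop.length; i++) {
      for (int j = 0; j < pop.length; j++) {
        if ((i != j) && dominates(pop[j], pop[i])) {
          fitness[i]++;
        }
      }
    }
    return fitness;
  }

  public static void main(final String[] args) {
    // (2,3) is dominated by (2,2); the other three are non-dominated
    final double[][] pop = { { 1, 4 }, { 2, 2 }, { 3, 1 }, { 2, 3 } };
    System.out.println(Arrays.toString(rank(pop))); // prints [1, 1, 1, 2]
  }
}
```

All non-dominated individuals receive the best (lowest) fitness of 1, so the subsequent selection step treats the whole Pareto frontier as equally good.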
56.1.4.2 Evolution Strategies

In this package, we provide the implementations of the Evolution Strategies discussed in
Chapter 30.
Listing 56.17: An example implementation of the Evolution Strategy Algorithm 30.1 on
page 361.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.es;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.LocalSearchAlgorithm;
10 import org.goataa.impl.algorithms.ea.selection.RandomSelection;
11 import org.goataa.impl.algorithms.ea.selection.TruncationSelection;
12 import org.goataa.impl.searchOperations.strings.real.nary.DoubleArrayDominantRecombination;
13 import org.goataa.impl.searchOperations.strings.real.nary.DoubleArrayIntermediateRecombination;
14 import org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation;
15 import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayStdDevNormalMutation;
16 import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayStrategyLogNormalMutation;
17 import org.goataa.impl.utils.Constants;
18 import org.goataa.impl.utils.Individual;
19 import org.goataa.impl.utils.TextUtils;
20 import org.goataa.spec.IGPM;
21 import org.goataa.spec.INarySearchOperation;
22 import org.goataa.spec.INullarySearchOperation;
23 import org.goataa.spec.IObjectiveFunction;
24 import org.goataa.spec.IOptimizationModule;
25 import org.goataa.spec.ISelectionAlgorithm;
26 import org.goataa.spec.ITerminationCriterion;
27 import org.goataa.spec.IUnarySearchOperation;
28
29 /**
30 * An Evolution Strategy implementation which follows the definitions in
31 * Chapter 30 and which comes close to
32 * Algorithm 30.1. This implementation uses vectors of standard
33 * deviations as endogenous strategy parameters, as discussed in
34 * Section 30.4.1.2 on page 365.
35 *
36 * @param <X>
37 * the problem space (phenome, Section 2.1 on page 43)
38 * @author Thomas Weise
39 */
40 public class EvolutionStrategy<X> extends
41 LocalSearchAlgorithm<double[], X, Individual<double[], X>> {
42
43 /** a constant required by Java serialization */
44 private static final long serialVersionUID = 1;
45
46 /** the survival selection algorithm */
47 private ISelectionAlgorithm survivalSelection;
48
49 /** the parental selection algorithm */
50 private ISelectionAlgorithm parentalSelection;
51
52 /**
53 * mu - the number of parents in the mating pool, see
54 * Section 28.1.4.3 on page 266
55 */
56 private int mu;
57
58 /**
59 * lambda - the number of offsprings, see
60 * Section 28.1.4.3 on page 266
61 */
62 private int lambda;
63
64 /**
65 * rho - the number of parents per offspring, see
66 * Section 28.1.4.3 on page 266
67 */
68 private int rho;
69
70 /** true for (lambda+mu) strategies, false for (lambda,mu) strategies */
71 private boolean plus;
72
73 /** the dimension */
74 private int dimension;
75
76 /** the minimum values per dimension */
77 private double minX;
78
79 /** the maximum values per dimension */
80 private double maxX;
81
82 /** do we need an internal setup ? */
83 private boolean internalSetupNeeded;
84
85 /** the unary reproduction operation was set by the user */
86 private boolean unaryOperationSet;
87
88 /** the nullary reproduction operation was set by the user */
89 private boolean nullaryOperationSet;
90
91 /** the internal search operation for creating the strategy parameters */
92 private INullarySearchOperation<double[]> createStrategyParameters;
93
94 /** the strategy parameter mutator */
95 private IUnarySearchOperation<double[]> mutateStrategyParameters;
96
97 /** the genotype crossover method */
98 private INarySearchOperation<double[]> crossoverGenotype;
99
100 /** the strategy crossover method */
101 private INarySearchOperation<double[]> crossoverStrategy;
102
103 /** instantiate the evolution strategy */
104 public EvolutionStrategy() {
105 super();
106
107 this.survivalSelection = new TruncationSelection();
108 this.parentalSelection = RandomSelection.RANDOM_SELECTION;
109 this.crossoverGenotype = DoubleArrayDominantRecombination.DOUBLE_ARRAY_DOMINANT_RECOMBINATION;
110 this.crossoverStrategy = DoubleArrayIntermediateRecombination.DOUBLE_ARRAY_INTERMEDIATE_RECOMBINATION;
111 this.lambda = 100;
112 this.mu = 10;
113 this.rho = 2;
114 this.plus = true;
115 this.dimension = 10;
116 this.minX = -1d;
117 this.maxX = 1d;
118 this.internalSetupNeeded = true;
119 }
120
121 /**
122 * Apply an Evolution Strategy implementation which follows the
123 * definitions in Chapter 30 and which comes
124 * close to Algorithm 30.1. This implementation uses vectors
125 * of standard deviations as endogenous strategy parameters, as discussed
126 * in Section 30.4.1.2 on page 365.
127 *
128 * @param f
129 * the objective function ( Definition D2.3 on page 44)
130 * @param createGenotype
131 * the nullary search operator
132 * ( Section 4.2 on page 83) for creating the initial
133 * genotypes
134 * @param createStrategy
135 * the nullary search operator
136 * ( Section 4.2 on page 83) for creating the initial
137 * strategy parameters
138 * @param mutateGenotype
139 * the unary search operator (mutation,
140 * Section 4.2 on page 83) for modifying existing
141 * genotypes
142 * @param mutateStrategy
143 * the unary search operator (mutation,
144 * Section 4.2 on page 83) for modifying existing
145 * strategy parameters
146 * @param recombineGenotype
147 * the n-ary recombination operator for genotypes
148 * @param recombineStrategy
149 * the n-ary recombination operator for strategy parameters
150 * @param gpm
151 * the genotype-phenotype mapping
152 * ( Section 4.3 on page 85)
153 * @param survivalSel
154 * the survival selection algorithm ( Section 28.4 on page 285)
155 * @param parentalSel
156 * the parental/sexual selection algorithm
157 * ( Section 28.4 on page 285)
158 * @param m
159 * the number of parents in the mating pool
160 * @param la
161 * the number of offspring
162 * @param rh
163 * the number of parents per offspring
164 * @param pl
165 * true for a (lambda+mu), false for a (lambda,mu) strategy
166 * @param term
167 * the termination criterion
168 * ( Section 6.3.3 on page 105)
169 * @param r
170 * the random number generator
171 * @return the individual containing the best candidate solution
172 * ( Definition D2.2 on page 43) found
173 * @param <X>
174 * the problem space ( Section 2.1 on page 43)
175 */
176 @SuppressWarnings("unchecked")
177 public static final <X> Individual<double[], X> evolutionaryStrategy(
178 //
179 final IObjectiveFunction<X> f,
180 final INullarySearchOperation<double[]> createGenotype,//
181 final INullarySearchOperation<double[]> createStrategy,//
182 final IUnarySearchOperation<double[]> mutateGenotype,//
183 final IUnarySearchOperation<double[]> mutateStrategy,//
184 final INarySearchOperation<double[]> recombineGenotype,//
185 final INarySearchOperation<double[]> recombineStrategy,//
186 final IGPM<double[], X> gpm,//
187 final ISelectionAlgorithm survivalSel,//
188 final ISelectionAlgorithm parentalSel,//
189 final int m,//
190 final int la,//
191 final int rh,//
192 final boolean pl,//
193 final ITerminationCriterion term,//
194 final Random r) {
195
196 ESIndividual<X>[] selected, pop, parents;
197 double[][] parents2, parents3;
198 int i, j;
199 ESIndividual<X> p;
200 Individual<double[], X> best;
201 DoubleArrayStdDevNormalMutation mut;
202
203 best = new ESIndividual<X>();
204 best.v = Constants.WORST_FITNESS;
205 selected = new ESIndividual[m];
206 pop = selected;
207 parents = new ESIndividual[rh];
208 parents2 = new double[rh][];
209 parents3 = new double[rh][];
210
211 if (mutateGenotype instanceof DoubleArrayStdDevNormalMutation) {
212 mut = ((DoubleArrayStdDevNormalMutation) mutateGenotype);
213 } else {
214 mut = null;
215 }
216
217 // build the initial population of random individuals
218 for (i = selected.length; (--i) >= 0;) {
219 p = new ESIndividual<X>();
220 selected[i] = p;
221 p.g = createGenotype.create(r);
222 p.w = createStrategy.create(r);
223 }
224
225 // the basic loop of the Evolution Strategy Algorithm 30.1 on page 361
226 for (;;) {
227 // for each generation do...
228
229 // for each individual in the population
230 for (i = pop.length; (--i) >= 0;) {
231 p = pop[i];
232
233 // perform the genotype-phenotype mapping
234 p.x = gpm.gpm(p.g, r);
235 // compute the objective value
236 p.v = f.compute(p.x, r);
237
238 // is the current individual the best one so far?
239 if (p.v < best.v) {
240 best.assign(p);
241 }
242
243 // after each objective function evaluation, check if we should
244 // stop
245 if (term.terminationCriterion()) {
246 // if we should stop, return the best individual found
247 return best;
248 }
249 }
250
251 // perform the selection step, usually truncation selection (see
252 // Algorithm 28.8 on page 289)
253 if (pop != selected) {
254 // for (lambda+mu) strategies use both, parents and offspring
255 if (pl) {
256 System.arraycopy(selected, 0, pop, la, m);
257 }
258 survivalSel.select(pop, 0, pop.length, selected, 0,
259 selected.length, r);
260 }
261
262 if (pop == selected) {
263 pop = new ESIndividual[pl ? (la + m) : la];
264 }
265
266 // fill the new population with new offspring
267 for (i = pop.length; (--i) >= 0;) {
268 // select the parents for the new individual, which usually is done
269 // randomly with Algorithm 28.15 on page 302
270 parentalSel.select(selected, 0, selected.length, parents, 0,
271 parents.length, r);
272
273 for (j = parents.length; (--j) >= 0;) {
274 p = parents[j];
275 parents2[j] = p.g;
276 parents3[j] = p.w;
277 }
278
279 p = new ESIndividual<X>();
280 pop[i] = p;
281
282 // usually done via dominant recombination, see
283 // Algorithm 30.2 on page 363
284 p.g = recombineGenotype.combine(parents2, r);
285
286 // We only use the strategy parameters if the mutator actually uses
287 // them: mut is null if it is not the ES-typical mutator.
288 if (mut != null) {
289 // usually done via intermediate recombination, see
290 // Algorithm 30.3 on page 363
291 p.w = recombineStrategy.combine(parents3, r);
292
293 // usually done via log-normal mutation, see
294 // Algorithm 30.8 on page 371
295 p.w = mutateStrategy.mutate(p.w, r);
296 mut.setStdDevs(p.w);
297 }
298
299 // usually done via normally distributed mutation as specified in
300 // Algorithm 30.5 on page 366.
301 p.g = mutateGenotype.mutate(p.g, r);
302 }
303 }
304 }
305
306 /**
307 * Invoke the evolution strategy
308 *
309 * @param r
310 * the randomizer (will be used directly without setting the
311 * seed)
312 * @param term
313 * the termination criterion (will be used directly without
314 * resetting)
315 * @param result
316 * a list to which the results are to be appended
317 */
318 @Override
319 public void call(final Random r, final ITerminationCriterion term,
320 final List<Individual<double[], X>> result) {
321
322 this.internalSetup();
323
324 result.add(EvolutionStrategy.evolutionaryStrategy(//
325 this.getObjectiveFunction(),//
326 this.getNullarySearchOperation(),//
327 this.createStrategyParameters,//
328 this.getUnarySearchOperation(),//
329 this.mutateStrategyParameters, //
330 this.crossoverGenotype, this.crossoverStrategy,//
331 this.getGPM(),//
332 this.getSelectionAlgorithm(), //
333 this.parentalSelection,//
334 this.getMu(),//
335 this.getLambda(),//
336 this.getRho(), //
337 this.isPlus(), //
338 term,//
339 r));
340 }
341
342 /**
343 * Set the dimension of the search space
344 *
345 * @param dim
346 * the dimension of the search space
347 */
348 public final void setDimension(final int dim) {
349 this.dimension = dim;
350 this.internalSetupNeeded = true;
351 }
352
353 /**
354 * Get the dimension of the search space
355 *
356 * @return the dimension of the search space
357 */
358 public final int getDimension() {
359 return this.dimension;
360 }
361
362 /**
363 * Set the minimum value of the search space
364 *
365 * @param x
366 * the minimum value of the search space
367 */
368 public final void setMinimum(final double x) {
369 this.minX = x;
370 this.internalSetupNeeded = true;
371 }
372
373 /**
374 * Get the maximum value of the search space
375 *
376 * @return the maximum value of the search space
377 */
378 public final double getMaximum() {
379 return this.maxX;
380 }
381
382 /**
383 * Set the maximum value of the search space
384 *
385 * @param x
386 * the maximum value of the search space
387 */
388 public final void setMaximum(final double x) {
389 this.maxX = x;
390 this.internalSetupNeeded = true;
391 }
392
393 /**
394 * Get the minimum value of the search space
395 *
396 * @return the minimum value of the search space
397 */
398 public final double getMinimum() {
399 return this.minX;
400 }
401
402 /** perform some internal setup */
403 private final void internalSetup() {
404 final double maxS;
405
406 if (this.internalSetupNeeded) {
407 this.internalSetupNeeded = false;
408
409 if (!(this.nullaryOperationSet)) {
410 super.setNullarySearchOperation(new DoubleArrayUniformCreation(
411 this.dimension, this.minX, this.maxX));
412 }
413
414 maxS = Math.sqrt(this.maxX - this.minX);
415
416 this.createStrategyParameters = new DoubleArrayUniformCreation(
417 this.dimension, Double.MIN_VALUE, maxS);
418 this.mutateStrategyParameters = new DoubleArrayStrategyLogNormalMutation(
419 maxS);
420
421 if (!(this.unaryOperationSet)) {
422 super.setUnarySearchOperation(new DoubleArrayStdDevNormalMutation(
423 this.dimension, this.minX, this.maxX));
424 }
425 }
426 }
427
428 /**
429 * Set mu, the number of parents in the mating pool (see
430 * Section 28.1.4.3 on page 266)
431 *
432 * @param m
433 * the parameter mu
434 */
435 public final void setMu(final int m) {
436 this.mu = Math.max(1, m);
437 }
438
439 /**
440 * Get mu, the number of parents in the mating pool (see
441 * Section 28.1.4.3 on page 266)
442 *
443 * @return the parameter mu
444 */
445 public final int getMu() {
446 return this.mu;
447 }
448
449 /**
450 * Set lambda, the number of offspring (see
451 * Section 28.1.4.3 on page 266)
452 *
453 * @param l
454 * the parameter lambda
455 */
456 public final void setLambda(final int l) {
457 this.lambda = Math.max(1, l);
458 }
459
460 /**
461 * Get lambda, the number of offspring (see
462 * Section 28.1.4.3 on page 266)
463 *
464 * @return the parameter lambda
465 */
466 public final int getLambda() {
467 return this.lambda;
468 }
469
470 /**
471 * Set rho, the number of parents per offspring (see
472 * Section 28.1.4.3 on page 266)
473 *
474 * @param r
475 * the parameter rho
476 */
477 public final void setRho(final int r) {
478 this.rho = Math.max(1, r);
479 }
480
481 /**
482 * Get rho, the number of parents per offspring (see
483 * Section 28.1.4.3 on page 266)
484 *
485 * @return the parameter rho
486 */
487 public final int getRho() {
488 return this.rho;
489 }
490
491 /**
492 * Set whether we have a (lambda+mu) strategy or a (lambda,mu) strategy
493 *
494 * @param p
495 * true for (lambda+mu) strategies, false for (lambda,mu)
496 * strategies
497 */
498 public final void setPlus(final boolean p) {
499 this.plus = p;
500 }
501
502 /**
503 * Get whether we have a (lambda+mu) strategy or a (lambda,mu) strategy
504 *
505 * @return true for (lambda+mu) strategies, false for (lambda,mu)
506 * strategies
507 */
508 public final boolean isPlus() {
509 return this.plus;
510 }
511
512 /**
513 * Set the selection algorithm (see Section 28.4 on page 285) to be
514 * used by this evolutionary algorithm.
515 *
516 * @param sel
517 * the selection algorithm
518 */
519 public final void setSelectionAlgorithm(final ISelectionAlgorithm sel) {
520 this.survivalSelection = sel;
521 }
522
523 /**
524 * Get the selection algorithm (see Section 28.4 on page 285) which is
525 * used by this evolutionary algorithm.
526 *
527 * @return the selection algorithm
528 */
529 public final ISelectionAlgorithm getSelectionAlgorithm() {
530 return this.survivalSelection;
531 }
532
533 /**
534 * Set the unary search operation to be used by this optimization
535 * algorithm (see Section 4.2 on page 83 and
536 * Listing 55.3 on page 708).
537 *
538 * @param op
539 * the unary search operation to use
540 */
541 @Override
542 public void setUnarySearchOperation(
543 final IUnarySearchOperation<double[]> op) {
544 this.unaryOperationSet = (op != null);
545 super.setUnarySearchOperation(op);
546 }
547
548 /**
549 * Set the nullary search operation to be used by this optimization
550 * algorithm (see Section 4.2 on page 83 and
551 * Listing 55.2 on page 708).
552 *
553 * @param op
554 * the nullary search operation to use
555 */
556 @Override
557 public void setNullarySearchOperation(
558 final INullarySearchOperation<double[]> op) {
559 this.nullaryOperationSet = (op != null);
560 super.setNullarySearchOperation(op);
561 }
562
563 /**
564 * Get the name of the optimization module
565 *
566 * @param longVersion
567 * true if the long name should be returned, false if the short
568 * name should be returned
569 * @return the name of the optimization module
570 */
571 @Override
572 public String getName(final boolean longVersion) {
573 if (longVersion) {
574 return super.getName(longVersion);
575 }
576 return "ES"; //$NON-NLS-1$
577 }
578
579 /**
580 * Get the full configuration which holds all the data necessary to
581 * describe this object.
582 *
583 * @param longVersion
584 * true if the long version should be returned, false if the
585 * short version should be returned
586 * @return the full configuration
587 */
588 @Override
589 public String getConfiguration(final boolean longVersion) {
590 StringBuilder sb;
591 IOptimizationModule m;
592
593 sb = new StringBuilder();
594
595 sb.append('(');
596 sb.append(this.mu);
597 if (this.rho != 1) {
598 sb.append('/');
599 sb.append(this.rho);
600 }
601
602 sb.append(this.plus ? '+' : ',');
603 sb.append(this.lambda);
604 sb.append(')');
605
606 sb.append(",G=["); //$NON-NLS-1$
607 sb.append(TextUtils.formatNumber(this.minX));
608
609 sb.append(","); //$NON-NLS-1$
610 sb.append(TextUtils.formatNumber(this.maxX));
611
612 sb.append("]^");//$NON-NLS-1$
613 sb.append(this.dimension);
614
615 sb.append(',');
616 sb.append(super.getConfiguration(longVersion));
617
618 m = this.createStrategyParameters;
619 if (m != null) {
620 sb.append(",o0s="); //$NON-NLS-1$
621 sb.append(m.toString(longVersion));
622 }
623
624 m = this.mutateStrategyParameters;
625 if (m != null) {
626 sb.append(",o1s="); //$NON-NLS-1$
627 sb.append(m.toString(longVersion));
628 }
629
630 m = this.crossoverStrategy;
631 if (m != null) {
632 sb.append(",oxs="); //$NON-NLS-1$
633 sb.append(m.toString(longVersion));
634 }
635
636 m = this.crossoverGenotype;
637 if (m != null) {
638 sb.append(",oxg="); //$NON-NLS-1$
639 sb.append(m.toString(longVersion));
640 }
641
642 if (this.survivalSelection != null) {
643 sb.append(",selSur="); //$NON-NLS-1$
644 sb.append(this.survivalSelection.toString(longVersion));
645 }
646
647 if (this.parentalSelection != null) {
648 sb.append(",selPar="); //$NON-NLS-1$
649 sb.append(this.parentalSelection.toString(longVersion));
650 }
651
652 return sb.toString();
653 }
654 }
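Stripped of its configurable operators and the genotype-phenotype mapping, the (μ+λ) survival scheme of the listing reduces to a few lines. The following deliberately simplified sketch optimizes the sphere function with a single, geometrically decaying mutation strength instead of log-normal self-adaptation, and without recombination — both simplifications are assumptions made here for brevity:

```java
import java.util.Arrays;
import java.util.Random;

/**
 * A deliberately simplified (mu+lambda) Evolution Strategy on the sphere
 * function. It uses one geometrically decaying mutation strength instead
 * of log-normal self-adaptation and omits recombination (assumptions for
 * brevity); all names are hypothetical.
 */
public class SimpleES {

  /** the sphere function f(x) = sum of x_i^2, minimal at the origin */
  static double sphere(final double[] x) {
    double s = 0d;
    for (final double v : x) {
      s += v * v;
    }
    return s;
  }

  /** run the (mu+lambda) ES and return the best genotype found */
  static double[] optimize(final int mu, final int lambda, final int dim,
      final int generations, final Random r) {
    double sigma = 0.1d; // mutation strength
    double[][] pop = new double[mu + lambda][dim];
    final double[] fit = new double[mu + lambda];

    // random initial parents in [-1, 1]^dim
    for (int i = 0; i < mu; i++) {
      for (int d = 0; d < dim; d++) {
        pop[i][d] = (r.nextDouble() * 2d) - 1d;
      }
      fit[i] = sphere(pop[i]);
    }

    for (int g = 0; g < generations; g++) {
      // each offspring is a Gaussian mutation of a randomly chosen parent
      for (int i = 0; i < lambda; i++) {
        final double[] parent = pop[r.nextInt(mu)];
        for (int d = 0; d < dim; d++) {
          pop[mu + i][d] = parent[d] + (sigma * r.nextGaussian());
        }
        fit[mu + i] = sphere(pop[mu + i]);
      }

      // (mu+lambda) survival: keep the mu best of parents and offspring
      final Integer[] idx = new Integer[mu + lambda];
      for (int i = 0; i < idx.length; i++) {
        idx[i] = Integer.valueOf(i);
      }
      Arrays.sort(idx, (a, b) -> Double.compare(fit[a], fit[b]));
      final double[][] next = new double[mu + lambda][dim];
      final double[] nfit = new double[mu];
      for (int i = 0; i < mu; i++) {
        next[i] = pop[idx[i].intValue()].clone();
        nfit[i] = fit[idx[i].intValue()];
      }
      pop = next;
      System.arraycopy(nfit, 0, fit, 0, mu);

      sigma *= 0.99d; // simple deterministic step-size decay (assumption)
    }
    return pop[0]; // parents are sorted, so index 0 is the best
  }

  public static void main(final String[] args) {
    final double[] best = optimize(5, 20, 3, 200, new Random(1));
    System.out.println(sphere(best) < 0.05d);
  }
}
```

Because the plus strategy is elitist, the best parent can never get worse from one generation to the next, which is why even this crude step-size rule converges reliably on the sphere.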
Listing 56.18: An individual record also holding endogenous information.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.es;
5
6 import org.goataa.impl.utils.Individual;
7
8 /**
9 * An individual record for evolution strategies which also contains a
10 * field for the strategy parameter. The search space is always the vectors
11 * of real numbers.
12 *
13 * @param <X>
14 * the problem space (phenome, Section 2.1 on page 43)
15 * @author Thomas Weise
16 */
17 public class ESIndividual<X> extends Individual<double[], X> {
18
19 /** a constant required by Java serialization */
20 private static final long serialVersionUID = 1;
21
22 /** the strategy parameter */
23 public double[] w;
24
25 /** The constructor creates a new Evolution Strategy individual record */
26 public ESIndividual() {
27 super();
28 }
29
30 /**
31 * Copy the values of another individual record to this record. This
32 * method saves us from excessively creating new individual records.
33 *
34 * @param to
35 * the individual to copy
36 */
37 @Override
38 @SuppressWarnings("unchecked")
39 public void assign(final Individual<double[], X> to) {
40 super.assign(to);
41 if (to instanceof ESIndividual) {
42 this.w = ((ESIndividual) to).w;
43 }
44 }
45 }
56.1.4.3 Differential Evolution

In this package, we provide implementations of Differential Evolution according to
Chapter 33.
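Before reading the full listing, it may help to see the classical DE/rand/1/bin trial-vector rule in isolation. In the sketch below (hypothetical names; F and CR are the usual DE control parameters), the first argument serves as both base and target vector, mirroring the ternary operator used in the implementation:

```java
import java.util.Random;

/**
 * The classical DE/rand/1/bin trial-vector construction in isolation.
 * F (differential weight) and CR (crossover rate) are the standard DE
 * control parameters; all names here are hypothetical. The first
 * argument serves as both base and target vector.
 */
public class DETrialDemo {

  /** build a trial vector from base/target a and the difference pair (b, c) */
  static double[] trial(final double[] a, final double[] b, final double[] c,
      final double f, final double cr, final Random r) {
    final int n = a.length;
    final double[] t = new double[n];
    final int jRand = r.nextInt(n); // at least one component is mutated
    for (int j = 0; j < n; j++) {
      if ((j == jRand) || (r.nextDouble() < cr)) {
        t[j] = a[j] + (f * (b[j] - c[j])); // differential mutation
      } else {
        t[j] = a[j]; // inherit from the base/target vector
      }
    }
    return t;
  }

  public static void main(final String[] args) {
    // with CR = 1, every component is mutated: the result is deterministic
    final double[] t = trial(new double[] { 1d, 2d }, new double[] { 3d, 5d },
        new double[] { 1d, 1d }, 0.5d, 1d, new Random());
    System.out.println(t[0] + " " + t[1]); // prints 2.0 4.0
  }
}
```

The trial vector then competes directly against its target parent, exactly as in the "fight against direct parent" step of the listing below.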
Listing 56.19: An example implementation of the Differential Evolution algorithm which
uses ternary crossover.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.de;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.SOOptimizationAlgorithm;
10 import org.goataa.impl.searchOperations.NullarySearchOperation;
11 import org.goataa.impl.utils.Constants;
12 import org.goataa.impl.utils.Individual;
13 import org.goataa.impl.utils.TextUtils;
14 import org.goataa.spec.IGPM;
15 import org.goataa.spec.INullarySearchOperation;
16 import org.goataa.spec.IObjectiveFunction;
17 import org.goataa.spec.IOptimizationModule;
18 import org.goataa.spec.ITerminationCriterion;
19 import org.goataa.spec.ITernarySearchOperation;
20
21 /**
22 * The base class for Differential Evolution-based algorithms as discussed
23 * in Chapter 33 on page 419.
24 *
25 * @param <X>
26 * the problem space (phenome, Section 2.1 on page 43)
27 * @author Thomas Weise
28 */
29 public class DifferentialEvolution1<X> extends
30 SOOptimizationAlgorithm<double[], X, Individual<double[], X>> {
31
32 /** a constant required by Java serialization */
33 private static final long serialVersionUID = 1;
34
35 /** the nullary search operation */
36 private INullarySearchOperation<double[]> o0;
37
38 /** the population size */
39 private int ps;
40
41 /** the ternary operator for crossover */
42 private ITernarySearchOperation<double[]> tc;
43
44 /** instantiate the Differential Evolution algorithm */
45 @SuppressWarnings("unchecked")
46 public DifferentialEvolution1() {
47 super();
48 this.o0 = (INullarySearchOperation) (NullarySearchOperation.NULL_CREATION);
49 this.ps = 100;
50 }
51
52 /**
53 * Set the nullary search operation to be used by this optimization
54 * algorithm (see Section 4.2 on page 83 and
55 * Listing 55.2 on page 708).
56 *
57 * @param op
58 * the nullary search operation to use
59 */
60 @SuppressWarnings("unchecked")
61 public void setNullarySearchOperation(
62 final INullarySearchOperation<double[]> op) {
63 this.o0 = ((op != null) ? op
64 : ((INullarySearchOperation) (NullarySearchOperation.NULL_CREATION)));
65 }
66
67 /**
68 * Get the nullary search operation which is used by this optimization
69 * algorithm (see Section 4.2 on page 83 and
70 * Listing 55.2 on page 708).
71 *
72 * @return the nullary search operation to use
73 */
74 public final INullarySearchOperation<double[]> getNullarySearchOperation() {
75 return this.o0;
76 }
77
78 /**
79 * Perform a simple differential evolution as defined in
80 * Chapter 33 on page 419
81 *
82 * @param f
83 * the objective function ( Definition D2.3 on page 44)
84 * @param create
85 * the nullary search operator
86 * ( Section 4.2 on page 83) for creating the initial
87 * genotype
88 * @param crossover
89 * the ternary crossover operator
90 * @param gpm
91 * the genotype-phenotype mapping
92 * ( Section 4.3 on page 85)
93 * @param term
94 * the termination criterion
95 * ( Section 6.3.3 on page 105)
96 * @param ps
97 * the population size
98 * @param r
99 * the random number generator
100 * @return the individual containing the best candidate solution
101 * ( Definition D2.2 on page 43) found
102 * @param <X>
103 * the problem space ( Section 2.1 on page 43)
104 */
105 @SuppressWarnings("unchecked")
106 public static final <X> Individual<double[], X> differentialEvolution(
107 //
108 final IObjectiveFunction<X> f,
109 final INullarySearchOperation<double[]> create,//
110 final ITernarySearchOperation<double[]> crossover,//
111 final IGPM<double[], X> gpm,//
112 final int ps, final ITerminationCriterion term, final Random r) {
113
114 Individual<double[], X>[] pop, npop;
115 Individual<double[], X> p, best;
116 int i, a, b, c;
117
118 best = new Individual<double[], X>();
119 best.v = Constants.WORST_FITNESS;
120 pop = new Individual[ps];
121 npop = new Individual[ps];
122
123 // build the initial population of random individuals
124 for (i = npop.length; (--i) >= 0;) {
125 p = new Individual<double[], X>();
126 npop[i] = p;
127 p.g = create.create(r);
128 }
129
130 // the basic loop of the Differential Evolution
131 for (;;) {
132 // for each generation do...
133
134 // for each individual in the population
135 for (i = npop.length; (--i) >= 0;) {
136 p = npop[i];
137
138 // perform the genotype-phenotype mapping
139 p.x = gpm.gpm(p.g, r);
140 // compute the objective value
141 p.v = f.compute(p.x, r);
142
143 // fight against direct parent
144 if ((pop[i] == null) || (p.v < pop[i].v)) {
145 pop[i] = p;
146
147 // is the current individual the best one so far?
148 if (p.v < best.v) {
149 best.assign(p);
150 }
151 }
152
153 // after each objective function evaluation, check if we should
154 // stop
155 if (term.terminationCriterion()) {
156 // if we should stop, return the best individual found
157 return best;
158 }
159 }
160
161 for (i = npop.length; (--i) >= 0;) {
162 a = i; // the current individual serves as the first parent
163 do {
164 b = r.nextInt(pop.length);
165 } while (a == b);
166
167 do {
168 c = r.nextInt(pop.length);
169 } while ((a == c) || (b == c));
170
171 npop[i] = p = new Individual<double[], X>();
172 p.g = crossover.ternaryRecombine(pop[a].g, pop[b].g, pop[c].g, r);
173
174 }
175 }
176 }
177
178 /**
179 * Invoke the differential evolution
180 *
181 * @param r
182 * the randomizer (will be used directly without setting the
183 * seed)
184 * @param term
185 * the termination criterion (will be used directly without
186 * resetting)
187 * @param result
188 * a list to which the results are to be appended
189 */
190 @Override
191 public void call(final Random r, final ITerminationCriterion term,
192 final List<Individual<double[], X>> result) {
193
194 result.add(DifferentialEvolution1.differentialEvolution(//
195 this.getObjectiveFunction(),//
196 this.getNullarySearchOperation(),//
197 this.getTernarySearchOperation(),//
198 this.getGPM(), //
199 this.getPopulationSize(), //
200 term, //
201 r));
202 }
203
204 /**
205 * Obtain the ternary search operation
206 *
207 * @return the ternary search operation
208 */
209 public final ITernarySearchOperation<double[]> getTernarySearchOperation() {
210 return this.tc;
211 }
212
213 /**
214 * Set the ternary search operation
215 *
216 * @param atc
217 * the ternary search operation
218 */
219 public final void setTernarySearchOperation(
220 final ITernarySearchOperation<double[]> atc) {
221 this.tc = atc;
222 }
223
224 /**
225 * Set the population size, i.e., the number of individuals which are
226 * kept in the population.
227 *
228 * @param aps
229 * the population size
230 */
231 public final void setPopulationSize(final int aps) {
232 this.ps = Math.max(1, aps);
233 }
234
235 /**
235 * Get the population size, i.e., the number of individuals which are
237 * kept in the population.
238 *
239 * @return the population size
240 */
241 public final int getPopulationSize() {
242 return this.ps;
243 }
244
245 /**
246 * Get the full configuration which holds all the data necessary to
247 * describe this object.
248 *
249 * @param longVersion
250 * true if the long version should be returned, false if the
251 * short version should be returned
252 * @return the full configuration
253 */
254 @Override
255 public String getConfiguration(final boolean longVersion) {
256 StringBuilder sb;
257 IOptimizationModule o;
258
259 sb = new StringBuilder(super.getConfiguration(longVersion));
260
261 o = this.getNullarySearchOperation();
262 if (o != null) {
263 if (sb.length() > 0) {
264 sb.append(',');
265 }
266 sb.append("o0=");//$NON-NLS-1$
267 sb.append(o.toString(longVersion));
268 }
269
270 sb.append(",ps=");//$NON-NLS-1$
271 sb.append(TextUtils.formatNumber(this.getPopulationSize()));
272
273 if (this.tc != null) {
274 sb.append(",op3=");//$NON-NLS-1$
275 sb.append(this.tc.toString(longVersion));
276 }
277
278 return sb.toString();
279 }
280
281 /**
282 * Get the name of the optimization module
283 *
284 * @param longVersion
285 * true if the long name should be returned, false if the short
286 * name should be returned
287 * @return the name of the optimization module
288 */
289 @Override
290 public String getName(final boolean longVersion) {
291 return (longVersion ? super.getName(true) : "DE-1"); //$NON-NLS-1$
292 }
293 }
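The listing above delegates the actual recombination to the ternaryRecombine operator, which is not reproduced in this section. As a stand-alone illustration, such a ternary operation is classically realized as DE/rand/1/bin recombination, here in a common simplified form in which the base vector coincides with the target. The following sketch is not part of the framework; the differential weight F, the crossover rate CR, and all names in it are assumed example values:

```java
import java.util.Random;

/**
 * Illustrative sketch of the classical DE/rand/1/bin recombination.
 * Not part of the framework above: F, CR, and all names are assumptions.
 */
public class DERand1Sketch {

  /** Combine three distinct parent vectors a, b, and c into one trial vector. */
  public static double[] recombine(final double[] a, final double[] b,
      final double[] c, final double f, final double cr, final Random r) {
    final double[] trial = a.clone();
    // force at least one mutated component, as is conventional in DE
    final int jRand = r.nextInt(a.length);
    for (int j = 0; j < a.length; j++) {
      if ((j == jRand) || (r.nextDouble() < cr)) {
        // differential mutation: base vector plus scaled difference vector
        trial[j] = a[j] + (f * (b[j] - c[j]));
      }
    }
    return trial;
  }

  /** Small demonstration: each component of the trial vector is either
   *  taken from a or shifted by f times the difference of b and c. */
  public static void main(final String[] args) {
    final Random r = new Random(1);
    final double[] a = { 1d, 2d, 3d }, b = { 2d, 2d, 2d }, c = { 1d, 1d, 1d };
    final double[] t = recombine(a, b, c, 0.5d, 0.9d, r);
    for (int j = 0; j < t.length; j++) {
      System.out.println(t[j]);
    }
  }
}
```

In the notation of the listing, pop[a].g, pop[b].g, and pop[c].g would take the roles of a, b, and c.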

56.1.4.4 Estimation Of Distribution Algorithms

In this package, we provide the implementations of Estimation of Distribution Algorithms
as discussed in Chapter 34.
Listing 56.20: The Estimation of Distribution Algorithm “PlainEDA” given in Algorithm 34.1.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.eda;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.SOOptimizationAlgorithm;
10 import org.goataa.impl.utils.Constants;
11 import org.goataa.impl.utils.Individual;
12 import org.goataa.impl.utils.TextUtils;
13 import org.goataa.spec.IGPM;
14 import org.goataa.spec.IModelBuilder;
15 import org.goataa.spec.IModelSampler;
16 import org.goataa.spec.INullarySearchOperation;
17 import org.goataa.spec.IObjectiveFunction;
18 import org.goataa.spec.IOptimizationModule;
19 import org.goataa.spec.ISelectionAlgorithm;
20 import org.goataa.spec.ITerminationCriterion;
21
22 /**
23 * The basic Estimation of Distribution Algorithm (EDA) scheme according to
24 * Algorithm 34.1 on page 432.
25 *
26 * @param <G>
27 * the search space (genome, Section 4.1 on page 81)
28 * @param <X>
29 * the problem space (phenome, Section 2.1 on page 43)
30 * @param <M>
31 * the model type
32 * @author Thomas Weise
33 */
34 public class EDA<G, X, M> extends
35 SOOptimizationAlgorithm<G, X, Individual<G, X>> {
36 /** a constant required by Java serialization */
37 private static final long serialVersionUID = 1;
38
39 /** the model creation operation */
40 private INullarySearchOperation<M> modelCreate;
41
42 /** the model builder */
43 private IModelBuilder<M, G> modelBuilder;
44
45 /** the model sampler */
46 private IModelSampler<M, G> modelSampler;
47
48 /** the selection algorithm */
49 private ISelectionAlgorithm selectionAlgorithm;
50
51 /** the population size */
52 private int ps;
53
54 /** the mating pool size */
55 private int mps;
56
57 /** instantiate an EDA */
58 public EDA() {
59 super();
60 this.ps = 100;
61 this.mps = 100;
62 }
63
64 /**
65 * Set the population size, i.e., the number of individuals which are
66 * kept in the population.
67 *
68 * @param aps
69 * the population size
70 */
71 public final void setPopulationSize(final int aps) {
72 this.ps = Math.max(1, aps);
73 }
74
75 /**
65 * Get the population size, i.e., the number of individuals which are
77 * kept in the population.
78 *
79 * @return the population size
80 */
81 public final int getPopulationSize() {
82 return this.ps;
83 }
84
85 /**
86 * Set the mating pool size, i.e., the number of individuals which will
87 * be selected into the mating pool.
88 *
89 * @param amps
90 * the mating pool size
91 */
92 public final void setMatingPoolSize(final int amps) {
93 this.mps = Math.max(1, amps);
94 }
95
96 /**
97 * Get the mating pool size, i.e., the number of individuals which will
98 * be selected into the mating pool.
99 *
100 * @return the mating pool size
101 */
102 public final int getMatingPoolSize() {
103 return this.mps;
104 }
105
106 /**
107 * Set the algorithm used to create a model
108 *
109 * @param create
110 * the model creation algorithm
111 */
112 public final void setModelCreator(final INullarySearchOperation<M> create) {
113 this.modelCreate = create;
114 }
115
116 /**
117 * Get the algorithm used to create a model
118 *
119 * @return the model creation algorithm
120 */
121 public final INullarySearchOperation<M> getModelCreator() {
122 return this.modelCreate;
123 }
124
125 /**
126 * Set the algorithm used to build/update a model
127 *
128 * @param builder
129 * the model building/updating algorithm
130 */
131 public final void setModelBuilder(final IModelBuilder<M, G> builder) {
132 this.modelBuilder = builder;
133 }
134
135 /**
136 * Get the algorithm used to build/update a model
137 *
138 * @return the model building/updating algorithm
139 */
140 public final IModelBuilder<M, G> getModelBuilder() {
141 return this.modelBuilder;
142 }
143
144 /**
145 * Set the algorithm used to sample a model
146 *
147 * @param sampler
148 * the model sampling algorithm
149 */
150 public final void setModelSampler(final IModelSampler<M, G> sampler) {
151 this.modelSampler = sampler;
152 }
153
154 /**
155 * Get the algorithm used to sample a model
156 *
157 * @return the model sampling algorithm
158 */
159 public final IModelSampler<M, G> getModelSampler() {
160 return this.modelSampler;
161 }
162
163 /**
164 * Set the selection algorithm (see Section 28.4 on page 285) to be
165 * used by this evolutionary algorithm.
166 *
167 * @param sel
168 * the selection algorithm
169 */
170 public final void setSelectionAlgorithm(final ISelectionAlgorithm sel) {
171 this.selectionAlgorithm = sel;
172 }
173
174 /**
175 * Get the selection algorithm (see Section 28.4 on page 285) which is
176 * used by this evolutionary algorithm.
177 *
178 * @return the selection algorithm
179 */
180 public final ISelectionAlgorithm getSelectionAlgorithm() {
181 return this.selectionAlgorithm;
182 }
183
184 /**
185 * Get the full configuration which holds all the data necessary to
186 * describe this object.
187 *
188 * @param longVersion
189 * true if the long version should be returned, false if the
190 * short version should be returned
191 * @return the full configuration
192 */
193 @Override
194 public String getConfiguration(final boolean longVersion) {
195 StringBuilder sb;
196 IOptimizationModule m;
197
198 sb = new StringBuilder(super.getConfiguration(longVersion));
199
200 sb.append(",ps=");//$NON-NLS-1$
201 sb.append(TextUtils.formatNumber(this.getPopulationSize()));
202
203 sb.append(",mps=");//$NON-NLS-1$
204 sb.append(TextUtils.formatNumber(this.getMatingPoolSize()));
205
206 m = this.getModelCreator();
207 if (m != null) {
208 sb.append(",new=");//$NON-NLS-1$
209 sb.append(m.toString(longVersion));
210 }
211
212 m = this.getModelBuilder();
213 if (m != null) {
214 sb.append(",build=");//$NON-NLS-1$
215 sb.append(m.toString(longVersion));
216 }
217
218 m = this.getModelSampler();
219 if (m != null) {
220 sb.append(",sample=");//$NON-NLS-1$
221 sb.append(m.toString(longVersion));
222 }
223
224 m = this.getSelectionAlgorithm();
225 if (m != null) {
226 sb.append(",sel=");//$NON-NLS-1$
227 sb.append(m.toString(longVersion));
228 }
229
230 return sb.toString();
231 }
232
233 /**
234 * Perform a basic Estimation of Distribution algorithm as defined in
235 * Algorithm 34.1 on page 432.
236 *
237 * @param f
238 * the objective function ( Definition D2.3 on page 44)
239 * @param newModel
240 * the model creation operation
241 * @param buildModel
242 * the model building/updating operator
243 * @param sampleModel
244 * the model sampling algorithm
245 * @param gpm
246 * the genotype-phenotype mapping
247 * ( Section 4.3 on page 85)
248 * @param sel
249 * the selection algorithm ( Section 28.4 on page 285)
250 * @param term
251 * the termination criterion
252 * ( Section 6.3.3 on page 105)
253 * @param ps
254 * the population size
255 * @param mps
256 * the mating pool size
257 * @param r
258 * the random number generator
259 * @return the individual containing the best candidate solution
260 * ( Definition D2.2 on page 43) found
261 * @param <G>
262 * the search space ( Section 4.1 on page 81)
263 * @param <X>
264 * the problem space ( Section 2.1 on page 43)
265 * @param <M>
266 * the model type
267 */
268 @SuppressWarnings("unchecked")
269 public static final <G, X, M> Individual<G, X> eda(
270 //
271 final IObjectiveFunction<X> f,
272 final INullarySearchOperation<M> newModel,//
273 final IModelBuilder<M, G> buildModel,//
274 final IModelSampler<M, G> sampleModel,
275 final IGPM<G, X> gpm,//
276 final ISelectionAlgorithm sel, final int ps, final int mps,
277 final ITerminationCriterion term, final Random r) {
278
279 Individual<G, X>[] pop, mate;
280 Individual<G, X> p, best;
281 M model;
282 int i;
283
284 best = new Individual<G, X>();
285 best.v = Constants.WORST_FITNESS;
286 pop = new Individual[ps];
287 mate = new Individual[mps];
288
289 model = newModel.create(r);
290
291 // the basic loop of the EDA (Algorithm 34.1 on page 432)
292 for (;;) {
293 // for each generation do...
294
295 // for each individual in the population
296 for (i = pop.length; (--i) >= 0;) {
297 pop[i] = p = new Individual<G, X>();
298
299 // sample the model
300 p.g = sampleModel.sampleModel(model, r);
301
302 // perform the genotype-phenotype mapping
303 p.x = gpm.gpm(p.g, r);
304 // compute the objective value
305 p.v = f.compute(p.x, r);
306
307 // is the current individual the best one so far?
308 if (p.v < best.v) {
309 best.assign(p);
310 }
311
312 // after each objective function evaluation, check if we should
313 // stop
314 if (term.terminationCriterion()) {
315 // if we should stop, return the best individual found
316 return best;
317 }
318 }
319
320 // perform the selection step
321 sel.select(pop, 0, pop.length, mate, 0, mate.length, r);
322
323 // build the model for the next generation
324 model = buildModel.buildModel(mate, 0, mate.length, model, r);
325 }
326 }
327
328 /**
329 * Invoke the estimation of distribution algorithm
330 *
331 * @param r
332 * the randomizer (will be used directly without setting the
333 * seed)
334 * @param term
335 * the termination criterion (will be used directly without
336 * resetting)
337 * @param result
338 * a list to which the results are to be appended
339 */
340 @Override
341 public void call(final Random r, final ITerminationCriterion term,
342 final List<Individual<G, X>> result) {
343
344 result.add(EDA.eda(this.getObjectiveFunction(),//
345 this.getModelCreator(),//
346 this.getModelBuilder(),//
347 this.getModelSampler(),//
348 this.getGPM(),//
349 this.getSelectionAlgorithm(),//
350 this.getPopulationSize(),//
351 this.getMatingPoolSize(),//
352 term,//
353 r));
354 }
355 }

56.1.4.4.1 UMDA

This package contains an implementation of the UMDA, an Estimation of Distribution
Algorithm introduced in Section 34.4.1.

Listing 56.21: The creator for the initial model to be used in UMDA.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.eda.umda;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.impl.OptimizationModule;
10 import org.goataa.spec.INullarySearchOperation;
11
12 /**
13 * The UMDA-style creation operation for models, as defined in
14 * Algorithm 34.3 on page 436, fills the initial probability vector with 0.5.
15 *
16 * @author Thomas Weise
17 */
18 public class UMDAModelCreator extends OptimizationModule implements
19 INullarySearchOperation<double[]> {
20 /** a constant required by Java serialization */
21 private static final long serialVersionUID = 1;
22
23 /** the dimension */
24 private final int dimension;
25
26 /**
27 * Instantiate the UMDA model creation operation
28 *
29 * @param dim
30 * the dimension of the search space
31 */
32 public UMDAModelCreator(final int dim) {
33 super();
34 this.dimension = dim;
35 }
36
37 /**
38 * Create a vector of all 0.5 values, as stated in Algorithm 34.3 on page 436
39 *
40 * @param r
41 * the random number generator
42 * @return the vector of 0.5 values
43 */
44 public final double[] create(final Random r) {
45 double[] d;
46
47 d = new double[this.dimension];
48 Arrays.fill(d, 0.5d);
49
50 return d;
51 }
52
53 /**
54 * Get the full configuration which holds all the data necessary to
55 * describe this object.
56 *
57 * @param longVersion
58 * true if the long version should be returned, false if the
59 * short version should be returned
60 * @return the full configuration
61 */
62 @Override
63 public String getConfiguration(final boolean longVersion) {
64 return ("dim=" + this.dimension); //$NON-NLS-1$
65 }
66 }

Listing 56.22: The model building method used in UMDA.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.eda.umda;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.impl.algorithms.eda.ModelBuilder;
10 import org.goataa.impl.utils.Individual;
11
12 /**
13 * Build a model UMDA-style, as defined in
14 * Algorithm 34.5 on page 437
15 *
16 * @author Thomas Weise
17 */
18 public class UMDAModelBuilder extends ModelBuilder<double[], boolean[]> {
19 /** a constant required by Java serialization */
20 private static final long serialVersionUID = 1;
21
22 /** the temporary counter */
23 private int[] tmp;
24
25 /** Instantiate the UMDA model building operation */
26 public UMDAModelBuilder() {
27 super();
28 }
29
30 /**
31 * Build a model from a set of selected points in the search space and an
32 * old model as introduced in Algorithm 34.5 on page 437: Count the
33 * number of bits which are false at each locus and use their frequency
34 * as probability value in the new model
35 *
36 * @param mate
37 * the mating pool of selected individuals
38 * @param start
39 * the index of the first individual in the mating pool
40 * @param count
41 * the number of individuals in the mating pool
42 * @param oldModel
43 * the old model
44 * @param r
45 * a random number generator
46 * @return the new model
47 */
48 public double[] buildModel(final Individual<boolean[], ?>[] mate,
49 final int start, final int count, final double[] oldModel,
50 final Random r) {
51 double[] m;
52 int[] cc;
53 boolean[] b;
54 int i, j;
55
56 m = new double[oldModel.length];
57
58 cc = this.tmp;
59 if ((cc == null) || (m.length > cc.length)) {
60 this.tmp = cc = new int[m.length];
61 } else {
62 Arrays.fill(cc, 0, m.length, 0);
63 }
64
65 for (i = (count + start); (--i) >= start;) {
66 b = mate[i].g;
67 for (j = m.length; (--j) >= 0;) {
68 if (!(b[j])) {
69 cc[j]++;
70 }
71 }
72 }
73
74 for (i = m.length; (--i) >= 0;) {
75 m[i] = ((((double) (cc[i])) / (count)));
76 }
77
78 return m;
79 }
80 }

Listing 56.23: Sample points from the UMDA model.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.eda.umda;
5
6 import java.util.Random;
7
8 import org.goataa.impl.algorithms.eda.ModelSampler;
9
10 /**
11 * Sample a model (a probability vector) UMDA-style according to
12 * Algorithm 34.4 on page 437
13 *
14 * @author Thomas Weise
15 */
16 public class UMDAModelSampler extends ModelSampler<double[], boolean[]> {
17 /** a constant required by Java serialization */
18 private static final long serialVersionUID = 1;
19
20 /** Instantiate the UMDA model sampling operation */
21 public UMDAModelSampler() {
22 super();
23 }
24
25 /**
26 * According to Algorithm 34.4 on page 437, every value of the
27 * model is between 0 and 1 and denotes the probability that the bit at
28 * the same position in the genotype will be false. The model is sampled
29 * using uniform distributions.
30 *
31 * @param model
32 * the model
33 * @param r
34 * a random number generator
35 * @return the new model
36 */
37 @Override
38 public final boolean[] sampleModel(final double[] model, final Random r) {
39 boolean[] b;
40 int i;
41
42 i = model.length;
43 b = new boolean[i];
44
45 for (; (--i) >= 0;) {
46 b[i] = (r.nextDouble() > model[i]);
47 }
48
49 return b;
50 }
51 }
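Taken together, the three UMDA modules above form a create-sample-rebuild cycle. The following stand-alone sketch is not part of the framework (the population size and all names are illustrative); it keeps the convention of Listings 56.21 to 56.23 that model[i] is the probability of the bit at locus i being false. Sampling from the all-0.5 initial model and rebuilding from the samples yields frequencies close to 0.5 again:

```java
import java.util.Random;

/**
 * Framework-free sketch of one UMDA model-sample-rebuild cycle.
 * Convention as in the listings above: model[i] = P(bit i is false).
 */
public class UMDASketch {

  /** Sample one bit string from the probability vector. */
  public static boolean[] sample(final double[] model, final Random r) {
    final boolean[] b = new boolean[model.length];
    for (int i = 0; i < b.length; i++) {
      b[i] = (r.nextDouble() > model[i]); // P(false) = model[i]
    }
    return b;
  }

  /** Rebuild the model as the frequency of false bits at each locus. */
  public static double[] rebuild(final boolean[][] pool) {
    final double[] m = new double[pool[0].length];
    for (final boolean[] b : pool) {
      for (int i = 0; i < m.length; i++) {
        if (!b[i]) {
          m[i]++;
        }
      }
    }
    for (int i = 0; i < m.length; i++) {
      m[i] /= pool.length;
    }
    return m;
  }

  public static void main(final String[] args) {
    final Random r = new Random(1);
    final double[] model = { 0.5d, 0.5d, 0.5d, 0.5d }; // initial model
    final boolean[][] pool = new boolean[1000][];
    for (int k = 0; k < pool.length; k++) {
      pool[k] = sample(model, r);
    }
    final double[] next = rebuild(pool);
    for (final double p : next) {
      System.out.println(p); // empirical frequencies, each close to 0.5
    }
  }
}
```

In the full algorithm, the pool would of course hold the selected individuals rather than all samples, so the rebuilt model drifts toward the better solutions.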

56.2 Search Operation


In this package, we provide implementations for the search operations.

56.2.1 Operations for String-based Search Spaces

Here we discuss string-based search operators. Strings are very common search spaces, be they
strings of bits or vectors of real numbers. They can also be used to represent permutations of objects.

56.2.1.1 Real Vectors: G ⊆ Rn

One of the most common search spaces is the vector of real numbers; here, we introduce
search operators for this space.

56.2.1.1.1 Nullary Search Operations: create : ∅ 7→ G

Here we provide the implementation of Listing 55.2 on page 708 for real vectors, that is, a
nullary search operation which creates random vectors with elements uniformly distributed
in a given range.

Listing 56.24: Create uniformly distributed real vectors.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.nullary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorCreation;
9
10 /**
11 * A nullary search operation (see Section 4.2 on page 83) for
12 * real vectors in a bounded subspace of the real vectors of dimension n.
13 * This operation creates vectors which are uniformly distributed in the
14 * interval [min,max]^n.
15 *
16 * @author Thomas Weise
17 */
18 public class DoubleArrayUniformCreation extends RealVectorCreation {
19
20 /** a constant required by Java serialization */
21 private static final long serialVersionUID = 1;
22
23 /**
24 * Instantiate the real-vector creation operation
25 *
26 * @param dim
27 * the dimension of the search space
28 * @param mi
29 * the minimum value of the allele ( Definition D4.4 on page 83) of
30 * a gene
31 * @param ma
32 * the maximum value of the allele of a gene
33 */
34 public DoubleArrayUniformCreation(final int dim, final double mi,
35 final double ma) {
36 super(dim, mi, ma);
37 }
38
39 /**
40 * This operation just produces uniformly distributed random vectors in
41 * the interval [this.min, this.max]^this.n
42 *
43 * @param r
44 * the random number generator
45 * @return a new random real vector of dimension n
46 */
47 @Override
48 public final double[] create(final Random r) {
49 double[] g;
50 int i;
51
52 // create a new real vector (genotype, Definition D4.2 on page 82) of
53 // dimension n
54 g = new double[this.n];
55
56 // initialize each gene ( Definition D4.3 on page 82) of the vector...
57 for (i = this.n; (--i) >= 0;) {
58 // ...by setting it to a random number uniformly distributed in
59 // [min,max] (i is the locus Definition D4.5 on page 83)
60 g[i] = (this.min + (r.nextDouble() * (this.max - this.min)));
61 }
62
63 return g;
64 }
65
66 /**
67 * Get the name of the optimization module
68 *
69 * @param longVersion
70 * true if the long name should be returned, false if the short
71 * name should be returned
72 * @return the name of the optimization module
73 */
74 @Override
75 public String getName(final boolean longVersion) {
76 if (longVersion) {
77 return super.getName(true);
78 }
79 return "DA-U"; //$NON-NLS-1$
80 }
81 }
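Stripped of the framework classes, the creation scheme of Listing 56.24 reduces to the following stand-alone sketch (the dimension and the bounds are example values, and the class name is illustrative). Since Random.nextDouble() returns values uniformly distributed in [0,1), every allele produced this way lies in [min,max):

```java
import java.util.Random;

/**
 * Framework-free sketch of uniform real-vector creation; the dimension
 * and bounds used in main are example values.
 */
public class UniformCreateSketch {

  /** Create a vector of n alleles uniformly distributed in [min,max). */
  public static double[] create(final int n, final double min,
      final double max, final Random r) {
    final double[] g = new double[n];
    for (int i = 0; i < n; i++) {
      // nextDouble() is uniform in [0,1), so g[i] is uniform in [min,max)
      g[i] = min + (r.nextDouble() * (max - min));
    }
    return g;
  }

  public static void main(final String[] args) {
    final Random r = new Random(1);
    final double[] g = create(10, -5d, 5d, r);
    for (final double v : g) {
      System.out.println((v >= -5d) && (v < 5d));
    }
  }
}
```

Strictly speaking, the upper bound max itself is never produced, which is usually irrelevant in continuous optimization.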

56.2.1.1.2 Unary Search Operations: mutate : G 7→ G

It is quite easy to define multiple different search operators for real vectors which adhere to
the interface given in Listing 55.3 on page 708.
56.2.1.1.2.1 Evolution Strategy Operators
Listing 56.25: Modify a real vector by adding normally distributed random numbers with a
given standard deviation.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
10
11 /**
12 * This is a mutation operation that performs mutation using normally
13 * distributed random numbers according to a given set of standard
14 * deviation parameters, as suggested in
15 * Section 30.4.1.2 on page 365. This operator is similar to,
16 * but more sophisticated than, the simple mutation method defined in
17 * Listing 56.28 on page 791.
18 *
19 * @author Thomas Weise
20 */
21 public final class DoubleArrayStdDevNormalMutation extends
22 RealVectorMutation {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /** the standard deviation to use */
28 private final double[] stddev;
29
30 /**
31 * Create a new real-vector mutation operation
32 *
33 * @param dim
34 * the dimension of the search space
35 * @param mi
36 * the minimum value of the allele ( Definition D4.4 on page 83) of
37 * a gene
38 * @param ma
39 * the maximum value of the allele of a gene
40 */
41 public DoubleArrayStdDevNormalMutation(final int dim, final double mi,
42 final double ma) {
43 super(mi, ma);
44 this.stddev = new double[dim];
45 Arrays.fill(this.stddev, 1d);
46 }
47
48 /**
49 * Perform a mutation using normally
50 * distributed random numbers according to a given set of standard
51 * deviation parameters, as suggested in
52 * Section 30.4.1.2 on page 365.
53 *
54 * @param g
55 * the existing genotype in the search space from which a
56 * slightly modified copy should be created
57 * @param r
58 * the random number generator
59 * @return a new genotype
60 */
61 @Override
62 public final double[] mutate(final double[] g, final Random r) {
63 final double[] gnew, stddevs;
64 double d;
65 int i;
66
67 i = g.length;
68
69 // create a new real vector of dimension n
70 gnew = new double[i];
71 stddevs = this.stddev;
72
73 // set each gene Definition D4.3 on page 82 of gnew to ...
74 for (; (--i) >= 0;) {
75 do {
76 // Use a normally distributed random number with a standard
77 // deviation as specified in the array stddev.
78 d = (g[i] + (r.nextGaussian() * stddevs[i]));
79 } while ((d < this.min) || (d > this.max));
80 gnew[i] = d;
81 }
82
83 return gnew;
84 }
85
86 /**
87 * Set the standard deviations
88 *
89 * @param s
90 * the standard deviations
91 */
92 public final void setStdDevs(final double[] s) {
93 System.arraycopy(s, 0, this.stddev, 0, this.stddev.length);
94 }
95
96 /**
97 * Set the standard deviations to a single, fixed value
98 *
99 * @param s
100 * the standard deviation
101 */
102 public final void setStdDev(final double s) {
103 Arrays.fill(this.stddev, s);
104 }
105
106 /**
107 * Get the name of the optimization module
108 *
109 * @param longVersion
110 * true if the long name should be returned, false if the short
111 * name should be returned
112 * @return the name of the optimization module
113 */
114 @Override
115 public String getName(final boolean longVersion) {
116 if (longVersion) {
117 return super.getName(true);
118 }
119 return "DA-SDNM"; //$NON-NLS-1$
120 }
121 }
Listing 56.26: A strategy mutation operator for Evolution Strategy.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
9
10 /**
11 * This operator realizes the mutation strength strategy vector mutation
12 * for Evolution Strategies ( Chapter 30 on page 359) as
13 * specified in Algorithm 30.8 on page 371,
14 * Equation 30.12, and Equation 30.13.
15 *
16 * @author Thomas Weise
17 */
18 public final class DoubleArrayStrategyLogNormalMutation extends
19 RealVectorMutation {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /**
25 * Create a new real-vector mutation operation
26 *
27 * @param ma
28 * the maximum value of the allele of a gene
29 */
30 public DoubleArrayStrategyLogNormalMutation(final double ma) {
31 super(Double.MIN_VALUE, ma);
32 }
33
34 /**
35 * Perform the mutation strength strategy vector mutation for Evolution
36 * Strategies ( Chapter 30 on page 359) as specified in
37 * Algorithm 30.8 on page 371.
38 *
39 * @param g
40 * the existing genotype in the search space from which a
41 * slightly modified copy should be created
42 * @param r
43 * the random number generator
44 * @return a new genotype
45 */
46 @Override
47 public final double[] mutate(final double[] g, final Random r) {
48 final double[] gnew;
49 final double t0, t, c, nu;
50 int i;
51 double x;
52
53 i = g.length;
54
55 c = 1d;
56
57 // set tau0 according to Equation 30.12 on page 372
58 t0 = (c / Math.sqrt(2 * i));
59
60 // set tau according to Equation 30.13 on page 372
61 t = (c / Math.sqrt(2 * Math.sqrt(i)));
62
63 nu = Math.exp(t0 * r.nextGaussian());
64 gnew = g.clone();
65
66 // set each gene Definition D4.3 on page 82 of gnew to ...
67 for (; (--i) >= 0;) {
68 do {
69 x = gnew[i] * nu * Math.exp(t * r.nextGaussian());
70 } while ((x < this.min) || (x > this.max));
71 gnew[i] = x;
72 }
73
74 return gnew;
75 }
76
77 /**
78 * Get the name of the optimization module
79 *
80 * @param longVersion
81 * true if the long name should be returned, false if the short
82 * name should be returned
83 * @return the name of the optimization module
84 */
85 @Override
86 public String getName(final boolean longVersion) {
87 if (longVersion) {
88 return super.getName(true);
89 }
90 return "DA-SLN"; //$NON-NLS-1$
91 }
92 }
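The self-adaptation step of Listing 56.26 can be illustrated in isolation. With c = 1, the learning rates follow Equation 30.12 and Equation 30.13, and because the update multiplies each strategy parameter by log-normally distributed factors, the parameters always remain strictly positive. This is a stand-alone sketch, not framework code; the dimension and the seed are illustrative:

```java
import java.util.Random;

/**
 * Illustrative sketch of the log-normal strategy-parameter update
 * (c = 1); the dimension and seed in main are example values.
 */
public class LogNormalUpdateSketch {

  /** Multiply each strategy parameter by a global and a per-component
   *  log-normal factor; the result stays strictly positive. */
  public static double[] update(final double[] s, final Random r) {
    final int n = s.length;
    final double tau0 = 1d / Math.sqrt(2d * n); // global learning rate
    final double tau = 1d / Math.sqrt(2d * Math.sqrt(n)); // local rate
    final double global = Math.exp(tau0 * r.nextGaussian());
    final double[] out = new double[n];
    for (int i = 0; i < n; i++) {
      out[i] = s[i] * global * Math.exp(tau * r.nextGaussian());
    }
    return out;
  }

  public static void main(final String[] args) {
    final Random r = new Random(1);
    double[] s = { 1d, 1d, 1d, 1d };
    s = update(s, r);
    for (final double v : s) {
      System.out.println(v > 0d); // strictly positive by construction
    }
  }
}
```

The operator in the listing additionally rejects values outside [min,max]; the sketch omits that clamping loop to isolate the update rule itself.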

56.2.1.1.2.2 Operators Suggested by Students The next two operators introduced here stem from the discussion in my lecture: Listing 56.27 is an operator which performs small uniformly distributed changes to the vector elements, whereas Listing 56.28 performs a similar modification using normally distributed random numbers.

Listing 56.27: Modify a real vector by adding uniformly distributed random numbers.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
9
10 /**
11 * A unary search operation (see Section 4.2 on page 83) for
12 * real vectors in a bounded subspace of the real vectors of dimension n.
13 * This operation takes an existing genotype (see
14 * Definition D4.2 on page 82) and adds a small uniformly distributed (see
15 * Section 53.4.1 on page 676,
16 * Section 53.3.1 on page 668) random number to each of its
17 * genes (see Definition D4.3 on page 82).
18 *
19 * @author Thomas Weise
20 */
21 public final class DoubleArrayAllUniformMutation extends
22 RealVectorMutation {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /**
28 * Create a new real-vector mutation operation
29 *
30 * @param mi
31 * the minimum value of the allele ( Definition D4.4 on page 83) of
32 * a gene
33 * @param ma
34 * the maximum value of the allele of a gene
35 */
36 public DoubleArrayAllUniformMutation(final double mi, final double ma) {
37 super(mi, ma);
38 }
39
40 /**
41 * This is a unary search operation for vectors of real numbers. It
42 * takes one existing genotype g (see Definition D4.2 on page 82) from the
43 * search space and produces one new genotype. This new element is a
44 * slightly modified version of g which is obtained by adding uniformly
45 * distributed random numbers to its elements.
46 *
47 * @param g
48 * the existing genotype in the search space from which a
49 * slightly modified copy should be created
50 * @param r
51 * the random number generator
52 * @return a new genotype
53 */
54 @Override
55 public final double[] mutate(final double[] g, final Random r) {
56 final double[] gnew;
57 final double strength;
58 int i;
59
60 i = g.length;
61
62 // create a new real vector of dimension n
63 gnew = new double[i];
64
65 // the mutation strength: here we use a constant which is small
66 // compared to the range min...max
67 strength = 0.001d * (this.max - this.min);
68
69 // set each gene Definition D4.3 on page 82 of gnew to ...
70 for (; (--i) >= 0;) {
71 do {
72 // the original allele ( Definition D4.4 on page 83) of the gene plus a
73 // random number uniformly distributed
74 // Section 53.4.1 on page 676 in
75 // [-0.5*strength,+0.5*strength]
76 gnew[i] = g[i] + ((r.nextDouble() - 0.5) * strength);
77 // and repeat this until the new allele falls into the specified
78 // boundaries
79 } while ((gnew[i] < this.min) || (gnew[i] > this.max));
80 }
81
82 return gnew;
83 }
84
85 /**
86 * Get the name of the optimization module
87 *
88 * @param longVersion
89 * true if the long name should be returned, false if the short
90 * name should be returned
91 * @return the name of the optimization module
92 */
93 @Override
94 public String getName(final boolean longVersion) {
95 if (longVersion) {
96 return super.getName(true);
97 }
98 return "DA-AUM"; //$NON-NLS-1$
99 }
100 }
Listing 56.28: Modify a real vector by adding normally distributed random numbers.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;
9
10 /**
11 * A unary search operation (see Section 4.2 on page 83) for
12 * real vectors in a bounded subspace of the real vectors of dimension n.
13 * This operation takes an existing genotype (see
14 * Definition D4.2 on page 82) and adds a small normally distributed random
15 * number to each of its genes (see Definition D4.3 on page 82). There are two
16 * main differences between normally distributed random numbers and
17 * uniformly distributed ones: 1) The normal distribution (see
18 * Section 53.4.2 on page 678) is unbounded whereas the uniform
19 * distribution has limits to each side (see
20 * Section 53.4.1 on page 676 and
21 * Section 53.3.1 on page 668) 2) The normal distribution gives
22 * the elements around its expected value (see
23 * Section 53.2.2 on page 661) a higher probability and a lower
24 * probability to elements distant from it, whereas all possible samples
25 * have the same probability in a uniform distribution.
26 *
27 * @author Thomas Weise
28 */
29 public final class DoubleArrayAllNormalMutation extends RealVectorMutation {
30
31 /** a constant required by Java serialization */
32 private static final long serialVersionUID = 1;
33
34 /**
35 * Create a new real-vector mutation operation
36 *
37 * @param mi
38 * the minimum value of the allele ( Definition D4.4 on page 83) of
39 * a gene
40 * @param ma
41 * the maximum value of the allele of a gene
42 */
43 public DoubleArrayAllNormalMutation(final double mi, final double ma) {
44 super(mi, ma);
45 }
46
47 /**
48 * This is an unary search operation for vectors of real numbers. It
49 * takes one existing genotype g (see Definition D4.2 on page 82) from the
50 * search space and produces one new genotype. This new element is a
51 * slightly modified version of g which is obtained by adding normally
52 * distributed random numbers to its elements.
53 *
54 * @param g
55 * the existing genotype in the search space from which a
56 * slightly modified copy should be created
57 * @param r
58 * the random number generator
59 * @return a new genotype
60 */
61 @Override
62 public final double[] mutate(final double[] g, final Random r) {
63 final double[] gnew;
64 final double strength;
65 int i;
66
67 i = g.length;
68
69 // create a new real vector of dimension n
70 gnew = new double[i];
71
72 // the mutation strength: here we use a constant which is small
73 // compared to the range min...max
74 strength = 0.001d * (this.max - this.min);
75
76 // set each gene ( Definition D4.3 on page 82) of gnew to ...
77 for (; (--i) >= 0;) {
78 do {
79 // the original allele ( Definition D4.4 on page 83) of the gene plus a
80 // random number normally distributed
81 // Section 53.4.2 on page 678 with N(0, strengthˆ2)
82 gnew[i] = g[i] + (r.nextGaussian() * strength);
83 // and repeat this until the new allele falls into the specified
84 // boundaries
85 } while ((gnew[i] < this.min) || (gnew[i] > this.max));
86 }
87
88 return gnew;
89 }
90
91 /**
92 * Get the name of the optimization module
93 *
94 * @param longVersion
95 * true if the long name should be returned, false if the short
96 * name should be returned
97 * @return the name of the optimization module
98 */
99 @Override
100 public String getName(final boolean longVersion) {
101 if (longVersion) {
102 return super.getName(true);
103 }
104 return "DA-NAM"; //$NON-NLS-1$
105 }
106 }

56.2.1.1.3 Binary Search Operations: recombine : G × G ↦ G

Recombination operations for real-valued strings, i.e., implementations of Listing 55.4 on page 709 for arrays of real numbers.

Listing 56.29: A weighted average crossover operator.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.binary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.BinarySearchOperation;
9 import org.goataa.spec.IBinarySearchOperation;
10
11 /**
12 * A weighted average crossover operator as defined in
13 * Section 29.3.4.5 on page 337.
14 *
15 * @author Thomas Weise
16 */
17 public final class DoubleArrayWeightedMeanCrossover extends
18 BinarySearchOperation<double[]> {
19
20 /** a constant required by Java serialization */
21 private static final long serialVersionUID = 1;
22
23 /**
24 * the globally shared instance of the double array weighted mean
25 * crossover
26 */
27 public static final IBinarySearchOperation<double[]>
DOUBLE_ARRAY_WEIGHTED_MEAN_CROSSOVER = new
DoubleArrayWeightedMeanCrossover();
28
29 /** Create a new real-vector crossover operation */
30 protected DoubleArrayWeightedMeanCrossover() {
31 super();
32 }
33
34 /**
35 * Perform a weighted average crossover as discussed in
36 * Section 29.3.4.5 on page 337.
37 *
38 * @param p1
39 * the first "parent" genotype
40 * @param p2
41 * the second "parent" genotype
42 * @param r
43 * the random number generator
44 * @return a new genotype
45 */
46 @Override
47 public final double[] recombine(final double[] p1, final double[] p2,
48 final Random r) {
49 final double[] gnew;
50 double gamma;
51 int i;
52
53 i = p1.length;
54 gnew = new double[i];
55
56 for (; (--i) >= 0;) {
57 gamma = r.nextDouble();
58 gamma = (gamma * gamma * gamma);
59 if (r.nextBoolean()) {
60 gamma = 0.5d + (0.5d * gamma);
61 } else {
62 gamma = 0.5d - (0.5d * gamma);
63 }
64 gnew[i] = ((p1[i] * gamma) + (p2[i] * (1d - gamma)));
65 }
66
67 return gnew;
68 }
69
70 /**
71 * Get the name of the optimization module
72 *
73 * @param longVersion
74 * true if the long name should be returned, false if the short
75 * name should be returned
76 * @return the name of the optimization module
77 */
78 @Override
79 public String getName(final boolean longVersion) {
80 if (longVersion) {
81 return super.getName(true);
82 }
83 return "DA-WMC"; //$NON-NLS-1$
84 }
85 }
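The effect of the cubed random weight used above can be seen in a stripped-down sketch (illustrative code outside the book's class hierarchy; the class name `WeightedMeanDemo` is made up): since γ always lies in [0, 1], each new gene is a convex combination of the two parent genes and therefore never leaves the interval they span.

```java
import java.util.Random;

// Stripped-down sketch of the weighted mean crossover from Listing 56.29
// (illustrative only, outside the goataa class hierarchy).
final class WeightedMeanDemo {

  static double[] recombine(final double[] p1, final double[] p2,
      final Random r) {
    final double[] gnew = new double[p1.length];
    for (int i = p1.length; (--i) >= 0;) {
      double gamma = r.nextDouble();
      // cubing biases the offset strongly towards 0, i.e., gamma towards 0.5
      gamma = gamma * gamma * gamma;
      gamma = r.nextBoolean() ? (0.5d + (0.5d * gamma))
                              : (0.5d - (0.5d * gamma));
      // convex combination: gnew[i] always lies between p1[i] and p2[i]
      gnew[i] = (p1[i] * gamma) + (p2[i] * (1d - gamma));
    }
    return gnew;
  }
}
```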

56.2.1.1.4 Ternary Search Operations: ternaryRecombine : G × G × G ↦ G

Recombination operations for real-valued strings, i.e., implementations of Listing 55.5 on page 710 for arrays of real numbers.

Listing 56.30: The binomial crossover operator from Differential Evolution.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.ternary;
5
6 import java.util.Random;
7
8 /**
9 * The ternary binomial crossover operator for Differential Evolution as
10 * defined in Section 33.2 on page 419.
11 *
12 * @author Thomas Weise
13 */
14 public final class DoubleArrayDEbin extends DEbase {
15
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /**
20 * Create a new real-vector ternary recombination operation
21 *
22 * @param mi
23 * the minimum value of the allele ( Definition D4.4 on page 83) of
24 * a gene
25 * @param ma
26 * the maximum value of the allele of a gene
27 * @param acr
28 * the gene crossover rate cr
29 * @param af
30 * the influence factor F
31 */
32 public DoubleArrayDEbin(final double mi, final double ma, double acr, double af)
{
33 super(mi, ma, acr, af);
34 }
35
36 /**
37 * Create a new real-vector ternary recombination operation
38 *
39 * @param dim
40 * the dimension of the search space
41 * @param mi
42 * the minimum value of the allele ( Definition D4.4 on page 83) of
43 * a gene
44 * @param ma
45 * the maximum value of the allele of a gene
46 */
47 public DoubleArrayDEbin(final int dim, final double mi, final double ma) {
48 super(mi, ma);
49 }
50
51 /**
52 * Perform the ternary Differential Evolution recombination method as
53 * discussed in
54 * Section 33.2 on page 419 using a
55 * normally distributed weight.
56 *
57 * @param p1
58 * the first "parent" genotype
59 * @param p2
60 * the second "parent" genotype
61 * @param p3
62 * the third parent genotype
63 * @param r
64 * the random number generator
65 * @return a new genotype
66 */
67 @Override
68 public double[] ternaryRecombine(final double[] p1, final double[] p2,
69 final double[] p3, final Random r) {
70 final double[] gnew;
71 double weight, xn;
72 boolean change;
73 int i, j, maxTrials;
74 final double f, cr;
75
76 i = p1.length;
77 gnew = new double[i];
78
79 f = this.getF();
80 cr = this.getCR();
81
82 for (maxTrials = 10; maxTrials > 0; maxTrials--) {
83 // choose a weight very close to F with a tiny variation
84 weight = f * (1d + (0.01d * r.nextGaussian()));
85 change = false;
86 j = r.nextInt(gnew.length);
87
88 // compute the new vector
89 for (i = gnew.length; (--i) >= 0;) {
90
91 // if we should perform the crossover in this dimension
92 if ((i == j) || (r.nextDouble() < cr)) {
93 // compute the new allele and bound it to the search space
94 xn = Math.min(this.max, Math.max(this.min, p3[i]
95 + (weight * (p1[i] - p2[i]))));
96
97 if (xn != p3[i]) {
98 change = true;
99 }
100 } else {
101 // if not, just copy the old value
102 xn = p3[i];
103 }
104
105 gnew[i] = xn;
106 }
107
108 if (change) {
109 return gnew;
110 }
111 }
112
113 return p3;
114 }
115
116 /**
117 * Get the name of the optimization module
118 *
119 * @param longVersion
120 * true if the long name should be returned, false if the short
121 * name should be returned
122 * @return the name of the optimization module
123 */
124 @Override
125 public String getName(final boolean longVersion) {
126 return "DA-1/rand/bin"; //$NON-NLS-1$
127 }
128 }
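The core of the operator above is the Differential Evolution mutant vector v = p3 + F · (p1 − p2). The following stripped-down sketch (illustrative only; the class name `DEMutantDemo` is made up and the code sits outside the goataa classes) computes this vector with bound clamping for the case where every gene is crossed over:

```java
// Minimal sketch of the Differential Evolution mutant vector computation
// underlying Listing 56.30 (illustrative only, outside the goataa classes):
// each gene of the new vector is p3[i] + F * (p1[i] - p2[i]), clamped to
// the bounds [min, max] of the search space.
final class DEMutantDemo {

  static double[] mutant(final double[] p1, final double[] p2,
      final double[] p3, final double f, final double min, final double max) {
    final double[] gnew = new double[p3.length];
    for (int i = p3.length; (--i) >= 0;) {
      // the scaled difference of p1 and p2 is added to the base vector p3
      final double xn = p3[i] + (f * (p1[i] - p2[i]));
      // clamp the result to the bounds of the search space
      gnew[i] = Math.min(max, Math.max(min, xn));
    }
    return gnew;
  }
}
```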

Listing 56.31: The exponential crossover operator from Differential Evolution.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.ternary;
5
6 import java.util.Random;
7
8 /**
9 * The ternary exponential crossover operator for Differential Evolution as
10 * defined in Section 33.2 on page 419.
11 *
12 * @author Thomas Weise
13 */
14 public final class DoubleArrayDEexp extends DEbase {
15
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /**
20 * Create a new real-vector ternary recombination operation
21 *
22 * @param mi
23 * the minimum value of the allele ( Definition D4.4 on page 83) of
24 * a gene
25 * @param ma
26 * the maximum value of the allele of a gene
27 * @param acr
28 * the gene crossover rate cr
29 * @param af
30 * the influence factor F
31 */
32 public DoubleArrayDEexp(final double mi, final double ma, double acr,
33 double af) {
34 super(mi, ma, acr, af);
35 }
36
37 /**
38 * Create a new real-vector ternary recombination operation
39 *
40 * @param mi
41 * the minimum value of the allele ( Definition D4.4 on page 83) of
42 * a gene
43 * @param ma
44 * the maximum value of the allele of a gene
45 */
46 public DoubleArrayDEexp(final double mi, final double ma) {
47 super(mi, ma);
48 }
49
50 /**
51 * Perform the ternary Differential Evolution recombination method as
52 * discussed in
53 * Section 33.2 on page 419 using a
54 * normally distributed weight.
55 *
56 * @param p1
57 * the first "parent" genotype
58 * @param p2
59 * the second "parent" genotype
60 * @param p3
61 * the third parent genotype
62 * @param r
63 * the random number generator
64 * @return a new genotype
65 */
66 @Override
67 public double[] ternaryRecombine(final double[] p1, final double[] p2,
68 final double[] p3, final Random r) {
69 final double[] gnew;
70 double weight, xn;
71 boolean change, copy;
72 int idx, i, maxTrials;
73 final double f, cr;
74
75 i = p1.length;
76 gnew = new double[i];
77
78 f = this.getF();
79 cr = this.getCR();
80
81 outer: for (maxTrials = 10; maxTrials > 0; maxTrials--) {
82 // choose a weight very close to F with a tiny variation
83 weight = f * (1d + (0.01d * r.nextGaussian()));
84 change = false;
85 idx = r.nextInt(gnew.length);
86 copy = false;
87
88 // compute the new vector
89 for (i = gnew.length; (--i) >= 0;) {
90
91 if (copy) {
92 xn = p3[i];
93 } else {
94 // compute the new allele and bound it to the search space
95 xn = Math.min(this.max, Math.max(this.min, p3[i]
96 + (weight * (p1[i] - p2[i]))));
97 if (xn != p3[i]) {
98 change = true;
99 }
100 }
101
102 gnew[i] = xn;
103
104 // should we toggle from mutation to copy mode?
105 if (!copy) {
106 if (r.nextDouble() < cr) {
107 // restart if this will result in the same result
108 if (!change) {
109 continue outer;
110 }
111 copy = true;
112 }
113 }
114
115 // the next index
116 idx = ((idx + 1) % gnew.length);
117 }
118
119 if (change) {
120 return gnew;
121 }
122
123 }
124
125 return p3;
126 }
127
128 /**
129 * Get the name of the optimization module
130 *
131 * @param longVersion
132 * true if the long name should be returned, false if the short
133 * name should be returned
134 * @return the name of the optimization module
135 */
136 @Override
137 public String getName(final boolean longVersion) {
138 return "DA-1/rand/exp"; //$NON-NLS-1$
139 }
140
141 }

56.2.1.1.5 N-ary Search Operations: searchOp : G^n ↦ G

N-ary search operations for real-valued strings, i.e., implementations of Listing 55.6 on page 711 for arrays of real numbers.

Listing 56.32: The dominant recombination operator defined in Section 30.3.1.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.nary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.NarySearchOperation;
9 import org.goataa.spec.INarySearchOperation;
10
11 /**
12 * The dominant recombination operator as defined in
13 * Algorithm 30.2 on page 363.
14 *
15 * @author Thomas Weise
16 */
17 public final class DoubleArrayDominantRecombination extends
18 NarySearchOperation<double[]> {
19
20 /** a constant required by Java serialization */
21 private static final long serialVersionUID = 1;
22
23 /**
24 * the globally shared instance of the double array dominant
25 * recombination operation
26 */
27 public static final INarySearchOperation<double[]>
DOUBLE_ARRAY_DOMINANT_RECOMBINATION = new DoubleArrayDominantRecombination();
28
29 /** Create a new real-vector n-ary search operation */
30 protected DoubleArrayDominantRecombination() {
31 super();
32 }
33
34 /**
35 * Perform the dominant recombination operator as defined in
36 * Algorithm 30.2 on page 363.
37 *
38 * @param gs
39 * the existing genotypes in the search space which will be
40 * combined to a new one
41 * @param r
42 * the random number generator
43 * @return a new genotype
44 */
45 @Override
46 public final double[] combine(final double[][] gs, final Random r) {
47 final double[] gnew;
48 final int dim;
49 int i;
50
51 if ((gs == null) || (gs.length <= 0)) {
52 return null;
53 }
54
55 dim = gs[0].length;
56 gnew = new double[dim];
57
58 for (i = dim; (--i) >= 0;) {
59 gnew[i] = gs[r.nextInt(gs.length)][i];
60 }
61
62 return gnew;
63 }
64
65 /**
66 * Get the name of the optimization module
67 *
68 * @param longVersion
69 * true if the long name should be returned, false if the short
70 * name should be returned
71 * @return the name of the optimization module
72 */
73 @Override
74 public String getName(final boolean longVersion) {
75 if (longVersion) {
76 return super.getName(true);
77 }
78 return "DA-domX"; //$NON-NLS-1$
79 }
80 }
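The essence of dominant recombination can be captured in a compact sketch (illustrative only; `DominantDemo` is a made-up name, outside the goataa class hierarchy): each gene of the child is copied from the corresponding gene of a uniformly chosen parent, so every allele of the child occurs at the same locus in at least one parent.

```java
import java.util.Random;

// Compact sketch of dominant recombination (Algorithm 30.2), illustrative
// only and outside the goataa class hierarchy.
final class DominantDemo {

  static double[] combine(final double[][] gs, final Random r) {
    final double[] gnew = new double[gs[0].length];
    for (int i = gnew.length; (--i) >= 0;) {
      // pick the donor parent for this gene uniformly at random
      gnew[i] = gs[r.nextInt(gs.length)][i];
    }
    return gnew;
  }
}
```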

Listing 56.33: The intermediate recombination operator defined in Section 30.3.2.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.real.nary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.spec.INarySearchOperation;
10
11 /**
12 * An average n-ary search operator which computes the average of n
13 * elements, i.e., performs intermediate recombination as defined in
14 * Algorithm 30.3 on page 363.
15 *
16 * @author Thomas Weise
17 */
18 public final class DoubleArrayIntermediateRecombination extends
19 OptimizationModule implements INarySearchOperation<double[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /**
25 * the globally shared instance of the double array intermediate
26 * recombination operation
27 */
28 public static final INarySearchOperation<double[]>
DOUBLE_ARRAY_INTERMEDIATE_RECOMBINATION = new
DoubleArrayIntermediateRecombination();
29
30 /** Create a new real-vector n-ary search operation */
31 protected DoubleArrayIntermediateRecombination() {
32 super();
33 }
34
35 /**
36 * Compute the average of n genotypes, i.e., perform
37 * intermediate recombination as defined in
38 * Algorithm 30.3 on page 363.
39 *
40 * @param gs
41 * the existing genotypes in the search space which will be
42 * combined to a new one
43 * @param r
44 * the random number generator
45 * @return a new genotype
46 */
47 public final double[] combine(final double[][] gs, final Random r) {
48 final double[] gnew;
49 double[] g;
50 final int dim;
51 final double gamma;
52 int i, j;
53
54 if ((gs == null) || (gs.length <= 0)) {
55 return null;
56 }
57
58 dim = gs[0].length;
59 gnew = new double[dim];
60 gamma = (1d / gs.length);
61
62 for (j = gs.length; (--j) >= 0;) {
63 g = gs[j];
64 for (i = dim; (--i) >= 0;) {
65 gnew[i] += (g[i] * gamma);
66 }
67 }
68
69 return gnew;
70 }
71
72 /**
73 * Get the name of the optimization module
74 *
75 * @param longVersion
76 * true if the long name should be returned, false if the short
77 * name should be returned
78 * @return the name of the optimization module
79 */
80 @Override
81 public String getName(final boolean longVersion) {
82 if (longVersion) {
83 return super.getName(true);
84 }
85 return "DA-IX"; //$NON-NLS-1$
86 }
87 }
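Intermediate recombination reduces to a gene-wise arithmetic mean, which the following sketch isolates (illustrative only; `IntermediateDemo` is a made-up name, outside the goataa class hierarchy):

```java
// Compact sketch of intermediate recombination (Algorithm 30.3),
// illustrative only and outside the goataa class hierarchy: each gene of
// the child is the arithmetic mean of the corresponding parent genes.
final class IntermediateDemo {

  static double[] combine(final double[][] gs) {
    final double[] gnew = new double[gs[0].length];
    for (final double[] g : gs) {
      for (int i = gnew.length; (--i) >= 0;) {
        // accumulate each parent's equally weighted contribution
        gnew[i] += g[i] / gs.length;
      }
    }
    return gnew;
  }
}
```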

56.2.1.2 Bit Strings

In this package we provide search operations for bit strings as used by the Genetic Algorithms discussed in Chapter 29.

56.2.1.2.1 Boolean Strings

A trivial implementation of the search operations for bit strings, intended only for teaching purposes. Bit strings can be represented much more compactly as an array of byte values in which we pack eight bits into each byte. If we use an array of Boolean values instead, we waste a lot of memory; for teaching purposes, however, this is acceptable.
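The byte-packed representation mentioned above can be sketched as follows (illustrative only; the class `PackedBitString` and its methods are made up and not part of the book's package):

```java
// Illustrative sketch of a bit string of length n packed into a byte
// array, using roughly one eighth of the memory a boolean[] would need.
final class PackedBitString {

  /** the packed bits, eight per byte */
  private final byte[] data;

  /** the number of bits */
  private final int n;

  /** Create a packed bit string of dimension n with all bits cleared. */
  PackedBitString(final int n) {
    this.n = n;
    this.data = new byte[(n + 7) >>> 3];
  }

  /** the number of bits in this string */
  int length() {
    return this.n;
  }

  /** Read the bit at index i. */
  boolean get(final int i) {
    return ((this.data[i >>> 3] >>> (i & 7)) & 1) != 0;
  }

  /** Flip the bit at index i, as a single bit-flip mutation would. */
  void flip(final int i) {
    this.data[i >>> 3] ^= (1 << (i & 7));
  }
}
```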
56.2.1.2.1.1 Nullary Search Operations: create : ∅ ↦ G

The bit string creation operations (nullary search operators).

Listing 56.34: A uniform random string generator.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.bits.booleans.nullary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.FixedLengthStringCreation;
9
10 /**
11 * A uniform bit string creator which builds random bit strings as defined
12 * in Algorithm 29.1 on page 328.
13 *
14 * @author Thomas Weise
15 */
16 public final class BooleanArrayUniformCreation extends
17 FixedLengthStringCreation<boolean[]> {
18
19 /** a constant required by Java serialization */
20 private static final long serialVersionUID = 1;
21
22 /**
23 * The uniform boolean string creator
24 *
25 * @param dim
26 * the dimension of the search space
27 */
28 public BooleanArrayUniformCreation(final int dim) {
29 super(dim);
30 }
31
32 /**
33 * This is a nullary search operation which creates strings of boolean
34 * values
35 *
36 * @param r
37 * the random number generator
38 * @return a new genotype (see Definition D4.2 on page 82)
39 */
40 @Override
41 public boolean[] create(final Random r) {
42 boolean[] bs;
43 int i;
44
45 i = this.n;
46 bs = new boolean[i];
47
48 for (; (--i) >= 0;) {
49 bs[i] = r.nextBoolean();
50 }
51
52 return bs;
53 }
54
55 /**
56 * Get the name of the optimization module
57 *
58 * @param longVersion
59 * true if the long name should be returned, false if the short
60 * name should be returned
61 * @return the name of the optimization module
62 */
63 @Override
64 public String getName(final boolean longVersion) {
65 if (longVersion) {
66 return super.getName(true);
67 }
68 return "BA-U"; //$NON-NLS-1$
69 }
70 }

56.2.1.2.1.2 Unary Search Operations: mutate : G ↦ G

Mutation operators (unary search operators) for bit strings.

Listing 56.35: A single bit-flip mutator.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.bits.booleans.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.UnarySearchOperation;
9 import org.goataa.spec.IUnarySearchOperation;
10
11 /**
12 * A unary search operation which flips exactly one bit of a bit string, as
13 * sketched in Fig. 29.2.a on page 330 and discussed
14 * in Section 29.3.2.1 on page 330.
15 *
16 * @author Thomas Weise
17 */
18 public final class BooleanArraySingleBitFlipMutation extends
19 UnarySearchOperation<boolean[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** the globally shared instance of the single bit flip mutation */
25 public static final IUnarySearchOperation<boolean[]>
BOOLEAN_ARRAY_SINGLE_BIT_FLIP_MUTATION = new
BooleanArraySingleBitFlipMutation();
26
27 /** Create a new bit-string single bit mutation operation */
28 protected BooleanArraySingleBitFlipMutation() {
29 super();
30 }
31
32 /**
33 * The single-bit flip unary search operation (mutator) sketched in
34 * Fig. 29.2.a on page 330 and discussed in
35 * Section 29.3.2.1 on page 330.
36 *
37 * @param g
38 * the existing genotype in the search space from which a
39 * slightly modified copy should be created
40 * @param r
41 * the random number generator
42 * @return a new genotype
43 */
44 @Override
45 public final boolean[] mutate(final boolean[] g, final Random r) {
46 final boolean[] gnew;
47 int i;
48
49 gnew = g.clone();
50 i = r.nextInt(gnew.length);
51 gnew[i] = (!(gnew[i]));
52
53 return gnew;
54 }
55
56 /**
57 * Get the name of the optimization module
58 *
59 * @param longVersion
60 * true if the long name should be returned, false if the short
61 * name should be returned
62 * @return the name of the optimization module
63 */
64 @Override
65 public String getName(final boolean longVersion) {
66 if (longVersion) {
67 return super.getName(true);
68 }
69 return "BA-SBF"; //$NON-NLS-1$
70 }
71 }

56.2.1.2.1.3 Binary Search Operations: recombine : G × G ↦ G

Recombination operators (binary search operators) for bit strings.

Listing 56.36: The uniform crossover operator.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.bits.booleans.binary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.BinarySearchOperation;
9 import org.goataa.spec.IBinarySearchOperation;
10
11 /**
12 * A binary search operation which mixes the bits of two parent genotypes
13 * sketched in Fig. 29.5.d on page 335 and discussed
14 * in Section 29.3.4.4 on page 337.
15 *
16 * @author Thomas Weise
17 */
18 public final class BooleanArrayUniformCrossover extends
19 BinarySearchOperation<boolean[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** the globally shared instance of the boolean array uniform crossover */
25 public static final IBinarySearchOperation<boolean[]>
BOOLEAN_ARRAY_UNIFORM_CROSSOVER = new BooleanArrayUniformCrossover();
26
27 /** Create a new bit-string uniform crossover operation */
28 protected BooleanArrayUniformCrossover() {
29 super();
30 }
31
32 /**
33 * The binary search operation which mixes the bits of two parent
34 * genotypes sketched in Fig. 29.5.d on page 335
35 * and discussed in Section 29.3.4.4 on page 337.
36 *
37 * @param p1
38 * the first "parent" genotype
39 * @param p2
40 * the second "parent" genotype
41 * @param r
42 * the random number generator
43 * @return a new genotype
44 */
45 @Override
46 public final boolean[] recombine(final boolean[] p1, final boolean[] p2,
47 final Random r) {
48 final boolean[] gnew;
49 int i;
50
51 gnew = p1.clone();
52 for (i = Math.min(gnew.length, p2.length); (--i) >= 0;) {
53 if (r.nextBoolean()) {
54 gnew[i] = p2[i];
55 }
56 }
57
58 return gnew;
59 }
60
61 /**
62 * Get the name of the optimization module
63 *
64 * @param longVersion
65 * true if the long name should be returned, false if the short
66 * name should be returned
67 * @return the name of the optimization module
68 */
69 @Override
70 public String getName(final boolean longVersion) {
71 if (longVersion) {
72 return super.getName(true);
73 }
74 return "BA-UX"; //$NON-NLS-1$
75 }
76 }

56.2.1.3 Permutations

In this package, we provide permutation-based search operators. A permutation of n objects can be expressed as an array of the first n natural numbers, i.e., the numbers from 0 to n − 1. It can hence be stored in an integer array if we ensure that each number from 0 to n − 1 always occurs exactly once in the array. Search operations which have this feature are presented in this package.
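This invariant can be checked with a simple helper (illustrative only; `PermutationCheck` is a made-up name and not part of the book's package): a genotype is a valid permutation if and only if each number from 0 to n − 1 occurs exactly once.

```java
// Illustrative helper, outside the goataa package: check whether an
// integer array is a valid permutation of the numbers 0..n-1.
final class PermutationCheck {

  /** Returns true if g contains each of 0..g.length-1 exactly once. */
  static boolean isPermutation(final int[] g) {
    final boolean[] seen = new boolean[g.length];
    for (final int v : g) {
      // out-of-range or duplicate values violate the invariant
      if ((v < 0) || (v >= g.length) || seen[v]) {
        return false;
      }
      seen[v] = true;
    }
    return true;
  }
}
```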

56.2.1.3.1 Nullary Search Operations: create : ∅ ↦ G

Here we present creation operations for permutations. Such nullary operators create integer arrays consisting of n elements, each of which is a unique number from 0 to n − 1. They furthermore should have the feature that every possible permutation has the same chance of being created. Listing 56.37 creates a new permutation according to Algorithm 29.2.

Listing 56.37: Create uniformly distributed permutations.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.permutation.nullary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.strings.FixedLengthStringCreation;
9
10 /**
11 * A nullary search operation (see Section 4.2 on page 83) for
12 * permutations of n elements which works according to
13 * Algorithm 29.2 on page 329.
14 *
15 * @author Thomas Weise
16 */
17 public final class IntPermutationUniformCreation extends
18 FixedLengthStringCreation<int[]> {
19
20 /** a constant required by Java serialization */
21 private static final long serialVersionUID = 1;
22
23 /** a temporary variable */
24 private transient int[] temp;
25
26 /**
27 * Instantiate the permutation creation operation
28 *
29 * @param dim
30 * the number of objects to permutate
31 */
32 public IntPermutationUniformCreation(final int dim) {
33 super(dim);
34 }
35
36 /**
37 * This operation creates a random permutation according to
38 * Algorithm 29.2 on page 329.
39 *
40 * @param r
41 * the random number generator
42 * @return a new random permutation of the numbers 0..n-1
43 */
44 @Override
45 public final int[] create(final Random r) {
46 int[] g, tmp;
47 int i, j;
48
49 i = this.n;
50 g = new int[i];
51
52 // get the temporary variable
53 tmp = this.temp;
54 if (tmp == null) {
55 this.temp = tmp = new int[i];
56 }
57
58 // initialize the temporary variable by setting the value of the ith
59 // element to i
60 for (; (--i) >= 0;) {
61 tmp[i] = i;
62 }
63
64 i = this.n;
65 // repeat for all genes in the genotype
66 while (i > 0) {
67 // select the next number for the permutation from the i available
68 // ones randomly by choosing its index uniformly distributed in 0..i-1
69 j = r.nextInt(i);
70
71 // in each step, there is one number less
72 i--;
73
74 // put the selected number from index j to index i in the genotype
75 g[i] = tmp[j];
76
77 // delete the jth number from the temporary variable by replacing it
78 // with the last number in the array. Since we decreased i already,
79 // that last number could be accessed in the next iteration anyway
80 tmp[j] = tmp[i];
81 }
82
83 return g;
84 }
85
86 /**
87 * Get the name of the optimization module
88 *
89 * @param longVersion
90 * true if the long name should be returned, false if the short
91 * name should be returned
92 * @return the name of the optimization module
93 */
94 @Override
95 public String getName(final boolean longVersion) {
96 if (longVersion) {
97 return super.getName(true);
98 }
99 return "IP-U"; //$NON-NLS-1$
100 }
101 }

56.2.1.3.2 Unary Search Operations: mutate : G ↦ G

Here we provide mutation operations for permutations. Such unary operators take one
integer array as parameter and return a slightly modified version. The integer arrays have
length n and represent permutations of the first n natural numbers, i.e., from 0 to n−1. The
mutation operations hence must ensure that no duplicate alleles occur and that no number
is lost. They may achieve this by simply swapping the elements of the arrays. Listing 56.38
switches elements according to Algorithm 29.4 on page 333 whereas Listing 56.39 is an
implementation of Algorithm 29.5 on page 334.

Listing 56.38: Modify a permutation by swapping two genes.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.permutation.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.UnarySearchOperation;
9 import org.goataa.spec.IUnarySearchOperation;
10
11 /**
12 * A unary search operation (see Section 4.2 on page 83) for
13 * permutations of n elements expressed as integer arrays of length n which
14 * works according to Algorithm 29.4 on page 333. This
15 * operation takes an existing genotype (see Definition D4.2 on page 82),
16 * picks two different loci ( Definition D4.5 on page 83) uniformly distributed
17 * in 0..n-1, and swaps the alleles ( Definition D4.4 on page 83) of the genes
18 * ( Definition D4.3 on page 82) at these positions.
19 *
20 * @author Thomas Weise
21 */
22 public final class IntPermutationSingleSwapMutation extends
23 UnarySearchOperation<int[]> {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /**
29 * the globally shared instance of the int permutation single swap
30 * mutation
31 */
32 public static final IUnarySearchOperation<int[]>
INT_PERMUTATION_SINGLE_SWAP_MUTATION = new
IntPermutationSingleSwapMutation();
33
34 /** Create a new permutation mutation operation */
35 protected IntPermutationSingleSwapMutation() {
36 super();
37 }
38
39 /**
40 * This is an unary search operation for permutations as defined in
41 * Algorithm 29.4 on page 333. It takes one existing
42 * genotype g (see Definition D4.2 on page 82) from the search space and
43 * produces one new genotype. This new element is a slightly modified
44 * version of g which is obtained by swapping two elements in the
45 * permutation.
46 *
47 * @param g
48 * the existing genotype in the search space from which a
49 * slightly modified copy should be created
50 * @param r
51 * the random number generator
52 * @return a new genotype
53 */
54 @Override
55 public final int[] mutate(final int[] g, final Random r) {
56 final int[] gnew;
57 int i, j, t;
58
59 // copy g
60 gnew = g.clone();
61
62 // draw the first index i
63 i = r.nextInt(g.length);
64
65 // draw a second index j which is different from i
66 do {
67 j = r.nextInt(g.length);
68 } while (i == j);
69
70 t = gnew[i];
71 gnew[i] = gnew[j];
72 gnew[j] = t;
73
74 return gnew;
75 }
76
77 /**
78 * Get the name of the optimization module
79 *
80 * @param longVersion
81 * true if the long name should be returned, false if the short
82 * name should be returned
83 * @return the name of the optimization module
84 */
85 @Override
86 public String getName(final boolean longVersion) {
87 if (longVersion) {
88 return super.getName(true);
89 }
90 return "IP-SS"; //$NON-NLS-1$
91 }
92 }

Listing 56.39: Modify a permutation by repeatedly swapping two genes.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.strings.permutation.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.UnarySearchOperation;
9 import org.goataa.spec.IUnarySearchOperation;
10
11 /**
12 * A unary search operation (see Section 4.2 on page 83) for
13 * permutations of n elements expressed as integer arrays of length n which
14 * works according to Algorithm 29.5 on page 334. This
15 * operation takes an existing genotype (see Definition D4.2 on page 82),
16 * and iteratively picks two different loci ( Definition D4.5 on page 83)
17 * uniformly distributed in 0..n-1, and swaps the alleles
18 * ( Definition D4.4 on page 83) of the genes ( Definition D4.3 on page 82)
19 * at these positions. This is repeated a number of times which is more or
20 * less exponentially distributed.
21 *
22 * @author Thomas Weise
23 */
24 public final class IntPermutationMultiSwapMutation extends
25 UnarySearchOperation<int[]> {
26
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /**
31 * the globally shared instance of the int permutation multi swap
32 * mutation
33 */
34 public static final IUnarySearchOperation<int[]>
INT_PERMUTATION_MULTI_SWAP_MUTATION = new IntPermutationMultiSwapMutation();
35
36 /** Create a new permutation mutation operation */
37 protected IntPermutationMultiSwapMutation() {
38 super();
39 }
40
41 /**
42 * This is an unary search operation for permutations according to
43 * Algorithm 29.5 on page 334. It takes one existing
44 * genotype g (see Definition D4.2 on page 82) from the search space and
45 * produces one new genotype. This new element is a slightly modified
46 * version of g which is obtained by repeatedly swapping two elements in
47 * the permutation.
48 *
49 * @param g
50 * the existing genotype in the search space from which a
51 * slightly modified copy should be created
52 * @param r
53 * the random number generator
54 * @return a new genotype
55 */
56 @Override
57 public final int[] mutate(final int[] g, final Random r) {
58 final int[] gnew;
59 int i, j, t;
60
61 // copy g
62 gnew = g.clone();
63
64 do {
65
66 // draw the first index i
67 i = r.nextInt(g.length);
68
69 // draw a second index j which is different from i
70 do {
71 j = r.nextInt(g.length);
72 } while (i == j);
73
74 t = gnew[i];
75 gnew[i] = gnew[j];
76 gnew[j] = t;
77
78 // repeat with 50% probability -> roughly exponentially distributed
79 } while (r.nextBoolean());
80
81 return gnew;
82 }
83
84 /**
85 * Get the name of the optimization module
86 *
87 * @param longVersion
88 * true if the long name should be returned, false if the short
89 * name should be returned
90 * @return the name of the optimization module
91 */
92 @Override
93 public String getName(final boolean longVersion) {
94 if (longVersion) {
95 return super.getName(true);
96 }
97 return "IP-MS"; //$NON-NLS-1$
98 }
99 }
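Both swap mutators preserve the defining invariant of the search space: since they only exchange alleles, the result is always again a valid permutation. The following stand-alone sketch (the class and method names are illustrative and not part of the goataa package) mirrors the single-swap operator shown above and checks this invariant:

```java
import java.util.Arrays;
import java.util.Random;

/** An illustrative stand-alone version of the single-swap mutation. */
public final class SwapMutationDemo {

  /** Swap the alleles at two distinct, uniformly chosen loci in a copy of g. */
  static int[] singleSwap(final int[] g, final Random r) {
    final int[] gnew = g.clone(); // never modify the parent genotype
    final int i = r.nextInt(g.length);
    int j;
    do { // re-draw j until it differs from i
      j = r.nextInt(g.length);
    } while (i == j);
    final int t = gnew[i];
    gnew[i] = gnew[j];
    gnew[j] = t;
    return gnew;
  }

  public static void main(final String[] args) {
    final int[] g = { 0, 1, 2, 3, 4, 5 };
    final int[] gnew = singleSwap(g, new Random(7));

    // the offspring must still be a permutation of 0..n-1 ...
    final int[] sorted = gnew.clone();
    Arrays.sort(sorted);
    if (!Arrays.equals(sorted, g)) {
      throw new AssertionError("not a permutation");
    }
    // ... and must differ from its parent in exactly two loci
    int diff = 0;
    for (int k = 0; k < g.length; k++) {
      if (g[k] != gnew[k]) {
        diff++;
      }
    }
    if (diff != 2) {
      throw new AssertionError("expected exactly two changed loci");
    }
    System.out.println("ok");
  }
}
```

Since a permutation contains each value exactly once, a single swap always changes exactly two loci; the multi-swap variant simply repeats this step a geometrically distributed number of times.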

56.2.2 Operations for Tree-based Search Spaces

Reproduction operations for (strongly-typed) tree genomes, as introduced in Section 31.3
on page 386.

56.2.2.1 Utility and Base Classes

Listing 56.40: The base class for search operations for trees.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.searchSpaces.trees.Node;
10 import org.goataa.impl.searchSpaces.trees.NodeType;
11 import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
12
13 /**
14 * A base class for tree-based search operations
15 *
16 * @author Thomas Weise
17 */
18 public class TreeOperation extends OptimizationModule {
19 /** a constant required by Java serialization */
20 private static final long serialVersionUID = 1;
21
22 /** the maximum depth */
23 public final int maxDepth;
24
25 /**
26 * Create a new tree operation
27 *
28 * @param md
29 * the maximum tree depth
30 */
31 protected TreeOperation(final int md) {
32 super();
33 this.maxDepth = Math.max(2, md);
34 }
35
36 /**
37 * Get the full configuration which holds all the data necessary to
38 * describe this object.
39 *
40 * @param longVersion
41 * true if the long version should be returned, false if the
42 * short version should be returned
43 * @return the full configuration
44 */
45 @Override
46 public String getConfiguration(final boolean longVersion) {
47 return (((longVersion) ? "maxDepth" : //$NON-NLS-1$
48 "md") + this.maxDepth); //$NON-NLS-1$
49 }
50
51 /**
52 * Create a sub-tree of the specified maximum depth. This function
53 * facilitates both an implementation of the full method
54 * (Algorithm 31.1 on page 390) and one of the grow method
55 * (Algorithm 31.2 on page 391), which can be chosen by setting the
56 * parameter 'full' appropriately. It can be used to create trees during
57 * the random population initialization phase or during mutation steps.
58 *
59 * @param types
60 * the node types available for creating the tree
61 * @param maxDepth
62 * the maximum depth of the tree
63 * @param full
64 * Should we construct a sub-tree according to the full method?
65 * If full is false, grow is used.
66 * @param r
67 * the random number generator
68 * @return the new tree
69 * @param <NT>
70 * the node type
71 */
72 @SuppressWarnings("unchecked")
73 public static final <NT extends Node<NT>> NT createTree(
74 final NodeTypeSet<NT> types, final int maxDepth, final boolean full,
75 final Random r) {
76 NodeType<NT, NT> t;
77 NT[] x;
78 int i;
79
80 t = null;
81 if (maxDepth <= 1) {
82 t = types.randomTerminalType(r);
83 } else {
84 if (full) {
85 t = types.randomNonTerminalType(r);
86 }
87 if (t == null) {
88 t = types.randomType(r);
89 }
90 }
91 if (t.isTerminal()) {
92 return types.randomTerminalType(r).instantiate(null, r);
93 }
94
95 i = t.getChildCount();
96 x = ((NT[]) (new Node[i]));
97 for (; (--i) >= 0;) {
98 x[i] = createTree(t.getChildTypes(i), maxDepth - 1, full, r);
99 }
100 return t.instantiate(x, r);
101 }
102
103 }
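The distinction between the full and the grow scheme in createTree can be seen more easily without the strongly-typed machinery. The following minimal, untyped sketch (all names are illustrative; a fixed arity of 2 stands in for the node-type system) builds trees both ways: full always branches until the depth limit, grow may place terminals early:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A minimal, untyped sketch of the full and grow tree builders. */
public final class FullGrowDemo {

  /** A plain n-ary tree node used only for this illustration. */
  static final class T {
    final List<T> children = new ArrayList<>();
  }

  /**
   * Build a tree of at most maxDepth levels. With full=true, inner nodes
   * are used on every level above the last (the full method); with
   * full=false, terminals may appear on any level (the grow method).
   */
  static T build(final int maxDepth, final boolean full, final Random r) {
    final T node = new T();
    if (maxDepth > 1) {
      // pick an arity: full always branches, grow branches with p = 0.5
      final int arity = (full || r.nextBoolean()) ? 2 : 0;
      for (int i = 0; i < arity; i++) {
        node.children.add(build(maxDepth - 1, full, r));
      }
    }
    return node;
  }

  /** the depth (number of levels) of a tree */
  static int depth(final T n) {
    int d = 0;
    for (final T c : n.children) {
      d = Math.max(d, depth(c));
    }
    return d + 1;
  }

  public static void main(final String[] args) {
    final Random r = new Random(1);
    // the full method always reaches the maximum depth exactly
    if (depth(build(5, true, r)) != 5) {
      throw new AssertionError();
    }
    // the grow method never exceeds it
    if (depth(build(5, false, r)) > 5) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```

With full=true the recursion only terminates at the depth limit, so the tree depth equals maxDepth; with full=false a variety of shapes results. Ramped half-and-half (Listing 56.42) simply mixes both schemes over a range of depth limits.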

Listing 56.41: A utility class to construct random paths in a tree, i.e., to select random
nodes.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.searchSpaces.trees.Node;
10
11 /**
12 * This class allows us to select a path in a tree. When applying a
13 * reproduction operation, it is not only necessary to find a tree node
14 * within the tree, but also to find the complete path to this node. This
15 * class allows us to perform such a selection. It also allows us to
16 * replace the end node of such a path, hence creating a completely new
17 * tree: Since genotypes must not directly be changed (because of possible
18 * side-effects if an individual is selected more than once), all
19 * modifications (such as the replacement of a node) will lead to the
20 * creation of new trees.
21 *
22 * @param <NT>
23 * the node type
24 * @author Thomas Weise
25 */
26 public class TreePath<NT extends Node<NT>> extends OptimizationModule {
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /** the path */
31 private NT[] path;
32
33 /** the path indexes */
34 private int[] pathidx;
35
36 /** the length */
37 private int len;
38
39 /** Create a tree path */
40 @SuppressWarnings("unchecked")
41 public TreePath() {
42 super();
43 this.path = ((NT[]) (new Node[16]));
44 this.pathidx = new int[16];
45 }
46
47 /**
48 * Get the path length
49 *
50 * @return the path length
51 */
52 public final int size() {
53 return this.len;
54 }
55
56 /**
57 * Get the element at the specified index
58 *
59 * @param index
60 * the index
61 * @return the element
62 */
63 public final NT get(final int index) {
64 return this.path[index];
65 }
66
67 /**
68 * Get the index of the next child in the path
69 *
70 * @param index
71 * the index of the current parent
72 * @return the index of the next child in the path
73 */
74 public final int getChildIndex(final int index) {
75 return this.pathidx[index];
76 }
77
78 /**
79 * Create a random path through the tree. Each node in the tree is
80 * selected with exactly the same probability.
81 *
82 * @param node
83 * the node
84 * @param r
85 * the randomizer
86 */
87 public final void randomPath(final NT node, final Random r) {
88 int i, w, w2;
89 NT cur, next;
90
91 this.len = 0;
92 cur = node;
93 w = r.nextInt(node.getWeight());
94
95 for (;;) {
96
97 if (w <= 0) {
98 this.addPath(cur, -1);
99 return;
100 }
101
102 // iterate over the children
103 innerLoop: for (i = cur.size(); (--i) >= 0;) {
104 next = cur.get(i);
105
106 w2 = next.getWeight();
107 if (w2 >= w) {
108 this.addPath(cur, i);
109 cur = next;
110 break innerLoop;
111 }
112 w -= w2;
113 }
114
115 // account for the currently selected node
116 w--;
117 }
118 }
119
120 /**
121 * An internal method used to add an element to the path.
122 *
123 * @param node
124 * the node
125 * @param pi
126 * the parent index
127 */
128 @SuppressWarnings("unchecked")
129 private final void addPath(final NT node, final int pi) {
130 NT[] ppp;
131 int l;
132 int[] idx;
133
134 ppp = this.path;
135 idx = this.pathidx;
136 l = this.len;
137
138 if (l >= ppp.length) {
139 ppp = (NT[]) (new Node[l << 1]);
140 System.arraycopy(this.path, 0, ppp, 0, l);
141 this.path = ppp;
142
143 idx = new int[l << 1];
144 System.arraycopy(this.pathidx, 0, idx, 0, l);
145 this.pathidx = idx;
146 }
147
148 idx[l] = pi;
149 ppp[l++] = node;
150
151 this.len = l;
152 }
153
154 /**
155 * Replace the end of this path with the new node. Since tree nodes are
156 * immutable, this will result in the creation of a completely new tree
157 * and a new root node. The path is updated while this operation is
158 * performed, i.e., it is still valid afterwards.
159 *
160 * @param newNode
161 * the new node
162 * @return the resulting new root node of the path
163 */
164 public final NT replaceEnd(final NT newNode) {
165 NT[] ppp;
166 int l;
167 int[] idx;
168 NT x;
169
170 l = this.len;
171 ppp = this.path;
172 idx = this.pathidx;
173 x = newNode;
174
175 l = this.len;
176 ppp[--l] = x;
177 for (; (--l) >= 0;) {
178 ppp[l] = x = ppp[l].setChild(x, idx[l]);
179 }
180
181 return x;
182 }
183 }
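The weight-based descent in randomPath is worth isolating: because every node stores weight = 1 + sum of its children's weights, drawing w uniformly from [0, weight(root)) and descending as below selects every node of the tree with the same probability. This sketch (illustrative names, independent of the goataa classes) makes the descent deterministic in w so that uniformity can be checked exhaustively:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Illustrates the weight-based uniform node selection of TreePath.randomPath. */
public final class WeightedPickDemo {

  /** A plain n-ary node used only for this illustration. */
  static final class N {
    final List<N> children = new ArrayList<>();
    final int weight; // 1 + sum of the child weights

    N(final N... cs) {
      int w = 1;
      for (final N c : cs) {
        this.children.add(c);
        w += c.weight;
      }
      this.weight = w;
    }
  }

  /** Deterministic core: descend through the tree according to the drawn value w. */
  static N pick(final N root, int w) {
    N cur = root;
    for (;;) {
      if (w <= 0) {
        return cur; // select the current node
      }
      for (final N c : cur.children) {
        if (c.weight >= w) { // the target lies in this subtree: descend
          cur = c;
          break;
        }
        w -= c.weight; // skip the whole subtree
      }
      w--; // account for the node we just passed
    }
  }

  /** Select a node of the tree uniformly at random. */
  static N pickRandom(final N root, final Random r) {
    return pick(root, r.nextInt(root.weight));
  }

  public static void main(final String[] args) {
    final N a = new N(), b = new N();
    final N root = new N(a, b); // weight(root) == 3
    // every value of w selects a different node -> a uniform draw of w
    // yields a uniform choice among the three nodes
    if (pick(root, 0) != root) throw new AssertionError();
    if (pick(root, 1) != a) throw new AssertionError();
    if (pick(root, 2) != b) throw new AssertionError();
    System.out.println("ok");
  }
}
```

On the three-node example, each of the values w = 0, 1, 2 selects a different node, so drawing w uniformly selects each node with probability 1/3.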

56.2.2.2 Nullary Search Operations for Trees

Tree creation operations as discussed in Section 31.3.2 on page 389 (but for strongly-typed
trees).

Listing 56.42: The ramped-half-and-half creation procedure.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees.nullary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.trees.TreeOperation;
9 import org.goataa.impl.searchSpaces.trees.Node;
10 import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
11 import org.goataa.spec.INullarySearchOperation;
12
13 /**
14 * A tree creator using the ramped-half-and-half method as described in
15 * Section 31.3.2.3 on page 390.
16 *
17 * @param <NT>
18 * the node type
19 * @author Thomas Weise
20 */
21 public class TreeRampedHalfAndHalf<NT extends Node<NT>> extends TreeOperation
22 implements INullarySearchOperation<NT> {
23 /** a constant required by Java serialization */
24 private static final long serialVersionUID = 1;
25
26 /** the types to choose from */
27 public final NodeTypeSet<NT> types;
28
29 /**
30 * Create a new ramped-half-and-half
31 *
32 * @param md
33 * the maximum tree depth
34 * @param ptypes
35 * the types
36 */
37 public TreeRampedHalfAndHalf(final NodeTypeSet<NT> ptypes, final int md) {
38 super(md);
39 this.types = ptypes;
40 }
41
42 /**
43 * Create a new tree according to the ramped-half-and-half method given
44 * in Algorithm 31.3 on page 391.
45 *
46 * @param r
47 * the random number generator
48 * @return a new genotype (see Definition D4.2 on page 82)
49 */
50 public NT create(final Random r) {
51
52 return TreeOperation.createTree(this.types,//
53 (2 + r.nextInt(this.maxDepth)), r.nextBoolean(), r);
54 }
55
56 /**
57 * Get the full configuration which holds all the data necessary to
58 * describe this object.
59 *
60 * @param longVersion
61 * true if the long version should be returned, false if the
62 * short version should be returned
63 * @return the full configuration
64 */
65 @Override
66 public String getConfiguration(final boolean longVersion) {
67 return super.getConfiguration(longVersion) + ','
68 + this.types.toString(longVersion);
69 }
70 }

56.2.2.3 Unary Search Operations for Trees

Unary tree reproduction operations as discussed in Section 31.3.4 on page 393.

Listing 56.43: A simple sub-tree replacement mutator.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees.unary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.trees.TreeOperation;
9 import org.goataa.impl.searchOperations.trees.TreePath;
10 import org.goataa.impl.searchSpaces.trees.Node;
11 import org.goataa.spec.IUnarySearchOperation;
12
13 /**
14 * A simple mutation operation for a given tree which implants a randomly
15 * created subtree into the parent, thereby replacing a randomly picked node in
16 * the parent. This operation basically proceeds according to the ideas
17 * discussed in Section 31.3.4.1 on page 393 with the extension
18 * that it also respects the type system of the strongly-typed GP system.
19 *
20 * @author Thomas Weise
21 */
22 public class TreeMutator extends TreeOperation implements
23 IUnarySearchOperation<Node<?>> {
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /** the internal path */
28 private final TreePath<?> path;
29
30 /**
31 * Create a new tree mutation operation
32 *
33 * @param md
34 * the maximum tree depth
35 */
36 @SuppressWarnings("unchecked")
37 public TreeMutator(final int md) {
38 super(md);
39 this.path = new TreePath();
40 }
41
42 /**
43 * This is the unary search operation. It takes one existing genotype g
44 * (see Definition D4.2 on page 82) from the genome and produces one new
45 * element in the search space. This new element is usually a copy of the
46 * existing element g which is slightly modified in a random manner. For
47 * this purpose, we pass a random number generator in as parameter so we
48 * can use the same random number generator in all parts of an
49 * optimization algorithm.
50 *
51 * @param g
52 * the existing genotype in the search space from which a
53 * slightly modified copy should be created
54 * @param r
55 * the random number generator
56 * @return a new genotype
57 */
58 @SuppressWarnings("unchecked")
59 public Node<?> mutate(final Node<?> g, final Random r) {
60 int trials, len, idx;
61 Node sel, nu, par;
62 TreePath p;
63
64 p = this.path;
65 for (trials = 100; trials > 0; trials--) {
66
67 p.randomPath(g, r);
68 len = p.size() - 1;
69
70 sel = p.get(len);
71 if (sel.isTerminal()) {
72 nu = sel.getType().mutate(sel, r);
73 } else {
74 nu = sel;
75 }
76 if ((nu == sel) && (len > 0)) {
77 len--;
78 idx = p.getChildIndex(len);
79 par = p.get(len);
80 nu = TreeOperation.createTree(par.getType().getChildTypes(idx),
81 this.maxDepth - len + 1, false, r);
82 if (nu == null) {
83 nu = sel;
84 }
85 }
86
87 if (nu != sel) {
88 return p.replaceEnd(nu);
89 }
90 }
91
92 return g;
93 }
94 }
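The mutation strategy above, picking a random node and replacing it by a freshly created subtree whose depth is capped by the remaining budget, can be sketched without node types. All names below are illustrative; a binary arity and a fixed replacement probability stand in for the type-aware logic of TreeMutator:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A minimal, untyped sketch of subtree-replacement mutation. */
public final class SubtreeMutationDemo {

  /** a plain immutable n-ary node used only for this illustration */
  static final class N {
    final List<N> children;

    N(final List<N> cs) {
      this.children = cs;
    }
  }

  /** grow a random tree of depth at most maxDepth */
  static N grow(final int maxDepth, final Random r) {
    final List<N> cs = new ArrayList<>();
    if ((maxDepth > 1) && r.nextBoolean()) {
      cs.add(grow(maxDepth - 1, r));
      cs.add(grow(maxDepth - 1, r));
    }
    return new N(cs);
  }

  /** the depth (number of levels) of a tree */
  static int depth(final N n) {
    int d = 0;
    for (final N c : n.children) {
      d = Math.max(d, depth(c));
    }
    return d + 1;
  }

  /** Mutate g by replacing a random node with a depth-capped fresh subtree. */
  static N mutate(final N g, final int maxDepth, final Random r) {
    return mutateAt(g, 0, maxDepth, r);
  }

  private static N mutateAt(final N n, final int level, final int maxDepth,
      final Random r) {
    if (n.children.isEmpty() || (r.nextInt(4) == 0)) {
      // replace this node: the new subtree may use the remaining depth budget
      return grow(maxDepth - level, r);
    }
    // like Node.setChild, copy the nodes along the path instead of
    // modifying the original tree, then descend into one random child
    final List<N> cs = new ArrayList<>(n.children);
    final int i = r.nextInt(cs.size());
    cs.set(i, mutateAt(cs.get(i), level + 1, maxDepth, r));
    return new N(cs);
  }

  public static void main(final String[] args) {
    final Random r = new Random(3);
    final N g = grow(6, r);
    final N gnew = mutate(g, 6, r);
    // capping the fresh subtree by (maxDepth - level) keeps the depth bound
    if (depth(gnew) > 6) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```

Because a node at level d is replaced by a subtree of depth at most maxDepth - d, the offspring can never exceed the overall depth limit, which is the same budget computation TreeMutator performs with this.maxDepth and the path length.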

56.2.2.4 Binary Search Operations for Trees

Tree crossover operations as discussed in Section 31.3.5 on page 398 (but for strongly-typed
trees).
Listing 56.44: A binary search operation for trees.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations.trees.binary;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchOperations.trees.TreeOperation;
9 import org.goataa.impl.searchOperations.trees.TreePath;
10 import org.goataa.impl.searchSpaces.trees.Node;
11 import org.goataa.spec.IBinarySearchOperation;
12
13 /**
14 * A simple recombination operation for a given tree which implants a
15 * randomly chosen subtree from one parent into the other parent, thereby
16 * replacing a randomly picked node in the second parent. This operation
17 * basically proceeds according to the ideas discussed in
18 * Section 31.3.5 on page 398 with the extension that
19 * it also respects the type system of the strongly-typed GP system.
20 *
21 * @author Thomas Weise
22 */
23 public class TreeRecombination extends TreeOperation implements
24 IBinarySearchOperation<Node<?>> {
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** the internal path 1 */
29 private final TreePath<?> pa1;
30
31 /** the internal path 2 */
32 private final TreePath<?> pa2;
33
34 /**
35 * Create a new tree recombination operation
36 *
37 * @param md
38 * the maximum tree depth
39 */
40 @SuppressWarnings("unchecked")
41 public TreeRecombination(final int md) {
42 super(md);
43 this.pa1 = new TreePath();
44 this.pa2 = new TreePath();
45 }
46
47 /**
48 * This is the binary search operation. It takes two existing genotypes
49 * p1 and p2 (see Definition D4.2 on page 82) from the genome and produces
50 * one new element in the search space. There are two basic assumptions
51 * about this operator: 1) Its input elements are good because they have
52 * previously been selected. 2) It is somehow possible to combine these
53 * good traits and hence, to obtain a single individual which unites them
54 * and thus, has even better overall qualities than its parents. The
55 * original underlying idea of this operation is the
56 * "Building Block Hypothesis" (see
57 * Section 29.5.5 on page 346) for which, so far, not much
58 * evidence has been found. The hypothesis
59 * "Genetic Repair and Extraction" (see Section 29.5.6 on page 346)
60 * has been developed as an alternative to explain the positive aspects
61 * of binary search operations such as recombination.
62 *
63 * @param p1
64 * the first "parent" genotype
65 * @param p2
66 * the second "parent" genotype
67 * @param r
68 * the random number generator
69 * @return a new genotype
70 */
71 @SuppressWarnings("unchecked")
72 public Node<?> recombine(final Node<?> p1, final Node<?> p2,
73 final Random r) {
74 TreePath pt1, pt2;
75 int trials, i;
76 Node<?> e1, e2;
77
78 pt1 = this.pa1;
79 pt2 = this.pa2;
80
81 outer: for (trials = 100; trials > 0; trials--) {
82 pt1.randomPath(p1, r);
83 pt2.randomPath(p2, r);
84
85 i = pt1.size();
86 if (i <= 1) {
87 continue outer;
88 }
89
90 e2 = pt2.get(pt2.size() - 1);
91 i -= 2;
92 e1 = pt1.get(i);
93 if (((i + e2.getHeight() + 1) < this.maxDepth)
94 && e1.getType().getChildTypes(pt1.getChildIndex(i))
95 .containsType(e2.getType())) {
96 return pt1.replaceEnd(e2);
97 }
98 }
99
100 return (r.nextBoolean() ? p1 : p2);
101 }
102 }
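Stripped of the type checks, subtree crossover reduces to: copy the first parent, splice in a randomly chosen subtree of the second parent, and accept the offspring only if it respects the depth limit. The sketch below is illustrative and untyped; preorder indices replace the TreePath machinery, and it follows the fallback of Listing 56.44, returning a parent when no feasible offspring is found:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

/** A minimal, untyped sketch of subtree-exchange recombination. */
public final class SubtreeCrossoverDemo {

  /** a plain immutable n-ary node used only for this illustration */
  static final class N {
    final List<N> children;

    N(final List<N> cs) {
      this.children = cs;
    }

    N(final N... cs) {
      this(Arrays.asList(cs));
    }
  }

  /** the number of nodes in the tree */
  static int size(final N n) {
    int s = 1;
    for (final N c : n.children) {
      s += size(c);
    }
    return s;
  }

  /** the depth (number of levels) of the tree */
  static int depth(final N n) {
    int d = 0;
    for (final N c : n.children) {
      d = Math.max(d, depth(c));
    }
    return d + 1;
  }

  /** get the node with preorder index k (0 is the root) */
  static N get(final N t, final int k) {
    if (k == 0) {
      return t;
    }
    int idx = k - 1;
    for (final N c : t.children) {
      final int s = size(c);
      if (idx < s) {
        return get(c, idx);
      }
      idx -= s;
    }
    throw new IllegalArgumentException("index out of range");
  }

  /** a copy of t in which the node with preorder index k is replaced by donor */
  static N replace(final N t, final int k, final N donor) {
    if (k == 0) {
      return donor;
    }
    final List<N> cs = new ArrayList<>(t.children);
    int idx = k - 1;
    for (int i = 0; i < cs.size(); i++) {
      final int s = size(cs.get(i));
      if (idx < s) { // the target lies in this subtree: copy along the path
        cs.set(i, replace(cs.get(i), idx, donor));
        break;
      }
      idx -= s;
    }
    return new N(cs);
  }

  /** splice a random subtree of p2 into p1; keep only depth-feasible offspring */
  static N recombine(final N p1, final N p2, final int maxDepth, final Random r) {
    for (int trials = 100; trials > 0; trials--) {
      final N donor = get(p2, r.nextInt(size(p2)));
      final N child = replace(p1, r.nextInt(size(p1)), donor);
      if (depth(child) <= maxDepth) {
        return child;
      }
    }
    return r.nextBoolean() ? p1 : p2; // fall back to a parent
  }

  public static void main(final String[] args) {
    final N p1 = new N(new N(), new N()); // 3 nodes, depth 2
    final N p2 = new N(new N(new N()), new N()); // 4 nodes, depth 3
    final N c = recombine(p1, p2, 3, new Random(5));
    if (depth(c) > 3) {
      throw new AssertionError();
    }
    // replacing one leaf of p1 by p2 keeps the node counts consistent
    if (size(replace(p1, 1, p2)) != (size(p1) - 1 + size(p2))) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```

The goataa version additionally checks the type compatibility of the splice point via getChildTypes and containsType; this sketch only keeps the depth constraint and the parent fallback.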

56.2.3 Multiplexing Operators

Listing 56.45: A multiplexing nullary search operation.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations;
5
6 import java.util.Random;
7
8 import org.goataa.spec.INullarySearchOperation;
9
10 /**
11 * A simple class that allows us to choose randomly between multiple different
12 * creators (i.e., nullary search operations as defined in
13 * Definition D4.6 on page 83).
14 *
15 * @param <G>
16 * the search space
17 * @author Thomas Weise
18 */
19 public class MultiCreator<G> extends NullarySearchOperation<G> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** the internally available operations */
25 private final INullarySearchOperation<G>[] o;
26
27 /**
28 * Create a new multi-creator
29 *
30 * @param ops
31 * the search operations
32 */
33 public MultiCreator(final INullarySearchOperation<G>... ops) {
34 super();
35
36 this.o = ops;
37 }
38
39 /**
40 * This operation tries to create a genotype by randomly applying one of
41 * its sub-creators.
42 *
43 * @param r
44 * the random number generator
45 * @return a new genotype (see Definition D4.2 on page 82)
46 */
47 @Override
48 public final G create(final Random r) {
49 return this.o[r.nextInt(this.o.length)].create(r);
50 }
51
52 /**
53 * Get the name of the optimization module
54 *
55 * @param longVersion
56 * true if the long name should be returned, false if the short
57 * name should be returned
58 * @return the name of the optimization module
59 */
60 @Override
61 public String getName(final boolean longVersion) {
62 if (longVersion) {
63 return super.getName(true);
64 }
65 return "MC"; //$NON-NLS-1$
66 }
67
68 /**
69 * Get the full configuration which holds all the data necessary to
70 * describe this object.
71 *
72 * @param longVersion
73 * true if the long version should be returned, false if the
74 * short version should be returned
75 * @return the full configuration
76 */
77 @Override
78 public String getConfiguration(final boolean longVersion) {
79 StringBuilder sb;
80 int i;
81
82 sb = new StringBuilder();
83 for (i = 0; i < this.o.length; i++) {
84 if (i > 0) {
85 sb.append(',');
86 }
87 sb.append(this.o[i].toString(longVersion));
88 }
89
90 return sb.toString();
91 }
92 }

Listing 56.46: A multiplexing unary search operation.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IUnarySearchOperation;
9
10 /**
11 * A simple class that allows us to choose randomly between multiple different
12 * mutators, i.e., unary search operators (see
13 * Definition D4.6 on page 83).
14 *
15 * @param <G>
16 * the search space
17 * @author Thomas Weise
18 */
19 public class MultiMutator<G> extends UnarySearchOperation<G> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** the internally available operations */
25 private final IUnarySearchOperation<G>[] o;
26
27 /** the temporary array */
28 private final int[] tmp;
29
30 /** the permutation */
31 private final int[] perm;
32
33 /**
34 * Create a new multi-mutator
35 *
36 * @param ops
37 * the search operations
38 */
39 public MultiMutator(final IUnarySearchOperation<G>... ops) {
40 super();
41
42 int[] x;
43 int l;
44
45 this.o = ops;
46 l = ops.length;
47 this.tmp = new int[l];
48 this.perm = x = new int[l];
49 for (; (--l) >= 0;) {
50 x[l] = l;
51 }
52 }
53
54 /**
55 * This operation tries to mutate a genotype by randomly applying one of
56 * its sub-mutators. If a sub-mutator is not able to provide a new
57 * genotype, i.e., returns null or its input, then we try the next
58 * sub-mutator, and so on.
59 *
60 * @param g
61 * the existing genotype in the search space from which a
62 * slightly modified copy should be created
63 * @param r
64 * the random number generator
65 * @return a new genotype
66 */
67 @Override
68 public final G mutate(final G g, final Random r) {
69 final int[] a;
70 final IUnarySearchOperation<G>[] ops;
71 int i, c;
72 G res;
73
74 ops = this.o;
75 a = this.tmp;
76 c = a.length;
77 System.arraycopy(this.perm, 0, a, 0, c);
78
79 while (c > 0) {
80 i = r.nextInt(c);
81 res = ops[a[i]].mutate(g, r);
82 if ((res != null) && (res != g)) {
83 return res;
84 }
85 a[i] = a[--c];
86 }
87
88 return super.mutate(g, r);
89 }
90
91 /**
92 * Get the name of the optimization module
93 *
94 * @param longVersion
95 * true if the long name should be returned, false if the short
96 * name should be returned
97 * @return the name of the optimization module
98 */
99 @Override
100 public String getName(final boolean longVersion) {
101 if (longVersion) {
102 return super.getName(true);
103 }
104 return "MM"; //$NON-NLS-1$
105 }
106
107 /**
108 * Get the full configuration which holds all the data necessary to
109 * describe this object.
110 *
111 * @param longVersion
112 * true if the long version should be returned, false if the
113 * short version should be returned
114 * @return the full configuration
115 */
116 @Override
117 public String getConfiguration(final boolean longVersion) {
118 StringBuilder sb;
119 int i;
120
121 sb = new StringBuilder();
122 for (i = 0; i < this.o.length; i++) {
123 if (i > 0) {
124 sb.append(',');
125 }
126 sb.append(this.o[i].toString(longVersion));
127 }
128
129 return sb.toString();
130 }
131 }
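The selection-without-replacement loop in mutate is the essential trick of the multiplexing operators: the sub-operators are tried in random order, and an operator that fails (returns null or its own input) is removed from consideration before the next draw. A condensed, stand-alone sketch of this idea (java.util.function.UnaryOperator stands in for IUnarySearchOperation; all names are illustrative):

```java
import java.util.Random;
import java.util.function.UnaryOperator;

/** A condensed sketch of the multiplexing idea behind MultiMutator. */
public final class MultiMutatorDemo {

  /** Try the operators in random order; return the first result differing from g. */
  @SafeVarargs
  static <G> G mutate(final G g, final Random r, final UnaryOperator<G>... ops) {
    final int[] idx = new int[ops.length];
    for (int k = 0; k < idx.length; k++) {
      idx[k] = k;
    }
    int c = idx.length;
    while (c > 0) {
      final int i = r.nextInt(c); // draw one of the remaining operators
      final G res = ops[idx[i]].apply(g);
      if ((res != null) && (res != g)) { // identity check, as in the original
        return res;
      }
      idx[i] = idx[--c]; // remove the failed operator from consideration
    }
    return g; // all operators failed: fall back to the input
  }

  public static void main(final String[] args) {
    // one ineffective operator (returns its input) and one effective one:
    final String out = mutate("abc", new Random(42),
        s -> s, // always fails the identity check
        s -> s + "!"); // always produces a new genotype
    if (!"abc!".equals(out)) {
      throw new AssertionError(out);
    }
    System.out.println("ok");
  }
}
```

Whatever order the operators are drawn in, only the second one can produce a result that differs from its input, so the outcome is deterministic here; MultiRecombinator applies the same loop to binary operators, additionally retrying each one with swapped parents.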

Listing 56.47: A multiplexing binary search operation.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchOperations;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IBinarySearchOperation;
9
10 /**
11 * A simple class that allows us to choose randomly between multiple different
12 * recombination (binary) search operators (see
13 * Definition D4.6 on page 83) as given in
14 * Definition D28.14 on page 307.
15 *
16 * @param <G>
17 * the search space
18 * @author Thomas Weise
19 */
20 public class MultiRecombinator<G> extends BinarySearchOperation<G> {
21
22 /** a constant required by Java serialization */
23 private static final long serialVersionUID = 1;
24
25 /** the internally available operations */
26 private final IBinarySearchOperation<G>[] o;
27
28 /** the temporary array */
29 private final int[] tmp;
30
31 /** the permutation */
32 private final int[] perm;
33
34 /**
35 * Create a new multi-recombinator
36 *
37 * @param ops
38 * the search operations
39 */
40 public MultiRecombinator(final IBinarySearchOperation<G>... ops) {
41 super();
42
43 int[] x;
44 int l;
45
46 this.o = ops;
47 l = ops.length;
48 this.tmp = new int[l];
49 this.perm = x = new int[l];
50 for (; (--l) >= 0;) {
51 x[l] = l;
52 }
53 }
54
55 /**
56 * Choose a recombination operation randomly from the available
57 * operators.
58 *
59 * @param p1
60 * the first "parent" genotype
61 * @param p2
62 * the second "parent" genotype
63 * @param r
64 * the random number generator
65 * @return a new genotype
66 */
67 @Override
68 public final G recombine(final G p1, final G p2, final Random r) {
69 final int[] a;
70 final IBinarySearchOperation<G>[] ops;
71 int i, c;
72 G res;
73 IBinarySearchOperation<G> ox;
74
75 ops = this.o;
76 a = this.tmp;
77 c = a.length;
78 System.arraycopy(this.perm, 0, a, 0, c);
79
80 while (c > 0) {
81 i = r.nextInt(c);
82 ox = ops[a[i]];
83 res = ox.recombine(p1, p2, r);
84 if ((res == p1) || (res == p2)) {
85 res = ox.recombine(p2, p1, r);
86 }
87 if ((res != null) && (res != p1) && (res != p2)) {
88 return res;
89 }
90 a[i] = a[--c];
91 }
92
93 return super.recombine(p1, p2, r);
94 }
95
96 /**
97 * Get the name of the optimization module
98 *
99 * @param longVersion
100 * true if the long name should be returned, false if the short
101 * name should be returned
102 * @return the name of the optimization module
103 */
104 @Override
105 public String getName(final boolean longVersion) {
106 if (longVersion) {
107 return super.getName(true);
108 }
109 return "MX"; //$NON-NLS-1$
110 }
111
112 /**
113 * Get the full configuration which holds all the data necessary to
114 * describe this object.
115 *
116 * @param longVersion
117 * true if the long version should be returned, false if the
118 * short version should be returned
119 * @return the full configuration
120 */
121 @Override
122 public String getConfiguration(final boolean longVersion) {
123 StringBuilder sb;
124 int i;
125
126 sb = new StringBuilder();
127 for (i = 0; i < this.o.length; i++) {
128 if (i > 0) {
129 sb.append(',');
130 }
131 sb.append(this.o[i].toString(longVersion));
132 }
133
134 return sb.toString();
135 }
136 }

56.3 Data Structures for Special Search Spaces

56.3.1 Tree-based Search Spaces as used in Genetic Programming

Data structure to support the synthesis of strongly-typed trees as discussed in Section 31.3.

Listing 56.48: A base class for tree nodes.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchSpaces.trees;
5
6 import java.util.Arrays;
7
8 import org.goataa.impl.OptimizationModule;
9
10 /**
11 * A class which represents tree nodes. Trees are the search spaces of
12 * Standard Genetic Programming (SGP, see Section 31.3). The node class
13 * here could, however, also be used as problem space (in SGP, search and
14 * problem space are the same) for a different search space. Our node class
15 * here supports strongly-typed GP. In other words, we can define which
16 * type of node is allowed as child for which other node.
17 *
18 * @param <CT>
19 * the child node type
20 * @author Thomas Weise
21 */
22 public class Node<CT extends Node<CT>> extends OptimizationModule
23 implements Cloneable {
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /** no children */
28 public static final Node<?>[] EMPTY_CHILDREN = new Node[0];
29
30 /** the children */
31 Node<?>[] children;
32
33 /** the node type record */
34 private final NodeType<? extends CT, CT> type;
35
36 /** the tree weight */
37 private int weight;
38
39 /** the maximum depth of all children */
40 private int height;
41
42 /**
43 * Create a node with the given children
44 *
45 * @param pchildren
46 * the child nodes
47 * @param in
48 * the node type record
49 */
50 @SuppressWarnings("unchecked")
51 public Node(final Node<?>[] pchildren,
52 final NodeType<? extends CT, CT> in) {
53 super();
54
55 this.children = (((pchildren != null) && (pchildren.length > 0)) ? pchildren
56 : ((CT[]) EMPTY_CHILDREN));
57
58 this.type = in;
59 this.computeParams();
60 }
61
62 /**
63 * Compute the fixed parameters of this node. This method is called
64 * whenever a reproduced copy of the node is created or when a new node
65 * is created with the constructor.
66 */
67 protected void computeParams() {
68 int i, w, h;
69 Node<?>[] x;
70 Node<?> v;
71
72 x = this.children;
73
74 w = 1;
75 h = 0;
76 for (i = x.length; (--i) >= 0;) {
77 v = x[i];
78 w += v.weight;
79 h = Math.max(h, v.height);
80 }
81 this.weight = w;
82 this.height = (h + 1);
83 }
84
85 /**
86 * Get the full configuration which holds all the data necessary to
87 * describe this object.
88 *
89 * @param longVersion
90 * true if the long version should be returned, false if the
91 * short version should be returned
92 * @return the full configuration
93 */
94 @Override
95 public final String getConfiguration(final boolean longVersion) {
96 StringBuilder sb;
97
98 sb = new StringBuilder();
99 this.fillInText(sb);
100 return sb.toString();
101 }
102
103 /**
104 * Get the number of children
105 *
106 * @return the number of children
107 */
108 public final int size() {
109 return this.children.length;
110 }
111
112 /**
113 * Get a specific child
114 *
115 * @param idx
116 * the child index
117 * @return the child at that index
118 */
119 @SuppressWarnings("unchecked")
120 public final CT get(final int idx) {
121 return ((CT) (this.children[idx]));
122 }
123
124 /**
125 * Set the child at the given index and create a new tree with the child
126 * at that index. As usual, we follow the paradigm that genotypes in the
127 * search must not be modified. We just can create a modified copy of a
128 * genotype. Hence, setting the child of a specific node object does not
129 * change the node object itself, but results in a copy of the node
130 * object where the child is set at the specific position.
131 *
132 * @param child
133 * the child to be stored at the given index
134 * @param idx
135 * the index where to insert the child
136 * @return the new tree node with that child
137 */
138 public final CT setChild(final CT child, final int idx) {
139 CT x;
140
141 x = this.clone();
142 x.children = x.children.clone();
143 x.children[idx] = child;
144 x.computeParams();
145 return x;
146 }
147
148 /**
149 * Clone this node
150 *
151 * @return a copy of this node
152 */
153 @SuppressWarnings("unchecked")
154 @Override
155 protected CT clone() {
156 try {
157 return ((CT) (super.clone()));
158 } catch (Throwable t) {
159 return null;
160 }
161 }
162
163 /**
164 * Fill in the text associated with this node
165 *
166 * @param sb
167 * the string builder
168 */
169 public void fillInText(final StringBuilder sb) {
170 sb.append(super.getConfiguration(false));
171 }
172
173 /**
174 * Get the node type record associated with this node
175 *
176 * @return the node type record associated with this node
177 */
178 public final NodeType<? extends CT, CT> getType() {
179 return this.type;
180 }
181
182 /**
183 * Get the string representation of this object, i.e., the name and
184 * configuration.
185 *
186 * @param longVersion
187 * true if the long version should be returned, false if the
188 * short version should be returned
189 * @return the string version of this object
190 */
191 @Override
192 public String toString(final boolean longVersion) {
193 return this.getConfiguration(longVersion);
194 }
195
196 /**
197 * Obtain the total number of nodes in this tree: 1 plus the sum of the node counts of all its sub-trees
198 *
199 * @return the total number of nodes in this tree (including this node)
200 */
201 public final int getWeight() {
202 return this.weight;
203 }
204
205 /**
206 * Get the maximum depth of all child nodes
207 *
208 * @return the maximum depth of all child nodes
209 */
210 public final int getHeight() {
211 return this.height;
212 }
213
214 /**
215 * Is this a terminal node?
216 *
217 * @return true if this node is terminal, i.e., a leaf, false otherwise
218 */
219 public final boolean isTerminal() {
220 return (this.weight <= 1);
221 }
222
223 /**
224 * Compare with another object
225 *
226 * @param o
227 * the other object
228 * @return true if the objects are equal
229 */
230 @Override
231 @SuppressWarnings("unchecked")
232 public boolean equals(final Object o) {
233 if (o == null) {
234 return false;
235 }
236 if (o == this) {
237 return true;
238 }
239 if (o.getClass() == this.getClass()) {
240 return Arrays.equals(this.children, ((Node) o).children);
241 }
242 return false;
243 }
244 }
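
The bookkeeping in computeParams() boils down to two recurrences: the weight of a node is 1 plus the sum of its children's weights, and its height is 1 plus the maximum child height. The following standalone sketch demonstrates just these two rules (the class TreeStats is purely illustrative and not part of the org.goataa package):

```java
/**
 * Standalone sketch of the recurrences in Node.computeParams():
 * weight = 1 + sum of child weights, height = 1 + max of child heights.
 * (TreeStats is an illustrative class, not part of the org.goataa package.)
 */
final class TreeStats {
  final int weight; // total number of nodes in this sub-tree, including this one
  final int height; // longest root-to-leaf path, counted in nodes

  TreeStats(final TreeStats... children) {
    int w = 1, h = 0;
    for (final TreeStats c : children) {
      w += c.weight;
      h = Math.max(h, c.height);
    }
    this.weight = w;
    this.height = h + 1;
  }
}
```

A leaf thus has weight 1 and height 1, which is exactly why the isTerminal() test in the listing above can check weight <= 1.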

Listing 56.49: A type description for tree nodes.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchSpaces.trees;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9
10 /**
11 * The node type of a node has two main functions: 1) It describes which
12 * types of children are allowed to occur at which position (if any), and
13 * 2) it is used to instantiate nodes of the type. Notice that 1) is the
14 * basic requirement for strongly-typed GP: We can specify which type of
15 * node can occur as child of which other node and thus, build a complex
16 * type system and pre-define node structures precisely. Point 2) somehow
17 * reproduces the capabilities of Java’s reflection system with the
18 * extension of allowing us to perform some additional, randomized actions.
19 * As a matter of fact, with the class ReflectionNodeType, we defer the node
20 * creation to Java’s reflection mechanisms in cases where no randomized
21 * instantiation actions are required.
22 *
23 * @param <NT>
24 * the specific node type
25 * @param <CT>
26 * the child node type (i.e., the general type where NT is an
27 * instance of)
28 * @author Thomas Weise
29 */
30 public class NodeType<NT extends Node<CT>, CT extends Node<CT>> extends
31 OptimizationModule {
32 /** a constant required by Java serialization */
33 private static final long serialVersionUID = 1;
34
35 /** the key counter */
36 static int keyCounter = 0;
37
38 /** no children */
39 public static final NodeTypeSet<?>[] EMPTY_CHILDREN = new NodeTypeSet[0];
40
41 /** a list of possible node types for each child */
42 private final NodeTypeSet<CT>[] chTypes;
43
44 /** a search key */
45 final int key;
46
47 /**
48 * Create a new node type record
49 *
50 * @param ch
51 * the types of the possible chTypes
52 */
53 @SuppressWarnings("unchecked")
54 public NodeType(final NodeTypeSet<CT>[] ch) {
55 super();
56
57 this.chTypes = (((ch != null) && (ch.length) > 0) ? ((NodeTypeSet[]) ch)
58 : (EMPTY_CHILDREN));
59 this.key = (keyCounter++);
60 }
61
62 /**
63 * Get the number of chTypes of this node type
64 *
65 * @return the number of chTypes of this node type
66 */
67 public final int getChildCount() {
68 return this.chTypes.length;
69 }
70
71 /**
72 * Get the possible types for the chTypes at the specific index.
73 *
74 * @param index
75 * the child index
76 * @return the possible types of that child
77 */
78 public final NodeTypeSet<CT> getChildTypes(final int index) {
79 return this.chTypes[index];
80 }
81
82 /**
83 * Instantiate a node
84 *
85 * @param children
86 * a given set of children
87 * @param r
88 * the randomizer
89 * @return the new node
90 */
91 public NT instantiate(final Node<?>[] children, final Random r) {
92 throw new UnsupportedOperationException();
93 }
94
95 /**
96 * Create a new, slightly modified copy of the given node. This may come
97 * in handy if a node represents a real constant or something. We would
98 * then be able to randomly modify it, which is better than replacing it
99 * with a completely random node. If no useful way to modify the node in
100 * a suitable way exists, just return the original.
101 *
102 * @param node
103 * the input node
104 * @param r
105 * the randomizer
106 * @return the modified node or the original value, if no modification is
107 * possible
108 */
109 public NT mutate(final NT node, final Random r) {
110 return node;
111 }
112
113 /**
114 * Does this node type describe a terminal node?
115 *
116 * @return true if this node type describes a terminal node, false
117 * otherwise
118 */
119 public final boolean isTerminal() {
120 return (this.chTypes.length <= 0);
121 }
122
123 }

Listing 56.50: A set of tree node types.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.searchSpaces.trees;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9
10 /**
11 * A set of node type records. For each child position of a genotype, we
12 * need to specify such a set of possible child types. This set provides an
13 * easy interface to access the possible types, the possible non-leaf
14 * types, and the possible leaf-types stored in it. Also, we can check very
15 * efficiently whether a node type is in a node type set (in O(1)), which is a
16 * necessary operation of all tree mutation and crossover operations of
17 * strongly-typed Genetic Programming.
18 *
19 * @param <CT>
20 * the node type
21 * @author Thomas Weise
22 */
23 public class NodeTypeSet<CT extends Node<CT>> extends OptimizationModule {
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /** no children */
28 public static final NodeType<?, ?>[] EMPTY_CHILDREN = new NodeType[0];
29
30 /** the list */
31 private NodeType<CT, CT>[] lst;
32
33 /** the number of node type records */
34 private int cnt;
35
36 /** the terminal nodes */
37 private NodeType<CT, CT>[] terminals;
38
39 /** the number of terminal node type records */
40 private int terminalCnt;
41
42 /** the non-terminal nodes */
43 private NodeType<CT, CT>[] nonTerminals;
44
45 /** the number of non terminal node type records */
46 private int nonTerminalCnt;
47
48 /** the set of available node types */
49 private boolean[] available;
50
51 /**
52 * Create a new node type set
53 */
54 @SuppressWarnings("unchecked")
55 public NodeTypeSet() {
56 super();
57 this.lst = new NodeType[16];
58 this.terminals = new NodeType[16];
59 this.nonTerminals = new NodeType[16];
60 this.available = new boolean[Math.max(16, NodeType.keyCounter)];
61 }
62
63 /**
64 * Get the number of entries
65 *
66 * @return the number of entries
67 */
68 public final int size() {
69 return this.cnt;
70 }
71
72 /**
73 * Get the node type at the specified index
74 *
75 * @param index
76 * the index into the information set
77 * @return the node type at the specified index
78 */
79 public final NodeType<CT, CT> get(final int index) {
80 return this.lst[index];
81 }
82
83 /**
84 * Get the number of terminal node types
85 *
86 * @return the number of terminal node types
87 */
88 public final int terminalCount() {
89 return this.terminalCnt;
90 }
91
92 /**
93 * Get the terminal node type at the specified index
94 *
95 * @param index
96 * the index into the information set
97 * @return the node type at the specified index
98 */
99 public final NodeType<CT, CT> getTerminal(final int index) {
100 return this.terminals[index];
101 }
102
103 /**
104 * Get the number of non-terminal node types
105 *
106 * @return the number of non-terminal node types
107 */
108 public final int nonTerminalCount() {
109 return this.nonTerminalCnt;
110 }
111
112 /**
113 * Get the non-terminal node type at the specified index
114 *
115 * @param index
116 * the index into the information set
117 * @return the node type at the specified index
118 */
119 public final NodeType<CT, CT> getNonTerminal(final int index) {
120 return this.nonTerminals[index];
121 }
122
123 /**
124 * Add a new entry to the node type set
125 *
126 * @param type
127 * the node type to be added
128 */
129 @SuppressWarnings("unchecked")
130 public final void add(final NodeType<? extends CT, CT> type) {
131 NodeType<CT, CT>[] l;
132 int i;
133 boolean[] av;
134
135 if (type != null) {
136
137 l = this.lst;
138 i = this.cnt;
139 if (i >= l.length) {
140 l = new NodeType[i << 1];
141 System.arraycopy(this.lst, 0, l, 0, i);
142 this.lst = l;
143 }
144 l[i++] = ((NodeType) type);
145 this.cnt = i;
146
147 if (type.isTerminal()) {
148
149 l = this.terminals;
150 i = this.terminalCnt;
151 if (i >= l.length) {
152 l = new NodeType[i << 1];
153 System.arraycopy(this.terminals, 0, l, 0, i);
154 this.terminals = l;
155 }
156 l[i++] = ((NodeType) type);
157 this.terminalCnt = i;
158
159 } else {
160
161 l = this.nonTerminals;
162 i = this.nonTerminalCnt;
163 if (i >= l.length) {
164 l = new NodeType[i << 1];
165 System.arraycopy(this.nonTerminals, 0, l, 0, i);
166 this.nonTerminals = l;
167 }
168 l[i++] = ((NodeType) type);
169 this.nonTerminalCnt = i;
170
171 }
172
173 av = this.available;
174 i = type.key;
175 if (i >= av.length) {
176 av = new boolean[i << 1];
177 System.arraycopy(this.available, 0, av, 0, this.available.length);
178 this.available = av;
179 }
180 av[i] = true;
181 }
182 }
183
184 /**
185 * Obtain a random node type
186 *
187 * @param r
188 * the random number generator
189 * @return the node type
190 */
191 public final NodeType<CT, CT> randomType(final Random r) {
192 final int i;
193
194 i = this.cnt;
195 if (i <= 0) {
196 return null;
197 }
198 return this.lst[r.nextInt(i)];
199 }
200
201 /**
202 * Obtain a random terminal node type
203 *
204 * @param r
205 * the random number generator
206 * @return the terminal node type
207 */
208 public final NodeType<CT, CT> randomTerminalType(final Random r) {
209 final int i;
210
211 i = this.terminalCnt;
212 if (i <= 0) {
213 return null;
214 }
215 return this.terminals[r.nextInt(i)];
216 }
217
218 /**
219 * Obtain a random non-terminal node type
220 *
221 * @param r
222 * the random number generator
223 * @return the non-terminal node type
224 */
225 public final NodeType<CT, CT> randomNonTerminalType(final Random r) {
226 final int i;
227
228 i = this.nonTerminalCnt;
229 if (i <= 0) {
230 return null;
231 }
232 return this.nonTerminals[r.nextInt(i)];
233 }
234
235 /**
236 * Check whether the given node type is contained in this type set or not
237 *
238 * @param t
239 * the node type
240 * @return true if the type is contained, false otherwise
241 */
242 public final boolean containsType(final NodeType<?, ?> t) {
243 final int i;
244 final boolean[] av;
245
246 i = t.key;
247 av = this.available;
248
249 return ((i < av.length) && av[i]);
250 }
251 }
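
The O(1) membership test mentioned in the class comment rests on a simple idea: every node type carries a unique integer key, and the set keeps a boolean array indexed by key that grows geometrically on demand. A minimal standalone sketch of just this mechanism (KeyedSet is an illustrative class of ours, not part of the package):

```java
/**
 * Standalone sketch of the O(1) membership test in NodeTypeSet: every type
 * has a unique int key; membership is a boolean-array lookup.
 * (KeyedSet is an illustrative class, not part of the org.goataa package.)
 */
final class KeyedSet {
  private boolean[] available = new boolean[16];

  /** mark the type with the given key as contained, growing the array if needed */
  void add(final int key) {
    if (key >= this.available.length) {
      // geometric growth, mirroring NodeTypeSet.add
      final boolean[] av = new boolean[key << 1];
      System.arraycopy(this.available, 0, av, 0, this.available.length);
      this.available = av;
    }
    this.available[key] = true;
  }

  /** constant-time membership test, mirroring NodeTypeSet.containsType */
  boolean contains(final int key) {
    return (key < this.available.length) && this.available[key];
  }
}
```

The scheme trades a little memory for constant-time lookups, which pay off because every mutation and crossover step of strongly-typed GP performs such a test.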

56.4 Genotype-Phenotype Mappings


In this package, we provide a couple of basic genotype-phenotype mappings according to
the interface defined in Listing 55.7 on page 711.

Listing 56.51: The identity mapping for cases where G = X.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.gpms;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IGPM;
9
10 /**
11 * This class provides an identity genotype-phenotype mapping, the simplest
12 * possible variant of genotype-phenotype mappings (see
13 * Section 4.3 on page 85). This mapping can only
14 * be applied if the search space (see Section 4.1 on page 81) and
15 * the problem space (see Section 2.1 on page 43) are identical.
16 *
17 * @param <S>
18 * the type which is both, the search and the problem space
19 * @author Thomas Weise
20 */
21 public final class IdentityMapping<S> extends GPM<S, S> {
22
23 /** a constant required by Java serialization */
24 private static final long serialVersionUID = 1;
25
26 /** the globally shared default instance of the identity mapping */
27 public static final IGPM<?, ?> IDENTITY_MAPPING = new IdentityMapping<Object>();
28
29 /** Create a new instance of the identity mapping class */
30 protected IdentityMapping() {
31 super();
32 }
33
34 /**
35 * Perform the identity mapping: the genotype (Definition D4.2 on page 82)
36 * equals the phenotype (Definition D2.2 on page 43).
37 *
38 * @param g
39 * the genotype
40 * @param r
41 * the randomizer
42 * @return the phenotype which equals genotype g
43 */
44 @Override
45 public final S gpm(final S g, final Random r) {
46 return g;
47 }
48
49 /**
50 * Get the name of the optimization module
51 *
52 * @param longVersion
53 * true if the long name should be returned, false if the short
54 * name should be returned
55 * @return the name of the optimization module
56 */
57 @Override
58 public String getName(final boolean longVersion) {
59 if (longVersion) {
60 return super.getName(true);
61 }
62 return "I"; //$NON-NLS-1$
63 }
64 }

Listing 56.52: The Random Keys genotype-phenotype mapping introduced in Algorithm 29.8.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.gpms;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 /**
10 * With this class, we implement the Random Keys encoding
11 * genotype-phenotype mapping as introduced in
12 * Section 29.7 on page 349. This mapping which is specified in
13 * Algorithm 29.8 on page 349 translates an n-dimensional vector of
14 * real numbers (i.e., doubles) to a permutation of the first n natural
15 * numbers (i.e., an array of int containing the values 0, 1, 2, ..., n-1).
16 * This mapping can only be applied if the problem space consists of permutations of the first n natural numbers.
17 *
18 * @author Thomas Weise
19 */
20 public final class RandomKeys extends GPM<double[], int[]> {
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** a temporary double array */
25 private transient double[] temp;
26
27 /** Instantiate */
28 public RandomKeys() {
29 super();
30 }
31
32 /**
33 * Perform Algorithm 29.8 on page 349, i.e., translate a vector of n
34 * real numbers into a permutation of the first n natural numbers.
35 *
36 * @param g
37 * the genotype (Definition D4.2 on page 82)
38 * @param r
39 * the randomizer
40 * @return the phenotype (see Definition D2.2 on page 43)
41 * corresponding to the genotype g
42 */
43 @Override
44 public final int[] gpm(final double[] g, final Random r) {
45 int[] x;
46 double[] s;
47 final int l;
48 int i, j;
49
50 l = g.length;
51 x = new int[g.length];
52
53 // We cannot sort g directly, since this would destroy it. Hence, we
54 // use an internal temporary array.
55 s = this.temp;
56 if ((s == null) || (s.length < l)) {
57 this.temp = s = new double[l];
58 }
59
60 // in each step, put one element from g into the right position in s
61 // and update the phenotype accordingly
62 for (i = 0; i < l; i++) {
63 j = Arrays.binarySearch(s, 0, i, g[i]);
64 if (j < 0) {
65 j = ((-j) - 1);
66 }
67 System.arraycopy(s, j, s, j + 1, i - j);
68 s[j] = g[i];
69 System.arraycopy(x, j, x, j + 1, i - j);
70 x[j] = i;
71 }
72
73 return x;
74 }
75
76 }
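
The effect of the mapping is easy to verify by hand: the position of each index in the output permutation is given by the rank of its key. The following self-contained sketch (RandomKeysDemo is our illustrative name, not a framework class) computes the same permutation as the insertion-based gpm above for distinct keys:

```java
import java.util.Arrays;
import java.util.Comparator;

/**
 * Standalone sketch of the Random Keys mapping (Algorithm 29.8): sort the
 * indices 0..n-1 by their key values. (RandomKeysDemo is illustrative and
 * not part of the org.goataa package; for distinct keys it yields the same
 * permutation as the insertion-based gpm in the listing above.)
 */
public final class RandomKeysDemo {

  static int[] randomKeys(final double[] g) {
    final Integer[] idx = new Integer[g.length];
    for (int i = 0; i < idx.length; i++) {
      idx[i] = Integer.valueOf(i);
    }
    // the index of the j-th smallest key becomes element j of the permutation
    Arrays.sort(idx, Comparator.comparingDouble(i -> g[i]));
    final int[] x = new int[g.length];
    for (int i = 0; i < x.length; i++) {
      x[i] = idx[i].intValue();
    }
    return x;
  }

  public static void main(final String[] args) {
    // the smallest key 0.1 sits at index 1, so the permutation starts with 1
    System.out.println(Arrays.toString(randomKeys(new double[] { 0.5, 0.1, 0.9 })));
  }
}
```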

56.5 Termination Criteria


Termination criteria, which obey the definition provided in Listing 55.9 on page 712,
determine when an optimization algorithm should terminate. In this package, we provide
some simple termination criteria.

Listing 56.53: The step-limit termination criterion.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.termination;
5
6 /**
7 * The step limit termination criterion (see Point 2 in
8 * Section 6.3.3 on page 105) becomes true after a maximum number
9 * of steps has been exhausted. The steps are counted in terms of
10 * invocations of the {@link #terminationCriterion()} method.
11 *
12 * @author Thomas Weise
13 */
14 public final class StepLimit extends TerminationCriterion {
15
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /** the step limit */
20 private final int maxSteps;
21
22 /** the number of remaining steps */
23 private int remaining;
24
25 /**
26 * Create a new step limit termination criterion
27 *
28 * @param steps
29 * the number of steps
30 */
31 public StepLimit(final int steps) {
32 this.maxSteps = steps;
33 }
34
35 /**
36 * This method should be invoked after each search step, iteration, or
37 * objective function evaluation. It returns true once it has been
38 * called the number of times specified by the parameter "steps" passed
39 * to the constructor, and on every call thereafter. For example, if "1" is
40 * specified in the constructor, the first call of this method will
41 * return true. For "2", the second call will be true while the first
42 * returns false.
43 *
44 * @return true after the specified steps are exhausted, false otherwise
45 */
46 @Override
47 public final boolean terminationCriterion() {
48 return (--this.remaining) <= 0;
49 }
50
51 /** Reset the termination criterion to make it usable more than once */
52 @Override
53 public void reset() {
54 this.remaining = this.maxSteps;
55 }
56
57 /**
58 * Get the full configuration which holds all the data necessary to
59 * describe this object.
60 *
61 * @param longVersion
62 * true if the long version should be returned, false if the
63 * short version should be returned
64 * @return the full configuration
65 */
66 @Override
67 public String getConfiguration(final boolean longVersion) {
68 if (longVersion) {
69 return "limit=" + String.valueOf(this.maxSteps); //$NON-NLS-1$
70 }
71 return String.valueOf(this.maxSteps);
72 }
73
74 /**
75 * Get the name of the optimization module
76 *
77 * @param longVersion
78 * true if the long name should be returned, false if the short
79 * name should be returned
80 * @return the name of the optimization module
81 */
82 @Override
83 public String getName(final boolean longVersion) {
84 if (longVersion) {
85 return super.getName(true);
86 }
87 return "sl"; //$NON-NLS-1$
88 }
89 }
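
Note that the countdown only works after reset() has initialized the remaining-steps counter. The counting contract itself is tiny and can be checked in isolation (StepCounter is an illustrative stand-in of ours, not the framework class):

```java
/**
 * Standalone sketch of the StepLimit countdown contract: the criterion
 * becomes (and stays) true on the steps-th invocation. (StepCounter is an
 * illustrative stand-in, not part of the org.goataa package.)
 */
final class StepCounter {
  private int remaining;

  StepCounter(final int steps) {
    this.remaining = steps; // what StepLimit.reset() does with maxSteps
  }

  /** invoke once per search step; true once the budget is exhausted */
  boolean shouldTerminate() {
    return (--this.remaining) <= 0;
  }
}
```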

56.6 Comparator
This package contains implementations of the individual comparator interface for
multi-objective optimization as introduced in Section 3.5 on page 75.

Listing 56.54: A comparator realizing the lexicographic relationship.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.comparators;
5
6 import org.goataa.impl.OptimizationModule;
7 import org.goataa.impl.utils.MOIndividual;
8 import org.goataa.spec.IIndividualComparator;
9
10 /**
11 * Perform an individual comparison according to the lexicographic
12 * relationship Section 3.3.2 on page 59.
13 *
14 * @author Thomas Weise
15 */
16 public class Lexicographic extends OptimizationModule implements
17 IIndividualComparator {
18 /** a constant required by Java serialization */
19 private static final long serialVersionUID = 1;
20
21 /** the lexicographic comparator */
22 public static final IIndividualComparator LEXICOGRAPHIC_COMPARATOR = new
Lexicographic();
23
24 /** instantiate a lexicographic comparator */
25 protected Lexicographic() {
26 super();
27 }
28
29 /**
30 * Compare two individuals with each other according to the lexicographic
31 * relationship defined in Section 3.3.2 on page 59
32 * based on their objective values.
33 *
34 * @param a
35 * the first individual
36 * @param b
37 * the second individual
38 * @return -1 if a dominates b, 1 if b dominates a, 0 if neither of
39 * them is better
40 */
41 public int compare(final MOIndividual<?, ?> a, final MOIndividual<?, ?> b) {
42 double[] x, y;
43 int i, cr;
44
45 x = a.f;
46 y = b.f;
47 for (i = 0; i < x.length; i++) {
48 cr = Double.compare(x[i], y[i]);
49 if (cr != 0) {
50 return cr;
51 }
52 }
53
54 return 0;
55 }
56 }
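
The lexicographic comparison simply scans the objective vectors left to right and decides at the first position where they differ. Stripped of the framework types, the core looks as follows (LexDemo is our illustrative name, not a framework class):

```java
/**
 * Standalone sketch of the lexicographic objective comparison: decide at
 * the first differing position; a negative result means a is better under
 * minimization. (LexDemo is illustrative, not part of the org.goataa package.)
 */
final class LexDemo {
  static int lexCompare(final double[] a, final double[] b) {
    for (int i = 0; i < a.length; i++) {
      final int cr = Double.compare(a[i], b[i]);
      if (cr != 0) {
        return cr; // the first difference decides
      }
    }
    return 0; // all objective values are equal
  }
}
```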

Listing 56.55: A comparator realizing the Pareto dominance relationship.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.comparators;
5
6 import org.goataa.impl.OptimizationModule;
7 import org.goataa.impl.utils.MOIndividual;
8 import org.goataa.spec.IIndividualComparator;
9
10 /**
11 * Perform an individual comparison according to the Pareto dominance
12 * relationship Section 3.3.5 on page 65.
13 *
14 * @author Thomas Weise
15 */
16 public class Pareto extends OptimizationModule implements
17 IIndividualComparator {
18 /** a constant required by Java serialization */
19 private static final long serialVersionUID = 1;
20
21 /** the pareto comparator */
22 public static final IIndividualComparator PARETO_COMPARATOR = new Pareto();
23
24 /** instantiate a Pareto comparator */
25 protected Pareto() {
26 super();
27 }
28
29 /**
30 * Compare two individuals with each other according to the dominance
31 * relationship defined in Section 3.3.5 on page 65 based on
32 * their objective values.
33 *
34 * @param a
35 * the first individual
36 * @param b
37 * the second individual
38 * @return -1 if a dominates b, 1 if b dominates a, 0 if neither of
39 * them is better
40 */
41 public int compare(final MOIndividual<?, ?> a, final MOIndividual<?, ?> b) {
42 double[] x, y;
43 int i, cr, res;
44
45 x = a.f;
46 y = b.f;
47 res = 0;
48
49 // check every objective value
50 for (i = x.length; (--i) >= 0;) {
51
52 // cr<0: x[i]<y[i]; cr>0: x[i]>y[i], cr=0: x[i]=y[i]
53 cr = Double.compare(x[i], y[i]);
54
55 if (cr > 0) {
56 // x[i]>y[i]
57 if (res < 0) {
58 // we encountered at least one j:x[j]<y[j], hence result is
59 // undecided
60 return 0;
61 }
62 // assume that y dominates x (i.e., b dominates a)
63 res = 1;
64 } else {
65 if (cr < 0) {
66 // x[i]<y[i]
67 if (res > 0) {
68 // we encountered at least one j:x[j]>y[j], hence result is
69 // undecided
70 return 0;
71 }
72 // assume that x dominates y (i.e., a dominates b)
73 res = -1;
74 }
75 }
76 }
77
78 return res;
79 }
80 }
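
Pareto dominance, in contrast, only yields a decision if one vector is at least as good in every objective and strictly better in at least one; any trade-off leaves the pair incomparable. The same decision logic as above, as a standalone sketch (ParetoDemo is our illustrative name, not a framework class):

```java
/**
 * Standalone sketch of the Pareto dominance check: -1 if a dominates b,
 * 1 if b dominates a, 0 if the two are incomparable or equal, assuming
 * minimization. (ParetoDemo is illustrative, not part of the package.)
 */
final class ParetoDemo {
  static int paretoCompare(final double[] a, final double[] b) {
    int res = 0;
    for (int i = 0; i < a.length; i++) {
      final int cr = Double.compare(a[i], b[i]);
      if (cr < 0) {
        if (res > 0) {
          return 0; // a better here, b better elsewhere: incomparable
        }
        res = -1; // so far, a dominates
      } else if (cr > 0) {
        if (res < 0) {
          return 0; // b better here, a better elsewhere: incomparable
        }
        res = 1; // so far, b dominates
      }
    }
    return res;
  }
}
```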

56.7 Utility Classes, Constants, and Routines

In this package, we provide some utility classes and constants.

Listing 56.56: The individual record.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import org.goataa.impl.OptimizationModule;
7
8 /**
9 * An individual record, as defined in Definition D4.18 on page 90. Such a
10 * record always holds one genotype from the search space and the
11 * corresponding phenotype from the problem space. In this implementation,
12 * we also store a fitness value v which corresponds to the objective value
13 * in a single-objective problem. This way, we do not need to evaluate the
14 * objective function more than once per individual. This makes sense since
15 * it would (in most cases) yield the same result anyway and only waste CPU
16 * time.
17 *
18 * @param <G>
19 * the search space (genome, Section 4.1 on page 81)
20 * @param <X>
21 * the problem space (phenome, Section 2.1 on page 43)
22 * @author Thomas Weise
23 */
24 public class Individual<G, X> extends OptimizationModule {
25
26 /** a constant required by Java serialization */
27 private static final long serialVersionUID = 1;
28
29 /** the genotype as defined in Definition D4.2 on page 82 */
30 public G g;
31
32 /**
33 * the corresponding phenotype as defined in
34 * Definition D2.2 on page 43
35 */
36 public X x;
37
38 /**
39 * the fitness value, as defined in Definition D5.1 on page 94 which
40 * corresponds to the objective value in single-objective optimization
41 * problems
42 */
43 public double v;
44
45 /** The constructor creates a new individual record */
46 public Individual() {
47 super();
48 // initialize the record with the worst fitness
49 this.v = Constants.WORST_FITNESS;
50 }
51
52 /**
53 * Copy the values of another individual record to this record. This
54 * method saves us from excessively creating new individual records.
55 *
56 * @param to
57 * the individual to copy from
58 */
59 public void assign(final Individual<G, X> to) {
60 this.g = to.g;
61 this.x = to.x;
62 this.v = to.v;
63 }
64
65 /**
66 * Get the full configuration which holds all the data necessary to
67 * describe this individual record, i.e., the genotype, phenotype, and
68 * fitness.
69 *
70 * @param longVersion
71 * true if the long version should be returned, false if the
72 * short version should be returned
73 * @return the full configuration
74 */
75 @Override
76 public String getConfiguration(final boolean longVersion) {
77 StringBuilder sb;
78
79 sb = new StringBuilder();
80 if (longVersion) {
81 sb.append("fitness"); //$NON-NLS-1$
82 } else {
83 sb.append('v');
84 }
85 sb.append('=');
86 sb.append(this.v);
87
88 if (longVersion) {
89 sb.append(", genotype="); //$NON-NLS-1$
90 } else {
91 sb.append(", g=");} //$NON-NLS-1$
92 TextUtils.toStringBuilder(this.g, sb);
93
94 if (longVersion) {
95 sb.append(", phenotype="); //$NON-NLS-1$
96 } else {
97 sb.append(", x=");} //$NON-NLS-1$
98 TextUtils.toStringBuilder(this.x, sb);
99
100 return sb.toString();
101 }
102
103 /**
104 * Get the name of the individual record
105 *
106 * @param longVersion
107 * true if the long name should be returned, false if the short
108 * name should be returned
109 * @return the name of the optimization module
110 */
111 @Override
112 public String getName(final boolean longVersion) {
113 if (longVersion) {
114 return super.getName(true);
115 }
116 return "ind"; //$NON-NLS-1$
117 }
118 }

Listing 56.57: The individual record for multi-objective optimization.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.spec.IObjectiveFunction;
10
11 /**
12 * An individual record for multi-objective optimization as discussed in
13 * Section 3.3 on page 55. We just extend the basic
14 * individual record with the capability to store multiple objective
15 * values.
16 *
17 * @param <G>
18 * the search space (genome, Section 4.1 on page 81)
19 * @param <X>
20 * the problem space (phenome, Section 2.1 on page 43)
21 * @author Thomas Weise
22 */
23 public class MOIndividual<G, X> extends Individual<G, X> {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** the objective values */
29 public final double[] f;
30
31 /**
32 * Create a multi-objective individual
33 *
34 * @param fc
35 * the number of objective functions
36 */
37 public MOIndividual(final int fc) {
38 super();
39 this.f = new double[fc];
40 // initialize the record with the worst objective values
41 Arrays.fill(this.f, Constants.WORST_OBJECTIVE);
42 }
43
44 /**
45 * Copy the values of another individual record to this record. This
46 * method saves us from excessively creating new individual records.
47 *
48 * @param to
49 * the individual to copy from
50 */
51 @SuppressWarnings("unchecked")
52 @Override
53 public void assign(final Individual<G, X> to) {
54 final double[] d;
55 super.assign(to);
56 if (to instanceof MOIndividual) {
57 d = (((MOIndividual) to).f);
58 System.arraycopy(d, 0, this.f, 0, Math.min(d.length, this.f.length));
59 }
60 }
61
62 /**
63 * Get the full configuration which holds all the data necessary to
64 * describe this individual record, i.e., the genotype, phenotype, and
65 * fitness as well as the objective values.
66 *
67 * @param longVersion
68 * true if the long version should be returned, false if the
69 * short version should be returned
70 * @return the full configuration
71 */
72 @Override
73 public String getConfiguration(final boolean longVersion) {
74 StringBuilder sb;
75
76 sb = new StringBuilder();
77 if (longVersion) {
78 sb.append("objectives"); //$NON-NLS-1$
79 } else {
80 sb.append('f');
81 }
82 sb.append('=');
83 TextUtils.toStringBuilder(this.f, sb);
84 sb.append(',');
85 sb.append(' ');
86 sb.append(super.getConfiguration(longVersion));
87
88 return sb.toString();
89 }
90
91 /**
92 * Get the name of the multi-objective individual record
93 *
94 * @param longVersion
95 * true if the long name should be returned, false if the short
96 * name should be returned
97 * @return the name of the optimization module
98 */
99 @Override
100 public String getName(final boolean longVersion) {
101 if (longVersion) {
102 return super.getName(true);
103 }
104 return "mo-ind"; //$NON-NLS-1$
105 }
106
107 /**
108 * Compute the objective values by using the given objective functions
109 *
110 * @param fs
111 * the objective functions
112 * @param r
113 * the randomizer
114 */
115 public final void evaluate(final IObjectiveFunction<X>[] fs,
116 final Random r) {
117 int i;
118 for (i = fs.length; (--i) >= 0;) {
119 this.f[i] = fs[i].compute(this.x, r);
120 }
121 }
122 }

Listing 56.58: A base class for archiving non-dominated individuals.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import java.util.AbstractList;
7 import java.util.Arrays;
8 import java.util.Collection;
9 import java.util.Comparator;
10 import java.util.Random;
11
12 import org.goataa.impl.comparators.Lexicographic;
13 import org.goataa.impl.comparators.Pareto;
14 import org.goataa.spec.IIndividualComparator;
15 import org.goataa.spec.IOptimizationModule;
16 import org.goataa.spec.ISelectionAlgorithm;
17
18 /**
19 * An archive of non-dominated individuals as discussed in
20 * Section 28.6 on page 308 for multi-objective
21 * optimization Section 3.3 on page 55.
22 *
23 * @param <G>
24 * the search space (genome, Section 4.1 on page 81)
25 * @param <X>
26 * the problem space (phenome, Section 2.1 on page 43)
27 * @author Thomas Weise
28 */
29 public class Archive<G, X> extends AbstractList<MOIndividual<G, X>>
30 implements IOptimizationModule {
31 /** a constant required by Java serialization */
32 private static final long serialVersionUID = 1;
33
34 /** the maximum size */
35 private final int maxSize;
36
37 /** the list */
38 @SuppressWarnings("unchecked")
39 private MOIndividual[] lst;
40
41 /** the individual comparator */
42 private final IIndividualComparator ic;
844 56 THE IMPLEMENTATION PACKAGE
43
44 /** the replacement index */
45 private int replaceIdx;
46
47 /** the epsilons */
48 private transient double[] epsilon;
49
50 /** the number of individuals in the list */
51 private int cnt;
52
53 /** the copy index */
54 private int copyIndex;
55
56 /** Create an archive with the default maximum size 100 */
57 public Archive() {
58 this(100, null);
59 }
60
61 /**
62 * Create an archive with the given maximum size
63 *
64 * @param ms
65    *          the maximum size, -1 for unrestricted
66 * @param i
67 * the individual comparator to use
68 */
69 public Archive(final int ms, final IIndividualComparator i) {
70 super();
71
72 boolean b;
73 int is;
74
75 b = ((ms > 0) && (ms < Integer.MAX_VALUE));
76 if (b) {
77 is = Math.min(1024, ms);
78 } else {
79 is = 1024;
80 }
81 this.maxSize = (b ? ms : Integer.MAX_VALUE);
82 this.lst = new MOIndividual[is];
83
84 this.ic = ((i != null) ? i : Pareto.PARETO_COMPARATOR);
85 }
86
87 /**
88 * Get the name of the optimization module
89 *
90 * @param longVersion
91 * true if the long name should be returned, false if the short
92 * name should be returned
93 * @return the name of the optimization module
94 */
95 public String getName(final boolean longVersion) {
96     return (longVersion ? "archive" : "arch"); //$NON-NLS-1$ //$NON-NLS-2$
97 }
98
99 /**
100 * Get the full configuration which holds all the data necessary to
101 * describe this object.
102 *
103 * @param longVersion
104 * true if the long version should be returned, false if the
105 * short version should be returned
106 * @return the full configuration
107 */
108 public String getConfiguration(final boolean longVersion) {
109 StringBuilder sb;
110
111 sb = new StringBuilder();
112 if ((this.maxSize > 0) && (this.maxSize < Integer.MAX_VALUE)) {
113       sb.append("maxLen="); //$NON-NLS-1$
114       sb.append(this.maxSize);
115     }
116
117     if (this.ic != null) {
118       if (sb.length() > 0) {
119         sb.append(", ");//$NON-NLS-1$
120       }
121       sb.append("cmp=");//$NON-NLS-1$
122 sb.append(this.ic.toString(longVersion));
123 }
124
125 return sb.toString();
126 }
127
128 /**
129 * Get the string representation of this object, i.e., the name and
130 * configuration.
131 *
132 * @param longVersion
133 * true if the long version should be returned, false if the
134 * short version should be returned
135 * @return the string version of this object
136 */
137 public String toString(final boolean longVersion) {
138     return this.getName(longVersion) + '('
139         + this.getConfiguration(longVersion) + ')';
140 }
141
142 /**
143 * Get the size of the list
144 *
145 * @return the number of individuals in the list
146 */
147 @Override
148 public final int size() {
149 return this.cnt;
150 }
151
152 /**
153    * Get a specific individual
154    *
155    * @param index
156    *          the index
157    * @return the individual
158 */
159 @Override
160 @SuppressWarnings("unchecked")
161 public final MOIndividual<G, X> get(final int index) {
162 return this.lst[index];
163 }
164
165 /**
166 * The set operation actually performs a deletion followed by an addition
167 *
168 * @param index
169 * ignored
170 * @param ind
171 * the individual
172    * @return the removed individual
173 */
174 @Override
175 public final MOIndividual<G, X> set(final int index,
176 final MOIndividual<G, X> ind) {
177 MOIndividual<G, X> r;
178 r = this.remove(index);
179 this.add(ind);
180 return r;
181 }
182
183 /**
184    * Add an individual to the end of the list
185 *
186 * @param index
187 * ignored
188 * @param ind
189 * the individual
190 */
191 @Override
192 public final void add(final int index, final MOIndividual<G, X> ind) {
193 this.add(ind);
194 }
195
196 /**
197    * Delete the individual at the given index
198    *
199    * @param index
200    *          the index
201    * @return the deleted individual
202 */
203 @SuppressWarnings("unchecked")
204 @Override
205 public MOIndividual<G, X> remove(final int index) {
206 MOIndividual<G, X> x;
207 MOIndividual[] v;
208 int q;
209
210 v = this.lst;
211 x = v[index];
212 q = (--this.cnt);
213 v[index] = v[q];
214 v[q] = null;
215 return x;
216 }
217
218 /**
219 * @param fromIndex
220 * index of first element to be removed
221 * @param toIndex
222 * index after last element to be removed
223 */
224 @Override
225 protected final void removeRange(final int fromIndex, final int toIndex) {
226 int x;
227 x = toIndex - fromIndex;
228     System.arraycopy(this.lst, toIndex, this.lst, fromIndex, this.cnt - toIndex);
229     Arrays.fill(this.lst, this.cnt - x, this.cnt, null);
230 this.cnt -= x;
231 }
232
233 /**
234 * Check in an individual into the archive while maintaining the proper
235 * maximum size of the archive by using techniques such as those
236 * discussed in Section 28.6.3 on page 311.
237 *
238 * @param ind
239 * the individual to be checked in
240    * @return true if the list changed, false otherwise
241 */
242 @Override
243 @SuppressWarnings("unchecked")
244 public boolean add(final MOIndividual<G, X> ind) {
245 int i, x;
246 int sze;
247 final int ms;
248 MOIndividual<G, X> m;
249 MOIndividual[] l;
250
251 sze = this.cnt;
252 l = this.lst;
253 // check if we already have that individual or if we can remove some
254 // others
255 for (i = sze; (--i) >= 0;) {
256
257 m = l[i];
258 if (Utils.equals(m.g, ind.g) && Utils.equals(m.x, ind.x)) {
259 return false;
260 }
261
262 x = this.ic.compare(ind, m);
263 if (x > 0) {
264 return false;
265 }
266
267 if (x < 0) {
268 this.remove(i);
269 sze--;
270 }
271 }
272
273 ms = this.maxSize;
274 if (sze < ms) {
275 // ok, we do not yet have that individual, but we have enough space
276 if (sze >= l.length) {
277 l = new MOIndividual[Math.min(ms, Math.max(sze + 1, sze << 1))];
278 System.arraycopy(this.lst, 0, l, 0, sze);
279 this.lst = l;
280 }
281
282 l[sze++] = ind;
283 this.cnt = sze;
284 return true;
285 }
286
287 // the individual is new but the space is not sufficient -> delete one
288 return this.replaceOne(ind);
289 }
290
291 /**
292 * Appends all of the elements in the specified collection to the end of
293 * this list, in the order that they are returned by the specified
294    * collection's Iterator. The behavior of this operation is undefined if
295 * the specified collection is modified while the operation is in
296 * progress. (This implies that the behavior of this call is undefined if
297 * the specified collection is this list, and this list is nonempty.)
298 *
299 * @param c
300 * collection containing elements to be added to this list
301 * @return <tt>true</tt> if this list changed as a result of the call
302 * @throws NullPointerException
303 * if the specified collection is null
304 */
305 @Override
306 public boolean addAll(Collection<? extends MOIndividual<G, X>> c) {
307 boolean b;
308
309 b = false;
310 for (MOIndividual<G, X> e : c) {
311 if (this.add(e)) {
312 b = true;
313 }
314 }
315 return b;
316 }
317
318 /**
319 * Replace one individual in the archive. Currently, this method just
320 * replaces a random individual. However, more sophisticated strategies
321 * could be employed as well.
322 *
323 * @param ind
324 * the individual
325 * @return true if the new individual was added, false otherwise
326 */
327 @SuppressWarnings("unchecked")
328 protected boolean replaceOne(final MOIndividual<G, X> ind) {
329 double w, deps;
330 int i, j, bi, ci, z;
331 double[] a, b, eps;
332 final int s, r;
333 final MOIndividual[] l;
334 boolean cr;
335
336 a = ind.f;
337 r = a.length;
338 s = this.cnt;
339 l = this.lst;
340
341 // do the replacement
342 for (i = s; (--i) >= 0;) {
343       // if the new individual equals an existing one, skip
344 if (Utils.equals(l[i].x, ind.x) || Utils.equals(l[i].g, ind.g)) {
345 return false;
346 }
347 }
348
349 bi = (this.replaceIdx % s);
350
351 eps = this.epsilon;
352 if (eps == null) {
353 this.epsilon = eps = new double[r];
354 } else {
355 Arrays.fill(eps, 0, r, 0d);
356 }
357
358 // try to find the most similar individual and replace it
359 ci = bi;
360 for (z = (r * 3); (--z) >= 0;) {
361 deps = Double.POSITIVE_INFINITY;
362 ci = ((ci + 1) % r);
363
364 for (i = s; (--i) >= 0;) {
365 bi = ((bi + 1) % s);
366 b = l[bi].f;
367 cr = true;
368 for (j = r; (--j) >= 0;) {
369 w = Math.abs(a[j] - b[j]);
370 if (w > eps[j]) {
371 cr = false;
372 if ((j == ci) && (w < deps)) {
373 deps = w;
374 }
375 }
376 }
377
378 if (cr) {
379 l[bi] = ind;
380 this.replaceIdx = bi;
381 return true;
382 }
383 }
384 eps[ci] = deps;
385 }
386
387 // individuals are not sufficiently similar: delete more or less
388 // randomly
389 bi = ((bi + 1) % s);
390 this.replaceIdx = bi;
391 l[bi] = ind;
392 return true;
393 }
394
395 /**
396    * Sort by a given comparator. Notice that sorting with the archive's
397 * default comparator makes no sense at all.
398 *
399 * @param cmp
400 * the comparator
401 */
402 @SuppressWarnings("unchecked")
403 public void sort(final IIndividualComparator cmp) {
404 Arrays.sort(this.lst, 0, this.cnt, ((Comparator) cmp));
405 }
406
407 /** Sort lexicographically */
408 public void sortLexicographically() {
409 this.sort(Lexicographic.LEXICOGRAPHIC_COMPARATOR);
410 }
411
412 /**
413 * Select a couple of individuals
414 *
415 * @param sel
416 * the selection algorithm
417 * @param mate
418 * the mating pool to select to
419 * @param mateStart
420 * the start index
421 * @param mateCount
422 * the number of individuals to select
423 * @param r
424 * the randomizer
425 */
426 public final void select(final ISelectionAlgorithm sel,
427 final Individual<?, ?>[] mate, final int mateStart,
428 final int mateCount, final Random r) {
429 final int c;
430
431 c = this.cnt;
432 if (c <= 0) {
433 Arrays.fill(mate, mateStart, mateStart + mateCount, null);
434 } else {
435 sel.select(this.lst, 0, c, mate, mateStart, mateCount, r);
436 }
437 }
438
439 /**
440    * Copy some of the individuals
441 *
442 * @param mate
443 * the mating pool to select to
444 * @param mateStart
445 * the start index
446 * @param mateCount
447 * the maximum number of individuals to select
448 * @return the number of selected individuals (may be less)
449 */
450 @SuppressWarnings("unchecked")
451 public final int copy(final Individual<?, ?>[] mate,
452 final int mateStart, final int mateCount) {
453 final int c;
454 final MOIndividual[] l;
455 int i, cpy;
456
457 c = this.cnt;
458 l = this.lst;
459 if (mateCount >= c) {
460 System.arraycopy(l, 0, mate, mateStart, c);
461 Arrays.fill(mate, mateStart + c, mateStart + mateCount, null);
462 return c;
463 }
464
465 cpy = this.copyIndex;
466 for (i = (mateStart + mateCount); (--i) >= mateStart;) {
467 cpy = ((cpy + 1) % c);
468 mate[i] = l[cpy];
469 }
470
471 this.copyIndex = cpy;
472
473 return mateCount;
474 }
475 }
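The check-in rule implemented by `add` and `replaceOne` above is easy to lose inside the full class. The following stand-alone sketch (hypothetical class and method names, not part of the `org.goataa` packages) reproduces only its Pareto core for minimization: a newcomer is rejected if some archived member dominates it, and it evicts every archived member that it dominates.

```java
import java.util.ArrayList;
import java.util.List;

/** A minimal sketch of Pareto-based archiving; smaller values are better. */
public final class ParetoArchiveSketch {

  /** true if a dominates b: a is nowhere worse and somewhere better */
  static boolean dominates(final double[] a, final double[] b) {
    boolean better = false;
    for (int i = a.length; (--i) >= 0;) {
      if (a[i] > b[i]) {
        return false; // a is worse in objective i
      }
      if (a[i] < b[i]) {
        better = true; // a is strictly better in objective i
      }
    }
    return better;
  }

  /** check a new objective vector into the archive */
  static boolean checkIn(final List<double[]> archive, final double[] f) {
    for (int i = archive.size(); (--i) >= 0;) {
      final double[] m = archive.get(i);
      if (dominates(m, f)) {
        return false; // the newcomer is dominated: reject it
      }
      if (dominates(f, m)) {
        archive.remove(i); // the newcomer dominates m: evict m
      }
    }
    archive.add(f);
    return true;
  }

  public static void main(final String[] args) {
    final List<double[]> archive = new ArrayList<>();
    checkIn(archive, new double[] { 1d, 3d });
    checkIn(archive, new double[] { 3d, 1d }); // incomparable: both kept
    checkIn(archive, new double[] { 1d, 1d }); // dominates both: evicts them
    System.out.println(archive.size()); // 1
  }
}
```

Unlike the `Archive` class above, this sketch does not bound the archive size; the book's `replaceOne` only becomes necessary once such a bound is enforced.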

Listing 56.59: Some constants useful for optimization.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import java.io.Serializable;
7 import java.util.Comparator;
8
9 /**
10 * This class holds some common constants
11 *
12 * @author Thomas Weise
13 */
14 public final class Constants {
15
16 /*
17 * based on Section 6.3.4 on page 106, we define the following
18 * constants
19 */
20
21 /** the worst possible objective value */
22 public static final double WORST_OBJECTIVE = Double.POSITIVE_INFINITY;
23
24 /** the best possible objective value */
25 public static final double BEST_OBJECTIVE = Double.NEGATIVE_INFINITY;
26
27 /** the worst possible fitness value */
28 public static final double WORST_FITNESS = Double.POSITIVE_INFINITY;
29
30 /** the best possible fitness value */
31 public static final double BEST_FITNESS = 0d;
32
33 /**
34 * a comparator that can be used for sorting individuals according to
35 * their fitness in ascending order, from the best to the worst.
36 */
37 public static final Comparator<Individual<?, ?>> FITNESS_SORT_ASC_COMPARATOR =
new FitnessSortASCComparator();
38
39 /**
40 * An internal utility class for sorting individuals according to their
41 * fitness in ascending order.
42 *
43 * @author Thomas Weise
44 */
45 private static final class FitnessSortASCComparator implements
46 Serializable, Comparator<Individual<?, ?>> {
47 /** a constant required by Java serialization */
48 private static final long serialVersionUID = 1;
49
50 /** instantiate */
51 FitnessSortASCComparator() {
52 super();
53 }
54
55 /**
56 * Compare two individual records
57 *
58 * @param a
59 * the first individual record
60 * @param b
61 * the second individual record
62 * @return -1 if a.v<b.v, 0 if a.v=b.v, 1 if a.v>b.v
63 */
64 public final int compare(final Individual<?, ?> a,
65 final Individual<?, ?> b) {
66 return Double.compare(a.v, b.v);
67 }
68 }
69 }

Listing 56.60: A small and handy class for computing some useful statistics.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import java.util.Arrays;
7
8 import org.goataa.impl.OptimizationModule;
9
10 /**
11 * This class helps us to collect statistics
12 *
13 * @author Thomas Weise
14 */
15 public class BufferedStatistics extends OptimizationModule {
16
17 /** a constant required by Java serialization */
18 private static final long serialVersionUID = 1;
19
20 /** the data */
21 private double[] data;
22
23 /** the length */
24 private int len;
25
26 /** did we calculate the statistics? */
27 private boolean calculated;
28
29 /** the mean */
30 private double mean;
31
32 /** the median */
33 private double med;
34
35 /** the minimum */
36 private double min;
37
38 /** the maximum */
39 private double max;
40
41 /**
42 * Create a new statistics object
43 */
44 public BufferedStatistics() {
45 super();
46 this.data = new double[128];
47 this.clear();
48 }
49
50 /**
51    * Add a double to the buffer
52 *
53 * @param d
54 * the double
55 */
56 public final void add(final double d) {
57 double[] x;
58 int l;
59
60 this.calculated = false;
61 x = this.data;
62 l = this.len;
63 if (l >= x.length) {
64 x = new double[l << 1];
65 System.arraycopy(this.data, 0, x, 0, l);
66 this.data = x;
67 }
68
69 x[l] = d;
70 this.len = (l + 1);
71 }
72
73 /**
74 * Calculate the statistics
75 */
76 private final void calculate() {
77 double[] x;
78 int l;
79 double s;
80
81 if (this.calculated) {
82 return;
83 }
84
85 x = this.data;
86 l = this.len;
87 if (l <= 0) {
88 this.max = Double.NaN;
89 this.min = Double.NaN;
90 this.mean = Double.NaN;
91 this.med = Double.NaN;
92 } else {
93 Arrays.sort(x, 0, l);
94 this.min = x[0];
95 this.max = x[l - 1];
96
97 if ((l & 1) == 0) {
98 this.med = (0.5d * (x[l >>> 1] + x[(l >>> 1) - 1]));
99 } else {
100 this.med = x[l >>> 1];
101 }
102
103 s = 0d;
104 for (; (--l) >= 0;) {
105 s += x[l];
106 }
107 this.mean = (s / this.len);
108 }
109
110 this.calculated = true;
111 }
112
113 /**
114    * Get the minimum of the collected values
115 *
116 * @return the minimum value
117 */
118 public final double min() {
119 this.calculate();
120 return this.min;
121 }
122
123 /**
124    * Get the maximum of the collected values
125 *
126 * @return the maximum value
127 */
128 public final double max() {
129 this.calculate();
130 return this.max;
131 }
132
133 /**
134    * Get the median of the collected values
135 *
136 * @return the median value
137 */
138 public final double median() {
139 this.calculate();
140 return this.med;
141 }
142
143 /**
144 * Clear this statistics record
145 */
146 public final void clear() {
147 this.len = 0;
148 this.calculated = false;
149 }
150
151 /**
152 * Get the mean of the collected values
153 *
154 * @return the mean value
155 */
156 public final double mean() {
157 this.calculate();
158 return this.mean;
159 }
160
161 /**
162 * Get the full configuration which holds all the data necessary to
163 * describe this object.
164 *
165 * @param longVersion
166 * true if the long version should be returned, false if the
167 * short version should be returned
168 * @return the full configuration
169 */
170 @Override
171 public String getConfiguration(final boolean longVersion) {
172 StringBuilder sb;
173
174 sb = new StringBuilder();
175     sb.append("min="); //$NON-NLS-1$
176     sb.append(TextUtils.formatNumberSameWidth(this.min()));
177     sb.append(longVersion ? ", median=" : " med=");//$NON-NLS-1$//$NON-NLS-2$
178     sb.append(TextUtils.formatNumberSameWidth(this.median()));
179     sb.append(longVersion ? ", mean=" : " avg=");//$NON-NLS-1$//$NON-NLS-2$
180     sb.append(TextUtils.formatNumberSameWidth(this.mean()));
181     sb.append(longVersion ? ", max=" : " max=");//$NON-NLS-1$//$NON-NLS-2$
182     sb.append(TextUtils.formatNumberSameWidth(this.max()));
183
184 return sb.toString();
185 }
186
187 /**
188 * Get the name of the optimization module
189 *
190 * @param longVersion
191 * true if the long name should be returned, false if the short
192 * name should be returned
193 * @return the name of the optimization module
194 */
195 @Override
196 public String getName(final boolean longVersion) {
197 if (longVersion) {
198 return super.getName(true);
199 }
200     return "stat"; //$NON-NLS-1$
201 }
202 }
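The median branch in `calculate` above (average the two middle elements when the count is even, take the middle element when it is odd) can be checked in isolation. A minimal stand-alone sketch under a hypothetical class name, not the book's `BufferedStatistics`:

```java
import java.util.Arrays;

/** A minimal sketch of the median rule used by BufferedStatistics. */
public final class MedianSketch {

  /** compute the median of the given values (the input is not modified) */
  static double median(final double[] values) {
    final double[] x = values.clone();
    Arrays.sort(x);
    final int l = x.length;
    if ((l & 1) == 0) {
      // even length: mean of the two middle elements
      return 0.5d * (x[l >>> 1] + x[(l >>> 1) - 1]);
    }
    // odd length: the single middle element
    return x[l >>> 1];
  }

  public static void main(final String[] args) {
    System.out.println(median(new double[] { 3d, 1d, 2d })); // 2.0
    System.out.println(median(new double[] { 4d, 1d, 2d, 3d })); // 2.5
  }
}
```

Note that `BufferedStatistics` sorts its buffer in place inside `calculate`, which is fine there because the buffer is private; the sketch copies the input instead.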

Listing 56.61: Some simple utilities for converting objects to strings.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.utils;
5
6 import java.lang.reflect.Array;
7 import java.text.DecimalFormat;
8 import java.text.DecimalFormatSymbols;
9 import java.text.NumberFormat;
10 import java.util.Locale;
11
12 /**
13 * A utility class for text
14 *
15 * @author Thomas Weise
16 */
17 public class TextUtils {
18
19 /** the newline string */
20 public static final String NEWLINE = System
      .getProperty("line.separator"); //$NON-NLS-1$
22
23 /**
24 * Print an object to a string builder while handling arrays correctly
25 *
26 * @param o
27 * the object
28 * @param sb
29 * the string builder
30 */
31 public static final void toStringBuilder(final Object o,
32 final StringBuilder sb) {
33 int i, j;
34 String s;
35
36 if (o == null) {
37 sb.append((Object) null);
38 return;
39 }
40
41 if (o.getClass().isArray()) {
42
43       sb.append('[');
44
45 i = Array.getLength(o);
46 for (j = 0; j < i; j++) {
47 if (j > 0) {
48           sb.append(',');
49           sb.append(' ');
50 }
51 toStringBuilder(Array.get(o, j), sb);
52 }
53
54       sb.append(']');
55 return;
56 }
57
58 if (o instanceof String) {
59       sb.append('"');
60 s = ((String) o);
61 i = s.length();
62 for (j = 0; j < i; j++) {
63 escapeCharToStringBuilder(s.charAt(j), sb);
64 }
65       sb.append('"');
66 return;
67 }
68
69     if (o instanceof Character) {
70       sb.append('\'');
71       escapeCharToStringBuilder(((Character) o).charValue(), sb);
72       sb.append('\''); return; // without this return, toString() would append the character again
73     }
74
75 sb.append(o.toString());
76 }
77
78 /**
79    * Print a character to a string builder while escaping special characters
80 *
81 * @param ch
82 * the character
83 * @param sb
84 * the string builder
85 */
86 public static final void escapeCharToStringBuilder(final char ch,
87 final StringBuilder sb) {
88 if ((ch < 32) || (ch > 127)) {
89       sb.append('\\');
90       sb.append('u');
91 sb.append(Integer.toHexString(ch));
92 } else {
93 sb.append(ch);
94 }
95 }
96
97 /**
98 * Convert an object to a string while handling arrays correctly
99 *
100 * @param o
101 * the object
102 * @return the string
103 */
104 public static final String toString(final Object o) {
105 StringBuilder sb;
106 sb = new StringBuilder();
107 toStringBuilder(o, sb);
108 return sb.toString();
109 }
110
111 /** the internal formatter */
112 private static final NumberFormat FORMATTER = new DecimalFormat(
      "0.00E00", new DecimalFormatSymbols(Locale.US)); //$NON-NLS-1$
114
115 /**
116 * Format a given number for output
117 *
118 * @param v
119 * the number
120 * @return the corresponding string
121 */
122 public static final String formatNumberSameWidth(final double v) {
123 return FORMATTER.format(v);
124 }
125
126 /**
127 * Format a given number for output
128 *
129 * @param v
130 * the number
131 * @return the corresponding string
132 */
133 public static final String formatNumber(final double v) {
134 String s1, s2;
135 int i;
136
137 s1 = String.valueOf(v);
138
139 for (i = s1.length(); (--i) >= 0;) {
140       if (s1.charAt(i) != '0') {
141         break;
142       }
143     }
144     if (i >= 0) {
145       if (s1.charAt(i) == '.') {
146         s1 = s1.substring(0, i);
147       } else {
148         if (s1.indexOf('.') < i) {
149 s1 = s1.substring(0, i + 1);
150 }
151 }
152 }
153
154 s2 = TextUtils.formatNumberSameWidth(v);
155     if (s2.endsWith("E00")) { //$NON-NLS-1$
156 s2 = s2.substring(0, s2.length() - 3);
157 }
158
159 return (s1.length() <= s2.length()) ? s1 : s2;
160 }
161 }
Chapter 57
Demos

57.1 Demos for String-based Problem Spaces


In this package, we provide some example optimization tasks with string-based problem
spaces.

57.1.1 Real-Vector based Problem Spaces X ⊆ R^n

Here we provide some benchmark problems for real-valued continuous vector spaces.

57.1.1.1 Some Benchmark Functions

Some real-valued benchmark functions for numerical optimization.

Listing 57.1: The Sphere Function.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real.benchmarkFunctions;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.spec.IObjectiveFunction;
10
11 /**
12 * The sphere function introduced in Section 50.3.1.1 on page 581
13 * is a very simple benchmark function for real-vector based problem
14 * spaces. It just adds up the squares of the elements of the candidate
15 * solutions ( Definition D2.2 on page 43).
16 *
17 * @author Thomas Weise
18 */
19 public final class Sphere extends OptimizationModule implements
20 IObjectiveFunction<double[]> {
21
22 /** a constant required by Java serialization */
23 private static final long serialVersionUID = 1;
24
25 /** Instantiate the sphere objective function */
26 public Sphere() {
27 super();
28 }
29
30 /**
31 * Compute the value of the sphere function
32 * ( Section 50.3.1.1 on page 581)
33 *
34 * @param x
35 * the candidate solution ( Definition D2.2 on page 43)
36 * @param r
37 * the randomizer
38 * @return the objective value
39 */
40 public final double compute(final double[] x, final Random r) {
41 double res;
42 int i;
43
44 res = 0d;
45 for (i = x.length; (--i) >= 0;) {
46 res += (x[i] * x[i]);
47 }
48
49 return res;
50 }
51
52 }
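The sphere function above is easy to sanity-check by hand: its value is simply the sum of squares, so it is non-negative everywhere and zero only at the origin. A small stand-alone sketch (the class name is hypothetical, not part of the demo package) with the same loop:

```java
import java.util.Random;

/** A minimal sketch of evaluating the sphere function on one candidate. */
public final class SphereSketch {

  /** the sphere function: sum of the squares of all vector elements */
  static double sphere(final double[] x) {
    double res = 0d;
    for (int i = x.length; (--i) >= 0;) {
      res += (x[i] * x[i]);
    }
    return res;
  }

  public static void main(final String[] args) {
    // a fixed-seed random candidate from [-1, 1)^3
    final double[] g = new Random(1L).doubles(3, -1d, 1d).toArray();
    System.out.println(sphere(new double[] { 1d, 2d, 3d })); // 14.0
    System.out.println(sphere(g) >= 0d); // true: never negative
    System.out.println(sphere(new double[3])); // 0.0: the global optimum
  }
}
```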

Listing 57.2: f1 as defined in Equation 26.1 on page 238.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real.benchmarkFunctions;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.spec.IObjectiveFunction;
10
11 /**
12 * A function f1 which computes the sum of the cosines of the logarithms of
13  * all elements in a vector. See Equation 26.1 on page 238 in
14 * Task 64 on page 238.
15 *
16 * @author Thomas Weise
17 */
18 public final class SumCosLn extends OptimizationModule implements
19 IObjectiveFunction<double[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** Instantiate the sum-cos-ln objective */
25 public SumCosLn() {
26 super();
27 }
28
29 /**
30 * Compute the function value according to
31 * Equation 26.1 on page 238.
32 *
33 * @param x
34 * the candidate solution ( Definition D2.2 on page 43)
35 * @param r
36 * the randomizer
37 * @return the objective value
38 */
39 public final double compute(final double[] x, final Random r) {
40 double res;
41 int i;
42
43 res = 0d;
44 for (i = x.length; (--i) >= 0;) {
45 res += Math.cos(Math.log(1d + Math.abs(x[i])));
46 }
47
48 return (0.5d - ((1d / (x.length << 1)) * res));
49 }
50
51 /**
52 * Get the name of the optimization module
53 *
54 * @param longVersion
55 * true if the long name should be returned, false if the short
56 * name should be returned
57 * @return the name of the optimization module
58 */
59 @Override
60 public String getName(final boolean longVersion) {
61 if (longVersion) {
62 return super.getName(true);
63 }
64 return "f1"; //N ON − N LS − 1
65 }
66 }

Listing 57.3: f2 as defined in Equation 26.2 on page 238.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real.benchmarkFunctions;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.spec.IObjectiveFunction;
10
11 /**
12 * A function f2 which computes the cosine of the logarithm of the sum of
13  * all elements in a vector. See Equation 26.2 on page 238 in
14 * Task 64 on page 238.
15 *
16 * @author Thomas Weise
17 */
18 public final class CosLnSum extends OptimizationModule implements
19 IObjectiveFunction<double[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** Instantiate the cos-ln-sum objective */
25 public CosLnSum() {
26 super();
27 }
28
29 /**
30 * Compute the function value according to
31 * Equation 26.2 on page 238.
32 *
33 * @param x
34 * the candidate solution ( Definition D2.2 on page 43)
35 * @param r
36 * the randomizer
37 * @return the objective value
38 */
39 public final double compute(final double[] x, final Random r) {
40 double res;
41 int i;
42
43 res = 0d;
44 for (i = x.length; (--i) >= 0;) {
45 res += Math.abs(x[i]);
46 }
47
48 return (0.5d - (0.5d * Math.cos(Math.log(1d + res))));
49 }
50
51 /**
52 * Get the name of the optimization module
53 *
54 * @param longVersion
55 * true if the long name should be returned, false if the short
56 * name should be returned
57 * @return the name of the optimization module
58 */
59 @Override
60 public String getName(final boolean longVersion) {
61 if (longVersion) {
62 return super.getName(true);
63 }
64     return "f2"; //$NON-NLS-1$
65 }
66 }

Listing 57.4: f3 as defined in Equation 26.3 on page 238.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real.benchmarkFunctions;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.spec.IObjectiveFunction;
10
11 /**
12 * A function f3 which computes the maximum of all square values in a
13 * vector. See Equation 26.3 on page 238 in
14 * Task 64 on page 238.
15 *
16 * @author Thomas Weise
17 */
18 public final class MaxSqr extends OptimizationModule implements
19 IObjectiveFunction<double[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /** Instantiate max-squares objective function */
25 public MaxSqr() {
26 super();
27 }
28
29 /**
30 * Compute the function value according to
31 * Equation 26.3 on page 238.
32 *
33 * @param x
34 * the candidate solution ( Definition D2.2 on page 43)
35 * @param r
36 * the randomizer
37 * @return the objective value
38 */
39 public final double compute(final double[] x, final Random r) {
40 double res;
41 int i;
42
43 res = 0d;
44 for (i = x.length; (--i) >= 0;) {
45 res = Math.max(res, (x[i] * x[i]));
46 }
47
48 return (0.01d * res);
49 }
50
51 /**
52 * Get the name of the optimization module
53 *
54 * @param longVersion
55 * true if the long name should be returned, false if the short
56 * name should be returned
57 * @return the name of the optimization module
58 */
59 @Override
60 public String getName(final boolean longVersion) {
61 if (longVersion) {
62 return super.getName(true);
63 }
64     return "f3"; //$NON-NLS-1$
65 }
66 }
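All three benchmark functions above share the same global optimum: at the origin, the sum of absolute values and the maximum square vanish, cos(log(1)) = cos(0) = 1, and each function evaluates to 0. A stand-alone sketch (the formulas of f1, f2, and f3 re-stated inline under a hypothetical class name) makes this easy to verify:

```java
/** A minimal sketch checking f1, f2, and f3 at the origin x = (0, ..., 0). */
public final class BenchmarkOriginSketch {

  /** f1: 0.5 - (1/2n) * sum of cos(ln(1 + |x_i|)) */
  static double f1(final double[] x) {
    double res = 0d;
    for (final double xi : x) {
      res += Math.cos(Math.log(1d + Math.abs(xi)));
    }
    return 0.5d - ((1d / (x.length << 1)) * res);
  }

  /** f2: 0.5 - 0.5 * cos(ln(1 + sum of |x_i|)) */
  static double f2(final double[] x) {
    double res = 0d;
    for (final double xi : x) {
      res += Math.abs(xi);
    }
    return 0.5d - (0.5d * Math.cos(Math.log(1d + res)));
  }

  /** f3: 0.01 * max of x_i^2 */
  static double f3(final double[] x) {
    double res = 0d;
    for (final double xi : x) {
      res = Math.max(res, xi * xi);
    }
    return 0.01d * res;
  }

  public static void main(final String[] args) {
    final double[] origin = new double[10];
    System.out.println(f1(origin)); // 0.0
    System.out.println(f2(origin)); // 0.0
    System.out.println(f3(origin)); // 0.0
  }
}
```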

57.1.1.2 Experiments

57.1.1.2.1 The Function Test

Listing 57.5: Testing the function from Task 64.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real;
5
6 import java.util.List;
7
8 import org.goataa.impl.algorithms.RandomSampling;
9 import org.goataa.impl.algorithms.RandomWalk;
10 import org.goataa.impl.algorithms.de.DifferentialEvolution1;
11 import org.goataa.impl.algorithms.ea.SimpleGenerationalEA;
12 import org.goataa.impl.algorithms.ea.selection.TournamentSelection;
13 import org.goataa.impl.algorithms.ea.selection.TruncationSelection;
14 import org.goataa.impl.algorithms.es.EvolutionStrategy;
15 import org.goataa.impl.algorithms.hc.HillClimbing;
16 import org.goataa.impl.algorithms.sa.SimulatedAnnealing;
17 import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
18 import org.goataa.impl.algorithms.sa.temperatureSchedules.Logarithmic;
19 import
org.goataa.impl.searchOperations.strings.real.binary.DoubleArrayWeightedMeanCrossover;
20 import
org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation;
21 import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEbin;
22 import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEexp;
23 import
org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllNormalMutation;
24 import
org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllUniformMutation;
25 import org.goataa.impl.termination.StepLimit;
26 import org.goataa.impl.utils.BufferedStatistics;
27 import org.goataa.impl.utils.Individual;
28 import org.goataa.spec.INullarySearchOperation;
29 import org.goataa.spec.IObjectiveFunction;
30 import org.goataa.spec.ISOOptimizationAlgorithm;
31 import org.goataa.spec.ISelectionAlgorithm;
32 import org.goataa.spec.ITemperatureSchedule;
33 import org.goataa.spec.ITernarySearchOperation;
34 import org.goataa.spec.IUnarySearchOperation;
35
36 import demos.org.goataa.strings.real.benchmarkFunctions.CosLnSum;
37 import demos.org.goataa.strings.real.benchmarkFunctions.MaxSqr;
38 import demos.org.goataa.strings.real.benchmarkFunctions.Sphere;
39 import demos.org.goataa.strings.real.benchmarkFunctions.SumCosLn;
40
41 /**
42 * The application of several optimization methods to the benchmark
43 * functions defined in Task 64 on page 238. This
44 * program is a more extensive version of the one given in
45 * Paragraph 57.1.1.2.2 on page 885.
46 *
47 * @author Thomas Weise
48 */
49 public class FunctionTest {
50
51 /**
52 * The main routine
53 *
54 * @param args
55 * the command line arguments which are ignored here
56 */
57 @SuppressWarnings("unchecked")
58 public static final void main(final String[] args) {
59 final IObjectiveFunction<double[]>[] f;
60 INullarySearchOperation<double[]> create;
61 IUnarySearchOperation<double[]>[] mutate;
62 final int maxSteps, runs;
63 ITemperatureSchedule[] schedules;
64 int i, k, l, n, ri, li, mi;
65 HillClimbing<double[], double[]> HC;
66 RandomWalk<double[], double[]> RW;
67 RandomSampling<double[], double[]> RS;
68 SimulatedAnnealing<double[], double[]> SA;
69 SimpleGenerationalEA<double[], double[]> GA;
70 EvolutionStrategy<double[]> ES;
71 DifferentialEvolution1<double[]> DE;
72 ISelectionAlgorithm[] sel;
73 int[] ns, rhos, lambdas, mus;
74 ITernarySearchOperation[] tern;
75
76 maxSteps = 100000;
77 runs = 100;
78
79 System.out.println("Maximum Number of Steps: " + maxSteps); //$NON-NLS-1$
80 System.out.println("Number of Runs : " + runs); //$NON-NLS-1$
81 System.out.println();
82
83 f = new IObjectiveFunction[] { new Sphere(), new SumCosLn(),
84 new CosLnSum(), new MaxSqr() };
85
86 HC = new HillClimbing<double[], double[]>();
87 SA = new SimulatedAnnealing<double[], double[]>();
88 RW = new RandomWalk<double[], double[]>();
89 RS = new RandomSampling<double[], double[]>();
90 GA = new SimpleGenerationalEA<double[], double[]>();
91 ES = new EvolutionStrategy<double[]>();
92 DE = new DifferentialEvolution1<double[]>();
93
94 GA
95 .setBinarySearchOperation(DoubleArrayWeightedMeanCrossover.DOUBLE_ARRAY_WEIGHTED_MEAN_CROSSOVER);
96 sel = new ISelectionAlgorithm[] { new TournamentSelection(2),
97 new TournamentSelection(3), new TournamentSelection(5),
98 new TruncationSelection(10), new TruncationSelection(50) };
99
100 schedules = new ITemperatureSchedule[] {//
101 new Logarithmic(1d),//
102 new Logarithmic(1d),//
103 new Logarithmic(0.1d),//
104 new Logarithmic(1e-3d),//
105 new Logarithmic(1e-4d),//
106 new Exponential(1d, 1e-5),//
107 new Exponential(1d, 1e-4),//
108 new Exponential(0.1d, 1e-5), //
109 new Exponential(0.1d, 1e-4), //
110 new Exponential(1e-3d, 1e-5),//
111 new Exponential(1e-3d, 1e-4),//
112 new Exponential(1e-4d, 1e-5), //
113 new Exponential(1e-4d, 1e-4) };
114
115 ns = new int[] { 2, 5, 10 };
116
117 rhos = new int[] { 2, 10 };
118 lambdas = new int[] { 50, 100, 200 };
119 mus = new int[] { 10, 20 };
120
121 for (n = 0; n < ns.length; n++) {
122
123 System.out.println();
124 System.out.println();
125 System.out.println("Dimensions: " + ns[n]); //$NON-NLS-1$
126 System.out.println();
127
128 tern = new ITernarySearchOperation[] {
129 new DoubleArrayDEbin(-10d, 10d, 0.3d, 0.7d),
130 new DoubleArrayDEbin(-10d, 10d, 0.7d, 0.3d),
131 new DoubleArrayDEbin(-10d, 10d, 0.5d, 0.5d),
132 new DoubleArrayDEexp(-10d, 10d, 0.3d, 0.7d),
133 new DoubleArrayDEexp(-10d, 10d, 0.7d, 0.3d),
134 new DoubleArrayDEexp(-10d, 10d, 0.5d, 0.5d), };
135
136 mutate = new IUnarySearchOperation[] {//
137 new DoubleArrayAllNormalMutation(-10.0d, 10.0d),//
138 new DoubleArrayAllUniformMutation(-10.0d, 10.0d) };
139 create = new DoubleArrayUniformCreation(ns[n], -10.0d, 10.0d);
140
141 for (i = 0; i < f.length; i++) {
142 System.out.println();
143 System.out.println();
144 System.out.println(f[i].getClass().getSimpleName());
145 System.out.println();
146
147 // Hill Climbing ( Algorithm 26.1 on page 230)
148 HC.setObjectiveFunction(f[i]);
149 HC.setNullarySearchOperation(create);
150 for (k = 0; k < mutate.length; k++) {
151 HC.setUnarySearchOperation(mutate[k]);
152 testRuns(HC, runs, maxSteps);
153 }
154
155 // Simulated Annealing ( Algorithm 27.1 on page 245)
156 // and Simulated Quenching
157 // ( Section 27.3.2 on page 249)
158 SA.setObjectiveFunction(f[i]);
159 SA.setNullarySearchOperation(create);
160 for (l = 0; l < schedules.length; l++) {
161 SA.setTemperatureSchedule(schedules[l]);
162 for (k = 0; k < mutate.length; k++) {
163 SA.setUnarySearchOperation(mutate[k]);
164 testRuns(SA, runs, maxSteps);
165 }
166 }
167
168 // Generational Genetic/Evolutionary Algorithm
169 // ( Chapter 28 on page 253,
170 // Section 28.1.4.1 on page 265,
171 // Section 29.3 on page 328)
172 GA.setObjectiveFunction(f[i]);
173 GA.setNullarySearchOperation(create);
174 for (l = 0; l < sel.length; l++) {
175 GA.setSelectionAlgorithm(sel[l]);
176 for (k = 0; k < mutate.length; k++) {
177 GA.setUnarySearchOperation(mutate[k]);
178 testRuns(GA, runs, maxSteps);
179 }
180 }
181
182 // Evolution Strategies: Algorithm 30.1 on page 361
183 ES.setObjectiveFunction(f[i]);
184 ES.setNullarySearchOperation(create);
185 ES.setDimension(ns[n]);
186 ES.setMinimum(-10d);
187 ES.setMaximum(10d);
188
189 for (mi = 0; mi < mus.length; mi++) {
190 ES.setMu(mus[mi]);
191 for (li = 0; li < lambdas.length; li++) {
192 ES.setLambda(lambdas[li]);
193 for (ri = 0; ri < rhos.length; ri++) {
194 ES.setRho(rhos[ri]);
195
196 ES.setPlus(true);
197 testRuns(ES, runs, maxSteps);
198
199 ES.setPlus(false);
200 testRuns(ES, runs, maxSteps);
201 }
202 }
203 }
204
205 // Differential Evolution: Chapter 33 on page 419
206 DE.setObjectiveFunction(f[i]);
207 DE.setNullarySearchOperation(create);
208 for (li = 0; li < lambdas.length; li++) {
209 DE.setPopulationSize(lambdas[li]);
210 for (mi = 0; mi < tern.length; mi++) {
211 DE.setTernarySearchOperation(tern[mi]);
212 testRuns(DE, runs, maxSteps);
213 }
214 }
215
216 // Random Walks ( Section 8.2 on page 114)
217 RW.setObjectiveFunction(f[i]);
218 RW.setNullarySearchOperation(create);
219 for (k = 0; k < mutate.length; k++) {
220 RW.setUnarySearchOperation(mutate[k]);
221 testRuns(RW, runs, maxSteps);
222 }
223
224 // Random Sampling ( Section 8.1 on page 113)
225 RS.setObjectiveFunction(f[i]);
226 RS.setNullarySearchOperation(create);
227 testRuns(RS, runs, maxSteps);
228 }
229 }
230 }
231
232 /**
233 * Perform the test runs
234 *
235 * @param algorithm
236 * the algorithm configuration to test
237 * @param runs
238 * the number of runs to perform
239 * @param steps
240 * the number of steps to execute per run
241 */
242 @SuppressWarnings("unchecked")
243 private static final void testRuns(
244 final ISOOptimizationAlgorithm<?, double[], ?> algorithm,
245 final int runs, final int steps) {
246 int i;
247 BufferedStatistics stat;
248 List<Individual<?, double[]>> solutions;
249 Individual<?, double[]> individual;
250
251 stat = new BufferedStatistics();
252 algorithm.setTerminationCriterion(new StepLimit(steps));
253
254 for (i = 0; i < runs; i++) {
255 algorithm.setRandSeed(i);
256 solutions = ((List<Individual<?, double[]>>) (algorithm.call()));
257 individual = solutions.get(0);
258 stat.add(individual.v);
259 }
260
261 System.out.println(stat.getConfiguration(false) + ' '
262 + algorithm.toString(false));
263 }
264 }
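For each configuration, `testRuns` prints the minimum, median, average, and maximum of the best objective values found in the 100 runs. The following self-contained sketch shows how such summary statistics can be computed; the class `RunStatistics` and its method `summarize` are illustrative helpers only and are not part of the goataa library (there, `BufferedStatistics` plays this role).

```java
import java.util.Arrays;

/** Illustrative helper (not part of goataa): summary statistics of run results. */
public class RunStatistics {

  /** Return {min, median, average, max} of the given best-run objective values. */
  public static double[] summarize(final double[] values) {
    final double[] sorted = values.clone();
    Arrays.sort(sorted); // ascending order: min first, max last
    double sum = 0d;
    for (final double v : sorted) {
      sum += v;
    }
    final int n = sorted.length;
    // median: middle element for odd n, mean of the two middle elements for even n
    final double med = ((n & 1) == 1) ? sorted[n >>> 1]
        : (0.5d * (sorted[(n >>> 1) - 1] + sorted[n >>> 1]));
    return new double[] { sorted[0], med, sum / n, sorted[n - 1] };
  }

  public static void main(final String[] args) {
    final double[] s = summarize(new double[] { 4d, 1d, 3d, 2d });
    // prints: min=1.0 med=2.5 avg=2.5 max=4.0
    System.out.println("min=" + s[0] + " med=" + s[1] + " avg=" + s[2]
        + " max=" + s[3]);
  }
}
```

Reporting median and minimum alongside the mean is important here, because single lucky runs (such as the `0.00E00` minima below) would otherwise dominate the impression of an algorithm's typical behavior.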
1 Maximum Number of Steps : 100000
2 Number o f Runs : 100
3
4
5
6 Dimensions : 2
7
8
9
10 Sphere
11
12 min = 6 . 8 5E−11 med = 4 . 9 9E−09 avg = 7 . 1 9E−09 max= 4 . 4 1E−08 HC( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
13 min = 6 . 7 8E−12 med = 9 . 3 4E−10 avg = 1 . 4 8E−09 max= 9 . 3 1E−09 HC( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
14 min = 2 . 4 4E−09 med = 6 . 8 6E−07 avg = 9 . 8 3E−07 max= 7 . 5 5E−06 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 1 ) )
15 min = 1 . 0 1E−08 med = 8 . 5 6E−07 avg = 1 . 5 2E−06 max= 2 . 1 0E−05 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 1 ) )
16 min = 2 . 4 4E−09 med = 6 . 8 6E−07 avg = 9 . 8 3E−07 max= 7 . 5 5E−06 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 1 ) )
17 min = 1 . 0 1E−08 med = 8 . 5 6E−07 avg = 1 . 5 2E−06 max= 2 . 1 0E−05 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 1 ) )
18 min = 4 . 5 9E−10 med = 6 . 3 6E−08 avg = 1 . 0 3E−07 max= 7 . 9 9E−07 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 0 . 1 ) )
19 min = 1 . 6 7E−10 med = 7 . 7 7E−08 avg = 1 . 1 2E−07 max= 7 . 2 6E−07 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 0 . 1 ) )
20 min = 1 . 2 4E−12 med = 7 . 9 1E−09 avg = 1 . 2 7E−08 max= 5 . 3 9E−08 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 0 . 0 0 1 ) )
21 min = 7 . 3 5E−12 med = 1 . 4 1E−09 avg = 1 . 6 2E−09 max= 6 . 2 7E−09 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 0 . 0 0 1 ) )
22 min = 9 . 7 0E−12 med = 5 . 7 4E−09 avg = 8 . 7 6E−09 max= 3 . 9 6E−08 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 1 . 0 E−4) )
23 min = 3 . 3 4E−11 med = 1 . 0 3E−09 avg = 1 . 3 4E−09 max= 6 . 4 6E−09 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , l o g ( 1 . 0 E−4) )
24 min = 1 . 8 3E−07 med = 4 . 2 3E−06 avg = 7 . 1 8E−06 max= 4 . 5 2E−05 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1 , e =1.0E−5) )
25 min = 8 . 0 2E−08 med = 3 . 8 0E−04 avg = 1 . 9 6E−02 max= 3 . 9 0E−01 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1 , e =1.0E−5) )
26 min = 1 . 0 3E−09 med = 2 . 2 4E−08 avg = 2 . 7 5E−08 max= 1 . 2 1E−07 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1 , e =1.0E−4) )
27 min = 8 . 7 7E−11 med = 5 . 1 7E−09 avg = 7 . 3 3E−09 max= 3 . 6 2E−08 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1 , e =1.0E−4) )
28 min = 1 . 0 2E−09 med = 4 . 2 4E−07 avg = 6 . 7 0E−07 max= 3 . 3 3E−06 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 1 , e =1.0E−5) )
29 min = 1 . 5 6E−08 med = 5 . 5 8E−07 avg = 7 . 9 8E−07 max= 4 . 2 2E−06 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 1 , e =1.0E−5) )
30 min = 1 . 1 6E−10 med = 1 . 2 1E−08 avg = 1 . 5 6E−08 max= 6 . 1 8E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 1 , e =1.0E−4) )
31 min = 3 . 3 7E−11 med = 2 . 0 8E−09 avg = 3 . 1 0E−09 max= 2 . 1 3E−08 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 1 , e =1.0E−4) )
32 min = 1 . 0 1E−10 med = 1 . 1 2E−08 avg = 1 . 6 2E−08 max= 7 . 3 0E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−5) )
33 min = 1 . 8 4E−11 med = 4 . 3 1E−09 avg = 6 . 7 3E−09 max= 3 . 6 2E−08 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−5) )
34 min = 2 . 3 3E−12 med = 5 . 3 4E−09 avg = 7 . 8 8E−09 max= 3 . 4 1E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−4) )
35 min = 5 . 4 7E−11 med = 1 . 0 9E−09 avg = 1 . 4 4E−09 max= 6 . 1 1E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−4) )
36 min = 2 . 9 6E−11 med = 6 . 4 9E−09 avg = 9 . 1 5E−09 max= 4 . 8 6E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1.0E−4 , e =1.0E−5) )
37 min = 1 . 5 2E−12 med = 1 . 1 0E−09 avg = 1 . 4 0E−09 max= 6 . 2 6E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1.0E−4 , e =1.0E−5) )
38 min = 8 . 0 3E−12 med = 6 . 3 4E−09 avg = 8 . 7 8E−09 max= 5 . 1 2E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1.0E−4 , e =1.0E−4) )
39 min = 4 . 4 3E−11 med = 8 . 6 6E−10 avg = 1 . 3 4E−09 max= 6 . 8 2E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r , exp ( Ts =1.0E−4 , e =1.0E−4) )
40 min = 1 . 3 2E−11 med = 3 . 2 2E−09 avg = 4 . 1 0E−09 max= 2 . 8 5E−08 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =2) )
41 min = 5 . 4 1E−13 med = 2 . 4 9E−10 avg = 3 . 2 5E−10 max= 1 . 5 6E−09 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =2) )
42 min = 2 . 7 1E−11 med = 8 . 0 6E−10 avg = 1 . 2 3E−09 max= 5 . 9 0E−09 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =3) )
43 min = 1 . 5 5E−12 med = 1 . 1 4E−10 avg = 1 . 6 6E−10 max= 9 . 8 2E−10 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =3) )
44 min = 1 . 7 4E−13 med = 2 . 4 4E−11 avg = 3 . 6 0E−11 max= 2 . 4 3E−10 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =5) )
45 min = 4 . 9 2E−14 med = 4 . 9 3E−12 avg = 7 . 5 0E−12 max= 3 . 4 8E−11 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =5) )
46 min = 9 . 1 4E−38 med = 1 . 7 7E−11 avg = 9 . 4 9E−10 max= 1 . 1 2E−08 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=10) )
47 min = 1 . 9 4E−52 med = 6 . 2 7E−12 avg = 1 . 5 9E−10 max= 1 . 7 0E−09 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=10) )
48 min = 3 . 4 5E−12 med = 4 . 1 6E−10 avg = 5 . 4 6E−10 max= 2 . 2 1E−09 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=50) )
49 min = 5 . 7 8E−13 med = 5 . 6 9E−11 avg = 7 . 4 9E−11 max= 2 . 7 8E−10 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=50) )
50 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
51 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
52 min = 1 . 6 6E−164 med = 8 . 8 1E−92 avg = 2 . 0 8E−15 max= 2 . 0 8E−13 ES ( ( 1 0 / 1 0 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
53 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
54 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
55 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
56 min = 4 . 2 3E−132 med = 4 . 2 8E−77 avg = 1 . 5 7E−12 max= 1 . 5 6E−10 ES ( ( 1 0 / 1 0 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
57 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
58 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
59 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 , 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )

57 DEMOS
60 min = 1 . 6 8E−110 med = 1 . 5 6E−60 avg = 3 . 0 8E−11 max= 2 . 5 6E−09 ES ( ( 1 0 / 1 0 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
61 min = 4 . 8 4E−272 med = 1 . 3 5E−257 avg = 1 . 1 7E−251 max= 7 . 2 1E−250 ES ( ( 1 0 / 1 0 , 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
62 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
63 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
64 min = 2 . 5 8E−137 med = 1 . 8 8E−89 avg = 7 . 1 9E−40 max= 7 . 1 9E−38 ES ( ( 2 0 / 1 0 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
65 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
66 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
67 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
68 min = 1 . 1 5E−122 med = 7 . 9 2E−82 avg = 1 . 2 1E−32 max= 1 . 2 1E−30 ES ( ( 2 0 / 1 0 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
69 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
70 min = 1 . 5 1E−321 med = 2 . 0 6E−310 avg = 9 . 4 3E−301 max= 5 . 2 3E−299 ES ( ( 2 0 / 2 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
71 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
72 min = 1 . 7 3E−93 med = 3 . 5 9E−61 avg = 4 . 6 0E−25 max= 4 . 6 0E−23 ES ( ( 2 0 / 1 0 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
73 min = 1 . 1 8E−237 med = 5 . 9 2E−231 avg = 1 . 3 8E−225 max= 1 . 1 9E−223 ES ( ( 2 0 / 1 0 , 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
74 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=S p h e r e , r , p s =50 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F= 0 . 7 ) )
75 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 1 . 3 2E−161 max= 1 . 3 2E−159 DE−1( f=S p h e r e , r , p s =50 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F= 0 . 3 ) )
76 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=S p h e r e , r , p s =50 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F= 0 . 5 ) )
77 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=S p h e r e , r , p s =50 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F= 0 . 7 ) )
78 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=S p h e r e , r , p s =50 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F= 0 . 3 ) )
79 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=S p h e r e , r , p s =50 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F= 0 . 5 ) )
80 min = 3 . 9 4E−219 med = 8 . 3 3E−216 avg = 3 . 4 2E−213 max= 1 . 4 7E−211 DE−1( f=S p h e r e , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F= 0 . 7 ) )
81 min = 9 . 0 3E−275 med = 1 . 6 6E−269 avg = 5 . 7 6E−267 max= 1 . 9 9E−265 DE−1( f=S p h e r e , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F= 0 . 3 ) )
82 min = 1 . 4 6E−243 med = 5 . 7 3E−240 avg = 6 . 2 8E−237 max= 5 . 3 9E−235 DE−1( f=S p h e r e , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F= 0 . 5 ) )
83 min = 2 . 8 1E−204 med = 2 . 9 8E−200 avg = 1 . 1 3E−198 max= 4 . 9 4E−197 DE−1( f=S p h e r e , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F= 0 . 7 ) )
84 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=S p h e r e , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F= 0 . 3 ) )
85 min = 1 . 7 9E−265 med = 1 . 2 3E−259 avg = 1 . 4 1E−257 max= 3 . 9 5E−256 DE−1( f=S p h e r e , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F= 0 . 5 ) )
86 min = 1 . 7 3E−111 med = 1 . 2 6E−108 avg = 9 . 4 0E−108 max= 3 . 6 9E−106 DE−1( f=S p h e r e , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F= 0 . 7 ) )
87 min = 6 . 9 2E−140 med = 6 . 3 8E−136 avg = 8 . 4 8E−135 max= 2 . 6 6E−133 DE−1( f=S p h e r e , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F= 0 . 3 ) )
88 min = 3 . 6 8E−123 med = 6 . 7 4E−121 avg = 1 . 1 4E−119 max= 6 . 9 6E−118 DE−1( f=S p h e r e , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F= 0 . 5 ) )
89 min = 1 . 6 7E−103 med = 2 . 5 9E−101 avg = 2 . 0 8E−100 max= 6 . 1 6E−99 DE−1( f=S p h e r e , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 3 , F= 0 . 7 ) )
90 min = 4 . 0 8E−170 med = 8 . 8 4E−167 avg = 1 . 6 6E−165 max= 7 . 6 9E−164 DE−1( f=S p h e r e , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 7 , F= 0 . 3 ) )
91 min = 1 . 0 6E−133 med = 8 . 0 9E−131 avg = 5 . 8 9E−130 max= 1 . 0 8E−128 DE−1( f=S p h e r e , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 2 , c r = 0 . 5 , F= 0 . 5 ) )
92 min = 1 . 0 9E−05 med = 1 . 0 7 E01 avg = 1 . 6 8 E01 max= 7 . 1 4 E01 RW( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
93 min = 1 . 1 8 E00 med = 3 . 0 9 E01 avg = 3 . 7 8 E01 max= 9 . 1 4 E01 RW( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , r )
94 min = 9 . 3 6E−06 med = 8 . 1 6E−04 avg = 1 . 1 6E−03 max= 9 . 5 5E−03 RS ( f=S p h e r e , r )
95
96
97 SumCosLn
98
99 min = 9 . 3 0E−13 med = 6 . 3 7E−10 avg = 9 . 1 7E−10 max= 5 . 5 2E−09 HC( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
100 min = 8 . 6 0E−13 med = 9 . 5 8E−11 avg = 1 . 4 4E−10 max= 7 . 2 1E−10 HC( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
101 min = 3 . 1 4E−09 med = 1 . 2 8E−04 avg = 4 . 2 0E−02 max= 3 . 5 5E−01 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 1 ) )
102 min = 3 . 6 6E−02 med = 4 . 2 1E−01 avg = 4 . 1 6E−01 max= 7 . 0 6E−01 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 1 ) )
103 min = 3 . 1 4E−09 med = 1 . 2 8E−04 avg = 4 . 2 0E−02 max= 3 . 5 5E−01 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 1 ) )
104 min = 3 . 6 6E−02 med = 4 . 2 1E−01 avg = 4 . 1 6E−01 max= 7 . 0 6E−01 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 1 ) )
105 min = 4 . 2 1E−09 med = 1 . 0 2E−07 avg = 1 . 3 5E−07 max= 4 . 6 2E−07 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 0 . 1 ) )
106 min = 1 . 6 9E−09 med = 4 . 4 0E−07 avg = 5 . 2 8E−02 max= 4 . 1 2E−01 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 0 . 1 ) )
107 min = 1 . 3 3E−11 med = 1 . 1 2E−09 avg = 1 . 7 4E−09 max= 7 . 7 4E−09 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 0 . 0 0 1 ) )
108 min = 1 . 3 3E−11 med = 7 . 3 9E−10 avg = 1 . 1 2E−09 max= 6 . 2 7E−09 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 0 . 0 0 1 ) )
109 min = 2 . 2 5E−11 med = 7 . 3 4E−10 avg = 8 . 9 5E−10 max= 3 . 9 0E−09 SA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 1 . 0 E−4) )
110 min = 3 . 1 5E−13 med = 1 . 5 7E−10 avg = 2 . 0 9E−10 max= 1 . 1 8E−09 SA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , l o g ( 1 . 0 E−4) )
111 min = 3 . 9 8E−06 med = 2 . 2 5E−01 avg = 2 . 1 7E−01 max= 5 . 8 2E−01 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1 , e =1.0E−5) )
112 min = 1 . 1 9E−01 med = 4 . 7 1E−01 avg = 4 . 5 5E−01 max= 7 . 0 6E−01 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1 , e =1.0E−5) )
113 min = 9 . 7 9E−11 med = 5 . 9 6E−09 avg = 8 . 1 2E−09 max= 3 . 3 6E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1 , e =1.0E−4) )
114 min = 1 . 5 7E−10 med = 3 . 6 0E−09 avg = 4 . 6 3E−09 max= 2 . 2 9E−08 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1 , e =1.0E−4) )
115 min = 2 . 3 3E−08 med = 2 . 5 6E−06 avg = 1 . 0 1E−02 max= 3 . 1 1E−01 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 1 , e =1.0E−5) )
116 min = 2 . 3 4E−04 med = 3 . 8 9E−01 avg = 3 . 8 3E−01 max= 7 . 0 6E−01 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 1 , e =1.0E−5) )
117 min = 3 . 5 2E−11 med = 1 . 8 3E−09 avg = 2 . 8 9E−09 max= 1 . 3 4E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 1 , e =1.0E−4) )
118 min = 8 . 6 8E−13 med = 6 . 2 2E−10 avg = 8 . 8 5E−10 max= 4 . 6 0E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 1 , e =1.0E−4) )
119 min = 3 . 4 1E−11 med = 5 . 6 7E−09 avg = 7 . 2 3E−09 max= 2 . 8 5E−08 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−5) )
120 min = 1 . 5 4E−10 med = 5 . 7 2E−09 avg = 8 . 4 1E−09 max= 4 . 1 6E−08 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−5) )
121 min = 1 . 6 0E−12 med = 1 . 1 5E−09 avg = 1 . 4 2E−09 max= 7 . 0 1E−09 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−4) )
122 min = 6 . 9 0E−12 med = 1 . 6 9E−10 avg = 2 . 4 3E−10 max= 1 . 1 9E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts = 0 . 0 0 1 , e =1.0E−4) )
123 min = 8 . 9 7E−13 med = 1 . 1 2E−09 avg = 1 . 6 0E−09 max= 9 . 4 3E−09 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1.0E−4 , e =1.0E−5) )
124 min = 1 . 6 2E−11 med = 4 . 3 3E−10 avg = 7 . 9 1E−10 max= 3 . 4 4E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1.0E−4 , e =1.0E−5) )
125 min = 3 . 1 8E−12 med = 7 . 3 5E−10 avg = 9 . 8 8E−10 max= 3 . 8 7E−09 SQ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1.0E−4 , e =1.0E−4) )
126 min = 1 . 1 5E−12 med = 1 . 2 8E−10 avg = 1 . 7 4E−10 max= 1 . 0 8E−09 SQ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r , exp ( Ts =1.0E−4 , e =1.0E−4) )
127 min = 7 . 8 7E−13 med = 3 . 3 1E−10 avg = 5 . 0 0E−10 max= 3 . 0 8E−09 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =2) )
128 min = 6 . 8 1E−14 med = 4 . 5 3E−11 avg = 6 . 0 0E−11 max= 3 . 3 8E−10 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =2) )
129 min = 9 . 0 3E−13 med = 1 . 0 5E−10 avg = 1 . 2 8E−10 max= 4 . 8 5E−10 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =3) )
130 min = 2 . 2 0E−13 med = 1 . 3 0E−11 avg = 1 . 9 8E−11 max= 1 . 0 4E−10 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =3) )
131 min = 3 . 6 5E−14 med = 3 . 2 4E−12 avg = 6 . 0 0E−12 max= 4 . 2 2E−11 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =5) )

869
132 min = 3 . 4 4E−15 med = 4 . 9 2E−13 avg = 7 . 6 2E−13 max= 4 . 8 9E−12 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Tour ( k =5) )
133 min = 0 . 0 0 E00 med = 5 . 7 7E−12 avg = 1 . 4 8E−10 max= 3 . 2 9E−09 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=10) )
870
134 min = 0 . 0 0 E00 med = 1 . 4 0E−13 avg = 1 . 7 8E−11 max= 3 . 0 5E−10 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=10) )
135 min = 3 . 9 9E−12 med = 4 . 4 1E−11 avg = 6 . 2 9E−11 max= 3 . 0 8E−10 sEA ( o1=NM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=50) )
136 min = 1 . 0 1E−13 med = 6 . 9 2E−12 avg = 1 . 0 1E−11 max= 4 . 9 1E−11 sEA ( o1=UM( [ − 1 0 , 1 0 ] ˆ 2 ) , o2=WAC, f=f 1 , r , c r = 0 . 7 , mr = 0 . 3 , p s =100 , mps =100 , s e l =Trunc ( mps=50) )
137 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
138 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
139 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
140 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
141 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
142 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
143 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 1 . 3 8E−11 max= 9 . 4 7E−10 ES ( ( 1 0 / 1 0 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
144 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
145 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
146 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 2 , 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
147 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 3 . 6 5E−16 max= 1 . 8 1E−14 ES ( ( 1 0 / 1 0 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
148 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 , 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
149 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
150 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
151 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 + 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
152 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 5 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
153 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
154 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
155 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 + 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
156 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 1 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
157 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 2 0 0 ) , o1=SDNM( [ − 1 0 , 1 0 ] ˆ 2 ) , f=f 1 , r )
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), o1=SDNM([-10,10]^2), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), o1=SDNM([-10,10]^2), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=1.36E-06 med=2.57E-01 avg=2.57E-01 max=6.49E-01 RW(o1=NM([-10,10]^2), f=f1, r)
min=6.96E-02 med=4.77E-01 avg=4.58E-01 max=6.94E-01 RW(o1=UM([-10,10]^2), f=f1, r)
min=1.17E-06 med=9.95E-05 avg=1.40E-04 max=1.11E-03 RS(f=f1, r)

CosLnSum

min=1.03E-11 med=1.87E-09 avg=2.57E-09 max=1.95E-08 HC(o1=NM([-10,10]^2), f=f2, r)
min=1.22E-11 med=3.57E-10 avg=5.45E-10 max=2.91E-09 HC(o1=UM([-10,10]^2), f=f2, r)
min=2.62E-08 med=4.57E-06 avg=1.05E-01 max=8.73E-01 SA(o1=NM([-10,10]^2), f=f2, r, log(1))
min=3.32E-03 med=7.61E-01 avg=6.83E-01 max=9.38E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(1))
min=2.62E-08 med=4.57E-06 avg=1.05E-01 max=8.73E-01 SA(o1=NM([-10,10]^2), f=f2, r, log(1))
min=3.32E-03 med=7.61E-01 avg=6.83E-01 max=9.38E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(1))

57 DEMOS
min=1.90E-09 med=7.03E-08 avg=1.12E-07 max=4.19E-07 SA(o1=NM([-10,10]^2), f=f2, r, log(0.1))
min=6.36E-10 med=2.51E-07 avg=5.77E-02 max=8.51E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(0.1))
min=3.54E-12 med=2.50E-09 avg=3.58E-09 max=1.79E-08 SA(o1=NM([-10,10]^2), f=f2, r, log(0.001))
min=9.03E-12 med=8.60E-10 avg=1.33E-09 max=7.40E-09 SA(o1=UM([-10,10]^2), f=f2, r, log(0.001))
min=2.18E-12 med=2.13E-09 avg=2.68E-09 max=1.16E-08 SA(o1=NM([-10,10]^2), f=f2, r, log(1.0E-4))
min=5.52E-12 med=3.08E-10 avg=4.76E-10 max=3.75E-09 SA(o1=UM([-10,10]^2), f=f2, r, log(1.0E-4))
min=5.49E-06 med=4.79E-01 avg=4.20E-01 max=8.92E-01 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-5))
min=1.93E-01 med=7.90E-01 avg=7.37E-01 max=9.44E-01 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-5))
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
min=1.84E-10 med=8.98E-09 avg=1.60E-08 max=7.80E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-4))
min=6.02E-12 med=4.27E-09 avg=6.51E-09 max=3.26E-08 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-4))
min=9.49E-09 med=1.06E-06 avg=3.74E-02 max=8.14E-01 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=5.39E-06 med=7.39E-01 avg=6.27E-01 max=9.34E-01 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=9.42E-11 med=4.92E-09 avg=6.85E-09 max=3.47E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=5.32E-12 med=1.26E-09 avg=1.69E-09 max=8.53E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=9.65E-12 med=7.97E-09 avg=9.72E-09 max=4.83E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=6.07E-12 med=3.65E-09 avg=5.84E-09 max=3.74E-08 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=2.02E-11 med=2.47E-09 avg=4.28E-09 max=1.74E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=7.77E-12 med=4.60E-10 avg=7.18E-10 max=2.72E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=4.39E-11 med=2.47E-09 avg=3.69E-09 max=1.51E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=2.25E-12 med=7.74E-10 avg=9.91E-10 max=5.23E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.10E-12 med=2.50E-09 avg=3.70E-09 max=1.87E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.93E-11 med=4.51E-10 avg=6.45E-10 max=2.73E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.53E-11 med=1.10E-09 avg=1.53E-09 max=8.57E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=5.49E-13 med=1.00E-10 avg=1.72E-10 max=9.17E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.76E-11 med=4.65E-10 avg=5.69E-10 max=2.56E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=3.90E-13 med=3.68E-11 avg=5.57E-11 max=2.60E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.36E-13 med=1.29E-11 avg=1.88E-11 max=1.06E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.29E-14 med=1.54E-12 avg=1.90E-12 max=7.51E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=0.00E00 med=3.82E-12 avg=6.62E-10 max=1.01E-08 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=0.00E00 med=1.00E-12 avg=8.91E-11 max=2.07E-09 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=1.69E-12 med=1.37E-10 avg=2.18E-10 max=8.50E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=8.48E-14 med=1.96E-11 avg=2.79E-11 max=1.82E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=3.53E-11 max=3.53E-09 ES((10/10+100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=3.13E-12 max=3.13E-10 ES((10/10+200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), o1=SDNM([-10,10]^2), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))

min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=4.41E-06 med=5.13E-01 avg=4.73E-01 max=9.08E-01 RW(o1=NM([-10,10]^2), f=f2, r)
min=1.69E-01 med=7.72E-01 avg=7.32E-01 max=9.41E-01 RW(o1=UM([-10,10]^2), f=f2, r)
min=3.74E-06 med=2.52E-04 avg=4.52E-04 max=3.88E-03 RS(f=f2, r)

MaxSqr

min=2.55E-13 med=4.21E-11 avg=5.83E-11 max=2.85E-10 HC(o1=NM([-10,10]^2), f=f3, r)
min=7.12E-14 med=8.55E-12 avg=1.15E-11 max=5.25E-11 HC(o1=UM([-10,10]^2), f=f3, r)
min=7.45E-08 med=1.70E-04 avg=5.57E-03 max=7.33E-02 SA(o1=NM([-10,10]^2), f=f3, r, log(1))
min=2.62E-03 med=1.61E-01 avg=1.84E-01 max=6.41E-01 SA(o1=UM([-10,10]^2), f=f3, r, log(1))
min=7.45E-08 med=1.70E-04 avg=5.57E-03 max=7.33E-02 SA(o1=NM([-10,10]^2), f=f3, r, log(1))
min=2.62E-03 med=1.61E-01 avg=1.84E-01 max=6.41E-01 SA(o1=UM([-10,10]^2), f=f3, r, log(1))
min=1.47E-10 med=7.32E-08 avg=1.27E-07 max=7.98E-07 SA(o1=NM([-10,10]^2), f=f3, r, log(0.1))
min=5.13E-09 med=4.80E-03 avg=7.63E-03 max=3.21E-02 SA(o1=UM([-10,10]^2), f=f3, r, log(0.1))
min=1.26E-13 med=6.51E-10 avg=9.54E-10 max=6.62E-09 SA(o1=NM([-10,10]^2), f=f3, r, log(0.001))
min=4.57E-12 med=6.73E-10 avg=8.88E-10 max=4.15E-09 SA(o1=UM([-10,10]^2), f=f3, r, log(0.001))
min=4.10E-13 med=1.12E-10 avg=1.74E-10 max=7.75E-10 SA(o1=NM([-10,10]^2), f=f3, r, log(1.0E-4))
min=5.69E-14 med=7.47E-11 avg=1.12E-10 max=7.10E-10 SA(o1=UM([-10,10]^2), f=f3, r, log(1.0E-4))
min=7.03E-08 med=6.65E-02 avg=8.79E-02 max=4.36E-01 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-5))
min=1.36E-02 med=1.82E-01 avg=2.44E-01 max=7.49E-01 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-5))
min=3.21E-11 med=3.90E-09 avg=5.54E-09 max=2.48E-08 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-4))
min=2.13E-11 med=3.26E-09 avg=5.33E-09 max=3.80E-08 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-4))
min=1.55E-08 med=3.43E-06 avg=4.59E-04 max=8.41E-03 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=9.65E-04 med=1.45E-01 avg=1.50E-01 max=4.74E-01 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=4.80E-13 med=6.25E-10 avg=7.75E-10 max=3.89E-09 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=1.33E-11 med=4.11E-10 avg=5.69E-10 max=2.83E-09 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=8.79E-12 med=3.89E-09 avg=5.83E-09 max=3.44E-08 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=1.05E-10 med=4.46E-09 avg=9.31E-09 max=1.00E-07 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=1.70E-13 med=9.98E-11 avg=1.46E-10 max=6.55E-10 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=2.45E-13 med=1.50E-11 avg=2.25E-11 max=1.42E-10 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=1.28E-12 med=5.37E-10 avg=6.63E-10 max=3.63E-09 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=4.20E-12 med=4.60E-10 avg=6.85E-10 max=3.89E-09 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=8.34E-13 med=4.95E-11 avg=9.03E-11 max=7.97E-10 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=2.92E-13 med=1.38E-11 avg=1.68E-11 max=9.63E-11 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.94E-13 med=2.34E-11 avg=3.61E-11 max=2.07E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=3.80E-14 med=2.39E-12 avg=3.25E-12 max=1.44E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=7.11E-14 med=6.81E-12 avg=9.05E-12 max=5.08E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.63E-14 med=8.74E-13 avg=1.27E-12 max=8.12E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=5.34E-16 med=2.03E-13 avg=3.57E-13 max=2.47E-12 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=2.16E-17 med=4.03E-14 avg=5.68E-14 max=3.28E-13 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=6.17E-55 med=3.59E-13 avg=1.57E-11 max=1.75E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=1.40E-37 med=8.14E-15 avg=1.81E-12 max=4.51E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.43E-14 med=3.68E-12 avg=4.99E-12 max=1.98E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=2.57E-15 med=4.66E-13 avg=6.44E-13 max=2.69E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f3, r)
min=3.42E-176 med=2.70E-90 avg=5.23E-14 max=5.22E-12 ES((10/10+50), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f3, r)
min=7.98E-129 med=7.66E-76 avg=1.98E-15 max=1.45E-13 ES((10/10+100), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f3, r)
min=2.10E-110 med=8.99E-60 avg=2.80E-15 max=2.52E-13 ES((10/10+200), o1=SDNM([-10,10]^2), f=f3, r)
min=1.32E-271 med=1.78E-258 avg=4.92E-245 max=4.92E-243 ES((10/10,200), o1=SDNM([-10,10]^2), f=f3, r)

min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f3, r)
min=5.52E-153 med=6.29E-90 avg=3.92E-33 max=3.92E-31 ES((20/10+50), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f3, r)
min=3.62E-112 med=1.99E-81 avg=1.08E-18 max=1.08E-16 ES((20/10+100), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f3, r)
min=3.85E-322 med=2.38E-308 avg=7.45E-300 max=6.98E-298 ES((20/2+200), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f3, r)
min=1.46E-93 med=3.52E-64 avg=2.84E-28 max=2.84E-26 ES((20/10+200), o1=SDNM([-10,10]^2), f=f3, r)
min=2.14E-237 med=2.05E-231 avg=4.20E-226 max=3.63E-224 ES((20/10,200), o1=SDNM([-10,10]^2), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=3.68E-145 max=3.68E-143 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=2.74E-215 med=4.06E-211 avg=8.01E-209 max=3.03E-207 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=2.75E-276 med=2.91E-269 avg=2.73E-265 max=1.76E-263 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=6.22E-241 med=9.41E-238 avg=1.86E-235 max=5.05E-234 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=1.40E-201 med=4.26E-197 avg=9.18E-196 max=2.04E-194 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=1.92E-321 max=1.04E-319 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=1.92E-259 med=1.68E-255 avg=9.92E-253 max=8.59E-251 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=1.44E-109 med=2.04E-107 avg=1.03E-106 max=2.10E-105 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
min=8.25E-140 med=4.68E-137 avg=2.32E-135 max=7.72E-134 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
min=1.72E-123 med=7.18E-121 avg=5.00E-120 max=7.40E-119 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
min=2.83E-103 med=1.67E-100 avg=7.00E-100 max=8.58E-99 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
min=1.06E-167 med=1.51E-164 avg=6.79E-163 max=3.11E-161 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
min=2.43E-132 med=1.21E-129 avg=1.11E-128 max=5.49E-127 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
min=9.72E-08 med=7.52E-02 avg=1.18E-01 max=5.20E-01 RW(o1=NM([-10,10]^2), f=f3, r)
min=9.30E-03 med=1.94E-01 avg=2.60E-01 max=7.11E-01 RW(o1=UM([-10,10]^2), f=f3, r)
min=8.41E-08 med=6.32E-06 avg=9.18E-06 max=7.30E-05 RS(f=f3, r)

Dimensions: 5


Sphere

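The Sphere rows below use the listing's default benchmark, the classic sphere function f(x) = Σ x_i², minimized here over [-10,10]^5. As a rough sketch (function names and the random-sampling baseline are illustrative assumptions, not the book's actual code), the benchmark and an RS-style baseline might look like:

```python
import random

def sphere(x):
    """Classic sphere benchmark: f(x) = sum of x_i^2, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_sampling(f, dim, low, high, steps, rng):
    """Crude uniform random sampling over [low, high]^dim (loosely akin to the RS rows)."""
    best = float("inf")
    for _ in range(steps):
        x = [rng.uniform(low, high) for _ in range(dim)]
        best = min(best, f(x))
    return best

if __name__ == "__main__":
    rng = random.Random(1)
    print(sphere([0.0] * 5))  # 0.0 at the optimum
    print(random_sampling(sphere, 5, -10.0, 10.0, 1000, rng))
```

The end-of-run values reported below (around 1E-6 for hill climbing, larger for the annealing variants) are the objective values reached by the respective optimizers on this function.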
min=1.20E-06 med=1.24E-05 avg=1.26E-05 max=2.38E-05 HC(o1=NM([-10,10]^5), r)
min=3.85E-07 med=1.79E-06 avg=1.94E-06 max=3.64E-06 HC(o1=UM([-10,10]^5), r)
min=2.20E-04 med=1.59E-03 avg=1.72E-03 max=4.94E-03 SA(o1=NM([-10,10]^5), r, log(1))
min=3.28E-04 med=5.23E-03 avg=6.27E-03 max=2.57E-02 SA(o1=UM([-10,10]^5), r, log(1))
min=2.20E-04 med=1.59E-03 avg=1.72E-03 max=4.94E-03 SA(o1=NM([-10,10]^5), r, log(1))
min=3.28E-04 med=5.23E-03 avg=6.27E-03 max=2.57E-02 SA(o1=UM([-10,10]^5), r, log(1))
min=4.21E-05 med=1.31E-04 avg=1.53E-04 max=3.26E-04 SA(o1=NM([-10,10]^5), r, log(0.1))
min=3.06E-05 med=1.73E-04 avg=1.82E-04 max=5.07E-04 SA(o1=UM([-10,10]^5), r, log(0.1))
min=4.14E-07 med=1.25E-05 avg=1.33E-05 max=2.82E-05 SA(o1=NM([-10,10]^5), r, log(0.001))
min=5.58E-07 med=2.53E-06 avg=2.52E-06 max=5.87E-06 SA(o1=UM([-10,10]^5), r, log(0.001))
min=1.47E-06 med=1.10E-05 avg=1.14E-05 max=2.30E-05 SA(o1=NM([-10,10]^5), r, log(1.0E-4))
min=3.26E-07 med=1.79E-06 avg=1.84E-06 max=4.10E-06 SA(o1=UM([-10,10]^5), r, log(1.0E-4))
min=1.05E-03 med=1.81E-02 avg=2.26E-02 max=5.89E-02 SQ(o1=NM([-10,10]^5), r, exp(Ts=1, e=1.0E-5))
min=1.47E-03 med=1.65E-01 avg=2.11E-01 max=7.56E-01 SQ(o1=UM([-10,10]^5), r, exp(Ts=1, e=1.0E-5))
min=1.93E-06 med=2.40E-05 avg=2.44E-05 max=4.55E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1, e=1.0E-4))
min=1.24E-06 med=5.29E-06 avg=5.24E-06 max=1.16E-05 SQ(o1=UM([-10,10]^5), r, exp(Ts=1, e=1.0E-4))
min=1.79E-04 med=7.35E-04 avg=8.43E-04 max=2.25E-03 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-5))
min=2.93E-04 med=2.04E-03 avg=2.39E-03 max=7.23E-03 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-5))
min=3.58E-06 med=1.60E-05 avg=1.67E-05 max=3.15E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-4))
min=4.27E-07 med=3.02E-06 avg=2.97E-06 max=5.89E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-4))
min=3.21E-06 med=2.14E-05 avg=2.23E-05 max=4.19E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-5))
min=1.20E-06 med=9.34E-06 avg=9.27E-06 max=1.79E-05 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-5))
min=1.63E-06 med=1.34E-05 avg=1.31E-05 max=2.75E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-4))
min=2.59E-07 med=2.05E-06 avg=2.08E-06 max=3.98E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-4))
min=2.14E-06 med=1.28E-05 avg=1.25E-05 max=2.48E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-5))
min=3.57E-07 med=2.13E-06 avg=2.20E-06 max=4.00E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-5))
min=2.57E-06 med=1.14E-05 avg=1.20E-05 max=2.62E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.52E-07 med=1.69E-06 avg=1.81E-06 max=3.38E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-4))
min=9.94E-07 med=5.42E-06 avg=5.77E-06 max=1.06E-05 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=8.81E-08 med=5.34E-07 avg=5.68E-07 max=1.25E-06 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=3.12E-07 med=1.70E-06 avg=1.81E-06 max=3.92E-06 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=4.04E-08 med=1.89E-07 avg=1.95E-07 max=4.06E-07 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))

396 min=1.75E-08 med=1.36E-07 avg=1.44E-07 max=3.36E-07 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
397 min=8.33E-10 med=1.64E-08 avg=1.63E-08 max=3.51E-08 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
398 min=8.50E-08 med=6.45E-06 avg=7.57E-06 max=2.77E-05 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
399 min=2.56E-08 med=7.81E-07 avg=9.70E-07 max=3.98E-06 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
400 min=2.87E-07 med=1.10E-06 avg=1.14E-06 max=2.38E-06 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
401 min=7.20E-09 med=1.05E-07 avg=1.15E-07 max=2.72E-07 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
402 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), r)
403 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), r)
404 min=3.92E-262 med=1.39E-243 avg=1.32E-215 max=1.32E-213 ES((10/10+50), r)
405 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), r)
406 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), r)
407 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), r)
408 min=1.37E-203 med=1.80E-187 avg=3.94E-166 max=3.79E-164 ES((10/10+100), r)
409 min=2.64E-298 med=1.55E-290 avg=2.24E-285 max=2.11E-283 ES((10/10,100), r)
410 min=1.19E-223 med=8.42E-217 avg=2.61E-210 max=1.71E-208 ES((10/2+200), r)
411 min=1.81E-249 med=7.40E-240 avg=2.38E-233 max=2.05E-231 ES((10/2,200), r)
412 min=1.74E-137 med=2.02E-124 avg=2.68E-105 max=2.61E-103 ES((10/10+200), r)
413 min=4.56E-179 med=7.87E-173 avg=5.69E-170 max=2.04E-168 ES((10/10,200), r)
414 min=1.97E-310 med=1.00E-301 avg=5.60E-290 max=5.60E-288 ES((20/2+50), r)
415 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), r)
416 min=1.13E-169 med=1.73E-158 avg=3.83E-146 max=3.80E-144 ES((20/10+50), r)
417 min=6.23E-257 med=3.30E-251 avg=3.14E-245 max=1.22E-243 ES((20/10,50), r)
418 min=2.50E-258 med=1.58E-249 avg=5.53E-242 max=4.50E-240 ES((20/2+100), r)
419 min=0.00E00 med=1.50E-316 avg=2.86E-311 max=2.37E-309 ES((20/2,100), r)
420 min=4.53E-146 med=4.30E-138 avg=1.19E-130 max=5.91E-129 ES((20/10+100), r)
421 min=8.39E-227 med=2.67E-223 avg=5.05E-220 max=2.33E-218 ES((20/10,100), r)
422 min=8.75E-183 med=2.88E-177 avg=2.26E-173 max=1.85E-171 ES((20/2+200), r)
423 min=3.85E-211 med=1.08E-205 avg=6.32E-203 max=2.90E-201 ES((20/2,200), r)
424 min=3.56E-110 med=2.28E-102 avg=2.25E-97 max=2.08E-95 ES((20/10+200), r)
425 min=6.80E-151 med=2.91E-147 avg=5.99E-146 max=1.25E-144 ES((20/10,200), r)
426 min=1.08E-192 med=1.41E-186 avg=1.69E-36 max=1.69E-34 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
427 min=5.14E-09 med=2.73E-02 avg=1.56E-01 max=2.73E00 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
428 min=9.27E-209 med=7.43E-203 avg=1.49E-09 max=1.49E-07 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
429 min=9.94E-195 med=1.05E-189 avg=2.28E-187 max=6.99E-186 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
430 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
431 min=6.90E-282 med=1.14E-276 avg=2.51E-273 max=1.10E-271 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
432 min=6.46E-97 med=4.08E-93 avg=4.86E-92 max=1.27E-90 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
433 min=2.31E-35 med=1.96E-08 avg=1.48E-03 max=7.69E-02 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
434 min=6.68E-105 med=6.09E-102 avg=5.40E-101 max=1.01E-99 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
435 min=2.80E-97 med=2.43E-95 avg=1.78E-94 max=1.99E-93 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
436 min=3.73E-193 med=9.35E-187 avg=1.71E-184 max=4.09E-183 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
437 min=2.25E-143 med=9.42E-140 avg=1.69E-138 max=3.38E-137 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
438 min=7.59E-48 med=4.53E-46 avg=2.13E-45 max=4.92E-44 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
439 min=1.36E-58 med=2.03E-56 avg=7.80E-22 max=7.80E-20 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
440 min=4.12E-52 med=1.26E-50 avg=5.02E-50 max=1.12E-48 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
441 min=9.27E-49 med=1.23E-47 avg=3.03E-47 max=3.31E-46 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
442 min=1.51E-96 med=6.85E-94 avg=8.07E-93 max=2.31E-91 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
443 min=5.87E-72 med=8.49E-70 avg=2.34E-69 max=3.81E-68 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
444 min=2.15E00 med=6.91E01 avg=7.64E01 max=1.81E02 RW(o1=NM([-10,10]^5), r)
445 min=2.43E01 med=1.22E02 avg=1.23E02 max=2.44E02 RW(o1=UM([-10,10]^5), r)
446 min=4.67E-01 med=1.84E00 avg=1.94E00 max=3.74E00 RS(r)
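The DE-1 rows above denote a classic differential evolution scheme with a 1/rand mutation base, binomial (bin) or exponential (exp) crossover, population size ps, crossover rate cr, and differential weight F. The following is a minimal sketch of the DE/rand/1/bin variant; the parameter names ps, cr, and F follow the table, while everything else (bounds handling, stopping rule, the role of r and op3 in the book's framework) is an illustrative assumption, not a reproduction of the book's implementation.

```python
import random

def de_rand_1_bin(f, dim=5, bounds=(-10.0, 10.0), ps=50, F=0.5,
                  cr=0.5, generations=200, seed=1):
    """Sketch of DE/rand/1/bin: for each target vector, build a trial
    vector from three distinct random population members and keep the
    better of target and trial (greedy one-to-one replacement)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(ps)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(ps):
            a, b, c = rng.sample([j for j in range(ps) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < cr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(hi, max(lo, v)))  # clamp to the box
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(ps), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a smooth unimodal function such as the 5-dimensional sphere, this sketch converges quickly, which matches the qualitative picture of the DE-1 rows above.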
447
448
449 SumCosLn
450
451 min=7.09E-08 med=5.64E-07 avg=5.91E-07 max=1.37E-06 HC(o1=NM([-10,10]^5), f=f1, r)
452 min=1.19E-08 med=9.04E-08 avg=9.24E-08 max=1.90E-07 HC(o1=UM([-10,10]^5), f=f1, r)
453 min=5.34E-03 med=2.59E-01 avg=2.66E-01 max=5.52E-01 SA(o1=NM([-10,10]^5), f=f1, r, log(1))
454 min=2.14E-01 med=4.90E-01 avg=4.80E-01 max=7.07E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(1))
455 min=5.34E-03 med=2.59E-01 avg=2.66E-01 max=5.52E-01 SA(o1=NM([-10,10]^5), f=f1, r, log(1))

57 DEMOS
456 min=2.14E-01 med=4.90E-01 avg=4.80E-01 max=7.07E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(1))
457 min=5.23E-05 med=4.73E-04 avg=6.15E-04 max=2.92E-03 SA(o1=NM([-10,10]^5), f=f1, r, log(0.1))
458 min=1.10E-02 med=3.06E-01 avg=2.79E-01 max=6.82E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(0.1))
459 min=2.36E-07 med=1.89E-06 avg=2.01E-06 max=4.66E-06 SA(o1=NM([-10,10]^5), f=f1, r, log(0.001))
460 min=2.61E-07 med=1.63E-06 avg=1.62E-06 max=3.23E-06 SA(o1=UM([-10,10]^5), f=f1, r, log(0.001))
461 min=1.12E-07 med=6.97E-07 avg=7.29E-07 max=1.88E-06 SA(o1=NM([-10,10]^5), f=f1, r, log(1.0E-4))
462 min=2.08E-08 med=1.82E-07 avg=1.84E-07 max=3.60E-07 SA(o1=UM([-10,10]^5), f=f1, r, log(1.0E-4))
463 min=1.16E-01 med=3.67E-01 avg=3.59E-01 max=5.90E-01 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-5))
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
464 min=2.58E-01 med=5.08E-01 avg=4.96E-01 max=7.29E-01 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-5))
465 min=5.37E-07 med=3.93E-06 avg=4.00E-06 max=8.69E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-4))
466 min=4.86E-07 med=2.35E-06 avg=2.66E-06 max=7.77E-06 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-4))
467 min=1.03E-02 med=1.81E-01 avg=2.00E-01 max=5.44E-01 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-5))
468 min=1.83E-01 med=4.91E-01 avg=4.71E-01 max=7.25E-01 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-5))
469 min=1.36E-07 med=1.31E-06 avg=1.38E-06 max=2.89E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-4))
470 min=6.36E-08 med=3.83E-07 avg=3.83E-07 max=7.91E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-4))
471 min=1.26E-06 med=9.50E-06 avg=9.76E-06 max=2.11E-05 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-5))
472 min=2.52E-06 med=1.16E-05 avg=1.48E-05 max=5.73E-05 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-5))
473 min=1.25E-07 med=7.24E-07 avg=7.09E-07 max=1.58E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-4))
474 min=2.02E-08 med=1.22E-07 avg=1.19E-07 max=2.84E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-4))
475 min=4.07E-07 med=1.30E-06 avg=1.36E-06 max=3.60E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
476 min=6.60E-08 med=7.91E-07 avg=8.50E-07 max=1.58E-06 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
477 min=4.55E-08 med=6.77E-07 avg=6.63E-07 max=1.31E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
478 min=1.06E-08 med=1.05E-07 avg=1.07E-07 max=2.16E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
479 min=7.35E-08 med=2.98E-07 avg=3.00E-07 max=5.58E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
480 min=7.08E-09 med=3.02E-08 avg=3.08E-08 max=5.69E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
481 min=1.36E-08 med=8.74E-08 avg=8.95E-08 max=1.77E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
482 min=6.52E-10 med=9.08E-09 avg=9.62E-09 max=2.46E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
483 min=1.01E-09 med=6.66E-09 avg=7.17E-09 max=1.97E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
484 min=9.14E-11 med=7.68E-10 avg=7.96E-10 max=1.81E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
485 min=4.17E-09 med=2.43E-07 avg=2.72E-07 max=8.29E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
486 min=1.23E-09 med=4.27E-08 avg=5.06E-08 max=1.59E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
487 min=7.80E-09 med=5.38E-08 avg=5.72E-08 max=1.12E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
488 min=8.89E-10 med=6.25E-09 avg=6.44E-09 max=1.22E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
489 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f1, r)
490 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f1, r)
491 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f1, r)
492 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f1, r)
493 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f1, r)
494 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f1, r)
495 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f1, r)
496 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f1, r)
497 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f1, r)
498 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f1, r)
499 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f1, r)
500 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f1, r)
501 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f1, r)
502 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f1, r)
503 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f1, r)
504 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f1, r)
505 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f1, r)
506 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f1, r)
507 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f1, r)
508 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f1, r)
509 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f1, r)
510 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f1, r)
511 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f1, r)
512 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f1, r)
513 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
514 min=2.17E-13 med=3.33E-04 avg=1.61E-03 max=2.28E-02 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
515 min=0.00E00 med=0.00E00 avg=2.24E-12 max=2.24E-10 DE-1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
516 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
517 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
518 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
519 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
520 min=0.00E00 med=3.14E-10 avg=6.53E-06 max=4.03E-04 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
521 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
522 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
523 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
524 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
525 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
526 min=0.00E00 med=0.00E00 avg=5.00E-18 max=5.00E-16 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))

527 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
528 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
529 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
530 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
531 min=5.06E-02 med=3.65E-01 avg=3.66E-01 max=6.38E-01 RW(o1=NM([-10,10]^5), f=f1, r)
532 min=2.33E-01 med=4.99E-01 avg=4.87E-01 max=7.16E-01 RW(o1=UM([-10,10]^5), f=f1, r)
533 min=1.65E-02 med=4.65E-02 avg=4.70E-02 max=8.39E-02 RS(f=f1, r)
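The SA rows with log(...) arguments denote simulated annealing runs with a logarithmic temperature schedule started from the given initial value. The sketch below is one plausible reading of such a configuration, using the Boltzmann-style schedule T(t) = T0 / ln(t + 2) and a normal-mutation neighbor analogous to the NM([-10,10]^5) operator; the book's exact schedule and operator wiring may differ, so treat every detail here as an assumption.

```python
import math
import random

def nm_neighbor(sigma=0.5, lo=-10.0, hi=10.0):
    """Normal-mutation operator: Gaussian perturbation of each
    coordinate, clamped to the search box (assumed, not the book's NM)."""
    def step(x, rng):
        return [min(hi, max(lo, xi + rng.gauss(0.0, sigma))) for xi in x]
    return step

def simulated_annealing(f, x0, neighbor, T0=1.0, steps=10000, seed=1):
    """Simulated annealing with the logarithmic schedule
    T(t) = T0 / ln(t + 2); tracks the best solution ever visited."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for t in range(steps):
        T = T0 / math.log(t + 2)
        y = neighbor(x, rng)
        fy = f(y)
        # accept improvements always, worsenings with Boltzmann probability
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest
```

A smaller T0 makes the run behave more like the hill climber (HC) rows, while a larger T0 keeps more random-walk behavior for longer, which is consistent with the spread between the log(1) and log(1.0E-4) rows above.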
534
535
536 CosLnSum
537
538 min=4.38E-06 med=8.52E-01 avg=5.62E-01 max=8.52E-01 HC(o1=NM([-10,10]^5), f=f2, r)
539 min=2.30E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 HC(o1=UM([-10,10]^5), f=f2, r)
540 min=5.28E-03 med=9.22E-01 avg=8.72E-01 max=9.89E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1))
541 min=8.16E-01 med=9.73E-01 avg=9.66E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1))
542 min=5.28E-03 med=9.22E-01 avg=8.72E-01 max=9.89E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1))
543 min=8.16E-01 med=9.73E-01 avg=9.66E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1))
544 min=6.30E-05 med=8.60E-01 avg=5.76E-01 max=9.45E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(0.1))
545 min=1.52E-04 med=9.57E-01 avg=8.33E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(0.1))
546 min=4.00E-06 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(0.001))
547 min=5.53E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(0.001))
548 min=2.42E-06 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1.0E-4))
549 min=1.54E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1.0E-4))
550 min=4.07E-01 med=9.30E-01 avg=9.13E-01 max=9.78E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-5))
551 min=8.58E-01 med=9.73E-01 avg=9.67E-01 max=9.97E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-5))
552 min=4.77E-06 med=8.52E-01 avg=6.30E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-4))
553 min=2.64E-06 med=8.52E-01 avg=6.05E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-4))
554 min=7.24E-04 med=9.27E-01 avg=8.53E-01 max=9.87E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-5))
555 min=7.77E-01 med=9.72E-01 avg=9.64E-01 max=9.97E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-5))
556 min=3.82E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-4))
557 min=8.19E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-4))
558 min=5.46E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-5))
559 min=2.61E-06 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-5))
560 min=1.37E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-4))
561 min=6.34E-07 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-4))
562 min=9.08E-07 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
563 min=3.52E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
564 min=2.67E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
565 min=3.26E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
566 min=3.56E-07 med=5.02E-06 avg=5.05E-06 max=1.09E-05 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
567 min=2.78E-08 med=4.73E-07 avg=4.87E-07 max=1.04E-06 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
568 min=3.65E-07 med=1.45E-06 avg=1.56E-06 max=3.77E-06 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
569 min=1.78E-08 med=1.61E-07 avg=1.71E-07 max=4.17E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
570 min=1.88E-08 med=9.93E-08 avg=1.14E-07 max=3.53E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
571 min=3.02E-09 med=1.32E-08 avg=1.70E-02 max=8.52E-01 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
572 min=2.53E-07 med=5.00E-06 avg=1.70E-02 max=8.52E-01 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
573 min=8.89E-09 med=7.65E-07 avg=1.91E-02 max=8.52E-01 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
574 min=1.95E-07 med=8.59E-07 avg=8.95E-07 max=1.81E-06 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
575 min=2.06E-08 med=9.71E-08 avg=1.04E-07 max=2.33E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
576 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f2, r)
577 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f2, r)
578 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f2, r)
579 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f2, r)
580 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f2, r)
581 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f2, r)
582 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f2, r)
583 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f2, r)
584 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f2, r)
585 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f2, r)
586 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f2, r)

587 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f2, r)
588 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f2, r)
589 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f2, r)
590 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f2, r)
591 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f2, r)
592 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f2, r)
593 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f2, r)
594 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f2, r)
595 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f2, r)
596 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f2, r)
597 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f2, r)
598 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f2, r)
599 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f2, r)
600 min=0.00E00 med=8.61E-03 avg=2.99E-01 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
601 min=3.86E-10 med=6.31E-04 avg=6.61E-02 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
602 min=0.00E00 med=0.00E00 avg=7.68E-02 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
603 min=0.00E00 med=0.00E00 avg=1.67E-01 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
604 min=0.00E00 med=0.00E00 avg=1.70E-02 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
605 min=0.00E00 med=0.00E00 avg=8.52E-03 max=8.52E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
606 min=0.00E00 med=0.00E00 avg=7.64E-02 max=8.52E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
607 min=0.00E00 med=0.00E00 avg=8.53E-03 max=8.52E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
608 min=0.00E00 med=0.00E00 avg=4.15E-04 max=4.15E-02 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
609 min=0.00E00 med=0.00E00 avg=5.09E-02 max=8.52E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
610 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
611 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
612 min=0.00E00 med=0.00E00 avg=1.17E-02 max=8.52E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
613 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
614 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
615 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
616 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
617 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
618 min=3.33E-01 med=9.34E-01 avg=9.06E-01 max=9.75E-01 RW(o1=NM([-10,10]^5), f=f2, r)
619 min=8.45E-01 med=9.75E-01 avg=9.69E-01 max=9.98E-01 RW(o1=UM([-10,10]^5), f=f2, r)
620 min=1.41E-01 med=3.34E-01 avg=3.30E-01 max=5.11E-01 RS(f=f2, r)
621
622
623 MaxSqr
624
625 min=6.93E-09 med=6.00E-08 avg=5.75E-08 max=1.15E-07 HC(o1=NM([-10,10]^5), f=f3, r)
626 min=2.54E-09 med=9.30E-09 avg=9.43E-09 max=1.75E-08 HC(o1=UM([-10,10]^5), f=f3, r)
627 min=5.03E-03 med=6.25E-02 avg=8.35E-02 max=3.55E-01 SA(o1=NM([-10,10]^5), f=f3, r, log(1))
628 min=7.92E-02 med=3.85E-01 avg=3.95E-01 max=8.43E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(1))
629 min=5.03E-03 med=6.25E-02 avg=8.35E-02 max=3.55E-01 SA(o1=NM([-10,10]^5), f=f3, r, log(1))
630 min=7.92E-02 med=3.85E-01 avg=3.95E-01 max=8.43E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(1))
631 min=6.55E-05 med=7.02E-04 avg=8.35E-04 max=3.83E-03 SA(o1=NM([-10,10]^5), f=f3, r, log(0.1))
632 min=8.30E-03 med=6.63E-02 avg=7.16E-02 max=2.18E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(0.1))
633 min=3.12E-07 med=1.44E-06 avg=1.46E-06 max=3.00E-06 SA(o1=NM([-10,10]^5), f=f3, r, log(0.001))
634 min=2.19E-07 med=2.06E-06 avg=2.40E-06 max=6.79E-06 SA(o1=UM([-10,10]^5), f=f3, r, log(0.001))
635 min=4.41E-08 med=1.92E-07 avg=2.07E-07 max=4.13E-07 SA(o1=NM([-10,10]^5), f=f3, r, log(1.0E-4))
636 min=1.58E-08 med=1.40E-07 avg=1.48E-07 max=3.85E-07 SA(o1=UM([-10,10]^5), f=f3, r, log(1.0E-4))
637 min=1.93E-02 med=2.63E-01 avg=2.72E-01 max=6.09E-01 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-5))
638 min=7.70E-02 med=5.27E-01 avg=4.87E-01 max=8.23E-01 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-5))
639 min=4.08E-07 med=2.66E-06 avg=2.75E-06 max=5.81E-06 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-4))
640 min=3.68E-07 med=5.00E-06 avg=5.54E-06 max=1.97E-05 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-4))
641 min=1.61E-03 med=2.65E-02 avg=3.80E-02 max=1.75E-01 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-5))
642 min=3.66E-02 med=3.09E-01 avg=3.36E-01 max=7.00E-01 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-5))
643 min=5.18E-08 med=4.03E-07 avg=3.96E-07 max=9.86E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-4))
644 min=4.77E-08 med=2.40E-07 avg=2.68E-07 max=7.34E-07 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-4))
645 min=1.43E-06 med=9.02E-06 avg=9.89E-06 max=2.40E-05 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-5))
646 min=2.43E-06 med=3.15E-05 avg=4.10E-05 max=1.36E-04 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-5))
647 min=1.19E-08 med=9.70E-08 avg=9.41E-08 max=2.03E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-4))
648 min=2.69E-09 med=1.41E-08 avg=1.54E-08 max=3.48E-08 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-4))
649 min=1.20E-07 med=8.51E-07 avg=8.30E-07 max=2.02E-06 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
650 min=1.38E-07 med=9.26E-07 avg=1.05E-06 max=2.17E-06 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
651 min=1.05E-08 med=6.50E-08 avg=6.87E-08 max=1.63E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
652 min=3.28E-09 med=1.07E-08 avg=1.17E-08 max=2.63E-08 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
653 min=1.53E-09 med=2.87E-08 avg=2.92E-08 max=6.16E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
654 min=2.71E-10 med=3.12E-09 avg=3.14E-09 max=7.35E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
655 min=1.95E-09 med=1.12E-08 avg=1.11E-08 max=2.30E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
656 min=1.63E-10 med=1.04E-09 avg=1.08E-09 max=2.35E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
657 min=1.19E-10 med=7.26E-10 avg=7.82E-10 max=1.92E-09 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
658 min=6.16E-12 med=9.65E-11 avg=1.05E-10 max=2.57E-10 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))

659 min=4.36E-10 med=2.54E-08 avg=3.17E-08 max=9.12E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
660 min=5.62E-11 med=5.32E-09 avg=5.96E-09 max=1.79E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
661 min=8.67E-10 med=6.19E-09 avg=6.08E-09 max=1.37E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
662 min=1.02E-10 med=6.16E-10 avg=6.54E-10 max=1.30E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
663 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f3, r)
664 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f3, r)
665 min=8.15E-238 med=6.63E-218 avg=3.07E-190 max=2.37E-188 ES((10/10+50), f=f3, r)
666 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f3, r)
667 min=0.00E00 med=2.65E-318 avg=1.82E-307 max=1.22E-305 ES((10/2+100), f=f3, r)
668 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f3, r)
669 min=1.19E-185 med=1.05E-170 avg=7.28E-154 max=7.27E-152 ES((10/10+100), f=f3, r)
670 min=4.64E-282 med=2.48E-275 avg=3.08E-270 max=9.35E-269 ES((10/10,100), f=f3, r)
671 min=3.74E-218 med=5.54E-208 avg=3.69E-200 max=3.66E-198 ES((10/2+200), f=f3, r)
672 min=5.37E-238 med=5.98E-232 avg=8.61E-228 max=2.80E-226 ES((10/2,200), f=f3, r)
673 min=4.55E-130 med=1.55E-116 avg=5.67E-105 max=5.64E-103 ES((10/10+200), f=f3, r)
674 min=9.46E-173 med=2.47E-167 avg=3.57E-163 max=3.09E-161 ES((10/10,200), f=f3, r)
675 min=1.16E-291 med=1.05E-279 avg=1.29E-268 max=1.29E-266 ES((20/2+50), f=f3, r)
676 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f3, r)
677 min=1.16E-151 med=9.90E-140 avg=6.72E-128 max=6.71E-126 ES((20/10+50), f=f3, r)
678 min=4.35E-225 med=2.00E-219 avg=9.17E-212 max=9.17E-210 ES((20/10,50), f=f3, r)
679 min=5.53E-244 med=2.43E-234 avg=1.43E-229 max=5.26E-228 ES((20/2+100), f=f3, r)
680 min=6.62E-308 med=3.25E-301 avg=1.13E-297 max=6.70E-296 ES((20/2,100), f=f3, r)
681 min=1.01E-136 med=1.34E-125 avg=1.28E-116 max=1.06E-114 ES((20/10+100), f=f3, r)
682 min=9.53E-211 med=5.43E-207 avg=5.92E-203 max=4.01E-201 ES((20/10,100), f=f3, r)
683 min=2.02E-174 med=2.65E-169 avg=1.93E-166 max=6.47E-165 ES((20/2+200), f=f3, r)
684 min=2.40E-202 med=6.07E-199 avg=4.08E-196 max=1.89E-194 ES((20/2,200), f=f3, r)
685 min=9.85E-102 med=1.18E-95 avg=7.79E-89 max=7.63E-87 ES((20/10+200), f=f3, r)
686 min=3.79E-144 med=6.30E-141 avg=4.73E-139 max=1.40E-137 ES((20/10,200), f=f3, r)
687 min=1.58E-169 med=2.96E-154 avg=4.26E-12 max=4.26E-10 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
688 min=4.59E-08 med=1.30E-04 avg=7.84E-04 max=6.30E-03 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
689 min=6.24E-195 med=2.00E-32 avg=3.33E-06 max=1.24E-04 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
690 min=2.22E-174 med=3.14E-168 avg=3.11E-165 max=1.41E-163 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
691 min=0.00E00 med=4.36E-318 avg=8.26E-23 max=8.26E-21 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
692 min=7.39E-253 med=1.49E-245 avg=1.91E-240 max=1.41E-238 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
693 min=1.30E-88 med=3.47E-84 avg=3.35E-82 max=2.65E-80 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
694 min=6.32E-51 med=9.16E-11 avg=2.07E-05 max=1.10E-03 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
695 min=5.17E-101 med=5.73E-97 avg=7.36E-35 max=7.36E-33 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
696 min=9.72E-89 med=2.52E-86 avg=4.12E-85 max=1.33E-83 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
697 min=2.31E-173 med=1.88E-165 avg=5.64E-161 max=5.62E-159 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
698 min=8.34E-130 med=8.26E-127 avg=1.18E-124 max=5.33E-123 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
699 min=1.18E-44 med=3.23E-43 avg=8.41E-43 max=7.50E-42 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
700 min=4.14E-60 med=4.31E-57 avg=3.07E-30 max=1.58E-28 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
701 min=5.50E-51 med=4.38E-49 avg=1.26E-48 max=1.46E-47 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
702 min=9.58E-46 med=2.22E-44 avg=5.00E-44 max=8.09E-43 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
703 min=3.42E-88 med=1.58E-85 avg=1.67E-84 max=5.29E-83 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
704 min=7.45E-67 med=7.80E-65 avg=2.54E-64 max=2.98E-63 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
705 min=1.08E-02 med=2.82E-01 avg=2.83E-01 max=5.85E-01 RW(o1=NM([-10,10]^5), f=f3, r)
706 min=9.70E-02 med=5.36E-01 avg=4.92E-01 max=8.04E-01 RW(o1=UM([-10,10]^5), f=f3, r)
707 min=1.45E-03 med=9.36E-03 avg=9.67E-03 max=2.09E-02 RS(f=f3, r)
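The DE-1 rows above all use the "1/rand/bin" or "1/rand/exp" notation with a scale factor F and crossover rate cr. As a reading aid, the following is a minimal sketch of the binomial variant: for each target vector, three distinct population members a, b, c are drawn, a donor a + F*(b - c) is formed, and binomial crossover with rate cr mixes the donor into the target. The function name and signature are hypothetical illustrations of the table notation, not the experiment code itself.

```python
import random

def de_rand_1_bin(pop, i, F=0.5, cr=0.5, bounds=(-10.0, 10.0)):
    """Create one trial vector for target pop[i] (DE "1/rand/bin" scheme)."""
    n = len(pop[i])
    # pick three distinct population members, all different from the target
    a, b, c = (pop[j] for j in random.sample(
        [j for j in range(len(pop)) if j != i], 3))
    jrand = random.randrange(n)  # guarantees at least one donor component
    trial = []
    for j in range(n):
        if j == jrand or random.random() < cr:
            v = a[j] + F * (b[j] - c[j])           # differential mutation
            v = min(max(v, bounds[0]), bounds[1])  # clip to the search cube
        else:
            v = pop[i][j]                          # inherit from the target
        trial.append(v)
    return trial
```

The "exp" variant differs only in how the crossover positions are chosen: instead of an independent coin flip per component, a contiguous run of donor components is copied, its length geometrically distributed with parameter cr.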
708
709
710 Dimensions: 10
711
712
713
714 Sphere
715
716 min=9.51E-05 med=2.25E-04 avg=2.21E-04 max=3.09E-04 HC(o1=NM([-10,10]^10), r)
717 min=1.36E-05 med=3.15E-05 avg=3.02E-05 max=4.32E-05 HC(o1=UM([-10,10]^10), r)
718 min=1.24E-02 med=3.19E-02 avg=3.23E-02 max=5.84E-02 SA(o1=NM([-10,10]^10), r, log(1))
719 min=1.80E-02 med=6.46E-02 avg=6.68E-02 max=1.32E-01 SA(o1=UM([-10,10]^10), r, log(1))

57 DEMOS
720 min=1.24E-02 med=3.19E-02 avg=3.23E-02 max=5.84E-02 SA(o1=NM([-10,10]^10), r, log(1))
721 min=1.80E-02 med=6.46E-02 avg=6.68E-02 max=1.32E-01 SA(o1=UM([-10,10]^10), r, log(1))
722 min=7.67E-04 med=2.77E-03 avg=2.65E-03 max=4.03E-03 SA(o1=NM([-10,10]^10), r, log(0.1))
723 min=1.16E-03 med=3.38E-03 avg=3.39E-03 max=6.63E-03 SA(o1=UM([-10,10]^10), r, log(0.1))
724 min=1.11E-04 med=2.32E-04 avg=2.30E-04 max=3.59E-04 SA(o1=NM([-10,10]^10), r, log(0.001))
725 min=1.59E-05 med=4.53E-05 avg=4.50E-05 max=6.74E-05 SA(o1=UM([-10,10]^10), r, log(0.001))
726 min=5.94E-05 med=2.24E-04 avg=2.25E-04 max=3.77E-04 SA(o1=NM([-10,10]^10), r, log(1.0E-4))
727 min=9.82E-06 med=3.05E-05 avg=2.92E-05 max=4.86E-05 SA(o1=UM([-10,10]^10), r, log(1.0E-4))
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
728 min=5.62E-02 med=2.65E-01 avg=2.70E-01 max=5.35E-01 SQ(o1=NM([-10,10]^10), r, exp(Ts=1, e=1.0E-5))
729 min=1.38E-01 med=7.82E-01 avg=8.64E-01 max=2.08E00 SQ(o1=UM([-10,10]^10), r, exp(Ts=1, e=1.0E-5))
730 min=1.35E-04 med=3.15E-04 avg=3.22E-04 max=5.56E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1, e=1.0E-4))
731 min=1.85E-05 med=6.17E-05 avg=5.94E-05 max=9.37E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1, e=1.0E-4))
732 min=5.04E-03 med=1.51E-02 avg=1.53E-02 max=2.54E-02 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-5))
733 min=7.21E-03 med=2.79E-02 avg=2.90E-02 max=6.19E-02 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-5))
734 min=1.03E-04 med=2.81E-04 avg=2.79E-04 max=4.21E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-4))
735 min=9.88E-06 med=4.01E-05 avg=3.94E-05 max=5.48E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-4))
736 min=6.37E-05 med=3.45E-04 avg=3.42E-04 max=5.32E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-5))
737 min=6.37E-05 med=1.38E-04 avg=1.43E-04 max=2.17E-04 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-5))
738 min=9.63E-05 med=2.26E-04 avg=2.31E-04 max=3.62E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-4))
739 min=1.05E-05 med=3.28E-05 avg=3.22E-05 max=5.30E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-4))
740 min=9.69E-05 med=2.41E-04 avg=2.35E-04 max=3.65E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-5))
741 min=1.74E-05 med=3.65E-05 avg=3.59E-05 max=5.12E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-5))
742 min=1.04E-04 med=2.21E-04 avg=2.23E-04 max=3.28E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-4))
743 min=1.52E-05 med=3.22E-05 avg=3.19E-05 max=4.56E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-4))
744 min=3.92E-05 med=1.03E-04 avg=1.02E-04 max=1.57E-04 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
745 min=1.93E-06 med=9.65E-06 avg=9.42E-06 max=1.59E-05 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
746 min=1.30E-05 med=3.24E-05 avg=3.27E-05 max=5.11E-05 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
747 min=1.40E-06 med=3.45E-06 avg=3.48E-06 max=5.80E-06 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
748 min=1.39E-06 med=4.51E-06 avg=4.52E-06 max=8.36E-06 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
749 min=1.76E-07 med=4.42E-07 avg=4.57E-07 max=8.17E-07 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
750 min=1.92E-05 med=1.30E-04 avg=1.41E-04 max=4.01E-04 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
751 min=4.24E-06 med=2.22E-05 avg=2.18E-05 max=4.46E-05 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
752 min=9.48E-06 med=2.24E-05 avg=2.23E-05 max=3.59E-05 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
753 min=8.02E-07 med=2.15E-06 avg=2.13E-06 max=3.78E-06 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
754 min=2.73E-310 med=3.58E-300 avg=1.95E-287 max=1.95E-285 ES((10/2+50), r)
755 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), r)
756 min=6.49E-199 med=1.01E-187 avg=2.96E-177 max=2.96E-175 ES((10/10+50), r)
757 min=2.17E-297 med=2.76E-288 avg=1.13E-282 max=1.12E-280 ES((10/10,50), r)
758 min=4.69E-228 med=5.97E-218 avg=4.13E-211 max=3.94E-209 ES((10/2+100), r)
759 min=5.05E-265 med=1.27E-259 avg=3.39E-255 max=2.21E-253 ES((10/2,100), r)
760 min=1.20E-151 med=1.91E-144 avg=1.74E-139 max=1.12E-137 ES((10/10+100), r)
761 min=2.23E-201 med=1.44E-196 avg=3.69E-193 max=3.28E-191 ES((10/10,100), r)
762 min=5.89E-146 med=2.62E-141 avg=1.32E-138 max=1.13E-136 ES((10/2+200), r)
763 min=1.52E-159 med=2.29E-156 avg=1.59E-152 max=1.53E-150 ES((10/2,200), r)
764 min=1.30E-102 med=3.15E-97 avg=2.04E-93 max=1.92E-91 ES((10/10+200), r)
765 min=6.40E-123 med=1.25E-119 avg=1.65E-117 max=9.15E-116 ES((10/10,200), r)
766 min=2.03E-199 med=5.24E-193 avg=2.41E-189 max=1.16E-187 ES((20/2+50), r)
767 min=1.50E-246 med=8.13E-241 avg=1.01E-236 max=5.88E-235 ES((20/2,50), r)
768 min=4.06E-124 med=1.58E-118 avg=3.59E-115 max=1.89E-113 ES((20/10+50), r)
769 min=3.64E-164 med=3.64E-160 avg=4.60E-156 max=2.82E-154 ES((20/10,50), r)
770 min=1.94E-165 med=2.82E-161 avg=1.00E-157 max=4.59E-156 ES((20/2+100), r)
771 min=2.35E-207 med=1.98E-202 avg=3.19E-199 max=2.98E-197 ES((20/2,100), r)
772 min=2.19E-108 med=5.48E-104 avg=1.65E-101 max=4.48E-100 ES((20/10+100), r)
773 min=2.97E-151 med=2.15E-148 avg=2.14E-146 max=6.39E-145 ES((20/10,100), r)
774 min=6.35E-119 med=8.72E-116 avg=1.01E-113 max=4.61E-112 ES((20/2+200), r)
775 min=1.50E-136 med=2.32E-133 avg=3.89E-132 max=1.92E-130 ES((20/2,200), r)
776 min=2.01E-80 med=1.37E-77 avg=2.66E-76 max=7.45E-75 ES((20/10+200), r)
777 min=6.73E-103 med=2.71E-100 avg=2.16E-99 max=4.52E-98 ES((20/10,200), r)
778 min=7.64E-98 med=2.54E-88 avg=7.44E-04 max=7.08E-02 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
779 min=2.14E-01 med=3.15E00 avg=3.98E00 max=1.69E01 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
780 min=2.42E-24 med=3.78E-07 avg=6.14E-03 max=3.41E-01 DE-1(r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
781 min=2.93E-117 med=1.31E-113 avg=1.11E-111 max=5.49E-110 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
782 min=8.02E-223 med=2.59E-216 avg=4.83E-38 max=4.83E-36 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
783 min=1.59E-170 med=2.72E-166 avg=2.51E-163 max=1.98E-161 DE-1(r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
784 min=3.27E-49 med=4.75E-47 avg=1.23E-46 max=1.73E-45 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
785 min=2.66E-03 med=1.50E-01 avg=3.39E-01 max=2.41E00 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
786 min=7.14E-55 med=7.35E-53 avg=5.53E-52 max=9.77E-51 DE-1(r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
787 min=5.25E-59 med=5.50E-57 avg=1.92E-56 max=3.56E-55 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
788 min=1.32E-112 med=1.03E-109 avg=2.17E-107 max=7.80E-106 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
789 min=3.77E-86 med=3.76E-84 avg=7.17E-83 max=3.10E-81 DE-1(r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
790 min=3.08E-24 med=3.95E-22 avg=8.05E-22 max=7.20E-21 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))

791 min=9.00E-10 med=1.17E-04 avg=7.73E-04 max=1.12E-02 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
792 min=2.98E-26 med=5.14E-25 avg=8.44E-25 max=5.57E-24 DE-1(r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
793 min=5.50E-29 med=3.15E-28 avg=6.09E-28 max=6.15E-27 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
794 min=4.57E-58 med=2.72E-55 avg=1.91E-54 max=5.04E-53 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
795 min=1.04E-43 med=4.87E-42 avg=1.70E-41 max=4.04E-40 DE-1(r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
796 min=6.52E01 med=2.04E02 avg=1.99E02 max=3.39E02 RW(o1=NM([-10,10]^10), r)
797 min=9.78E01 med=2.80E02 avg=2.80E02 max=4.80E02 RW(o1=UM([-10,10]^10), r)
798 min=1.48E01 med=3.21E01 avg=3.14E01 max=4.40E01 RS(r)
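The SA and SQ rows above are parameterized by a temperature schedule: log(Ts) denotes a logarithmic schedule with start temperature Ts, and exp(Ts, e) an exponential schedule with start temperature Ts and decay parameter e. As a rough sketch, the two notations can be read as follows; the exact formulas here are assumptions based on the standard schedules that match this notation, and the function names are hypothetical.

```python
import math

def log_schedule(Ts, t):
    """Logarithmic annealing: temperature falls like Ts / ln(t + e)."""
    # t is the iteration index; adding math.e keeps T(0) == Ts
    return Ts / math.log(t + math.e)

def exp_schedule(Ts, e, t):
    """Exponential annealing: temperature shrinks by a factor (1 - e) per step."""
    return Ts * (1.0 - e) ** t
```

Read this way, log(1.0E-4) starts much colder than log(1), which is consistent with the tables: runs with smaller start temperatures behave more like hill climbing and reach better medians on these unimodal benchmarks.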
799
800
801 SumCosLn
802
803 min=1.81E-06 med=5.45E-06 avg=5.35E-06 max=8.32E-06 HC(o1=NM([-10,10]^10), f=f1, r)
804 min=2.87E-07 med=7.85E-07 avg=7.68E-07 max=1.07E-06 HC(o1=UM([-10,10]^10), f=f1, r)
805 min=1.74E-01 med=3.73E-01 avg=3.72E-01 max=5.47E-01 SA(o1=NM([-10,10]^10), f=f1, r, log(1))
806 min=2.66E-01 med=5.08E-01 avg=5.01E-01 max=6.91E-01 SA(o1=UM([-10,10]^10), f=f1, r, log(1))
807 min=1.74E-01 med=3.73E-01 avg=3.72E-01 max=5.47E-01 SA(o1=NM([-10,10]^10), f=f1, r, log(1))
808 min=2.66E-01 med=5.08E-01 avg=5.01E-01 max=6.91E-01 SA(o1=UM([-10,10]^10), f=f1, r, log(1))
809 min=4.49E-03 med=1.90E-02 avg=3.10E-02 max=1.58E-01 SA(o1=NM([-10,10]^10), f=f1, r, log(0.1))
810 min=1.59E-01 med=4.00E-01 avg=4.05E-01 max=6.28E-01 SA(o1=UM([-10,10]^10), f=f1, r, log(0.1))
811 min=1.30E-05 med=3.01E-05 avg=3.02E-05 max=4.60E-05 SA(o1=NM([-10,10]^10), f=f1, r, log(0.001))
812 min=9.12E-06 med=3.06E-05 avg=3.09E-05 max=4.65E-05 SA(o1=UM([-10,10]^10), f=f1, r, log(0.001))
813 min=1.91E-06 med=7.65E-06 avg=7.37E-06 max=1.08E-05 SA(o1=NM([-10,10]^10), f=f1, r, log(1.0E-4))
814 min=9.69E-07 med=2.81E-06 avg=2.73E-06 max=4.64E-06 SA(o1=UM([-10,10]^10), f=f1, r, log(1.0E-4))
815 min=1.85E-01 med=4.12E-01 avg=4.11E-01 max=6.28E-01 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-5))
816 min=2.77E-01 med=5.05E-01 avg=5.08E-01 max=6.98E-01 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-5))
817 min=1.51E-05 med=3.79E-05 avg=3.69E-05 max=5.62E-05 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-4))
818 min=1.42E-05 med=3.80E-05 avg=3.86E-05 max=7.46E-05 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1, e=1.0E-4))
819 min=1.19E-01 med=3.28E-01 avg=3.30E-01 max=5.16E-01 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-5))
820 min=2.57E-01 med=4.96E-01 avg=4.98E-01 max=6.93E-01 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-5))
821 min=5.31E-06 med=1.09E-05 avg=1.08E-05 max=1.61E-05 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-4))
822 min=5.44E-07 med=3.33E-06 avg=3.30E-06 max=5.07E-06 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.1, e=1.0E-4))
823 min=5.18E-05 med=1.70E-04 avg=1.67E-04 max=2.70E-04 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-5))
824 min=6.21E-05 med=2.74E-04 avg=2.80E-04 max=5.32E-04 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-5))
825 min=2.92E-06 med=6.12E-06 avg=6.17E-06 max=9.99E-06 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-4))
826 min=4.20E-07 med=9.52E-07 avg=9.52E-07 max=1.33E-06 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=0.001, e=1.0E-4))
827 min=6.32E-06 med=2.00E-05 avg=1.93E-05 max=3.06E-05 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
828 min=7.75E-06 med=1.49E-05 avg=1.50E-05 max=2.83E-05 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
829 min=1.41E-06 med=5.96E-06 avg=5.82E-06 max=8.84E-06 SQ(o1=NM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
830 min=4.20E-07 med=8.13E-07 avg=8.34E-07 max=1.22E-06 SQ(o1=UM([-10,10]^10), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
831 min=9.08E-07 med=2.40E-06 avg=2.39E-06 max=4.11E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
832 min=9.25E-08 med=2.33E-07 avg=2.34E-07 max=4.25E-07 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
833 min=3.54E-07 med=8.83E-07 avg=8.81E-07 max=1.39E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
834 min=3.99E-08 med=8.99E-08 avg=8.96E-08 max=1.43E-07 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
835 min=2.91E-08 med=1.16E-07 avg=1.12E-07 max=1.94E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
836 min=4.16E-09 med=1.13E-08 avg=1.14E-08 max=2.03E-08 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
837 min=5.61E-07 med=3.11E-06 avg=3.32E-06 max=8.17E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
838 min=1.22E-07 med=4.85E-07 avg=5.19E-07 max=1.18E-06 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
839 min=1.61E-07 med=5.61E-07 avg=5.64E-07 max=9.32E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
840 min=1.91E-08 med=5.48E-08 avg=5.47E-08 max=1.14E-07 sEA(o1=UM([-10,10]^10), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
841 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f1, r)
842 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f1, r)
843 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f1, r)
844 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f1, r)
845 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f1, r)
846 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f1, r)
847 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f1, r)
848 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f1, r)
849 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f1, r)
850 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f1, r)

57 DEMOS
851 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 + 2 0 0 ) , f=f 1 , r )
852 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 1 0 / 1 0 , 2 0 0 ) , f=f 1 , r )
853 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 5 0 ) , f=f 1 , r )
854 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 5 0 ) , f=f 1 , r )
855 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 + 5 0 ) , f=f 1 , r )
856 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 5 0 ) , f=f 1 , r )
857 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 1 0 0 ) , f=f 1 , r )
858 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 1 0 0 ) , f=f 1 , r )
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES
859 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 + 1 0 0 ) , f=f 1 , r )
860 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 1 0 0 ) , f=f 1 , r )
861 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 + 2 0 0 ) , f=f 1 , r )
862 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 2 , 2 0 0 ) , f=f 1 , r )
863 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 + 2 0 0 ) , f=f 1 , r )
864 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 ES ( ( 2 0 / 1 0 , 2 0 0 ) , f=f 1 , r )
865 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 1 . 5 0E−06 max= 1 . 4 0E−04 DE−1( f=f 1 , r , p s =50 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F= 0 . 7 ) )
866 min = 1 . 9 6E−03 med = 1 . 8 5E−02 avg = 2 . 4 6E−02 max= 8 . 8 9E−02 DE−1( f=f 1 , r , p s =50 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F= 0 . 3 ) )
867 min = 0 . 0 0 E00 med = 1 . 3 9E−10 avg = 6 . 2 8E−06 max= 2 . 5 7E−04 DE−1( f=f 1 , r , p s =50 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F= 0 . 5 ) )
868 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =50 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F = 0 . 7 ) )
869 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =50 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F = 0 . 3 ) )
870 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =50 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F = 0 . 5 ) )
871 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F= 0 . 7 ) )
872 min = 6 . 4 2E−06 med = 1 . 2 6E−03 avg = 1 . 7 9E−03 max= 1 . 4 6E−02 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F= 0 . 3 ) )
873 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F= 0 . 5 ) )
874 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F= 0 . 7 ) )
875 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F= 0 . 3 ) )
876 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =100 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F= 0 . 5 ) )
877 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F= 0 . 7 ) )
878 min = 3 . 3 7E−12 med = 1 . 1 2E−06 avg = 1 . 1 1E−05 max= 2 . 3 0E−04 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F= 0 . 3 ) )
879 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / b i n ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F= 0 . 5 ) )
880 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 3 , F= 0 . 7 ) )
881 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 7 , F= 0 . 3 ) )
882 min = 0 . 0 0 E00 med = 0 . 0 0 E00 avg = 0 . 0 0 E00 max= 0 . 0 0 E00 DE−1( f=f 1 , r , p s =200 , op3 =1/ r a n d / exp ( [ − 1 0 , 1 0 ] ˆ 1 0 , c r = 0 . 5 , F= 0 . 5 ) )
883 min = 2 . 4 8E−01 med = 4 . 2 0E−01 avg = 4 . 1 5E−01 max= 5 . 8 4E−01 RW( o1=NM( [ − 1 0 , 1 0 ] ˆ 1 0 ) , f=f 1 , r )
884 min = 2 . 7 3E−01 med = 5 . 0 9E−01 avg = 5 . 0 5E−01 max= 7 . 0 4E−01 RW( o1=UM( [ − 1 0 , 1 0 ] ˆ 1 0 ) , f=f 1 , r )
885 min = 1 . 1 4E−01 med = 1 . 6 8E−01 avg = 1 . 6 6E−01 max= 1 . 9 5E−01 RS ( f=f 1 , r )

CosLnSum

min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 HC(o1=NM([-10,10]^10), f=f2, r)
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 HC(o1=UM([-10,10]^10), f=f2, r)
min=6.38E-01 med=7.44E-01 avg=7.42E-01 max=8.92E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(1))
min=6.80E-01 med=8.23E-01 avg=8.17E-01 max=9.54E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(1))
min=6.38E-01 med=7.44E-01 avg=7.42E-01 max=8.92E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(1))
min=6.80E-01 med=8.23E-01 avg=8.17E-01 max=9.54E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(1))
min=5.68E-01 med=5.99E-01 avg=6.02E-01 max=6.91E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(0.1))
min=6.65E-01 med=7.78E-01 avg=7.77E-01 max=9.21E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(0.1))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(0.001))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(0.001))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=NM([-10,10]^10), f=f2, r, log(1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SA(o1=UM([-10,10]^10), f=f2, r, log(1.0E-4))
min=6.64E-01 med=7.61E-01 avg=7.61E-01 max=8.87E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-5))
min=6.81E-01 med=8.27E-01 avg=8.19E-01 max=9.57E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1, e=1.0E-4))
min=6.31E-01 med=7.27E-01 avg=7.33E-01 max=8.51E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=6.85E-01 med=8.17E-01 avg=8.14E-01 max=9.54E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=5.49E-01 med=5.50E-01 avg=5.50E-01 max=5.50E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=5.49E-01 med=5.50E-01 avg=5.50E-01 max=5.51E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=NM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 SQ(o1=UM([-10,10]^10), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=5.49E-01 med=5.55E-01 avg=5.57E-01 max=5.83E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=5.76E-01 med=6.39E-01 avg=6.38E-01 max=6.72E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.55E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=5.65E-01 med=6.13E-01 avg=6.12E-01 max=6.45E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.57E-01 med=5.95E-01 avg=5.94E-01 max=6.24E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.52E-01 med=5.76E-01 avg=5.77E-01 max=6.07E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.49E-01 med=5.56E-01 avg=5.58E-01 max=5.87E-01 sEA(o1=NM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=5.78E-01 med=6.32E-01 avg=6.31E-01 max=6.75E-01 sEA(o1=UM([-10,10]^10), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.09E-01 max=7.25E-01 ES((10/10+50), f=f2, r)
min=0.00E00 med=6.88E-01 avg=6.58E-01 max=7.40E-01 ES((10/10,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.93E-02 max=6.68E-01 ES((10/2,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=6.34E-02 max=7.32E-01 ES((10/10+100), f=f2, r)
min=0.00E00 med=6.99E-01 avg=6.52E-01 max=7.38E-01 ES((10/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=2.08E-02 max=7.13E-01 ES((10/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=7.61E-02 max=7.24E-01 ES((10/10+200), f=f2, r)
min=0.00E00 med=6.97E-01 avg=6.07E-01 max=7.39E-01 ES((10/10,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=3.22E-02 max=5.91E-01 ES((20/2,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.43E-01 max=7.32E-01 ES((20/10+50), f=f2, r)
min=4.35E-01 med=6.73E-01 avg=6.61E-01 max=7.34E-01 ES((20/10,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=2.51E-01 max=7.17E-01 ES((20/2,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=1.04E-01 max=7.38E-01 ES((20/10+100), f=f2, r)
min=5.08E-01 med=6.85E-01 avg=6.78E-01 max=7.42E-01 ES((20/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=7.11E-02 max=7.21E-01 ES((20/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=6.18E-02 max=7.20E-01 ES((20/10+200), f=f2, r)
min=0.00E00 med=6.98E-01 avg=6.74E-01 max=7.36E-01 ES((20/10,200), f=f2, r)
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.60E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=5.49E-01 avg=5.27E-01 max=5.50E-01 DE-1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=1.60E-05 med=5.49E-01 avg=4.67E-01 max=5.49E-01 DE-1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=9.22E-05 med=5.49E-01 avg=4.92E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=2.20E-01 avg=2.98E-01 max=5.49E-01 DE-1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=5.49E-01 avg=4.83E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=5.49E-01 med=5.49E-01 avg=5.49E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=0.00E00 med=1.16E-04 avg=1.18E-01 max=5.49E-01 DE-1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=6.79E-01 med=7.60E-01 avg=7.65E-01 max=9.00E-01 RW(o1=NM([-10,10]^10), f=f2, r)
min=7.01E-01 med=8.19E-01 avg=8.20E-01 max=9.50E-01 RW(o1=UM([-10,10]^10), f=f2, r)
min=5.94E-01 med=6.20E-01 avg=6.20E-01 max=6.41E-01 RS(f=f2, r)

MaxSqr

min=2.48E-07 med=7.09E-07 avg=6.72E-07 max=1.08E-06 HC(o1=NM([-10,10]^10), f=f3, r)
min=3.89E-08 med=9.93E-08 avg=9.92E-08 max=1.55E-07 HC(o1=UM([-10,10]^10), f=f3, r)
min=5.83E-02 med=2.45E-01 avg=2.61E-01 max=5.84E-01 SA(o1=NM([-10,10]^10), f=f3, r, log(1))
min=1.60E-01 med=5.66E-01 avg=5.59E-01 max=8.42E-01 SA(o1=UM([-10,10]^10), f=f3, r, log(1))
min=5.83E-02 med=2.45E-01 avg=2.61E-01 max=5.84E-01 SA(o1=NM([-10,10]^10), f=f3, r, log(1))
min=1.60E-01 med=5.66E-01 avg=5.59E-01 max=8.42E-01 SA(o1=UM([-10,10]^10), f=f3, r, log(1))
min=2.30E-03 med=7.57E-03 avg=8.24E-03 max=2.10E-02 SA(o1=NM([-10,10]^10), f=f3, r, log(0.1))
min=6.02E-02 med=1.59E-01 avg=1.75E-01 max=3.77E-01 SA(o1=UM([-10,10]^10), f=f3, r, log(0.1))
min=6.61E-06 med=2.66E-05 avg=2.66E-05 max=4.27E-05 SA(o1=NM([-10,10]^10), f=f3, r, log(0.001))
min=1.40E-05 med=3.90E-05 avg=4.09E-05 max=8.48E-05 SA(o1=UM([-10,10]^10), f=f3, r, log(0.001))
min=8.38E-07 med=3.12E-06 avg=3.09E-06 max=4.69E-06 SA(o1=NM([-10,10]^10), f=f3, r, log(1.0E-4))
min=7.49E-07 med=2.77E-06 avg=2.80E-06 max=4.92E-06 SA(o1=UM([-10,10]^10), f=f3, r, log(1.0E-4))
min=1.58E-01 med=4.69E-01 avg=4.61E-01 max=6.79E-01 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-5))
min=1.65E-01 med=6.48E-01 avg=6.39E-01 max=8.85E-01 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-5))
min=1.26E-05 med=3.16E-05 avg=3.23E-05 max=6.22E-05 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-4))
min=1.55E-05 med=5.95E-05 avg=6.39E-05 max=1.71E-04 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1, e=1.0E-4))
min=2.17E-02 med=1.42E-01 avg=1.46E-01 max=3.40E-01 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=1.73E-01 med=5.09E-01 avg=4.98E-01 max=7.88E-01 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=1.69E-06 med=4.27E-06 avg=4.07E-06 max=6.38E-06 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=8.95E-07 med=3.30E-06 avg=3.23E-06 max=5.54E-06 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=5.81E-05 med=1.91E-04 avg=1.87E-04 max=3.08E-04 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=1.39E-04 med=4.25E-04 avg=4.56E-04 max=1.31E-03 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=3.82E-07 med=8.67E-07 avg=8.59E-07 max=1.44E-06 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=4.17E-08 med=1.45E-07 avg=1.39E-07 max=2.20E-07 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=5.15E-06 med=1.49E-05 avg=1.45E-05 max=2.36E-05 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=5.36E-06 med=1.74E-05 avg=1.87E-05 max=3.65E-05 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=4.33E-07 med=7.68E-07 avg=7.77E-07 max=1.37E-06 SQ(o1=NM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=4.87E-08 med=1.15E-07 avg=1.15E-07 max=1.83E-07 SQ(o1=UM([-10,10]^10), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.62E-07 med=3.64E-07 avg=3.62E-07 max=5.36E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.55E-08 med=6.68E-08 avg=1.74E-04 max=3.38E-03 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=4.80E-08 med=1.23E-07 avg=1.27E-07 max=2.09E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.63E-09 med=1.54E-08 avg=6.88E-05 max=6.27E-03 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=8.05E-09 med=1.85E-08 avg=1.83E-08 max=3.93E-08 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=5.56E-10 med=2.09E-09 avg=1.85E-06 max=1.09E-04 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.12E-07 med=5.34E-07 avg=5.18E-07 max=1.01E-06 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=3.10E-08 med=8.64E-08 avg=1.51E-04 max=6.32E-03 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=4.05E-08 med=8.51E-08 avg=8.52E-08 max=1.39E-07 sEA(o1=NM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=4.04E-09 med=1.03E-08 avg=3.12E-05 max=7.60E-04 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=2.16E-263 med=3.27E-252 avg=4.47E-241 max=4.13E-239 ES((10/2+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f3, r)
min=2.75E-157 med=5.60E-143 avg=5.19E-132 max=4.18E-130 ES((10/10+50), f=f3, r)
min=2.48E-247 med=1.34E-240 avg=4.35E-234 max=3.88E-232 ES((10/10,50), f=f3, r)
min=2.04E-198 med=5.86E-191 avg=6.18E-184 max=6.13E-182 ES((10/2+100), f=f3, r)
min=1.84E-243 med=1.06E-235 avg=4.38E-231 max=1.52E-229 ES((10/2,100), f=f3, r)
min=4.43E-126 med=2.74E-116 avg=6.56E-112 max=4.19E-110 ES((10/10+100), f=f3, r)
min=1.67E-178 med=1.86E-172 avg=3.71E-169 max=2.51E-167 ES((10/10,100), f=f3, r)
min=1.70E-132 med=3.68E-128 avg=3.75E-124 max=3.66E-122 ES((10/2+200), f=f3, r)
min=1.48E-149 med=4.41E-145 avg=1.11E-140 max=1.08E-138 ES((10/2,200), f=f3, r)
min=5.62E-89 med=2.70E-83 avg=4.31E-79 max=3.22E-77 ES((10/10+200), f=f3, r)
min=1.51E-112 med=5.22E-109 avg=4.88E-107 max=2.13E-105 ES((10/10,200), f=f3, r)
min=4.74E-170 med=1.89E-161 avg=5.44E-155 max=5.43E-153 ES((20/2+50), f=f3, r)
min=1.67E-211 med=4.91E-204 avg=5.56E-200 max=3.92E-198 ES((20/2,50), f=f3, r)
min=1.26E-97 med=8.56E-91 avg=2.17E-87 max=4.95E-86 ES((20/10+50), f=f3, r)
min=9.37E-124 med=1.61E-119 avg=1.46E-115 max=1.01E-113 ES((20/10,50), f=f3, r)
min=1.30E-147 med=1.48E-140 avg=1.61E-136 max=1.26E-134 ES((20/2+100), f=f3, r)
min=4.62E-186 med=3.39E-181 avg=7.32E-179 max=5.29E-177 ES((20/2,100), f=f3, r)
min=5.45E-87 med=3.16E-83 avg=2.15E-79 max=1.07E-77 ES((20/10+100), f=f3, r)
min=5.71E-129 med=1.04E-125 avg=3.03E-123 max=2.03E-121 ES((20/10,100), f=f3, r)
min=1.37E-107 med=3.07E-104 avg=9.60E-103 max=5.28E-101 ES((20/2+200), f=f3, r)
min=5.72E-127 med=3.11E-123 avg=6.42E-122 max=9.39E-121 ES((20/2,200), f=f3, r)
min=2.47E-70 med=1.89E-65 avg=5.60E-64 max=2.38E-62 ES((20/10+200), f=f3, r)
min=6.72E-92 med=1.25E-89 avg=1.31E-88 max=5.09E-87 ES((20/10,200), f=f3, r)
min=4.79E-08 med=4.50E-04 avg=4.44E-03 max=1.05E-01 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=6.86E-04 med=1.88E-02 avg=2.34E-02 max=7.67E-02 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=2.67E-07 med=1.67E-03 avg=3.43E-03 max=2.38E-02 DE-1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=3.31E-93 med=6.71E-89 avg=6.34E-87 max=2.35E-85 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=4.67E-152 med=4.73E-23 avg=3.02E-04 max=2.17E-02 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=5.11E-134 med=3.46E-128 avg=1.74E-123 max=1.54E-121 DE-1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=6.35E-40 med=4.25E-37 avg=3.81E-07 max=3.43E-05 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=4.63E-05 med=3.12E-03 avg=4.20E-03 max=1.92E-02 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=2.01E-46 med=2.30E-13 avg=1.92E-05 max=8.15E-04 DE-1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=1.56E-49 med=5.08E-47 avg=5.86E-46 max=1.68E-44 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=2.96E-89 med=3.16E-83 avg=2.31E-18 max=1.99E-16 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=8.32E-71 med=2.69E-68 avg=5.74E-67 max=2.74E-65 DE-1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=3.62E-21 med=2.32E-19 avg=3.31E-19 max=2.69E-18 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
min=3.65E-09 med=3.57E-05 avg=1.71E-04 max=2.21E-03 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
min=9.65E-25 med=2.18E-23 avg=4.58E-23 max=3.99E-22 DE-1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
min=3.45E-26 med=1.08E-24 avg=2.13E-24 max=1.28E-23 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
min=3.70E-47 med=9.81E-45 avg=8.41E-44 max=2.48E-42 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
min=3.67E-38 med=6.08E-36 avg=1.69E-35 max=1.49E-34 DE-1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
min=1.84E-01 med=4.78E-01 avg=4.69E-01 max=7.12E-01 RW(o1=NM([-10,10]^10), f=f3, r)
min=2.26E-01 med=6.78E-01 avg=6.56E-01 max=8.45E-01 RW(o1=UM([-10,10]^10), f=f3, r)
min=4.70E-02 med=9.81E-02 avg=9.45E-02 max=1.34E-01 RS(f=f3, r)

Listing 57.6: An example output of the Function test program.

57.1.1.2.2 Sphere Problem
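For orientation before the full program: in its standard textbook form, the sphere function computed here is f(x) = x1^2 + x2^2 + ... + xn^2, with the unique global optimum f(x) = 0 at the all-zero vector. The following stand-alone sketch assumes that standard definition; the `Sphere` class used in Listing 57.7 is the book's own implementation and may differ in detail.

```java
// Stand-alone sketch of the sphere benchmark f(x) = sum_i x_i^2.
// Assumes the standard textbook definition of the sphere function.
public class SphereSketch {

  /** Compute f(x) = x[0]^2 + x[1]^2 + ... for a real vector x. */
  public static double sphere(final double[] x) {
    double sum = 0.0;
    for (final double xi : x) {
      sum += xi * xi; // each dimension contributes its square
    }
    return sum;
  }

  public static void main(final String[] args) {
    System.out.println(sphere(new double[] { 1.0, 2.0, 3.0 })); // prints 14.0
    System.out.println(sphere(new double[10])); // all-zero vector: prints 0.0
  }
}
```

Because the optimum value is exactly zero, successful configurations in the experiment summaries report objective values at or very close to 0.00E00.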

Listing 57.7: Solving the Sphere functions.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real;
5
6 import java.util.Random;
7
8 import org.goataa.impl.algorithms.RandomSampling;
9 import org.goataa.impl.algorithms.RandomWalk;
10 import org.goataa.impl.algorithms.hc.HillClimbing;
11 import org.goataa.impl.gpms.IdentityMapping;
12 import org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation;
13 import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllNormalMutation;
14 import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllUniformMutation;
15 import org.goataa.impl.termination.StepLimit;
16 import org.goataa.impl.utils.BufferedStatistics;
17 import org.goataa.spec.IGPM;
18 import org.goataa.spec.INullarySearchOperation;
19 import org.goataa.spec.IObjectiveFunction;
20 import org.goataa.spec.ITerminationCriterion;
21 import org.goataa.spec.IUnarySearchOperation;
22
23 import demos.org.goataa.strings.real.benchmarkFunctions.Sphere;
24
25 /**
26 * A test of some optimization methods applied to the sphere function (see
27 * Section 50.3.1.1 on page 581). A more extensive test is given in
28 * Paragraph 57.1.1.2.1 on page 863.
29 *
30 * @author Thomas Weise
31 */
32 public class SphereTest {
33
34 /**
35 * The main routine
36 *
37 * @param args
38 * the command line arguments which are ignored here
39 */
40 @SuppressWarnings("unchecked")
41 public static final void main(final String[] args) {
42 final IObjectiveFunction<double[]> f;
43 final INullarySearchOperation<double[]> create;
44 final IUnarySearchOperation<double[]> uniform, normal;
45 final IGPM<double[], double[]> gpm;
46 final ITerminationCriterion term;
47 double[] x;
48 int maxRuns, maxSteps;
49 final BufferedStatistics stat;
50 int i;
51
52 maxRuns = 10;
53 maxSteps = 10000;
54
55 System.out.println("Maximum Number of Steps: " + maxSteps); //$NON-NLS-1$
56 System.out.println("Number of Runs : " + maxRuns); //$NON-NLS-1$
57 System.out.println();
58
59 f = new Sphere();
60 term = new StepLimit(maxSteps);
61
62 create = new DoubleArrayUniformCreation(10, -3.0, 3.0);
63 normal = new DoubleArrayAllNormalMutation(-3.0, 3.0);
64 uniform = new DoubleArrayAllUniformMutation(-3.0, 3.0);
65 gpm = ((IGPM) (IdentityMapping.IDENTITY_MAPPING));
66 stat = new BufferedStatistics();
67
68 // Hill Climbing ( Algorithm 26.1 on page 230)
69 stat.clear();
70 for (i = 0; i < maxRuns; i++) {
71 term.reset();
72 x = HillClimbing.hillClimbing(f, create, uniform, gpm, term,
73 new Random()).x;
74 stat.add(f.compute(x, null));
75 }
76 System.out.println("HC + uniform: best =" + stat.min() + //$NON-NLS-1$
77 "\n med =" + stat.median() + //$NON-NLS-1$
78 "\n mean =" + stat.mean() + //$NON-NLS-1$
79 "\n worst=" + stat.max());//$NON-NLS-1$
80
81 stat.clear();
82 for (i = 0; i < maxRuns; i++) {
83 term.reset();
84 x = HillClimbing.hillClimbing(f, create, normal, gpm, term,
85 new Random()).x;
86 stat.add(f.compute(x, null));
87 }
88 System.out.println("\nHC + normal : best =" + stat.min() + //$NON-NLS-1$
89 "\n med =" + stat.median() + //$NON-NLS-1$
90 "\n mean =" + stat.mean() + //$NON-NLS-1$
91 "\n worst=" + stat.max());//$NON-NLS-1$
92
93 // Random Walks ( Section 8.2 on page 114)
94 stat.clear();
95 for (i = 0; i < maxRuns; i++) {
96 term.reset();
97 x = RandomWalk.randomWalk(f, create, uniform, gpm, term,
98 new Random()).x;
99 stat.add(f.compute(x, null));
100 }
101 System.out.println("\nRW + uniform: best =" + stat.min() + //$NON-NLS-1$
102 "\n med =" + stat.median() + //$NON-NLS-1$
103 "\n mean =" + stat.mean() + //$NON-NLS-1$
104 "\n worst=" + stat.max());//$NON-NLS-1$
105
106 stat.clear();
107 for (i = 0; i < maxRuns; i++) {
108 term.reset();
109 x = RandomWalk
110 .randomWalk(f, create, normal, gpm, term, new Random()).x;
111 stat.add(f.compute(x, null));
112 }
113 System.out.println("\nRW + normal : best =" + stat.min() + //$NON-NLS-1$
114 "\n med =" + stat.median() + //$NON-NLS-1$
115 "\n mean =" + stat.mean() + //$NON-NLS-1$
116 "\n worst=" + stat.max());//$NON-NLS-1$
117
118 // Random Sampling ( Section 8.1 on page 113)
119 stat.clear();
120 for (i = 0; i < maxRuns; i++) {
121 term.reset();
122 x = RandomSampling
123 .randomSampling(f, create, gpm, term, new Random()).x;
124 stat.add(f.compute(x, null));
125 }
126 System.out.println("\nRS : best =" + stat.min() + //$NON-NLS-1$
127 "\n med =" + stat.median() + //$NON-NLS-1$
128 "\n mean =" + stat.mean() + //$NON-NLS-1$
129 "\n worst=" + stat.max());//$NON-NLS-1$
130 }
131 }

1 Maximum Number of Steps: 1000000
2 Number of Runs : 100
3
4 HC + uniform: best =2.1184487316487466E-7
5 med =1.7652549059811728E-6
6 mean =1.7392477542969818E-6
7 worst=2.5010946915949356E-6
8
9 HC + normal : best =3.806889268385154E-6
10 med =1.223340156083753E-5
11 mean =1.2081647908816525E-5
12 worst=1.674893713000729E-5
13
14 RW + uniform: best =6.426100906677331
15 med =18.57390162796623
16 mean =18.917613859217102
17 worst=30.58207139961655
18
19 RW + normal : best =2.198685027112634
20 med =9.71028302069235
21 mean =9.905444249311946
22 worst=20.553126721126365
23
24 RS : best =0.6277154825382958
25 med =1.7375737390707902
26 mean =1.6778694214227383
27 worst=2.4311392069803324
Listing 57.8: An example output of the Sphere test program.
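The Sphere function minimized above is, in its standard form, just the sum of the squared decision variables. The following stand-alone sketch (class and method names are illustrative, not the book's Sphere class) shows this computation:

```java
public class SphereSketch {

  /** f(x) = sum of x[i]^2; the global minimum 0 lies at the origin */
  static double sphere(final double[] x) {
    double sum = 0d;
    for (final double xi : x) {
      sum += (xi * xi);
    }
    return sum;
  }

  public static void main(final String[] args) {
    System.out.println(sphere(new double[] { 0d, 0d, 0d })); // prints 0.0
    System.out.println(sphere(new double[] { 1d, 2d, 3d })); // prints 14.0
  }
}
```

The small objective values reached by the hill climbers in the output above thus correspond to points which lie very close to the origin.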

57.1.2 Permutation-based Problem Spaces

Many combinatorial optimization problems basically require us to find a good permutation of elements – in this package, we provide some examples for that.
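To search such a space, an optimizer first needs a way to draw a uniformly distributed random permutation as initial candidate solution. This can be sketched with the usual Fisher–Yates shuffle (the class and method names here are illustrative, not the creation operator actually used in this package):

```java
import java.util.Arrays;
import java.util.Random;

public class PermutationSketch {

  /** create a uniformly distributed random permutation of 0..n-1 */
  static int[] randomPermutation(final int n, final Random r) {
    final int[] x = new int[n];
    for (int i = 0; i < n; i++) {
      x[i] = i; // start with the identity permutation
    }
    // Fisher-Yates: swap each position with a random position at or before it
    for (int i = n - 1; i > 0; i--) {
      final int j = r.nextInt(i + 1);
      final int t = x[i];
      x[i] = x[j];
      x[j] = t;
    }
    return x;
  }

  public static void main(final String[] args) {
    final int[] p = randomPermutation(10, new Random());
    System.out.println(Arrays.toString(p)); // a random ordering of 0..9
    Arrays.sort(p);
    // sorting must recover 0..9: each element occurs exactly once
    System.out.println(Arrays.toString(p));
  }
}
```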

57.1.2.1 The Bin Packing Problem

The Bin Packing problem has been introduced in Example E1.1 on page 25 in the book.
In Task 67 on page 238, we provide a version of the problem where the aim is to pack n
objects with sizes a1 to an into as few bins of size b as possible. The number of bins
required is k.
Here, we focus on that problem. The objective functions introduced in this package
consider a candidate solution x to be a permutation of the natural numbers 0 to n − 1 which
denotes the order in which the objects are to be put into bins. Starting with the first object
and an empty bin, the objects are packed in a loop. The ith element x[i] is taken. If its size
a[x[i]] is no larger than the remaining space in the current bin, it is placed into that bin and
the free space of the bin is reduced by a[x[i]]. If it does not fit, a new bin is opened: the
object is put into the new bin and b − a[x[i]] remains as free space. This process is repeated
until all objects have been packed into bins according to the permutation x. The number k
of bins is counted.
This number is subject to minimization. There exist a couple of different ways to express
this optimization criterion and, in this package, we provide some of them. Besides counting
the bins directly, we could, for instance, also count the (square of the) space remaining free
(wasted) in the bins and minimize that number instead.
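Before looking at the framework classes, the decoding scheme described above can be condensed into a few lines. The following stand-alone sketch (the instance data and all names are made up for illustration) also shows that the order in which the objects are processed decides how many bins are needed:

```java
public class BinCountSketch {

  /** decode a permutation x into a packing and count the bins used */
  static int countBins(final int[] a, final int b, final int[] x) {
    int k = 0;    // bins opened so far
    int free = 0; // free space remaining in the current bin
    for (int i = 0; i < x.length; i++) {
      final int size = a[x[i]];
      if (size <= free) {
        free -= size;    // the object fits into the current bin
      } else {
        free = b - size; // open a new bin and place the object into it
        k++;
      }
    }
    return k;
  }

  public static void main(final String[] args) {
    final int[] a = { 3, 3, 2, 2 }; // object sizes
    final int b = 5;                // bin capacity
    System.out.println(countBins(a, b, new int[] { 0, 1, 2, 3 })); // prints 3
    System.out.println(countBins(a, b, new int[] { 0, 2, 1, 3 })); // prints 2
  }
}
```

Both permutations pack the same four objects, but the second one interleaves large and small objects and gets away with one bin less – which is exactly why it makes sense to search over permutations.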
57.1.2.1.1 Base Classes

Listing 57.9: An Instance of the Bin Packing Problem.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Arrays;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.utils.TextUtils;
10
11 /**
12 * We discussed bin packing both in
13 * Task 67 on page 238 and
14 * Example E1.1 on page 25. This object holds the
15 * information of an instance of the bin packing problem. We took the
16 * example instances from [SchKle2003]: Armin Scholl and Robert Klein. Bin
17 * Packing. Friedrich Schiller University of Jena,
18 * Wirtschaftswissenschaftliche Fakultaet: Jena, Thuringia, Germany,
19 * September 2, 2003. See http://www.wiwi.uni-jena.de/Entscheidung/binpp/
20 *
21 * @author Thomas Weise
22 */
23 public class BinPackingInstance extends OptimizationModule {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** a bin packing instance from [SchKle2003] */
29 public static final BinPackingInstance N1C1W1_A = new BinPackingInstance(
30 100, new int[] { 50, 100, 99, 99, 96, 96, 92, 92, 91, 88, 87, 86,
31 85, 76, 74, 72, 69, 67, 67, 62, 61, 56, 52, 51, 49, 46, 44, 42,
32 40, 40, 33, 33, 30, 30, 29, 28, 28, 27, 25, 24, 23, 22, 21, 20,
33 17, 14, 13, 11, 10, 7, 7, 3 }, 25, "N1C1W1_A"); //$NON-NLS-1$
34
35 /** a bin packing instance from [SchKle2003] */
36 public static final BinPackingInstance N1C2W2_D = new BinPackingInstance(
37 120, new int[] { 50, 120, 99, 98, 98, 97, 96, 90, 88, 86, 82, 82,
38 80, 79, 76, 76, 76, 74, 69, 67, 66, 64, 62, 59, 55, 52, 51, 51,
39 50, 49, 44, 43, 41, 41, 41, 41, 41, 37, 35, 33, 32, 32, 31, 31,
40 31, 30, 29, 23, 23, 22, 20, 20 }, 24, "N1C2W2_D");//$NON-NLS-1$
41
42 /** a bin packing instance from [SchKle2003] */
43 public static final BinPackingInstance N2C1W1_A = new BinPackingInstance(
44 100, new int[] { 100, 100, 99, 97, 95, 95, 94, 92, 91, 89, 86, 86,
45 85, 84, 80, 80, 80, 80, 80, 79, 76, 76, 75, 74, 73, 71, 71, 69,
46 65, 64, 64, 64, 63, 63, 62, 60, 59, 58, 57, 54, 53, 52, 51, 50,
47 48, 48, 48, 46, 44, 43, 43, 43, 43, 42, 41, 40, 40, 39, 38, 38,
48 38, 38, 37, 37, 37, 37, 36, 35, 34, 33, 32, 30, 29, 28, 26, 26,
49 26, 24, 23, 22, 21, 21, 19, 18, 17, 16, 16, 15, 14, 13, 12, 12,
50 11, 9, 9, 8, 8, 7, 6, 6, 5, 1 }, 48, "N2C1W1_A");//$NON-NLS-1$
51
52 /** a bin packing instance from [SchKle2003] */
53 public static final BinPackingInstance N2C3W2_C = new BinPackingInstance(
54 150, new int[] { 100, 150, 100, 99, 99, 98, 97, 97, 97, 96, 96, 95,
55 95, 95, 94, 93, 93, 93, 92, 91, 89, 88, 87, 86, 84, 84, 83, 83,
56 82, 81, 81, 81, 78, 78, 75, 74, 73, 72, 72, 71, 70, 68, 67, 66,
57 65, 64, 63, 63, 62, 60, 60, 59, 59, 58, 57, 56, 56, 55, 54, 51,
58 49, 49, 48, 47, 47, 46, 45, 45, 45, 45, 44, 44, 44, 44, 43, 41,
59 41, 40, 39, 39, 39, 37, 37, 37, 35, 35, 34, 32, 31, 31, 30, 28,
60 26, 25, 24, 24, 23, 23, 22, 21, 20, 20 }, 41, "N2C3W2_C");//$NON-NLS-1$
61
62 /** a bin packing instance from [SchKle2003] */
63 public static final BinPackingInstance N3C3W4_M = new BinPackingInstance(
64 150, new int[] { 200, 150, 100, 100, 100, 99, 99, 98, 98, 98, 98,
65 97, 96, 95, 94, 94, 94, 94, 93, 93, 93, 93, 93, 92, 92, 92, 91,
66 90, 90, 90, 90, 90, 90, 89, 89, 88, 88, 87, 87, 86, 86, 86, 86,
67 86, 85, 85, 85, 85, 84, 84, 83, 83, 83, 82, 82, 82, 82, 82, 81,
68 81, 80, 80, 79, 79, 79, 79, 79, 79, 78, 78, 78, 77, 77, 76, 76,
69 76, 76, 75, 75, 75, 74, 74, 74, 74, 74, 73, 73, 73, 73, 72, 72,
70 71, 69, 69, 69, 69, 68, 68, 68, 67, 67, 66, 65, 65, 65, 63, 63,
71 63, 62, 61, 61, 61, 61, 60, 60, 59, 59, 59, 59, 58, 58, 58, 58,
72 58, 56, 56, 56, 55, 55, 54, 54, 54, 53, 53, 53, 53, 53, 52, 52,
73 52, 52, 51, 51, 51, 51, 51, 50, 50, 49, 49, 49, 48, 48, 47, 46,
74 46, 46, 46, 45, 45, 45, 44, 44, 44, 42, 42, 42, 41, 41, 39, 39,
75 38, 38, 38, 38, 38, 37, 37, 37, 37, 37, 37, 37, 36, 36, 36, 36,
76 35, 35, 35, 34, 34, 34, 33, 32, 31, 30, 30, 30, 30, 30, 30 },
77 88, "N3C3W4_M");//$NON-NLS-1$
78
79 /** a bin packing instance from [SchKle2003] */
80 public static final BinPackingInstance N4C2W1_I = new BinPackingInstance(
81 120, new int[] { 500, 120, 100, 100, 100, 100, 100, 99, 99, 99, 99,
82 99, 99, 99, 99, 99, 99, 99, 98, 98, 98, 98, 98, 98, 98, 97, 97,
83 97, 96, 96, 96, 96, 96, 96, 96, 96, 95, 95, 95, 95, 94, 94, 94,
84 94, 94, 93, 92, 92, 92, 92, 91, 91, 91, 91, 91, 91, 90, 90, 90,
85 90, 90, 89, 89, 89, 89, 89, 88, 88, 88, 88, 88, 87, 87, 87, 86,
86 86, 86, 86, 85, 85, 85, 85, 84, 84, 84, 84, 84, 84, 83, 83, 83,
87 83, 83, 83, 82, 82, 82, 82, 82, 82, 82, 82, 81, 81, 81, 81, 81,
88 80, 80, 80, 80, 79, 79, 79, 79, 78, 78, 78, 77, 77, 77, 76, 76,
89 75, 75, 74, 74, 74, 74, 74, 73, 73, 73, 72, 72, 72, 72, 72, 72,
90 72, 72, 71, 71, 71, 71, 71, 70, 70, 70, 70, 70, 70, 70, 70, 69,
91 69, 69, 69, 68, 68, 67, 67, 67, 67, 67, 67, 67, 66, 66, 66, 65,
92 65, 65, 65, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 63, 63, 63,
93 63, 63, 63, 63, 62, 62, 62, 62, 62, 61, 61, 61, 61, 61, 60, 60,
94 60, 59, 59, 58, 58, 58, 58, 58, 58, 57, 57, 57, 57, 56, 56, 56,
95 56, 55, 55, 55, 55, 55, 55, 54, 54, 54, 54, 53, 53, 53, 52, 52,
96 52, 52, 52, 51, 51, 51, 51, 51, 50, 50, 50, 50, 50, 50, 50, 50,
97 50, 49, 49, 49, 48, 48, 48, 47, 47, 47, 47, 47, 47, 46, 46, 46,
98 46, 46, 45, 45, 45, 45, 44, 44, 44, 43, 43, 43, 43, 43, 43, 43,
99 42, 42, 42, 42, 42, 42, 42, 42, 41, 41, 41, 41, 41, 41, 40, 40,
100 40, 40, 40, 40, 40, 39, 39, 39, 39, 39, 38, 38, 38, 38, 37, 37,
101 37, 37, 37, 36, 36, 36, 36, 36, 36, 36, 36, 35, 35, 34, 34, 34,
102 34, 34, 34, 34, 34, 33, 33, 33, 33, 33, 32, 32, 31, 31, 31, 31,
103 31, 31, 30, 29, 29, 29, 28, 28, 28, 28, 28, 28, 28, 27, 27, 27,
104 27, 26, 26, 26, 26, 26, 26, 26, 26, 25, 25, 25, 25, 25, 25, 24,
105 24, 24, 24, 24, 24, 24, 24, 24, 23, 23, 23, 22, 22, 22, 21, 21,
106 21, 21, 21, 21, 20, 20, 20, 20, 20, 20, 19, 19, 19, 19, 18, 18,
107 18, 18, 18, 18, 18, 18, 18, 18, 17, 17, 17, 17, 17, 16, 16, 16,
108 16, 16, 15, 15, 15, 15, 15, 14, 14, 14, 14, 14, 13, 13, 13, 13,
109 13, 12, 12, 12, 12, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10,
110 9, 9, 9, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 5, 5, 5, 5, 5,
111 5, 5, 5, 5, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1,
112 1, }, 209, "N4C2W1_I");//$NON-NLS-1$
113
114 /** a bin packing instance from [SchKle2003] */
115 public static final BinPackingInstance N4C3W1_L = new BinPackingInstance(
116 150, new int[] { 500, 150, 100, 100, 100, 100, 100, 99, 99, 99, 98,
117 98, 98, 98, 98, 98, 97, 97, 97, 97, 97, 97, 97, 97, 97, 96, 96,
118 95, 95, 94, 94, 94, 94, 93, 93, 93, 93, 93, 93, 92, 92, 92, 92,
119 92, 92, 91, 91, 91, 91, 91, 90, 89, 89, 88, 88, 88, 88, 88, 87,
120 87, 87, 87, 86, 85, 85, 85, 85, 84, 84, 84, 83, 83, 83, 83, 82,
121 81, 81, 81, 81, 81, 81, 81, 80, 80, 79, 79, 79, 79, 79, 79, 79,
122 79, 78, 78, 78, 78, 78, 78, 77, 77, 77, 77, 77, 77, 77, 76, 76,
123 76, 76, 76, 75, 75, 75, 74, 74, 74, 74, 74, 74, 74, 74, 73, 73,
124 73, 73, 72, 72, 72, 72, 72, 72, 71, 71, 71, 71, 70, 70, 70, 70,
125 70, 69, 69, 69, 69, 68, 68, 68, 68, 67, 67, 67, 67, 67, 66, 66,
126 66, 66, 66, 66, 65, 65, 65, 65, 65, 64, 64, 64, 64, 64, 64, 64,
127 63, 63, 63, 63, 63, 63, 63, 62, 62, 61, 61, 61, 60, 60, 60, 60,
128 59, 59, 59, 59, 59, 58, 58, 58, 58, 57, 57, 57, 57, 57, 57, 57,
129 57, 56, 56, 56, 56, 56, 55, 55, 55, 54, 54, 54, 53, 53, 53, 52,
130 52, 52, 52, 52, 52, 52, 51, 51, 51, 50, 50, 50, 50, 50, 50, 50,
131 49, 49, 49, 49, 49, 48, 48, 48, 48, 48, 48, 48, 48, 48, 47, 47,
132 47, 47, 47, 47, 46, 46, 46, 46, 46, 46, 45, 45, 45, 45, 45, 45,
133 45, 44, 44, 44, 44, 44, 44, 44, 43, 43, 43, 43, 43, 43, 43, 43,
134 43, 43, 42, 42, 42, 42, 42, 42, 42, 41, 41, 41, 41, 40, 40, 40,
135 40, 39, 39, 39, 39, 39, 38, 38, 38, 38, 38, 38, 37, 37, 37, 36,
136 36, 36, 36, 36, 35, 35, 35, 35, 35, 35, 35, 35, 34, 34, 34, 33,
137 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 31, 31, 31, 31,
138 31, 31, 30, 30, 30, 30, 30, 29, 29, 29, 29, 28, 28, 28, 28, 28,
139 28, 28, 28, 27, 27, 27, 27, 26, 26, 26, 26, 26, 26, 26, 26, 26,
140 26, 25, 25, 25, 25, 25, 25, 25, 24, 24, 24, 23, 23, 23, 23, 23,
141 23, 23, 23, 23, 22, 22, 22, 22, 22, 21, 21, 21, 21, 21, 21, 21,
142 21, 20, 20, 20, 19, 19, 18, 18, 18, 17, 17, 17, 17, 16, 16, 16,
143 15, 15, 14, 14, 14, 14, 14, 14, 13, 13, 13, 13, 13, 13, 13, 12,
144 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10,
145 9, 9, 9, 8, 8, 8, 8, 8, 8, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 5, 5,
146 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1,
147 1, }, 163, "N4C3W1_L");//$NON-NLS-1$
148
149 /** a list with the bin packing instances */
150 public static final BinPackingInstance[] ALL_INSTANCES = new BinPackingInstance[] {
151 N1C1W1_A, N1C2W2_D, N2C1W1_A, N2C3W2_C, N3C3W4_M, N4C2W1_I, N4C3W1_L };
152
153 /** the size of each bin */
154 public final int b;
155
156 /** the sizes of the objects, sorted from the smallest to largest */
157 public final int[] a;
158
159 /**
160 * the minimum number of bins of size b known to be able to accommodate
161 * the objects of sizes a; -1 if unknown
162 */
163 public final int bestK;
164
165 /** the instance name, null if unknown */
166 public final String name;
167
168 /**
169 * Instantiate the bin packing instance
170 *
171 * @param pb
172 * the bin size
173 * @param pa
174 * the object sizes
175 * @param k
176 * the best solution; -1 if unknown
177 * @param pname
178 * the instance name, null if unknown
179 */
180 public BinPackingInstance(final int pb, final int[] pa, final int k,
181 final String pname) {
182 super();
183 this.b = pb;
184 this.a = pa.clone();
185 Arrays.sort(this.a);
186 this.bestK = k;
187 this.name = pname;
188 }
189
190 /**
191 * Instantiate the bin packing instance
192 *
193 * @param copy
194 * the bin packing instance to copy
195 */
196 public BinPackingInstance(final BinPackingInstance copy) {
197 this(copy.b, copy.a, copy.bestK, copy.name);
198 }
199
200 /**
201 * Instantiate the bin packing instance
202 *
203 * @param copy
204 * the bin packing instance to copy
205 * @param on
206 * the override name
207 */
208 BinPackingInstance(final BinPackingInstance copy, final String on) {
209 this(copy.b, copy.a, copy.bestK, (copy.name == null) ? on : (on + '('
210 + copy.name + ')'));
211 }
212
213 /**
214 * Get the name of the optimization module
215 *
216 * @param longVersion
217 * true if the long name should be returned, false if the short
218 * name should be returned
219 * @return the name of the optimization module
220 */
221 @Override
222 public String getName(final boolean longVersion) {
223 if (this.name != null) {
224 return this.name;
225 }
226
227 return super.getName(longVersion);
228 }
229
230 /**
231 * Get the full configuration which holds all the data necessary to
232 * describe this object.
233 *
234 * @param longVersion
235 * true if the long version should be returned, false if the
236 * short version should be returned
237 * @return the full configuration
238 */
239 @Override
240 public String getConfiguration(final boolean longVersion) {
241 boolean bx;
242 StringBuilder sb;
243
244 bx = (this.getClass() != BinPackingInstance.class);
245
246 if (bx && (this.name != null)) {
247 return "";//$NON-NLS-1$
248 }
249
250 sb = new StringBuilder();
251
252 if (!bx) {
253 if (longVersion) {
254 sb.append("sizes=");//$NON-NLS-1$
255 }
256 TextUtils.toStringBuilder(this.a, sb);
257 sb.append(',');
258 }
259
260 if (longVersion) {
261 sb.append("bin=");//$NON-NLS-1$
262 }
263 sb.append(this.b);
264
265 sb.append(',');
266 if ((this.bestK > 0) && (this.bestK < Integer.MAX_VALUE)) {
267 if (longVersion) {
268 sb.append("best=");//$NON-NLS-1$
269 }
270 sb.append(this.bestK);
271 }
272
273 return sb.toString();
274 }
275
276 }

Listing 57.10: The first objective function f1 from Task 67.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11 * The first objective function (f1) mentioned in the bin packing
12 * Task 67 on page 238. An objective function which
13 * computes the number of bins of size b required to store n objects with
14 * the sizes a0 to an-1 according to a given sequence x. It takes a
15 * permutation x of the first n natural numbers (or better, from 0 to n-1)
16 * as argument and returns the number of bins required to accommodate the
17 * objects.
18 *
19 * @author Thomas Weise
20 */
21 public class BinPackingNumberOfBins extends BinPackingInstance implements
22 IObjectiveFunction<int[]> {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /**
28 * Instantiate the bin packing objective function with the raw
29 * parameters.
30 *
31 * @param pb
32 * the bin size
33 * @param pa
34 * the object sizes
35 */
36 public BinPackingNumberOfBins(final int pb, final int[] pa) {
37 super(pb, pa, -1, "f1"); //$NON-NLS-1$
38 }
39
40 /**
41 * Instantiate the bin packing objective function based on an instance
42 * object
43 *
44 * @param bpi
45 * the bin packing instance
46 */
47 public BinPackingNumberOfBins(final BinPackingInstance bpi) {
48 super(bpi, "f1");//$NON-NLS-1$
49 }
50
51 /**
52 * Compute the number k of bins required by the object permutation x.
53 *
54 * @param x
55 * the phenotype to be rated
56 * @param r
57 * the randomizer
58 * @return the number k of bins
59 */
60 public final double compute(final int[] x, final Random r) {
61 int k, free, size, i;
62
63 free = 0;
64 k = 0;
65 // for each object in the sequence
66 for (i = 0; i < x.length; i++) {
67 // get the size of the current object
68 size = this.a[x[i]];
69
70 // does the object fit into the current bin?
71 if (size <= free) {
72 // if so, subtract its size from the remaining free size
73 free -= size;
74 } else {
75 // otherwise, start a new bin and put the object into it
76 free = (this.b - size);
77 k++;
78 }
79 }
80
81 return k;
82 }
83 }

Listing 57.11: The second objective function f2 from Task 67.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11 * The second objective function (f2) mentioned in the bin packing
12 * Task 67 on page 238. An objective function which
13 * computes the square of the free space left over in the bins of size b
14 * required to store n objects with the sizes a0 to an-1 according to a
15 * given sequence x. It takes a permutation x of the first n natural
16 * numbers (or better, from 0 to n-1) as argument and returns the free
17 * space left over in the bins.
18 *
19 * @author Thomas Weise
20 */
21 public class BinPackingFreeSpace extends BinPackingInstance implements
22 IObjectiveFunction<int[]> {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /**
28 * Instantiate the bin packing objective function with the raw parameters
29 *
30 * @param pb
31 * the bin size
32 * @param pa
33 * the object sizes
34 */
35 public BinPackingFreeSpace(final int pb, final int[] pa) {
36 super(pb, pa, -1, "f2"); //$NON-NLS-1$
37 }
38
39 /**
40 * Instantiate the bin packing objective function based on an instance
41 * object
42 *
43 * @param bpi
44 * the bin packing instance
45 */
46 public BinPackingFreeSpace(final BinPackingInstance bpi) {
47 super(bpi, "f2");//$NON-NLS-1$
48 }
49
50 /**
51 * Compute the square of the wasted space in the bins required by the
52 * object permutation x.
53 *
54 * @param x
55 * the phenotype to be rated
56 * @param r
57 * the randomizer
58 * @return the square of the space left free in the bins
59 */
60 public final double compute(final int[] x, final Random r) {
61 int leftOver, free, size, i;
62
63 free = 0;
64 leftOver = 0;
65 // for each object in the sequence
66 for (i = 0; i < x.length; i++) {
67 // get the size of the current object
68 size = this.a[x[i]];
69
70 // does the object fit into the current bin?
71 if (size <= free) {
72 // if so, subtract its size from the remaining free size
73 free -= size;
74 } else {
75 // add up the left-over, wasted space
76 leftOver += free;
77 // otherwise, start a new bin and put the object into it
78 free = (this.b - size);
79 }
80 }
81
82 return (((double) leftOver) * leftOver);
83 }
84 }

Listing 57.12: The third objective function f3 from Task 67.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11 * The third objective function (f3) mentioned in the bin packing
12 * Task 67 on page 238. It combines the objectives
13 * BinPackingNumberOfBins(f1) and BinPackingFreeSpace (f2) to f3(x)= f1(x)
14 * - 1/(1+f2(x))
15 *
16 * @author Thomas Weise
17 */
18 public class BinPackingNumberOfBinsAndFreeSpace extends BinPackingInstance
19 implements IObjectiveFunction<int[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /**
25 * Instantiate the bin packing objective function with the raw parameters
26 *
27 * @param pb
28 * the bin size
29 * @param pa
30 * the object sizes
31 */
32 public BinPackingNumberOfBinsAndFreeSpace(final int pb, final int[] pa) {
33 super(pb, pa, -1, "f3"); //$NON-NLS-1$
34 }
35
36 /**
37 * Instantiate the bin packing objective function based on an instance
38 * object
39 *
40 * @param bpi
41 * the bin packing instance
42 */
43 public BinPackingNumberOfBinsAndFreeSpace(final BinPackingInstance bpi) {
44 super(bpi, "f3");//$NON-NLS-1$
45 }
46
47 /**
48 * Compute f3(x)= f1(x) - 1/(1+f2(x)).
49 *
50 * @param x
51 * the phenotype to be rated
52 * @param r
53 * the randomizer
54 * @return the objective value f3(x)
55 */
56 public final double compute(final int[] x, final Random r) {
57 int k, leftOver, free, size, i;
58
59 free = 0;
60 leftOver = 0;
61 k = 0;
62 // for each object in the sequence
63 for (i = 0; i < x.length; i++) {
64 // get the size of the current object
65 size = this.a[x[i]];
66
67 // does the object fit into the current bin?
68 if (size <= free) {
69 // if so, subtract its size from the remaining free size
70 free -= size;
71 } else {
72 // add up the left over (waste) space
73 leftOver += free;
74 // otherwise, start a new bin and put the object into it
75 free = (this.b - size);
76 k++;
77 }
78 }
79
80 // f3(x)= f1(x) - 1/(1+f2(x))
81 return (k - ((1d / ((((double) leftOver) * leftOver) + 1d))));
82 }
83 }

Listing 57.13: The fifth objective function f5 from Task 67.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11 * The fifth objective function (f5) mentioned in the bin packing
12 * Task 67 on page 238. An objective function which
13 * computes the number of bins of size b required to store n objects with
14 * the sizes a0 to an-1 according to a given sequence x. It takes a
15 * permutation x of the first n natural numbers (or better, from 0 to n-1)
16 * as argument and returns the number of bins required to accommodate the
17 * objects. Different from f1, it also considers the space left over in the
18 * last bin. The idea is that if the space wasted in the last bin
19 * gradually increases, sooner or later, the next object will fit into it
20 * or the bin may disappear altogether.
21 *
22 * @author Thomas Weise
23 */
24 public class BinPackingNumberOfBinsAndLastFree extends BinPackingInstance
25 implements IObjectiveFunction<int[]> {
26
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /**
31 * Instantiate the bin packing objective function with the raw
32 * parameters.
33 *
34 * @param pb
35 * the bin size
36 * @param pa
37 * the object sizes
38 */
39 public BinPackingNumberOfBinsAndLastFree(final int pb, final int[] pa) {
40 super(pb, pa, -1, "f5"); //$NON-NLS-1$
41 }
42
43 /**
44 * Instantiate the bin packing objective function based on an instance
45 * object
46 *
47 * @param bpi
48 * the bin packing instance
49 */
50 public BinPackingNumberOfBinsAndLastFree(final BinPackingInstance bpi) {
51 super(bpi, "f5");//$NON-NLS-1$
52 }
53
54 /**
55 * Compute the number k of bins required by the object permutation x and
56 * include the space wasted in the last bin in the objective value.
57 *
58 * @param x
59 * the phenotype to be rated
60 * @param r
61 * the randomizer
62 * @return the number k of bins plus the tie-breaking fraction
63 */
64 public final double compute(final int[] x, final Random r) {
65 int k, free, size, i;
66
67 free = 0;
68 k = 0;
69 // for each object in the sequence
70 for (i = 0; i < x.length; i++) {
71 // get the size of the current object
72 size = this.a[x[i]];
73
74 // does the object fit into the current bin?
75 if (size <= free) {
76 // if so, subtract its size from the remaining free size
77 free -= size;
78 } else {
79 // otherwise, start a new bin and put the object into it
80 free = (this.b - size);
81 k++;
82 }
83 }
84
85 // We want to minimize the number of bins and maximize the free space
86 // in the last bin (now stored in "free"). k is a natural number and
87 // should be the dominating factor. We can add a value in [0,1) to
88 // represent the free space. This factor should be the smaller, the
89 // larger the free space is. By choosing 1/(1+free), we can realize
90 // this. Amongst two solutions which have the same number of bins, now
91 // the one with more free space will be preferred.
92 return k + (1d / (1d + free));
93 }
94 }
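The role of the 1/(1 + free) term in the return statement above can be illustrated with a stand-alone sketch (the numbers are made up): two packings that need the same number of bins are distinguished by the free space left in their last bin.

```java
public class TieBreakSketch {

  public static void main(final String[] args) {
    final int k = 3;     // both packings need three bins
    final int freeA = 0; // packing A: last bin completely full
    final int freeB = 4; // packing B: four units left in the last bin
    final double fA = k + (1d / (1d + freeA)); // 4.0
    final double fB = k + (1d / (1d + freeB)); // about 3.2
    // k dominates, but among equal-k packings the one with more free
    // space in its last bin gets the smaller (better) objective value
    System.out.println(fA > fB); // prints true
  }
}
```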

Listing 57.14: The fourth objective function f4 from Task 67.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11 * The fourth objective function (f4) mentioned in the bin packing
12 * Task 67 on page 238. An objective function which
13 * computes the number of bins of size b required to store n objects with
14 * the sizes a0 to an-1 according to a given sequence x. It takes a
15 * permutation x of the first n natural numbers (or better, from 0 to n-1)
16 * as argument and returns the number of bins required to accommodate the
17 * objects. Different from f1, it also considers the maximum space left
18 * over in any bin. The idea is that if the space wasted in one bin
19 * gradually increases, sooner or later, the next object will fit into the
20 * bin or the bin may disappear altogether.
21 *
22 * @author Thomas Weise
23 */
24 public class BinPackingNumberOfBinsAndMaxFree extends BinPackingInstance
25 implements IObjectiveFunction<int[]> {
26
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /**
31 * Instantiate the bin packing objective function with the raw
32 * parameters.
33 *
34 * @param pb
35 * the bin size
36 * @param pa
37 * the object sizes
38 */
39 public BinPackingNumberOfBinsAndMaxFree(final int pb, final int[] pa) {
40 super(pb, pa, -1, "f4"); //$NON-NLS-1$
41 }
42
43 /**
44 * Instantiate the bin packing objective function based on an instance
45 * object
46 *
47 * @param bpi
48 * the bin packing instance
49 */
50 public BinPackingNumberOfBinsAndMaxFree(final BinPackingInstance bpi) {
51 super(bpi, "f4");//$NON-NLS-1$
52 }
53
54 /**
55 * Compute the number k of bins required by the object permutation x and
56 * include the maximum space wasted in any bin in the objective value.
57 *
58 * @param x
59 * the phenotype to be rated
60 * @param r
61 * the randomizer
62 * @return the number k of bins plus the tie-breaking fraction
63 */
64 public final double compute(final int[] x, final Random r) {
65 int k, maxFree, maxFreeT, free, size, i;
66
67 free = 0;
68 k = 0;
69 maxFree = 0;
70 maxFreeT = 0;
71 // for each object in the sequence
72 for (i = 0; i < x.length; i++) {
73 // get the size of the current object
74 size = this.a[x[i]];
75
76 // does the object fit into the current bin?
77 if (size <= free) {
78 // if so, subtract its size from the remaining free size
79 free -= size;
80 } else {
81 if (free > maxFree) {
82 maxFree = free;
83 maxFreeT = 1;
84 } else {
85 if (free == maxFree) {
86 maxFreeT++;
87 }
88 }
89 // otherwise, start a new bin and put the object into it
90 free = (this.b - size);
91 k++;
92 }
93 }
94
95 // We want to minimize the number of bins and maximize the maximum free
96 // space (now stored in "maxFree"). k is a natural number and should be
97 // the dominating factor. We can add a value in [0,1) to represent the
98 // free space. This factor should be the smaller, the larger the free
99 // space is. By choosing 1/(1+maxFreeT*maxFree), we can realize this.
100 // Amongst two solutions which have the same number of bins, now the
101 // one with more free space will be preferred.
102 return k + (1d / (1d + (maxFreeT * maxFree)));
103 }
104 }

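The tie-breaking scheme described in the comments of the compute method can be condensed into a tiny standalone sketch. The class and method names below are hypothetical and not part of the goataa framework; the code only illustrates how the integer bin count k dominates while the term 1/(1+maxFreeT*maxFree), which lies in [0,1), prefers solutions whose largest free block is bigger:

```java
// Hypothetical sketch of the f4 objective combination: k dominates, the
// fractional term breaks ties among solutions with equal bin counts.
public final class F4Sketch {

  // combine the bin count k with the free-space tie-breaking term
  static double objective(final int k, final int maxFree, final int maxFreeT) {
    return k + (1d / (1d + (maxFreeT * maxFree)));
  }

  public static void main(final String[] args) {
    // same bin count: the solution with the larger maximal free block
    // receives the smaller (better) objective value
    System.out.println(objective(28, 40, 1) < objective(28, 5, 1)); // true
    // fewer bins always win, regardless of the free space
    System.out.println(objective(27, 0, 0) < objective(28, 40, 1)); // true
  }
}
```

Because the fractional term never reaches 1, it can never overrule a difference in the bin count itself.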
57.1.2.1.2 Experiment

Listing 57.15: Solving the Bin Packing problem.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.List;
7
8 import org.goataa.impl.algorithms.RandomSampling;
9 import org.goataa.impl.algorithms.RandomWalk;
10 import org.goataa.impl.algorithms.de.DifferentialEvolution1;
11 import org.goataa.impl.algorithms.es.EvolutionStrategy;
12 import org.goataa.impl.algorithms.hc.HillClimbing;
13 import org.goataa.impl.algorithms.sa.SimulatedAnnealing;
14 import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
15 import org.goataa.impl.algorithms.sa.temperatureSchedules.Logarithmic;
16 import org.goataa.impl.gpms.RandomKeys;
17 import org.goataa.impl.searchOperations.strings.permutation.nullary.IntPermutationUniformCreation;
18 import org.goataa.impl.searchOperations.strings.permutation.unary.IntPermutationMultiSwapMutation;
19 import org.goataa.impl.searchOperations.strings.permutation.unary.IntPermutationSingleSwapMutation;
20 import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEbin;
21 import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEexp;
22 import org.goataa.impl.termination.StepLimit;
23 import org.goataa.impl.utils.BufferedStatistics;
24 import org.goataa.impl.utils.Individual;
25 import org.goataa.spec.INullarySearchOperation;
26 import org.goataa.spec.IObjectiveFunction;
27 import org.goataa.spec.ISOOptimizationAlgorithm;
28 import org.goataa.spec.ITemperatureSchedule;
29 import org.goataa.spec.ITernarySearchOperation;
30 import org.goataa.spec.IUnarySearchOperation;
31
32 /**
33 * An attempt to solve the bin packing problem introduced in
34 * Example E1.1 on page 25 as stated in
35 * Task 67 on page 238 and
36 * Task 70 on page 252 with some different optimization
37 * algorithms, objective functions, and search operations.
38 *
39 * @author Thomas Weise
40 */
41 public class BinPackingTest {
42
43 /** the objective functions */
44 static IObjectiveFunction<int[]>[] FS;
45
46 /**
47 * The main routine which executes the experiment described in
48 * Task 67 on page 238
49 *
50 * @param args
51 * the command line arguments which are ignored here
52 */
53 @SuppressWarnings("unchecked")
54 public static final void main(final String[] args) {
55 BinPackingInstance inst;
56 INullarySearchOperation<int[]> create;
57 final IUnarySearchOperation<int[]>[] mutate;
58 final int maxSteps, runs;
59 ITemperatureSchedule[] schedules;
60 int i, j, k, l, mi, li, ri;
61 HillClimbing<int[], int[]> HC;
62 RandomWalk<int[], int[]> RW;
63 RandomSampling<int[], int[]> RS;
64 SimulatedAnnealing<int[], int[]> SA;
65 EvolutionStrategy<int[]> ES;
66 DifferentialEvolution1<int[]> DE;
67 ITernarySearchOperation[] tern;
68 int[] rhos, lambdas, mus;
69
70 maxSteps = 100000;
71 runs = 100;
72
73 System.out.println("Maximum Number of Steps: " + maxSteps); // $NON-NLS-1$
74 System.out.println("Number of Runs : " + runs); // $NON-NLS-1$
75 System.out.println();
76
77 mutate = new IUnarySearchOperation[] {//
78 IntPermutationSingleSwapMutation.INT_PERMUTATION_SINGLE_SWAP_MUTATION,//
79 IntPermutationMultiSwapMutation.INT_PERMUTATION_MULTI_SWAP_MUTATION };
80
81 HC = new HillClimbing<int[], int[]>();
82 SA = new SimulatedAnnealing<int[], int[]>();
83 RW = new RandomWalk<int[], int[]>();
84 RS = new RandomSampling<int[], int[]>();
85 ES = new EvolutionStrategy<int[]>();
86 ES.setGPM(new RandomKeys());
87 DE = new DifferentialEvolution1<int[]>();
88 DE.setGPM(new RandomKeys());
89
90 rhos = new int[] { 2, 10 };
91 lambdas = new int[] { 50, 100, 200 };
92 mus = new int[] { 10, 20 };
93
94 for (i = 0; i < BinPackingInstance.ALL_INSTANCES.length; i++) {
95 inst = BinPackingInstance.ALL_INSTANCES[i];
96 create = new IntPermutationUniformCreation(inst.a.length);
97
98 FS = new IObjectiveFunction[] {//
99 new BinPackingNumberOfBins(inst),//
100 new BinPackingFreeSpace(inst),//
101 new BinPackingNumberOfBinsAndFreeSpace(inst),//
102 new BinPackingNumberOfBinsAndLastFree(inst),//
103 new BinPackingNumberOfBinsAndMaxFree(inst) };//
104
105 System.out.println();
106 System.out.println();
107 System.out.println("Instance Name : " + inst.name); // $NON-NLS-1$
108 System.out.println("Best Possible Solution : " + inst.bestK); // $NON-NLS-1$
109 System.out.println();
110
111 // Hill Climbing ( Algorithm 26.1 on page 230)
112 HC.setNullarySearchOperation(create);
113 for (k = 0; k < mutate.length; k++) {
114 HC.setUnarySearchOperation(mutate[k]);
115 for (j = 0; j < FS.length; j++) {
116 HC.setObjectiveFunction(FS[j]);
117 testRuns(HC, runs, maxSteps);
118 }
119 }
120
121 // Simulated Annealing ( Algorithm 27.1 on page 245)
122 // and Simulated Quenching ( Section 27.3.2 on page 249)
123 SA.setNullarySearchOperation(create);
124
125 schedules = new ITemperatureSchedule[] {//
126 new Logarithmic(inst.a.length - inst.bestK + 1),//
127 new Logarithmic(2 * (inst.a[inst.a.length - 1] - inst.a[0])),//
128 new Exponential(inst.a.length - inst.bestK + 1, 1e-5),//
129 new Exponential(2 * (inst.a[inst.a.length - 1] - inst.a[0]),
130 1e-5) //
131 };
132 for (l = 0; l < schedules.length; l++) {
133 SA.setTemperatureSchedule(schedules[l]);
134 for (k = 0; k < mutate.length; k++) {
135 SA.setUnarySearchOperation(mutate[k]);
136 for (j = 0; j < FS.length; j++) {
137 SA.setObjectiveFunction(FS[j]);
138 testRuns(SA, runs, maxSteps);
139 }
140 }
141 }
142
143 // Evolution Strategies: Algorithm 30.1 on page 361
144 ES.setNullarySearchOperation(//
145 new org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation(
146 inst.a.length, 0d, 1d));
147 ES.setDimension(inst.a.length);
148 ES.setMinimum(0d);
149 ES.setMaximum(1d);
150
151 for (j = 0; j < FS.length; j++) {
152 ES.setObjectiveFunction(FS[j]);
153
154 for (mi = 0; mi < mus.length; mi++) {
155 ES.setMu(mus[mi]);
156 for (li = 0; li < lambdas.length; li++) {
157 ES.setLambda(lambdas[li]);
158 for (ri = 0; ri < rhos.length; ri++) {
159 ES.setRho(rhos[ri]);
160
161 ES.setPlus(true);
162 testRuns(ES, runs, maxSteps);
163
164 ES.setPlus(false);
165 testRuns(ES, runs, maxSteps);
166 }
167 }
168 }
169 }
170
171 // Differential Evolution: Chapter 33 on page 419
172 tern = new ITernarySearchOperation[] {
173 new DoubleArrayDEbin(0d, 1d, 0.3d, 0.7d),
174 new DoubleArrayDEbin(0d, 1d, 0.7d, 0.3d),
175 new DoubleArrayDEbin(0d, 1d, 0.5d, 0.5d),
176 new DoubleArrayDEexp(0d, 1d, 0.3d, 0.7d),
177 new DoubleArrayDEexp(0d, 1d, 0.7d, 0.3d),
178 new DoubleArrayDEexp(0d, 1d, 0.5d, 0.5d), };
179
180 for (j = 0; j < FS.length; j++) {
181 DE.setObjectiveFunction(FS[j]);
182 DE.setNullarySearchOperation(//
183 new org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation(
184 inst.a.length, 0d, 1d));
185 for (li = 0; li < lambdas.length; li++) {
186 DE.setPopulationSize(lambdas[li]);
187 for (mi = 0; mi < tern.length; mi++) {
188 DE.setTernarySearchOperation(tern[mi]);
189 testRuns(DE, runs, maxSteps);
190 }
191 }
192 }
193
194 // Random Walks ( Section 8.2 on page 114)
195 RW.setNullarySearchOperation(create);
196 for (k = 0; k < mutate.length; k++) {
197 RW.setUnarySearchOperation(mutate[k]);
198 RW.setObjectiveFunction(FS[0]);
199 testRuns(RW, runs, maxSteps);
200 }
201
202 // Random Sampling ( Section 8.1 on page 113)
203 RS.setNullarySearchOperation(create);
204 RS.setObjectiveFunction(FS[0]);
205 testRuns(RS, runs, maxSteps);
206
207 }
208 }
209
210 /**
211 * Perform the test runs
212 *
213 * @param algorithm
214 * the algorithm configuration to test
215 * @param runs
216 * the number of runs to perform
217 * @param steps
218 * the number of steps to execute per run
219 */
220 @SuppressWarnings("unchecked")
221 private static final void testRuns(
222 final ISOOptimizationAlgorithm<?, int[], ?> algorithm,
223 final int runs, final int steps) {
224 int i;
225 BufferedStatistics stat;
226 List<Individual<?, int[]>> solutions;
227 Individual<?, int[]> individual;
228
229 stat = new BufferedStatistics();
230 algorithm.setTerminationCriterion(new StepLimit(steps));
231
232 for (i = 0; i < runs; i++) {
233 algorithm.setRandSeed(i);
234 solutions = (List) (algorithm.call());
235 individual = solutions.get(0);
236 stat.add(FS[0].compute(individual.x, null));
237 }
238
239 System.out.println(stat.getConfiguration(false) + ' '
240 + algorithm.toString(false));
241 }
242 }
1 Maximum Number of Steps : 100000
2 Number of Runs          : 100
3
4
5
6 Instance Name           : N1C1W1_A
7 Best Possible Solution  : 25
8
9 min=2.80E01 med=2.90E01 avg=2.92E01 max=3.10E01 HC(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r)
10 min=2.70E01 med=2.70E01 avg=2.75E01 max=2.80E01 HC(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r)
11 min=2.70E01 med=2.70E01 avg=2.75E01 max=2.80E01 HC(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r)
12 min=2.70E01 med=2.70E01 avg=2.75E01 max=2.80E01 HC(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r)
13 min=2.70E01 med=2.80E01 avg=2.81E01 max=3.00E01 HC(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r)
14 min=2.70E01 med=2.80E01 avg=2.84E01 max=2.90E01 HC(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r)
15 min=2.70E01 med=2.70E01 avg=2.72E01 max=2.80E01 HC(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r)
16 min=2.70E01 med=2.70E01 avg=2.72E01 max=2.80E01 HC(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r)
17 min=2.70E01 med=2.70E01 avg=2.72E01 max=2.80E01 HC(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r)
18 min=2.70E01 med=2.80E01 avg=2.76E01 max=2.90E01 HC(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r)
19 min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, log(28))
20 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, log(28))
21 min=2.90E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, log(28))
22 min=2.90E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, log(28))
23 min=2.90E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, log(28))
24 min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, log(28))
25 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, log(28))
26 min=2.80E01 med=2.90E01 avg=2.90E01 max=2.90E01 SA(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, log(28))
27 min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, log(28))
28 min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, log(28))
29 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, log(194))
30 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, log(194))
31 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, log(194))
32 min=2.80E01 med=2.90E01 avg=2.93E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, log(194))
33 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, log(194))
34 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, log(194))
35 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, log(194))
36 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, log(194))
37 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, log(194))
38 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, log(194))
39 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
40 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
41 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
42 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
43 min=2.80E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
44 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
45 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
46 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
47 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
48 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
49 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
50 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
51 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
52 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
53 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
54 min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
55 min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
56 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
57 min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
58 min=2.80E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))

Listing 57.16: An example output of the Bin Packing test program.

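The Evolution Strategy and Differential Evolution configurations in Listing 57.15 search in real vectors and rely on a Random Keys genotype-phenotype mapping to obtain object permutations. Conceptually, this mapping is an argsort over the key vector. The class below is a hypothetical standalone illustration of that decoding step, not the goataa RandomKeys implementation:

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of Random Keys decoding: a real vector is turned into
// a permutation by ranking its components in ascending order.
public final class RandomKeysSketch {

  // return the permutation of indices that sorts the key vector ascendingly
  static int[] decode(final double[] keys) {
    final Integer[] idx = new Integer[keys.length];
    for (int i = 0; i < idx.length; i++) {
      idx[i] = Integer.valueOf(i);
    }
    Arrays.sort(idx, Comparator.comparingDouble(i -> keys[i]));
    final int[] perm = new int[keys.length];
    for (int i = 0; i < perm.length; i++) {
      perm[i] = idx[i].intValue();
    }
    return perm;
  }

  public static void main(final String[] args) {
    // keys 0.9, 0.1, 0.5: object 1 comes first, then 2, then 0
    System.out.println(Arrays.toString(decode(new double[] { 0.9, 0.1, 0.5 })));
    // prints [1, 2, 0]
  }
}
```

Any real vector decodes to a valid permutation, which is what allows real-vector operators such as DE's ternary recombination to be applied to a permutation problem.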
57.1.2.2 Example instances of the Traveling Salesman Problem.

Example instances for the Traveling Salesman Problem.

57.1.2.2.1 The Problem att48.

A Traveling Salesman Problem which corresponds to finding the optimal tour through
the 48 capitals of the US. This problem has been taken from
http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp/. In this
instance, each of the 48 cities is represented by a coordinate pair and the distance
between two cities is the pseudo-Euclidean ATT distance (as defined by TSPLIB) of
their coordinates. The length of the optimal tour for this problem is 10628.

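TSPLIB defines the ATT edge weight as a pseudo-Euclidean distance: r = sqrt((dx^2 + dy^2)/10), rounded to the nearest integer and incremented by one if rounding fell below r. The helper below is a standalone sketch of this rule (it is not part of the book's framework):

```java
// Hypothetical sketch of the TSPLIB "ATT" pseudo-Euclidean distance.
public final class AttDistance {

  // ATT distance between the points (x1,y1) and (x2,y2)
  static int att(final int x1, final int y1, final int x2, final int y2) {
    final double xd = x1 - x2;
    final double yd = y1 - y2;
    final double r = Math.sqrt(((xd * xd) + (yd * yd)) / 10.0d);
    final int t = (int) Math.rint(r);
    // round up when nearest-integer rounding fell short of r
    return (t < r) ? (t + 1) : t;
  }

  public static void main(final String[] args) {
    // distance between att48 cities 1 (6734,1453) and 2 (2233,10)
    System.out.println(att(6734, 1453, 2233, 10)); // prints 1495
  }
}
```

Note that this distance is symmetric but, due to the division by 10 and the rounding rule, not the plain Euclidean distance of the coordinates.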
Listing 57.17: The data of the att48 problem instance.


1 NAME : att48
2 COMMENT : 48 capitals of the US (Padberg/Rinaldi)
3 TYPE : TSP
4 DIMENSION : 48
5 EDGE_WEIGHT_TYPE : ATT
6 NODE_COORD_SECTION
7 1 6734 1453
8 2 2233 10
9 3 5530 1424
10 4 401 841
11 5 3082 1644
12 6 7608 4458
13 7 7573 3716
14 8 7265 1268
15 9 6898 1885
16 10 1112 2049
17 11 5468 2606
18 12 5989 2873
19 13 4706 2674
20 14 4612 2035
21 15 6347 2683
22 16 6107 669
23 17 7611 5184
24 18 7462 3590
25 19 7732 4723
26 20 5900 3561
27 21 4483 3369
28 22 6101 1110
29 23 5199 2182
30 24 1633 2809
31 25 4307 2322
32 26 675 1006
33 27 7555 4819
34 28 7541 3981
35 29 3177 756
36 30 7352 4506
37 31 7545 2801
38 32 3245 3305
39 33 6426 3173
40 34 4608 1198
41 35 23 2216
42 36 7248 3779
43 37 7762 4595
44 38 7392 2244
45 39 3484 2829
46 40 6271 2135
47 41 4985 140
48 42 1916 1569
49 43 7280 4899
50 44 7509 3239
51 45 10 2676
52 46 6807 2993
53 47 5185 3258
54 48 3023 1942
55 EOF

57.2 Demos for Genetic Programming


Examples for (strongly-typed, tree-based) Genetic Programming as defined in Section 31.3
on page 386.

57.2.1 Demos for Genetic Programming Applied to Mathematical Problems

57.2.1.1 Mathematical Problems defined over the Real Numbers

Examples for synthesizing mathematical formulas.

57.2.1.1.1 Symbolic Regression

An example for Symbolic Regression.

Listing 57.18: A single-objective test program for symbolic regression.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.trees.math.real.sr;
5
6 import org.goataa.impl.algorithms.ea.SimpleGenerationalEA;
7 import org.goataa.impl.algorithms.ea.selection.TournamentSelection;
8 import org.goataa.impl.gpms.IdentityMapping;
9 import org.goataa.impl.searchOperations.trees.binary.TreeRecombination;
10 import org.goataa.impl.searchOperations.trees.nullary.TreeRampedHalfAndHalf;
11 import org.goataa.impl.searchOperations.trees.unary.TreeMutator;
12 import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
13 import org.goataa.impl.searchSpaces.trees.ReflectionNodeType;
14 import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
15 import org.goataa.impl.searchSpaces.trees.math.real.arith.Abs;
16 import org.goataa.impl.searchSpaces.trees.math.real.arith.Add;
17 import org.goataa.impl.searchSpaces.trees.math.real.arith.Div;
18 import org.goataa.impl.searchSpaces.trees.math.real.arith.Exp;
19 import org.goataa.impl.searchSpaces.trees.math.real.arith.Mul;
20 import org.goataa.impl.searchSpaces.trees.math.real.arith.Pow;
21 import org.goataa.impl.searchSpaces.trees.math.real.arith.Sqrt;
22 import org.goataa.impl.searchSpaces.trees.math.real.arith.Sub;
23 import org.goataa.impl.searchSpaces.trees.math.real.basic.ConstantType;
24 import org.goataa.impl.searchSpaces.trees.math.real.basic.VariableType;
25 import org.goataa.impl.searchSpaces.trees.math.real.trig.Sin;
26 import org.goataa.impl.termination.StepLimit;
27 import org.goataa.impl.utils.Individual;
28 import org.goataa.spec.IBinarySearchOperation;
29 import org.goataa.spec.IGPM;
30 import org.goataa.spec.INullarySearchOperation;
31 import org.goataa.spec.IObjectiveFunction;
32 import org.goataa.spec.IUnarySearchOperation;
33
34 /**
35 * An example test program for Symbolic Regression as discussed in
36 * Section 49.1 on page 531.
37 *
38 * @author Thomas Weise
39 */
40 public class SRTest extends Examples {
41
42 /** the training cases */
43 static final TrainingCase[] DATA = createTrainingCases(F1, 250);
44
45 /**
46 * The main routine
47 *
48 * @param args
49 * the command line arguments which are ignored here
50 */
51 @SuppressWarnings("unchecked")
52 public static final void main(final String[] args) {
53 final IObjectiveFunction<RealFunction> f;
54 final INullarySearchOperation<RealFunction> create;
55 final IUnarySearchOperation<RealFunction> mutate;
56 final IBinarySearchOperation<RealFunction> crossover;
57 final NodeTypeSet<RealFunction> nts;
58 final NodeTypeSet<RealFunction>[] binary, unary;
59 final SimpleGenerationalEA<RealFunction, RealFunction> EA;
60 Individual<RealFunction, RealFunction> ind;
61
62 f = new SRObjectiveFunction2(DATA);
63
64 nts = new NodeTypeSet<RealFunction>();
65 binary = new NodeTypeSet[] { nts, nts };
66 unary = new NodeTypeSet[] { nts };
67 nts.add(new VariableType(DATA[0].data.length));
68 nts.add(ConstantType.DEFAULT_CONSTANT_TYPE);
69 nts.add(new ReflectionNodeType<Add, RealFunction>(Add.class, binary));
70 nts.add(new ReflectionNodeType<Sub, RealFunction>(Sub.class, binary));
71 nts.add(new ReflectionNodeType<Mul, RealFunction>(Mul.class, binary));
72 nts.add(new ReflectionNodeType<Div, RealFunction>(Div.class, binary));
73 nts.add(new ReflectionNodeType<Pow, RealFunction>(Pow.class, binary));
74 nts.add(new ReflectionNodeType<Exp, RealFunction>(Exp.class, unary));
75 nts.add(new ReflectionNodeType<Abs, RealFunction>(Abs.class, unary));
76 nts.add(new ReflectionNodeType<Sin, RealFunction>(Sin.class, unary));
77 nts.add(new ReflectionNodeType<Sqrt, RealFunction>(Sqrt.class, unary));
78
79 create = new TreeRampedHalfAndHalf<RealFunction>(nts, 15);
80 mutate = ((IUnarySearchOperation) (new TreeMutator(15)));
81 crossover = ((IBinarySearchOperation) (new TreeRecombination(15)));
82
83 EA = new SimpleGenerationalEA<RealFunction, RealFunction>();
84 EA.setCrossoverRate(0.5);
85 EA.setMutationRate(0.5);
86 EA.setNullarySearchOperation(create);
87 EA.setUnarySearchOperation(mutate);
88 EA.setBinarySearchOperation(crossover);
89 EA.setGPM((IGPM) (IdentityMapping.IDENTITY_MAPPING));
90 EA.setSelectionAlgorithm(new TournamentSelection(2));
91 EA.setTerminationCriterion(new StepLimit(500000));
92 EA.setPopulationSize(4096);
93 EA.setMatingPoolSize(1024);
94 EA.setObjectiveFunction(f);
95
96 ind = EA.call().get(0);
97
98 System.out.println(ind.x.toString());
99 System.out.println(ind.x.getWeight() + " " + ind.x.getHeight()); // $NON-NLS-1$
100 System.out.println();
101 }
102 }

Listing 57.19: A multi-objective test program for symbolic regression.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.trees.math.real.sr;
5
6 import java.util.List;
7
8 import org.goataa.impl.algorithms.ea.SimpleGenerationalMOEA;
9 import org.goataa.impl.algorithms.ea.fitnessAssignment.ParetoRanking;
10 import org.goataa.impl.algorithms.ea.selection.TournamentSelection;
11 import org.goataa.impl.comparators.Pareto;
12 import org.goataa.impl.gpms.IdentityMapping;
13 import org.goataa.impl.objectiveFunctions.TreeSizeObjective;
14 import org.goataa.impl.searchOperations.trees.binary.TreeRecombination;
15 import org.goataa.impl.searchOperations.trees.nullary.TreeRampedHalfAndHalf;
16 import org.goataa.impl.searchOperations.trees.unary.TreeMutator;
17 import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
18 import org.goataa.impl.searchSpaces.trees.ReflectionNodeType;
19 import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
20 import org.goataa.impl.searchSpaces.trees.math.real.arith.Abs;
21 import org.goataa.impl.searchSpaces.trees.math.real.arith.Add;
22 import org.goataa.impl.searchSpaces.trees.math.real.arith.Div;
23 import org.goataa.impl.searchSpaces.trees.math.real.arith.Exp;
24 import org.goataa.impl.searchSpaces.trees.math.real.arith.Mul;
25 import org.goataa.impl.searchSpaces.trees.math.real.arith.Pow;
26 import org.goataa.impl.searchSpaces.trees.math.real.arith.Sqrt;
27 import org.goataa.impl.searchSpaces.trees.math.real.arith.Sub;
28 import org.goataa.impl.searchSpaces.trees.math.real.basic.ConstantType;
29 import org.goataa.impl.searchSpaces.trees.math.real.basic.VariableType;
30 import org.goataa.impl.searchSpaces.trees.math.real.trig.Sin;
31 import org.goataa.impl.termination.StepLimit;
32 import org.goataa.impl.utils.MOIndividual;
33 import org.goataa.spec.IBinarySearchOperation;
34 import org.goataa.spec.IGPM;
35 import org.goataa.spec.INullarySearchOperation;
36 import org.goataa.spec.IObjectiveFunction;
37 import org.goataa.spec.IUnarySearchOperation;
38
39 /**
40 * An example test program for Symbolic Regression as discussed in
41 * Section 49.1 on page 531.
42 *
43 * @author Thomas Weise
44 */
45 public class SRTestMO extends Examples {
46
47 /** the training cases */
48 static final TrainingCase[] DATA = createTrainingCases(F1, 250);
49
50 /**
51 * The main routine
52 *
53 * @param args
54 * the command line arguments which are ignored here
55 */
56 @SuppressWarnings("unchecked")
57 public static final void main(final String[] args) {
58 final IObjectiveFunction<RealFunction>[] f;
59 final INullarySearchOperation<RealFunction> create;
60 final IUnarySearchOperation<RealFunction> mutate;
61 final IBinarySearchOperation<RealFunction> crossover;
62 final NodeTypeSet<RealFunction> nts;
63 final NodeTypeSet<RealFunction>[] binary, unary;
64 final SimpleGenerationalMOEA<RealFunction, RealFunction> EA;
65 MOIndividual<RealFunction, RealFunction> ind;
66 List<MOIndividual<RealFunction, RealFunction>> l;
67 int i;
68
69 f = new IObjectiveFunction[] { new SRObjectiveFunction1(DATA),
70 TreeSizeObjective.TREE_SIZE_OBJECTIVE };
71
72 nts = new NodeTypeSet<RealFunction>();
73 binary = new NodeTypeSet[] { nts, nts };
74 unary = new NodeTypeSet[] { nts };
75 nts.add(new VariableType(DATA[0].data.length));
76 nts.add(ConstantType.DEFAULT_CONSTANT_TYPE);
77 nts.add(new ReflectionNodeType<Add, RealFunction>(Add.class, binary));
78 nts.add(new ReflectionNodeType<Sub, RealFunction>(Sub.class, binary));
79 nts.add(new ReflectionNodeType<Mul, RealFunction>(Mul.class, binary));
80 nts.add(new ReflectionNodeType<Div, RealFunction>(Div.class, binary));
81 nts.add(new ReflectionNodeType<Pow, RealFunction>(Pow.class, binary));
82 nts.add(new ReflectionNodeType<Exp, RealFunction>(Exp.class, unary));
83 nts.add(new ReflectionNodeType<Abs, RealFunction>(Abs.class, unary));
84 nts.add(new ReflectionNodeType<Sin, RealFunction>(Sin.class, unary));
85 nts.add(new ReflectionNodeType<Sqrt, RealFunction>(Sqrt.class, unary));
86
87 create = new TreeRampedHalfAndHalf<RealFunction>(nts, 15);
88 mutate = ((IUnarySearchOperation) (new TreeMutator(15)));
89 crossover = ((IBinarySearchOperation) (new TreeRecombination(15)));
90
91 EA = new SimpleGenerationalMOEA<RealFunction, RealFunction>();
92 EA.setCrossoverRate(0.5);
93 EA.setMutationRate(0.5);
94 EA.setComparator(Pareto.PARETO_COMPARATOR);
95 EA.setFitnesAssignmentProcess(ParetoRanking.PARETO_RANKING);
96 EA.setNullarySearchOperation(create);
97 EA.setUnarySearchOperation(mutate);
98 EA.setBinarySearchOperation(crossover);
99 EA.setGPM((IGPM) (IdentityMapping.IDENTITY_MAPPING));
100 EA.setSelectionAlgorithm(new TournamentSelection(2));
101 EA.setTerminationCriterion(new StepLimit(1000000));
102 EA.setPopulationSize(4096);
103 EA.setMatingPoolSize(2048);
104 EA.setObjectiveFunctions(f);
105
106 l = EA.call();
107
108 for (i = l.size(); (--i) >= 0;) {
109 ind = l.get(i);
110 System.out.println("f(p)=(" + ind.f[0] + // $NON-NLS-1$
111 ", " + ind.f[1] + ")"); // $NON-NLS-1$ // $NON-NLS-2$
112 System.out.println("p(x)=" + ind.x.toString()); // $NON-NLS-1$
113 System.out.println();
114 }
115
116 }
117 }

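The multi-objective run in Listing 57.19 relies on Pareto dominance for both comparison and fitness assignment. The dominance test itself can be sketched in a few lines; the class below is a hypothetical helper for minimization problems, not the goataa Pareto comparator:

```java
// Hypothetical sketch of the Pareto dominance test for minimization:
// a dominates b if it is no worse in every objective and strictly
// better in at least one.
public final class ParetoSketch {

  static boolean dominates(final double[] a, final double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) {
        return false; // worse in one objective: no dominance
      }
      if (a[i] < b[i]) {
        strictlyBetter = true;
      }
    }
    return strictlyBetter;
  }

  public static void main(final String[] args) {
    System.out.println(dominates(new double[] { 1, 2 }, new double[] { 2, 2 })); // true
    System.out.println(dominates(new double[] { 1, 3 }, new double[] { 2, 2 })); // false
  }
}
```

Solutions where neither dominates the other, such as (1,3) and (2,2), are mutually non-dominated and form the Pareto front returned by the algorithm.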
Listing 57.20: An objective function for Symbolic Regression.


1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.trees.math.real.sr;
5
6 import java.util.Random;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.searchSpaces.trees.math.real.RealContext;
10 import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
11 import org.goataa.impl.utils.Constants;
12 import org.goataa.spec.IObjectiveFunction;
13
14 /**
15 * An objective RealFunction for Symbolic Regression which evaluates a
16 * program on the basis of test cases. It incorporates the approximation
17 * accuracy of the program into the fitness.
18 *
19 * @author Thomas Weise
20 */
21 public class SRObjectiveFunction1 extends OptimizationModule implements
22 IObjectiveFunction<RealFunction> {
23 /** a constant required by Java serialization */
24 private static final long serialVersionUID = 1;
25
26 /** the training data */
27 private final TrainingCase[] tc;
28
29 /** the real context */
30 private final RealContext rc;
31
32 /**
33 * Create a new instance of the symbolic regression objective
34 * RealFunction.
35 *
36 * @param t
37 * the training case
38 */
39 public SRObjectiveFunction1(final TrainingCase[] t) {
40 super();
41 this.tc = t;
42 this.rc = new RealContext(10000, t[0].data.length);
43 }
44
45 /**
46 * Compute the objective value, i.e., determine the utility of the
47 * solution candidate x as specified in
48 * Definition D2.3 on page 44.
49 *
50 * @param x
51 * the phenotype to be rated
52 * @param r
53 * the randomizer
54 * @return the objective value of x, the lower the better (see
55 * Section 6.3.4 on page 106)
56 */
57 public double compute(final RealFunction x, final Random r) {
58 TrainingCase[] y;
59 int i;
60 TrainingCase q;
61 double res, v;
62 RealContext c;
63
64 res = 0d;
65 y = this.tc;
66 c = this.rc;
67 for (i = y.length; (--i) >= 0;) {
68 q = y[i];
69 c.beginProgram();
70 c.copy(q.data);
71 v = (q.result - x.compute(c));
72 res += (v * v);
73 c.endProgram();
74 }
75
76 if (Double.isInfinite(res) || Double.isNaN(res)) {
77 return Constants.WORST_OBJECTIVE;
78 }
79
80 return res;
81 }
82
83 }
Listing 57.21: An objective function based on Listing 57.20, but also including tree-size
information.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.trees.math.real.sr;
5
6 import java.util.Random;
7
8 import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
9
10 /**
11 * An objective RealFunction for Symbolic Regression which evaluates a program
12 * on the basis of test cases. It incorporates the approximation accuracy of
13 * the program into the fitness as well as its size.
14 *
15 * @author Thomas Weise
16 */
17 public class SRObjectiveFunction2 extends SRObjectiveFunction1 {
18 /** a constant required by Java serialization */
19 private static final long serialVersionUID = 1;
20
21 /**
22 * Create a new instance of the symbolic regression objective RealFunction.
23 *
24 * @param t
25 * the training case
26 */
27 public SRObjectiveFunction2(final TrainingCase[] t) {
28 super(t);
29 }
30
31 /**
32 * Compute the objective value, i.e., determine the utility of the
33 * solution candidate x as specified in
34 * Definition D2.3 on page 44.
35 *
36 * @param x
37 * the phenotype to be rated
38 * @param r
39 * the randomizer
40 * @return the objective value of x, the lower the better (see
41 * Section 6.3.4 on page 106)
42 */
43 @Override
44 public final double compute(final RealFunction x, final Random r) {
45 int i;
46 double res;
47
48 res = super.compute(x, r);
49
50 i = Math.max(3, x.getWeight());
51
52 return (res * Math.log(i));
53 }
54
55 }
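The size penalty applied above can be isolated into a few lines. The following stand-alone sketch (the class and method names are illustrative, not part of the GOA library) shows the idea: the raw sum of squared errors is scaled by the logarithm of the program size, clamped to at least 3 so that the factor never becomes zero or negative.

```java
// Illustrative sketch of the parsimony pressure applied in
// SRObjectiveFunction2; not part of the GOA library.
public class ParsimonySketch {

  /** scale the raw error by the log of the program size, as in compute() */
  static double penalize(final double rawError, final int programSize) {
    // programs of size <= 3 all receive the same factor log(3) > 1
    return rawError * Math.log(Math.max(3, programSize));
  }

  public static void main(final String[] args) {
    System.out.println(penalize(2.0d, 1)); // small program: factor log(3)
    System.out.println(penalize(2.0d, 50)); // same error, larger program: worse value
  }
}
```

For two programs with identical approximation error, the smaller one thus always receives the better (lower) objective value.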
Listing 57.22: The class used to represent a training case for Symbolic Regression.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.trees.math.real.sr;
5
6 import org.goataa.impl.OptimizationModule;
7 import org.goataa.impl.utils.TextUtils;
8
9 /**
10 * A training case
11 *
12 * @author Thomas Weise
13 */
14 public class TrainingCase extends OptimizationModule implements
15 Comparable<TrainingCase> {
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /** the data */
20 public final double[] data;
21
22 /** the result */
23 public final double result;
24
25 /**
26 * Create a new training case
27 *
28 * @param t
29 * the data
30 * @param r
31 * the result
32 */
33 public TrainingCase(final double[] t, double r) {
34 super();
35 this.data = t;
36 this.result = r;
37 }
38
39 /**
40 * Get the string representation of this object, i.e., the name and
41 * configuration.
42 *
43 * @param longVersion
44 * true if the long version should be returned, false if the
45 * short version should be returned
46 * @return the string version of this object
47 */
48 @Override
49 public String toString(final boolean longVersion) {
50 String s;
51
52 s = TextUtils.toString(this.data);
53 s = s.substring(1, s.length() - 2);
54
55 return "g(" + s + //$NON-NLS-1$
56 ")=" + this.result;//$NON-NLS-1$
57 }
58
59 /**
60 * Compare to another training case
61 *
62 * @param tc
63 * the other training case
64 * @return the result of that comparison
65 */
66 public int compareTo(final TrainingCase tc) {
67 double[] d1, d2;
68 int i, r;
69
70 d1 = this.data;
71 d2 = tc.data;
72 for (i = 0; i < d1.length; i++) {
73 r = Double.compare(d1[i], d2[i]);
74 if (r != 0) {
75 return r;
76 }
77 }
78 return Double.compare(this.result, tc.result);
79 }
80 }
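The compareTo method above orders training cases lexicographically: the input vectors are compared element by element, and the stored result only breaks ties. A minimal stand-alone sketch of this ordering (names are illustrative, not part of the GOA library):

```java
// Illustrative sketch of the lexicographic ordering used by
// TrainingCase.compareTo; not part of the GOA library.
public class LexOrderSketch {

  /** compare two equally long vectors element by element */
  static int compareLex(final double[] a, final double[] b) {
    for (int i = 0; i < a.length; i++) {
      final int r = Double.compare(a[i], b[i]);
      if (r != 0) {
        return r; // the first differing element decides
      }
    }
    return 0; // all elements are equal
  }

  public static void main(final String[] args) {
    System.out.println(compareLex(new double[] { 1d, 9d }, new double[] { 2d, 0d })); // < 0
    System.out.println(compareLex(new double[] { 1d, 2d }, new double[] { 1d, 2d })); // 0
  }
}
```

Sorting training cases with such a comparator, as done in Examples.createTrainingCases, arranges them by ascending input values.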
Listing 57.23: Some example functions to match with regression.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.trees.math.real.sr;
5
6 import java.util.Arrays;
7 import java.util.Random;
8
9 import org.goataa.impl.searchSpaces.trees.math.real.RealContext;
10 import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
11
12 /**
13 * Some examples to be used for testing the Symbolic Regression (see
14 * Section 49.1 on page 531) capabilities of GP.
15 *
16 * @author Thomas Weise
17 */
18 public class Examples {
19
20 /** the first example RealFunction */
21 public static final ExampleFunct F1 = new ExampleFunct(1, -100, 100) {
22
23 /**
24 * compute the result
25 *
26 * @param d
27 * the vector
28 * @return the result
29 */
30 @Override
31 final double compute(final double[] d) {
32 return Math.exp(Math.sin(d[0])) + 3d * Math.sqrt(Math.abs(d[0]));
33 }
34 };
35
36 /** the second example RealFunction */
37 public static final ExampleFunct F2 = new ExampleFunct(1, 0, 1) {
38
39 /**
40 * compute the result
41 *
42 * @param d
43 * the vector
44 * @return the result
45 */
46 @Override
47 final double compute(final double[] d) {
48 return Math.pow(d[0], 0.2) - Math.sin(Math.abs(d[0]));
49 }
50 };
51
52 /**
53 * Create new random training cases for a RealFunction
54 *
55 * @param f
56 * the RealFunction
57 * @param cnt
58 * the number of training cases to create
59 * @return the new random training cases
60 */
61 public static final TrainingCase[] createTrainingCases(
62 final ExampleFunct f, final int cnt) {
63 double[] d;
64 TrainingCase[] tc;
65 int i, j;
66 Random r;
67
68 r = new Random();
69 tc = new TrainingCase[cnt];
70 for (i = cnt; (--i) >= 0;) {
71 j = f.dim;
72 d = new double[j];
73 for (; (--j) >= 0;) {
74 d[j] = f.min + ((f.max - f.min) * r.nextDouble());
75 }
76 tc[i] = new TrainingCase(d, f.compute(d));
77 }
78 Arrays.sort(tc);
79 return tc;
80 }
81
82 /**
83 * Print the training case result
84 *
85 * @param tc
86 * the training case
87 * @param f
88 * the RealFunction
89 */
90 static final void print(final TrainingCase[] tc, final RealFunction f) {
91 int i, j;
92 RealContext d;
93
94 d = new RealContext(10000, tc[0].data.length);
95
96 for (i = 0; i < tc.length; i++) {
97 d.copy(tc[i].data);
98 for (j = 0; j < d.getMemorySize(); j++) {
99 System.out.print(String.valueOf(d.read(j)));
100 System.out.print('\t');
101 }
102 System.out.print(String.valueOf(tc[i].result));
103 System.out.print('\t');
104 System.out.println(f.compute(d));
105 }
106 }
107
108 /**
109 * The internal class for example functions
110 *
111 * @author Thomas Weise
112 */
113 private static abstract class ExampleFunct {
114 /** the dimension */
115 final int dim;
116
117 /** the minimum */
118 final double min;
119
120 /** the maximum */
121 final double max;
122
123 /**
124 * Create a new example RealFunction
125 *
126 * @param d
127 * the dimension
128 * @param mi
129 * the minimum
130 * @param ma
131 * the maximum
132 */
133 ExampleFunct(final int d, final double mi, final double ma) {
134 super();
135 this.dim = d;
136 this.min = mi;
137 this.max = ma;
138 }
139
140 /**
141 * compute the result
142 *
143 * @param d
144 * the vector
145 * @return the result
146 */
147 abstract double compute(final double[] d);
148 }
149 }
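The createTrainingCases method above draws each input dimension uniformly at random from the example function's domain. The sampling scheme can be sketched in isolation as follows (class and method names are illustrative, not part of the GOA library):

```java
// Illustrative sketch of the uniform sampling performed by
// Examples.createTrainingCases; not part of the GOA library.
import java.util.Random;

public class SamplingSketch {

  /** draw cnt points uniformly at random from [min, max]^dim */
  static double[][] samplePoints(final int cnt, final int dim,
      final double min, final double max, final Random r) {
    final double[][] points = new double[cnt][dim];
    for (int i = 0; i < cnt; i++) {
      for (int j = 0; j < dim; j++) {
        // the same affine transformation of nextDouble() as in the listing
        points[i][j] = min + ((max - min) * r.nextDouble());
      }
    }
    return points;
  }

  public static void main(final String[] args) {
    // five one-dimensional points from F1's domain [-100, 100]
    for (final double[] p : samplePoints(5, 1, -100d, 100d, new Random(1))) {
      System.out.println(p[0]);
    }
  }
}
```

Since nextDouble() returns values in [0, 1), every sampled coordinate is guaranteed to lie inside [min, max].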
57.3 The VRP Benchmark Problem

The implementation of the benchmark problem discussed in Section 50.4.

57.3.1 The Involved Objects

All the objects involved in the benchmark problem discussed in Section 50.4.2.
Listing 57.24: An instance of this class holds the information of one location.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * This class represents a location. Each customer, depot, and halting
11 * point is always associated with a location.
12 *
13 * @author Thomas Weise
14 */
15 public class Location extends InputObject {
16 /** a constant required by Java serialization */
17 private static final long serialVersionUID = 1;
18
19 /** the locations */
20 public static final InputObjectList<Location> LOCATIONS = new InputObjectList<Location>(
21 Location.class);
22
23 /** the headers */
24 private static final List<String> HEADERS;
25
26 static {
27 HEADERS = new ArrayList<String>();
28 HEADERS.addAll(IHEADERS);
29 HEADERS.add("is_depot");//$NON-NLS-1$
30 }
31
32 /** is the location a depot? */
33 public final boolean isDepot;
34
35 /**
36 * Create a new location of the given id
37 *
38 * @param pid
39 * the id
40 * @param depot
41 * true if and only if the location is a depot
42 */
43 public Location(final int pid, final boolean depot) {
44 super(pid, LOCATIONS);
45 this.isDepot = depot;
46 }
47
48 /**
49 * Create a new location
50 *
51 * @param data
52 * the data
53 */
54 public Location(final String[] data) {
55 super(data, LOCATIONS);
56 this.isDepot = Boolean.parseBoolean(data[1]);
57 }
58
59 /**
60 * {@inheritDoc}
61 */
62 @Override
63 final char getIDPrefixChar() {
64 return 'L';
65 }
66
67 /**
68 * Get the column headers for a csv data representation. Child classes
69 * may add additional fields.
70 *
71 * @return the column headers for representing the data of this object in
72 * csv format
73 */
74 @Override
75 public List<String> getCSVColumnHeader() {
76 return HEADERS;
77 }
78
79 /**
80 * Fill in the data of this object into a string array which can be used
81 * to serialize the object in a nice way
82 *
83 * @param output
84 * the output
85 */
86 @Override
87 public void fillInCSVData(final String[] output) {
88 super.fillInCSVData(output);
89 output[1] = String.valueOf(this.isDepot);
90 }
91 }
Listing 57.25: An instance of this class holds the information of one order.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * An order is a set of goods that must be taken from one place to another
11 * one. It requires a single container and has time windows for pickup
12 * and delivery as well as a start and an end location.
13 *
14 * @author Thomas Weise
15 */
16 public class Order extends InputObject {
17 /** a constant required by Java serialization */
18 private static final long serialVersionUID = 1;
19
20 /** the list of orders */
21 public static final InputObjectList<Order> ORDERS = new InputObjectList<Order>(
22 Order.class);
23
24 /** no orders */
25 public static final Order[] NO_ORDERS = new Order[0];
26
27 /** the headers */
28 private static final List<String> HEADERS;
29
30 static {
31 HEADERS = new ArrayList<String>();
32 HEADERS.addAll(IHEADERS);
33 HEADERS.add("pickup_window_start");//$NON-NLS-1$
34 HEADERS.add("pickup_window_end");//$NON-NLS-1$
35 HEADERS.add("pickup_location");//$NON-NLS-1$
36 HEADERS.add("delivery_window_start");//$NON-NLS-1$
37 HEADERS.add("delivery_window_end");//$NON-NLS-1$
38 HEADERS.add("delivery_location");//$NON-NLS-1$
39 }
40
41 /** the beginning of the pickup time */
42 public final long pickupWindowStart;
43
44 /** the ending of the pickup time */
45 public final long pickupWindowEnd;
46
47 /** the pickup location */
48 public final Location pickupLocation;
49
50 /** the beginning of the delivery time */
51 public final long deliveryWindowStart;
52
53 /** the ending of the deliver time */
54 public final long deliveryWindowEnd;
55
56 /** the delivery location */
57 public final Location deliveryLocation;
58
59 /**
60 * Create a new order
61 *
62 * @param pid
63 * the id
64 * @param ppickupWindowStart
65 * the beginning of the pickup time
66 * @param ppickupWindowEnd
67 * the ending of the pickup time
68 * @param ppickupLocation
69 * the pickup location
70 * @param pdeliveryWindowStart
71 * the beginning of the delivery time
72 * @param pdeliveryWindowEnd
73 * the ending of the deliver time
74 * @param pdeliveryLocation
75 * the delivery location
76 */
77 public Order(final int pid, final long ppickupWindowStart,
78 final long ppickupWindowEnd, final Location ppickupLocation,
79 final long pdeliveryWindowStart, final long pdeliveryWindowEnd,
80 final Location pdeliveryLocation) {
81 super(pid, ORDERS);
82 this.pickupWindowStart = ppickupWindowStart;
83 this.pickupWindowEnd = ppickupWindowEnd;
84 this.pickupLocation = ppickupLocation;
85 this.deliveryWindowStart = pdeliveryWindowStart;
86 this.deliveryWindowEnd = pdeliveryWindowEnd;
87 this.deliveryLocation = pdeliveryLocation;
88 }
89
90 /**
91 * Create a new order
92 *
93 * @param data
94 * the order data
95 */
96 public Order(final String[] data) {
97 super(data, ORDERS);
98
99 this.pickupWindowStart = Long.parseLong(data[1]);
100 this.pickupWindowEnd = Long.parseLong(data[2]);
101 this.pickupLocation = Location.LOCATIONS.find(extractId(data[3]));
102 this.deliveryWindowStart = Long.parseLong(data[4]);
103 this.deliveryWindowEnd = Long.parseLong(data[5]);
104 this.deliveryLocation = Location.LOCATIONS.find(extractId(data[6]));
105 }
106
107 /**
108 * {@inheritDoc}
109 */
110 @Override
111 final char getIDPrefixChar() {
112 return 'O';
113 }
114
115 /**
116 * Get the column headers for a csv data representation. Child classes
117 * may add additional fields.
118 *
119 * @return the column headers for representing the data of this object in
120 * csv format
121 */
122 @Override
123 public List<String> getCSVColumnHeader() {
124 return HEADERS;
125 }
126
127 /**
128 * Fill in the data of this object into a string array which can be used
129 * to serialize the object in a nice way
130 *
131 * @param output
132 * the output
133 */
134 @Override
135 public void fillInCSVData(final String[] output) {
136 super.fillInCSVData(output);
137 output[1] = String.valueOf(this.pickupWindowStart);
138 output[2] = String.valueOf(this.pickupWindowEnd);
139 output[3] = String.valueOf(this.pickupLocation.getIDString());
140 output[4] = String.valueOf(this.deliveryWindowStart);
141 output[5] = String.valueOf(this.deliveryWindowEnd);
142 output[6] = String.valueOf(this.deliveryLocation.getIDString());
143 }
144 }
Listing 57.26: An instance of this class holds the information of one truck.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * A truck is a vehicle that can transport containers.
11 *
12 * @author Thomas Weise
13 */
14 public class Truck extends InputObject {
15 /** a constant required by Java serialization */
16 private static final long serialVersionUID = 1;
17
18 /** the trucks */
19 public static final InputObjectList<Truck> TRUCKS = new InputObjectList<Truck>(
20 Truck.class);
21
22 /** the headers */
23 private static final List<String> HEADERS;
24
25 static {
26 HEADERS = new ArrayList<String>();
27 HEADERS.addAll(IHEADERS);
28 HEADERS.add("start_location");//$NON-NLS-1$
29 HEADERS.add("max_containers");//$NON-NLS-1$
30 HEADERS.add("max_drivers");//$NON-NLS-1$
31 HEADERS.add("costs_per_h_km");//$NON-NLS-1$
32 HEADERS.add("avg_speed_km_h");//$NON-NLS-1$
33 }
34
35 /** the starting location */
36 public final Location start;
37
38 /** the maximum number of containers this truck can transport */
39 public final int maxContainers;
40
41 /** the maximum number of drivers */
42 public final int maxDrivers;
43
44 /** the costs per hour*kilometer in cent */
45 public final int costsPerHKm;
46
47 /** the average speed in km/h */
48 public final int avgSpeedKmH;
49
50 /**
51 * Create a new truck
52 *
53 * @param pid
54 * the id
55 * @param lstart
56 * the starting location
57 * @param maxCont
58 * the maximum number of containers this truck can transport
59 * @param maxDriv
60 * the maximum number of drivers that can sit in this truck
61 * @param costsHKM
62 * the costs per hour*kilometer in cent
63 * @param avgSpeed
64 * the average speed in km/h
65 */
66 public Truck(final int pid, final Location lstart, final int maxCont,
67 final int maxDriv, final int costsHKM, final int avgSpeed) {
68 super(pid, TRUCKS);
69 this.start = lstart;
70 this.maxContainers = maxCont;
71 this.maxDrivers = maxDriv;
72 this.costsPerHKm = costsHKM;
73 this.avgSpeedKmH = avgSpeed;
74 }
75
76 /**
77 * Create a new truck
78 *
79 * @param data
80 * the data
81 */
82 public Truck(final String[] data) {
83 super(data, TRUCKS);
84 this.start = Location.LOCATIONS.find(InputObject.extractId(data[1]));
85 this.maxContainers = Integer.parseInt(data[2]);
86 this.maxDrivers = Integer.parseInt(data[3]);
87 this.costsPerHKm = Integer.parseInt(data[4]);
88 this.avgSpeedKmH = Integer.parseInt(data[5]);
89 }
90
91 /**
92 * {@inheritDoc}
93 */
94 @Override
95 final char getIDPrefixChar() {
96 return 'T';
97 }
98
99 /**
100 * Get the column headers for a csv data representation. Child classes
101 * may add additional fields.
102 *
103 * @return the column headers for representing the data of this object in
104 * csv format
105 */
106 @Override
107 public List<String> getCSVColumnHeader() {
108 return HEADERS;
109 }
110
111 /**
112 * Fill in the data of this object into a string array which can be used
113 * to serialize the object in a nice way
114 *
115 * @param output
116 * the output
117 */
118 @Override
119 public void fillInCSVData(final String[] output) {
120 super.fillInCSVData(output);
121 output[1] = this.start.getIDString();
122 output[2] = String.valueOf(this.maxContainers);
123 output[3] = String.valueOf(this.maxDrivers);
124 output[4] = String.valueOf(this.costsPerHKm);
125 output[5] = String.valueOf(this.avgSpeedKmH);
126 }
127 }
Listing 57.27: An instance of this class holds the information of one container.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * A container is an object which can be used to package one order.
11 *
12 * @author Thomas Weise
13 */
14 public class Container extends InputObject {
15 /** a constant required by Java serialization */
16 private static final long serialVersionUID = 1;
17
18 /** the containers */
19 public static final InputObjectList<Container> CONTAINERS = new InputObjectList<Container>(
20 Container.class);
21
22 /** no containers */
23 public static final Container[] NO_CONTAINERS = new Container[0];
24
25 /** the headers */
26 private static final List<String> HEADERS;
27
28 static {
29 HEADERS = new ArrayList<String>();
30 HEADERS.addAll(IHEADERS);
31 HEADERS.add("start");//$NON-NLS-1$
32 }
33
34 /** the start location */
35 public final Location start;
36
37 /**
38 * Create a new container
39 *
40 * @param pid
41 * the id
42 * @param lstart
43 * the start location
44 */
45 public Container(final int pid, final Location lstart) {
46 super(pid, CONTAINERS);
47 this.start = lstart;
48 }
49
50 /**
51 * Create a new container
52 *
53 * @param data
54 * the data
55 */
56 public Container(final String[] data) {
57 super(data, CONTAINERS);
58 this.start = Location.LOCATIONS.find(InputObject.extractId(data[1]));
59 }
60
61 /**
62 * {@inheritDoc}
63 */
64 @Override
65 final char getIDPrefixChar() {
66 return 'C';
67 }
68
69 /**
70 * Get the column headers for a csv data representation. Child classes
71 * may add additional fields.
72 *
73 * @return the column headers for representing the data of this object in
74 * csv format
75 */
76 @Override
77 public List<String> getCSVColumnHeader() {
78 return HEADERS;
79 }
80
81 /**
82 * Fill in the data of this object into a string array which can be used
83 * to serialize the object in a nice way
84 *
85 * @param output
86 * the output
87 */
88 @Override
89 public void fillInCSVData(final String[] output) {
90 super.fillInCSVData(output);
91 output[1] = this.start.getIDString();
92 }
93 }

Listing 57.28: An instance of this class holds the information of one driver.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.objects;
5
6 import java.util.ArrayList;
7 import java.util.List;
8
9 /**
10 * A driver is a person that can drive a truck.
11 *
12 * @author Thomas Weise
13 */
14 public class Driver extends InputObject {
15 /** a constant required by Java serialization */
16 private static final long serialVersionUID = 1;
17
18 /** the drivers */
19 public static final InputObjectList<Driver> DRIVERS = new InputObjectList<Driver>(
20 Driver.class);
21
22 /** no drivers */
23 public static final Driver[] NO_DRIVERS = new Driver[0];
24
25 /** the headers */
26 private static final List<String> HEADERS;
27
28 static {
29 HEADERS = new ArrayList<String>();
30 HEADERS.addAll(IHEADERS);
31 HEADERS.add("home");//$NON-NLS-1$
32 HEADERS.add("costs_per_day");//$NON-NLS-1$
33 }
34
35 /** the maximum driving time */
36 public static int MAX_DRIVING_TIME = 8 * 60 * 60 * 1000;
37
38 /** the minimum break time */
39 public static int MIN_BREAK_TIME = 6 * 60 * 60 * 1000;
40
41 /** the home location */
42 public final Location home;
43
44 /** the costs for any day during which the driver works */
45 public final int costsPerDay;
46
47 /**
48 * Create a new driver
49 *
50 * @param pid
51 * the id
52 * @param lhome
53 * the home location
54 * @param costs
55 * the costs per day
56 */
57 public Driver(final int pid, final Location lhome, final int costs) {
58 super(pid, DRIVERS);
59 this.home = lhome;
60 this.costsPerDay = costs;
61 }
62
63 /**
64 * Create a new driver
65 *
66 * @param data
67 * the data
68 */
69 public Driver(final String[] data) {
70 super(data, DRIVERS);
71 this.home = Location.LOCATIONS.find(InputObject.extractId(data[1]));
72 this.costsPerDay = Integer.parseInt(data[2]);
73 }
74
75 /**
76 * {@inheritDoc}
77 */
78 @Override
79 final char getIDPrefixChar() {
80 return 'D';
81 }
82
83 /**
84 * Get the column headers for a csv data representation. Child classes
85 * may add additional fields.
86 *
87 * @return the column headers for representing the data of this object in
88 * csv format
89 */
90 @Override
91 public List<String> getCSVColumnHeader() {
92 return HEADERS;
93 }
94
95 /**
96 * Fill in the data of this object into a string array which can be used
97 * to serialize the object in a nice way
98 *
99 * @param output
100 * the output
101 */
102 @Override
103 public void fillInCSVData(final String[] output) {
104 super.fillInCSVData(output);
105 output[1] = this.home.getIDString();
106 output[2] = String.valueOf(this.costsPerDay);
107 }
108 }
Listing 57.29: This class holds the complete information about the distances between
locations.
1 /*
2 * Copyright (c) 2010 Thomas Weise
3 * http://www.it-weise.de/
4 * tweise@gmx.de
5 *
6 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
7 */
8
9 package demos.org.goataa.vrpBenchmark.objects;
10
11 import java.io.BufferedReader;
12 import java.io.IOException;
13 import java.io.Writer;
14
15 import org.goataa.impl.utils.TextUtils;
16
17 /**
18 * A distance matrix calculates the distance between two locations
19 *
20 * @author Thomas Weise
21 */
22 public class DistanceMatrix extends TextSerializable {
23 /** a constant required by Java serialization */
24 private static final long serialVersionUID = 1;
25
26 /** the globally shared distance matrix instance */
27 public static final DistanceMatrix MATRIX = new DistanceMatrix();
28
29 /** the matrix */
30 private int[][] matrix;
31
32 /**
33 * Create a new distance matrix
34 */
35 public DistanceMatrix() {
36 super();
37 }
38
39 /**
40 * Initialize to a given matrix
41 *
42 * @param m
43 * the matrix
44 */
45 public void init(final int[][] m) {
46 int[] drow;
47 int i, j;
48
49 this.matrix = m;
50
51 if (m != null) {
52 i = m.length;
53
54 for (; (--i) >= 0;) {
55 drow = m[i];
56
57 for (j = drow.length; (--j) >= 0;) {
58 if (i != j) {
59 drow[j] = Math.max(50, (50 * ((drow[j] + 49) / 50)));
60 } else {
61 drow[j] = 0;
62 }
63 }
64 }
65 }
66 }
67
68 /**
69 * Obtain the distance between two locations
70 *
71 * @param l1
72 * the first location
73 * @param l2
74 * the second location
75 * @return the distance in meters
76 */
77 public final int getDistance(final Location l1, final Location l2) {
78 if (l1 == l2) {
79 return 0;
80 }
81 return this.matrix[l1.id][l2.id];
82 }
83
84 /**
85 * Get the time needed by a given truck for getting from l1 to l2. We
86 * assume that a truck will drive on average at 66% of its top speed.
87 *
88 * @param l1
89 * the first location
90 * @param l2
91 * the second location
92 * @param t
93 * the truck
94 * @return the time needed, in milliseconds
95 */
96 public final int getTimeNeeded(final Location l1, final Location l2,
97 final Truck t) {
98
99 if (l1 == l2) {
100 return 0;
101 }
102
103 return (int) (0.5d + ((3600.0d * this.matrix[l1.id][l2.id]) / t.avgSpeedKmH));
104 }
105
106 /**
107 * Serialize to a writer
108 *
109 * @param w
110 * the writer
111 * @throws IOException
112 * if anything goes wrong
113 */
114 @Override
115 public void serialize(final Writer w) throws IOException {
116 int[][] ma;
117 int[] l;
118 int i, j;
119
120 ma = this.matrix;
121 for (i = 0; i < ma.length; i++) {
122 if (i > 0) {
123 w.write(TextUtils.NEWLINE);
124 }
125 l = ma[i];
126 for (j = 0; j < l.length; j++) {
127 if (j > 0) {
128 w.write('\t');
129 }
130 w.write(String.valueOf(l[j]));
131 }
132 }
133 }
134
135 /**
136 * Deserialize from a buffered reader
137 *
138 * @param r
139 * reader
140 * @throws IOException
141 * if something goes wrong
142 */
143 @Override
144 public void deserialize(final BufferedReader r) throws IOException {
145 String[] s;
146 String l;
147 int[][] ma;
148 int[] q;
149 int v, i, j;
150
151 l = r.readLine();
152 s = TransportationObjectList.SPLIT.split(l);
153
154 v = s.length;
155 this.matrix = ma = new int[v][v];
156 for (i = 0; i < v; i++) {
157 q = ma[i];
158 for (j = 0; j < v; j++) {
159 q[j] = Integer.parseInt(s[j]);
160 }
161 if (i < (v - 1)) {
162 s = TransportationObjectList.SPLIT.split(r.readLine());
163 }
164 }
165
166 }
167 }
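Two numeric conventions of DistanceMatrix are worth spelling out: init() rounds every off-diagonal distance up to the next multiple of 50 meters (with a minimum of 50), and the travel time follows from t[ms] = 3600 * d[m] / v[km/h]. The following stand-alone sketch (names are illustrative, not part of the GOA library) reproduces both; the division is performed in floating point so that the +0.5d rounding term takes effect.

```java
// Illustrative sketch of the numeric conventions used in DistanceMatrix;
// not part of the GOA library.
public class DistanceSketch {

  /** round a distance in meters up to the next multiple of 50, minimum 50 */
  static int roundUpTo50(final int meters) {
    return Math.max(50, 50 * ((meters + 49) / 50));
  }

  /** travel time in milliseconds for a distance in meters at a speed in km/h */
  static int travelTimeMs(final int meters, final int speedKmH) {
    // d meters = d/1000 km; hours = (d/1000)/v; ms = 3.6e6 * (d/1000)/v = 3600*d/v
    return (int) (0.5d + ((3600.0d * meters) / speedKmH));
  }

  public static void main(final String[] args) {
    System.out.println(roundUpTo50(101)); // 150
    System.out.println(travelTimeMs(80000, 80)); // 80 km at 80 km/h: 3600000 ms
  }
}
```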
57.3.2 The Optimization Utilities

The objective value computation utility classes from Section 50.4.4 and the candidate
solution element according to Section 50.4.3.

Listing 57.30: An object which holds a transportation move.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.optimization;
5
6 import java.util.ArrayList;
7 import java.util.List;
8 import java.util.regex.Pattern;
9
10 import demos.org.goataa.vrpBenchmark.objects.Container;
11 import demos.org.goataa.vrpBenchmark.objects.Driver;
12 import demos.org.goataa.vrpBenchmark.objects.InputObject;
13 import demos.org.goataa.vrpBenchmark.objects.Location;
14 import demos.org.goataa.vrpBenchmark.objects.Order;
15 import demos.org.goataa.vrpBenchmark.objects.TransportationObject;
16 import demos.org.goataa.vrpBenchmark.objects.Truck;
17
18 /**
19 * A move
20 *
21 * @author Thomas Weise
22 */
23 public class Move extends TransportationObject {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** no moves */
29 public static final Move[] NO_MOVES = new Move[0];
30
31 /** split */
32 static final Pattern SPLIT = Pattern.compile("[,]+"); //$NON-NLS-1$
33
34 /** the headers */
35 private static final List<String> HEADERS;
36
37 static {
38 HEADERS = new ArrayList<String>();
39 HEADERS.add("truck");//$NON-NLS-1$
40 HEADERS.add("orders"); //$NON-NLS-1$
41 HEADERS.add("drivers");//$NON-NLS-1$
42 HEADERS.add("containers");//$NON-NLS-1$
43 HEADERS.add("from");//$NON-NLS-1$
44 HEADERS.add("start_time");//$NON-NLS-1$
45 HEADERS.add("to");//$NON-NLS-1$
46 }
47
48 /** the truck */
49 public final Truck truck;
50
51 /** the involved orders */
52 public final Order[] orders;
53
54 /** the involved drivers */
55 public final Driver[] drivers;
56
57 /** the involved containers */
58 public final Container[] containers;
59
60 /** the starting location */
61 public final Location from;
62
63 /** the start time */
64 public final long startTime;
65
66 /** the destination location */
67 public final Location to;
68
69 /**
70 * Create a new freight object
71 *
72 * @param t
73 * the truck
74 * @param o
75 * the orders
76 * @param d
77 * the drivers
78 * @param c
79 * the container
80 * @param lfrom
81 * the starting location
82 * @param lstartTime
83 * the start time
84 * @param lto
85 * the destination location
86 */
87 public Move(final Truck t, final Order[] o, final Driver[] d,
88 final Container[] c, final Location lfrom, long lstartTime,
89 final Location lto) {
90 super();
91
92 this.truck = t;
93 this.orders = (((o != null) && (o.length > 0)) ? o : Order.NO_ORDERS);
94 this.drivers = (((d != null) && (d.length > 0)) ? d
95 : Driver.NO_DRIVERS);
96 this.containers = (((c != null) && (c.length > 0)) ? c
97 : Container.NO_CONTAINERS);
98 this.from = lfrom;
99 this.startTime = lstartTime;
100 this.to = lto;
101 }
102
103 /**
104 * Create a move from a textual representation
105 *
106 * @param s
107 * the strings
108 */
109 public Move(final String[] s) {
110 this(Truck.TRUCKS.find(InputObject.extractId(s[0])), getOrders(s[1]),
111 getDrivers(s[2]), getContainers(s[3]), Location.LOCATIONS
112 .find(InputObject.extractId(s[4])), Long.parseLong(s[5]),
113 Location.LOCATIONS.find(InputObject.extractId(s[6])));
114 }
115
116 /**
117 * Get the drivers
118 *
119 * @param s
120 * the strings
121 * @return the drivers
122 */
123 private static final Driver[] getDrivers(final String s) {
124 int i, l;
125 boolean b;
126 String q;
127 String[] qs;
128 Driver[] o;
129
130 if ((s == null) || ((l = (s.length())) <= 0)) {
131 return Driver.NO_DRIVERS;
132 }
133
134 i = 0;
135 b = false;
136 if (s.charAt(0) == '(') {
137 i++;
138 b = true;
139 }
140 if (s.charAt(l - 1) == ')') {
141 l--;
142 b = true;
143 }
144 if (b) {
145 q = s.substring(i, l);
146 } else {
147 q = s;
148 }
149
150 qs = SPLIT.split(q);
151 if ((qs == null) || ((l = qs.length) <= 0)) {
152 return Driver.NO_DRIVERS;
153 }
154 o = new Driver[l];
155 for (i = 0; i < l; i++) {
156 o[i] = Driver.DRIVERS.find(InputObject.extractId(qs[i]));
157 }
158
159 return o;
160 }
161
162 /**
163 * Get the orders
164 *
165 * @param s
166 * the strings
167 * @return the orders
168 */
169 private static final Order[] getOrders(final String s) {
170 int i, l;
171 boolean b;
172 String q;
173 String[] qs;
174 Order[] o;
175
176 if ((s == null) || ((l = (s.length())) <= 0)) {
177 return Order.NO_ORDERS;
178 }
179
180 i = 0;
181 b = false;
182 if (s.charAt(0) == '(') {
183 i++;
184 b = true;
185 }
186 if (s.charAt(l - 1) == ')') {
187 l--;
188 b = true;
189 }
190 if (b) {
191 q = s.substring(i, l);
192 } else {
193 q = s;
194 }
195
196 qs = SPLIT.split(q);
197 if ((qs == null) || ((l = qs.length) <= 0)) {
198 return Order.NO_ORDERS;
199 }
200 o = new Order[l];
201 for (i = 0; i < l; i++) {
202 o[i] = Order.ORDERS.find(InputObject.extractId(qs[i]));
203 }
204
205 return o;
206 }
207
208 /**
209 * Get the containers
210 *
211 * @param s
212 * the strings
213 * @return the containers
214 */
215 private static final Container[] getContainers(final String s) {
216 int i, l;
217 boolean b;
218 String q;
219 String[] qs;
220 Container[] o;
221
222 if ((s == null) || ((l = (s.length())) <= 0)) {
223 return Container.NO_CONTAINERS;
224 }
225
226 i = 0;
227 b = false;
228 if (s.charAt(0) == '(') {
229 i++;
230 b = true;
231 }
232 if (s.charAt(l - 1) == ')') {
233 l--;
234 b = true;
235 }
236 if (b) {
237 q = s.substring(i, l);
238 } else {
239 q = s;
240 }
241
242 qs = SPLIT.split(q);
243 if ((qs == null) || ((l = qs.length) <= 0)) {
244 return Container.NO_CONTAINERS;
245 }
246 o = new Container[l];
247 for (i = 0; i < l; i++) {
248 o[i] = Container.CONTAINERS.find(InputObject.extractId(qs[i]));
249 }
250
251 return o;
252 }
253
254 /**
255 * Get the column headers for a csv data representation. Child classes
256 * may add additional fields.
257 *
258 * @return the column headers for representing the data of this object in
259 * csv format
260 */
261 @Override
262 public List<String> getCSVColumnHeader() {
263 return HEADERS;
264 }
265
266 /**
267 * Convert an object array to a string
268 *
269 * @param src
270 * the source
271 * @return the string
272 */
273 private static final String toString(final InputObject[] src) {
274 StringBuffer sb;
275 int i;
276 boolean b;
277 InputObject x;
278
279 sb = new StringBuffer();
280     sb.append('(');
281 b = false;
282
283 for (i = 0; i < src.length; i++) {
284
285 x = src[i];
286 if (x != null) {
287 if (b) {
288           sb.append(',');
289 }
290 sb.append(x.getIDString());
291 b = true;
292 }
293
294 }
295
296     sb.append(')');
297 return sb.toString();
298 }
299
300 /**
301 * Fill in the data of this object into a string array which can be used
302 * to serialize the object in a nice way
303 *
304 * @param output
305 * the output
306 */
307 @Override
308 public void fillInCSVData(final String[] output) {
309 output[0] = this.truck.getIDString();
310 output[1] = toString(this.orders);
311 output[2] = toString(this.drivers);
312 output[3] = toString(this.containers);
313 output[4] = this.from.getIDString();
314 output[5] = String.valueOf(this.startTime);
315 output[6] = this.to.getIDString();
316 }
317
318 /**
319 * Does this move use the given container?
320 *
321 * @param c
322 * the container
323 * @return true if the move uses the container, false otherwise
324 */
325 public final boolean usesContainer(final Container c) {
326 int i;
327
328 for (i = this.containers.length; (--i) >= 0;) {
329 if (this.containers[i] == c) {
330 return true;
331 }
332 }
333
334 return false;
335 }
336
337 /**
338 * Does this move use the given driver?
339 *
340 * @param c
341 * the driver
342 * @return true if the move uses the driver, false otherwise
343 */
344 public final boolean usesDriver(final Driver c) {
345 int i;
346
347 for (i = this.drivers.length; (--i) >= 0;) {
348 if (this.drivers[i] == c) {
349 return true;
350 }
351 }
352
353 return false;
354 }
355 }
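The string format handled by the getDrivers, getOrders, and getContainers methods above — an optionally parenthesized, comma-separated list of IDs, as produced by the toString helper — can be tried in isolation. The following stand-alone sketch (the class and method names are ours and not part of the benchmark sources) reproduces the round-trip between an ID array and its textual form:

```java
import java.util.Arrays;
import java.util.regex.Pattern;

/** A minimal sketch of the "(id1,id2,...)" list format used above. */
public final class IdListDemo {

  /** the separator between IDs, mirroring the SPLIT pattern above */
  private static final Pattern SPLIT = Pattern.compile(",");

  /** Serialize the IDs as "(id1,id2,...)", skipping null entries. */
  static String toIdString(final String[] ids) {
    final StringBuilder sb = new StringBuilder();
    sb.append('(');
    boolean first = true;
    for (final String id : ids) {
      if (id != null) {
        if (!first) {
          sb.append(',');
        }
        sb.append(id);
        first = false;
      }
    }
    return sb.append(')').toString();
  }

  /** Parse "(id1,id2,...)"; the surrounding parentheses are optional. */
  static String[] parseIdString(final String s) {
    if ((s == null) || (s.length() <= 0)) {
      return new String[0];
    }
    int i = 0, l = s.length();
    if (s.charAt(0) == '(') { // strip an opening parenthesis, if present
      i++;
    }
    if (s.charAt(l - 1) == ')') { // strip a closing parenthesis, if present
      l--;
    }
    return SPLIT.split(s.substring(i, l));
  }

  public static void main(final String[] args) {
    final String[] ids = { "d1", null, "d2", "d3" };
    final String text = toIdString(ids); // null entries are dropped
    System.out.println(text); // (d1,d2,d3)
    System.out.println(Arrays.toString(parseIdString(text)));
  }
}
```

The benchmark code additionally maps each parsed ID back to an object via a registry lookup (e.g., Driver.DRIVERS.find); the sketch stops at the string level to stay self-contained.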

Listing 57.31: An object which can be used to compute the features of a candidate solution
of the benchmark problem.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.optimization;
5
6 import java.io.Serializable;
7 import java.util.Arrays;
8 import java.util.Comparator;
9
10 import demos.org.goataa.vrpBenchmark.objects.Container;
11 import demos.org.goataa.vrpBenchmark.objects.DistanceMatrix;
12 import demos.org.goataa.vrpBenchmark.objects.Driver;
13 import demos.org.goataa.vrpBenchmark.objects.Location;
14 import demos.org.goataa.vrpBenchmark.objects.Order;
15 import demos.org.goataa.vrpBenchmark.objects.TransportationObjectList;
16 import demos.org.goataa.vrpBenchmark.objects.Truck;
17
18 /**
19 * This class allows us to compute the state of the world and, hence, the
20 * objective values. The initial state of the world is t=0 and all objects
21 * are at their starting locations. Then, we can make moves. Every move
22 * consists of a truck driving from one location to another. By doing so,
23 * it may carry objects (containers, orders) and is driven by at least one
24 * driver (but may carry more). After the move, the involved objects are at
25 * their new locations.
26 *
27 * @author Thomas Weise
28 */
29 public class Compute extends PerformanceValues {
30 /** a constant required by Java serialization */
31 private static final long serialVersionUID = 1;
32
33 /** the internal comparator */
34 private static final Comparator<Move> MOVE_COMPARATOR = new CMP();
35
36 /** the drivers */
37 private final DriverTrack[] drivers;
38
39 /** the trucks */
40 private final TruckTrack[] trucks;
41
42   /** the containers */
43 private final Track[] containers;
44
45   /** the orders */
46 private final Track[] orders;
47
48 /** the temporary move list */
49 private transient Move[] temp;
50
51 /** Create a new compute instance */
52 public Compute() {
53 super();
54
55 Track[] t;
56 DriverTrack[] x;
57 TruckTrack[] y;
58 int i;
59
60 i = Truck.TRUCKS.size();
61 this.trucks = y = new TruckTrack[i];
62 for (; (--i) >= 0;) {
63 y[i] = new TruckTrack();
64 }
65
66 i = Driver.DRIVERS.size();
67 this.drivers = x = new DriverTrack[i];
68 for (; (--i) >= 0;) {
69 x[i] = new DriverTrack();
70 }
71
72 i = Container.CONTAINERS.size();
73 this.containers = t = new Track[i];
74 for (; (--i) >= 0;) {
75 t[i] = new Track();
76 }
77
78 i = Order.ORDERS.size();
79 this.orders = t = new Track[i];
80 for (; (--i) >= 0;) {
81 t[i] = new Track();
82 }
83
84 this.temp = null;
85 }
86
87 /**
88 * Initialize the computer: reset all variables and move all objects to
89 * their default locations
90 */
91 @Override
92 public final void init() {
93 Track[] ts;
94 DriverTrack[] dts;
95 TruckTrack[] tts;
96 TruckTrack q;
97 Track t;
98 DriverTrack dt;
99 Truck a;
100 Driver b;
101 Container c;
102 Order o;
103 int i;
104
105 super.init();
106
107 tts = this.trucks;
108 for (i = tts.length; (--i) >= 0;) {
109 q = tts[i];
110 a = Truck.TRUCKS.get(i);
111 q.location = a.start;
112 q.time = 0l;
113 q.orders = null;
114 q.lastTime = -1l;
115 }
116
117 dts = this.drivers;
118 for (i = dts.length; (--i) >= 0;) {
119 dt = dts[i];
120 b = Driver.DRIVERS.get(i);
121 dt.location = b.home;
122 dt.time = 0l;
123 dt.lastDay = -1;
124 }
125
126 ts = this.containers;
127 for (i = ts.length; (--i) >= 0;) {
128 t = ts[i];
129 c = Container.CONTAINERS.get(i);
130 t.location = c.start;
131 t.time = 0l;
132 }
133
134 ts = this.orders;
135 for (i = ts.length; (--i) >= 0;) {
136 t = ts[i];
137 o = Order.ORDERS.get(i);
138 t.location = o.pickupLocation;
139 t.time = o.pickupWindowStart;
140 }
141
142 }
143
144 /**
145 * Make a move: Truck t drives from the location "from" to the location
146 * "to" at the given start time. It carries the drivers "d", the orders
147 * "o", and the containers "c".
148 *
149 * @param t
150 * the truck
151 * @param o
152 * the orders
153 * @param d
154 * the drivers
155 * @param c
156    *          the containers
157 * @param from
158 * the starting location
159 * @param startTime
160 * the start time
161 * @param to
162 * the destination location
163 */
164 public final void move(final Truck t, final Order[] o, final Driver[] d,
165 final Container[] c, final Location from, long startTime,
166 final Location to) {
167 final Track[] lcontainers, lorders;
168 final DriverTrack[] ldrivers;
169 Track track;
170 final TruckTrack truck;
171 Container cc;
172 Location startLoc;
173 long start, end, m, timeNeeded, lastTime;
174 int totalContainers, totalOrders, totalDrivers;
175 int i, ed, sd, td;
176 Order oo;
177 Driver dd;
178 DriverTrack dt;
179 Order[] o2;
180
181 this.clearNew();
182
183 ldrivers = this.drivers;
184 lcontainers = this.containers;
185 lorders = this.orders;
186
187 // move the truck
188 startLoc = from;
189 start = startTime;
190
191 if (t != null) {
192 truck = this.trucks[t.id];
193 lastTime = truck.lastTime;
194
195 // is the truck at the right location?
196 startLoc = truck.location;
197 if (startLoc != from) {
198 this.addTruckError(1);
199 }
200
201 // is it there at the right time
202 if (truck.time > start) {
203 this.addTruckError(1);
204 start = truck.time;
205 }
206
207 // move the truck, if the start location is different from the end
208 if (startLoc != to) {
209 // compute the time to travel
210 m = DistanceMatrix.MATRIX.getDistance(startLoc, to);
211 timeNeeded = DistanceMatrix.MATRIX.getTimeNeeded(startLoc, to, t);
212
213         this.addTruckCosts(((long) (0.5d + ((t.costsPerHKm * timeNeeded * m)
214             / 3600000000.0d))));
215
216 end = (start + timeNeeded);
217 truck.location = to;
218 } else {
219 end = (start + 1l);
220 this.addTruckCosts(1000l);
221 }
222
223 truck.time = end;
224 } else {
225 // no truck? -> error
226 this.addTruckError(1);
227 end = start + 10000000l;
228 this.addTruckCosts(1000l);
229 truck = null;
230 lastTime = -1l;
231 }
232
233 // move all containers
234 totalContainers = 0;
235 if (c != null) {
236 for (i = c.length; (--i) >= 0;) {
237 cc = c[i];
238
239 // aha, a container
240 if (cc != null) {
241 totalContainers++;
242 track = lcontainers[cc.id];
243
244 // check the current location of the container
245 if (track.location != startLoc) {
246 this.addContainerError(1);
247 }
248
249 // check the time at which the container is there
250 if (track.time > start) {
251 this.addContainerError(1);
252 }
253
254 // move the container
255 track.location = to;
256 track.time = end;
257 }
258 }
259
260 // are there too many containers?
261 i = (totalContainers - ((t == null) ? 0 : t.maxContainers));
262 if (i > 0) {
263 this.addContainerError(i);
264 }
265 }
266
267 // move all orders
268 totalOrders = 0;
269 if (o != null) {
270 for (i = o.length; (--i) >= 0;) {
271 oo = o[i];
272
273 // aha, an order
274 if (oo != null) {
275 totalOrders++;
276 track = lorders[oo.id];
277
278 // check the current location of the order
279 if (track.location != startLoc) {
280 this.addOrderError(1);
281 }
282
283 // is the order at the starting location?
284 if ((track.location == oo.pickupLocation)
285 && (track.time <= oo.pickupWindowStart)) {
286
287 if (lastTime >= 0l) {
288 m = windowIntersection(oo.pickupWindowStart,
289 oo.pickupWindowEnd, lastTime, start);
290 } else {
291 m = start;
292 }
293
294 m = Compute.windowViolation(oo.pickupWindowStart,
295 oo.pickupWindowEnd, m);
296
297 if (m > 0l) {
298 this.addPickupErrorTime(m);
299 this.addOrderError(1);
300 }
301 } else {
302 // check the time at which the order is there
303 if (track.time > start) {
304 this.addOrderError(1);
305 }
306 }
307
308 // move the order
309 track.location = to;
310 track.time = end;
311 }
312 }
313 }
314
315 // now let us check if delivery took place in the time windows by
316 // possibly correcting drop-off times. It could be that a truck arrives
317 // before the delivery time window but leaves after it
318 if (truck != null) {
319
320 if (lastTime >= 0l) {
321 o2 = truck.orders;
322 if (o2 != null) {
323 // for each order
324 for (i = o2.length; (--i) >= 0;) {
325 oo = o2[i];
326 if (oo != null) {
327 // get the tracking object
328 track = lorders[oo.id];
329 // is the order still at the place where this truck dropped
330 // it
331 // off and is the order at the delivery location?
332 if ((track.time == lastTime) && (track.location == startLoc)
333 && (track.location == oo.deliveryLocation)) {
334 track.time = windowIntersection(oo.deliveryWindowStart,
335 oo.deliveryWindowEnd, lastTime, start);
336 }
337 }
338 }
339 }
340 }
341
342 truck.orders = o;
343 truck.lastTime = end;
344 }
345
346 // are there more orders than containers?
347 if (totalOrders > totalContainers) {
348 this.addContainerError((totalOrders - totalContainers));
349 }
350
351 // move all drivers
352 totalDrivers = 0;
353 if (d != null) {
354 sd = ((int) ((start / 86400000)));
355 ed = ((int) ((end / 86400000)));
356 td = (ed - sd + 1);
357
358 for (i = d.length; (--i) >= 0;) {
359 dd = d[i];
360
361 // aha, a driver
362 if (dd != null) {
363 totalDrivers++;
364 dt = ldrivers[dd.id];
365
366 // check the current location of the driver
367 if (dt.location != startLoc) {
368 this.addDriverError(1);
369 }
370
371 // check the time at which the driver is there
372 if (dt.time > start) {
373 this.addDriverError(1);
374 }
375
376 // compute costs of that driver: each day, he receives a salary
377 if (ed > dt.lastDay) {
378 dt.lastDay = ed;
379 this.addDriverCosts((td * dd.costsPerDay));
380 }
381
382 // move the driver
383 dt.location = to;
384 dt.time = end;
385 }
386 }
387 }
388
389     // no driver? that's an error
390 if (totalDrivers <= 0) {
391 this.addDriverError(1);
392 }
393
394 this.commitNew();
395 }
396
397 /**
398 * Compute the time window violation.
399 *
400 * @param ws
401 * the window start
402 * @param we
403 * the window end
404 * @param t
405 * the time
406 * @return the window violation
407 */
408 private static final long windowViolation(final long ws, final long we,
409 final long t) {
410 if (t < ws) {
411 return (ws - t);
412 }
413 if (t > we) {
414 return (t - we);
415 }
416 return 0l;
417 }
418
419 /**
420 * Get the best window intersection time
421 *
422 * @param w1s
423 * the start of the first window
424 * @param w1e
425 * the end of the first window
426 * @param w2s
427 * the start of the second window
428 * @param w2e
429 * the end of the second window
430 * @return the earliest/best time within the second window which creates
431 * the smallest violation of the first window
432 */
433 private static final long windowIntersection(final long w1s,
434 final long w1e, final long w2s, final long w2e) {
435
436 if ((w2e >= w1s) && (w2s <= w1e)) {
437 return Math.max(w1s, w2s);
438 }
439
440 if (w2e < w1s) {
441 return w2e;
442 }
443
444 if (w2s > w1e) {
445 return w2s;
446 }
447
448 throw new RuntimeException();
449 }
450
451 /**
452 * Finalize the computation, i.e., check if everything is placed at
453 * positions to which it belongs.
454 */
455 public final void finish() {
456 final Track[] ltrucks, lcontainers, lorders;
457 final DriverTrack[] ldrivers;
458 Track track;
459 int i;
460 Order oo;
461 long y;
462
463 this.clearNew();
464
465 // check the trucks
466 ltrucks = this.trucks;
467 for (i = ltrucks.length; (--i) >= 0;) {
468 track = ltrucks[i];
469 // all trucks need to be at a depot
470 if (!(track.location.isDepot)) {
471 this.addTruckError(1);
472 }
473 }
474
475 // check the drivers
476 ldrivers = this.drivers;
477 for (i = ldrivers.length; (--i) >= 0;) {
478 track = ldrivers[i];
479 // all drivers need to be home at the end of the jobs
480 if (track.location != Driver.DRIVERS.get(i).home) {
481 this.addDriverError(1);
482 }
483 }
484
485 // check the containers
486 lcontainers = this.containers;
487 for (i = lcontainers.length; (--i) >= 0;) {
488 track = lcontainers[i];
489 // all containers need to be at a depot
490 if (!(track.location.isDepot)) {
491 this.addContainerError(1);
492 }
493 }
494
495 // check the orders
496 lorders = this.orders;
497 for (i = lorders.length; (--i) >= 0;) {
498 track = lorders[i];
499 oo = Order.ORDERS.get(i);
500 // all orders need to be at their destination
501 if (track.location != oo.deliveryLocation) {
502 this.addOrderError(1);
503 }
504 y = windowViolation(oo.deliveryWindowStart, oo.deliveryWindowEnd,
505 track.time);
506 if (y > 0) {
507 this.addOrderError(1);
508 this.addDeliveryErrorTime(y);
509 }
510 }
511
512 this.commitNew();
513 }
514
515 /**
516 * Make a move
517 *
518 * @param m
519 * the move
520 */
521 public final void move(final Move m) {
522 if (m != null) {
523 this.move(m.truck, m.orders, m.drivers, m.containers, m.from,
524 m.startTime, m.to);
525 }
526 }
527
528 /**
529 * Do some moves internally
530 *
531 * @param moves
532 * the moves
533 * @param len
534 * the length
535 */
536 private final void move(final Move[] moves, final int len) {
537 int i;
538
539 Arrays.sort(moves, 0, len, MOVE_COMPARATOR);
540 for (i = len; (--i) >= 0;) {
541 this.move(moves[i]);
542 }
543 }
544
545 /**
546 * Get the internal move array
547 *
548 * @param len
549 * the length
550 * @return the move array
551 */
552 private final Move[] getMoves(final int len) {
553 Move[] m;
554
555 m = this.temp;
556 if ((m == null) || (m.length < len)) {
557 return (this.temp = new Move[len]);
558 }
559 return m;
560 }
561
562 /**
563 * Make the given moves
564 *
565 * @param ms
566 * the move list
567 */
568 public final void move(final TransportationObjectList<Move> ms) {
569 int i, s;
570 Move[] m;
571
572 if (ms != null) {
573 s = ms.size();
574 if (s > 0) {
575 m = this.getMoves(s);
576 for (i = s; (--i) >= 0;) {
577 m[i] = ms.get(i);
578 }
579 this.move(m, s);
580 }
581 }
582 }
583
584 /**
585 * Make the given moves
586 *
587 * @param ms
588 * the moves
589 */
590 public final void move(final Move[] ms) {
591 Move[] m;
592 int i;
593
594 if (ms != null) {
595 i = ms.length;
596 if (i > 0) {
597 m = this.getMoves(i);
598 System.arraycopy(ms, 0, m, 0, i);
599 this.move(m, i);
600 }
601 }
602 }
603
604 /**
605 * Get the location where the given container was placed last
606 *
607 * @param c
608 * the container
609 * @return the location where the given container was placed last
610 */
611 public final Location getLocation(final Container c) {
612 return this.containers[c.id].location;
613 }
614
615 /**
616 * Get the location where the given truck was placed last
617 *
618 * @param c
619 * the truck
620 * @return the location where the given truck was placed last
621 */
622 public final Location getLocation(final Truck c) {
623 return this.trucks[c.id].location;
624 }
625
626 /**
627 * Get the location where the given driver was placed last
628 *
629 * @param c
630 * the driver
631 * @return the location where the given driver was placed last
632 */
633 public final Location getLocation(final Driver c) {
634 return this.drivers[c.id].location;
635 }
636
637 /**
638 * Get the location where the given order was placed last
639 *
640 * @param c
641 * the order
642 * @return the location where the given order was placed last
643 */
644 public final Location getLocation(final Order c) {
645 return this.orders[c.id].location;
646 }
647
648 /**
649 * A class used for tracking
650 *
651 * @author Thomas Weise
652 */
653 private static class Track implements Serializable {
654 /** a constant required by Java serialization */
655 private static final long serialVersionUID = 1;
656
657 /** the location */
658 Location location;
659
660 /** the time */
661 long time;
662
663 /** the track */
664 Track() {
665 super();
666 }
667
668 }
669
670 /**
671 * A class used for driver tracking
672 *
673 * @author Thomas Weise
674 */
675 private static class DriverTrack extends Track {
676 /** a constant required by Java serialization */
677 private static final long serialVersionUID = 1;
678
679     /** the last day on which the driver was driving */
680 int lastDay;
681
682 /** the track */
683 DriverTrack() {
684 super();
685 }
686
687 }
688
689 /**
690 * A class used for truck tracking
691 *
692 * @author Thomas Weise
693 */
694 private static class TruckTrack extends Track {
695 /** a constant required by Java serialization */
696 private static final long serialVersionUID = 1;
697
698 /** the last time the truck arrived somewhere */
699 long lastTime;
700
701 /** orders which were dropped off */
702 Order[] orders;
703
704 /** the track */
705 TruckTrack() {
706 super();
707 }
708
709 }
710
711 /**
712 * The move comparator for sorting
713 *
714 * @author Thomas Weise
715 */
716 private static final class CMP implements Comparator<Move> {
717
718 /** instantiate the comparator */
719 CMP() {
720 super();
721 }
722
723 /**
724      * Compare two moves by start time, in descending order
725 *
726 * @param a
727 * the first move
728 * @param b
729 * the second move
730 * @return the comparison result
731 */
732 public final int compare(final Move a, final Move b) {
733 if (a.startTime < b.startTime) {
734 return 1;
735 }
736 if (b.startTime < a.startTime) {
737 return -1;
738 }
739 return 0;
740 }
741 }
742 }
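The time-window arithmetic at the heart of the lateness accounting above, the windowViolation and windowIntersection helpers, can also be tried in isolation. The following stand-alone sketch (the class name WindowDemo is ours and not part of the benchmark sources) reproduces their logic and shows how an arrival outside a delivery window is penalized:

```java
/** A minimal, self-contained sketch of the time-window helpers above. */
public final class WindowDemo {

  /** How far the time t lies outside the window [ws, we]; 0 if inside. */
  static long windowViolation(final long ws, final long we, final long t) {
    if (t < ws) {
      return (ws - t); // too early: distance to the window start
    }
    if (t > we) {
      return (t - we); // too late: distance past the window end
    }
    return 0L; // inside the window: no violation
  }

  /** The earliest time in [w2s, w2e] minimizing the violation of [w1s, w1e]. */
  static long windowIntersection(final long w1s, final long w1e,
      final long w2s, final long w2e) {
    if ((w2e >= w1s) && (w2s <= w1e)) {
      return Math.max(w1s, w2s); // windows overlap: earliest feasible time
    }
    if (w2e < w1s) {
      return w2e; // second window ends too early: its end is closest
    }
    return w2s; // second window starts too late: its start is closest
  }

  public static void main(final String[] args) {
    // a delivery window [100, 200] and a truck present during [150, 400]
    long t = windowIntersection(100L, 200L, 150L, 400L);
    System.out.println(t); // 150: overlap, earliest feasible drop-off
    System.out.println(windowViolation(100L, 200L, t)); // 0: in time
    // truck only present during [300, 400]: best it can do is arrive at 300
    t = windowIntersection(100L, 200L, 300L, 400L);
    System.out.println(windowViolation(100L, 200L, t)); // 100 time units late
  }
}
```

This also illustrates why Compute first corrects drop-off times via windowIntersection before charging a violation: a truck that arrives before a window opens but leaves after it closes can still deliver in time.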
Appendices
Appendix A
Citation Suggestion

@book{W2011GOEB,
author = {Thomas Weise},
title = {Global Optimization Algorithms -- Theory and Application},
year = {2011},
month = dec # {~7,},
publisher = {www.it-weise.de},
howpublished = {Online as E-Book},
edition = {third},
copyright = {Copyright © 2006-2011 Thomas Weise, licensed under GNU FDL},
keywords = {Global Optimization, Evolutionary Computation, Evolutionary
Algorithms, Genetic Algorithms, Genetic Programming, Evolution Strategy,
Learning Classifier Systems, Extremal Optimization, Raindrop Method,
Ant Colony Optimization, Particle Swarm Optimization, Downhill Simplex,
Simulated Annealing, Hill Climbing, Combinatorial Optimization,
Numerical Optimization, Java},
language = {en},
url = {http://www.it-weise.de/projects/book.pdf},
}

%0 Book
%A Weise, Thomas
%T Global Optimization Algorithms – Theory and Application
%F Thomas Weise: “Global Optimization Algorithms -- Theory and Application”
%I www.it-weise.de
%D 2011
%8 2011-12-07
%7 third
%9 Available online as E-Book at http://www.it-weise.de/projects/book.pdf
Appendix B
GNU Free Documentation License (FDL)

Version 1.3, 3 November 2008


Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.

http://fsf.org/ [accessed 2009-06-07]

Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.

Preamble

The purpose of this License is to make a manual, textbook, or other functional and useful
document “free” in the sense of freedom: to assure everyone the effective freedom to copy
and redistribute it, with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way to get credit for their
work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document
must themselves be free in the same sense. It complements the GNU General Public License,
which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free
software needs free documentation: a free program should come with manuals providing the
same freedoms that the software does. But this License is not limited to software manuals; it
can be used for any textual work, regardless of subject matter or whether it is published as a
printed book. We recommend this License principally for works whose purpose is instruction
or reference.

B.1 Applicability and Definitions

This License applies to any manual or other work, in any medium, that contains a notice
placed by the copyright holder saying it can be distributed under the terms of this License.
Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein. The “Document”, below, refers to any such
manual or work. Any member of the public is a licensee, and is addressed as “you”. You
accept the license if you copy, modify or distribute the work in a way requiring permission
under copyright law.
A “Modified Version” of the Document means any work containing the Document or
a portion of it, either copied verbatim, or with modifications and/or translated into another
language.
A “Secondary Section” is a named appendix or a front-matter section of the Document
that deals exclusively with the relationship of the publishers or authors of the Document
to the Document’s overall subject (or to related matters) and contains nothing that could
fall directly within that overall subject. (Thus, if the Document is in part a textbook of
mathematics, a Secondary Section may not explain any mathematics.) The relationship
could be a matter of historical connection with the subject or with related matters, or of
legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated,
as being those of Invariant Sections, in the notice that says that the Document is released
under this License. If a section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant. The Document may contain zero Invariant Sections.
If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under this
License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at
most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented
in a format whose specification is available to the general public, that is suitable for revising
the document straightforwardly with generic text editors or (for images composed of pixels)
generic paint programs or (for drawings) some widely available drawing editor, and that is
suitable for input to text formatters or for automatic translation to a variety of formats
suitable for input to text formatters. A copy made in an otherwise Transparent file format
whose markup, or absence of markup, has been arranged to thwart or discourage subsequent
modification by readers is not Transparent. An image format is not Transparent if used for
any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without
markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly
available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human
modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque
formats include proprietary formats that can be read and edited only by proprietary word
processors, SGML or XML for which the DTD and/or processing tools are not generally
available, and the machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following
pages as are needed to hold, legibly, the material this License requires to appear in the title
page. For works in formats which do not have any title page as such, “Title Page” means
the text near the most prominent appearance of the work’s title, preceding the beginning of
the body of the text.
The “publisher” means any person or entity that distributes copies of the Document
to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title
either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ
in another language. (Here XYZ stands for a specific section name mentioned below,
such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To
“Preserve the Title” of such a section when you modify the Document means that it
remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that
this License applies to the Document. These Warranty Disclaimers are considered to be
included by reference in this License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has no effect on the
meaning of this License.

B.2 Verbatim Copying


You may copy and distribute the Document in any medium, either commercially or
noncommercially, provided that this License, the copyright notices, and the license notice saying
this License applies to the Document are reproduced in all copies, and that you add no
other conditions whatsoever to those of this License. You may not use technical measures
to obstruct or control the reading or further copying of the copies you make or distribute.
However, you may accept compensation in exchange for copies. If you distribute a large
enough number of copies you must also follow the conditions in Section B.3.
You may also lend copies, under the same conditions stated above, and you may publicly
display copies.

B.3 Copying in Quantity


If you publish printed copies (or copies in media that commonly have printed covers) of
the Document, numbering more than 100, and the Document’s license notice requires Cover
Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both
covers must also clearly and legibly identify you as the publisher of these copies. The front
cover must present the full title with all words of the title equally prominent and visible.
You may add other material on the covers in addition. Copying with changes limited to the
covers, as long as they preserve the title of the Document and satisfy these conditions, can
be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the
first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto
adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100,
you must either include a machine-readable Transparent copy along with each Opaque copy,
or state in or with each Opaque copy a computer-network location from which the general
network-using public has access to download using public-standard network protocols a
complete Transparent copy of the Document, free of added material. If you use the latter
option, you must take reasonably prudent steps, when you begin distribution of Opaque
copies in quantity, to ensure that this Transparent copy will remain thus accessible at the
stated location until at least one year after the last time you distribute an Opaque copy
(directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well
before redistributing any large number of copies, to give them a chance to provide you with
an updated version of the Document.

B.4 Modifications
You may copy and distribute a Modified Version of the Document under the conditions
of sections 2 and 3 above, provided that you release the Modified Version under precisely
this License, with the Modified Version filling the role of the Document, thus licensing
distribution and modification of the Modified Version to whoever possesses a copy of it. In
addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the
Document, and from those of previous versions (which should, if there were any, be
listed in the History section of the Document). You may use the same title as a
previous version if the original publisher of that version gives permission.

B. List on the Title Page, as authors, one or more persons or entities responsible for
authorship of the modifications in the Modified Version, together with at least five of
the principal authors of the Document (all of its principal authors, if it has fewer than
five), unless they release you from this requirement.

C. State on the Title page the name of the publisher of the Modified Version, as the
publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form
shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required Cover
Texts given in the Document’s license notice.

H. Include an unaltered copy of this License.

I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified Version as
given on the Title Page. If there is no section Entitled “History” in the Document,
create one stating the title, year, authors, and publisher of the Document as given
on its Title Page, then add an item describing the Modified Version as stated in the
previous sentence.

J. Preserve the network location, if any, given in the Document for public access to a
Transparent copy of the Document, and likewise the network locations given in the
Document for previous versions it was based on. These may be placed in the “History”
section. You may omit a network location for a work that was published at least four
years before the Document itself, or if the original publisher of the version it refers to
gives permission.

K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title
of the section, and preserve in the section all the substance and tone of each of the
contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text and in
their titles. Section numbers or the equivalent are not considered part of the section
titles.

M. Delete any section Entitled “Endorsements”. Such a section may not be included in
the Modified Version.

N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title
with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as
Secondary Sections and contain no material copied from the Document, you may at your
option designate some or all of these sections as invariant. To do this, add their titles to
the list of Invariant Sections in the Modified Version’s license notice. These titles must be
distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but en-
dorsements of your Modified Version by various parties – for example, statements of peer
review or that the text has been approved by an organization as the authoritative definition
of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up
to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified
Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added
by (or through arrangements made by) any one entity. If the Document already includes
a cover text for the same cover, previously added by you or by arrangement made by the
same entity you are acting on behalf of, you may not add another; but you may replace the
old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to
use their names for publicity for or to assert or imply endorsement of any Modified Version.

B.5 Combining Documents

You may combine the Document with other documents released under this License, under
the terms defined in Section B.4 above for modified versions, provided that you include in
the combination all of the Invariant Sections of all of the original documents, unmodified,
and list them all as Invariant Sections of your combined work in its license notice, and that
you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections
with the same name but different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original author or publisher of that
section if known, or else a unique number. Make the same adjustment to the section titles
in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various
original documents, forming one section Entitled “History”; likewise combine any sections
Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete
all sections Entitled “Endorsements”.

B.6 Collections of Documents

You may make a collection consisting of the Document and other documents released
under this License, and replace the individual copies of this License in the various documents
with a single copy that is included in the collection, provided that you follow the rules of
this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually
under this License, provided you insert a copy of this License into the extracted document,
and follow this License in all other respects regarding verbatim copying of that document.

B.7 Aggregation with Independent Works

A compilation of the Document or its derivatives with other separate and independent
documents or works, in or on a volume of a storage or distribution medium, is called an
“aggregate” if the copyright resulting from the compilation is not used to limit the legal rights
of the compilation’s users beyond what the individual works permit. When the Document
is included in an aggregate, this License does not apply to the other works in the aggregate
which are not themselves derivative works of the Document.
If the Cover Text requirement of Section B.3 is applicable to these copies of the Docu-
ment, then if the Document is less than one half of the entire aggregate, the Document’s
Cover Texts may be placed on covers that bracket the Document within the aggregate, or
the electronic equivalent of covers if the Document is in electronic form. Otherwise they
must appear on printed covers that bracket the whole aggregate.

B.8 Translation

Translation is considered a kind of modification, so you may distribute translations of the
Document under the terms of Section B.4. Replacing Invariant Sections with translations
requires special permission from their copyright holders, but you may include translations of
some or all Invariant Sections in addition to the original versions of these Invariant Sections.
You may include a translation of this License, and all the license notices in the Document,
and any Warranty Disclaimers, provided that you also include the original English version
of this License and the original versions of those notices and disclaimers. In case of a
disagreement between the translation and the original version of this License or a notice or
disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “His-
tory”, the requirement (Section B.4) to Preserve its Title (Section B.1) will typically require
changing the actual title.

B.9 Termination

You may not copy, modify, sublicense, or distribute the Document except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute
it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular
copyright holder is reinstated (a) provisionally, unless and until the copyright holder explic-
itly and finally terminates your license, and (b) permanently, if the copyright holder fails to
notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if
the copyright holder notifies you of the violation by some reasonable means, this is the first
time you have received notice of violation of this License (for any work) from that copyright
holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties
who have received copies or rights from you under this License. If your rights have been
terminated and not permanently reinstated, receipt of a copy of some or all of the same
material does not give you any rights to use it.

B.10 Future Revisions of this License

The Free Software Foundation may publish new, revised versions of the GNU Free Doc-
umentation License from time to time. Such new versions will be similar in spirit to the
present version, but may differ in detail to address new problems or concerns. See
http://www.gnu.org/copyleft/ [accessed 2009-06-07].
Each version of the License is given a distinguishing version number. If the Document
specifies that a particular numbered version of this License “or any later version” applies
to it, you have the option of following the terms and conditions either of that specified
version or of any later version that has been published (not as a draft) by the Free Software
Foundation. If the Document does not specify a version number of this License, you may
choose any version ever published (not as a draft) by the Free Software Foundation. If the
Document specifies that a proxy can decide which future versions of this License can be
used, that proxy’s public statement of acceptance of a version permanently authorizes you
to choose that version for the Document.

B.11 Relicensing

“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web
server that publishes copyrightable works and also provides prominent facilities for anybody
to edit those works. A public wiki that anybody can edit is an example of such a server. A
“Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of
copyrightable works thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published
by Creative Commons Corporation, a not-for-profit corporation with a principal place of
business in San Francisco, California, as well as future copyleft versions of that license
published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part, as part of
another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and if all works
that were first published under this License somewhere other than this MMC, and subse-
quently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant
sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-
BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible
for relicensing.

Addendum: How to use this License for your documents

To use this License in a document you have written, include a copy of the License in the
document and put the following copyright and license notices just after the title page:

Copyright © YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no Front-
Cover Texts, and no Back-Cover Texts. A copy of the license is included in the
section entitled “GNU Free Documentation License”.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the
“with . . . Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover
Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other combination of the
three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing
these examples in parallel under your choice of free software license, such as the GNU General
Public License, to permit their use in free software.
Appendix C
GNU Lesser General Public License (LGPL)

Version 3, 29 June 2007


Copyright © 2007 Free Software Foundation, Inc.
http://fsf.org/ [accessed 2009-06-07]

Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.
This version of the GNU Lesser General Public License incorporates the terms and
conditions of version 3 of the GNU General Public License, supplemented by the additional
permissions listed below.

C.1 Additional Definitions

As used herein, “this License” refers to version 3 of the GNU Lesser General Public
License, and the “GNU GPL” refers to version 3 of the GNU General Public License.
“The Library” refers to a covered work governed by this License, other than an Appli-
cation or a Combined Work as defined below.
An “Application” is any work that makes use of an interface provided by the Library,
but which is not otherwise based on the Library. Defining a subclass of a class defined by
the Library is deemed a mode of using an interface provided by the Library.
A “Combined Work” is a work produced by combining or linking an Application with
the Library. The particular version of the Library with which the Combined Work was made
is also called the “Linked Version”.
The “Minimal Corresponding Source” for a Combined Work means the Corresponding
Source for the Combined Work, excluding any source code for portions of the Combined
Work that, considered in isolation, are based on the Application, and not on the Linked
Version.
The “Corresponding Application Code” for a Combined Work means the object code
and/or source code for the Application, including any data and utility programs needed for
reproducing the Combined Work from the Application, but excluding the System Libraries
of the Combined Work.

C.2 Exception to Section 3 of the GNU GPL

You may convey a covered work under Section C.4 and Section C.5 of this License without
being bound by section 3 of the GNU GPL.
C.3 Conveying Modified Versions

If you modify a copy of the Library, and, in your modifications, a facility refers to a
function or data to be supplied by an Application that uses the facility (other than as an
argument passed when the facility is invoked), then you may convey a copy of the modified
version:

1. under this License, provided that you make a good faith effort to ensure that, in the
event an Application does not supply the function or data, the facility still operates,
and performs whatever part of its purpose remains meaningful, or

2. under the GNU GPL, with none of the additional permissions of this License applicable
to that copy.

C.4 Object Code Incorporating Material from Library Header Files

The object code form of an Application may incorporate material from a header file
that is part of the Library. You may convey such object code under terms of your choice,
provided that, if the incorporated material is not limited to numerical parameters, data
structure layouts and accessors, or small macros, inline functions and templates (ten or
fewer lines in length), you do both of the following:

1. Give prominent notice with each copy of the object code that the Library is used in it
and that the Library and its use are covered by this License.

2. Accompany the object code with a copy of the GNU GPL and this license document.

C.5 Combined Works

You may convey a Combined Work under terms of your choice that, taken together,
effectively do not restrict modification of the portions of the Library contained in the Com-
bined Work and reverse engineering for debugging such modifications, if you also do each of
the following:

1. Give prominent notice with each copy of the Combined Work that the Library is used
in it and that the Library and its use are covered by this License.

2. Accompany the Combined Work with a copy of the GNU GPL and this license docu-
ment.

3. For a Combined Work that displays copyright notices during execution, include the
copyright notice for the Library among these notices, as well as a reference directing
the user to the copies of the GNU GPL and this license document.

4. Do one of the following:


(a) Convey the Minimal Corresponding Source under the terms of this License, and
the Corresponding Application Code in a form suitable for, and under terms that
permit, the user to recombine or relink the Application with a modified version
of the Linked Version to produce a modified Combined Work, in the manner
specified by section 6 of the GNU GPL for conveying Corresponding Source.
(b) Use a suitable shared library mechanism for linking with the Library. A suitable
mechanism is one that (a) uses at run time a copy of the Library already present
on the user’s computer system, and (b) will operate properly with a modified
version of the Library that is interface-compatible with the Linked Version.
5. Provide Installation Information, but only if you would otherwise be required to pro-
vide such information under section 6 of the GNU GPL, and only to the extent that
such information is necessary to install and execute a modified version of the Combined
Work produced by recombining or relinking the Application with a modified version
of the Linked Version. (If you use option Section C.5-a, the Installation Information
must accompany the Minimal Corresponding Source and Corresponding Application
Code. If you use option Section C.5-b, you must provide the Installation Information
in the manner specified by section 6 of the GNU GPL for conveying Corresponding
Source.)

C.6 Combined Libraries

You may place library facilities that are a work based on the Library side by side in
a single library together with other library facilities that are not Applications and are not
covered by this License, and convey such a combined library under terms of your choice, if
you do both of the following:
1. Accompany the combined library with a copy of the same work based on the Library,
uncombined with any other library facilities, conveyed under the terms of this License.
2. Give prominent notice with the combined library that part of it is a work based on
the Library, and explaining where to find the accompanying uncombined form of the
same work.

C.7 Revised Versions of the GNU Lesser General Public License

The Free Software Foundation may publish revised and/or new versions of the GNU
Lesser General Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library as you received it
specifies that a certain numbered version of the GNU Lesser General Public License “or any
later version” applies to it, you have the option of following the terms and conditions either
of that published version or of any later version published by the Free Software Foundation.
If the Library as you received it does not specify a version number of the GNU Lesser General
Public License, you may choose any version of the GNU Lesser General Public License ever
published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide whether future versions
of the GNU Lesser General Public License shall apply, that proxy’s public statement of
acceptance of any version is permanent authorization for you to choose that version for the
Library.
Appendix D
Credits and Contributors

In this section I want to give credit where credit is due (to those who directly or indirectly
contributed to this book). So it’s props to:

Stefan Achler
For working together at the 2007 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Steffen Bleul
For working together at the 2006, 2007, and 2008 Web Service Challenge.
See ?? on page ?? and [334, 335, 2903, 2908]
Raymond Chiong
For co-authoring a book chapter corresponding to Part II and, by doing so, helping to correct
many mistakes.
See Part II on page 138
Distributed Systems Group, University of Kassel
To the whole Distributed Systems Group at the University of Kassel for being supportive
and coming up again and again with new ideas and helpful advice.
Each and every research project described in this book.
Qiu Li
For careful reading and pointing out inconsistencies and spelling mistakes.
See Chapter 51
Gan Min
For careful reading and pointing out inconsistencies and spelling mistakes.
See especially the Pareto optimization area Section 3.3.5 on page 65.
Kurt Geihs
For being supportive and contributing to many of my research projects.
See for instance ?? on page ?? and [334, 335, 2896, 2897, 2904]
Martin Göb
For working together at the 2007 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Ibrahim Ibrahim
For careful proofreading.
See Chapter 1 on page 21.
Antonio Nebro
For co-authoring a book chapter corresponding to Part II and, by doing so, helping to correct
many mistakes.
See Chapter 3 on page 51
Stefan Niemczyk
For careful proofreading and scientific support.
See for instance Chapter 1 on page 21, Section 50.2.7 on page 566, and [2037, 2910].
Roland Reichle
For fruitful discussions and scientific support.
See for instance Section 53.6.2 on page 693, Section 50.2.7 on page 566, and [2910].
Laurent Rodriguez
For contributing corrections in some of the chapters.
See for instance Section 53.1 on page 653 and Section 51.11 on page 647.
Richard Seabrook
For pointing out mistakes in some of the set theory definitions.
See Chapter 51 on page 639
Hendrik Skubch
For fruitful discussions and scientific support.
See for instance ?? on page ??, Section 50.2.7 on page 566, and [2910].
Christian Voigtmann
For working together at the 2007 and 2008 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Wibul Wongpoowarak
For valuable suggestions in different areas, like including random optimization.
See for instance ?? on page ?? [2971], Chapter 44 on page 505.
Michael Zapf
For helping me to make many sections more comprehensible.
See for instance ?? on page ??, ?? on page ??, Section 31.5 on page 409, and [2906? ].
References

1. Proceedings of the Fifth Conference on Innovative Applications of Artificial Intelligence
(IAAI’93), July 11–15, 1993, Washington, DC, USA. AAAI Press: Menlo Park, CA, USA.
isbn: 0-929280-46-6. Partly available at http://www.aaai.org/Conferences/IAAI/iaai93.php
[accessed 2007-09-06].
2. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI’10),
July 11–15, 2010, Westin Peachtree Plaza: Atlanta, GA, USA. AAAI Press: Menlo Park,
CA, USA.
3. Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI’11), August 7–
11, 2011, Hyatt Regency San Francisco: San Francisco, CA, USA. AAAI Press: Menlo Park,
CA, USA.
4. Proceedings of the 23rd Conference on Innovative Applications of Artificial Intelligence
(IAAI’11), August 9–11, 2011, Hyatt Regency San Francisco: San Francisco, CA, USA.
AAAI Press: Menlo Park, CA, USA.
5. Emile H. L. Aarts and Jan Karel Lenstra, editors. Local Search in Combinatorial Opti-
mization, Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Math-
ematics and Optimization. Princeton University Press: Princeton, NJ, USA, 1997–2003.
isbn: 0585277540 and 0691115222. Google Books ID: NWghN9G7q9MC. OCLC: 45733970.
6. Florin Abelès, editor. Optical Interference Coatings, June 6–10, 1994, Grenoble, France,
volume 2253 in The Proceedings of SPIE. Society of Photographic Instrumentation Engineers
(SPIE): Bellingham, WA, USA. isbn: 0819415626. Google Books ID: 9OxTAAAAMAAJ and
cutTAAAAMAAJ. OCLC: 31676720.
7. Ajith Abraham and Mario Köppen, editors. Hybrid Information Systems, Proceedings of the
First International Workshop on Hybrid Intelligent Systems (HIS’01), December 11–12, 2001,
Adelaide University, Main Campus: Adelaide, SA, Australia, Advances in Soft Computing.
Physica-Verlag GmbH & Co.: Heidelberg, Germany. isbn: 3-7908-1480-6. Partly available
at http://his01.hybridsystem.com/ [accessed 2007-09-01].
8. Ajith Abraham, Lakhmi C. Jain, and Robert Goldberg, editors. Evolutionary Multiobjective
Optimization – Theoretical Advances and Applications, Advanced Information and Knowledge
Processing. Springer-Verlag GmbH: Berlin, Germany. isbn: 1852337877. Google Books
ID: Ei7q1YSjiSAC.
9. Ajith Abraham, Javier Ruiz-del-Solar, and Mario Köppen, editors. Soft Computing Systems
– Design, Management and Applications (HIS’02), December 1–4, 2002, Santiago, Chile,
volume 87 in Frontiers in Artificial Intelligence and Applications. IOS Press: Amsterdam,
The Netherlands. isbn: 1-58603-297-6.
10. Ajith Abraham, Mario Köppen, and Katrin Franke, editors. Design and Application of Hybrid
Intelligent Systems, Proceedings of the Third International Conference on Hybrid Intelligent
Systems (HIS’03), December 14–17, 2003, Monash Conference Centre: Melbourne, VIC,
Australia, volume 105 in Frontiers in Artificial Intelligence and Applications. IOS Press:
Amsterdam, The Netherlands. isbn: 1-58603-394-8. Partly available at
http://his03.hybridsystem.com/ [accessed 2007-09-01].
11. Ajith Abraham, Yasuhiko Dote, Takeshi Furuhashi, Azuma Ohuchi, and Yukio Ohsawa,
editors. The Fourth IEEE International Workshop on Soft Computing as Transdisci-
plinary Science and Technology (WSTST’05), May 25–27, 2005, Muroran, Japan, volume
29/2005 in Advances in Intelligent and Soft Computing. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/3-540-32391-0. isbn: 3-540-25055-7. Google Books ID: gLGl5vBTnrAC. Li-
brary of Congress Control Number (LCCN): 2005923683.
12. Ajith Abraham, Andre Carvalho, Francisco Herrera Triguero, and Vijayalakshmi Pai, editors.
World Congress on Nature and Biologically Inspired Computing (NaBIC’09), December 9–11,
2009, PSG College of Technology: Coimbatore, India. Library of Congress Control Number
(LCCN): 2009907135. Catalogue no.: CFP0995H.
13. Ajith Abraham, Aboul-Ella Hassanien, and André Ponce de Leon F. de Carvalho, editors.
Foundations of Computational Intelligence – Volume 4: Bio-Inspired Data Mining, volume
204/2009 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2009.
doi: 10.1007/978-3-642-01088-0.
14. Ajith Abraham, Mohamed Kamel, and Ronald Yager, editors. Proceedings of the 11th Inter-
national Conference on Hybrid Intelligent Systems (HIS’11), December 5–8, 2011, Melaka,
Malaysia. IEEE Computer Society Press: Los Alamitos, CA, USA.
15. Faris N. Abuali, Dale A. Schoenefeld, and Roger L. Wainwright. Designing Telecom-
munications Networks using Genetic Algorithms and Probabilistic Minimum Spanning
Trees. In SAC’94 [734], pages 242–246, 1994. doi: 10.1145/326619.326733. Fully available
at http://doi.acm.org/10.1145/326619.326733 and
http://euler.mcs.utulsa.edu/˜rogerw/papers/Abuali-Prufer-CSC-94.pdf [accessed 2008-08-28].
See also [16].
16. Faris N. Abuali, Dale A. Schoenefeld, and Roger L. Wainwright. Terminal Assignment
in a Communications Network using Genetic Algorithms. In CSC’94 [584], pages 74–81,
1994. doi: 10.1145/197530.197559. Fully available at
http://euler.mcs.utulsa.edu/˜rogerw/papers/Abuali-Terminal-CSC-94.pdf [accessed 2008-08-28].
See also [15].
17. Congress on Numerical Methods in Combinatorial Optimization, March 1986, Capri, Italy.
Academic Press Professional, Inc.: San Diego, CA, USA.
18. Wolfgang Achtziger and Mathias Stolpe. Truss Topology Optimization with Discrete Design
Variables – Guaranteed Global Optimality and Benchmark Examples. Structural and Mul-
tidisciplinary Optimization, 34(1):1–20, July 2007, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/s00158-006-0074-2.
19. David H. Ackley. A Connectionist Machine for Genetic Hillclimbing, volume 28 in Kluwer
International Series in Engineering and Computer Science, The Springer International Series
in Engineering and Computer Science. PhD thesis, Carnegy Mellon University (CMU): Pitts-
burgh, PA, USA, Kluwer Academic Publishers: Norwell, MA, USA and Springer US: Boston,
MA, USA, August 31, 1987. isbn: 0-89838-236-X. OCLC: 16005487, 475896170, and
493204271. Library of Congress Control Number (LCCN): 87013536. GBV-Identification
(PPN): 016581156 and 016643720. LC Classification: Q336 .A25 1987.
20. Proceedings of the Third Annual ACM Symposium on Theory of Computing (STOC’71), 1971,
Shaker Heights, OH, USA. ACM Press: New York, NY, USA.
21. Proceedings of the ACM Annual Conference (ACM’72), Boston, MA, USA. ACM Press: New
York, NY, USA.
22. Proceedings of the 1999 ACM SIGMOD International Conference on Management of
Data (SIGMOD’99), 1999, Philadelphia, PA, USA. ACM Press: New York, NY, USA.
isbn: 1-58113-084-8.
23. Proceedings of the 2000 ACM symposium on Applied computing (SAC’00), March 19–21,
2000, Villa Olmo, Como, Italy. ACM Press: New York, NY, USA. isbn: 1-58113-240-9.
OCLC: 57310819.
24. Proceedings of the 2001 ACM Symposium on Applied Computing (SAC’01), March 11–14,
2001, Alexis Park Resort Hotel: Las Vegas, NV, USA. ACM Press: New York, NY, USA.
isbn: 1-58113-287-5. OCLC: 47677896 and 74839643.
25. Proceedings of the Eleventh International Conference on Information and Knowledge Man-
agement (CIKM’02), November 4–9, 2002, McLean, VA, USA. ACM Press: New York, NY,
USA. isbn: 1-58113-590-4.
26. Proceedings of the 11th international conference on World Wide (WWW’02), May 7–11, 2002,
Honolulu, HI, USA. ACM Press: New York, NY, USA. isbn: 1-58113-449-5.
27. Eighth International Workshop on Learning Classifier Systems (IWLCS’05), June 25–29,
2005, Washington, DC, USA. ACM Press: New York, NY, USA. Part of [304]. See also
[1589].
28. Proceedings of the Conference on Information and Knowledge Management (CIKM’07),
November 6–10, 2007, Lisbon, Portugal. ACM Press: New York, NY, USA.
29. ACM SIGEVO Workshop on Foundations of Genetic Algorithms X (FOGA’09), January 9–
11, 2009, Orlando, FL, USA. ACM Press: New York, NY, USA. ACM Order No.: 910094.
30. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO’10), July 7–
11, 2010, Portland Marriott Downtown Waterfront Hotel: Portland, OR, USA. ACM Press:
New York, NY, USA. Partly available at http://www.sigevo.org/gecco-2010/
[accessed 2011-02-28]. ACM Order No.: 910102. See also [31].
31. Companion Publication of the Genetic and Evolutionary Computation Conference
(GECCO’10 Companion), July 7–11, 2010, Portland Marriott Downtown Waterfront Hotel:
Portland, OR, USA. ACM Press: New York, NY, USA. ACM Order No.: 910102. See
also [30].
32. Christoph Adami, Richard K. Belew, Hiroaki Kitano, and Charles E. Taylor, editors. Proceed-
ings of the Sixth International Conference on Artificial Life (Artificial Life VI), June 27–29,
1998, University of California (UCLA): Los Angeles, CA, USA, volume 6 in Complex Adap-
tive Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-51099-5.
Google Books ID: K2M6VDCQ5mMC. OCLC: 39013808 and 245676852.
33. Proceedings of the Australasian Joint Conference on Artificial Intelligence Conference
(AI’10), December 7–10, 2010, Adelaide, SA, Australia.
34. Proceedings of the 6th International Conference on Database Theory (ICDT’97). Foto N.
Afrati and Phokion G. Kolaitis, editors, volume 1186 in Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany, January 8–10, 1997, Delphi, Greece.
isbn: 3-540-62222-5.
35. Divyakant Agrawal, Pusheng Zhang, Amr El Abbadi, and Mohamed F. Mokbel, editors. Pro-
ceedings of the 18th ACM SIGSPATIAL International Conference on Advances in Geographic
Information Systems (ACM SIGSPATIAL GIS 2010), November 2–5, 2010, Doubletree Hotel
San Jose: San José, CA, USA. ACM Press: New York, NY, USA.
36. Vikas R. Agrawal. A Dissertation Entitled Data Warehouse Operational Design: View Se-
lection and Performance Simulation. PhD thesis, University of Toledo: Toledo, OH, USA,
May 2005, Mesbah Ahmed and P. S. Sundararaghavan, Advisors, Udayan Nandkeolyar and
Robert Bennett, Committee members. Fully available at
http://www.utoledo.edu/business/phd/PHDDocs/Vikas_Agrawal_-_Data_warehouse.pdf [accessed 2011-04-13].
37. Alan Agresti. An Introduction to Categorical Data Analysis, Wiley Series in Probability and
Mathematical Statistics – Applied Probability and Statistics Section Series. Wiley Inter-
science: Chichester, West Sussex, UK, January 1996. isbn: 0471113387. OCLC: 32745985
and 464377546. Library of Congress Control Number (LCCN): 95021928. GBV-
Identification (PPN): 186503121. LC Classification: QA278 .A355 1996.
38. Arturo Hernández Aguirre, Raúl Monroy Borja, and Carlos A. Reyes Garcı́a, editors. Ad-
vances in Artificial Intelligence: 8th Mexican International Conference on Artificial Intelli-
gence (MICAI), November 9–13, 2009, Guanajuato, México, volume 5845 in Lecture Notes
in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. isbn: 3-642-05257-6. Library of Congress Control Number
(LCCN): 2009937486.
39. Luis E. Agustı́n-Blas, Sancho Salcedo-Sanz, Pablo Vidales, Gilberto Urueta, José Antonio
Portilla-Figueras, and Mark Solarski. A Hybrid Grouping Genetic Algorithm for City-
wide Ubiquitous WiFi Access Deployment. In CEC’09 [1350], pages 2172–2179, 2009.
doi: 10.1109/CEC.2009.4983210. INSPEC Accession Number: 10688805.
40. David W. Aha, Andras Janosi, William Steinbrunn, and Matthias Pfisterer. Heart Disease
Data Set. UCI Machine Learning Repository, Center for Machine Learning and Intelligent
Systems, Donald Bren School of Information and Computer Science, University of California:
Irvine, CA, USA, July 1, 1988. Fully available at http://archive.ics.uci.edu/ml/
datasets/Heart+Disease [accessed 2008-12-27].
41. Chang Wook Ahn. Advances in Evolutionary Algorithms – Theory, Design and Practice,
volume 18 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg,
2006. doi: 10.1007/3-540-31759-7. isbn: 3-540-31758-9. Library of Congress Control
Number (LCCN): 2005939008.
42. Chang Wook Ahn and Rudrapatna S. Ramakrishna. Clustering-Based Probabilistic Model
Fitting in Estimation of Distribution Algorithms. IEICE Transactions on Information and
Systems, Oxford Journals, E89-D(1):381–383, January 2006, Institute of Electronics, Informa-
tion and Communication Engineers (IEICE): Tokyo, Japan. doi: 10.1093/ietisy/e89-d.1.381.
43. Chang Wook Ahn and Rudrapatna S. Ramakrishna. Augmented Compact Genetic Algorithm.
In PPAM’03 [2991], volume 560 (565), 2003. doi: 10.1007/978-3-540-24669-5_73. See also
[44].
44. Chang Wook Ahn and Rudrapatna S. Ramakrishna. Augmented Compact Genetic Algo-
rithm. NWC Report EC-200201, Kwang-Ju Institute of Science & Technology (K-JIST),
Department of Information & Communications, New Wave Computing (NWC) Laboratory:
Puk-gu, Gwangju, Korea, February 2002. Fully available at http://www.evolution.re.
kr/Publication/Tech/EC-200201.pdf [accessed 2010-08-03]. See also [43].
45. Sangtae Ahn, Jeffrey A. Fessler, Doron Blatt, and Alfred O. Hero. Convergent Incre-
mental Optimization Transfer Algorithms: Application to Tomography. IEEE Trans-
actions on Medical Imaging, 25(3):283–296, March 2006, IEEE (Institute of Electri-
cal and Electronics Engineers): Piscataway, NJ, USA. doi: 10.1109/TMI.2005.862740.
Fully available at http://www-rcf.usc.edu/˜sangtaea/papers/ahn:06:cio.pdf
[accessed 2009-10-25]. Partly available at http://www-rcf.usc.edu/˜sangtaea/papers/
ahn:06:cio_correct_eq.jpg and www-rcf.usc.edu/˜sangtaea/papers/ahn:04:
iot,talk.pdf [accessed 2009-10-25]. 2004 IEEE NSS-MIC, October 21, 2004.
46. Alfred Vaino Aho, John Edward Hopcroft, and Jeffrey David Ullman. The Design and
Analysis of Computer Algorithms, Addison-Wesley Series in Computer Science and Informa-
tion Processing. Addison-Wesley Publishing Co. Inc.: Reading, MA, USA, international edition, January 1974. isbn: 0-201-00029-6. Google Books ID: O4dGcAAACAAJ.
OCLC: 1147299, 252049725, 466284870, and 634120903. Library of Congress Con-
trol Number (LCCN): 74003995. GBV-Identification (PPN): 020989148, 027429121,
and 132190680. LC Classification: QA76.6 .A36.
47. J. H. Ahrens and U. Dieter. Computer Methods for Sampling from Gamma, Beta, Poisson
and Binomial Distributions. Computing, 12(3):223–246, September 10, 1974, Springer-Verlag
GmbH: Vienna, Austria. doi: 10.1007/BF02293108.
48. Marco Aiello, Christian Platzer, Florian Rosenberg, Huy Tran, Martin Vasko, and
Schahram Dustdar. Web Service Indexing for Efficient Retrieval and Composition. In
CEC/EEE’06 [2967], pages 424–426, 2006. doi: 10.1109/CEC-EEE.2006.96. Fully avail-
able at https://berlin.vitalab.tuwien.ac.at/˜florian/papers/cec2006.pdf
[accessed 2007-10-25]. INSPEC Accession Number: 9199703. See also [49, 50, 318, 761].
49. Marco Aiello, Nico van Benthem, and Elie el Khoury. Visualizing Compositions of Services
from Large Repositories. In CEC/EEE’08 [1346], pages 359–362, 2008. doi: 10.1109/CE-
CandEEE.2008.149. INSPEC Accession Number: 10475111. See also [48, 50, 204, 761].
50. Marco Aiello, Elie el Khoury, Alexander Lazovik, and Patrick Ratelband. Opti-
mal QoS–Aware Web Service Composition. In CEC’09 [1245], pages 491–494, 2009.
doi: 10.1109/CEC.2009.63. INSPEC Accession Number: 10839135. See also [48, 49, 761, 1571].
51. Jan Aikins and Howard Shrobe, editors. Proceedings of the The Seventh Conference on In-
novative Applications of Artificial Intelligence (IAAI’95), August 21–25, 1995, Montréal,
QC, Canada. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-81-4. Partly
available at http://www.aaai.org/Conferences/IAAI/iaai95.php [accessed 2007-09-06].
OCLC: 33813525, 633974570, and 635878104. GBV-Identification (PPN): 212259687.
52. P. Aiyarak, A. S. Saket, and Mark C. Sinclair. Genetic Programming Approaches
for Minimum Cost Topology Optimisation of Optical Telecommunication Networks. In
GALESIA’97 [3056], pages 415–420, 1997. doi: 10.1049/cp:19971216. Fully available
at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/aiyarak_1997_GPtootn.
html [accessed 2009-09-01]. CiteSeerx : 10.1.1.55.6656. See also [2498].
53. M. Ali Akbar and Muddassar Farooq. Application of Evolutionary Algorithms in De-
tection of SIP based Flooding Attacks. In GECCO’09-I [2342], pages 1419–1426,
2009. doi: 10.1145/1569901.1570092. Fully available at http://nexginrc.org/papers/
gecco09-ali.pdf [accessed 2009-09-03].
54. Rama Akkiraju, Biplav Srivastava, Anca Ivan, Richard Goodwin, and Tanveer Syeda-
Mahmood. Semantic Matching to Achieve Web Service Discovery and Composition. In
CEC/EEE’06 [2967], pages 445–447, 2006. doi: 10.1109/CEC-EEE.2006.78. INSPEC Acces-
sion Number: 9199704. See also [318].
55. Selim G. Akl, Cristian S. Calude, Michael J. Dinneen, Grzegorz Rozenberg, and Todd Ware-
ham, editors. Proceedings of the 6th International Conference on Unconventional Compu-
tation (UC’07), August 13–17, 2007, Kingston, Canada, volume 4618/2007 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 978-3-540-73553-3.
56. Jarmo Tapani Alander. An Indexed Bibliography of Genetic Algorithms in Signal and Im-
age Processing. Technical Report 94-1-SIGNAL, University of Vaasa: Finland, May 18,
1998. Fully available at ftp://ftp.uwasa.fi/cs/report94-1/gaSIGNALbib.ps.Z
and http://citeseer.ist.psu.edu/319835.html [accessed 2008-05-18].
57. Jarmo Tapani Alander, editor. Proceedings of the 1st Finnish Workshop on Genetic Algo-
rithms and Their Applications (1FWGA), November 4–5, 1992, Helsinki University of Tech-
nology: Espoo, Finland. Helsinki University of Technology, Department of Computer Science:
Helsinki, Finland. Technical Report TKO-A30, partly in Finnish, published 1993.
58. Jarmo Tapani Alander, editor. Proceedings of the Second Finnish Workshop on Genetic Al-
gorithms and Their Applications (2FWGA), March 16–18, 1994, Vaasa, Finland. University
of Vaasa, Department of Information Technology and Production Economics: Vaasa, Fin-
land. isbn: 951-683-526-0. Fully available at ftp://ftp.uwasa.fi/cs/report94-2
[accessed 2008-07-24]. issn: 1237-7589. Technical Report 94-2.
59. Jarmo Tapani Alander, editor. Proceedings of the First Nordic Workshop on Genetic Algo-
rithms (The Third Finnish Workshop on Genetic Algorithms, 3FWGA) (1NWGA), June 9–
13, 1995, Vaasa, Finland, 2 in Proceedings of the University of Vaasa. University of Vaasa,
Department of Information Technology and Production Economics: Vaasa, Finland. Fully
available at ftp://ftp.uwasa.fi/cs/1NWGA [accessed 2008-07-24]. Technical Report 95-1.
60. Jarmo Tapani Alander, editor. Proceedings of the Second Nordic Workshop on Genetic Al-
gorithms (2NWGA), August 19–23, 1996, Vaasa, Finland, volume 13 in Proceedings of the
University of Vaasa. Fully available at ftp://ftp.uwasa.fi/cs/2NWGA [accessed 2008-07-24].
In collaboration with the Finnish Artificial Intelligence Society.
61. Jarmo Tapani Alander, editor. Proceedings of the Third Nordic Workshop on Genetic Al-
gorithms (3NWGA), August 20–22, 1997, Helsinki, Finland. Finnish Artificial Intelligence
Society (FAIS): Vantaa, Finland. Fully available at ftp://ftp.uwasa.fi/cs/3NWGA [ac-
cessed 2008-07-24].
62. Jarmo Tapani Alander, editor. Proceedings of the Fourth Nordic Workshop on Genetic Al-
gorithms (4NWGA), July 27–31, 1998, Norwegian University of Science and Technology:
Trondheim, Norway. Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. seem-
ingly didn’t take place?
63. Enrique Alba Torres and J. Francisco Chicano. Evolutionary Algorithms in Telecommunica-
tions. In MELECON’06 [2388], pages 795–798, 2006. doi: 10.1109/MELCON.2006.1653218.
Fully available at http://dea.brunel.ac.uk/ugprojects/docs/melecon06.pdf [ac-
cessed 2008-08-01].
64. Enrique Alba Torres and Bernabé Dorronsoro Dı́az. Solving the Vehicle Routing Problem by
Using Cellular Genetic Algorithms. In EvoCOP’04 [1115], pages 11–20, 2004. Fully available
at http://www.lcc.uma.es/˜eat/pdf/evocop04.pdf [accessed 2008-10-27]. See also [65].
65. Enrique Alba Torres and Bernabé Dorronsoro Dı́az. Computing Nine New Best-So-Far So-
lutions for Capacitated VRP with a Cellular Genetic Algorithm. Information Processing
Letters, 98(6):225–230, June 30, 2006, Elsevier Science Publishers B.V.: Amsterdam, The
Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/j.ipl.2006.02.006. See also [64].
66. Enrique Alba Torres and Bernabé Dorronsoro Dı́az. Cellular Genetic Algorithms, volume 42
in Operations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin,
Germany, June 2008. isbn: 0387776095.
67. Enrique Alba Torres, Carlos Cotta, J. Francisco Chicano, and Antonio Jesús Nebro Ur-
baneja. Parallel Evolutionary Algorithms in Telecommunications: Two Case Studies. In
CACIC’02 [2747], 2002. Fully available at http://www.lcc.uma.es/˜eat/./pdf/
cacic02.pdf [accessed 2008-08-01]. CiteSeerx : 10.1.1.8.4073.
68. Enrique Alba Torres, José Garcı́a-Nieto, Javid Taheri, and Albert Zomaya. New Research
in Nature Inspired Algorithms for Mobility Management in GSM Networks. In EvoWork-
shops’08 [1051], pages 1–10, 2008. doi: 10.1007/978-3-540-78761-7_1.
69. Rudolf F. Albrecht, Colin R. Reeves, and Nigel C. Steele, editors. Proceedings of
Artificial Neural Nets and Genetic Algorithms (ANNGA’93), April 14–16, 1993, Inns-
bruck, Austria. Springer New York: New York, NY, USA. isbn: 0-387-82459-6 and
3-211-82459-6. Google Books ID: ZYkAQAAIAAJ. OCLC: 27936988, 300357118,
443980789, 611581837, and 633734458. GBV-Identification (PPN): 04374902X and
126104662.
70. John Aldrich. R.A. Fisher and the Making of Maximum Likelihood 1912-1922. Discussion Pa-
per Series In Economics And Econometrics, Statistical Science, 3(12):162–176, January 1995,
Institute of Mathematical Statistics, University of Southampton. doi: 10.1214/ss/1030037906.
Fully available at http://projecteuclid.org/euclid.ss/1030037906 [accessed 2007-09-
15]. Zentralblatt MATH identifier: 0955.62525. Mathematical Reviews number (Math-
SciNet): 1617519.
71. Christos E. Alexakos, Konstantinos C. Giotopoulos, Eleni J. Thermogianni, Grigorios N.
Beligiannis, and Spiridon D. Likothanassis. Integrating E-learning Environments with Com-
putational Intelligence Assessment Agents. Proceedings of World Academy of Science, En-
gineering and Technology, 13:233–239, May 2006. Fully available at http://www.waset.
org/pwaset/v13/v13-45.pdf [accessed 2009-03-31].
72. M. Montaz Ali, Charoenchai Khompatraporn, and Zelda B. Zabinsky. A Numerical Eval-
uation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test
Problems. Journal of Global Optimization, 31(4):635–672, April 2005, Springer Netherlands:
Dordrecht, Netherlands. doi: 10.1007/s10898-004-9972-2.
73. Eugene L. Allgower and Kurt Georg, editors. Computational Solution of Nonlinear Systems
of Equations – Proceedings of the 1988 SIAM-AMS Summer Seminar on Computational So-
lution of Nonlinear Systems of Equations, July 18–29, 1988, Colorado State University: Fort
Collins, CO, USA, volume 26 in Lectures in Applied Mathematics (LAM). American Math-
ematical Society (AMS) Bookstore: Providence, RI, USA. isbn: 0-8218-1131-2. Google
Books ID: UAabWwXBifoC. OCLC: 20992934, 243444170, 311687125, and 423088702.
Published in 1990.
74. Jean-Marc Alliot, Evelyne Lutton, and Marc Schoenauer, editors. European Conference on
Artificial Evolution (AE’94), September 20–23, 1994, ENAC: Toulouse, France. Cépaduès:
Toulouse, France. isbn: 2854284119. Google Books ID: AhBWAAAACAAJ. OCLC: 34264284
and 258151166. Published in January 1995.
75. Jean-Marc Alliot, Evelyne Lutton, Edmund M. A. Ronald, Marc Schoenauer, and Dominique
Snyers, editors. Selected Papers of the European Conference on Artificial Evolution (AE’95),
September 4–6, 1995, Brest, France, volume 1063 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-61108-8. Published in
1996.
76. Riyad Alshammari, Peter I. Lichodzijewski, Malcolm Ian Heywood, and A. Nur Zincir-
Heywood. Classifying SSH Encrypted Traffic with Minimum Packet Header Fea-
tures using Genetic Programming. In GECCO’09-II [2343], pages 2539–2546, 2009.
doi: 10.1145/1570256.1570358.
77. Lee Altenberg. The Schema Theorem and Price’s Theorem. In FOGA 3 [2932], pages 23–
49, 1994. Fully available at http://citeseer.ist.psu.edu/old/700230.html and
http://dynamics.org/Altenberg/FILES/LeeSTPT.pdf [accessed 2009-07-10].
78. Lee Altenberg. Genome Growth and the Evolution of the Genotype-Phenotype Map. In
Evolution and Biocomputation – Computational Models of Evolution [206]. Springer-Verlag
GmbH: Berlin, Germany, 1995. CiteSeerx : 10.1.1.39.3876. Corrections and revisions,
October 1997.
79. Lee Altenberg. NK Fitness Landscapes. In Handbook of Evolutionary Computation [171],
Chapter B2.7.2. Oxford University Press, Inc.: New York, NY, USA, Institute of Physics
Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press, Inc.:
Boca Raton, FL, USA, 1997. Fully available at http://dynamics.org/Altenberg/
FILES/LeeNKFL.pdf and http://www.cmi.univ-mrs.fr/˜pardoux/LeeNKFL.pdf
[accessed 2010-08-26]. CiteSeerx : 10.1.1.79.1351.
80. Lee Altenberg. Fitness Distance Correlation Analysis: An Instructive Counterexample. In ICGA’97 [166], pages 57–64, 1997. Fully available at http://dynamics.
org/Altenberg/PAPERS/FDCAAIC/ and http://www.cs.uml.edu/˜giam/91.510/
Papers/Altenberg1997.pdf [accessed 2008-07-20].
81. Guilherme Bastos Alvarenga and Geraldo Robson Mateus. Hierarchical Tournament Selection
Genetic Algorithm for the Vehicle Routing Problem with Time Windows. In HIS’04 [1421],
pages 410–415, 2004. doi: 10.1109/ICHIS.2004.53. INSPEC Accession Number: 8470883.
82. Shun-Ichi Amari, editor. Proceedings of the IEEE-INNS-ENNS International Joint
Conference on Neural Networks (IJCNN’00), July 24–27, 2000, Como, Italy. IEEE
Computer Society: Piscataway, NJ, USA. isbn: 0-7695-0619-4 and 0780365410.
Google Books ID: JAlWAAAAMAAJ, i7RVAAAAMAAJ, iLhVAAAAMAAJ, and tglWAAAAMAAJ.
OCLC: 49364554, 59509805, and 423975593.
83. Anita Amberg, Wolfgang Domschke, and Stefan Voß. Multiple Center Capacitated Arc
Routing Problems: A Tabu Search Algorithm using Capacitated Trees. European Journal
of Operational Research (EJOR), 124(2):360–376, July 16, 2000, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(99)00170-8.
84. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Con-
ference and Exhibit, April 22–25, 2002, Denver, CO, USA. American Institute of Aeronautics
and Astronautics (AIAA): Reston, VA, USA.
85. Proceedings of OMAE04, 23rd International Conference on Offshore Mechanics and Arc-
tic Engineering, June 20–25, 2004, Vancouver, BC, Canada. American Society of Me-
chanical Engineers (ASME): New York, NY, USA. isbn: 0791837386, 0-7918-3743-2,
0-7918-3744-0, and 0-7918-3745-9. Order No.: I706CD. LC Classification: TC1505
.I566 2004. Three volumes.
86. Ciro Amitrano, Luca Peliti, and M. Saber. Population Dynamics in a Spin-Glass Model of
Chemical Evolution. Journal of Molecular Evolution, 29(6):513–525, December 1989, Springer
New York: New York, NY, USA. doi: 10.1007/BF02602923.
87. Heni Ben Amor and Achim Rettinger. Intelligent Exploration for Genetic Algorithms: Using
Self-Organizing Maps in Evolutionary Computation. In GECCO’05 [304], pages 1531–1538,
2005. doi: 10.1145/1068009.1068250. See also [88].
88. Heni Ben Amor and Achim Rettinger. Intelligent Exploration for Genetic Algorithms
– Using Self-Organizing Maps in Evolutionary Computation. Fachberichte Informatik 1,
Universität Koblenz-Landau, Institut für Informatik: Koblenz, Rhineland-Palatinate, Ger-
many, February 4, 2005. Fully available at http://www.uni-koblenz-landau.de/
koblenz/fb4/publications/fachberichte/fb2005/rr-1-2005.pdf [accessed 2011-08-
06]. issn: 1860-4471. See also [87].
89. Michele Amoretti. A Framework for Evolutionary Peer-to-Peer Overlay Schemes. In
EvoWorkshops’09 [1052], pages 61–70, 2009. doi: 10.1007/978-3-642-01129-0_7.
90. IBM PhD Student Symposium at ICSOC 2005, December 2005, Amsterdam, The Nether-
lands.
91. Proceedings of the 2011 International Conference on Systems, Man, and Cybernetics
(SMC’11), October 9–12, 2011, Anchorage, AK, USA.
92. Edgar Anderson. The Irises of the Gaspé Peninsula. Bulletin of the American Iris Society,
59:2–5, 1935, American Iris Society (AIS): Philadelphia, PA, USA. See also [305, 930].
93. Johan Andersson. A Survey of Multiobjective Optimization in Engineering Design. Techni-
cal Report LiTH-IKP-R-1097, Linköping University, Department of Mechanical Engineer-
ing: Linköping, Sweden, 2000. Fully available at http://www.lania.mx/˜ccoello/
andersson00.pdf.gz [accessed 2009-06-22]. CiteSeerx : 10.1.1.8.5638.
94. Philip Warren Anderson. Suggested Model for Prebiotic Evolution: The Use of Chaos. Pro-
ceedings of the National Academy of Science of the United States of America (PNAS), 80
(11):3386–3390, June 1, 1983, National Academy of Sciences: Washington, DC, USA. Fully
available at http://www.pnas.org/cgi/reprint/80/11/3386.pdf and http://
www.pubmedcentral.nih.gov/articlerender.fcgi?artid=394048 [accessed 2008-07-
02]. PubMed ID: 6190177.
95. David Andre. Evolution of Mapmaking Ability: Strategies for the Evolution of Learning,
Planning, and Memory using Genetic Programming. In CEC’94 [1891], pages 250–255,
volume 1, 1994. See also [96].
96. David Andre. The Evolution of Agents that Build Mental Models and Create Simple Plans
Using Genetic Programming. In ICGA’95 [883], pages 248–255, 1995. See also [95].
97. David Andre. Learning and Upgrading Rules for an Optical Character Recognition System
Using Genetic Programming. In Handbook of Evolutionary Computation [171]. Oxford Uni-
versity Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac
House, Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997.
98. David Andre and Astro Teller. Evolving Team Darwin United. In RoboCup-98: Robot
Soccer World Cup II [129], pages 346–351, 1998. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/Andre_1999_ETD.html and http://www.
cs.cmu.edu/afs/cs/usr/astro/public/papers/Teller_Astro.ps [accessed 2010-08-21]. CiteSeerx : 10.1.1.36.2696.
99. David Andre, Forrest H. Bennett III, and John R. Koza. Discovery by Genetic Programming
of a Cellular Automata Rule that is Better than any Known Rule for the Majority Classifi-
cation Problem. pages 3–11. Fully available at http://citeseer.ist.psu.edu/33008.
html [accessed 2007-08-01]. See also [100].
100. David Andre, Forrest H. Bennett III, and John R. Koza. Evolution of Intricate Long-
Distance Communication Signals in Cellular Automata using Genetic Programming. In
Artificial Life V [1911], volume 1, 1996. Fully available at http://citeseer.ist.
psu.edu/andre96evolution.html and http://www.genetic-programming.com/
jkpdf/alife1996gkl.pdf [accessed 2007-10-03]. See also [99].
101. Peter John Angeline. Subtree Crossover: Building Block Engine or Macromutation? In
GP’97 [1611], pages 9–17, 1997. Fully available at http://ncra.ucd.ie/COMP30290/
SubtreeXoverBuildingBlockorMacromutation_angeline_gp97.ps [accessed 2010-08-
20].
102. Peter John Angeline. A Historical Perspective on the Evolution of Executable Struc-
tures. Fundamenta Informaticae – Annales Societatis Mathematicae Polonae, Series IV,
35(1-2):179–195, August 1998, IOS Press: Amsterdam, The Netherlands and European
Association for Theoretical Computer Science (EATCS): Rio, Greece. Fully available
at http://www.natural-selection.com/publications_1998.html [accessed 2009-07-
09]. CiteSeerx : 10.1.1.15.6779. Special volume: Evolutionary Computation. See also
[105].
103. Peter John Angeline. Evolutionary Optimization versus Particle Swarm Optimiza-
tion: Philosophy and Performance Differences. In EP’98 [2208], pages 601–610, 1998.
doi: 10.1007/BFb0040811.
104. Peter John Angeline. Subtree Crossover Causes Bloat. In GP’98 [1612], pages 745–752, 1998.
105. Peter John Angeline. A Historical Perspective on the Evolution of Executable Structures. In
Evolutionary Computation [863]. IOS Press: Amsterdam, The Netherlands, 1999. See also
[102].
106. Peter John Angeline and Kenneth E. Kinnear, Jr, editors. Advances in Genetic Programming
II, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, Octo-
ber 26, 1996. isbn: 0-262-01158-1. Google Books ID: c3G7QgAACAAJ. OCLC: 29595260
and 59664755. GBV-Identification (PPN): 282212574.
107. Peter John Angeline and Jordan B. Pollack. Coevolving High-Level Representations. In
Artificial Life III [1667], pages 55–71, 1992. Fully available at http://citeseer.
ist.psu.edu/angeline94coevolving.html and http://demo.cs.brandeis.edu/
papers/alife3.pdf [accessed 2007-10-05].
108. Peter John Angeline and Jordan B. Pollack. Evolutionary Module Acquisition. In
EP’93 [943], pages 154–163, 1993. Fully available at http://www.demo.cs.brandeis.
edu/papers/ep93.pdf and https://eprints.kfupm.edu.sa/38410/ [accessed 2009-07-11]. CiteSeerx : 10.1.1.50.938 and 10.1.1.57.3280.
109. Peter John Angeline, Robert G. Reynolds, John Robert McDonnell, and Russell C. Eber-
hart, editors. Proceedings of the 6th International Conference on Evolutionary Pro-
gramming (EP’97), April 13–16, 1997, Indianapolis, IN, USA, volume 1213/1997 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 354062788X. Google Books ID: 4cviSAAACAAJ. OCLC: 36629908, 247514632,
316851036, 441306808, 466121928, and 473591147.
110. Peter John Angeline, Zbigniew Michalewicz, Marc Schoenauer, Xin Yao, and Ali
M. S. Zalzala, editors. Proceedings of the IEEE Congress on Evolutionary Compu-
tation (CEC’99), July 6–9, 1999, Mayflower Hotel: Washington, DC, USA, volume
1-3. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-5536-9 and
0-7803-5537-7. Google Books ID: GJtVAAAAMAAJ, YZpVAAAAMAAJ, hM6nAAAACAAJ,
and r5hVAAAAMAAJ. OCLC: 42253982 and 64431060. Library of Congress Control Num-
ber (LCCN): 99-61143. INSPEC Accession Number: 6338817. Catalogue no.: 99TH8406.
CEC-99 – A joint meeting of the IEEE, Evolutionary Programming Society, Galesia, and the
IEE.
111. Shoshana Anily and Awi Federgruen. Ergodicity in Parametric Non-Stationary Markov
Chains: An Application to Simulated Annealing Methods. Operations Research, 35(6):867–
874, November–December 1987, Institute for Operations Research and the Management Sci-
ences (INFORMS): Linthicum, ML, USA and HighWire Press (Stanford University): Cam-
bridge, MA, USA. JSTOR Stable ID: 171435.
112. Pavlos Antoniou, Andreas Pitsilides, Tim Blackwell, and Andries P. Engelbrecht. Employing
the Flocking Behavior of Birds for Controlling Congestion in Autonomous Decentralized Net-
works. In CEC’09 [1350], pages 1753–1761, 2009. doi: 10.1109/CEC.2009.4983153. INSPEC
Accession Number: 10688748.
113. Hendrik James Antonisse. A Grammar-Based Genetic Algorithm. In FOGA’90 [2562], pages
193–204, 1990.
114. 19th European Conference on Machine Learning and the 12th European Conference on Princi-
ples and Practice of Knowledge Discovery in Databases (ECML PKDD’08), September 15–19,
2008, Antwerp, Belgium.
115. Kamel Aouiche and Jérôme Darmont. Data Mining-Based Materialized View and Index
Selection in Data Warehouses. Journal of Intelligent Information Systems, 33(1):65–93, Au-
gust 2009, Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/s10844-009-0080-0.
116. Kamel Aouiche, Pierre-Emmanuel Jouve, and Jérôme Darmont. Clustering-Based Mate-
rialized View Selection in Data Warehouses. In ADBIS’06 [1828], pages 81–95, 2009.
doi: 10.1007/11827252_9. Fully available at http://hal.archives-ouvertes.fr/docs/
00/32/06/24/PDF/adbis_06.pdf and http://hal.archives-ouvertes.fr/docs/
00/32/06/24/PS/adbis_06.ps [accessed 2011-04-12]. CiteSeerx : 10.1.1.95.8281. arXiv
ID: cs/0703114v1.
117. Chatchawit Aporntewan and Prabhas Chongstitvatana. A Hardware Implementation of
the Compact Genetic Algorithm. In CEC’01 [1334], pages 624–629, volume 1, 2001.
doi: 10.1109/CEC.2001.934449. Fully available at http://pioneer.netserv.chula.
ac.th/˜achatcha/Publications/0004.pdf [accessed 2009-08-21]. INSPEC Accession Num-
ber: 7006160.
118. David L. Applegate, Robert E. Bixby, Vašek Chvátal, and William J. Cook. The Travel-
ing Salesman Problem: A Computational Study, Princeton Series in Applied Mathematics.
Princeton University Press: Princeton, NJ, USA, February 2007. isbn: 0-691-12993-2.
Google Books ID: nmF4rVNJMVsC.
119. Hamid R. Arabnia, editor. Proceedings of the 2010 International Conference on Genetic and
Evolutionary Methods (GEM’10), July 12–15, 2010, Monte Carlo Resort: Las Vegas, NV,
USA.
120. Hamid R. Arabnia and Youngsong Mun, editors. Proceedings of the 2008 International Con-
ference on Genetic and Evolutionary Methods (GEM’08), June 14–17, 2008, Monte Carlo
Resort: Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-069-8.
121. Hamid R. Arabnia and Ashu M. G. Solo, editors. Proceedings of the 2009 International
Conference on Genetic and Evolutionary Methods (GEM’09), July 13–16, 2009, Monte Carlo
Resort: Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-106-6.
122. Hamid R. Arabnia, Jack Y. Yang, and Mary Qu Yang, editors. Proceedings of the 2007
International Conference on Genetic and Evolutionary Methods (GEM’07), June 25–28, 2007,
Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-038-8.
123. Victoria S. Aragón and Susana C. Esquivel. An Evolutionary Algorithm to Track Changes
of Optimum Value Locations in Dynamic Environments. Journal of Computer Science
& Technology (JCS&T), 4(3(12)):127–133, October 2004, Iberoamerican Science & Tech-
nology Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM,
USA. Fully available at http://journal.info.unlp.edu.ar/journal/journal12/
papers/JCST-Oct04-1.pdf [accessed 2009-07-13]. Invited paper.
124. Edoardo Ardizzone, Salvatore Gaglio, and Filippo Sorbello, editors. Proceedings of the 2nd
Congress of the Italian Association for Artificial Intelligence (AI*IA) on Trends in Artificial
Intelligence (AIIA’91), October 29, 1991, Palermo, Italy, volume 549 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0387547126 and
3-540-54712-6. OCLC: 150398646, 311526865, 320328143, 440934107, 465123834,
and 466123701.
125. Maribel G. Arenas, Pierre Collet, Ágoston E. Eiben, Márk Jelasity, Juan Julián Merelo-
Guervós, Ben Paechter, Mike Preuß, and Marc Schoenauer. A Framework for Distributed
Evolutionary Algorithms. In PPSN VII [1870], pages 665–675, 2002. Fully available
at http://citeseer.ist.psu.edu/arenas02framework.html [accessed 2008-04-06].
126. Rubén Armañanzas, Iñaki Inza, Roberto Santana, Yvan Saeys, Jose Luis Flores, José An-
tonio Lozano, Yves Van de Peer, Rosa Blanco, Vı́ctor Robles, Concha Bielza, and Pedro
Larrañaga. A Review of Estimation of Distribution Algorithms in Bioinformatics. Bio-
Data Mining, 1(6), 2008, BioMed Central Ltd.: London, UK. doi: 10.1186/1756-0381-1-6.
Fully available at http://www.biodatamining.org/content/1/1/6 [accessed 2009-08-28].
PubMed ID: 18822112.
127. Grenville Armitage, Mark Claypool, and Philip Branch. Network Latency, Jitter and Loss.
In Networking and Online Games: Understanding and Engineering Multiplayer Internet
Games [128], Chapter 5, pages 69–82. John Wiley & Sons Ltd.: New York, NY, USA, 2006.
doi: 10.1002/047003047X.ch5.
128. Grenville Armitage, Mark Claypool, and Philip Branch. Networking and Online Games:
Understanding and Engineering Multiplayer Internet Games. John Wiley & Sons Ltd.:
New York, NY, USA, 2006. doi: 10.1002/047003047X. isbn: 0470018577. Google Books
ID: AQ7bIwAACAAJ and mK9QAAAAMAAJ. OCLC: 62896496 and 222232694.
129. Minoru Asada and Hiroaki Kitano, editors. RoboCup-98: Robot Soccer World Cup II,
July 1998, Paris, France, volume 1604 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-66320-7. Published in 1999.
130. B. Ashadevi and R. Balasubramanian. Cost Effective Approach for Materialized View Se-
lection in Data Warehousing Environment. International Journal of Computer Science and
Network Security (IJCSNS), 8(10):236–242, 2008, IJCSNS: Seoul, South Korea. Fully avail-
able at http://paper.ijcsns.org/07_book/200810/20081036.pdf [accessed 2011-04-12].
131. B. Ashadevi and R. Balasubramanian. Optimized Cost Effective Approach for Se-
lection of Materialized Views in Data Warehousing. Journal of Computer Science
& Technology (JCS&T), 9(1):21–26, April 2009, Iberoamerican Science & Technol-
ogy Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM,
USA. Fully available at http://journal.info.unlp.edu.ar/journal/journal25/
papers/JCST-Apr09-4.pdf [accessed 2011-04-12].
132. Workshop on Practical Combinatorial Optimization (I ALIO/EURO), August 14–18, 1989,
Rio de Janeiro, RJ, Brazil. Asociación Latino-Iberoamericana de Investigación Operativa
(ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research Soci-
eties (EURO): Brussels, Belgium.
133. Workshop on Practical Combinatorial Optimization (II ALIO/EURO), November 11–15,
1996, Catholic University: Valparaı́so, Chile. Asociación Latino-Iberoamericana de Inves-
tigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European Opera-
tional Research Societies (EURO): Brussels, Belgium.
134. Workshop on Applied Combinatorial Optimization (III ALIO/EURO), November 1–7, 1999,
Erice, Italy. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de
Janeiro, RJ, Brazil and Association of European Operational Research Societies (EURO):
Brussels, Belgium.
135. Workshop on Applied Combinatorial Optimization (IV ALIO/EURO), November 4–6, 2002,
Pucón, Chile. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de
Janeiro, RJ, Brazil and Association of European Operational Research Societies (EURO):
Brussels, Belgium.
136. The Fifth ALIO/EURO Conference on Combinatorial Optimization (V ALIO/EURO), Octo-
ber 26–28, 2005, Paris, France. Asociación Latino-Iberoamericana de Investigación Operativa
(ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research Soci-
eties (EURO): Brussels, Belgium.
137. Workshop on Applied Combinatorial Optimization (VI ALIO/EURO), December 15–17,
2008, Buenos Aires, Argentina. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research
Societies (EURO): Brussels, Belgium.
138. Workshop on Applied Combinatorial Optimization (VII ALIO/EURO), May 4–6, 2011, Uni-
versity of Porto, Faculty of Sciences: Porto, Portugal. Asociación Latino-Iberoamericana de
Investigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European
Operational Research Societies (EURO): Brussels, Belgium.
139. Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing
Systems (CHI’08), April 5–10, 2008, Florence, Italy. Association for Computing Machinery
(ACM): New York, NY, USA.
140. Proceedings of the 17th International World Wide Web Conference (WWW’08), April 21–25,
2008, Běijīng International Convention Center: Běijīng, China. Association for Computing
Machinery (ACM): New York, NY, USA.
141. Mikhail J. Atallah, editor. Algorithms and Theory of Computation Handbook, Chapman and
Hall/CRC Applied Algorithms and Data Structures Series. CRC Press, Inc.: Boca Raton,
FL, USA, first edition, 1998. isbn: 0-8493-2649-4. Google Books ID: fyTk5AwnnoQC.
OCLC: 39606815, 54016189, 222972701, and 471519169. Library of Congress Con-
trol Number (LCCN): 98038016. GBV-Identification (PPN): 247401749. LC Classifica-
tion: QA76.9.A43 A43 1999. Produced By Suzanne Lassandro.
142. Proceedings of the Fifth Joint Conference on Information Science (JCIS’00), February 27–
March 3, 2000, Atlantic City, NJ, USA.
143. Paolo Atzeni, Albertas Caplinskas, and Hannu Jaakkola, editors. Proceedings of the 12th
East European Conference on Advances in Databases and Information Systems (ADBIS),
September 5–9, 2008, Pori, Finland, volume 5207 in Information Systems and Application,
incl. Internet/Web and HCI (SL 3), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-85713-6. isbn: 3-540-85712-5.
Library of Congress Control Number (LCCN): 2008933208.
144. Wai-Ho Au, Keith C. C. Chan, and Xin Yao. A Novel Evolutionary Data Mining Algorithm
with Applications to Churn Prediction. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 7(6):532–545, December 2003, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.819264. CiteSeerx : 10.1.1.10.8230.
145. Anne Auger and Olivier Teytaud. Continuous Lunches are Free Plus the Design of
Optimal Optimization Algorithms. Rapports de Recherche inria-00369788, Insti-
tut National de Recherche en Informatique et en Automatique (INRIA), March 21,
2009. Fully available at http://hal.inria.fr/docs/00/36/97/88/PDF/
ccflRevisedVersionAugerTeytaud.pdf [accessed 2010-10-29]. See also [146].
146. Anne Auger and Olivier Teytaud. Continuous Lunches are Free Plus the Design of Optimal
Optimization Algorithms. Algorithmica, 57(1):121–146, May 2010, Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/s00453-008-9244-5. See also [145].
147. Anne Auger, Nikolaus Hansen, Nikolas Mauny, Raymond Ros, and Marc Schoenauer. Bio-
Inspired Continuous Optimization: The Coming of Age. In CEC’07 [1343], 2007. Fully
available at http://www.lri.fr/˜marc/schoenauerCEC07.pdf [accessed 2009-10-20]. In-
vited Talk at CEC’2007. See also [1181].
148. P. Augerat, José Manuel Belenguer Ribera, Enrique Benavent, A. Corberán, D. Naddef, and
G. Rinaldi. Computational Results with a Branch and Cut Code for the Capacitated Vehicle
Routing Problem. Technical Report Research Report 949-M, Université Joseph Fourier:
Grenoble, France, 1995.
149. Douglas A. Augusto and Helio José Correa Barbosa. Symbolic Regression via Genetic
Programming. In SBRN’00 [981], pages 173–178, 2000. doi: 10.1109/SBRN.2000.889734.
INSPEC Accession Number: 6813089.
150. Lerina Aversano, Massimiliano di Penta, and Kunal Taneja. A Genetic Programming Approach
to Support the Design of Service Compositions. International Journal of Computer Systems
Science and Engineering (CSSE), 21(4):247–254, July 2006, CRL Publishing Limited: Le-
icester, UK. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
Aversano_2006_IJCSSE.html and http://www.rcost.unisannio.it/mdipenta/
papers/csse06.pdf [accessed 2011-01-19]. CiteSeerx : 10.1.1.145.843.
151. Erel Avineri, Mario Köppen, Keshav Dahal, Yos Sunitiyoso, and Rajkumar Roy, editors. Ap-
plications of Soft Computing: 12th Online World Conference on Soft Computing in Industrial
Applications (WSC12), October 16–26, 2007, volume 52 in Advances in Intelligent and Soft
Computing. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-88079-0. Library of
Congress Control Number (LCCN): 2008935493.
152. Mordecai Avriel. Nonlinear Programming: Analysis and Methods. Dover Publica-
tions: Mineola, NY, USA, September 16, 2003. isbn: 0-486-43227-0. Google Books
ID: 815RAAAAMAAJ and byF4Xb1QbvMC.
153. Proceedings of the 3rd International Conference on Bio-Inspired Models of Network, Informa-
tion, and Computing Systems (BIONETICS’08), November 25–28, 2008, Awaji Yumebutai
International Conference Center: Awaji City, Hyogo, Japan.
154. Mehmet E. Aydin, Jun Yang, and Jie Zhang. A Comparative Investigation on Heuristic
Optimization of WCDMA Radio Networks. In EvoWorkshops’07 [1050], pages 111–120,
2007. doi: 10.1007/978-3-540-71805-5_12.
155. Mehmet E. Aydin, Raymond S. K. Kwan, Cyril Leung, and Jie Zhang. Multiuser Scheduling
in HSDPA with Particle Swarm Optimization. In EvoWorkshops’09 [1052], pages 71–80,
2009. doi: 10.1007/978-3-642-01129-0_8.
156. Antonia Azzini, Matteo De Felice, Sandro Meloni, and Andrea G. B. Tettamanzi. Soft
Computing Techniques for Internet Backbone Traffic Anomaly Detection. In EvoWork-
shops’09 [1052], pages 99–104, 2009. doi: 10.1007/978-3-642-01129-0_12.
157. Sara Baase and Allen Van Gelder. Computer Algorithms: Introduction to Design and Analy-
sis. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 3rd edition, Novem-
ber 1999. isbn: 0-201-61244-5. Google Books ID: GInCQgAACAAJ. OCLC: 40813345 and
474499890. Library of Congress Control Number (LCCN): 99014185. GBV-Identification
(PPN): 301465576 and 314912401. LC Classification: QA76.9.A43 B33 2000.
158. Jaume Bacardit i Peñarroya. Pittsburgh Genetics-Based Machine Learning in the Data Min-
ing Era: Representations, Generalization, and Run-time. PhD thesis, Ramon Llull Univer-
sity: Barcelona, Catalonia, Spain, October 8, 2004, Josep Maria Garrell i Guiu, Advisor.
Fully available at http://www.cs.nott.ac.uk/˜jqb/publications/thesis.pdf [ac-
cessed 2007-09-12].
159. Jaume Bacardit i Peñarroya. Analysis of the Initialization Stage of a Pittsburgh
Approach Learning Classifier System. In GECCO’05 [304], pages 1843–1850, 2005.
doi: 10.1145/1068009.1068321. Fully available at http://www.cs.nott.ac.uk/˜jqb/
publications/gecco2005.pdf [accessed 2007-09-12].
160. Jaume Bacardit i Peñarroya and Martin V. Butz. Data Mining in Learning Classi-
fier Systems: Comparing XCS with GAssist. In IWLCS 03-05 [1589], pages 282–290,
2007. doi: 10.1007/978-3-540-71231-2_19. Fully available at http://www.illigal.
uiuc.edu/pub/papers/IlliGALs/2004030.pdf and http://www.psychologie.
uni-wuerzburg.de/IWLCS/abstracts/IWLCS04-Bacardit_Butz.pdf [accessed 2010-07-
02].
161. Jaume Bacardit i Peñarroya and Natalio Krasnogor. Smart Crossover Operator with Multiple
Parents for a Pittsburgh Learning Classifier System. In GECCO’06 [1516], pages 1441–1448,
2006. doi: 10.1145/1143997.1144235. Fully available at http://www.cs.nott.ac.uk/
˜jqb/publications/gecco2006-sx.pdf [accessed 2007-09-12].
162. Jaume Bacardit i Peñarroya and Natalio Krasnogor. A Mixed Discrete-
Continuous Attribute List Representation for Large Scale Classification Domains.
In GECCO’09-I [2342], pages 1155–1162, 2009. doi: 10.1145/1569901.1570057.
Fully available at http://www.asap.cs.nott.ac.uk/publications/pdf/
AMixedDiscreteContinuousAttributeListRepresentation.pdf [accessed 2009-09-05].
163. Paul Gustav Heinrich Bachmann. Die Analytische Zahlentheorie / Dargestellt von Paul Bach-
mann, volume Zweiter Theil in Zahlentheorie: Versuch einer Gesamtdarstellung dieser Wis-
senschaft in ihren Haupttheilen. University of Michigan Library, Scholarly Publishing Office:
Ann Arbor, MI, USA, 1894–2004. isbn: 1418169633, 141818540X, and 978-1418169633.
Fully available at http://gallica.bnf.fr/ark:/12148/bpt6k994750 [accessed 2008-05-
20]. Google Books ID: SfKYPQAACAAJ, SvKYPQAACAAJ, and ujyhGQAACAAJ.
164. George Baciu, Yingxu Wang, Yiyu Yao, Witold Kinsner, Keith C. C. Chan, and Lotfi A.
Zadeh, editors. Cognitive Computing and Semantic Mining – Proceedings of the 8th IEEE
International Conference on Cognitive Informatics (ICCI’09), June 15–17, 2009, Hong Kong
Polytechnic University: Kowloon, Hong Kong / Xiānggǎng, China. IEEE Computer Society:
Piscataway, NJ, USA. isbn: 1-4244-4642-2. Google Books ID: MTx3PwAACAAJ.
165. Thomas Bäck. Generalized Convergence Models for Tournament- and (µ, λ)-Selection. In
ICGA’95 [883], pages 2–8, 1995. CiteSeerx : 10.1.1.56.5969.
166. Thomas Bäck, editor. Proceedings of The Seventh International Conference on Genetic Al-
gorithms (ICGA’97), July 19–23, 1997, Michigan State University: East Lansing, MI, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-487-1. Google
Books ID: eP1QAAAAMAAJ and fI0BAAAACAAJ. OCLC: 37024995 and 247130717.
167. Thomas Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evo-
lutionary Programming, Genetic Algorithms. Oxford University Press, Inc.: New York,
NY, USA, January 1996. isbn: 0-19-509971-0. Google Books ID: 6MOqAAAACAAJ and
EaN7kvl5coYC. OCLC: 32350061 and 246886085. Library of Congress Control Number
(LCCN): 95013506. GBV-Identification (PPN): 185320007 and 27885365X. LC Classifi-
cation: QA402.5 .B333 1996.
168. Thomas Bäck and Ulrich Hammel. Evolution Strategies Applied to Perturbed Objective
Functions. In CEC’94 [1891], pages 40–45, volume 1, 1994. doi: 10.1109/ICEC.1994.350045.
Fully available at http://citeseer.ist.psu.edu/16973.html [accessed 2008-07-19].
169. Thomas Bäck and Hans-Paul Schwefel. An Overview of Evolutionary Algorithms for Pa-
rameter Optimization. Evolutionary Computation, 1(1):1–23, Spring 1993, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1993.1.1.1. Fully available at http://www.
santafe.edu/events/workshops/images/7/7e/Back-ec93.pdf and www.robots.
ox.ac.uk/˜cvrg/trinity2002/EC/EAs.ps.gz [accessed 2009-10-18].
170. Thomas Bäck, Frank Hoffmeister, and Hans-Paul Schwefel. A Survey of Evolution Strate-
gies. In ICGA’91 [254], pages 2–9, 1991. Fully available at http://130.203.133.121:
8080/viewdoc/summary?doi=10.1.1.42.3375, http://bi.snu.ac.kr/Info/EC/
A%20Survey%20of%20Evolution%20Strategies.pdf, and http://www6.uniovi.
es/pub/EC/GA/papers/icga91.ps.gz [accessed 2009-09-18]. CiteSeerx : 10.1.1.42.3375.
171. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Handbook of Evolutionary
Computation, Computational Intelligence Library. Oxford University Press, Inc.: New York,
NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK,
and CRC Press, Inc.: Boca Raton, FL, USA, January 1, 1997. isbn: 0-7503-0392-1 and
0-7503-0895-8. Google Books ID: kgqGQgAACAAJ, n5nuiIZvmpAC, and neKNGAAACAAJ.
OCLC: 173074676, 327018351, and 327018351. Library of Congress Control Number
(LCCN): 97004461. GBV-Identification (PPN): 224708430 and 364589272. LC Classifi-
cation: QA76.87 .H357 1997.
172. Thomas Bäck, Ulrich Hammel, and Hans-Paul Schwefel. Evolutionary Computation:
Comments on the History and Current State. IEEE Transactions on Evolution-
ary Computation (IEEE-EC), 1(1):3–17, April 1997, IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/4235.585888. Fully available at http://sci2s.
ugr.es/docencia/doctobio/EC-History-IEEETEC-1-1-1997.pdf [accessed 2009-08-05].
CiteSeerx : 10.1.1.6.5943. INSPEC Accession Number: 5611369.
173. Thomas Bäck, Zbigniew Michalewicz, and Xin Yao, editors. IEEE International Confer-
ence on Evolutionary Computation (CEC’97), April 13–16, 1997, Indianapolis, IN, USA.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-3949-5. Google Books
ID: 6JZVAAAAMAAJ and so7XHQAACAAJ. INSPEC Accession Number: 5577833.
174. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Evolutionary Compu-
tation 1: Basic Algorithms and Operators. Institute of Physics Publishing Ltd. (IOP):
Dirac House, Temple Back, Bristol, UK, January 2000. isbn: 0750306645. Google Books
ID: 4HMYCq9US78C.
175. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Evolutionary Computation
2: Advanced Algorithms and Operators. Institute of Physics Publishing Ltd. (IOP): Dirac
House, Temple Back, Bristol, UK, November 2000. isbn: 0750306653.
176. Jeffrey L. Bada, C. Bigham, and Stanley L. Miller. Impact Melting of Frozen Oceans on
the Early Earth: Implications for the Origin of Life. Proceedings of the National Academy
of Science of the United States of America (PNAS), 91(4):1248–1250, February 15, 1994,
National Academy of Sciences: Washington, DC, USA. doi: 10.1073/pnas.91.4.1248. PubMed
ID: 11539550.
177. Liviu Badea and Monica Stanciu. Refinement Operators Can Be (Weakly) Perfect. In ILP-
99 [851], pages 21–32, 1999. doi: 10.1007/3-540-48751-4_4. Fully available at http://www.
ai.ici.ro/papers/ilp99.ps.gz [accessed 2008-02-22].
178. P. Badeau, Michel Gendreau, F. Guertin, Jean-Yves Potvin, and Éric D. Taillard. A Parallel
Tabu Search Heuristic for the Vehicle Routing Problem with Time Windows. Transportation
Research Part C: Emerging Technologies, 5(2):109–122, April 1997, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands and Pergamon Press: Oxford, UK.
179. Philipp Andreas Baer and Roland Reichle. Communication and Collaboration in Heteroge-
neous Teams of Soccer Robots. In Robotic Soccer [1740], Chapter 1, pages 1–28. I-Tech Educa-
tion and Publishing: Vienna, Austria, 2007. Fully available at http://s.i-techonline.
com/Book/Robotic-Soccer/ISBN978-3-902613-21-9-rs01.pdf [accessed 2008-04-23].
180. John Daniel Bagley. The Behavior of Adaptive Systems which Employ Genetic and Cor-
relation Algorithms. PhD thesis, University of Michigan: Ann Arbor, MI, USA, De-
cember 1967. Fully available at http://deepblue.lib.umich.edu/handle/2027.
42/3354 [accessed 2010-08-03]. Order No.: AAI6807556. University Microfilms No.: 68-7556.
ID: bab0779.0001.001. ORA PROJECT.01-i252. See also [181].
181. John Daniel Bagley. The Behavior of Adaptive Systems which employ Genetic and Correla-
tion Algorithms. Dissertation Abstracts International (DAI), 28(12):5160B, ProQuest: Ann
Arbor, MI, USA. See also [180].
182. Ruzena Bajcsy, editor. Proceedings of the 13th International Joint Conference on Artificial In-
telligence (IJCAI’93-I), August 28–September 3, 1993, Chambéry, France, volume 1. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-300-X. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-93-VOL1/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [183, 927, 2259].
183. Ruzena Bajcsy, editor. Proceedings of the 13th International Joint Conference on Artificial In-
telligence (IJCAI’93-II), August 28–September 3, 1993, Chambéry, France, volume 2. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-300-X. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-93-VOL2/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [182, 927, 2259].
184. Javier Bajo, Emilio S. Corchado, Álvaro Herrero, and Juan M. Corchado, editors. Proceed-
ings of the 1st International Workshop on Hybrid Artificial Intelligence Systems (HAIS’06),
October 27, 2006, Ribeirão Preto, SP, Brazil. University of Salamanca: Salamanca, Spain.
isbn: 84-934181-9-6.
185. Per Bak and Kim Sneppen. Punctuated Equilibrium and Criticality in a Simple Model of
Evolution. Physical Review Letters, 71(24):4083–4086, December 13, 1993, American Phys-
ical Society: College Park, MD, USA. doi: 10.1103/PhysRevLett.71.4083. Fully available
at http://prola.aps.org/abstract/PRL/v71/i24/p4083_1 [accessed 2008-08-24].
186. Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-Organized Criticality. Physical Review
A, 38(1):364–374, July 1, 1988, American Physical Society: College Park, MD, USA.
doi: 10.1103/PhysRevA.38.364. Fully available at http://prola.aps.org/abstract/
PRA/v38/i1/p364_1 [accessed 2008-08-23].
187. James E. Baker. Adaptive Selection Methods for Genetic Algorithms. In ICGA’85 [1131],
pages 101–111, 1985.
188. James E. Baker. Reducing Bias and Inefficiency in the Selection Algorithm. In
ICGA’87 [1132], pages 14–21, 1987.
189. László Bakó. Real-time Classification of Datasets with Hardware Embedded Neuromorphic
Neural Networks. Briefings in Bioinformatics, 11(3):348–363, 2010, Oxford University Press,
Inc.: New York, NY, USA. doi: 10.1093/bib/bbp066.
190. Roberto Baldacci, Maria Battarra, and Daniele Vigo. Routing a Heterogeneous Fleet
of Vehicles. Technical Report DEIS OR.INGCE 2007/1, Università degli Studi di
Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS), Operations
Research Group (Ricerca Operativa): Bológna, Emilia-Romagna, Italy, January 2007.
Fully available at or.ingce.unibo.it/ricerca/technical-reports-or-ingce/
papers/bbv_hvrp.pdf [accessed 2011-10-12].
191. James Mark Baldwin. A New Factor in Evolution. The American Naturalist, 30(354):441–451,
June 1896, University of Chicago Press for The American Society of Naturalists: Chicago,
IL, USA. Fully available at http://www.brocku.ca/MeadProject/Baldwin/Baldwin_
1896_h.html [accessed 2008-09-10]. See also [192].
192. James Mark Baldwin. A New Factor in Evolution. In Adaptive Individuals in Evolving Pop-
ulations: Models and Algorithms [255], pages 59–80. Addison-Wesley Longman Publishing
Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [191].
193. Shumeet Baluja. Population-Based Incremental Learning – A Method for Integrating Genetic
Search Based Function Optimization and Competitive Learning. Technical Report CMU-CS-
94-163, Carnegie Mellon University (CMU), School of Computer Science, Computer Science
Department: Pittsburgh, PA, USA, June 2, 1994. Fully available at http://www.ri.cmu.
edu/pub_files/pub1/baluja_shumeet_1994_2/baluja_shumeet_1994_2.pdf [accessed 2011-12-05]. CiteSeerx: 10.1.1.61.8554.
194. Shumeet Baluja. An Empirical Comparison of Seven Iterative and Evolutionary Function Op-
timization Heuristics. Technical Report CMU-CS-95-193, Carnegie Mellon University (CMU),
School of Computer Science, Computer Science Department: Pittsburgh, PA, USA, Septem-
ber 1, 1995. CiteSeerx : 10.1.1.43.1108.
195. Shumeet Baluja and Richard A. Caruana. Removing the Genetics from the Standard Genetic
Algorithm. In ICML’95 [2221], pages 38–46, 1995. Fully available at http://www.cs.
cornell.edu/˜caruana/ml95.ps [accessed 2009-08-20]. See also [196].
196. Shumeet Baluja and Richard A. Caruana. Removing the Genetics from the Stan-
dard Genetic Algorithm. Technical Report CMU-CS-95-141, Carnegie Mellon University
(CMU), School of Computer Science, Computer Science Department: Pittsburgh, PA,
USA, May 22, 1995. Fully available at http://www.i3s.unice.fr/˜verel/supports/
articles/baluja95removing.pdf and https://eprints.kfupm.edu.sa/62112/
1/62112.pdf [accessed 2009-08-20]. CiteSeerx : 10.1.1.44.5424. See also [195].
197. Robert Balzer, editor. Proceedings of the 1st Annual National Conference on Artificial Intel-
ligence (AAAI’80), August 18–20, 1980, Stanford University: Stanford, CA, USA. American
Association for Artificial Intelligence: Los Altos, CA, USA and John W. Kaufmann Inc.:
Washington, DC, USA. isbn: 0-262-51050-2. Partly available at http://www.aaai.
org/Conferences/AAAI/aaai80.php [accessed 2007-09-06].
198. Carlos A. Bana e Costa, editor. Selected Readings from the Third International Summer
School on Multicriteria Decision Aid: Methods, Applications, and Software (MCDA’90),
July 23–27, 1988, Monte Estoril, Lisbon, Portugal. Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 0387529500 and 3540529500. Google Books ID: F2yfGAAACAAJ. Library of
Congress Control Number (LCCN): 90010313. LC Classification: T58.62 .R43 198. Pub-
lished in 1990.
199. Saswatee Banerjee and Lakshminarayan N. Hazra. Thin Lens Design of Cooke Triplet
Lenses: Application of a Global Optimization Technique. In Novel Optical Systems and
Large-Aperture Imaging [259], pages 175–183, 1998. doi: 10.1117/12.332474.
200. Proceedings of the 10th IEEE International Conference on Cognitive Informatics & Cogni-
tive Computing (ICCI*CC’11), August 18–20, 2011, Banff, AB, Canada. Partly available
at http://www.ucalgary.ca/icci_cc2011/ [accessed 2011-02-28].
201. Jørgen Bang-Jensen, Gregory Gutin, and Anders Yeo. When the Greedy Algorithm
Fails. Discrete Optimization, 1(2):121–127, November 15, 2004, Elsevier Science Publish-
ers B.V.: Essex, UK. doi: 10.1016/j.disopt.2004.03.007. Fully available at http://www.
optimization-online.org/DB_FILE/2003/03/622.pdf.
202. Balázs Bánhelyi, Marco Biazzini, Alberto Montresor, and Márk Jelasity. Peer-to-Peer Op-
timization in Large Unreliable Networks with Branch-and-Bound and Particle Swarms. In
EvoWorkshops’09 [1052], pages 87–92, 2009. doi: 10.1007/978-3-642-01129-0_10.
203. David Banks, Leanna House, Frederick R. McMorris, Phipps Arabie, and Wolfgang Gaul,
editors. Classification, Clustering, and Data Mining Applications: Proceedings of the Meeting
of the International Federation of Classification Societies (IFCS), July 15–18, 2004, Studies in
Classification, Data Analysis, and Knowledge Organization. Springer New York: New York,
NY, USA. isbn: 3540220143.
204. Ajay Bansal, M. Brian Blake, Srividya Kona, Steffen Bleul, Thomas Weise, and Michael C.
Jäger. WSC-08: Continuing the Web Services Challenge. In CEC/EEE’08 [1346], pages
351–354, 2008. doi: 10.1109/CECandEEE.2008.146. Fully available at http://www.
it-weise.de/documents/files/BBKBWJ2008WSC08CTWSC.pdf. INSPEC Accession
Number: 10475109. Ei ID: 20091512027446. IDS (SCI): BJF01.
205. Wolfgang Banzhaf. Genetic Programming for Pedestrians. In ICGA’93 [969], pages 17–21,
1993. Fully available at ftp://ftp.cs.cuhk.hk/pub/EC/GP/papers/pedes93.
ps.gz, http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/banzhaf_mrl.html,
and http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/
GenProg_forPed.ps.Z [accessed 2010-08-18]. CiteSeerx : 10.1.1.38.1144. Also: MRL
Technical Report 93-03.
206. Wolfgang Banzhaf and Frank H. Eeckman, editors. Evolution and Biocomputation –
Computational Models of Evolution, volume 899/1995 in Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany, kindle edition, April 13, 1995.
doi: 10.1007/3-540-59046-3. isbn: 0-387-59046-3 and 3-540-59046-3. Google Books
ID: dZf-yrmSZPkC. asin: B000V1DAP0. Revised papers from the “Evolution as a Compu-
tational Process” workshop held in July 1992 in Monterey, CA, USA.
207. Wolfgang Banzhaf and Christian W. G. Lasarczyk. Genetic Programming of an Algorith-
mic Chemistry. In GPTP’04 [2089], pages 175–190, 2004. doi: 10.1007/0-387-23254-0_11.
Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/banzhaf_
2004_GPTP.html and http://www.cs.mun.ca/˜banzhaf/papers/algochem.pdf [accessed 2010-08-01]. CiteSeerx: 10.1.1.94.6816.
208. Wolfgang Banzhaf and Colin R. Reeves, editors. Proceedings of the Fifth Workshop on Foun-
dations of Genetic Algorithms (FOGA’98), July 22–25, 1998, Madison, WI, USA. Morgan
Kaufmann: San Mateo, CA, USA. isbn: 1-55860-559-2. Published April 2, 1991.
209. Wolfgang Banzhaf, Peter Nordin, Robert E. Keller, and Frank D. Francone. Genetic Pro-
gramming: An Introduction – On the Automatic Evolution of Computer Programs and Its Ap-
plications. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA and dpunkt.verlag:
Heidelberg, Germany, November 30, 1997. isbn: 1-558-60510-X and 3-920993-58-6.
Google Books ID: 1697qefFdtIC. OCLC: 38067741, 247738380, 468426941, and
474902629. Library of Congress Control Number (LCCN): 97051603. GBV-Identification
(PPN): 238379108, 279969872, and 307343650. LC Classification: QA76.623 .G46 1998.
210. Wolfgang Banzhaf, Riccardo Poli, Marc Schoenauer, and Terence Claus Fogarty, editors. Pro-
ceedings of the First European Workshop on Genetic Programming (EuroGP’98), April 14–
15, 1998, Paris, France, volume 1391/1998 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0055923. isbn: 3-540-64360-5.
See also [2195].
211. Wolfgang Banzhaf, Jason M. Daida, Ágoston E. Eiben, Max H. Garzon, Vasant Honavar,
Mark J. Jakiela, and Robert Elliott Smith, editors. Proceedings of the Genetic and Evolu-
tionary Computation Conference (GECCO’99), July 13–17, 1999, Orlando, FL, USA. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-611-4. Google
Books ID: -1vTAAAACAAJ and hZlQAAAAMAAJ. OCLC: 42918215, 43044489, 59333111,
and 316858650. The 1999 Genetic and Evolutionary Computational Conference (GECCO-
99) combined the longest running conferences in evolutionary computation (ICGA) and the
world’s two largest EC conferences (GP and ICGA) to create a unique opportunity to col-
lect the best in research in this growing field of computer science and engineering. See also
[2093, 2502].
212. Wolfgang Banzhaf, Guillaume Beslon, Steffen Christensen, James A. Foster, François Képès,
Virginie Lefort, Julian Francis Miller, Miroslav Radman, and Jeremy J. Ramsden. From
Artificial Evolution to Computational Evolution: A Research Agenda. Nature Reviews Ge-
netics, 7(9):729–735, September 2006, Nature Publishing Group (NPG): Houndmills, Bas-
ingstoke, Hampshire, UK. doi: 10.1038/nrg1921. Fully available at http://www.cs.mun.
ca/˜banzhaf/papers/nrg2006.pdf [accessed 2008-07-20]. Guidelines/Perspectives.
213. Elena Baralis, Stefano Paraboschi, and Ernest Teniente. Materialized View Selection in a Mul-
tidimensional Database. In VLDB’97 [1434], pages 156–165, 1997. Fully available at http://
www.vldb.org/conf/1997/P156.PDF [accessed 2011-04-13]. CiteSeerx : 10.1.1.41.7437
and 10.1.1.78.9501.
214. Helio José Correa Barbosa, Fernanda M. P. Raupp, Carlile Campos Lavor, Harry C. Lima,
and Nelson Maculan. A Hybrid Genetic Algorithm for Finding Stable Conformations of Small
Molecules. In SBRN’00 [981], pages 90–94, 2000. doi: 10.1109/SBRN.2000.889719. INSPEC
Accession Number: 6813074.
215. 11th European Conference on Machine Learning (ECML’00), May 30–June 2, 2000,
Barcelona, Catalonia, Spain.
216. Proceedings of the EUROGEN2003 Conference: Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems (EUROGEN’03), September 15–
17, 2003, Barcelona, Catalonia, Spain. isbn: 84-95999-33-1. Partly available at http://
congress.cimne.upc.es/eurogen03/ [accessed 2007-09-16]. Published on CD.
217. Tiffany Barnes, Michel Desmarais, Cristóbal Romero, and Sebastián Ventura, ed-
itors. Proceedings of the 2nd International Conference on Educational Data
Mining (EDM’09), July 1–3, 2009, Córdoba, Spain. isbn: 84-613-2308-4.
Fully available at http://www.educationaldatamining.org/EDM2009/uploads/
proceedings/edm-proceedings-2009.pdf [accessed 2011-10-25]. OCLC: 733734163.
218. Lionel Barnett. Tangled Webs: Evolutionary Dynamics on Fitness Landscapes with Neu-
trality. Master’s thesis, University of Sussex, School of Cognitive Science: Brighton, UK,
Summer 1997, Inman Harvey, Supervisor. CiteSeerx : 10.1.1.48.2862.
219. Lionel Barnett. Ruggedness and Neutrality – The NKp Family of Fitness Landscapes. In
Artificial Life VI [32], pages 18–27, 1998. Fully available at http://www.cogs.susx.ac.
uk/users/lionelb/downloads/publications/alife6_paper.ps.gz [accessed 2009-07-10]. CiteSeerx: 10.1.1.43.5010.
220. Nils Aall Barricelli. Esempi Numerici di Processi di Evoluzione. Methodos, 6(21–22):45–68,
1954.
221. Nils Aall Barricelli. Symbiogenetic Evolution Processes Realized by Artificial Methods.
Methodos, 9(35–36):143–182, 1957.
222. Nils Aall Barricelli. Numerical Testing of Evolution Theories. Part I. Theoretical Introduc-
tion and Basic Tests. Acta Biotheoretica, 16(1/2):69–98, March 1962, Springer Netherlands:
Dordrecht, Netherlands. doi: 10.1007/BF01556771. Received: 27 November 1961. See also
[223].
223. Nils Aall Barricelli. Numerical Testing of Evolution Theories. Part II. Preliminary Tests
of Performance. Symbiogenesis and Terrestrial Life. Acta Biotheoretica, 16(3/4):99–126,
September 1963, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01556602.
Received: 27 November 1961. See also [222].
224. Alwyn M. Barry, editor. Proceedings of the Bird of a Feather Workshop at Genetic and
Evolutionary Computation Conference (GECCO’02 WS), July 8, 2002, Roosevelt Hotel: New
York, NY, USA. AAAI Press: Menlo Park, CA, USA. Bird-of-a-feather Workshops, GECCO-
2002. See also [478, 1675, 1790].
225. Peter Bartalos and Mária Bieliková. Semantic Web Service Composition Framework Based
on Parallel Processing. In CEC’09 [1245], pages 495–498, 2009. doi: 10.1109/CEC.2009.27.
INSPEC Accession Number: 10839132. See also [1571].
226. Thomas Barth, Bernd Freisleben, Manfred Grauer, and Frank Thilo. Distributed Solution of
Optimal Hybrid Control Problems on Networks of Workstations. In CLUSTER 2000 [1365],
pages 162–169, 2000. doi: 10.1109/CLUSTR.2000.889027.
227. Thomas Bartz-Beielstein. Experimental Analysis of Evolution Strategies – Overview and
Comprehensive Introduction. Technical Report 157, Universität Dortmund, Collabora-
tive Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia,
Germany, May 24, 2004. Fully available at http://hdl.handle.net/2003/5449 and
http://ls11-www.cs.uni-dortmund.de/people/tom/158703.pdf [accessed 2009-09-09].
CiteSeerx : 10.1.1.5.2261.
228. Thomas Bartz-Beielstein, Marco Chiarandini, Luís Paquete, and Mike Preuß, editors. Ex-
perimental Methods for the Analysis of Optimization Algorithms. Springer-Verlag GmbH:
Berlin, Germany, February 2010. doi: 10.1007/978-3-642-02538-9. isbn: 3-642-02537-4.
Libri-Number: 9596283.
229. Pablo Manuel Rabanal Basalo, Ismael Rodríguez Laguna, and Fernando Rubio Díez. Using
River Formation Dynamics to Design Heuristic Algorithms. In UC’07 [55], pages 163–177,
2007. doi: 10.1007/978-3-540-73554-0_16.
230. Pablo Manuel Rabanal Basalo, Ismael Rodríguez Laguna, and Fernando Rubio Díez. Finding
Minimum Spanning/Distance Trees by Using River Formation Dynamics. In ANTS’08 [2580],
pages 60–71, 2008. doi: 10.1007/978-3-540-87527-7_6.
231. William Bateson. Mendel’s Principles of Heredity, Kessinger Publishing’s Rare
Reprints. Cambridge University Press: Cambridge, UK, 1909–1930. isbn: 1428648194.
Google Books ID: ASoFAQAAIAAJ, NskrHgAACAAJ, WWZfDQljn8gC, YpTPAAAAMAAJ, and
dRjSCFULQegC.
232. Roberto Battiti and Giampietro Tecchiolli. The Reactive Tabu Search. ORSA
Journal on Computing, 6(2):126–140, 1994, Operations Research Society of Amer-
ica (ORSA). doi: 10.1287/ijoc.6.2.126. Fully available at http://citeseer.ist.
psu.edu/141556.html and http://rtm.science.unitn.it/˜battiti/archive/
reactive-tabu-search.ps.gz [accessed 2007-09-15].
233. Andreas Bauer and Wolfgang Lehner. On Solving the View Selection Problem in
Distributed Data Warehouse Architectures. In SSDBM’03 [1368], pages 43–51, 2003.
doi: 10.1109/SSDM.2003.1214953. INSPEC Accession Number: 7804318.
234. Helmut Baumgarten, editor. Das Beste Der Logistik: Innovationen, Strategien, Umsetzun-
gen. Springer-Verlag GmbH: Berlin, Germany and Bundesvereinigung Logistik e.V. (BVL):
Bremen, Germany, 2008. isbn: 3-540-78404-7. Google Books ID: di5g8FY5X YC.
235. James C. Bean. Genetics and Random Keys for Sequencing and Optimization. Technical
Report 92-43, University of Michigan, Department of Industrial and Operations Engineering:
Ann Arbor, MI, USA, June 1992–December 17, 1993. Fully available at http://deepblue.
lib.umich.edu/bitstream/2027.42/3481/5/ban1152.0001.001.pdf [accessed 2010-
11-22]. See also [236].
236. James C. Bean. Genetic Algorithms and Random Keys for Sequencing and Optimization.
ORSA Journal on Computing, 6(2):154–160, Spring 1994, Operations Research Society of
America (ORSA). doi: 10.1287/ijoc.6.2.154. See also [235].
237. James C. Bean and Atidel Ben Hadj-Alouane. A Dual Genetic Algorithm for Bounded Integer
Programs. Technical Report 92-53, University of Michigan, Department of Industrial and
Operations Engineering: Ann Arbor, MI, USA, October 1992. Fully available at http://
hdl.handle.net/2027.42/3480 [accessed 2008-11-15]. Revised in June 1993.
238. David Beasley, David R. Bull, and Ralph R. Martin. An Overview of Genetic Algorithms:
Part 1. Fundamentals. University Computing, 15(2):58–69, 1993, Inter-University Commit-
tee on Computing: Cambridge, MA, USA. Fully available at http://ralph.cs.cf.ac.
uk/papers/GAs/ga_overview1.pdf [accessed 2010-07-23]. CiteSeerx : 10.1.1.44.3757 and
10.1.1.61.3619. See also [239].
239. David Beasley, David R. Bull, and Ralph R. Martin. An Overview of Genetic Algorithms:
Part 2, Research Topics. University Computing, 15(4):170–181, 1993, Inter-University
Committee on Computing: Cambridge, MA, USA. Fully available at http://proton.
tau.ac.il/˜marek/genetic_algorithms_beasley93_overview_partII.pdf and
http://ralph.cs.cf.ac.uk/papers/GAs/ga_overview2.pdf [accessed 2010-07-23].
CiteSeerx : 10.1.1.23.8293 and 10.1.1.61.3643. See also [238].
240. David Beasley, David R. Bull, and Ralph R. Martin. Reducing Epistasis in Combinatorial
Problems by Expansive Coding. In ICGA’93 [969], pages 400–407, 1993. Fully available
at http://ralph.cs.cf.ac.uk/papers/GAs/expansive_coding.pdf [accessed 2010-07-
29]. CiteSeerx : 10.1.1.23.1953.
241. John Edward Beasley. OR-Library. Brunel University, Department of Mathematical Sciences:
Uxbridge, Middlesex, UK, June 1990. Fully available at http://people.brunel.ac.uk/
~mastjjb/jeb/info.html [accessed 2011-10-12].
242. William Beaudoin, Sébastien Vérel, Philippe Collard, and Cathy Escazut. Deceptiveness and
Neutrality the ND Family of Fitness Landscapes. In GECCO’06 [1516], pages 507–514, 2006.
doi: 10.1145/1143997.1144091.
243. Jörg D. Becker and Ignaz Eisele, editors. Proceedings of the Workshop on Parallel Process-
ing: Logic, Organization and Technology (WOPPLOT’83), June 27–29, 1983, Federal Armed
Forces University Munich (HSBw M): Neubiberg, Bavaria, Germany, volume 196/1984 in
Lecture Notes in Physics. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BFb0018249.
isbn: 3540129170. Published in 1984.
244. Jörg D. Becker and Ignaz Eisele, editors. Proceedings of the Workshop on Parallel Pro-
cessing: Logic, Organization, and Technology (WOPPLOT’86), July 2–4, 1986, Federal
Armed Forces University Munich (HSBw M): Neubiberg, Bavaria, Germany, volume 253/1987
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-18022-2. isbn: 0-387-18022-2 and 3-540-18022-2. Published in
1987.
245. Jörg D. Becker, Ignaz Eisele, and F. W. Mündemann, editors. Parallelism, Learning, Evolu-
tion: Workshop on Evolutionary Models and Strategies (Neubiberg, Germany, 1989-03-10/11)
and Workshop on Parallel Processing: Logic, Organization, and Technology (Wildbad Kreuth,
Germany, 1989-07-24 to 28) (WOPPLOT’89), 1989, Germany, volume 565/1991 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-55027-5. isbn: 0-387-55027-5
and 3-540-55027-5. Google Books ID: AJrVD3dAousC. OCLC: 25111932, 231262527,
311366610, 320328459, and 440927259. Published in 1991.
246. Mark A. Bedau, John S. McCaskill, Norman H. Packard, and Steen Rasmussen, edi-
tors. Proceedings of the Seventh International Conference on Artificial Life (Artificial Life
VII), August 1–2, 2000, Reed College: Portland, OR, USA, Complex Adaptive Systems,
Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 026252290X. Google Books
ID: xLGm7KFGy4C. OCLC: 44117888.
247. Witold Bednorz, editor. Advances in Greedy Algorithms. IN-TECH Education and Pub-
lishing: Vienna, Austria, November 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-27-5&type=B [accessed 2009-01-13].
248. Niko Beerenwinkel, Lior Pachter, and Bernd Sturmfels. Epistasis and Shapes of Fitness
Landscapes. Statistica Sinica, 17(4):1317–1342, October 2007, Academia Sinica, Institute of
Statistical Science: Taipei, Taiwan and International Chinese Statistical Association (ICSA).
Fully available at http://www3.stat.sinica.edu.tw/statistica/J17N4/J17N43/
J17N43.html [accessed 2009-07-11]. arXiv ID: q-bio/0603034.
249. Catriel Beeri and Peter Buneman, editors. Proceedings of the 7th International Confer-
ence on Database Theory (ICDT’99), January 10–12, 1999, Jerusalem, Israel, volume 1540
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-65452-6.
250. José Manuel Belenguer Ribera. CARPLIB. Universitat de València, Departament
d’Estadı́stica i Investigació Operativa: València, Spain, November 5, 2005. Fully available
at http://www.uv.es/~belengue/carp/READ_ME [accessed 2011-09-18].
251. José Manuel Belenguer Ribera. The Capacitated Arc Routing Problem (CARPLIB). Universi-
tat de València, Departament d’Estadı́stica i Investigació Operativa: València, Spain, Novem-
ber 5, 2005. Fully available at http://www.uv.es/belengue/carp.html [accessed 2010-12-
03].
252. José Manuel Belenguer Ribera and Enrique Benavent. A Cutting Plane Algorithm for
the Capacitated Arc Routing Problem. Computers & Operations Research, 30(5):705–728,
April 2003, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0305-0548(02)00046-1.
253. Richard K. Belew. When both Individuals and Populations Search: Adding Simple Learning
to the Genetic Algorithm. In ICGA’89 [2414], pages 34–41, 1989.
254. Richard K. Belew and Lashon Bernard Booker, editors. Proceedings of the Fourth Interna-
tional Conference on Genetic Algorithms (ICGA’91), July 13–16, 1991, University of Cal-
ifornia (UCSD): San Diego, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-208-9. Google Books ID: OtnJOwAACAAJ and YvBQAAAAMAAJ.
OCLC: 24132977.
255. Richard K. Belew and Melanie Mitchell, editors. Adaptive Individuals in Evolving Popu-
lations: Models and Algorithms, volume 26 in Santa Fe Institue Studies in the Sciences of
Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and West-
view Press, January 15, 1996. isbn: 0-201-48364-5 and 0-201-48369-6. Google Books
ID: OKkytWTQxeEC, YurWHgAACAAJ, and qOAhIAAACAAJ.
256. Richard K. Belew and Michael D. Vose, editors. Proceedings of the 4th Workshop on
Foundations of Genetic Algorithms (FOGA’96), August 5, 1996, University of San Diego:
San Diego, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA.
isbn: 1-55860-460-X. Published March 1, 1997.
257. Bartlomiej Beliczyński, Andrzej Dzieliński, Marcin Iwanowski, and Bernardete Ribeiro, ed-
itors. Proceedings of the 8th International Conference on Adaptive and Natural Comput-
ing Algorithms, Part I (ICANNGA’07-I), April 11–17, 2007, Warsaw University: War-
saw, Poland, volume 4431/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71618-1. isbn: 3-540-71589-4. Partly available at http://
icannga07.ee.pw.edu.pl/ [accessed 2009-08-20]. Google Books ID: WYvMFlgm1KkC.
OCLC: 122935609, 154951366, and 318299908. Library of Congress Control Number
(LCCN): 2007923870. See also [258].
258. Bartlomiej Beliczyński, Andrzej Dzieliński, Marcin Iwanowski, and Bernardete Ribeiro, ed-
itors. Proceedings of the 8th International Conference on Adaptive and Natural Comput-
ing Algorithms, Part II (ICANNGA’07-II), April 11–17, 2007, Warsaw University: War-
saw, Poland, volume 4432/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71629-7. isbn: 3-540-71590-8 and 3-540-71629-7. Google Books
ID: 5yYBOAAACAAJ and V9ts51 ia2sC. OCLC: 122935609 and 154951366. Library of
Congress Control Number (LCCN): 2007923870. See also [257].
259. Kevin D. Bell, Michael K. Powers, and Jose M. Sasian, editors. Novel Optical Systems and
Large-Aperture Imaging, July 21, 1998, San Diego, CA, USA, volume 3430 in The Proceedings
of SPIE. Society of Photographic Instrumentation Engineers (SPIE): Bellingham, WA, USA.
Published in December 1998.
260. Richard Ernest Bellman. Dynamic Programming, Dover Books on Mathematics. Prince-
ton University Press: Princeton, NJ, USA, 1957–2003. isbn: 0486428095. Google Books
ID: L ChPwAACAAJ, fyVtp3EMxasC, and wdtoPwAACAAJ. OCLC: 50866848.
261. Richard Ernest Bellman. Adaptive Control Processes: A Guided Tour. Prince-
ton University Press: Princeton, NJ, USA, 1961–1990. isbn: 0691079013. Google
Books ID: 8wY3PAAACAAJ, POAmAAAAMAAJ, cXJEAAAAIAAJ, and pVu0PQAACAAJ.
OCLC: 67470618.
262. David A. Belsley and Christopher F. Baum, editors. Fifth International Conference on Com-
puting in Economics and Finance, June 24–26, 1999, Boston College: Boston, MA, USA.
263. David A. Belsley and Erricos John Kontoghiorghes, editors. Handbook on Com-
putational Econometrics. Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.1002/9780470748916. isbn: 0-470-74385-9. OCLC: 319499406. Library of
Congress Control Number (LCCN): 2009025907. GBV-Identification (PPN): 599321954.
LC Classification: HB143.5 .H357 2009.
264. Aharon Ben-Tal. Characterization of Pareto and Lexicographic Optimal Solutions. In
MCDM’80 [1960], pages 1–11, 1980.
265. Enrique Benavent, V. Campos, A. Corberán, and E. Mota. The Capacitated Arc Routing
Problem. Lower Bounds. Networks, 22(7):669–690, December 1992, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1002/net.3230220706.
266. Stefano Benedettini, Christian Blum, and Andrea Roli. A Randomized Iterated Greedy
Algorithm for the Founder Sequence Reconstruction Problem. In LION 4 [2581], pages
37–51, 2010. doi: 10.1007/978-3-642-13800-3_4.
267. Ernesto Benini and Andrea Toffolo. Optimal Design of Horizontal-Axis Wind Turbines Using
Blade-Element Theory and Evolutionary Computation. Journal of Solar Energy Engineering,
124(4):357–363, November 2002, American Society of Mechanical Engineers (ASME): New
York, NY, USA. doi: 10.1115/1.1510868.
268. J. H. Bennett, editor. The Collected Papers of R.A. Fisher, volume I–V. University
of Adelaide: SA, Australia, 1971–1974. Fully available at http://digital.library.
adelaide.edu.au/coll/special/fisher/stat_math.html [accessed 2008-12-08]. Col-
lected Papers of R.A. Fisher, Relating to Statistical and Mathematical Theory and Ap-
plications (excluding Genetics).
269. Forrest H. Bennett III. Emergence of a Multi-Agent Architecture and New Tactics For the
Ant Colony Foraging Problem Using Genetic Programming. In SAB’96 [1811], pages 430–
439, 1996.
270. Forrest H. Bennett III. Automatic Creation of an Efficient Multi-Agent Architecture Using
Genetic Programming with Architecture-Altering Operations. pages 30–38.
271. Forrest H. Bennett III, John R. Koza, David Andre, and Martin A. Keane. Evolu-
tion of a 60 Decibel Op Amp using Genetic Programming. In ICES’96 [1231], 1996.
doi: 10.1007/3-540-63173-9_65. CiteSeerx : 10.1.1.56.6051. See also [1607].
272. Peter J. Bentley, editor. Evolutionary Design by Computers. Morgan Kaufmann Pub-
lishers Inc.: San Francisco, CA, USA, May 1999. isbn: 1-55860-605-X. Google Books
ID: EgC6LBAH5r8C.
273. Peter J. Bentley and S.P. Kumar. The Ways to Grow Designs: A Comparison of Embryogenies
for an Evolutionary Design Problem. In GECCO’99 [211], pages 35–43, 1999.
274. Peter J. Bentley, Doheon Lee, and Sungwon Jung, editors. Proceedings of the 7th International
Conference on Artificial Immune Systems (ICARIS’08), August 10–13, 2008, Phuket, Thai-
land, volume 5132 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany.
275. Carlos Bento, Amı́lcar Cardoso, and Gaël Dias, editors. Progress in Artificial Intelligence
– Proceedings of the 13th Portuguese Conference on Artificial Intelligence (EPIA’05), De-
cember 5–8, 2005, Covilhã, Portugal, volume 3808 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
276. Aviv Bergman and Marcus W. Feldman. Recombination Dynamics and the Fitness Land-
scape. Physica D: Nonlinear Phenomena, 56(1):57–67, April 1992, Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Am-
sterdam, The Netherlands. doi: 10.1016/0167-2789(92)90050-W.
277. Pavel Berkhin. Survey Of Clustering Data Mining Techniques. Technical Report, Ac-
crue Software: San José, CA, USA, 2002. Fully available at http://citeseer.ist.
psu.edu/berkhin02survey.html and http://www.ee.ucr.edu/~barth/EE242/
clustering_survey.pdf [accessed 2007-08-27].
278. 17th European Conference on Machine Learning (ECML’06), September 18–22, 2006, Berlin,
Germany.
279. Ester Bernadó, Xavier Llorà, and Josep Maria Garrell i Guiu. XCS and GALE: A Com-
parative Study of Two Learning Classifier Systems with Six Other Learning Algorithms on
Classification Tasks. In IWLCS’01 [1679], pages 115–132, 2001. doi: 10.1007/3-540-48104-4_8.
CiteSeerx : 10.1.1.1.4467.
280. Gérard Berry, Hubert Comon, and Alain Finkel, editors. Computer Aided Verifica-
tion – Proceedings of the 13th International Conference on Computer Aided Verification
(CAV’01), July 18–22, 2001, Palais de la Mutualité: Paris, France, volume 2102/2001
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-44585-4. isbn: 3-540-42345-1. Google Books ID: 1llkK-8I7YYC.
OCLC: 47282220, 47675546, 48729917, 61847966, 67015253, 76232298,
243488788, and 248623804.
281. Michael J. A. Berry and Gordon S. Linoff. Mastering Data Mining: The Art and Science
of Customer Relationship Management. Wiley Interscience: Chichester, West Sussex, UK,
December 1999. isbn: 0471331236. Google Books ID: QeVJPwAACAAJ. Library of Congress
Control Number (LCCN): 00265296. GBV-Identification (PPN): 311887082. LC Classifi-
cation: HF5415.125 .B47 2000.
282. Hugues Bersini and Jorge Carneiro, editors. Proceedings of the 5th International Conference
on Artificial Immune Systems (ICARIS’06), September 4–6, 2006, Instituto Gulbenkian de
Ciência: Oeiras, Portugal, volume 4163 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-37749-2.
283. Elisa Bertino and Silvana Castano, editors. Atti del Settimo Convegno Nazionale Sistemi
Evoluti per Basi di Dati, June 23–25, 1999, Villa Olmo, Como, Italy.
284. Alberto Bertoni and Marco Dorigo. Implicit Parallelism in Genetic Algorithms. Artifi-
cial Intelligence, 61(2):307–314, June 1993, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(93)90071-I. See also [285].
285. Alberto Bertoni and Marco Dorigo. Implicit Parallelism in Genetic Algorithms. Tech-
nical Report TR-93-001-Revised, Università Statale di Milano, Dipartimento di Scienze
dell’Informazione: Milano, Italy and International Computer Science Institute (ICSI), Uni-
versity of California: Berkeley, CA, USA, April 1993. Fully available at ftp://ftp.
icsi.berkeley.edu/pub/techreports/1993/tr-93-001.ps.gz and http://ftp.
icsi.berkeley.edu/ftp/pub/techreports/1993/tr-93-001.pdf [accessed 2010-07-30].
CiteSeerx : 10.1.1.34.3275. See also [284].
286. Patricia Besson, Jean-Marc Vesin, Vlad Popovici, and Murat Kunt. Differential Evolu-
tion Applied to a Multimodal Information Theoretic Optimization Problem. In EvoWork-
shops’06 [2341], pages 505–509, 2006. doi: 10.1007/11732242_46.
287. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. In CSC’78 [774],
1978. Fully available at http://hdl.handle.net/2027.42/3572 [accessed 2010-07-23].
ID: UMR0341. See also [288].
288. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. PhD thesis, University
of Michigan: Ann Arbor, MI, USA, 1980. Order No.: AAI8106101. University Microfilms
No.: 8106101. National Science Foundation Grant No. MCS76-04297. See also [287, 289].
289. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. Dissertation Abstracts
International (DAI), 41(9):3503B, 1980, ProQuest: Ann Arbor, MI, USA. See also [288].
290. Pete Bettinger and Jianping Zhu. A New Heuristic Method for Solving Spatially Constrained
Forest Planning Problems based on Mitigation of Infeasibilities Radiating Outward from
a Forced Choice. Silva Fennica, 40(2):315–333, 2006, Finnish Society of Forest Science,
The Finnish Forest Research Institute: Finland, Eeva Korpilahti, editor. Fully available
at http://www.metla.fi/silvafennica/full/sf40/sf402315.pdf [accessed 2010-07-
24]. CiteSeerx : 10.1.1.133.2656.
291. Gregory Beurier, Fabien Michel, and Jacques Ferber. A Morphogenesis Model for Multia-
gent Embryogeny. In ALIFE’06 [2314], 2006. Fully available at http://www.lirmm.fr/
~fmichel/publi/pdfs/beurier06alifeX.pdf [accessed 2010-08-06].
292. Hans-Georg Beyer. Some Aspects of the ’Evolution Strategy’ for Solving TSP-Like Optimiza-
tion Problems Appearing at the Design Studies of a 0.5 TeV e+e−-Linear Collider. In PPSN
II [1827], pages 361–370, 1992.
293. Hans-Georg Beyer. Towards a Theory of ’Evolution Strategies’: Some Asymptotical Results
from the (1, +λ)-Theory. Technical Report SYS–5/92, Universität Dortmund, Fachbereich
Informatik: Dortmund, North Rhine-Westphalia, Germany, December 1, 1992. Fully avail-
able at http://ls11-www.cs.uni-dortmund.de/~beyer/coll/Bey93a/Bey93a.ps
[accessed 2009-07-11]. See also [294].
294. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: Some Asymptotical Re-
sults from the (1, +λ)-Theory. Evolutionary Computation, 1(2):165–188, Summer 1993,
MIT Press: Cambridge, MA, USA, Marc Schoenauer, editor. Fully available at http://
ls11-www.cs.uni-dortmund.de/~beyer/coll/Bey93a/Bey93a.ps [accessed 2009-07-11].
CiteSeerx : 10.1.1.38.3238. See also [293].
295. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: The (µ, λ)-Theory.
Evolutionary Computation, 2(4):381–407, Winter 1994, MIT Press: Cambridge, MA,
USA. doi: 10.1162/evco.1994.2.4.381. Fully available at http://ls11-www.cs.
uni-dortmund.de/people/beyer/coll/Bey95a/Bey95a.ps [accessed 2009-07-10].
CiteSeerx : 10.1.1.47.3088.
296. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: On the Benefit of
Sex – The (µ/µ, λ)-Theory. Evolutionary Computation, 3(1):81–111, Spring 1995,
MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.1.81. Fully avail-
able at http://ls11-www.informatik.uni-dortmund.de/people/beyer/coll/
Bey95b/Bey95b.ps [accessed 2010-08-18]. CiteSeerx : 10.1.1.30.5489.
297. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: Self-Adaptation. Evo-
lutionary Computation, 3(3):311–347, Fall 1995, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1995.3.3.311. Fully available at http://ls11-www.cs.uni-dortmund.
de/~beyer/coll/Bey95e/Bey95e.ps [accessed 2010-08-18]. CiteSeerx : 10.1.1.26.3527.
298. Hans-Georg Beyer. An Alternative Explanation for the Manner in which Genetic Algorithms
Operate. Biosystems, 41(1):1–15, January 1997, Elsevier Science Ireland Ltd.: East Park,
Shannon, Ireland and North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/S0303-2647(96)01657-7. CiteSeerx : 10.1.1.39.307.
299. Hans-Georg Beyer. Design Optimization of a Linear Accelerator using Evolution Strategy:
Solving a TSP-like Optimization Problem. In Handbook of Evolutionary Computation [171],
Chapter G4.2, pages 1–8. Oxford University Press, Inc.: New York, NY, USA, Institute of
Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press,
Inc.: Boca Raton, FL, USA, 1997.
300. Hans-Georg Beyer. The Theory of Evolution Strategies, Natural Computing Series. Springer
New York: New York, NY, USA, May 27, 2001. isbn: 3-540-67297-4. Google Books
ID: 8tbInLufkTMC. OCLC: 46394059 and 247928112. Library of Congress Control
Number (LCCN): 2001020620. LC Classification: QA76.618 B49 2001.
301. Hans-Georg Beyer and William B. Langdon, editors. Foundations of Genetic Algorithms
XI (FOGA’11), January 5–9, 2011, Hirschen-Schwarzenberg: Schwarzenberg, Austria. ACM
Press: New York, NY, USA.
302. Hans-Georg Beyer and Hans-Paul Schwefel. Evolution Strategies – A Comprehensive In-
troduction. Natural Computing: An International Journal, 1(1):3–52, March 2002, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1015059928466. Fully available at http://www.cs.bham.ac.uk/
~pxt/NIL/es.pdf [accessed 2010-08-09].
303. Hans-Georg Beyer, Eva Brucherseifer, Wilfried Jakob, Hartmut Pohlheim, Bernhard Send-
hoff, and Thanh Binh To. Evolutionary Algorithms – Terms and Definitions. Universität
Dortmund: Dortmund, North Rhine-Westphalia, Germany, version e-1.2 edition, Febru-
ary 25, 2002. Fully available at http://ls11-www.cs.uni-dortmund.de/people/
beyer/EA-glossary/ [accessed 2008-04-10].
304. Hans-Georg Beyer, Una-May O’Reilly, Dirk V. Arnold, Wolfgang Banzhaf, Christian Blum,
Eric W. Bonabeau, Erick Cantú-Paz, Dipankar Dasgupta, Kalyanmoy Deb, James A. Fos-
ter, Edwin D. de Jong, Hod Lipson, Xavier Llorà, Spiros Mancoridis, Martin Pelikan,
Günther R. Raidl, Terence Soule, Jean-Paul Watson, and Eckart Zitzler, editors. Pro-
ceedings of Genetic and Evolutionary Computation Conference (GECCO’05), June 25–
27, 2005, Loews L’Enfant Plaza Hotel: Washington, DC, USA. ACM Press: New York,
NY, USA. isbn: 1-59593-010-8. Google Books ID: 1Y5QAAAAMAAJ, CY9QAAAAMAAJ,
KY1QAAAAMAAJ, TlWQAAAACAAJ, and VI5QAAAAMAAJ. OCLC: 62461063 and 63125628.
ACM Order No.: 910050. A joint meeting of the fourteenth international conference on genetic
algorithms (ICGA-2005) and the tenth annual genetic programming conference (GP-2005).
See also [2337, 2339].
305. James C. Bezdek, James M. Keller, Raghu Krishnapuram, Ludmila I. Kuncheva, and
Nikhil R. Pal. Will the Real Iris Data Please Stand Up? IEEE Transactions on Fuzzy
Systems (TFS), 7(3):368–369, June 1999, IEEE Computational Intelligence Society: Piscat-
away, NJ, USA. doi: 10.1109/91.771092. INSPEC Accession Number: 6293498. See also
[92].
306. V. Bhaskar, Santosh K. Gupta, and A.K. Ray. Applications of Multiobjective Optimization
in Chemical Engineering. Reviews in Chemical Engineering, 16(1):1–54, 2000.
307. Gouri K. Bhattacharyya and Richard A. Johnson. Statistical Concepts and Methods. John Wi-
ley & Sons Ltd.: New York, NY, USA, March 8, 1977. isbn: 0471035327 and 0471072044.
Google Books ID: Xk9AAAACAAJ. OCLC: 2597235.
308. Leonora Bianchi, Marco Dorigo, Luca Maria Gambardella, and Walter J. Gutjahr. A Sur-
vey on Metaheuristics for Stochastic Combinatorial Optimization. Natural Computing: An
International Journal, 8(2):239–287, June 2009, Kluwer Academic Publishers: Norwell, MA,
USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s11047-008-9098-4.
309. Arthur S. Bickel and Riva Wenig Bickel. Tree Structured Rules in Genetic Algorithms. In
ICGA’87 [1132], pages 77–81, 1987.
310. Joseph P. Bigus. Data Mining With Neural Networks: Solving Business Problems from Ap-
plication Development to Decision Support. McGraw-Hill: New York, NY, USA, May 20,
1996. isbn: 0-07-005779-6. OCLC: 34282366. Library of Congress
Control Number (LCCN): 96033779. GBV-Identification (PPN): 194728420. LC Classifi-
cation: QA76.87 .B55 1996.
311. Kurt Binder and A. Peter Young. Spin Glasses: Experimental Facts, Theoretical Concepts,
and Open Questions. Reviews of Modern Physics, 58(4):801–976, October 1986, American
Physical Society: College Park, MD, USA. doi: 10.1103/RevModPhys.58.801. Fully available
at http://link.aps.org/abstract/RMP/v58/p801 [accessed 2008-07-02].
312. Lothar Birk, Günther F. Clauss, and June Y. Lee. Practical Application of Global Optimiza-
tion to the Design of Offshore Structures. In Proceedings of OMAE04, 23rd International
Conference on Offshore Mechanics and Arctic Engineering [85], pages 567–579, volume III,
2004. Fully available at http://130.149.35.79/downloads/publikationen/2004/
OMAE2004-51225.pdf [accessed 2007-09-01]. Paper no. OMAE2004-51225.
313. Lawrence A. Birnbaum and Gregg C. Collins, editors. Proceedings of the 8th International
Workshop on Machine Learning (ML’91), June 1991, Evanston, IL, USA. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-200-3. Google Books
ID: OnNgQgAACAAJ. OCLC: 24106151, 438477848, and 1558602003. GBV-Identification
(PPN): 022371885.
314. Christopher M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press
(Oxford University Press): Oxford, UK, November 3, 1995. isbn: 0-19-853849-9
and 0-19-853864-2. Google Books ID: -aAwQO -rXwC. OCLC: 33101074,
174549000, 257120262, and 263662370. Library of Congress Control Number
(LCCN): 95040465. GBV-Identification (PPN): 082852855, 265786185, 315914459,
354835041, 475047974, 506020282, and 611503409. LC Classification: QA76.87 .B574
1995.
315. Susanne Biundo, Karen Myers, and Kanna Rajan, editors. Proceedings of the Fifteenth
International Conference on Automated Planning and Scheduling (ICAPS’05), June 5–10,
2005, Monterey, CA, USA. AAAI Press: Menlo Park, CA, USA.
316. Tim Blackwell. Particle Swarm Optimization in Dynamic Environments. In Evolution-
ary Computation in Dynamic and Uncertain Environments [3019], Chapter 2, pages 29–49.
Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-49774-5_2. Fully available
at http://igor.gold.ac.uk/~mas01tb/papers/PSOdynenv.pdf [accessed 2009-07-13].
317. M. Brian Blake, Kwok Ching Tsui, and Andreas Wombacher. The EEE-05 Challenge:
A New Web Service Discovery and Composition Competition. In EEE’05 [1371], pages
780–783, 2005. doi: 10.1109/EEE.2005.131. Fully available at http://ws-challenge.
georgetown.edu/ws-challenge/The%20EEE.htm [accessed 2007-09-02]. INSPEC Accession
Number: 8530438. Ei ID: 2006049663049.
318. M. Brian Blake, William K.W. Cheung, Michael C. Jäger, and Andreas Wombacher.
WSC-06: The Web Service Challenge. In CEC/EEE’06 [2967], pages 505–508, 2006.
doi: 10.1109/CEC-EEE.2006.98. INSPEC Accession Number: 9189338.
319. M. Brian Blake, William K.W. Cheung, Michael C. Jäger, and Andreas Wombacher. WSC-
07: Evolving the Web Service Challenge. In CEC/EEE’07 [1344], pages 505–508, 2007.
doi: 10.1109/CEC-EEE.2007.107. INSPEC Accession Number: 9868634.
320. M. Brian Blake, Thomas Weise, and Steffen Bleul. WSC-2010: Web Services Composition and
Evaluation. In SOCA’10 [1378], pages 1–4, 2010. doi: 10.1109/SOCA.2010.5707190. Fully
available at http://www.it-weise.de/documents/files/BWB2010W2WSCAE.pdf.
321. 20th European Conference on Machine Learning and the 13th European Conference on Princi-
ples and Practice of Knowledge Discovery in Databases (ECML PKDD’09), September 7–11,
2009, Bled, Slovenia.
322. Progress in Machine Learning – Proceedings of the Second European Working Session on
Learning (EWSL’87), May 13–17, 1987, Bled, Yugoslavia (now Slovenia).
323. Woodrow “Woody” Wilson Bledsoe. The Use of Biological Concepts in the Analytical Study
of Systems. Technical Report PRI 2, Panoramic Research, Inc.: Palo Alto, CA, USA, 1961.
Presented at ORSA-TIMS National Meeting, San Francisco, California, November 10, 1961.
324. Woodrow “Woody” Wilson Bledsoe. Lethally Dependent Genes Using Instant Selection.
Technical Report PRI 1, Panoramic Research, Inc.: Palo Alto, CA, USA, 1961.
325. Woodrow “Woody” Wilson Bledsoe. An Analysis of Genetic Populations. Technical Report,
Panoramic Research, Inc.: Palo Alto, CA, USA, 1962.
326. Woodrow “Woody” Wilson Bledsoe. The Evolutionary Method in Hill Climbing: Convergence
Rates. Technical Report, Panoramic Research, Inc.: Palo Alto, CA, USA, 1962.
327. Woodrow “Woody” Wilson Bledsoe and I. Browning. Pattern Recognition and Reading by
Machine. In Proceedings of the Eastern Joint Computer Conference (EJCC) – Papers and
Discussions Presented at the Joint IRE – AIEE – ACM Computer Conference [1395], pages
225–232, 1959. doi: 10.1109/AFIPS.1959.88.
328. Maria J. Blesa and Christian Blum. Ant Colony Optimization for the Maximum Edge-Disjoint
Paths Problem. In EvoWorkshops’04 [2254], pages 160–169, 2004.
329. Maria J. Blesa, Pablo Moscato, and Fatos Xhafa. A Memetic Algorithm for the Minimum
Weighted k-Cardinality Tree Subgraph Problem. In MIC’01 [2294], pages 85–90, 2001.
CiteSeerx : 10.1.1.11.2478 and 10.1.1.21.5376.
330. Steffen Bleul and Kurt Geihs. Automatic Quality-Aware Service Discovery and Matching.
In HPOVUA [370], 2006.
331. Steffen Bleul and Thomas Weise. An Ontology for Quality-Aware Service Discovery. In
WESC’05 [3095], volume RC23821, 2005. Fully available at http://www.it-weise.de/
documents/files/BW2005QASD.pdf [accessed 2011-01-11]. CiteSeerx : 10.1.1.111.390. See
also [333].
332. Steffen Bleul and Michael Zapf. Ontology-Based Self-Organization in Service-Oriented Ar-
chitectures. In Proceedings of the Workshop SAKS 2007 [2801], 2007.
333. Steffen Bleul, Thomas Weise, and Kurt Geihs. An Ontology for Quality-Aware Service
Discovery. International Journal of Computer Systems Science and Engineering (CSSE), 21
(4):227–234, July 2006, CRL Publishing Limited: Leicester, UK. Fully available at http://
www.it-weise.de/documents/files/BWG2006QASD.pdf. Ei ID: 20064410208380. IDS
(SCI): 095XP. Special issue on “Engineering Design and Composition of Service-Oriented
Applications”. See also [331].
334. Steffen Bleul, Thomas Weise, and Kurt Geihs. Large-Scale Service Composi-
tion in Semantic Service Discovery. In CEC/EEE’06 [2967], pages 427–429,
2006. doi: 10.1109/CEC-EEE.2006.59. Fully available at http://www.it-weise.
de/documents/files/BWG2006WSC.pdf. INSPEC Accession Number: 9189339. Ei
ID: 20070110350633. 1st place in 2006 WSC. See also [318, 335, 2909].
335. Steffen Bleul, Thomas Weise, and Kurt Geihs. Making a Fast Semantic Ser-
vice Composition System Faster. In CEC/EEE’07 [1344], pages 517–520, 2007.
doi: 10.1109/CEC-EEE.2007.62. Fully available at http://www.it-weise.de/
documents/files/BWG2007WSC.pdf. INSPEC Accession Number: 9868637. Ei
ID: 20083511485208. IDS (SCI): BGR20. 2nd place in 2007 WSC. See also [319, 334, 2909].
336. Tobias Blickle. Theory of Evolutionary Algorithms and Application to System Synthe-
sis, TIK-Schriftenreihe. PhD thesis, Eidgenössische Technische Hochschule (ETH) Zürich:
Zürich, Switzerland, Verlag der Fachvereine Hochschulverlag AG der ETH Zürich: Zürich,
Switzerland, November 1996, Lothar Thiele and Hans-Paul Schwefel, Committee mem-
bers. doi: 10.3929/ethz-a-001710359. Fully available at http://www.handshake.de/
user/blickle/publications/diss.pdf and http://www.tik.ee.ethz.ch/~tec/
publications/bli97a/diss.ps.gz [accessed 2007-09-07]. ETH Thesis Number: 11894.
337. Tobias Blickle and Lothar Thiele. Genetic Programming and Redundancy. In Genetic Algo-
rithms within the Framework of Evolutionary Computation, Workshop at KI-94 [1265], pages
33–38, 1994. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/people/thiele/
paper/BlTh94.ps [accessed 2007-09-07]. CiteSeerx : 10.1.1.48.8474.
338. Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes Used in Genetic Al-
gorithms. TIK-Report 11, Eidgenössische Technische Hochschule (ETH) Zürich, Department
of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK): Zürich,
Switzerland, December 1995. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/
publications/TIK-Report11.ps and http://www.handshake.de/user/blickle/
publications/tik-report11_v2.ps [accessed 2010-08-03]. CiteSeerx : 10.1.1.11.509. See
also [340].
339. Tobias Blickle and Lothar Thiele. A Mathematical Analysis of Tournament Se-
lection. In ICGA’95 [883], pages 9–16, 1995. Fully available at http://
www.handshake.de/user/blickle/publications/tournament.ps [accessed 2010-08-04].
CiteSeerx : 10.1.1.52.9907.
340. Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in
Evolutionary Algorithms. Evolutionary Computation, 4(4):361–394, Winter 1996,
MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.4.361. Fully avail-
able at http://www.handshake.de/user/blickle/publications/ECfinal.ps
[accessed 2010-08-03]. CiteSeerx : 10.1.1.15.9584. See also [338].
341. Christian Blum and Daniel Merkle, editors. Swarm Intelligence – Introduction and Ap-
plications, Natural Computing Series. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-3-540-74089-6. Google Books ID: 6Ky4bVPCXqMC, B1rrPQAACAAJ, and
ofRKPAAACAAJ. Library of Congress Control Number (LCCN): 2008933849.
342. Christian Blum and Andrea Roli. Metaheuristics in Combinatorial Optimization: Overview
and Conceptual Comparison. ACM Computing Surveys (CSUR), 35(3):268–308, Septem-
ber 2003, ACM Press: New York, NY, USA. doi: 10.1145/937503.937505. Fully available
at http://iridia.ulb.ac.be/˜meta/newsite/downloads/ACSUR-blum-roli.
pdf [accessed 2008-10-06].
343. Egbert J. W. Boers, Jens Gottlieb, Pier Luca Lanzi, Robert Elliott Smith, Stefano Cagnoni,
Emma Hart, Günther R. Raidl, and Harald Tijink, editors. Applications of Evolution-
ary Computing, Proceedings of EvoWorkshops 2001: EvoCOP, EvoFlight, EvoIASP, Ev-
oLearn, and EvoSTIM (EvoWorkshops’01), April 18–20, 2001, Lake Como, Milan, Italy,
volume 2037/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-45365-2. isbn: 3-540-41920-9. Google Books
ID: EhROxow98NoC.
344. Stefan Boettcher. Extremal Optimization of Graph Partitioning at the Percolation
Threshold. Journal of Physics A: Mathematical and General, 32(28):5201–5211, July 16,
1999, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol,
UK. doi: 10.1088/0305-4470/32/28/302. Fully available at http://www.iop.org/
EJ/article/0305-4470/32/28/302/a92802.pdf and http://xxx.lanl.gov/abs/
cond-mat/9901353 [accessed 2008-08-23]. arXiv ID: cond-mat/9901353v2.
345. Stefan Boettcher and Allon G. Percus. Extremal Optimization: Methods Derived from Co-
Evolution. In GECCO’99 [211], pages 825–832, 1999. arXiv ID: math.OC/9904056.
346. Stefan Boettcher and Allon G. Percus. Extremal Optimization for Graph Partitioning.
Physical Review E, 64(2), July 2001, American Physical Society: College Park, MD,
USA. doi: 10.1103/PhysRevE.64.026114. Fully available at http://www.math.ucla.edu/
˜percus/Publications/eopre.pdf [accessed 2008-08-23]. arXiv ID: cond-mat/0104214.
347. Stefan Boettcher and Allon G. Percus. Optimization with Extremal Dynamics. Physi-
cal Review Letters, 86(23):5211–5214, June 4, 2001, American Physical Society: College
Park, MD, USA. doi: 10.1103/PhysRevLett.86.5211. CiteSeerx : 10.1.1.94.3835. arXiv
ID: cond-mat/0010337.
348. Stefan Boettcher and Allon G. Percus. Optimization with Extremal Dynamics. Complexity,
8(2):57–62, March 21, 2003, Wiley Periodicals, Inc.: Wilmington, DE, USA. doi: 10.1002/c-
plx.10072. Special Issue: Complex Adaptive Systems: Part I.
349. The 5th International Conference on Bioinspired Optimization Methods and their Applica-
tions (BIOMA’12), May 24–25, 2012, Bohinj, Slovenia. Partly available at http://bioma.
ijs.si/conference/2012/ [accessed 2011-06-21].
350. Celia C. Bojarczuk, Heitor Silverio Lopes, and Alex Alves Freitas. Data Mining with
Constrained-Syntax Genetic Programming: Applications to Medical Data Sets. In
IDAMAP’01 [3103], 2001. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/bojarczuk_2001_idamap.html, http://www.graco.unb.br/
alvares/emule/Data%20Mining%20with%20Constrained-Syntax%20Genetic
%20Programming%20Applications%20in%20Medical%20Data%20Set.pdf, and
www.ailab.si/idamap/idamap2001/papers/bojarczuk.pdf [accessed 2009-09-09].
CiteSeerx : 10.1.1.21.6146.
351. Celia C. Bojarczuk, Heitor Silverio Lopes, and Alex Alves Freitas. An Innova-
tive Application of a Constrained-Syntax Genetic Programming System to the Prob-
lem of Predicting Survival of Patients. In EuroGP’03 [2360], pages 11–59, 2003.
doi: 10.1007/3-540-36599-0 2. Fully available at http://www.cs.kent.ac.uk/people/
staff/aaf/pub_papers.dir/EuroGP-2003-Celia.pdf [accessed 2007-09-09].
352. Alessandro Bollini and Marco Piastra. Distributed and Persistent Evolutionary Algorithms:
A Design Pattern. In EuroGP’99 [2196], pages 173–183, 1999.
353. Eric W. Bonabeau, Marco Dorigo, and Guy Théraulaz. Swarm Intelligence: From Nat-
ural to Artificial Systems. Oxford University Press, Inc.: New York, NY, USA, Au-
gust 1999. isbn: 0195131592. Google Books ID: PvTDhzqMr7cC, fcTcHvSsRMYC, and
mhLj70l8HfAC. OCLC: 40331208, 232159801, and 318382300.
354. Eric W. Bonabeau, Marco Dorigo, and Guy Théraulaz. Inspiration for Optimization from So-
cial Insect Behavior. Nature, 406:39–42, July 6, 2000. doi: 10.1038/35017500. Fully available
at http://biology.unm.edu/pibbs/classes/readings/socialinsectbehavior.
pdf and http://www.nature.com/nature/journal/v406/n6791/abs/406039a0.
html [accessed 2011-05-02].
355. Josh C. Bongard and Chandana Paul. Making Evolution an Offer It Can’t Refuse: Mor-
phology and the Extradimensional Bypass. In ECAL’01 [1520], pages 401–412, 2001.
doi: 10.1007/3-540-44811-X 43. Fully available at http://www.cs.uvm.edu/˜jbongard/
papers/bongardPaulEcal2001.ps.gz [accessed 2008-11-02]. CiteSeerx : 10.1.1.27.8240.
356. Josh C. Bongard and Rolf Pfeifer. Repeated Structure and Dissociation of Genotypic and
Phenotypic Complexity in Artificial Ontogeny. In GECCO’01 [2570], pages 829–836, 2001.
CiteSeerx : 10.1.1.11.6911.
357. John Tyler Bonner. On Development: The Biology of Form, Commonwealth Fund Pub-
lications. Harvard University Press: Cambridge, MA, USA, new ed edition, Decem-
ber 12, 1974. isbn: 0674634128. Google Books ID: dNa2AAAACAAJ, dda2AAAACAAJ, and
lO5BGACTstMC.
358. Lashon Bernard Booker. Intelligent Behavior as an Adaptation to the Task Environment.
PhD thesis, University of Michigan: Ann Arbor, MI, USA, 1982. Fully available at http://
hdl.handle.net/2027.42/3746 and http://hdl.handle.net/2027.42/3747 [ac-
cessed 2010-08-03]. Order No.: AAI8214966. University Microfilms No.: 8214966. Technical
Report No. 243.
359. Egon Börger. Computability, Complexity, Logic, volume 128 in Studies in Logic and the
Foundations of Mathematics. Elsevier Science Publishing Company, Inc.: New York, NY,
USA. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, 1989.
isbn: 0-444-87406-2. Google Books ID: T88gs0nemygC.
360. István Borgulya. A Cluster-based Evolutionary Algorithm for the Single Machine Total
Weighted Tardiness-Scheduling Problem. Journal of Computing and Information Technology
(CIT), 10(3):211–217, September 2002, University Computing Centre: Zagreb, Croatia. Fully
available at http://cit.zesoi.fer.hr/downloadPaper.php?paper=384 [accessed 2008-
04-07].
361. Erich G. Bornberg-Bauer and Hue Sun Chan. Modeling Evolutionary Landscapes: Muta-
tional Stability, Topology, and Superfunnels in Sequence Space. Proceedings of the National
Academy of Science of the United States of America (PNAS), 96(19):10689–10694, Septem-
ber 14, 1999, National Academy of Sciences: Washington, DC, USA, Peter G. Wolynes, edi-
tor. Fully available at http://www.pnas.org/cgi/content/abstract/96/19/10689
[accessed 2008-07-02].
362. Bruno Bosacchi, David B. Fogel, and James C. Bezdek, editors. Proceedings of SPIE’s
45th International Symposium on Optical Science and Technology: Applications and Science
of Neural Networks, Fuzzy Systems, and Evolutionary Computation III, July 30–August 4,
2000, San Diego, CA, USA, volume 4120. Society of Photographic Instrumentation Engineers
(SPIE): Bellingham, WA, USA. isbn: 0-8194-3765-4. Google Books ID: tLhQAAAAMAAJ.
OCLC: 45211584 and 248133501.
363. Peter A. N. Bosman and Dirk Thierens. Mixed IDEAs. Technical Report UU-CS-2000-
45, Utrecht University, Department of Information and Computing Sciences: Utrecht,
The Netherlands, December 2000. Fully available at http://www.cs.uu.nl/research/
techreps/UU-CS-2000-45.html [accessed 2009-08-28].
364. Peter A. N. Bosman and Dirk Thierens. Advancing Continuous IDEAs with Mixture
Distributions and Factorization Selection Metrics. In OBUPM’01 [2149], pages 208–212,
2001. Fully available at http://citeseer.ist.psu.edu/old/bosman01advancing.
html [accessed 2009-08-28]. CiteSeerx : 10.1.1.19.7412.
365. Peter A. N. Bosman and Dirk Thierens. Multi-Objective Optimization with Diversity Pre-
serving Mixture-based Iterated Density Estimation Evolutionary Algorithms. International
Journal Approximate Reasoning, 31(2):259–289, November 2002, Elsevier Science Publishers
B.V.: Essex, UK. doi: 10.1016/S0888-613X(02)00090-7.
366. Peter A. N. Bosman and Dirk Thierens. A Thorough Documentation of Obtained Results on
Real-Valued Continuous And Combinatorial Multi-Objective Optimization Problems Using
Diversity Preserving Mixture-Based Iterated Density Estimation Evolutionary Algorithms.
Technical Report UU-CS-2002-052, Utrecht University, Institute of Information and Comput-
ing Sciences: Utrecht, The Netherlands, December 2002. Fully available at http://www.
cs.uu.nl/research/techreps/repo/CS-2002/2002-052.pdf [accessed 2007-08-24].
367. Proceedings of the 5th International ICST Conference on Bio-Inspired Models of Network,
Information, and Computing Systems (BIONETICS’10), December 1–3, 2010, Boston, MA,
USA.
368. Jack Bosworth, Norman Foo, and Bernard P. Zeigler. Comparison of Genetic Algorithms
with Conjugate Gradient Methods. Technical Report 00312-1-T, University of Michigan:
Ann Arbor, MI, USA, February 1972. Fully available at http://hdl.handle.net/2027.
42/3761 [accessed 2010-05-29]. ID: UMR0554. Other Identifiers: UMR0554.
369. Henry Bottomley. Relationship between the Mean, Median, Mode, and Standard Deviation in
a Unimodal Distribution. self-published, September 2006. Fully available at http://www.
btinternet.com/˜se16/hgb/median.htm [accessed 2007-09-15].
370. Karima Boudaoud, Nicolas Nobelis, and Thomas Nebe, editors. Proceedings of the 13th
Annual Workshop of HP OpenView University Association (HPOVUA), May 16, 2006, Côte
d’Azur, France. Infonomics-Consulting.
371. Abdelmadjid Boukra, Mohamed Ahmed-nacer, and Sadek Bouroubi. Selection of Views to
Materialize in Data Warehouse: A Hybrid Solution. International Journal of Computational
Intelligence Research (IJCIR), 3(4):327–334, 2007, Research India Publications: Delhi, India.
doi: 10.5019/j.ijcir.2004.113. Fully available at http://www.ijcir.com/download.php?
idArticle=113 [accessed 2011-04-10].
372. Anu G. Bourgeois, editor. Proceedings of the 8th International Conference on Algorithms and
Architectures for Parallel Processing (ICA3PP), June 9–11, 2008, Ayia Napa, Cyprus, volume
5022/2008 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-69501-1. isbn: 3-540-69500-1. OCLC: 496756775.
GBV-Identification (PPN): 568737482.
373. Craig Boutilier, editor. Proceedings of the International Joint Conference on Artificial Intel-
ligence (IJCAI’09), July 11–17, 2009, Pasadena, CA, USA. AAAI Press: Menlo Park, CA,
USA. Fully available at http://ijcai.org/papers09/ [accessed 2010-06-23].
374. Denis Bouyssou. Building Criteria: A Prerequisite For MCDA. In MCDA’90 [198],
pages 58–80, 1998. Fully available at http://www.lamsade.dauphine.fr/˜bouyssou/
CRITERIA.PDF [accessed 2009-07-17]. CiteSeerx : 10.1.1.2.7628.
375. Chris P. Bowers. Simulating Evolution with a Computational Model of Embryogeny: Ob-
taining Robustness from Evolved Individuals. In ECAL’05 [488], pages 149–158, 2005. Fully
available at http://www.cs.bham.ac.uk/˜cpb/publications/ecal05_bowers.pdf
[accessed 2010-08-06].
376. George Edward Pelham Box. Evolutionary Operation: A Method for Increasing Industrial
Productivity. Journal of the Royal Statistical Society: Series C – Applied Statistics, 6(2):
81–101, June 1957, Blackwell Publishing for the Royal Statistical Society: Chichester, West
Sussex, UK.
377. George Edward Pelham Box and Norman Richard Draper. Evolutionary Operation. A Sta-
tistical Method for Process Improvement, Wiley Publication in Applied Statistics. John
Wiley & Sons Ltd.: New York, NY, USA, 1969. isbn: 0-471-09305-X. OCLC: 2592,
252982175, and 639061685. Library of Congress Control Number (LCCN): 68056159.
GBV-Identification (PPN): 020827873. LC Classification: TP155.7 .B6.
378. George Edward Pelham Box, J. Stuart Hunter, and William G. Hunter. Statistics for Ex-
perimenters: Design, Innovation, and Discovery. John Wiley & Sons Ltd.: New York, NY,
USA, 1978–2005. isbn: 0-471-09315-7 and 0-471-71813-0.
379. Anthony Brabazon and Michael O’Neill. Evolving Financial Models using Grammatical Evo-
lution. In Proceedings of The Annual Conference of the South Eastern Accounting Group
(SEAG) 2003 [1771], 2003.
380. Anthony Brabazon, Michael O’Neill, Robin Matthews, and Conor Ryan. Grammati-
cal Evolution And Corporate Failure Prediction. In GECCO’02 [1675], pages 1011–
1018, 2002. Fully available at http://business.kingston.ac.uk/research/intbus/
paper4.pdf and http://www.cs.bham.ac.uk/˜wbl/biblio/gecco2002/RWA145.
pdf [accessed 2007-09-09].
381. Ronald J. Brachman, editor. Proceedings of the Fourth National Conference on Artifi-
cial Intelligence (AAAI’84), August 6–10, 1984, University of Texas: Austin, TX, USA.
isbn: 0-262-51053-7. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai84.php [accessed 2007-09-06].
382. R. M. Brady. Optimization Strategies Gleaned from Biological Evolution. Nature, 317(6040):
804–806, October 31, 1985. doi: 10.1038/317804a0. Fully available at http://www.nature.
com/nature/journal/v317/n6040/pdf/317804a0.pdf [accessed 2008-09-10]. In: Letters to
Nature.
383. Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Fed-
erico Michele Facca, and Christina Tziviskou. Improvements and Future Perspectives on
Web Engineering Methods for Automating Web Services Mediation, Choreography and
Discovery. In Third Workshop of the Semantic Web Service Challenge 2006 – Chal-
lenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006.
Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/
SWS-phase-III_polimi_cefriel_v1.0.pdf [accessed 2010-12-16]. See also [384–387, 1650,
1831].
384. Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca,
Piero Fraternali, and Christina Tziviskou. Web Modeling-based Approach to Automat-
ing Web Services Mediation, Choreography and Discovery. In First Workshop of the Se-
mantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation,
Choreography and Discovery [2688], 2006. Fully available at http://sws-challenge.
org/workshops/2006-Stanford/papers/01.pdf [accessed 2010-12-12]. See also [383, 385–
387, 1650, 1831].
385. Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele
Facca, and Christina Tziviskou. Coping with Requirements Changes: SWS-Challenge
Phase II. In Second Workshop of the Semantic Web Service Challenge 2006 – Challenge
on Automating Web Services Mediation, Choreography and Discovery [2689], 2006.
Fully available at http://sws-challenge.org/workshops/2006-Budva/papers/
SWS-phase-Finale_polimi_cefriel.pdf [accessed 2010-12-12]. See also [383, 384, 386, 387,
1650, 1831].
386. Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle,
Federico Michele Facca, Andrea Turati, and Christina Tziviskou. WebML and Glue:
An Integrated Discovery Approach for the SWS Challenge. In KWEB SWS Chal-
lenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/
workshops/2007-Innsbruck/papers/3-SWS-phase-IV_polimi_cefriel_v1.
2.pdf [accessed 2010-12-16]. See also [383–385, 387, 1650, 1831].
387. Marco Brambilla, Stefano Ceri, Federico Michele Facca, Christina Tziviskou, Irene Celino,
Dario Cerizza, Emanuele Della Valle, and Andrea Turati. WebML and Glue: An Integrated
Discovery Approach for the SWS Challenge. In WI/IAT Workshops’07 [1731], pages 148–
151, 2007. doi: 10.1109/WI-IATW.2007.58. INSPEC Accession Number: 9894902. See also
[384–386, 1831].
388. Markus F. Brameier. On Linear Genetic Programming. PhD thesis, Universität Dort-
mund, Fachbereich Informatik: Dortmund, North Rhine-Westphalia, Germany, Febru-
ary 13, 2004, Wolfgang Banzhaf, Martin Riedmiller, and Peter Nordin, Committee members.
Fully available at http://hdl.handle.net/2003/20098 and https://eldorado.
tu-dortmund.de/handle/2003/20098 [accessed 2010-08-22].
389. Markus F. Brameier and Wolfgang Banzhaf. A Comparison of Genetic Programming and
Neural Networks in Medical Data Analysis. Reihe computational intelligence: Design and
management of complex technical processes and systems by means of computational intelli-
gence methods, Universität Dortmund, Collaborative Research Center (Sonderforschungs-
bereich) 531: Dortmund, North Rhine-Westphalia, Germany, November 8, 1998. Fully
available at https://eldorado.tu-dortmund.de/handle/2003/
5344 [accessed 2009-07-22]. CiteSeerx : 10.1.1.36.4678. issn: 1433-3325. Reihe Computa-
tional Intelligence, Sonderforschungsbereich 531. See also [391].
390. Markus F. Brameier and Wolfgang Banzhaf. Evolving Teams of Predictors with Linear Ge-
netic Programming. Genetic Programming and Evolvable Machines, 2(4):381–407, Decem-
ber 2001, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Pub-
lishers: Norwell, MA, USA. doi: 10.1023/A:1012978805372. CiteSeerx : 10.1.1.16.7410.
391. Markus F. Brameier and Wolfgang Banzhaf. A Comparison of Linear Genetic Program-
ming and Neural Networks in Medical Data Mining. IEEE Transactions on Evolution-
ary Computation (IEEE-EC), 5(1):17–26, 2001, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.910462. Fully available at http://web.cs.mun.ca/˜banzhaf/
papers/ieee_taec.pdf [accessed 2009-07-22]. CiteSeerx : 10.1.1.41.208. INSPEC Accession
Number: 6876106. See also [389].
392. Markus F. Brameier and Wolfgang Banzhaf. Explicit Control of Diversity and Effective Vari-
ation Distance in Linear Genetic Programming. In EuroGP’02 [977], pages 37–49, 2002.
doi: 10.1007/3-540-45984-7 4. Fully available at http://eldorado.uni-dortmund.de:
8080/bitstream/2003/5419/1/123.pdf and http://www.cs.mun.ca/˜banzhaf/
papers/eurogp02_dist.pdf [accessed 2010-08-22]. CiteSeerx : 10.1.1.11.5531.
393. Markus F. Brameier and Wolfgang Banzhaf. Neutral Variations Cause Bloat in Linear GP.
In EuroGP’03 [2360], pages 997–1039, 2003. CiteSeerx : 10.1.1.120.8070.
394. Markus F. Brameier and Wolfgang Banzhaf. Linear Genetic Programming, volume 1 in
Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Aca-
demic Publishers: Norwell, MA, USA, December 11, 2006. doi: 10.1007/978-0-387-31030-5.
isbn: 0-387-31029-0 and 0-387-31030-4. Library of Congress Control Number
(LCCN): 2006920909. Series Editor: David L. Goldberg, John R. Koza.
395. Jürgen Branke. Creating Robust Solutions by Means of Evolutionary Algorithms. In PPSN
V [866], pages 119–128, 1998. doi: 10.1007/BFb0056855.
396. Jürgen Branke. The Moving Peaks Benchmark. Technical Report, University of Karlsruhe,
Institute for Applied Computer Science and Formal Description Methods (AIFB): Karlsruhe,
Germany, December 16, 1999. Fully available at http://www.aifb.uni-karlsruhe.de/
˜jbr/MovPeaks/ [accessed 2007-08-19]. See also [397].
397. Jürgen Branke. Memory Enhanced Evolutionary Algorithms for Changing Optimization
Problems. In CEC’99 [110], pages 1875–1882, volume 3, 1999. doi: 10.1109/CEC.1999.785502.
Fully available at http://www.aifb.uni-karlsruhe.de/˜jbr/Papers/memory_
final2.ps.gz [accessed 2008-12-07]. CiteSeerx : 10.1.1.2.8897. INSPEC Accession Num-
ber: 6346978.
398. Jürgen Branke. Evolutionary Optimization in Dynamic Environments, volume 3 in Genetic
and Evolutionary Computation. PhD thesis, University of Karlsruhe, Department of Eco-
nomics and Business Engineering: Karlsruhe, Germany, Springer US: Boston, MA, USA and
Kluwer Academic Publishers: Norwell, MA, USA, December 2000–2001, Hartmut Schmeck,
Georg Bol, and Lothar Thiele, Advisors. isbn: 0792376315 and 9780792376316. Google
Books ID: weXuTB5JjFsC.
399. Jürgen Branke, Kalyanmoy Deb, Kaisa Miettinen, and Ralph E. Steuer, editors. Practi-
cal Approaches to Multi-Objective Optimization, November 7–12, 2004, Schloss Dagstuhl:
Wadern, Germany, 04461 in Dagstuhl Seminar Proceedings. Internationales Begegnungs-
und Forschungszentrum für Informatik (IBFI): Schloss Dagstuhl, Wadern, Germany.
Fully available at http://drops.dagstuhl.de/portals/index.php?semnr=04461
and http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=04461 [ac-
cessed 2007-09-19]. Published in 2005.
400. Jürgen Branke, Erdem Salihoğlu, and A. Şima Uyar. Towards an Analysis of Dynamic
Environments. In GECCO’05 [304], pages 1433–1440, 2005. doi: 10.1145/1068009.1068237.
401. Ivan Bratko and Sašo Džeroski, editors. Proceedings of the 16th International Conference on
Machine Learning (ICML’99), June 27–30, 1999, Bled, Slovenia. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. isbn: 1-55860-612-2.
402. Olli Bräysy. Genetic Algorithms for the Vehicle Routing Problem with Time Windows.
Arpakannus – Newsletter of the Finnish Artificial Intelligence Society (FAIS), 1:33–38,
May 2001, Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. Special issue
on Bioinformatics and Genetic Algorithms.
403. Olli Bräysy and Michel Gendreau. Tabu Search Heuristics for the Vehicle Routing Problem
with Time Windows. TOP: An Official Journal of the Spanish Society of Statistics and Op-
erations Research, 10(2):211–237, December 2002, Springer-Verlag GmbH: Berlin, Germany
and Sociedad de Estadı́stica e Investigatión Operativa. doi: 10.1007/BF02579017.
404. Mihaela Breaban, Lenuta Alboaie, and Henri Luchian. Guiding Users within Trust
Networks Using Swarm Algorithms. In CEC’09 [1350], pages 1770–1777, 2009.
doi: 10.1109/CEC.2009.4983155. INSPEC Accession Number: 10688750.
405. Leo Breiman. Bagging Predictors. Machine Learning, 24(2):123–140, August 1996, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/BF00058655. Fully available at http://www.salford-systems.com/doc/
BAGGING_PREDICTORS.pdf [accessed 2009-03-18]. CiteSeerx : 10.1.1.32.9399.
406. Leo Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1010933404324. Fully available at http://www.springerlink.
com/content/u0p06167n6173512/fulltext.pdf [accessed 2008-12-27].
407. Hans J. Bremermann. Optimization Through Evolution and Recombination. In Self-
Organizing Systems (Proceedings of the conference sponsored by the Information Systems
Branch of the Office of Naval Research and the Armour Research Foundation of the Illinois
Institute of Technology.) [3037], pages 93–103, 1962. Fully available at http://holtz.org/
Library/Natural%20Science/Physics/ [accessed 2007-10-31].
408. Janez Brest, Viljem Žumer, and Mirjam Sepesy Maučec. Control Parameters
in Self-Adaptive Differential Evolution. In BIOMA’06 [919], pages 35–44, 2006.
CiteSeerx : 10.1.1.106.8106.
409. Anne F. Brindle. Genetic Algorithms for Function Optimization. PhD thesis, University of
Alberta: Edmonton, Alberta, Canada, 1980. isbn: 0-315-01039-8. OCLC: 15896420,
59840399, and 632337062. Technical Report TR81-2.
410. David Brittain, Jon Sims Williams, and Chris McMahon. A Genetic Algorithm Approach
To Planning The Telecommunications Access Network. In ICGA’97 [166], pages 623–628,
1997. Fully available at http://citeseer.ist.psu.edu/brittain97genetic.html
[accessed 2008-08-01].
411. Proceedings of the 14th International Conference on Soft Computing (MENDEL’08), June 18–
20, 2008, Brno, Czech Republic.
412. Proceedings of the 15th International Conference on Soft Computing (MENDEL’09), June 17–
26, 2009, Brno, Czech Republic.
413. Proceedings of the 12th International Conference on Soft Computing (MENDEL’06), May 31–
June 2, 2006, Brno University of Technology: Brno, Czech Republic. Brno University of Tech-
nology, Faculty of Mechanical Engineering: Brno, Czech Republic. isbn: 80-214-3195-4.
414. Proceedings of the 1st International Mendel Conference on Genetic Algorithms on the occasion
of 130th anniversary of Mendel’s laws (Mendel’95), September 1995, Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-0672-0.
415. Proceedings of the 2nd International Mendel Conference on Genetic Algorithms
(MENDEL’96), June 26–28, 1996, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-0769-7.
416. Proceedings of the 3rd International Mendel Conference on Genetic Algorithms
(MENDEL’97), June 26–28, 1997, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-0769-7.
417. Proceedings of the 4th International Mendel Conference on Genetic Algorithms, Optimisa-
tion Problems, Fuzzy Logic, Neural Networks, Rough Sets (MENDEL’98), June 24–26, 1998,
Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav
Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-1131-7.
418. Proceedings of the 5th International Conference on Genetic Algorithms (MENDEL’99),
June 1999, Brno University of Technology: Brno, Czech Republic. Brno University of Tech-
nology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-1131-7.
419. Proceedings of the 8th International Conference on Soft Computing (MENDEL’02), June 5–7,
2002, Brno University of Technology: Brno, Czech Republic. Brno University of Technology,
Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2135-5.
420. Proceedings of the 10th International Conference on Soft Computing (MENDEL’04), June 16–
18, 2004, Brno University of Technology: Brno, Czech Republic. Brno University of Technol-
ogy, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2676-4.
421. Proceedings of the 11th International Conference on Soft Computing (MENDEL’05), June 15–
17, 2005, Brno University of Technology: Brno, Czech Republic. Brno University of Technol-
ogy, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2961-5.
422. Dimo Brockhoff and Eckart Zitzler. Are All Objectives Necessary? On Dimensionality Re-
duction in Evolutionary Multiobjective Optimization. In PPSN IX [2355], pages 533–542,
2006. doi: 10.1007/11844297 54. Fully available at http://www.tik.ee.ethz.ch/sop/
publicationListFiles/bz2006d.pdf [accessed 2009-07-18].
423. Dimo Brockhoff and Eckart Zitzler. Dimensionality Reduction in Multiobjective Optimiza-
tion: The Minimum Objective Subset Problem. In Selected Papers of the Annual Interna-
tional Conference of the German Operations Research Society (GOR), Jointly Organized with
the Austrian Society of Operations Research (ÖGOR) and the Swiss Society of Operations
Research (SVOR) [2840], pages 423–429, 2006. doi: 10.1007/978-3-540-69995-8 68. Fully
available at http://www.tik.ee.ethz.ch/sop/publicationListFiles/bz2007d.
pdf [accessed 2009-07-18].
424. Dimo Brockhoff and Eckart Zitzler. Improving Hypervolume-based Multiobjective Evolution-
ary Algorithms by using Objective Reduction Methods. In CEC’07 [1343], pages 2086–2093,
2007. doi: 10.1109/CEC.2007.4424730. Fully available at http://www.tik.ee.ethz.ch/
sop/publicationListFiles/bz2007c.pdf [accessed 2009-07-18]. INSPEC Accession Num-
ber: 9889552.
425. Dimo Brockhoff, Tobias Friedrich, Nils Hebbinghaus, Christian Klein, Frank Neumann, and
Eckart Zitzler. Do additional objectives make a problem harder? In GECCO’07-I [2699],
pages 765–772, 2007. doi: 10.1145/1276958.1277114.
426. Carla E. Brodley, editor. Proceedings of the 21st International Conference on Machine Learn-
ing (ICML’04), July 4–8, 2004, Banff, AB, Canada, volume 69 in ACM International Con-
ference Proceeding Series (AICPS). ACM Press: New York, NY, USA.
427. Carla E. Brodley and Andrea Pohoreckyj Danyluk, editors. Proceedings of the 18th Interna-
tional Conference on Machine Learning (ICML’01), June 28–July 1, 2001, Williams College:
Williamstown, MA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA.
isbn: 1-55860-778-1.
428. Alexander M. Bronstein and Michael M. Bronstein. Numerical Optimization. Project TOSCA
– Tools for Non-Rigid Shape Comparison and Analysis, Computer Science Department, Tech-
nion – Israel Institute of Technology: Haifa, Israel, 2008. Fully available at http://tosca.
cs.technion.ac.il/book/slides/Milano08_optimization.ppt [accessed 2010-08-28].
429. Donald E. Brown, Christopher L. Huntley, and Andrew R. Spillane. A Parallel Genetic
Heuristic for the Quadratic Assignment Problem. In ICGA’89 [2414], pages 406–415, 1989.
430. Jason Brownlee. Learning Classifier Systems. Technical Report 070514A, Swinburne Uni-
versity of Technology, Faculty of Information and Communication Technologies, Centre for
Information Technology Research, Complex Intelligent Systems Laboratory: Melbourne,
VIC, Australia, May 2007. Fully available at http://www.ict.swin.edu.au/personal/
jbrownlee/2007/TR24-2007.pdf [accessed 2008-04-03].
431. Jason Brownlee. OAT HowTo: High-Level Domain, Problem, and Algorithm Implementation.
Technical Report 20071218A, Swinburne University of Technology, Faculty of Information and
Communication Technologies, Centre for Information Technology Research, Complex Intel-
ligent Systems Laboratory: Melbourne, VIC, Australia, December 2007. Fully available
at http://optalgtoolkit.sourceforge.net/docs/TR46-2007.pdf [accessed 2010-03-
11].
432. Jason Brownlee. OAT: The Optimization Algorithm Toolkit. Technical Report 20071220A,
Swinburne University of Technology, Faculty of Information and Communication Technolo-
gies, Centre for Information Technology Research, Complex Intelligent Systems Laboratory:
Melbourne, VIC, Australia, December 2007. Fully available at http://www.ict.swin.
edu.au/personal/jbrownlee/2007/TR47-2007.pdf [accessed 2010-03-11].
433. Axel T. Brünger, Paul D. Adams, and Luke M. Rice. New Applications of Simulated Anneal-
ing in X-Ray Crystallography and Solution NMR. Structure, 15:325–336, 1997. Fully available
at http://atb.slac.stanford.edu/public/papers.php?sendfile=44 [accessed 2007-
08-25].
434. Seventh International Conference on Swarm Intelligence (ANTS’10), September 8–10, 2010,
Brussels, Belgium.
435. Barrett Richard Bryant, editor. Proceedings of the 12th/1997 ACM/SIGAPP Symposium
on Applied Computing (SAC’97), February 28–March 2, 1997, Hyatt St. Claire: San José,
CA, USA. ACM Press: New York, NY, USA. isbn: 0-89791-850-9. Google Books
ID: JbAmPQAACAAJ. OCLC: 36711176 and 246580075.
436. Kylie Bryant. Genetic Algorithms and the Traveling Salesman Problem. Master’s the-
sis, Harvey Mudd College (HMC), Department of Mathematics: Claremont, CA, USA,
December 2000, Arthur Benjamin, Advisor, Lisette de Pillis, Committee member. Fully
available at http://www.math.hmc.edu/seniorthesis/archives/2001/kbryant/
kbryant-2001-thesis.pdf [accessed 2009-09-30].
437. Bradley J. Buckham and Casey Lambert. Simulated Annealing Applications. University
of Victoria, Mechanical Engineering Department: Victoria, BC, Canada, November 1999,
Zuomin Dong, Advisor. Fully available at http://www.me.uvic.ca/˜zdong/courses/
mech620/SA_App.PDF [accessed 2007-08-25]. Seminar presentation: MECH620 Quantitative
Analysis, Reasoning and Optimization Methods in CAD/CAM and Concurrent Engineering.
438. Lam Thu Bui and Sameer Alam, editors. Multi-Objective Optimization in Computational
Intelligence: Theory and Practice, Premier Reference Source. Idea Group Publishing (Idea
Group Inc., IGI Global): New York, NY, USA, March 30, 2008. isbn: 1599044986.
439. Vinh Bui, Lam Thu Bui, Hussein A. Abbass, Axel Bender, and Pradeep Ray. On the Role of
Information Networks in Logistics: An Evolutionary Approach with Military Scenarios. In
CEC’09 [1350], pages 598–605, 2009. doi: 10.1109/CEC.2009.4983000. INSPEC Accession
Number: 10688573.
440. Larry Bull. On using ZCS in a Simulated Continuous Double-Auction Market. In
GECCO’99 [211], pages 83–90, 1999. Fully available at http://www.cs.bham.ac.uk/
˜wbl/biblio/gecco1999/GA-806.pdf [accessed 2007-09-12].
441. Larry Bull, editor. Applications of Learning Classifier Systems, volume 150 in Studies
in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, April 2004.
isbn: 3540211098. Google Books ID: aBIjqGag5-kC.
442. Larry Bull and Jacob Hurst. ZCS Redux. Evolutionary Computation, 10(2):185–205, Sum-
mer 2002, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365602320169848. Fully
available at http://www.cems.uwe.ac.uk/lcsg/papers/zcsredux.ps [accessed 2007-09-12].
443. Larry Bull and Tim Kovacs, editors. Foundations of Learning Classifier Systems, Studies
in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, September 1,
2005. isbn: 3540250735.
444. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. Applying the Ant System to
the Vehicle Routing Problem. In MIC’97 [2828], 1997. CiteSeerx : 10.1.1.48.7946.
445. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. An improved Ant System Al-
gorithm for the Vehicle Routing Problem. Annals of Operations Research, 89(0):319–328,
January 1999, Springer Netherlands: Dordrecht, Netherlands and J. C. Baltzer AG, Sci-
ence Publishers: Amsterdam, The Netherlands. doi: 10.1023/A:1018940026670. Fully available
at http://www.macs.hw.ac.uk/˜dwcorne/Teaching/bullnheimer-vr.pdf [accessed 2008-10-27].
CiteSeerx : 10.1.1.49.1415.
446. Seth Bullock, Jason Noble, Richard A. Watson, and Mark A. Bedau, editors. Proceedings
of the Eleventh International Conference on the Simulation and Synthesis of Living Systems
(Artificial Life XI), August 5–8, 2008, Winchester, Hampshire, UK. MIT Press: Cambridge,
MA, USA.
447. Bundesministerium für Wirtschaft und Technologie (BMWi): Berlin, Germany. Innovation-
spolitik, Informationsgesellschaft, Telekommunikation – Mobilität und Verkehrstechnologien –
Das 3. Verkehrsforschungsprogramm der Bundesregierung. Bundesministerium für Wirtschaft
und Technologie (BMWi), Öffentlichkeitsarbeit: Berlin, Germany, May 2008.
448. Alan Bundy, editor. Proceedings of the 8th International Joint Conference on Artificial Intel-
ligence (IJCAI’83-I), August 1983, Karlsruhe, Germany, volume 1. William Kaufmann: Los
Altos, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-83-VOL-1/
CONTENT/content.htm [accessed 2008-04-01]. See also [449].
449. Alan Bundy, editor. Proceedings of the 8th International Joint Conference on Artificial Intel-
ligence (IJCAI’83-II), August 1983, Karlsruhe, Germany, volume 2. William Kaufmann: Los
Altos, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-83-VOL-2/
CONTENT/content.htm [accessed 2008-04-01]. See also [448].
450. Christopher J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recog-
nition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998, Springer Nether-
lands: Dordrecht, Netherlands, Usama Fayyad, editor. doi: 10.1023/A:1009715923555.
CiteSeerx : 10.1.1.18.1083.
451. Luciana Buriol, Paulo M. França, and Pablo Moscato. A New Memetic Algo-
rithm for the Asymmetric Traveling Salesman Problem. Journal of Heuristics,
10(5):483–506, September 2004, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/B:HEUR.0000045321.59202.52. Fully available at http://citeseer.
ist.psu.edu/544761.html and http://www.springerlink.com/content/
w617486q60mphg88/fulltext.pdf [accessed 2007-09-12]. CiteSeerx : 10.1.1.20.1660.
452. Edmund K. Burke and Wilhelm Erben, editors. Selected Papers from the Third Inter-
national Conference on Practice and Theory of Automated Timetabling (PATAT’00), Au-
gust 16–18, 2000, Konstanz, Germany, volume 2079/2001 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44629-X.
isbn: 3-540-42421-0.
453. Edmund K. Burke and Graham Kendall, editors. Search Methodologies – Introductory Tuto-
rials in Optimization and Decision Support Techniques. Springer Science+Business Media,
Inc.: New York, NY, USA, 2005. doi: 10.1007/0-387-28356-0. isbn: 0-387-23460-8. Google
Books ID: X4BcOZEa4HsC. Library of Congress Control Number (LCCN): 2005051623.
454. Edmund K. Burke, Steven Matt Gustafson, and Graham Kendall. Survey and Analysis of
Diversity Measures in Genetic Programming. In GECCO’02 [1675], pages 716–723, 2002.
Fully available at http://www.gustafsonresearch.com/research/publications/
gecco-diversity-2002.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.2.3757.
455. Edmund K. Burke, Steven Matt Gustafson, Graham Kendall, and Natalio Krasnogor. Ad-
vanced Population Diversity Measures in Genetic Programming. In PPSN VII [1870],
pages 341–350, 2002. doi: 10.1007/3-540-45712-7_33. Fully available at http://www.cs.
bham.ac.uk/˜wbl/biblio/gp-html/burke_ppsn2002_pp341.html and http://
www.gustafsonresearch.com/research/publications/ppsn-2002.ps [accessed 2009-08-07].
CiteSeerx : 10.1.1.18.6142.
456. Edmund K. Burke, Steven Matt Gustafson, Graham Kendall, and Natalio Krasnogor. Is
Increasing Diversity in Genetic Programming Beneficial? An Analysis of the Effects on Fit-
ness. In CEC’03 [2395], pages 1398–1405, volume 2, 2003. doi: 10.1109/CEC.2003.1299834.
Fully available at http://www.gustafsonresearch.com/research/publications/
cec-2003.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.1.1929.
457. Edmund K. Burke, Graham Kendall, and Eric Soubeiga. A Tabu Search Hyperheuristic for
Timetabling and Rostering. Journal of Heuristics, 9(6):451–470, December 2003, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1023/B:HEUR.0000012446.94732.b6. Fully
available at http://www.asap.cs.nott.ac.uk/publications/pdf/Journal_eric.
pdf and http://www.asap.cs.nott.ac.uk/publications/pdf/Journal_eric03.
pdf [accessed 2011-04-23]. CiteSeerx : 10.1.1.1.9539.
458. Edmund K. Burke, Steven Matt Gustafson, and Graham Kendall. Diversity in Genetic
Programming: An Analysis of Measures and Correlation with Fitness. IEEE Transac-
tions on Evolutionary Computation (IEEE-EC), 8(1):47–62, February 2004, IEEE Com-
puter Society: Washington, DC, USA, Xin Yao, editor. doi: 10.1109/TEVC.2003.819263.
Fully available at http://www.gustafsonresearch.com/research/publications/
gustafson-ieee2004.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.59.6663.
459. Martin V. Butz. Anticipatory Learning Classifier Systems, Genetic and Evolutionary Com-
putation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA,
USA, January 25, 2002. isbn: 0792376307.
460. Martin V. Butz. Rule-Based Evolutionary Online Learning Systems: A Principled Approach
to LCS Analysis and Design, Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, December 22, 2005. isbn: 3540253793.
461. Martin V. Butz, Kumara Sastry, and David Edward Goldberg. Tournament Selection in
XCS. Technical Report 2002020, Illinois Genetic Algorithms Laboratory (IlliGAL), Depart-
ment of Computer Science, Department of General Engineering, University of Illinois at
Urbana-Champaign: Urbana-Champaign, IL, USA, July 2002. Fully available at ftp://
ftp-illigal.ge.uiuc.edu/pub/papers/IlliGALs/2002020.ps.Z [accessed 2010-08-03].
CiteSeerx : 10.1.1.19.1850.
462. Elizabeth Byrnes and Jan Aikins, editors. Proceedings of the The Sixth Conference on Innova-
tive Applications of Artificial Intelligence (IAAI’94), August 1–3, 1994, Convention Center:
Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-62-8. Partly
available at http://www.aaai.org/Conferences/IAAI/iaai94.php [accessed 2007-09-06].
OCLC: 635873487. GBV-Identification (PPN): 196175178.

463. Louis Caccetta and Araya Kulanoot. Computational Aspects of Hard Knapsack Problems.
Nonlinear Analysis: Theory, Methods & Applications – An International Multidisciplinary
Journal, 47(8):5547–5558, August 2001, Elsevier Science Publishers B.V.: Essex, UK.
Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/S0362-546X(01)00658-7.
464. Stefano Cagnoni, Riccardo Poli, Yung-Chien Lin, George D. Smith, David Wolfe Corne,
Martin J. Oates, Emma Hart, Pier Luca Lanzi, Egbert J. W. Boers, Ben Paechter, and Ter-
ence Claus Fogarty, editors. Real-World Applications of Evolutionary Computing: EvoIASP,
EvoSCONDI, EvoTel, EvoSTIM, EvoROB, and EvoFlight (EvoWorkshops’00), April 17,
2000, Edinburgh, Scotland, UK, volume 1803/2000 in Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45561-2.
isbn: 3-540-67353-9. Google Books ID: KQo0n1vZbi0C. OCLC: 43851409, 122976715,
174380229, and 243487744.
465. Stefano Cagnoni, Jens Gottlieb, Emma Hart, Martin Middendorf, and Günther R. Raidl,
editors. Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002: Evo-
COP, EvoIASP, EvoSTIM/EvoPLAN (EvoWorkshops’02), April 2–4, 2002, Kinsale, Ireland,
volume 2279/2002 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-46004-7. isbn: 3-540-43432-1.
466. Stefano Cagnoni, Evelyne Lutton, and Gustavo Olague, editors. Genetic and Evolutionary
Computation for Image Processing and Analysis, volume 8 in EURASIP Book Series on Signal
Processing and Communications. Hindawi Publishing Corporation: New York, NY, USA,
February 2008. Fully available at http://www.hindawi.com/books/9789774540011.
pdf [accessed 2008-05-17].
467. Sébastien Cahon, Nordine Melab, and El-Ghazali Talbi. ParadisEO: A Framework
for the Reusable Design of Parallel and Distributed Metaheuristics. Journal of
Heuristics, 10(3):357–380, May 2004, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/B:HEUR.0000026900.92269.ec.
468. Tao Cai, Feng Pan, and Jie Chen. Adaptive Particle Swarm Optimization Algorithm. In
WCICA’04 [1388], pages 2245–2247, volume 3, 2004. doi: 10.1109/WCICA.2004.1341988.
INSPEC Accession Number: 8152861.
469. Proceedings of the 7th Asia-Pacific Conference on Complex Systems (Complex’04), Decem-
ber 6–10, 2004, Cairns Convention Centre: Cairns, Australia.
470. Osvaldo Cairó, Luis Enrique Sucar, and Francisco J. Cantu, editors. Advances in Artificial
Intelligence, Proceedings of the Mexican International Conference on Artificial Intelligence
(MICAI’00), April 11–14, 2000, Acapulco, México, volume 1793/2000 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/10720076. isbn: 3-540-67354-7.
471. Craig Caldwell and Victor S. Johnston. Tracking a Criminal Suspect Through “Face-Space”
with a Genetic Algorithm. In ICGA’91 [254], pages 416–421, 1991.
472. Cristian S. Calude, Michael J. Dinneen, Gheorghe Păun, Grzegorz Rozenberg, and Susan
Stepney, editors. Proceedings of the 5th International Conference on Unconventional Com-
putation (UC’06), September 4–8, 2006, York, UK, volume 4135 in Theoretical Computer
Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11839132. isbn: 3-540-38593-2. Library of
Congress Control Number (LCCN): 2006931474.
473. William H. Calvin. The River That Flows Uphill: A Journey from the Big Bang to the Big
Brain. Macmillan Publishers Co.: New York, NY, USA, Backinprint.com: New York, NY,
USA, and Sierra Club Books: San Francisco, CA, USA, July 1986. isbn: 0025209205,
0595167004, 0792445228, and 0-87156-719-9. Google Books ID: KVdoPwAACAAJ,
OpNuPwAACAAJ, nT7rNiPqEuAC, and uavwAAAAMAAJ. OCLC: 13524443, 48962546, and
246817496. Library of Congress Control Number (LCCN): 86008490 and 87000381.
GBV-Identification (PPN): 116426063 and 277678366. LC Classification: QP356 .C35
1986b. See also [474].
474. William H. Calvin. Die Geschichte des Lebens – Vom Urknall bis zum Großhirn des Homo
Sapiens. Bechtermünz Verlag GmbH: Augsburg, Germany, 1997, Friedrich Griese, Translator.
isbn: 3-86047-567-3. OCLC: 57376043. See also [473].
475. Mike Campos, Eric W. Bonabeau, Guy Théraulaz, and Jean-Louis Deneubourg.
Dynamic Scheduling and Division of Labor in Social Insects. Adaptive Behav-
ior, 8(2):83–95, March 2000, SAGE Publications: Thousand Oaks, CA, USA.
doi: 10.1177/105971230000800201. Fully available at http://www.icosystem.com/
dynamic-scheduling-and-division-of-labor-in-social-insects/ [accessed 2011-05-02].
CiteSeerx : 10.1.1.112.3678.
476. Erick Cantú-Paz. Designing Efficient and Accurate Parallel Genetic Algorithms. PhD the-
sis, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science,
Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-
Champaign, IL, USA, July 1999. Fully available at http://citeseer.ist.psu.edu/
cantu-paz99designing.html [accessed 2008-04-06]. Also IlliGAL report 99017. See also [477].
477. Erick Cantú-Paz. Efficient and Accurate Parallel Genetic Algorithms, volume 1 in Genetic
and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic
Publishers: Norwell, MA, USA, December 15, 2000. isbn: 0-7923-7221-2. Google Books
ID: UXkGGQbmsfAC. OCLC: 45058477. See also [476].
478. Erick Cantú-Paz, editor. Late Breaking papers at the Genetic and Evolutionary Computation
Conference (GECCO’02 LBP), July 9–13, 2002, Roosevelt Hotel: New York, NY, USA.
AAAI Press: Menlo Park, CA, USA. See also [224, 1675, 1790].
479. Erick Cantú-Paz and Francisco Fernández de Vega, editors. First Workshop on Parallel
Architectures and Bioinspired Algorithms (WPBA’05), June 14, 2005, Oslo, Norway.
480. Erick Cantú-Paz and Chandrika Kamath. Combining Evolutionary Algorithms with Oblique
Decision Trees to Detect Bent-Double Galaxies. In Proceedings of SPIE’s 45th International
Symposium on Optical Science and Technology: Applications and Science of Neural Networks,
Fuzzy Systems, and Evolutionary Computation III [362], pages 63–71, 2000.
481. Erick Cantú-Paz and Chandrika Kamath. Using Evolutionary Algorithms to Induce
Oblique Decision Trees. In GECCO’00 [2935], pages 1053–1060, 2000. Fully avail-
able at https://computation.llnl.gov/casc/sapphire/dtrees/oc1.html and
www.evolutionaria.com/publications/gecco00-obliqueDT.ps.gz [accessed 2009-09-09].
CiteSeerx : 10.1.1.28.7300.
482. Erick Cantú-Paz and William F. Punch, editors. Evolutionary Computation and Parallel
Processing Workshop, 1999, Orlando, FL, USA. Morgan Kaufmann Publishers Inc.: San
Francisco, CA, USA. Part of [211].
483. Erick Cantú-Paz, Martin Pelikan, and David Edward Goldberg. Linkage Problem, Dis-
tribution Estimation, and Bayesian Networks. Evolutionary Computation, 8(3):311–
340, Fall 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600750078808.
CiteSeerx : 10.1.1.16.236. See also [2150].
484. Erick Cantú-Paz, James A. Foster, Kalyanmoy Deb, Lawrence Davis, Rajkumar Roy, Una-
May O’Reilly, Hans-Georg Beyer, Russell K. Standish, Graham Kendall, Stewart W. Wilson,
Mark Harman, Joachim Wegener, Dipankar Dasgupta, Mitchell A. Potter, Alan C. Schultz,
Kathryn A. Dowsland, Natasa Jonoska, and Julian Francis Miller, editors. Proceedings of
the Genetic and Evolutionary Computation Conference, Part I (GECCO’03-I), July 12–16,
2003, Holiday Inn Chicago: Chicago, IL, USA, volume 2723/2003 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45105-6.
isbn: 3-540-40602-6. Google Books ID: IlWhIKahmsC. OCLC: 52506094, 52559224,
53068317, 243490402, and 314238192. See also [485, 2078].
485. Erick Cantú-Paz, James A. Foster, Kalyanmoy Deb, Lawrence Davis, Rajkumar Roy, Una-
May O’Reilly, Hans-Georg Beyer, Russell K. Standish, Graham Kendall, Stewart W. Wil-
son, Mark Harman, Joachim Wegener, Dipankar Dasgupta, Mitchell A. Potter, Alan C.
Schultz, Kathryn A. Dowsland, Natasa Jonoska, and Julian Francis Miller, editors. Pro-
ceedings of the Genetic and Evolutionary Computation Conference, Part II (GECCO’03-
II), July 12–16, 2003, Holiday Inn Chicago: Chicago, IL, USA, volume 2724/2003 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-45110-2. isbn: 3-540-40603-4. Google Books ID: eYlQAAAAMAAJ and
zcN2KcCllT0C. OCLC: 52506094, 52559224, 59295690, and 314195657. See also
[484, 2078].
486. Aizeng Cao, Yueting Chen, Jun Wei, and Jinping Li. A Hybrid Evolutionary Algo-
rithm Based on EDAs and Clustering Analysis. In CCC’07 [555], pages 754–758, 2007.
doi: 10.1109/CHICC.2006.4347236. INSPEC Accession Number: 9800559.
487. Hongqing Cao, Jingxian Yu, and Lishan Kang. An Evolutionary Approach for Modeling the
Equivalent Circuit for Electrochemical Impedance Spectroscopy. In CEC’03 [2395], pages
1819–1825, volume 3, 2003. doi: 10.1109/CEC.2003.1299893. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/Cao_2003_Aeafmtecfeis.html [accessed 2009-06-16].
488. Mathieu S. Capcarrère, Alex Alves Freitas, Peter J. Bentley, Colin G. Johnson, and Jonathan
Timmis, editors. Proceedings of the 8th European Conference on Advances in Artificial Life
(ECAL’05), September 5–9, 2005, University of Kent: Canterbury, Kent, UK, volume 3630
in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-28848-1.
489. Andrea Caponio, Giuseppe Leonardo Cascella, Ferrante Neri, Nadia Salvatore, and Mark
Sumner. A Fast Adaptive Memetic Algorithm for Online and Offline Control Design of
PMSM Drives. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernet-
ics, 37(1):28–41, February 2007, IEEE Systems, Man, and Cybernetics Society: New York,
NY, USA. doi: 10.1109/TSMCB.2006.883271. PubMed ID: 17278556. INSPEC Accession
Number: 9299478.
490. Andrea Caponio, Ferrante Neri, and Ville Tirronen. Super-fit Control Adaptation in Memetic
Differential Evolution Frameworks. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 13(8-9):811–831, July 2009, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s00500-008-0357-1.
491. Alain Cardon, Theirry Galinho, and Jean-Philippe Vacher. A Multi-Objective Genetic Al-
gorithm in Job Shop Scheduling Problem to Refine an Agents’ Architecture. In EURO-
GEN’99 [1894], 1999. Fully available at http://www.jeo.org/emo/cardon99.ps.gz.
492. Jorge Cardoso and Amit P. Sheth. Introduction to Semantic Web Services and Web Process
Composition. In SWSWPC’04 [493], pages 1–13, 2004. doi: 10.1007/978-3-540-30581-1_1.
493. Jorge Cardoso and Amit P. Sheth, editors. Revised Selected Papers from the First Interna-
tional Workshop on Semantic Web Services and Web Process Composition (SWSWPC’04),
July 6, 2004, Westin Horton Plaza Hotel: San Diego, CA, USA, volume 3387/2005 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b105145. isbn: 3-540-24328-3. Google Books ID: AvChgxNuKK8C. Library
of Congress Control Number (LCCN): 2004117658.
494. Jorge Cardoso, José Cordeiro, and Joaquim Filipe, editors. Proceedings of the 9th Interna-
tional Conference on Enterprise Information Systems (ICEIS’07), June 12–16, 2007, Funchal,
Madeira, Portugal, volume SAIC. Institute for Systems and Technologies of Information,
Control and Communication (INSTICC) Press: Setubal, Portugal. isbn: 972-8865-91-0.
Google Books ID: JwTJPgAACAAJ. See also [915].
495. Alessio Carenini, Dario Cerizza, Marco Comerio, Emanuele Della Valle, Flavio De Paoli,
Andrea Maurino, Matteo Palmonari, Matteo Sassi, and Andrea Turati. Semantic Web
Service Discovery and Selection: A Test Bed Scenario. In EON-SWSC’08 [1023], 2008.
Fully available at http://sunsite.informatik.rwth-aachen.de/Publications/
CEUR-WS/Vol-359/Paper-3.pdf [accessed 2010-12-16]. See also [2166].
496. Peter A. Cariani. Extradimensional Bypass. Biosystems, 64(1-3):47–53, January 2002, El-
sevier Science Ireland Ltd.: East Park, Shannon, Ireland and North-Holland Scientific Pub-
lishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0303-2647(01)00174-5.
497. Tonci Caric and Hrvoje Gold, editors. Vehicle Routing Problem. IN-TECH Education and
Publishing: Vienna, Austria, September 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-09-1&type=B [accessed 2009-01-13].
498. Anthony Jack Carlisle. Applying the Particle Swarm Optimizer to Non-Stationary Envi-
ronments. PhD thesis, Auburn University, Graduate Faculty: Auburn, AL, USA, Decem-
ber 16, 2002, Gerry V. Dozier, Advisor, Gerry V. Dozier, Kai-Hsiung Chang, Dean Hen-
drix, and Stephen L. McFarland, Committee members. Fully available at http://antho.
huntingdon.edu/publications/Dissertation.pdf [accessed 2009-07-13].
499. Anthony Jack Carlisle and Gerry V. Dozier. Tracking Changing Extrema with Adap-
tive Particle Swarm Optimizer. In WAC’02 [1432], pages 265–270, volume 13, 2002.
doi: 10.1109/WAC.2002.1049555. Fully available at http://antho.huntingdon.edu/
publications/default.html [accessed 2007-08-19].
500. Charles W. Carroll. An Operations Research Approach to the Economic Optimization of
a Kraft Pulping Process. PhD thesis, Institute of Paper Chemistry: Appleton, WI, USA,
Georgia Institute of Technology: Atlanta, GA, USA, June 1959. Fully available at http://
hdl.handle.net/1853/5853 [accessed 2008-11-15].
501. Charles W. Carroll. The Created Response Surface Technique for Optimizing Nonlinear,
Restrained Systems. Operations Research, 9(2):169–184, March–April 1961, Institute for
Operations Research and the Management Sciences (INFORMS): Linthicum, ML, USA and
HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/opre.9.2.169.
502. Ted Carson and Russell Impagliazzo. Hill-Climbing finds Random Planted Bisections. In
Proceedings of the twelfth annual ACM-SIAM Symposium on Discrete Algorithms [2543],
pages 903–909, volume 12, 2001. Fully available at http://citeseer.ist.psu.
edu/carson01hillclimbing.html and http://portal.acm.org/citation.cfm?
id=365411.365805 [accessed 2007-09-11].
503. Hugh Cartwright, editor. Intelligent Data Analysis in Science, volume 4 in Oxford Chemistry
Masters. Oxford University Press, Inc.: New York, NY, USA, June 2000. isbn: 0198502338.
504. Richard A. Caruana, Larry J. Eshelman, and J. David Schaffer. Representation and Hidden
Bias: Gray vs. Binary Coding for Genetic Algorithms. In ICML’88 [1659], pages 153–161,
1988.
505. George Casella and Roger L. Berger. Statistical Inference, Duxbury Advanced Series.
Duxbury Thomson Learning / Duxbury Press: Pacific Grove, CA, USA, 2nd edition, June 18,
2000. isbn: 0-534-24312-6. OCLC: 46538638.
506. Félix Castro, Àngela Nebot, and Francisco Mugica. A Soft Computing Decision Support
Framework to Improve the e-Learning Experience. In SpringSim’08 [2258], pages 781–
788, 2008. Fully available at http://portal.acm.org/citation.cfm?id=1400674
[accessed 2009-03-29]. SESSION: 2008 Modeling & Simulation in Education (MSE’08): The Education Continuum.
507. Seventh European Conference on Machine Learning (ECML’94), April 6–8, 1994, Catania,
Sicily, Italy.
508. H. John Caulfield, Shu-Heng Chen, Heng-Da Cheng, Richard J. Duro, Vasant Honavar,
Etienne E. Kerre, Mikulas Luptacik, Manuel Grana Romay, Timothy K. Shih, Dan Ventura,
Paul P. Wang, and Yuanyuan Yang, editors. Proceedings of the Sixth Joint Conference on
Information Science (JCIS 2002), Section: The Fourth International Workshop on Frontiers
in Evolutionary Algorithms (FEA’02), March 8–13, 2002, Research Triangle Park, NC, USA.
Association for Intelligent Machinery, Inc.: Durham, NC, USA. isbn: 0-9707890-1-7.
509. Proceedings of the 1st International Conference on Bio Inspired mOdels of NEtwork, Infor-
mation and Computing Systems (BIONETICS’06), December 11–13, 2006, Cavalese, Italy.
510. Daniel Joseph Cavicchio, Jr. Adaptive Search using Simulated Evolution. PhD thesis, Uni-
versity of Michigan, College of Literature, Science, and the Arts, Computer and Commu-
nication Sciences Department: Ann Arbor, MI, USA, August 1970, John Henry Holland,
Advisor. Fully available at http://deepblue.lib.umich.edu/handle/2027.42/4042
[accessed 2010-08-03]. ID: bab9712.0001.001.

511. Daniel Joseph Cavicchio, Jr. Reproductive Adaptive Plans. In ACM’72 [21], pages 60–70.
doi: 10.1145/800193.805822.
512. 14th European Conference on Machine Learning (ECML’03), September 22–26, 2003, Cavtat,
Dubrovnik, Croatia.
513. The Second Annual Symposium on Autonomous Intelligent Networks and Systems (AINS’03),
June 30–July 1, 2003, SRI International: Menlo Park, CA, USA. Center for Autonomous
Intelligent Networks and Systems (CAINS), University of California (UCLA): Los Angeles,
CA, USA. Fully available at http://path.berkeley.edu/ains/ [accessed 2009-07-10].
514. Proceedings of the 20th Informatics and Operations Research Meeting (20th Jornadas Ar-
gentinas e Informática e Investigación Operativa) (JAIIO’91), August 20–23, 1991, Centro
Cultural General San Martı́n: Buenos Aires, Argentina.
515. Proceedings of the International Conference on Soft Computing and Pattern Recognition
(SoCPaR’10), December 7–10, 2010, Cergy-Pontoise, France.
516. Vladimı́r Černý. Thermodynamical Approach to the Traveling Salesman Problem: An Ef-
ficient Simulation Algorithm. Journal of Optimization Theory and Applications, 45(1):41–
51, January 1985, Springer Netherlands: Dordrecht, Netherlands and Plenum Press: New
York, NY, USA. doi: 10.1007/BF00940812. Fully available at http://mkweb.bcgsc.ca/
papers/cerny-travelingsalesman.pdf [accessed 2010-09-25]. Communicated by S. E. Drey-
fus. Also: Technical Report, Comenius University, Mlynská Dolina, Bratislava, Czechoslo-
vakia, 1982.
517. Miroslav Červenka and Vojtěch Křesálek. Aerodynamic Wing Optimisation Us-
ing SOMA Evolutionary Algorithm. In NICSO’08 [1620], pages 127–138, 2008.
doi: 10.1007/978-3-642-03211-0_11. See also [518].
518. Miroslav Červenka and Ivan Zelinka. Application of Evolutionary Algorithm on Aerody-
namic Wing Optimisation. In ECC’08 [1819], 2008. Fully available at http://www.wseas.
us/e-library/conferences/2008/malta/ecc/ecc53.pdf [accessed 2010-12-27]. See also
[517].
519. Deepti Chafekar, Jiang Xuan, and Khaled Rasheed. Constrained Multi-objective Optimiza-
tion Using Steady State Genetic Algorithms. In GECCO’03-I [484], pages 813–824, 2003.
Fully available at http://www.cs.uga.edu/˜khaled/papers/385.pdf [accessed 2010-08-01].
CiteSeerx : 10.1.1.15.2094.
520. Uday Kumar Chakraborty, Kalyanmoy Deb, and Mandira Chakraborty. Analysis of Se-
lection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133–167,
Summer 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.2.133.
521. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: Applications, volume
I in Practical Handbook of Genetic Algorithms. Chapman & Hall: London, UK and CRC
Press, Inc.: Boca Raton, FL, USA, 2nd ed: 2000 edition, 1995. isbn: 0849325196 and
1584882409.
522. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: New Frontiers, vol-
ume II. CRC Press, Inc.: Boca Raton, FL, USA, 1995. isbn: 0849325293. Google Books
ID: 9RCE3pgj9K4C. OCLC: 32394368, 35865793, 54016160, 61827855, 468627012,
and 503637768.
523. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: Complex Coding
Systems, volume III in Practical Handbook of Genetic Algorithms. Chapman & Hall: London,
UK and CRC Press, Inc.: Boca Raton, FL, USA, December 23, 1998. isbn: 0849325390.
524. Christine W. Chan, Witold Kinsner, Yingxu Wang, and D. Michael Miller, editors. Pro-
ceedings of the 3rd IEEE International Conference on Cognitive Informatics (ICCI’04),
August 16–17, 2004, Victoria, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-2190-8.
525. Felix T.S. Chan and Manoj Kumar Tiwari, editors. Swarm Intelligence – Focus on
Ant and Particle Swarm Optimization. I-Tech Education and Publishing: Vienna,
Austria, December 2007. Fully available at http://s.i-techonline.com/Book/
Swarm-Intelligence/Swarm-Intelligence.zip [accessed 2008-08-22].
526. Sandeep Chandran and R. Russell Rhinehart. Heuristic Random Optimizer-Version II. In
ACC’02 [1335], pages 2589–2594, volume 4, 2002. doi: 10.1109/ACC.2002.1025175.
527. C. S. Chang and Chung Min Kwan. Evaluation of Evolutionary Algorithms for
Multi-Objective Train Schedule Optimization. In AI’04 [2871], pages 803–815, 2004.
doi: 10.1007/978-3-540-30549-1_69 and 10.1007/b104336.
528. Vira Chankong and Yacov Y. Haimes. Multiobjective Decision Making Theory and Methodol-
ogy. North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands, and Dover Publications: Mineola, NY, USA,
January 1983. isbn: 0-444-00710-5. Google Books ID: fQ5VPQAACAAJ.
529. 14th Conference on Technologies and Applications of Artificial Intelligence (TAAI’09), Oc-
tober 30–31, 2009, Chaoyang University of Technology (CYU): Wufeng Township, Taichung
County, Taiwan.
530. Abraham Charnes and William Wager Cooper. Management Models and Industrial Appli-
cations of Linear Programming. John Wiley & Sons Canada Ltd.: Toronto, ON, Canada,
December 1961. isbn: 0471148504. Google Books ID: xj3WAAAACAAJ.
531. Abraham Charnes, William Wager Cooper, and R. O. Ferguson. Optimal Estimation of
Executive Compensation by Linear Programming. Management Science, 1(2):138–151, Jan-
uary 1955, Institute for Operations Research and the Management Sciences (INFORMS):
Linthicum, ML, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1287/mnsc.1.2.138.
532. Neela Chattoraj and Jibendu Sekhar Roy. Application of Genetic Algorithm to the Optimiza-
tion of Microstrip Antennas with and without Superstrate. Mikrotalasna Revija (Microwave
Review), 2(6), November 2006, Yugoslav Society for Microwave Technique and Technolo-
gies, Serbia and Montenegro IEEE MTT-S Chapter: Novi Beograd, Serbia. Fully available
at http://www.mwr.medianis.net/pdf/Vol12No2-06-NChattoraj.pdf [accessed 2008-09-02].
533. Leonardo Weiss F. Chaves, Erik Buchmann, Fabian Hueske, and Klemens Böhm. Towards
Materialized View Selection for Distributed Databases. In EDBT’09 [1526], pages 1088–
1099, 2009. doi: 10.1145/1516360.1516484. Fully available at http://www.ipd.kit.edu/
˜buchmann/pdfs/weiss08matviews.pdf [accessed 2011-04-09].
534. José M. Chaves-González, Miguel A. Vega-Rodrı́guez, David Domı́nguez-González, Juan A.
Gómez-Pulido, and Juan M. Sánchez-Pérez. SS vs PBIL to Solve a Real-World Frequency
Assignment Problem in GSM Networks. In EvoWorkshops’08 [1051], pages 21–30, 2008.
doi: 10.1007/978-3-540-78761-7_3.
535. Pravir K. Chawdry, Rajkumar Roy, and Raj K. Pant, editors. Soft Computing in En-
gineering Design and Manufacturing – Proceedings of the 2nd On-line World Conference
on Soft Computing in Engineering Design and Manufacture (WSC2), June 1997. Springer-
Verlag GmbH: Berlin, Germany. isbn: 3-540-76214-0. Google Books ID: mxcP1mSjOlsC.
OCLC: 37696334, 247521345, and 634603077. Library of Congress Control Number
(LCCN): 97031959. GBV-Identification (PPN): 236542257 and 279731663. LC Classifi-
cation: QA76.9.S63 S63 1998.
536. Sin Man Cheang, Kwong Sak Leung, and Kin Hong Lee. Genetic Parallel Programming:
Design and Implementation. Evolutionary Computation, 14(2):129–156, Summer 2006, MIT
Press: Cambridge, MA, USA. doi: 10.1162/evco.2006.14.2.129.
537. 10th European Conference on Machine Learning (ECML’98), April 21–24, 1998, Chemnitz,
Sachsen, Germany.
538. Arbee L. P. Chen, Jui-Shang Chiu, and Frank S.C. Tseng. Evaluating Aggregate Opera-
tions Over Imprecise Data. IEEE Transactions on Knowledge and Data Engineering, 8(2):
273–284, April 1996, IEEE Computer Society Press: Los Alamitos, CA, USA. Fully avail-
able at http://make.cs.nthu.edu.tw/alp/alp_paper/Evaluating%20aggregate
%20operations%20over%20imprecise%20data.pdf [accessed 2007-09-12].
539. Chi-hau Chen, L. F. Pau, and P. Shen-pei Wang, editors. Handbook of Pattern Recogni-
tion and Computer Vision. World Scientific Publishing Co.: Singapore, 1st edition, Au-
gust 1993. isbn: 981-02-1136-8, 981-02-2276-9, and 981-02-3071-0. Google Books
ID: 5J6J7PP0-ZMC, pCghq7-i1oMC, and wnhwKoen4xAC.
540. Chia-Mei Chen, Bing Chiang Jeng, Chia Ru Yang, and Gu Hsin Lai. Tracing Denial of
Service Origin: Ant Colony Approach. In EvoWorkshops’06 [2341], pages 286–295, 2006.
doi: 10.1007/11732242 26.
541. Guoliang Chen, editor. Genetic Algorithm and its Application (Yíchuán Suànfǎ Jí Qí
Yìngyòng), National High-Tech Key Books: Communications Technology (Quánguó Gāo
Jìshù Zhòngdiǎn Túshū: Tōngxìn Jìshù Lǐngyù). Post & Telecom Press (PTPress, Rénmín
Yóudiàn Chūbǎn Shè): Běijīng, China, 1996–2001. isbn: 7115059640. Google Books
ID: qmNoAAAACAAJ. OCLC: 300052154.
542. Jing Chen, Zeng-zhi Li, Zhi-Gang Liao, and Yun-lan Wang. Distributed Service Performance
Management Based on Linear Regression and Genetic Programming. In ICMLC’05 [2263],
pages 560–563, volume 1, 2005. doi: 10.1109/ICMLC.2005.1527007.
543. Jing Chen, Zeng-zhi Li, and Yun-lan Wang. Distributed Service Management Based on
Genetic Programming. In AWIC’05 [2642], pages 83–88, 2005. doi: 10.1007/11495772 14.
544. Jong-Chen Chen and Michael Conrad. A Multilevel Neuromolecular Architecture that uses
the Extradimensional Bypass Principle to Facilitate Evolutionary Learning. Physica D: Non-
linear Phenomena, 75(1–3):417–437, August 1, 1994, Elsevier Science Publishers B.V.: Am-
sterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam,
The Netherlands. doi: 10.1016/0167-2789(94)90295-X.
545. Jong-Chen Chen, Guo-Xun Liao, Jr-Sung Hsie, and Cheng-Hua Liao. A Study of the Contri-
bution Made by Evolutionary Learning on Dynamic Load-Balancing Problems in Distributed
Computing Systems. Expert Systems with Applications – An International Journal, 34(1):357–
365, January 2008, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press:
Oxford, UK. doi: 10.1016/j.eswa.2006.09.036.
546. Ken Chen, Shu-Heng Chen, Heng-Da Cheng, David K.Y. Chin, Sanjoy Das, Richard J.
Duro, Zhen Jiang, Nikola Kasabov, Etienne E. Kerre, Hong Va Leong, Qing Li, Mi Lu,
Manuel Grana Romay, Dan Ventura, Paul P. Wang, and Jie Wu, editors. Proceedings of the
Seventh Joint Conference on Information Science (JCIS 2003), Section: The Fifth Interna-
tional Workshop on Frontiers in Evolutionary Algorithms (FEA’03), September 26–30, 2003,
Embassy Suites Hotel and Conference Center: Cary, NC, USA. Association for Intelligent
Machinery, Inc.: Durham, NC, USA. isbn: 0964345692. OCLC: 248354376. Workshop
held in conjunction with Seventh Joint Conference on Information Sciences.
547. Min-Rong Chen, Yong-zai Lu, and Gen-ke Yang. Multiobjective Extremal Optimization with
Applications to Engineering Design. Journal of Zhejiang University – Science A (JZUS-A), 8
(12):1905–1911, November 2007, Zhejiang University Press: Hángzhōu, Zhèjiāng, China and
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1631/jzus.2007.A1905.
549. Shu-Heng Chen, editor. Evolutionary Computation in Economics and Finance, volume 100 in
Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, August 5,
2002. isbn: 3790814768.
550. Tianshi Chen, Ke Tang, Guoliang Chen, and Xin Yao. A Large Population Size Can Be
Unhelpful in Evolutionary Algorithms. Theoretical Computer Science, 2011, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.tcs.2011.02.016.
551. Wenxiang Chen, Thomas Weise, Zhenyu Yang, and Ke Tang. Large-Scale Global
Optimization Using Cooperative Coevolution with Variable Interaction Learning. In
PPSN’10-2 [2413], pages 300–309, 2010. doi: 10.1007/978-3-642-15871-1 31. Fully
available at http://mail.ustc.edu.cn/˜chenwx/ppsn10Chen.pdf [accessed 2011-02-18]
and http://www.it-weise.de/documents/files/CWYT2010LSGOUCCWVIL.pdf. Ei
ID: 20104513369063. IDS (SCI): BTC53.
552. Ying-Ping Chen. Extending the Scalability of Linkage Learning Genetic Algorithms – Theory
& Practice, volume 190/2006 in Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, University of Illinois: Urbana-Champaign, IL, USA, 2004–2006,
David Edward Goldberg, Advisor. doi: 10.1007/b102053. isbn: 3-540-28459-1. Google
Books ID: kKr3rKhPU7oC. Order No.: AAI3130894.
553. Yuehui Chen and Ajith Abraham, editors. Proceedings of the Sixth International Conference
on Intelligent Systems Design and Applications (ISDA’06), October 16–18, 2006, Jì’nán,
Shāndōng, China. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-2528-8.
Google Books ID: xgDBAgAACAAJ. Order No.: P2528.
554. Zhixiong Chen, Charles A. Shoniregun, and Yuan-Chwen You, editors. First IEEE In-
ternational Services Computing Contest (SCContest’06), 2006, Chicago, IL, USA. IEEE
Computer Society Press: Los Alamitos, CA, USA. Fully available at http://iscc.
servicescomputing.org/2006/ [accessed 2010-12-17]. Part of [1373].
555. Dàizhǎn Chéng and Mı̌n Wú, editors. Proceedings of the 26th Chinese Control Confer-
ence (CCC’07), July 26–31, 2007, Zhāngjiājiè, Húnán, China. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 7-81124-055-6. Google Books
ID: MCLlMQAACAAJ. OCLC: 182550073. Catalogue no.: 07EX1694.
556. Hui Cheng and Shengxiang Yang. Genetic Algorithms with Elitism-based Immigrants for
Dynamic Shortest Path Problem in Mobile Ad Hoc Networks. In CEC’09 [1350], pages
3135–3140, 2009. doi: 10.1109/CEC.2009.4983340. INSPEC Accession Number: 10688935.
557. Yiu-ming Cheung, Yuping Wang, and Hailin Liu, editors. Proceedings of the 2006 Inter-
national Conference on Computational Intelligence and Security (CIS’06), November 3–6,
2006, Ramada Pearl Hotel, Guǎngzhōu, Guǎngdōng, China. IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0605-6. Library of Congress
Control Number (LCCN): 2006931375. Catalogue no.: 06EX1512.
558. Bastien Chevreux. Genetische Algorithmen zur Molekülstrukturoptimierung. Master’s thesis,
Universität Heidelberg: Heidelberg, Germany, Fachhochschule Heilbronn: Heilbronn, Ger-
many, and Deutsches Krebsforschungszentrum Heidelberg: Heidelberg, Germany, March 1,
1997. Fully available at http://chevreux.org/diplom/diplom.html [accessed 2010-08-01].
559. Alpha C. Chiang and Kevin Wainwright. Fundamental Methods of Mathematical Eco-
nomics. McGraw-Hill Ltd.: Maidenhead, UK, England, 4th edition, February 2, 2005.
isbn: 0070109109.
560. Altannar Chinchuluun, Panos M. Pardalos, Athanasios Migdalas, and Leonidas S. Pit-
soulis, editors. Pareto Optimality, Game Theory and Equilibria, volume 17 in Springer
Optimization and Its Applications. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-0-387-77247-9. isbn: 0387772464. Google Books ID: kNqHxU3Tc2YC.
561. Raymond Chiong, editor. Intelligent Systems for Automated Learning and Adaptation:
Emerging Trends and Applications. Information Science Reference: Hershey, PA, USA,
September 2009. doi: 10.4018/978-1-60566-798-0. isbn: 1-60566-798-6. Google Books
ID: bjFRPgAACAAJ and r3BUPgAACAAJ. OCLC: 318420065.
562. Raymond Chiong, editor. Nature-Inspired Algorithms for Optimisation, volume 193/2009 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, April 30, 2009.
doi: 10.1007/978-3-642-00267-0. isbn: 3-642-00266-8 and 3-642-00267-6. Google Books
ID: 557PPQbkfNYC. OCLC: 400530826, 405547847, and 423750494. Library of Congress
Control Number (LCCN): 2009920517.
563. Raymond Chiong, editor. Nature-Inspired Informatics for Intelligent Applications and Knowl-
edge Discovery: Implications in Business, Science and Engineering. Information Science
Reference: Hershey, PA, USA, 2009. doi: 10.4018/978-1-60566-705-8. isbn: 1605667056.
564. Raymond Chiong and Sandeep Dhakal, editors. Natural Intelligence for Schedul-
ing, Planning and Packing Problems, volume 250 in Studies in Computational Intelli-
gence. Springer-Verlag: Berlin/Heidelberg, October 2009. doi: 10.1007/978-3-642-04039-9.
isbn: 3-642-04038-1. Google Books ID: X1XkPwAACAAJ. Library of Congress Control
Number (LCCN): 2009934305.
565. Raymond Chiong and Thomas Weise. Global Optimisation and Mobile Learning.
Learning Technology – LTTF Newsletter, 11(1-2):26–28, January–April 2009, Technical
Committee on Learning Technology. Fully available at http://www.ieeetclt.org/
issues/april2009/index.html#_Toc230200599 and http://www.it-weise.de/
documents/files/CW2009GOAML.pdf [accessed 2009-09-08].
566. Raymond Chiong, Thomas Weise, and Bee Theng Lau. Template Design using Extremal
Optimization with Multiple Search Operators. In SoCPaR’09 [1224], pages 202–207, 2009.
doi: 10.1109/SoCPaR.2009.49. Fully available at http://www.it-weise.de/documents/
files/CWL2009TDUEOWMSO.pdf [accessed 2010-06-10]. INSPEC Accession Number: 11050930.
Ei ID: 20101012758280. IDS (SCI): BOP29. See also [2895].
567. Raymond Chiong, Thomas Weise, and Zbigniew Michalewicz, editors. Variants of Evolution-
ary Algorithms for Real-World Applications. Springer-Verlag: Berlin/Heidelberg, Septem-
ber 30, 2011–2012. doi: 10.1007/978-3-642-23424-8. Google Books ID: B2ONePP40MEC.
Library of Congress Control Number (LCCN): 2011935740.
568. Rada Chirkova. The View-Selection Problem Has an Exponential-Time Lower Bound
for Conjunctive Queries and Views. In PODS’02 [2204], pages 159–168, 2002.
doi: 10.1145/543613.543634. Fully available at http://dbgroup.ncsu.edu/rychirko/
Papers/pods02.pdf [accessed 2011-03-28]. CiteSeerx: 10.1.1.88.2166.
569. Huidae Cho, Francisco Olivera, and Seth D. Guikema. A Derivation of the Number of
Minima of the Griewank Function. Journal of Applied Mathematics and Computation, 204
(2):694–701, October 15, 2008, Elsevier Science Publishers B.V.: Essex, UK. Fully available
at https://ceprofs.civil.tamu.edu/folivera/PapersPDFs/A%20derivation
%20of%20the%20number%20of%20minima%20of%20the%20Griewank%20function.
pdf [accessed 2009-08-16]. Special Issue on New Approaches in Dynamic Optimization to
Assessment of Economic and Environmental Systems.
570. Chi-Hon Choi, Jeffrey Xu Yu, and Gang Gou. What Difference Heuristics Make:
Maintenance-Cost View-Selection Revisited. In WAIM’02 [1868], pages 313–350, 2002.
doi: 10.1007/3-540-45703-8 23.
571. Chi-Hon Choi, Jeffrey Xu Yu, and Hongjun Lu. Dynamic Materialized View
Management Based on Predicates. In APWeb’03 [3082], pages 583–594, 2003.
doi: 10.1007/3-540-36901-5 58.
572. Sung-Soon Choi, Kyomin Jung, and Jeong Han Kim. Phase Transition in a
Random NK Landscape Model. In GECCO’05 [304], pages 1241–1248, 2005.
doi: 10.1145/1068009.1068212. Fully available at http://web.mit.edu/kmjung/Public/
NK%20gecco%20final.pdf [accessed 2009-02-26]. Session: Genetic Algorithms. See also [573].
573. Sung-Soon Choi, Kyomin Jung, and Jeong Han Kim. Phase Transition in a Random NK
Landscape Model. Artificial Intelligence, 172(2–3):179–203, February 2008, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.artint.2007.06.002. See also [572].
574. Proceedings of the 8th International Conference on Natural Computation (ICNC’12), May 29–
31, 2012, Chóngqìng, China.
575. Hosung Choo, Adrian Hutani, Luiz Cezar Trintinalia, and Hao Ling. Shape Optimisation of
Broadband Microstrip Antennas using Genetic Algorithm. Electronics Letters, 36(25):2057–
2058, December 7, 2000, Institution of Engineering and Technology (IET): Stevenage, Herts,
UK. doi: 10.1049/el:20001452.
576. Heather Christensen, Roger L. Wainwright, and Dale A. Schoenefeld. A Hybrid Algo-
rithm for the Point to Multipoint Routing Problem. In SAC’97 [435], pages 263–268,
1997. doi: 10.1145/331697.331751. Fully available at http://euler.mcs.utulsa.edu/
~rogerw/papers/Heather-PMP.pdf [accessed 2008-08-28]. CiteSeerx: 10.1.1.22.8559.
577. Nicos Christofides, Aristide Mingozzi, and Paolo Toth. The Vehicle Routing Problem. In
Combinatorial Optimization [578], Chapter 11, pages 315–338. John Wiley & Sons Ltd.: New
York, NY, USA, 1977.
578. Nicos Christofides, Aristide Mingozzi, Paolo Toth, and C. Sandi, editors. Combinatorial
Optimization, A Wiley-Interscience Publiction. John Wiley & Sons Ltd.: New York, NY,
USA, May 30–June 11, 1977. isbn: 0471997498. OCLC: 4495250, 264568944, and
636199125. Library of Congress Control Number (LCCN): 78011131. GBV-Identification
(PPN): 027251012. LC Classification: QA402.5 .C545. Based on a series of lectures given
at the Summer School in Combinatorial Optimization held in SOGESTA, Urbino, Italy from
30th May to 11th June 1977.
579. Hyoung-Seog Chung, Seongim Choi, and Juan J. Alonso. Supersonic Business Jet Design Us-
ing Knowledge-Based Genetic Algorithm with Adaptive, Unstructured Grid Methodology. In
AIAA’03 [2550], page 3791, 2003. Fully available at http://www.lania.mx/˜ccoello/
chung03.pdf.gz [accessed 2009-06-17].
580. Charles West Churchman, Russell Lincoln Ackoff, and E. Leonard Arnoff. Introduction to
Operations Research. John Wiley & Sons Ltd.: New York, NY, USA and Chapman & Hall:
London, UK, 2nd edition, 1957. asin: B0000CJP9J and B001ORMIX0.
581. Vic Ciesielski and Xiang Li. Analysis of Genetic Programming Runs. In Complex’04 [469],
2004. Fully available at http://goanna.cs.rmit.edu.au/˜xiali/pub/ai04.vc.
pdf and http://www.cs.rmit.edu.au/˜vc/papers/aspgp04.pdf [accessed 2010-11-20].
CiteSeerx: 10.1.1.87.4136.
582. Emilia Cimpian, Paavo Kotinurmi, Adrian Mocan, Matthew Moran, Tomas Vitvar, and Ma-
ciej Zaremba. Dynamic RosettaNet Integration on the Semantic Web Services. In First
Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web
Services Mediation, Choreography and Discovery [2688], 2006. Fully available at http://
sws-challenge.org/workshops/2006-Stanford/papers/03.pdf [accessed 2010-12-12].
See also [1210, 1940, 3057–3059].
583. Proceedings of the Third International Workshop on Local Search Techniques in Constraint
Satisfaction, September 25, 2006, Cite des Congres: Nantes, France.
584. Dawn Cizmar, editor. Proceedings of the 22nd Annual ACM Computer Science Confer-
ence on Scaling up: Meeting the Challenge of Complexity in Real-World Computing Ap-
plications (CSC’94), March 8–10, 1994, Phoenix, AZ, USA. ACM Press: New York, NY,
USA. isbn: 0-89791-634-4. Google Books ID: 6jUbOAAACAAJ, InpQAAAAMAAJ, and
nIkzMgAACAAJ. OCLC: 31861788, 51345201, 257836439, and 312016702.
585. Christopher D. Clack, editor. Advanced Research Challenges in Financial Evolutionary Com-
puting Workshop (ARC-FEC’08), July 12, 2008, Renaissance Atlanta Hotel Downtown: At-
lanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
586. Bill Clancey and Dan Weld, editors. Proceedings of the Thirteenth National Conference on
Artificial Intelligence and Eighth Innovative Applications of Artificial Intelligence Conference
(AAAI’96, IAAI’96), August 4–8, 1996, Portland, OR, USA. isbn: 0-262-51091-X. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai96.php and http://
www.aaai.org/Conferences/IAAI/iaai96.php [accessed 2007-09-06]. 2 volumes. See also
[2207].
587. David E. Clark, editor. Evolutionary Algorithms in Molecular Design, volume 8 in Methods
and Principles in Medicinal Chemistry. Wiley-VCH Verlag GmbH & Co. KGaA: Weinheim,
Germany, September 11, 2000. isbn: 3527301550. Series editors: Raimund Mannhold, Hugo
Kubinyi, and Hendrik Timmerman.
588. The International Workshop on Modeling & Applied Simulation (MAS’03), October 2–4,
2003, Claudio Hotel & Congress Center: Bergeggi, Italy.
589. Maurice Clerc. Particle Swarm Optimization. ISTE Publishing Company: London, UK,
February 24, 2006. isbn: 1905209045. Google Books ID: a0YeAAAACAAJ.
590. Manuel Clergue, Philippe Collard, Marco Tomassini, and Leonardo Vanneschi. Fit-
ness Distance Correlation And Problem Difficulty For Genetic Programming. In
GECCO’02 [1675], pages 724–732, 2002. Fully available at http://www.cs.bham.ac.
uk/˜wbl/biblio/gecco2002/GP072.pdf and http://www.cs.ucl.ac.uk/staff/
W.Langdon/ftp/papers/gecco2002/gecco-2002-14.pdf [accessed 2008-07-20]. See also
[2784].
591. Dave Cliff and Susi Ross. Adding Temporary Memory to ZCS. Adaptive Be-
havior, 3(2):101–150, Fall 1994, SAGE Publications: Thousand Oaks, CA, USA.
doi: 10.1177/105971239400300201. Fully available at ftp://ftp.informatics.sussex.
ac.uk/pub/reports/csrp/csrp347.ps.Z [accessed 2007-09-12].
592. João Clímaco, editor. Proceedings of the 11th International Conference on Multiple Criteria
Decision Making: Multicriteria Analysis (MCDM’94), August 1–6, 1994, Coimbra, Portu-
gal. Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-62074-5. OCLC: 36181115,
636440409, and 639833152. GBV-Identification (PPN): 222685735 and 25787321X.
Published in May 1997.
593. Carlos Artemio Coello Coello. A Short Tutorial on Evolutionary Multiobjective Op-
timization. In EMO’01 [3099], pages 21–40, 2001. Fully available at http://www.
cs.cinvestav.mx/~emooworkgroup/tutorial-slides-coello.pdf [accessed 2010-07-31].
CiteSeerx: 10.1.1.29.3997.
594. Carlos Artemio Coello Coello. Theoretical and Numerical Constraint-Handling Techniques
used with Evolutionary Algorithms: A Survey of the State of the Art. Computer Methods
in Applied Mechanics and Engineering, 191(11–12):1245–1287, January 14, 2002, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. doi: 10.1016/S0045-7825(01)00323-1.
595. Carlos Artemio Coello Coello. A Comprehensive Survey of Evolutionary-Based Multiobjec-
tive Optimization Techniques. Knowledge and Information Systems – An International Jour-
nal (KAIS), 1(3), August 1999, Springer-Verlag London Limited: London, UK. Fully available
at http://www.lania.mx/~ccoello/EMOO/informationfinal.ps.gz [accessed 2009-08-04].
CiteSeerx: 10.1.1.48.3504.
596. Carlos Artemio Coello Coello. An Updated Survey of Evolutionary Multiobjective Opti-
mization Techniques: State of the Art and Future Trends. In CEC’99 [110], pages 3–13,
volume 1, 1999. doi: 10.1109/CEC.1999.781901. Fully available at http://www-course.
cs.york.ac.uk/evo/SupportingDocs/coellocoello99updated.pdf [accessed 2009-06-20].
CiteSeerx: 10.1.1.50.2154.
597. Carlos Artemio Coello Coello and Gary B. Lamont, editors. Applications of Multi-Objective
Evolutionary Algorithms, volume 1 in Advances in Natural Computation. World Scien-
tific Publishing Co.: Singapore, December 2004. isbn: 981-256-106-4. Google Books
ID: viWm0k9meOcC.
598. Carlos Artemio Coello Coello, Alvaro de Albornoz, Luis Enrique Sucar, and Osvaldo Cairó
Battistutti, editors. Advances in Artificial Intelligence: Proceedings of the Second Mex-
ican International Conference on Artificial Intelligence (MICAI’02), April 22–26, 2002,
Mérida, Yucatán, México, volume 2313/2002 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46016-0. isbn: 3-540-43475-5.
599. Carlos Artemio Coello Coello, Gary B. Lamont, and David A. van Veldhuizen. Evolution-
ary Algorithms for Solving Multi-Objective Problems, volume 5 in Genetic and Evolutionary
Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell,
MA, USA, 2nd edition, 2002–2007. doi: 10.1007/978-0-387-36797-2. isbn: 0306467623 and
0387332545. Google Books ID: NZdOgAACAAJ, rXIuAMw3lGAC, and sgX Cst yTsC.
600. Carlos Artemio Coello Coello, Arturo Hernández Aguirre, and Eckart Zitzler, editors. Pro-
ceedings of the Third International Conference on Evolutionary Multi-Criterion Optimiza-
tion (EMO’05), March 9–11, 2005, Guanajuato, México, volume 3410/2005 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b106458. isbn: 3-540-24983-4
and 3-540-31880-1. Google Books ID: AfE5fFFl5AgC. OCLC: 58601838, 58998290,
76657382, 150417695, 316700466, 318300131, and 403748504. Library of Congress
Control Number (LCCN): 2005920590.
601. William W. Cohen and Haym Hirsh, editors. Proceedings of the 11th International Confer-
ence on Machine Learning (ICML’94), July 10–13, 1994, New Brunswick, NJ, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-335-2.
602. William W. Cohen and Andrew Moore, editors. Proceedings of the 23rd International Con-
ference on Machine Learning (ICML’06), June 25–29, 2006, Carnegy Mellon University
(CMU): Pittsburgh, PA, USA, volume 148 in ACM International Conference Proceeding
Series (AICPS). Omipress: Madison, WI, USA. isbn: 1-59593-383-2. Fully available
at http://www.machinelearning.org/icml2006_proc.html [accessed 2010-06-23].
603. William W. Cohen, Andrew McCallum, and Sam Roweis, editors. Proceedings of the 25th
International Conference on Machine Learning (ICML’08), July 5–9, 2008, Helsinki, Fin-
land, volume 307 in ACM International Conference Proceeding Series (AICPS). Omipress:
Madison, WI, USA. Fully available at http://www.machinelearning.org/archive/
icml2008/proceedings.shtml [accessed 2010-06-23].
604. James P. Cohoon, S. U. Hegde, Worthy Neil Martin, and D. Richards. Punctuated Equilibria:
A Parallel Genetic Algorithm. In ICGA’87 [1132], pages 148–154, 1987.
605. David W. Coit and Fatema Baheranwala. Solution of Stochastic Multi-Objective System
Reliability Design Problems using Genetic Algorithms. In ESREL’05 [1567], pages 391–
398, 2005. Fully available at http://ise.rutgers.edu/resource/research_paper/
paper_05-010.pdf [accessed 2010-12-16]. See also [2645].
606. Pierre Collet, Cyril Fonlupt, Jin-Kao Hao, Evelyne Lutton, and Marc Schoenauer, edi-
tors. Selected Papers of the 5th International Conference on Artificial Evolution, Evo-
lution Artificielle (EA’01), October 29–31, 2001, Le Creusot, France, volume 2310/2002
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46033-0. isbn: 3-540-43544-1.
607. Pierre Collet, Marco Tomassini, Marc Ebner, Steven Matt Gustafson, and Anikó Ekárt,
editors. Proceedings of the 9th European Conference Genetic Programming (EuroGP’06),
April 10–12, 2006, Budapest, Hungary, volume 3905/2006 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11729976. isbn: 3-540-33143-3. Google
Books ID: 1JNQAAAAMAAJ and ZusmAAAACAAJ. Library of Congress Control Number
(LCCN): 2006922466.
608. Pierre Collet, Nicolas Monmarché, Pierrick Legrand, Marc Schoenauer, and Evelyne Lutton,
editors. Artificial Evolution: Revised Selected Papers from the 9th International Conference,
Evolution Artificielle (EA’09), October 26–28, 2009, Strasbourg, France, volume 5975 in
Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3642141552.
609. Yann Collette and Patrick Siarry. Multiobjective Optimization: Principles and Case Stud-
ies, volume 2 in Decision Engineering. Groupe Eyrolles: Paris, France, 2002–2004.
isbn: 3540401822. Google Books ID: XNYF4hltoF0C.
610. J. J. Collins and Malachy Eaton. Genocodes for Genetic Algorithms. In MENDEL’97 [416],
pages 23–30, 1997. Fully available at http://www.csis.ul.ie/staff/jjcollins/
mendel97.ps.gz [accessed 2010-08-05]. CiteSeerx: 10.1.1.51.2349.
611. Proceedings of the Third Bi-Annual EUROMICRO International Conference on Massively
Parallel Computing Systems (MPCS’98), April 6–9, 1998, Colorado Springs, CO, USA.
612. Benoı̂t Colson and Philippe L. Toint. Optimizing Partially Separable Functions with-
out Derivatives. Technical Report 2003/20, University of Namur (FUNDP), Department
of Mathematics: Namur, Belgium, November 6, 2003. Fully available at http://www.
fundp.ac.be/pdf/publications/51357.pdf and perso.fundp.ac.be/˜phtoint/
pubs/TR03-20.ps [accessed 2009-10-26]. See also [613].
613. Benoı̂t Colson and Philippe L. Toint. Optimizing Partially Separable Functions without
Derivatives. Optimization Methods and Software, 20(4 & 5):493–508, August 2005, Taylor
and Francis LLC: London, UK. doi: 10.1080/10556780500140227. See also [612].
614. Francesc Comellas. Using Genetic Programming to Design Broadcasting Algorithms for
Manhattan Street Networks. In EvoWorkshops’04 [2254], pages 170–177, 2004.
615. Francesc Comellas and G. Giménez. Genetic Programming to Design Communication Algo-
rithms for Parallel Architectures. Parallel Processing Letters (PPL), 8(4):549–560, Decem-
ber 1998, World Scientific Publishing Co.: Singapore. doi: 10.1142/S0129626498000547. Fully
available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Comellas_1998_
GPD.html [accessed 2009-09-01]. CiteSeerx: 10.1.1.57.4752.
616. Francesc Comellas and Juan Paz-Sánchez. Reconstruction of Networks from
Their Betweenness Centrality. In EvoWorkshops’08 [1051], pages 31–37, 2008.
doi: 10.1007/978-3-540-78761-7 4.
617. Diana Elena Comes, Steffen Bleul, Thomas Weise, and Kurt Geihs. A Flexible
Approach for Business Processes Monitoring. In DAIS’09 [2453], pages 116–128,
2009. doi: 10.1007/978-3-642-02164-0 9. Fully available at http://www.it-weise.de/
documents/files/CBTG2009AFAFBPM.pdf. Ei ID: 20093412265790. IDS (SCI): BKH10.
618. Committee on the Fundamentals of Computer Science: Challenges and Opportunities,
Computer Science and Telecommunications Board, National Research Council of the Na-
tional Academies: Washington, DC, USA, editor. Computer Science: Reflections on the
Field, Reflections from the Field. National Academies Press: Washington, DC, USA, 2004.
isbn: 0-309-09301-5 and 0-309-54529-3. Google Books ID: sTlPLMq6ZdYC.
619. Giuseppe Confessore, Graziano Galiano, and Giuseppe Stecca. An Evolutionary Algorithm
for Vehicle Routing Problem with Real Life Constraints. In Manufacturing Systems and Tech-
nologies for the New Frontier – The 41st CIRP Conference on Manufacturing Systems [1918],
pages 225–228, 2008. doi: 10.1007/978-1-84800-267-8 46.
620. Brian Connolly. Genetic Algorithms – Survival of the Fittest: Natural Selection
with Windows Forms. MSDN Magazin, August 2004, Microsoft Corporation: Red-
mond, WA, USA. Fully available at http://msdn.microsoft.com/de-de/magazine/
cc163934(en-us).aspx [accessed 2008-06-24].
621. Michael Conrad. Bootstrapping on the Adaptive Landscape. Biosystems, 11(2-
3):81–84, August 1979, Elsevier Science Ireland Ltd.: East Park, Shannon, Ire-
land and North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/0303-2647(79)90009-1. Fully available at http://dx.doi.org/10.1016/
0303-2647(79)90009-1 and http://hdl.handle.net/2027.42/23514 [accessed 2008-
11-02].
622. Michael Conrad. Adaptability: The Significance of Variability from Molecule to Ecosystem.
Plenum Press: New York, NY, USA, March 31, 1983. isbn: 0-306-41223-3. Google Books
ID: 7ewUAQAAIAAJ. OCLC: 9110177, 251709596, 299388964, and 476965274. Library
of Congress Control Number (LCCN): 82024558. GBV-Identification (PPN): 014038137.
LC Classification: QH546 .C65 1983.
623. Michael Conrad. Molecular Computing. In Advances in Computers [3036], pages 235–324.
Academic Press Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands, 1990. doi: 10.1016/S0065-2458(08)60155-2.
624. Michael Conrad. The Geometry of Evolution. Biosystems, 24(1):61–81, 1990, Elsevier Science
Ireland Ltd.: East Park, Shannon, Ireland and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands. doi: 10.1016/0303-2647(90)90030-5.
625. Michael Conrad. Towards High Evolvability Dynamics. In Evolutionary Systems – Biological
and Epistemological Perspectives on Selection and Self-Organization [2771], pages 33–43.
Kluwer Academic Publishers: Norwell, MA, USA, 1998.
626. Stephen A. Cook. The Complexity of Theorem-Proving Procedures. In STOC’71 [20], pages
151–158, 1971. doi: 10.1145/800157.805047.
627. William J. Cook. Traveling Salesman Problem. Georgia Institute of Technology, College of
Engineering, School of Industrial and Systems Engineering, Operations Research: Atlanta,
GA, USA, September 18, 2011. Fully available at http://www.tsp.gatech.
edu/index.html [accessed 2011-09-18].
628. William J. Cook, William H. Cunningham, William R. Pulleyblank, and Alexander Schrijver.
Combinatorial Optimization, Estimation, Simulation, and Control – Wiley-Interscience Series
in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK,
November 12, 1997. isbn: 0-471-55894-X. OCLC: 37451897, 247024428, 288962207,
and 473487390. Library of Congress Control Number (LCCN): 97035774. GBV-
Identification (PPN): 233568778 and 280181795. LC Classification: QA402.5 .C54523
1998.
629. Susan Coombs and Lawrence Davis. Genetic Algorithms and Communication Link Speed
Design: Constraints and Operators. In ICGA’87 [1132], pages 257–260, 1987. Partly available
at http://www.questia.com/PM.qst?a=o&d=97597557 [accessed 2008-08-29].
630. D. C. Cooper, editor. Proceedings of the 2nd International Joint Conference on Arti-
ficial Intelligence (IJCAI’71), September 1–3, 1971, Imperial College London: London,
UK. isbn: 0-934613-34-6. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-1971/CONTENT/content.htm [accessed 2008-04-01]. OCLC: 17487276.
631. Emilio S. Corchado, Juan M. Corchado, and Ajith Abraham, editors. Innovations in Hybrid
Intelligent Systems – Proceedings of the 2nd International Workshop on Hybrid Artificial
Intelligence Systems (HAIS’07), November 2007, University of Salamanca: Salamanca, Spain,
volume 44/2008 in Advances in Soft Computing. Physica-Verlag GmbH & Co.: Heidelberg,
Germany. doi: 10.1007/978-3-540-74972-1. isbn: 3-540-74971-3. Library of Congress
Control Number (LCCN): 2007935489.
632. Emilio S. Corchado, Ajith Abraham, and Witold Pedrycz, editors. Proceedings of the Third
International Workshop on Hybrid Artificial Intelligence Systems (HAIS’08), September 24–
26, 2008, Burgos, Spain, volume 5271/2008 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-87656-4. isbn: 3-540-87655-3. Library of Congress Control Number
(LCCN): 2008935394.
633. Arthur L. Corcoran and Sandip Sen. Using Real-Valued Genetic Algorithms to Evolve
Rule Sets for Classification. In CEC’94 [1891], pages 120–124, volume 1, 1994.
doi: 10.1109/ICEC.1994.350030. CiteSeerx: 10.1.1.55.1864.
634. Óscar Cordón, Francisco Herrera Triguero, and Achim G. Hoffmann. Genetic Fuzzy
Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases, volume 19
in Advances in Fuzzy Systems. World Scientific Publishing Co.: Singapore, 2001.
isbn: 981-02-4016-3 and 981-02-4017-1. Google Books ID: BWmSV-38fKAC
and bwa9QgAACAAJ. OCLC: 47768307. Library of Congress Control Number
REFERENCES 1007
(LCCN): 2001275437. GBV-Identification (PPN): 334123739. LC Classification: QA402.5
.G4565 2001.
635. Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algo-
rithms, MIT Electrical Engineering and Computer Science. MIT Press: Cambridge, MA,
USA and McGraw-Hill Ltd.: Maidenhead, UK, England, 2nd edition, 1990–August 2001.
isbn: 0070131430, 0070131449, 0-262-03141-8, 0-262-53196-8, and 8120321413.
Google Books ID: CJY1KwAACAAJ, CZY1KwAACAAJ, ElKaGgAACAAJ, JlCNJwAACAAJ,
KVCNJwAACAAJ, NLngYyWFl YC, O9WbHAAACAAJ, OdWbHAAACAAJ, PB02AAAACAAJ,
PNWbHAAACAAJ, V7wYPAAACAAJ, WLwYPAAACAAJ, oStGQAACAAJ, iLY1NwAACAAJ, and
zukGIQAACAAJ.
636. David Wolfe Corne and Joshua D. Knowles. Techniques for Highly Multiobjective Optimisa-
tion: Some Nondominated Points are Better than Others. In GECCO’07-I [2699], pages 773–
780, 2007. Fully available at http://dbkgroup.org/knowles/pap583s7-corne.pdf
and http://www.macs.hw.ac.uk/˜dwcorne/p773-corne.pdf [accessed 2011-12-05]. arXiv
ID: 0908.3025v1.
637. David Wolfe Corne and Jonathan L. Shapiro, editors. Proceedings of the Workshop on Ar-
tificial Intelligence and Simulation of Behaviour, International Workshop on Evolutionary
Computing, Selected Papers (AISB’97), April 7–8, 1997, Manchester, UK, volume 1305/1997
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0027161. isbn: 3-540-63476-2.
638. David Wolfe Corne, Marco Dorigo, Fred Glover, Dipankar Dasgupta, Pablo Moscato, Ric-
cardo Poli, and Kenneth V. Price, editors. New Ideas in Optimization, McGraw-Hill’s Ad-
vanced Topics In Computer Science Series. McGraw-Hill Ltd.: Maidenhead, UK, England,
May 1999. isbn: 0-07-709506-5. Google Books ID: nC35AAAACAAJ. OCLC: 40883094
and 265809212.
639. David Wolfe Corne, Joshua D. Knowles, and Martin J. Oates. The Pareto Envelope-
Based Selection Algorithm for Multiobjective Optimization. In PPSN VI [2421], pages
839–848, 2000. doi: 10.1007/3-540-45356-3 82. Fully available at http://www.lania.
mx/˜ccoello/corne00.ps.gz [accessed 2009-07-18]. CiteSeerx : 10.1.1.32.9195.
640. David Wolfe Corne, Martin J. Oates, and George D. Smith, editors. Telecommunications Opti-
mization: Heuristic and Adaptive Techniques. John Wiley & Sons Ltd.: New York, NY, USA,
September 2000. doi: 10.1002/047084163X. isbn: 047084163X and 0471988553. Google
Books ID: EgNTAAAAMAAJ. OCLC: 43187002, 43903625, 50745182, and 123099960.
641. David Wolfe Corne, Zbigniew Michalewicz, Robert Ian McKay, Ágoston E. Eiben, David B.
Fogel, Carlos M. Fonseca, Günther R. Raidl, Kay Chen Tan, and Ali M. S. Zalzala, edi-
tors. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’05), Septem-
ber 2–5, 2005, Edinburgh, Scotland, UK. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-9363-5. Google Books ID: -QIVAAAACAAJ. OCLC: 62773008 and
423962661. Catalogue no.: 05TH8834.
642. F. Corno, G. Cumani, M. Sonza Reorda, and Giovanni Squillero. Efficient Machine-
Code Test-Program Induction. In CEC’02 [944], pages 1486–1491, 2002. Fully available
at http://www.cad.polito.it/pap/db/cec2002.pdf and http://www.cs.bham.
ac.uk/˜wbl/biblio/gp-html/corno_2002_emctpi.html [accessed 2008-09-17].
643. Gérard Cornuéjols. Revival of the Gomory cuts in the 1990’s. Annals of Operations Research,
149(1):63–66, February 2007, Springer Netherlands: Dordrecht, Netherlands and J. C. Baltzer
AG, Science Publishers: Amsterdam, The Netherlands. doi: 10.1007/s10479-006-0100-1. Fully
available at http://integer.tepper.cmu.edu/webpub/gomory.pdf [accessed 2009-07-05].
644. Pablo Cortés Achedad, Luis Onieva Giménez, Jesús Muñuzuri Sanz, and José Guadix
Martı́n. A Revision of Evolutionary Computation Techniques in Telecommunications
and an Application for the Network Global Planning Problem. In Success in Evolu-
tionary Computation [3014], pages 239–262. Springer-Verlag: Berlin/Heidelberg, 2008.
doi: 10.1007/978-3-540-76286-7 11.
645. Carlos Cotta and Peter Cowling, editors. Proceedings of the 9th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP’09), April 15–17,
2009, Eberhard-Karls-Universität Tübingen, Fakultät für Informations- und Kognitionswis-
senschaften: Tübingen, Germany, volume 5482/2009 in Theoretical Computer Science and
General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-642-01009-5. isbn: 3-642-01008-3.
646. Carlos Cotta and Jano I. van Hemert, editors. Proceedings of the 7th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP’07), April 11–13,
2007, València, Spain, volume 4446/2007 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany.
647. Richard Courant. Variational Methods for the Solution of Problems of Equilibrium and
Vibrations. Bulletin of the American Mathematical Society, 49(1):1–23, 1943, American Mathe-
matical Society (AMS): Providence, RI, USA. doi: 10.1090/S0002-9904-1943-07818-4. Fully avail-
able at http://projecteuclid.org/euclid.bams/1183504922 and http://www.
ams.org/bull/1943-49-01/S0002-9904-1943-07818-4/home.html [accessed 2008-11-
15]. Zentralblatt MATH identifier: 0063.00985. Mathematical Reviews number (Math-
SciNet): MR0007838.
648. Steven H. Cousins. Species Diversity Measurement: Choosing the Right Index. Trends
in Ecology and Evolution (TREE), 6(6):190–192, June 1991, Elsevier Science Pub-
lishers B.V.: Amsterdam, The Netherlands and Cell Press: St. Louis, MO, USA.
doi: 10.1016/0169-5347(91)90212-G.
649. Peter Cowling, Graham Kendall, and Eric Soubeiga. A Hyperheuristic Ap-
proach to Scheduling a Sales Summit. In PATAT’00 [452], pages 176–190, 2000.
doi: 10.1007/3-540-44629-X 11. Fully available at http://www.asap.cs.nott.ac.
uk/publications/pdf/exs_patat01.pdf and http://www.cs.nott.ac.uk/˜gxk/
aim/2009/reading/hh_paper_1.pdf [accessed 2011-04-23]. CiteSeerx : 10.1.1.63.7949.
650. Louis Anthony Cox, Jr., Lawrence Davis, and Yuping Qiu. Dynamic Anticipatory Routing In
Circuit-Switched Telecommunications Networks. In Handbook of Genetic Algorithms [695],
pages 124–143. Thomson Publishing Group, Inc.: Stamford, CT, USA, 1991.
651. Teodor Gabriel Crainic and Gilbert Laporte, editors. Fleet Management and Logistics.
Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Nor-
well, MA, USA, 1998. isbn: 0792381610. Google Books ID: IXed6zq9ZigC.
652. Nichael Lynn Cramer. A Representation for the Adaptive Generation of Simple Sequential
Programs. In ICGA’85 [1131], pages 183–187, 1985. Fully available at http://www.sover.
net/˜nichael/nlc-publications/icga85/index.html [accessed 2007-09-06].
653. Ronald L. Crepeau. Genetic Evolution of Machine Language Software. In Proceedings of
the Workshop on Genetic Programming: From Theory to Real-World Applications [2326],
pages 121–134, 1995. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/crepeau_1995_GEMS.html [accessed 2008-09-17]. CiteSeerx : 10.1.1.61.7001.
654. Matej Črepinšek, Marjan Mernik, and Shih-Hsi Liu. Analysis of Exploration and Exploita-
tion in Evolutionary Algorithms by Ancestry Trees. International Journal of Innovative
Computing and Applications (IJICA), 3(1), January 2011, InderScience Publishers: Geneva,
Switzerland. doi: 10.1504/IJICA.2011.037947.
655. H. Brown Cribbs, III and Robert Elliott Smith. What Can I Do With a Learning Classifier
System? In Industrial Applications of Genetic Algorithms [1496], pages 299–319. CRC Press,
Inc.: Boca Raton, FL, USA, 1998.
656. Sven F. Crone, Stefan Lessmann, and Robert Stahlbock, editors. Proceedings of the 2006
International Conference on Data Mining (DMIN’06), June 26–29, 2006, Las Vegas, NV,
USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-004-3.
657. Helena Cronin. The Ant and the Peacock – Altruism and Sexual Selection from Darwin to
Today. Cambridge University Press: Cambridge, UK, 1991. isbn: 0-521-32937-X and
0-521-45765-3. Google Books ID: SwEnVmakCB0C. OCLC: 29778586, 444540292, and
490467682. GBV-Identification (PPN): 040676250 and 232802165. With a foreword by
John Maynard Smith.
658. Mark Crosbie and Gene Spafford. Applying Genetic Programming to Intrusion De-
tection. In Proceedings of 1995 AAAI Fall Symposium Series, Genetic Programming
Track [1606], 1995. Fully available at http://ftp.cerias.purdue.edu/pub/papers/
mark-crosbie/mcrosbie-spaf-AAAI.ps.Z [accessed 2008-06-17]. The paper appeared also
as technical report COAST TR 95-05 of COAST Laboratory, Deptartment of Computer
Sciences, Purdue University, West Lafayette, IN, USA.
659. Jack L. Crosby. Computer Simulation in Genetics. John Wiley & Sons Ltd.: New York, NY,
USA, January 1973. isbn: 0-4711-8880-8.
660. Claudia Crosio, Francesco Cecconi, Paolo Mariottini, Gianni Cesareni, Sydney Brenner,
and Francesco Amaldi. Fugu Intron Oversize Reveals the Presence of U15 snoRNA Cod-
ing Sequences in Some Introns of the Ribosomal Protein S3 Gene. Genome Research, 6(12):
1227–1231, December 1996. Fully available at http://www.genome.org/cgi/content/
abstract/6/12/1227 [accessed 2008-03-23].
661. Proceedings of the Second Conference on Artificial General Intelligence (AGI’09), March 6–9,
2009, Crowne Plaza National Airport: Arlington, VA, USA.
662. Li Ying Cui, Soundar R. T. Kumara, John Jung-Woon Yoo, and Fatih Cavdur. Large-Scale
Network Decomposition and Mathematical Programming Based Web Service Composition.
In CEC’09 [1245], pages 511–514, 2009. doi: 10.1109/CEC.2009.91. INSPEC Accession
Number: 10839128. See also [1571, 1999, 2064–2067, 3034].
663. Xunxue Cui and Chuang Lin. A Multiobjective Genetic Algorithm for Distributed Database
Management. In WCICA’04 [1388], pages 2117–2121, volume 3, 2004. doi: 10.1109/W-
CICA.2004.1341959. INSPEC Accession Number: 8143777.
664. Francesco Cupertino, Ernesto Mininno, and David Naso. Compact Genetic Algorithms for
the Optimization of Induction Motor Cascaded Control. In IEMDC’07 [1391], pages 82–87,
volume 2, 2007. doi: 10.1109/IEMDC.2007.383557. INSPEC Accession Number: 9892564.
665. Djurdje Cvijović and Jacek Klinowski. Taboo Search: An Approach to the Multiple Minima
Problem. Science Magazine, 267(5198):664–666, February 3, 1995, American Association for
the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stan-
ford University): Cambridge, MA, USA. doi: 10.1126/science.267.5198.664. Fully available
at http://www-klinowski.ch.cam.ac.uk/pdfs/244.pdf [accessed 2010-10-04]. PubMed
ID: 17745843.
666. Hans Czap, Rainer Unland, Cherif Branki, and Huaglory Tianfield, editors. Self-Organization
and Autonomic Informatics (I) – International Conference on Self-Organization and Adap-
tation of Multi-agent and Grid Systems (SOAS’2005), December 11–13, 2005, University of
Paisley: Glasgow, Scotland, UK, Frontiers in Artificial Intelligence and Applications. IOS
Press: Amsterdam, The Netherlands. isbn: 1-58603-577-0 and 1601291337. Google
Books ID: 0wp9iwroM9gC and tPP8OgAACAAJ.
667. Zbigniew J. Czech and Piotr Czarnas. Parallel Simulated Annealing for the Vehicle Routing
problem with Time Windows. In PDP’02 [1383], pages 376–383, 2002. doi: 10.1109/EM-
PDP.2002.994313. CiteSeerx : 10.1.1.16.5766.
668. António Gaspar Lopes da Cunha and José António Colaço Gomes Covas. RPSGAe - Re-
duced Pareto Set Genetic Algorithm: A Multiobjective Algorithm with Elitism: Application
to Polymer Extrusion. Xavier Gandibleux, Marc Sevaux, Kenneth Sörensen, and Vincent
T’kindt, editors. Fully available at http://www2.lifl.fr/PM2O/Reunions/04112002/
gaspar.pdf [accessed 2007-09-21]. Poster on the joint PM2O-EU/ME meeting.
669. Marcus Vinicius Carvalho da Silva, Nadia Nedjah, and Luiza de Macedo Mourelle. Evolution-
ary IP Assignment for Efficient NoC-based System Design using Multi-objective Optimiza-
tion. In CEC’09 [1350], pages 2257–2264, 2009. doi: 10.1109/CEC.2009.4983221. INSPEC
Accession Number: 10688816.
670. Cihan H. Dagli, Anna L. Buczak, Joydeep Ghosh, Mark J. Embrechts, and Okan Erson,
editors. Intelligent Engineering Systems through Artificial Neural Networks – Smart Engi-
neering System Design: Neural Networks, Fuzzy Logic, EP, Complex Systems and Artificial
Life – Proceedings of the Artificial Neural Networks in Engineering Conference (ANNIE’03),
November 2–5, 2003, St. Louis, MO, USA, volume 13. American Society of Mechanical En-
gineers (ASME): St. Louis, MO, USA.
671. Wei Dai, Paul Moynihan, Juanqiong Gou, Ping Zou, Xi Yang, Tiedong Chen, and Xin Wan.
Services Oriented Knowledge-based Supply Chain Application. In SCContest’07 [1374], pages
660–667, 2007. doi: 10.1109/SCC.2007.106. INSPEC Accession Number: 9869254.
672. Hōsei Daigaku, Shietung Peng, Vladimir V. Savchenko, and Shuichi Yukita, editors. First
International Symposium on Cyber Worlds: Theory and Practices (CW’02), November 6–8,
2002. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-1862-1.
Google Books ID: lxIOAAAACAAJ. OCLC: 51208033, 65194958, 71489103, and
423965835.
673. R. J. Dakin. A Tree-Search Algorithm for Mixed Integer Programming Problems. The
Computer Journal, Oxford Journals, 8(3):250–255, 1965, British Computer Society: Swindon,
UK. doi: 10.1093/comjnl/8.3.250.
674. Gerard E. Dallal. The Little Handbook of Statistical Practice. Tufts University, Jean Mayer
USDA Human Nutrition Research Center on Aging, Biostatistics Unit: Boston, MA, USA,
July 16, 2008. Fully available at http://www.statisticalpractice.com/ [accessed 2008-
08-15].
675. Hai H. Dam, Hussein A. Abbass, and Chris Lokan. DXCS: An XCS System for Distributed
Data Mining. In GECCO’05 [304], pages 1883–1890, 2005. Fully available at http://doi.
acm.org/10.1145/1068009.1068326 [accessed 2007-09-12]. See also [676].
676. Hai H. Dam, Hussein A. Abbass, and Chris Lokan. DXCS: An XCS System for Distributed
Data Mining. Technical Report TR-ALAR-200504002, University of New South Wales, School
of Information Technology and Electrical Engineering, Artificial Life and Adaptive Robotics
Laboratory: Canberra, Australia, 2005. Fully available at http://www.itee.adfa.edu.
au/˜alar/techreps/200504002.pdf [accessed 2007-09-12]. See also [675].
677. David B. D’Ambrosio and Kenneth Owen Stanley. A Novel Generative Encoding for Ex-
ploiting Neural Network Sensor and Output Geometry. In GECCO’07-I [2699], pages 974–
981, 2007. doi: 10.1145/1276958.1277155. Fully available at http://eplex.cs.ucf.edu/
papers/dambrosio_gecco07.pdf [accessed 2011-06-19].
678. Martin Damsbo, Brian S. Kinnear, Matthew R. Hartings, Peder T. Ruhoff, Martin F. Jar-
rold, and Mark A. Ratner. Application of Evolutionary Algorithm Methods to Polypep-
tide Folding: Comparison with Experimental Results for Unsolvated Ac-(Ala-Gly-Gly)5-LysH+.
Proceedings of the National Academy of Sciences of the United States of Amer-
ica (PNAS), 101(19):7215–7222, May 2004, National Academy of Sciences: Washington,
DC, USA. Fully available at http://www.pnas.org/cgi/reprint/101/19/7215.pdf?
ck=nck and http://www.pnas.org/content/vol101/issue19/ [accessed 2007-08-27].
679. Grégoire Danoy, Bernabé Dorronsoro Dı́az, and Pascal Bouvry. Overcoming Partitioning in
Large Ad Hoc Networks Using Genetic Algorithms. In GECCO’09-I [2342], pages 1347–1354,
2009. doi: 10.1145/1569901.1570082.
680. George Bernhard Dantzig and J. H. Ramser. The Truck Dispatching Problem. Management
Science, 6(1):80–91, October 1959, Institute for Operations Research and the Management
Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cam-
bridge, MA, USA. doi: 10.1287/mnsc.6.1.80.
681. Andrea Pohoreckyj Danyluk, Léon Bottou, and Michael Littman, editors. Proceedings of the
26th International Conference on Machine Learning (ICML’09), June 14–18, 2009, Montréal,
QC, Canada, volume 382 in ACM International Conference Proceeding Series (AICPS). ACM
Press: New York, NY, USA. Fully available at http://www.machinelearning.org/
archive/icml2009/abstracts.html [accessed 2010-06-23].
682. Ludwig Danzer and Victor Klee. Lengths of Snakes in Boxes. Journal of Combinatorial
Theory, 2(3):258–265, May 1967, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. doi: 10.1016/S0021-9800(67)80026-7.
683. Paul J. Darwen and Xin Yao. Every Niching Method has its Niche: Fitness Shar-
ing and Implicit Sharing Compared. In PPSN IV [2818], pages 398–407, 1996.
doi: 10.1007/3-540-61723-X 1004. CiteSeerx : 10.1.1.20.9897 and 10.1.1.56.4202.
684. Charles Darwin. On the Origin of Species by Means of Natural Selection, or the Preserva-
tion of Favoured Races in the Struggle for Life. John Murray: London, UK, 6th edition,
November 24, 1859. Fully available at http://www.gutenberg.org/etext/1228 [ac-
cessed 2007-08-05]. Google Books ID: 6gcpNAAACAAJ and nNQNAAAAYAAJ. Newly published as
[685].
685. Charles Darwin, author. Gillian Beer, editor. On the Origin of Species, Oxford World’s Clas-
sics. Oxford University Press, Inc.: New York, NY, USA, May 1998. isbn: 0-19-283438-X.
Google Books ID: LDrPI52uFQsC. OCLC: 39117382. New edition of [684].
686. Erasmus Darwin. Zoönomia; or, the Organic Laws of Life, volume I. J. Johnson: St. Paul’s
Church-Yard, London, UK, second corrected edition, 1796. Fully available at http://www.
gutenberg.org/etext/15707 [accessed 2008-09-10]. Entered at Stationers’ Hall.
687. Angan Das and Ranga Vemuri. A Self-learning Optimization Technique for Topol-
ogy Design of Computer Networks. In EvoWorkshops’08 [1051], pages 38–51, 2008.
doi: 10.1007/978-3-540-78761-7 5.
688. Indraneel Das and John E. Dennis. A Closer Look at Drawbacks of Minimizing Weighted
Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Tech-
nical Report TR96.36, Rice University, Department of Computational & Applied Mathemat-
ics (CAAM): Houston, TX, USA, December 1996. Fully available at http://www.caam.
rice.edu/tech_reports/1996/TR96-36.ps [accessed 2009-07-18]. See also [689].
689. Indraneel Das and John E. Dennis. A Closer Look at Drawbacks of Minimizing Weighted
Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Struc-
tural Optimization, 14(1):63–69, August 1997, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BF01197559. See also [688].
690. Dipankar Dasgupta, Fernando Nino, Deon Garrett, Koyel Chaudhuri, Soujanya Medapati,
Aishwarya Kaushal, and James Simien. A Multiobjective Evolutionary Algorithm for the
Task based Sailor Assignment Problem. In GECCO’09-I [2342], pages 1475–1482, 2009.
doi: 10.1145/1569901.1570099. Fully available at http://ais.cs.memphis.edu/files/
papers/t13fp621-dasguptaPS.pdf [accessed 2010-09-21].
691. Yuval Davidor. Epistasis Variance: Suitability of a Representation to Genetic Algorithms.
Complex Systems, 4(4):369–383, 1990, Complex Systems Publications, Inc.: Champaign, IL,
USA, Stephen Wolfram and Todd Rowland, editors.
692. Yuval Davidor. Epistasis Variance: A Viewpoint on GA-Hardness. In FOGA’90 [2562], pages
23–35, 1990.
693. Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors. Proceedings of the
Third Conference on Parallel Problem Solving from Nature; International Conference on
Evolutionary Computation (PPSN III), October 9–14, 1994, Jerusalem, Israel, volume
866/1994 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/3-540-58484-6. isbn: 0387584846 and 3-540-58484-6. Google
Books ID: n2lXlC5mv68C. OCLC: 31132760, 190832323, 202350473, 311905312, and
422861493.
694. Lawrence Davis, editor. Genetic Algorithms and Simulated Annealing, Research Notes in
Artificial Intelligence. Pitman: London, UK, October 1987–1990. isbn: 0273087711 and
0934613443. Google Books ID: edfSSAAACAAJ. OCLC: 16355405, 257879939, and
476337140. Library of Congress Control Number (LCCN): 87021357.
695. Lawrence Davis, editor. Handbook of Genetic Algorithms, VNR Computer Library. Thom-
son Publishing Group, Inc.: Stamford, CT, USA, January 1991. isbn: 0-442-00173-8
and 1850328250. Google Books ID: vTG5PAAACAAJ. OCLC: 23081440, 36495685,
231258310, and 300431834. Library of Congress Control Number (LCCN): 90012823.
GBV-Identification (PPN): 01945077X and 276468937. LC Classification: QA402.5 .H36
1991.
696. Randall Davis and Jonathan King. An Overview of Production Systems. Technical Report
STAN-CS-75-524, Stanford University, Computer Science Department: Stanford, CA, USA,
October 1975. See also [697].
697. Randall Davis and Jonathan King. An Overview of Production Systems. In Machine Intel-
ligence 8 [874], pages 300–334, 1977. See also [696].
698. Brian D. Davison and Khaled Rasheed. Effect of Global Parallelism on a Steady State GA.
In Evolutionary Computation and Parallel Processing Workshop [482], pages 167–170, 1999.
Fully available at http://www.cse.lehigh.edu/˜brian/pubs/1999/cec99/pgado.
pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.112.5296. See also [2270].
699. Richard Dawkins. The Evolution of Evolvability. In Artificial Life’87 [1666], pages 201–220,
1987.
700. Richard Dawkins. The Selfish Gene. Oxford University Press, Inc.: New York, NY,
USA, 1st/2nd edition, 1976–October 1989. isbn: 0-192-86092-5. Google Books
ID: WkHO9HI7koEC.
701. Richard Dawkins. Climbing Mount Improbable. Penguin Books: London, UK and W.W.
Norton & Company: New York, NY, USA, 1st edition, 1996. isbn: 0140179186, 0141026170,
and 0670850187. Google Books ID: 5XUYHAAACAAJ, BuKaKAAACAAJ, TPzqAAAACAAJ, and
TvzqAAAACAAJ. asin: 0393039307 and 0393316823.
702. Sérgio Granato de Araújo, Aloysio de Castro Pinto Pedroza, and Antônio Carneiro de
Mesquita Filho. Evolutionary Synthesis of Communication Protocols. In ICT’03 [1776], pages
986–993, volume 2, 2003. doi: 10.1109/ICTEL.2003.1191573. Fully available at http://www.
gta.ufrj.br/ftp/gta/TechReports/AMP03a.pdf [accessed 2008-06-21]. See also [703, 705].
703. Sérgio Granato de Araújo, Aloysio de Castro Pinto Pedroza, and Antônio Carneiro de
Mesquita Filho. Uma Metodologia de Projeto de Protocolos de Comunicação Baseada em
Técnicas Evolutivas. In SBrT’03 [1274], 2003. Fully available at http://www.gta.ufrj.
br/ftp/gta/TechReports/AMP03e.pdf [accessed 2008-06-21]. See also [702, 704].
704. Sérgio Granato de Araújo, Antônio Carneiro de Mesquita Filho, and Aloysio de Castro Pinto
Pedroza. Sı́ntese de Circuitos Digitais Otimizados via Programação Genética. In SEM-
ISH’03 [3003], pages 273–285, volume III, 2003. Fully available at http://www.gta.ufrj.
br/ftp/gta/TechReports/AMP03d.pdf [accessed 2008-06-21].
705. Sérgio Granato de Araújo, Antônio Carneiro de Mesquita Filho, and Aloysio C. P. Pedroza.
A Scenario-Based Approach to Protocol Design Using Evolutionary Techniques. In EvoWork-
shops’04 [2254], pages 178–187, 2004. See also [702].
706. Bart de Boer. Classifier Systems: A useful Approach to Machine Learning?, IR94-02. Mas-
ter’s thesis, Leiden University: Leiden, The Netherlands, August 1994, Ida Sprinkhuizen-
Kuyper and Egbert J. W. Boers, Supervisors. Fully available at ftp://ftp.cs.bham.
ac.uk/pub/authors/T.Kovacs/lcs.archive/DeBoer1994a.ps.gz [accessed 2010-12-15].
CiteSeerx : 10.1.1.38.6367.
707. Leandro Nunes de Castro, Fernando José Von Zuben, and Helder Knidel, editors. Pro-
ceedings of the 6th International Conference on Artificial Immune Systems (ICARIS’07),
August 26–29, 2007, Santos, Brazil, volume 4628 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
708. Ivanoe de Falco, Antonio Della Cioppa, Domenico Maisto, Umberto Scafuri, and Ernesto
Tarantino. Extremal Optimization as a Viable Means for Mapping in Grids. In EvoWork-
shops’09 [1052], pages 41–50, 2009. doi: 10.1007/978-3-642-01129-0 5.
709. Edwin D. de Jong, Richard A. Watson, and Jordan B. Pollack. Reducing Bloat and Pro-
moting Diversity using Multi-Objective Methods. In GECCO’01 [2570], pages 11–18, 2001.
Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/jong_2001_
gecco.html and http://www.demo.cs.brandeis.edu/papers/rbpd_gecco01.ps.
gz [accessed 2009-07-09]. CiteSeerx : 10.1.1.28.4549.
710. Gerdien de Jong. Evolution of Phenotypic Plasticity: Patterns of Plasticity and the Emer-
gence of Ecotypes. New Phytologist, 166(1):101–118, April 2005, New Phytologist Trust and
Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1111/j.1469-8137.2005.01322.x.
711. Kenneth Alan De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Sys-
tems. PhD thesis, University of Michigan: Ann Arbor, MI, USA, August 1975, John Henry
Holland, Larry K. Flanigan, Richard A. Volz, and Bernard P. Zeigler, Committee members.
Fully available at http://cs.gmu.edu/˜eclab/kdj_thesis.html [accessed 2007-08-17]. Or-
der No.: AAI7609381.
712. Kenneth Alan De Jong. On Using Genetic Algorithms to Search Program Spaces. In
ICGA’87 [1132], pages 210–216, 1987.
713. Kenneth Alan De Jong. Learning with Genetic Algorithms: An Overview. Machine Learning,
3(2-3):121–138, October 1988, Kluwer Academic Publishers: Norwell, MA, USA and Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00113894.
714. Kenneth Alan De Jong. Genetic Algorithms are NOT Function Optimizers. In
FOGA’92 [2927], pages 5–17, 1992. Fully available at http://www.mli.gmu.edu/
papers/91-95/92-12.pdf [accessed 2011-05-30]. CiteSeerx : 10.1.1.161.5655.
715. Kenneth Alan De Jong. Evolutionary Computation: A Unified Approach, volume 4 in Com-
plex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, February 2006.
isbn: 0262041944 and 8120330021. Google Books ID: OIRQAAAAMAAJ. OCLC: 46500047,
182530408, and 276452339.
716. Kenneth Alan De Jong and William M. Spears. Learning Concept Classification Rules
using Genetic Algorithms. In IJCAI’91-II [1988], pages 651–656, 1991. Fully avail-
able at http://citeseer.ist.psu.edu/dejong91learning.html [accessed 2007-09-12].
CiteSeerx : 10.1.1.104.4397 and 10.1.1.56.5640.
717. Kenneth Alan De Jong, William M. Spears, and Diana F. Gordon. Using Genetic
Algorithms for Concept Learning. Machine Learning, 13:161–188, 1993, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
Fully available at http://www.aic.nrl.navy.mil/papers/1993/AIC-93-050.ps.
Z and http://www.cs.uwyo.edu/˜dspears/papers/mlj93.pdf [accessed 2010-12-14].
CiteSeerx : 10.1.1.115.8585 and 10.1.1.51.8124.
718. Kenneth Alan De Jong, David B. Fogel, and Hans-Paul Schwefel. A History of Evolution-
ary Computation. In Evolutionary Computation 1: Basic Algorithms and Operators [174],
Chapter 6, page 40. Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back,
Bristol, UK, 2000.
719. Kenneth Alan De Jong, Riccardo Poli, and Jonathan E. Rowe, editors. Foundations of Genetic
Algorithms 7 (FOGA’02), September 4–6, 2002, Torremolinos, Spain. Morgan Kaufmann:
San Mateo, CA, USA. isbn: 0-12-208155-2. Published in 2003.
720. Pierre-Simon Marquis de Laplace. Théorie Analytique des Probabilités (Analytical Theory
of Probability). M. V. Courcier, Imprimeur-Libraire pour les Mathématiques, quai des Au-
gustins, no. 57: Paris, France, 1812. Google Books ID: 6MRLAAAAMAAJ, VWuGOwAACAAJ,
VmuGOwAACAAJ, dR6 PAAACAAJ, eB6 PAAACAAJ, and nQwAAAAAMAAJ. Première Partie.
721. Luis de-Marcos, José J. Martı́nez, José A. Gutiérrez, Roberto Barchino, and José M.
Gutiérrez. A New Sequencing Method in Web-Based Education. In CEC’09 [1350], pages
3219–3225, 2009. doi: 10.1109/CEC.2009.4983352. INSPEC Accession Number: 10688947.
722. Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck. Philosophie zoologique –
ou Exposition des considérations relatives à l’histoire naturelle des Animaux; à la diver-
sité de leur organisation et des facultés qu’ils en obtiennent; . . . . Dentu: Paris, France
and J. B. Baillière Liberaire: Paris, France, 1809–1830. isbn: 1412116465. Fully avail-
able at http://www.lamarck.cnrs.fr/ice/ice_book_detail.php?type=text&
bdd=lamarck&table=ouvrages_lamarck&bookId=29 [accessed 2008-09-10]. Google Books
ID: 0gkOAAAAQAAJ, L6qAG6ZPgj4C, rlAKHKY8RboC, and vUIDAAAAQAAJ. Philosophie
zoologique – ou Exposition des considérations relatives à l’histoire naturelle des Animaux; à
la diversité de leur organisation et des facultés qu’ils en obtiennent; aux causes physiques qui
maintiennent en eux la vie et donnent lieu aux mouvements qu’ils exécutant; enfin, à celles
qui produisent les unes le sentiment, et les autres l’intelligence de ceux qui en sont doués.
723. Luc de Raedt and Arno Siebes, editors. 5th European Conference on Principles of Data
Mining and Knowledge Discovery (PKDD’01), September 3–5, 2001, Freiburg, Germany,
volume 2168 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44794-6.
isbn: 3-540-42534-9. Google Books ID: U9IXQfcoqvwC.
724. Luc de Raedt and Stefan Wrobel, editors. Proceedings of the 22nd International Conference
on Machine Learning (ICML’05), August 7–11, 2005, University of Bonn: Bonn, North
Rhine-Westphalia, Germany, volume 119 in ACM International Conference Proceeding Series
(AICPS). ACM Press: New York, NY, USA. isbn: 1-59593-180-5.
725. Fabiano Luis de Sousa and Fernando Manuel Ramos. Function Optimization us-
ing Extremal Dynamics. In ICIPE’02 [2092], pages 115–119, volume 1, 2002.
CiteSeerx : 10.1.1.128.9802.
726. Fabiano Luis de Sousa and Walter Kenkiti Takahashi. Discrete Optimal De-
sign of Trusses by Generalized Extremal Optimization. In WCSMO6 [1226], 2005.
CiteSeerx : 10.1.1.76.8740.
727. Fabiano Luis de Sousa, Fernando Manuel Ramos, Pedro Paglione, and Roberto M. Girardi.
New Stochastic Algorithm for Design Optimization. AIAA Journal, 41(9):1808–1818, 2003,
American Institute of Aeronautics and Astronautics: Reston, VA, USA. doi: 10.2514/2.7299.
728. Fabiano Luis de Sousa, Valeri Vlassov, and Fernando Manuel Ramos. Generalized Extremal
Optimization: An Application in Heat Pipe Design. Applied Mathematical Modelling, 28
(10):911–931, October 2004, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
doi: 10.1016/j.apm.2004.04.004.
729. Dominique de Werra and Alain Hertz. Tabu Search Techniques: A Tutorial
and an Application to Neural Networks. OR Spectrum – Quantitative Approaches
in Management, 11(3):131–141, September 1989, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/BF01720782. Fully available at http://www.springerlink.de/content/
x25k97k0qx237553/fulltext.pdf [accessed 2008-03-27].
730. Thomas L. Dean, editor. Proceedings of the Sixteenth International Joint Conference
on Artificial Intelligence (IJCAI’99-I), July 31–August 6, 1999, City Conference Center:
Stockholm, Sweden, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-613-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-99-VOL-1/CONTENT/content.htm [accessed 2008-04-01]. See also [731, 2296].
731. Thomas L. Dean, editor. Proceedings of the Sixteenth International Joint Conference
on Artificial Intelligence (IJCAI’99-II), July 31–August 6, 1999, City Conference Cen-
ter: Stockholm, Sweden, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-613-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-99-VOL-2/CONTENT/content.htm [accessed 2008-04-01]. See also [730, 2296].
732. Thomas L. Dean and Kathleen McKeown, editors. Proceedings of the 9th National Conference
on Artificial Intelligence (AAAI’91), July 14–19, 1991, Anaheim, CA, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51059-6. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai91.php [accessed 2007-09-06].
2 volumes.
733. James S. DeArmon. A Comparison of Heuristics for the Capacitated Chinese Postman Prob-
lem. Master’s thesis, University of Maryland: College Park, MD, USA, 1981.
734. Ed Deaton, editor. Proceedings of the 1994 ACM Symposium on Applied Computing (SAC’94),
March 6–8, 1994, Phoenix Civic Plaza: Phoenix, AZ, USA. ACM Press: New York, NY, USA.
isbn: 0-89791-647-6. Google Books ID: uq1QAAAAMAAJ. OCLC: 31391245.
735. Kalyanmoy Deb. Genetic Algorithms for Multimodal Function Optimization. Master’s thesis,
Clearinghouse for Genetic Algorithms, University of Alabama: Tuscaloosa, AL, USA, 1989. TCGA
Report No. 89002.
736. Kalyanmoy Deb. Solving Goal Programming Problems Using Multi-Objective Genetic Algo-
rithms. In CEC’99 [110], pages 77–84, volume 1, 1999. doi: 10.1109/CEC.1999.781910.
737. Kalyanmoy Deb. Evolutionary Algorithms for Multi-Criterion Optimization in
Engineering Design. In EUROGEN’99 [1894], pages 135–161, 1999. Fully
available at http://www.lania.mx/˜ccoello/EMOO/deb99.ps.gz [accessed 2009-08-05].
CiteSeerx : 10.1.1.52.3761.
738. Kalyanmoy Deb. An Introduction to Genetic Algorithms. Sādhanā – Academy Proceedings in
Engineering Sciences, 24(4-5):293–315, August 1999, Indian Academy of Sciences: Bangalore,
India and Springer India: New Delhi, India. doi: 10.1007/BF02823145. Fully available
at http://www.iitk.ac.in/kangal/papers/sadhana.ps.gz [accessed 2007-07-28].
739. Kalyanmoy Deb. Nonlinear Goal Programming using Multi-Objective Genetic Al-
gorithms. The Journal of the Operational Research Society (JORS), 52(3):291–302,
March 2001, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Op-
erations Research Society: Birmingham, UK. doi: 10.1057/palgrave.jors.2601089. Fully
available at http://www.lania.mx/˜ccoello/EMOO/deb01d.ps.gz [accessed 2008-11-14].
CiteSeerx : 10.1.1.6.6775.
740. Kalyanmoy Deb. Multi-Objective Optimization Using Evolutionary Algorithms, Wiley In-
terscience Series in Systems and Optimization. John Wiley & Sons Ltd.: New York, NY,
USA, May 2001. isbn: 047187339X. Google Books ID: OSTn4GSy2uQC and ll Blt03u5IC.
OCLC: 46472121 and 46694962.
741. Kalyanmoy Deb. Genetic Algorithms for Optimization. KanGAL Report 2001002, Kan-
pur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering,
Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, 2001. Fully
available at http://www.iitk.ac.in/kangal/papers/isna.ps.gz [accessed 2009-07-09].
CiteSeerx : 10.1.1.6.9296.
742. Kalyanmoy Deb and David Edward Goldberg. An Investigation of Niche and Species For-
mation in Genetic Function Optimization. In ICGA’89 [2414], pages 42–50, 1989.
743. Kalyanmoy Deb and David Edward Goldberg. Sufficient Conditions for Deceptive and Easy
Binary Functions. Annals of Mathematics and Artificial Intelligence, 10(4):385–408, Decem-
ber 1994, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01531277. See also
[744].
744. Kalyanmoy Deb and David Edward Goldberg. Sufficient Conditions for Deceptive and Easy
Binary Functions. Technical Report 92001, Illinois Genetic Algorithms Laboratory (IlliGAL),
Department of Computer Science, Department of General Engineering, University of Illinois
at Urbana-Champaign: Urbana-Champaign, IL, USA, 1992. See also [743].
745. Kalyanmoy Deb and Dhish Kumar Saxena. On Finding Pareto-Optimal Solutions Through
Dimensionality Reduction for Certain Large-Dimensional Multi-Objective Optimization
Problems. KanGAL Report 2005011, Kanpur Genetic Algorithms Laboratory (KanGAL),
Department of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kan-
pur, Uttar Pradesh, India, December 2005. Fully available at http://www.iitk.ac.in/
kangal/papers/k2005011.pdf [accessed 2009-07-18].
746. Kalyanmoy Deb and J. Sundar. Reference Point Based Multi-Objective Optimization using
REFERENCES 1015
Evolutionary Algorithms. In GECCO’06 [1516], 2006. doi: 10.1145/1143997.1144112. See
also [753, 756].
747. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T Meyarivan. A Fast Elitist
Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-
II. KanGAL Report 200001, Kanpur Genetic Algorithms Laboratory (KanGAL), De-
partment of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kan-
pur, Uttar Pradesh, India, 2000. Fully available at http://vision.ucsd.edu/
˜sagarwal/nsga2.pdf and http://www.jeo.org/emo/deb00.ps.gz [accessed 2009-07-18].
CiteSeerx : 10.1.1.18.4257. See also [748, 749].
748. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T Meyarivan. A Fast Elitist Non-
Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. In PPSN
VI [2421], pages 849–858, 2000. doi: 10.1007/3-540-45356-3_83. Fully available at https://
eprints.kfupm.edu.sa/17643/1/17643.pdf [accessed 2008-10-20]. See also [747, 749].
749. Kalyanmoy Deb, Amrit Pratab, Samir Agrawal, and T Meyarivan. A Fast and Elitist Mul-
tiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 6(2):182–197, April 2002, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.996017. Fully available at http://dynamics.org/˜altenber/UH_
ICS/EC_REFS/MULTI_OBJ/DebPratapAgarwalMeyarivan.pdf [accessed 2009-07-17]. See
also [747, 748].
750. Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable Multi-
Objective Optimization Test Problems. In CEC’02 [944], pages 825–830, volume 1, 2002.
doi: 10.1109/CEC.2002.1007032. CiteSeerx : 10.1.1.18.7531. INSPEC Accession Num-
ber: 7321894.
751. Kalyanmoy Deb, Riccardo Poli, Wolfgang Banzhaf, Hans-Georg Beyer, Edmund K. Burke,
Paul J. Darwen, Dipankar Dasgupta, Dario Floreano, James A. Foster, Mark Harman,
Owen E. Holland, Pier Luca Lanzi, Lee Spector, Andrea G. B. Tettamanzi, and Dirk
Thierens, editors. Proceedings of the Genetic and Evolutionary Computation Conference, Part
I (GECCO’04-I), June 26–30, 2004, Red Lion Hotel: Seattle, WA, USA, volume 3102/2004
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-22344-4. Google Books ID: b7k G2KpRrQC. OCLC: 55857927, 181345757,
and 315013110. Library of Congress Control Number (LCCN): 2004107860. See also
[752, 1515, 2079].
752. Kalyanmoy Deb, Riccardo Poli, Wolfgang Banzhaf, Hans-Georg Beyer, Edmund K. Burke,
Paul J. Darwen, Dipankar Dasgupta, Dario Floreano, James A. Foster, Mark Harman,
Owen E. Holland, Pier Luca Lanzi, Lee Spector, Andrea G. B. Tettamanzi, and Dirk
Thierens, editors. Proceedings of the Genetic and Evolutionary Computation Conference,
Part II (GECCO’04-II), June 26–30, 2004, Red Lion Hotel: Seattle, WA, USA, volume
3103/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-22343-6. Google Books ID: ftKoWMXvHKoC. Library of Congress
Control Number (LCCN): 2004107860. See also [751, 1515, 2079].
753. Kalyanmoy Deb, J. Sundar, and Udaya Bhaskara Rao N. Reference Point Based Multi-
Objective Optimization using Evolutionary Algorithms. KanGAL Report 2005012, Kanpur
Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian
Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, December 2005. Fully
available at http://www.iitk.ac.in/kangal/papers/k2005012.pdf [accessed 2009-07-18].
See also [746, 756].
754. Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-Objective Test Problems, Link-
ages, and Evolutionary Methodologies. In GECCO’06 [1516], pages 1141–1148, 2006.
doi: 10.1145/1143997.1144179. See also [755].
755. Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-Objective Test Problems, Link-
ages, and Evolutionary Methodologies. KanGAL Report 2006001, Kanpur Genetic Algo-
rithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian Institute of
Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, January 2006. Fully available
at http://www.iitk.ac.in/kangal/papers/k2006001.pdf [accessed 2009-07-11]. See also
[754].
756. Kalyanmoy Deb, J. Sundar, Udaya Bhaskara Rao N., and Shamik Chaudhuri. Reference Point
Based Multi-Objective Optimization using Evolutionary Algorithms. International Journal
of Computational Intelligence Research (IJCIR), 2(3):273–286, 2006, Research India
Publications: Delhi, India. Fully available at http://www.lania.mx/˜ccoello/deb06b.pdf.gz
and http://www.ripublication.com/ijcirv2/ijcirv2n3_4.pdf [accessed 2009-07-18].
See also [746, 753].
757. Stefano Debattisti, Nicola Marlat, Luca Mussi, and Stefano Cagnoni. Implementation of
a Simple Genetic Algorithm within the CUDA Architecture. Technical Report, Università
degli Studi di Parma, Dipartimento di Ingegneria dell’Informazione: Parma, Italy, 2009. Fully
available at http://www.gpgpgpu.com/gecco2009/3.pdf [accessed 2011-11-14].
758. Rina Dechter and Judea Pearl. Generalized Best-First Search Strategies and the Optimality of
A*. Journal of the Association for Computing Machinery (JACM), 32(3):505–536, July 1985,
ACM Press: New York, NY, USA. doi: 10.1145/3828.3830. CiteSeerx : 10.1.1.89.3090.
759. Rina Dechter and Richard Sutton, editors. Proceedings of the Eighteenth National Con-
ference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of
Artificial Intelligence (AAAI’02, IAAI’02), July 28–August 1, 2002, Edmonton, Alberta,
Canada. AAAI Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/
Conferences/AAAI/aaai02.php and http://www.aaai.org/Conferences/IAAI/
iaai02.php [accessed 2007-09-06].
760. Garg Deepak, editor. Soft Computing. Allied Publishers: Vadodara, Gujarat, India, 2005.
isbn: 817764632X. Google Books ID: IkajJC9iGxMC.
761. Viktoriya Degeler, Ilče Georgievski, Alexander Lazovik, and Marco Aiello. Concept Mapping
for Faster QoS-Aware Web Service Composition. In SOCA’10 [1378], 2010. See also [48–
50, 320].
762. Frank K. H. A. Dehne, Todd Eavis, and Andrew Rau-Chaplin. Efficient Computation of
View Subsets. In DOLAP’07 [2556], pages 65–72, 2007. doi: 10.1145/1317331.1317343.
Fully available at http://dolap07.cs.aau.dk/dehne.pdf [accessed 2011-03-29].
CiteSeerx : 10.1.1.68.9167.
763. Karel Hrbacek and Thomas Jech. Introduction to Set Theory. Marcel Dekker, Inc.: New York, NY, USA, revised
and expanded, 3rd edition, June 22, 1999. isbn: 0-8247-7915-0.
764. Sarah Jane Delany and Michael Madden, editors. 18th Irish Conference on Artificial Intelli-
gence and Cognitive Science (AICS’07), August 29–31, 2007, Dublin Institute of Technology:
Dublin, Ireland.
765. Federico Della Croce, Roberto Tadei, and Giuseppe Volta. A Genetic Algorithm for
the Job Shop Problem. Computers & Operations Research, 22(1):15–24, January 1995,
Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0305-0548(93)E0015-L.
766. Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of
Machine Learning Research (JMLR), 7:1–30, January 2006, MIT Press: Cambridge, MA,
USA. Fully available at http://jmlr.csail.mit.edu/papers/volume7/demsar06a/
demsar06a.pdf [accessed 2010-10-19]. CiteSeerx : 10.1.1.141.3142. See also [1020].
767. Jean-Louis Deneubourg and Simon Goss. Collective Patterns and Decision-Making.
Ethology, Ecology & Evolution, 1(4):295–311, December 1989. Fully available
at http://www.ulb.ac.be/sciences/use/publications/JLD/53.pdf [accessed 2010-12-09].
768. Jean-Louis Deneubourg, Jacques M. Pasteels, and J. C. Verhaeghe. Probabilistic Behaviour
in Ants: A Strategy of Errors? Journal of Theoretical Biology, 105(2):259–271, 1983, El-
sevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press
Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0022-5193(83)80007-1.
769. Berna Dengiz, Fulya Altiparmak, and Alice E. Smith. Local Search Genetic Algorithm
for Optimal Design of Reliable Networks. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 1(3):179–188, September 1997, IEEE Computer Society: Washington, DC, USA.
Fully available at http://www.eng.auburn.edu/˜aesmith/publications/journal/
ieeeec.pdf [accessed 2009-09-01]. CiteSeerx : 10.1.1.40.8821.
770. Roozbeh Derakhshan, Frank K. H. A. Dehne, Othmar Korn, and Bela Stantic. Sim-
ulated Annealing For Materialized View Selection in Data Warehousing Environment.
In DBA’06 [1407], pages 89–94, 2006. Fully available at http://www.inf.ethz.ch/
personal/droozbeh/docs/503-036.pdf [accessed 2010-09-11]. See also [771].
771. Roozbeh Derakhshan, Bela Stantic, Othmar Korn, and Frank K. H. A. Dehne. Par-
allel Simulated Annealing for Materialized View Selection in Data Warehousing Envi-
ronments. In ICA3PP [372], pages 121–132, 2008. doi: 10.1007/978-3-540-69501-1_14.
CiteSeerx : 10.1.1.148.4815. See also [770].
772. Anna Derezińska. Advanced Mutation Operators Applicable in C# Programs. In
SET’06 [2366], pages 283–288, 2006.
773. Ulrich Derigs, editor. Optimization and Operations Research, Encyclopedia of Life Support
Systems (EOLSS). Developed under the Auspices of the UNESCO, Eolss Publishers: Oxford,
UK.
774. Proceedings of the 1978 ACM 7th Annual Computer Science Conference (CSC’78), Febru-
ary 21–23, 1978, Detroit, MI, USA.
775. I. Devarenne, Alexandre Caminada, H. Mabed, and T. Defaix. Adaptive Local Search for
a New Military Frequency Hopping Planning Problem. In EvoWorkshops’08 [1051], pages
11–20, 2008. doi: 10.1007/978-3-540-78761-7_2.
776. Vladan Devedžic, editor. Proceedings of the 25th IASTED International Multi-Conference on
Artificial Intelligence and Applications (AIAP’07), February 12–14, 2007, Innsbruck, Aus-
tria. ACTA Press: Anaheim, CA, USA. Partly available at http://www.iasted.org/
conferences/pastinfo-549.html [accessed 2011-04-09].
777. Alexandre Devert. Building Processes Optimization: Toward an Artificial Ontogeny based
Approach. PhD thesis, Université Paris-Sud, Ecole Doctorale d’Informatique: Paris, France
and Institut National de Recherche en Informatique et en Automatique (INRIA), Centre de
Recherche Saclay - Île-de-France: Orsay, France, May 2009, Marc Schoenauer and Nicolas
Bredèche, Advisors.
778. Alexandre Devert, Thomas Weise, and Ke Tang. A Study on Scalable Representations for
Evolutionary Optimization of Ground Structures. Evolutionary Computation, 20, 2012, MIT
Press: Cambridge, MA, USA. doi: 10.1162/EVCO_a_00054. PubMed ID: 22004002.
779. Yves Deville and Christine Solnon, editors. EPTCS 5: Proceedings 6th International Work-
shop on Local Search Techniques in Constraint Satisfaction, September 20, 2009, Lisbon,
Portugal. doi: 10.4204/EPTCS.5. arXiv ID: 0910.1404v1.
780. Yves Deville and Christine Solnon, editors. Proceedings of the 7th Workshop on Local Search
Techniques in Constraint Satisfaction, September 6, 2010, St Andrews, Scotland, UK.
781. Luc Devroye. Non-Uniform Random Variate Generation. Springer-Verlag London Lim-
ited: London, UK, 1986. isbn: 0-387-96305-7 and 3-540-96305-7. Fully available
at http://cg.scs.carleton.ca/˜luc/rnbookindex.html [accessed 2007-07-05]. Google
Books ID: J9KZAQAACAAJ and SJgWGQAACAAJ. OCLC: 260101691 and 300414888.
782. Patrik D’Haeseleer. Context Preserving Crossover in Genetic Programming. In
CEC’94 [1891], pages 256–261, volume 1, 1994. doi: 10.1109/ICEC.1994.350006.
CiteSeerx : 10.1.1.56.2785. INSPEC Accession Number: 4812326.
783. C. A. Dhote and M. S. Ali. Materialized View Selection in Data Warehousing: A Survey.
Journal of Applied Sciences, 9(3):401–414, 2009, Asian Network for Scientific Information
(ANSI): Faisalabad, Pakistan. doi: 10.3923/jas.2009.401.414. Fully available at http://
docsdrive.com/pdfs/ansinet/jas/2009/401-414.pdf [accessed 2011-04-13].
784. Gianni A. Di Caro. Ant Colony Optimization and its Application to Adaptive Routing
in Telecommunication Networks. PhD thesis, Applied Sciences, Polytechnic School, Uni-
versité Libre de Bruxelles: Brussels, Belgium, September 2004, Marco Dorigo, Advisor.
Partly available at http://www.idsia.ch/˜gianni/Papers/routing-chapters.pdf
and http://www.idsia.ch/˜gianni/Papers/thesis-abstract.pdf [accessed 2008-07-26].
See also [786].
785. Gianni A. Di Caro and Marco Dorigo. AntNet: Distributed Stigmergetic Control for
Communications Networks. Journal of Artificial Intelligence Research (JAIR), 9:317–
365, December 1998, AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI
Press: Menlo Park, CA, USA. Fully available at http://www.jair.org/media/544/
live-544-1748-jair.pdf [accessed 2009-07-21]. CiteSeerx : 10.1.1.53.4154. See also [786].
786. Gianni A. Di Caro and Marco Dorigo. Two Ant Colony Algorithms for Best-Effort Routing
in Datagram Networks. In PDCS’98 [1409], pages 541–546, 1998. Fully available at http://
citeseer.ist.psu.edu/dicaro98two.html and http://www.idsia.ch/˜gianni/
Papers/PDCS98.ps.gz [accessed 2008-07-26]. See also [784, 785].
787. Gianni A. Di Caro, Frederick Ducatelle, and Luca Maria Gambardella. Wireless Commu-
nications for Distributed Navigation in Robot Swarms. In EvoWorkshops’09 [1052], pages
21–30, 2009. doi: 10.1007/978-3-642-01129-0_3.
788. Cecilia Di Chio, Stefano Cagnoni, Carlos Cotta, Marc Ebner, Anikó Ekárt, Anna Isabel
Esparcia-Alcázar, Juan Julián Merelo-Guervós, Ferrante Neri, Mike Preuß, Hendrik Richter,
Julian Togelius, and Georgios N. Yannakakis, editors. Applications of Evolutionary Computa-
tion – Proceedings of EvoApplications 2011: EvoCOMPLEX, EvoGAMES, EvoIASP, EvoIN-
TELLIGENCE, EvoNUM, and EvoSTOC, Part 1 (EvoAPPLICATIONS’11), April 27–29,
2011, Torino, Italy, volume 6624 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-20525-5. Library of Congress Control Number (LCCN): 2011925061.
789. Robert P. Dick and Niraj K. Jha. MOGAC: A Multiobjective Genetic Algorithm for
Hardware-Software Co-synthesis of Hierarchical Heterogeneous Distributed Embedded Sys-
tems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,
17(10):920–935, October 1998, IEEE Circuits and Systems Society: Piscataway, NJ, USA.
doi: 10.1109/43.728914. Fully available at http://www.jeo.org/emo/dick98.ps.gz
[accessed 2009-07-31]. CiteSeerx : 10.1.1.46.5584. INSPEC Accession Number: 6092176.
790. Dirk Dickmanns, Jürgen Schmidhuber, and Andreas Winklhofer. Der Genetische Algo-
rithmus: Eine Implementierung in Prolog. Technische Universität München, Institut für
Informatik, Lehrstuhl Professor Radig: Munich, Bavaria, Germany, 1987. Fully avail-
able at http://www.idsia.ch/˜juergen/geneticprogramming.html [accessed 2007-11-01].
Fortgeschrittenenpraktikum (advanced practical course report).
791. Reinhard Diestel. Graph Theory, volume 173 in Graduate Texts in Mathematics. Springer
New York: New York, NY, USA, 3rd edition, 2006. isbn: 3540261834. Google Books
ID: aR2TMYQr2CMC.
792. Thomas G. Dietterich. Overfitting and Undercomputing in Machine Learning. ACM
Computing Surveys (CSUR), 27(3):326–327, 1995, ACM Press: New York, NY, USA.
doi: 10.1145/212094.212114. CiteSeerx : 10.1.1.55.2069.
793. Thomas G. Dietterich and William R. Swartout, editors. Proceedings of the 8th Na-
tional Conference on Artificial Intelligence (AAAI’90), July 29–August 3, 1990, Boston,
MA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA.
isbn: 0-262-51057-X. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai90.php [accessed 2007-09-06]. 2 volumes.
794. Jason Digalakis and Konstantinos Margaritis. A Parallel Memetic Algorithm for Solv-
ing Optimization Problems. In MIC’01 [2294], pages 121–125, 2001. Fully available
at http://citeseer.ist.psu.edu/digalakis01parallel.html and http://www.
it.uom.gr/people/digalakis/digiasfull.pdf [accessed 2007-09-12]. See also [795].
795. Jason Digalakis and Konstantinos Margaritis. Performance Comparison of Memetic Al-
gorithms. Journal of Applied Mathematics and Computation, 158:237–252, October 2004,
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.amc.2003.08.115. Fully avail-
able at http://www.complexity.org.au/ci/draft/draft/digala02/digala02s.
pdf and http://www.complexity.org.au/ci/vol10/digala02/digala02s.pdf
[accessed 2010-09-11]. CiteSeerx : 10.1.1.21.5495. See also [794].
796. Edsger Wybe Dijkstra. A Note on Two Problems in Connexion with Graphs. Numerische
Mathematik, 1:269–271, 1959. Fully available at http://www-m3.ma.tum.de/twiki/
pub/MN0506/WebHome/dijkstra.pdf [accessed 2008-06-24].
797. Werner Dilger. Einführung in die Künstliche Intelligenz. Chemnitz University of Technol-
ogy, Faculty of Computer Science, Chair of Artificial Intelligence (Künstliche Intelligenz):
Chemnitz, Sachsen, Germany, April 2006. The Lecture Notes of the Lecture on Artificial
Intelligence by Prof. Dilger from 2006.
798. Paolo Dini. Section 1: Science, New Paradigms. Subsection 1: A Scientific Foundation
for Digital Ecosystems. In Digital Business Ecosystems [1989]. Office for Official Publi-
cations of the European Communities: Luxembourg, 2007. Fully available at http://
www.digital-ecosystems.org/book/Section1.pdf and http://www.ieee-dest.
curtin.edu.au/2007/PDini.pdf [accessed 2008-11-09]. See also: Keynote Talk “Structure
and Outlook of Digital Ecosystems Research”, IEEE Digital Ecosystems Technologies
Conference, DEST07, Cairns, Australia, February 20–23, 2007. See also [799].
799. Paolo Dini. Digital Business Ecosystems, WP 18: Socio-economic Perspectives of Sus-
tainability and Dynamic Specification of Behaviour in Digital Business Ecosystems,
Deliverable 18.4: Report on Self-Organisation from a Dynamical Systems and Computer
Science Viewpoint. Information Society Technologies, March 5, 2007. Fully available
at http://files.opaals.org/DBE/deliverables/Del_18.4_DBE_Report_on_
self-organisation_from_a_dynamical_systems_and_computer_viewpoint.
pdf [accessed 2008-11-09]. Contract nr. 507953.
800. Yefim Dinitz. Dinitz’ Algorithm: The Original Version and Even’s Version. In Theoretical
Computer Science: Essays in Memory of Shimon Even [1093], pages 218–240. Springer-Verlag
GmbH: Berlin, Germany, 2006. Fully available at http://www.cs.bgu.ac.il/˜dinitz/
Papers/Dinitz_alg.pdf [accessed 2010-09-01].
801. Andrej Dobnikar, Nigel C. Steele, David W. Pearson, and Rudolf F. Albrecht, editors. Pro-
ceedings of the 4th International Conference on Artificial Neural Nets and Genetic Algorithms
(ICANNGA’99), April 6–9, 1999, Portorož, Slovenia. Birkhäuser Verlag: Basel, Switzer-
land and Springer-Verlag GmbH: Berlin, Germany. isbn: 3-211-83364-1. Google Books
ID: clKwynlfZYkC.
802. Karl Doerner, Manfred Gronalt, Richard F. Hartl, Marc Reimann, Christine Strauss,
and Michael Stummer. Savings Ants for the Vehicle Routing Problem. In EvoWork-
shops’02 [465], pages 11–20, 2002. doi: 10.1007/3-540-46004-7_2. Fully avail-
able at http://epub.wu-wien.ac.at/dyn/virlib/wp/mediate/epub-wu-01_1f1.
pdf?ID=epub-wu-01_1f1 [accessed 2008-10-27]. CiteSeerx : 10.1.1.6.8882. See also [2811].
803. Benjamin Doerr, Edda Happ, and Christian Klein. Crossover can provably be
useful in Evolutionary Computation. In GECCO’08 [1519], pages 539–546, 2008.
doi: 10.1145/1389095.1389202. Fully available at http://www.mpi-inf.mpg.de/
˜doerr/papers/cpuinec.pdf and http://www.mpi-inf.mpg.de/˜edda/papers/
gecco2008_1.pdf [accessed 2011-11-23]. CiteSeerx : 10.1.1.163.9588. See also [804].
804. Benjamin Doerr, Edda Happ, and Christian Klein. Crossover can provably be useful in
Evolutionary Computation. Theoretical Computer Science, pages 1–17, 2010, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.tcs.2010.10.035. See also [803].
805. Pedro Domingos and Michael Pazzani. On the Optimality of the Simple Bayesian Clas-
sifier under Zero-One Loss. Machine Learning, 29(2-3):103–130, November 1997, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1007413511361. Fully available at http://citeseer.ist.psu.
edu/old/domingos97optimality.html and http://www.ics.uci.edu/˜pazzani/
Publications/mlj97-pedro.pdf [accessed 2009-09-10].
806. Wolfgang Domschke. Logistik, Rundreisen und Touren, Oldenbourgs Lehr- und Handbücher
der Wirtschafts- u. Sozialwissenschaften. Oldenbourg Verlag: Munich, Bavaria, Germany,
fourth edition, 1997. isbn: 978-3-486-24273-7.
807. Marco Dorigo and Christian Blum. Ant Colony Optimization Theory: A Survey. Theo-
retical Computer Science, 344(2-3), November 17, 2005, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.tcs.2005.05.020. Fully available at http://code.ulb.ac.be/
dbfiles/DorBlu2005tcs.pdf [accessed 2007-08-05].
808. Marco Dorigo and Luca Maria Gambardella. Ant Colony System: A Cooperative Learning
Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Compu-
tation (IEEE-EC), 1(1):53–66, April 1997, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.585892. Fully available at http://www.idsia.ch/˜luca/acs-ec97.
pdf [accessed 2009-06-27]. CiteSeerx : 10.1.1.49.7702.
809. Marco Dorigo and Thomas Stützle. Ant Colony Optimization, Bradford Books. MIT
Press: Cambridge, MA, USA, July 1, 2004. isbn: 0-262-04219-3. Google Books
ID: aefcpY8GiEC.
810. Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. The Ant System: Optimization by a
Colony of Cooperating Agents. IEEE Transactions on Systems, Man, and Cybernetics – Part
B: Cybernetics, 26(1):29–41, February 1996, IEEE Systems, Man, and Cybernetics Society:
New York, NY, USA. doi: 10.1109/3477.484436. Fully available at ftp://iridia.ulb.ac.
be/pub/mdorigo/journals/IJ.10-SMC96.pdf and http://www.agent.ai/doc/
upload/200302/dori96.pdf [accessed 2009-06-26]. CiteSeerx : 10.1.1.50.6491. INSPEC
Accession Number: 5191313.
811. Marco Dorigo, Gianni A. Di Caro, and Luca Maria Gambardella. Ant Algorithms for Discrete
Optimization. Technical Report IRIDIA/98-10, Université Libre de Bruxelles: Brussels,
Belgium, 1998. CiteSeerx : 10.1.1.72.9591. See also [812].
812. Marco Dorigo, Gianni A. Di Caro, and Luca Maria Gambardella. Ant Algorithms for
Discrete Optimization. Artificial Life, 5(2):137–172, Spring 1999, MIT Press: Cambridge,
MA, USA. doi: 10.1162/106454699568728. Fully available at ftp://ftp.idsia.ch/
pub/luca/papers/ij_23-alife99.ps.gz, http://code.ulb.ac.be/dbfiles/
DorDicGam1999al.pdf, http://www.cs.ubc.ca/˜hutter/earg/papers04-05/
artificial_life.pdf, http://www.idsia.ch/˜luca/abstracts/papers/ij_
23-alife99.pdf, and http://www.idsia.ch/˜luca/ij_23-alife99.pdf
[accessed 2010-12-10]. See also [811].
813. Marco Dorigo, Gianni A. Di Caro, and Thomas Stützle. Special Issue on “Ant Algorithms”.
Future Generation Computer Systems – The International Journal of Grid Computing: The-
ory, Methods and Applications (FGCS), 16(8):851–955, June 2000, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands.
814. Marco Dorigo, Gianni A. Di Caro, and Thomas Stützle, editors. From Ant Colonies to Artifi-
cial Ants – First International Workshop on Ant Colony Optimization and Swarm Intelligence
(ANTS’98), October 15–16, 1998, Brussels, Belgium. See also [813].
815. Marco Dorigo, Luca Maria Gambardella, Martin Middendorf, and Thomas Stützle, editors.
From Ant Colonies to Artificial Ants – Second International Workshop on Ant Colony Opti-
mization and Swarm Intelligence (ANTS’00), September 8–9, 2000, Brussels, Belgium. See
also [817].
816. Marco Dorigo, Gianni A. Di Caro, and Michael Samples, editors. From Ant Colonies to
Artificial Ants – Third International Workshop on Ant Colony Optimization (ANTS’02),
September 12–14, 2002, Brussels, Belgium, volume 2463/2002 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45724-0.
isbn: 3-540-44146-8.
817. Marco Dorigo, Luca Maria Gambardella, Martin Middendorf, and Thomas Stützle. Special
Section on “Ant Colony Optimization”. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 6(4):317–365, August 2002, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2002.802446.
818. Marco Dorigo, Mauro Birattari, Christian Blum, Luca Maria Gambardella, Francesco Mon-
dada, and Thomas Stützle, editors. Fourth International Workshop on Ant Colony Optimiza-
tion and Swarm Intelligence (ANTS’04), September 5–8, 2004, Brussels, Belgium, volume
3172/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b99492.
819. Marco Dorigo, Mauro Birattari, and Thomas Stützle. Ant Colony Optimization. IEEE
Computational Intelligence Magazine, 1(4):28–39, November 2006, IEEE Computational In-
telligence Society: Piscataway, NJ, USA. doi: 10.1109/MCI.2006.329691. INSPEC Accession
Number: 9184238.
820. Marco Dorigo, Mauro Birattari, and Thomas Stützle. Ant Colony Optimization – Artificial
Ants as a Computational Intelligence Technique. IEEE Computational Intelligence Mag-
azine, 1(4):28–39, 2006, IEEE Computational Intelligence Society: Piscataway, NJ, USA.
doi: 10.1109/MCI.2006.329691. Fully available at http://iridia.ulb.ac.be/˜mbiro/
paperi/DorBirStu2006ieee-cim.pdf and http://iridia.ulb.ac.be/˜mbiro/
paperi/IridiaTr2006-023r001.pdf [accessed 2010-09-26]. CiteSeerx : 10.1.1.64.9532.
INSPEC Accession Number: 9184238.
821. Marco Dorigo, Luca Maria Gambardella, Mauro Birattari, A. Martinoli, Riccardo Poli, and
Thomas Stützle, editors. Fifth International Workshop on Ant Colony Optimization and
Swarm Intelligence (ANTS’06), September 4–7, 2006, Brussels, Belgium, volume 4150/2006
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/11839088.
822. Jürgen Dorn, Peter Hrastnik, and Albert Rainer. Web Service Discovery and Composition
with MOVE. In EEE’05 [1371], pages 791–792, 2005. doi: 10.1109/EEE.2005.142. INSPEC
Accession Number: 8530441. See also [317, 823, 2257].
823. Jürgen Dorn, Albert Rainer, and Peter Hrastnik. Toward Semantic Composi-
tion of Web Services with MOVE. In CEC/EEE’06 [2967], pages 437–438, 2006.
doi: 10.1109/CEC-EEE.2006.89. Fully available at http://move.ec3.at/Papers/
WSC06.pdf [accessed 2007-10-25]. INSPEC Accession Number: 9189342. See also [318, 822, 2257].
824. Bernabé Dorronsoro Díaz. The VRP Web. Auren: Madrid, Spain and University of Málaga
(UMA), Complejo Tecnológico, Departamento Lenguajes y Ciencias de la Computación:
Campus de Teatinos, Málaga, Spain, March 2007. Fully available at http://neo.lcc.
uma.es/radi-aeb/WebVRP/index.html [accessed 2011-10-12].
825. Bernabé Dorronsoro Díaz. Known Best Results. University of Málaga (UMA), Networking and
Emerging Optimization (NEO): Campus de Teatinos, Málaga, Spain, March 2007. Fully avail-
able at http://neo.lcc.uma.es/radi-aeb/WebVRP/results/BestResults.htm [ac-
cessed 2007-12-28].

826. Bernabé Dorronsoro Díaz, Patricia Ruiz, Grégoire Danoy, Pascal Bouvry, and Lorenzo J. Tar-
don. Towards Connectivity Improvement in VANETs using Bypass Links. In CEC’09 [1350],
pages 2201–2208, 2009. doi: 10.1109/CEC.2009.4983214. INSPEC Accession Num-
ber: 10688809.
827. Walter Dosch and Narayan C. Debnath, editors. Proceedings of the ISCA 13th Interna-
tional Conference on Intelligent and Adaptive Systems and Software Engineering (IASSE’04),
July 1–3, 2004, Nice, France. isbn: 1-880843-51-X. Google Books ID: KigWPQAACAAJ and
TyyyAAAACAAJ.
828. Norman Richard Draper and Harry Smith. Applied Regression Analysis, Wiley Series in Prob-
ability and Mathematical Statistics – Applied Probability and Statistics Section Series. Wi-
ley Interscience: Chichester, West Sussex, UK, 1966. isbn: 0471029955. OCLC: 6486827.
GBV-Identification (PPN): 024357847. Reviewed in [879].
829. Nicole Drechsler, Rolf Drechsler, and Bernd Becker. Multi-objective Optimization in Evo-
lutionary Algorithms Using Satisfiability Classes. In International Conference on Computa-
tional Intelligence: Theory and Applications – 6th Fuzzy Days [2295], pages 108–117, 1996.
doi: 10.1007/3-540-48774-3 14. Fully available at http://citeseer.ist.psu.edu/old/
drechsler99multiobjective.html [accessed 2009-07-18].
830. Nicole Drechsler, Rolf Drechsler, and Bernd Becker. Multi-Objective Optimisation Based on
Relation Favour. In EMO’01 [3099], pages 154–166, 2001. doi: 10.1007/3-540-44719-9 11.
Fully available at http://www.lania.mx/˜ccoello/EMOO/drechsler01.pdf.gz [accessed 2011-12-05]. CiteSeerx : 10.1.1.65.5318.
831. Dimiter Driankov, Peter W. Eklund, and Anca L. Ralescu, editors. Fuzzy Logic and Fuzzy
Control: Proceedings of Workshops on Fuzzy Logic and Fuzzy Control (IJCAI’91 Fuzzy
WS), August 24, 1991, Sydney, NSW, Australia, volume 833 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-58279-7.
OCLC: 150398218, 303915934, 303915937, 303915940, 303915944, 422860424,
473465173, and 501692697. See also [1987, 1988, 2122].
832. Moshe Dror, editor. Arc Routing: Theory, Solutions and Applications. Springer-Verlag
GmbH: Berlin, Germany, 2000. isbn: 0-7923-7898-9. Google Books ID: rPiHiE0ZZCkC.
833. Stefan Droste. Not All Linear Functions Are Equally Difficult for the Compact Genetic
Algorithm. In GECCO’05 [304], pages 679–686, 2005. doi: 10.1145/1068009.1068124. See
also [834].
834. Stefan Droste. A Rigorous Analysis of the Compact Genetic Algorithm for Linear Functions.
Natural Computing: An International Journal, 5(3):257–283, September 2006, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s11047-006-9001-0. See also [833].
835. Stefan Droste and Dirk Wiesmann. On Representation and Genetic Operators in Evo-
lutionary Algorithms. Technical Report CI–41/98, Universität Dortmund, Collaborative
Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Ger-
many, June 1998. Fully available at http://hdl.handle.net/2003/5341 [accessed 2009-07-10]. CiteSeerx : 10.1.1.36.3354.
836. Stefan Droste, Thomas Jansen, and Ingo Wegener. On the Optimization of Unimodal
Functions with the (1+1) Evolutionary Algorithm. In PPSN V [866], pages 13–22, 1998.
doi: 10.1007/BFb0056845.
837. Stefan Droste, Thomas Jansen, and Ingo Wegener. Perhaps Not a Free Lunch But At Least
a Free Appetizer. Reihe Computational Intelligence: Design and Management of Complex
Technical Processes and Systems by Means of Computational Intelligence Methods CI-45/98,
Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dort-
mund, North Rhine-Westphalia, Germany, September 1998. Fully available at http://hdl.
handle.net/2003/5339 [accessed 2009-03-01]. issn: 1433-3325. See also [838].
838. Stefan Droste, Thomas Jansen, and Ingo Wegener. Perhaps Not a Free Lunch But At Least
a Free Appetizer. In GECCO’99 [211], pages 833–839, 1999. See also [837].
839. Stefan Droste, Thomas Jansen, and Ingo Wegener. Optimization with Randomized Search
Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions. Reihe Com-
putational Intelligence: Design and Management of Complex Technical Processes and Sys-
tems by Means of Computational Intelligence Methods CI-91/00, Universität Dortmund,
Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-
Westphalia, Germany, August 2000. Fully available at http://hdl.handle.net/2003/
5394 [accessed 2009-03-01]. issn: 1433-3325. See also [840].
840. Stefan Droste, Thomas Jansen, and Ingo Wegener. Optimization with Randomized Search
Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions. Theoretical
Computer Science, 287(1):131–144, September 25, 2002, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/S0304-3975(02)00094-4. CiteSeerx : 10.1.1.35.5850. See also
[839].
841. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO’11), July 12–
16, 2011, Dublin, Ireland.
842. Marc Dubreuil, Christian Gagné, and Marc Parizeau. Analysis of a Master-Slave Archi-
tecture for Distributed Evolutionary Computations. IEEE Transactions on Systems, Man,
and Cybernetics – Part B: Cybernetics, 36(1):229–235, February 2006, IEEE Systems, Man,
and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2005.856724. Fully
available at http://vision.gel.ulaval.ca/˜parizeau/publications/smc06.pdf
[accessed 2008-04-06]. INSPEC Accession Number: 8736975.
843. Frederick Ducatelle, Martin Roth, and Luca Maria Gambardella. Design of a User Space
Software Suite for Probabilistic Routing in Ad-Hoc Networks. In EvoWorkshops’07 [1050],
pages 121–128, 2007. doi: 10.1007/978-3-540-71805-5 13.
844. Richard O. Duda, Peter Elliot Hart, and David G. Stork. Pattern Classification, Esti-
mation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and
Optimization. Wiley Interscience: Chichester, West Sussex, UK, 2nd edition, Novem-
ber 2000. isbn: 0-471-05669-3. Google Books ID: YoxQAAAAMAAJ, hyQgQAAACAAJ, and
o3I8PgAACAAJ. OCLC: 41347061, 154744650, and 474918353. Library of Congress
Control Number (LCCN): 99029981. GBV-Identification (PPN): 303957670.
845. John Duffy and Jim Engle-Warnick. Using Symbolic Regression to Infer Strategies from
Experimental Data. In Fifth International Conference on Computing in Economics and Fi-
nance [262], page 150, 1999. Fully available at http://www.pitt.edu/˜jduffy/papers/
Usr.pdf [accessed 2010-12-02]. CiteSeerx : 10.1.1.34.4205. See also [846].
846. John Duffy and Jim Engle-Warnick. Using Symbolic Regression to Infer Strategies from Ex-
perimental Data. In Evolutionary Computation in Economics and Finance [549], Chapter 4,
pages 61–84. Springer-Verlag GmbH: Berlin, Germany, 2002. Fully available at http://
www.pitt.edu/˜jduffy/docs/Usr.ps [accessed 2007-09-09]. See also [845].
847. Dumitru (Dan) Dumitrescu, Beatrice Lazzerini, Lakhmi C. Jain, and A. Dumitrescu. Evo-
lutionary Computation, volume 18 in International Series on Computational Intelligence.
CRC Press, Inc.: Boca Raton, FL, USA, June 2000. isbn: 0-8493-0588-8. Google Books
ID: MSU9ep79JvUC. OCLC: 43894173, 247021679, and 468482770. Library of Congress
Control Number (LCCN): 00030348. LC Classification: QA76.618 .E882 2000.
848. Bruce S. Duncan and Arthur J. Olson. Applications of Evolutionary Programming for the
Prediction of Protein-Protein Interactions. In EP’96 [946], pages 411–417, 1996.
849. Margaret H. Dunham. Data Mining: Introductory and Advanced Topics. Prentice Hall
International Inc.: Upper Saddle River, NJ, USA, August 2002. isbn: 0130888923.
850. G. Dzemyda, V. Saltenis, and Antanas Žilinskas, editors. Stochastic and Global Optimization,
Nonconvex Optimization and Its Applications. Springer Science+Business Media, Inc.: New
York, NY, USA, March 1, 2002.
851. Sašo Džeroski and Peter A. Flach, editors. Proceedings of 9th International Workshop
on Inductive Logic Programming (ILP-99), June 24–27, 1999, Bled, Slovenia, volume
1634/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48751-4.
isbn: 3-540-66109-3. Google Books ID: ePRRHzBNvFAC.
852. Fernando Almeida e Costa, Luı́s Mateus Rocha, Ernesto Jorge Fernandes Costa, Inman
Harvey, and António Coutinho, editors. Proceedings of the 9th European Conference on
Advances in Artificial Life (ECAL’07), September 10–14, 2007, Lisbon, Portugal, volume
4648/2007 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-74913-4.
isbn: 3-540-74912-8. Google Books ID: 8Hl-qxgNAjQC and gVLbGgAACAAJ. Library of
Congress Control Number (LCCN): 2007934544.
853. Russell C. Eberhart and James Kennedy. A New Optimizer Using Particle Swarm
Theory. In MHS’95 [1326], pages 39–43, 1995. doi: 10.1109/MHS.1995.494215.
Fully available at http://webmining.spd.louisville.edu/Websites/COMB-OPT/
FINAL-PAPERS/SwarmsPaper.pdf [accessed 2009-07-05]. INSPEC Accession Number: 5297172.
854. Russell C. Eberhart and Yuhui Shi. Comparison between Genetic Algorithms and Particle
Swarm Optimization. In EP’98 [2208], pages 611–616, 1998. doi: 10.1007/BFb0040812.
855. Russell C. Eberhart and Yuhui Shi. A Modified Particle Swarm Optimizer. In CEC’98 [2496],
pages 69–73, 1998. doi: 10.1109/ICEC.1998.699146. INSPEC Accession Number: 6021037.
856. Marc Ebner, Michael O’Neill, Anikó Ekárt, Leonardo Vanneschi, and Anna Isabel Esparcia-
Alcázar, editors. Proceedings of the 10th European Conference on Genetic Programming
(EuroGP’07), April 11–13, 2007, València, Spain, volume 4445/2007 in Theoretical Computer
Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-71605-1. isbn: 3-540-71602-5.
Library of Congress Control Number (LCCN): 2007923720.
857. Francis Ysidro Edgeworth. Mathematical Psychics: An Essay on the Applica-
tion of Mathematics to the Moral Sciences. Kessinger Publishing: Whitefish, MT,
USA and C. Kegan Paul & Co: London, UK, 1881. isbn: 0548216657 and
1163230006. Fully available at http://socserv.mcmaster.ca/econ/ugcm/3ll3/
edgeworth/mathpsychics.pdf [accessed 2011-12-02]. asin: 0548216657.
858. Proceedings of the 9th International Conference on Artificial Immune Systems (ICARIS’10),
July 26–29, 2010, Edinburgh, Scotland, UK.
859. Proceedings of the 2013 International Conference on Systems, Man, and Cybernetics
(SMC’13), October 6–9, 2013, Edinburgh, Scotland, UK.
860. Matthias Ehrgott. Multicriteria Optimization, volume 491 in Lecture Notes in Economics
and Mathematical Systems. Birkhäuser Verlag: Basel, Switzerland, 2nd edition, 2005.
isbn: 3-540-21398-8. Google Books ID: yrZw9srrHroC.
861. Matthias Ehrgott, editor. Proceedings of the 19th International Conference on Multiple
Criteria Decision Making: MCDM for Sustainable Energy and Transportation Systems
(MCDM’08), June 7–12, 2008, Auckland, New Zealand. Partly available at https://
secure.orsnz.org.nz/mcdm2008/ [accessed 2007-09-10].
862. Matthias Ehrgott, Carlos M. Fonseca, Xavier Gandibleux, Jin-Kao Hao, and Marc Se-
vaux, editors. Proceedings of the 5th International Conference on Evolutionary Multi-
Criterion Optimization (EMO’09), April 7–10, 2009, Nantes, France, volume 5467/2009
in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01020-0.
isbn: 3-642-01019-9.
863. Ágoston E. Eiben, editor. Evolutionary Computation, Theoretical Computer Science. IOS
Press: Amsterdam, The Netherlands, 1999. isbn: 4-274-90269-2 and 90-5199-471-0.
Google Books ID: 8LVAGQAACAAJ. This is the book edition of the journal Fundamenta
Informaticae, Volume 35, Nos. 1-4, 1998.
864. Ágoston E. Eiben and C. A. Schippers. On Evolutionary Exploration and Exploita-
tion. Fundamenta Informaticae – Annales Societatis Mathematicae Polonae, Series IV,
35(1-2):35–50, July–August 1998, IOS Press: Amsterdam, The Netherlands and Euro-
pean Association for Theoretical Computer Science (EATCS): Rio, Greece. Fully avail-
able at http://www.cs.vu.nl/˜gusz/papers/FunInf98-Eiben-Schippers.ps [accessed 2009-07-09]. CiteSeerx : 10.1.1.29.4885.
865. Ágoston E. Eiben and James E. Smith. Introduction to Evolutionary Computing, Natural
Computing Series. Springer New York: New York, NY, USA, 1st edition, November 2003.
isbn: 3540401849. Google Books ID: 7IOE5VIpFpwC and RRKo9xVFW QC.
866. Ágoston E. Eiben, Thomas Bäck, Marc Schoenauer, and Hans-Paul Schwefel, editors.
Proceedings of the 5th International Conference on Parallel Problem Solving from Nature
(PPSN V), September 27–30, 1998, Amsterdam, The Netherlands, volume 1498/1998 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0056843. isbn: 3-540-65078-4. Google Books ID: HLeU 1TkwTsC.
OCLC: 39739752, 150419863, 243486760, and 246324853.
867. Manfred Eigen and Peter K. Schuster. The Hypercycle: A Principle of Natural Self Orga-
nization. Springer-Verlag GmbH: Berlin, Germany, June 1979. isbn: 0387092935. Google
Books ID: P54PPAAACAAJ and UNJcAAAACAAJ. OCLC: 4665354.
868. Horst A. Eiselt, Michel Gendreau, and Gilbert Laporte. Arc Routing Problems, Part I:
The Chinese Postman Problem. Operations Research, 43(2):231–242, March–April 1995,
Institute for Operations Research and the Management Sciences (INFORMS): Linthicum,
MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. JSTOR Stable
ID: 171832.
869. Anikó Ekárt and Sándor Zoltán Németh. Selection Based on the Pareto Nondomination
Criterion for Controlling Code Growth in Genetic Programming. Genetic Programming and
Evolvable Machines, 2(1):61–73, March 2001, Springer Netherlands: Dordrecht, Netherlands.
Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1010070616149.
870. Sven E. Eklund. A Massively Parallel GP Engine in VLSI. In CEC’02 [944], pages 629–633,
2002. doi: 10.1109/CEC.2002.1006999. INSPEC Accession Number: 7321863.
871. El-Sayed M. El-Alfy. Discovering Classification Rules for Email Spam Filtering with
an Ant Colony Optimization Algorithm. In CEC’09 [1350], pages 1778–1783, 2009.
doi: 10.1109/CEC.2009.4983156. INSPEC Accession Number: 10688751.
872. Khaled El-Fakihy, Hirozumi Yamaguchi, and Gregor von Bochmann. A Method and a Ge-
netic Algorithm for Deriving Protocols for Distributed Applications with Minimum Com-
munication Cost. In PDCS’99 [1410], pages 863–868, 1999. Fully available at http://
www-higashi.ist.osaka-u.ac.jp/˜h-yamagu/resource/pdcs99.pdf [accessed 2009-08-01]. CiteSeerx : 10.1.1.27.4746. See also [3005].
873. Proceedings of the 3rd International Conference on Metaheuristics and Nature Inspired Com-
puting (META’10), October 27–31, 2010, El Mouradi Djerba Menzel: Djerba Island, Tunisia.
874. E.W. Elcock and Donald Michie, editors. Proceedings of the Eighth Machine Intelligence
Workshop (Machine Intelligence 8), 1977, NATO Advanced Study Institute: Santa Cruz,
CA, USA. Ellis Horwood: Chichester, West Sussex, UK and John Wiley & Sons Ltd.: New
York, NY, USA.
875. Niles Eldredge and Stephen Jay Gould. Punctuated Equilibria: An Alternative to Phyletic
Gradualism. In Models in Paleobiology [2427], Chapter 5, pages 82–115. Freeman, Cooper &
Co: San Francisco, CA, USA and DoubleDay: New York, NY, USA, 1972. Fully available
at http://www.blackwellpublishing.com/ridley/classictexts/eldredge.pdf
[accessed 2008-07-01].
876. Niles Eldredge and Stephen Jay Gould. Punctuated Equilibria: The Tempo and Mode of
Evolution Reconsidered. Paleobiology, 3(2):115–151, April 1, 1977, Paleontological Society:
Lawrence, KS, USA and Allen Press Online Publishing: Lawrence, KS, USA. Fully available
at http://www.nileseldredge.com/pdf_files/Punctuated_Equilibria_Gould_
Eldredge_1977.pdf [accessed 2008-11-09].
877. Proceedings of the 4th European Congress on Fuzzy and Intelligent Technologies (EUFIT’96),
September 2–5, 1996, Aachen, North Rhine-Westphalia, Germany. ELITE Foundation:
Aachen, North Rhine-Westphalia, Germany. isbn: 3896531875.
878. Khaled Elleithy, editor. Advances and Innovations in Systems, Computing Sciences and
Software Engineering – Volume 1 of Proceedings of the International Conference on Systems,
Computing Sciences and Software Engineering (SCSS’06). Springer-Verlag GmbH: Berlin,
Germany, December 5–14, 2006, University of Bridgeport: Bridgeport, CT, USA.
879. D. M. Ellis. Book Reviews: “Applied Regression Analysis” by N. R. Draper and H. Smith.
Journal of the Royal Statistical Society: Series C – Applied Statistics, 17(1):83–84, 1968,
Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK. See
also [828].
880. Michael T.M. Emmerich, Nicola Beume, and Boris Naujoks. An EMO Algorithm using
the Hypervolume Measure as Selection Criterion. In EMO’05 [600], pages 62–76, 2005.
doi: 10.1007/b106458. CiteSeerx : 10.1.1.60.2524.
881. Andries P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. John Wiley &
Sons Ltd.: New York, NY, USA, 2005. isbn: 0470091916. Google Books ID: UAg AAAACAAJ.
882. Fatma Corut Ergin, Ayşegül Yayımlı, and A. Şima Uyar. An Evolutionary Algorithm for Sur-
vivable Virtual Topology Mapping in Optical WDM Networks. In EvoWorkshops’09 [1052],
pages 31–40, 2009. doi: 10.1007/978-3-642-01129-0 4.
883. Larry J. Eshelman, editor. Proceedings of the Sixth International Conference on Genetic
Algorithms (ICGA’95), July 15–19, 1995, University of Pittsburgh: Pittsburgh, PA, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-370-0. Google
Books ID: 5KMsOwAACAAJ and 9xRRAAAAMAAJ.
884. Larry J. Eshelman and J. David Schaffer. Preventing Premature Convergence in Genetic
Algorithms by Preventing Incest. In ICGA’91 [254], pages 115–122, 1991.
885. Larry J. Eshelman, Richard A. Caruana, and J. David Schaffer. Biases in the Crossover
Landscape. In ICGA’89 [2414], pages 10–19, 1989.
886. Anna Isabel Esparcia-Alcázar, Anikó Ekárt, Sara Silva, Stephen Dignum, and A. Şima
Etaner-Uyar, editors. Proceedings of the 13th European Conference on Genetic Programming
(EuroGP’10), April 7–10, 2010, Istanbul, Turkey, volume 6021 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/978-3-642-12148-7. isbn: 3-642-12147-0. Google
Books ID: 7mHr-QFI-gsC. Library of Congress Control Number (LCCN): 2010922339.
887. Nicolás S. Estévez and Hod Lipson. Dynamical Blueprints: Exploiting Levels
of System-Environment Interaction. In GECCO’07-I [2699], pages 238–244, 2007.
doi: 10.1145/1276958.1277009. Fully available at http://ccsl.mae.cornell.edu/
papers/gecco07_estevez.pdf [accessed 2010-08-05]. CiteSeerx : 10.1.1.130.2238.
888. European Symposium on Intelligent Techniques (ESIT’99), June 3–4, 1999, Orthodox
Academy of Crete (OAC): Kolymvari, Kissamos, Chania, Crete, Greece. European Net-
work for Fuzzy Logic and Uncertainty Modeling in Information Technology (ERUDIT).
Fully available at http://www.erudit.de/erudit/events/esit99/programme.htm
[accessed 2009-07-20].
889. Xiannian Fan, Ke Tang, and Thomas Weise. Margin-Based Over-Sampling Method for Learn-
ing From Imbalanced Datasets. In PAKDD’11 [1294], 2011.
890. Günter Fandel and Thomáš Gál, editors. Proceedings of the 3rd International Conference
on Multiple Criteria Decision Making: Theory and Application (MCDM’79), August 20–
22, 1979, Hagen/Königswinter, West Germany, volume 177 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387099638 and
3-540-09963-8. OCLC: 6143366, 263581036, 310833050, 421850617, 470793575,
and 636202402. Published in May 1980.
891. Günter Fandel, Thomáš Gál, and Thomas Hanne, editors. Proceedings of the 12th Inter-
national Conference on Multiple Criteria Decision Making (MCDM’95), June 19–23, 1995,
Hagen, North Rhine-Westphalia, Germany, volume 448 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 3-540-62097-4.
OCLC: 36017289, 174158645, 246647660, 312770616, 476484227, 490887148, and
639833174. GBV-Identification (PPN): 222832282, 257873279, and 271804556. Pub-
lished on March 21, 1997.
892. Usenet FAQs: comp.ai.neural-nets FAQ. FAQS.ORG – Internet FAQ Archives: USA,
July 12, 2009. Fully available at http://www.faqs.org/faqs/ai-faq/neural-nets/
[accessed 2009-07-12].
893. Monique P. Fargues and Ralph D. Hippenstiel, editors. Conference Record of the Thirty-
First Asilomar Conference on Signals, Systems & Computers, November 2–5, 1997, Naval
Postgraduate School (U.S.): Pacific Grove, CA, USA. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-8186-8316-3, 0-8186-8317-1, and 0-8186-8318-X. Google Books
ID: LRwNPwAACAAJ and SyjnAAAACAAJ. OCLC: 39219056. issn: 1058-6393. Order
No.: 97CB36163.
894. Marco Farina and Paolo Amato. On the Optimal Solution Definition for Many-
Criteria Optimization Problems. In NAFIPS’00 [1522], pages 233–238, 2002.
doi: 10.1109/NAFIPS.2002.1018061. Fully available at http://www.lania.mx/
˜ccoello/farina02b.pdf.gz [accessed 2009-07-17]. INSPEC Accession Number: 7388102.
895. Tom Fawcett and Nina Mishra, editors. Proceedings of the 20th International Conference
on Machine Learning (ICML’03), August 21–24, 2003, Washington, DC, USA. AAAI Press:
Menlo Park, CA, USA. isbn: 1-57735-189-4.
896. Xiang Fei, Junzhou Luo, Jieyi Wu, and Guanqun Gu. QoS Routing based on Genetic Al-
gorithms. Computer Communications – The International Journal for the Computer and
Telecommunications Industry, 22(15-16):1392–1399, October 25, 1999, Elsevier Science Pub-
lishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(99)00113-9. Fully available at http://
neo.lcc.uma.es/opticomm/main.html [accessed 2008-08-01].
897. Edward A. Feigenbaum and Julian Feldman, editors. Computers and Thought, AAAI
Press Series / AAAI Press Copublications. McGraw-Hill: New York, NY, USA, 1963–
1995. isbn: 0-262-56092-5. Google Books ID: YnlQAAAAMAAJ. OCLC: 33332474 and
246968117. Library of Congress Control Number (LCCN): 95215375. GBV-Identification
(PPN): 189350644, 197823203, and 279066740. LC Classification: Q335.5 .C66 1995.
898. Stuart I. Feldman, Mike Uretsky, Marc Najork, and Craig E. Wills, editors. Proceedings
of the 13th international conference on World Wide Web (WWW’04), May 17–20, 2004,
Manhattan, New York, NY, USA. ACM Press: New York, NY, USA. isbn: 1-58113-844-X
and 1604230304. Google Books ID: Gr8dAAAACAAJ, sYW PAAACAAJ, and syExNAAACAAJ.
OCLC: 71006498, 223922047, and 327018361.
899. William Feller. An Introduction to Probability Theory and Its Applications, Volume 1, Wi-
ley Series in Probability and Mathematical Statistics – Applied Probability and Statis-
tics Section Series. Wiley Interscience: Chichester, West Sussex, UK, 3rd edition, 1968.
isbn: 0471257087. Google Books ID: E9WLSAAACAAJ and TkfeSAAACAAJ.
900. Dieter Fensel, Fausto Giunchiglia, Deborah L. McGuinness, and Mary-Anne Williams, edi-
tors. Proceedings of the Eighth International Conference on Knowledge Representation and
Reasoning (KR’02), April 22–25, 2002, Toulouse, France. Morgan Kaufmann Publishers Inc.:
San Francisco, CA, USA.
901. Thomas A. Feo and Jonathan F. Bard. Flight Scheduling and Maintenance Base Plan-
ning. Management Science, 35(12):1415–1432, December 1989, Institute for Operations Re-
search and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press
(Stanford University): Cambridge, MA, USA. doi: 10.1287/mnsc.35.12.1415. JSTOR Stable
ID: 2632228.
902. Thomas A. Feo and Mauricio G.C. Resende. Greedy Randomized Adaptive Search Proce-
dures. Journal of Global Optimization, 6(2):109–133, March 1995, Springer Netherlands: Dor-
drecht, Netherlands. doi: 10.1007/BF01096763. Fully available at http://www.research.
att.com/˜mgcr/doc/gtut.ps.Z [accessed 2008-10-20]. CiteSeerx : 10.1.1.48.8667.
903. Vitaliy Feoktistov. Differential Evolution – In Search of Solutions, volume 5 in Springer
Optimization and Its Applications. Springer New York: New York, NY, USA, Decem-
ber 2006. isbn: 0-387-36895-7 and 0-387-36896-5. Google Books ID: DUL7AAAACAAJ
and kG7aP v-SU4C. OCLC: 77077713, 318292315, and 494112725. Library of Congress
Control Number (LCCN): 2006929851. GBV-Identification (PPN): 516152432 and
546651992. LC Classification: QA402.5 .F42 2006.
904. Alberto Fernández, Salvador Garcı́a, Julián Luengo, Ester Bernadó-Mansilla, and Fran-
cisco Herrera Triguero. Genetics-Based Machine Learning for Rule Induction: State of
the Art, Taxonomy, and Comparative Study. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), June 21, 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2009.2039140.
905. Francisco Fernández de Vega and Erick Cantú-Paz, editors. Second Workshop on Parallel
Bioinspired Algorithms (WPBA’07), July 7, 2007, University College London (UCL): London,
UK. ACM Press: New York, NY, USA. Part of [2700].
906. Paola Festa and Mauricio G.C. Resende. An Annotated Bibliography of GRASP. AT&T
Labs Research Technical Report TD-5WYSEW, AT&T Labs: Florham Park, NJ, USA,
February 29, 2004. Fully available at http://www.research.att.com/˜mgcr/grasp/
gannbib/gannbib.html [accessed 2008-10-20]. See also [907].
907. Paola Festa and Mauricio G.C. Resende. An Annotated Bibliography of GRASP-
Part II: Applications. International Transactions in Operational Research, 16(2):
131–172, March 2009, Blackwell Publishing Ltd: Chichester, West Sussex, UK.
doi: 10.1111/j.1475-3995.2009.00664.x. See also [906].
908. Anthony V. Fiacco and Garth P. McCormick. Nonlinear Programming: Sequential Un-
constrained Minimization Techniques. John Wiley & Sons Ltd.: New York, NY, USA
and Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, January 1969.
isbn: 0471258105. Google Books ID: Hjp7OwAACAAJ and TYONOgAACAAJ. See also [909].
909. Anthony V. Fiacco and Garth P. McCormick. Nonlinear Programming: Sequential Uncon-
strained Minimization Techniques, volume 4 in Classics in Applied Mathematics Series. So-
ciety for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, new edition,
April 1990. isbn: 0898712548. Google Books ID: 8NhQAAAAMAAJ and sjD RJfxvr0C. See also [908].
910. Volker Fickert, Winfried Kalfa, and Thomas Weise. Framework for Distributed Simula-
tion and Measuring of Complex Relations of Operating System Components. In EDME-
DIA’05 [1568], pages 3865–3870, 2005. Fully available at http://www.it-weise.de/
documents/files/FKW2005OSF.pdf [accessed 2010-12-07]. See also [2900].
911. M.V. Fidelis, Heitor Silverio Lopes, and Alex Alves Freitas. Discovering Comprehensible
Classification Rules with a Genetic Algorithm. In CEC’00 [1333], pages 805–810, volume 1,
2000. doi: 10.1109/CEC.2000.870381. CiteSeerx : 10.1.1.35.1828.
912. Jonathan E. Fieldsend, Richard M. Everson, and Sameer Singh. Using Unconstrained
Elite Archives for Multi-Objective Optimisation. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 7(3):305–323, June 2003, IEEE Computer Society: Washington,
DC, USA. doi: 10.1109/TEVC.2003.810733. Fully available at http://empslocal.ex.
ac.uk/people/staff/jefields/JF_06.pdf, http://eref.uqu.edu.sa/files/
using_unconstrained_elite_archives_for_m.pdf, and http://www.lania.mx/
˜ccoello/EMOO/fieldsend03.pdf.gz [accessed 2010-12-16]. CiteSeerx : 10.1.1.14.6272
and 10.1.1.19.4563. INSPEC Accession Number: 7666738.
913. Richard Fikes and Wendy Lehnert, editors. Proceedings of the 11th National Conference
on Artificial Intelligence (AAAI’93), July 11–15, 1993, Washington, DC, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51071-5. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai93.php [accessed 2007-09-06].
914. Joaquim Filipe and Janusz Kacprzyk, editors. Proceedings of the 2nd International Con-
ference on Computational Intelligence (IJCCI’10), October 24–26, 2010, Hotel Sidi Saler:
València, Spain.
915. Joaquim Filipe, José Cordeiro, and Jorge Cardoso, editors. Revised Selected Papers of the
9th International Conference on Enterprise Information Systems (ICEIS’07), June 12–16,
2007, Funchal, Madeira, Portugal, volume 12/2009 in Lecture Notes in Business Information
Processing. Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-88710-2. See
also [494].
916. Joaquim Filipe, Ana L.N. Fred, and Bernadette Sharp, editors. Proceedings of the Interna-
tional Conference on Agents and Artificial Intelligence (ICAART’09), January 19–21, 2009,
Porto, Portugal. Institute for Systems and Technologies of Information, Control and Com-
munication (INSTICC) Press: Setubal, Portugal.
917. Joaquim Filipe, Ana L.N. Fred, and Bernadette Sharp, editors. Proceedings of the 2nd
International Conference on Agents and Artificial Intelligence (ICAART’10), January 22–
24, 2010, València, Spain. Institute for Systems and Technologies of Information, Control
and Communication (INSTICC) Press: Setubal, Portugal.
918. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the International Conference on Bioin-
spired Optimization Methods and their Applications (BIOMA’04), October 11–12, 2004, Jožef
Stefan International Postgraduate School: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubl-
jana, Slovenia. Part of the Information Society Multiconference (IS 2004).
919. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the Second International Con-
ference on Bioinspired Optimization Methods and their Applications (BIOMA’06), Oc-
tober 9–10, 2006, Jožef Stefan International Postgraduate School: Ljubljana, Slove-
nia, Informacijska Družba (Information Society). Jožef Stefan Institute: Ljubljana,
Slovenia. isbn: 961-6303-81-3. Fully available at http://is.ijs.si/is/
is2006/Proceedings/C/BIOMA06_Complete.pdf [accessed 2008-11-05]. Partly available
at http://bioma.ijs.si/conference/2006/index.html [accessed 2008-11-05]. Google
Books ID: 0JLsAAAACAAJ and 0ZLsAAAACAAJ. Part of the Information Society Multi-
conference (IS 2006).
920. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the Third International Conference
on Bioinspired Optimization Methods and their Applications (BIOMA’08), October 13–14,
2008, Jožef Stefan International Postgraduate School: Ljubljana, Slovenia. Jožef Stefan Insti-
tute: Ljubljana, Slovenia. Fully available at http://is.ijs.si/is/is2008/zborniki/
D%20-%20BIOMA.pdf [accessed 2008-11-05]. Part of the Information Society Multiconference (IS
2008).
921. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the 4th International Conference
on Bioinspired Optimization Methods and their Applications (BIOMA’10), May 20–21,
2010, Jožef Stefan Institute: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubljana, Slove-
nia. Partly available at http://bioma.ijs.si/conference/ProcBIOMA2010Intro.
pdf [accessed 2010-06-22].
922. Steven R. Finch. Mathematical Constants, volume 94. Cambridge University Press: Cam-
bridge, UK, August 18, 2003. isbn: 0521818052. Partly available at http://algo.inria.
fr/bsolve/ [accessed 2009-06-13]. Google Books ID: Pl5I2ZSI6uAC. LC Classification: QA41
.F54 2003. See also [923].
923. Steven R. Finch. Transitive Relations, Topologies and Partial Orders. Cambridge University
Press: Cambridge, UK, June 5, 2003. Fully available at http://algo.inria.fr/csolve/
posets.pdf [accessed 2009-06-13]. CiteSeerx : 10.1.1.5.402. See also [922].
924. Jenny Rose Finkel. Using Genetic Programming To Evolve an Algorithm For Factoring
Numbers. In Genetic Algorithms and Genetic Programming at Stanford [1601], pages 52–60.
Stanford University Bookstore, Stanford University: Stanford, CA, USA, 2003. Fully avail-
able at http://www.genetic-programming.org/sp2003/Finkel.pdf [accessed 2010-11-19]. CiteSeerx : 10.1.1.140.9317.
925. Hans Fischer. Die verschiedenen Formen und Funktionen des zentralen Grenzw-
ertsatzes in der Entwicklung von der klassischen zur modernen Wahrscheinlichkeit-
srechnung, Berichte aus der Mathematik. Shaker Verlag GmbH: Aachen, North
Rhine-Westphalia, Germany, August 2000. isbn: 3-8265-7767-1. Partly avail-
able at http://www.ku-eichstaett.de/Fakultaeten/MGF/Mathematik/Didmath/
Didmath.Fischer/HF_sections/content/ [accessed 2008-08-19]. OCLC: 247452236. “The
Different Forms and Applications of the Central Limit Theorem in its Development from
Classical to Modern Probability Theory”.
926. Douglas H. Fisher, editor. Proceedings of the 14th International Conference on Machine
Learning (ICML’97), July 8–12, 1997, Nashville, TN, USA. Morgan Kaufmann Publishers
Inc.: San Francisco, CA, USA. isbn: 1-55860-486-3.
927. Michael Fisher and Richard Owens, editors. Executable Modal and Temporal Logics, Work-
shop Proceedings (IJCAI’93-WS), August 28, 1993, Chambéry, France, volume 897 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-58976-7 and 3-540-58976-7.
OCLC: 150398117, 246324352, 312223057, 473142654, and 501656926. GBV-
Identification (PPN): 180339729, 272032352, and 595124461. See also [182, 183, 2259].
928. Sir Ronald Aylmer Fisher. The Correlations between Relatives on the Supposition of
Mendelian Inheritance. Philosophical Transactions of the Royal Society of Edinburgh, 52:
399–433, October 1, 1918, Royal Society of Edinburgh: Edinburgh, Scotland, UK. Fully
available at http://www.library.adelaide.edu.au/digitised/fisher/9.pdf [ac-
cessed 2009-07-11].
929. Sir Ronald Aylmer Fisher. Applications of “Student’s” distribution. Metron (Roma), 5(3):
90–104, 1925. Fully available at http://digital.library.adelaide.edu.au/coll/
special/fisher/43.pdf [accessed 2007-09-30]. See also [268].
930. Sir Ronald Aylmer Fisher. The Use of Multiple Measurements in Taxonomic Problems.
Annals of Eugenics, 7(2):179–188, 1936, University of London Galton Laboratory for National
Eugenics: Harpenden, Herts., England, UK. Fully available at http://digital.library.
adelaide.edu.au/coll/special/fisher/138.pdf [accessed 2008-08-16]. See also [92, 268].
931. Sir Ronald Aylmer Fisher. Statistical Methods and Scientific Inference. Hafner
Press: New York, NY, USA, Macmillan Publishers Co.: New York, NY, USA, and
Oliver and Boyd: London, UK, revised edition, 1956–July 1973. isbn: 0028447409.
Google Books ID: 0WoGAQAAIAAJ, IHdqAAAAMAAJ, QmsGAQAAIAAJ, oyXPAAAAMAAJ, and
p7RvGQAACAAJ. OCLC: 785822.
932. J. Michael Fitzpatrick and John J. Grefenstette. Genetic Algorithms in Noisy Environments.
Machine Learning, 3(2–3):101–120, October 1988, Kluwer Academic Publishers: Norwell,
MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00113893.
Fully available at http://www.springerlink.com/content/n6w328441t63q374/
fulltext.pdf [accessed 2008-07-19].
933. Peter J. Fleming, Robin Charles Purshouse, and Robert J. Lygoe. Many-Objective Opti-
mization: An Engineering Design Perspective. In EMO’05 [600], pages 14–32, 2005.
934. Dario Floreano, Jean-Daniel Nicoud, and Francesco Mondada, editors. Proceedings of the
5th European Conference on Advances in Artificial Life (ECAL’99), September 13–17, 1999,
Lausanne, Switzerland, volume 1674/1999 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-48304-7. isbn: 3-540-66452-1. Google Books ID: 2dWRlVcCBfIC and
sOwRVCPs5McC.
935. Henrik Flyvbjerg and Benny Lautrup. Evolution in a Rugged Fitness Landscape. Physical
Review A, 46(10), November 15, 1992, American Physical Society: College Park, MD, USA.
doi: 10.1103/PhysRevA.46.6714. CiteSeerx : 10.1.1.55.4769.
936. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and
Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Pa-
pers (AISB’94), April 11–13, 1994, Leeds, UK, volume 865/1994 in Lecture Notes in Com-
puter Science (LNCS). Society for the Study of Artificial Intelligence and the Simulation
of Behaviour (SSAISB): Chichester, West Sussex, UK, Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/3-540-58483-8. isbn: 0-387-58483-8 and 3-540-58483-8. GBV-
Identification (PPN): 164010106, 272035424, and 59512478X.
937. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and
Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Pa-
pers (AISB’95), April 3–4, 1995, Sheffield, UK, volume 993/1995 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-60469-3.
isbn: 3-540-60469-3.
938. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and
Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Pa-
pers (AISB’96), April 1–2, 1996, Brighton, UK, volume 1143/1996 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0032768.
isbn: 3-540-61749-3.
939. David B. Fogel, editor. Evolutionary Computation: The Fossil Record. IEEE Computer
Society Press: Los Alamitos, CA, USA and Wiley Interscience: Chichester, West Sussex, UK,
May 1998. isbn: 0-7803-3481-7. Google Books ID: ahYLAAAACAAJ. OCLC: 38270557.
Order No.: PC5737.
940. David B. Fogel. Blondie24: Playing at the Edge of AI, The Morgan Kaufmann Series in
Artificial Intelligence. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, Septem-
ber 2001. isbn: 1558607838. Google Books ID: 5vpuw8L0 CAC and t5RQAAAAMAAJ.
941. David B. Fogel. In Memoriam – Alex S. Fraser. IEEE Transactions on Evolutionary Compu-
tation (IEEE-EC), 6(5):429–430, October 2002, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/TEVC.2002.805212.
942. David B. Fogel and Wirt Atmar, editors. Proceedings of the 1st Annual Conference on
Evolutionary Programming (EP’92), February 21–22, 1992, La Jolla Marriott Hotel: La
Jolla, CA, USA. Evolutionary Programming Society: San Diego, CA, USA. Google Books
ID: UA2HAAACAAJ. OCLC: 84992066.
943. David B. Fogel and Wirt Atmar, editors. Proceedings of the 2nd Annual Conference on
Evolutionary Programming (EP’93), February 25–26, 1993, Radisson Hotel: La Jolla, CA,
USA. Evolutionary Programming Society: San Diego, CA, USA. OCLC: 28466830.
944. David B. Fogel, Mohamed A. El-Sharkawi, Xin Yao, Hitoshi Iba, Paul Marrow, and
Mark Shackleton, editors. Proceedings of the IEEE Congress on Evolutionary Computa-
tion (CEC’02), 2002, Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI,
USA, volume 1-2. IEEE Computer Society: Piscataway, NJ, USA, IEEE Computer Society
Press: Los Alamitos, CA, USA. isbn: 0-7803-7282-4. Google Books ID: -ptVAAAAMAAJ
and Irq5AQAACAAJ. OCLC: 51476813, 59461927, 64431055, and 181357364. INSPEC
Accession Number: 7321757. CEC 2002 – A joint meeting of the IEEE, the Evolutionary
Programming Society, and the IEE. Held in connection with the World Congress on Compu-
tational Intelligence (WCCI 2002). Part of [1338].
945. Lawrence Jerome Fogel, Alvin J. Owens, and Michael J. Walsh. Artificial Intelligence
through Simulated Evolution. John Wiley & Sons Ltd.: New York, NY, USA, 1966.
isbn: 0471265160. Google Books ID: 75RQAAAAMAAJ, CwaoPAAACAAJ, EerbAAAACAAJ,
QMLaAAAAMAAJ, and rJNwOwAACAAJ. OCLC: 1208284, 223078340, and 561242276.
GBV-Identification (PPN): 190979224. asin: B0000CNARU.
946. Lawrence Jerome Fogel, Peter John Angeline, and Thomas Bäck, editors. Evolutionary
Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming
(EP’96), February 29–March 3, 1996, San Diego, CA, USA, Complex Adaptive Systems,
Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-06190-2. Google Books
ID: d hQAAAAMAAJ.
947. Gianluigi Folino, Clara Pizzuti, and Giandomenico Spezzano. GP Ensemble for
Distributed Intrusion Detection Systems. In ICAPR’05 [2506], pages 54–62, 2005.
doi: 10.1007/11551188_6. Fully available at http://dns2.icar.cnr.it/pizzuti/
icapr05.pdf [accessed 2008-06-23].
948. Cyril Fonlupt, Jin-Kao Hao, Evelyne Lutton, Edmund M. A. Ronald, and Marc Schoenauer,
editors. Selected Papers of the 4th European Conference on Artificial Evolution (AE’99),
November 3–5, 1999, Dunkerque, France, volume 1829 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-67846-8. Published in
2000.
949. Carlos M. Fonseca. Decision Making in Evolutionary Optimization (Abstract of Invited Talk).
In EMO’07 [2061], 2007.
950. Carlos M. Fonseca. Preference Articulation in Evolutionary Multiobjective Optimisation –
Plenary Talk. In HIS’07 [1572], 2007.
951. Carlos M. Fonseca and Peter J. Fleming. Genetic Algorithms for Multiobjective Optimization:
Formulation, Discussion and Generalization. In ICGA’93 [969], pages 416–423, 1993. Fully
available at http://www.lania.mx/˜ccoello/EMOO/fonseca93.ps.gz [accessed 2009-06-23].
CiteSeerx : 10.1.1.48.9077.
952. Carlos M. Fonseca and Peter J. Fleming. An Overview of Evolutionary Algorithms in Mul-
tiobjective Optimization. Evolutionary Computation, 3(1):1–16, Spring 1995, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1995.3.1.1. CiteSeerx : 10.1.1.50.7779.
953. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Con-
straint Handling with Evolutionary Algorithms – Part II: Application example. Technical
Report 565, University of Sheffield, Department of Automatic Control and Systems Engi-
neering: Sheffield, UK, January 23, 1995. CiteSeerx : 10.1.1.55.5395. See also [955].
954. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Con-
straint Handling with Evolutionary Algorithms – Part I: A Unified Formulation. IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(1):
26–37, January 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/3468.650319. CiteSeerx : 10.1.1.52.2473. See also [955].
955. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Con-
straint Handling with Evolutionary Algorithms – Part II: Application example. IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(1):
38–47, January 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/3468.650320. See also [953, 954].
956. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple
Constraint Handling with Evolutionary Algorithms. In Practical Approaches to Multi-
Objective Optimization [399], 2004. Fully available at http://drops.dagstuhl.de/
opus/volltexte/2005/237/ [accessed 2007-09-19].
957. Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele,
editors. Proceedings of the Second International Conference on Evolutionary Multi-Criterion
Optimization (EMO’03), April 8–11, 2003, University of the Algarve: Faro, Portugal, vol-
ume 2632/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-36970-8. isbn: 3-540-01869-7. Google Books
ID: 4H9aKVMsFrcC. OCLC: 51942678, 51975842, 150396006, 166467742, 243489698,
and 248786908.
958. Walter Fontana. Algorithmic Chemistry. In Artificial Life II [1668], pages 159–210, 1990.
Fully available at http://fontana.med.harvard.edu/www/Documents/WF/Papers/
alchemy.pdf [accessed 2009-08-02].
959. Walter Fontana, Peter F. Stadler, Erich G. Bornberg-Bauer, Thomas Griesmacher, Ivo L. Ho-
facker, Manfred Tacker, Pedro Tarazona, Edward D. Weinberger, and Peter K. Schuster. RNA
Folding and Combinatory Landscapes. Physical Review E, 47(3):2083–2099, March 1993,
American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevE.47.2083.
Fully available at http://fontana.med.harvard.edu/www/Documents/WF/
Papers/combinatory.landscapes.pdf and http://www.santafe.edu/media/
workingpapers/92-10-051.pdf [accessed 2010-12-03]. CiteSeerx : 10.1.1.47.6724.
960. Kenneth S. H. Forbus and Howard Shrobe, editors. Proceedings of the 6th National Conference
on Artificial Intelligence (AAAI’87), July 13–17, 1987, Seattle, WA, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Partly available at http://www.aaai.org/
Conferences/AAAI/aaai87.php [accessed 2007-09-06].
961. George Forman. A Pitfall and Solution in Multi-Class Feature Selection for Text
Classification. In ICML’04 [426], pages 38–46, 2004. doi: 10.1145/1015330.1015356.
CiteSeerx : 10.1.1.64.9405.
962. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Building a Peer-
to-peer Information System in Grids via Self-Organizing Agents. Journal of Grid
Computing, 6(2):125–140, June 2008, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s10723-007-9062-z. Fully available at http://grid.deis.unical.it/
papers/pdf/JGC2008-ForestieroMastroianniSpezzano.pdf [accessed 2008-09-05]. See
also [963].
963. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Construc-
tion of a Peer-to-Peer Information System in Grids. In SOAS’2005 [666], pages
220–236, 2005. Fully available at http://grid.deis.unical.it/papers/pdf/
SOAS05-ForestieroMastroianniSpezzano.pdf [accessed 2008-09-05]. See also [962, 964–
967].
964. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Antares: An Ant-
Inspired P2P Information System for a Self-Structured Grid. In BIONETICS’07 [1342], pages
151–158, 2007. doi: 10.1109/BIMNICS.2007.4610103. Fully available at http://grid.
deis.unical.it/papers/pdf/ForestieroMastroianniSpezzano-Antares.pdf
[accessed 2008-06-13]. See also [963].
965. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. So-Grid: A Self-
Organizing Grid Featuring Bio-inspired Algorithms. ACM Transactions on Autonomous
and Adaptive Systems (TAAS), 3(2:5):1–37, May 2008, ACM Press: New York, NY,
USA. doi: 10.1145/1352789.1352790. Fully available at http://grid.deis.unical.it/
papers/pdf/Article.pdf [accessed 2008-09-05]. See also [963].
966. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. QoS-based Dis-
semination of Content in Grids. Future Generation Computer Systems – The Inter-
national Journal of Grid Computing: Theory, Methods and Applications (FGCS), 24
(3):235–244, March 2008, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.future.2007.05.003. Fully available at http://grid.deis.unical.it/
papers/pdf/ForestieroMastroianniSpezzano-FGCS-QoS.pdf [accessed 2008-09-05]. See
also [963].
967. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Reorganization and
Discovery of Grid Information with Epidemic Tuning. Future Generation Computer Sys-
tems – The International Journal of Grid Computing: Theory, Methods and Applications
(FGCS), 24(8):788–797, October 2008, Elsevier Science Publishers B.V.: Amsterdam, The
Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/j.future.2008.04.001. Fully available at http://grid.deis.unical.
it/papers/pdf/FGCS-epidemic.pdf [accessed 2008-09-05]. See also [963].
968. M. Forina, S. Lanteri, C. Armanino, et al. PARVUS – An Extendible Package for Data
Exploration, Classification and Correlation. Technical Report, Institute of Pharmaceutical
and Food Analysis and Technologies: Genoa, Italy, Elsevier Science Publishers B.V.: Ams-
terdam, The Netherlands, 1988–1991. isbn: 0444430121. Google Books ID: qi lAAAACAAJ.
GBV-Identification (PPN): 052128725.
969. Stephanie Forrest, editor. Proceedings of the 5th International Conference on Genetic Al-
gorithms (ICGA’93), July 17–21, 1993, University of Illinois: Urbana-Champaign, IL, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-299-2 and
978-1-55860-299-1. Google Books ID: cfBQAAAAMAAJ.
970. Stephanie Forrest and Melanie Mitchell. Relative Building-Block Fitness and the
Building-Block Hypothesis. In FOGA’92 [2927], pages 109–126, 1992. Fully available
at http://web.cecs.pdx.edu/˜mm/Forrest-Mitchell-FOGA.pdf [accessed 2010-12-04].
CiteSeerx : 10.1.1.83.4366.
971. Christian V. Forst, Christian M. Reidys, and Jacqueline Weber. Evolutionary Dynamics and
Optimization: Neutral Networks as Model-Landscapes for RNA Secondary-Structure Folding-
Landscapes. In ECAL’95 [1939], pages 128–147, 1995. doi: 10.1007/3-540-59496-5_294.
CiteSeerx : 10.1.1.21.1221 and 10.1.1.56.9190. arXiv ID: adap-org/9505002v1.
972. Richard S. Forsyth. BEAGLE – A Darwinian Approach to Pattern Recognition. Kyber-
netes, 10(3):159–166, 1981, Thales Publications (W.O.) Ltd.: Irvine, CA, USA and Emerald
Group Publishing Limited: Bingley, UK. doi: 10.1108/eb005587. Fully available at http://
www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/kybernetes_forsyth.pdf [ac-
cessed 2007-11-01]. Received December 17, 1980. (copy from British Library May 1994).
973. Richard S. Forsyth. The Evolution of Intelligence. In Machine Learning: Principles and
Techniques [974], Chapter 4, pages 65–82. Chapman & Hall: London, UK, 1989. Refers also
to PC/BEAGLE. See also [972].
974. Richard S. Forsyth, editor. Machine Learning: Principles and Techniques. Chapman &
Hall: London, UK, 1989. isbn: 0-412-30570-4 and 0-412-30580-1. OCLC: 18255812,
246546781, and 315238891. Library of Congress Control Number (LCCN): 88022872.
GBV-Identification (PPN): 017008093, 025424157, and 275121755. LC Classifica-
tion: Q325 .F64 1989.
975. Richard S. Forsyth and Roy Rada. Machine Learning Applications in Expert Systems
and Information Retrieval, Ellis Horwood Series in Artificial Intelligence. John Wiley
& Sons Australia: Milton, QLD, Australia and Halsted Press: New York, NY, USA.
isbn: 0-470-20309-9 and 0-7458-0045-9. Google Books ID: TcnaAAAAMAAJ. Contains
chapters on BEAGLE. See also [972].
976. James A. Foster. Review: Discipulus: A Commercial Genetic Programming System. Ge-
netic Programming and Evolvable Machines, 2(2):201–203, June 2001, Springer Nether-
lands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1023/A:1011516717456.
977. James A. Foster, Evelyne Lutton, Julian Francis Miller, Conor Ryan, and Andrea
G. B. Tettamanzi, editors. Proceedings of the 5th European Conference on Genetic
Programming (EuroGP’02), April 3–5, 2002, Kinsale, Ireland, volume 2278/2002 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-45984-7. isbn: 3-540-43378-3. Google Books ID: RpNQAAAAMAAJ and
eCbu4GwRLusC. OCLC: 49312634, 50589656, 64651626, 66624505, 76394308, and
314133163.
978. B. R. Fox and M. B. McMahon. Genetic Operators for Sequencing Problems. In
FOGA’90 [2562], pages 284–300, 1990.
979. Dieter Fox and Carla P. Gomes, editors. Proceedings of the Twenty-Third AAAI Conference
on Artificial Intelligence (AAAI’08), June 13–17, 2008, Chicago, IL, USA. AAAI Press:
Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai08.php [accessed 2009-02-27].
980. John Fox. Applied Regression Analysis, Linear Models, and Related Methods. SAGE Publica-
tions: Thousand Oaks, CA, USA, February 1997. isbn: 0-8039-4540-X. OCLC: 36065974,
464653054, and 639274269. GBV-Identification (PPN): 225778130 and 509874118.
981. Felipe M. G. França and Carlos H. C. Ribeiro, editors. Proceedings of the VI Brazil-
ian Symposium on Neural Networks (SBRN’00), November 22–25, 2000, Rio de Janeiro,
RJ, Brazil. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-0856-1,
0-7695-0857-X, and 0-7695-0858-8. Google Books ID: Q6d KQAACAAJ and
QqGoJgAACAAJ. OCLC: 45537396, 47880868, 51498392, 59548079, 80493808, 288979392,
and 423976687.
982. Frank D. Francone, Markus Conrads, Wolfgang Banzhaf, and Peter Nordin. Homolo-
gous Crossover in Genetic Programming. In GECCO’99 [211], pages 1021–1026, volume
2, 1999. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gecco1999/
GP-463.pdf [accessed 2010-08-23]. CiteSeerx : 10.1.1.153.15.
983. Eibe Frank, Mark A. Hall, Geoffrey Holmes, Richard Kirkby, Bernhard Pfahringer, Ian H.
Witten, and Leonhard Trigg. WEKA – A Machine Learning Workbench for Data
Mining. In The Data Mining and Knowledge Discovery Handbook [1815], Chapter 62,
pages 1305–1314. Springer Science+Business Media, Inc.: New York, NY, USA, 2005.
doi: 10.1007/0-387-25465-X_62. Fully available at http://www.cs.waikato.ac.nz/˜ml/
publications/2005/weka_dmh.pdf [accessed 2008-11-19].
984. Gene F. Franklin and Anthony N. Michel, editors. Proceedings of the 24th IEEE Conference
on Decision and Control (CDC’85), December 11–13, 1985, Bonaventure Hotel & Spa: Fort
Lauderdale, FL, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway,
NJ, USA. Library of Congress Control Number (LCCN): 85CH2245-9. Catalogue no.: 79-
640961.
985. Daniel Raymond Frantz. Nonlinearities in Genetic Adaptive Search. PhD thesis, University
of Michigan: Ann Arbor, MI, USA, September 1972, John Henry Holland, Advisor. Fully
available at http://deepblue.lib.umich.edu/handle/2027.42/4950 [accessed 2010-08-03].
Order No.: AAI7311116. University Microfilms No.: 73-11116. ID: UMR1485. Published as
Technical Report Nr. 138. See also [986].
986. Daniel Raymond Frantz. Nonlinearities in Genetic Adaptive Search. Dissertation Abstracts
International (DAI), 33(11):5240B–5241B, ProQuest: Ann Arbor, MI, USA. See also [985].
987. Silvio Franz, Matteo Marsili, and Haijun Zhou, editors. 2nd Asian-Pacific School on Sta-
tistical Physics and Interdisciplinary Applications, March 3–14, 2008, Běijı̄ng, China. Abdus
Salam International Centre for Theoretical Physics (ICTP): Triest, Italy, Chinese Center
of Advanced Science and Technology (CCAST): Běijı̄ng, China, and Chinese Academy of
Sciences, Kavli Institute of Theoretical Physics China (KITPC): Běijı̄ng, China.
988. Alex S. Fraser. Simulation of Genetic Systems by Automatic Digital Computers. I. Introduc-
tion. Australian Journal of Biological Science (AJBS), 10:484–491, 1957. See also [989, 990].
989. Alex S. Fraser. Simulation of Genetic Systems by Automatic Digital Computers. II. Effects
of Linkage on Rates of Advance under Selection. Australian Journal of Biological Science
(AJBS), 10:492–499, 1957. See also [990].
990. Alex S. Fraser. Simulation of Genetic Systems by Automatic Digital Computers. VI. Epistasis.
Australian Journal of Biological Science (AJBS), 13(2):150–162, 1960. See also [988, 989].
991. William J. Frawley, Gregory Piatetsky-Shapiro, and Christopher J. Matheus. Knowl-
edge Discovery in Databases: An Overview. AI Magazine, 13(3):57–70, Fall 1992,
AAAI Press: Menlo Park, CA, USA. Fully available at http://www.aaai.
org/ojs/index.php/aimagazine/article/download/1011/929 and http://
www.kdnuggets.com/gpspubs/aimag-kdd-overview-1992.pdf [accessed 2009-09-09].
CiteSeerx : 10.1.1.18.1674.
992. 12th European Conference on Machine Learning (ECML’01), September 3–7, 2001, Freiburg,
Germany.
993. Alex Alves Freitas. A Genetic Programming Framework for Two Data Mining Tasks: Classi-
fication and Generalized Rule Induction. In GP’97 [1611], pages 96–101, 1997. Fully available
at http://kar.kent.ac.uk/21483/ [accessed 2010-12-04]. CiteSeerx : 10.1.1.47.4151 and
10.1.1.56.2449.
994. Alex Alves Freitas. Data Mining and Knowledge Discovery with Evolutionary Algo-
rithms, Natural Computing Series. Springer New York: New York, NY, USA, 2002.
isbn: 3-540-43331-7. Google Books ID: KkdZlfQJvbYC. OCLC: 49415761, 248441650,
and 492385968. Library of Congress Control Number (LCCN): 2002021728. LC Classifi-
cation: QA76.9.D343 F72 2002.
995. Richard M. Friedberg. A Learning Machine: Part I. IBM Journal of Research and Develop-
ment, 2(1):2–13, November 1958. doi: 10.1147/rd.21.0002. See also [996].
996. Richard M. Friedberg, B. Dunham, and J. H. North. A Learning Machine: Part II. IBM
Journal of Research and Development, 3(3):282–287, July 1959. doi: 10.1147/rd.33.0282. See
also [995].
997. Drew Fudenberg and Jean Tirole. Game Theory. MIT Press: Cambridge, MA, USA, Au-
gust 1991. isbn: 0-2620-6141-4. Google Books ID: pFPHKwXro3QC.
998. Cory Fujiko and John Dickinson. Using the Genetic Algorithm to Generate LISP Source
Code to Solve the Prisoner’s Dilemma. In ICGA’87 [1132], pages 236–240, 1987.
999. Cory Fujiki. An Evaluation of Holland’s Genetic Algorithm Applied to a Program Generator.
Master’s thesis, University of Idaho, Computer Science Department: Moscow, ID, USA, 1986.
1000. David T. Fullwood. Percolation in Two-Dimensional Grain Boundary Structures, and
Polycrystal Property Closures. Master’s thesis, Brigham Young University: Provo, UT,
USA, December 2005. Fully available at http://contentdm.lib.byu.edu/ETD/image/
etd1045.pdf [accessed 2009-06-21].
1001. Takeshi Furuhashi, editor. First Online Workshop on Soft Computing (WSC1), August 19–
30, 1996, Nagoya University: Nagoya, Japan.
1002. Takeshi Furuhashi and Tomohiro Yoshikawa. Visualization Techniques for Mining of Solu-
tions. In ISIS’07 [1579], pages 68–71, 2007. Fully available at http://isis2007.fuzzy.
or.kr/submission/upload/A1484.pdf [accessed 2011-12-05].
1003. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the
10th International Conference on Knowledge-Based Intelligent Information and Engineer-
ing Systems, Part I (KES’06-I), October 9–11, 2006, Bournemouth International Centre:
Bournemouth, UK, volume 4251/2006 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-46535-9. See also [1004, 1005].
1004. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the
10th International Conference on Knowledge-Based Intelligent Information and Engineer-
ing Systems, Part II (KES’06-II), October 9–11, 2006, Bournemouth International Cen-
tre: Bournemouth, UK, volume 4252/2006 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-46537-5. See also [1003, 1005].
1005. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the
10th International Conference on Knowledge-Based Intelligent Information and Engineer-
ing Systems, Part III (KES’06-III), October 9–11, 2006, Bournemouth International Cen-
tre: Bournemouth, UK, volume 4253/2006 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/11893011. isbn: 3-540-46542-1. Library of Congress Control Number
(LCCN): 2006933827. See also [1003, 1004].
1006. Malgorzata Gadomska and Andrzej Pacut. Performance of Ant Routing Algorithms When
Using TCP. In EvoWorkshops’07 [1050], pages 1–10, 2007. doi: 10.1007/978-3-540-71805-5_1.
1007. Pablo Galiasso and Roger L. Wainwright. A Hybrid Genetic Algorithm for the Point to
Multipoint Routing Problem with Single Split Paths. In SAC’01 [24], pages 327–332, 2001.
doi: 10.1145/372202.372354. CiteSeerx : 10.1.1.20.8236.
1008. Ante Galic, Tonci Caric, and Hrvoje Gold. MARS – A Programming Language for
Solving Vehicle Routing Problems. In Recent Advances in City Logistics. Proceedings
of the 4th International Conference on City Logistics [2672], pages 47–57, 2005. Fully
available at http://www.cro-grid.hr/hr/apps/rep/app/oot/fpz_galic_caric_
gold_final_paper.doc [accessed 2011-01-14].
1009. Marcus Gallagher, Marcus R. Frean, and Tom Downs. Real-Valued Evolutionary Optimiza-
tion using a Flexible Probability Density Estimator. In GECCO’99 [211], pages 840–846,
1999. CiteSeerx : 10.1.1.35.5113.
1010. E. A. Galperin. Pareto Analysis vis-à-vis Balance Space Approach in Multiobjective Global
Optimization. Journal of Optimization Theory and Applications, 93(3):533–545, June 1997,
Springer Netherlands: Dordrecht, Netherlands and Plenum Press: New York, NY, USA.
doi: 10.1023/A:1022639028824.
1011. Luca Maria Gambardella and Marco Dorigo. Solving Symmetric and Asymmetric TSPs by
Ant Colonies. In CEC’96 [1445], pages 622–627, 1996. doi: 10.1109/ICEC.1996.542672.
Fully available at http://www.idsia.ch/˜luca/icec96-acs.pdf [accessed 2009-06-26].
CiteSeerx : 10.1.1.40.4846.
1012. Luca Maria Gambardella, Éric D. Taillard, and Marco Dorigo. Ant Colonies for the
Quadratic Assignment Problem. The Journal of the Operational Research Society (JORS),
50(2):167–176, February 1999, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hamp-
shire, UK and Operations Research Society: Birmingham, UK. doi: 10.2307/3010565.
Fully available at http://www.idsia.ch/˜luca/tr-idsia-4-97.pdf [accessed 2009-06-30].
CiteSeerx : 10.1.1.32.2245.
1013. Luca Maria Gambardella, Andrea E. Rizzoli, Fabrizio Oliverio, Norman Casagrande, Al-
berto V. Donati, Roberto Montemanni, and Enzo Lucibello. Ant Colony Optimization for
Vehicle Routing in Advanced Logistics Systems. In MAS’03 [588], pages 3–9, 2003. Fully
available at http://www.idsia.ch/˜luca/MAS2003_18.pdf [accessed 2007-08-19].
1014. Xavier Gandibleux, Marc Sevaux, Kenneth Sörensen, and Vincent T’kindt, editors. Meta-
heuristics for Multiobjective Optimisation, volume 535 in Lecture Notes in Economics and
Mathematical Systems. Springer-Verlag: Berlin/Heidelberg, 2004. isbn: 3-540-20637-X.
Google Books ID: Lb8p1FWAh-8C and TEToXuP17RYC.
1015. X. Z. Gao, António Gaspar-Cunha, Gerald Schaefer, and Jun Wang, editors. Soft Computing
in Industrial Applications – 14th Online World Conference on Soft Computing in Industrial
Applications (WSC14), November 17–29, 2009, volume 75 in Advances in Intelligent and
Soft Computing. Springer-Verlag: Berlin/Heidelberg. isbn: 3-642-11281-1. Google Books
ID: EDp3RAAACAAJ.
1016. Yong Gao and Joseph Culberson. An Analysis of Phase Transition in NK Landscapes. Jour-
nal of Artificial Intelligence Research (JAIR), 17:309–332, October 2002, AI Access Founda-
tion, Inc.: El Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available
at http://www.jair.org/media/1081/live-1081-2104-jair.pdf [accessed 2009-02-26].
CiteSeerx : 10.1.1.20.2690.
1017. Yu Gao and Yong-Jun Wang. A Memetic Differential Evolutionary Algorithm for High
Dimensional Functions’ Optimization. In ICNC’07 [1711], pages 188–192, volume 4, 2007.
doi: 10.1109/ICNC.2007.60.
1018. Yuelin Gao and Yuhong Duan. An Adaptive Particle Swarm Optimization Algo-
rithm with New Random Inertia Weight. In ICIC’07-2 [1293], pages 342–350, 2007.
doi: 10.1007/978-3-540-74282-1_39.
1019. Yuelin Gao and Zihui Ren. Adaptive Particle Swarm Optimization Algorithm With
Genetic Mutation Operation. In ICNC’07 [1711], pages 211–215, volume 2, 2007.
doi: 10.1109/ICNC.2007.161.
1020. Salvador Garcı́a and Francisco Herrera Triguero. An Extension on “Statistical Com-
parisons of Classifiers over Multiple Data Sets” for all Pairwise Comparisons. Journal
of Machine Learning Research (JMLR), 9:2677–2694, December 2008, MIT Press: Cam-
bridge, MA, USA. Fully available at http://jmlr.csail.mit.edu/papers/volume9/
garcia08a/garcia08a.pdf and http://sci2s.ugr.es/publications/ficheros/
2008-Garcia-JMLR.pdf [accessed 2010-10-19]. See also [766].
1021. Alma Lilia Garcı́a-Almanza and Edward P. K. Tsang. The Repository Method for
Chance Discovery in Financial Forecasting. In KES’06-III [1005], pages 30–37, 2006.
doi: 10.1007/11893011_5.
1022. Alma Lilia Garcı́a-Almanza, Edward P. K. Tsang, and Edgar Galván-López.
Evolving Decision Rules to Discover Patterns in Financial Data Sets. In
Computational Methods in Financial Engineering – Essays in Honour of Man-
fred Gilli [1575], Chapter II-5, pages 239–255. Springer-Verlag: Berlin/Heidelberg,
2008. doi: 10.1007/978-3-540-77958-2_12. Fully available at http://www.bracil.
net/finance/papers/GarciaTsangGalvan-Ecr-Book-2007.pdf [accessed 2010-06-25].
CiteSeerx : 10.1.1.153.5463.
1023. Raúl Garcı́a-Castro, Asunción Gómez-Pérez, Charles J. Petrie, Emanuele Della Valle, Ul-
rich Küster, Michal Zaremba, and Omair Shafiq, editors. Proceedings of the 6th Inter-
national Workshop on Evaluation of Ontology-based Tools and the Semantic Web Service
Challenge (EON-SWSC’08), June 1–2, 2008, Tenerife, Spain, volume 359 in CEUR Work-
shop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/ [accessed 2010-12-12]. See also [2166].
1024. Martin Gardner. Martin Gardner’s Sixth Book of Mathematical Diversions from Scientific
American. University of Chicago Press: Chicago, IL, USA, 1983. isbn: 0226282503.
1025. Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the
Theory of NP-Completeness, Series of Books in the Mathematical Sciences. W. H. Free-
man & Co.: New York, NY, USA, 1979. isbn: 0-7167-1044-7 and 0-7167-1045-5.
Google Books ID: OHrJJQAACAAJ, gB FAgAACAAJ, and mdBxHAAACAAJ. OCLC: 4195125,
476019082, 630176268, 632535534, and 633137406. Library of Congress Control Num-
ber (LCCN): 78012361. GBV-Identification (PPN): 250973014, 303019050, 317880381,
321971280, 354248944, 513349081, 584237758, and 59999438X. LC Classification: QA76.6 .G35.
1036 REFERENCES
1026. Michael R. Garey, David S. Johnson, and Ravi Sethi. The Complexity of Flowshop and
Jobshop Scheduling. Mathematics of Operations Research (MOR), 1(2):117–129, May 1976,
Institute for Operations Research and the Management Sciences (INFORMS): Linthicum,
MD, USA. doi: 10.1287/moor.1.2.117. JSTOR Stable ID: 3689278.
1027. Wolfgang Walter Garn. Vehicle Routing Problem (VRP). Vienna University of Technol-
ogy: Vienna, Austria, July 20, 2007. Fully available at http://osiris.tuwien.ac.at/
˜wgarn/VehicleRouting/vehicle_routing.html [accessed 2011-04-25].
1028. Josselin Garnier and Leila Kallel. Statistical Distribution of the Convergence Time of Evo-
lutionary Algorithms for Long-Path Problems. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), 4(1):16–30, April 2000, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.843492. CiteSeerx : 10.1.1.31.4329. INSPEC Accession Num-
ber: 6617599.
1029. António Gaspar-Cunha and Ricardo H. C. Takahashi, editors. 15th Online World Confer-
ence on Soft Computing in Industrial Applications (WSC15), November 15–27, 2010. World
Federation on Soft Computing (WFSC).
1030. Sergey Gavrilets. Fitness Landscapes and the Origin of Species, volume MPB-41 in Mono-
graphs in Population Biology (MPB). Princeton University Press: Princeton, NJ, USA,
July 2004. isbn: 0-691-11983-X. Google Books ID: YjQKGwAACAAJ.
1031. Nicholas Geard. An Exploration of NK Landscapes with Neutrality. Master’s thesis,
University of Queensland, School of Information Technology and Electrical Engineering: St
Lucia, Q, Australia, October 2001, Janet Wiles, Supervisor. Fully available at http://
eprints.ecs.soton.ac.uk/14213/1/ng_thesis.pdf [accessed 2008-07-01].
1032. Nicholas Geard, Janet Wiles, Jennifer Hallinan, Bradley Tonkes, and Benjamin
Skellett. A Comparison of Neutral Landscapes – NK, NKp and NKq. In
CEC’02 [944], pages 205–210, 2002. Fully available at http://eprints.ecs.soton.ac.
uk/14208/1/cec1-comp.pdf and http://www.staff.ncl.ac.uk/j.s.hallinan/
pubs/cec2002.pdf [accessed 2010-12-04]. CiteSeerx : 10.1.1.16.5159.
1033. Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan. A New Heuristic Optimization Algorithm: Harmony Search. Simulation – Transactions of The Society for Modeling and Simulation International, 76(2):60–68, February 2001, Society for Modeling and Simulation International (SCS): San Diego, CA, USA. doi: 10.1177/003754970107600201.
1034. Zong Woo Geem, Kang Seok Lee, and Yongjin Park. Application of Harmony Search to
Vehicle Routing. American Journal of Applied Sciences, 2(12):1552–1557, December 2005,
Science Publications: New York, NY, USA. Fully available at http://www.scipub.org/
fulltext/ajas/ajas2121552-1557.pdf [accessed 2010-12-17].
1035. Johannes Gehrke, Raghu Ramakrishnan, and Venkatesh Ganti. RainForest – A Framework
for Fast Decision Tree Construction of Large Datasets. In VLDB’98 [1151], pages 416–427,
1998. Fully available at http://www.vldb.org/conf/1998/p416.pdf [accessed 2009-05-11].
CiteSeerx : 10.1.1.40.474.
1036. Johannes Gehrke, Venkatesh Ganti, Raghu Ramakrishnan, and Wei-Yin Loh. BOAT
– Optimistic Decision Tree Construction. In SIGMOD’99 [22], pages 169–180,
1999. doi: 10.1145/304182.304197. Fully available at http://doi.acm.org/10.
1145/304182.304197 and http://www.cs.cornell.edu/johannes/papers/1999/
sigmod1999-boat.pdf [accessed 2009-05-11].
1037. Alexander F. Gelbukh and Angel Fernando Kuri Morales, editors. Advances in Artificial In-
telligence: Proceedings of the 6th Mexican International Conference on Artificial Intelligence
(MICAI’07), November 4–10, 2007, Aguascalientes, México, volume 4827/2007 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-76631-5.
1038. Alexander F. Gelbukh and Eduardo F. Morales, editors. Advances in Artificial Intelligence
– Proceedings of the 7th Mexican International Conference on Artificial Intelligence (MI-
CAI’08), October 27–31, 2008, Tecnológico de Monterrey, Campus Estado de México: Ati-
zapán de Zaragoza, México, volume 5317/2008 in Lecture Notes in Artificial Intelligence
(LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-88636-5. isbn: 3-540-88635-4. Library of Congress
Control Number (LCCN): 2008936816.
1039. Alexander F. Gelbukh, Alvaro de Albornoz, and Hugo Terashima-Marı́n, editors. Ad-
vances in Artificial Intelligence: Proceedings of the 4th Mexican International Conference
on Artificial Intelligence (MICAI’05), November 14–18, 2005, Monterrey, México, volume
3789/2005 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11579427.
isbn: 3-540-29896-7.
1040. Stuart Geman, Elie Bienenstock, and René Doursat. Neural Networks and the Bias/Variance
Dilemma. Neural Computation, 4(1):1–58, January 1992, MIT Press: Cambridge, MA, USA.
doi: 10.1162/neco.1992.4.1.1.
1041. Michael R. Genesereth, editor. Proceedings of the National Conference on Artificial Intelli-
gence (AAAI’83), August 22–26, 1983, Washington, DC, USA. AAAI Press: Menlo Park, CA,
USA. isbn: 0-262-51052-9. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai83.php [accessed 2007-09-06]. OCLC: 228289483.
1042. Ian Gent, Hans van Maaren, and Toby Walsh, editors. SAT2000 – Highlights of Satisfiability
Research in the Year 2000, volume 63 in Frontiers in Artificial Intelligence and Applica-
tions. IOS Press: Amsterdam, The Netherlands, 2000. isbn: 1586030612. Google Books
ID: GGJM0lXMJQgC.
1043. Stuart Geman and Donald Geman. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), PAMI-6
(6):721–741, November 1984, IEEE Computer Society: Piscataway, NJ, USA. doi: 10.1109/T-
PAMI.1984.4767596. Fully available at http://noodle.med.yale.edu/˜qian/docs/
geman.pdf [accessed 2011-11-21].
1044. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Encapsulated Evolution Strategies for the
Determination of Group Contribution Model Parameters in Order to Predict Thermodynamic
Properties. In PPSN V [866], pages 978–987, 1998. doi: 10.1007/BFb0056939. See also
[1045, 1046].
1045. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Use of Evolutionary Algorithms for
the Calculation of Group Contribution Parameters in order to Predict Thermodynamic
properties – Part 2: Encapsulated evolution strategies. Computers and Chemical En-
gineering, 23(7):955–973, July 1, 1999, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0098-1354(99)00270-7. See also [1044, 1046].
1046. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Verschachtelte Evolutionsstrategien zur
Optimierung nichtlinearer verfahrenstechnischer Regressionsprobleme. Chemie Ingenieur
Technik (CIT), 72(4):369–373, April 2000, Wiley Interscience: Chichester, West Sussex,
UK. doi: 10.1002/1522-2640(200004)72:4<369::AID-CITE369>3.0.CO;2-W. Fully avail-
able at http://www3.interscience.wiley.com/cgi-bin/fulltext/76500452/
PDFSTART [accessed 2010-08-01]. See also [1044, 1045].
1047. Zoubin Ghahramani, editor. Proceedings of the 24th Annual International Conference on
Machine Learning (ICML’07), June 20–24, 2007, Oregon State University: Corvallis, OR,
USA, volume 227 in ACM International Conference Proceeding Series (AICPS). Omipress:
Madison, WI, USA. Fully available at http://www.machinelearning.org/icml2007_
proc.html [accessed 2010-06-30].
1048. Ashish Ghosh and Lakhmi C. Jain, editors. Evolutionary Computation in Data Mining,
volume 163/2005 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH:
Berlin, Germany, 2005. doi: 10.1007/3-540-32358-9. isbn: 3-540-22370-3. Google Books
ID: Caevy3rrjPUC. OCLC: 56658773, 249236978, 318293100, and 320964275. Library
of Congress Control Number (LCCN): 2004111009.
1049. Ashish Ghosh and Shigeyoshi Tsutsui, editors. Advances in Evolutionary Computing – The-
ory and Applications, Natural Computing Series. Springer New York: New York, NY,
USA, November 22, 2002. isbn: 3-540-43330-9. Google Books ID: OGMEMC9P3vMC.
OCLC: 50207987, 249607003, and 314328282. Library of Congress Control Number
(LCCN): 2002029164. GBV-Identification (PPN): 35332180X and 358205859. LC Clas-
sification: QA76.618 .A215 2003.
1050. Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Di Caro, Rolf Drech-
sler, Muddassar Farooq, Andreas Fink, Evelyne Lutton, Penousal Machado, Stefan Min-
ner, Michael O’Neill, Juan Romero, Franz Rothlauf, Giovanni Squillero, Hideyuki Tak-
agi, A. Şima Uyar, and Shengxiang Yang, editors. Applications of Evolutionary Comput-
ing, Proceedings of EvoWorkshops 2007: EvoCoMnet, EvoFIN, EvoIASP, EvoINTERACTION, EvoMUSART, EvoSTOC and EvoTransLog (EvoWorkshops’07), April 11–13, 2007,
València, Spain, volume 4448/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71805-5. isbn: 3-540-71804-4. Google Books ID: DtwgBJkn7iMC.
OCLC: 122935862, 152418528, 255966727, and 318300154. Library of Congress Con-
trol Number (LCCN): 2007923848.
1051. Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Di Caro, Rolf Drech-
sler, Anikó Ekárt, Anna Isabel Esparcia-Alcázar, Muddassar Farooq, Andreas Fink, Jon
McCormack, Michael O’Neill, Juan Romero, Franz Rothlauf, Giovanni Squillero, A. Şima
Uyar, and Shengxiang Yang, editors. Applications of Evolutionary Computing – Proceed-
ings of EvoWorkshops 2008: EvoCOMNET, EvoFIN, EvoHOT, EvoIASP, EvoMUSART,
EvoNUM, EvoSTOC, and EvoTransLog (EvoWorkshops’08), May 26–28, 2008, Naples, Italy,
volume 4974/2008 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-540-78761-7. isbn: 3-540-78760-7. Google Books
ID: ibbRDnV1vUkC.
1052. Mario Giacobini, Penousal Machado, Anthony Brabazon, Jon McCormack, Stefano Cagnoni,
Michael O’Neill, Gianni A. Di Caro, Ferrante Neri, Anikó Ekárt, Mike Preuß, Anna Is-
abel Esparcia-Alcázar, Franz Rothlauf, Muddassar Farooq, Ernesto Tarantino, Andreas
Fink, and Shengxiang Yang, editors. Applications of Evolutionary Computing – Proceedings
of EvoWorkshops 2009: EvoCOMNET, EvoENVIRONMENT, EvoFIN, EvoGAMES, Evo-
HOT, EvoIASP, EvoINTERACTION, EvoMUSART, EvoNUM, EvoSTOC, EvoTRANSLOG
(EvoWorkshops’09), April 15–17, 2009, Eberhard-Karls-Universität Tübingen, Fakultät für
Informations- und Kognitionswissenschaften: Tübingen, Germany, volume 5484/2009 in The-
oretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01129-0.
isbn: 3-642-01128-4. Partly available at http://evostar.na.icar.cnr.it/
EvoWorkshops/EvoWorkshops.html [accessed 2011-02-28]. Google Books ID: NvOd8Rz997AC.
1053. K. C. Giannakoglou, D. T. Tsahalis, J. Périaux, K. D. Papailiou, and Terence Claus Fogarty,
editors. Proceedings of the Conference on Evolutionary Methods for Design Optimization
and Control with Applications to Industrial Problems (EUROGEN’01), September 19–21,
2001, Athens, Greece. International Center for Numerical Methods in Engineering (CIMNE):
Barcelona, Catalonia, Spain. isbn: 84-89925-97-6. Partly available at http://www.
mech.ntua.gr/˜eurogen2001 [accessed 2007-09-16]. GBV-Identification (PPN): 37304500X.
Published in 2002.
1054. Basilis Gidas. Nonstationary Markov Chains and Convergence of the Annealing Algorithm.
Journal of Statistical Physics, 39(1-2):73–131, April 1985, Springer Netherlands: Dordrecht,
Netherlands. doi: 10.1007/BF01007975.
1055. Yolanda Gil and Raymond J. Mooney, editors. Proceedings, The Twenty-First National
Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial
Intelligence Conference (AAAI’06 / IAAI’06), July 16–20, 2006, Boston, MA, USA. AAAI
Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai06.php and http://www.aaai.org/Conferences/IAAI/iaai06.php [ac-
cessed 2007-09-06].
1056. Walter Gilbert. Why Genes in Pieces? Nature, 271(5645):501, February 9, 1978.
doi: 10.1038/271501a0. PubMed ID: 622185.
1057. John H. Gillespie. Molecular Evolution Over the Mutational Landscape. Evolution – In-
ternational Journal of Organic Evolution, 38(5):1116–1129, September 1984, Society for
the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.2307/2408444.
1058. Jean-Numa Gillet and Yunlong Sheng. Simulated Quenching with Temperature Rescaling
for Designing Diffractive Optical Elements. In Proceedings of the 18th Congress of the Inter-
national Commission for Optics [1062], pages 683–684, 1999. doi: 10.1117/12.354954.
1059. Jean-Numa Gillet and Yunlong Sheng. Iterative Simulated Quenching for Designing Irregular-
Spot-Array Generators. Applied Optics, 39(20):3456–3465, July 10, 2000, Optical Society of
America (OSA): Washington, DC, USA. doi: 10.1364/AO.39.003456.
1060. Manfred Gilli and Peter Winker. Heuristic Optimization Methods in Econometrics. In Hand-
book on Computational Econometrics [263]. Wiley Interscience: Chichester, West Sussex, UK,
October 21, 2007. doi: 10.1002/9780470748916.ch3. Fully available at http://www.unige.ch/ses/metri/gilli/Teaching/GilliWinkerHandbook.pdf [accessed 2010-09-17].
1061. Third European Working Session on Learning (EWSL’88), October 3–5, 1988, Glasgow,
Scotland, UK.
1062. Alexander J. Glass, Joseph W. Goodman, Milton Chang, Arthur H. Guenther, and Toshim-
itsu Asakura, editors. Proceedings of the 18th Congress of the International Commission for
Optics, August 2, 1999, San Francisco, CA, USA, volume 3749 in The Proceedings of SPIE.
Society of Photographic Instrumentation Engineers (SPIE): Bellingham, WA, USA.
1063. Fred Glover. Heuristics for Integer Programming using Surrogate Constraints. Management
Science Report Series 74-6, University of Colorado, Business Research Division: Boulder,
CO, USA, 1974. asin: B0006WBQOG. ID: OL14555131M. See also [1064].
1064. Fred Glover. Heuristics for Integer Programming using Surrogate Constraints. Decision
Sciences – A Journal of the Decision Sciences Institute, 8(1):156–166, January 1977, Decision
Sciences Institute: Atlanta, GA, USA and John Wiley & Sons Ltd.: New York, NY, USA.
doi: 10.1111/j.1540-5915.1977.tb01074.x. See also [1063].
1065. Fred Glover. Future Paths for Integer Programming and Links to Artificial Intelligence.
Computers & Operations Research, 13(5):533–549, 1986, Pergamon Press: Oxford, UK and
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0305-0548(86)90048-1. Fully
available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Future
%20Paths%20for%20Integer%20Programming.pdf [accessed 2010-10-03].
1066. Fred Glover. Tabu Search – Part I. ORSA Journal on Computing, 1(3):190–206, Sum-
mer 1989, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.1.3.190.
Fully available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Part
%20I-ORSA.pdf [accessed 2008-03-27]. See also [1067].
1067. Fred Glover. Tabu Search – Part II. ORSA Journal on Computing, 2(1):4–32,
Winter 1990, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.2.1.4.
Fully available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Part
%20II-ORSA-aw.pdf [accessed 2011-12-05]. See also [1066].
1068. Fred Glover and Gary A. Kochenberger, editors. Handbook of Metaheuristics, volume 57
in International Series in Operations Research & Management Science. Kluwer Academic
Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 2003.
doi: 10.1007/b101874. isbn: 0-306-48056-5 and 1-4020-7263-5. Series Editor Frederick
S. Hillier.
1069. Fred Glover, Éric D. Taillard, and Dominique de Werra. A User’s Guide to Tabu Search.
Annals of Operations Research, 41(1):3–28, March 1993, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1007/BF02078647. Special issue on Tabu search.
1070. Helen G. Cobb and John J. Grefenstette. Genetic Algorithms for Tracking Changing En-
vironments. In ICGA’93 [969], pages 523–529, 1993. Fully available at ftp://ftp.aic.
nrl.navy.mil/pub/papers/1993/AIC-93-004.ps [accessed 2008-07-19].
1071. William L. Goffe, Gary D. Ferrier, and John Rogers. Global Optimization of Statistical Func-
tions With Simulated Annealing. Journal of Econometrics, 60(1-2):65–99, January–February
1994, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0304-4076(94)90038-8. Fully
available at http://cook.rfe.org/anneal-preprint.pdf [accessed 2009-09-18]. Partly
available at http://econpapers.repec.org/software/codfortra/simanneal.htm
[accessed 2009-09-18].
1072. J. Peter Gogarten and Elena Hilario. Inteins, Introns, and Homing Endonucleases:
Recent Revelations about the Life Cycle of Parasitic Genetic Elements. BMC Evo-
lutionary Biology, 6(94), November 13, 2006, BioMed Central Ltd.: London, UK.
doi: 10.1186/1471-2148-6-94. Fully available at http://www.biomedcentral.com/
content/pdf/1471-2148-6-94.pdf [accessed 2008-03-23].
1073. David Edward Goldberg. Computer-Aided Gas Pipeline Operation using Genetic Algorithms
and Rule Learning. Dissertation Abstracts International (DAI), 44(10):3174B, April 1984,
ProQuest: Ann Arbor, MI, USA. ID: 750217171. See also [1074].
1074. David Edward Goldberg. Computer-aided Gas Pipeline Operation using Genetic Algorithms
and Rule Learning. PhD thesis, University of Michigan: Ann Arbor, MI, USA, 1983. Uni-
versity Microfilms No.: 8402282. See also [1073].
1075. David Edward Goldberg. Genetic Algorithms in Search, Optimization, and Ma-
chine Learning. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA,
1989. isbn: 0-201-15767-5. Google Books ID: 2IIJAAAACAAJ, 3_RQAAAAMAAJ, and
bO9SAAAACAAJ. OCLC: 17674450, 246916144, 255664278, and 299467773. Library of
Congress Control Number (LCCN): 88006276. GBV-Identification (PPN): 250764105,
301364087, 357619773, 357663551, 478167571, 499485440, 558515053, and
62589894X. LC Classification: QA402.5 .G635 1989.
1076. David Edward Goldberg. Genetic Algorithms and Walsh Functions: Part I, A Gentle Intro-
duction. Complex Systems, 3(2):129–152, 1989, Complex Systems Publications, Inc.: Cham-
paign, IL, USA. Fully available at http://www.complex-systems.com/pdf/03-2-2.
pdf [accessed 2010-07-23].
1077. David Edward Goldberg. Genetic Algorithms and Walsh Functions: Part II, Deception
and Its Analysis. Complex Systems, 3(2):153–171, 1989, Complex Systems Publications,
Inc.: Champaign, IL, USA. Fully available at http://www.complex-systems.com/pdf/
03-2-3.pdf [accessed 2010-07-23].
1078. David Edward Goldberg. Making Genetic Algorithms Fly: A Lesson from the Wright Broth-
ers. Advanced Technology For Developers, 2:1–8, February 1993, High-Tech Communications:
Sewickley, PA, USA.
1079. David Edward Goldberg and Kalyanmoy Deb. A Comparative Analysis of Selection Schemes
Used in Genetic Algorithms. In FOGA’90 [2562], pages 69–93, 1990. Fully available
at http://www.cse.unr.edu/˜sushil/class/gas/papers/Select.pdf [accessed 2010-08-03]. CiteSeerx : 10.1.1.101.9494.
1080. David Edward Goldberg and Jon T. Richardson. Genetic Algorithms with Sharing for Mul-
timodal Function Optimization. In ICGA’87 [1132], pages 41–49, 1987.
1081. David Edward Goldberg and Liwei Wang. Adaptive Niching via Coevolutionary Sharing. In
Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163], Chapter 2, pages 21–38. John Wiley & Sons
Ltd.: New York, NY, USA, 1998. See also [1082].
1082. David Edward Goldberg and Liwei Wang. Adaptive Niching via Coevolutionary Sharing.
IlliGAL Report 97007, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of
Computer Science, Department of General Engineering, University of Illinois at Urbana-
Champaign: Urbana-Champaign, IL, USA, August 1997. Fully available at http://
www.illigal.uiuc.edu/pub/papers/IlliGALs/97007.ps.Z and www.lania.mx/˜ccoello/EMOO/goldberg98.pdf.gz [accessed 2009-08-28]. CiteSeerx : 10.1.1.15.5500.
See also [1081].
1083. David Edward Goldberg, Kalyanmoy Deb, and Bradley Korb. Messy Genetic Algorithms:
Motivation, Analysis, and First Results. Complex Systems, 3(5):493–530, 1989, Com-
plex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://www.
complex-systems.com/pdf/03-5-5.pdf [accessed 2010-08-09]. asin: B00071YQK2.
1084. David Edward Goldberg, Kalyanmoy Deb, and Bradley Korb. Messy Genetic Algorithms
Revisited: Studies in Mixed Size and Scale. Complex Systems, 4(4):415–444, 1990, Com-
plex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://www.
complex-systems.com/pdf/04-4-4.pdf [accessed 2010-08-09].
1085. David Edward Goldberg, Kalyanmoy Deb, and Jeffrey Horn. Massive Multimodality, Decep-
tion, and Genetic Algorithms. In PPSN II [1827], pages 37–48, 1992. See also [1086].
1086. David Edward Goldberg, Kalyanmoy Deb, and Jeffrey Horn. Massive Multimodality,
Deception, and Genetic Algorithms. Illigal report, Illinois Genetic Algorithms Labora-
tory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, April 1992.
CiteSeerx : 10.1.1.48.8442. See also [1085].
1087. David Edward Goldberg, Kalyanmoy Deb, Hillol Kargupta, and Georges Raif Harik. Rapid,
Accurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. In
ICGA’93 [969], pages 56–64, 1993. Fully available at https://eprints.kfupm.edu.
sa/60781/ [accessed 2008-10-18]. CiteSeerx : 10.1.1.49.3524. See also [1088].
1088. David Edward Goldberg, Kalyanmoy Deb, Hillol Kargupta, and Georges Raif Harik. Rapid,
Accurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. IlliGAL
Report 93004, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer
Science, Department of General Engineering, University of Illinois at Urbana-Champaign:
Urbana-Champaign, IL, USA, February 1993. Fully available at http://www.illigal.
uiuc.edu/pub/papers/IlliGALs/93004.ps.Z [accessed 2010-08-09]. See also [1087].
1089. Bruce L. Golden, James S. DeArmon, and Edward K. Baker. Computational Experiments
with Algorithms for a Class of Routing Problems. Computers & Operations Research, 10(1):
47–59, 1983, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0305-0548(83)90026-6.
1090. Bruce L. Golden, Edward A. Wasil, James P. Kelly, and I-Ming Chao. The Impact of
Metaheuristics on Solving the Vehicle Routing Problem: Algorithms, Problem Sets, and
Computational Results. In Fleet Management and Logistics [651], pages 33–56. Springer
Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA,
USA, 1998.
1091. Bruce L. Golden, S. (Raghu) Raghavan, and Edward A. Wasil, editors. The Vehicle
Routing Problem: Latest Advances and New Challenges, volume 43 in Operations Re-
search/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Germany, 2008.
doi: 10.1007/978-0-387-77778-8. isbn: 0-387-77777-6. Library of Congress Control Number
(LCCN): 2008921562.
1092. Oded Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge Univer-
sity Press: Cambridge, UK, 2008. Google Books ID: EuguvA-w5OEC. Library of Congress
Control Number (LCCN): 2008006750.
1093. Oded Goldreich, Arnold L. Rosenberg, and Alan L. Selman, editors. Theoretical Computer
Science: Essays in Memory of Shimon Even, volume 3895 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 2006. doi: 10.1007/11685654.
isbn: 3-540-32880-7. OCLC: 65427613, 318290447, 487447128, 612704970, and
629703194. Library of Congress Control Number (LCCN): 2006922002. GBV-
Identification (PPN): 508402778 and 512697760. LC Classification: MLCM 2006/41498
(Q).
1094. Erik F. Golen, Bo Yuan, and Nirmala Shenoy. An Evolutionary Approach to
Underwater Sensor Deployment. In GECCO’09-I [2342], pages 1925–1926, 2009.
doi: 10.1145/1569901.1570240.
1095. Matteo Golfarelli, Dario Maio, and Stefano Rizzi. Vertical Fragmentation of Views in
Relational Data Warehouses. In Atti del Settimo Convegno Nazionale Sistemi Evoluti
per Basi di Dati [283], pages 19–33, 1999. Fully available at http://www-db.deis.
unibo.it/˜srizzi/PDF/sebd99.pdf [accessed 2011-04-11]. CiteSeerx : 10.1.1.45.5959
and 10.1.1.64.414.
1096. Matteo Golfarelli, Vittorio Maniezzo, and Stefano Rizzi. Materialization of Fragmented Views
in Multidimensional Databases. Data & Knowledge Engineering, 49(3):325–351, June 2004,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.datak.2003.11.001.
CiteSeerx : 10.1.1.81.6534.
1097. Gene Howard Golub and Charles F. Van Loan. Matrix Computations, volume 3 in Johns
Hopkins Studies in the Mathematical Sciences. Johns Hopkins University (JHU) Press: Baltimore, MD, USA, 3rd edition, 1996. isbn: 0801837391, 080185413X, and
0801854148. Google Books ID: mlOa7wPX6OYC. OCLC: 34515797. Library of Congress
Control Number (LCCN): 96014291. LC Classification: QA188 .G65 1996.
1098. Karthik Gomadam, Jon Lathem, John A. Miller, Meenakshi Nagarajan, Cary Pennington,
Amit P. Sheth, Kunal Verma, and Zixin Wu. Semantic Web Services Challenge 2006 - Phase
I. In First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery [2688], 2006. See also [2990].
1099. Takashi Gomi, editor. Proceedings of the International Symposium on Evolutionary Robotics
– From Intelligent Robotics to Artificial Life (ER’01), October 18–19, 2001, Tokyo, Japan,
volume 2217 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-42737-6.
1100. G. Gong and Bojan Cestnik. Hepatitis Data Set. UCI Machine Learning Reposi-
tory, Center for Machine Learning and Intelligent Systems, Donald Bren School of In-
formation and Computer Science, University of California: Irvine, CA, USA, Novem-
ber 1, 1988. Fully available at http://archive.ics.uci.edu/ml/datasets/Hepatitis [accessed 2008-12-27].
1101. Yiyuan Gong and Alex S. Fukunaga. Fault Tolerance in Distributed Genetic Algorithms with
Tree Topologies. In CEC’09 [1350], pages 968–975, 2009. doi: 10.1109/CEC.2009.4983050.
INSPEC Accession Number: 10688632.
1102. Juan R. González, David Alejandro Pelta, Carlos Cruz, Germán Terrazas, and Natalio
Krasnogor, editors. Proceedings of the 4th International Workshop Nature Inspired Coop-
erative Strategies for Optimization (NICSO’10), May 12–14, 2010, Granada, Spain, vol-
ume 284 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-642-12538-6. isbn: 3-642-12537-9. Google Books ID: OAfiU1kGWWAC.
Library of Congress Control Number (LCCN): 20100924760.
1103. Teofilo F. Gonzalez, editor. Handbook of Approximation Algorithms and Metaheuristics,
Chapmann & Hall/CRC Computer and Information Science Series. Chapman & Hall/CRC:
Boca Raton, FL, USA, 2007. isbn: 1584885505. Google Books ID: QK3_VU8ngK8C.
1104. Erik D. Goodman, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO’01 LBP), July 7–11, 2001, Holiday Inn Golden Gateway Hotel: San
Francisco, CA, USA. AAAI Press: Menlo Park, CA, USA. Google Books ID: FjhHHQAACAAJ
and FzhHHQAACAAJ. See also [2570].
1105. Janaki Gopalan, Reda Alhajj, and Ken Barker. Discovering Accurate and Interesting Classi-
fication Rules Using Genetic Algorithm. In DMIN’06 [656], pages 389–395, 2006. Fully available at http://ww1.ucmss.com/books/LFS/CSREA2006/DMI5509.pdf [accessed 2007-07-29].
1106. V. Scott Gordon and L. Darrell Whitley. Serial and Parallel Genetic Algorithms as Function
Optimizers. In ICGA’93 [969], pages 177–183, 1993. Fully available at http://eprints.
kfupm.edu.sa/64675/ and www.cs.colostate.edu/˜genitor/1993/icga93.ps.
gz [accessed 2009-08-20]. CiteSeerx : 10.1.1.54.3472.
1107. Martina Gorges-Schleuter. ASPARAGOS: An Asynchronous Parallel Genetic Optimization
Strategy. In ICGA’89 [2414], pages 422–427, 1989.
1108. Martina Gorges-Schleuter. Explicit Parallelism of Genetic Algorithms through Population
Structures. In PPSN I [2438], pages 150–159, 1990. doi: 10.1007/BFb0029746.
1109. James Gosling and Henry McGilton. The Java Language Environment – A White Paper.
Technical Report, Sun Microsystems, Inc.: Santa Clara, CA, USA, May 1996. Fully available
at http://java.sun.com/docs/white/langenv/ [accessed 2007-07-03].
1110. James Gosling, Bill Joy, Guy Steele, and Gilad Bracha. The JavaTM Language Specification,
The Java Series. Prentice Hall International Inc.: Upper Saddle River, NJ, USA, Sun Mi-
crosystems Press (SMP): Santa Clara, CA, USA, and Addison-Wesley Professional: Reading,
MA, USA, 3rd edition, May 2005. isbn: 0-321-24678-0. Fully available at http://java.
sun.com/docs/books/jls/ [accessed 2007-09-14].
1111. Simon Goss, R. Beckers, Jean-Louis Deneubourg, S. Aron, and Jacques M. Pasteels. How
Trail Laying and Trail Following Can Solve Foraging Problems for Ant Colonies. In NATO
Advanced Research Workshop on Behavioural Mechanisms of Food Selection [1306], pages
661–678, 1989.
1112. William Sealy Gosset. The Probable Error of a Mean. Biometrika, 6(1):1–25, March 1908,
Oxford University Press, Inc.: New York, NY, USA. Fully available at http://www.york.
ac.uk/depts/maths/histstat/student.pdf [accessed 2007-09-30]. Because the author was
not allowed to publish this article, he used the pseudonym “Student”. Reprinted in [1113].
1113. William Sealy Gosset. The Probable Error of a Mean. In “Student’s” Collected Papers [1114],
pages 11–34. Cambridge University Press: Cambridge, UK and Biometrika Office: London,
UK, 1942. Reprint of [1112].
1114. William Sealy Gosset, author. Egon Sharpe Pearson and John Wishart, editors. “Student’s”
Collected Papers. Cambridge University Press: Cambridge, UK and Biometrika Office:
London, UK, 1942. Google Books ID: 39fvGAAACAAJ, 3tfvGAAACAAJ, SdKEAAAAIAAJ,
UoJsAAAAMAAJ, and rqM2HQAACAAJ. OCLC: 220708760, 258570743, 313515170, and
318188805. With a Foreword by Launce McMullen.
1115. Jens Gottlieb and Günther R. Raidl, editors. Proceedings of the 4th European Confer-
ence on Evolutionary Computation in Combinatorial Optimization (EvoCOP’04), April 5–7,
2004, Coimbra, Portugal, volume 3004/2004 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96499. isbn: 3-540-21367-8.
1116. Jens Gottlieb and Günther R. Raidl, editors. Proceedings of the 6th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP’06), April 10–12,
2006, Budapest, Hungary, volume 3906/2006 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-33178-6.
1117. Jens Gottlieb, Elena Marchiori, and Claudio Rossi. Evolutionary Algorithms for the Satisfiability Problem. Evolutionary Computation, 10(1):35–50, Spring 2002, MIT Press: Cam-
bridge, MA, USA. doi: 10.1162/106365602317301763. Fully available at http://www.
cs.ru.nl/˜elenam/fsat.pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.77.4306. PubMed
ID: 11911782. See also [2335].
1118. Georg Gottlob and Toby Walsh, editors. Proceedings of the Eighteenth International Joint
Conference on Artificial Intelligence (IJCAI’03), August 9–15, 2003, Acapulco, México. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 0-127-05661-0. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-2003/content.htm [accessed 2008-04-01].
See also [1485].
1119. Gang Gou, Jeffrey Xu Yu, and Hongjun Lu. A* Search: An Efficient and Flexible Approach
to Materialized View Selection. IEEE Transactions on Systems, Man, and Cybernetics – Part
C: Applications and Reviews, 36(3):411–425, May 2006, IEEE Systems, Man, and Cybernet-
ics Society: New York, NY, USA. doi: 10.1109/TSMCC.2004.843248. INSPEC Accession
Number: 8906628.
1120. Paul Grace. Genetic Programming and Protocol Configuration. Master’s thesis, Lancaster
University, Computing Department: Lancaster, Lancashire, UK, September 2000, Gordon
Blair, Supervisor. Fully available at http://www.lancs.ac.uk/postgrad/gracep/
msc.pdf [accessed 2008-06-23].
1121. Jörn Grahl. Estimation of Distribution Algorithms in Logistics: Analysis, Design, and Appli-
cation. PhD thesis, Universität Mannheim, Fakultät für Betriebswirtschaftslehre: Mannheim,
Germany, April 16, 2008, Daniel J. Veit, Supervisor, Hans H. Bauer and Stefan Minner,
Committee members. Fully available at http://madoc.bib.uni-mannheim.de/madoc/
volltexte/2008/1897/ [accessed 2010-06-18]. urn: urn:nbn:de:bsz:180-madoc-18974.
1122. The 2006 China-UK Workshop on Nature Inspired Computation and Applications
(CUWNICA’06), October 25–27, 2006, Grand Hotel Overseas Traders Club: Héféi, Ānhuı̄,
China.
1123. Pierre-Paul Grassé. La Reconstruction du Nid et les Coordinations Inter-Individuelles
chez Bellicositermes Natalensis et Cubitermes sp. la Théorie de la Stigmergie: Essai
d’Interprétation des Termites Constructeurs. Insectes Sociaux, 6(1):41–80, March 1959,
Springer-Verlag GmbH: Berlin, Germany and Birkhäuser Verlag: Basel, Switzerland.
doi: 10.1007/BF02223791.
1124. Federico Greco, editor. Traveling Salesman Problem. IN-TECH Education and Publish-
ing: Vienna, Austria, September 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-10-7&type=B [accessed 2009-01-13].
1125. Garrison W. Greenwood, Xiaobo Sharon Hu, and Joseph G. D’Ambrosio. Fitness Functions
for Multiple Objective Optimization Problems: Combining Preferences with Pareto Rankings.
In FOGA’96 [256], pages 437–455, 1996.
1126. John J. Grefenstette. Incorporating Problem Specific Knowledge into Genetic Algorithms.
In Genetic Algorithms and Simulated Annealing [694], pages 42–60. Pitman: London, UK,
1987–1990.
1127. John J. Grefenstette. Credit Assignment in Rule Discovery Systems Based on Ge-
netic Algorithms. Machine Learning, 3(2-3):225–245, October 1988, Kluwer Academic
Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1022614421909.
1128. John J. Grefenstette. Conditions for Implicit Parallelism. In FOGA’90 [2562], pages 252–261,
1990. Fully available at http://www.aic.nrl.navy.mil/papers/1991/AIC-91-012.
ps.Z [accessed 2010-07-30]. CiteSeerx : 10.1.1.52.7729.
1129. John J. Grefenstette. Deception Considered Harmful. In FOGA’92 [2927], pages 75–91, 1992.
CiteSeerx : 10.1.1.49.1448.
1130. John J. Grefenstette. Predictive Models Using Fitness Distributions of Genetic Operators. In
FOGA 3 [2932], pages 139–161, 1994. Fully available at http://reference.kfupm.edu.
sa/content/p/r/predictive_models_using_fitness_distribu_672029.pdf [accessed 2009-07-10]. CiteSeerx: 10.1.1.52.4341.
1131. John J. Grefenstette, editor. Proceedings of the 1st International Conference on Ge-
netic Algorithms and their Applications (ICGA’85), June 24–26, 1985, Carnegy Mellon
University (CMU): Pittsburgh, PA, USA. Lawrence Erlbaum Associates: Hillsdale, NJ,
USA. isbn: 0-8058-0426-9. OCLC: 19702892, 174343913, 475815642, 493612786,
497548640, and 606101493.
1132. John J. Grefenstette, editor. Proceedings of the Second International Conference on Genetic
Algorithms and their Applications (ICGA’87), July 28–31, 1987, Massachusetts Institute of
Technology (MIT): Cambridge, MA, USA. Lawrence Erlbaum Associates, Inc. (LEA): Mah-
wah, NJ, USA. isbn: 0-8058-0158-8. Google Books ID: 0hdRAAAAMAAJ, 5N7LAQAACAAJ,
and bvlQAAAAMAAJ. OCLC: 17252562.
1133. John J. Grefenstette and James E. Baker. How Genetic Algorithms Work: A Critical Look
at Implicit Parallelism. In ICGA’89 [2414], pages 20–27, 1989.
1134. Horst Greiner. Robust Filter Design by Stochastic Optimization. In Optical Interference
Coatings [6], pages 150–160, 1994. doi: 10.1117/12.192107.
1135. Horst Greiner. Robust Optical Coating Design with Evolutionary Strategies. Applied Optics,
35(28):5477–5483, October 1, 1996, Optical Society of America (OSA): Washington, DC,
USA. doi: 10.1364/AO.35.005477.
1136. Andreas Griewank and Philippe L. Toint. Local Convergence Analysis for Parti-
tioned Quasi-Newton Updates. Numerische Mathematik, 39(3):429–448, October 1982.
doi: 10.1007/BF01407874.
1137. Andreas Griewank and Philippe L. Toint. Partitioned Variable Metric Updates for Large
Structured Optimization Problems. Numerische Mathematik, 39(1):119–137, February 1982.
doi: 10.1007/BF01399316.
1138. David F. Griffiths and G. A. Watson, editors. Numerical Analysis – Proceedings of the 16th
Dundee Biennial Conference in Numerical Analysis, June 27–30, 1995, University of Dundee:
Nethergate, Dundee, Scotland, UK, volume 344 in Pitman Research Notes in Mathemat-
ics. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, Chapman & Hall:
London, UK, and CRC Press, Inc.: Boca Raton, FL, USA. isbn: 0-582-27633-0. GBV-
Identification (PPN): 198456158 and 27380491X.
1139. Crina Grosan and Ajith Abraham. Hybrid Evolutionary Algorithms: Methodolo-
gies, Architectures, and Reviews. In Hybrid Evolutionary Algorithms [1141], pages
1–17. Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-73297-6 1.
Fully available at http://www.softcomputing.net/hea1.pdf [accessed 2010-09-11].
CiteSeerx : 10.1.1.160.9007.
1140. Crina Grosan and Ajith Abraham. A Novel Global Optimization Technique for High Dimen-
sional Functions. International Journal of Intelligent Systems, 24(4):421–440, April 2009,
Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1002/int.20343. Fully available
at http://www.softcomputing.net/wil2009.pdf [accessed 2009-08-07].
1141. Crina Grosan, Ajith Abraham, and Hisao Ishibuchi, editors. Hybrid Evolutionary Algorithms,
volume 75/2007 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Hei-
delberg, 2007. doi: 10.1007/978-3-540-73297-6. isbn: 3-540-73296-9. Google Books
ID: II7LCqiGFlEC. Library of Congress Control Number (LCCN): 2007932399.
1142. David A. Grossman, Luis Gravano, ChengXiang Zhai, Otthein Herzog, and David A. Evans,
editors. Proceedings of ACM Thirteenth Conference on Information and Knowledge Man-
agement (CIKM’04), November 8–13, 2004, Hyatt Arlington Hotel: Washington, DC, USA.
isbn: 1-58113-874-1. Partly available at http://ir.iit.edu/cikm2004/ [accessed 2011-04-11].
1143. Frédéric Gruau. Neural Network Synthesis using Cellular Encoding and the Genetic Algo-
rithm. PhD thesis, l’École Normale Supérieure de Lyon, Laboratoire de l’Informatique du
Parallélisme (LIP-IMAG), Unité de Recherche Associée au CNRS: Lyon, France and Centre
d’Étude Nucléaire de Grenoble (CENG), Départment de Recherche Fondamentale Matière
Condensée: Grenoble, France, January 4, 1994, Maurice Nivat, Jean-Arcady Meyer, Jacques
Demongeot, Michel Cosnard, Jacques Mazoyer, Pierre Peretto, and L. Darrell Whitley, Com-
mittee members. Fully available at ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/PhD/
PhD1994/PhD1994-01-E.ps.Z [accessed 2010-08-05]. CiteSeerx : 10.1.1.29.5939.
1144. Frédéric Gruau and L. Darrell Whitley. Adding Learning to the Cellular Development
of Neural Networks: Evolution and the Baldwin Effect. Evolutionary Computation, 1(3):
213–233, Fall 1993, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1993.1.3.213.
CiteSeerx : 10.1.1.34.1656.
1145. Jun-hua Gu, Xiu-li Zhao, and Qing Tan. Application of Ant Colony System to Materi-
alized Views Selection. Journal of Computer Applications (Jisuan Ji Yı̀ngyòng), 27(11):
2763–2765, November 2007, Chinese Electronic Periodical Services (CEPS): Taipei, Tai-
wan. Fully available at http://epub.cnki.net/grid2008/docdown/docdownload.
aspx?filename=JSJY200711049&dbcode=CJFD&dflag=pdfdown [accessed 2011-04-10].
ID: CNKI:SUN:JSJY.0.2007-11-049.
1146. Zhifeng Gu, Bin Xu, and Juanzi Li. Inheritance-Aware Document-Driven Service Composi-
tion. In CEC/EEE’07 [1344], pages 513–516, 2007. doi: 10.1109/CEC-EEE.2007.56. INSPEC
Accession Number: 9868636. See also [319, 1797, 2998, 3010, 3011].
1147. Frank Guerin and Wamberto Weber Vasconcelos, editors. AISB 2008 Convention: Com-
munication, Interaction and Social Intelligence (AISB’08), April 1–4, 2008, University of
Aberdeen: Aberdeen, Scotland, UK, volume 12 in Computing & Philosophy. Society for
the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester,
West Sussex, UK. isbn: 1-902956-71-0. Fully available at http://www.aisb.org.uk/
publications/proceedings.shtml [accessed 2008-09-10].
1148. Michael Guntsch and Martin Middendorf. Pheromone Modification Strategies for Ant
Algorithms Applied to Dynamic TSP. In EvoWorkshops’01 [343], pages 213–222, 2001.
doi: 10.1007/3-540-45365-2 22. Fully available at http://pacosy.informatik.
uni-leipzig.de/pv/Personen/middendorf/Papers/Papers/EvoCOP2001.
Report.ps [accessed 2009-06-29].
1149. Michael Guntsch, Martin Middendorf, and Hartmut Schmeck. An Ant Colony Optimiza-
tion Approach to Dynamic TSP. In GECCO’01 [2570], pages 860–867, 2001. Fully
available at http://pacosy.informatik.uni-leipzig.de/middendorf/Papers/
GECCO2001-Pap.ps [accessed 2009-07-13]. CiteSeerx : 10.1.1.58.1398.
1150. Maozu Guo, Liang Zhao, and Lipo Wang, editors. Proceedings of the 4th International
Conference on Natural Computation (ICNC’08), October 18–20, 2008, Jı̀’nán, Shāndōng,
China. IEEE Computer Society: Piscataway, NJ, USA. Library of Congress Control Number
(LCCN): 2008904182. Catalogue no.: CFP08CNC-PRT.
1151. Ashish Gupta, Oded Shmueli, and Jennifer Widom, editors. Proceedings of the 24th International
Conference on Very Large Data Bases (VLDB’98), August 24–26, 1998, New York, NY, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-566-5.
1152. Himanshu Gupta. Selection of Views to Materialize in a Data Warehouse. In
ICDT’97 [34], pages 98–112, 1997. doi: 10.1007/3-540-62222-5 39. Fully avail-
able at http://ilpubs.stanford.edu:8090/161/1/1996-34.pdf [accessed 2011-04-12].
CiteSeerx : 10.1.1.30.3892. See also [1154].
1153. Himanshu Gupta and Inderpal Singh Mumick. Selection of Views to Material-
ize under Maintenance Cost Constraint. In ICDT’99 [249], pages 453–470, 1999.
doi: 10.1007/3-540-49257-7 28.
1154. Himanshu Gupta and Inderpal Singh Mumick. Selection of Views to Materialize
in a Data Warehouse. IEEE Transactions on Knowledge and Data Engineering, 17
(1):24–43, January 2005, IEEE Computer Society Press: Los Alamitos, CA, USA.
doi: 10.1109/TKDE.2005.16. INSPEC Accession Number: 8287192. See also [1152].
1155. L. S. Gurin and Leonard A. Rastrigin. Convergence of the Random Search Method in
the Presence of Noise. Automation and Remote Control, 26:1505–1511, 1965, Springer Sci-
ence+Business Media, Inc.: New York, NY, USA and MAIK Nauka/Interperiodica – Pleiades
Publishing, Inc.: Moscow, Russia.
1156. Steven Matt Gustafson. An Analysis of Diversity in Genetic Programming. PhD thesis, Uni-
versity of Nottingham, School of Computer Science & IT: Nottingham, UK, February 2004.
Fully available at http://www.gustafsonresearch.com/thesis_html/index.html
[accessed 2009-07-09]. CiteSeerx: 10.1.1.2.9746.
1157. Steven Matt Gustafson, Anikó Ekárt, Edmund K. Burke, and Graham Kendall.
Problem Difficulty and Code Growth in Genetic Programming. Genetic Program-
ming and Evolvable Machines, 5(3):271–290, September 2004, Springer Netherlands:
Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA,
USA. doi: 10.1023/B:GENP.0000030194.98244.e3. Fully available at http://www.
gustafsonresearch.com/research/publications/gustafson-gpem2004.pdf [accessed 2009-07-09]. CiteSeerx: 10.1.1.59.6659. Submitted March 4, 2003; Revised August 7, 2003.
1158. Sergio Gutiérrez, Grégory Valigiani, Pierre Collet, and Carlos Delgado Kloos. Adaptation
of the ACO Heuristic for Sequencing Learning Activities. In EC-TEL’07 Posters [2962],
2007. Fully available at http://ftp.informatik.rwth-aachen.de/Publications/
CEUR-WS/Vol-280/p15.pdf and http://sunsite.informatik.rwth-aachen.de/
Publications/CEUR-WS/Vol-280/p15.pdf [accessed 2009-09-05].
1159. Tomasz Dominik Gwiazda. Genetic Algorithms Reference – Volume 1 – Crossover for
Single-Objective Numerical Optimization Problems. TOMASZGWIAZDA Ebooks: Lomianki,
Poland, Agata Szczepańska, Translator. isbn: 83-923958-3-2 and 83-923958-4-0.
OCLC: 300393138.
1160. Lorenz Gygax. Statistik für Nutztierethologen – Einführung in die statistische Denkweise:
Was ist, was macht ein statistischer Test? Swiss Federal Veterinary Office (FVO), Centre
for Proper Housing of Ruminants and Pigs: Zürich, Switzerland, 1.1 edition, June 2003. Fully
available at http://www.lorenzgygax.ch/documents/introEtho.pdf and http://
www.proximate-biology.ch/documents/introEtho.pdf [accessed 2009-07-29].
1161. Atidel Ben Hadj-Alouane and James C. Bean. A Genetic Algorithm for the Multiple-Choice
Integer Program. Technical Report 92-50, University of Michigan, Department of Indus-
trial and Operations Engineering: Ann Arbor, MI, USA, September 1992. Fully available
at http://hdl.handle.net/2027.42/3516 [accessed 2008-11-15]. Revised in July 1993.
1162. George Hadley. Nonlinear and Dynamic Programming, World Student Series. Addison-Wesley
Professional: Reading, MA, USA, December 1964. isbn: 0201026643.
1163. Proceedings of the 27th International Conference on Machine Learning (ICML’10), June 21–
24, 2010, Haifa, Israel.
1164. Karen Haigh and Nestor Rychtyckyj, editors. Proceedings of the Twenty-first Innovative
Applications of Artificial Intelligence Conference (IAAI’09), July 14–16, 2009, Pasadena,
CA, USA. AAAI Press: Menlo Park, CA, USA.
1165. Yacov Y. Haimes, editor. Proceedings of the 6th International Conference on Multiple Cri-
teria Decision Making: Decision Making with Multiple Objectives (MCDM’84), June 4–8,
1984, Cleveland, OH, USA, volume 242 in Lecture Notes in Economics and Mathemati-
cal Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387152237 and 3-540-15223-7.
OCLC: 45927944, 310680663, 450849831, 470794588, 613247950, 634256419, and
638892702. Published December 31, 1985.
1166. Yacov Y. Haimes and Ralph E. Steuer, editors. Proceedings of the 14th International Con-
ference on Multiple Criteria Decision Making: Research and Practice in Multi Criteria De-
cision Making (MCDM’98), June 12–18, 1998, University of Virginia: Charlottesville, VA,
USA, volume 487 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 3-540-67266-1. Google Books ID: skkEdJir95QC.
1167. Bruce Hajek. Cooling Schedules for Optimal Annealing. Mathematics of Operations Research
(MOR), 13(2):311–329, May 1988, Institute for Operations Research and the Management
Sciences (INFORMS): Linthicum, MD, USA. doi: 10.1287/moor.13.2.311. Fully available
at http://web.mit.edu/6.435/www/Hajek88.pdf [accessed 2010-09-25].
1168. Hesham H. Hallal, Arnaud Dury, and Alexandre Petrenko. Web-FIM: Automated Framework
for the Inference of Business Software Models. In SERVICES Cup’09 [1351], pages 130–138,
2009. doi: 10.1109/SERVICES-I.2009.106. INSPEC Accession Number: 10975112.
1169. Paul Richard Halmos. Naive Set Theory, Undergraduate Texts in Mathematics. Lit-
ton Educational Publishing, Inc.: New York, NY, USA, 1st, new edition, 1960–1998.
isbn: 0-3879-0092-6. Google Books ID: atw2Z3lqfBIC.
1170. Proceedings of the First International Conference on Metaheuristics and Nature Inspired
Computing (META’06), November 2–4, 2006, Hammamet, Tunisia.
1171. Proceedings of the 2nd International Conference on Metaheuristics and Nature Inspired Com-
puting (META’08), October 29–31, 2008, Hammamet, Tunisia.
1172. Ulrich Hammel and Thomas Bäck. Evolution Strategies on Noisy Functions: How
to Improve Convergence Properties. In PPSN III [693], pages 159–168, 1994.
doi: 10.1007/3-540-58484-6 260. CiteSeerx : 10.1.1.57.5798.
1173. Richard W. Hamming. Error-Detecting and Error-Correcting Codes. Bell System Technical
Journal, 29(2):147–169, 1950, Bell Laboratories: Berkeley Heights, NJ, USA. Fully avail-
able at http://garfield.library.upenn.edu/classics1982/A1982NY35700001.
pdf and http://guest.engelschall.com/˜sb/hamming/ [accessed 2010-12-04].
1174. Michelle Okaley Hammond and Terence Claus Fogarty. Co-operative OuLiPian Genera-
tive Literature using Human Based Evolutionary Computing. In GECCO’05 LBP [2337],
2005. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gecco2005lbp/
papers/56-hammond.pdf [accessed 2010-08-07].
1175. Lin Han and Xingshi He. A Novel Opposition-Based Particle Swarm Optimization for Noisy
Problems. In ICNC’07 [1711], pages 624–629, volume 3, 2007. doi: 10.1109/ICNC.2007.119.
1176. David J. Hand, Heikki Mannila, and Padhraic Smyth. Principles of Data Mining, volume
6 in Adaptive Computation and Machine Learning, Bradford Books. Prentice-Hall Of India
Pvt. Ltd. (PHI): New Delhi, India, August 2001. isbn: 0-262-08290-X and 8120324579.
Google Books ID: 5i2WPwAACAAJ and SdZ-bhVhZGYC. OCLC: 46866355.
1177. Hisashi Handa, Dan Lin, Lee Chapman, and Xin Yao. Robust Solution of Salting
Route Optimisation Using Evolutionary Algorithms. In CEC’06 [3033], pages 3098–3105,
2006. doi: 10.1109/CEC.2006.1688701. Fully available at http://escholarship.lib.
okayama-u.ac.jp/industrial_engineering/2/ [accessed 2008-06-19].
1178. Julia Handl, Simon C. Lovell, and Joshua D. Knowles. Multiobjectivization by
Decomposition of Scalar Cost Functions. In PPSN X [2354], pages 31–40, 2008.
doi: 10.1007/978-3-540-87700-4 4. Fully available at http://dbkgroup.org/handl/
ppsn_decomp.pdf [accessed 2010-07-18].
1179. Proceedings of the Ninth International Conference on Simulated Evolution And Learning
(SEAL’12), December 16–19, 2012, Hanoi, Vietnam.
1180. James V. Hansen, Paul Benjamin Lowry, Rayman Meservy, and Dan McDonald. Ge-
netic Programming for Prevention of Cyberterrorism through Dynamic and Evolv-
ing Intrusion Detection. Decision Support Systems, 43(4):1362–1374, August 2007, El-
sevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.dss.2006.04.004. Fully avail-
able at http://dx.doi.org/10.1016/j.dss.2006.04.004 and http://ssrn.com/
abstract=877981 [accessed 2008-06-17]. Special Issue on Clusters.
1181. Nikolaus Hansen, Raymond Ros, Nikolas Mauny, Marc Schoenauer, and Anne Auger.
PSO Facing Non-Separable and Ill-Conditioned Problems. Technical Report 6447, In-
stitut National de Recherche en Informatique et en Automatique (INRIA), February 11,
2008. Fully available at http://hal.archives-ouvertes.fr/docs/00/25/01/60/
PDF/RR-6447.pdf [accessed 2009-10-20]. See also [147].
1182. Nikolaus Hansen and Andreas Ostermeier. Adapting Arbitrary Normal Mutation Distri-
butions in Evolution Strategies: The Covariance Matrix Adaptation. In CEC’96 [1445],
pages 312–317, 1996. doi: 10.1109/ICEC.1996.542381. Fully available at http://
www.bionik.tu-berlin.de/ftp-papers/CMAES.ps.Z, http://www.ml.inf.
ethz.ch/education/ws_06_07_seminar_mustererkennung/paper_hansen_
ostermeier, and https://eprints.kfupm.edu.sa/22718/1/22718.pdf [accessed 2009-09-18]. CiteSeerx: 10.1.1.35.2147. INSPEC Accession Number: 5375406.
1183. Nikolaus Hansen and Andreas Ostermeier. Convergence Properties of Evolution Strate-
gies with the Derandomized Covariance Matrix Adaption: The (µ/µI , λ)-CMA-ES. In EU-
FIT’97 [3091], pages 650–654, volume 1, 1997. CiteSeerx : 10.1.1.30.648.
1184. Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation
in Evolution Strategies. Evolutionary Computation, 9(2):159–195, 2001, MIT Press:
Cambridge, MA, USA. Fully available at http://www.bionik.tu-berlin.
de/user/niko/cmaartic.pdf, http://www.cs.colostate.edu/˜whitley/CS640/
cmaartic.pdf, and http://www.lri.fr/˜hansen/cmaartic.pdf [accessed 2010-08-10].
CiteSeerx : 10.1.1.13.7462.
1185. Nikolaus Hansen, Andreas Ostermeier, and Andreas Gawelczyk. On the Adaptation of Ar-
bitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adap-
tation. In ICGA’95 [883], pages 57–64, 1995. CiteSeerx : 10.1.1.29.9321.
1186. Nikolaus Hansen, Sibylle D. Müller, and Petros Koumoutsakos. Reducing the Time Com-
plexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-
ES). Evolutionary Computation, 11(1):1–18, Spring 2003, MIT Press: Cambridge, MA,
USA. doi: 10.1162/106365603321828970. Fully available at http://mitpress.mit.edu/
journals/pdf/evco_11_1_1_0.pdf [accessed 2010-03-08]. CiteSeerx : 10.1.1.13.1008.
1187. Nikolaus Hansen, Fabian Gemperle, Anne Auger, and Petros Koumoutsakos. When
Do Heavy-Tail Distributions Help? In PPSN IX [2355], pages 62–71, 2006.
doi: 10.1007/11844297 7.
1188. Nikolaus Hansen, Anne Auger, Steffen Finck, and Raymond Ros. Real-Parameter Black-
Box Optimization Benchmarking 2009: Experimental Setup. Rapports de Recherche
RR-6828, Institut National de Recherche en Informatique et en Automatique (INRIA),
October 16, 2009. Fully available at http://coco.gforge.inria.fr/doku.php?
id=bbob-2009, http://hal.archives-ouvertes.fr/inria-00362649/en/, and
http://hal.inria.fr/inria-00362649/en [accessed 2009-10-28]. Version 3.
1189. Pierre Hansen. The Steepest Ascent Mildest Descent Heuristic for Combinatorial Program-
ming. In Congress on Numerical Methods in Combinatorial Optimization [17], 1986.
1190. Pierre Hansen, editor. Proceedings of the 5th International Conference on Multiple Criteria
Decision Making: Essays and Surveys on Multiple Criteria Decision Making (MCDM’82),
August 9–13, 1982, Mons, Brussels, Belgium, volume 209 in Lecture Notes in Economics
and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387119914 and
3540119914. OCLC: 9350544, 239745974, 310666373, 455931654, 470793901,
497290816, 498330203, 610956168, and 634251590. Published in March 1983.
1191. Jin-Kao Hao, Evelyne Lutton, Edmund M. A. Ronald, Marc Schoenauer, and Dominique
Snyers, editors. Selected Papers of the Third European Conference on Artificial Evolu-
tion (AE’97), October 22–24, 1997, Ecole pour les Etudes et la Recherche en Informa-
tique et Electronique (EERIE): Nı̂mes, France, volume 1363/1998 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0026588.
isbn: 3-540-64169-6. Google Books ID: DtQQwyST3RkC. OCLC: 38519371, 65845265,
190833428, 243486301, and 246325993.
1192. Yue Hao, Jiming Liu, Yuping Wang, Yiu-ming Cheung, and Hujun Yin, editors. Proceedings
of the 2005 International Conference on Computational Intelligence and Security, Part 1
(CIS’05), December 15–19, 2005, Xı̄’ān, Shǎnxı̄, China, volume 3801 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11596448. isbn: 3540308180. Google Books
ID: lGLPmtQFyJkC.
1193. Yue Hao, Jiming Liu, Yuping Wang, Yiu-ming Cheung, and Hujun Yin, editors. Proceedings
of the 2005 International Conference on Computational Intelligence and Security, Part 2
(CIS’05), December 15–19, 2005, Xı̄’ān, Shǎnxı̄, China, volume 3802 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11596981. isbn: 3540308199. Google Books
ID: IH VSc0d1EYC.
1194. Jenny A. Harding, Muhammad Shahbaz, Srinivas, and Andrew Kusiak. Data Min-
ing in Manufacturing: A Review. Journal of Manufacturing Science and Engineer-
ing, 128(4):969–977, November 2006, American Society of Mechanical Engineers (ASME):
New York, NY, USA. Fully available at http://www.icaen.uiowa.edu/˜ankusiak/
Journal-papers/Harding.pdf [accessed 2010-06-27].
1195. Simon Harding and Wolfgang Banzhaf. Fast Genetic Programming on GPUs. In Eu-
roGP’07 [856], pages 90–101, 2007. doi: 10.1007/978-3-540-71605-1 9. Fully avail-
able at http://www.cs.mun.ca/˜banzhaf/papers/eurogp07.pdf [accessed 2008-12-24].
CiteSeerx : 10.1.1.93.1862.
1196. Georges Raif Harik. Learning Gene Linkage to Efficiently Solve Problems of Bounded Diffi-
culty using Genetic Algorithms. PhD thesis, University of Michigan: Ann Arbor, MI, USA,
1997, Keki B. Irani, David Edward Goldberg, Edmund H. Durfee, and Robert K. Lindsay,
Committee members. CiteSeerx : 10.1.1.54.7092. UMI Order No.: GAX97-32090.
1197. Georges Raif Harik. Linkage Learning via Probabilistic Modeling in the ECGA. Il-
liGAL Report 99010, Illinois Genetic Algorithms Laboratory (IlliGAL), Department
of Computer Science, Department of General Engineering, University of Illinois at
Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available
at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99010.ps.Z [accessed 2009-08-27]. CiteSeerx: 10.1.1.33.5508.
1198. Georges Raif Harik, Fernando G. Lobo, and David Edward Goldberg. The Compact
Genetic Algorithm. IlliGAL Report 97006, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, Uni-
versity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, August 1997.
CiteSeerx : 10.1.1.48.906. See also [1199].
1199. Georges Raif Harik, Fernando G. Lobo, and David Edward Goldberg. The Compact Ge-
netic Algorithm. IEEE Transactions on Evolutionary Computation (IEEE-EC), 3(4):287–297,
November 1999, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.797971.
INSPEC Accession Number: 6411650. See also [1198].
1200. Venky Harinarayan, Anan Rajaraman, and Jeffrey David Ullman. Implementing Data Cubes
Efficiently. In SIGMOD’96 [1428], pages 205–216, 1996. doi: 10.1145/233269.233333. See
also [1201].
1201. Venky Harinarayan, Anan Rajaraman, and Jeffrey David Ullman. Implementing Data Cubes
Efficiently. SIGMOD Record, 25(2):205–216, June 1996, Association for Computing Ma-
chinery Special Interest Group on Management of Data (ACM SIGMOD): New York, NY,
USA. doi: 10.1145/235968.233333. Fully available at http://infolab.stanford.edu/
pub/papers/cube.ps, http://www.cs.aau.dk/˜simas/dat5_08/presentations/
Implementing_Data_Cubes_Efficiently.pdf, and http://www.cs.sfu.ca/CC/
459/han/papers/harinarayan96.pdf [accessed 2011-03-28]. CiteSeerx : 10.1.1.63.9928.
See also [1200].
1202. Lisa Lavoie Harlow, Stanley A. Mulaik, and James H. Steiger, editors. What If There Were
No Significance Tests?, Multivariate Applications Book Series. Lawrence Erlbaum Asso-
ciates, Inc. (LEA): Mahwah, NJ, USA, August 1997. isbn: 0805826343. Google Books
ID: lZ2ybZzrnroC. OCLC: 36892849 and 264623722.
1203. Emma Hart, Chris McEwan, Jonathan Timmis, and Andy Hone, editors. Proceedings of the
10th International Conference on Artificial Immune Systems (ICARIS’11), July 18–21, 2011,
University of Cambridge, Computer Laboratory, William Gates Building: Cambridge, UK,
volume 6209 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany.
1204. Peter Elliot Hart, Nils J. Nilsson, and Bertram Raphael. Correction to “A Formal Basis for
the Heuristic Determination of Minimum Cost Paths”. ACM SIGART Bulletin, -(37):28–29,
December 1972, ACM Press: New York, NY, USA. doi: 10.1145/1056777.1056779.
1205. William Eugene Hart, Natalio Krasnogor, and James E. Smith, editors. Recent Advances in
Memetic Algorithms, volume 166/2005 in Studies in Fuzziness and Soft Computing. Springer-
Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/3-540-32363-5. isbn: 3-540-22904-3.
Google Books ID: LYf7YW4DmkUC. OCLC: 56697114 and 318297267. Library of Congress
Control Number (LCCN): 2004111139.
1206. Brad Harvey, James A. Foster, and Deborah Frincke. Byte Code Genetic Programming. In
GP’98 LBP [1605], 1998. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/harvey_1998_bcGP.html and http://www.csds.uidaho.edu/deb/jvm.
pdf [accessed 2008-09-16].
1207. Brad Harvey, James A. Foster, and Deborah Frincke. Towards Byte Code Genetic Program-
ming. In GECCO’99 [211], page 1234, volume 2, 1999. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/harvey_1999_TBCGP.html [accessed 2008-09-16].
CiteSeerx : 10.1.1.22.4974.
1208. Inman Harvey. The Puzzle of the Persistent Question Marks: A Case Study of Genetic Drift.
In ICGA’93 [969], pages 15–22, 1993. See also [1209].
1209. Inman Harvey. The Puzzle of the Persistent Question Marks: A Case Study of Genetic Drift.
Cognitive Science Research Papers (CSRP) 278, University of Sussex, School of Cognitive
Science: Brighton, UK, April 1993. CiteSeerx : 10.1.1.56.3830. issn: 1350-3162. See
also [1208].
1210. Thomas Haselwanter, Paavo Kotinurmi, Matthew Moran, Tomas Vitvar, and Maciej
Zaremba. Dynamic B2B Integration on the Semantic Web Services: SWS Challenge Phase
2. In Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Au-
tomating Web Services Mediation, Choreography and Discovery [2689], 2006. Fully avail-
able at http://sws-challenge.org/workshops/2006-Budva/papers/DERI_WSMX_
SWSChallenge_II.pdf [accessed 2010-12-12]. See also [582, 1940, 3057–3059].
1211. Rainer Hauser and Reinhard Männer. Implementation of Standard Genetic Algorithm on
MIMD Machines. In PPSN III [693], pages 504–513, 1994. doi: 10.1007/3-540-58484-6 293.
CiteSeerx : 10.1.1.26.9333.
1212. Patrick J. Hayes, editor. Proceedings of the 7th International Joint Conference on Artificial
Intelligence (IJCAI’81-I), August 24–28, 1981, University of British Columbia: Vancouver,
BC, Canada, volume 1. John W. Kaufmann Inc.: Washington, DC, USA. Fully available
at http://dli.iiit.ac.in/ijcai/IJCAI-81-VOL%201/CONTENT/content.htm [ac-
cessed 2008-04-01]. OCLC: 490050000. See also [1213].
1213. Patrick J. Hayes, editor. Proceedings of the 7th International Joint Conference on Artifi-
cial Intelligence (IJCAI’81-II), August 24–28, 1981, University of British Columbia: Van-
couver, BC, Canada, volume 2. John W. Kaufmann Inc.: Washington, DC, USA. Fully
available at http://dli.iiit.ac.in/ijcai/IJCAI-81-VOL-2/CONTENT/content.
htm [accessed 2008-04-01]. OCLC: 490050000. See also [1212].
1214. Barbara Hayes-Roth and Richard E. Korf, editors. Proceedings of the 12th National Con-
ference on Artificial Intelligence (AAAI’94), July 31–August 4, 1994, Convention Center:
Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-61102-3. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai94.php [accessed 2007-09-06].
2 volumes.
1215. Thomas D. Haynes, Roger L. Wainwright, Sandip Sen, and Dale A. Schoenefeld. Strongly
Typed Genetic Programming in Evolving Cooperation Strategies. In ICGA’95 [883], pages
271–278, 1995. Fully available at http://www.mcs.utulsa.edu/˜rogerw/papers/
Haynes-icga95.pdf [accessed 2007-10-03]. CiteSeerx : 10.1.1.48.1037.
1216. Thomas D. Haynes, Dale A. Schoenefeld, and Roger L. Wainwright. Type Inheritance
in Strongly Typed Genetic Programming. In Advances in Genetic Programming II [106],
Chapter 18, pages 359–376. MIT Press: Cambridge, MA, USA, 1996. Fully available
at http://www.mcs.utulsa.edu/˜rogerw/papers/Haynes-hier.pdf [accessed 2011-11-23]. CiteSeerx: 10.1.1.109.5646.
1217. Jun He and Xin Yao. Towards an Analytic Framework for Analysing the Computation
Time of Evolutionary Algorithms. Artificial Intelligence, 145(1-2):59–97, April 2003, El-
sevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0004-3702(02)00381-8. Fully
available at http://www.cs.bham.ac.uk/˜xin/papers/published_aij_03.pdf [accessed 2011-08-06]. CiteSeerx: 10.1.1.118.4632.
1218. Richard Heady, George F. Luger, Arthur Maccabe, and Mark Servilla. The Architecture of
a Network Level Intrusion Detection System. Technical Report CS90-20, LA-SUB–93-219,
W-7405, University of New Mexico, Department of Computer Science: Albuquerque, NM,
USA, August 15, 1990. doi: 10.2172/425295. Fully available at http://www.osti.gov/
energycitations/product.biblio.jsp?osti_id=425295 [accessed 2008-09-07].
1219. Wendi Rabiner Heinzelman, Anantha Chandrakasan, and Hari Balakrishnan. Energy-
Efficient Communication Protocol for Wireless Microsensor Networks. In HICSS’00 [2574],
page 10, volume 2, 2000. Fully available at http://pdos.csail.mit.edu/decouto/
papers/heinzelman00.pdf and https://eprints.kfupm.edu.sa/37623/1/
37623.pdf [accessed 2008-11-16]. INSPEC Accession Number: 6530466.
1220. Jörg Heitkötter and David Beasley, editors. Hitch-Hiker’s Guide to Evolutionary Compu-
tation: A List of Frequently Asked Questions (FAQ) (HHGT). ENCORE (The Evolu-
tioNary Computation REpository Network): Santa Fé, NM, USA, 8.1 edition, March 29,
2000. Fully available at http://www.aip.de/˜ast/EvolCompFAQ/, http://www.cse.
dmu.ac.uk/˜rij/gafaq/top.htm, and http://www.etsimo.uniovi.es/ftp/pub/
EC/FAQ/www/ [accessed 2010-08-07]. USENET: comp.ai.genetic.
1221. 12th European Conference on Machine Learning (ECML’02), August 19–23, 2002, Helsinki,
Finland.
1222. Jim Hendler and Devika Subramanian, editors. Proceedings of the Sixteenth National Con-
ference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Ar-
tificial Intelligence (AAAI’99, IAAI’99), July 18–22, 1999, Orlando, FL, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51106-1. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai99.php and http://
www.aaai.org/Conferences/IAAI/iaai99.php [accessed 2007-09-06].
1223. Eighth European Conference on Machine Learning (ECML’95), April 25–27, 1995, Heraclion,
Crete, Greece.
1224. Nanna Suryana Herman, Siti Mariyam Shamsuddin, and Ajith Abraham, editors. Interna-
tional Conference on SOft Computing and PAttern Recognition (SoCPaR’09), December 4–
7, 2009, Melaka International Trade Centre (MITC): Malacca, Malaysia. IEEE (Institute of
Electrical and Electronics Engineers): Piscataway, NJ, USA. Partly available at http://
fke.uitm.edu.my/index.php/socpar-2009 [accessed 2011-02-28].
1225. Hugo Hernández and Christian Blum. Self-Synchronized Duty-Cycling in Sensor Networks
with Energy Harvesting Capabilities: The Static Network Case. In GECCO’09-I [2342],
pages 33–40, 2009. doi: 10.1145/1569901.1569907.
1226. José Herskovits, Sandro Mazorche, and Alfredo Canelas, editors. Proceedings of the 6th
World Congress of Structural and Multidisciplinary Optimization (WCSMO6), May 30–
June 3, 2005, Rio de Janeiro, RJ, Brazil. COPPE Publication: Rio de Janeiro, RJ, Brazil.
isbn: 8528500705. OCLC: 71284315 and 254952854.
1227. José Herskovits, Alfredo Canelas, Henry Cortes, and Miguel Aroztegui, editors. International
Conference on Engineering Optimization (EngOpt’08), June 1–5, 2008, Rio de Janeiro, RJ,
Brazil.
1228. Alain Hertz, Éric D. Taillard, and Dominique de Werra. A Tutorial on Tabu Search. In
AIRO’95 [2169], pages 13–24, 1995. Fully available at http://www.cs.colostate.edu/
˜whitley/CS640/hertz92tutorial.pdf and http://www.dei.unipd.it/˜fisch/
ricop/tabu_search_tutorial.pdf [accessed 2009-07-02]. CiteSeerx : 10.1.1.28.521 and
10.1.1.50.7347.
1229. Malcolm Ian Heywood and A. Nur Zincir-Heywood. Dynamic Page Based Linear Genetic
Programming. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics,
32(3):380–388, June 2002, IEEE Systems, Man, and Cybernetics Society: New York, NY,
USA. doi: 10.1109/TSMCB.2002.999814. INSPEC Accession Number: 7284736.
1230. Joseph F. Hicklin. Application of the Genetic Algorithm to Automatic Program Generation.
Master’s thesis, University of Idaho, Computer Science Department: Moscow, ID, USA, 1986.
Google Books ID: oC5IOAAACAAJ.
1231. Tetsuya Higuchi, Masaya Iwata, and Weixin Liu, editors. The First International Con-
ference on Evolvable Systems: From Biology to Hardware, Revised Papers (ICES’96),
October 7–8, 1996, Tsukuba, Japan, volume 1259/1997 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-63173-9.
isbn: 3-540-63173-9.
1232. Frederick S. Hillier and Gerald J. Lieberman. Introduction to Operations Research. McGraw-
Hill Ltd.: Maidenhead, UK, England, Tsinghua University Press: Běijīng, China, and Holden-
Day: San Francisco, CA, USA, 8th edition, 1980–2005. isbn: 0070600929, 0816238677,
7302122431, and 9751400147. Google Books ID: toNN2yNKRBIC.
1233. W. Daniel Hillis. Co-Evolving Parasites Improve Simulated Evolution as an Optimization
Procedure. Physica D: Nonlinear Phenomena, 42(1-2):228–234, June 1990, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0167-2789(90)90076-2.
1234. Philip F. Hingston, Luigi C. Barone, and Zbigniew Michalewicz, editors. Design by Evo-
lution: Advances in Evolutionary Design, Natural Computing Series. Springer New York:
New York, NY, USA, 2008. doi: 10.1007/978-3-540-74111-4. isbn: 3540741097. Google Books
ID: FzJF0sYDvw8C. asin: 3540741097.
1235. Geoffrey E. Hinton and Steven J. Nowlan. How Learning Can Guide Evolution. Complex
Systems, 1(3):495–502, 1987, Complex Systems Publications, Inc.: Champaign, IL, USA.
Fully available at http://htpprints.yorku.ca/archive/00000172/ [accessed 2008-09-10].
See also [1236].
1236. Geoffrey E. Hinton and Steven J. Nowlan. How Learning Can Guide Evolution. In Adaptive
Individuals in Evolving Populations: Models and Algorithms [255], pages 447–454. Addison-
Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See
also [1235].
1237. Tomoyuki Hiroyasu, Hiroyuki Ishida, Mitsunori Miki, and Hisatake Yokouchi. Difficulties of
Evolutionary Many-Objective Optimization. Intelligent Systems Design Laboratory Research
Reports (ISDL Reports) 20081006004, Doshisha University, Department of Knowledge Engi-
neering and Computer Sciences, Intelligent Systems Design Laboratory: Kyoto, Japan, Oc-
tober 13, 2008. Fully available at http://mikilab.doshisha.ac.jp/dia/research/
report/2008/1006/004/report20081006004.html [accessed 2009-07-18].
1238. Haym Hirsh and Steve A. Chien, editors. Proceedings of the Thirteenth Innovative Appli-
cations of Artificial Intelligence Conference (IAAI’01), April 7–9, 2001, Seattle, WA, USA.
AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-134-7. Partly available at http://
www.aaai.org/Conferences/IAAI/iaai01.php [accessed 2007-09-06].
1239. Brahim Hnich, Mats Carlsson, François Fages, and Francesca Rossi, editors. Recent Ad-
vances in Constraints – Joint ERCIM/CoLogNET International Workshop on Constraint
Solving and Constraint Logic Programming, CSCLP 2005 – Revised Selected and Invited
Papers, June 20–22, 2005, Uppsala, Sweden, volume 3978/2006 in Lecture Notes in Artifi-
cial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/11754602. isbn: 3-540-34215-X. Google Books
ID: YUdkAAAACAAJ. Library of Congress Control Number (LCCN): 2006925094.
1240. Sir Charles Antony Richard (Tony) Hoare. Quicksort. The Computer Journal, Oxford Jour-
nals, 5(1):10–16, 1962, British Computer Society: Swindon, UK. doi: 10.1093/comjnl/5.1.10.
1241. Cem Hocaoğlu and Arthur C. Sanderson. Evolutionary Speciation Using Minimal Represen-
tation Size Clustering. In EP’95 [1848], pages 187–203, 1995.
1242. Frank Hoffmann, Mario Köppen, Frank Klawonn, and Rajkumar Roy, editors. Soft Comput-
ing: Methodologies and Applications, volume 32 in Advances in Intelligent and Soft Com-
puting. Birkhäuser Verlag: Basel, Switzerland, May 2005. doi: 10.1007/3-540-32400-3.
isbn: 3-540-25726-8. Google Books ID: ATswE4Z1m14C. Library of Congress Control
Number (LCCN): 2005925479. Post-conference proceedings of the 8th Online World Con-
ferences on Soft Computing (WSC8) which took place from September 29th to October 10th,
2003, organized by World Federation of Soft Computing (WFSC).
1243. Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies – Simi-
larities and Differences. In PPSN I [2438], pages 455–469, 1990. doi: 10.1007/BFb0029787.
See also [1243].
1244. Frank Hoffmeister and Hans-Paul Schwefel. KORR 2.1 – An implementation of a (µ +,
λ)-evolution strategy. Technical Report, Universität Dortmund, Fachbereich Informatik:
Dortmund, North Rhine-Westphalia, Germany, November 1990. Interner Bericht.
1245. Birgit Hofreiter, editor. Proceedings of the 11th IEEE Conference on Commerce and En-
terprise Computing (CEC’09), July 20–23, 2009, Vienna University of Technology: Vienna,
Austria. IEEE Computer Society: Piscataway, NJ, USA and Curran Associates, Inc.: Red
Hook, NY, USA. isbn: 0-7695-3755-3. Partly available at http://cec2009.isis.
tuwien.ac.at/ [accessed 2011-02-28]. GBV-Identification (PPN): 615078826 and 615079865.
1246. Christian Höhn and Colin R. Reeves. Are Long Path Problems Hard for Genetic Algorithms?
In PPSN IV [2818], pages 134–143, 1996. doi: 10.1007/3-540-61723-X_977.
1247. John Henry Holland. Outline for a Logical Theory of Adaptive Systems. Journal of the
Association for Computing Machinery (JACM), 9(3):297–314, July 1962, ACM Press: New York, NY,
USA. doi: 10.1145/321127.321128.
1248. John Henry Holland. Nonlinear Environments Permitting Efficient Adaptation. In Proceed-
ings of the Symposium on Computer and Information Sciences II [2718], pages 147–164,
1966.
1249. John Henry Holland. Adaptive Plans Optimal for Payoff-Only Environments. In
HICSS’69 [2056], pages 917–920, 1969. DTIC Accession Number: AD0688839.
1250. John Henry Holland. Adaptation in Natural and Artificial Systems: An Introduc-
tory Analysis with Applications to Biology, Control, and Artificial Intelligence. Uni-
versity of Michigan Press: Ann Arbor, MI, USA, 1975. isbn: 0-472-08460-7.
Google Books ID: JE5RAAAAMAAJ, JOv3AQAACAAJ, Qk5RAAAAMAAJ, YE5RAAAAMAAJ, and
vk5RAAAAMAAJ. OCLC: 1531617 and 266623839. Library of Congress Control Number
(LCCN): 74078988. GBV-Identification (PPN): 075982293. LC Classification: QH546
.H64 1975. Newly published as [1252].
1251. John Henry Holland. Adaptive Algorithms for Discovering and Using General Patterns in
Growing Knowledge Bases. International Journal of Policy Analysis and Information Sys-
tems, 4(3):245–268, 1980, Plenum Press: New York, NY, USA.
1252. John Henry Holland. Adaptation in Natural and Artificial Systems: An Introductory Anal-
ysis with Applications to Biology, Control, and Artificial Intelligence. NetLibrary, Inc.:
Dublin, OH, USA, May 1992. isbn: 0585038449. Google Books ID: Jev3AQAACAAJ and
nALfIAAACAAJ. Reprint of [1250].
1253. John Henry Holland. Genetic Algorithms – Computer programs that “evolve” in ways that
resemble natural selection can solve complex problems even their creators do not fully under-
stand. Scientific American, 267(1):44–50, July 1992, Scientific American, Inc.: New York,
NY, USA. Fully available at http://members.fortunecity.com/templarseries/
algo.html, http://www.cc.gatech.edu/˜turk/bio_sim/articles/genetic_
algorithm.pdf, http://www.im.isu.edu.tw/faculty/pwu/heuristic/GA_1.
pdf, and http://www2.econ.iastate.edu/tesfatsi/holland.gaintro.htm
[accessed 2011-11-20].
1254. John Henry Holland and Arthur W. Burks. Adaptive Computing System Capable of Learn-
ing and Discovery. Technical Report 619349 filed on 1984-06-11, United States Patent
and Trademark Office (USPTO): Alexandria, VA, USA, 1987. Fully available at http://
www.freepatentsonline.com/4697242.html and http://www.patentstorm.us/
patents/4697242.html [accessed 2008-04-01]. US Patent Issued on September 29, 1987, Cur-
rent US Class 706/13, Genetic algorithm and genetic programming system 382/155, LEARN-
ING SYSTEMS 706/62 MISCELLANEOUS, Foreign Patent References 8501601 WO Apr.,
1985. Editors: Jerry Smith and John R Lastova.
1255. John Henry Holland and Judith S. Reitman. Cognitive Systems based on Adaptive Algo-
rithms. ACM SIGART Bulletin, 0(63):49, June 1977, ACM Press: New York, NY, USA.
doi: 10.1145/1045343.1045373. See also [1256].
1256. John Henry Holland and Judith S. Reitman. Cognitive Systems based on Adaptive Algo-
rithms. In Selected Papers from the Workshop on Pattern Directed Inference Systems [2869],
pages 313–329, 1977. See also [1255]. Reprinted in [939].
1257. John Henry Holland, Lashon Bernard Booker, Marco Colombetti, Marco Dorigo, David Ed-
ward Goldberg, Stephanie Forrest, Rick L. Riolo, Robert Elliott Smith, Pier Luca
Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson. What Is a Learning Classi-
fier System? In IWLCS’00 [1678], pages 3–32, 2000. doi: 10.1007/3-540-45027-0_1.
Fully available at http://www.cs.unm.edu/˜forrest/publications/Learning
%20Classifier%20Systems.pdf [accessed 2010-12-12].
1258. R. B. Hollstien. Artificial Genetic Adaptation in Computer Control Systems. PhD thesis,
University of Michigan: Ann Arbor, MI, USA, 1971. University Microfilms No.: 71-23.
1259. Geoffrey Holmes, Andrew Donkin, and Ian H. Witten. WEKA: A Machine Learning
Workbench. In ANZIIS’98 [1359], pages 357–361, 1994. doi: 10.1109/ANZIIS.1994.396988.
Fully available at http://www.cs.waikato.ac.nz/˜ml/publications/1994/
Holmes-ANZIIS-WEKA.pdf [accessed 2008-11-19]. INSPEC Accession Number: 4962353.
1260. Diana Holstein and Pablo Moscato. Memetic Algorithms using Guided Local Search: A Case
Study. In New Ideas in Optimization [638], pages 235–244. McGraw-Hill Ltd.: Maidenhead,
UK, England, 1999.
1261. Robert C. Holte. Combinatorial Auctions, Knapsack Problems, and Hill-Climbing Search.
In AI’01 [2624], pages 57–66, 2001. Fully available at http://www.cs.ubc.ca/˜hoos/
Courses/Trento-05/Holte01.pdf [accessed 2010-10-12]. CiteSeerx : 10.1.1.28.754.
1262. Robert C. Holte and Adele Howe, editors. Proceedings of the Twenty-Second AAAI Confer-
ence on Artificial Intelligence (AAAI’07), July 22–26, 2007, Vancouver, BC, Canada. AAAI
Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai07.php [accessed 2007-09-06].
1263. Holger H. Hoos and Thomas Stützle. SATLIB: An Online Resource for Research on SAT. In
SAT2000 – Highlights of Satisfiability Research in the Year 2000 [1042], pages 283–292. IOS
Press: Amsterdam, The Netherlands, 2000. Fully available at http://www.cs.ubc.ca/
˜hoos/Publ/sat2000-satlib.pdf [accessed 2011-09-07]. CiteSeerx: 10.1.1.42.5395. See
also [1264].
1264. Holger H. Hoos and Thomas Stützle, editors. SATLIB - The Satisfiability Library. TU Darm-
stadt: Darmstadt, Germany and University of British Columbia: Vancouver, BC, Canada,
March 22, 2005. Fully available at http://www.cs.ubc.ca/˜hoos/SATLIB/benchm.
html and http://www.satlib.org/ [accessed 2011-09-28]. See also [1263].
1265. J. Hopf, editor. Genetic Algorithms within the Framework of Evolutionary Computation,
Workshop at KI-94. Technical Report MPI-I-94-241, Max-Planck-Institut für Informatik: Saarbrücken, Germany, September 1994.
1266. Jeffrey Horn and David Edward Goldberg. Genetic Algorithm Difficulty and the
Modality of the Fitness Landscape. In FOGA 3 [2932], pages 243–269, 1994.
CiteSeerx : 10.1.1.31.3340.
1267. Jeffrey Horn, David Edward Goldberg, and Kalyanmoy Deb. Long Path Prob-
lems. In PPSN III [693], pages 149–158, 1994. doi: 10.1007/3-540-58484-6_259.
CiteSeerx : 10.1.1.51.9758.
1268. Jeffrey Horn, Nicholas Nafpliotis, and David Edward Goldberg. A Niched Pareto Genetic
Algorithm for Multiobjective Optimization. In CEC’94 [1891], pages 82–87, volume 1, 1994.
doi: 10.1109/ICEC.1994.350037. Fully available at http://www.lania.mx/˜ccoello/
EMOO/horn2.ps.gz [accessed 2007-08-28]. CiteSeerx : 10.1.1.34.4189. Also: IEEE Sympo-
sium on Circuits and Systems, pp. 2264–2267, 1991.
1269. Jorng-Tzong Horng, Yu-Jan Chang, Baw-Jhiune Liu, and Cheng-Yan Kao. Materialized View
Selection using Genetic Algorithms in a Data Warehouse System. In CEC’99 [110], pages
22–27, 1999. doi: 10.1109/CEC.1999.785551. INSPEC Accession Number: 6339040.
1270. Jorng-Tzong Horng, Yu-Jan Chang, and Baw-Jhiune Liu. Applying Evolutionary Algo-
rithms to Materialized View Selection in a Data Warehouse. Soft Computing – A Fusion of
Foundations, Methodologies and Applications, 7(8):574–581, August 2003, Springer-Verlag:
Berlin/Heidelberg. doi: 10.1007/s00500-002-0243-1.
1271. Reiner Horst and Tuy Hoang. Global Optimization: Deterministic Approaches. Springer-
Verlag GmbH: Berlin, Germany, 3rd edition, 1996. isbn: 3-540-61038-3. Google Books
ID: HyyaAAAAIAAJ and usFjGFvuBDEC.
1272. Reiner Horst, Panos M. Pardalos, and H. Edwin Romeijn. Handbook of Global Optimiza-
tion, volume 62(1-2) in Nonconvex Optimization and Its Applications. Kluwer Academic
Publishers: Norwell, MA, USA, 1995–2002. isbn: 1-4020-0632-2 and 1-4020-0742-6.
Fully available at http://www.optimization-online.org/DB_FILE/2002/03/456.
pdf [accessed 2009-06-22]. Google Books ID: 9W0Ast3pDIC.
1273. Learning and Intelligent OptimizatioN (LION 1), February 12–18, 2007, Hotel Adler: Andalo,
Trento, Italy.
1274. XX Simpósio Brasileiro de Telecomunicações (SBrT’03), October 3–5, 2003, Hotel Glória:
Rio de Janeiro, RJ, Brazil.
1275. Proceedings of the 4th International Conference on Bio-Inspired Models of Network, Infor-
mation, and Computing Systems (BIONETICS’09), December 9–11, 2009, Hotel MERCURE
Pont d’Avignon: Avignon, France.
1276. Proceedings of the International Conference on Evolutionary Computation (ICEC’10), 2010,
Hotel Sidi Saler: València, Spain. Part of [914].
1277. Daniel Howard, editor. Frontiers in the Convergence of Bioscience and Information Technologies
(FBIT’07), October 11–13, 2007, Jeju City, South Korea. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 0-7695-2999-2. Google Books
ID: AcgoPwAACAAJ and KbupPQAACAAJ. OCLC: 191820852, 230950453, 254360958,
and 314010039.
1278. Nicola Howarth. Abstract Syntax Tree Design. ANSA Documents: Official Record of the
ANSA Project 23/08/95 APM.1551.01, Architecture Projects Management Limited: Posei-
don House, Castle Park, Cambridge, UK, August 23, 1995. Fully available at http://www.
ansa.co.uk/ANSATech/95/Primary/155101.pdf [accessed 2007-07-03].
1279. Robert J. Howlett and Lakhmi C. Jain, editors. Proceedings of the Fourth International
Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Tech-
nologies (KES’00), August 30–September 1, 2000, University of Brighton: Sussex, UK.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-6400-7. Google Books
ID: 369VAAAAMAAJ, IHAfPwAACAAJ, and VuFVAAAAMAAJ.
1280. Peter Hrastnik and Albert Rainer. Web Service Discovery and Composition for Vir-
tual Enterprises. International Journal on Web Services Research (IJWSR), 4(1):23–
29, 2007, Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA.
doi: 10.4018/jwsr.2007010102. CiteSeerx : 10.1.1.68.5392.
1281. Juraj Hromkovič. Algorithmics for Hard Computing Problems: Introduction to Combina-
torial Optimization, Randomization, Approximation, and Heuristics, Texts in Theoretical
Computer Science – An EATCS Series. Springer-Verlag GmbH: Berlin, Germany, 2001.
isbn: 3-540-66860-8. Google Books ID: Ut1QAAAAMAAJ. OCLC: 46713245. Library of
Congress Control Number (LCCN): 2001031301. GBV-Identification (PPN): 319756777.
LC Classification: QA76.9.A43 H76 2001.
1282. Juraj Hromkovič and I. Zámecniková. Design and Analysis of Randomized Algorithms:
Introduction to Design Paradigms, Texts in Theoretical Computer Science – An EATCS
Series. Springer-Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/3-540-27903-2.
isbn: 3-540-23949-9. Google Books ID: EfuMfn8T5 oC. OCLC: 60800705, 318291536,
and 474955415. Library of Congress Control Number (LCCN): 2005926236. GBV-
Identification (PPN): 524971269. LC Classification: QA274 .H76 2005.
1283. 15th Conference on Technologies and Applications of Artificial Intelligence (TAAI’10),
November 18–20, 2010, Hsinchu, Taiwan.
1284. Chien-Chang Hsu and Yao-Wen Tang. An Intelligent Mobile Learning System for
On-the-Job Training of Luxury Brand Firms. In AI’06 [2406], pages 749–759, 2006.
doi: 10.1007/11941439_79.
1285. P. L. Hsu, R. Lai, and C. C. Chiu. The Hybrid of Association Rule Algorithms and Ge-
netic Algorithms for Tree Induction: An Example of Predicting the Student Course Perfor-
mance. Expert Systems with Applications – An International Journal, 25(1):51–62, July 2003,
Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/S0957-4174(03)00005-8.
1286. Ting Hu and Wolfgang Banzhaf. Measuring Rate of Evolution in Genetic Programming Using
Amino Acid to Synonymous Substitution Ratio ka/ks. In GECCO’08 [1519], pages 1337–
1338, 2008. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
Hu_2008_gecco.html and http://www.cs.mun.ca/˜tingh/papers/GECCO08.pdf
[accessed 2008-08-08].
1287. Ting Hu and Wolfgang Banzhaf. Nonsynonymous to Synonymous Substitution Ratio ka/ks:
Measurement for Rate of Evolution in Evolutionary Computation. In PPSN X [2354], pages
448–457, 2008. doi: 10.1007/978-3-540-87700-4_45. Fully available at http://www.cs.mun.
ca/˜tingh/papers/PPSN08.pdf [accessed 2009-07-10].
1288. Ting Hu and Wolfgang Banzhaf. Evolvability and Acceleration in Evolutionary Computa-
tion. Technical Report MUN-CS-2008-04, Memorial University, Department of Computer
Science: St. John’s, Canada, October 2008. Fully available at http://www.cs.mun.ca/
˜tingh/papers/TR08.pdf and http://www.mun.ca/computerscience/research/
MUN-CS-2008-04.pdf [accessed 2009-08-05].
1289. Ting Hu and Wolfgang Banzhaf. Evolvability and Speed of Evolutionary Algorithms in the
Light of Recent Developments in Biology. Journal of Artificial Evolution and Applications,
2010, Hindawi Publishing Corporation: Nasr City, Cairo, Egypt. doi: 10.1155/2010/568375.
Fully available at http://www.cs.mun.ca/˜tingh/papers/JAEA10.pdf and http://
www.hindawi.com/archive/2010/568375.html [accessed 2010-10-22]. ID: 568375.
1290. Ting Hu, Yuanzhu Peter Chen, Wolfgang Banzhaf, and Robert Benkoczi. An Evolutionary
Approach to Planning IEEE 802.16 Networks. In GECCO’09-I [2342], pages 1929–1930,
2009. doi: 10.1145/1569901.1570242. Fully available at http://www.cs.mun.ca/˜tingh/
papers/GECCO09-network.pdf [accessed 2009-09-05].
1291. Xiaobing Hu, Mark Leeson, and Evor Hines. An Effective Genetic Algorithm for the Network
Coding Problem. In CEC’09 [1350], pages 1714–1720, 2009. doi: 10.1109/CEC.2009.4983148.
INSPEC Accession Number: 10688743.
1292. Xiaomin Hu, Jun Zhang, and Liming Zhang. Swarm Intelligence Inspired Multicast Routing:
An Ant Colony Optimization Approach. In EvoWorkshops’09 [1052], pages 51–60, 2009.
doi: 10.1007/978-3-642-01129-0 6.
1293. De-Shuang Huang, Laurent Heutte, and Marco Loog, editors. Advanced Intelligent Com-
puting Theories and Applications. With Aspects of Contemporary Intelligent Computing
Techniques – Proceedings of the Third International Conference on Intelligent Comput-
ing (ICIC’07-2), August 21–24, 2007, Qīngdǎo, Shāndōng, China, volume 2 in Commu-
nications in Computer and Information Science. Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-74282-1. Fully available at http://www.springerlink.com/
content/k74022/ [accessed 2008-03-28].
1294. Joshua (Zhexue) Huang, Longbing Cao, and Jaideep Srivastava, editors. Proceedings of
the 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD’11),
May 24–27, 2011, Shēnzhèn, Guǎngdōng, China, Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. Partly available at http://pakdd2011.pakdd.
org/ [accessed 2011-02-28].
1295. Sheng Huang, Xiaoling Wang, and Aoying Zhou. Efficient Web Service Composition Based
on Syntactical Matching. In EEE’05 [1371], pages 782–783, 2005. doi: 10.1109/EEE.2005.66.
See also [317].
1296. Thomas S. Huang, Anton Nijholt, Maja Pantic, and Alex Pentland, editors. Artifical Intelli-
gence for Human Computing, ICMI 2006 and IJCAI 2007 International Workshops, Banff,
Canada, November 3, 2006, Hyderabad, India, January 6, 2007, Revised Seleced and Invited
Papers (ICMI’06 / IJCAI’07), 2006–2007, Banff, AB, Canada and Hyderabad, India, vol-
ume 4451 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. See also [2797].
1297. V. Ling Huang, Ponnuthurai Nagaratnam Suganthan, Alex Kai Qin, and S. Baskar. Multi-
objective Differential Evolution with External Archive and Harmonic Distance-Based Diver-
sity Measure. Technical Report, Nanyang Technological University (NTU): Singapore, 2006.
Fully available at http://www3.ntu.edu.sg/home/epnsugan/index_files/papers/
Tech-Report-MODE-2005.pdf [accessed 2009-07-17].
1298. Yueh-Min Huang, Yen-Ting Lin, and Shu-Chen Cheng. An Adaptive Testing System
for Supporting Versatile Educational Assessment. Computers & Education – An Interna-
tional Journal, 52(1):53–67, January 2009, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.compedu.2008.06.007.
1299. Zhenqiu Huang, Wei Jiang, Songlin Hu, and Zhiyong Liu. Effective Pruning Algo-
rithm for QoS-Aware Service Composition. In CEC’09 [1245], pages 519–522, 2009.
doi: 10.1109/CEC.2009.41. INSPEC Accession Number: 10839126. See also [1571, 1801].
1300. Claudia Huber and Günter Wächtershäuser. Peptides by Activation of Amino Acids with
CO on (Ni,Fe)S Surfaces: Implications for the Origin of Life. Science Magazine, 281(5377):
670–672, July 31, 1998, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.281.5377.670. PubMed ID: 9685253.
1301. Lorenz Huelsbergen. Toward Simulated Evolution of Machine Learning Interaction. In
GP’96 [1609], pages 315–320, 1996. Fully available at http://cognet.mit.edu/
library/books/mitpress/0262611279/cache/chap41.pdf [accessed 2010-11-21].
1302. Barry D. Hughes. Random Walks and Random Environments: Volume 1: Random Walks.
Oxford University Press, Inc.: New York, NY, USA, May 16, 1995. isbn: 0-19-853788-3.
Google Books ID: QhOen t0LeQC.
1303. Evan J. Hughes. Evolutionary Many-Objective Optimisation: Many Once or One Many? In
CEC’05 [641], pages 222–227, volume 1, 2005. doi: 10.1109/CEC.2005.1554688. Fully avail-
able at http://www.lania.mx/˜ccoello/hughes05.pdf.gz [accessed 2009-07-18]. INSPEC
Accession Number: 9004154.
1304. Evan J. Hughes. MSOPS-II: A General-Purpose Many-Objective Optimiser. In
CEC’07 [1343], pages 3944–3951, 2007. doi: 10.1109/CEC.2007.4424985. INSPEC Acces-
sion Number: 9889597.
1305. Evan J. Hughes. Fitness Assignment Methods for Many-Objective Problems. In Multiobjective
Problem Solving from Nature – From Concepts to Applications [1554], Chapter IV-1, pages
307–329. Springer New York: New York, NY, USA, 2008. doi: 10.1007/978-3-540-72964-8_15.
1306. Roger N. Hughes, editor. NATO Advanced Research Workshop on Behavioural Mecha-
nisms of Food Selection, July 17–21, 1989, Gregynog, Wales, UK, volume 20 in NATO Ad-
vanced Science Institutes (ASI) Series G. Ecological Sciences (NATO ASI). Springer-Verlag
GmbH: Berlin, Germany. isbn: 0-387-51762-6 and 3-540-51762-6. OCLC: 20854001,
311111175, and 472872808. Library of Congress Control Number (LCCN): 89026334.
GBV-Identification (PPN): 025232932 and 157637646. LC Classification: QL756.5 .N38
1989.
1307. F. J. Humphreys and M. Hatherly. Recrystallization and Related Annealing Phenomena,
Pergamon Materials Series. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands,
2004. isbn: 0080441645. Google Books ID: 52Gloa7HxGsC.
1308. Ming-Chuan Hung, Man-Lin Huang, Don-Lin Yang, and Nien-Lin Hsueh. Efficient Ap-
proaches for Materialized Views Selection in a Data Warehouse. Information Sciences
– Informatics and Computer Science Intelligent Systems Applications: An International
Journal, 177(6):1333–1348, March 2007, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.ins.2006.09.007.
1309. Phil Husbands and Inman Harvey, editors. Proceedings of the Fourth European Conference on
Advances in Artificial Life (ECAL’97), July 28–31, 1997, Brighton, UK, Complex Adaptive
Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-58157-4. GBV-
Identification (PPN): 234965770.
1310. Phil Husbands and Jean-Arcady Meyer, editors. Proceedings of the First European Workshop
on Evolutionary Robotics (EvoRobot’98), April 16–17, 1998, Paris, France, volume 1468/1998
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-64957-3. OCLC: 246316294, 316835898, 441663096, 471681293, and
502374580. Library of Congress Control Number (LCCN): 98039439.
1311. Phil Husbands and Frank Mill. Simulated Co-Evolution as the Mechanism for
Emergent Planning and Scheduling. In ICGA’91 [254], pages 264–270, 1991.
Fully available at http://www.informatics.sussex.ac.uk/users/philh/pubs/
icga91Husbands.pdf [accessed 2010-09-13].
1312. Marcus Hutter. Fitness Uniform Selection to Preserve Genetic Diversity. In CEC’02 [944],
pages 783–788, 2002. doi: 10.1109/CEC.2002.1007025. Fully available at ftp://
ftp.idsia.ch/pub/techrep/IDSIA-01-01.ps.gz, http://www.hutter1.net/
ai/pfuss.pdf, http://www.hutter1.net/ai/pfuss.ps, and http://www.
hutter1.net/ai/pfuss.tar [accessed 2011-04-29]. CiteSeerx : 10.1.1.106.9784. arXiv
ID: cs/0103015. INSPEC Accession Number: 7336007. Also: Technical Report IDSIA-01-
01, 17 January 2001. See also [1313, 1708, 1709].
1313. Marcus Hutter and Shane Legg. Fitness Uniform Optimization. IEEE Transactions
on Evolutionary Computation (IEEE-EC), 10(5):568–589, October 2006, IEEE Com-
puter Society: Washington, DC, USA. doi: 10.1109/TEVC.2005.863127. Fully avail-
able at http://www.idsia.ch/idsiareport/IDSIA-16-06.pdf and http://
www.vetta.org/documents/FitnessUniformOptimization.pdf [accessed 2011-03-01]. CiteSeerx: 10.1.1.137.2999. arXiv ID: cs/0610126v1. INSPEC Accession Number: 9101740. See also [1312, 1708, 1709].
1314. Martijn A. Huynen. Exploring Phenotype Space Through Neutral Evolution. Jour-
nal of Molecular Evolution, 43(3):165–169, September 1996, Springer New York:
New York, NY, USA. doi: 10.1007/PL00006074. Fully available at http://
www.santafe.edu/research/publications/wpabstract/199510100 [accessed 2009-07-10]. CiteSeerx: 10.1.1.21.3483. SFI Working Paper Abstract 95-10-100.
1315. Martijn A. Huynen, Peter F. Stadler, and Walter Fontana. Smoothness within Rugged-
ness: The Role of Neutrality in Adaptation. Proceedings of the National Academy of
Science of the United States of America (PNAS), 93(1):397–401, January 9, 1996, Na-
tional Academy of Sciences: Washington, DC, USA. Fully available at http://fontana.
med.harvard.edu/www/Documents/WF/Papers/neutrality.pdf and http://www.
pnas.org/cgi/reprint/93/1/397.pdf [accessed 2011-12-05]. CiteSeerx : 10.1.1.31.1013.
Communicated by Hans Frauenfelder, Los Alamos National Laboratory, Los Alamos, NM,
September 20, 1995 (received for review June 29, 1995).
1316. Chii-Ruey Hwang. Simulated Annealing: Theory and Applications. Acta Applicandae
Mathematicae: An International Survey Journal on Applying Mathematics and Mathemat-
ical Applications, 12(1):108–111, May 1988, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/BF00047572. Academia Sinica. Review of [2776].
1317. Hitoshi Iba. Emergent Cooperation for Multiple Agents Using Genetic Programming. In
PPSN IV [2818], pages 32–41, 1996. doi: 10.1007/3-540-61723-X_967.
1318. Hitoshi Iba. Evolutionary Learning of Communicating Agents. Information Sciences
– Informatics and Computer Science Intelligent Systems Applications: An International
Journal, 108(1–4):181–205, July 1998, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0020-0255(97)10055-X. See also [1320].
1319. Hitoshi Iba. Evolving Multiple Agents by Genetic Programming. In Advances in Genetic
Programming [2569], pages 447–466. MIT Press: Cambridge, MA, USA, 1996. Fully avail-
able at http://www.cs.bham.ac.uk/˜wbl/aigp3/ch19.pdf [accessed 2007-10-03]. See also
[1320].
1320. Hitoshi Iba, Toshihide Nozoe, and Kanji Ueda. Evolving Communicating Agents based on Genetic Programming. In CEC’97 [173], pages 297–302, 1997. doi: 10.1109/ICEC.1997.592321.
Fully available at http://lis.epfl.ch/˜markus/References/Iba97.pdf [accessed 2008-
07-28]. INSPEC Accession Number: 5573068. See also [1318, 1319].

1321. Toshihide Ibaraki, Koji Nonobe, and Mutsunori Yagiura, editors. Fifth Metaheuristics In-
ternational Conference – Metaheuristics: Progress as Real Problem Solvers (MIC’03), Au-
gust 25–28, 2003, Kyoto International Conference Hall: Kyoto, Japan, volume 32 in Op-
erations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 0-387-25382-3. Partly available at http://www-or.amp.i.kyoto-u.ac.
jp/mic2003/ [accessed 2007-09-12]. Published in 2005.
1322. Proceedings of the IEEE Aerospace Conference, March 18–20, 2000, Big Sky, MT, USA.
IEEE Aerospace and Electronic Systems Society, IEEE Computer Society: Piscataway, NJ,
1058 REFERENCES
USA. isbn: 0-7803-5846-5 and 0-7803-5847-3. INSPEC Accession Number: 6763001. LC
Classification: TL3000.A1 I22 2000. Catalogue no.: 00TH8484.
1323. Joint IEEE/IEE Workshop on Natural Algorithms in Signal Processing (NASP’93), Novem-
ber 14–16, 1993, Chelmsford, Essex, UK. IEEE Computer Society: Piscataway, NJ, USA.
1324. 1994 IEEE World Congress on Computational Intelligence (WCCI’94), June 26–July 2, 1994,
Walt Disney World Dolphin Hotel: Orlando, FL, USA. IEEE Computer Society: Piscataway,
NJ, USA.
1325. Proceedings of the 1995 American Control Conference (ACC), June 21–23, 1995, Seattle,
WA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-2445-5 and
0-7803-2446-3. Google Books ID: EvlVAAAAMAAJ, MfhVAAAAMAAJ, e VVAAAAMAAJ, and
pPlVAAAAMAAJ. INSPEC Accession Number: 5080557.
1326. Proceedings of the Sixth International Symposium on Micro Machine and Human Sci-
ence (MHS’95), October 4–6, 1995, Nagoya, Japan. IEEE Computer Society: Piscat-
away, NJ, USA. doi: 10.1109/MHS.1995.494249. isbn: 0-7803-2676-8. Google Books
ID: wqFVAAAAMAAJ. INSPEC Accession Number: 5303058.
1327. Second IEEE International Conference on Engineering of Complex Computer Systems
(ICECCS’96), October 21–25, 1996, Montréal, QC, Canada. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-8186-7614-0. INSPEC Accession Number: 5436943. Held jointly with the 6th CSESAW and the 4th IEEE RTAW.
1328. Digest of Technical Papers of the IEEE/ACM International Conference on Computer-Aided
Design (ICCAD), November 9–13, 1997, San José, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA. isbn: 0-8186-8200-0, 0-8186-8201-9, and 0-8186-8202-7.
issn: 1093-3152. Order No.: PRO8200. Catalogue no.: 97CB36142. ACM Order No.: 477973.
1329. Proceedings of the 22nd IEEE Conference on Local Computer Networks (LCN’97), November 2–5, 1997, Minneapolis, MN, USA. IEEE Computer Society: Piscataway, NJ,
USA. doi: 10.1109/LCN.1997.630893. isbn: 0-8186-8141-1, 0-8186-8142-X, and
0-8186-8143-8. Google Books ID: 5e1VAAAAMAAJ, ANQWKwAACAAJ, and j 4GMwAACAAJ.
OCLC: 38188270 and 60166824. issn: 0742-1303. Order No.: 97TB100179 and PR08141.
1330. 1998 IEEE World Congress on Computational Intelligence (WCCI’98), May 4–9, 1998, Egan
Civic & Convention Center and Anchorage Hilton Hotel: Anchorage, AK, USA. IEEE Com-
puter Society: Piscataway, NJ, USA.
1331. IEEE International Conference on Systems, Man, and Cybernetics – Human Communi-
cation and Cybernetics (SMC’99), October 12–15, 1999, Tokyo University, Department of
Aeronautics and Space Engineering: Tokyo, Japan. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-5731-0, 0-7803-5732-9, and 0-7803-5733-7. Google Books
ID: J-kOAAAACAAJ and Yv6cGAAACAAJ. OCLC: 43823469 and 213032676.
1332. Proceedings of the 2000 American Control Conference (ACC), June 28–30, 2000, Hyatt
Regency Chicago: Chicago, IL, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-5519-9. Google Books ID: pQANAAAACAAJ and xD11HAAACAAJ. INSPEC
Accession Number: 6805288.
1333. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’00), July 16–19, 2000,
La Jolla Marriott Hotel: La Jolla, CA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-6375-2. Google Books ID: ttlVAAAAMAAJ. Library of Congress Control
Number (LCCN): 00-018644. Catalogue no.: 00TH8512. CEC-2000 – A joint meeting of
the IEEE, Evolutionary Programming Society, Galesia, and the IEE.
1334. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’01), May 27–30,
2001, COEX, World Trade Center: Gangnam-gu, Seoul, Korea. IEEE Computer Soci-
ety: Piscataway, NJ, USA. isbn: 0-7803-6657-3 and 0-7803-6658-1. Google Books
ID: R6FVAAAAMAAJ, WXH PQAACAAJ, XaBVAAAAMAAJ, and 1iEAAAACAAJ. Catalogue
no.: 01TH8546. CEC-2001 – A joint meeting of the IEEE, Evolutionary Programming Society,
Galesia, and the IEE.
1335. Proceedings of the American Control Conference (ACC’02), May 8–10, 2002, Honeywell,
Houston, TX, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7298-0.
Google Books ID: 98SNAAAACAAJ and IxlaHAAACAAJ. issn: 0743-1619.
1336. IEEE AP-S International Symposium and USNC/URSI National Radio Science Meeting,
June 16–21, 2002, San Antonio, TX, USA. IEEE Computer Society: Piscataway, NJ, USA.
1337. Proceedings of the 1st IEEE International Conference on Cognitive Informatics (ICCI’02),
August 19–20, 2002, Calgary, AB, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-1724-2.
1338. 2002 IEEE World Congress on Computational Intelligence (WCCI’02), May 12–17, 2002,
Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI, USA. IEEE Computer
Society: Piscataway, NJ, USA.
1339. Proceedings of the 4th IEEE International Conference on Cognitive Informatics (ICCI’05),
August 8–10, 2005, University of California: Irvine, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA.
1340. 6th International Conference on Hybrid Intelligent Systems (HIS’06), December 13–15, 2006,
AUT Technology Park: Auckland, New Zealand. IEEE Computer Society: Piscataway, NJ,
USA. isbn: 0-7695-2662-4. Partly available at http://his06.hybridsystem.com/
[accessed 2007-09-01].

1341. 2006 IEEE World Congress on Computational Intelligence (WCCI’06), July 16–21, 2006, Sheraton Vancouver Wall Centre Hotel: Vancouver, BC, Canada. IEEE Computer Society: Piscataway, NJ, USA.
1342. Proceedings of the 2nd International Conference on Bio-Inspired Models of Network, In-
formation, and Computing Systems (BIONETICS’07), December 10–13, 2007, Radisson
SAS Beke Hotel: Budapest, Hungary. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 963-9799-05-X. Partly available at http://www.bionetics.org/2007/ [ac-
cessed 2011-02-28]. OCLC: 259234301 and 288980779. Catalogue no.: 08EX2011.

1343. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’07), September 25–
28, 2007, Swissôtel The Stamford: Singapore. IEEE Computer Society: Piscataway, NJ,
USA. isbn: 1-4244-1339-7. Google Books ID: duo6GQAACAAJ and yBzVPQAACAAJ.
OCLC: 257656187.
1344. Proceedings of IEEE Joint Conference on E-Commerce Technology (9th CEC) and Enter-
prise Computing, E-Commerce and E-Services (4th EEE) (CEC/EEE’07), July 23–26, 2007,
National Center of Sciences: Tokyo, Japan. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-2913-5. Partly available at http://ieejc.ise.eng.osaka-u.ac.jp/
CEC2007/ [accessed 2011-02-28]. Google Books ID: FXdEPAAACAAJ. OCLC: 170925181 and
288976020.
1345. Proceedings of the First IEEE Symposium Series on Computational Intelligence (SSCI’07),
April 1–5, 2007, Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ, USA.
1346. Proceedings of IEEE Joint Conference on E-Commerce Technology (10th CEC) and En-
terprise Computing, E-Commerce and E-Services (5th EEE) (CEC/EEE’08), July 21–
24, 2008, Washington, DC, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-3340-X. Partly available at http://cec2008.cs.georgetown.edu/ [ac-
cessed 2011-02-28]. Google Books ID: XrakOgAACAAJ. OCLC: 401711110. Library of Congress Control Number (LCCN): 2005928254. BMS Part Number CFP08321-PRT.
1347. Third IEEE International Services Computing Contest, 2008, Honolulu, HI, USA. IEEE Com-
puter Society: Piscataway, NJ, USA. Part of [1348].
1348. Proceedings of the 4th IEEE Congress on Services: Part I (SERVICES’08-I), July 6–11, 2008,
Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ, USA.
1349. Third Asia International Conference on Modelling & Simulation (AMS’09), May 25–26, 2009,
Bandung, Bali. IEEE Computer Society: Piscataway, NJ, USA.
1350. 10th IEEE Congress on Evolutionary Computation (CEC’09), May 18–21, 2009, Trondheim,
Norway. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-2958-7. Google
Books ID: 1-RCOgAACAAJ.
1351. SERVICES Cup’09, 2009, Los Angeles, CA, USA. IEEE Computer Society: Piscataway, NJ,
USA. Part of [3068].
1352. Proceedings of the 2009 International Conference on Systems, Man, and Cybernetics
(SMC’09), October 11–14, 2009, San Antonio, TX, USA. IEEE Computer Society: Pis-
cataway, NJ, USA.
1353. Proceedings of the Second IEEE Symposium Series on Computational Intelligence (SSCI’09),
March 30–April 2, 2009, Sheraton Music City Hotel: Nashville, TN, USA. IEEE Computer
Society: Piscataway, NJ, USA.
1354. 11th IEEE Congress on Evolutionary Computation (CEC’10), 2010, Centre de Convencions
Internacional de Barcelona: Barcelona, Catalonia, Spain. IEEE Computer Society: Piscat-
away, NJ, USA. Part of [1356].
1355. Proceedings of the Sixth International Conference on Natural Computation (ICNC’10), August 10–12, 2010, Yāntái, Shāndōng, China. IEEE Computer Society: Piscataway, NJ, USA.
Catalogue no.: CFP10CNC-PRT.
1356. 2010 IEEE World Congress on Computational Intelligence (WCCI’10), July 18–23, 2010, Centre de Convencions Internacional de Barcelona: Barcelona, Catalonia, Spain. IEEE Computer
Society: Piscataway, NJ, USA.
1357. Proceedings of the Third IEEE Symposium Series on Computational Intelligence (SSCI’11),
April 11–15, 2011, Paris, France. IEEE Computer Society: Piscataway, NJ, USA.
1358. Proceedings of the 2nd International IEEE Conference on Tools for Artificial In-
telligence (TAI’90), November 6–9, 1990, Hyatt Hotel, Dulles International Air-
port: Herndon, VA, USA. IEEE Computer Society Press: Los Alamitos, CA,
USA. isbn: 0-8186-2084-6. Google Books ID: 7KaENQAACAAJ, YrY5AAAACAAJ,
and nPtVAAAAMAAJ. OCLC: 23232686, 224971554, and 289019938. Catalogue
no.: 90CH2915-7.
1359. Proceedings of the Second Australia and New Zealand Conference on Intelligent Information
Systems (ANZIIS’94), November 29–December 2, 1994, Brisbane, QLD, Australia. IEEE
Computer Society Press: Los Alamitos, CA, USA.
1360. Proceedings of the Seventh Florida Artificial Intelligence Research Symposium (FLAIRS’94),
May 5–7, 1994, Pensacola Beach, FL, USA. IEEE Computer Society Press: Los Alamitos,
CA, USA.
1361. Second IEEE International Conference on Evolutionary Computation (CEC’95), Novem-
ber 29–December 1, 1995, University of Western Australia: Perth, WA, Australia. IEEE
Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-2759-4. Google
Books ID: BJdVAAAAMAAJ, uQAAAACAAJ, and zpZVAAAAMAAJ. INSPEC Accession Number: 5248032. CEC-95 editors not given by IEEE; organisers were David Fogel and Chris deSilva.
1362. Proceedings of the IEEE International Conference on Neural Networks (ICNN’95), Novem-
ber 27–December 1, 1995, University of Western Australia: Perth, WA, Australia. IEEE Com-
puter Society Press: Los Alamitos, CA, USA. isbn: 0-7803-2768-3 and 0-7803-2769-1.
Google Books ID: vy-hcQAACAAJ.
1363. Proceedings of the 17th International Conference on Distributed Computing Systems
(ICDCS’97), May 28–30, 1997, Baltimore, MD, USA. IEEE Computer Society Press: Los
Alamitos, CA, USA. isbn: 0-8186-7813-5. Partly available at http://computer.org/
proceedings/icdcs/7813/7813toc.htm [accessed 2011-04-09].
1364. IEEE International Conference on Systems, Man, and Cybernetics (SMC’98), October 11–
14, 1998, La Jolla, CA, USA, volume 1-5. IEEE Computer Society Press: Los Alamitos,
CA, USA. doi: 10.1109/ICSMC.1998.725373. isbn: 0-7803-4778-1. INSPEC Accession
Number: 6175788. issn: 1062-922X. Catalogue no.: 98CH36218.
1365. Proceedings of the IEEE International Conference on Cluster Computing (CLUSTER 2000),
November 28–December 1, 2000, Chemnitz, Sachsen, Germany. IEEE Computer Soci-
ety Press: Los Alamitos, CA, USA. isbn: 0-7695-0896-0. INSPEC Accession Num-
ber: 6805968.
1366. IEEE International Conference on Systems, Man, and Cybernetics. E-Systems and e-Man
for Cybernetics in Cyberspace (SMC’01), October 7–10, 2001, Tucson, AZ, USA. IEEE Com-
puter Society Press: Los Alamitos, CA, USA. isbn: 0-7803-7087-2. INSPEC Accession
Number: 7162685. issn: 1062-922X. Catalogue no.: 01CH37236.
1367. 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’03), Novem-
ber 3–5, 2003, Sacramento, CA, USA. IEEE Computer Society Press: Los Alamitos, CA,
USA. isbn: 0-7695-2038-3. Google Books ID: ILRVAAAAMAAJ, IvUnMwAACAAJ, and
nVRFAgAACAAJ. issn: 1082-3409.
1368. Proceedings of the 15th International Conference on Scientific and Statistical Database Man-
agement (SSDBM’03), July 9–11, 2003, Cambridge, MA, USA. IEEE Computer Society
Press: Los Alamitos, CA, USA. isbn: 0-7695-1964-4.
1369. Proceedings of the IEEE Congress on Evolutionary Computation (CEC’04), June 20–23,
2004, Portland, OR, USA. IEEE Computer Society Press: Los Alamitos, CA, USA.
isbn: 0-7803-8515-2. Google Books ID: I8 5AAAACAAJ. CEC 2004 – A joint meeting
of the IEEE, the EPS, and the IEE.
1370. International Conference on Computational Intelligence for Modelling, Control and Automa-
tion and International Conference on Intelligent Agents, Web Technologies and Internet
Commerce (CIMCA-IAWTIC’05), November 28–30, 2005, Vienna, Austria. IEEE Computer
Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2504-0.
1371. Proceedings of the 2005 IEEE International Conference on e-Technology, e-Commerce, and
e-Service (EEE’05), March 29–April 1, 2005, Hong Kong Baptist University, Lam Woo Inter-
national Conference Center: Hong Kong (Xiānggǎng), China. IEEE Computer Society Press:
Los Alamitos, CA, USA. isbn: 0-7695-2274-2. Library of Congress Control Number
(LCCN): 2005. Order No.: P2274.
1372. Proceedings of the Tenth International Database Engineering and Applications Symposium
(IDEAS’06), December 11–14, 2006, Delhi, India. IEEE Computer Society Press: Los Alami-
tos, CA, USA. isbn: 0-7695-2577-6.
1373. Proceedings of the IEEE International Conference on Services Computing (SCC’06), Septem-
ber 18–22, 2006, Chicago, IL, USA. IEEE Computer Society Press: Los Alamitos, CA, USA.
isbn: 0-7695-2670-5. Library of Congress Control Number (LCCN): 2006928886. Order
No.: P2670.
1374. Second IEEE International Services Computing Contest (SCContest’07), 2007, Salt Lake
City, UT, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. Part of [3069].
1375. Proceedings of the Ninth International Conference on Hybrid Intelligent Systems (HIS’09),
August 12–14, 2009, Shenyang Normal University: Shěnyáng, Liáoníng, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
1376. Proceedings of the International Conference on Signal Processing Systems (ICSPS’09),
May 15–17, 2009, Singapore. IEEE Computer Society Press: Los Alamitos, CA, USA. Library
of Congress Control Number (LCCN): 2009903184. Order No.: P3654.
1377. Proceedings of the Tenth International Conference on Hybrid Intelligent Systems (HIS’10),
August 23–25, 2010, Atlanta, GA, USA. IEEE Computer Society Press: Los Alamitos, CA,
USA.
1378. Proceedings of the IEEE International Conference on Service-Oriented Computing and Appli-
cations (SOCA’10), December 13–15, 2010, Perth, WA, Australia. IEEE Computer Society
Press: Los Alamitos, CA, USA. isbn: 978-1-4244-9802-4. Partly available at http://
conferences.computer.org/soca/2010/ [accessed 2011-02-28].
1379. Proceedings of the 7th International Conference on Natural Computation (ICNC’11), July 26–
28, 2011, Shànghǎi, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
1380. Proceedings of the 3rd International Workshop on Intelligent Systems and Applications
(ISA’11), May 28–29, 2011, Wǔhàn, Húběi, China. IEEE Computer Society Press: Los
Alamitos, CA, USA. Catalogue no.: CFP1160G-PRT.
1381. Proceedings of the 3rd International Conference on Soft Computing and Pattern Recognition
(SoCPaR’11), October 14–16, 2011, Dàlián, Liáoníng, China. IEEE Computer Society Press:
Los Alamitos, CA, USA.
1382. Proceedings of the 15th International Conference on Distributed Computing Systems
(ICDCS’95), May 30–June 2, 1995, Vancouver, BC, Canada. IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/ICDCS.1995.499995. isbn: 0-8186-7025-8. Google Books
ID: oWdCPgAACAAJ. INSPEC Accession Number: 4998824. issn: 1063-6927.
1383. 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing (PDP’02),
January 9–11, 2002, Canary Islands, Spain. IEEE Computer Society: Washington, DC, USA.
isbn: 0-7695-1444-8.
1384. Proceedings of the 21st IEEE Symposium on Reliable Distributed Systems (SRDS’02), Octo-
ber 13–16, 2002, Ōsaka University: Suita, Ōsaka, Japan. IEEE Computer Society: Wash-
ington, DC, USA. isbn: 0-7695-1659-9, 0-7695-1660-2, and 0-7695-1661-0. Google
Books ID: KJVVAAAAMAAJ, k8G AAAACAAJ, and uC8GAAAACAAJ. OCLC: 50818238 and
423964963. issn: 1060-9857.
1385. International Symposium on Microarchitecture – Proceedings of the 15th Annual Workshop on
Microprogramming (MICRO 15), October 5–7, 1982, Palo Alto, CA, USA. IEEE (Institute
of Electrical and Electronics Engineers): Piscataway, NJ, USA. isbn: 9993629073.
1386. Proceedings of the 1999 IEEE International Conference on Robotics and Automation
(ICRA’99), May 10–15, 1999, Marriott Hotel, Renaissance Center: Detroit, MI,
USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA.
isbn: 0-7803-5180-0, 0-7803-5181-9, 0-7803-5182-7, and 0-7803-5183-5. Library of Congress Control Number (LCCN): 90-640158. issn: 1050-4729. Catalogue
no.: 99CH36288 and 99CH36288C.
1387. Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS’03), April 24–26, 2003, Indianapolis, IN, USA, IEEE International Symposia on Swarm Intelligence. IEEE (Institute of
Electrical and Electronics Engineers): Piscataway, NJ, USA. doi: 10.1109/SIS.2003.1202237.
Partly available at http://www.computelligence.org/sis/2003/ [accessed 2007-08-26].
Catalogue no.: 03EX706.
1388. Proceedings of Fifth World Congress on Intelligent Control and Automation (WCICA’04),
June 15–19, 2004, Hángzhōu, Zhèjiāng, China. IEEE (Institute of Electrical and Electronics
Engineers): Piscataway, NJ, USA. isbn: 0-7803-8273-0. Library of Congress Control
Number (LCCN): 2003115847. Catalogue no.: 04EX788.
1389. Proceedings of the 2006 IEEE International Conference on Robotics and Automation
(ICRA’06), May 15–19, 2006, Hilton Walt Disney World, Walt Disney World Resort: Or-
lando, FL, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ,
USA. issn: 1050-4729.
1390. Proceedings of the 2007 International Conference on Computational Intelligence and Security
(CIS’07), December 15–19, 2007, Harbin, Heilongjiang, China. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 0-7695-3072-9.
1391. Proceedings of the IEEE International Electric Machines & Drives Conference (IEMDC’07),
May 3–5, 2007, Antalya, Turkey. IEEE (Institute of Electrical and Electronics Engineers):
Piscataway, NJ, USA. isbn: 1-4244-0742-7 and 1-4244-0743-5. OCLC: 156998236
and 244627557. Library of Congress Control Number (LCCN): 2006935481. GBV-
Identification (PPN): 540144487. LC Classification: TK2000 .I35 2007. Catalogue
no.: 07EX1597.
1392. Proceedings of the 2008 International Conference on Computational Intelligence and Security
(CIS’08), December 13–17, 2008, Sūzhōu, Jiāngsū, China. IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA.
1393. Proceedings of the 2009 International Conference on Computational Intelligence and Security
(CIS’09), December 11–14, 2009, Běijīng, China. IEEE (Institute of Electrical and Electronics
Engineers): Piscataway, NJ, USA.
1394. 6th International Workshop on Memetic Algorithms (WOMA’09), March 30–April 2, 2009,
Sheraton Music City Hotel: Nashville, TN, USA. IEEE (Institute of Electrical and Electron-
ics Engineers): Piscataway, NJ, USA. Part of IEEE Symposium Series on Computational
Intelligence.
1395. Proceedings of the Eastern Joint Computer Conference (EJCC) – Papers and Discussions
Presented at the Joint IRE – AIEE – ACM Computer Conference, December 1–3, 1959,
Boston, MA, USA, 16. IEEE (Institute of Electrical and Electronics Engineers): Piscataway,
NJ, USA, Association for Computing Machinery (ACM): New York, NY, USA, and Institute
of Radio Engineers: New York, NY, USA. doi: 10.1109/AFIPS.1959.85. asin: B001BS7LVG.
1396. Christian Igel. Causality of Hierarchical Variable Length Representations. In CEC’98 [2496],
pages 324–329, 1998. doi: 10.1109/ICEC.1998.699753. Fully available at http://www.
neuroinformatik.ruhr-uni-bochum.de/PEOPLE/igel/CoHVLR.ps.gz [accessed 2009-07-10]. CiteSeerx : 10.1.1.47.9002.

1397. Christian Igel and Marc Toussaint. On Classes of Functions for which No Free Lunch
Results Hold. Information Processing Letters, 86(6):317–321, June 30, 2003, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scien-
tific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(03)00222-9.
CiteSeerx : 10.1.1.72.4278 and 10.1.1.8.6772.
1398. Christian Igel and Marc Toussaint. Recent Results on No-Free-Lunch Theorems for Optimization. arXiv.org, Cornell University Library: Ithaca, NY, USA, March 31, 2003. Fully
available at http://www.citebase.org/abstract?id=oai:arXiv.org:cs/0303032
[accessed 2008-03-28]. arXiv ID: abs/cs/0303032.

1399. Auke Jan Ijspeert, Masayuki Murata, and Naoki Wakamiya, editors. Revised Selected Papers
of the First International Workshop on Biologically Inspired Approaches to Advanced In-
formation Technology (BioADIT’04), January 29–30, 2004, Lausanne, Switzerland, volume
3141/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b101281. isbn: 3-540-23339-3. Google Books ID: PdYn-iHZFDYC
and WOR7wu jMMQC. OCLC: 56878906 and 56922928. Library of Congress Control Number
(LCCN): 2004113283.
1400. Kokolo Ikeda, Hajime Kita, and Shigenobu Kobayashi. Failure of Pareto-based MOEAs:
Does Non-Dominated Really Mean Near to Optimal? In CEC’01 [1334], pages 957–962,
volume 2, 2001. doi: 10.1109/CEC.2001.934293. INSPEC Accession Number: 7018245.
1401. Michael Iles and Dwight Lorne Deugo. A Search for Routing Strategies in a Peer-to-
Peer Network Using Genetic Programming. In SRDS’02 [1384], pages 341–346, 2002.
doi: 10.1109/RELDIS.2002.1180207. INSPEC Accession Number: 7516795.
1402. Lester Ingber. Adaptive Simulated Annealing (ASA): Lessons learned. Control and Cybernet-
ics, 25(1):33–54, 1996, Polish Academy of Sciences (Polska Akademia Nauk, PAN): Poland.
Fully available at http://www.ingber.com/asa96_lessons.pdf [accessed 2010-09-25] and
http://www.ingber.com/asa96_lessons.ps.gz. CiteSeerx : 10.1.1.15.2777.
1403. Lester Ingber. Simulated Annealing: Practice versus Theory. Mathematical and Com-
puter Modelling, 18(11):29–57, November 1993, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0895-7177(93)90204-C. Fully available at http://www.ingber.com/asa93_
sapvt.pdf [accessed 2009-09-18]. CiteSeerx : 10.1.1.15.1046.
1404. Proceedings of the 3rd International Conference on Agents and Artificial Intelligence
(ICAART’11), January 28–30, 2011, Rome, Italy. Institute for Systems and Technologies
of Information, Control and Communication (INSTICC) Press: Setubal, Portugal.
1405. Proceedings of the 2nd International Conference on Engineering Optimization (EngOpt’10),
September 6–9, 2010, Instituto Superior Técnico (IST) Congress Center: Lisbon, Portugal.
1406. Proceedings of the World Congress on Engineering and Computer Science (WCECS’07), Oc-
tober 26–27, 2007, University of California: Berkeley, CA, USA, Lecture Notes in Engineer-
ing and Computer Science. International Association of Engineers (IAENG): Hong Kong
(Xiānggǎng), China and Newswood Publications Ltd.: Hong Kong (Xiānggǎng), China.
isbn: 988-98671-6-8. Google Books ID: Lg8PLwAACAAJ. OCLC: 184910962.
1407. Proceedings of the IASTED International Conference on Databases and Applications
(DBA’06), February 13–15, 2006, Innsbruck, Austria. International Association of Science
and Technology for Development (IASTED): Calgary, AB, Canada. Part of [1408].
1408. Proceedings of the 24th IASTED International Multi-Conference on Applied Informatics
(AI’06), February 14–16, 2006, Innsbruck, Austria. International Association of Science and
Technology for Development (IASTED): Calgary, AB, Canada.
1409. Proceedings of the Tenth IASTED International Conference on Parallel and Distributed Com-
puting and Systems (PDCS’98), October 1–4, 1998, Las Vegas, NV, USA. International As-
sociation of Science and Technology for Development (IASTED): Calgary, AB, Canada and
ACTA Press: Calgary, AB, Canada.
1410. Proceedings of the Eleventh IASTED International Conference on Parallel and Distributed
Computing and Systems (PDCS’99), November 3–6, 1999, Massachusetts Institute of Tech-
nology (MIT): Cambridge, MA, USA. International Association of Science and Technology
for Development (IASTED): Calgary, AB, Canada and ACTA Press: Calgary, AB, Canada.
1411. The 2012 IEEE World Congress on Computational Intelligence (WCCI’12), June 10–15,
2012, International Convention Centre: Brisbane, QLD, Australia.
1412. 8th International Conference on Systems Research, Informatics and Cybernetics (Inter-
Symp’96), August 14–18, 1996, Baden-Baden, Baden-Württemberg, Germany. International
Institute for Advanced Studies in Systems Research and Cybernetic (IIAS): Tecumseh, ON,
Canada.
1413. Proceedings of the Symposium on Network and Distributed Systems Security (NDSS’00),
February 2–4, 2000, Catamaran Resort Hotel: San Diego, CA, USA. Internet Society
(ISOC): Reston, VA, USA. isbn: 1-891562-07-X, 1-891562-08-8, and 1-891562-11-8.
Google Books ID: 49fJPgAACAAJ, LzfCPgAACAAJ, rAGIPgAACAAJ, and sGuCPgAACAAJ.
OCLC: 59522955 and 236489300.
1414. Proceedings of the First Conference on Artificial General Intelligence (AGI’08), March 1–3,
2008, University of Memphis, FedEx Institute of Technology: Memphis, TN, USA. IOS Press:
Amsterdam, The Netherlands.
1415. Proceedings of the 4th International Workshop on Machine Learning (ML’87), 1987, Irvine,
CA, USA.
1416. Hisao Ishibuchi and Yusuke Nojima. Optimization of Scalarizing Functions Through
Evolutionary Multiobjective Optimization. In EMO’07 [2061], pages 51–65, 2007.
doi: 10.1007/978-3-540-70928-2_8. Fully available at http://ksuseer1.ist.psu.edu/
viewdoc/summary?doi=10.1.1.78.8014 and http://www.lania.mx/˜ccoello/
EMOO/ishibuchi07a.pdf.gz [accessed 2009-07-18].
1417. Hisao Ishibuchi, Tsutomu Doi, and Yusuke Nojima. Incorporation of Scalarizing Fitness Functions into Evolutionary Multiobjective Optimization Algorithms.
In PPSN IX [2355], pages 493–502, 2006. doi: 10.1007/11844297_50. Fully
available at http://www.ie.osakafu-u.ac.jp/˜hisaoi/ci_lab_e/research/pdf_
file/multiobjective/PPSN_2006_EMO_Camera-Ready-HP.pdf [accessed 2009-07-18].
1418. Hisao Ishibuchi, Yusuke Nojima, and Tsutomu Doi. Comparison between Single-Objective
and Multi-Objective Genetic Algorithms: Performance Comparison and Performance Mea-
sures. In CEC’06 [3033], pages 3959–3966, 2006. doi: 10.1109/CEC.2006.1688438. INSPEC
Accession Number: 9723561.
1419. Hisao Ishibuchi, Noritaka Tsukamoto, and Yusuke Nojima. Iterative Approach to
Indicator-based Multiobjective Optimization. In CEC’07 [1343], pages 3967–3974, 2007.
doi: 10.1109/CEC.2007.4424988. INSPEC Accession Number: 9889480.
1420. Hisao Ishibuchi, Noritaka Tsukamoto, and Yusuke Nojima. Evolutionary Many-
Objective Optimization: A Short Review. In CEC’08 [1889], pages 2424–2431, 2008.
doi: 10.1109/CEC.2008.4631121. Fully available at http://www.ie.osakafu-u.ac.
jp/˜hisaoi/ci_lab_e/research/pdf_file/multiobjective/CEC2008_Many_
Objective_Final.pdf [accessed 2009-07-18]. INSPEC Accession Number: 10250829.
1421. Masumi Ishikawa, editor. Fourth International Conference on Hybrid Intelligent Systems
(HIS’04), December 5–8, 2004, Kitakyushu, Japan. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7695-2291-2. OCLC: 58543493, 58804879, 61262247, 71500201,
254581031, and 423969575. INSPEC Accession Number: 8470883.

1422. Hajira Jabeen and Abdul Rauf Baig. Review of Classification using Genetic Programming. In-
ternational Journal of Engineering Science and Technology, 2(2):94–103, February 2010, Engg
Journals Publications (EJP): Otteri, Vandalur, Chennai, India. Fully available at http://
www.ijest.info/docs/IJEST10-02-02-06.pdf [accessed 2010-06-25].
1423. David Jackson. Parsing and Translation of Expressions by Genetic Programming. In
GECCO’05 [304], pages 1681–1688, 2005. doi: 10.1145/1068009.1068291.
1424. Christian Jacob, Marcin L. Pilat, Peter J. Bentley, and Jonathan Timmis, editors. Proceedings
of the 4th International Conference on Artificial Immune Systems (ICARIS’05), August 14–
17, 2005, Banff, AB, Canada, volume 3627 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-28175-4.
1425. Matthias Jacob, Alexander Kuscher, Max Plauth, and Christoph Thiele. Automated Data
Augmentation Services Using Text Mining, Data Cleansing and Web Crawling Techniques.
In Third IEEE International Services Computing Contest [1347], pages 136–143, 2008.
doi: 10.1109/SERVICES-1.2008.67. INSPEC Accession Number: 10117645.
1426. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code
Optimization. ACM SIGMICRO Newsletter, 13(4):143–148, December 1982, Association for
Computing Machinery (ACM): New York, NY, USA. See also [1427].
1427. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code
Optimization. In MICRO 15 [1385], pages 143–146, 1982. See also [1426].
1428. H. V. Jagadish and Inderpal Singh Mumick, editors. Proceedings of the 1996 ACM SIGMOD
International Conference on Management of Data (SIGMOD’96), June 4–6, 1996, Montréal,
QC, Canada. ACM Press: New York, NY, USA.
1429. Martin Jähne, Xiaodong Li, and Jürgen Branke. Evolutionary Algorithms and Multi-
Objectivization for the Travelling Salesman Problem. In GECCO’09-I [2342], pages 595–602,
2009. doi: 10.1145/1569901.1569984. Fully available at http://goanna.cs.rmit.edu.
au/˜xiaodong/publications/multi-objectivization-jahne-gecco09.pdf [ac-
cessed 2010-07-18].

1430. Lakhmi C. Jain and Janusz Kacprzyk, editors. New Learning Paradigms in Soft Computing,
volume 84 (PR-200) in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH:
Berlin, Germany, February 2002. isbn: 3-7908-1436-9. Google Books ID: u2NcykOhYYwC.
1431. Nick Jakobi. Harnessing Morphogenesis. Technical Report CSRP 423, University of Sussex,
School of Cognitive Science: Brighton, UK, 1995. CiteSeerx : 10.1.1.39.372.
1432. Mohammad Jamshidi, editor. Multimedia, Image Processing, and Soft Computing: Trends,
Principles, and Applications: Proceedings of the 5th Biannual World Automation Congress
(WAC’02), June 9–13, 2002, Orlando, FL, USA, volume 13/14 in Proceedings of the World
Automation Congress. Technology Software & Information Enterprise (TSI Press) Inc.: San
Antonio, TX, USA. isbn: 1889335185 and 1889335193. Google Books ID: jPGbAAAACAAJ.
OCLC: 51296904, 54530539, 80832353, and 248985844. Catalogue no.: D2EX548.
1433. Thomas Jansen and Ingo Wegener. A Comparison of Simulated Annealing with a Simple
Evolutionary Algorithm on Pseudo-boolean Functions of Unitation. Theoretical Computer
Science, 386(1-2):73–93, October 28, 2007, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.tcs.2007.06.003.
1434. Matthias Jarke, Michael J. Carey, Klaus R. Dittrich, Frederick H. Lochovsky, Pericles
Loucopoulos, and Manfred A. Jeusfeld, editors. Proceedings of 23rd International Confer-
ence on Very Large Data Bases (VLDB’97), August 25–27, 1997, Athens, Greece. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-470-7.
1435. Jiřı́ Jaroš and Václav Dvořák. An Evolutionary Design Technique for Collective Communica-
tions on Optimal Diameter-Degree Networks. In GECCO’08 [1519], pages 1539–1546, 2008.
doi: 10.1145/1389095.1389391.
1436. Grahame A. Jastrebski and Dirk V. Arnold. Improving Evolution Strategies through
Active Covariance Matrix Adaptation. In CEC’06 [3033], pages 9719–9726, 2006.
doi: 10.1109/CEC.2006.1688662. INSPEC Accession Number: 9723774.
1437. Andrzej Jaszkiewicz. On the Computational Efficiency of Multiple Objective Metaheuristics:
The Knapsack Problem Case Study. European Journal of Operational Research (EJOR), 158
(2):418–433, October 16, 2004, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.ejor.2003.06.015.
1438. Edwin Thompson Jaynes, author. G. Larry Bretthorst, editor. Probability Theory: The Logic
of Science. Cambridge University Press: Cambridge, UK, May 2002–June 2003, Washington
University: St. Louis, MO, USA. isbn: 0521592712. Fully available at http://bayes.
wustl.edu/etj/prob/book.pdf [accessed 2009-07-25]. Google Books ID: tTN4HuUNXjgC.
OCLC: 49859931 and 271784842.
1439. Henrik Jeldtoft Jensen. Self-Organized Criticality: Emergent Complex Behavior in Physical
and Biological Systems, volume 10 in Cambridge Lecture Notes in Physics. Cambridge Uni-
versity Press: Cambridge, UK, 1998. isbn: 0521483719. Google Books ID: 7u3x1Luq50cC.
1440. Mikkel T. Jensen. Guiding Single-Objective Optimization Using Multi-Objective Methods.
In EvoWorkshop’03 [2253], pages 91–98, 2003. doi: 10.1007/3-540-36605-9 25. See also [1442].
1441. Mikkel T. Jensen. Reducing the Run-Time Complexity of Multiobjective EAs: The
NSGA-II and other Algorithms. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 7(5):503–515, October 2003, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.817234. INSPEC Accession Number: 7757990.
1442. Mikkel T. Jensen. Helper-Objectives: Using Multi-Objective Evolutionary Algo-
rithms for Single-Objective Optimisation. Journal of Mathematical Modelling and Al-
gorithms, 3(4):323–347, December 2004, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/B:JMMA.0000049378.57591.c6. See also [1440].
1443. Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, and Feng Wu, editors. Proceedings of the
Second International Conference on Advances in Natural Computation, Part I (ICNC’06-
I), September 24–28, 2006, Xı̄’ān, Shǎnxı̄, China, volume 4221 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11881070.
isbn: 3-540-45901-4. See also [1444].
1444. Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, and Feng Wu, editors. Proceedings of the
Second International Conference on Advances in Natural Computation, Part II (ICNC’06-
II), September 24–28, 2006, Xı̄’ān, Shǎnxı̄, China, volume 4222 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11881223.
isbn: 3-540-45907-3. See also [1443].
1445. Keisoku Jidō Seigyo Gakkai, editor. Proceedings of IEEE International Conference on
Evolutionary Computation (CEC’96), May 20–22, 1996, Nagoya University, Symposium &
Toyoda Auditorium: Nagoya, Japan. IEEE Computer Society Press: Los Alamitos, CA,
USA. isbn: 0-7803-2902-3. Google Books ID: C ZPAAAACAAJ, dNZVAAAAMAAJ, and
qi7jGAAACAAJ. OCLC: 312788226 and 639955150. INSPEC Accession Number: 5398378.
1446. Wan-rong Jih and Jane Yung-jen Hsu. Dynamic Vehicle Routing Using Hy-
brid Genetic Algorithms. In ICRA’99 [1386], pages 453–458, volume 1, 1999.
doi: 10.1109/ROBOT.1999.770019. Fully available at http://agents.csie.ntu.
edu.tw/˜jih/publication/ICRA99-GAinDVRP.pdf, http://neo.lcc.uma.es/
radi-aeb/WebVRP/data/articles/DVRP.pdf, and http://ntur.lib.ntu.edu.
tw/bitstream/246246/2007041910021646/1/00770019.pdf [accessed 2010-11-02]. IN-
SPEC Accession Number: 6345736.
1447. Prasanna Jog, Jung Y. Suh, and Dirk Van Gucht. Parallel Genetic Algorithms Applied
to the Traveling Salesman Problem. SIAM Journal on Optimization (SIOPT), 1(4):515–
529, 1991, Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA.
doi: 10.1137/0801031.
1448. Wilhelm Ludvig Johannsen. Elemente der exakten Erblichkeitslehre (mit Grundzügen der Bi-
ologischen Variationsstatistik). Verlag von Gustav Fischer: Jena, Thuringia, Germany, second
German, revised and very extended edition, 1909–1913. Fully available at http://www.zum.
de/stueber/johannsen/elemente/ [accessed 2008-08-20]. Google Books ID: KF8MGQAACAAJ,
RmRVAAAAMAAJ, dxb GwAACAAJ, ggglAAAAMAAJ, l1Q1AAAAMAAJ, oSHVJgAACAAJ, and
yoBUAAAAMAAJ. Limitations of natural selection on pure lines.
1449. Brad Johanson and Riccardo Poli. GP-Music: An Interactive Genetic Programming Sys-
tem for Music Generation with Automated Fitness Raters. Technical Report CSRP-98-13,
University of Birmingham, School of Computer Science: Birmingham, UK, 1998. Fully avail-
able at http://graphics.stanford.edu/˜bjohanso/gp-music/tech-report/ [ac-
cessed 2009-06-18]. See also [1453].
1450. Matthias John and Max J. Ammann. Design of a Wide-Band Printed Antenna Using a
Genetic Algorithm on an Array of Overlapping Sub-Patches. In iWAT’06 [1745], pages 92–
95, 2006–2005. Fully available at http://www.ctvr.ie/docs/RF%20Pubs/01608983.
pdf [accessed 2008-09-03]. See also [1451].
1451. Matthias John and Max J. Ammann. Optimisation of a Wide-Band Printed Monopole An-
tenna using a Genetic Algorithm. In PAPC’06 [1778], pages 237–240, 2006. Fully available
at http://www.ctvr.ie/docs/RF%20Pubs/LAPC_2006_MJ.pdf [accessed 2008-09-02]. See
also [1450].
1452. Colin G. Johnson and Juan Jesús Romero Cardalda. Workshop “Genetic Algorithms in
Visual Art and Music” (GAVAM’00). In GECCO’00 WS [2986], July 8, 2000, Las Vegas,
NV, USA. Bird-of-a-feather Workshop at the 2000 Genetic and Evolutionary Computation
Conference (GECCO-2000). Part of [2986].
1453. Colin G. Johnson and Riccardo Poli. GP-Music: An Interactive Genetic Programming Sys-
tem for Music Generation with Automated Fitness Raters. In GP’98 [1612], pages 181–
186, 1998. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
johanson_1998_GP-Music.html [accessed 2009-06-18]. CiteSeerx : 10.1.1.14.1076. See
also [1449].
1454. David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine Schevon. Optimiza-
tion by Simulated Annealing: An Experimental Evaluation. Part I, Graph Partitioning. Op-
erations Research, 37(6), November–December 1989, Institute for Operations Research and
the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford
University): Cambridge, MA, USA. doi: 10.1287/opre.37.6.865. See also [1455].
1455. David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine Schevon. Op-
timization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Color-
ing and Number Partitioning. Operations Research, 39(3):378–406, May–June 1991, Insti-
tute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD,
USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/o-
pre.39.3.378. Fully available at http://www.cs.uic.edu/˜jlillis/courses/cs594_
f02/JohnsonSA.pdf [accessed 2009-09-09]. See also [1454].
1456. Derek M. Johnson, Ankur M. Teredesai, and Robert T. Saltarelli. Genetic Programming in
Wireless Sensor Networks. In EuroGP’05 [1518], pages 96–107, 2005. doi: 10.1007/b107383.
Fully available at http://www.cs.rit.edu/˜amt/pubs/EuroGP05FinalTeredesai.
pdf [accessed 2008-06-17].
1457. Timothy D. Johnston. Selective Costs and Benefits in the Evolution of Learning. In Advances
in the Study of Behavior [2332], pages 65–106. Academic Press Professional, Inc.: San Diego,
CA, USA. doi: 10.1016/S0065-3454(08)60046-7.
1458. Jeffrey A. Joines and Christopher R. Houck. On the Use of Non-Stationary Penalty Func-
tions to Solve Nonlinear Constrained Optimization Problems with GA’s. In CEC’94 [1891],
pages 579–584, volume 2, 1994. doi: 10.1109/ICEC.1994.349995. Fully available
at http://www.cs.cinvestav.mx/˜constraint/papers/gacons.ps.gz [accessed 2008-11-15].
CiteSeerx : 10.1.1.28.1065.
1459. Donald Forsha Jones, editor. Proceedings of the Sixth Annual Congress of Genetics, 1932,
Ithaca, NY, USA. Brooklyn Botanic Gardens: New York, NY, USA and Menasha Corpora-
tion: Neenah, WI, USA.
1460. Donald R. Jones and Mark A. Beltramo. Solving Partitioning Problems with Genetic Algo-
rithms. In ICGA’91 [254], pages 442–449, 1991.
1461. Gareth Jones. Genetic and Evolutionary Algorithms. In Encyclopedia of Computational
Chemistry. Volume III: Databases and Expert Systems [2824], volume 29. John Wiley & Sons
Ltd.: New York, NY, USA, 1998. Fully available at http://www.wiley.com/legacy/
wileychi/ecc/samples/sample10.pdf [accessed 2010-08-01].
1462. Joel Jones. Abstract Syntax Tree Implementation Idioms. In PLoP’03 [2691],
2003. Fully available at http://hillside.net/plop/plop2003/Papers/
Jones-ImplementingASTs.pdf [accessed 2010-08-19].
1463. Josh Jones and Terence Soule. Comparing Genetic Robustness in Generational vs.
Steady State Evolutionary Algorithms. In GECCO’06 [1516], pages 143–150, 2006.
doi: 10.1145/1143997.1144024.
1464. Terry Jones. A Description of Holland’s Royal Road Function. Evolutionary Computation, 2
(4):411–417, Winter 1994, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1994.2.4.409.
See also [1465].
1465. Terry Jones. A Description of Holland’s Royal Road Function. Working Papers 94-11-
059, Santa Fe Institute: Santa Fé, NM, USA, November 1994. Fully available at http://
www.santafe.edu/media/workingpapers/94-11-059.pdf [accessed 2010-12-04]. See also
[1464].
1466. Terry Jones. Crossover, Macromutation, and Population-based Search. In ICGA’95 [883],
pages 73–80, 1995. CiteSeerx : 10.1.1.42.3194.
1467. Terry Jones. Evolutionary Algorithms, Fitness Landscapes and Search. PhD thesis, Uni-
versity of New Mexico: Albuquerque, NM, USA, May 1995, Ron Mullin, Ian Munro,
Charlie Colbourn, Douglas Hofstadter, Gregory J. E. Rawlins, John Henry Holland, and
Stephanie Forrest, Advisors. Fully available at http://jon.es/research/phd.pdf
and http://www.cs.unm.edu/˜forrest/dissertations-and-proposals/terry.
pdf [accessed 2009-07-10].
1468. Terry Jones and Stephanie Forrest. Fitness Distance Correlation as a Measure of Prob-
lem Difficulty for Genetic Algorithms. In ICGA’95 [883], pages 184–192, 1995. Fully
available at http://www.santafe.edu/research/publications/workingpapers/
95-02-022.ps [accessed 2009-07-10]. CiteSeerx : 10.1.1.42.415.
1469. Aravind K. Joshi, editor. Proceedings of the 9th International Joint Conference on Artificial
Intelligence (IJCAI’85-I), August 1985, Los Angeles, CA, USA, volume 1. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/
ijcai/IJCAI-85-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1470].
1470. Aravind K. Joshi, editor. Proceedings of the 9th International Joint Conference on Artificial
Intelligence (IJCAI’85-II), August 1985, Los Angeles, CA, USA, volume 2. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.
in/ijcai/IJCAI-85-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [1469].
1471. Bryant A. Julstrom. Seeding the Population: Improved Performance in a Genetic Al-
gorithm for the Rectilinear Steiner Problem. In SAC’94 [734], pages 222–226, 1994.
doi: 10.1145/326619.326728.
1472. Bryant A. Julstrom. Evolutionary Codings and Operators for the Terminal Assignment
Problem. In GECCO’09-I [2342], pages 1805–1806, 2009. doi: 10.1145/1569901.1570171.
1473. Lukasz Juszczyk, Anton Michlmayr, and Christian Platzer. Large Scale Web Service
Discovery and Composition using High Performance In-Memory Indexing. In CEC/EEE’07 [1344],
pages 509–512, 2007. doi: 10.1109/CEC-EEE.2007.60. Fully available at https://berlin.
vitalab.tuwien.ac.at/˜florian/papers/cec2007.pdf [accessed 2007-10-24]. INSPEC
Accession Number: 9868635. See also [319].
K
1474. Leslie Pack Kaelbling and Alessandro Saffiotti, editors. Proceedings of the Nineteenth In-
ternational Joint Conference on Artificial Intelligence (IJCAI’05), July 30–August 5, 2005,
Edinburgh, Scotland, UK. International Joint Conferences on Artificial Intelligence: Den-
ver, CO, USA. isbn: 0938075934. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-05/IJCAI-05%20CONTENT.htm [accessed 2008-04-01]. OCLC: 442578924.
1475. Professor Kaelo. Some Population-set based Methods for Unconstrained Global Optimization.
PhD thesis, Witwatersrand University, School of Computational and Applied Mathematics:
Johannesburg, South Africa, 2005, M. Montaz Ali, Advisor.
1476. Professor Kaelo and M. Montaz Ali. A Numerical Study of Some Modified Differen-
tial Evolution Algorithms. European Journal of Operational Research (EJOR), 169(3):
1176–1184, March 16, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.ejor.2004.08.047.
1477. Professor Kaelo and M. Montaz Ali. Differential Evolution Algorithms using Hybrid Muta-
tion. Computational Optimization and Applications, 37(2):231–246, June 2007, Kluwer Aca-
demic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s10589-007-9014-3.
1478. Tatiana Kalganova. An Extrinsic Function-Level Evolvable Hardware Approach. In Eu-
roGP’00 [2198], pages 60–75, 2000. doi: 10.1007/b75085. CiteSeerx : 10.1.1.42.1386.
1479. Tatiana Kalganova and Julian Francis Miller. Evolving More Efficient Digital Circuits by
Allowing Circuit Layout Evolution and Multi-Objective Fitness. In EH’99 [2614], pages 54–
63, 1999. doi: 10.1109/EH.1999.785435. CiteSeerx : 10.1.1.47.948. INSPEC Accession
Number: 6338575.
1480. Leila Kallel, Bart Naudts, and Alex Rogers, editors. Proceedings of the Second Evonet Sum-
mer School on Theoretical Aspects of Evolutionary Computing, September 1–7, 1999, Uni-
versity of Antwerp (RUCA): Antwerp, Belgium, Natural Computing Series. Springer New
York: New York, NY, USA. isbn: 3-540-67396-2. Google Books ID: 6TWWVJD2yXYC.
OCLC: 174724209. Library of Congress Control Number (LCCN): 00061910. GBV-
Identification (PPN): 319367169. LC Classification: QA76.618 .T47 2001.
1481. Olav Kallenberg. Foundations of Modern Probability, Probability and Its Applications,
Springer Series in Statistics (SSS). Springer New York: New York, NY, USA, 2nd edi-
tion, January 8, 2002. isbn: 0-3879-4957-7 and 0-3879-5313-2. Google Books
ID: L6fhXh13OyMC, TBgFslMy8V4C, and lzLVjXkERQQC. OCLC: 46937587.
1482. Olav Kallenberg. Probabilistic Symmetries and Invariance Principles, Probability and
Its Applications, Springer Series in Statistics (SSS). Springer New York: New York,
NY, USA, June 27, 2005. isbn: 0-3872-5115-4 and 0-3872-8861-9. Google Books
ID: E2mUQjV09y4C, Zn3rKAAACAAJ, and apT0AQAACAAJ. OCLC: 60740995, 64668233,
315796434, and 317470414.
1483. Subrahmanyam Kalyanasundaram, Richard J. Lipton, Kenneth W. Regan, and Farbod
Shokrieh. Improved Simulation of Nondeterministic Turing Machines. In MFCS’10 [2582],
2010. Fully available at http://www.cc.gatech.edu/˜subruk/pdf/simulate.pdf [ac-
cessed 2010-06-24].
1484. Subbarao Kambhampati, editor. Proceedings of the Twentieth National Conference on Artifi-
cial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Confer-
ence (AAAI’05 / IAAI’05), July 9–13, 2005, Pittsburgh, PA, USA. AAAI Press: Menlo Park,
CA, USA and MIT Press: Cambridge, MA, USA. isbn: 1-57735-236-X. Partly available
at http://www.aaai.org/Conferences/AAAI/aaai05.php and http://www.aaai.
org/Conferences/IAAI/iaai06.php [accessed 2007-09-06].
1485. Subbarao Kambhampati and Craig A. Knoblock, editors. Proceedings of IJCAI’03 Work-
shop on Information Integration on the Web (IIWeb’03), August 9–10, 2003, Acapulco,
México. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available
at http://www.isi.edu/info-agents/workshops/ijcai03/proceedings.htm [ac-
cessed 2008-04-01]. Part of [1118].
1486. Peter Kampstra. Evolutionary Computing in Telecommunications – A Likely EC Success
Story. Business mathematics and informatics (bmi), Vrije Universiteit Amsterdam, Faculty
of Sciences: Amsterdam, The Netherlands, August 2005, Rob D. van der
Mei and Ágoston E. Eiben, Supervisors. Fully available at http://www.few.vu.nl/
stagebureau/werkstuk/werkstukken/werkstuk-kampstra.pdf [accessed 2008-09-04].
1487. Ali Kamrani, Wang Rong, and Ricardo Gonzalez. A Genetic Algorithm Methodology for Data
Mining and Intelligent Knowledge Acquisition. Computers & Industrial Engineering, 40(4):
361–377, September 2001, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon
Press: Oxford, UK. doi: 10.1016/S0360-8352(01)00036-5.
1488. Chih-Wei Kang and Jian-Hung Chen. Multi-Objective Evolutionary Optimization of 3D
Differentiated Sensor Network Deployment. In GECCO’09-II [2343], pages 2059–2064, 2009.
doi: 10.1145/1570256.1570276.
1489. Lishan Kang, Z. Cai, and Y. Yan, editors. Progress in Intelligence Computation and Intel-
ligence: International Symposium on Intelligence, Computation and Applications (ISICA),
April 4–6, 2005, Wǔhàn, Húběi, China. China University of Geosciences (CUG), School of
Computer Science: Wǔhàn, Húběi, China.
1490. Lishan Kang, Yong Liu, and Sanyou Zeng, editors. Second International Symposium on
Advances in Computation and Intelligence (ISICA’07), September 21–23, 2007, Wǔhàn,
Húběi, China, volume 4683/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-74581-5. isbn: 3-540-74580-7 and 6611353755. Google Books
ID: t7zBMdjahZUC. OCLC: 170907281, 255916871, 315404653, 318300255, and
403659231. Library of Congress Control Number (LCCN): 2007933208.
1491. Zhuo Kang, Lishan Kang, Xiufen Zou, Minzhong Liu, Changhe Li, Ming Yang, Yan Li, Yup-
ing Chen, and Sanyou Zeng. A New Evolutionary Decision Theory for Many-Objective Opti-
mization Problems. In ISICA’07 [1490], pages 1–11, 2007. doi: 10.1007/978-3-540-74581-5 1.
1492. Proceedings of the United States Department of Energy Cyber Security Group 2004 Training
Conference, April 24–27, 2004, Kansas City, KS, USA. CD ROM Proceedings.
1493. 11th Conference on Technologies and Applications of Artificial Intelligence (TAAI’06), De-
cember 2006, Kaohsiung, Taiwan.
1494. Bahar Karaoğlu, Haluk Topçuoğlu, and Fikret Gürgen. Evolutionary Algorithms for Location
Area Management. In EvoWorkshops’05 [2340], pages 175–184, 2005.
1495. John K. Karlof, editor. Integer Programming: Theory and Practice, Operations Research
Series. CRC Press, Inc.: Boca Raton, FL, USA, September 22, 2005. isbn: 0-8493-1914-5,
1420039598, and 6610536953. Google Books ID: TO3xQO9XEjwC. OCLC: 58052161,
309875596, 508010376, and 629733350. Library of Congress Control Number
(LCCN): 2005041912. LC Classification: T57.74 .I547 2006.
1496. Charles L. Karr and L. Michael Freeman, editors. Industrial Applications of Genetic Algo-
rithms, International Series on Computational Intelligence. CRC Press, Inc.: Boca Raton,
FL, USA, December 29, 1998. isbn: 0849398010.
1497. Firat Kart, Zhongnan Shen, and Cagdas Evren Gerede. The MIDAS System: A Service
Oriented Architecture for Automated Supply Chain Management. In SCContest’06 [554],
pages 487–494, 2006. doi: 10.1109/SCC.2006.103. INSPEC Accession Number: 9165413.
1498. Nikola Kasabov and Peter Alexander Whigham, editors. Proceedings of the 5th Australia-
Japan Joint Workshop on Intelligent & Evolutionary Systems – “From Population Genetics
to Evolving Intelligent Systems” (AJWIS’01), November 19–21, 2001, University of Otago:
Dunedin, New Zealand.
1499. Stuart Alan Kauffman. Adaptation on Rugged Fitness Landscapes. In Lectures in the Sci-
ences of Complexity: The Proceedings of the 1988 Complex Systems Summer School [2608],
pages 527–618, volume Lecture 1, 1988.
1500. Stuart Alan Kauffman. The Origins of Order: Self-Organization and Selection in Evolution.
Oxford University Press, Inc.: New York, NY, USA, May 1993. isbn: 0195058119 and
0195079515. Google Books ID: 7mOsGwAACAAJ and lZcSpRJz0dgC. OCLC: 23253930,
59892535, and 263622938.
1501. Stuart Alan Kauffman and Simon Asher Levin. Towards a General Theory of Adaptive
Walks on Rugged Landscapes. Journal of Theoretical Biology, 128(1):11–45, September 7,
1987, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic
Press Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0022-5193(87)80029-2. PubMed
ID: 3431131.
1502. Stuart Alan Kauffman and Edward D. Weinberger. The NK Model of Rugged Fitness Land-
scapes and its Application to Maturation of the Immune Response. Journal of Theoreti-
cal Biology, 141(2):211–245, November 21, 1989, Elsevier Science Publishers B.V.: Amster-
dam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA.
doi: 10.1016/S0022-5193(89)80019-0.
1503. Leonard Kaufman and Peter J. Rousseeuw. Finding Groups in Data – An Introduction
to Cluster Analysis, volume 59 in Wiley Series in Probability and Mathematical Statistics
– Applied Probability and Statistics Section Series. Wiley Interscience: Chichester, West
Sussex, UK, 9th edition, March 1990–May 2005. isbn: 0-471-87876-6. Google Books
ID: Q5wQAQAAIAAJ. OCLC: 19458846.
1504. Henry Kautz and Bruce W. Porter, editors. Proceedings of the Seventeenth National Confer-
ence on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial
Intelligence (AAAI’00, IAAI’00), July 30–August 3, 2000, Austin, TX, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51112-6. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai00.php and http://
www.aaai.org/Conferences/IAAI/iaai00.php [accessed 2007-09-06].
1505. W. H. Kautz. Unit-distance Error-Checking Codes. IRE Transactions on Electronic Comput-
ers, EC-7:177–180, 1958, Institute of Radio Engineers: New York, NY, USA, R. E. Meagher,
editor.
1506. Steven M. Kay. Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory,
Prentice Hall Signal Processing Series. Prentice Hall PTR: Englewood Cliffs, Upper Saddle
River, NJ, USA, March 26, 1993. isbn: 0133457117 and 013504135X. Google Books
ID: 3GcsPwAACAAJ. OCLC: 26504848, 177055890, and 313604278.
1507. Hilmi Gunes Kayacık, Malcolm Ian Heywood, and A. Nur Zincir-Heywood. Evolving Buffer
Overflow Attacks with Detector Feedback. In EvoWorkshops’07 [1050], pages 11–20, 2007.
doi: 10.1007/978-3-540-71805-5 2.
1508. Hilmi Gunes Kayacık, A. Nur Zincir-Heywood, Malcolm Ian Heywood, and Stefan Burschka.
Testing Detector Parameterization Using Evolutionary Exploit Generation. In EvoWork-
shops’09 [1052], pages 105–110, 2009. doi: 10.1007/978-3-642-01129-0 13.
1509. Sanza T. Kazadi. Conjugate Schema and Basis Representation of Crossover and Mutation.
Evolutionary Computation, 6(2):129–160, Summer 1998, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1998.6.2.129. PubMed ID: 10021744.
1510. Michael J. Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron. An Experimental and
Theoretical Comparison of Model Selection Methods. In COLT’95 [1802], pages 21–30, 1995.
doi: 10.1145/225298.225301. Fully available at http://www.cis.upenn.edu/˜mkearns/
papers/ms.pdf [accessed 2008-03-02]. See also [1511].
1511. Michael J. Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron. An Experimental
and Theoretical Comparison of Model Selection Methods. Machine Learning, 27(1):7–50,
April 1997, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands:
Dordrecht, Netherlands, Philip M. Long, editor. doi: 10.1023/A:1007344726582. Fully avail-
able at http://www.springerlink.com/content/r26rk7q55615qq43/fulltext.
pdf [accessed 2009-07-12]. See also [1510].
1512. Tom Kehler and Stan Rosenschein, editors. Proceedings of the 5th National Conference
on Artificial Intelligence (AAAI’86-I), August 11–15, 1986, Philadelphia, PA, USA, volume
Science (Volume 1). Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai86.php [accessed 2007-09-06].
See also [1513].
1513. Tom Kehler and Stan Rosenschein, editors. Proceedings of the 5th National Conference
on Artificial Intelligence (AAAI’86-II), August 11–15, 1986, Philadelphia, PA, USA, volume
Engineering (volume 2). Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai86.php [accessed 2007-09-06].
See also [1512].
1514. Maarten Keijzer. Scaled Symbolic Regression. Genetic Programming and
Evolvable Machines, 5(3):259–269, September 2004, Springer Netherlands: Dor-
drecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1023/B:GENP.0000030195.77571.f9.
1515. Maarten Keijzer, editor. Late Breaking Papers at the 2004 Genetic and Evolutionary Com-
putation Conference (GECCO’04 LBP), 2004, Red Lion Hotel: Seattle, WA, USA. Springer-
Verlag GmbH: Berlin, Germany. Distributed on CD-ROM. Part of [752].
1516. Maarten Keijzer and Mike Cattolico, editors. Proceedings of the 8th Annual Conference on
Genetic and Evolutionary Computation (GECCO’06), July 8–12, 2006, Renaissance Seattle
Hotel: Seattle, WA, USA. ACM Press: New York, NY, USA. isbn: 1-59593-186-4 and
1-59593-187-2. Google Books ID: 6y1pAAAACAAJ. OCLC: 71335687 and 85782847.
ACM Order No.: 910060.
1517. Maarten Keijzer, Una-May O’Reilly, Simon M. Lucas, Ernesto Jorge Fernandes Costa, and
Terence Soule, editors. Proceedings of the 7th European Conference on Genetic Programming
(EuroGP’04), April 5–7, 2004, Coimbra, Portugal, volume 3003/2004 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-21346-5.
1518. Maarten Keijzer, Andrea G. B. Tettamanzi, Pierre Collet, Jano I. van Hemert, and Marco
Tomassini, editors. Proceedings of the 8th European Conference on Genetic Program-
ming (EuroGP’05), March 30–April 1, 2005, Lausanne, Switzerland, volume 3447/2005
in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b107383.
isbn: 3-540-25436-6 and 3-540-31989-1. Google Books ID: -Q88PwAACAAJ and
lcw0Pkgc6OUC. OCLC: 59351789, 145832391, 254751818, and 318300168. Library of
Congress Control Number (LCCN): 2005922866.
1519. Maarten Keijzer, Giuliano Antoniol, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Do-
err, Nikolaus Hansen, John H. Holmes, Gregory S. Hornby, Daniel Howard, James Kennedy,
Sanjeev Kumar, Fernando G. Lobo, Julian Francis Miller, Jason H. Moore, Frank Neumann,
Martin Pelikan, Jordan B. Pollack, Kumara Sastry, Kenneth Owen Stanley, Adrian Stoica,
El-Ghazali Talbi, and Ingo Wegener, editors. Proceedings of the Genetic and Evolution-
ary Computation Conference (GECCO’08), July 12–16, 2008, Renaissance Atlanta Hotel
Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. isbn: 1-60558-130-5
and 1-60558-131-3. Partly available at http://www.sigevo.org/gecco-2008/ [ac-
cessed 2011-02-28]. OCLC: 288975508, 288975509, 299786110, 302317062, and 373680990.
ACM Order No.: 910080 and 910081. See also [585, 1872, 2268, 2537].
1520. Jozef Kelemen and Petr Sosı́k, editors. Proceedings of the 6th European Conference on Ad-
vances in Artificial Life (ECAL’01), September 10–14, 2001, Prague, Czech Republic, volume
2159/-1 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/3-540-44811-X. isbn: 3-540-42567-5.
1521. Evelyn Fox Keller and Elisabeth A. Lloyd, editors. Keywords in Evolutionary Biology. Har-
vard University Press: Cambridge, MA, USA, November 1992. isbn: 0-674-50312-0 and
0-674-50313-9. Google Books ID: Hvm7sCuyRV4C and fgRpAAAACAAJ.
1522. James M. Keller and Olfa Nasraoui, editors. Proceedings of the Annual Meeting of the
North American Fuzzy Information Processing Society (NAFIPS’02), June 27–29, 2002,
Tulane University: New Orleans, LA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-7461-4. Google Books ID: mpdVAAAAMAAJ. OCLC: 50258796, 63197367,
248814326, and 268938117. Catalogue no.: 02TH8622.
1523. James Kennedy and Russel C. Eberhart. Particle Swarm Optimization. In ICNN’95 [1362],
pages 1942–1948, volume 4, 1995. doi: 10.1109/ICNN.1995.488968. Fully available
at http://www.engr.iupui.edu/˜shi/Coference/psopap4.html [accessed 2010-12-24].
INSPEC Accession Number: 5263228.
1524. James Kennedy and Russel C. Eberhart. Swarm Intelligence: Collective, Adaptive. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2001, Yuhui Shi, Publisher.
isbn: 1558605959. Partly available at http://www.engr.iupui.edu/˜eberhart/web/
PSObook.html [accessed 2008-08-22]. Google Books ID: pHku3gYfL2UC and vOx-QV3sRQsC.
1525. Brian Wilson Kernighan and Dennis M. Ritchie. The C Programming Language, Prentice-
Hall Software Series. Prentice Hall PTR: Englewood Cliffs, Upper Saddle River, NJ,
USA, first/second edition, 1978–1988. isbn: 0-13-110163-3, 0-13-110362-8, and
0-13-110370-9. OCLC: 3608698, 251418906, 257254858, and 299378788. Library
of Congress Control Number (LCCN): 77028983. GBV-Identification (PPN): 023146052,
076304213, and 423015281. LC Classification: QA76.73.C15 K47.
1526. Martin L. Kersten, Boris Novikov, Jens Teubner, Vladimir Polutin, and Stefan Manegold,
editors. Proceedings of the 12th International Conference on Extending Database Technology:
Advances in Database Technology (EDBT’09), March 24–26, 2009, Saint Petersburg, Russia,
volume 360 in ACM International Conference Proceeding Series (AICPS). ACM Press: New
York, NY, USA.
1527. Faten Kharbat, Larry Bull, and Mohammed Odeh. Mining Breast Cancer Data with
XCS. In GECCO’07-I [2699], pages 2066–2073, 2007. doi: 10.1145/1276958.1277362. Fully
available at http://www.cs.york.ac.uk/rts/docs/GECCO_2007/docs/p2066.pdf
[accessed 2010-07-26].
1528. Vineet Khare. Performance Scaling of Multi-Objective Evolutionary Algorithms. Master’s
thesis, University of Birmingham, School of Computer Science: Birmingham,
UK, September 21, 2002, Xin Yao and Kalyanmoy Deb, Supervisors. Fully available
at http://www.lania.mx/˜ccoello/EMOO/thesis_khare.pdf.gz [accessed 2011-08-07].
CiteSeerx : 10.1.1.15.835. See also [1530].
1529. Vineet Khare, Xin Yao, and Kalyanmoy Deb. Performance Scaling of Multi-Objective
Evolutionary Algorithms. Technical Report 2002009, Kanpur Genetic Algorithms Labo-
ratory (KanGAL), Department of Mechanical Engineering, Indian Institute of Technology
Kanpur (IIT): Kanpur, Uttar Pradesh, India, October 2002. Fully available at http://
citeseer.ist.psu.edu/old/597663.html and http://www.iitk.ac.in/kangal/
papers/k2002009.pdf.gz [accessed 2009-07-18]. See also [1528, 1530].
1530. Vineet Khare, Xin Yao, and Kalyanmoy Deb. Performance Scaling of Multi-Objective Evolu-
tionary Algorithms. In EMO’03 [957], pages 367–390, 2003. doi: 10.1007/3-540-36970-8 27.
See also [1528, 1529].
1531. Eik Fun Khor, Kay Chen Tan, Tong Heng Lee, and Chi Keong Goh. A Study on Distribution
Preservation Mechanism in Evolutionary Multi-Objective Optimization. Artificial Intelli-
gence Review – An International Science and Engineering Journal, 23(1):31–33, March 2005,
Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Nor-
well, MA, USA. doi: 10.1007/s10462-004-2902-3.
1532. Sami Khuri and Teresa Chiu. Heuristic Algorithms for the Terminal Assignment
Problem. In SAC’97 [435], pages 247–251, 1997. doi: 10.1145/331697.331748.
CiteSeerx : 10.1.1.53.9516.
1533. Sami Khuri, Thomas Bäck, and Jörg Heitkötter. The Zero/One Multiple Knapsack Problem
and Genetic Algorithms. pages 188–193. doi: 10.1145/326619.326694. isbn: 0-89791-647-6.
Fully available at http://doi.acm.org/10.1145/326619.326694 [accessed 2008-06-23].
CiteSeerx : 10.1.1.40.4393.
1534. Sami Khuri, Martin Schütz, and Jörg Heitkötter. Evolutionary Heuristics for the
Bin Packing Problem. In ICANNGA’95 [2141], pages 285–288, 1995. Fully available
at http://www6.uniovi.es/pub/EC/GA/papers/icannga95.ps.gz [accessed 2010-10-18].
CiteSeerx : 10.1.1.33.3373.
1535. Hyumin Kim and Yong-Hyuk Kim. Optimal Designs of Ambiguous Mobile Key-
pad with Alphabetical Constraints. In GECCO’09-I [2342], pages 1931–1932, 2009.
doi: 10.1145/1569901.1570243.
1536. Jong-Hwan Kim and Hyun-Sik Shim. Evolutionary Programming-Based Optimal Robust
Locomotion Control of Autonomous Mobile Robots. In EP’95 [1848], pages 631–644, 1995.
1537. Minkyu Kim, Varun Aggarwal, Una-May O’Reilly, Muriel Médard, and Wonsik Kim. Genetic
Representations for Evolutionary Minimization of Network Coding Resources. In EvoWork-
shops’07 [1050], pages 21–31, 2007. doi: 10.1007/978-3-540-71805-5_3.
1538. Moon-Chan Kim, Chang Ouk Kim, Seong Rok Hong, and Ick-Hyun Kwon. Forward–
backward Analysis of RFID-enabled Supply Chain using Fuzzy Cognitive Map and Genetic
Algorithm. Expert Systems with Applications – An International Journal, 35(3):1166–1176,
October 2008, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Ox-
ford, UK. doi: 10.1016/j.eswa.2007.08.015.
1539. Kenneth E. Kinnear, Jr, editor. Advances in Genetic Programming Volume 1, Com-
plex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, April 7,
1994. isbn: 0-262-11188-8. Google Books ID: eu2JplnQdBkC.
OCLC: 29595260, 468496888, and 473279948. GBV-Identification (PPN): 133135691
and 192075969.
1540. Kenneth E. Kinnear, Jr. Fitness Landscapes and Difficulty in Genetic Programming. In
CEC’94 [1891], pages 142–147, volume 1, 1994. doi: 10.1109/ICEC.1994.350026. Fully avail-
able at http://eprints.kfupm.edu.sa/41390/ and http://www.cs.bham.ac.uk/
˜wbl/biblio/gp-html/ieee94_kinnear.html [accessed 2009-07-10].
1541. Scott Kirkpatrick, C. Daniel Gelatt, Jr., and Mario P. Vecchi. Optimization by Simu-
lated Annealing. Science Magazine, 220(4598):671–680, May 13, 1983, American Associa-
tion for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press
(Stanford University): Cambridge, MA, USA. doi: 10.1126/science.220.4598.671. Fully
available at http://fezzik.ucd.ie/msc/cscs/ga/kirkpatrick83optimization.
pdf [accessed 2009-07-03]. CiteSeerx : 10.1.1.18.4175.
1542. I. M. A. Kirkwood, S. H. Shami, and Mark C. Sinclair. Discovering Simple
Fault-Tolerant Routing Rules using Genetic Programming. In ICANNGA’97 [2527],
1997. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
kirkam_1997_dsftrr.html [accessed 2009-07-21]. CiteSeerx : 10.1.1.51.4447. See also
[2463].
1543. Marc Kirschner and John Gerhart. Evolvability. Proceedings of the National Academy of
Science of the United States of America (PNAS), 95(15):8420–8427, July 15, 1998, National
Academy of Sciences: Washington, DC, USA. Fully available at http://www.pnas.org/
cgi/content/full/95/15/8420 [accessed 2009-07-10].
1544. Hajime Kita and Yasuhito Sano. Genetic Algorithms for Optimization of Noisy Fitness Func-
tions and Adaptation to Changing Environments. In 2003 Joint Workshop of Hayashibara
Foundation and Hayashibara Forum – Physics and Information – and Workshop on Statistical
Mechanical Approach to Probabilistic Information Processing (SMAPIP) [2636], 2003. Fully
available at http://www.smapip.is.tohoku.ac.jp/˜smapip/2003/hayashibara/
proceedings/HajimeKita.pdf [accessed 2008-07-19].
1545. Second World Congress on Nature and Biologically Inspired Computing (NaBIC’10), Decem-
ber 15–17, 2010, Kitakyushu International Conference Center: Kitakyushu, Japan.
1546. M. Kiuchi, Y. Shinano, R. Hirabayashi, and Y. Saruwatari. An Exact Algorithm for the
Capacitated Arc Routing Problem using Parallel Branch and Bound Method. In Abstracts
of the National Conference of the Operations Research Society of Japan [2087], pages 28–29,
Spring 1995. Written in Japanese.
1547. Stefan Klahold, Steffen Frank, Robert E. Keller, and Wolfgang Banzhaf. Exploring the Pos-
sibilities and Restrictions of Genetic Programming in Java Bytecode. In GP’98 LBP [1605],
pages 120–124, 1998.
1548. David G. Kleinbaum, Lawrence L. Kupper, and Keith E. Muller, editors. Applied Regres-
sion Analysis and other Multivariable Methods, The Duxbury Series in Statistics and Deci-
sion Sciences. Prindle, Weber, Schmidt (PWS) Publishing Company: Boston, MA, USA,
1988. isbn: 0-871-50123-6. OCLC: 15317108, 299861986, and 476708711. Library
of Congress Control Number (LCCN): 87005397. GBV-Identification (PPN): 040577813,
219461139, and 231144253. LC Classification: QA278.
1549. Jon Kleinberg and Christos Papadimitriou. Computability and Complexity. In Computer
Science: Reflections on the Field, Reflections from the Field [618], Chapter 2.1, pages 37–50.
National Academies Press: Washington, DC, USA, 2004. Fully available at http://www.
cs.cornell.edu/home/kleinber/cstb-turing.pdf [accessed 2010-06-25].
1550. Georg Kliewer and Peter Unruh. Parallele Simulated Annealing Bibliothek: Entwicklung und
Einsatz zur Flugplanoptimierung. Master’s thesis, Universität-Gesamthochschule Paderborn,
Fachbereich 17: Mathematik/Informatik: Paderborn, Germany, February 1998, Burkhard
Monien and S. Tschöke, Advisors. Fully available at http://www.uni-paderborn.de/
cs/geokl/publications/geokl.master.ps [accessed 2010-09-25].
1551. Dimitri Knjazew and David Edward Goldberg. Solving Permutation Problems with the
Ordering Messy Genetic Algorithms. In Advances in Evolutionary Computing – Theory and
Applications [1049], pages 321–350. Springer New York: New York, NY, USA, 2002.
1552. Joshua D. Knowles and David Wolfe Corne. Properties of an Adaptive Archiving Al-
gorithm for Storing Nondominated Vectors. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), 7(2):100–116, April 2003, IEEE Computer Society: Washington,
DC, USA. doi: 10.1109/TEVC.2003.810755. Fully available at http://dbkgroup.org/
knowles/KnowlesCorneIEEEToECsub2a.pdf [accessed 2010-12-16]. INSPEC Accession Num-
ber: 7617174.
1553. Joshua D. Knowles, Richard A. Watson, and David Wolfe Corne. Reducing Local Optima
in Single-Objective Problems by Multi-objectivization. In EMO’01 [3099], pages 269–283,
2001. doi: 10.1007/3-540-44719-9_19. Fully available at http://www.macs.hw.ac.uk/
˜dwcorne/rlo.pdf [accessed 2010-07-18].
1554. Joshua D. Knowles, David Wolfe Corne, and Kalyanmoy Deb. Multiobjective Problem Solving
from Nature – From Concepts to Applications, Natural Computing Series. Springer New York:
New York, NY, USA, 2008. doi: 10.1007/978-3-540-72964-8. isbn: 3-540-72963-1. Google
Books ID: pzq8t9rCKC8C. OCLC: 255894619, 300164898, and 315697103. Library of
Congress Control Number (LCCN): 2007936865.
1555. Donald Ervin Knuth. Big Omicron and Big Omega and Big Theta. ACM SIGACT News, 8
(2):18–24, April–June 1976, ACM Press: New York, NY, USA. Fully available at http://
portal.acm.org/citation.cfm?id=1008328.1008329 [accessed 2010-06-24].
1556. Donald Ervin Knuth. Fundamental Algorithms, volume 1 in The Art of Computer Program-
ming (TAOCP). Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, third
edition, 1997. isbn: 0-201-89683-4. Google Books ID: 5oJQAAAAMAAJ, B31GAAAAYAAJ,
and J MySQAACAAJ.
1557. Donald Ervin Knuth. Sorting and Searching, volume 3 in The Art of Computer Programming
(TAOCP). Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, second edition,
1998. isbn: 0-201-03803-X, 0-201-03809-9, 0-201-03822-6, 0-201-85394-9, and
0-201-89685-0. Google Books ID: 3YpQAAAAMAAJ, QDh9AAAACAAJ, QTh9AAAACAAJ,
STh9AAAACAAJ, SoRQAAAAMAAJ, YG2SGQAACAAJ, and jYNQAAAAMAAJ. Original from the
University of Michigan.
1558. King-Tim Ko, Kit-Sang Tang, Cheung-Yau Chan, Kim-Fung Man, and Sam Kwong. Using
Genetic Algorithms to Design Mesh Networks. Computer, 30(8):56–61, August 1997, IEEE
Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/2.607086.
1559. Ryan K. L. Ko, Andre Jusuf, and Siang Guan (Stephen) Lee. Genesis – Dynamic Collabo-
rative Business Process Formulation Based on Business Goals and Criteria. In SERVICES
Cup’09 [1351], pages 123–129, 2009. doi: 10.1109/SERVICES-I.2009.108. INSPEC Accession
Number: 10967055.
1560. Yves Kodratoff, editor. Machine Learning: Fifth European Working Session on Learning
(EWSL’91), March 6–8, 1991, Porto, Portugal, volume 482 in Lecture Notes in Artificial In-
telligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. isbn: 0-387-53816-X.
1561. Natallia Kokash. Web Service Discovery with Implicit QoS Filtering. In IBM
PhD Student Symposium at ICSOC 2005 [90], pages 61–66, 2005. Fully available
at http://homepages.cwi.nl/˜kokash/documents/PhDICSOC.pdf [accessed 2011-01-10].
CiteSeerx : 10.1.1.104.691.
1562. Murat Köksalan and Stanley Zionts, editors. Proceedings of the 15th International Con-
ference on Multiple Criteria Decision Making: Multiple Criteria Decision Making in the
New Millennium (MCDM’00), June 10–14, 2000, Middle East Technical University: Ankara,
Turkey, volume 507 in Lecture Notes in Economics and Mathematical Systems. Springer-
Verlag: Berlin/Heidelberg. Partly available at http://mcdm2000.ie.metu.edu.tr/ [ac-
cessed 2007-09-10]. Published November 9, 2001.
1563. Krasimir Kolarov. Landscape Ruggedness in Evolutionary Algorithms. In CEC’97 [173],
pages 19–24, 1997. doi: 10.1109/ICEC.1997.592261. CiteSeerx : 10.1.1.40.329.
1564. Ville Kolehmainen, Pekka Toivanen, and Bartlomiej Beliczyński, editors. Revised Se-
lected Papers from the 9th International Conference on Adaptive and Natural Comput-
ing Algorithms (ICANNGA’09), April 23–25, 2009, University of Kuopio: Kuopio, Fin-
land, volume 5495/2009 in Theoretical Computer Science and General Issues (SL 1),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-04921-7. isbn: 3-642-04920-6. Library of Congress Control Number
(LCCN): 2009935814.
1565. Andreı̆ Nikolajevič Kolmogorov. Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer-
Verlag GmbH: Berlin, Germany and Chelsea Publishing Company: New York, NY,
USA, 1933. isbn: 0-387-06110-X and 3-540-06110-X. Google Books ID: NN1aJgAACAAJ,
PDM5AAAAIAAJ, jZA0PQAACAAJ, lxvQAAAAMAAJ, and ob4rAAAAYAAJ. OCLC: 64177345,
74144504, and 185672567. See also [1566].
1566. Andreı̆ Nikolajevič Kolmogorov. Foundations of the Theory of Probability. Chelsea Pub-
lishing Company: New York, NY, USA, second edition, 1950. isbn: 0-8284-0023-7. Fully
available at http://www.mathematik.com/Kolmogorov/ [accessed 2007-09-15]. Google Books
ID: BZ5UJwAACAAJ, EmetAAAAMAAJ, delQAAAACAAJ, puRLAAAAMAAJ, t07dPAAACAAJ,
and tk7dPAAACAAJ. OCLC: 233143719. See also [1565].
1567. Krzysztof Kolowrocki, editor. Advances in Safety and Reliability – Proceedings of the Eu-
ropean Safety and Reliability Conference (ESREL’05), June 27–30, 2005, Tri City (Gdynia,
Gdansk, Sopot), Poland. Taylor and Francis LLC: London, UK. isbn: 0-415-38340-4. Google
Books ID: YTtWDEFp 3cC.
1568. Piet Kommers and Griff Richards, editors. Proceedings of World Conference on Educational
Multimedia, Hypermedia and Telecommunications (EDMEDIA’05), June 27, 2005, Le Cen-
tre Sheraton Hotel Montréal: Montréal, QC, Canada. Association for the Advancement of
Computing in Education (AACE): Chesapeake, VA, USA. Partly available at http://www.
aace.org/conf/edmedia/edmed05poster.pdf [accessed 2011-02-28].
1569. Srividya Kona, Ajay Bansal, Gopal Gupta, and Thomas D. Hite. Web Service Dis-
covery and Composition using USDL. In CEC/EEE’06 [2967], pages 430–432, 2006.
doi: 10.1109/CEC-EEE.2006.95. INSPEC Accession Number: 9189340. See also [318, 1570].
1570. Srividya Kona, Ajay Bansal, Gopal Gupta, and Thomas D. Hite. Semantics-based
Web Service Composition Engine. In CEC/EEE’07 [1344], pages 521–524, 2007.
doi: 10.1109/CEC-EEE.2007.87. Fully available at http://www.utd.edu/˜sxk038200/
research/wsc07.pdf [accessed 2007-10-25]. INSPEC Accession Number: 9868638. See also
[319, 1569].
1571. Srividya Kona, Ajay Bansal, M. Brian Blake, Steffen Bleul, and Thomas Weise. WSC-
2009: A Quality of Service-Oriented Web Services Challenge. In CEC’09 [1245], pages
487–490, 2009. doi: 10.1109/CEC.2009.80. Fully available at http://www.it-weise.de/
documents/files/KBBSW2009WAQOSOWSC.pdf. INSPEC Accession Number: 10839134.
Ei ID: 20094712467206.
1572. Andreas König, Mario Köppen, Ajith Abraham, Christian Igel, and Nikola Kasabov,
editors. Proceedings of the 7th International Conference on Hybrid Intelligent
Systems (HIS’07), September 17–19, 2007, Fraunhofer Center FhG ITWM/FhG
IESE: Kaiserslautern, Germany. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-2946-1. OCLC: 213890399 and 314017946. Library of Congress Con-
trol Number (LCCN): 2007936727. Order No.: P2946. Product Number E2946.
1573. Anna V. Kononova, Derek B. Ingham, and Mohamed Pourkashanian. Simple Scheduled
Memetic Algorithm for Inverse Problems in Higher Dimensions: Application to Chemical Ki-
netics. In CEC’08 [1889], pages 3905–3912, 2008. doi: 10.1109/CEC.2008.4631328. INSPEC
Accession Number: 10251020.
1574. Andreas Konstantinidis, Qingfu Zhang, and Kun Yang. A Subproblem-dependent Heuristic in
MOEA/D for the Deployment and Power Assignment Problem in Wireless Sensor Network. In
CEC’09 [1350], pages 2740–2747, 2009. doi: 10.1109/CEC.2009.4983286. INSPEC Accession
Number: 10688881.
1575. Erricos John Kontoghiorghes, Berç Rustem, and Peter Winker, editors. Computational
Methods in Financial Engineering – Essays in Honour of Manfred Gilli. Springer-Verlag:
Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-77958-2. Library of Congress Control Num-
ber (LCCN): 2008921042.
1576. Mario Köppen and Kaori Yoshida. Substitute Distance Assignments in NSGA-II for Han-
dling Many-Objective Optimization Problems. In EMO’07 [2061], pages 727–741, 2007.
doi: 10.1007/978-3-540-70928-2_55.
1577. Mario Köppen and Kaori Yoshida. Visualization of Pareto-Sets in Evolutionary Multi-
Objective Optimization. In HIS’07 [1572], pages 156–161, 2007. doi: 10.1109/HIS.2007.62.
INSPEC Accession Number: 9877140.
1578. Mario Köppen, David H. Wolpert, and William G. Macready. Remarks on a Recent Pa-
per on the “No Free Lunch” Theorems. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 5(3):295–296, June 2001, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.930318. Fully available at http://ti.arc.nasa.gov/people/dhw/
papers/76.ps and http://www.no-free-lunch.org/KoWM01.pdf [accessed 2008-03-28].
1579. Proceedings of the 8th International Symposium on Advanced Intelligent Systems (ISIS’07),
September 5–8, 2007, Kensington Hotel: Sokcho, Korea. Korean Fuzzy Logic and Intelligent
System Society (KFIS).
1580. Peter Korošec and Jurij Šilc. Real-Parameter Optimization Using Stigmergy. In
BIOMA’06 [919], pages 73–84, 2006. Fully available at http://csd.ijs.si/silc/
articles/BIOMA06DASA.pdf [accessed 2007-08-05].
1581. Witold Kosiński, editor. Advances in Evolutionary Algorithms. InTech: Winchester,
Hampshire, UK. Fully available at http://www.intechopen.com/books/show/title/
advances_in_evolutionary_algorithms [accessed 2011-11-21].
1582. Juhani Koski. Defectiveness of Weighting Method in Multicriterion Optimization of Struc-
tures. Communications in Applied Numerical Methods (CANM), 1(6):333–337, Novem-
ber 1985, Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1002/cnm.1630010613.
1583. Alex Kosorukoff. Human-based Genetic Algorithm. In SMC’01 [1366], pages 3464–3469,
volume 5, 2001. doi: 10.1109/ICSMC.2001.972056. See also [1584].
1584. Alex Kosorukoff. Human-based Genetic Algorithm. IlliGAL Report 2001004, Illinois Genetic
Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General
Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA,
January 2001. CiteSeerx : 10.1.1.28.6133. See also [1583].
1585. Yannis Kotidis and Nick Roussopoulos. A Case for Dynamic View Management. ACM Trans-
actions on Database Systems (TODS), 26(4):388–423, December 2001, Association for Com-
puting Machinery (ACM): New York, NY, USA. doi: 10.1145/503099.503100. Fully available
at http://www.db-net.aueb.gr/kotidis/Publications/TODS.pdf [accessed 2011-04-13].
CiteSeerx : 10.1.1.21.1377.
1586. Petros Koumoutsakos, J. Freund, and D. Parekh. Evolution Strategies for Parameter Opti-
mization in Jet Flow Control. In Proceedings of the 1998 Summer Program [1925], 1998. Fully
available at http://ctr.stanford.edu/Summer98/koumoutsakos.pdf [accessed 2009-06-16].
1587. Tim Kovacs. XCS Classifier System Reliably Evolves Accurate, Complete, and Minimal
Representations for Boolean Functions. In WSC2 [535], pages 59–68, 1997. Fully available
at http://www.cs.bris.ac.uk/Publications/Papers/1000239.pdf [accessed 2007-08-18].
CiteSeerx : 10.1.1.38.4522 and 10.1.1.58.5539.
1588. Tim Kovacs. Two Views of Classifier Systems. In IWLCS’01 [1679], pages 74–87,
2001. doi: 10.1007/3-540-48104-4_6. Fully available at http://www.cs.bris.ac.uk/
Publications/Papers/1000647.pdf [accessed 2007-09-11].
1589. Tim Kovacs, Xavier Llorà, Keiki Takadama, Pier Luca Lanzi, Wolfgang Stolzmann, and
Stewart W. Wilson, editors. Revised Selected Papers of the International Workshops on
Learning Classifier Systems (IWLCS 03-05), 2007, volume 4399/2007 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-71231-2. isbn: 3-540-71230-5.
OCLC: 612928449. Library of Congress Control Number (LCCN): 2007923322. See also
[27, 2280, 2578].
1590. John R. Koza. Non-Linear Genetic Algorithms for Solving Problems. United States Patent
4,935,877, United States Patent and Trademark Office (USPTO): Alexandria, VA, USA,
1988. Filed May 20, 1988. Issued June 19, 1990. Australian patent 611,350 issued September
21, 1991. Canadian patent 1,311,561 issued December 15, 1992.
1591. John R. Koza. Hierarchical Genetic Algorithms Operating on Populations of Com-
puter Programs. In IJCAI’89-I [2593], pages 768–774, 1989. Fully available
at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Koza89.html and http://
www.genetic-programming.com/jkpdf/ijcai1989.pdf [accessed 2010-08-19].
1592. John R. Koza. A Hierarchical Approach to Learning the Boolean Multi-
plexer Function. In FOGA’90 [2562], pages 171–191, 1990. Fully available
at http://www.genetic-programming.com/jkpdf/foga1990.pdf [accessed 2010-08-19].
CiteSeerx : 10.1.1.141.255.
1593. John R. Koza. Concept Formation and Decision Tree Induction using the Genetic Pro-
gramming Paradigm. In PPSN I [2438], pages 124–128, 1990. Fully available at http://
citeseer.ist.psu.edu/61578.html [accessed 2008-05-29].
1594. John R. Koza. Evolution and Co-Evolution of Computer Programs to Control
Independent-Acting Agents. In SAB’90 [1876], pages 366–375, 1990. Fully avail-
able at http://www.genetic-programming.com/jkpdf/sab1990.pdf [accessed 2008-10-19].
CiteSeerx : 10.1.1.140.7387.
1595. John R. Koza. Genetic Evolution and Co-Evolution of Computer Programs. In Artificial
Life II [1668], pages 603–629, 1990. Fully available at http://citeseer.ist.psu.
edu/177879.html, http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Koza_
geCP.html, and http://www.genetic-programming.com/jkpdf/alife1990.pdf
[accessed 2008-05-29]. CiteSeerx : 10.1.1.140.6634. Revised November 29, 1990.
1596. John R. Koza. Genetic Programming: A Paradigm for Genetically Breeding Populations
of Computer Programs to Solve Problems. Technical Report STAN-CS-90-1314, Stanford
University, Computer Science Department: Stanford, CA, USA, June 1990. Fully avail-
able at http://www.genetic-programming.com/jkpdf/tr1314.pdf [accessed 2010-08-19].
CiteSeerx : 10.1.1.4.9961. See also [2559].
1597. John R. Koza. Concept Formation and Decision Tree Induction using the Genetic Pro-
gramming Paradigm. In PPSN I [2438], pages 124–128, 1990. doi: 10.1007/BFb0029742.
CiteSeerx : 10.1.1.53.5800.
1598. John R. Koza. Non-Linear Genetic Algorithms for Solving Problems by Finding a Fit Com-
position of Functions. United States Patent 5,136,686, United States Patent and Trademark
Office (USPTO): Alexandria, VA, USA. Filed March 28, 1990, Issued August 4, 1992.
1599. John R. Koza. Genetic Programming II: Automatic Discovery of Reusable Programs, Com-
plex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, July 4, 1994.
isbn: 0-262-11189-6. Google Books ID: t7tpQgAACAAJ. OCLC: 30725207, 180673805,
264562231, 311925261, 493266030, and 634102150. Library of Congress Control
Number (LCCN): 94076375. GBV-Identification (PPN): 153763795, 186785119, and
308224353. LC Classification: QA76.6 .K693 1994.
1600. John R. Koza. A Response to the ML-95 Paper entitled “Hill Climbing beats Genetic Search
on a Boolean Circuit Synthesis of Koza’s”. self-published. Fully available at http://
www.genetic-programming.com/jktahoe24page.html [accessed 2009-06-16]. Version 1 dis-
tributed at the Twelfth International Machine Learning Conference (ML-95). Version 2 dis-
tributed at the Sixth International Conference on Genetic Algorithms (ICGA-95). Version 3
deposited at WWW site on May 26, 1996. See also [883, 1665, 2221].
1601. John R. Koza. Genetic Algorithms and Genetic Programming at Stanford. Stanford University
Bookstore, Stanford University: Stanford, CA, USA, Fall 2003. Book of Student Papers from
John Koza’s Course at Stanford on Genetic Algorithms and Genetic Programming. Computer
Science 426 (CS 426), Biomedical Informatics 226 (BMI 226).
1602. John R. Koza. Genetic Programming: On the Programming of Computers by
Means of Natural Selection, Bradford Books. MIT Press: Cambridge, MA, USA,
December 1992. isbn: 0-262-11170-5. Google Books ID: Bhtxo60BV0EC and
Wd8TAAAACAAJ. OCLC: 26263956, 245844858, 312492070, 473279968, 476715745,
and 610975642. Library of Congress Control Number (LCCN): 92025785. GBV-
Identification (PPN): 110450795, 152621075, 185924921, 191489522, 237675633,
246890320, 282081046, and 49354836X. LC Classification: QA76.6 .K695 1992. 1992
first edition, 1993 second edition.
1603. John R. Koza, editor. Late Breaking Papers at the First Annual Conference Genetic Pro-
gramming (GP’96 LBP), July 28–31, 1996, Stanford University: Stanford, CA, USA. Stanford
University Bookstore, Stanford University: Stanford, CA, USA. isbn: 0-18-201031-7. See
also [1609].
1604. John R. Koza, editor. Late Breaking Papers at the 1997 Genetic Programming Conference
(GP’97 LBP), July 13–16, 1997, Stanford University: Stanford, CA, USA. Stanford Univer-
sity Bookstore, Stanford University: Stanford, CA, USA. isbn: 0-18-206995-8. Google
Books ID: Vn-nAAAACAAJ and cFI7OAAACAAJ. OCLC: 38045123. See also [1611].
1605. John R. Koza, editor. Late Breaking Papers at the Genetic Programming 1998 Conference
(GP’98 LBP), June 22–25, 1998, University of Wisconsin: Madison, WI, USA. Stanford
University Bookstore, Stanford University: Stanford, CA, USA. See also [1612].
1606. John R. Koza and Eric V. Siegel, editors. Proceedings of 1995 AAAI Fall Symposium Series,
Genetic Programming Track, November 10–12, 1995, Massachusetts Institute of Technology
(MIT): Cambridge, MA, USA. The symposium proceedings appeared as Technical Report
FS-95-01.
1607. John R. Koza, David Andre, Forrest H. Bennett III, and Martin A. Keane. Evolution of a Low-
Distortion, Low-Bias 60 Decibel Op Amp with Good Frequency Generalization using Genetic
Programming. In GP’96 LBP [1603], pages 94–100, 1996. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/koza_1996_eldld60db.html and http://
www.genetic-programming.com/jkpdf/gp1996lbpamplifier.pdf [accessed 2009-06-16].
CiteSeerx : 10.1.1.50.6221. See also [271].
1608. John R. Koza, David Andre, Forrest H. Bennett III, and Martin A. Keane. Use of Au-
tomatically Defined Functions and Architecture-Altering Operations in Automated Cir-
cuit Synthesis Using Genetic Programming. In GP’96 [1609], pages 132–149, 1996.
CiteSeerx : 10.1.1.51.3933.
1609. John R. Koza, David Edward Goldberg, David B. Fogel, and Rick L. Riolo, editors. Pro-
ceedings of the First Annual Conference of Genetic Programming (GP’96), July 28–31, 1996,
Stanford University: Stanford, CA, USA, Complex Adaptive Systems, Bradford Books. MIT
Press: Cambridge, MA, USA. isbn: 0-262-61127-9. Google Books ID: ndy0QgAACAAJ.
OCLC: 35580073, 70655786, and 422034061. See also [1603].
1610. John R. Koza, Forrest H. Bennett III, Jeffrey L. Hutchings, Stephen L. Bade, Martin A.
Keane, and David Andre. Evolving Sorting Networks using Genetic Programming and the
Rapidly Reconfigurable Xilinx 6216 Field-Programmable Gate Array. In Conference Record
of the Thirty-First Asilomar Conference on Signals, Systems & Computers [893], pages 404–
410, volume 1, 1997. doi: 10.1109/ACSSC.1997.680275. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/koza_1997_ASILIMOAR.html [accessed 2008-12-24].
INSPEC Accession Number: 6020885.
1611. John R. Koza, Kalyanmoy Deb, Marco Dorigo, David B. Fogel, Max H. Garzon, Hitoshi
Iba, and Rick L. Riolo, editors. Proceedings of the Second Annual Conference on Genetic
Programming (GP’97), July 13–16, 1997, Stanford University: Stanford, CA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-558-60483-9. Google Books
ID: PFIZAQAAIAAJ. OCLC: 60136918. GBV-Identification (PPN): 235727091. See also
[1604].
1612. John R. Koza, Wolfgang Banzhaf, Kumar Chellapilla, Kalyanmoy Deb, Marco Dorigo,
David B. Fogel, Max H. Garzon, David Edward Goldberg, Hitoshi Iba, and Rick L. Ri-
olo, editors. Proceedings of the Third Annual Genetic Programming Conference (GP’98),
July 22–25, 1998, University of Wisconsin: Madison, WI, USA. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. isbn: 1-55860-548-7. OCLC: 123292121. See also
[1605].
1613. John R. Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. The Design of Ana-
log Circuits by Means of Genetic Programming. In Evolutionary Design by Computers [272],
Chapter 16, pages 365–385. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA,
1999. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/koza_
1999_dacGP.html and http://www.genetic-programming.com/jkpdf/edc1999.
pdf [accessed 2009-06-16].
1614. John R. Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. Automatic
Design of Analog Electrical Circuits using Genetic Programming. In Hugh Cartwright, editor,
Intelligent Data Analysis in Science [503], Chapter 8, pages 172–200. Oxford University Press,
Inc.: New York, NY, USA, 2000. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/koza_2000_idas.html [accessed 2009-06-16].
1615. John R. Koza, Martin A. Keane, Matthew J. Streeter, William Mydlowec, Jessen Yu, and
Guido Lanza. Genetic Programming IV: Routine Human-Competitive Machine Intelligence,
volume 5 in Genetic Programming Series. Springer Science+Business Media, Inc.: New York,
NY, USA, 2003. isbn: 0-387-25067-0, 0-387-26417-5, 1402074468, and 6610611831.
Google Books ID: YQxWzAEnINIC and vMaVhoI-hVUC. OCLC: 51817183, 60594679,
249361385, 255445558, 318293809, and 403761573.
1616. Dexter C. Kozen. The Design and Analysis of Algorithms, Texts and Monographs in Com-
puter Science. Springer-Verlag GmbH: Berlin, Germany, 1992. isbn: 0-387-97687-6 and
3-540-97687-6. Google Books ID: L AMnf9UF9QC. OCLC: 26014868, 318320484,
475812425, and 610835985. Library of Congress Control Number (LCCN): 91036759.
GBV-Identification (PPN): 018981704 and 118353721. LC Classification: QA76.9.A43
K69 1991.
1617. Oliver Kramer. Self-Adaptive Heuristics for Evolutionary Computation, volume 147 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, July 2008.
doi: 10.1007/978-3-540-69281-2. isbn: 3-540-69280-0. Google Books ID: f4QwC AJEjwC.
OCLC: 233933242. Library of Congress Control Number (LCCN): 2008928449. LC Clas-
sification: QA76.618 .K73 2008.
1618. Natalio Krasnogor and James E. Smith. A Tutorial for Competent Memetic Algorithms:
Model, Taxonomy, and Design Issues. IEEE Transactions on Evolutionary Computa-
tion (IEEE-EC), 9(5):474–488, October 2005, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/TEVC.2005.850260. Fully available at http://www.cs.nott.ac.uk/
˜nxk/PAPERS/IEEE-TEC-lastVersion.pdf [accessed 2008-03-28]. INSPEC Accession Num-
ber: 8608338.
1619. Natalio Krasnogor, Giuseppe Nicosia, Mario Pavone, and David Alejandro Pelta, editors.
Proceedings of the 2nd International Workshop Nature Inspired Cooperative Strategies for
Optimization (NICSO’07), November 8–10, 2007, Acireale, Catania, Sicily, Italy, volume
129/2008 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-540-78987-1. Library of Congress Control Number (LCCN): 2008924783.
1620. Natalio Krasnogor, Marı́a Belén Melián-Batista, José Andrés Moreno Pérez, J. Marcos
Moreno-Vega, and David Alejandro Pelta, editors. Proceedings of the 3rd International
Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO’08), Novem-
ber 12–14, 2008, Puerto de la Cruz, Tenerife, Spain, volume 236/2009 in Studies in Com-
putational Intelligence. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-642-03211-0.
isbn: 3-642-03210-9.
1621. Ramprasad S. Krishnamachari and Panos Y. Papalambros. Hierarchical Decomposition Syn-
thesis in Optimal Systems Design. Technical Report 95-16, University of Michigan, Depart-
ment of Mechanical Engineering and Applied Mechanics, Design Laboratory: Ann Arbor, MI,
USA, August 1995–September 1996. Fully available at http://hdl.handle.net/2027.
42/6077 [accessed 2009-10-25]. CiteSeerx : 10.1.1.25.9434. See also [1622].
1622. Ramprasad S. Krishnamachari and Panos Y. Papalambros. Hierarchical Decomposition
Synthesis in Optimal Systems Design. Journal of Mechanical Design, 119(4):448–457,
December 1997, American Society of Mechanical Engineers (ASME): New York, NY,
USA. doi: 10.1115/1.2826388. Fully available at http://ode.engin.umich.edu/
publications/PapalambrosPapers/1996/102.pdf [accessed 2009-10-25]. See also [1621].
1623. David Kristoffersson. A Software Framework for Genetic Programming. Bachelor’s
thesis, Luleå University of Technology (LTU), Skellefteå Campus: Skellefteå, Swe-
den, Spring 2005, Johann Nordlander and Daniel Gjörwell, Advisors. Fully avail-
able at http://epubl.ltu.se/1404-5494/2005/28/LTU-HIP-EX-0528-SE.pdf [ac-
cessed 2010-11-21]. issn: 1404-5494. 2005:28 HIP. ISRN: LTU-HIP-EX–05/28–SE.
1624. Antonio Krüger and Rainer Malaka, editors. Proceedings of Artificial Intelligence in Mobile
System (AIMS’03), October 12, 2003, Westin Seattle: Seattle, WA, USA. Fully available
at http://w5.cs.uni-sb.de/˜krueger/aims2003/ [accessed 2009-09-05]. Held in conjunc-
tion with Ubicomp 2003.
1625. Joseph B. Kruskal, Jr. On the Shortest Spanning Subtree of a Graph and the Traveling
Salesman Problem. Proceedings of the American Mathematical Society, 7(1):48–50, Febru-
ary 1956, American Mathematical Society (AMS) Bookstore: Providence, RI, USA. JSTOR
Stable ID: 2033241.
1626. Christian Kubczak, Tiziana Margaria, and Bernhard Steffen. Semantic Web Services Chal-
lenge 2006 – An Approach to Mediation and Discovery with jABC and miAamic. In
Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automat-
ing Web Services Mediation, Choreography and Discovery [2690], 2006. Fully available
at http://sws-challenge.org/workshops/2006-Athens/papers/sws_stage_3.
pdf [accessed 2010-12-16]. See also [1627–1632, 1712, 1831, 1832].
1627. Christian Kubczak, Tiziana Margaria, and Bernhard Steffen. Semantic Web Services Chal-
lenge 2006 – The jABC Approach to Mediation and Choreography. In Second Workshop of the
Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation,
Choreography and Discovery [2689], 2006. Fully available at http://sws-challenge.
org/workshops/2006-Budva/papers/unido_sws_2006_draft.pdf [accessed 2010-12-14].
See also [1626, 1628–1632, 1712, 1831, 1832].
1628. Christian Kubczak, Ralf Nagel, Tiziana Margaria, and Bernhard Steffen. The jABC Ap-
proach to Mediation and Choreography. In First Workshop of the Semantic Web Ser-
vice Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and
Discovery [2688], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Stanford/papers/06.pdf. See also [1626, 1627, 1629–1632, 1712, 1831, 1832].
1629. Christian Kubczak, Tiziana Margaria, Bernhard Steffen, and Stefan Naujokat. Service-
Oriented Mediation with jETI/jABC: Verification and Export. In WI/IAT Work-
shops’07 [1731], pages 144–147, 2007. doi: 10.1109/WI-IATW.2007.27. INSPEC Accession
Number: 9894901. See also [1626–1628, 1630–1632, 1712, 1831, 1832].
1630. Christian Kubczak, Tiziana Margaria, Christian Winkler, and Bernhard Steffen. Se-
mantic Web Services Challenge 2007 – An Approach to Discovery with miAam-
ics and jABC. In KWEB SWS Challenge Workshop [2450], 2007. Fully avail-
able at http://sws-challenge.org/workshops/2007-Innsbruck/papers/2-sws_
stage_4_disc.pdf [accessed 2010-12-16]. See also [1626–1629, 1631, 1632, 1712, 1831, 1832].
1631. Christian Kubczak, Christian Winkler, Tiziana Margaria, and Bernhard Steffen. An Ap-
proach to Discovery with miAamics and jABC. In WI/IAT Workshops’07 [1731], pages
157–160, 2007. doi: 10.1109/WI-IATW.2007.26. INSPEC Accession Number: 9894904. See
also [1626–1630, 1712, 1831, 1832, 2166].
1632. Christian Kubczak, Tiziana Margaria, Matthias Kaiser, Jens Lemcke, and Björn Knuth.
Abductive Synthesis of the Mediator Scenario with jABC and GEM. In EON-
SWSC’08 [1023], 2008. Fully available at http://sunsite.informatik.rwth-aachen.
de/Publications/CEUR-WS/Vol-359/Paper-5.pdf [accessed 2010-12-16]. See also [1626–
1628, 1630, 1631, 1712, 1831, 1832, 2166].
1633. Michal Kudelski and Andrzej Pacut. Ant Routing with Distributed Geographical Localiza-
tion of Knowledge in Ad-Hoc Networks. In EvoWorkshops’09 [1052], pages 111–116, 2009.
doi: 10.1007/978-3-642-01129-0_14.
1634. Ben Kuipers and Bonnie Lynn Webber, editors. Proceedings of the Fourteenth National
Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelli-
gence Conference, (AAAI’97, IAAI’97), July 27–31, 1997, Providence, RI, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51095-2. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai97.php and http://
www.aaai.org/Conferences/IAAI/iaai97.php [accessed 2007-09-06].
1635. Saku Kukkonen and Jouni A. Lampinen. Ranking-Dominance and Many-Objective Optimiza-
tion. In CEC’07 [1343], pages 3983–3990, 2007. doi: 10.1109/CEC.2007.4424990. INSPEC
Accession Number: 9889569.
1636. Anup Kumar, Rakesh M. Pathak, M. C. Gupta, and Yash P. Gupta. Genetic Algorithm
based Approach for Designing Computer Network Topologies. In CSC’93 [1654], pages 358–
365, 1993. doi: 10.1145/170791.170871.
1637. Sanjeev Kumar and Peter J. Bentley. Computational Embryology: Past, Present and Future.
In Advances in Evolutionary Computing – Theory and Applications [1049], pages 461–477.
Springer New York: New York, NY, USA, 2002. Fully available at http://www.cs.ucl.
ac.uk/staff/P.Bentley/KUBECH1.pdf [accessed 2010-08-06]. CiteSeerx : 10.1.1.63.1505.
1638. T. V. Vijay Kumar and Aloke Ghoshal. A Reduced Lattice Greedy Algo-
rithm for Selecting Materialized Views. In ICISTM’09 [2218], pages 6–18, 2009.
doi: 10.1007/978-3-642-00405-6_5. See also [1639].
1639. T. V. Vijay Kumar and Aloke Ghoshal. Greedy Selection of Materialized Views. International
Journal of Computer and Communication Technology (IJCCT), 1(1):47–58, 2009, Khandgiri,
Bhubaneswar, Orissa, India. Fully available at http://www.interscience.in/ijcct/
IJCCT_Paper5.pdf [accessed 2011-04-13]. See also [1638].
1640. Viktor M. Kureichik, Sergey P. Malyukov, Vladimir V. Kureichik, and Alexan-
der S. Malioukov. Genetic Algorithms for Applied CAD Problems, volume 212
in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2009.
doi: 10.1007/978-3-540-85281-0. isbn: 3-540-85280-8. Google Books ID: lqCoF45aYn0C.
1641. Věra Kůrková, Nigel C. Steele, Roman Neruda, and Miroslav Kárný, editors. Proceed-
ings of the 5th International Conference on Artificial Neural Networks and Genetic Algo-
rithms (ICANNGA’01), April 22–25, 2001, Lichtenstein Palace: Prague, Czech Republic.
isbn: 3-211-83651-9. OCLC: 47701381, 248270107, and 440721146.
1642. Frank Kursawe. A Variant of Evolution Strategies for Vector Optimization. In PPSN I [2438],
pages 193–197, 1990. doi: 10.1007/BFb002972. Fully available at http://eprints.kfupm.
edu.sa/22064/ and http://www.lania.mx/˜ccoello/kursawe.ps.gz [accessed
2009-06-22]. CiteSeerx: 10.1.1.47.8050.

1643. Selahattin Kuru, editor. Proceedings of the Second Turkish Symposium on Artificial In-
telligence and Neural Networks (Türk Yapay Zeka ve Yapay Sinir Ağları Sempozyumu)
(TAINN’93), June 24–25, 1993, Boğaziçi Üniversitesi: Istanbul, Turkey. OCLC: 283728082.
1644. Ulrich Küster and Birgitta König-Ries. Discovery and Mediation using the DIANE Ser-
vice Description. In Third Workshop of the Semantic Web Service Challenge 2006 – Chal-
lenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006.
Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/
SWS-Challenge-2006-Athens-Full.pdf [accessed 2010-12-16]. See also [1645, 1646, 1648–
1650].
1645. Ulrich Küster and Birgitta König-Ries. Service Discovery using DIANE Service De-
scriptions – A Solution to the SWS-Challenge Discovery Scenarios. In KWEB SWS
Challenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/
workshops/2007-Innsbruck/papers/1-Ulrich-2007.pdf [accessed 2010-12-16]. See also
[1644, 1646, 1648–1650].
1646. Ulrich Küster and Birgitta König-Ries. Semantic Mediation between Business Partners
– A SWS-Challenge Solution Using DIANE Service Descriptions. In WI/IAT Work-
shops’07 [1731], pages 139–143, 2007. doi: 10.1109/WI-IATW.2007.14. INSPEC Accession
Number: 9894900. See also [1644, 1645, 1648–1650].
1647. Ulrich Küster and Birgitta König-Ries. Semantic Service Discovery with DIANE Service
Descriptions. In WI/IAT Workshops’07 [1731], pages 152–156, 2007. doi: 10.1109/WI-I-
ATW.2007.87. INSPEC Accession Number: 9894903.
1648. Ulrich Küster, Christian Klein, and Birgitta König-Ries. Discovery and Mediation using the
DIANE Service Description. In First Workshop of the Semantic Web Service Challenge 2006
– Challenge on Automating Web Services Mediation, Choreography and Discovery [2688],
2006. Fully available at http://sws-challenge.org/workshops/2006-Stanford/
papers/07.pdf [accessed 2010-12-12]. See also [1644–1646, 1649, 1650].
1649. Ulrich Küster, Birgitta König-Ries, and Michael Klein. Discovery and Mediation us-
ing DIANE Service Descriptions. In Second Workshop of the Semantic Web Service
Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and
Discovery [2689], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Budva/papers/SWS-Challenge-2006-Budva.pdf [accessed 2010-12-14]. See also
[1644–1646, 1648, 1650].
1650. Ulrich Küster, Andrea Turati, Maciej Zaremba, Birgitta König-Ries, Dario Cerizza, Emanuele
Della Valle, Marco Brambilla, Stefano Ceri, Federico Michele Facca, and Christina Tziviskou.
Service Discovery with SWE-ET and DIANE – A Comparative Evaluation by Means
of Solutions to a Common Scenario. In ICEIS’07 [494], pages 430–437, volume 4,
2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/4-ICEIS2007_DIANE_CEFRIEL.pdf [accessed 2010-12-16]. See also [383–
386, 1644–1646, 1648, 1649].
1651. Ugur Kuter, Dana Nau, Marco Pistore, and Paolo Traverso. A Hierarchical Task-Network
Planner based on Symbolic Model Checking. In ICAPS’05 [315], pages 300–309, 2005. Fully
available at http://www.cs.umd.edu/˜nau/papers/kuter05hierarchical.pdf
[accessed 2011-01-11]. CiteSeerx: 10.1.1.81.669.

1652. Vladimir Kvasnicka, Martin Pelikan, and Jiri Pospichal. Hill Climbing with Learning
(An Abstraction of Genetic Algorithm). Neural Network World – International Journal
on Non-Standard Computing and Artificial Intelligence, 6:773–796, 1996, Academy of Sci-
ences of the Czech Republic, Institute of Computer Science: Prague, Czech Republic.
Fully available at http://130.203.133.121:8080/viewdoc/summary?doi=10.1.1.
56.4888 [accessed 2009-08-21]. CiteSeerx : 10.1.1.56.4888.
1653. Chung Min Kwan and C. S. Chang. Application of Evolutionary Algorithm on a Trans-
portation Scheduling Problem – The Mass Rapid Transit. In CEC’05 [641], pages 987–994,
volume 2, 2005. Fully available at http://www.lania.mx/˜ccoello/EMOO/kwan05.
pdf.gz [accessed 2007-08-27].
1654. Stan C. Kwasny and John F. Buck, editors. Proceedings of the 21st/1993 ACM Confer-
ence (CSC’93), February 16–18, 1993, Indianapolis, IN, USA. ACM
Press: New York, NY, USA. isbn: 0-89791-558-5. Google Books ID: CJq9PwAACAAJ and
fHpQAAAAMAAJ. OCLC: 28196251, 84641266, 221446969, 257836504,
258290585, and 311876190.

1655. Jeffrey C. Lagarias, James A. Reeds, Margaret H. Wright, and Paul E. Wright. Conver-
gence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal
on Optimization (SIOPT), 9(1):112–147, 1998, Society for Industrial and Applied Mathe-
matics (SIAM): Philadelphia, PA, USA. doi: 10.1137/S1052623496303470. Fully available
at http://www.aoe.vt.edu/˜cliff/aoe5244/nelder_mead_2.pdf [accessed 2008-06-14].
1656. Chih-Chung Lai, Chuan-Kang Ting, and Ren-Song Ko. An Effective Genetic Algorithm for
Improving Wireless Sensor Network Lifetime. In GECCO’07-I [2699], pages 2260–2260, 2007.
doi: 10.1145/1276958.1277395. Poster Session: Real-world applications. See also [1657].
1657. Chih-Chung Lai, Chuan-Kang Ting, and Ren-Song Ko. An Effective Genetic Algorithm
to Improve Wireless Sensor Network Lifetime for Large-Scale Surveillance Applications. In
CEC’07 [1343], pages 3531–3538, 2007. doi: 10.1109/CEC.2007.4424930. See also [1656].
1658. Timothy Lai. Discovery of Understandable Math Formulas Using Genetic Programming.
In Genetic Algorithms and Genetic Programming at Stanford [1601], pages 118–127. Stan-
ford University Bookstore, Stanford University: Stanford, CA, USA, 2003. Fully available
at http://www.genetic-programming.org/sp2003/Lai.pdf [accessed 2010-11-21].
1659. John E. Laird, editor. Proceedings of 5th International Conference on Machine Learning
(ICML’88), June 12–14, 1988, University of Michigan: Ann Arbor, MI, USA. Morgan Kauf-
mann: San Mateo, CA, USA. isbn: 0-934613-64-8. Google Books ID: FldQAAAAMAAJ.
OCLC: 18017017, 82090777, 260147544, 441149879, 471061365, 475831474,
476283025, 499414195, and 610512831. GBV-Identification (PPN): 04232307X.
1660. John E. Laird, editor. Proceedings of the 5th International Workshop on Machine Learning
(ML’88), June 12–14, 1988, Ann Arbor, MI, USA. Morgan Kaufmann Publishers Inc.: San
Francisco, CA, USA. isbn: 0-934613-64-8.
1661. Gary B. Lamont and Eric M. Holloway. Military Network Security using Self Orga-
nized Multi-Agent Entangled Hierarchies. In GECCO’09-II [2343], pages 2559–2566, 2009.
doi: 10.1145/1570256.1570361.
1662. Jouni A. Lampinen and Ivan Zelinka. On Stagnation of the Differential Evolution Algorithm.
In MENDEL’00 [2106], pages 76–83, 2000. CiteSeerx : 10.1.1.35.7932.
1663. Ailsa H. Land and A. G. Doig. An Automatic Method of Solving Discrete Programming
Problems. Econometrica – Journal of the Econometric Society, 28(3):497–520, July 1960,
Wiley Interscience: Chichester, West Sussex, UK and Econometric Society. Fully available
at http://jmvidal.cse.sc.edu/library/land60a.pdf [accessed 2009-06-29].
1664. Edmund Landau. Handbuch der Lehre von der Verteilung der Primzahlen. B. G. Teub-
ner: Leipzig, Germany and AMS Chelsea Publishing: Providence, RI, USA, 1909–1953.
isbn: 0821826506. Google Books ID: 2ivxUiSogLgC, 81m4AAAAIAAJ, SFm4AAAAIAAJ,
fbyvOgAACAAJ, and u YAOwAACAAJ.
1665. Kevin J. Lang. Hill Climbing Beats Genetic Search on a Boolean Circuit Synthesis Problem
of Koza’s. In ICML’95 [2221], pages 340–343, 1995. Fully available at http://www.cs.
cornell.edu/Courses/cs472/2004fa/Materials/icml-GP.pdf and http://www.
genetic-programming.com/lang-ml95.ps [accessed 2009-06-16]. See also [1600].
1666. Christopher Gale Langton, editor. The Proceedings of an Interdisciplinary Workshop on
the Synthesis and Simulation of Living Systems (Artificial Life’87), September 21, 1987,
Oppenheimer Study Center: Los Alamos, NM, USA, volume 6 in Santa Fe Institute Studies
in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston,
MA, USA and Westview Press. isbn: 0201093464 and 0201093561. Google Books
ID: 5W05AgAACAAJ. OCLC: 257011165 and 260158809. Published in January 1989.
1667. Christopher Gale Langton, editor. Proceedings of the Workshop on Artificial Life (Arti-
ficial Life III), June 15–19, 1992, Santa Fe, NM, USA, volume 17 in Santa Fe Institute
Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.:
Boston, MA, USA and Westview Press. isbn: 0-201-62492-3 and 0-201-62494-X.
OCLC: 29548167, 475766246, 501682096, 624447801, and 633335938. GBV-
Identification (PPN): 181621835. Published in August 1993.
1668. Christopher Gale Langton, Charles E. Taylor, J. Doyne Farmer, and Steen Rasmussen, ed-
itors. Proceedings of the Workshop on Artificial Life (Artificial Life II), February 1990,
volume X in Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Long-
man Publishing Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0201525704 and
0201525712. Google Books ID: 7WZAGQAACAAJ, 7mZAGQAACAAJ, and wAx7AAAACAAJ.
OCLC: 23733587, 177349139, 180517573, 223313171, 257010417, and 258496439.
1669. William B. Langdon. Mapping Non-conventional Extensions of Genetic Programming.
In UC’06 [472], pages 166–180, 2006. doi: 10.1007/11839132_14. Fully available
at http://www.cs.ucl.ac.uk/staff/wlangdon/ftp/papers/wbl_uc2002.pdf [ac-
cessed 2010-11-21].

1670. William B. Langdon. A SIMD Interpreter for Genetic Programming on GPU Graphics Cards.
Computer Science Technical Report CSM-470, University of Essex, Departments of Mathe-
matical and Biological Sciences: Wivenhoe Park, Colchester, Essex, UK, July 3, 2007. Fully
available at http://cswww.essex.ac.uk/technical-reports/2007/csm_470.pdf
[accessed 2008-12-24]. issn: 1744-8050. See also [1671].

1671. William B. Langdon and Wolfgang Banzhaf. A SIMD Interpreter for Genetic Pro-
gramming on GPU Graphics Cards. In EuroGP’08 [2080], pages 73–85, 2008.
doi: 10.1007/978-3-540-78671-9_7. Fully available at http://www.cs.bham.ac.uk/
˜wbl/biblio/gp-html/langdon_2008_eurogp.html and http://www.cs.ucl.ac.
uk/staff/W.Langdon/ftp/papers/langdon_2008_eurogp.pdf [accessed 2009-08-04]. See
also [1670].
1672. William B. Langdon and Riccardo Poli. Fitness Causes Bloat: Mutation. In Eu-
roGP’98 [210], pages 37–48, 1998. Fully available at ftp://ftp.cwi.nl/pub/W.
B.Langdon/papers/WBL.euro98_bloatm.ps.gz and http://citeseer.ist.psu.
edu/langdon98fitness.html [accessed 2007-09-09]. CiteSeerx : 10.1.1.40.8288.
1673. William B. Langdon and Riccardo Poli. Foundations of Genetic Programming. Springer-
Verlag GmbH: Berlin, Germany, January 2002. isbn: 3-540-42451-2. Partly avail-
able at http://www.cs.ucl.ac.uk/staff/W.Langdon/FOGP/ [accessed 2008-02-15]. Google
Books ID: PZli0s95Jd0C. OCLC: 47717957, 248023954, and 491544616. Library of
Congress Control Number (LCCN): 2001049394. GBV-Identification (PPN): 334151872.
LC Classification: QA76.623 .L35 2002.
1674. William B. Langdon, Terence Soule, Riccardo Poli, and James A. Foster. The Evolution
of Size and Shape. In Advances in Genetic Programming [2569], Chapter 8, pages 163–190.
MIT Press: Cambridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/
˜wbl/aigp3/ch08.pdf and http://www.cs.bham.ac.uk/˜wbl/aigp3/ch08.ps.gz
[accessed 2011-11-23]. CiteSeerx: 10.1.1.59.1581.

1675. William B. Langdon, Erick Cantú-Paz, Keith E. Mathias, Rajkumar Roy, David Davis, Ric-
cardo Poli, Hari Balakrishnan, Vasant Honavar, Günter Rudolph, Joachim Wegener, Larry
Bull, Mitchell A. Potter, Alan C. Schultz, Julian Francis Miller, Edmund K. Burke, and
Natasa Jonoska, editors. Proceedings of the Genetic and Evolutionary Computation Confer-
ence (GECCO’02), July 9–13, 2002, Roosevelt Hotel: New York, NY, USA. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-878-8, 1-55860-881-8,
and 1-55860-903-2. Google Books ID: 7YZQAAAAMAAJ, N3kCAAAACAAJ, QDq-IgAACAAJ,
W28dHQAACAAJ, and tBThPAAACAAJ. A joint meeting of the eleventh International Con-
ference on Genetic Algorithms (ICGA-2002) and the seventh Annual Genetic Programming
Conference (GP-2002). See also [224, 478, 1790].
1676. Pat Langley, editor. Proceedings of the 17th International Conference on Machine Learn-
ing (ICML’00), June 29–July 2, 2000, Stanford University: Stanford, CA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-707-2.
1677. Pier Luca Lanzi and Rick L. Riolo. A Roadmap to the Last Decade of Learning
Classifier System Research (From 1989 to 1999). In IWLCS’00 [1678], page 33, 2000.
doi: 10.1007/3-540-45027-0_2.
1678. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Advances in Learning
Classifier Systems, Revised Papers of the Third International Workshop on Learning Classi-
fier Systems (IWLCS’00), September 15–16, 2000, Paris, France, volume 1996/2001 in Lec-
ture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-42437-7. OCLC: 441809319. co-
den: LNCSD9. A joint workshop of the sixth International Conference on Simulation of
Adaptive Behaviour (SAB 2000) and the sixth International Conference on Parallel Problem
Solving from Nature (PPSN VI), published in 2001.
1679. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Advances in Learn-
ing Classifier Systems, Revised Papers of the Fourth International Workshop on Learn-
ing Classifier Systems (IWLCS’01), July 7–8, 2001, San Francisco, CA, USA, volume
2321/2002 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48104-4.
isbn: 3-540-43793-2. Google Books ID: LX6ky1cIgXAC. Held during GECCO-2001, pub-
lished in 2002. See also [2570].
1680. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Revised Papers of
the 5th International Workshop on Learning Classifier Systems (IWLCS’02), September 7–8,
2002, Granada, Spain, volume 2661/2003 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-20544-6. Published in 2003.
1681. Pier Luca Lanzi, Daniele Loiacono, Stewart W. Wilson, and David Edward Goldberg. Gen-
eralization in the XCSF Classifier System: Analysis, Improvement, and Extension. Evo-
lutionary Computation, 15(2):133–168, Summer 2007, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.2007.15.2.133. CiteSeerx : 10.1.1.114.7870. PubMed ID: 17535137.
1682. Patrick LaRoche and A. Nur Zincir-Heywood. 802.11 Network Intrusion Detec-
tion Using Genetic Programming. In GECCO’05 WS [2339], pages 170–171,
2005. doi: 10.1145/1102256.1102295. Fully available at http://larocheonline.ca/
˜plaroche/papers/gecco-laroche-heywood.pdf [accessed 2008-06-16].
1683. Pedro Larrañaga and José Antonio Lozano, editors. Estimation of Distribution Algorithms
– A New Tool for Evolutionary Computation, volume 2 in Genetic and Evolutionary Com-
putation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA,
USA, 2001. isbn: 0-7923-7466-5. Google Books ID: kyU74gxp1rsC and o0llxS4u93wC.
OCLC: 47996547.
1684. Christian W. G. Lasarczyk and Wolfgang Banzhaf. An Algorithmic Chemistry for Genetic
Programming. In EuroGP’05 [1518], pages 1–12, 2005. doi: 10.1007/978-3-540-31989-4_1.
Fully available at http://ls11-www.cs.uni-dortmund.de/downloads/papers/
LaBa05.pdf and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/eurogp_
LasarczykB05.html [accessed 2010-08-01]. CiteSeerx : 10.1.1.131.4314.
1685. Christian W. G. Lasarczyk and Wolfgang Banzhaf. Total Synthesis of Algo-
rithmic Chemistries. In GECCO’05 [304], pages 1635–1640, volume 2, 2005.
doi: 10.1145/1068009.1068285. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gecco2005/docs/p1635.pdf [accessed 2009-08-02].
1686. Jörg Lässig and Karl Heinz Hoffmann. Threshold-selecting Strategy for Best Possible
Ground State Detection with Genetic Algorithms. Physical Review E, 79(4):046702–046702–
8, April 2009, American Physical Society: College Park, MD, USA. doi: 10.1103/Phys-
RevE.79.046702. See also [1687].
1687. Jörg Lässig, Karl Heinz Hoffmann, and Mihaela Enăchescu. Threshold Selecting: Best
Possible Probability Distribution for Crossover Selection in Genetic Algorithms. In
GECCO’08 [1519], pages 2181–2185, 2008. doi: 10.1145/1388969.1389044. See also [1686].
1688. Michael Lässig and Angelo Valleriani, editors. Biological Evolution and Statistical Physics,
volume 585/2002 in Lecture Notes in Physics. Springer-Verlag: Berlin/Heidelberg, 2002.
doi: 10.1007/3-540-45692-9. isbn: 3-540-43188-8. Google Books ID: b7SL7kc5N7oC.
issn: 0075-8450.
1689. Antonio LaTorre, José María Peña, Santiago Muelas, and Manuel Zaforas. Hybrid Evolu-
tionary Algorithms for Large Scale Continuous Problems. In GECCO’09-I [2342], pages
1863–1864, 2009. doi: 10.1145/1569901.1570205.
1690. Hui Keng Lau, Jonathan Timmis, and Ian Bate. Anomaly Detection Inspired by
Immune Network Theory: A Proposal. In CEC’09 [1350], pages 3045–3051, 2009.
doi: 10.1109/CEC.2009.4983328. Fully available at ftp://ftp.cs.york.ac.uk/papers/
rtspapers/R:Lau:2009.pdf [accessed 2009-09-05]. INSPEC Accession Number: 10688923.
1691. Marco Laumanns, Eckart Zitzler, and Lothar Thiele. A Unified Model for Multi-Objective
Evolutionary Algorithms with Elitism. In CEC’00 [1333], pages 46–53, volume 1, 2000.
doi: 10.1109/CEC.2000.870274. Fully available at http://www.lania.mx/˜ccoello/
EMOO/laumanns00.ps.gz [accessed 2010-08-01]. CiteSeerx : 10.1.1.36.4389. INSPEC Ac-
cession Number: 6734602.
1692. Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. On the Convergence
and Diversity-Preservation Properties of Multi-Objective Evolutionary Algorithms. TIK-
Report 108, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute
of Technology (ETH) Zürich: Zürich, Switzerland,
June 13, 2001. Fully available at http://e-collection.ethbib.ethz.ch/view/eth:
24717 [accessed 2009-07-09]. CiteSeerx : 10.1.1.28.6480.
1693. Gregory F. Lawler. Introduction to Stochastic Processes. Chapman & Hall: London, UK and
CRC Press, Inc.: Boca Raton, FL, USA, 2nd edition, July 1995. isbn: 0412995115 and
158488651X. Google Books ID: 1L8uPQAACAAJ, LATJdzBlajUC, and ypD8ZFqdPlUC.
OCLC: 31288133, 64084881, and 266086758.
1694. Eugène L. Lawler, Jan Karel Lenstra, A. H. G. Rinnooy-Kan, and David B. Shmoys.
The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley-
Interscience Series in Discrete Mathematics, Estimation, Simulation, and Control – Wiley-
Interscience Series in Discrete Mathematics and Optimization. Wiley Interscience: Chich-
ester, West Sussex, UK, September 1985. isbn: 0-471-90413-9. OCLC: 11756468,
260186267, 468006920, 488801427, 611882247, and 631997749. Library of
Congress Control Number (LCCN): 85003158. GBV-Identification (PPN): 011848790,
02528360X, 042459834, 042637112, 043364950, 078586313, 156527677, 175957169,
and 190388226. LC Classification: QA164 .T73 1985.
1695. Mark Lawrence and Colin Wilsdon, editors. Operational Research Tutorial Papers – Annual
Conference of the Operational Research Society, 1995, Canterbury, Kent, UK. Operations Re-
search Society: Birmingham, UK, Hoskyns Cap Gemini Sogeti, and Stockton Press: Hamp-
shire, UK. isbn: 0-903440-15-6. Google Books ID: pLkmAQAACAAJ. OCLC: 222984071
and 249133773.
1696. Michael Lawrence. Multiobjective Genetic Algorithms for Materialized View Se-
lection in OLAP Data Warehouses. In GECCO’06 [1516], pages 669–706, 2006.
doi: 10.1145/1143997.1144120.
1697. Steve Lawrence and C. Lee Giles. Overfitting and Neural Networks: Conjugate Gradient
and Backpropagation. In IJCNN’00 [82], pages 1114–1119, volume 1, 2000. doi: 10.1109/I-
JCNN.2000.857823. CiteSeerx : 10.1.1.27.9735. INSPEC Accession Number: 6696828.
1698. Antonio Lazcano and Stanley L. Miller. How long did it take for Life to begin and evolve
to Cyanobacteria? Journal of Molecular Evolution, 39(6):546–554, December 1994, Springer
New York: New York, NY, USA. doi: 10.1007/BF00160399. PubMed ID: 11536653.
1699. A. C. G. Verdugo Lazo and P. N. Rathie. On the Entropy of Continuous Probability Dis-
tributions. IEEE Transactions on Information Theory (TIT), 24(1):120–122, January 1978,
Institute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA.
1700. Lucien Marie Le Cam and Jerzy Neyman, editors. Proceedings of the Fifth Berkeley Sympo-
sium on Mathematical Statistics and Probability (June 21-July 18, 1965 and December 27,
1965-January 7, 1966), 1967, University of California, Statistical Laboratory: Berkeley, CA,
USA. University of California Press: Berkeley, CA, USA. Fully available at http://
projecteuclid.org/euclid.bsmsp/1200513453, http://projecteuclid.
org/euclid.bsmsp/1200513615, http://projecteuclid.org/euclid.bsmsp/
1200513981, and http://ucblibrary4.berkeley.edu:8088/xdlib//math/ucb/
mets/cumath_9_1_00019737.xml [accessed 2009-10-20]. Google Books ID: IX9WAAAAMAAJ,
LVI4AAAAIAAJ, NA-3QQAACAAJ, and f7xWAAAAMAAJ. OCLC: 5772953, 25234711,
25234721, 25234803, 25234809, 36997875, 71753794, 221157481, 223463967, and
300992815. issn: 0097-0433.
1701. Joshua Lederberg and Alexa T. McCray. ’Ome Sweet ’Omics – A Genealogical Treasury of
Words. The Scientist – Magazine of the Life Sciences, 15(7):8, April 2001. Fully available
at http://lhncbc.nlm.nih.gov/lhc/docs/published/2001/pub2001047.pdf [ac-
cessed 2009-06-28].

1702. Chang-Yong Lee and Xin Yao. Evolutionary Programming using the Mutations based
on the Lévy Probability Distribution. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 8(1):1–13, February 2004, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.816583. CiteSeerx : 10.1.1.6.2145. INSPEC Accession Num-
ber: 8003613.
1703. Jack Yiu-Bun Lee and P. C. Wong. The Effect of Function Noise on GP Efficiency. In
Progress in Evolutionary Computation, AI’93 (Melbourne, Victoria, Australia, 1993-11-16)
and AI’94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on Evolutionary Compu-
tation, Selected Papers [3023], pages 1–16, 1993–1994. doi: 10.1007/3-540-60154-6_43.
1704. Jongsoo Lee and Prabhat Hajela. Parallel Genetic Algorithm Implementation in Multidisci-
plinary Rotor Blade Design. Technical Report ADA305771, Renssalaer Polytechnic Institute,
Aeronautical Engineering and Mechanics Department, Mechanical Engineering: Troy, NY,
USA, Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, November 1995.
Fully available at http://handle.dtic.mil/100.2/ADA305771 [accessed 2009-06-17]. See
also [1705].
1705. Jongsoo Lee and Prabhat Hajela. Parallel Genetic Algorithm Implementation in Multi-
disciplinary Rotor Blade Design. Journal of Aircraft, 33(5):962–966, September–October
1996, American Institute of Aeronautics and Astronautics (AIAA): Reston, VA, USA.
doi: 10.2514/3.47042. See also [1704].
1706. Minsoo Lee and Joachim Hammer. Speeding up Materialized View Selection in Data Ware-
house using a Randomized Algorithm. International Journal of Cooperative Information
Systems (IJCIS), 10(3):327–353, September 2001, World Scientific Publishing Co.: Singa-
pore. doi: 10.1142/S0218843001000370. Fully available at http://www.cise.ufl.edu/
˜jhammer/publications/IJCIS-DW2001/minsoolee.doc [accessed 2011-04-09].
1707. S. Lee, S. Soak, K. Kim, H. Park, and M. Jeon. Statistical Properties Analysis of Real
World Tournament Selection in Genetic Algorithms. Applied Intelligence – The Interna-
tional Journal of Artificial Intelligence, Neural Networks, and Complex Problem-Solving
Technologies, 28(2):195–205, April 2008, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1007/s10489-007-0062-2.
1708. Shane Legg, Marcus Hutter, and Akshat Kumar. Tournament versus Fitness Uniform Selec-
tion. Technical Report IDSIA-04-04, Dalle Molle Institute for Artificial Intelligence (IDSIA),
University of Lugano, Faculty of Informatics / University of Applied Sciences of Southern
Switzerland (SUPSI), Department of Innovative Technologies: Manno-Lugano, Switzerland,
March 4, 2004. Fully available at http://www.idsia.ch/idsiareport/IDSIA-04-04.
pdf [accessed 2011-04-29]. See also [1313, 1709].
1709. Shane Legg, Marcus Hutter, and Akshat Kumar. Tournament versus Fitness Uniform Selec-
tion. In CEC’04 [1369], pages 2144–2151, 2004. doi: 10.1109/CEC.2004.1331162. Fully avail-
able at http://www.hutter1.net/ai/fussexp.pdf, http://www.hutter1.net/
ai/fussexp.ps, and http://www.hutter1.net/ai/fussexp.zip [accessed 2011-04-29].
CiteSeerx : 10.1.1.71.2663. arXiv ID: cs/0403038v1. INSPEC Accession Num-
ber: 8229180. See also [1312, 1313, 1708].
1710. Joel Lehman and Kenneth Owen Stanley. Exploiting Open-Endedness to Solve Prob-
lems Through the Search for Novelty. In Artificial Life XI [446], pages 329–336,
2008. Fully available at http://alifexi.alife.org/papers/ALIFExi_pp329-336.
pdf and http://eplex.cs.ucf.edu/papers/lehman_alife08.pdf [accessed 2011-03-01].
1711. Jingsheng Lei, JingTao Yao, and Qingfu Zhang, editors. Proceedings of the Third In-
ternational Conference on Advances in Natural Computation (ICNC’07), August 24–27,
2007, Hǎikǒu, Hǎinán, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
isbn: 0-7695-2875-9. Google Books ID: oAIEHwAACAAJ. OCLC: 166273050 and
181014549. Held jointly with the 4th International Conference on Fuzzy Systems and
Knowledge Discovery (FSKD 2007).
1712. Jens Lemcke, Matthias Kaiser, Christian Kubczak, Tiziana Margaria, and Björn Knuth. Ad-
vances in Solving the Mediator Scenario with jABC and jABC/GEM. In SWSC’08, Seman-
tic Web Services Challenge: Proceedings of the 2008 Workshops [2166, 2590], pages 89–102.
Stanford University, Computer Science Department, Stanford Logic Group: Stanford, CA,
USA, 2008. Fully available at http://logic.stanford.edu/reports/LG-2009-01.
pdf [accessed 2010-12-16]. Part of [2166]. See also [1626–1632, 1831, 1832].
1713. James G. Lennox. Aristotle’s Philosophy of Biology: Studies in the Origins of Life Science,
Cambridge Studies in Philosophy and Biology. Cambridge University Press: Cambridge,
UK, April 2001. isbn: 0521650275, 0521659760, and 978-0521650274. Google Books
ID: CqmXeBDkb6oC and IijTXV3_ZLMC. OCLC: 43694261 and 45592829. LC Classifica-
tion: QH331 .L528 2001.
1714. Henry Leung, editor. Proceedings of the Sixth IASTED International Conference on Artificial
Intelligence and Soft Computing (ASC’02), July 17–19, 2002, Banff, AB, Canada. ACTA
Press: Calgary, AB, Canada. isbn: 0-88986-346-6. Google Books ID: Zd3NPQAACAAJ.
OCLC: 52161283 and 249212159.
1715. Kwong Sak Leung, Kin Hong Lee, and Sin Man Cheang. Genetic Parallel Programming –
Evolving Linear Machine Codes on a Multiple-ALU processor. In ICAIET’02 [3004], pages
207–213, 2002.
1716. Joel R. Levin. What If There Were No More Bickering about Statistical Significance Tests?
Research in the Schools (RITS), 5(2):43–53, 1998, Tony Onwuegbuzie and John Slate, ed-
itors. Fully available at http://www.personal.psu.edu/users/d/m/dmr/sigtest/
6mspdf.pdf [accessed 2008-08-15].
1717. Andrew Lewis, Gerhard Weis, Marcus Randall, Amir Galehdar, and David Thiel. Optimising
Efficiency and Gain of Small Meander Line RFID Antennas using Ant Colony System. In
CEC’09 [1350], pages 1486–1492, 2009. doi: 10.1109/CEC.2009.4983118. INSPEC Accession
Number: 10688713.
1718. Peter R. Lewis, Paul Marrow, and Xin Yao. Evolutionary Market Agents and Heterogeneous
Service Providers: Achieving Desired Resource Allocations. In CEC’09 [1350], pages 904–910,
2009. doi: 10.1109/CEC.2009.4983041. INSPEC Accession Number: 10688568.
1719. Robert Michael Lewis, Virginia Joanne Torczon, and Michael W. Trosset. Direct Search Meth-
ods: Then and Now. Technical Report NASA/CR-2000-210125, NASA Langley Research
Center, Institute for Computer Applications in Science and Engineering (ICASE): Hampton,
VA, USA, May 2000. Fully available at http://www.cs.wm.edu/˜va/research/jcam.
pdf [accessed 2010-12-25]. CiteSeerx : 10.1.1.35.5338. ICASE Report No. 2000-26. See also
[1720].
1720. Robert Michael Lewis, Virginia Joanne Torczon, and Michael W. Trosset. Direct Search
Methods: Then and Now. Journal of Computational and Applied Mathematics, 124(1-
2):191–207, December 2000, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/S0377-0427(00)00423-4. See also [1719].
1721. Jin Li. FGP: A Genetic Programming Based Tool for Financial Forecasting. PhD thesis,
University of Essex: Wivenhoe Park, Colchester, Essex, UK, October 6, 2001. See also
[1722].
1722. Jin Li, Xiaoli Li, and Xin Yao. Cost-Sensitive Classification with Genetic Programming. In
CEC’05 [641], pages 2114–2121, 2005. Fully available at http://www.cs.bham.ac.uk/
˜xin/papers/LiLiYaoCEC05.pdf [accessed 2010-06-25]. CiteSeerx: 10.1.1.66.7637. See
also [1721].
1723. Leon Y. O. Li. Vehicle Routeing for Winter Gritting. PhD thesis, Lancaster University,
Department of Management Science: Lancaster, Lancashire, UK, 1992.
1724. Leon Y. O. Li and Richard W. Eglese. An Interactive Algorithm for Vehicle Routeing for
Winter Gritting. The Journal of the Operational Research Society (JORS), 47(2):217–228,
February 1996, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and
Operations Research Society: Birmingham, UK. JSTOR Stable ID: 2584343.
1725. Wei Li. Using Genetic Algorithm for Network Intrusion Detection. In Proceedings of
the United States Department of Energy Cyber Security Group 2004 Training Confer-
ence [1492], 2004. Fully available at http://www.security.cse.msstate.edu/docs/
Publications/wli/DOECSG2004.pdf [accessed 2009-05-11].
1726. Xiang Li and Vic Ciesielski. Using Loops in Genetic Programming for A Two Class Binary
Image Classification Problem. In AI’04 [2871], pages 898–909, 2004. Fully available
at http://goanna.cs.rmit.edu.au/˜vc/papers/aust-ai04.pdf [accessed 2010-11-20].
CiteSeerx : 10.1.1.87.3912.
1727. Xiaodong Li. Niching without Niching Parameters: Particle Swarm Optimization
Using a Ring Topology. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 14(1):150–169, February 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2009.2026270. Fully available at http://goanna.cs.rmit.edu.au/
˜xiaodong/publications/rpso-ieee-tec.pdf [accessed 2011-11-10]. INSPEC Accession
Number: 11088362.
1728. Xiaodong Li, Jürgen Branke, and Tim Blackwell. Particle Swarm with Speciation and
Adaptation in a Dynamic Environment. In GECCO’06 [1516], pages 51–58, 2006.
doi: 10.1145/1143997.1144005.
1729. Xiaodong Li, Michael Kirley, Mengjie Zhang, David Green, Vic Ciesielski, Hussein A.
Abbass, Zbigniew Michalewicz, Tim Hendtlass, Kalyanmoy Deb, Kay Chen Tan, Jürgen
Branke, and Yuhui Shi, editors. Proceedings of the Seventh International Conference on
Simulated Evolution And Learning (SEAL’08), December 7–10, 2008, Melbourne, VIC,
Australia, volume 5361/2008 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-89694-4. isbn: 3-540-89693-7. Library of Congress Control Number
(LCCN): 2008939923.
1730. Yong Li and Shihua Gong. Dynamic Ant Colony Optimisation for TSP. The International
Journal of Advanced Manufacturing Technology, 22(7-8):528–533, November 2003, Springer-
Verlag London Limited: London, UK. doi: 10.1007/s00170-002-1478-9.
1731. Yuefeng Li and Vijay V. Raghavan, editors. Proceedings of the 2007 IEEE/WIC/ACM In-
ternational Conferences on Web Intelligence and Intelligent Agent Technology Workshops
(WI/IAT Workshops’07), November 2–5, 2007, Silicon Valley, CA, USA. IEEE Com-
puter Society Press: Los Alamitos, CA, USA. isbn: 0-7695-3028-1. Google Books
ID: E3ALMwAACAAJ, TTBEPwAACAAJ, and wWCDRAAACAAJ. Library of Congress Control
Number (LCCN): 2007933794. Order No.: P3028.
1732. J. J. Liang, Thomas Philip Runarsson, Efrén Mezura-Montes, Maurice Clerc, Ponnuthu-
rai Nagaratnam Suganthan, Carlos Artemio Coello Coello, and Kalyanmoy Deb. Problem
Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-
Parameter Optimization. Technical Report, Nanyang Technological University (NTU): Sin-
gapore, September 18, 2006. Fully available at http://www.lania.mx/˜emezura/util/
files/tr_cec06.pdf [accessed 2009-10-26]. Partly available at http://www3.ntu.edu.sg/
home/EPNSugan/index_files/CEC-06/CEC06.htm [accessed 2009-10-26].
1733. Suiliong Liang, A. Nur Zincir-Heywood, and Malcom Ian Heywood. The Effect of Rout-
ing under Local Information using a Social Insect Metaphor. In CEC’02 [944], pages
1438–1443, 2002. Fully available at http://users.cs.dal.ca/˜mheywood/X-files/
Publications/01004454.pdf [accessed 2008-07-26]. See also [1734].
1734. Suiliong Liang, A. Nur Zincir-Heywood, and Malcom Ian Heywood. Adding More In-
telligence to the Network Routing Problem: AntNet and GA-Agents. Applied Soft
Computing, 6(3):244–257, March 2006, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2005.01.005. See also [1733].
1735. Weifa Liang, Hui Wang, and Maria E. Orlowska. Materialized View Selection under the
Maintenance Time Constraint. Data & Knowledge Engineering, 37(2):203–216, May 2001,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Sci-
entific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0169-023X(01)00007-6.
CiteSeerx : 10.1.1.90.9530.
1736. Pierre Liardet, Pierre Collet, Cyril Fonlupt, Evelyne Lutton, and Marc Schoenauer, editors.
Proceedings of the 6th International Conference on Artificial Evolution, Evolution Artificielle
(EA’03), October 27–30, 2003, Marseilles, France, volume 2936 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-21523-9. Published
in 2003.
1737. Ching-Fang Liaw. A Hybrid Genetic Algorithm for the Open Shop Scheduling Problem. Eu-
ropean Journal of Operational Research (EJOR), 124(1):28–42, July 1, 2000, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(99)00168-X.
1738. Vibeke Libby, editor. Proceedings of Data Structures and Target Classification, the SPIE’s In-
ternational Symposium on Optical Engineering and Photonics in Aerospace Sensing, April 1–
2, 1991, Orlando, FL, USA, volume 1470 in The Proceedings of SPIE. Society of Photographic
Instrumentation Engineers (SPIE): Bellingham, WA, USA. isbn: 0819405795. Google Books
ID: Ne1RAAAAMAAJ. OCLC: 24461715.
1739. Gunar E. Liepins and Michael D. Vose. Deceptiveness and Genetic Algorithm Dynamics. In
FOGA’90 [2562], pages 36–50, 1990. Fully available at http://www.osti.gov/bridge/
servlets/purl/6445602-CfqU6M/ [accessed 2007-11-05].
1740. Pedro Lima, editor. Robotic Soccer. I-Tech Education and Publishing: Vienna, Aus-
tria, 2007. isbn: 3-902613-07-6. Fully available at http://s.i-techonline.
com/Book/Robotic-Soccer/ISBN978-3-902613-21-9.html [accessed 2008-04-23]. Google
Books ID: 5lAuQwAACAAJ. OCLC: 441015574.
1741. JiGuan G. Lin. On Min-Norm and Min-Max Methods of Multi-Objective Optimiza-
tion. Mathematical Programming, 103(1):1–33, May 2005, Springer-Verlag: Berlin/Heidel-
berg. doi: 10.1007/s10107-003-0462-y.
1742. Charles X. Ling. Overfitting and Generalization in Learning Discrete Patterns. Neu-
rocomputing, 8(3):341–347, August 1995, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0925-2312(95)00050-G. CiteSeerx : 10.1.1.17.3198. Optimization and Com-
binatorics, Part I-III.
1743. Marc Lipsitch. Adaptation on Rugged Landscapes Generated by Iterated Local Interactions
of Neighboring Genes. In ICGA’91 [254], pages 128–135, 1991.
1744. Proceedings of the 15th Portuguese Conference on Artificial Intelligence (EPIA’11), Octo-
ber 10–13, 2011, Lisbon, Portugal. Partly available at http://epia2011.appia.pt/
[accessed 2011-04-24].
1745. Duixian Liu and Brian Gaucher, editors. 2006 IEEE International Workshop on Antenna
Technology: Small Antennas and Novel Metamaterials (iWAT’06), March 6–8, 2006,
Crowne Plaza Hotel: White Plains, NY, USA. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-9443-7. Google Books ID: IxvhAAAACAAJ. OCLC: 122521207,
289019234, 326736309, and 423939179. Catalogue no.: 06EX1191.
1746. Fang Liu and Le-Ping Lin. Unsupervised Anomaly Detection Based on an Evolutionary
Artificial Immune Network. In EvoWorkshops’05 [2340], pages 166–174, 2005.
1747. Pu Liu, Francis C. M. Lau, Michael J. Lewis, and Cho-li Wang. A New Asynchronous
Parallel Evolutionary Algorithm for Function Optimization. In PPSN VII [1870], pages
401–410, 2002. doi: 10.1007/3-540-45712-7_39.
1748. Tie-Yan Liu, Yiming Yang, Hao Wan, Hua-Jun Zeng, Zheng Chen, and Wei-Ying Ma. Sup-
port Vector Machines Classification with a Very Large-Scale Taxonomy. ACM SIGKDD Ex-
plorations Newsletter, 7(1):36–43, June 2005, Association for Computing Machinery (ACM):
New York, NY, USA. doi: 10.1145/1089815.1089821. CiteSeerx : 10.1.1.72.1506.
1749. Yongguo Liu, Kefei Chen, Xiaofeng Liao, and Wei Zhang. A Genetic Clustering
Method for Intrusion Detection. Pattern Recognition, 37(5):927–942, May 2004, El-
sevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.patcog.2003.09.011.
1750. Silvana Livramento, Arnaldo V. Moura, Flávio K. Miyazawa, Mário M. Harada, and Ed-
uardo Reck Miranda. A Genetic Algorithm for Telecommunication Network Design. In
EvoWorkshops’04 [2254], pages 140–149, 2004.
1751. Xavier Llorà and Kumara Sastry. Fast Rule Matching for Learning Classifier Systems via Vec-
tor Instructions. In GECCO’06 [1516], pages 1513–1520, 2006. doi: 10.1145/1143997.1144244.
See also [1752].
1752. Xavier Llorà and Kumara Sastry. Fast Rule Matching for Learning Classifier Systems via
Vector Instructions. IlliGAL Report 2006001, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, Univer-
sity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2006. Fully
available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2006001.pdf
[accessed 2007-09-12]. See also [1751].
1753. Fernando G. Lobo, Cláudio F. Lima, and Zbigniew Michalewicz, editors. Parameter
Setting in Evolutionary Algorithms, volume 54 in Studies in Computational Intelli-
gence. Springer-Verlag: Berlin/Heidelberg, March 2007. doi: 10.1007/978-3-540-69432-8.
isbn: 3-540-69431-5 and 3-540-69432-3. Google Books ID: WgMmGQAACAAJ,
vfjLsGBmi7kC, and voAuHwAACAAJ. Library of Congress Control Number
(LCCN): 2006939345.
1754. José Lobo, John H. Miller, and Walter Fontana. Neutrality in Technological Landscapes.
Working papers, Santa Fe Institute: Santa Fé, NM, USA, January 26, 2004. Fully avail-
able at http://fontana.med.harvard.edu/www/Documents/WF/Papers/tech.
neut.pdf, http://time.dufe.edu.cn/jingjiwencong/waiwenziliao/neut.
pdf, and http://zia.hss.cmu.edu/miller/papers/neut.pdf [accessed 2010-12-04].
CiteSeerx : 10.1.1.61.9296.
1755. Marco Locatelli. A Note on the Griewank Test Function. Journal of Global Opti-
mization, 25(2):169–174, February 2003, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1021956306041.
1756. Darrell F. Lochtefeld. Multi-Objectivization in Genetic Algorithms. PhD thesis, Wright
State University (WSU), School of Graduate Studies: Dayton, OH, USA, May 26, 2011,
Frank William Ciarallo, Supervisor, Raymond R. Hill, Yan Liu, W. Paul Murdock, and
Mateen M. Rizki, Committee members. Fully available at http://etd.ohiolink.edu/
send-pdf.cgi/Lochtefeld%20Darrell%20F.pdf?wright1308765665 [accessed 2011-12-04].
1757. Darrell F. Lochtefeld and Frank William Ciarallo. Deterministic Helper Objective Sequence
Applied to the Job-Shop Scheduling Problem. In GECCO’10 [30], pages 431–438, 2010.
doi: 10.1145/1830483.1830566.
1758. Darrell F. Lochtefeld and Frank William Ciarallo. Helper-Objective Optimization Strategies
for the Job-Shop Scheduling Problem. Applied Soft Computing, 11(6):4161–4174, Septem-
ber 2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2011.03.007.
1759. A. Geoff Lockett and Gerd Islei, editors. Proceedings of the 8th International Confer-
ence on Multiple Criteria Decision Making: Improving Decision Making in Organizations
(MCDM’88), August 21–26, 1988, University of Manchester, Manchester Business School:
Manchester, UK, Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 0387517952 and 3540517952. OCLC: 20672173, 230998532,
311213400, 613307351, and 634248012. Published in December 1989.
1760. Reinhard Lohmann. Structure Evolution and Incomplete Induction. Biological Cybernetics,
69(4):319–326, August 1993, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BF00203128.
1761. Reinhard Lohmann. Structure Evolution and Neural Systems. In Dynamic, Genetic, and
Chaotic Programming: The Sixth-Generation [2559], pages 395–411. Wiley Interscience:
Chichester, West Sussex, UK, 1992.
1762. Jason D. Lohn, editor. The 2003 NASA/DoD Conference on Evolvable Hardware (EH’03),
June 9–11, 2003, Chicago, IL, USA. IEEE Computer Society: Washington, DC, USA.
1763. Jason D. Lohn and Silvano P. Colombano. Automated Analog Circuit Synthesis using a
Linear Representation. In ICES’98 [2509], pages 125–133, 1999. Fully available at http://
ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16]. See also [1764].
1764. Jason D. Lohn and Silvano P. Colombano. A Circuit Representation Technique for Automated
Circuit Design. IEEE Transactions on Evolutionary Computation (IEEE-EC), 3(3):205–219,
September 1999, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.788491.
Fully available at http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2010-08-27].
CiteSeerx: 10.1.1.6.7507. INSPEC Accession Number: 6366632. See also [1763, 1765,
1766].
1765. Jason D. Lohn, Silvano P. Colombano, Garyl L. Haith, and Dimitris Stassinopoulos. A
Parallel Genetic Algorithm for Automated Electronic Circuit Design. In CAS’00 [2001], 2000.
Fully available at http://ic.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16].
See also [1764, 1766].
1766. Jason D. Lohn, Garyl L. Haith, Silvano P. Colombano, and Dimitris Stassinopoulos. To-
wards Evolving Electronic Circuits for Autonomous Space Applications. In Proceedings of
the IEEE Aerospace Conference [1322], 2000. doi: 10.1109/AERO.2000.878523. Fully avail-
able at http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16]. INSPEC
Accession Number: 6795690. See also [1764, 1765].
1767. Jason D. Lohn, William F. Kraus, and Derek Linden. Evolutionary Optimization of a Quadri-
filar Helical Antenna. In IEEE AP-S International Symposium and USNC/URSI National Ra-
dio Science Meeting [1336], 2002. Fully available at http://ti.arc.nasa.gov/people/
jlohn/Papers/aps2002.pdf [accessed 2009-06-17]. See also [1768, 1770].
1768. Jason D. Lohn, Derek Linden, Gregory S. Hornby, William F. Kraus, and Adán Rodrı́guez-
Arroyo. Evolutionary Design of an X-Band Antenna for NASA’s Space Technology 5 Mission.
In EH’03 [1762], pages 155–163, 2003. doi: 10.1109/EH.2003.1217660. See also [1767, 1770].
1769. Jason D. Lohn, Gregory S. Hornby, and Derek Linden. An Evolved Antenna for Deploy-
ment on NASA’s Space Technology 5 Mission. In GPTP’04 [2089], pages 301–315, 2004.
doi: 10.1007/b101112. Fully available at http://ic.arc.nasa.gov/people/hornby/
papers/lohn_gptp04.ps.gz [accessed 2007-08-17].
1770. Jason D. Lohn, Derek Linden, Gregory S. Hornby, William F. Kraus, Adán Rodrı́guez-Arroyo,
and Stephen E. Seufert. Evolutionary Design of an X-Band Antenna for NASA’s Space Tech-
nology 5 Mission. In 2004 IEEE Antennas and Propagation Society (A-P-S) International
Symposium and USNC/URSI National Radio Science Meeting [2121], pages 2313–2316, vol-
ume 3, 2004. doi: 10.1109/APS.2004.1331834. Fully available at http://ti.arc.nasa.
gov/projects/esg/publications/lohn_papers/aps2004.pdf [accessed 2009-06-17]. See
also [1767, 1768].
1771. Proceedings of The Annual Conference of the South Eastern Accounting Group (SEAG) 2003,
September 8, 2003, London Metropolitan University: London, UK.
1772. Heitor Silverio Lopes and Wagner R. Weinert. EGIPSYS: an Enhanced Gene Expression
Programming Approach for Symbolic Regression Problems. International Journal of Ap-
plied Mathematics and Computer Science (AMCS), 14(3), September 2004, University of
Zielona Góra: Zielona Góra, Poland and Lubuskie Scientific Society: Zielona Góra, Poland.
Fully available at http://matwbn.icm.edu.pl/ksiazki/amc/amc14/amc1437.pdf
and http://www.amcs.uz.zgora.pl/?action=get_file&file=AMCS_2004_14_3_
7.pdf [accessed 2009-09-08]. Special Issue: Evolutionary Computation.
1773. Heitor Silverio Lopes, Fabiano Luis de Sousa, and Luiz Carlos Gadelha DeSouza. The Gen-
eralized Extremal Optimization with Real Codification. In EngOpt’08 [1227], 2008. Fully
available at http://www.engopt.org/nukleo/pdfs/0201_engopt_2008_mainenti_
etal.pdf [accessed 2009-09-09].
1774. Luı́s Seabra Lopes, Nuno Lau, Pedro Mariano, and Luı́s Mateus Rocha, editors. Progress in
Artificial Intelligence – Proceedings of the 14th Portuguese Conference on Artificial Intelli-
gence (EPIA’09), October 12–15, 2009, Aveiro, Portugal, volume 5816 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
1775. Antonio López Jaimes and Carlos Artemio Coello Coello. Some Techniques to Deal
with Many-Objective Problems. In GECCO’09-II [2343], pages 2693–2696, 2009.
doi: 10.1145/1570256.1570386.
1776. Pascal Lorenz, Petre Dini, and Vladimir Uskov, editors. Proceedings of the 10th International
Conference on Telecommunications (ICT’03), February 21–March 1, 2003, Sofitel Coralia
Maeva Beach Hotel, Tahiti, Papeete, French Polynesia. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-7661-7. Google Books ID: -TurAAAACAAJ. OCLC: 52223801,
80325313, and 423983590. Catalogue no.: 03EX628.
1777. D. Lorenzoli, S. Mussino, M. Pezzé, A. Sichel, D. Tosi, and Daniela Schilling. A SOA based
Self-Adaptive PERSONAL MOBILITY MANAGER. In SCContest’06 [554], pages 479–486,
2006. doi: 10.1109/SCC.2006.16. INSPEC Accession Number: 9165412.
1778. Loughborough Antennas & Propagation Conference (LAPC’06), April 11–12, 2006. Lough-
borough University: Loughborough, Leicestershire, UK. isbn: 0947974415. Google Books
ID: K5K4NwAACAAJ.
1779. Jorge Loureiro and Orlando Belo. An Evolutionary Approach to the Selection and Allocation
of Distributed Cubes. In IDEAS’06 [1372], pages 243–248, 2006. doi: 10.1109/IDEAS.2006.9.
INSPEC Accession Number: 9353377.
1780. Jorge Loureiro and Orlando Belo. Swarm Intelligence in Cube Selection and Allo-
cation for Multi-Node OLAP Systems. In SCSS’06 [878], pages 229–239. Springer-
Verlag GmbH: Berlin, Germany, 2006, University of Bridgeport: Bridgeport, CT, USA.
doi: 10.1007/978-1-4020-6264-3_41.
1781. Jorge Loureiro and Orlando Belo. A Metamorphosis Algorithm for the Optimiza-
tion of a Multi-Node OLAP System. In EPIA’07 [2026], pages 383–394, 2007.
doi: 10.1007/978-3-540-77002-2_32.
1782. José Antonio Lozano, Pedro Larrañaga, Iñaki Inza, and Endika Bengoetxea, editors. To-
wards a New Evolutionary Computation – Advances on Estimation of Distribution Algo-
rithms, volume 192/2006 in Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, 2006. doi: 10.1007/11007937. isbn: 3-540-29006-0. Google Books
ID: 0dku9OKxl6AC and r0UrGB8y2V0C. OCLC: 63763942, 181473672, and 318299594.
Library of Congress Control Number (LCCN): 2005932568.
1783. Haiming Lu. State-of-the-Art Multiobjective Evolutionary Algorithms – Pareto Ranking, Den-
sity Estimation and Dynamic Population. PhD thesis, Oklahoma State University, Faculty of
the Graduate College: Stillwater, OK, USA, August 2002, Gary G. Yen, Advisor. Fully avail-
able at http://www.lania.mx/˜ccoello/EMOO/thesis_lu.pdf.gz [accessed 2010-08-02].
ID: AAT 3080536.
1784. Quiang Lu and Xin Yao. Clustering and Learning Gaussian Distribution for Contin-
uous Optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part C:
Applications and Reviews, 35(2):195–204, May 2005, IEEE Systems, Man, and Cyber-
netics Society: New York, NY, USA. doi: 10.1109/TSMCC.2004.841914. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/published_IEEETSMC_LuYa05.
pdf [accessed 2010-12-02]. CiteSeerx : 10.1.1.65.7063. INSPEC Accession Number: 8396860.
1785. Wei Lu and Issa Traore. Detecting New Forms of Network Intrusions Using Genetic Pro-
gramming. Computational Intelligence, 20(3):475–494, August 2004, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1111/j.0824-7935.2004.00247.x. Fully available at http://
www.isot.ece.uvic.ca/publications/journals/coi-2004.pdf [accessed 2008-06-11].
1786. Proceedings of third Conference on Artificial General Intelligence (AGI’10), March 5–8, 2010,
Lugano, Switzerland.
1787. Sean Luke and Liviu Panait. A Survey and Comparison of Tree Generation Algorithms.
In GECCO’01 [2570], pages 81–88, 2001. Fully available at http://www.cs.gmu.edu/
˜sean/papers/treegenalgs.pdf [accessed 2009-08-07]. CiteSeerx: 10.1.1.28.4837.
1788. Sean Luke and Lee Spector. Evolving Graphs and Networks with Edge Encoding: A Prelimi-
nary Report. In GP’96 LBP [1603], 1996. Fully available at http://cs.gmu.edu/˜sean/
papers/graph-paper.pdf [accessed 2008-08-02]. CiteSeerx : 10.1.1.25.9484.
1789. Sean Luke and Lee Spector. Evolving Teamwork and Coordination with Genetic Pro-
gramming. In GP’96 [1609], pages 150–156, 1996. Fully available at http://www.ifi.
uzh.ch/groups/ailab/people/nitschke/refs/Cooperation/cooperation.pdf
[accessed 2009-08-01]. CiteSeerx: 10.1.1.29.6914.
1790. Sean Luke, Conor Ryan, and Una-May O’Reilly, editors. GECCO 2002: Graduate Student
Workshop (GECCO’02 GSWS), July 9, 2002, Roosevelt Hotel: New York, NY, USA. AAAI
Press: Menlo Park, CA, USA. See also [224, 478, 1675].
1791. Sean Luke, Liviu Panait, Gabriel Balan, Sean Paus, Zbigniew Skolicki, Jeff Bassett, Robert
Hubley, and Alexander Chircop. ECJ: A Java-based Evolutionary Computation Research
System. George Mason University (GMU), Evolutionary Computation Laboratory (EClab):
Fairfax, VA, USA, 2006. Fully available at http://cs.gmu.edu/˜eclab/projects/
ecj/ [accessed 2010-11-02].
1792. Eduard Lukschandl, Magnus Holmlund, Eric Moden, Mats G. Nordahl, and Peter Nordin.
Induction of Java Bytecode with Genetic Programming. In GP’98 LBP [1605], pages 135–142,
1998.
1793. Eduard Lukschandl, Henrik Borgvall, Lars Nohle, Mats G. Nordahl, and Peter Nordin. Evolv-
ing Routing Algorithms with the JBGP-System. In EvoWorkshops’99 [2197], pages 193–202,
1999. doi: 10.1007/10704703_16. See also [1794, 1795].
1794. Eduard Lukschandl, Henrik Borgvall, Lars Nohle, Mats G. Nordahl, and Peter Nordin.
Distributed Java Bytecode Genetic Programming with Telecom Applications. In Eu-
roGP’00 [2198], pages 316–325, 2000. See also [1793].
1795. Eduard Lukschandl, Peter Nordin, and Mats G. Nordahl. Using the Java Method Evolver
for Load Balancing in Communication Networks. In GECCO’00 LBP [2928], pages 236–239,
2000. See also [1793].
1796. F. Luna, Antonio Jesús Nebro Urbaneja, Bernabé Dorronsoro Dı́az, Enrique Alba Tor-
res, Pascal Bouvry, and L. Hogie. Optimal Broadcasting in Metropolitan MANETs Us-
ing Multiobjective Scatter Search. In EvoWorkshops’06 [2341], pages 255–266, 2006.
doi: 10.1007/11732242_23.
1797. Sen Luo, Bin Xu, and Yixin Yan. An Accumulated-QoS-First Search Approach for Semantic
Web Service Composition. In SOCA’10 [1378], 2010. See also [320, 1146, 1571, 2998, 3010,
3011].
1798. Mikulas Luptacik and Rudolf Vetschera, editors. Proceedings of the First MCDM Winter Con-
ference, 16th International Conference on Multiple Criteria Decision Making (MCDM’02),
February 12–22, 2002, Hotel Panhans: Semmering, Austria. Partly available at http://
orgwww.bwl.univie.ac.at/mcdm2002 [accessed 2007-09-10].
1799. Jay L. Lush. Progeny Test and Individual Performance as Indicators of an Animal’s Breeding
Value. Journal of Dairy Science (JDS), 18(1):1–19, January 1935, American Dairy Science
Association (ADSA): Champaign, IL, USA. Fully available at http://jds.fass.org/
cgi/reprint/18/1/1 [accessed 2007-11-27].
1800. Luc Luyet, Sacha Varone, and Nicolas Zufferey. An Ant Algorithm for the
Steiner Tree Problem in Graphs. In EvoWorkshops’07 [1050], pages 42–51, 2007.
doi: 10.1007/978-3-540-71805-5_5.
1801. Huanyu Ma, Wei Jiang, Songlin Hu, Zhenqiu Huang, and Zhiyong Liu. Two-Phase Graph
Search Algorithm for QoS-Aware Automatic Service Composition. In SOCA’10 [1378], 2010.
See also [320, 1299].
1802. Wolfgang Maass, editor. Proceedings of the Eighth Annual Conference on Computational
Learning Theory (COLT’95), July 5–8, 1995, Santa Cruz, CA, USA. ACM Press: New York,
NY, USA. isbn: 0-89791-723-5. Google Books ID: L2NQAAAAMAAJ and mDtrOgAACAAJ.
OCLC: 33144512, 82595159, 283286354, 300096337, and 318352896. ACM Order
No.: 604950.
1803. Penousal Machado, Jorge Tavares, Francisco Baptista Pereira, and Ernesto Jorge Fer-
nandes Costa. Vehicle Routing Problem: Doing It The Evolutionary Way.
In GECCO’02 [1675], page 690, 2002. Fully available at http://osiris.
tuwien.ac.at/˜wgarn/VehicleRouting/GECCO02_VRPCoEvo.pdf [accessed 2008-10-27].
CiteSeerx : 10.1.1.16.8208.
1804. Michael Machtey and Paul Young. An Introduction to the General Theory of Algorithms,
volume 2 in Theory of Computation, The Computer Science Library. Elsevier Science Pub-
lishing Company, Inc.: New York, NY, USA and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands, 1978. isbn: 0-444-00226-X and 0-444-00227-8. Google
Books ID: qncEAQAAIAAJ.
1805. Kenneth J. Mackin and Eiichiro Tazaki. Emergent Agent Communication in Multi-Agent
Systems using Automatically Defined Function Genetic Programming (ADF-GP). In
SMC’99 [1331], pages 138–142, volume 5, 1999. doi: 10.1109/ICSMC.1999.815536. See also
[1806, 1807].
1806. Kenneth J. Mackin and Eiichiro Tazaki. Unsupervised Training of Multiobjective Agent
Communication using Genetic Programming. In KES’00 [1279], pages 738–741, volume
2, 2000. doi: 10.1109/KES.2000.884152. Fully available at http://www.cs.bham.ac.
uk/˜wbl/biblio/gp-html/Mackin00.html and http://www.lania.mx/˜ccoello/
EMOO/mackin00.pdf.gz [accessed 2008-07-28]. See also [1805].
1807. Kenneth J. Mackin and Eiichiro Tazaki. Multiagent Communication Combining Genetic
Programming and Pheromone Communication. Kybernetes, 31(6):827–843, 2002, Thales
Publications (W.O.) Ltd.: Irvine, CA, USA and Emerald Group Publishing Limited: Bingley,
UK. doi: 10.1108/03684920210432808. Fully available at http://www.emeraldinsight.
com/10.1108/03684920210432808 [accessed 2008-07-28]. See also [1805].
1808. James B. MacQueen. Some Methods of Classification and Analysis of Multivariate Observa-
tions. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Proba-
bility (June 21–July 18, 1965 and December 27, 1965–January 7, 1966) [1700], pages 281–297,
volume I: Statistics, 1967. Fully available at http://digitalassets.lib.berkeley.
edu/math/ucb/text/math_s5_v1_article-17.pdf [accessed 2009-09-20].
1809. Proceedings of the International Joint Conference on Computational Intelligence (IJCCI’09),
October 5, 2009, Madeira, Portugal.
1810. Alexander Maedche, Steffen Staab, Claire Nedellec, and Eduard H. Hovy, editors. Work-
shop on Ontology Learning, Proceedings of the Second Workshop on Ontology Learning
(OL’2001), August 4, 2001, Seattle, WA, USA, volume 38 in CEUR Workshop Pro-
ceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen,
Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-47/ [accessed 2008-04-01]. Held in conjunction with the 17th International Conference on
Artificial Intelligence IJCAI’2001. See also [2012].
1811. Pattie Maes, Maja J. Mataric, Jean-Arcady Meyer, Jordan B. Pollack, and Stewart W.
Wilson, editors. Proceedings of the Fourth International Conference on Simulation of Adap-
tive Behavior: From Animals to Animats 4 (SAB’96), September 9–13, 1996, Cape Code,
MA, USA. MIT Press: Cambridge, MA, USA. isbn: 0-262-63178-4. Google Books
ID: V3pksEEKxUkC. issn: 1089-4365.
1812. Anne E. Magurran. Ecological Diversity and Its Measurement. Princeton University Press:
Princeton, NJ, USA and Taylor and Francis LLC: London, UK, 1987. isbn: 0-691-08485-8,
0-691-08491-2, and 0709935390. Google Books ID: GV5eJQAACAAJ, MhsuGQAACAAJ,
f38pHgAACAAJ, gH8pHgAACAAJ, iHoOAAAAQAAJ, and wvSEAAAACAAJ.
1813. Anne E. Magurran. Biological Diversity. Current Biology, 15(4):R116–R118, February 22,
2005, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.cub.2005.02.006. Fully
available at http://dx.doi.org/10.1016/j.cub.2005.02.006 [accessed 2008-11-10].
1814. Hadj Mahboubi, Kamel Aouiche, and Jérôme Darmont. Materialized View Se-
lection by Query Clustering in XML Data Warehouses. In CSIT’06 [2687],
2006. Fully available at http://eric.univ-lyon2.fr/˜jdarmont/publications/
files/CSIT06-Mahboubi-et-al.pdf and http://hal.archives-ouvertes.fr/
docs/00/32/06/32/PDF/CSIT_2006-Mahboubi-et-al.pdf [accessed 2011-04-12]. arXiv
ID: 0809.1963v1.
1815. Oded Maimon and Lior Rokach, editors. The Data Mining and Knowledge Discov-
ery Handbook. Springer Science+Business Media, Inc.: New York, NY, USA, 2005.
doi: 10.1007/b107408. isbn: 0-387-24435-2 and 0-387-25465-X.
1816. Mohammad A. Makhzan and Kwei-Jay Lin. Solution to a Complete Web Ser-
vice Discovery and Composition. In CEC/EEE’06 [2967], pages 455–457, 2006.
doi: 10.1109/CEC-EEE.2006.83. INSPEC Accession Number: 9189347. See also [318].
1817. Masud Ahmad Malik. Evolution of the High Level Programming Languages: A Critical
Perspective. SIGPLAN Notices, 33(12):72–80, December 1998, ACM Press: New York, NY,
USA. doi: 10.1145/307824.307882.
1818. David Malkin. The Evolutionary Impact of Gradual Complexification on Complex Sys-
tems. PhD thesis, University College London (UCL), Department of Computer Science:
London, UK, 2008. Fully available at http://www.cs.ucl.ac.uk/staff/P.Bentley/
DavidMalkinsthesis.pdf [accessed 2010-07-27].
1819. Proceedings of the 2nd European Computing Conference (ECC’08), September 11–13, 2008,
Malta.
1820. Bernard Manderick and Frans Moyson. The Collective Behaviour of Ants: An Example
of Self-Organization in Massive Parallelism. In Proceedings of AAAI Spring Symposium on
Parallel Models of Intelligence: How Can Slow Components Think So Fast? [2600], 1988.
1821. Bernard Manderick, Mark de Weger, and Piet Spiessens. The Genetic Algorithm and the
Structure of the Fitness Landscape. In ICGA’91 [254], pages 143–150, 1991.
1822. Vittorio Maniezzo and Matteo Milandri. An Ant-Based Framework for Very Strongly Con-
strained Problems. In ANTS’02 [816], pages 115–148, 2002. doi: 10.1007/3-540-45724-0_19.
1823. Vittorio Maniezzo, Luca Maria Gambardella, and Fabio de Luigi. Ant Colony Optimiza-
tion. In New Optimization Techniques in Engineering [2083], Chapter 5, pages 101–117.
Springer-Verlag GmbH: Berlin, Germany. Fully available at http://www.idsia.ch/
˜luca/aco2004.pdf [accessed 2007-09-13]. CiteSeerx: 10.1.1.10.7560.
1824. Vittorio Maniezzo, Antonella Carbonaro, Matteo Golfarelli, and Stefano Rizzi. An ANTS Al-
gorithm for Optimizing the Materialization of Fragmented Views in Data Warehouse: Prelim-
inary Results. In EvoWorkshops’01 [343], pages 80–89, 2001. doi: 10.1007/3-540-45365-2_9.
Fully available at http://bias.csr.unibo.it/golfarelli/Papers/evocop01.pdf
[accessed 2011-04-10].
1825. Vittorio Maniezzo, Antonella Carbonaro, Matteo Golfarelli, and Stefano Rizzi. ANTS for
Data Warehouse Logical Design. In MIC’01 [2294], pages 249–254, 2001. Fully avail-
able at http://www-db.deis.unibo.it/˜srizzi/PDF/mic01.pdf [accessed 2011-04-10].
CiteSeerx : 10.1.1.10.1324 and 10.1.1.74.9797.
1826. Vittorio Maniezzo, Roberto Battiti, and Jean-Paul Watson, editors. Learning and In-
telligent OptimizatioN (LION 2), December 8–12, 2007, University of Trento: Trento,
Italy, volume 5313/2008 in Theoretical Computer Science and General Issues (SL 1),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-92695-5. isbn: 3-540-92694-1. Library of Congress Control Number
(LCCN): 2008941292.
1827. Reinhard Männer and Bernard Manderick, editors. Proceedings of Parallel Problem Solv-
ing from Nature 2 (PPSN II), September 28–30, 1992, Brussels, Belgium. Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands. isbn: 0-444-89730-5. Google Books ID: jjjlAAAACAAJ.
OCLC: 26362049, 180616056, and 636493751. Library of Congress Control Number
(LCCN): 92023499. GBV-Identification (PPN): 110538498. LC Classification: QA76.58.
1828. Yannis Manolopoulos, Jaroslav Pokorný, and Timos K. Sellis, editors. Proceedings of the
10th East European Conference on Advances in Databases and Information Systems (AD-
BIS’06), September 3–7, 2006, Thessaloniki, Greece, volume 4152 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11827252.
isbn: 3-540-37899-5. Library of Congress Control Number (LCCN): 2006931262.
1829. Steven Manos, Leon Poladian, Peter J. Bentley, and Maryanne Large. A Genetic Algo-
rithm with a Variable-Length Genotype and Embryogeny for Microstructured Optical Fi-
bre Design. In GECCO’06 [1516], pages 1721–1728, 2006. doi: 10.1145/1143997.1144278.
CiteSeerx : 10.1.1.152.7629.
1830. Elena Marchiori and Adri G. Steenbeek. An Evolutionary Algorithm for Large Scale Set
Covering Problems with Application to Airline Crew Scheduling. In EvoWorkshops’00 [464],
pages 370–384, 2000. doi: 10.1007/3-540-45561-2_36. CiteSeerx: 10.1.1.36.8069.
1831. Tiziana Margaria, Christian Winkler, Christian Kubczak, Bernhard Steffen, Marco
Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele
Facca, and Christina Tziviskou. SWS Mediator with WEBML/WEBRATIO and
JABC/JETI: A Comparison. In ICEIS’07 [494], pages 422–429, volume 4,
2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/3-SWSC06-med-webml-jabc-fin.pdf [accessed 2010-12-16]. See also [383–
387, 1626–1632, 1832].
1832. Tiziana Margaria, Marco Bakera, Harald Raffelt, and Bernhard Steffen. Synthesiz-
ing the Mediator with jABC/ABC. In EON-SWSC’08 [1023], 2008. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/Paper-4.pdf [accessed 2010-12-16]. See also [1626–1632, 1712, 1831, 2166].
1833. Pedro José Marrón, editor. 5. GI/ITG KuVS Fachgespräch “Drahtlose Sensornetze”, July 17–
18, 2006, volume 2006/07. Universität Stuttgart, Fakultät 5: Informatik, Elektrotechnik und
Informationstechnik, Institut für Parallele und Verteilte Systeme (IPVS): Stuttgart, Ger-
many. Fully available at http://elib.uni-stuttgart.de/opus/volltexte/2006/
2838/pdf/TR_2006_07.pdf [accessed 2009-08-01]. urn: urn:nbn:de:bsz:93-opus-28388.
1834. Silvano Martello and Paolo Toth. Knapsack Problems – Algorithms and Computer Im-
plementations, volume 25 in Estimation, Simulation, and Control – Wiley-Interscience
Series in Discrete Mathematics and Optimization. Wiley Interscience: Chichester,
West Sussex, UK, 1990. isbn: 0471924202. Google Books ID: 0dhQAAAAMAAJ and
BrcjQAAACAAJ. OCLC: 21339027 and 231141136. Library of Congress Control Number
(LCCN): 90012279. GBV-Identification (PPN): 032258186, 073756830, and 115819762.
LC Classification: QA267.7 .M37 1990.
1835. Max Martin, Eric Drucker, and Walter Don Potter. GA, EO, and PSO Applied to the Discrete
Network Configuration Problem. In GEM’08 [120], 2008. Fully available at http://www.
cs.uga.edu/˜potter/CompIntell/GEM_Martin-et-al.pdf [accessed 2008-08-23].
1836. Trevor Philip Martin and Anca L. Ralescu, editors. Fuzzy Logic in Artificial Intelligence, To-
wards Intelligent Systems, IJCAI’95 Selected Papers from Workshop (IJCAI’95 Fuzzy WS),
August 19–21, 1995, volume 1188/1997 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-62474-0. isbn: 3-540-62474-0. Google Books ID: MGngXEbCI64C.
OCLC: 36215016, 150397549, 301584096, and 312826327. See also [1861, 2919].
1837. William Martin and Michael J. Russell. On the Origins of Cells: A Hypothesis for the
Evolutionary Transitions from Abiotic Geochemistry to Chemoautotrophic Prokaryotes, and
from Prokaryotes to Nucleated Cells. Philosophical Transactions of the Royal Society B:
Biological Sciences, 358(1429):59–85, 2003, Royal Society of Edinburgh: Edinburgh, Scotland,
UK. doi: 10.1098/rstb.2002.1183. PubMed ID: 12594918.
1838. Worthy Neil Martin, Jens Lienig, and James P. Cohoon. Island (Migration) Models: Evo-
lutionary Algorithms Based on Punctuated Equilibria. In Handbook of Evolutionary Com-
putation [171], Chapter C6.3, pages 448–463. Oxford University Press, Inc.: New York, NY,
USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK,
and CRC Press, Inc.: Boca Raton, FL, USA, 1997.
1839. Jean-Philippe Martin-Flatin, Joe Sventek, and Kurt Geihs, editors. IFIP/IEEE International
Workshop on Self-Managed Systems and Services (SelfMan’05), May 19, 2005, Nice Acropolis
Exhibition Center: Nice, France.
1840. Flávio Martins, Eduardo Carrano, Elizabeth Wanner, Ricardo H. C. Takahashi, and Ger-
aldo Robson Mateus. A Dynamic Multiobjective Hybrid Approach for Designing Wireless
Sensor Networks. In CEC’09 [1350], pages 1145–1152, 2009. doi: 10.1109/CEC.2009.4983075.
INSPEC Accession Number: 10688670.
1841. Radek Matoušek and Lars Nolle. GAHC: Improved GA with HC mutation. In
WCECS’07 [1406], pages 915–920, 2007. Fully available at http://www.iaeng.org/
publication/WCECS2007/WCECS2007_pp915-920.pdf [accessed 2009-10-24].
1842. Radek Matoušek and Pavel Ošmera, editors. Proceedings of the 9th International Confer-
ence on Soft Computing (MENDEL’03), June 4–6, 2003, Brno University of Technology:
Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky:
Brno, Czech Republic. isbn: 80-214-2411-7. OCLC: 249853754 and 320595534. GBV-
Identification (PPN): 390844543.
1843. Yoshiyuki Matsumura, Kazuhiro Ohkura, and Kanji Ueda. Advantages of Global Discrete
Recombination in (µ/µ, λ)-Evolution Strategies. In CEC’02 [944], pages 1848–1853, volume
2, 2002. doi: 10.1109/CEC.2002.1004524.
1844. Giles Mayley. Guiding or Hiding: Explorations into the Effects of Learning on the Rate of
Evolution. In ECAL’97 [1309], pages 135–144, 1997. CiteSeerx : 10.1.1.50.9241.
1845. Ernst Mayr. The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Har-
vard University Press: Cambridge, MA, USA and Belknap Press: Cambridge, MA, USA,
1982. isbn: 0-674-36445-7 and 0-674-36446-5. Google Books ID: V_0TAQAAIAAJ and
pHThtE2R0UQC. OCLC: 7875904, 185404286, and 239745622.
1846. John P. McDermott, editor. Proceedings of the 10th International Joint Conference on Ar-
tificial Intelligence (IJCAI’87-I), August 1987, Milano, Italy, volume 1. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/
ijcai/IJCAI-87-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1847].
1847. John P. McDermott, editor. Proceedings of the 10th International Joint Conference on Ar-
tificial Intelligence (IJCAI’87-II), August 1987, Milano, Italy, volume 2. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/
ijcai/IJCAI-87-VOL2/content/content.htm [accessed 2008-04-01]. See also [1846].
1848. John Robert McDonnell, Robert G. Reynolds, and David B. Fogel, editors. Proceedings of
the 4th Annual Conference on Evolutionary Programming (EP’95), March 1–2, 1995, San
Diego, CA, USA, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge,
MA, USA. isbn: 0-262-13317-2. Google Books ID: OPhQAAAAMAAJ and ipiiDmTrJeAC.
OCLC: 32666904.
1849. Deborah L. McGuinness and George Ferguson, editors. Proceedings of the Nineteenth
National Conference on Artificial Intelligence and the Sixteenth Conference on Innova-
tive Applications of Artificial Intelligence (AAAI’04 / IAAI’04), July 25–29, 2004, San
José, CA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA,
USA. isbn: 0-262-51183-5. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai04.php and http://www.aaai.org/Conferences/IAAI/iaai04.php [ac-
cessed 2007-09-06].
1850. Sheila A. McIlraith and Tran Cao Son. Adapting Golog for Composition of Seman-
tic Web Services. In KR’02 [900], pages 482–496, 2002. Fully available at http://
reference.kfupm.edu.sa/content/a/d/adapting_golog_for_composition_
of_semant_10675.pdf and http://www.cs.nmsu.edu/˜tson/papers/kr02gl.pdf
[accessed 2011-01-11].
1851. Sheila A. McIlraith, Tran Cao Son, and Honglei Zeng. Semantic Web Services. IEEE Intelli-
gent Systems Magazine, 16(2):46–53, March–April 2001, IEEE Computer Society Press: Los
Alamitos, CA, USA. doi: 10.1109/5254.920599. INSPEC Accession Number: 6941088.
1852. Robert Ian McKay, Xin Yao, Charles S. Newton, Jong-Hwan Kim, and Takeshi Fu-
ruhashi, editors. Selected Papers from the Second Asia-Pacific Conference on Simulated
Evolution and Learning (SEAL’98), November 24–27, 1998, Canberra, Australia, volume
1585/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48873-1.
isbn: 3-540-65907-2.
1853. Robert Ian McKay, Xuan Hoai Nguyen, Peter Alexander Whigham, and Yin Shan. Grammars
in Genetic Programming: A Brief Review. In ISICA [1489], pages 3–18, 2005. Fully available
at http://sc.snu.ac.kr/PAPERS/isica05.pdf [accessed 2007-08-15].
1854. Ken I. M. McKinnon. Convergence of the Nelder-Mead Simplex Method to a Nonstationary
Point. SIAM Journal on Optimization (SIOPT), 9(1):148–158, 1998, Society for Industrial
and Applied Mathematics (SIAM): Philadelphia, PA, USA. doi: 10.1137/S1052623496303482.
1855. Nicholas Freitag McPhee and Justin Darwin Miller. Accurate Replication in Genetic
Programming. In ICGA’95 [883], pages 303–309, 1995. Fully available at http://
www.mrs.umn.edu/˜mcphee/Research/Accurate_replication.ps [accessed 2011-11-23].
CiteSeerx : 10.1.1.51.6390.
1856. Nicholas Freitag McPhee and Riccardo Poli. Memory with Memory: Soft As-
signment in Genetic Programming. In GECCO’08 [1519], pages 1235–1242, 2008.
doi: 10.1145/1389095.1389336. Fully available at http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/McPhee_2008_gecco.html [accessed 2009-08-02].
1857. Jörn Mehnen, Mario Köppen, Ashraf Saad, and Ashutosh Tiwari, editors. Applications of
Soft Computing – From Theory to Praxis: 13th Online World Conference on Soft Computing
in Industrial Applications, November 10–21, 2008, volume 58 in Advances in Intelligent and
Soft Computing. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-89619-7.
1858. Yi Mei, Ke Tang, and Xin Yao. A Global Repair Operator for Capacitated Arc
Routing Problem. IEEE Transactions on Systems, Man, and Cybernetics – Part
B: Cybernetics, 39(3):723–734, June 2009, IEEE Systems, Man, and Cybernetics So-
ciety: New York, NY, USA. doi: 10.1109/TSMCB.2008.2008906. Fully available
at http://www.cs.bham.ac.uk/˜xin/papers/TSMC09_MeiTangYao.pdf [accessed 2011-10-01]. CiteSeerx: 10.1.1.147.5933. PubMed ID: 19211356.
1859. Yi Mei, Ke Tang, and Xin Yao. Decomposition-Based Memetic Algorithm for Multi-
Objective Capacitated Arc Routing Problem. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 15(2):151–165, April 2011, IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/TEVC.2010.2051446. Fully available at http://
ieee-cis.org/_files/EAC_Research_2009_Report_Mei.pdf and http://mail.
ustc.edu.cn/˜meiyi/Papers/D-MAENS.pdf [accessed 2011-02-22]. INSPEC Accession Num-
ber: 11887930.
1860. Proceedings of the Second International Workshop on Local Search Techniques in Constraint
Satisfaction, October 1, 2005, Melia Sitges Hotel: Sitges, Barcelona, Catalonia, Spain.
1861. Christopher S. Mellish, editor. Proceedings of the Fourteenth International Joint Conference
on Artificial Intelligence (IJCAI’95), August 20–25, 1995, Montréal, QC, Canada. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-363-8. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL%201/content/content.
htm and http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL2/CONTENT/content.htm
[accessed 2008-04-01]. Google Books ID: yI7UPwAACAAJ. OCLC: 41327748, 263632421, and
300469403. See also [1836, 2919].
1862. Abdelhamid Mellouk, Jun Bi, Guadalupe Ortiz, Kak Wah (Dickson) Chiu, and Manuela
Popescu, editors. Proceedings of The Third International Conference on Internet and
Web Applications and Services (ICIW’08), June 8–13, 2008, Athens, Greece. IEEE Com-
puter Society Press: Los Alamitos, CA, USA. isbn: 0-7695-3163-6. Partly available
at http://www.iaria.org/conferences2008/ICIW08.html [accessed 2011-02-28]. Google
Books ID: GiW-PQAACAAJ. Library of Congress Control Number (LCCN): 2008922600.
Product Number: E3163. BMS Part Number: CFP0816C-CDR.
1863. Leonor Albuquerque Melo. Multi-colony Ant Colony Optimization for the Node Placement
Problem. In GECCO’09-II [2343], pages 2713–2716, 2009. doi: 10.1145/1570256.1570391.
1864. Adriana Menchaca-Méndez and Carlos Artemio Coello Coello. A New Proposal to Hybridize
the Nelder-Mead Method to a Differential Evolution Algorithm for Constrained Optimiza-
tion. In CEC’09 [1350], pages 2598–2605, 2009. doi: 10.1109/CEC.2009.4983268. Fully
available at www.lania.mx/˜ccoello/EMOO/menchaca09.pdf.gz [accessed 2009-09-05]. IN-
SPEC Accession Number: 10688863.
1865. Jerry M. Mendel, Takashi Omari, and Xin Yao, editors. The First IEEE Symposium on Foun-
dations of Computational Intelligence (FOCI’07), April 1–5, 2007, Hilton Hawaiian Village
Hotel (Beach Resort & Spa): Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ,
USA. isbn: 1424407036. Google Books ID: -8kaOAAACAAJ and KaekAAAACAAJ. Catalogue
no.: 07EX1569. Sponsored by IEEE Computational Intelligence Society.
1866. Roberto R. F. Mendes and Arvind S. Mohais. DynDE: A Differential Evolution for
Dynamic Optimization Problems. In CEC’05 [641], pages 2808–2815, volume 3, 2005.
doi: 10.1109/CEC.2005.1555047. Fully available at http://www3.di.uminho.pt/˜rcm/
publications/DynDE.pdf [accessed 2010-08-24]. CiteSeerx : 10.1.1.125.2146. INSPEC Ac-
cession Number: 8715549.
1867. Roberto R. F. Mendes, Fabricio de B. Voznika, Alex Alves Freitas, and Julio C. Nievola.
Discovering Fuzzy Classification Rules with Genetic Programming and Co-Evolution.
In PKDD’01 [723], pages 314–325, 2001. doi: 10.1007/3-540-44794-6_26. Fully avail-
able at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Mendes_2001_PKDD.
html and http://www.cs.kent.ac.uk/people/staff/aaf/pub_papers.dir/
PKDD-2001.ps [accessed 2010-08-19]. CiteSeerx : 10.1.1.17.8631.
1868. Xiaofeng Meng, Jianwen Su, and Yujun Wang, editors. Proceedings of the Third International
Conference on Advances in Web-Age Information Management (WAIM’02), August 11–13,
2002, Běijı̄ng, China, volume 2419 in Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45703-8. isbn: 3-540-44045-3.
1869. Manfred Menze. Evolutionäre Algorithmen zur Ad-hoc-Tourenplanung: InWeSt – Intelligente
Wechselbrückensteuerung. Technical Report, Micromata GmbH: Kassel, Hesse, Germany,
November 1, 2010. Fully available at http://www.micromata.de/fileadmin/user_
upload/Case_studies/cs_InWeSt.pdf [accessed 2011-09-05].
1870. Juan Julián Merelo-Guervós, Panagiotis Adamidis, Hans-Georg Beyer, José Luis Fernández-
Villacañas Martı́n, and Hans-Paul Schwefel, editors. Proceedings of the 7th International
Conference on Parallel Problem Solving from Nature (PPSN VII), September 7–11, 2002,
Granada, Spain, volume 2439/2002 in Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45712-7. isbn: 3-540-44139-5. Google
Books ID: KD-WMBb4AhkC. OCLC: 50422940, 50801109, 66012617, and 248499908.
1871. Daniel Merkle, Martin Middendorf, and Hartmut Schmeck. Ant Colony Optimiza-
tion for Resource-Constrained Project Scheduling. In GECCO’00 [2935], pages 893–
900, 2000. Fully available at http://pacosy.informatik.uni-leipzig.de/pv/
Personen/middendorf/Papers/ [accessed 2009-06-27]. CiteSeerx : 10.1.1.20.8614.
1872. Laurence D. Merkle and Frank W. Moore, editors. Defense Applications of Computational
Intelligence Workshop (DACI’08), July 12, 2008, Renaissance Atlanta Hotel Downtown: At-
lanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
1873. Achille Messac and Christopher A. Mattson. Generating Well-Distributed Sets of Pareto
Points for Engineering Design Using Physical Programming. Optimization and Engineering, 3
(4):431–450, December 2002, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer
Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1021179727569. Fully available
at http://www.rpi.edu/˜messac/Publications/messac_2002_pareto_pp_opte.
pdf [accessed 2009-06-21].
1874. Nicholas Metropolis, Arianna W. Rosenbluth, Marshall Nicholas Rosenbluth, Augusta H.
Teller, and Edward Teller. Equation of State Calculations by Fast Computing Machines.
The Journal of Chemical Physics, 21(6):1087–1092, June 1953, American Institute of Physics
(AIP): College Park, MD, USA. doi: 10.1063/1.1699114. Fully available at http://home.
gwu.edu/˜stroud/classics/Metropolis53.pdf, http://sc.fsu.edu/˜beerli/
mcmc/metropolis-et-al-1953.pdf, and http://scienze-como.uninsubria.it/
bressanini/montecarlo-history/mrt2.pdf [accessed 2010-09-25].
1875. Proceedings of the Second International Workshop on Advanced Computational Intelligence
(IWACI’09), October 8–9, 2009, Mexico City, México.
1876. Jean-Arcady Meyer and Stewart W. Wilson, editors. From Animals to Animats: Proceedings
of the First International Conference on Simulation of Adaptive Behavior (SAB’90), Septem-
ber 24–28, 1990, Paris, France. MIT Press: Cambridge, MA, USA. isbn: 0-262-63138-5.
Published in 1991.
1877. Mike Meyer and James Rosenberger, editors. Proceedings of the 27th Symposium on the Inter-
face Computing Science and Statistics (Interface’95), June 21–24, 1995, David L. Lawrence
Convention Center and Pittsburgh Vista Hotel: Pittsburgh, PA, USA.
1878. Silja Meyer-Nieberg and Hans-Georg Beyer. Self-Adaptation in Evolutionary Algo-
rithms. In Parameter Setting in Evolutionary Algorithms [1753], Chapter 3, pages
47–75. Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-69432-8_3.
CiteSeerx : 10.1.1.60.4624.
1879. Marc Mezard, Giorgio Parisi, and Miguel Angel Virasoro. Spin Glass Theory and Be-
yond – An Introduction to the Replica Method and Its Applications, volume 9 in World
Scientific Lecture Notes in Physics. World Scientific Publishing Co.: Singapore, Novem-
ber 1986. isbn: 9971-50-115-5 and 9971-50-116-3. Google Books ID: 84kNHQAACAAJ
and KKbuIAAACAAJ.
1880. Efrén Mezura-Montes and Carlos Artemio Coello Coello. Using the Evolution Strategies’
Self-Adaptation Mechanism and Tournament Selection for Global Optimization. In AN-
NIE’03 [670], pages 373–378, 2003. Fully available at http://www.lania.mx/˜emezura/
documentos/annie03.pdf [accessed 2008-03-20]. Theoretical Developments in Computational
Intelligence Award, 1st runner up.
1881. Efrén Mezura-Montes, Carlos Artemio Coello Coello, and Ricardo Landa-Becerra. Engi-
neering Optimization Using a Simple Evolutionary Algorithm. In ICTAI’03 [1367], pages
149–156, 2003. CiteSeerx : 10.1.1.1.2837.
1882. Efrén Mezura-Montes, Jesús Velázquez-Reyes, and Carlos Artemio Coello Coello. A Compar-
ative Study of Differential Evolution Variants for Global Optimization. In GECCO’06 [1516],
pages 485–492, 2006. doi: 10.1145/1143997.1144086. Fully available at http://delta.
cs.cinvestav.mx/˜ccoello/conferences/mezura-gecco2006.pdf.gz [accessed 2010-08-24].
1883. Zbigniew Michalewicz. A Step Towards Optimal Topology of Communication Networks. In
Proceedings of Data Structures and Target Classification, the SPIE’s International Sympo-
sium on Optical Engineering and Photonics in Aerospace Sensing [1738], pages 112–122, 1991.
doi: 10.1117/12.44844.
1884. Zbigniew Michalewicz. A Perspective on Evolutionary Computation. In Progress
in Evolutionary Computation, AI’93 (Melbourne, Victoria, Australia, 1993-11-16) and
AI’94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on Evolutionary Compu-
tation, Selected Papers [3023], pages 73–89, 1993–1994. doi: 10.1007/3-540-60154-6_49.
CiteSeerx : 10.1.1.90.7608.
1885. Zbigniew Michalewicz. Genetic Algorithms, Numerical Optimization, and Constraints. In
ICGA’95 [883], pages 151–158, 1995. Fully available at http://www.cs.adelaide.edu.
au/˜zbyszek/Papers/p16.pdf [accessed 2009-02-28]. CiteSeerx : 10.1.1.14.1880.
1886. Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Pro-
grams. Springer-Verlag GmbH: Berlin, Germany, 1996. isbn: 3-540-58090-5 and
3-540-60676-9. Google Books ID: vlhLAobsK68C. asin: 3540606769.
1887. Zbigniew Michalewicz and David B. Fogel. How to Solve It: Modern Heuristics. Springer-
Verlag: Berlin/Heidelberg, 2nd edition, 2004. isbn: 3-540-22494-7. Google Books
ID: RJbV_-JlIUQC.
1888. Zbigniew Michalewicz and Girish Nazhiyath. Genocop III: A Co-Evolutionary Al-
gorithm for Numerical Optimization with Nonlinear Constraints. In CEC’95 [1361],
pages 647–651, volume 2, 1995. doi: 10.1109/ICEC.1995.487460. Fully available
at http://www.cs.cinvestav.mx/˜constraint/papers/p24.ps [accessed 2009-02-28].
CiteSeerx : 10.1.1.14.2923.
1889. Zbigniew Michalewicz and Robert G. Reynolds, editors. Proceedings of the IEEE Congress on
Evolutionary Computation (CEC’08), June 1–6, 2008, Hong Kong Convention and Exhibition
Centre: Hong Kong (Xiānggǎng), China. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 1-4244-1822-4. Google Books ID: W5HqHwAACAAJ and 142PQAACAAJ. Part of
[3104].
1890. Zbigniew Michalewicz and Marc Schoenauer. Evolutionary Algorithms for Constrained
Parameter Optimization Problems. Evolutionary Computation, 4(1):1–32, Spring 1996,
MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.1.1. Fully available
at http://www.cs.adelaide.edu.au/˜zbyszek/Papers/p30.pdf [accessed 2009-02-28].
CiteSeerx : 10.1.1.13.9279.
1891. Zbigniew Michalewicz, J. David Schaffer, Hans-Paul Schwefel, David B. Fogel, and Hi-
roaki Kitano, editors. Proceedings of the First IEEE Conference on Evolutionary Computa-
tion (CEC’94), June 27–29, 1994, Walt Disney World Dolphin Hotel: Orlando, FL, USA.
IEEE Computer Society: Piscataway, NJ, USA, IEEE Computer Society: Piscataway, NJ,
USA. isbn: 0-7803-1899-4 and 0-7803-1900-1. Google Books ID: W6FVAAAAMAAJ and
Zhp7RAAACAAJ. OCLC: 636736723. INSPEC Accession Number: 4812337. Part of [1324].
1892. GeoLife GPS Trajectories. Microsoft Research Asia: Běijı̄ng, China, September 30,
2010. Fully available at http://research.microsoft.com/en-us/downloads/
b16d359d-d164-469e-9fd4-daa38f2b2e13/default.aspx [accessed 2011-02-19]. See also
[3077].
1893. Kaisa Miettinen. Nonlinear Multiobjective Optimization, volume 12 in International Series
in Operations Research & Management Science. Kluwer Academic Publishers: Norwell,
MA, USA and Springer Netherlands: Dordrecht, Netherlands, 1998. isbn: 0-7923-8278-1.
Google Books ID: ha_zLdNtXSMC. OCLC: 39545951.
1894. Kaisa Miettinen, Marko M. Mäkelä, Pekka Neittaanmäki, and Jacques Periaux, editors. In-
vited Lectures of the European Short Course on Genetic Algorithms and Evolution Strategies
(EUROGEN’99), May 30–June 3, 1999, University of Jyväskylä: Jyväskylä, Finland. Wi-
ley Interscience: Chichester, West Sussex, UK. isbn: 0471999024. OCLC: 41086671,
245840787, and 313492480. Full title: Evolutionary Algorithms in Engineering and Com-
puter Science – Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary
Programming, Genetic Programming and Industrial Applications.
1895. Kaisa Miettinen, Pekka J. Korhonen, and Jyrki Wallenius, editors. “Environment and Policy”
– Proceedings of the 21st International Conference on Multiple Criteria Decision Making
(MCDM’11), June 13–17, 2011, University of Jyväskylä: Jyväskylä, Finland.
1896. Mitsunori Miki, Tomoyuki Hiroyasu, and Jun’ya Wako. Adaptive Temperature Schedule
Determined by Genetic Algorithm for Parallel Simulated Annealing. In CEC’03 [2395],
pages 459–466, volume 1, 2003. doi: 10.1109/CEC.2003.1299611. Fully avail-
able at http://mikilab.doshisha.ac.jp/dia/research/person/wako/society/
cec2003/cec03_wako.pdf [accessed 2007-09-15]. INSPEC Accession Number: 8008655.
1897. Brad L. Miller and David Edward Goldberg. Genetic Algorithms, Tournament Selection,
and the Effects of Noise. IlliGAL Report 95006, Illinois Genetic Algorithms Laboratory
(IlliGAL), Department of Computer Science, Department of General Engineering, University
of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, July 1995. Fully available
at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/95006.ps.Z [accessed 2009-07-10]. CiteSeerx: 10.1.1.30.6625. See also [1898].
1898. Brad L. Miller and David Edward Goldberg. Genetic Algorithms, Selection Schemes, and
the Varying Effects of Noise. Evolutionary Computation, 4(2):113–131, Summer 1996, MIT
Press: Cambridge, MA, USA, Marc Schoenauer, editor. doi: 10.1162/evco.1996.4.2.113.
CiteSeerx : 10.1.1.31.3449. See also [1897].
1899. Brad L. Miller and Michael J. Shaw. Genetic Algorithms with Dynamic Niche Sharing
for Multimodal Function Optimization. IlliGAL Report 95010, Illinois Genetic Algorithms
Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, December 1, 1995.
CiteSeerx : 10.1.1.52.3568.
1900. George A. Miller. The Magical Number Seven, Plus or Minus Two: Some Limits on
our Capacity for Processing Information. Psychological Review, 63(2):81–97, March 1956,
American Psychological Association (APA): Washington, DC, USA. Fully available
at http://www.psych.utoronto.ca/users/peterson/psy430s2001/Miller%20GA
%20Magical%20Seven%20Psych%20Review%201955.pdf [accessed 2009-07-17]. PubMed
ID: 13310704.
1901. Julian Francis Miller and Peter Thomson. Cartesian Genetic Programming. In
EuroGP’00 [2198], pages 121–132, 2000. doi: 10.1007/b75085. Fully available
at http://www.cartesiangp.co.uk/papers/2000/mteurogp2000.pdf [accessed 2009-07-04]. CiteSeerx: 10.1.1.26.8515.
1902. Julian Francis Miller, Adrian Thompson, Peter Thomson, and Terence Claus Fogarty, edi-
tors. Proceedings of Third International Conference on Evolvable Systems – From Biology
to Hardware (ICES’00), April 17–19, 2000, Edinburgh, Scotland, UK, volume 1801/2000
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46406-9. isbn: 3-540-67338-5.
1903. Julian Francis Miller, Marco Tomassini, Pier Luca Lanzi, Conor Ryan, Andrea G. B. Tet-
tamanzi, and William B. Langdon, editors. Proceedings of the 4th European Conference on
Genetic Programming (EuroGP’01), April 18–20, 2001, Lake Como, Milan, Italy, volume
2038/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-41899-7. OCLC: 46502093, 48730451, 66624523, 76281028,
456658150, 501702396, 614899119, and 634229636. Library of Congress Control Num-
ber (LCCN): 2001020853. LC Classification: QA76.623 .E94 2001.
1904. Stanley L. Miller. Production of Amino Acids under Possible Primitive Earth Conditions.
Science Magazine, 117(3046):528–529, May 15, 1953, American Association for the Advance-
ment of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford University):
Cambridge, MA, USA. doi: 10.1126/science.117.3046.528. Fully available at http://www.
issol.org/miller/miller1953.pdf [accessed 2009-08-05]. PubMed ID: 13056598.
1905. Stanley L. Miller and Harold C. Urey. Organic Compound Synthesis on the Primitive Earth:
Several Questions about the Origin of Life have been Answered, but much Remains to be
Studied. Science Magazine, 130(3370):245–251, July 31, 1959, American Association for the
Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford Uni-
versity): Cambridge, MA, USA. doi: 10.1126/science.130.3370.245. PubMed ID: 13668555.
1906. Hokey Min, Tomasz G. Smolinski, and Grzegorz M. Boratyn. A Genetic Algorithm-based
Data Mining Approach to Profiling the Adopters And Non-Adopters of E-Purchasing. In
IRI’01 [2524], pages 1–6, 2001. CiteSeerx : 10.1.1.11.4711.
1907. Marvin Lee Minsky. Steps toward Artificial Intelligence. Proceedings of the IRE, 49(1):8–30. Institu-
tion of Electrical Engineers (IEE): London, UK and Institution of Engineering and Technology
(IET): Stevenage, Herts, UK, January 1961. doi: 10.1109/JRPROC.1961.287775. Reprinted
in [1908].
1908. Marvin Lee Minsky. Steps toward Artificial Intelligence. In Computers and Thought [897],
pages 406–450. McGraw-Hill: New York, NY, USA, 1963–1995. Fully available at http://
web.media.mit.edu/˜minsky/papers/steps.html [accessed 2007-08-05]. Reprint of [1907].
1909. Daniele Miorandi, Paolo Dini, Eitan Altman, and Hisao Kameda, authors. Lidia A. R. Ya-
mamoto, editor. WP 2.2 – Paradigm Applications and Mapping, D2.2.2 Framework for Dis-
tributed On-line Evolution of Protocols and Services. BIONETS (European Commission –
Future and Emerging Technologies Project FP6-027748): Trento, Italy, 2nd edition, June 14,
2007, David Linner and Françoise Baude, Supervisors. Fully available at http://www.
bionets.eu/docs/BIONETS_D2_2_2.pdf [accessed 2008-06-23]. BIONETS/CN/wp2.2/v2.1.
Status: Final, Public.
1910. Teresa Miquélez, Endika Bengoetxea, and Pedro Larrañaga. Evolutionary Computation based
on Bayesian Classifiers. International Journal of Applied Mathematics and Computer Science
(AMCS), 14(3):335–349, September 2004, University of Zielona Góra: Zielona Góra, Poland
and Lubuskie Scientific Society: Zielona Góra, Poland. Fully available at http://matwbn.
icm.edu.pl/ksiazki/amc/amc14/amc1434.pdf and http://www.amcs.uz.zgora.
pl/?action=get_file&file=AMCS_2004_14_3_4.pdf [accessed 2009-08-27].
1911. Proceedings of the Fifth International Workshop on the Synthesis and Simulation of Living
Systems (Artificial Life V), May 16–18, 1996, Nara, Japan. MIT Press: Cambridge, MA,
USA.
1912. Melanie Mitchell. An Introduction to Genetic Algorithms, Complex Adaptive Systems, Brad-
ford Books. MIT Press: Cambridge, MA, USA, February 1998. isbn: 0-262-13316-4
and 0-262-63185-7. Google Books ID: 0eznlz0TF-IC. OCLC: 44708663 and
256769418. Library of Congress Control Number (LCCN): 95024489. GBV-Identification
(PPN): 083660852, 240449282, 269703497, and 557262135. LC Classification: QH441.2
.M55 1996.
1913. Melanie Mitchell, Stephanie Forrest, and John Henry Holland. The Royal Road for Genetic
Algorithms: Fitness Landscapes and GA Performance. In ECAL’91 [2789], pages 245–254,
1991. Fully available at http://web.cecs.pdx.edu/˜mm/ecal92.pdf [accessed 2010-08-03].
CiteSeerx : 10.1.1.49.212.
1914. Tom M. Mitchell. Generalization as Search. In Readings in Artificial Intelligence: A Col-
lection of Articles [2872], pages 517–542. Tioga Publishing Co.: Wellsboro, PA, USA and
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1981. See also [1915].
1915. Tom M. Mitchell. Generalization as Search. Artificial Intelligence, 18
(2):203–226, March 1982, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(82)90040-6. CiteSeerx : 10.1.1.121.5764. See also [1914].
1916. Tom M. Mitchell and Reid G. Smith, editors. Proceedings of the 7th National Conference
on Artificial Intelligence (AAAI’88), August 21–26, 1988, St. Paul, MN, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51055-3. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai88.php [accessed 2007-09-06].
1917. Debasis Mitra, Fabio I. Romeo, and Alberto L. Sangiovanni-Vincentelli. Convergence and
Finite-Time Behavior of Simulated Annealing. In CDC’85 [984], pages 761–767, 1985.
doi: 10.1109/CDC.1985.268600.
1918. Mamoru Mitsuishi, Kanji Ueda, and Fumihiko Kimura, editors. Manufacturing Systems and
Technologies for the New Frontier – The 41st CIRP Conference on Manufacturing Systems,
May 26–28, 2008, Hongo Campus, The University of Tokyo: Tokyo, Japan. Springer-Verlag
London Limited: London, UK. Library of Congress Control Number (LCCN): 2008924671.
1919. Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algo-
rithms and Probabilistic Analysis. Cambridge University Press: Cambridge, UK, 2005.
isbn: 0-521-83540-2. Google Books ID: 0bAYl6d7hvkC. OCLC: 55845691, 58999923,
and 502475686. Library of Congress Control Number (LCCN): 2004054540. GBV-
Identification (PPN): 390183326 and 555603164.
1920. Jun’ichi Mizoguchi, Hitoshi Hemmi, and Katsunori Shimohara. Production Genetic Algo-
rithms for Automated Hardware Design Through an Evolutionary Process. In CEC’94 [1891],
pages 661–664, volume 2, 1994. doi: 10.1109/ICEC.1994.349980. INSPEC Accession Num-
ber: 4812352.
1921. Arvind S. Mohais, Sven Schellenberg, Maksud Ibrahimov, Neal Wagner, and Zbigniew
Michalewicz. An Evolutionary Approach to Practical Constraints in Scheduling: A Case-
Study of the Wine Bottling Problem. In Variants of Evolutionary Algorithms for Real-World
Applications [567], Chapter 2, pages 31–58. Springer-Verlag: Berlin/Heidelberg, 2011–2012.
doi: 10.1007/978-3-642-23424-8 2.
1922. Masoud Mohammadian, Ruhul A. Sarker, and Xin Yao, editors. Computational Intelligence
in Control. Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA,
2003. isbn: 1591400376. Google Books ID: N3m6XYXL5K0C.
1923. Masoud Mohammadian, Lotfi A. Zadeh, and Stephen Grossberg, editors. Proceedings
of the International Conference on Computational Intelligence for Modelling, Control and
Automation and International Conference on Intelligent Agents, Web Technologies and
Internet Commerce Vol-2 (CIMCA-IAWTIC’06), November 28–December 1, 2006, Syd-
ney, NSW, Australia, volume 02. IEEE Computer Society Press: Los Alamitos, CA,
USA. isbn: 0-7695-2731-0. OCLC: 85810664 and 123929091. GBV-Identification
(PPN): 593426258. ID: E2731.
1924. Mukesh K. Mohania and A. Min Tjoa, editors. Proceedings of the First International Con-
ference DataWarehousing and Knowledge Discovery (DaWaK’99), August 30–September 1,
1999, Florence, Italy, volume 1676 in Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48298-9. isbn: 3-540-66458-0.
1925. Parviz Moin and William C. Reynolds, editors. Proceedings of the 1998 Summer Program,
July 5–31, 1998. NASA Center for Turbulence Research (CTR): Stanford, CA, USA. Fully
available at http://www.stanford.edu/group/ctr/Summer/SP98.html [accessed 2009-
06-16].

1926. Guillermo Molina and Enrique Alba Torres. Location Discovery in Wireless Sensor Networks
Using a Two-Stage Simulated Annealing. In EvoWorkshops’09 [1052], pages 11–20, 2009.
doi: 10.1007/978-3-642-01129-0 2.
1927. Daniel Molina Cabrera, Manuel Lozano Márquez, and Francisco Herrera Triguero. Memetic
Algorithm with Local Search Chaining for Large Scale Continuous Optimization Problems.
In CEC’09 [1350], pages 830–837, 2009. doi: 10.1109/CEC.2009.4983031. INSPEC Accession
Number: 10688603.
1928. Nicolas Monmarché, El-Ghazali Talbi, Pierre Collet, Marc Schoenauer, and Evelyne
Lutton, editors. Revised Selected Papers of the 8th International Conference on Ar-
tificial Evolution, Evolution Artificielle (EA’07), October 29–31, 2007, Tours, France,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-79305-2. isbn: 3-540-79304-6. Partly available at http://ea07.
hant.li.univ-tours.fr/ [accessed 2007-09-09]. Published in 2008.
1929. Raul Monroy, Gustavo Arroyo-Figueroa, Luis Enrique Sucar, and Juan Humberto Sossa
Azuela, editors. Advances in Artificial Intelligence, Proceedings of the Third Mexican In-
ternational Conference on Artificial Intelligence (MICAI’04), April 26–30, 2004, Mexico
City, México, volume 2972/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b96521. isbn: 3-540-21459-3.
1930. David J. Montana. Strongly Typed Genetic Programming. BBN Technical Report #7866,
Bolt Beranek and Newman Inc. (BBN): Cambridge, MA, USA, May 7, 1993. Fully available
at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/stgp.
ps.Z [accessed 2010-12-04]. CiteSeerx : 10.1.1.38.4986. See also [1931, 1932].
1931. David J. Montana. Strongly Typed Genetic Programming. BBN Technical Report
#7866, Bolt Beranek and Newman Inc. (BBN): Cambridge, MA, USA, March 25, 1994.
Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/
papers/stgp2.ps.Z [accessed 2010-12-04]. CiteSeerx : 10.1.1.38.4986. See also [1930, 1932].
1932. David J. Montana. Strongly Typed Genetic Programming. Evolutionary Computation, 3(2):
199–230, Summer 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.2.199.
Fully available at http://personal.d.bbn.com/˜dmontana/papers/stgp.pdf
and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/montana_stgpEC.html
[accessed 2011-11-23].

1933. David J. Montana and Talib Hussain. Adaptive Reconfiguration of Data Networks using
Genetic Algorithms. Applied Soft Computing, 4(4):433–444, October 2004, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2004.02.002. CiteSeerx : 10.1.1.95.1975.
See also [1935].
1934. David J. Montana and Jason Redi. Optimizing Parameters of a Mobile Ad Hoc Net-
work Protocol with a Genetic Algorithm. In GECCO’05 [304], pages 1993–1998, 2005.
doi: 10.1145/1068009.1068342. Fully available at http://vishnu.bbn.com/papers/
erni.pdf [accessed 2008-10-16]. CiteSeerx : 10.1.1.86.5827.
1935. David J. Montana, Talib Hussain, and Tushar Saxena. Adaptive Reconfiguration of Data
Networks using Genetic Algorithms. In GECCO’02 [1675], pages 1141–1149, 2002. See also
[1933].
1936. Proceedings of the 2nd International Workshop on Machine Learning (ML’83), 1983, Monti-
cello, IL, USA.
1937. The Seventh Metaheuristics International Conference (MIC’07), June 25–29, 2007, Montréal,
QC, Canada. Partly available at http://www.mic2007.ca/ [accessed 2007-09-12].
1938. H. L. Moore. Cours d’Économie Politique. By VILFREDO PARETO, Professeur à
l’Université de Lausanne. Vol. I. Pp. 430. 1896. Vol. II. Pp. 426. 1897. Lausanne: F. Rouge.
The ANNALS of the American Academy of Political and Social Science, 9(3):128–131, 1897,
SAGE Publications: Thousand Oaks, CA, USA and American Academy of Political and
Social Science: Philadelphia, PA, USA. doi: 10.1177/000271629700900314. See also [2127].
1939. Federico Morán, Alvaro Moreno, Juan Julián Merelo-Guervós, and Pablo Chacón, editors.
Advances in Artificial Life – Proceedings of the Third European Conference on Artificial
Life (ECAL’95), June 4–6, 1995, Granada, Spain, volume 929/1995 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-59496-5. isbn: 3-540-59496-5. Google
Books ID: 6CujMeca5S0C.
1940. Matthew Moran, Maciej Zaremba, and Tomas Vitvar. Service Discovery and Composition
with WSMX for SWS Challenge Workshop IV. In KWEB SWS Challenge Workshop [2450],
2007. Fully available at http://sws-challenge.org/workshops/2007-Innsbruck/
papers/4-SWSCha_IV_2pager.pdf. See also [582, 1210, 3057–3059].
1941. Jorge J. Moré. A Collection of Nonlinear Model Problems. In Computational Solution of
Nonlinear Systems of Equations – Proceedings of the 1988 SIAM-AMS Summer Seminar on
Computational Solution of Nonlinear Systems of Equations [73], pages 723–762, 1988. See
also [1942].
1942. Jorge J. Moré. A Collection of Nonlinear Model Problems. Technical Report MCS-P60-0289,
Argonne National Laboratory: Argonne, IL, USA, February 1989. See also [1941].
1943. Conwy Lloyd Morgan. On Modification and Variation. Science Magazine, 4(99):733–740,
November 20, 1896, American Association for the Advancement of Science (AAAS):
Washington, DC, USA. doi: 10.1126/science.4.99.733. See also [1944].
1944. Conwy Lloyd Morgan. On Modification and Variation. In Adaptive Individuals in Evolving
Populations: Models and Algorithms [255], pages 81–90. Addison-Wesley Longman Publish-
ing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [1943].
1945. Proceedings of the 10th International Conference on Machine Learning (ICML’93), June 27–
29, 1993, University of Massachusetts (UMASS): Amherst, MA, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-307-7.
1946. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence
(IJCAI’97-I), August 23–29, 1997, Nagoya, Japan, volume 1. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-97-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1947, 2260].
1947. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence
(IJCAI’97-II), August 23–29, 1997, Nagoya, Japan, volume 2. Morgan Kaufmann Publish-
ers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-97-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [1946, 2260].
1948. Naoki Mori, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to a Changing Environment
by Means of the Thermodynamical Genetic Algorithm. In PPSN IV [2818], pages 513–522,
1996. doi: 10.1007/3-540-61723-X 1015.
1949. Naoki Mori, Seiji Imanishi, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to Changing
Environments by Means of the Memory Based Thermodynamical Genetic Algorithm. In
ICGA’97 [166], pages 299–306, 1997.
1950. Naoki Mori, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to a Changing Environment
by Means of the Feedback Thermodynamical Genetic Algorithm. In PPSN V [866], pages
149–158, 1998. doi: 10.1007/BFb0056858.
1951. David E. Moriarty, Alan C. Schultz, and John J. Grefenstette. Evolutionary Algorithms for
Reinforcement Learning. Journal of Artificial Intelligence Research (JAIR), 11:241–276,
September 1999, AI Access Foundation: El Segundo, CA, USA. doi: 10.1613/jair.613. Fully
available at http://www.jair.org/media/613/live-613-1809-jair.pdf [accessed 2007-09-12].
CiteSeerx: 10.1.1.40.2231.

1952. Artemis Moroni, Fernando José Von Zuben, and Jonatas Manzolli. ArTbitration. In
GAVAM’00 [1452], pages 143–145, 2000. See also [1953].
1953. Artemis Moroni, Fernando José Von Zuben, and Jonatas Manzolli. ArTbitrariness in Mu-
sic (Musical ArTbitrariness). In GA’00 [2201], 2000. Fully available at http://www.
generativeart.com/on/cic/2000/MORONI.HTM [accessed 2009-06-18]. See also [1952].
1954. Robert Morris, editor. Deep Blue Versus Kasparov: The Significance for Artificial In-
telligence, Collected Papers from the 1997 AAAI Workshop (AAAI’97 WS), July 28,
1997. AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-031-6. GBV-Identification
(PPN): 242427014.
1955. Ronald Walter Morrison. Designing Evolutionary Algorithms for Dynamic Environments.
PhD thesis, George Mason University (GMU): Fairfax, VA, USA, 2002, Kenneth Alan De
Jong, Advisor. See also [1956].
1956. Ronald Walter Morrison. Designing Evolutionary Algorithms for Dynamic Environments,
volume 24 in Natural Computing Series. Springer New York: New York, NY, USA, June 2004.
isbn: 3-540-21231-0. Google Books ID: EaNkNDYq 8. OCLC: 55746331 and 57168232.
Library of Congress Control Number (LCCN): 2004102479. See also [1955].
1957. Ronald Walter Morrison and Kenneth Alan De Jong. A Test Problem Generator for
Non-Stationary Environments. In CEC’99 [110], pages 2047–2053, volume 3, 1999.
doi: 10.1109/CEC.1999.785526.
1958. Ronald Walter Morrison and Kenneth Alan De Jong. Measurement of Population
Diversity. In EA’01 [606], pages 1047–1074, 2001. doi: 10.1007/3-540-46033-0 3.
CiteSeerx : 10.1.1.108.1903.
1959. Joel N. Morse. Reducing the Size of the Nondominated Set: Pruning by Clustering. Comput-
ers & Operations Research, 7(1–2):55–66, 1980, Pergamon Press: Oxford, UK and Elsevier
Science Publishers B.V.: Essex, UK. doi: 10.1016/0305-0548(80)90014-3.
1960. Joel N. Morse, editor. Proceedings of the 4th International Conference on Multiple Crite-
ria Decision Making: Organizations, Multiple Agents With Multiple Criteria (MCDM’80),
August 10–15, 1980, University of Delaware: Newark, DE, USA, volume 190 in Lec-
ture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg.
isbn: 0387108211. Google Books ID: ms5-AAAACAAJ. OCLC: 7575489, 246648572,
310683024, 421655673, 470793740, and 612090473.
1961. Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Mar-
tial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program
C3P 826, California Institute of Technology (Caltech), Caltech Concurrent Computa-
tion Program (C3P): Pasadena, CA, USA, 1989. Fully available at http://www.each.
usp.br/sarajane/SubPaginas/arquivos_aulas_IA/memetic.pdf [accessed 2011-12-05].
CiteSeerx : 10.1.1.27.9474.
1962. Pablo Moscato. Memetic Algorithms. In Handbook of Applied Optimization [2125], Chapter
3.6.4, pages 157–167. Oxford University Press, Inc.: New York, NY, USA, 2002.
1963. Pablo Moscato and Carlos Cotta. A Gentle Introduction to Memetic Algorithms. In
Handbook of Metaheuristics [1068], Chapter 5, pages 105–144. Kluwer Academic Pub-
lishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 2003.
doi: 10.1007/0-306-48056-5 5. Fully available at http://www.lcc.uma.es/˜ccottap/
papers/handbook03memetic.pdf [accessed 2010-09-11]. CiteSeerx : 10.1.1.77.5300.
1964. Sanaz Mostaghim. Multi-objective Evolutionary Algorithms: Data structures, Convergence
and, Diversity, Elektrotechnik. PhD thesis, Universität Paderborn, Fakultät für Elektrotech-
nik, Informatik und Mathematik: Paderborn, Germany, Shaker Verlag GmbH: Aachen,
North Rhine-Westphalia, Germany, February 2005. isbn: 3-8322-3661-9. Google Books
ID: HoVLAAAACAAJ and pJpeOQAACAAJ. OCLC: 62900366.
1965. Jack Mostow and Chuck Rich, editors. Proceedings of the Fifteenth National Conference
on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Confer-
ence (AAAI’98, IAAI’98), July 26–30, 1998, Madison, WI, USA. AAAI Press: Menlo Park,
CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51098-7. Partly available
at http://www.aaai.org/Conferences/AAAI/aaai98.php and http://www.aaai.
org/Conferences/IAAI/iaai98.php [accessed 2007-09-06].
1966. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms, Cambridge Interna-
tional Series on Parallel Computation. Cambridge University Press: Cambridge, UK, 1995.
isbn: 0-521-47465-5. Google Books ID: QKVY4mDivBEC. OCLC: 31606869. GBV-
Identification (PPN): 180148303, 310077257, 508849136, 555592650, and 563593563.
LC Classification: QA274 .M68 1995.
1967. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. ACM Comput-
ing Surveys (CSUR), 28(1):33–37, March 1996, ACM Press: New York, NY, USA.
doi: 10.1145/234313.234327.
1968. VIII Metaheuristics International Conference (MIC’09), July 13–16, 2009, Mövenpick Hotel
Hamburg: Hamburg, Germany.
1969. Heinz Mühlenbein. Parallel Genetic Algorithms, Population Genetics and Combinatorial
Optimization. In WOPPLOT’89 [245], pages 398–406, 1989. doi: 10.1007/3-540-55027-5 23.
See also [1970].
1970. Heinz Mühlenbein. Parallel Genetic Algorithms, Population Genetics and Combinatorial
Optimization. In ICGA’89 [2414], pages 416–421, 1989. See also [1969].
1971. Heinz Mühlenbein. Evolution in Time and Space – The Parallel Genetic Algorithm. In
FOGA’90 [2562], pages 316–337, 1990. Fully available at http://muehlenbein.org/
parallel.PDF and http://www.iais.fraunhofer.de/fileadmin/images/pics/
Abteilungen/INA/muehlenbein/PDFs/evolutionary/I/1991_Evolution.PDF [accessed 2009-08-12].
CiteSeerx: 10.1.1.54.6763.

1972. Heinz Mühlenbein. How Genetic Algorithms Really Work – I. Mutation and Hillclimbing. In
PPSN II [1827], pages 15–26, 1992. Fully available at http://muehlenbein.org/mut92.
pdf [accessed 2010-09-11].
1973. Heinz Mühlenbein. The Equation for Response to Selection and its Use for Prediction.
Evolutionary Computation, 5(3):303–346, Fall 1997, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1997.5.3.303. Fully available at http://citeseer.ist.psu.edu/old/
730919.html [accessed 2009-08-21]. PubMed ID: 10021762.
1974. Heinz Mühlenbein and Gerhard Paaß. From Recombination of Genes to the Estima-
tion of Distributions I. Binary Parameters. In PPSN IV [2818], pages 178–187, 1996.
doi: 10.1007/3-540-61723-X 982. Fully available at ftp://ftp.ais.fraunhofer.de/
pub/as/ga/gmd_as_ga-96_04.ps [accessed 2009-08-20]. CiteSeerx : 10.1.1.7.7030. See
also [1976].
1975. Heinz Mühlenbein and Dirk Schlierkamp-Voosen. Predictive Models for the Breeder
Genetic Algorithm I: Continuous Parameter Optimization. Evolutionary Computation,
1(1):25–49, Spring 1993, MIT Press: Cambridge, MA, USA, Marc Schoenauer, edi-
tor. doi: 10.1162/evco.1993.1.1.25. Fully available at http://www.muehlenbein.org/
breeder93.pdf [accessed 2010-12-04]. CiteSeerx : 10.1.1.30.9381.
1976. Heinz Mühlenbein, Jürgen Bendisch, and Hans-Michael Voigt. From Recombination of Genes
to the Estimation of Distributions II. Continuous Parameters. In PPSN IV [2818], pages 188–
197, 1996. doi: 10.1007/3-540-61723-X 983. Fully available at http://www.amspr.gfai.
de/techreports/ppsn96-1.pdf [accessed 2009-08-20]. CiteSeerx : 10.1.1.9.2941. See also
[1974].
1977. Srinivas Mukkamala, Andrew H. Sung, and Ajith Abraham. Modeling Intrusion Detection
Systems using Linear Genetic Programming Approach. In Robert Orchard, Chunsheng Yang,
and Moonis Ali, editors, IEA/AIE’04 [2088], pages 633–642, 2004. Fully available at http://
www.rmltech.com/LGP%20Based%20IDS.pdf [accessed 2008-06-17].
1978. Jörg Müller, W. Lewis Johnson, and Barbara Hayes-Roth, editors. Proceedings of the First
International Conference on Autonomous Agents (AGENTS’97), February 5–8, 1997, Marina
del Rey, CA, USA. ACM Press: New York, NY, USA. isbn: 0-89791-877-0. Fully available
at http://sigart.acm.org/proceedings/agents97/ [accessed 2009-06-27]. Google Books
ID: ktPaAQAACAAJ. OCLC: 37622491 and 71484604. Sponsor: SIGGRAPH – ACM
Special Interest Group on Computer Graphics and Interactive Techniques.
1979. Masaharu Munetomo. Designing Genetic Algorithms for Adaptive Routing Algorithms
in the Internet. In GECCO’02 WS [224], pages 215–216, 2002. Fully available
at http://uk.geocities.com/markcsinclair/ps/etppf_mun.ps.gz [accessed 2009-09-
02]. CiteSeerx : 10.1.1.50.2448. Partly available at http://assam.cims.hokudai.
ac.jp/˜munetomo/documents/gecco99-ws.ppt and http://uk.geocities.com/
markcsinclair/ps/etppf_mun_slides.ps.gz [accessed 2009-09-02]. See also [1982].
1980. Masaharu Munetomo and David Edward Goldberg. Linkage Identification by Non-
monotonicity Detection for Overlapping Functions. IlliGAL Report 99005, Illinois Ge-
netic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of
General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL,
USA, January 1999. Fully available at http://www.illigal.uiuc.edu/pub/papers/
IlliGALs/99005.ps.Z [accessed 2008-10-17]. CiteSeerx : 10.1.1.33.9322. See also [1981].
1981. Masaharu Munetomo and David Edward Goldberg. Linkage Identification by Non-
monotonicity Detection for Overlapping Functions. Evolutionary Computation, 7(4):377–398,
Winter 1999, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1999.7.4.377. See also
[1980].
1982. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. An Adaptive Network Routing
Algorithm Employing Path Genetic Operators. In ICGA’97 [166], pages 643–649, 1997. See
also [1979, 1983, 1984].
1983. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. An Adaptive Routing Algorithm
with Load Balancing by a Genetic Algorithm. Transactions of the Information Processing
Society of Japan (IPSJ), 39(2):219–227, 1998. in Japanese. See also [1982].
1984. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. A Migration Scheme for the
Genetic Adaptive Routing Algorithm. In SMC’98 [1364], pages 2774–2779, volume 3, 1998.
doi: 10.1109/ICSMC.1998.725081. See also [1982].
1985. Durga Prasand Muni, Nikhil R. Pal, and Jyotirmoy Das. A Novel Approach to Design
Classifiers using Genetic Programming. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 8(2):183–196, April 2004, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2004.825567. INSPEC Accession Number: 7973773.
1986. Nitin Muttil and Shie-Yui Liong. Superior Exploration–Exploitation Balance in Shuf-
fled Complex Evolution. Journal of Hydraulic Engineering, 130(12):1202–1205, Decem-
ber 2004, ASCE (American Society of Civil Engineers) Publications: Reston, VA, USA.
doi: 10.1061/(ASCE)0733-9429(2004)130:12(1202).
1987. John Mylopoulos and Raymond Reiter, editors. Proceedings of the 12th International
Joint Conference on Artificial Intelligence (IJCAI’91-I), August 24–30, 1991, Sydney,
NSW, Australia, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-160-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-91-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [831, 1988, 2122].
1988. John Mylopoulos and Raymond Reiter, editors. Proceedings of the 12th International
Joint Conference on Artificial Intelligence (IJCAI’91-II), August 24–30, 1991, Sydney,
NSW, Australia, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-160-0. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-91-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [831, 1987, 2122].

1989. Francesco Nachira, Paolo Dini, Andrea Nicolai, Marion Le Louarn, and Lorena Rivera
Lèon, editors. Digital Business Ecosystems. Office for Official Publications of the
European Communities: Luxembourg, 2007. isbn: 92-79-01817-5. Fully available
at http://www.digital-ecosystems.org/book/2006-4156_PROOF-DCS.pdf [accessed 2009-07-21].

1990. Thomas P. Nadeau and Toby J. Teorey. Achieving Scalability in OLAP Materialized
View Selection. In DOLAP’02 [2558], pages 28–34, 2002. doi: 10.1145/583890.583895.
CiteSeerx : 10.1.1.96.8347.
1991. Proceedings of the Third Asia-Pacific Conference on Simulated Evolution And Learning
(SEAL’00), October 25–27, 2000, Nagoya Congress Center: Nagoya, Japan.
1992. Third Online Workshop on Soft Computing (WSC3), August 1998, Nagoya, Japan.
1993. Fourth Online Workshop on Soft Computing (WSC4), September 1999, Nagoya, Japan.
1994. Akito Naito, Fumio Harashima, Shun-Ichi Amari, Toshio Fukuda, and Kunihiko Fukushima,
editors. Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN
’93), October 25–29, 1993, Nagoya Congress Center: Nagoya, Japan. IEEE Computer
Society Press: Los Alamitos, CA, USA. isbn: 0-7803-1421-2, 0-7803-1422-0, and
0-7803-1423-9. Library of Congress Control Number (LCCN): 93-79884. Catalogue
no.: 93CH3353-0.
1995. Tadashi Nakano and Tatsuya Suda. Adaptive and Evolvable Network Services. In GECCO’04-
I [751], pages 151–162, 2004. Fully available at http://www.cs.york.ac.uk/rts/docs/
GECCO_2004/Conference%20proceedings/papers/3102/31020151.pdf [accessed 2008-
08-29]. See also [1996, 1997].

1996. Tadashi Nakano and Tatsuya Suda. Self-Organizing Network Services with Evolutionary
Adaptation. IEEE Transactions on Neural Networks, 16(5):1269–1278, September 2005,
IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/TNN.2005.853421.
CiteSeerx : 10.1.1.74.9149. INSPEC Accession Number: 8586518. See also [1995].
1997. Tadashi Nakano and Tatsuya Suda. Applying Biological Principles to Designs of Network
Services. Applied Soft Computing, 7(3):870–878, June 2007, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.asoc.2006.04.006. See also [1995].
1998. Wonhong Nam, Hyunyoung Kil, and Dongwon Lee. Type-Aware Web Service Composi-
tion Using Boolean Satisfiability Solver. In CEC/EEE’08 [1346], pages 331–334, 2008.
doi: 10.1109/CECandEEE.2008.108. INSPEC Accession Number: 10475104. See also
[204, 1999, 2064, 2066, 3034].
1999. Wonhong Nam, Hyunyoung Kil, and Jungjae Lee. QoS-Driven Web Service Composi-
tion Using Learning-Based Depth First Search. In CEC’09 [1245], pages 507–510, 2009.
doi: 10.1109/CEC.2009.50. INSPEC Accession Number: 10839131. See also [662, 1571, 1998,
2064, 2066, 2067, 3034].
2000. Srini Narayanan and Sheila A. McIlraith. Simulation, Verification and Automated Compo-
sition of Web Services. In WWW’02 [26], pages 77–88, 2002. doi: 10.1145/511446.511457.
Fully available at http://reference.kfupm.edu.sa/content/s/i/simulation__
verification_and_automated_c_10676.pdf, http://www.cs.toronto.
edu/kr/publications/nar-mci-www11.pdf, http://www.cs.toronto.edu/
˜sheila/publications/nar-mci-www11.pdf, http://www.cs.uoregon.edu/
classes/08W/cis607sii/papers/NarayananMcIlraith02.pdf, and http://
www.icsi.berkeley.edu/˜snarayan/nar-mci-www11.pdf [accessed 2011-01-11].
CiteSeerx: 10.1.1.16.4126.
2001. NASA High Performance Computing and Communications Computational Aerosciences
Workshop (CAS’00), February 15–17, 2000, NASA Ames Research Center: Moffett Field,
CA, USA.
2002. Collected Abstracts for the First International Workshop on Learning Classifier Systems
(IWLCS’92), October 6–9, 1992, NASA Johnson Space Center: Houston, TX, USA. See
also [2534].
2003. 9th Conference on Technologies and Applications of Artificial Intelligence (TAAI’04), Novem-
ber 5–6, 2004, National Chengchi University (NCCU): Muzha, Wenshan District, Taipei City,
Taiwan. Fully available at http://www.taai.org.tw/taai2004/ [accessed 2010-07-30].
2004. 10th Conference on Technologies and Applications of Artificial Intelligence (TAAI’05), De-
cember 2–3, 2005, National University of Kaohsiung: Kaohsiung, Taiwan. Fully available
at http://www.taai.org.tw/taai2005/ [accessed 2010-07-30].
2005. Techniques foR Implementing Constraint Programming Systems (TRICS). Technical Report
TRA9/00, National University of Singapore, School of Computing: Singapore, September 22,
2000, Singapore. Workshop in conjunction with CP 2000.
2006. 12th Conference on Technologies and Applications of Artificial Intelligence (TAAI’07),
November 16–17, 2007, National Yunlin University of Science and Technology: Douliou,
Yunlin, Taiwan.
2007. Bart Naudts and Alain Verschoren. Epistasis on Finite and Infinite Spaces. In Inter-
Symp’96 [1412], pages 19–23, 1996. CiteSeerx : 10.1.1.32.6455.
2008. Bart Naudts and Alain Verschoren. Epistasis and Deceptivity. Bulletin of the Belgian Mathe-
matical Society – Simon Stevin, 6(1):147–154, 1999, Belgian Mathematical Society: Campus
de la Plaine, Brussels, Belgium. Fully available at http://projecteuclid.org/euclid.
bbms/1103149975 [accessed 2009-07-11]. CiteSeerx : 10.1.1.32.4912. Zentralblatt MATH
identifier: 0915.68143. Mathematical Reviews number (MathSciNet): MR1674709.
2009. Bart Naudts, Dominique Suys, and Alain Verschoren. Generalized Royal Road Functions
and their Epistasis. Computers and Artificial Intelligence, 19(4), 2000, Slovak Academy of
Science: Bratislava, Slovakia.
2010. Boris Naujoks, Nicola Beume, and Michael T.M. Emmerich. Multi-objective Optimisa-
tion using S-metric Selection: Application to Three-Dimensional Solution Spaces. In
CEC’05 [641], pages 1282–1289, volume 2, 2005. doi: 10.1109/CEC.2005.1554838. Fully
available at http://www.lania.mx/˜ccoello/naujoks05.pdf.gz [accessed 2009-07-18].
CiteSeerx : 10.1.1.60.4181. INSPEC Accession Number: 8715411.
2011. Yehuda Naveh and Pierre Flener, editors. Fifth International Workshop on Local Search Tech-
niques in Constraint Satisfaction (LSCS’08), September 18, 2008, Sydney, NSW, Australia.
Fully available at http://sysrun.haifa.il.ibm.com/hrl/lscs2008/ [accessed 2009-06-
24].

2012. Bernhard Nebel, editor. Proceedings of the Seventeenth International Joint Conference on
Artificial Intelligence (IJCAI’01), August 4–10, 2001, Seattle, WA, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-777-3 and 1-55860-813-3.
Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-2001/content/content.
htm [accessed 2008-04-01]. Google Books ID: BHlFAAAAYAAJ. OCLC: 48481676. See also [1810].
2013. Nadia Nedjah and Luiza de Macedo Mourelle, editors. Swarm Intelligent Systems, vol-
ume 26 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006.
doi: 10.1007/978-3-540-33869-7. isbn: 3-540-33868-3. Google Books ID: 32PmGAAACAAJ
and MJOvAAAACAAJ. Library of Congress Control Number (LCCN): 2006925434. Series
editor: Janusz Kacprzyk.
2014. Nadia Nedjah, Luiza de Macedo Mourelle, Ajith Abraham, and Mario Köppen, editors.
5th International Conference on Hybrid Intelligent Systems (HIS’05), November 6–9, 2005,
Pontifical Catholic University of Rio de Janeiro (PUC-Rio): Rio de Janeiro, RJ, Brazil.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2457-5. Partly available
at http://his05.hybridsystem.com/ [accessed 2007-09-01].
2015. Nadia Nedjah, Enrique Alba Torres, and Luiza de Macedo Mourelle, editors. Parallel Evolu-
tionary Computations, volume 22/2006 in Studies in Computational Intelligence. Springer-
Verlag: Berlin/Heidelberg, June 2006. doi: 10.1007/3-540-32839-4. isbn: 3-540-32837-8
and 3-540-32839-4. Google Books ID: J36LHQAACAAJ and kOC8GAAACAAJ. Library of
Congress Control Number (LCCN): 2006921792.
2016. Nadia Nedjah, Leandro dos Santos Coelho, and Luiza de Macedo Mourelle, editors.
Multi-Objective Swarm Intelligent Systems: Theory & Experiences, volume 261/2010
in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2010.
doi: 10.1007/978-3-642-05165-4. Library of Congress Control Number (LCCN): 2009940420.
2017. John Ashworth Nelder and Roger A. Mead. A Simplex Method for Function Minimization.
The Computer Journal, Oxford Journals, 7(4):308–313, January 1965, British Computer Soci-
ety: Swindon, UK. doi: 10.1093/comjnl/7.4.308. Fully available at http://www.garfield.
library.upenn.edu/classics1979/A1979HZ22100001.pdf and http://www.
rupley.com/˜jar/Rupley/Code/src/simplex/nelder-mead-simplex.pdf [accessed 2010-12-24]. CiteSeerx : 10.1.1.144.5920.
2018. Kevin M. Nelson. A Comparison of Evolutionary Programming and Genetic Algorithms for
Electronic Part Placement. In EP’95 [1848], pages 503–519, 1995.
2019. Venkata Ranga Neppalli, Chuen-Lung Chen, and Jatinder N. D. Gupta. Genetic Algo-
rithms for the two-stage bicriteria Flowshop Problem. European Journal of Operational Re-
search (EJOR), 95(2), December 6, 1996, Elsevier Science Publishers B.V.: Amsterdam, The
Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Nether-
lands. doi: 10.1016/0377-2217(95)00275-8.
2020. Ferrante Neri, Niko Kotilainen, and Mikko Vapa. An Adaptive Global-Local Memetic Al-
gorithm to Discover Resources in P2P Networks. In EvoWorkshops’07 [1050], pages 61–70,
2007. doi: 10.1007/978-3-540-71805-5 7.
2021. Ferrante Neri, Jari Toivanen, Giuseppe Leonardo Cascella, and Yew-Soon Ong.
An Adaptive Multimeme Algorithm for Designing HIV Multidrug Therapies.
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB),
4(2), April 2007, IEEE Computer Society Press: Los Alamitos, CA, USA.
doi: 10.1109/TCBB.2007.070202 and 10.1109/tcbb.2007.1039. Fully available at http://
ntu-cg.ntu.edu.sg/ysong/journal/Adaptive-Multimeme-HIV.pdf [accessed 2011-04-
23]. CiteSeerx : 10.1.1.124.5677. PubMed ID: 17473319.
2022. Ferrante Neri, Jari Toivanen, and Raino A. E. Mäkinen. An Adaptive Evolutionary Al-
gorithm with Intelligent Mutation Local Searchers for Designing Multidrug Therapies for
HIV. Applied Intelligence – The International Journal of Artificial Intelligence, Neural Net-
works, and Complex Problem-Solving Technologies, 27(3):219–235, December 2007, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10489-007-0069-8.
2023. Arnold Neumaier. Global Optimization and Constraint Satisfaction. In GICOLAG’06 [2024],
2006. Fully available at http://www.mat.univie.ac.at/˜neum/glopt.html [ac-
cessed 2009-06-11].
2024. Arnold Neumaier, Immanuel Bomze, Laurence A. Wolsey, and Ioannis Emiris, editors. Global
Optimization – Integrating Convexity, Optimization, Logic Programming, and Computational
Algebraic Geometry (GICOLAG’06), December 4–8, 2006, International Erwin Schrödinger
Institute for Mathematical Physics (ESI): Vienna, Austria. Fully available at http://www.
mat.univie.ac.at/˜neum/gicolag.html [accessed 2009-06-11].
2025. Frank Neumann and Ingo Wegener. Can Single-Objective Optimization Profit from Multi-
objective Optimization? In Multiobjective Problem Solving from Nature – From Concepts
to Applications [1554], pages 115–130. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-3-540-72964-8 6.
2026. José Neves, Manuel Filipe Santos, and José Manuel Machado, editors. Progress in Artificial Intelligence – Proceedings of the 13th Portuguese Conference on Artificial Intelligence; Workshops: GAIW, AIASTS, ALEA, AMITA, BAOSW, BI, CMBSB, IROBOT,
MASTA, STCS, and TEMA (EPIA’07), December 3–7, 2007, Guimarães, Portugal, volume
4874 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-77002-2.
isbn: 3-540-77000-3. Library of Congress Control Number (LCCN): 2007939905.
2027. Proceedings of the 12th IEEE Congress on Evolutionary Computation (CEC’11), June 5–8,
2011, New Orleans, LA, USA.
2028. Mark E. J. Newman and Robin Engelhardt. Effect of Neutral Selection on the Evolution of
Molecular Species. Proceedings of the Royal Society B: Biological Sciences, 265(1403):1333–1338, July 22, 1998, Royal Society Publishing: London, UK. doi: 10.1098/rspb.1998.0438.
Fully available at http://www.santafe.edu/media/workingpapers/98-01-001.
pdf [accessed 2010-12-04]. CiteSeerx : 10.1.1.43.9103. arXiv ID: adap-org/9712005v1.
2029. Sir Isaac Newton. Philosophiæ Naturalis Principia Mathematica. Apud GUIL & Joh. INNYS,
Regiæ Societatis Typographos: London, UK, July 5, 1687. Fully available at http://gdz.
sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=294010 [accessed 2008-08-
21].
2030. Jerzy Neyman and Egon Sharpe Pearson. On the Use and Interpretation of Certain Test Cri-
teria for Purposes of Statistical Inference, Part I. Biometrika, 20A(1-2):175–240, July 1928,
Oxford University Press, Inc.: New York, NY, USA. doi: 10.1093/biomet/20A.1-2.175.
Reprinted in [2031].
2031. Jerzy Neyman and Egon Sharpe Pearson. Joint Statistical Papers. Cambridge Univer-
sity Press: Cambridge, UK, University of California Press: Berkeley, CA, USA, Hodder
Arnold: London, UK, and Lubrecht & Cramer, Ltd.: Port Jervis, NY, USA, October 1967.
isbn: 0520009916 and 0852647069. Google Books ID: KH1GHgAACAAJ. OCLC: 527105.
asin: B000JM5XB0 and B000P0O0T2.
2032. Xuan Hoai Nguyen. A Flexible Representation for Genetic Programming: Lessons from
Natural Language Processing. PhD thesis, University of New South Wales, University Col-
lege, School of Information Technology and Electrical Engineering: Sydney, NSW, Aus-
tralia, December 2004. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/hoai_thesis.html and http://www.cs.ucl.ac.uk/staff/W.Langdon/
ftp/papers/hoai_thesis.tar.gz [accessed 2010-07-26]. OCLC: 224847044, 455148847,
and 648818432.
2033. Xuan Hoai Nguyen, Robert Ian McKay, and Daryl Leslie Essam. Some Experimental Results
with Tree Adjunct Grammar Guided Genetic Programming. In EuroGP’02 [977], pages
229–238, 2002. doi: 10.1007/3-540-45984-7 22. Fully available at http://sc.snu.ac.kr/
PAPERS/TAGGGP.pdf [accessed 2007-09-09].
2034. Xuan Hoai Nguyen, Robert Ian McKay, Daryl Leslie Essam, and R. Chau. Solv-
ing the Symbolic Regression Problem with Tree-Adjunct Grammar Guided Genetic
Programming: The Comparative Results. In CEC’02 [944], pages 1326–1331, 2002.
doi: 10.1109/CEC.2002.1004435. Fully available at http://sc.snu.ac.kr/PAPERS/
MCEC2002.pdf [accessed 2007-09-09]. INSPEC Accession Number: 7328878.
2035. Giuseppe Nicosia, Vincenzo Cutello, Peter J. Bentley, and Jonathan Timmis, editors. Proceedings of the 3rd International Conference on Artificial Immune Systems (ICARIS’04),
September 13–16, 2004, Catania, Sicily, Italy, volume 3239 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-23097-1.
2036. Søren R. Nielsen, David Pisinger, and Per Marquardsen. Automatic Transformation of
Constraint Satisfaction Problems to Integer Linear Form – An Experimental Study. In
TRICS [2005], 2000. CiteSeerx : 10.1.1.36.9656 and 10.1.1.37.2112.
2037. Stefan Niemczyk. Ein Modellproblem mit einstellbarer Schwierigkeit zur Evaluierung Evo-
lutionärer Algorithmen. Master’s thesis, University of Kassel, Fachbereich 16: Elektrotech-
nik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, May 5, 2008, Thomas
Weise, Advisor, Kurt Geihs and Albert Zündorf, Committee members. Fully available
at http://www.it-weise.de/documents/files/N2008MPMES.pdf.
2038. Stefan Niemczyk and Thomas Weise. A General Framework for Multi-Model Estima-
tion of Distribution Algorithms. Technical Report, University of Kassel, Fachbereich
16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany,
March 10, 2010. Fully available at http://www.it-weise.de/documents/files/
NW2010AGFFMMEODA.pdf [accessed 2010-06-19]. See also [2917].
2039. Nils J. Nilsson, editor. Proceedings of the 3rd International Joint Conference on Artificial
Intelligence (IJCAI’73), August 20–23, 1973, Stanford University: Stanford, CA, USA. Fully
available at http://dli.iiit.ac.in/ijcai/IJCAI-73/CONTENT/content.htm [ac-
cessed 2008-04-01].
2040. Jorge Nocedal and Stephen J. Wright. Numerical Optimization, volume 18 in Springer Series
in Operations Research and Financial Engineering. Springer-Verlag GmbH: Berlin, Germany,
2nd edition, 2006. doi: 10.1007/978-0-387-40065-5. isbn: 0-387-98793-2. Google Books
ID: 2B9LUYAP3n4C, eNlPAAAAMAAJ, and epc5fX0lqRIC. Library of Congress Control
Number (LCCN): 2006923897.
2041. David Noever and Subbiah Baskaran. Steady-State vs. Generational Genetic Algorithms: A
Comparison of Time Complexity and Convergence Properties. Technical Report 92-07-032,
Santa Fe Institute: Santa Fé, NM, USA, July 1992.
2042. Andreas Nolte and Rainer Schrader. A Note on the Finite Time Behaviour of Simulated
Annealing. In SOR’96 [3092], pages 175–180, 1996. See also [2043].
2043. Andreas Nolte and Rainer Schrader. A Note on the Finite Time Behaviour of Simulated An-
nealing. Mathematics of Operations Research (MOR), 25(3):476–484, August 2000, Institute
for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA.
doi: 10.1287/moor.25.3.476.12211. Fully available at http://www.zaik.de/˜paper/
unzip.html?file=zaik1999-347.ps [accessed 2009-07-03]. CiteSeerx : 10.1.1.40.9456.
Revised version from March 1999. See also [2042].
2044. Peter Nordin. A Compiling Genetic Programming System that Directly Manipulates the
Machine Code. In Advances in Genetic Programming Volume 1 [1539], Chapter 14, pages
311–331. MIT Press: Cambridge, MA, USA, 1994. Machine code GP for Sun SPARC and i868.
2045. Peter Nordin. Evolutionary Program Induction of Binary Machine Code and its Appli-
cations. PhD thesis, Universität Dortmund, Fachbereich Informatik: Dortmund, North
Rhine-Westphalia, Germany, 1997. Krehl Verlag: Münster, Germany. isbn: 3931546071.
OCLC: 40285878. GBV-Identification (PPN): 231072783. asin: 3931546071 and
B0033DLUWS.
2046. Peter Nordin and Wolfgang Banzhaf. Evolving Turing-Complete Programs for a
Register Machine with Self-Modifying Code. In ICGA’95 [883], pages 318–325,
1995. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
Nordin_1995_tcp.html [accessed 2008-09-16]. CiteSeerx : 10.1.1.34.4526.
2047. Peter Nordin and Wolfgang Banzhaf. Programmatic Compression of Images and Sound.
In GP’96 [1609], pages 345–350, 1996. Fully available at http://www.cs.mun.ca/˜banzhaf/papers/gp96.pdf [accessed 2010-08-22]. CiteSeerx : 10.1.1.32.7927.
2048. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns
and Destructive Crossover in Genetic Programming. In Proceedings of the Workshop
on Genetic Programming: From Theory to Real-World Applications [2326], pages 6–
22, 1995. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
nordin_1995_introns.html [accessed 2010-08-22]. CiteSeerx : 10.1.1.38.8934. See also
[2049, 2051].
2049. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns and De-
structive Crossover in Genetic Programming. SysReport 3/95, Universität Dortmund, Fach-
bereich Informatik, Systems Analysis: Dortmund, North Rhine-Westphalia, Germany, 1995.
Fully available at http://www.cs.mun.ca/˜banzhaf/papers/ML95.pdf [accessed 2010-08-22]. CiteSeerx : 10.1.1.38.8934. See also [2048, 2051].
2050. Peter Nordin, Wolfgang Banzhaf, and Frank D. Francone. Efficient Evolution of Machine
Code for CISC Architectures using Instruction Blocks and Homologous Crossover. In Ad-
vances in Genetic Programming [2569], Chapter 12, pages 275–299. MIT Press: Cam-
bridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/˜wbl/aigp3/
ch12.pdf and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/nordin_1999_
aigp3.html [accessed 2008-09-16].
2051. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns and De-
structive Crossover in Genetic Programming. In Advances in Genetic Programming II [106],
pages 111–134. MIT Press: Cambridge, MA, USA, 1996. See also [2048, 2049].
2052. Peter Nordin, Wolfgang Banzhaf, and Markus F. Brameier. Evolution of a World
Model for a Miniature Robot using Genetic Programming. Robotics and Autonomous Systems, 25(1–2):105–116, October 31, 1998, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. doi: 10.1016/S0921-8890(98)00004-9. Fully available at http://
www.cs.bham.ac.uk/˜wbl/biblio/gp-html/Nordin_1998_RAS.html [accessed 2008-09-16]. CiteSeerx : 10.1.1.54.1211.
2053. Peter Nordin, Frank Hoffmann, Frank D. Francone, and Markus F. Brameier. AIM-
GP and Parallelism. In CEC’99 [110], pages 1059–1066, 1999. Fully available
at http://www.cs.mun.ca/˜banzhaf/papers/cec_parallel.pdf [accessed 2008-09-16].
CiteSeerx : 10.1.1.6.5450.
2054. Michael G. Norman and Pablo Moscato. A Competitive and Cooperative Approach to Com-
plex Combinatorial Search. Caltech Concurrent Computation Program 790, California Insti-
tute of Technology (Caltech), Caltech Concurrent Computation Program (C3P): Pasadena,
CA, USA, 1989. CiteSeerx : 10.1.1.44.776. See also [2055].
2055. Michael G. Norman and Pablo Moscato. A Competitive and Cooperative Approach to Com-
plex Combinatorial Search. In JAIIO’91 [514], pages 3.15–3.29, 1991. Also published as
Technical Report Caltech Concurrent Computation Program, Report. 790, California Insti-
tute of Technology, Pasadena, California, USA, 1989. See also [2054].
2056. Proceedings of the Second Hawaii International Conference on System Sciences (HICSS’69),
January 22–24, 1969, University of Hawaii at Manoa: Honolulu, HI, USA. North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands.
2057. M. Nowak and Peter K. Schuster. Error Thresholds of Replication in Finite Populations: Mutation Frequencies and the Onset of Muller’s Ratchet. Journal of Theoretical Biology, 137
(4):375–395, April 20, 1989, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
Imprint: Academic Press Professional, Inc.: San Diego, CA, USA, D. Kirschner, Y. Iwasa,
and L. Wolpert, editors. PubMed ID: 2626057.
2058. Martin J. Oates and David Wolfe Corne. Global Web Server Load Balancing using Evo-
lutionary Computational Techniques. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 5(4):297–312, August 2001, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s005000100103.
2059. Shigeru Obayashi. Multidisciplinary Design Optimization of Aircraft Wing Planform
Based on Evolutionary Algorithms. In SMC’98 [1364], pages 3148–3153, volume 4, 1998.
doi: 10.1109/ICSMC.1998.726486. Fully available at http://www.lania.mx/˜ccoello/
obayashi98a.pdf.gz [accessed 2009-06-16]. INSPEC Accession Number: 6195712.
2060. Shigeru Obayashi and Daisuke Sasaki. Visualization and Data Mining of Pareto
Solutions Using Self-Organizing Map. In EMO’03 [957], pages 796–806, 2003.
doi: 10.1007/3-540-36970-8 56. Fully available at http://www.ifs.tohoku.ac.jp/
edge/library/Final%20EMO2003.pdf [accessed 2009-07-18].
2061. Shigeru Obayashi, Kalyanmoy Deb, Carlo Poloni, Tomoyuki Hiroyasu, and Tadahiko
Murata, editors. Proceedings of the Fourth International Conference on Evolution-
ary Multi-Criterion Optimization (EMO’07), March 5–8, 2007, Matsushima, Sendai,
Japan, volume 4403/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/978-3-540-70928-2. isbn: 3-540-70927-4, 3-540-70928-2, and
6610865078. Google Books ID: f9Lcf6imJ8MC. OCLC: 84611501, 180728448,
315394594, 318300130, and 403748482. Library of Congress Control Number
(LCCN): 2007921125.
2062. Gabriela Ochoa, Inman Harvey, and Hilary Buxton. Error Thresholds and Their Relation to
Optimal Mutation Rates. In ECAL’99 [934], pages 54–63, 1999. doi: 10.1007/3-540-48304-7.
Fully available at ftp://ftp.cogs.susx.ac.uk/pub/users/inmanh/ecal99.ps.gz [accessed 2009-07-10]. CiteSeerx : 10.1.1.43.9263.
2063. Christopher K. Oei, David Edward Goldberg, and Shau-Jin Chang. Tournament Selection,
Niching, and the Preservation of Diversity. IlliGAL Report 91011, Illinois Genetic Algo-
rithms Laboratory (IlliGAL), Department of Computer Science, Department of General En-
gineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, Decem-
ber 1991. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/
91011.ps.Z [accessed 2008-03-15].
2064. Seog-Chan Oh, Byung-Won On, Eric J. Larson, and Dongwon Lee. BF*: Web Services
Discovery and Composition as Graph Search Problem. In EEE’05 [1371], pages 784–786,
2005. doi: 10.1109/EEE.2005.41. INSPEC Accession Number: 8530439. See also [317, 662,
1999, 2065–2067, 3034].
2065. Seog-Chan Oh, Hyunyoung Kil, Dongwon Lee, and Soundar R. T. Kumara. Algorithms
for Web Services Discovery and Composition Based on Syntactic and Semantic Service De-
scriptions. In CEC/EEE’06 [2967], pages 433–435, 2006. doi: 10.1109/CEC-EEE.2006.12.
Fully available at http://pike.psu.edu/publications/eee06.pdf [accessed 2010-12-10].
CiteSeerx : 10.1.1.77.9275. INSPEC Accession Number: 9189341. See also [318, 662,
1998, 2064, 2066, 2067, 3034].
2066. Seog-Chan Oh, John Jung-Woon Yoo, Hyunyoung Kil, Dongwon Lee, and Soundar R. T. Ku-
mara. Semantic Web-Service Discovery and Composition Using Flexible Parameter Match-
ing. In CEC/EEE’07 [1344], pages 533–536, 2007. doi: 10.1109/CEC-EEE.2007.86. INSPEC
Accession Number: 9868641. See also [319, 662, 1999, 2064, 2065, 2067, 3034].
2067. Seog-Chan Oh, Ju-Yeon Lee, Seon-Hwa Cheong, Soo-Min Lim, Min-Woo Kim, Sang-
Seok Lee, Jin-Bum Park, Sang-Do Noh, and Mye M. Sohn. WSPR*: Web-Service
Planner Augmented with A* Algorithm. In CEC’09 [1245], pages 515–518, 2009.
doi: 10.1109/CEC.2009.13. INSPEC Accession Number: 10839129. See also [662, 1571, 1999,
2064–2066, 3034].
2068. Miloš Ohlídal, Jiří Jaroš, Josef Schwarz, and Václav Dvořák. Evolutionary Design of OAB and
AAB Communication Schedules for Interconnection Network. In EvoWorkshops’06 [2341],
pages 267–278, 2006. doi: 10.1007/11732242 24.
2069. Tatsuya Okabe, Yaochu Jin, Bernhard Sendhoff, and Markus Olhofer. Voronoi-based Es-
timation of Distribution Algorithm for Multi-objective Optimization. In CEC’04 [1369],
pages 1594–1601, volume 2, 2004. doi: 10.1109/CEC.2004.1331086. Fully available
at http://www.honda-ri.org/intern/hriliterature/PN%2008-04 and http://
www.soft-computing.de/cec-2004-1445-final.pdf [accessed 2009-08-29]. INSPEC Ac-
cession Number: 8229117.
2070. Nicole Oldham, Christopher Thomas, Amit P. Sheth, and Kunal Verma. METEOR-S Web
Service Annotation Framework with Machine Learning Classification. In SWSWPC’04 [493],
pages 137–146, 2004. doi: 10.1007/978-3-540-30581-1 12.
2071. Gina Oliveira and Stefano Vita. A Multi-Objective Evolutionary Algorithm with ǫ-dominance
to Calculate Multicast Routes with QoS Requirements. In CEC’09 [1350], pages 182–189,
2009. doi: 10.1109/CEC.2009.4982946. INSPEC Accession Number: 10688528.
2072. Anne L. Olsen. Penalty Functions and the Knapsack Problem. In CEC’94 [1891], pages
554–558, volume 2, 1994. doi: 10.1109/ICEC.1994.350000.
2073. Donald M. Olsson and Lloyd S. Nelson. The Nelder-Mead Simplex Procedure for Function
Minimization. Technometrics, 17(1):45–51, February 1975, American Statistical Association
(ASA): Alexandria, VA, USA and American Society for Quality (ASQ): Milwaukee, WI,
USA. JSTOR Stable ID: 1267998.
2074. Mihai Oltean. Searching for a Practical Evidence of the No Free Lunch Theorems. In
BioADIT’04 [1399], pages 472–483, 2004. doi: 10.1007/b101281. Fully available at http://
www.cs.ubbcluj.ro/˜moltean/oltean_bioadit_springer2004.pdf [accessed 2009-03-
01].
2075. Beatrice M. Ombuki-Berman and Franklin T. Hanshar. Using Genetic Algorithms for Multi-
depot Vehicle Routing. In Bio-inspired Algorithms for the Vehicle Routing Problem [2162],
pages 77–99. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3 4.
2076. Proceedings of the 2007 IEEE Symposium on Artificial Life (CI-ALife’07), April 1–5, 2007,
Honolulu, HI, USA. Omnipress: Madison, WI, USA and IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0701-X.
2077. Michael O’Neill and Conor Ryan, editors. Grammatical Evolution Workshop (GEWS’02),
2002, Roosevelt Hotel: New York, NY, USA. AAAI Press: Menlo Park, CA, USA. Fully
available at http://www.grammatical-evolution.org/gews2002/ [accessed 2009-07-09].
Partly available at http://www.grammatical-evolution.org/pubs.html [accessed 2009-
07-09]. Part of [1790].
2078. Michael O’Neill and Conor Ryan, editors. Proceedings of the Second Grammatical Evolution
Workshop (GEWS’03), July 12, 2003, Holiday Inn Chicago: Chicago, IL, USA. See also
[484, 485].
2079. Michael O’Neill and Conor Ryan, editors. The 3rd Grammatical Evolution Workshop
(GEWS’04), June 26, 2004, Red Lion Hotel: Seattle, WA, USA. Published as CD. See
also [751, 752].
2080. Michael O’Neill, Leonardo Vanneschi, Steven Matt Gustafson, Anna Isabel Esparcia-
Alcázar, Ivanoe de Falco, Antonio Della Cioppa, and Ernesto Tarantino, editors. Ge-
netic Programming – Proceedings of the 11th European Conference on Genetic Program-
ming (EuroGP’08), March 26–28, 2008, Naples, Italy, volume 4971/2008 in Theoret-
ical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-78671-9.
isbn: 3-540-78670-8. Partly available at http://evostar.iti.upv.es/index.
php?option=com_content&view=article&id=46 [accessed 2011-02-28]. Google Books
ID: OAHUcHbXoNkC. OCLC: 280852852. Library of Congress Control Number
(LCCN): 2008922954.
2081. Yew-Soon Ong and Andy J. Keane. Meta-Lamarckian Learning in Memetic Algorithms.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 8(2):99–110, 2004, IEEE Com-
puter Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.819944. Fully available
at http://eprints.soton.ac.uk/22794/1/ong_04.pdf [accessed 2011-04-23]. INSPEC
Accession Number: 7973767.
2082. Seizo Onoe, Mohsen Guizani, Hsiao-Hwa Chen, and Mamoru Sawahashi, editors. Proceed-
ings of the International Conference on Wireless Communications and Mobile Computing
(IWCMC’06), July 3–6, 2006, Vancouver, BC, Canada. ACM Press: New York, NY, USA.
isbn: 1-59593-306-9. OCLC: 288959906.
2083. Godfrey C. Onwubolu and B. V. Babu, editors. New Optimization Techniques in Engineering,
volume 141 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin,
Germany, 2004. isbn: 3-540-20167-X. Google Books ID: Yrsq-Pz0kAAC.
2084. Aleksandr Ivanovich Oparin. The Origin of Life. Courier Dover Publications: North
Chelmsford, MA, USA, 2nd edition, 1953–2003. isbn: 0486495221 and 0486602133.
Partly available at http://www.valencia.edu/˜orilife/textos/The%20Origin
%20of%20Life.pdf. Google Books ID: 3CYJOwAACAAJ, 65c5AAAAMAAJ, Jv8psJCtI0gC,
lDqJwAACAAJ, kW8ZKwAACAAJ, and qZB2o69KWSwC.
2085. Stan Openshaw. Building an Automated Modelling System to Explore a Universe of Spatial
Interaction Models. Geographical Analysis – An International Journal of Theoretical Geog-
raphy, 26(1):31–46, 1998, Wiley Periodicals, Inc.: Wilmington, DE, USA and Ohio State
University, Department of Geography: Columbus, OH, USA.
2086. Stan Openshaw and Ian Turton. Building New Spatial Interaction Models Using
Genetic Programming. In AISB’94 [936], 1994. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/openshaw_1994_bnsim.html and http://
www.geog.leeds.ac.uk/papers/94-1/94-1.pdf [accessed 2008-09-16].
2087. Abstracts of the National Conference of the Operations Research Society of Japan. Operations
Research Society of Japan (ORSJ): Tokyo, Japan.
2088. Robert Orchard, Chunsheng Yang, and Moonis Ali, editors. Proceedings of the 17th
International Conference on Industrial and Engineering Applications of Artificial Intelli-
gence and Expert Systems (IEA/AIE’04), May 17–20, 2004, Ottawa, ON, Canada, vol-
ume 3029/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b97304.
isbn: 3-540-22007-0. Google Books ID: T52Lf6oWKuMC. OCLC: 55592278, 59669961,
66574763, and 249601813. Library of Congress Control Number (LCCN): 2004105117.
2089. Una-May O’Reilly, Gwoing Tina Yu, Rick L. Riolo, and Bill Worzel, editors. Genetic Pro-
gramming Theory and Practice II, Proceedings of the Second Workshop on Genetic Pro-
gramming (GPTP’04), May 13–15, 2004, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA, volume 8 in Genetic Programming Series.
Kluwer Publishers: Boston, MA, USA. doi: 10.1007/b101112. isbn: 0-387-23253-2 and
0-387-23254-0.
2090. Agustin Orfila, Juan M. Estevez-Tapiador, and Arturo Ribagorda. Evolving High-Speed,
Easy-to-Understand Network Intrusion Detection Rules with Genetic Programming. In
EvoWorkshops’09 [1052], pages 93–98, 2009. doi: 10.1007/978-3-642-01129-0 11.
2091. Leslie E. Orgel. Prebiotic Adenine Revisited: Eutectics and Photochemistry. Origins of Life
and Evolution of the Biosphere, 34(4):361–369, August 2004, Springer Netherlands: Dor-
drecht, Netherlands. doi: 10.1023/B:ORIG.0000029882.52156.c2.
2092. Helcio R. B. Orlande, editor. 4th International Conference on Inverse Problems in En-
gineering: Theory and Practice – Inverse Problems in Engineering: Theory and Practice
(ICIPE’02), May 26–31, 2002, Angra dos Reis, Rio de Janeiro, Brazil. United Engineering
Foundation (UEF): New York, NY, USA. doi: 10.1080/089161502753341870. Fully available
at http://www.lttc.coppe.ufrj.br/4icipe/ [accessed 2009-07-03].
2093. Late Breaking Papers at Genetic and Evolutionary Computation Conference (GECCO’99
LBP), July 13–17, 1999, Orlando, FL, USA. See also [211, 2502].
2094. The Second International Workshop on Learning Classifier Systems (IWLCS’99), July 13,
1999, Orlando, FL, USA. Co-located with GECCO-99. See also [211].
2095. Albert Orriols-Puig and Ester Bernadó-Mansilla. Evolutionary Rule-Based Systems
for Imbalanced Data Sets. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 13(3), February 2009, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s00500-008-0319-7.
2096. Albert Orriols-Puig, Jorge Casillas, and Ester Bernadó-Mansilla. Genetic-based Ma-
chine Learning Systems are Competitive for Pattern Recognition. Evolutionary
Intelligence, 1(3):209–232, October 2008, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/s12065-008-0013-9.
2097. First European Working Session on Learning (EWSL’86), February 3–4, 1986, Orsay, France.
2098. Proceedings of the 10th International Symposium on Experimental Algorithms (SEA’11),
May 5–7, 2011, Orthodox Academy of Crete (OAC): Kolymvari, Kissamos, Chania, Crete,
Greece.
2099. Domingo Ortiz-Boyer, César Hervás-Martínez, and Carlos A. Reyes García. CIXL2: A
Crossover Operator for Evolutionary Algorithms Based on Population Features. Journal of
Artificial Intelligence Research (JAIR), 24:1–48, July 2005, AI Access Foundation, Inc.: El
Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available at http://
www.aaai.org/Papers/JAIR/Vol24/JAIR-2401.pdf and http://www.cs.cmu.
edu/afs/cs/project/jair/pub/volume24/ortizboyer05a-html/Ortiz-Boyer.
html [accessed 2009-10-24].
2100. Emilio G. Ortiz-García, Lucas Martínez-Bernabeu, Sancho Salcedo-Sanz, Francisco Flórez,
José Antonio Portilla-Figueras, and Angel M. Pérez-Bellido. A Parallel Evolutionary Algo-
rithm for the Hub Location Problem with Fully Interconnected Backbone and Access Net-
works. In CEC’09 [1350], pages 1501–1506, 2009. doi: 10.1109/CEC.2009.4983120. INSPEC
Accession Number: 10688715.
2101. Henry F. Osborn. Ontogenic and Phylogenic Variation. Science Magazine, 4(100):786–
789, November 27, 1896, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.4.100.786.
2102. Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press: Cambridge,
MA, USA, July 1994. isbn: 0-262-65040-1. Google Books ID: 5ntdaYX4LPkC.
2103. Ibrahim H. Osman. An Introduction to Metaheuristics. In Operational Research Tutorial
Papers – Annual Conference of the Operational Research Society [1695], pages 92–122, 1995.
2104. Ibrahim H. Osman and James P. Kelly, editors. 1st Metaheuristics International Conference
– Meta-Heuristics: The Theory and Applications (MIC’95), July 22–25, 1995, Breckenridge,
CO, USA. Kluwer Publishers: Boston, MA, USA. isbn: 0-7923-9700-2. Published in 1996.
2105. Ibrahim H. Osman and Gilbert Laporte. Metaheuristics: A bibliography. Annals
of Operations Research, 63(5):511–623, October 1996, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1007/BF02125421. Fully available at http://sb.aub.edu.lb/PersonalSites/
DrOsman/docs/Articles-pdf/BIBmh96.pdf [accessed 2010-09-17].
2106. Pavel Osmera, editor. Proceedings of the 6th International Conference on Soft Comput-
ing (MENDEL’00), June 7–9, 2000, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-1609-2.
2107. Pavel Ošmera and Radek Matoušek, editors. Proceedings of the 13th International Conference
on Soft Computing (MENDEL’07), September 5–7, 2007, Prague, Czech Republic. Brno
University of Technology, Faculty of Mechanical Engineering: Brno, Czech Republic.
2108. Fernando E. B. Otero, Monique M. S. Silva, and Alex Alves Freitas. Genetic Programming
For Attribute Construction In Data Mining. In GECCO’02 [1675], page 1270, 2002. See also
[2109].
2109. Fernando E. B. Otero, Monique M. S. Silva, Alex Alves Freitas, and Julio C. Nievola. Genetic
Programming for Attribute Construction in Data Mining. In EuroGP’03 [2360], pages 101–
121, 2003. doi: 10.1007/3-540-36599-0 36. Fully available at http://www.cs.kent.ac.uk/
people/staff/aaf/pub_papers.dir/EuroGP-2003-Fernando.pdf [accessed 2007-09-09].
See also [2108].
2110. James G. Oxley. Matroid Theory, volume 3 in Oxford Graduate Texts in Mathematics.
Oxford University Press, Inc.: New York, NY, USA, 1992. isbn: 0-19-853563-5 and
0-19-920250-8. OCLC: 26053360, 440044001, and 474911564. Library of Congress
Control Number (LCCN): 92020802. GBV-Identification (PPN): 111250307, 230783953,
383918693, 470663685, 501250778, and 522307191. LC Classification: QA166.6 .O95
1992.
2111. Akira Oyama. Wing Design Using Evolutionary Algorithm. PhD thesis, Tokyo University,
Department of Aeronautics and Space Engineering: Tokyo, Japan, March 2000, Kazuhiro
Nakahashi and Shigeru Obayashi, Advisors. Fully available at http://flab.eng.isas.
ac.jp/member/oyama/index2e.html [accessed 2009-06-16]. CiteSeerx : 10.1.1.24.3306.
2112. Ingo Paenke, Jürgen Branke, and Yaochu Jin. On the Influence of Phenotype Plasticity on
Genotype Diversity. In FOCI’07 [1865], pages 33–40, 2007. doi: 10.1109/FOCI.2007.372144.
Fully available at http://www.honda-ri.org/intern/Publications/PN%2052-06
and http://www.soft-computing.de/S001P005.pdf [accessed 2008-11-10]. Best student
paper.
2113. Charles Campbell Palmer. An Approach to a Problem in Network Design using Genetic
Algorithms. PhD thesis, Polytechnic University: New York, NY, USA, April 1994, Aaron
Kershenbaum, Supervisor, Susan Flynn Hummel, Richard M. van Slyke, and Robert R.
Boorstyn, Committee members. Fully available at ftp://ftp.cs.cuhk.hk/pub/EC/
thesis/phd/palmer-phd.ps.gz [accessed 2010-07-28]. CiteSeerx : 10.1.1.38.1716. UMI
Order No.: GAX94-31740. See also [2114].
2114. Charles Campbell Palmer and Aaron Kershenbaum. An Approach to a Problem in Network
Design using Genetic Algorithms. Networks, 26(3):151–163, 1995, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1002/net.3230260305. See also [2113].
2115. Charles Campbell Palmer and Aaron Kershenbaum. Representing Trees in Genetic
Algorithms. In CEC’94 [1891], pages 379–384, volume 1, 1994. CiteSeerx : 10.1.1.36.1655.
See also [2116].
2116. Charles Campbell Palmer and Aaron Kershenbaum. Representing Trees in Genetic Algo-
rithms. In Handbook of Evolutionary Computation [171], Chapter G1.3. Oxford University
Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House,
Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997. Fully available
at http://www.cs.cinvestav.mx/˜constraint/papers/palmer94representing.
ps [accessed 2010-07-26]. See also [2115].
2117. Themis Palpanas, Nick Koudas, and Alberto Mendelzon. On Space Constrained Set Selection
Problem. Data & Knowledge Engineering, 67(1):200–218, October 2008, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.datak.2008.06.011.
2118. Guanyu Pan, Quansheng Dou, and Xiaohua Liu. Performance of Two Improved Particle
Swarm Optimization in Dynamic Optimization Environments. In ISDA’06 [553], pages 1024–
1028, volume 2, 2006. doi: 10.1109/ISDA.2006.253752. INSPEC Accession Number: 9275253.
2119. Hui Pan, Ling Wang, and Bo Liu. Particle Swarm Optimization for Function Optimization in
Noisy Environment. Applied Mathematics and Computation, 181(2):908–919, Oc-
tober 15, 2006, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.amc.2006.01.066.
2120. Giselher Pankratz and Veikko Krypczyk. Benchmark Data Sets for Dynamic Vehicle
Routing Problems. FernUniversität in Hagen, Lehrstuhl für Wirtschaftsinformatik: Ha-
gen, North Rhine-Westphalia, Germany, June 12, 2007. Fully available at http://www.
fernuni-hagen.de/WINF/inhfrm/benchmark_data.htm [accessed 2008-10-27].
2121. Isueh-Yuan Pao. 2004 IEEE Antennas and Propagation Society (A-P-S) International Sympo-
sium and USNC / URSI National Radio Science Meeting, June 20–25, 2004. IEEE Computer
Society: Piscataway, NJ, USA. doi: 10.1109/APS.2004.1330376. isbn: 0-7803-8302-8.
2122. Mike P. Papazoglou and John Zeleznikow, editors. The Next Generation of Informa-
tion Systems: From Data to Knowledge – A Selection of Papers Presented at Two
IJCAI-91 Workshops (IJCAI’91 WS), August 26, 1991, Sydney, NSW, Australia, volume
611 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-55616-8 and
3-540-55616-8. OCLC: 26095701, 123310072, 230127949, 231363363, 320295775,
321349775, 423036170, 441767836, 471535298, and 499252204. Library of Congress
Control Number (LCCN): 92021889. GBV-Identification (PPN): 119292769. LC Classifi-
cation: QA76.9.D3 N485 1992. See also [831, 1987, 1988].
2123. Proceedings of the 5th International Conference on Simulated Evolution And Learning
(SEAL’04), October 26–29, 2004, Paradise Hotel: Busan, South Korea.
2124. Panos M. Pardalos and Ding-Zhu Du, editors. Handbook of Combinatorial Op-
timization, volume 1–3. Kluwer Academic Publishers: Norwell, MA, USA, Oc-
tober 31, 1998. isbn: 0-7923-5018-9, 0-7923-5019-7, 0-7923-5285-8, and
0-7923-5293-9. OCLC: 38485795 and 490888316. Library of Congress Control Number
(LCCN): 98002942. LC Classification: QA402.5 .H335 1998.
2125. Panos M. Pardalos and Mauricio G.C. Resende, editors. Handbook of Applied Opti-
mization. Oxford University Press, Inc.: New York, NY, USA, February 22, 2002.
isbn: 0-19-512594-0. OCLC: 45532495 and 248337411.
2126. Panos M. Pardalos, Nguyen Van Thoai, and Reiner Horst. Introduction to Global Op-
timization, volume 3 & 48 in Nonconvex Optimization and Its Applications. Springer
Science+Business Media, Inc.: New York, NY, USA, 1st/2nd edition, 1995–2000.
isbn: 0-7923-3556-2 and 0-7923-6756-1. Google Books ID: D8LxE6PB8fIC,
atHlWfTt9KAC, dbu02-1JbLIC, and w6bRM8W-oTgC.
2127. Vilfredo Federico Pareto. Cours d’Économie Politique. F. Rouge: Lausanne/Paris, France,
1896–1897. Partly available at http://ann.sagepub.com/cgi/reprint/9/3/128.pdf
[accessed 2009-04-23]. 2 volumes. See also [1938].
2128. Jong-man Park, Jeon-gue Park, Chong-hyun Lee, and Mun-sung Han. Robust and Effi-
cient Genetic Crossover Operator: Homologous Recombination. In IJCNN ’93 [1994], pages
2975–2978, volume 3, 1993. doi: 10.1109/IJCNN.1993.714347. INSPEC Accession Num-
ber: 4962425.
2129. Julia K. Parrish and William M. Hamner, editors. Animal Groups in Three Dimensions:
How Species Aggregate. Cambridge University Press: Cambridge, UK, December 1997.
doi: 10.2277/0521460247. isbn: 0-521-46024-7. Google Books ID: zj4I4HNZSR0C.
OCLC: 37156372 and 243890243.
2130. Konstantinos E. Parsopoulos. Cooperative Micro-Differential Evolution for High-Dimensional
Problems. In GECCO’09-I [2342], pages 531–538, 2009. doi: 10.1145/1569901.1569975.
2131. Janice Partyka and Randolph Hall. Vehicle Routing Software Survey, volume 37 (1). Lion-
heart Publishing, Inc.: Marietta, GA, USA, February 2010. Fully available at http://www.
lionhrtpub.com/orms/surveys/Vehicle_Routing/vrss.html [accessed 2011-04-25]. See
also [2132].
2132. Janice Partyka and Randolph Hall. Vehicle Routing Software Survey – On the Road to
Connectivity – Creative integration of computer, communication and location technologies
help a wide range of industries thrive in difficult times. OR/MS Today, February 2010,
Lionheart Publishing, Inc.: Marietta, GA, USA and Institute for Operations Research and the
Management Sciences (INFORMS): Linthicum, MD, USA. Fully available at http://www.
lionhrtpub.com/orms/orms-2-10/frsurvey.html [accessed 2011-04-25]. See also [2131].
2133. Dilip Patel, Shushma Patel, and Yingxu Wang, editors. Proceedings of the 2nd IEEE Inter-
national Conference on Cognitive Informatics (ICCI’03), August 18–20, 2003, London, UK.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-1986-5.
2134. Norman R. Paterson. Genetic Programming with Context-Sensitive Grammars. PhD thesis,
University of St Andrews, School of Computer Science: St Andrews, Scotland, UK, Septem-
ber 2002, Michael Livesey, Supervisor. Fully available at http://www.cs.bham.ac.uk/
˜wbl/biblio/gp-html/paterson_thesis.html [accessed 2009-06-29].
2135. David A. Patterson and John L. Hennessy. Computer Organization and Design: The
Hardware/Software Interface, The Morgan Kaufmann Series in Computer Architecture
and Design. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 3rd edition,
2004. isbn: 0-12-088433-X and 1-55860-604-1. Google Books ID: 1lD9LZRcIZ8C.
OCLC: 56213091 and 503402668. GBV-Identification (PPN): 390849650.
2136. Diane B. Paul. FITNESS: Historical Perspectives. In Keywords in Evolutionary Biol-
ogy [1521], pages 112–114. Harvard University Press: Cambridge, MA, USA, 1992.
2137. Linus Carl Pauling. The Nature of the Chemical Bond and the Structure of Molecules
and Crystals: An Introduction to Modern Structural Chemistry. Cornell University Press:
Ithaca, NY, USA, 3rd edition, 1960. isbn: 0-8014-0333-2. Fully available at http://
osulibrary.oregonstate.edu/specialcollections/coll/pauling/ [accessed 2007-07-12].
2138. Terry Payne and Valentina Tamma, editors. AAAI Fall Symposium on Agents and the Seman-
tic Web, November 3–6, 2005, Arlington, VA, USA, volume FS-05-01. AAAI Press: Menlo
Park, CA, USA.
2139. Darren Pearce, editor. The 12th White House Papers Graduate Research in Cognitive and
Computing Sciences at Sussex. Cognitive Science Research Papers (CSRP) CSRP 512, Uni-
versity of Sussex, School of Cognitive Science: Brighton, UK, November 1999, Isle of Thorns,
UK. Fully available at ftp://ftp.informatics.sussex.ac.uk/pub/reports/csrp/
csrp512.ps.Z [accessed 2009-09-02]. issn: 1350-3162.
2140. Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving, The
Addison-Wesley Series in Artificial Intelligence. Addison-Wesley Longman Publishing Co.,
Inc.: Boston, MA, USA, 1984. isbn: 0-201-05594-5. Google Books ID: 0XtQAAAAMAAJ
and 1HpQAAAAMAAJ. OCLC: 9644728 and 251831871. Library of Congress Control Num-
ber (LCCN): 83012217. GBV-Identification (PPN): 013593463 and 021186472. LC
Classification: Q335 .P38 1984.
2141. David W. Pearson, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the
2nd International Conference on Artificial Neural Nets and Genetic Algorithms (ICAN-
NGA’95), April 18–21, 1995, Alès, France. Springer-Verlag: Vienna, Austria.
isbn: 3-211-82692-0. Google Books ID: 77hQAAAAMAAJ. OCLC: 32241732.
2142. David W. Pearson, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the 6th
International Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA’03),
April 23–25, 2003, Roanne, France.
2143. Karl Pearson. The Problem of the Random Walk. Nature, 72:294, July 27, 1905.
doi: 10.1038/072294b0.
2144. Witold Pedrycz and Athanasios V. Vasilakos, editors. Computational Intelligence in
Telecommunications Networks. CRC Press, Inc.: Boca Raton, FL, USA, September 15,
2000. isbn: 0-8493-1075-X. Google Books ID: 7lVj5pqSxXIC and dE6d-jPoDEEC.
OCLC: 43894172 and 55103484.
2145. Martin Pelikan. Hierarchical Bayesian Optimization Algorithm – Toward a New Generation
of Evolutionary Algorithms, volume 170/2005 in Studies in Fuzziness and Soft Computing.
Springer-Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/b10910. isbn: 3-540-23774-7.
Google Books ID: R0QHqcaTfIC. OCLC: 57730016 and 288213853. Library of Congress
Control Number (LCCN): 2004116659.
2146. Martin Pelikan. Analysis of Estimation of Distribution Algorithms and Genetic Algorithms on
NK Landscapes. In GECCO’08 [1519], pages 1033–1040, 2008. doi: 10.1145/1389095.1389287.
arXiv ID: 0801.3111. Session: Genetic Algorithms Papers.
2147. Martin Pelikan and David Edward Goldberg. Genetic Algorithms, Clustering, and the Break-
ing of Symmetry. IlliGAL Report 2000013, Illinois Genetic Algorithms Laboratory (Illi-
GAL), Department of Computer Science, Department of General Engineering, University of
Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, March 11, 2003. Fully available
at www.illigal.uiuc.edu/pub/papers/IlliGALs/2000013.ps.Z [accessed 2009-08-28].
CiteSeerx : 10.1.1.35.8620. See also [2148].
2148. Martin Pelikan and David Edward Goldberg. Genetic Algorithms, Clustering, and the Break-
ing of Symmetry. In PPSN VI [2421], pages 385–394, 2000. doi: 10.1007/3-540-45356-3 38.
See also [2147].
2149. Martin Pelikan and Kumara Sastry, editors. Proceedings of the Optimization by Building
and Using Probabilistic Models Workshop (OBUPM’01), July 7, 2001, Holiday Inn Golden
Gateway Hotel: San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. Part of [2570].
2150. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. Linkage Problem, Distri-
bution Estimation, and Bayesian Networks. Technical Report 98013, Illinois Genetic Algo-
rithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engi-
neering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, Novem-
ber 1998. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/
98013.ps.Z [accessed 2009-07-11]. CiteSeerx : 10.1.1.45.213. See also [483]. Newly published
as [2152].
2151. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. BOA: The Bayesian Opti-
mization Algorithm. In GECCO’99 [211], pages 525–532, 1999. Fully available at https://
eprints.kfupm.edu.sa/28537/ [accessed 2008-10-18]. CiteSeerx : 10.1.1.46.8131. See also
[2152].
2152. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. BOA: The Bayesian Op-
timization Algorithm. IlliGAL Report 99003, Illinois Genetic Algorithms Laboratory (Illi-
GAL), Department of Computer Science, Department of General Engineering, University of
Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available
at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99003.ps.Z [accessed 2009-07-11].
CiteSeerx : 10.1.1.45.934. See also [2150, 2151].
2153. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Opti-
mization by Building and Using Probabilistic Models. Technical Report 99018, Illinois
Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department
of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign,
IL, USA, September 1999. Fully available at http://eprints.kfupm.edu.sa/21437/
and http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99018.ps.Z [accessed 2009-07-04].
CiteSeerx : 10.1.1.40.1920. See also [2154, 2156].
2154. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimization
by Building and Using Probabilistic Models. In ACC [1332], pages 3289–3293, volume 5,
2000. doi: 10.1109/ACC.2000.879173. See also [2153, 2156].
2155. Martin Pelikan, Heinz Mühlenbein, and A. O. Rodriguez, editors. Proceedings of the Opti-
mization by Building and Using Probabilistic Models Workshop (OBUPM’2000), 2000, Riviera
Hotel and Casino: Las Vegas, NV, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. Part of [2935].
2156. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimization
by Building and Using Probabilistic Models. Computational Optimization and Applications,
21(1):5–20, January 2002, Kluwer Academic Publishers: Norwell, MA, USA and
Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1013500812258. Fully avail-
able at http://w3.ualg.pt/˜flobo/papers/pmbga-survey.pdf [accessed 2009-07-04]. See
also [2153, 2154].
2157. Martin Pelikan, Kumara Sastry, and David Edward Goldberg. Multiobjective hBOA, Clus-
tering, and Scalability. IlliGAL Report 2005005, Illinois Genetic Algorithms Laboratory
(IlliGAL), Department of Computer Science, Department of General Engineering, Univer-
sity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, February 2005. Fully
available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2005005.pdf
[accessed 2009-08-28]. arXiv ID: cs/0502034v1. See also [2398].
2158. Martin Pelikan, Kumara Sastry, and Erick Cantú-Paz, editors. Scalable Optimiza-
tion via Probabilistic Modeling – From Algorithms to Applications, volume 33 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006.
doi: 10.1007/978-3-540-34954-9. isbn: 3-540-34953-7. Google Books ID: znzGDXPh6NAC.
OCLC: 72437004, 180920474, and 318298233. Library of Congress Control Number
(LCCN): 2006928618.
2159. David Alejandro Pelta and Natalio Krasnogor, editors. Proceedings of the 1st International
Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO’06), June 29–
30, 2006, Granada, Spain. Rosillo’s: Granada, Spain. isbn: 8468993239. Google Books
ID: jzgDGQAACAAJ.
2160. Parag C. Pendharkar and Gary J. Koehler. A General Steady State Distribution based
Stopping Criteria for Finite Length Genetic Algorithms. European Journal of Operational
Research (EJOR), 176(3):1436–1451, February 2007, Elsevier Science Publishers B.V.: Am-
sterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam,
The Netherlands, Roman Slowinski, Jesus Artalejo, Jean-Charles Billaut, Robert Dyson, and
Lorenzo Peccati, editors. doi: 10.1016/j.ejor.2005.10.050.
2161. Fei Peng, Ke Tang, Guoliang Chen, and Xin Yao. Population-based Algorithm Portfo-
lios for Numerical Optimization. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 14(5):782–800, March 29, 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2010.2040183. INSPEC Accession Number: 11559790.
2162. Francisco Baptista Pereira and Jorge Tavares, editors. Bio-inspired Algorithms for the Vehi-
cle Routing Problem, volume 161 in Studies in Computational Intelligence. Springer-Verlag:
Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3. Library of Congress Control Num-
ber (LCCN): 2008933225.
2163. J. Périaux, Carlo Poloni, and Gabriel Winter, authors. D. Quagliarella, editor. Genetic
Algorithms and Evolution Strategy in Engineering and Computer Science: Recent Ad-
vances and Industrial Applications. John Wiley & Sons Ltd.: New York, NY, USA, 1998.
isbn: 0471977101. Google Books ID: y NQAAAAMAAJ. OCLC: 38935523.
2164. Timothy Perkis. Stack Based Genetic Programming. In CEC’94 [1891], pages
148–153, volume 1, 1994. Fully available at ftp://coast.cs.purdue.edu/pub/
doc/genetic_algorithms/GP/Stack-Genetic-Programming.ps.gz [accessed 2010-08-19].
CiteSeerx : 10.1.1.53.2622.
2165. Adrian Perrig and Dawn Xiaodong Song. A First Step Towards the Automatic
Generation of Security Protocols. In NDSS’00 [1413], pages 73–83, 2000. Fully
available at http://sparrow.ece.cmu.edu/˜adrian/projects/protgen/protgen.
pdf, http://www.cs.berkeley.edu/˜dawnsong/papers/apg.pdf, and http://
www.isoc.org/isoc/conferences/ndss/2000/proceedings/031.pdf [accessed 2009-09-02].
CiteSeerx : 10.1.1.42.2688.
2166. Charles J. Petrie, editor. Semantic Web Services Challenge: Proceedings of the 2008 Work-
shops, 2009, LG-2009-01. Stanford University, Computer Science Department, Stanford Logic
Group: Stanford, CA, USA. Fully available at http://logic.stanford.edu/reports/
LG-2009-01.pdf [accessed 2010-12-12]. See also [1023].
2167. Alan Pétrowski. A Clearing Procedure as a Niching Method for Genetic Algorithms. In
CEC’96 [1445], pages 798–803, 1996. doi: 10.1109/ICEC.1996.542703. Fully available
at http://sci2s.ugr.es/docencia/cursoMieres/clearing-96.pdf [accessed 2008-12-16].
CiteSeerx : 10.1.1.33.8027. INSPEC Accession Number: 5375492.
2168. Alan Pétrowski. An Efficient Hierarchical Clustering Technique for Speciation. Tech-
nical Report, Institut National des Télécommunications: Evry Cedex, France, 1997.
CiteSeerx : 10.1.1.54.1131.
2169. Ferdinando Pezzella, editor. Giornate di Lavoro (Enterprise Systems: Management of Tech-
nological and Organizational Changes, “Gestione del cambiamento tecnologico ed organizza-
tivo nei sistemi d’impresa”) (AIRO’95), September 20–22, 1995, Ancona, Italy. Associazione
Italiana di Ricerca Operativa – Optimization and Decision Sciences (AIRO): Italy.
2170. Patrick C. Phillips. The Language of Gene Interaction. Genetics, 149(3):1167–1171,
July 1998, Genetics Society of America (GSA): Bethesda, MD, USA. Fully available
at http://www.genetics.org/cgi/reprint/149/3/1167.pdf [accessed 2009-07-11].
2171. Jiratta Phuboon-on and Raweewan Auepanwiriyakul. Selecting Materialized View Using
Two-Phase Optimization with Multiple View Processing Plan. Technical Report 27, World
Academy of Science, Engineering and Technology (WASET): Baku, Azerbaijan, 2007. Fully
available at http://www.waset.org/journals/waset/v27/v27-30.pdf [accessed 2011-04-13].
2172. Samuel Pierre. Inferring New Design Rules by Machine Learning: A Case Study of Topological
Optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and
Humans, 28(5):575–585, September 1998, IEEE Systems, Man, and Cybernetics Society:
New York, NY, USA. doi: 10.1109/3468.709602. INSPEC Accession Number: 6006762.
2173. Samuel Pierre and Ali Elgibaoui. Improving Communication Network Topologies using Tabu
Search. In LCN’97 [1329], pages 44–53, 1997. doi: 10.1109/LCN.1997.630901. INSPEC
Accession Number: 5766134.
2174. Samuel Pierre and Fabien Houéto. Assigning Cells to Switches in Cellular Mobile Networks
using Taboo Search. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cyber-
netics, 32(3):351–356, June 2002, IEEE Systems, Man, and Cybernetics Society: New York,
NY, USA. doi: 10.1109/TSMCB.2002.999810. PubMed ID: 18238132. See also [2175].
2175. Samuel Pierre and Fabien Houéto. A Tabu Search Approach for Assigning Cells to Switches
in Cellular Mobile Networks. Computer Communications – The International Journal for
the Computer and Telecommunications Industry, 25(5):464–477, March 15, 2002, Elsevier
Science Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(01)00371-1. See also [2174].
2176. Samuel Pierre and Gabriel Ioan Ivascu, editors. Proceedings of the 2nd IEEE Interna-
tional Conference on Wireless and Mobile Computing, Networking and Communications
(WiMob’06), June 19–21, 2006, Montréal, QC, Canada. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 1-4244-0494-0. OCLC: 77118231 and 122525160. Order No.: 06EX1436.
2177. Samuel Pierre and Gisèle Legault. An Evolutionary Approach for Configuring Economical
Packet Switched Computer Networks. Artificial Intelligence in Engineering, 10(2):127–134,
1996, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0954-1810(95)00022-4. See
also [2178].
2178. Samuel Pierre and Gisèle Legault. A Genetic Algorithm for Designing Distributed Computer
Network Topologies. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cyber-
netics, 28(2):249–258, May 1998, IEEE Systems, Man, and Cybernetics Society: New York,
NY, USA. doi: 10.1109/3477.662766. See also [2177].
2179. Massimo Pigliucci, Courtney J. Murren, and Carl D. Schlichting. Review Article:
Phenotypic Plasticity in Evolution. Journal of Experimental Biology (JEB), 209(12):
2362–2367, June 2006, The Company of Biologists Ltd. (COB): Cambridge, MA, USA.
doi: 10.1242/jeb.02070. Fully available at http://jeb.biologists.org/cgi/reprint/
209/12/2362.pdf [accessed 2008-09-11].
2180. Martin Pincus. A Monte Carlo Method for the Approximate Solution of Certain Types
of Constrained Optimization Problems. Operations Research, 18(6):1225–1228, November–
December 1970, Institute for Operations Research and the Management Sciences (IN-
FORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA,
USA.
2181. János D. Pintér. Global Optimization in Action – Continuous and Lipschitz Optimiza-
tion: Algorithms, Implementations and Applications, volume 6 in Nonconvex Optimization
and Its Applications. Springer Science+Business Media, Inc.: New York, NY, USA. Im-
print: Kluwer Academic Publishers: Norwell, MA, USA, 1996. isbn: 0792337573. Google
Books ID: G8pF982ckNsC. GOiA won the 2000 INFORMS Computing Society prize (typed
in by E. Loew June 2008).
2182. 15th European Conference on Machine Learning (ECML’04), September 20–24, 2004, Pisa,
Italy.
2183. Proceedings of the 1st Workshop on Machine Learning (ML’80), 1980, Pittsburgh, PA, USA.
2184. R. L. Plackett. Some Theorems in Least Squares. Biometrika, 37(1–2):149–157, 1950, Oxford
University Press, Inc.: New York, NY, USA.
2185. Michaël Defoin Platel, Manuel Clergue, and Philippe Collard. Maximum Homologous
Crossover for Linear Genetic Programming. In EuroGP’03 [2360], pages 29–48, 2003. Fully
available at http://hal.archives-ouvertes.fr/docs/00/15/97/39/PDF/eurogp_
03.pdf, http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/platel83.html, and
http://www.i3s.unice.fr/˜clergue/siteCV/publi/eurogp_03.pdf [accessed 2010-12-04].
CiteSeerx : 10.1.1.105.1838.
2186. Michaël Defoin Platel, Sébastien Vérel, Manuel Clergue, and Philippe Collard. From Royal
Road to Epistatic Road for Variable Length Evolution Algorithm. In EA’03 [1736], pages 3–
14, 2003. doi: 10.1007/b96080. Fully available at http://arxiv.org/abs/0707.0548 and
http://www.i3s.unice.fr/˜verel/publi/ea03-fromRRtoER.pdf [accessed 2007-12-01].
2187. Michaël Defoin Platel, Stefan Schliebs, and Nikola Kasabov. Quantum-Inspired Evolutionary
Algorithm: A Multimodel EDA. IEEE Transactions on Evolutionary Computation (IEEE-
EC), 13(6):1218–1232, December 2009, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2008.2003010. INSPEC Accession Number: 10998923.
2188. Alexander Podlich. Intelligente Planung und Optimierung des Güterverkehrs auf Straße und
Schiene mit Evolutionären Algorithmen [Intelligent Planning and Optimization of Freight
Transport on Road and Rail with Evolutionary Algorithms]. Master’s thesis, University of Kassel, Fachbereich
16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, Febru-
ary 2009, Thomas Weise, Christian Gorldt, and Manfred Menze, Advisors, Kurt Geihs and
Bernd Scholz-Reiter, Committee members.
2189. Alexander Podlich, Thomas Weise, Manfred Menze, and Christian Gorldt. Intelligente
Wechselbrückensteuerung für die Logistik von Morgen [Intelligent Swap Body Control for
the Logistics of Tomorrow]. In GSN’09 [2836],
pages 1–10, 2009. Fully available at http://eceasst.cs.tu-berlin.de/index.
php/eceasst/article/viewFile/205/207, http://journal.ub.tu-berlin.de/
index.php/eceasst/article/viewFile/205/207, and http://www.it-weise.
de/documents/files/PWMG2009IWFDLVM.pdf [accessed 2010-11-02].
2190. Hartmut Pohlheim. GEATbx Introduction – Evolutionary Algorithms: Overview, Methods
and Operators. GEATbx.com: Berlin, Germany, version 3.7 edition, November 2005. Fully
available at http://www.geatbx.com/download/GEATbx_Intro_Algorithmen_v38.
pdf [accessed 2007-07-03]. Documentation for GEATbx version 3.7 (Genetic and Evolutionary
Algorithm Toolbox for use with Matlab).
2191. Riccardo Poli. Parallel Distributed Genetic Programming. Technical Report CSRP-96-15,
University of Birmingham, School of Computer Science: Birmingham, UK, September 1996.
CiteSeerx : 10.1.1.54.7666. See also [2192].
2192. Riccardo Poli. Parallel Distributed Genetic Programming. In New Ideas in Optimiza-
tion [638], pages 403–432. McGraw-Hill Ltd.: Maidenhead, UK, England, 1999. Fully avail-
able at http://cswww.essex.ac.uk/staff/rpoli/papers/Poli-NIO-1999-PDGP.
pdf [accessed 2007-11-04]. CiteSeerx : 10.1.1.15.118. See also [2191].
2193. Riccardo Poli and William B. Langdon. A New Schema Theory for Genetic Programming
with One-point Crossover and Point Mutation. In GP’97 [1611], pages 278–285, 1997.
CiteSeerx : 10.1.1.16.990. See also [2194].
2194. Riccardo Poli and William B. Langdon. A New Schema Theory for Ge-
netic Programming with One-Point Crossover and Point Mutation. Technical Re-
port CSRP-97-3, University of Birmingham, School of Computer Science: Birm-
ingham, UK, January 1997. Fully available at ftp://ftp.cs.bham.ac.uk/pub/
tech-reports/1997/CSRP-97-03.ps.gz and http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/poli_1997_schemaTR.html [accessed 2008-06-17]. Revised August 1997.
See also [2193].
2195. Riccardo Poli, William B. Langdon, Marc Schoenauer, Terence Claus Fogarty, and Wolfgang
Banzhaf, editors. Late Breaking Papers at EuroGP’98: The First European Workshop on
Genetic Programming (EuroGP’98 LBP), April 14–15, 1998, Paris, France. Distributed at
the workshop. See also [210].
2196. Riccardo Poli, Peter Nordin, William B. Langdon, and Terence Claus Fogarty, editors.
Proceedings of the Second European Workshop on Genetic Programming (EuroGP’99),
May 26–27, 1999, Göteborg, Sweden, volume 1598/1999 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-65899-8.
OCLC: 634229593. Library of Congress Control Number (LCCN): 99028995. LC Classi-
fication: QA76.623 .E94 1999.
2197. Riccardo Poli, Hans-Michael Voigt, Stefano Cagnoni, David Wolfe Corne, George D. Smith,
and Terence Claus Fogarty, editors. Proceedings of the First European Workshops on Evo-
lutionary Image Analysis, Signal Processing and Telecommunications: EvoIASP’99, Eu-
roEcTel’99 (EvoWorkshops’99), May 26–27, 1999, Göteborg, Sweden, volume 1596/1999
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/10704703. isbn: 3-540-65837-8. Google Books ID: CYDLxvi0-SYC.
OCLC: 41165439, 59385457, 190833621, and 202788825.
2198. Riccardo Poli, Wolfgang Banzhaf, William B. Langdon, Julian Francis Miller, Peter Nordin,
and Terence Claus Fogarty, editors. Proceedings of the Third European Conference on Genetic
Programming (EuroGP’00), April 15–16, 2000, Edinburgh, Scotland, UK, volume 1802/2000
in Lecture Notes in Computer Science (LNCS). EvoGP, The Genetic Programming Working
Group of EvoNET, The Network of Excellence in Evolutionary Computing: France, Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/b75085. isbn: 3-540-67339-3. Google Books
ID: KhaR0XZHedEC. OCLC: 43811252, 174380171, 243487736, and 313540688. In
conjunction with ICES2000 and EvoWorkshops2000 on evolutionary computing.
2199. Riccardo Poli, William B. Langdon, and Nicholas Freitag McPhee. A Field Guide
to Genetic Programming. Lulu Enterprises UK Ltd: London, UK, March 2008.
isbn: 1-4092-0073-6. Fully available at http://www.gp-field-guide.org.uk/
and http://www.lulu.com/items/volume_63/2167000/2167025/2/print/book.
pdf [accessed 2008-03-27]. Google Books ID: 3PBrqNK5fFQC. OCLC: 225855345. With contri-
butions by John R. Koza.
2200. Riccardo Poli, Nicholas Freitag McPhee, Luca Citi, and Ellery Crane. Memory with Memory
in Genetic Programming. Journal of Artificial Evolution and Applications, 2009(570606),
2009, Hindawi Publishing Corporation: Nasr City, Cairo, Egypt, Leonardo Vanneschi, edi-
tor. doi: 10.1155/2009/570606. Fully available at http://www.hindawi.com/journals/
jaea/2009/570606.html [accessed 2009-09-09].
2201. 3rd Generative Art International Conference (GA’00), December 14–16, 2000, Politecnico di
Milano: Milano, Italy. Politecnico di Milano, Generative Design Lab: Milano, Italy. Fully
available at http://www.generativeart.com/on/cic/GA2000.htm [accessed 2009-06-18].
2202. Shankar R. Ponnekanti and Armando Fox. SWORD: A Developer Toolkit for Web Service
Composition. In WWW’02 [26], 2002. Fully available at http://www2002.org/CDROM/
alternate/786/ [accessed 2011-01-11].
2203. H. Vincent Poor. An Introduction to Signal Detection and Estimation, Springer
Texts in Electrical Engineering. Springer-Verlag: New York, NY, USA, 2nd edition,
1994. isbn: 0-387-94173-8 and 3-540-94173-8. Google Books ID: eI1DBIUbzAQC.
OCLC: 28891765.
2204. Lucian Popa, Serge Abiteboul, and Phokion G. Kolaitis, editors. Proceedings of the
Twenty-First ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Sys-
tems (PODS’02), June 2–6, 2002, Madison, WI, USA. ACM Press: New York, NY, USA.
isbn: 1-58113-507-6.
2205. Proceedings of the 4th International Conference on Metaheuristics and Nature Inspired Com-
puting (META’12), October 27–31, 2012, Port El Kantaoui, Sousse, Tunisia.
2206. Bruce W. Porter and Raymond J. Mooney, editors. Proceedings of the 7th International
Workshop on Machine Learning (ML’90), June 21–23, 1990, Austin, TX, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-141-4.
2207. Agent Modeling, Papers from the 1996 AAAI Workshop, August 4, 1996, Portland, OR, USA.
isbn: 1-57735-008-1. GBV-Identification (PPN): 230032508. Technical report WS-96-02
of the American Association for Artificial Intelligence. See also [586].
2208. Vincent William Porto, N. Saravanan, D. Waagen, and Ágoston E. Eiben, editors. Evolu-
tionary Programming VII – Proceedings of the 7th International Conference on Evolutionary
Programming (EP’98), May 25–27, 1998, Mission Valley Marriott: San Diego, CA, USA,
volume 1447/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/BFb0040753. isbn: 3-540-64891-7. OCLC: 39539448,
150397169, 247514628, and 313127561. In cooperation with IEEE Neural Networks
Council.
2209. 16th European Conference on Machine Learning (ECML’05), October 3–7, 2005, Porto,
Portugal.
2210. Mitchell A. Potter and Kenneth Alan De Jong. A Cooperative Coevolutionary
Approach to Function Optimization. In PPSN III [693], pages 249–257, 1994.
doi: 10.1007/3-540-58484-6 269. CiteSeerx : 10.1.1.54.7033.
2211. Mitchell A. Potter and Kenneth Alan De Jong. Cooperative Coevolution: An Ar-
chitecture for Evolving Coadapted Subcomponents. Evolutionary Computation, 8(1):
1–29, Spring 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600568086.
Fully available at http://mitpress.mit.edu/journals/EVCO/Potter.pdf and
http://mitpress.mit.edu/journals/pdf/evco_8_1_1_0.pdf [accessed 2010-09-13].
CiteSeerx : 10.1.1.134.2926.
2212. Jean-Yves Potvin. A Review of Bio-inspired Algorithms for Vehicle Routing. In Bio-inspired
Algorithms for the Vehicle Routing Problem [2162], pages 1–34. Springer-Verlag: Berlin/Hei-
delberg, 2009. doi: 10.1007/978-3-540-85152-3_1.
2213. Kata Praditwong, Mark Harman, and Xin Yao. Software Module Clustering as
a Multi-Objective Search Problem. IEEE Transactions on Software Engineering,
37(2):264–282, March–April 2011, IEEE Computer Society: Piscataway, NJ, USA.
doi: 10.1109/TSE.2010.26. Fully available at http://www.cs.bham.ac.uk/˜xin/
papers/PraditwongHarmanYao10TSE.pdf [accessed 2011-08-07]. INSPEC Accession Num-
ber: 11883163.
2214. Bhanu Prasad, editor. Proceedings of the 1st Indian International Conference on Artificial
Intelligence (IICAI-03), December 18–20, 2003, Hyderabad, India. isbn: 0-9727412-0-8.
2215. Bhanu Prasad, editor. Proceedings of the 2nd Indian International Conference on Artificial
Intelligence (IICAI-05), December 20–22, 2005, Pune, India. isbn: 0-9727412-1-6.
2216. Bhanu Prasad, editor. Proceedings of the 3rd Indian International Conference on Artificial
Intelligence (IICAI-07), December 17–19, 2007, Pune, India.
2217. Bhanu Prasad, Pawan Lingras, and Ashwin Ram, editors. Proceedings of the 4th Indian In-
ternational Conference on Artificial Intelligence (IICAI-09), December 16–18, 2009, Tumkur,
India.
2218. Sushil K. Prasad, Susmi Routray, Reema Khurana, and Sartaj Sahni, editors. Proceed-
ings of the Third International Conference on Information Systems, Technology and Man-
agement (ICISTM’09), March 12–13, 2009, Ghaziabad, India, volume 31 in Communi-
cations in Computer and Information Science. Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-00405-6. isbn: 3-642-00404-0. Library of Congress Control Number
(LCCN): 2009921607.
2219. Steven D. Prestwich, S. Armagan Tarim, and Brahim Hnich. Template Design Under
Demand Uncertainty by Integer Linear Local Search. International Journal of Produc-
tion Research, 44(22):4915–4928, November 2006, Taylor and Francis LLC: London, UK.
doi: 10.1080/00207540600621060.
2220. Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen. Differential Evolution – A
Practical Approach to Global Optimization, Natural Computing Series. Birkhäuser Ver-
lag: Basel, Switzerland, 2005. isbn: 3-540-20950-6 and 3-540-31306-0. Google Books
ID: S67vX-KqVqUC. OCLC: 63127211, 63381529, and 318292603.
2221. Armand Prieditis and Stuart J. Russell, editors. Proceedings of the Twelfth International
Conference on Machine Learning (ICML’95), July 9–12, 1995, Tahoe City, CA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-377-8.
2222. Fifth Annual Princeton Conference on Information Science and Systems, March 1971. Prince-
ton University Press: Princeton, NJ, USA.
2223. Les Proll and Barbara Smith. Integer Linear Programming and Constraint Programming
Approaches to a Template Design Problem. INFORMS Journal on Computing (JOC), 10
(03):265–275, Summer 1998, Institute for Operations Research and the Management Sciences
(INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge,
MA, USA. doi: 10.1287/ijoc.10.3.265.
2224. Philip E. Protter. Stochastic Integration and Differential Equations, volume 21 in Stochastic
Modelling and Applied Probability. Springer-Verlag: Berlin/Heidelberg, 2nd edition, 2004.
isbn: 3-540-00313-4. Google Books ID: mJkFuqwr5xgC.
2225. Proceedings of the Fourth International Workshop on Local Search Techniques in Constraint
Satisfaction (LSCS’07), September 23, 2007, Providence, RI, USA.
2226. Adam Prügel-Bennett. Modelling GA Dynamics. In Proceedings of the Second Evonet Summer School on Theoretical Aspects of Evolutionary Computing [1480], pages 59–86, 1999.
Fully available at http://eprints.ecs.soton.ac.uk/13205 [accessed 2010-08-01].
2227. William F. Punch, Douglas Zongker, and Erik D. Goodman. The Royal Tree Problem,
a Benchmark for Single and Multiple Population Genetic Programming. In Advances
in Genetic Programming II [106], pages 299–316. MIT Press: Cambridge, MA, USA,
1996. Fully available at http://citeseer.ist.psu.edu/147908.html [accessed 2007-10-14]. CiteSeerx : 10.1.1.48.6434.
2228. Robin Charles Purshouse. On the Evolutionary Optimisation of Many Objectives. PhD
thesis, University of Sheffield, Department of Automatic Control and Systems Engineering:
Sheffield, UK, September 2003, Peter J. Fleming, Advisor. CiteSeerx : 10.1.1.10.8816.
2229. Robin Charles Purshouse and Peter J. Fleming. The Multi-Objective Genetic Algorithm Ap-
plied to Benchmark Problems – An Analysis. Research Report 796, University of Sheffield,
Department of Automatic Control and Systems Engineering: Sheffield, UK, August 2001.
Fully available at http://citeseer.ist.psu.edu/old/499210.html and http://
www.lania.mx/˜ccoello/EMOO/purshouse01.pdf.gz [accessed 2009-08-07].
2230. Robin Charles Purshouse and Peter J. Fleming. Conflict, Harmony, and Independence: Re-
lationships in Evolutionary Multi-Criterion Optimisation. In EMO’03 [957], pages 16–30,
2003. doi: 10.1007/3-540-36970-8_2. Fully available at http://www.lania.mx/˜ccoello/
EMOO/purshouse03.pdf.gz [accessed 2009-07-18].
2231. Robin Charles Purshouse and Peter J. Fleming. Evolutionary Many-Objective Optimi-
sation: An Exploratory Analysis. In CEC’03 [2395], pages 2066–2073, volume 3, 2003.
doi: 10.1109/CEC.2003.1299927. Fully available at http://www.lania.mx/˜ccoello/
EMOO/purshouse03b.pdf.gz. INSPEC Accession Number: 8494841.
2232. D. Quagliarella, J. Périaux, Carlo Poloni, and Gabriel Winter, editors. Genetic Algorithms
and Evolution Strategy in Engineering and Computer Science: Recent Advances and Indus-
trial Applications – Proceedings of the 2nd European Short Course on Genetic Algorithms
and Evolution Strategies (EUROGEN’97), November 28–December 4, 1997, Triest, Italy, Re-
cent Advances in Industrial Applications. John Wiley & Sons Ltd.: New York, NY, USA.
isbn: 0-471-97710-1. GBV-Identification (PPN): 236675036 and 252253957.
2233. R. J. Quick, Victor J. Rayward-Smith, and George D. Smith. The Royal Road Func-
tions: Description, Intent and Experimentation. In AISB’96 [938], pages 223–235, 1996.
doi: 10.1007/BFb0032786.
2234. John Ross Quinlan. C4.5: Programs for Machine Learning, Morgan Kaufmann Series in
Machine Learning. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993.
isbn: 1-55860-238-0. Google Books ID: AABrrnZWgPEC. OCLC: 26547590, 249341510,
257317819, and 473499718. Library of Congress Control Number (LCCN): 92032653.
GBV-Identification (PPN): 376624361.
2235. Alejandro Quintero and Samuel Pierre. Assigning Cells to Switches in Cellular Mobile Net-
works: A Comparative Study. Computer Communications – The International Journal for
the Computer and Telecommunications Industry, 26(9):950–960, June 2, 2003, Elsevier Sci-
ence Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(02)00224-4. See also [2237].
2236. Alejandro Quintero and Samuel Pierre. Evolutionary Approach to Optimize the As-
signment of Cells to Switches in Personal Communication Networks. Computer Com-
munications – The International Journal for the Computer and Telecommunications In-
dustry, 26(9):927–938, June 2, 2003, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0140-3664(02)00238-4. See also [2235, 2237].
2237. Alejandro Quintero and Samuel Pierre. Sequential and Multi-Population Memetic Algo-
rithms for Assigning Cells to Switches in Mobile Networks. Computer Networks, 43(3):
247–261, October 22, 2003, Elsevier Science Publishers B.V.: Essex, UK. Imprint: North-
Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Ian F. Akyildiz, editor.
doi: 10.1016/S1389-1286(03)00270-6. See also [2235, 2236].
2238. Mohammad Adil Qureshi. Evolving Agents. In GP’96 [1609], pages 369–374, 1996. Fully
available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/AQ.gp96.
ps.gz [accessed 2007-09-17]. See also [2239, 2240].
2239. Mohammad Adil Qureshi. Evolving Agents. Research Note RN/96/4, University Col-
lege London (UCL): London, UK, January 1996. Fully available at http://www.cs.
bham.ac.uk/˜wbl/biblio/gp-html/qureshi_1996_eaRN.html and http://www.
cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/AQ.gp96.ps.gz [accessed 2008-09-02]. See
also [2238].
2240. Mohammad Adil Qureshi. The Evolution of Agents. PhD thesis, University College Lon-
don (UCL), Department of Computer Science: London, UK, July 2001, Jon Crowcroft,
Supervisor. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/
qureshi_thesis.html [accessed 2009-09-02]. CiteSeerx : 10.1.1.4.3698. See also [2238,
2239].
2241. Nicholas J. Radcliffe. Equivalence Class Analysis of Genetic Algorithms. Complex Sys-
tems, 5(2):183–205, 1991, Complex Systems Publications, Inc.: Champaign, IL, USA.
Fully available at ftp://ftp.epcc.ed.ac.uk/pub/tr/90/tr9003.ps.Z [accessed 2010-07-26]. CiteSeerx : 10.1.1.7.2988.
2242. Nicholas J. Radcliffe. The Algebra of Genetic Algorithms. Technical Report TR92-11, Uni-
versity of Edinburgh, Edinburgh Parallel Computing Centre (EPCC): Edinburgh, Scotland,
UK, 1992. Fully available at ftp://ftp.epcc.ed.ac.uk/pub/tr/92/tr9211.ps.Z [ac-
cessed 2010-07-26]. See also [2244].
2243. Nicholas J. Radcliffe. Non-Linear Genetic Representations. In PPSN II [1827], pages 259–
268, 1992. Fully available at http://users.breathe.com/njr/papers/ppsn92.pdf [accessed 2010-07-26]. CiteSeerx : 10.1.1.7.1530.
2244. Nicholas J. Radcliffe. The Algebra of Genetic Algorithms. Annals of Mathematics and Arti-
ficial Intelligence, 10(4):339–384, December 1994, Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1007/BF01531276. Fully available at http://stochasticsolutions.com/
pdf/amai94.pdf [accessed 2010-07-27]. CiteSeerx : 10.1.1.21.9675, 10.1.1.51.2195, and
10.1.1.8.2450. See also [2242].
2245. Nicholas J. Radcliffe and Patrick David Surry. Formal Memetic Algorithms. In AISB’94 [936],
pages 1–16, 1994. doi: 10.1007/3-540-58483-8_1. CiteSeerx : 10.1.1.38.9885.
2246. Nicholas J. Radcliffe and Patrick David Surry. Fitness Variance of Formae and Performance
Prediction. In FOGA 3 [2932], pages 51–72, 1994. Fully available at http://users.
breathe.com/njr/papers/foga94.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.40.4508
and 10.1.1.57.635.
2247. Nicholas J. Radcliffe and Patrick David Surry. Fundamental Limitations on Search Al-
gorithms: Evolutionary Computing in Perspective. In Computer Science Today – Recent
Trends and Developments [2777], pages 275–291. Springer-Verlag GmbH: Berlin, Germany,
1995. doi: 10.1007/BFb0015249. CiteSeerx : 10.1.1.49.2376.
2248. Sabine Radke and Deutsches Institut für Wirtschaftsforschung (DIW): Berlin, Germany.
Verkehr in Zahlen 2006/2007. Deutscher Verkehrs-Verlag GmbH: Hamburg, Germany
and Bundesministerium für Verkehr, Bau- und Stadtentwicklung: Berlin, Germany, 2006.
isbn: 3-87154-349-7. OCLC: 255962850. GBV-Identification (PPN): 526782250.
2249. R. Raghuram. Computer Simulation of Electronic Circuits. New Age International (P)
Ltd., Publishers: New Delhi, India, 1989–August 2007. isbn: 8122401112. Google Books
ID: kVs2liYNir0C.
2250. Yahya Rahmat-Samii and Eric Michielssen, editors. Electromagnetic Optimization by Genetic
Algorithms, Wiley Series in Microwave and Optical Engineering. John Wiley & Sons Ltd.:
New York, NY, USA, June 1999. isbn: 0-471-29545-0.
2251. Günther R. Raidl. A Hybrid GP Approach for Numerically Robust Symbolic Regres-
sion. In GP’98 [1612], pages 323–328, 1998. Fully available at http://www.ads.
tuwien.ac.at/publications/bib/pdf/raidl-98c.pdf and http://www.cs.
bham.ac.uk/˜wbl/biblio/gp-html/raidl_1998_hGPnrsr.html [accessed 2009-09-09].
CiteSeerx : 10.1.1.20.6555.
2252. Günther R. Raidl and Jens Gottlieb, editors. Proceedings of the 5th European Conference on
Evolutionary Computation in Combinatorial Optimization (EvoCOP’05), March 30–April 1,
2005, Lausanne, Switzerland, volume 3448/2005 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-25337-8.
2253. Günther R. Raidl, Jean-Arcady Meyer, Martin Middendorf, Stefano Cagnoni, Juan
Jesús Romero Cardalda, David Wolfe Corne, Jens Gottlieb, Agnès Guillot, Emma Hart,
Colin G. Johnson, and Elena Marchiori, editors. Applications of Evolutionary Computing,
Proceedings of EvoWorkshop 2003: EvoBIO, EvoCOP, EvoIASP, EvoMUSART, EvoROB,
and EvoSTIM (EvoWorkshop’03), April 14–16, 2003, University of Essex: Wivenhoe Park,
Colchester, Essex, UK, volume 2611/2003 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-00976-0.
2254. Günther R. Raidl, Stefano Cagnoni, Jürgen Branke, David Wolfe Corne, Rolf Drech-
sler, Yaochu Jin, Colin G. Johnson, Penousal Machado, Elena Marchiori, Franz Roth-
lauf, George D. Smith, and Giovanni Squillero, editors. Applications of Evolution-
ary Computing – Proceedings of EvoWorkshops 2004: EvoBIO, EvoCOMNET, EvoHOT,
EvoIASP, EvoMUSART, and EvoSTOC (EvoWorkshops’04), April 5–7, 2004, Coimbra, Por-
tugal, volume 3005/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/b96500. isbn: 3-540-21378-3. Google Books
ID: 1K9iHdLz0BQC. OCLC: 54880715 and 249136935. Library of Congress Control
Number (LCCN): 2004102415.
2255. Tapani Raiko, Pentti Haikonen, and Jaakko Väyrynen, editors. AI and Machine Con-
sciousness – Proceedings of the 13th Finnish Artificial Intelligence Conference (STeP’08),
August 20–22, 2008, Helsinki University of Technology: Espoo, Finland, volume 24
in Publications of the Finnish Artificial Intelligence Society. Nokia Research Center:
Helsinki, Finland. Fully available at http://www.stes.fi/step2008/proceedings/
step2008proceedings.pdf [accessed 2009-10-25].
2256. Albert Rainer. Web Service Composition using Answer Set Programming.
In PuK’05 [2409], 2005. Fully available at http://www-is.informatik.
uni-oldenburg.de/˜sauer/puk2005/paper/4_puk2005_rainer.pdf [accessed 2011-01-11]. CiteSeerx : 10.1.1.136.1048.
2257. Albert Rainer and Jürgen Dorn. MOVE: A Generic Service Composition Frame-
work for Service Oriented Architectures. In CEC’09 [1245], pages 503–506, 2009.
doi: 10.1109/CEC.2009.56. INSPEC Accession Number: 10839130. See also [822, 823, 1571].
2258. Hassan Rajaei, Gabriel Wainer, and Michael Chinni, editors. Proceedings of the 2008 Spring
Simulation Multiconference (SpringSim’08), April 14–17, 2008, Crowne Plaza Ottawa Hotel:
Ottawa, ON, Canada, volume 40 in Simulation Series. Society for Computer Simulation, Inter-
national: San Diego, CA, USA. isbn: 1-56555-319-5. Google Books ID: Y5juPgAACAAJ.
OCLC: 316331262.
2259. Anca L. Ralescu, editor. Fuzzy Logic in Artificial Intelligence, Workshop Proceedings (IJ-
CAI’93 Fuzzy WS), August 28, 1993, Chambéry, France, volume 847 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-58409-9 and
3-540-58409-9. OCLC: 31010125, 246325420, 311898734, 422861463, 473142458,
474000977, 497538589, and 502373916. GBV-Identification (PPN): 165340169,
272032549, and 595124968. See also [182, 183, 927].
2260. Anca L. Ralescu and James G. Shanahan, editors. Fuzzy Logic in Artificial Intelligence,
Selected and Invited Workshop Papers (IJCAI’97 Fuzzy WS), August 23–24, 1997, Nagoya,
Japan, volume 1566 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-66374-6.
GBV-Identification (PPN): 300375778 and 534418899. See also [1946, 1947].
2261. Theodore K. Ralphs. Vehicle Routing Data Sets. COmputational INfrastructure for Opera-
tions Research (COIN-OR Foundation, Inc.), October 3, 2003. Fully available at http://
www.coin-or.org/SYMPHONY/branchandcut/VRP/data/ [accessed 2008-10-27].
2262. Theodore K. Ralphs, L. Kopman, William R. Pulleyblank, and L. E. Trotter. On the Capaci-
tated Vehicle Routing Problem. Mathematical Programming, 94(2–3):343–359, January 2003,
Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s10107-002-0323-0.
2263. Proceedings of the 2005 International Conference on Machine Learning and Cybernetics
(ICMLC’05), August 18–21, 2005, Ramada Pearl Hotel: Guǎngzhōu, Guǎngdōng, China.
isbn: 0-7803-9091-1.
2264. Krishna Raman, Yue Zhang, Mark Panahi, and Kwei-Jay Lin. Customizable Business
Process Composition with Query Optimization. In CEC/EEE’08 [1346], pages 363–366,
2008. doi: 10.1109/CECandEEE.2008.152. INSPEC Accession Number: 10475112. See also
[204, 3067, 3073, 3074].
2265. Valliappan Ramasamy. Syntactical & Semantical Web Services Discovery And Composition.
In CEC/EEE’06 [2967], pages 439–441, 2006. doi: 10.1109/CEC-EEE.2006.86. INSPEC
Accession Number: 9189343. See also [318].
2266. Federico Ramírez and Olac Fuentes. Spectral Analysis Using Evolution Strategies. In
ASC’02 [1714], pages 357–391, 2002. Fully available at http://www.cs.utep.edu/
ofuentes/ASC02_RF.pdf [accessed 2009-09-18].
2267. Soraya Rana. Examining the Role of Local Optima and Schema Processing in Genetic
Search. PhD thesis, Colorado State University, Department of Computer Science, GENI-
TOR Research Group in Genetic Algorithms and Evolutionary Computation: Fort Collins,
CO, USA, July 1, 1999. Fully available at http://www.cs.colostate.edu/˜genitor/
dissertations/rana.ps.gz [accessed 2010-07-23]. CiteSeerx : 10.1.1.18.2879.
2268. William Rand, Sevan G. Ficici, and Rick L. Riolo, editors. Evolutionary Computation and
Multi-Agent Systems and Simulation Workshop (ECoMASS’08), July 12, 2008, Renaissance
Atlanta Hotel Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of
[1519].
2269. Alain T. Rappaport and Reid G. Smith, editors. Proceedings of the The Second Confer-
ence on Innovative Applications of Artificial Intelligence (IAAI’90), May 1–3, 1990, Wash-
ington, DC, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-68068-8. Partly
available at http://www.aaai.org/Conferences/IAAI/iaai90.php [accessed 2007-09-06].
Published in 1991.
2270. Khaled Rasheed and Brian D. Davison. Effect of Global Parallelism on the Behavior of a
Steady State Genetic Algorithm for Design Optimization. In CEC’99 [110], pages 534–541,
1999. Fully available at http://www.cse.lehigh.edu/˜brian/pubs/1999/cec99/
pgado.cec99.pdf [accessed 2010-08-01]. CiteSeerx : 10.1.1.114.2664. See also [698].
2271. Reza Rastegar and Arash Hariri. A Step Forward in Studying the Compact Genetic Algo-
rithm. Evolutionary Computation, 14(3):277–289, Fall 2006, MIT Press: Cambridge, MA,
USA. doi: 10.1162/evco.2006.14.3.277. arXiv ID: 0901.0598v1.
2272. Rajeev Rastogi and Kyuseok Shim. PUBLIC: A Decision Tree Classifier that Integrates
Building and Pruning. In VLDB’98 [1151], pages 404–415, 1998. Fully available at http://
eprints.kfupm.edu.sa/60034/ and http://www.vldb.org/conf/1998/p404.pdf
[accessed 2009-05-11].
2273. Tapabrata Ray, Kang Tai, and Kin Chye Seow. Multiobjective Design Optimization by an
Evolutionary Algorithm. Engineering Optimization, 33(4):399–424, 2001, Taylor and Francis
LLC: London, UK and Informa plc: London, UK. doi: 10.1080/03052150108940926.
2274. Victor J. Rayward-Smith, editor. Applications of Modern Heuristic Methods – Proceedings
of the UNICOM Seminar on Adaptive Computing and Information Processing, January 25–
27, 1994. Alfred Waller Ltd.: Henley-on-Thames, Oxfordshire, UK, Nelson Thornes Ltd.:
Cheltenham, Gloucestershire, UK, and Unicom Seminars Ltd.: Uxbridge, Middlesex, UK.
isbn: 1872474284. Google Books ID: BjOTAAAACAAJ and NmBRAAAAMAAJ.
2275. Victor J. Rayward-Smith. A Unified Approach to Tabu Search, Simulated Annealing and Ge-
netic Algorithms. In Applications of Modern Heuristic Methods – Proceedings of the UNICOM
Seminar on Adaptive Computing and Information Processing [2274], pages 55–78, volume 1,
1994.
2276. Victor J. Rayward-Smith, Ibrahim H. Osman, Colin R. Reeves, and George D. Smith, editors.
Modern Heuristic Search Methods. John Wiley & Sons Ltd.: New York, NY, USA, 1996.
isbn: 0470866500 and 0471962805. Google Books ID: DHFRAAAAMAAJ, Sd4LAAAACAAJ,
and btnFAAAACAAJ.
2277. Ingo Rechenberg. Cybernetic Solution Path of an Experimental Problem. Royal Aircraft
Establishment: Farnborough, Hampshire, UK, August 1965. Library Translation 1122.
Reprinted in [939].
2278. Ingo Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien
der biologischen Evolution, volume 15 in Problemata. PhD thesis, Technische Univer-
sität Berlin: Berlin, Germany, Friedrich Frommann Verlag: Stuttgart, Germany, 1971–
1973. isbn: 3-7728-0373-3 and 3-7728-0374-1. Google Books ID: QcNNGQAACAAJ.
OCLC: 9020616, 462818122, and 500569005. Library of Congress Control Number
(LCCN): 74320689. GBV-Identification (PPN): 024852090. LC Classification: QH371
.R33.
2279. Ingo Rechenberg. Evolutionsstrategie ’94, volume 1 in Werkstatt Bionik und Evolutionstech-
nik. Frommann-Holzboog Verlag: Bad Cannstadt, Stuttgart, Baden-Württemberg, Germany,
1994. isbn: 3-7728-1642-8. Google Books ID: savAAAACAAJ. OCLC: 75424354. GBV-
Identification (PPN): 153251220. Libri-Number: 8263485. See also [2278].
2280. Seventh International Workshop on Learning Classifier Systems (IWLCS’04), June 26, 2004,
Red Lion Hotel: Seattle, WA, USA. Held during GECCO 2004. See also [751, 752, 1589].
2281. R. Reddy, editor. Proceedings of the 5th International Joint Conference on Artificial In-
telligence (IJCAI’77-I), August 22–25, 1977, Massachusetts Institute of Technology (MIT):
Cambridge, MA, USA, volume 1. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-77-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2282].
2282. R. Reddy, editor. Proceedings of the 5th International Joint Conference on Artificial Intel-
ligence (IJCAI’77-II), August 22–25, 1977, Massachusetts Institute of Technology (MIT):
Cambridge, MA, USA, volume 2. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-77-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2281].
2283. Colin R. Reeves, editor. Modern Heuristic Techniques for Combinatorial Problems, Ad-
vanced Topics in Computer Science Series. Blackwell Publishing Ltd: Chichester, West Sus-
sex, UK, April 1993. isbn: 0077092392, 0-470-22079-1, and 0-632-03238-3. Google
Books ID: G6NfQgAACAAJ. OCLC: 26805931. Library of Congress Control Number
(LCCN): 92033335. GBV-Identification (PPN): 121998312 and 17228810X. LC Clas-
sification: QA402.5 .M62 1993. Outgrowth of a conference held at Bangor, North Wales in
1990.
2284. Colin R. Reeves and Christine C. Wright. An Experimental Design Perspective
on Genetic Algorithms. In FOGA 3 [2932], pages 7–22, 1994. Fully avail-
able at http://www.cs.colostate.edu/˜whitley/CS640/walshdesign.ps.gz [accessed 2010-09-14]. CiteSeerx : 10.1.1.54.1692.
2285. Colin R. Reeves and Christine C. Wright. Epistasis in Genetic Algorithms: An Experimental
Design Perspective. In ICGA’95 [883], pages 217–224, 1995. CiteSeerx : 10.1.1.39.29.
2286. Dirk Reichelt, Peter Gmilkowsky, and Sebastian Linser. A Study of an Iterated Local Search
on the Reliable Communication Networks Design Problem. In EvoWorkshops’05 [2340], pages
156–165, 2005.
2287. Christian M. Reidys and Peter F. Stadler. Neutrality in Fitness Landscapes. Jour-
nal of Applied Mathematics and Computation, 117(2–3):321–350, January 25, 2001,
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0096-3003(99)00166-6.
CiteSeerx : 10.1.1.46.962.
2288. Gerhard Reinelt. TSPLIB. Office Research Group Discrete Optimization, Ruprecht Karls
University of Heidelberg: Heidelberg, Germany, 1995. Fully available at http://comopt.
ifi.uni-heidelberg.de/software/TSPLIB95/ [accessed 2010-11-23]. See also [2289, 2290].
2289. Gerhard Reinelt. TSPLIB – A Traveling Salesman Problem Library. ORSA Journal on
Computing, 3(4):376–384, Fall 1991, Operations Research Society of America (ORSA).
doi: 10.1287/ijoc.3.4.376. See also [2288, 2290].
2290. Gerhard Reinelt. TSPLIB 95. Technical Report, Universität Heidelberg, Institut für
Mathematik: Heidelberg, Germany, 1995. Fully available at http://comopt.ifi.
uni-heidelberg.de/software/TSPLIB95/DOC.PS [accessed 2011-09-18]. See also [2288,
2289].
2291. Mark J. Rentmeesters, Wei K. Tsai, and Kwei-Jay Lin. A Theory of Lexi-
cographic Multi-Criteria Optimization. In ICECCS’96 [1327], pages 76–96, 1996.
doi: 10.1109/ICECCS.1996.558386.
2292. Alfréd Rényi. Probability Theory. Dover Publications: Mineola, NY, USA,
May 2007. isbn: 0486458679. Google Books ID: Z9ueAQAACAAJ and yx8IAQAAIAAJ.
OCLC: 77708371.
2293. Proceedings of the Fourth Joint Conference on Information Science (JCIS 1998), Section:
The Second International Workshop on Frontiers in Evolutionary Algorithms (FEA’98), Oc-
tober 24–28, 1998, Research Triangle Park, NC, USA, volume 2. Workshop held in conjunc-
tion with Fourth Joint Conference on Information Sciences.
2294. Mauricio G.C. Resende, Jorge Pinho de Sousa, and Ana Viana, editors. 4th Metaheuristics
International Conference – Metaheuristics: Computer Decision-Making (MIC’01), July 16–
20, 2001, Porto, Portugal, volume 86 in Applied Optimization. Springer-Verlag GmbH:
Berlin, Germany. isbn: 1-4020-7653-3. Partly available at http://paginas.fe.up.
pt/˜gauti/MIC2001/ [accessed 2007-09-12]. OCLC: 53123574 and 639072881. Library of
Congress Control Number (LCCN): 2003061990. Published on November 30, 2003.
2295. Bernd Reusch, editor. International Conference on Computational Intelligence: The-
ory and Applications – 6th Fuzzy Days, May 25–28, 1999, Dortmund, North Rhine-
Westphalia, Germany, volume 1625/1999 in Lecture Notes in Computer Science (LNCS).
International Biometric Society (IBS): Washington, DC, USA. doi: 10.1007/3-540-48774-3.
isbn: 3-540-66050-X. Google Books ID: KQm0joXBr8IC. OCLC: 41380465 and
209204083.
2296. Proceedings of the IJCAI-99 Workshop on Intelligent Information Integration (IJCAI’99
WS), July 31, 1999, City Conference Center: Stockholm, Sweden, volume 23 in CEUR Work-
shop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-23/ [accessed 2008-04-01]. Held in conjunction with the Sixteenth International Joint Con-
ference on Artificial Intelligence. See also [730, 731].
2297. Bernardete Ribeiro, Rudolf F. Albrecht, Andrej Dobnikar, David W. Pearson, and Nigel C.
Steele, editors. Proceedings of the 7th International Conference on Adaptive and Natural
Computing Algorithms (ICANNGA’05), March 21–23, 2005, University of Coimbra: Coimbra,
Portugal. Springer Verlag GmbH: Vienna, Austria. Partly available at http://icannga05.
dei.uc.pt/ [accessed 2007-08-31].
2298. Celso C. Ribeiro and Pierre Hansen, editors. 3rd Metaheuristics International Conference
– Essays and Surveys in Metaheuristics (MIC’99), July 19–23, 1999, Angra dos Reis, Rio
de Janeiro, Brazil, volume 15 in Operations Research/Computer Science Interfaces Series.
Springer-Verlag GmbH: Berlin, Germany. isbn: 0-7923-7520-3. Published in 2001.
2299. James P. Rice. Mathematical Statistics and Data Analysis, Discussion Paper Series In Eco-
nomics And Econometrics. Academic Internet Publishers, Inc. (AIPI): www.cram101.com,
University of Southampton, 1995–April 2006. isbn: 0495110892, 0534209343, and
0534399428. Google Books ID: BXQOAAAACAAJ, bIkQAQAAIAAJ, and bbCPAAAACAAJ.
OCLC: 71364578 and 315216152.
2300. Judith Rich Harris. Parental Selection: A Third Selection Process in the Evolution of Human
Hairlessness and Skin Color. Medical Hypotheses, 66(6):1053–1059, 2006, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.mehy.2006.01.027. PubMed ID: 16527428.
2301. Sebastian Richly, Georg Püschel, Dirk Habich, and Sebastian Götz. MapReduce for Scalable
Neural Nets Training. In SERVICES Cup’10 [2457], pages 99–106, 2010. doi: 10.1109/SERVICES.2010.36. INSPEC Accession Number: 11535225.
2302. Hendrik Richter. Behavior of Evolutionary Algorithms in Chaotically Changing Fitness Land-
scapes. In PPSN VIII [3028], pages 111–120, 2004. doi: 10.1007/b100601.
2303. Mark Ridley. Evolution. Blackwell Publishing Ltd: Chichester, West Sussex, UK, 8th edi-
tion, 2003. isbn: 0192892878, 0-632-03481-5, 0-86542-226-5, and 1-4051-0345-0.
Google Books ID: HCh7n6zuWeUC, WU wAAAAMAAJ, b-HGB9PqXCUC, xPAPgcZflzYC,
and ysddSQAACAAJ. GBV-Identification (PPN): 076515354 and 126473129. Libri-
Number: 3190331.
2304. John Riedl and Randall W. Hill Jr., editors. Proceedings of the Fifteenth Conference
on Innovative Applications of Artificial Intelligence (IAAI’03), August 12–14, 2003, Aca-
pulco, México. isbn: 1-57735-188-6. Partly available at http://www.aaai.org/
Conferences/IAAI/iaai03.php [accessed 2007-09-06].
2305. Rupert J. Riedl. A Systems-Analytical Approach to Macro-Evolutionary Phenomena. Quar-
terly Review of Biology (QRB), Chicago Journals, 52(4):351–370, December 1977, Stony
Brook University: Stony Brook, NY, USA. doi: 10.1086/410123.
2306. Rick L. Riolo and Bill Worzel, editors. Genetic Programming Theory and Practice, Pro-
ceedings of the Genetic Programming Theory Practice 2003 Workshop (GPTP’03), May 15–
17, 2003, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann
Arbor, MI, USA. Kluwer Publishers: Boston, MA, USA and Springer New York: New
York, NY, USA. isbn: 1402075812. Partly available at http://www.cscs.umich.edu/
gptp-workshops/gptp2003/ [accessed 2007-09-28].
2307. Rick L. Riolo, Terence Soule, and Bill Worzel, editors. Genetic Programming Theory
and Practice IV, Proceedings of the Genetic Programming Theory Practice 2006 Workshop
(GPTP’06), May 11–13, 2006, University of Michigan, Center for the Study of Complex
Systems (CSCS): Ann Arbor, MI, USA. Springer-Verlag GmbH: Berlin, Germany. Partly
available at http://www.cscs.umich.edu/gptp-workshops/gptp2006/ [accessed 2007-
09-28].
2308. Rick L. Riolo, Terence Soule, and Bill Worzel, editors. Genetic Programming Theory
and Practice VI, Proceedings of the Sixth Workshop on Genetic Programming (GPTP’08),
May 15–17, 2008, University of Michigan, Center for the Study of Complex Systems (CSCS):
Ann Arbor, MI, USA, Genetic and Evolutionary Computation. Springer US: Boston, MA,
USA and Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1007/978-0-387-87623-8.
Library of Congress Control Number (LCCN): 2008936134.
2309. Rick L. Riolo, Una-May O’Reilly, and Trent McConaghy, editors. Genetic Programming
Theory and Practice VII, Proceedings of the Seventh Workshop on Genetic Programming
(GPTP’09), May 14–16, 2009, University of Michigan, Center for the Study of Com-
plex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolutionary Computation.
Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1007/978-1-4419-1626-6. Library of Congress Control Number (LCCN): 2009939939.
2310. Rick L. Riolo, Trent McConaghy, and Ekaterina Vladislavleva, editors. Genetic Pro-
gramming Theory and Practice VIII, Proceedings of the Eighth Workshop on Genetic Pro-
gramming (GPTP’10), May 20–22, 2010, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA, volume 8 in Genetic and Evolutionary
Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Nor-
well, MA, USA. doi: 10.1007/978-1-4419-7747-2. Library of Congress Control Number
(LCCN): 20100938320.
2311. Jose L. Risco-Martín, Francisco Fernández de Vega, and Juan Lanchares, editors. Third
Workshop on Parallel Architectures and Bioinspired Algorithms (WPBA’10), September 11–
12, 2010, Vienna, Austria.
2312. Irina Rish. An Empirical Study of the Naive Bayes Classifier. In IJCAI’01 [2012], pages 41–
46, 2001. Fully available at http://www.cc.gatech.edu/˜isbell/classes/reading/
papers/Rish.pdf [accessed 2007-08-11].
2313. Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of
Mathematical Statistics, 22(3):400–407, September 1951, Institute of Mathematical Statis-
tics: Beachwood, OH, USA. doi: 10.1214/aoms/1177729586. Fully available at http://
projecteuclid.org/euclid.aoms/1177729586 [accessed 2008-10-11]. Zentralblatt MATH
identifier: 0054.05901. Mathematical Reviews number (MathSciNet): MR42668.
2314. Luı́s Mateus Rocha, Larry S. Yaeger, Mark A. Bedau, Dario Floreano, Robert L. Gold-
stone, and Alessandro Vespignani, editors. ALIFE X: Proceedings of the Tenth Interna-
tional Conference on the Simulation and Synthesis of Living Systems (ALIFE’06), June 3–
7, 2006, Bloomington, IN, USA, Bradford Books. MIT Press: Cambridge, MA, USA.
isbn: 0-262-68162-5. Library of Congress Control Number (LCCN): 2006922394. GBV-
Identification (PPN): 516051768. LC Classification: QH324.2 .I556 2006.
2315. Miguel Rocha, Pedro Sousa, Paulo Cortez, and Miguel Rio. Evolutionary Computation for
Quality of Service Internet Routing Optimization. In EvoWorkshops’07 [1050], pages 71–80,
2007. doi: 10.1007/978-3-540-71805-5 8.
2316. Alex Rogers and Adam Prügel-Bennett. Modelling the Dynamics of a Steady-State Genetic
Algorithm. In FOGA’98 [208], pages 57–68, 1998. Fully available at http://eprints.
ecs.soton.ac.uk/451/ [accessed 2010-08-01]. CiteSeerx : 10.1.1.28.1226.
2317. Daniel S. Rokhsar, Philip Warren Anderson, and Daniel L. Steingart. Self-Organization in
Prebiological Systems: Simulations of a Model for the Origin of Genetic Information. Journal
of Molecular Evolution, 23(2):119–126, June 1986, Springer New York: New York, NY, USA,
Martin Kreitman, editor. doi: 10.1007/BF02099906.
2318. Learning and Intelligent OptimizatioN (LION 5), January 17–21, 2011, Rome, Italy.
2319. Cristóbal Romero and Sebastián Ventura. Educational Data Mining: A Survey From 1995 to
2005. Expert Systems with Applications – An International Journal, 33(1):135–146, July 2007,
Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.eswa.2006.04.005.
2320. Juan Romero and Penousal Machado, editors. The Art of Artificial Evolution: A Handbook
on Evolutionary Art and Music, Natural Computing Series. Springer New York: New York,
NY, USA, November 2007. doi: 10.1007/978-3-540-72877-1. Library of Congress Control
Number (LCCN): 2007937632.
2321. Simon Ronald. Preventing Diversity Loss in a Routing Genetic Algorithm with Hash Tag-
ging. Complexity International, 2(2), April 1995, Monash University, Faculty of Information
Technology: Victoria, Australia. Fully available at http://www.complexity.org.au/
ci/vol02/sr_hash/ [accessed 2009-07-09].
2322. Simon Ronald. Genetic Algorithms and Permutation-Encoded Problems. Diversity Preser-
vation and a Study of Multimodality. PhD thesis, University of South Australia (UniSA),
Department of Computer and Information Science: Mawson Lakes, SA, Australia, 1996.
2323. Simon Ronald. Robust Encodings in Genetic Algorithms: A Survey of Encoding Issues. In
CEC’97 [173], pages 43–48, 1997. doi: 10.1109/ICEC.1997.592265.
2324. Simon Ronald, John Asenstorfer, and Millist Vincent. Representational Redundancy
in Evolutionary Algorithms. In CEC’95 [1361], pages 631–637, volume 2, 1995.
doi: 10.1109/ICEC.1995.487457.
2325. Agostinho Rosa, editor. Proceedings of the International Conference on Evolutionary Com-
putation (ICEC’09), 2009, Madeira, Portugal. Part of [1809].
2326. Justinian P. Rosca. Proceedings of the Workshop on Genetic Programming: From Theory to
Real-World Applications. Technical Report 95.2, University of Rochester, National Resource
Laboratory for the Study of Brain and Behavior: Rochester, NY, USA, Morgan Kaufmann:
San Mateo, CA, USA, July 9, 1995, Tahoe City, CA, USA. Held in conjunction with the
twelfth International Conference on Machine Learning.
2327. Justinian P. Rosca. An Analysis of Hierarchical Genetic Programming. Technical Re-
port TR566, University of Rochester, Computer Science Department: Rochester, NY, USA,
March 1995. Fully available at ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/95.
tr566.ps.gz [accessed 2009-07-10]. CiteSeerx : 10.1.1.26.2552.
2328. Justinian P. Rosca. Generality versus Size in Genetic Programming. In GP’96 [1609],
pages 381–387, 1996. Fully available at ftp://ftp.cs.rochester.edu/pub/u/rosca/
gp/96.gp.ps.gz and http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/rosca_
1996_gVsGP.html [accessed 2011-11-23]. CiteSeerx : 10.1.1.54.2363.
2329. Justinian P. Rosca and Dana H. Ballard. Causality in Genetic Programming. In
ICGA’95 [883], pages 256–263, 1995. CiteSeerx : 10.1.1.26.2503.
2330. Florian Rosenberg, Christoph Nagl, and Schahram Dustdar. Applying Distributed Busi-
ness Rules – The VIDRE Approach. In SCContest’06 [554], pages 471–478, 2006.
doi: 10.1109/SCC.2006.22. INSPEC Accession Number: 9165411.
2331. Richard S. Rosenberg. Simulation of Genetic Populations with Biochemical Properties. PhD
thesis, University of Michigan: Ann Arbor, MI, USA, June 1967. Fully available at http://
deepblue.lib.umich.edu/handle/2027.42/7321 [accessed 2010-08-03]. University Micro-
films No.: UMR3370. ID: bad1552.0001.001. ORA Project 08333.
2332. Jay S. Rosenblatt, Robert A. Hinde, Colin Beer, and Marie-Claire Busnel, editors. Advances
in the Study of Behavior, volume 12 in Advances in the Study of Behavior. Academic Press
Professional, Inc.: San Diego, CA, USA. isbn: 0-12-004512-5. OCLC: 646758333. GBV-
Identification (PPN): 011437138.
2333. Howard H. Rosenbrock. An Automatic Method for Finding the Greatest or Least Value
of a Function. The Computer Journal, Oxford Journals, 3(3):175–184, March 1960, British
Computer Society: Swindon, UK. doi: 10.1093/comjnl/3.3.175.
2334. Paul L. Rosin and Freddy Fierens. Improving Neural Network Generalisation. In
IGARSS’95 [2609], pages 1255–1257, volume 2, 1995. doi: 10.1109/IGARSS.1995.521718.
Fully available at http://users.cs.cf.ac.uk/Paul.Rosin/resources/papers/
overfitting.pdf [accessed 2009-07-12]. CiteSeerx : 10.1.1.48.3136. INSPEC Accession
Number: 5112834.
2335. Claudio Rossi, Elena Marchiori, and Joost N. Kok. An Adaptive Evolutionary Algo-
rithm for the Satisfiability Problem. In SAC’00 [23], pages 463–469, volume 1, 2000.
doi: 10.1145/335603.335912. CiteSeerx : 10.1.1.37.4771. See also [1117].
2336. Gerald P. Roston. A Genetic Methodology for Configuration Design, CMU-RI-TR-94-42. PhD
thesis, Carnegie Mellon University (CMU), Department of Mechanical Engineering: Pitts-
burgh, PA, USA, December 1994, Robert Sturges, Jr. and William “Red” Whittaker, Advi-
sors. Fully available at http://www.ri.cmu.edu/pubs/pub_3335.html [accessed 2010-08-19].
CiteSeerx : 10.1.1.152.7521.
2337. Franz Rothlauf, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO’05 LBP), June 25–29, 2005, Loews L’Enfant Plaza Hotel: Washington,
DC, USA. Also distributed on CD-ROM at GECCO-2005. See also [304, 2339].
2338. Franz Rothlauf. Representations for Genetic and Evolutionary Algorithms, volume 104 in
Studies in Fuzziness and Soft Computing. Physica-Verlag GmbH & Co.: Heidelberg, Ger-
many, 2nd edition, 2002–2006. isbn: 3-540-25059-X and 3790814962. Google Books
ID: fQrSUwop4JkC, rNxNAAAACAAJ, and tHYlQfvsP6cC. Foreword by David E. Gold-
berg.
2339. Franz Rothlauf, Misty Blowers, Jürgen Branke, Stefano Cagnoni, Ivan I. Garibay, Ozlem
Garibay, Jörn Grahl, Gregory S. Hornby, Edwin D. de Jong, Tim Kovacs, Sanjeev Kumar,
Cláudio F. Lima, Xavier Llorà, Fernando G. Lobo, Laurence D. Merkle, Julian Francis Miller,
Jason H. Moore, Michael O’Neill, Martin Pelikan, Terry P. Riopka, Marylyn D. Ritchie, Ku-
mara Sastry, Stephen L. Smith, Hal Stringer, Keiki Takadama, Marc Toussaint, Stephen C.
Upton, Alden H. Wright, and Shengxiang Yang, editors. Proceedings of the 2005 Work-
shops on Genetic and Evolutionary Computation (GECCO’05 WS), June 25–26, 2005, Loews
L’Enfant Plaza Hotel: Washington, DC, USA. ACM Press: New York, NY, USA. See also
[304, 2337].
2340. Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, David Wolfe Corne, Rolf Drech-
sler, Yaochu Jin, Penousal Machado, Elena Marchiori, Juan Romero, George D. Smith,
and Giovanni Squillero, editors. Applications of Evolutionary Computing, Proceedings
of EvoWorkshops 2005 – EvoBIO, EvoCOMNET, EvoHOT, EvoIASP, EvoMUSART,
and EvoSTOC (EvoWorkshops’05), March 30–April 1, 2005, Lausanne, Switzerland,
volume 3449/2005 in Theoretical Computer Science and General Issues (SL 1), Lec-
ture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b106856. isbn: 3-540-25396-3. Google Books ID: -4dQAAAAMAAJ and
3X86XlOROyMC. OCLC: 59009564, 59223261, 145832358, and 254368497. Library
of Congress Control Number (LCCN): 2005922824.
2341. Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, Ernesto Jorge Fernandes Costa, Carlos
Cotta, Rolf Drechsler, Evelyne Lutton, Penousal Machado, Jason H. Moore, Juan Romero,
George D. Smith, Giovanni Squillero, and Hideyuki Takagi, editors. Applications of Evo-
lutionary Computing – Proceedings of EvoWorkshops 2006: EvoBIO, EvoCOMNET, Evo-
HOT, EvoIASP, EvoINTERACTION, EvoMUSART, and EvoSTOC (EvoWorkshops’06),
April 10–12, 2006, Budapest, Hungary, volume 3907/2006 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/11732242. isbn: 3-540-33237-5. Google Books
ID: S4VQAAAAMAAJ. OCLC: 65216918, 67831340, 145829509, 150398193, 181551989,
and 316263582. Library of Congress Control Number (LCCN): 2006922618.
2342. Franz Rothlauf, Günther R. Raidl, Anna Isabel Esparcia-Alcázar, Ying-Ping Chen, Gabriela
Ochoa, Ender Ozcan, Marc Schoenauer, Anne Auger, Hans-Georg Beyer, Nikolaus Hansen,
Steffen Finck, Raymond Ros, L. Darrell Whitley, Garnett Wilson, Simon Harding, William B.
Langdon, Man Leung Wong, Laurence D. Merkle, Frank W. Moore, Sevan G. Ficici, William
Rand, Rick L. Riolo, Nawwaf Kharma, William R. Buckley, Julian Francis Miller, Ken-
neth Owen Stanley, Jaume Bacardit i Peñarroya, Will N. Browne, Jan Drugowitsch, Nicola
Beume, Mike Preuß, Stephen Frederick Smith, Stefano Cagnoni, Alexandru Floares, Aaron
Baughman, Steven Matt Gustafson, Maarten Keijzer, Arthur Kordon, and Clare Bates
Congdon, editors. Proceedings of the 11th Annual Conference on Genetic and Evolution-
ary Computation (GECCO’09-I), July 8–12, 2009, Delta Centre-Ville Hotel: Montréal, QC,
Canada. Association for Computing Machinery (ACM): New York, NY, USA. Partly avail-
able at http://www.sigevo.org/gec-summit-2009/instructions-papers.html
[accessed 2011-02-28]. See also [2343].
2343. Franz Rothlauf, Günther R. Raidl, Anna Isabel Esparcia-Alcázar, Ying-Ping Chen, Gabriela
Ochoa, Ender Ozcan, Marc Schoenauer, Anne Auger, Hans-Georg Beyer, Nikolaus Hansen,
Steffen Finck, Raymond Ros, L. Darrell Whitley, Garnett Wilson, Simon Harding, William B.
Langdon, Man Leung Wong, Laurence D. Merkle, Frank W. Moore, Sevan G. Ficici, William
Rand, Rick L. Riolo, Nawwaf Kharma, William R. Buckley, Julian Francis Miller, Ken-
neth Owen Stanley, Jaume Bacardit i Peñarroya, Will N. Browne, Jan Drugowitsch, Nicola
Beume, Mike Preuß, Stephen L. Smith, Stefano Cagnoni, Alexandru Floares, Aaron Baugh-
man, Steven Matt Gustafson, Maarten Keijzer, Arthur Kordon, and Clare Bates Congdon,
editors. Proceedings of the 11th Annual Conference – Companion on Genetic and Evolu-
tionary Computation Conference (GECCO’09-II), July 8–12, 2009, Delta Centre-Ville Hotel:
Montréal, QC, Canada. Association for Computing Machinery (ACM): New York, NY, USA.
ACM Order No.: 910092. See also [2342].
2344. Rick D. Routledge. Diversity Indices: Which ones are admissible? Journal of Theoreti-
cal Biology, 76(4):503–515, February 21, 1979, Elsevier Science Publishers B.V.: Amster-
dam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA.
doi: 10.1016/0022-5193(79)90015-8.
2345. J. P. Royston. Some Techniques for Assessing Multivariate Normality Based on the Shapiro-
Wilk W. Journal of the Royal Statistical Society: Series C – Applied Statistics, 32(2):121–133,
1983, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
2346. Stefan Rudlof and Mario Köppen. Stochastic Hill Climbing with Learning by Vectors of
Normal Distribution. In WSC1 [1001], pages 60–70, 1996. Fully available at http://
eprints.kfupm.edu.sa/66958/1/66958.pdf [accessed 2009-08-21]. Second corrected and
enhanced version, 1997-09-03.
2347. William Michael Rudnick. Genetic Algorithms and Fitness Variance with an Application to
the Automated Design of Artificial Neural Networks. PhD thesis, Oregon Graduate Institute
of Science & Technology: Beaverton, OR, USA, 1992. UMI Order No.: GAX92-22642.
2348. Günter Rudolph. An Evolutionary Algorithm for Integer Programming. In PPSN
III [693], pages 139–148, 1994. doi: 10.1007/3-540-58484-6 258. Fully avail-
able at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/
papers/PPSN94.pdf [accessed 2010-08-12]. CiteSeerx : 10.1.1.40.5981.
2349. Günter Rudolph. How Mutation and Selection Solve Long-Path Problems in Polynomial
Expected Time. Evolutionary Computation, 4(2):195–205, Summer 1996, MIT Press: Cam-
bridge, MA, USA. doi: 10.1162/evco.1996.4.2.195. CiteSeerx : 10.1.1.42.2612.
2350. Günter Rudolph. Convergence Properties of Evolutionary Algorithms, volume 35 in
Forschungsergebnisse zur Informatik. PhD thesis, Universität Dortmund: Dortmund,
North Rhine-Westphalia, Germany, 1996. isbn: 3-86064-554-4. Google Books
ID: V9f9AAAACAAJ. Published 1997.
2351. Günter Rudolph. Self-Adaptation and Global Convergence: A Counter-Example. In
CEC’99 [110], pages 646–651, volume 1, 1999. doi: 10.1109/CEC.1999.781994. Fully avail-
able at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/
papers/CEC99.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.36.3755 and 10.1.1.56.9366.
2352. Günter Rudolph. Self-Adaptive Mutations may lead to Premature Convergence. IEEE Trans-
actions on Evolutionary Computation (IEEE-EC), 5(4):410–414, 2001, IEEE Computer So-
ciety: Washington, DC, USA. doi: 10.1109/4235.942534. See also [2353].
2353. Günter Rudolph. Self-Adaptive Mutations may lead to Premature Convergence. Technical
Report CI–73/99, Universität Dortmund, Fachbereich Informatik: Dortmund, North
Rhine-Westphalia, Germany, April 30, 2001. Fully available at http://ls11-www.
cs.uni-dortmund.de/people/rudolph/publications/papers/tec335.pdf [accessed 2009-07-10].
CiteSeerx : 10.1.1.28.8514 and 10.1.1.36.5633. See also [2352].
2354. Günter Rudolph, Simon M. Lucas, and Carlo Poloni. Proceedings of 10th International
Conference on Parallel Problem Solving from Nature (PPSN X), September 13–17, 2008,
Dortmund, North Rhine-Westphalia, Germany, volume 5199/2008 in Theoretical Computer
Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-87700-4. isbn: 3-540-87699-5.
Google Books ID: OO-5OgAACAAJ and PPxMVCKTbtoC. Library of Congress Control Number
(LCCN): 2008934679.
2355. Thomas Philip Runarsson, Hans-Georg Beyer, Edmund K. Burke, Juan Julián Merelo-
Guervós, L. Darrell Whitley, and Xin Yao, editors. Proceedings of 9th International Con-
ference on Parallel Problem Solving from Nature (PPSN IX), September 9–13, 2006, Uni-
versity of Iceland: Reykjavik, Iceland, volume 4193/2006 in Theoretical Computer Science
and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/11844297. isbn: 3-540-38990-3. Google Books
ID: L2QFAQAACAAJ. OCLC: 71747790, 255610906, 315830350, and 318289018. Li-
brary of Congress Control Number (LCCN): 2006931845.
2356. Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA).
Prentice Hall International Inc.: Upper Saddle River, NJ, USA and Pearson Education:
Upper Saddle River, NJ, USA, 2nd edition, December 20, 2002. isbn: 0-13-080302-2,
0-13-790395-2, and 8120323823. Google Books ID: 0GldGwAACAAJ, 5WfMAQAACAAJ,
5mfMAQAACAAJ, and DvqIIwAACAAJ.
2357. Tomas Ruzgas, Rimantas Rudzkis, and Mindaugas Kavaliauskas. Application of Clustering in
the Non-Parametric Estimation of Distribution Density. Nonlinear Analysis: Modelling and
Control (NLAMC), 11(4):393–411, November 4, 2006, Lithuanian Association of Nonlinear
Analysts (LANA): Vilnius, Lithuania. Fully available at http://www.lana.lt/journal/
23/Ruzgas.pdf [accessed 2009-08-28].
2358. Conor Ryan, J. J. Collins, and Michael O’Neill. Grammatical Evolution: Evolving Pro-
grams for an Arbitrary Language. In EuroGP’98 [210], pages 83–95, 1998. Fully available
at http://www.grammatical-evolution.org/papers/eurogp98.ps [accessed 2009-07-04].
CiteSeerx : 10.1.1.38.7697.
2359. Conor Ryan, Michael O’Neill, and J. J. Collins. Grammatical Evolution: Solving Trigono-
metric Identities. In MENDEL’98 [417], pages 111–119, 1998. Fully available at http://
www.grammatical-evolution.org/papers/mendel98/index.html [accessed 2010-12-02].
CiteSeerx : 10.1.1.30.4253.
2360. Conor Ryan, Terence Soule, Maarten Keijzer, Edward P. K. Tsang, Riccardo Poli, and
Ernesto Jorge Fernandes Costa, editors. Proceedings of the 6th European Conference on
Genetic Programming (EuroGP’03), April 14–16, 2003, University of Essex, Departments
of Mathematical and Biological Sciences: Wivenhoe Park, Colchester, Essex, UK, vol-
ume 2610/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-36599-0. isbn: 3-540-00971-X. Google Books
ID: pVhXLZKMF-YC.
2361. Nestor Rychtyckyj and Daniel Shapiro, editors. Proceedings of the Twenty-second Innova-
tive Applications of Artificial Intelligence Conference (IAAI’10), July 11–15, 2010, Westin
Peachtree Plaza: Atlanta, GA, USA. AAAI Press: Menlo Park, CA, USA.
2362. Proceedings of the Second National Conference on Evolutionary Computation and Global Op-
timization (Materialy II Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Glob-
alna), September 16–19, 1997, Rytro, Poland. Google Books ID: IGP4tgAACAAJ.
2363. Paul S. Andrews, Jonathan Timmis, Nick D. L. Owens, Uwe Aickelin, Emma Hart, Andy
Hone, and Andrew M. Tyrrell, editors. Proceedings of the 8th International Conference on
Artificial Immune Systems (ICARIS’09), August 9–12, 2009, York, UK, volume 5666 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2364. Leonardo Sá, Pedro Vieira, and Antônio Carneiro de Mesquita Filho. Evolutionary Synthesis
of Low-Sensitivity Antenna Matching Networks using Adjacency Matrix Representation. In
CEC’09 [1350], pages 1201–1208, 2009. doi: 10.1109/CEC.2009.4983082. INSPEC Accession
Number: 10688677.
2365. Ashish Sabharwal. Combinatorial Problems I: Finding Solutions. In 2nd
Asian-Pacific School on Statistical Physics and Interdisciplinary Applications [987],
2008. Fully available at http://www.cs.cornell.edu/˜sabhar/tutorials/
kitpc08-combinatorial-problems-I.ppt [accessed 2010-08-29].
2366. Krzystof Sacha, editor. Software Engineering Techniques: Design for Quality – IFIP Working
Conference on Software Engineering Techniques (SET’06), October 17–20, 2006, Warsaw
University: Warsaw, Poland. Springer-Verlag: Berlin/Heidelberg. isbn: 0-387-39387-0.
Co-located with VIII Conference on Software Engineering – KKIO 2006.
2367. Lorenza Saitta, editor. Proceedings of the 13th International Conference on Machine Learning
(ICML’96), July 3–6, 1996, Bari, Italy. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-419-7.
2368. Proceedings of the Third World Congress on Nature and Biologically Inspired Computing
(NaBIC’11), November 19–21, 2011, Salamanca, Spain.
2369. Peter Salamon, Paolo Sibani, and Richard Frost. Facts, Conjectures, and Improvements
for Simulated Annealing, volume 7 in SIAM Monographs on Mathematical Modeling and
Computation. Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA,
USA, 2002. isbn: 0898715083. Google Books ID: jhAldlYvClcC.
2370. Sancho Salcedo-Sanz and Xin Yao. A Hybrid Hopfield Network-Genetic Algorithm Ap-
proach for the Terminal Assignment Problem. IEEE Transactions on Systems, Man,
and Cybernetics – Part B: Cybernetics, 34(6):2343–2353, December 2004, IEEE Systems,
Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2004.836471.
CiteSeerx : 10.1.1.62.7879. INSPEC Accession Number: 8207612. See also [2371].
2371. Sancho Salcedo-Sanz and Xin Yao. Assignment of Cells to Switches in a Cellular Mo-
bile Network using a Hybrid Hopfield Network-Genetic Algorithm Approach. Applied Soft
Computing, 8(1):216–224, January 2008, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2007.01.002. See also [2370].
2372. Sancho Salcedo-Sanz and Xin Yao. Assignment of Cells to Switches in a Cellular Mo-
bile Network using a Hybrid Hopfield Network-Genetic Algorithm Approach. Applied Soft
Computing, 8(1):216–224, January 2008, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2007.01.002.
2373. Sancho Salcedo-Sanz, Yong Xu, and Xin Yao. Hybrid Meta-Heuristics Algorithms for Task
Assignment in Heterogeneous Computing Systems. Computers & Operations Research, 33(3):
820–835, March 2006, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.cor.2004.08.010. Fully available at http://www.cs.bham.ac.
uk/˜xin/papers/SalcedoSanzXuYao2006.pdf [accessed 2009-08-10].
2374. Sancho Salcedo-Sanz, José Antonio Portilla-Figueras, Emilio G. Ortiz-Garcı́a, Angel M.
Pérez-Bellido, Christopher Thraves, Antonio Fernández-Anta, and Xin Yao. Optimal Switch
Location in Mobile Communication Networks using Hybrid Genetic Algorithms. Applied Soft
Computing, 8(4):1486–1497, September 2008, Elsevier Science Publishers B.V.: Essex, UK,
Rajkumar Roy, editor. doi: 10.1016/j.asoc.2007.10.021. Fully available at http://nical.
ustc.edu.cn/papers/asc2007b.pdf [accessed 2008-09-01]. CiteSeerx : 10.1.1.87.8731.
2375. Muhammad Saleem and Muddassar Farooq. BeeSensor: A Bee-Inspired Power Aware Rout-
ing Protocol for Wireless Sensor Networks. In EvoWorkshops’07 [1050], pages 81–90, 2007.
doi: 10.1007/978-3-540-71805-5 9.
2376. Abdellah Salhi, Hugh Glaser, and David de Roure. Parallel Implementation of a Genetic-
Programming based Tool for Symbolic Regression. DSSE Technical Reports DSSE-TR-97-
3, University of Southampton, Department of Electronics and Computer Science, Declara-
tive Systems & Software Engineering Group: Highfield, Southampton, UK, July 18, 1997.
Fully available at http://www.dsse.ecs.soton.ac.uk/techreports/95-03/97-3.
html [accessed 2007-09-09]. See also [2377].
2377. Abdellah Salhi, Hugh Glaser, and David de Roure. Parallel Implementation of a Genetic-
Programming Based Tool for Symbolic Regression. Information Processing Letters, 66
(6):299–307, June 30, 1998, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/S0020-0190(98)00056-8. See also [2376].
2378. David Salomon. Assemblers and Loaders, Ellis Horwood Series in Computers and Their Ap-
plications. Ellis Horwood: Chichester, West Sussex, UK, 1993. isbn: 0-13-052564-2.
OCLC: 26404704, 473153955, and 636481452. Library of Congress Control Num-
ber (LCCN): 92028178. GBV-Identification (PPN): 110448634. LC Classifica-
tion: QA76.76.A87 S35 1992.
2379. Proceedings of the Eighth Joint Conference on Information Science (JCIS 2005), Section: The
Sixth International Workshop on Frontiers in Evolutionary Algorithms (FEA’05), July 16–
21, 2005, Salt Lake City Mariott City Center: Salt Lake City, UT, USA. Partly avail-
able at http://fs.mis.kuas.edu.tw/˜cobol/JCIS2005/jcis05/Track4.html [accessed 2007-09-16].
Workshop held in conjunction with the Eighth Joint Conference on Information Sciences.
2380. Rafal Salustowicz and Jürgen Schmidhuber. Probabilistic Incremental Program Evolution:
Stochastic Search Through Program Space. Evolutionary Computation, 5(2):123–141, Febru-
ary 1997, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1997.5.2.123. See also [2381].
2381. Rafal Salustowicz and Jürgen Schmidhuber. Probabilistic Incremental Program Evolu-
tion: Stochastic Search Through Program Space. In ECML’97 [2781], pages 213–220,
1997. doi: 10.1007/3-540-62858-4 86. Fully available at ftp://ftp.idsia.ch/pub/
juergen/ECML_PIPE.ps.gz and http://eprints.kfupm.edu.sa/59137/1/59137.
pdf [accessed 2009-08-23]. CiteSeerx : 10.1.1.45.231. See also [2380].
2382. Claude Sammut and Achim G. Hoffmann, editors. Proceedings of the 19th International
Conference on Machine Learning (ICML’02), July 8–12, 2002, University of New South
Wales: Sydney, NSW, Australia. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-873-7.
2383. Arthur L. Samuel. Some Studies in Machine Learning using the Game of Checkers. IBM
Journal of Research and Development, 3(3):210–229, July 1959. doi: 10.1147/rd.33.0210. See
also [2384, 2385].
2384. Arthur L. Samuel. Some Studies in Machine Learning Using the Game of Checkers. In
Computers and Thought [897], pages 71–105. McGraw-Hill: New York, NY, USA, 1963–1995.
See also [2383, 2385].
2385. Arthur L. Samuel. Some Studies in Machine Learning using the Game of Check-
ers II – Recent Progress. IBM Journal of Research and Development, 11(6):601–
617, November 1967. Fully available at http://pages.cs.wisc.edu/˜dyer/cs540/
handouts/samuel-checkers.pdf and http://pages.cs.wisc.edu/˜jerryzhu/
cs540/handouts/samuel-checkers.pdf [accessed 2010-08-18]. See also [2383].
2386. Proceedings of the 2014 International Conference on Systems, Man, and Cybernetics
(SMC’14), October 5–8, 2014, San Diego, CA, USA.
2387. Harilaos G. Sandalidis, Constandinos X. Mavromoustakis, and Peter P. Stavroulakis. Per-
formance Measures of an Ant Based Decentralised Routing Scheme for Circuit Switch-
ing Communication Networks. Soft Computing – A Fusion of Foundations, Method-
ologies and Applications, 5(4):313–317, August 2001, Springer-Verlag: Berlin/Heidel-
berg. doi: 10.1007/s005000100104. Fully available at http://www.springerlink.com/
content/g1u7lygrugwk3nl5/fulltext.pdf [accessed 2008-09-03].
2388. Francisco Sandoval, Carlos Camacho-Peñalosa, and Antonio Puerta, editors. IEEE
Mediterranean Electrotechnical Conference (MELECON’06), May 16–19, 2006, Be-
nalmádena (Málaga), Spain, volume 2. IEEE Computer Society: Washington, DC, USA.
isbn: 1-4244-0087-2. Google Books ID: cXMpAAAACAAJ. OCLC: 75490138, 81252104,
237880307, and 317661437.
2389. Yasuhito Sano and Hajime Kita. Optimization of Noisy Fitness Functions by Means of
Genetic Algorithms Using History of Search. In PPSN VI [2421], pages 571–580, 2000.
doi: 10.1007/3-540-45356-3 56.
2390. Yasuhito Sano and Hajime Kita. Optimization of Noisy Fitness Functions by means of
Genetic Algorithms using History of Search with Test of Estimation. In CEC’02 [944], pages
360–365, 2002. doi: 10.1109/CEC.2002.1006261.
2391. Bruno Sareni and Laurent Krähenbühl. Fitness Sharing and Niching Methods Revisited.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 2(3):97–106, September 1998,
IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.735432. Fully avail-
able at http://hal.archives-ouvertes.fr/hal-00359799/en/ [accessed 2010-08-02]. IN-
SPEC Accession Number: 6114042. ID: hal-00359799, version 1.
2392. Manish Sarkar. Evolutionary Programming-Based Fuzzy Clustering. In EP’96 [946], pages
247–256, 1996.
2393. Ruhul A. Sarker, Hussein A. Abbass, and Samin Karim. An Evolutionary Algorithm for
Constrained Multiobjective Optimization Problems. In AJWIS’01 [1498], pages 113–122,
2001. Fully available at http://www.lania.mx/˜ccoello/EMOO/sarker01a.pdf.gz
[accessed 2008-11-14]. CiteSeerx : 10.1.1.8.995.
2394. Ruhul A. Sarker, Masoud Mohammadian, and Xin Yao, editors. Evolutionary Optimization,
volume 48 in International Series in Operations Research & Management Science. Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands,
2002. doi: 10.1007/b101816. isbn: 0-306-48041-7 and 0-7923-7654-4.
2395. Ruhul A. Sarker, Robert G. Reynolds, Hussein A. Abbass, Kay Chen Tan, Robert Ian
McKay, Daryl Leslie Essam, and Tom Gedeon, editors. Proceedings of the IEEE Congress
on Evolutionary Computation (CEC’03), December 8–13, 2003, Canberra, Australia.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7804-0. Google Books
ID: ALhVAAAAMAAJ, CyBpAAAACAAJ, hblVAAAAMAAJ, and u7hVAAAAMAAJ. INSPEC Ac-
cession Number: 8188933. Catalogue no.: 03TH8674. CEC 2003 – A joint meeting of the IEEE,
the IEAust, the EPS, and the IEE. Congress on Evolutionary Computation, IEEE/Neural
Networks Council, Evolutionary Programming Society, Galesia, IEE.
2396. Warren S. Sarle. Stopped Training and Other Remedies for Overfitting. In Interface’95 [1877],
pages 352–360, 1995. Fully available at ftp://ftp.sas.com/pub/neural/inter95.ps.
Z [accessed 2009-07-12]. CiteSeerx : 10.1.1.42.3920.
2397. Warren S. Sarle. What is overfitting and how can I avoid it? In Usenet FAQs: comp.ai.neural-
nets FAQ [892], Chapter Part 3 of 7: General, Section 3. FAQS.ORG – Internet FAQ Archives:
USA, 2009. Fully available at http://www.faqs.org/faqs/ai-faq/neural-nets/
part3/section-3.html [accessed 2008-02-29].
2398. Kumara Sastry and David Edward Goldberg. Multiobjective hBOA, Clustering, and
Scalability. In GECCO’05 [304], pages 663–670, 2005, Martin Pelikan, Advisor.
doi: 10.1145/1068009.1068122. See also [2157].
2399. Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection With
Replacement Using Apparent Added Noise. In GECCO’01 [2570], page 781, 2001.
CiteSeerx : 10.1.1.16.786. See also [2400].
2400. Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection With Replace-
ment Using Apparent Added Noise. IlliGAL Report 2001014, Illinois Genetic Algorithms Lab-
oratory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2001.
Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2001014.
ps.Z [accessed 2010-08-03]. See also [2399].
2401. Kumara Sastry and David Edward Goldberg. Let’s Get Ready to Rumble Redux: Crossover
versus Mutation Head to Head on Exponentially Scaled Problems. In GECCO’07-I [2699],
pages 1380–1387, 2007. doi: 10.1145/1276958.1277215. See also [2402].
2402. Kumara Sastry and David Edward Goldberg. Let’s Get Ready to Rumble Redux: Crossover
versus Mutation Head to Head on Exponentially Scaled Problems. IlliGAL Report 2007005,
Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, De-
partment of General Engineering, University of Illinois at Urbana-Champaign: Urbana-
Champaign, IL, USA, February 11, 2007. Fully available at http://www.illigal.uiuc.
edu/pub/papers/IlliGALs/2007005.pdf [accessed 2008-07-21]. See also [2401].
2403. Hiroyuki Sato, Arturo Hernández Aguirre, and Kiyoshi Tanaka. Controlling Dominance Area
of Solutions and Its Impact on the Performance of MOEAs. In EMO’07 [2061], pages 5–20,
2007. doi: 10.1007/978-3-540-70928-2 5.
2404. Satoshi Sato, Tadashi Ishihara, and Hikaru Inooka. Systematic Design via the Method of
Inequalities. IEEE Control Systems Magazine, pages 57–65, October 1996, IEEE Computer
Society: Piscataway, NJ, USA. doi: 10.1109/37.537209.
2405. Shinji Sato. Simulated Quenching: A New Placement Method for Module Generation.
In ICCAD [1328], pages 538–541, 1997. doi: 10.1109/ICCAD.1997.643591. Fully avail-
able at http://www.sigda.org/Archives/ProceedingArchives/Iccad/Iccad97/
papers/1997/iccad97/pdffiles/09c_2.pdf [accessed 2010-09-25]. INSPEC Accession
Number: 5781467.
2406. Abdul Sattar and Byeong-Ho Kang, editors. Advances in Artificial Intelligence. Proceed-
ings of the 19th Australian Joint Conference on Artificial Intelligence (AI’06), December 4–
6, 2006, Hobart, Australia, volume 4304/2006 in Lecture Notes in Artificial Intelligence
(LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/11941439. isbn: 3-540-49787-0. Library of Congress Control Num-
ber (LCCN): 2006936897.
2407. Franklin E. Satterthwaite. Random Balance Experimentation. Technometrics, 1(2):111–137,
May 1959, American Statistical Association (ASA): Alexandria, VA, USA and American
Society for Quality (ASQ): Milwaukee, WI, USA. JSTOR Stable ID: 1266466.
2408. Franklin E. Satterthwaite. REVOP or Random Evolutionary Operation. Technical Report
10-10-59, Merrimack College: North Andover, MA, USA, 1959.
2409. Jürgen Sauer, editor. 19. Workshop “Planen, Scheduling und Konfigurieren, Entwerfen“
(PuK’05), September 11, 2005, Koblenz, Germany. Fully available at http://www-is.
informatik.uni-oldenburg.de/˜sauer/puk2005/paper.html [accessed 2011-01-11].
2410. Yoshikazu Sawaragi, Koichi Inoue, and Hirotaka Nakayama, editors. Proceedings of the 7th
International Conference on Multiple Criteria Decision Making (MCDM’1986), August 18–
22, 1986, Kyoto, Japan. isbn: 0-387-17718-3, 0-387-17719-1, 3-540-17718-3, and
3-540-17719-1.
2411. Dhish Kumar Saxena and Kalyanmoy Deb. Dimensionality Reduction of Objectives and
Constraints in Multi-Objective Optimization Problems: A System Design Perspective. In
CEC’08 [1889], pages 3204–3211, 2008. doi: 10.1109/CEC.2008.4631232. INSPEC Accession
Number: 10250936.
1138 REFERENCES
2412. Robert Schaefer, Carlos Cotta, Joanna Kolodziej, and Günter Rudolph, editors. Pro-
ceedings of the 11th International Conference on Parallel Problem Solving From Nature,
Part 1 (PPSN XI-1), September 11–15, 2010, AGH University of Science and Technol-
ogy: Kraków, Poland, volume 6238 in Theoretical Computer Science and General Issues
(SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 3-642-15843-9. Partly available at http://home.agh.edu.pl/˜ppsn/
[accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2010934114.

2413. Robert Schaefer, Carlos Cotta, Joanna Kolodziej, and Günter Rudolph, editors. Pro-
ceedings of the 11th International Conference on Parallel Problem Solving From Nature,
Part 2 (PPSN’10-2), September 11–15, 2010, AGH University of Science and Technol-
ogy: Kraków, Poland, volume 6239 in Theoretical Computer Science and General Issues
(SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 3-642-15870-6. Partly available at http://home.agh.edu.pl/˜ppsn/
[accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2010934114.

2414. J. David Schaffer, editor. Proceedings of the 3rd International Conference on Genetic Algo-
rithms (ICGA’89), June 4–7, 1989, George Mason University (GMU): Fairfax, VA, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-066-3.
Google Books ID: BPBQAAAAMAAJ and TYvqPAAACAAJ. OCLC: 19722958, 84251559,
180515150, 231091143, 257885359, and 264619464.
2415. J. David Schaffer and L. Darrell Whitley, editors. Proceedings of the International Workshop
on Combinations of Genetic Algorithms and Neural Networks (COGANN’92), June 2, 1992,
Baltimore, MD, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway,
NJ, USA. doi: 10.1109/COGANN.1992.273951. isbn: 0-8186-2787-5. OCLC: 27675198,
65776456, 174278971, 224222497, 311746303, 475797050, and 496330593. INSPEC
Accession Number: 4487943. Catalogue no.: 92TH0435-8.
2416. J. David Schaffer, Richard A. Caruana, Larry J. Eshelman, and Rajarshi Das. A Study
of Control Parameters Affecting Online Performance of Genetic Algorithms for Function
Optimization. In ICGA’89 [2414], pages 51–60, 1989.
2417. J. David Schaffer, Larry J. Eshelman, and Daniel Offutt. Spurious Correlations and Prema-
ture Convergence in Genetic Algorithms. In FOGA’90 [2562], pages 102–112, 1990.
2418. Robert E. Schapire. The Strength of Weak Learnability. Machine Learning, 5:
197–227, June 1990, Kluwer Academic Publishers: Norwell, MA, USA and Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00116037. Fully available
at http://citeseer.ist.psu.edu/schapire90strength.html and http://www.
springerlink.com/content/x02406w7q5038735/fulltext.pdf [accessed 2007-09-15].
2419. R. Schilling, W. Haase, Jacques Periaux, and H. Baier, editors. Proceedings of the Sixth
Conference on Evolutionary and Deterministic Methods for Design, Optimization and Control
with Applications to Industrial and Societal Problems (EUROGEN’05), September 12–14,
2005, Technische Universität München: Munich, Bavaria, Germany. Technische Universität
München: Munich, Bavaria, Germany. Partly available at http://www.flm.mw.tum.de/
EUROGEN05/ [accessed 2007-09-16]. In association with ECCOMAS and ERCOFTAC.
2420. Jürgen Schmidhuber. Evolutionary Principles in Self-Referential Learning. (On Learning
how to Learn: The Meta-Meta-... Hook.). Master’s thesis, Technische Universität München,
Institut für Informatik, Lehrstuhl Professor Radig: Munich, Bavaria, Germany, May 14, 1987.
Fully available at http://www.idsia.ch/˜juergen/diploma.html [accessed 2007-11-01].
2421. Marc Schoenauer, Kalyanmoy Deb, Günter Rudolph, Xin Yao, Evelyne Lutton, Juan Julián
Merelo-Guervós, and Hans-Paul Schwefel, editors. Proceedings of the 6th International Con-
ference on Parallel Problem Solving from Nature (PPSN VI), September 18–20, 2000, Paris,
France, volume 1917/2000 in Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/3-540-45356-3. isbn: 3-540-41056-2. Google Books
ID: GDz2nCAJpJ4C and gI26Cld2BY0C. OCLC: 44914243, 247592642, and 313676512.
2422. Armin Scholl and Robert Klein. Bin Packing. Friedrich Schiller University of Jena,
Wirtschaftswissenschaftliche Fakultät: Jena, Thuringia, Germany, September 2, 2003. Fully
available at http://www.wiwi.uni-jena.de/Entscheidung/binpp/ [accessed 2010-10-12].
See also [1834].
2423. Armin Scholl, Robert Klein, and Christian Jürgens. BISON: A Fast Hybrid Procedure for Ex-
actly Solving the One-Dimensional Bin Packing Problem. Computers & Operations Research,
24(7):627–645, July 1997, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/S0305-0548(96)00082-2.
2424. Ruud Schoonderwoerd. Collective Intelligence for Network Control. Master’s thesis,
Delft University of Technology, Faculty of Technical Mathematics and Informatics: Delft,
The Netherlands, May 30, 1996, Henk Koppelaar, Eugene J. H. Kerckhoffs, Leon J. M.
Rothkrantz, and Janet L. Bruten, Committee members. Fully available at http://staff.
washington.edu/paymana/swarm/schoon96-thesis.pdf [accessed 2008-06-12]. See also
[2425].
2425. Ruud Schoonderwoerd, Owen E. Holland, Janet L. Bruten, and Leon J. M. Rothkrantz. Ant-
Based Load Balancing in Telecommunications Networks. Adaptive Behavior, 5(2):169–207,
Fall 1996, SAGE Publications: Thousand Oaks, CA, USA. doi: 10.1177/105971239700500203.
Fully available at http://www.ifi.uzh.ch/ailab/teaching/FG06/ [accessed 2009-06-27].
CiteSeerx : 10.1.1.4.3655. See also [2424, 2426].
2426. Ruud Schoonderwoerd, Owen E. Holland, and Janet L. Bruten. Ant-like Agents for
Load Balancing in Telecommunications Networks. In AGENTS’97 [1978], pages 209–
216, 1997. doi: 10.1145/267658.267718. Fully available at http://sigart.acm.
org/proceedings/agents97/A153/A153.ps and http://staff.washington.edu/
paymana/swarm/schoon97-icaa.pdf [accessed 2009-06-26]. See also [2425].
2427. Thomas J. M. Schopf, editor. Models in Paleobiology. Freeman, Cooper & Co: San Francisco,
CA, USA and DoubleDay: New York, NY, USA, January 1972. isbn: 0877353255. Google
Books ID: Db0eAAAACAAJ and OjjlOgAACAAJ. OCLC: 572084.
2428. Herb Schorr and Alain T. Rappaport, editors. Proceedings of the The First Conference
on Innovative Applications of Artificial Intelligence (IAAI’89), March 28–30, 1989, Stan-
ford University: Stanford, CA, USA. AAAI Press: Menlo Park, CA, USA. Partly avail-
able at http://www.aaai.org/Conferences/IAAI/iaai89.php [accessed 2007-09-06]. Pub-
lished in 1991.
2429. Nicol N. Schraudolph and Richard K. Belew. Dynamic Parameter Encoding for Ge-
netic Algorithms. Machine Learning, 9:9–21, June 1992, Kluwer Academic Pub-
lishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1022624728869. Fully available at http://cnl.salk.edu/˜schraudo/
pubs/SchBel92.pdf. CiteSeerx : 10.1.1.42.1387.
2430. Alexander Schrijver. A Course in Combinatorial Optimization. self-published, January 22,
2009. Fully available at http://homepages.cwi.nl/˜lex/files/dict.pdf [accessed 2010-
08-04].

2431. Crystal Schuil, Matthew Valente, Justin Werfel, and Radhika Nagpal. Collective Construc-
tion Using Lego Robots. In AAAI’06 / IAAI’06 [1055], volume 2, 2006. Fully avail-
able at http://www.eecs.harvard.edu/˜rad/ssr/papers/aaai06-schuil.pdf [ac-
cessed 2008-06-12].

2432. Peter K. Schuster and Manfred Eigen. The Hypercycle, A Principle of Natural Self-
Organization. Springer-Verlag GmbH: Berlin, Germany, 1979. isbn: 0-387-09293-5. Google
Books ID: Af49AAAAYAAJ and P54PPAAACAAJ. OCLC: 4665354.
2433. Hans-Paul Schwefel. Kybernetische Evolution als Strategie der exprimentellen Forschung
in der Strömungstechnik. Master’s thesis, Technische Universität Berlin: Berlin, Germany,
1965.
2434. Hans-Paul Schwefel. Experimentelle Optimierung einer Zweiphasendüse Teil I. Technical
Report 35, AEG Research Institute: Berlin, Germany, 1968. Project MHD–Staustrahlrohr
11.034/68.
2435. Hans-Paul Schwefel. Evolutionsstrategie und numerische Optimierung. PhD thesis, Technis-
che Universität Berlin, Institut für Meß- und Regelungstechnik, Institut für Biologie und An-
thropologie: Berlin, Germany, 1975. OCLC: 52361662 and 251655294. GBV-Identification
(PPN): 156923181.
2436. Hans-Paul Schwefel. Numerical Optimization of Computer Models. John Wiley & Sons
Ltd.: New York, NY, USA, 1981. isbn: 0-471-09988-0. OCLC: 8011455 and
301251708. Library of Congress Control Number (LCCN): 81173223. GBV-Identification
(PPN): 01489498X. LC Classification: QA402.5 .S3813.
2437. Hans-Paul Schwefel. Evolution and Optimum Seeking, Sixth-Generation Computer Technol-
ogy Series. John Wiley & Sons Ltd.: New York, NY, USA, 1995. isbn: 0-471-57148-2.
Google Books ID: dfNQAAAAMAAJ. OCLC: 30701094 and 222866514. Library of Congress
Control Number (LCCN): 94022270. GBV-Identification (PPN): 194661636. LC Classification: QA402.5 .S375 1995.
2438. Hans-Paul Schwefel and Reinhard Männer, editors. Proceedings of the 1st Workshop on Par-
allel Problem Solving from Nature (PPSN I), October 1–3, 1990, FRG Dortmund: Dort-
mund, North Rhine-Westphalia, Germany, volume 496/1991 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0029723.
isbn: 0-387-54148-9 and 3-540-54148-9. Google Books ID: W4r2KQAACAAJ,
a7LiAAAACAAJ, and cZhQAAAAMAAJ.
2439. A. Carlisle Scott and Philip Klahr, editors. Proceedings of the The Fourth Conference on Inno-
vative Applications of Artificial Intelligence (IAAI’92), July 12–16, 1992, San José, CA, USA.
AAAI Press: Menlo Park, CA, USA. isbn: 0-262-69155-8. Partly available at http://
www.aaai.org/Conferences/IAAI/iaai912.php [accessed 2007-09-06].
2440. Ian Scriven, Andrew Lewis, and Sanaz Mostaghim. Dynamic Search Initialisation Strategies
for Multi-Objective Optimisation in Peer-to-Peer Networks. In CEC’09 [1350], pages 1515–
1522, 2009. doi: 10.1109/CEC.2009.4983122. INSPEC Accession Number: 10688717.
2441. Ninth International Workshop on Learning Classifier Systems (IWLCS’06), July 8–9, 2006,
Seattle, WA, USA. Held during GECCO-2006. See also [1516].
2442. Proceedings of the 28th International Conference on Machine Learning (ICML’11), June 11–
14, 2011, Seattle, WA, USA.
2443. Michèle Sebag and Antoine Ducoulombier. Extending Population-Based Incremental
Learning to Continuous Search Spaces. In PPSN V [866], pages 418–427, 1998.
CiteSeerx : 10.1.1.42.1884.
2444. Anthony V. Sebald and Lawrence Jerome Fogel, editors. Proceedings of the 3rd Annual
Conference on Evolutionary Programming (EP’94), February 24–26, 1994, San Diego, CA,
USA. World Scientific Publishing: River Edge, NJ, USA. isbn: 9810218109. Library of
Congress Control Number (LCCN): 95118447.
2445. Jimmy Secretan, Nicholas Beato, David B. D’Ambrosio, Adelein Rodriguez, Adam Campbell,
and Kenneth Owen Stanley. Evolving Pictures Collaboratively Online. In CHI’08 [139],
pages 1759–1768, 2008. doi: 10.1145/1357054.1357328. Fully available at http://eplex.
cs.ucf.edu/papers/secretan_chi08.pdf [accessed 2011-02-23].
2446. Robert Sedgewick. Algorithms in Java, Parts 1–4 (Fundamentals: Data Structures, Sort-
ing, Searching). Addison-Wesley Professional: Reading, MA, USA, 3rd edition, Septem-
ber 2002. isbn: 0-201-36120-5. Google Books ID: hyvdUQUmf2UC. GBV-Identification
(PPN): 247734144, 356227073, 511295804, 525555153, and 595338534. With Java
consultation by Michael Schidlowsky.
2447. Alberto Maria Segre, editor. Proceedings of the 6th International Workshop on Machine
Learning (ML’89), June 26–27, 1989, Cornell University Press: Ithaca, NY, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-036-1.
2448. Y. Ahmet Şekercioğlu, Andreas Pitsilides, and Athanasios V. Vasilakos. Computa-
tional Intelligence in Management of ATM Networks: A Survey of Current State of
Research. In ESIT’99 [888], 1999. Fully available at http://www.cs.ucy.ac.cy/
networksgroup/pubs/published/1999/CI-ATM-London99.pdf and http://www.
erudit.de/erudit/events/esit99/12525_P.pdf [accessed 2008-09-03].
2449. Bart Selman, Hector Levesque, and David Mitchell. A New Method for Solving Hard Satisfi-
ability Problems. In AAAI’92 [2639], pages 440–446, 1992. Fully available at http://www.
aaai.org/Papers/AAAI/1992/AAAI92-068.pdf [accessed 2009-06-24].
2450. KWEB SWS Challenge Workshop, June 6–7, 2007, Innsbruck Congress Center: Inns-
bruck, Austria. Semantic Technology Institute (STI) Innsbruck, Austria: Innsbruck, Aus-
tria. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_
Innsbruck [accessed 2010-12-12].
2451. Bernhard Sendhoff, Martin Kreutz, and Werner von Seelen. A Condition for the Genotype-
Phenotype Mapping: Causality. In ICGA’97 [166], pages 73–80, 1997. Fully avail-
able at ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/
articles/frame-label.ps.gz [accessed 2010-08-12]. CiteSeerx : 10.1.1.38.6750. arXiv
ID: adap-org/9711001v1.
2452. Bernhard Sendhoff, Martin Kreutz, and Werner von Seelen. Causality and the Analysis of Lo-
cal Search in Evolutionary Algorithms. INI Internal Report IR-INI 97-16, Ruhr-Universität
Bochum, Institut für Neuroinformatik: Bochum, North Rhine-Westphalia, Germany,
September 1997. Fully available at ftp://ftp.neuroinformatik.ruhr-uni-bochum.
de/pub/manuscripts/IRINI/irini97-16/irini97-16.ps.gz [accessed 2009-07-15]. issn: 0943-2752. See also [2451].


2453. Twittie Senivongse and Gina Oliveira, editors. 9th IFIP WG 6.1 International Conference
on Distributed Applications and Interoperable Systems (DAIS’09), June 9–11, 2009, Lisbon,
Portugal, volume 5523/2009 in Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/978-3-642-02164-0. isbn: 3-642-02163-8. Partly
available at http://discotec09.di.fc.ul.pt/ [accessed 2011-02-28].
2454. Proceedings of the 2012 International Conference on Systems, Man, and Cybernetics
(SCM’12), October 7–10, 2012, Seoul, South Korea.
2455. Isabelle Servet, Louise Travé-Massuyès, and Daniel Stern. Telephone Network Traffic Over-
loading Diagnosis and Evolutionary Computation Technique. In AE’97 [1191], pages 137–144,
1997. doi: 10.1007/BFb0026596.
2456. Isabelle Servet, Louise Travé-Massuyès, and Daniel Stern. Evolutionary Computation Tech-
niques for Traffic Supervision Based On A Model Of Telephone Networks Built From Quali-
tative Knowledge. Studies in Informatics and Control, 6(1), March 1997, National Institute
for R&D in Informatics (Institutul Naţional de Cercetare-Dezvoltare în Informatică, ICI):
Bucharest (Bucureşti), Romania. Fully available at http://www.ici.ro/SIC/sic1997_
1/ser97.htm [accessed 2009-08-22].
2457. SERVICES Cup’10, 2010.
2458. Mark Shackleton, Rob Shipman, and Marc Ebner. An Investigation of Redundant Genotype-
Phenotype Mappings and their Role in Evolutionary Search. In CEC’00 [1333], pages 493–
500, volume 1, 2000. doi: 10.1109/CEC.2000.870337. Fully available at http://citeseer.
ist.psu.edu/old/409243.html and http://wwwi2.informatik.uni-wuerzburg.
de/mitarbeiter/ebner/research/publications/uniWu/redundant.ps.gz [accessed 2009-07-10]. INSPEC Accession Number: 6734665.

2459. M. Zubair Shafiq, Muddassar Farooq, and Syed Ali Khayam. A Comparative Study
of Fuzzy Inference Systems, Neural Networks and Adaptive Neuro Fuzzy Inference
Systems for Portscan Detection. In EvoWorkshops’08 [1051], pages 52–61, 2008.
doi: 10.1007/978-3-540-78761-7 6.
2460. M. Zubair Shafiq, S. Momina Tabish, and Muddassar Farooq. Are Evolutionary Rule Learn-
ing Algorithms Appropriate for Malware Detection? In GECCO’09-I [2342], pages 1915–
1916, 2009. doi: 10.1145/1569901.1570233. Fully available at http://www.nexginrc.org/
papers/gecco09-zubair.pdf [accessed 2009-09-05]. See also [2461].
2461. M. Zubair Shafiq, S. Momina Tabish, and Muddassar Farooq. On the Appropriateness of Evo-
lutionary Rule Learning Algorithms for Malware Detection. In GECCO’09-II [2343], pages
2609–2616, 2009. doi: 10.1145/1570256.1570370. Fully available at http://nexginrc.
org/papers/iwlcs09-zubair.pdf [accessed 2009-09-05]. See also [2460].
2462. Biren Shah, Karthik Ramachandran, and Vijay V. Raghavan. A Hybrid Approach for
Data Warehouse View Selection. International Journal of Data Warehousing and Mining
(IJDWM), 2(2):1–37, 2006, Idea Group Publishing (Idea Group Inc., IGI Global): New
York, NY, USA. doi: 10.4018/jdwm.2006040101. CiteSeerx : 10.1.1.85.5014.
2463. S. H. Shami, I. M. A. Kirkwood, and Mark C. Sinclair. Evolving Simple Fault-Tolerant
Routing Rules using Genetic Programming. Electronics Letters, 33(17):1440–1441, Au-
gust 14, 1997, Institution of Engineering and Technology (IET): Stevenage, Herts, UK.
doi: 10.1049/el:19970996. See also [1542].
2464. Yun-Wei Shang and Yu-Huang Qiu. A Note on the Extended Rosenbrock Function. Evo-
lutionary Computation, 14(1):119–126, Spring 2006, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.2006.14.1.119. PubMed ID: 16536893.
2465. Claude Elwood Shannon. A Mathematical Theory of Communication. Bell System Technical
Journal, 27:379–423/623–656, July–October 1948, Bell Laboratories: Berkeley Heights, NJ,
USA. Fully available at http://plan9.bell-labs.com/cm/ms/what/shannonday/
paper.html [accessed 2007-09-15]. Reprinted in [2466, 2467, 2521].
2466. Claude Elwood Shannon, author. Neil James Alexander Sloane and Aaron D. Wyner, ed-
itors. Claude Elwood Shannon: Collected Papers. Institute of Electrical and Electronics
Engineers (IEEE) Press: New York, NY, USA, 1993. isbn: 0780304349. Google Books
ID: 06QKAAAACAAJ. OCLC: 26403297, 246915902, and 318255507. See also [2465].
2467. Claude Elwood Shannon and Warren Weaver. The Mathematical Theory of Communication.
University of Illinois Press (UIP): Champaign, IL, USA, 1949–1962. isbn: 0252725468
and 0252725484. Google Books ID: BHi4OgAACAAJ, H1MiAAAACAAJ, VB COwAACAAJ,
ZiYIAQAAIAAJ, and dk0n eGcqsUC. OCLC: 13553507, 250803800, and 318164805.
See also [2465].
2468. Nicholas Peter Sharples. Evolutionary Approaches to Adaptive Protocol Design. In The
12th White House Papers Graduate Research in Cognitive and Computing Sciences at Sus-
sex [2139], pages 60–62, 1999. See also [2470].
2469. Nicholas Peter Sharples. Evolutionary Approaches to Adaptive Protocol Design. PhD thesis,
University of Sussex, School of Cognitive Science: Brighton, UK, August 2001–2003. Google
Books ID: -BA3HAAACAAJ. OCLC: 59295838 and 195331508. asin: B001ABO1DK. See
also [2470].
2470. Nicholas Peter Sharples and Ian Wakeman. Protocol Construction Using Genetic Search
Techniques. In EvoWorkshops’00 [464], pages 73–95, 2000. doi: 10.1007/3-540-45561-2 23.
See also [2469].
2471. Jude W. Shavlik, editor. Proceedings of the 15th International Conference on Machine Learn-
ing (ICML’98), July 24–27, 1998, Madison, WI, USA. Morgan Kaufmann Publishers Inc.:
San Francisco, CA, USA. isbn: 1-55860-556-8.
2472. Katharine Jane Shaw. Using Genetic Algorithms for Practical Multi-Objective Production
Schedule Optimisation. PhD thesis, University of Sheffield, Department of Automatic Control
and Systems Engineering: Sheffield, UK, 1997. OCLC: 53552498. asin: B001AJODJE.
2473. Katharine Jane Shaw and Peter J. Fleming. Including Real-Life Problem Preferences in Ge-
netic Algorithms to Improve Optimisation of Production Schedules. In GALESIA’97 [3056],
pages 239–244, 1997. INSPEC Accession Number: 5780801.
2474. J. Shekel. Test Functions for Multimodal Search Techniques. In Fifth An-
nual Princeton Conference on Information Science and Systems [2222], pages 354–
359, 1971. Partly available at http://en.wikipedia.org/wiki/Shekel_function
and http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_
files/TestGO_files/Page2354.htm [accessed 2007-11-06].
2475. Proceedings of the Third Joint Conference on Information Science (JCIS 1997), Section: The
First International Workshop on Frontiers in Evolutionary Algorithms (FEA’98), March 1–
5, 1997, Sheraton Imperial Hotel & Convention Center: Research Triangle Park, NC, USA.
Workshop held in conjunction with Third Joint Conference on Information Sciences.
2476. David J. Sheskin. Handbook of Parametric and Nonparametric Statistical Procedures. Chap-
man & Hall: London, UK and CRC Press, Inc.: Boca Raton, FL, USA, 2nd/3rd edi-
tion, 2004. isbn: 0203489535, 0849331196, 1-58488-133-X, and 1-58488-440-1.
Google Books ID: L9c4IAAACAAJ, bmwhcJqq01cC, t6smAAAACAAJ, and tasmAAAACAAJ.
OCLC: 42643434, 52134708, 247821285, and 265211382.
2477. Amit P. Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy W.
Finin, and Krishnaprasad Thirunarayan, editors. Proceedings of the 7th International Seman-
tic Web Conference – The Semantic Web (ISWC’08), October 26–30, 2008, Kongresszen-
trum: Karlsruhe, Germany, volume 5318 in Information Systems and Application, incl. In-
ternet/Web and HCI (SL 3), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. isbn: 3-540-88563-3. Google Books ID: -YeUh7-PCkgC. Library
of Congress Control Number (LCCN): 2008937502.
2478. Masaya Shinkai, Arturo Hernández Aguirre, and Kiyoshi Tanaka. Mutation Strategy Im-
proves GAs Performance on Epistatic Problems. In CEC’02 [944], pages 968–973, volume 1,
2002. doi: 10.1109/CEC.2002.1007056.
2479. Rob Shipman. Genetic Redundancy: Desirable or Problematic for Evolutionary Adaptation?
In ICANNGA’99 [801], pages 1–11, 1999. CiteSeerx : 10.1.1.32.2032.
2480. Rob Shipman, Mark Shackleton, Marc Ebner, and Richard A. Watson. Neutral Search
Spaces for Artificial Evolution: A Lesson from Life. In Artificial Life VII [246], pages
162–167, 2000. Fully available at http://homepages.feis.herts.ac.uk/˜nehaniv/
al7ev/shipmanUH.ps and http://www.demo.cs.brandeis.edu/papers/alife7_
shipman.ps [accessed 2011-12-05]. CiteSeerx : 10.1.1.17.664.
2481. Rob Shipman, Mark Shackleton, and Inman Harvey. The Use of Neutral Genotype-Phenotype
Mappings for Improved Evolutionary Search. BT Technology Journal, 18(4):103–111, Octo-
ber 2000, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1026714927227.
CiteSeerx : 10.1.1.9.5548.
2482. Shinichi Shirakawa and Tomoharu Nagao. Graph Structured Program Evolution: Evolution of Loop Structures. In GPTP’09 [2309], pages 177–194, 2009.
doi: 10.1007/978-1-4419-1626-6 11. Fully available at http://www.cscs.umich.edu/
PmWiki/Farms/GPTP-09/uploads/Main/shirakawa-03-24-2009.pdf [accessed 2010-11-
21].

2483. Amit Shukla, Prasad M. Deshpande, and Jeffrey F. Naughton. Materialized View Selection
for Multidimensional Datasets. In VLDB’98 [1151], pages 488–499, 1998. Fully available
at http://www.vldb.org/conf/1998/p488.pdf [accessed 2011-04-10].
2484. Patrick Siarry and Gérard Berthiau. Fitting of Tabu Search to Optimize Func-
tions of Continuous Variables. International Journal for Numerical Methods in En-
gineering, 40(13):2449–2457, 1997, Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.1002/(SICI)1097-0207(19970715)40:13<2449::AID-NME172>3.0.CO;2-O.
2485. Patrick Siarry and Zbigniew Michalewicz, editors. Advances in Metaheuristics for Hard
Optimization, Natural Computing Series. Springer New York: New York, NY, USA,
November 2007. doi: 10.1007/978-3-540-72960-0. isbn: 3-540-72959-3. Google Books
ID: TkMR1b-VCR4C. Library of Congress Control Number (LCCN): 2007929485. Series
editors: G. Rozenberg, Thomas Bäck, Ágoston E. Eiben, J.N. Kok, and H.P. Spaink.
2486. Patrick Siarry, Gérard Berthiau, François Durdin, and Jacques Haussy. Enhanced Simulated
Annealing for Globally Minimizing Functions of Many-Continuous Variables. ACM Transac-
tions on Mathematical Software (TOMS), 23(2):209–228, June 1997, ACM Press: New York,
NY, USA. doi: 10.1145/264029.264043.
2487. Wojciech Wladyslaw Siedlecki and Jack Sklansky. Constrained Genetic Optimization via
Dynamic Reward-Penalty Balancing and its Use in Pattern Recognition. In ICGA’89 [2414],
pages 141–150, 1989. See also [2488].
2488. Wojciech Wladyslaw Siedlecki and Jack Sklansky. Constrained Genetic Optimization via
Dynamic Reward-Penalty Balancing and its Use in Pattern Recognition. In Handbook of
Pattern Recognition and Computer Vision [539], Chapter 1.3.3, pages 108–124. World Scien-
tific Publishing Co.: Singapore, 1993. See also [2487].
2489. Eric V. Siegel and John R. Koza, editors. Papers from the 1995 AAAI Fall Symposium on
Genetic Programming, November 10–12, 1995, Massachusetts Institute of Technology (MIT):
Cambridge, MA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-92-X. Google
Books ID: 03UmAAAACAAJ. GBV-Identification (PPN): 226307441. AAAI Technical Report
FS-95-01.
2490. Sidney Siegel and N. John Castellan Jr. Nonparametric Statistics for The Behavioral
Sciences, Humanities/Social Sciences/Languages. McGraw-Hill: New York, NY, USA,
1956–1988. isbn: 0-07-057357-3 and 070573434X. Google Books ID: ElwEAAAACAAJ,
MO1lGwAACAAJ, QYFCGgAACAAJ, and YmxEAAAAIAAJ. OCLC: 16131891 and 255536918.
2491. Karl Sigurjónsson. Taboo Search Based Metaheuristic for Solving Multiple Depot VRPPD
with Intermediary Depots. Master’s thesis, Technical University of Denmark (DTU), Infor-
matics and Mathematical Modelling (IMM): Lyngby, Denmark, September 1, 2008. Fully
available at http://orbit.dtu.dk/getResource?recordId=224453&objectId=1&
versionId=1 [accessed 2010-11-02].
2492. Sara Silva, James A. Foster, Miguel Nicolau, Penousal Machado, and Mario Giacobini, ed-
itors. Proceedings of the 14th European Conference on Genetic Programming (EuroGP’11),
April 27–29, 2011, Torino, Italy, volume 6621/2011 in Theoretical Computer Science and
General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-642-20407-4. Partly available at http://evostar.
dei.uc.pt/call-for-contributions/eurogp/ [accessed 2011-02-28]. Library of Congress
Control Number (LCCN): 2011925001.
2493. Kwang Mon Sim and Weng Hong Sun. Multiple Ant-Colony Optimization for Network
Routing. In CW’02 [672], pages 277–281, 2002. doi: 10.1109/CW.2002.1180890.
2494. Dan Simon. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. John
Wiley & Sons Ltd.: New York, NY, USA, June 2006. isbn: 0470045345 and 0471708585.
Google Books ID: UiMVoP 7TZkC. OCLC: 64084871 and 85821130.
2495. George Gaylord Simpson. The Baldwin Effect. Evolution – International Journal of Organic
Evolution, 7(2):110–117, June 1953, Society for the Study of Evolution (SSE) and Wiley
Interscience: Chichester, West Sussex, UK.
2496. Patrick K. Simpson, editor. The 1998 IEEE International Conference on Evolutionary Com-
putation (CEC’98), 1998, Egan Civic & Convention Center and Anchorage Hilton Hotel:
Anchorage, AK, USA. IEEE Computer Society: Piscataway, NJ, USA, IEEE Computer
Society: Piscataway, NJ, USA. isbn: 0-7803-4869-9 and 0-7803-4870-2. Google
Books ID: FMRVAAAAMAAJ, OUizJgAACAAJ, and uvhJAAAACAAJ. INSPEC Accession Num-
ber: 6016643. Catalogue no.: 98TH8360. Part of [1330].
2497. Karl Sims. Artificial Evolution for Computer Graphics. In SIGGRAPH’91 [2702],
pages 319–328, 1991. doi: 10.1145/122718.122752. Fully available at http://www.
cs.bham.ac.uk/˜wbl/biblio/gp-html/sims91a.html [accessed 2009-06-19] and http://
www.naturewizard.com/papers/evolution%20-%20p319-sims.pdf [accessed 2009-06-
18].

2498. Mark C. Sinclair. Minimum Cost Topology Optimisation of the COST 239 European Optical
Network. In ICANNGA’95 [2141], pages 26–29, 1995. CiteSeerx : 10.1.1.53.6542. See
also [2500, 2501].
2499. Mark C. Sinclair. Evolutionary Telecommunications: A Summary. In GECCO’99 WS [2502],
pages 209–212, 1999. Fully available at http://uk.geocities.com/markcsinclair/
ps/etppf_sin_sum.ps.gz [accessed 2008-08-01]. CiteSeerx : 10.1.1.38.6797. Partly avail-
able at http://uk.geocities.com/markcsinclair/ps/etppf_sin_slides.ps.gz
[accessed 2008-08-01].

2500. Mark C. Sinclair. Optical Mesh Network Topology Design using Node-Pair Encoding
Genetic Programming. In GECCO’99 [211], pages 1192–1197, volume 2, 1999. Fully
available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/sinclair_1999_
OMNTDNEGP.html [accessed 2008-07-21]. CiteSeerx : 10.1.1.49.453. See also [2498].
2501. Mark C. Sinclair. Node-Pair Encoding Genetic Programming for Optical Mesh Network
Topology Design. In Telecommunications Optimization: Heuristic and Adaptive Tech-
niques [640], Chapter 6, pages 99–114. John Wiley & Sons Ltd.: New York, NY, USA,
2000. See also [2498].
2502. Mark C. Sinclair, David Wolfe Corne, and George D. Smith, editors. Proceedings of Evolu-
tionary Telecommunications: Past, Present and Future – A Bird-of-a-Feather Workshop at
GECCO’99 (GECCO’99 WS), July 13, 1999, Orlando, FL, USA. Fully available at http://
uk.geocities.com/markcsinclair/etppf.html [accessed 2008-06-25]. See also [211, 2093].
2503. Gulshan Singh and Kalyanmoy Deb. Comparison of Multi-Modal Optimization Algo-
rithms based on Evolutionary Algorithms. In GECCO’06 [1516], pages 1305–1312, 2006.
doi: 10.1145/1143997.1144200. Fully available at https://ceprofs.civil.tamu.edu/
ezechman/CVEN689/References/singh_deb_2006.pdf [accessed 2011-03-14]. Genetic algo-
rithms session.
2504. Madan G. Singh, editor. Systems and Control Encyclopedia. Pergamon Press: Ox-
ford, UK, 1987. isbn: 0-08-028709-3, 0080359337, and 0080406017. Google Books
ID: EbE9AAAACAAJ, ErE9AAAACAAJ, and re9QAAAAMAAJ.
2505. Rama Shankar Singh, Costas B. Krimbas, Diane B. Paul, and John Beatty, editors. Think-
ing about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge Uni-
versity Press: Cambridge, UK, November 2000. isbn: 0521620708. Google Books
ID: HmVbYJ93d-AC. LC Classification: QH366.2 .T53 2000.
2506. Sameer Singh, Maneesha Singh, Chid Apte, and Petra Perner, editors. Pattern Recognition
and Data Mining – Third International Conference on Advances in Pattern Recognition –
Part I (ICAPR’05), August 22–25, 2005, Bath, UK, volume 3686/2005 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11551188.
isbn: 3-540-28757-4.
2507. Abhishek Sinha and David Edward Goldberg. A Survey of Hybrid Genetic and Evolu-
tionary Algorithms. IlliGAL Report 2003004, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, University
of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2003. Fully avail-
able at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2003004.ps.Z [ac-
cessed 2010-09-11].
2508. Transportation Optimization Portal – TOP. SINTEF: Trondheim, Norway, October 21, 2010.
Fully available at http://www.sintef.no/Projectweb/TOP/ [accessed 2011-10-13].
2509. Moshe Sipper, Daniel Mange, and Andrés Pérez-Uribe, editors. Proceedings of the Sec-
ond International Conference on Evolvable Systems: From Biology to Hardware (ICES’98),
September 23–25, 1998, Lausanne, Switzerland, volume 1478/1998 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0057601.
REFERENCES 1145
isbn: 3-540-64954-9.
2510. Michael Sipser. Introduction to the Theory of Computation. Thomson Course Tech-
nology: Boston, MA, USA, Cengage Learning (Wadsworth Publishing Company):
Stamford, CT, USA, and Prindle, Weber, Schmidt (PWS) Publishing Company:
Boston, MA, USA, 1997–2006. isbn: 0-534-94728-X, 0534948103, and 0619217642.
Google Books ID: -wlWAAAACAAJ, 4CR8AQAACAAJ, wlWAAAACAAJ, and eRYFAAAACAAJ.
OCLC: 35558950 and 64006340.
2511. Evren Sirin, Bijan Parsia, Dan Wu, James Hendler, and Dana Nau. HTN Planning for Web
Service Composition using SHOP2. Web Semantics: Science, Services and Agents on the
World Wide Web, 1(4):377–396, October 2004, Elsevier Science Publishers B.V.: Essex, UK.
International Semantic Web Conference 2003.
2512. Evren Sirin, Bijan Parsia, and James Hendler. Template-based Composition of Semantic Web
Services. In AAAI Fall Symposium on Agents and the Semantic Web [2138], pages 85–92,
2005. Fully available at http://www.mindswap.org/papers/extended-abstract.
pdf [accessed 2011-01-12]. CiteSeerx : 10.1.1.74.2584.
2513. Jaroslaw Skaruz and Franciszek Seredynski. Detecting Web Application Attacks with Use of
Gene Expression Programming. pages 2029–2035. doi: 10.1109/CEC.2009.4983190. INSPEC
Accession Number: 10688785.
2514. Jaroslaw Skaruz and Franciszek Seredynski. Web Application Security through
Gene Expression Programming. In EvoWorkshops’09 [1052], pages 1–10, 2009.
doi: 10.1007/978-3-642-01129-0_1.
2515. Benjamin Skellett, Benjamin Cairns, Nicholas Geard, Bradley Tonkes, and Janet Wiles.
Rugged NK Landscapes Contain the Highest Peaks. In GECCO’05 [304], pages 579–584,
2005. CiteSeerx : 10.1.1.1.5011.
2516. Hendrik Skubch. Hierarchical Strategy Learning for FLUX Agents. Master’s thesis, Technis-
che Universität Dresden: Dresden, Sachsen, Germany, February 18, 2007, Michael Thielscher,
Supervisor. See also [2517].
2517. Hendrik Skubch. Hierarchical Strategy Learning for FLUX Agents: An Applied Tech-
nique. VDM Verlag Dr. Müller AG und Co. KG: Saarbrücken, Saarland, Germany, 2006.
isbn: 3-8364-5271-5. Google Books ID: HONbJgAACAAJ and syxkPQAACAAJ. See also
[2516].
2518. Hendrik Skubch, Michael Wagner, Roland Reichle, Stefan Triller, and Kurt Geihs. Towards a
Comprehensive Teamwork Model for Highly Dynamic Domains. In ICAART’10 [917], pages
121–127, volume 2, 2010.
2519. Proceedings of the 3rd International Workshop on Machine Learning (ML’88), 1985, Skytop,
PA, USA.
2520. Derek H. Sleeman and Peter Edwards, editors. Proceedings of the 9th International Workshop
on Machine Learning (ML’92), July 1–3, 1992, Aberdeen, Scotland, UK. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-247-X.
2521. David Slepian, editor. Key Papers in the Development of Information Theory. Insti-
tute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA, 1974.
isbn: 0471798525 and 0879420286. Google Books ID: AItQAAAAMAAJ, JSfdGgAACAAJ,
VPFeAQAACAAJ, and VfFeAQAACAAJ. OCLC: 940335, 214976152, and 257564453. See
also [2465].
2522. Proceedings of the 7th International Conference on Soft Computing, Evolutionary Compu-
tation, Genetic Programing, Fuzzy Logic, Rough Sets, Neural Networks, Fractals, Bayesian
Methods (MENDEL’01), June 6–8, 2001, Brno University of Technology: Brno, Czech Re-
public. Slezská Univerzita V Opavě, Filozoficko-přírodovědecká fakulta, Ústav informatiky:
Opava, Czech Republic. isbn: 80-7248-107-X.
2523. N. J. H. Small. Miscellanea: Marginal Skewness and Kurtosis in Testing Multivariate Nor-
mality. Journal of the Royal Statistical Society: Series C – Applied Statistics, 29(1):85–87,
1980, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
2524. Waleed W. Smari, editor. Third International Conference on Information Reuse and In-
tegration (IRI’01), November 27–29, 2001, Luxor Hotel: Las Vegas, NV, USA. Inter-
national Society for Computers and Their Applications, Inc. (ISCA): Cary, NC, USA.
isbn: 1-880843-41-2. Google Books ID: QnxyAAAACAAJ. OCLC: 55764490.
2525. Mikhail I. Smirnov, editor. Autonomic Communication – Revised Selected Papers from
the First International IFIP Workshop (WAC’04), October 18–19, 2004, Berlin, Ger-
many, volume 3457/2005 in Computer Communication Networks and Telecommunica-
tions, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/11520184. isbn: 3-540-27417-0. Google Books ID: kE3DeVNfaMcC
and pfPiPgAACAAJ. OCLC: 61029791, 145832250, 255319050, 318300920, and
403681622. Library of Congress Control Number (LCCN): 2005928376.
2526. Alice E. Smith and David W. Coit. Penalty Functions. In Handbook of Evolutionary Com-
putation [171], Chapter C 5.2. Oxford University Press, Inc.: New York, NY, USA, Institute
of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press,
Inc.: Boca Raton, FL, USA, 1997. Fully available at http://www.cs.cinvestav.mx/
˜constraint/papers/chapter.pdf [accessed 2008-11-15].
2527. George D. Smith, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the In-
ternational Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA’97),
April 1–4, 1997, Vienna, Austria, SpringerComputerScience. Springer Verlag GmbH: Vienna,
Austria. isbn: 3-211-83087-1. Google Books ID: KqZQAAAAMAAJ. OCLC: 40146054 and
246391089. Published in 1998.
2528. John Maynard Smith. Natural Selection and the Concept of a Protein Space. Nature, 225
(5232):563–564, February 7, 1970. doi: 10.1038/225563a0. PubMed ID: 5411867. Received
November 7, 1969.
2529. Joshua R. Smith. Designing Biomorphs with an Interactive Genetic Algorithm. In
ICGA’91 [254], pages 535–538, 1991. Fully available at http://web.media.mit.edu/
˜jrs/biomorphs.pdf [accessed 2009-06-18].
2530. Michael H. Smith, M. Lee, James M. Keller, and John Yen, editors. Proceedings of the Bien-
nial Conference of the North American Fuzzy Information Processing Society (NAFIPS’96),
June 19–22, 1996, Berkeley, CA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-3225-3 and 0-7803-3226-1. Google Books ID: 6chVAAAAMAAJ and
X9SOJgAACAAJ. OCLC: 35188367 and 63197371.
2531. Murray Smith. Neural Networks for Statistical Modeling. John Wiley & Sons Ltd.: New York,
NY, USA, Van Nostrand Reinhold Company, and International Thomson Computer Press:
London, UK, 1993. isbn: 0442013108 and 1850328420. Google Books ID: TjjFAAAACAAJ,
TzjFAAAACAAJ, and lEADKgAACAAJ. OCLC: 27145760 and 34181200.
2532. Peter W. H. Smith and Kim Harries. Code Growth, Explicitly Defined Introns, and Alterna-
tive Selection Schemes. Evolutionary Computation, 6(4):339–360, Winter 1998, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1998.6.4.339. CiteSeerx : 10.1.1.24.2202.
2533. Reid G. Smith and A. Carlisle Scott, editors. Proceedings of the The Third Conference
on Innovative Applications of Artificial Intelligence (IAAI’91), June 14–19, 1991, Anaheim,
CA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-69148-5. Partly available
at http://www.aaai.org/Conferences/IAAI/iaai91.php [accessed 2007-09-06].
2534. Robert Elliott Smith, editor. A Report on The First International Workshop on Learning
Classifier Systems (IWLCS-92). Technical Report, NASA Johnson Space Center: Houston,
TX, USA, 1992. CiteSeerx : 10.1.1.45.4955. See also [2002].
2535. Shana Shiang-Fong Smith. Using Multiple Genetic Operators to Reduce Premature Conver-
gence in Genetic Assembly Planning. Computers in Industry – An International, Application
Oriented Research Journal, 54(1):35–49, May 2004, Elsevier Science Publishers B.V.: Ams-
terdam, The Netherlands. doi: 10.1016/j.compind.2003.08.001.
2536. Stephen Frederick Smith. A Learning System based on Genetic Adaptive Algorithms. PhD
thesis, University of Pittsburgh: Pittsburgh, PA, USA, 1980. Order No.: AAI8112638. Uni-
versity Microfilms No.: 81-12638.
2537. Stephen L. Smith and Stefano Cagnoni, editors. Medical Applications of Genetic and Evo-
lutionary Computation Workshop (MedGEC’08), July 12, 2008, Renaissance Atlanta Hotel
Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
2538. Tom Smith, Phil Husbands, Paul Layzell, and Michael O’Shea. Fitness Landscapes and
Evolvability. Evolutionary Computation, 10(1):1–34, Summer 2002, MIT Press: Cambridge,
MA, USA. doi: 10.1162/106365602317301754. Fully available at http://wotan.liu.edu/
docis/dbl/evocom/2002_10_1_1_FLAE.html [accessed 2007-07-28].
2539. Donald L. Snyder and Michael I. Miller. Random Point Processes in Time and Space, Springer
Texts in Electrical Engineering. Springer New York: New York, NY, USA, June 19, 1991.
isbn: 0387975772 and 3540975772. Google Books ID: MIkwAQAACAAJ. OCLC: 23287518
and 246549179.
2540. Lawrence V. Snyder and Mark S. Daskin. A Random-Key Genetic Algorithm for the Gen-
eralized Traveling Salesman Problem. European Journal of Operational Research (EJOR),
174(1):38–53, October 1, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/j.ejor.2004.09.057. Fully available at http://www.lehigh.edu/˜lvs2/
Papers/gtsp.pdf and http://www.lehigh.edu/˜lvs2/Papers/gtsp_ejor.pdf [accessed 2010-11-22].
CiteSeerx : 10.1.1.1.9730, 10.1.1.71.5810, and 10.1.1.87.6258.
2541. Marcus Henrique Soares Mendes, Gustavo Luı́s Soares, and João Antônio de Vasconcelos.
PID Step Response Using Genetic Programming. In SEAL’10 [2585], pages 359–368, 2010.
doi: 10.1007/978-3-642-17298-4_37.
2542. Elliott Sober. The Two Faces of Fitness. In Thinking about Evolution: Historical, Philo-
sophical, and Political Perspectives [2505], Chapter 15, pages 309–321. Cambridge University
Press: Cambridge, UK, 2000. Fully available at http://philosophy.wisc.edu/sober/
tff.pdf [accessed 2008-08-11].
2543. Proceedings of the twelfth annual ACM-SIAM Symposium on Discrete Algorithms, 2001,
Washington, DC, USA. Society for Industrial and Applied Mathematics (SIAM): Philadel-
phia, PA, USA. isbn: 0-89871-490-7.
2544. AISB 2002 Convention: Logic, Language and Learning (AISB’02), April 3–5, 2002, Imperial
College London: London, UK. Society for the Study of Artificial Intelligence and the Sim-
ulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://
www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2545. AISB 2003 Convention: Cognition in Machines and Animals (AISB’03), April 7–11, 2003,
University of Wales: Aberystwyth, Wales, UK. Society for the Study of Artificial Intelligence
and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available
at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2546. AISB 2004 Convention: Motion, Emotion and Cognition (AISB’04), March 29–April 1, 2004,
University of Leeds: Leeds, UK. Society for the Study of Artificial Intelligence and the
Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://
www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2547. Social Intelligence and Interaction in Animals, Robots and Agents (AISB’05), April 12–15,
2005, University of Hertfordshire: Hatfield, England, UK. Society for the Study of Artifi-
cial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK.
Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [ac-
cessed 2008-09-10].
2548. Adaptation in Artificial and Biological Systems (AISB’06), April 3–6, 2006, University of
Bristol: Bristol, UK. Society for the Study of Artificial Intelligence and the Simulation of
Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.
org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2549. Artificial and Ambient Intelligence (AISB’07), April 1–4, 2007, Newcastle University: New-
castle upon Tyne, UK. Society for the Study of Artificial Intelligence and the Simulation of
Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.
org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2550. Bambang I. Soemarwoto, O. J. Boelens, M. Allan, M. T. Arthur, K. Bütefisch, N. Ceresola,
and W. Fritz, editors. 21st Applied Aerodynamics Conference, Towards the Simulation of
Unsteady Manoeuvre Dominated by Vortical Flow, Common Exercise I of WEAG THALES
JP12.15 (AIAA’03), June 23–26, 2003, Hilton Walt Disney World, Walt Disney World Resort:
Orlando, FL, USA. American Institute of Aeronautics and Astronautics (AIAA): Reston, VA,
USA. isbn: 1563476002 and 9781563476006.
2551. Richard Soland, Ambrose Goicoechea, L. Duckstein, and Stanley Zionts, editors. Proceed-
ings of the 9th International Conference on Multiple Criteria Decision Making: Theory
and Applications in Business, Industry, and Government (MCDM’90), August 5–8, 1990,
Fairfax, VA, USA. Springer New York: New York, NY, USA. isbn: 0-387-97805-4
and 3-540-97805-4. OCLC: 25282714, 232664357, 471769870, 633632497, and
638193262. Library of Congress Control Number (LCCN): 92002703. GBV-Identification
(PPN): 120911701. Published in July 1992.
2552. Ashu M. G. Solo, editor. Proceedings of the 2011 International Conference on Genetic and
Evolutionary Methods (GEM’11), July 18–21, 2011, Las Vegas, NV, USA.
2553. Dawn Xiaodong Song. A Linear Genetic Programming Approach to Intrusion Detection.
Master’s thesis, Dalhousie University: Halifax, NS, Canada, March 2003, Malcom Ian Hey-
wood and A. Nur Zincir-Heywood, Advisors. Fully available at http://users.cs.dal.
ca/˜mheywood/Thesis/DSong.pdf [accessed 2008-06-16]. CiteSeerx : 10.1.1.91.3779. See
also [2555].
2554. Dawn Xiaodong Song, Adrian Perrig, and Doantam Phan. AGVI – Automatic Generation,
Verification, and Implementation of Security Protocols. In CAV’01 [280], pages 241–245,
2001. doi: 10.1007/3-540-44585-4_21.
2555. Dawn Xiaodong Song, Malcom Ian Heywood, and A. Nur Zincir-Heywood. A Linear
Genetic Programming Approach to Intrusion Detection. In GECCO’03-II [485], pages
2325–2336, 2003. Fully available at http://users.cs.dal.ca/˜mheywood/X-files/
Publications/27242325.pdf [accessed 2008-06-16]. CiteSeerx : 10.1.1.61.7475. See also
[2553].
2556. Il-Yeol Song and Torben Bach Pedersen, editors. Proceedings of the ACM Tenth Interna-
tional Workshop on Data Warehousing and OLAP (DOLAP’07), November 9, 2007, Lisbon,
Portugal. ACM Press: New York, NY, USA. isbn: 978-1-59593-827-5. Part of [28].
2557. Il-Yeol Song and Karen C. Davis, editors. Proceedings of the 7th ACM International Workshop
on Data Warehousing and OLAP (DOLAP’04), November 12–13, 2004, Hyatt Arlington
Hotel: Washington, DC, USA. ACM Press: New York, NY, USA. isbn: 1-58113-977-2.
Part of [1142].
2558. Il-Yeol Song and Dimitri Theodoratos, editors. Proceedings of the Fifth International Work-
shop on Data Warehousing and OLAP (DOLAP’02), November 8, 2002, McLean, VA, USA.
ACM Press: New York, NY, USA. Part of [25? ].
2559. Branko Souček and IRIS Group, editors. Dynamic, Genetic, and Chaotic Programming: The
Sixth-Generation, Sixth Generation Computer Technologies. Wiley Interscience: Chichester,
West Sussex, UK, April 1992. isbn: 047155717X. Partly available at http://www.
amazon.com/gp/reader/047155717X/ref=sib_dp_pt/002-6076954-4198445#
reader-link [accessed 2008-05-29]. Google Books ID: tIFQAAAAMAAJ.
2560. Terence Soule and James A. Foster. Removal Bias: a New Cause of Code Growth
in Tree Based Evolutionary Programming. In CEC’98 [2496], pages 781–786, 1998.
doi: 10.1109/ICEC.1998.700151. CiteSeerx : 10.1.1.35.3659. INSPEC Accession Num-
ber: 6016655.
2561. James C. Spall. Introduction to Stochastic Search and Optimization, Estimation, Simulation,
and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley
Interscience: Chichester, West Sussex, UK, first edition, June 2003. isbn: 0-471-33052-3
and 0-471-72213-8. Google Books ID: f66OIvvkKnAC. OCLC: 50773216, 51235522,
474647590, and 637018778. Library of Congress Control Number (LCCN): 2002038049.
LC Classification: QA274 .S63 2003.
2562. Bruce M. Spatz and Gregory J. E. Rawlins, editors. Proceedings of the First Workshop on
Foundations of Genetic Algorithms (FOGA’90), July 15–18, 1990, Indiana University, Bloom-
ington Campus: Bloomington, IN, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-170-8. Google Books ID: Df12yLrlUZYC. OCLC: 24168098,
263581771, and 311490755. Published July 1, 1991.
2563. William M. Spears and Kenneth Alan De Jong. On the Virtues of Parameterized Uniform
Crossover. In ICGA’91 [254], pages 230–236, 1991. Fully available at http://www.mli.
gmu.edu/papers/91-95/91-18.pdf [accessed 2010-08-02]. CiteSeerx : 10.1.1.51.8141.
2564. William M. Spears and Kenneth Alan De Jong. Using Genetic Algorithms for Supervised Con-
cept Learning. In TAI’90 [1358], pages 335–341, 1990. doi: 10.1109/TAI.1990.130359. Fully
available at http://www.cs.uwyo.edu/˜wspears/papers/ieee90.pdf [accessed 2009-07-21].
CiteSeerx : 10.1.1.57.147. INSPEC Accession Number: 4076618.
2565. William M. Spears and Worthy Neil Martin. Proceedings of the Sixth Workshop on Founda-
tions of Genetic Algorithms (FOGA’00), July 21–23, 2000, Charlottesville, VA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-734-X. Google Books
ID: QPFQAAAAMAAJ and uiX9TNaSxxYC.
2566. Lee Spector. Evolving Control Structures with Automatically Defined Macros. In
Papers from the 1995 AAAI Fall Symposium on Genetic Programming [2489], pages
99–105, 1995. Fully available at http://helios.hampshire.edu/lspector/pubs/
ADM-fallsymp-e.pdf and http://www.aaai.org/Papers/Symposia/Fall/1995/
FS-95-01/FS95-01-014.pdf [accessed 2010-08-21]. CiteSeerx : 10.1.1.51.6239. See also
[2567].
2567. Lee Spector. Simultaneous Evolution of Programs and their Control Structures. In
Advances in Genetic Programming II [106], Chapter 7, pages 137–154. MIT Press:
Cambridge, MA, USA, 1996. Fully available at http://hampshire.edu/˜lasCCS/
pubs/AiGP2-post-final-e.pdf, http://helios.hampshire.edu/lspector/
pubs/AiGP2-post-final-e.pdf, and http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/spector_1996_aigp2.html [accessed 2010-08-21]. CiteSeerx : 10.1.1.39.9677.
2568. Lee Spector. Automatic Quantum Computer Programming – A Genetic Programming Ap-
proach, volume 7 in Genetic Programming Series. Springer Science+Business Media, Inc.:
New York, NY, USA, paperback 2007 edition, 2006. isbn: 0-387-36496-X and
0-387-36791-8. Google Books ID: 9wxzm8Gm-fUC and mEKfvtxmn9MC. Library of
Congress Control Number (LCCN): 2006931640. Series editor: John Koza.
2569. Lee Spector, William B. Langdon, Una-May O’Reilly, and Peter John Angeline, editors.
Advances in Genetic Programming, volume 3 in Complex Adaptive Systems, Bradford Books.
MIT Press: Cambridge, MA, USA, July 16, 1996. isbn: 0-262-19423-6. Google Books
ID: 5Qwbal3AY6oC. OCLC: 29595260, 42274498, 42579104, 60710976, 60863107,
154701926, and 222641812.
2570. Lee Spector, Erik D. Goodman, Annie S. Wu, William B. Langdon, Hans-Michael Voigt,
Mitsuo Gen, Sandip Sen, Marco Dorigo, Shahram Pezeshk, Max H. Garzon, and Ed-
mund K. Burke, editors. Proceedings of the Genetic and Evolutionary Computation Confer-
ence (GECCO’01), July 7–11, 2001, Holiday Inn Golden Gateway Hotel: San Francisco, CA,
USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-774-9.
Google Books ID: 81QKAAAACAAJ and cYVQAAAAMAAJ. See also [1104].
2571. Herbert Spencer. The Principles of Biology, volume 1. D. Appleton and Company: New York,
NY, USA and Williams and Norgate: London & Edinburgh, UK, 1st edition, 1864–1868. Fully
available at http://books.google.com/books?id=rHoOAAAAYAAJ and http://www.
archive.org/details/ThePrinciplesOfBiology [accessed 2009-06-11].
2572. W. Spendley, G. R. Hext, and F. R. Himsworth. Sequential Application of Simplex Designs
in Optimisation and Evolutionary Operation. Technometrics, 4(4):441–461, November 1962,
American Statistical Association (ASA): Alexandria, VA, USA and American Society for
Quality (ASQ): Milwaukee, WI, USA. OCLC: 486202935. JSTOR Stable ID: 1266283.
2573. Christian Spieth, Felix Streichert, Nora Speer, and Andreas Zell. Utilizing an Island
Model for EA to Preserve Solution Diversity for Inferring Gene Regulatory Networks. In
CEC’04 [1369], pages 146–151, volume 1, 2004. doi: 10.1109/CEC.2004.1330850. Fully
available at http://www-ra.informatik.uni-tuebingen.de/software/JCell/
publications.html [accessed 2007-08-24].
2574. Ralph H. Sprague, Jr., editor. Proceedings of the 33rd Hawaii International Conference
on System Sciences (HICSS’00), January 4–7, 2000, Maui, HI, USA. IEEE Computer So-
ciety: Washington, DC, USA. isbn: 0-7695-0493-0, 0769504949, and 0769504957.
Google Books ID: 7QIEAAAACAAJ and IK9VAAAAMAAJ. OCLC: 43892251, 228269935,
and 423974973. Order No.: PR00493.
2575. Genetic Programming Theory and Practice V, Proceedings of the Genetic Programming The-
ory Practice 2007 Workshop (GPTP’07), May 17–19, 2007, University of Michigan, Center
for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolution-
ary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Nor-
well, MA, USA. Partly available at http://www.cscs.umich.edu/gptp-workshops/
gptp2007/ [accessed 2007-09-28].
2576. Proceedings of the V International Workshop on Nature Inspired Cooperative Strategies for
Optimization (NICSO’11), October 20–22, 2011, Cluj-Napoca, Romania, Studies in Compu-
tational Intelligence. Springer-Verlag: Berlin/Heidelberg. Partly available at http://www.
nicso2011.org/ [accessed 2011-05-25].
2577. Selected Papers from the First Asia-Pacific Conference on Simulated Evolution and
Learning (SEAL’96), November 9–12, 1996, Taejon, South Korea, volume 1285/1997 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0028514.
2578. Sixth International Workshop on Learning Classifier Systems (IWLCS’03), July 12–16, 2003,
Holiday Inn Chicago: Chicago, IL, USA. Springer-Verlag GmbH: Berlin, Germany. Held
during GECCO-2003. Part of [484, 485]. See also [1589].
2579. The Tenth International Workshop on Learning Classifier Systems (IWLCS’07), July 8, 2007,
University College London (UCL), Department of Computer Science: London, UK, Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. Held in association with GECCO-2007. See also
[2699, 2700].
2580. Sixth International Conference on Ant Colony Optimization and Swarm Intelligence
(ANTS’08), September 22–28, 2008, Brussels, Belgium, Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
2581. Learning and Intelligent OptimizatioN (LION 4), January 18–22, 2010, Ca’ Dolfin Building,
Universita’ Ca’ Foscari: Venice, Italy, Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany.
2582. Proceedings of the 35th International Symposium on Mathematical Foundations of Computer
Science (MFCS’10), August 21–29, 2010, Masaryk University, Faculty of Informatics: Brno,
Czech Republic, Advanced Research in Computing and Software Science (ARCoSS), Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2583. Proceedings of the 9th Mexican International Conference on Artificial Intelligence (MI-
CAI’10), November 8–13, 2010, Autonomous University of Hidalgo State: Pachuca, Mexico,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2584. Proceedings of the 9th International Symposium on Experimental Algorithms (SEA’10),
May 20–22, 2010, Hotel Continental Terme: Ischia Porto, Ischa Island, Naples, Italy, volume
6049 in Programming and Software Engineering Series, Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3642131921. Google Books
ID: XerGSAAACAAJ.
2585. Proceedings of the Eighth International Conference on Simulated Evolution And Learning
(SEAL’10), December 1–4, 2010, Indian Institute of Technology Kanpur (IIT): Kanpur,
Uttar Pradesh, India, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany.
2586. Evolution Artificielle: Proceedings of the 10th Biannual International Conference on Artificial
Evolution (AE’11), October 24–26, 2011, Angers Congress Centre: Angers, France, Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. Partly avail-
able at http://www.info.univ-angers.fr/ea2011/ [accessed 2011-03-30].
2587. Proceedings of the Fourth Conference on Artificial General Intelligence (AGI’11), August 3–
6, 2011, Mountain View, CA, USA, Lecture Notes in Artificial Intelligence (LNAI, SL7),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2588. Proceedings of the 4th European Event on Bio-Inspired Algorithms for Continuous Param-
eter Optimisation (EvoNUM’11), 2011, Torino, Italy. Springer-Verlag GmbH: Berlin, Ger-
many. Partly available at http://evostar.dei.uc.pt/call-for-contributions/
evoapplications/evonum/ [accessed 2011-02-28]. Part of [788? ].
2589. Proceedings of the 10th International Conference on Adaptive and Natural Computing Al-
gorithms (ICANNGA’11), April 14–16, 2011, University of Ljubljana: Ljubljana, Slovenia,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2590. Proceedings of the 7th Semantic Web Services Challenge Workshop (SWSC’08), October 26,
2008, Kongresszentrum: Karlsruhe, Germany. Springer-Verlag GmbH: Berlin, Germany and
Stanford University, Computer Science Department, Stanford Logic Group: Stanford, CA,
USA. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_
Karlsruhe [accessed 2010-12-12]. Part of [2166, 2477].
2591. Proceedings of the 11th UK Performance Engineering Workshop (UKPEW’95), September 5–
9, 1995, Liverpool John Moores University: Liverpool, UK. Springer-Verlag London Limited:
London, UK.
2592. Giovanni Squillero. MicroGP – An Evolutionary Assembly Program Generator. Genetic
Programming and Evolvable Machines, 6(3):247–263, September 2005, Springer Nether-
lands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1007/s10710-005-2985-x.
2593. N. S. Sridharan, editor. Proceedings of the 11th International Joint Conference on Arti-
ficial Intelligence (IJCAI’89-I), August 1989, Detroit, MI, USA, volume 1. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-094-9. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-89-VOL1/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [2594].
2594. N. S. Sridharan, editor. Proceedings of the 11th International Joint Conference on Artifi-
cial Intelligence (IJCAI’89-II), August 1989, Detroit, MI, USA, volume 2. Morgan Kauf-
mann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-094-9. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-89-VOL2/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [2593].
2595. International Conference on Swarm, Evolutionary and Memetic Computing (SEMCCO’10),
December 16–18, 2010, SRM University, Department of Electrical and Electronics Engineer-
ing: Chennai, India.
2596. Marc St-Hilaire, Steven Chamberland, and Samuel Pierre. A Local Search Heuristic for
the Global Planning of UMTS Networks. In IWCMC’06 [2082], pages 1405–1410, 2006.
doi: 10.1145/1143549.1143831. SESSION: R2-D: general symposium. See also [2597].
2597. Marc St-Hilaire, Steven Chamberland, and Samuel Pierre. A Tabu Search Heuristic for
the Global Planning of UMTS Networks. In WiMob’06 [2176], pages 148–151, 2006.
doi: 10.1109/WIMOB.2006.1696339. INSPEC Accession Number: 9132969. See also [2596].
2598. Peter F. Stadler. Fitness Landscapes. In Biological Evolution and Statistical Physics [1688],
pages 183–204. Springer-Verlag: Berlin/Heidelberg, 2002. doi: 10.1007/3-540-45692-9_10.
Fully available at http://citeseer.ist.psu.edu/old/stadler02fitness.html
and http://www.bioinf.uni-leipzig.de/˜studla/Publications/PREPRINTS/
01-pfs-004.pdf [accessed 2009-06-29]. Based on his talk “The Structure of Fitness Landscapes”,
given at the International Workshop on Biological Evolution and Statistical Physics, May
10–14, 2000, located at the Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer
Str. 38, D-01187 Dresden, Germany.
2599. Peter Stagge and Christian Igel. Structure Optimization and Isomorphisms. In Proceed-
ings of the Second Evonet Summer School on Theoretical Aspects of Evolutionary Comput-
ing [1480], pages 409–422. Springer New York: New York, NY, USA, 1999, University of
Antwerp (RUCA): Antwerp, Belgium. Fully available at http://citeseer.ist.psu.
edu/old/520775.html [accessed 2009-07-10].
2600. Proceedings of AAAI Spring Symposium on Parallel Models of Intelligence: How Can Slow
Components Think So Fast?, March 22–24, 1988, Stanford, CA, USA. OCLC: 21247441.
2601. Kenneth Owen Stanley and Risto Miikkulainen. A Taxonomy for Artificial Embryogeny. Arti-
ficial Life, 9(2):93–130, 2003, MIT Press: Cambridge, MA, USA. Fully available at http://
nn.cs.utexas.edu/downloads/papers/stanley.alife03.pdf [accessed 2010-08-03].
2602. Johannes Starlinger, Florian Leitner, Alfonso Valencia, and Ulf Leser. SOA-Based Integration
of Text Mining Services. In SERVICES Cup’09 [1351], pages 99–106, 2009.
doi: 10.1109/SERVICES-I.2009.100. INSPEC Accession Number: 10982213.
2603. Ioannis Stavrakakis and Mikhail I. Smirnov, editors. Revised and Selected Papers
from the Second IFIP Workshop on Autonomic Communication (WAC’05), October 2–
5, 2005, Vouliagmeni, Athens, Greece, volume 3854/2006 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11687818.
isbn: 3-540-32992-7. Google Books ID: CqEULQAACAAJ. OCLC: 68018967, 145829334,
150400331, 181547043, 314893626, 318300921, 318300921, and 403681669. Library
of Congress Control Number (LCCN): 2006921997. Post-workshop proceedings of the Sec-
ond IFIP TC6 WG6.3 and WG6.6 International Workshop on Autonomic Communication.
2604. Adam Stawowy. Evolutionary based Heuristic for Bin Packing Problem. Computers & Indus-
trial Engineering, 55(2):465–474, September 2008, Elsevier Science Publishers B.V.: Essex,
UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.cie.2008.01.007.
2605. Pawel A. Stefanski. Genetic Programming Using Abstract Syntax Trees. self-published, 1993.
Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/gp-html/icga93-gp_
stefanski.html [accessed 2007-08-15]. Notes from Genetic Programming Workshop at ICGA-
93. See also [969].
2606. Irene A. Stegun, author. Milton Abramowitz, editor. Handbook of Mathematical Func-
tions with Formulas, Graphs, and Mathematical Tables. Dover Publications: Mineola, NY,
USA, 1964. isbn: 0486612724. Google Books ID: MtU8uP7XMvoC and USNBNAAACAAJ.
OCLC: 18003605, 174191235, 185061828, 224642767, 232366868, 256560954, and
299883180.
2607. Gerd Steierwald, Hans Dieter Künne, and Walter Vogt. Stadtverkehrsplanung: Grundla-
gen, Methoden, Ziele. Springer-Verlag GmbH: Berlin, Germany, 2., neu bearbeitete und
erweiterte auflage edition, 2005. doi: 10.1007/b138349. isbn: 3-540-40588-7. Google
1152 REFERENCES
Books ID: 1ohJ05DCZ2oC. OCLC: 630487728. GBV-Identification (PPN): 524942374
and 557339782.
2608. Daniel L. Stein, editor. Lectures in the Sciences of Complexity: The Proceedings of
the 1988 Complex Systems Summer School, June–July 1988, Santa Fé, NM, USA, Santa
Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing
Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0201510154. Google Books
ID: T2GIPQAACAAJ, VaB5OwAACAAJ, and aBPCAAAACAAJ. Published in September 1989.
2609. Tammy I. Stein, editor. Proceedings of the International Geoscience and Remote Sens-
ing Symposium, “Quantitative Remote Sensing for Science and Applications” (IGARSS’95),
July 10–14, 1995, Congress Center: Firenze, Italy. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-2567-2. Google Books ID: 2vVVAAAAMAAJ and C_VVAAAAMAAJ.
OCLC: 36158995, 223757670, and 421884354. Catalogue no.: 95CH35770.
2610. Ralph E. Steuer. Multiple Criteria Optimization: Theory, Computation and Application.
R. E. Krieger Publishing Company: Malabar, FL, USA, reprint edition, August 1989.
isbn: 0894643932. Google Books ID: r5P0AQAACAAJ.
2611. Terry Stewart. Extrema Selection: Accelerated Evolution on Neutral Networks. In
CEC’01 [1334], pages 25–29, volume 1, 2001. doi: 10.1109/CEC.2001.934366. Fully avail-
able at http://rob.ccmlab.ca:8080/˜terry/papers/2001-Extrema_Selection.
pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.2.9166.
2612. Theo Stewart, editor. Proceedings of the 13th International Conference on Multiple Crite-
ria Decision Making: Trends in Multicriteria Decision Making (MCDM’97), January 6–10,
1997, University of Cape Town: Cape Town, South Africa, volume 465 in Lecture Notes in
Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. Partly available
at http://www.uct.ac.za/depts/maths/mcdm97 [accessed 2007-09-10]. Published January
15, 1999.
2613. T. R. Stickland, C. M. N. Tofts, and Nigel R. Franks. A Path Choice Algorithm for Ants.
Naturwissenschaften – The Science of Nature, 79(12):567–572, December 1992, Springer-
Verlag: Berlin/Heidelberg. doi: 10.1007/BF01131415.
2614. Adrian Stoica, Jason D. Lohn, and Didier Keymeulen, editors. Evolvable Hardware – Pro-
ceedings of 1st NASA/DoD Workshop on Evolvable Hardware (EH’99), June 19–21, 1999, Jet
Propulsion Laboratory, California Institute of Technology (Caltech): Pasadena, CA, USA.
IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-0256-3.
2615. Robert Roth Stoll. Set Theory and Logic. Dover Publications: Mineola, NY, USA, reprint
edition, June 1979. isbn: 0-4866-3829-4. Google Books ID: 3-nrPB7BQKMC.
2616. Tobias Storch and Ingo Wegener. Real Royal Road Functions for Constant Population Size.
In GECCO’03-II [485], pages 1406–1417, 2003. doi: 10.1007/3-540-45110-2_14. See also
[2617, 2618].
2617. Tobias Storch and Ingo Wegener. Real Royal Road Functions for Constant Population
Size. Reihe Computational Intelligence: Design and Management of Complex Technical Pro-
cesses and Systems by Means of Computational Intelligence Methods CI-167/04, Universität
Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North
Rhine-Westphalia, Germany, February 2004. Fully available at http://hdl.handle.net/
2003/5456 [accessed 2008-07-22]. issn: 1433-3325. See also [2616, 2618].
2618. Tobias Storch and Ingo Wegener. Real Royal Road Functions for Constant Population Size.
Theoretical Computer Science, 320(1):123–134, June 2, 2004, Elsevier Science Publishers
B.V.: Essex, UK, G. Ausiello and D. Sannella, editors. doi: 10.1016/j.tcs.2004.03.047. See
also [2616, 2617].
2619. Rainer M. Storn. On the Usage of Differential Evolution for Function Optimiza-
tion. In NAFIPS’96 [2530], pages 519–523, 1996. doi: 10.1109/NAFIPS.1996.534789.
Fully available at https://eprints.kfupm.edu.sa/55484/1/55484.pdf and www.
icsi.berkeley.edu/˜storn/bisc1.ps.gz [accessed 2009-09-18]. INSPEC Accession Num-
ber: 5358034.
2620. Rainer M. Storn. Differential Evolution (DE) for Continuous Function Optimization (An
Algorithm by Kenneth Price and Rainer Storn). International Computer Science Institute
(ICSI), University of California: Berkeley, CA, USA, 2010. Fully available at http://www.
icsi.berkeley.edu/˜storn/code.html [accessed 2010-12-28].
2621. Rainer M. Storn and Kenneth V. Price. Differential Evolution – A Simple and Efficient
Adaptive Scheme for Global Optimization over Continuous Spaces. Technical Report TR-
95-012, International Computer Science Institute (ICSI), University of California: Berkeley,
CA, USA, March 1995. Fully available at http://http.icsi.berkeley.edu/˜storn/
TR-95-012.pdf [accessed 2009-07-04]. CiteSeerx : 10.1.1.1.9696.
2622. Felix Streichert. Evolutionäre Algorithmen: Implementation und Anwendungen im Asset-
Management-Bereich (Evolutionary Algorithms and their Application to Asset Management).
Master’s thesis, Universität Stuttgart, Institut A für Mechanik: Stuttgart, Germany, Au-
gust 2001, Arnold Kirstner and Werner Koch, Supervisors. Fully available at http://
www-ra.informatik.uni-tuebingen.de/mitarb/streiche [accessed 2007-08-17].
2623. Roman Grigorevich Strongin and Yaroslav D. Sergeyev. Global Optimization with Non-
Convex Constraints, volume 45 in Nonconvex Optimization and Its Applications. Springer
Science+Business Media, Inc.: New York, NY, USA, 2000. isbn: 0-7923-6490-2. Google
Books ID: xh_GF9Dor3AC.
2624. Eleni Stroulia and Stan Matwin, editors. Advances in Artificial Intelligence – Proceedings
of the 14th Biennial Conference of the Canadian Society for Computational Studies of In-
telligence (AI’01), June 7–9, 2001, Ottawa, ON, Canada, volume 2056 in Lecture Notes in
Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45153-6. isbn: 3-540-42144-0.
2625. Thomas Stützle, editor. Learning and Intelligent OptimizatioN (LION 3), January 14–
18, 2009, Trento, Italy, Theoretical Computer Science and General Issues (SL 1),
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-11169-3.
2626. Ponnuthurai Nagaratnam Suganthan, Nikolaus Hansen, J. J. Liang, Kalyanmoy Deb, Ying-
Ping Chen, Anne Auger, and Santosh Tiwari. Problem Definitions and Evaluation Crite-
ria for the CEC 2005 Special Session on Real-Parameter Optimization. KanGAL Report
2005005, Kanpur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical
Engineering, Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India,
May 2005. Fully available at http://www.iitk.ac.in/kangal/papers/k2005005.
pdf [accessed 2007-10-07]. See also [641, 2627].
2627. Ponnuthurai Nagaratnam Suganthan, Nikolaus Hansen, J. J. Liang, Kalyanmoy Deb,
Ying-Ping Chen, Anne Auger, and Santosh Tiwari. Problem Definitions and Evaluation
Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. Technical
Report May-30-05, Nanyang Technological University (NTU): Singapore, May 30, 2005.
Fully available at http://www.ntu.edu.sg/home/epnsugan/index_files/CEC-05/
Tech-Report-May-30-05.pdf [accessed 2007-10-07]. See also [641, 2626].
2628. Sarina Sulaiman, Siti Mariyam Shamsuddin, Fadni Forkan, Ajith Abraham, and Shahida
Sulaiman. Intelligent Web Caching for E-learning Log Data. In AMS’09 [1349], pages 136–
141, 2009. doi: 10.1109/AMS.2009.88. Fully available at http://www.softcomputing.
net/ams2009_1.pdf [accessed 2009-03-29].
2629. André Sülflow, Nicole Drechsler, and Rolf Drechsler. Robust Multi-Objective Op-
timization in High Dimensional Spaces. In EMO’07 [2061], pages 715–726, 2007.
doi: 10.1007/978-3-540-70928-2_54. Fully available at http://www.informatik.
uni-bremen.de/agra/doc/konf/emo2007_RobustOptimization.pdf [accessed 2009-07-
18].
2630. Fuchun Sun, Yingxu Wang, Jianhua Lu, Bo Zhang, Witold Kinsner, and Lotfi A. Zadeh,
editors. Proceedings of the 9th IEEE International Conference on Cognitive Informatics
(ICCI’10), July 7–9, 2010, Tsinghua University: Běijı̄ng, China. IEEE Computer Society
Press: Los Alamitos, CA, USA. Partly available at http://ieeexplore.ieee.org/xpl/
mostRecentIssue.jsp?punumber=5560187 [accessed 2011-02-28]. Catalogue no.: CFP10312-
CDR.
2631. Jianyong Sun, Qingfu Zhang, Jin Li, and Xin Yao. A Hybrid Estimation of Distri-
bution Algorithm for CDMA Cellular System Design. In SEAL’06 [2856], pages 905–
912, 2006. doi: 10.1007/11903697_114. Fully available at http://dces.essex.ac.uk/
staff/qzhang/conferencepaper/SEAL06SunZhangLIYao.pdf [accessed 2009-08-10]. See
also [2632].
2632. Jianyong Sun, Qingfu Zhang, Jin Li, and Xin Yao. A Hybrid Estimation of Distribution
Algorithm for CDMA Cellular System Design. International Journal of Computational Intel-
ligence and Applications (IJCIA), 7(2):187–200, June 2008, World Scientific Publishing Co.:
Singapore and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026808002235.
See also [2631].
2633. Xia Sun and Ziqiang Wang. An Efficient Materialized View Selection Algorithm Based on
PSO. In ISA’09 [2847], pages 11–14, 2009. doi: 10.1109/IWISA.2009.5072711. INSPEC
Accession Number: 10717741.
2634. Patrick David Surry. A Prescriptive Formalism for Constructing Domain-specific Evolution-
ary Algorithms. PhD thesis, University of Edinburgh, Edinburgh Parallel Computing Centre
(EPCC): Edinburgh, Scotland, UK, 1998. CiteSeerx : 10.1.1.39.7961. Google Books
ID: HOHxSAAACAAJ. OCLC: 281436714 and 606028198.
2635. Andrew M. Sutton, Monte Lunacek, and L. Darrell Whitley. Differential Evolution and Non-
Separability: Using Selective Pressure to Focus Search. In GECCO’07-I [2699], pages 1428–
1435, 2007. doi: 10.1145/1276958.1277221. Fully available at http://www.cs.colostate.
edu/˜sutton/pubs/Sutton2007de.ps.gz [accessed 2009-10-20].
2636. Masuo Suzuki, H. Nishimori, Yoshiyuki Kabashima, Kazuyuki Tanaka, and T. Tanaka, edi-
tors. 2003 Joint Workshop of Hayashibara Foundation and Hayashibara Forum – Physics and
Information – and Workshop on Statistical Mechanical Approach to Probabilistic Information
Processing (SMAPIP), July 11, 2003, Okayama, Japan. Partly available at http://www.
smapip.is.tohoku.ac.jp/˜smapip/2003/hayashibara/ [accessed 2009-07-11].
2637. Reiji Suzuki and Takaya Arita. Repeated Occurrences of the Baldwin Effect Can
Guide Evolution on Rugged Fitness Landscapes. In CI-ALife’07 [2076], pages 8–14,
2007. doi: 10.1109/ALIFE.2007.367652. Fully available at http://www.alife.cs.is.
nagoya-u.ac.jp/˜reiji/publications/2007_ieeealife_suzuki.pdf [accessed 2008-
09-08]. INSPEC Accession Number: 9507350.
2638. Reiji Suzuki and Takaya Arita. How Learning Can Guide Evolution of Communication. In
Artificial Life XI [446], pages 608–615, 2008. CiteSeerx : 10.1.1.161.4097.
2639. William R. Swartout, Paul Rosenbloom, and Peter Szolovits, editors. Proceedings of the
Tenth National Conference on Artificial Intelligence (AAAI’92), July 12–16, 1992, San
Jose Convention Center: San José, CA, USA. AAAI Press: Menlo Park, CA, USA and
MIT Press: Cambridge, MA, USA. isbn: 0-262-51063-4. Fully available at http://
www.aaai.org/Library/AAAI/aaai92contents.php [accessed 2009-06-24]. Google Books
ID: Q55QAAAAMAAJ, ZZxQAAAAMAAJ, and zSkYAAAACAAJ.
2640. Philip H. Sweany and Steven J. Beaty. Instruction Scheduling Using Simulated Annealing.
In MPCS’98 [611], 1998. CiteSeerx : 10.1.1.67.3149.
2641. Gilbert Syswerda. A Study of Reproduction in Generational and Steady State Genetic Al-
gorithms. In FOGA’90 [2562], pages 94–101, 1990.
2642. Piotr S. Szczepaniak, Janusz Kacprzyk, and Adam Niewiadomski, editors. Advances in Web
Intelligence, Proceedings of the Third International Atlantic Web Intelligence Conference
(AWIC’05), June 6–9, 2005, Lodz, Poland, volume 3528/2005 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-26219-9.
2643. George G. Szpiro. A Search for Hidden Relationships: Data Mining with Ge-
netic Algorithms. Computational Economics, 10(3):267–277, August 1997, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1008673309609. Fully available
at http://ideas.repec.org/a/kap/compec/v10y1997i3p267-77.html [accessed 2009-
09-10]. CiteSeerx : 10.1.1.21.6037.
2644. Heidi A. Taboada and David W. Coit. Data Clustering of Solutions for Multiple Objec-
tive System Reliability Optimization Problems. Quality Technology & Quantitative Man-
agement – An International Journal, 4(2):191–210, June 2007, Chung Hua University, De-
partment of Industrial Engineering & System Management: Hsinchu, Taiwan. Fully avail-
able at http://web2.cc.nctu.edu.tw/˜qtqm/qtqmpapers/2007V4N2/2007V4N2_
F3.pdf [accessed 2010-12-16]. See also [2645].
2645. Heidi A. Taboada, Fatema Baheranwala, David W. Coit, and Naruemon Wattanapongsakorn.
Practical Solutions for Multi-Objective Optimization: An Application to System Reliabil-
ity Design Problems. Reliability Engineering & System Safety, 92(3):314–322, March 2007,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands, European Safety and
Reliability Association (ESRA), and Safety Engineering and Risk Analysis Division
(SERAD) of the American Society of Mechanical Engineers (ASME): New York, NY,
USA. doi: 10.1016/j.ress.2006.04.014. Fully available at http://www.rci.rutgers.edu/
˜coit/RESS_2007.pdf [accessed 2007-09-10]. See also [605].
2646. Walter Alden Tackett. Recombination, Selection, and the Genetic Construction of Com-
puter Programs. PhD thesis, University of Southern California (USC), Department of
Electrical Engineering Systems: Los Angeles, CA, USA, April 17, 1994. Fully avail-
able at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/
watphd.tar.Z [accessed 2007-09-07]. CENG Technical Report 94-13.
2647. Genichi Taguchi. Introduction to Quality Engineering: Designing Quality into Products and
Processes. Asian Productivity Organization (APO): Chiyoda-ku, Tokyo, Japan and Kraus
International Publications: Millwood, NY, USA, January 1986. isbn: 9283310837 and
9283310845. Google Books ID: 1NtTAAAAMAAJ, NeA3JAAACAAJ, NuA3JAAACAAJ, and
yMv-AQAACAAJ. OCLC: 16277853 and 180475635. Translation of “Sekkeisha no tame no
hinshitsu kanri”.
2648. Éric D. Taillard. Parallel Iterative Search Methods for Vehicle Routing Problems. Net-
works, 23(8):661–673, December 1993, Wiley Periodicals, Inc.: Wilmington, DE, USA.
doi: 10.1002/net.3230230804.
2649. Éric D. Taillard, Gilbert Laporte, and Michel Gendreau. Vehicle Routeing with Multiple
Use of Vehicles. The Journal of the Operational Research Society (JORS), 47(8):1065–1070,
August 1996, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Op-
erations Research Society: Birmingham, UK. CiteSeerx : 10.1.1.49.3923. JSTOR Stable
ID: 3010414.
2650. Éric D. Taillard, Luca Maria Gambardella, Michel Gendreau, and Jean-Yves Potvin.
Adaptive Memory Programming: A Unified View of Metaheuristics. European Journal
of Operational Research (EJOR), 135(1):1–16, November 6, 2001, Elsevier Science Pub-
lishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(00)00268-X. Fully available
at http://ina2.eivd.ch/Collaborateurs/etd/articles.dir/ejor135_1_1_16.
pdf [accessed 2008-11-11]. CiteSeerx : 10.1.1.54.8460.
2651. Hideyuki Takagi. Active User Intervention in an EC Search. In FEA’00 [2853], pages 995–
998, 2000. Fully available at http://www.design.kyushu-u.ac.jp/˜takagi/TAKAGI/
IECpaper/JCIS2K_2.pdf [accessed 2010-09-03].
2652. Hideyuki Takagi. Interactive Evolutionary Computation: Fusion of the Capacities of EC
Optimization and Human Evaluation. Proceedings of the IEEE, 89(9):1275–1296, Septem-
ber 2001, Institute of Electrical and Electronics Engineers (IEEE) Press: New York,
NY, USA. doi: 10.1109/5.949485. Fully available at http://www.design.kyushu-u.
ac.jp/˜takagi/TAKAGI/IECsurvey.html [accessed 2007-08-29]. INSPEC Accession Num-
ber: 7053972.
2653. Hiromitsu Takahashi and Akira Matsuyama. An Approximate Solution for the Steiner Prob-
lem in Graphs. Mathematica Japonica, 24(6):573–577, 1980, Japanese Association of Mathe-
matical Sciences (JAMS): Sakai, Osaka, Japan. Fully available at http://monet.skku.ac.
kr/course_materials/undergraduate/al/lecture/2006/TM.pdf [accessed 2010-09-27].
2654. Ricardo H. C. Takahashi, Elizabeth Wanner, Kalyanmoy Deb, and Salvatore Greco, editors.
Proceedings of the Sixth International Conference on Evolutionary Multi-Criterion Decision
Making (EMO’11), April 5, 2011, Hotel Estalagem das Minas Gerais: Ouro Preto, MG,
Brazil.
2655. Ahmet Cagatay Talay. A Gateway Access-Point Selection Problem and Traffic Bal-
ancing in Wireless Mesh Networks. In EvoWorkshops’07 [1050], pages 161–168, 2007.
doi: 10.1007/978-3-540-71805-5_18.
2656. Ahmet Cagatay Talay and Sema Oktug. A GA/Heuristic Hybrid Technique for Routing and
Wavelength Assignment in WDM Networks. In EvoWorkshops’04 [2254], pages 150–159,
2004.
2657. El-Ghazali Talbi, Pierre Liardet, Pierre Collet, Evelyne Lutton, and Marc Schoenauer,
editors. Revised Selected Papers of the 7th International Conference on Artificial Evo-
lution, Evolution Artificielle (EA’05), October 26–28, 2005, Lille, France, volume 3871
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-33589-7. Published in 2006.
2658. Seyed Hamid Talebian and Sameem Abdul Kareem. Using Genetic Algorithm to Select
Materialized Views Subject to Dual Constraints. In ICSPS’09 [1376], pages 633–638, 2009.
doi: 10.1109/ICSPS.2009.162. INSPEC Accession Number: 10789147.
2659. 13th Conference on Technologies and Applications of Artificial Intelligence (TAAI’08),
November 21–22, 2008, Tamkang University, Lanyang campus: Chiao-hsi Shiang, I-lan
County, Taiwan.
2660. Honghua Tan and Qi Luo, editors. Proceedings of the IITA International Conference on
Control, Automation and Systems Engineering (CASE’09), July 11–12, 2009, Zhāngjiājiè,
Húnán, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
2661. Kay Chen Tan, Meng Hiot Lim, Xin Yao, and Lipo Wang, editors. Recent Advances
in Simulated Evolution and Learning – Proceedings of the Fourth Asia-Pacific Conference
on Simulated Evolution And Learning (SEAL’02), November 18–22, 2002, Singapore, vol-
ume 2 in Advances in Natural Computation. World Scientific Publishing Co.: Singapore.
isbn: 981-238-952-0. Google Books ID: Vn4bdMbo488C.
2662. Kay Chen Tan, Eik Fun Khor, Tong Heng Lee, and Ramasubramanian Sathikannan. An
Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-Objective
Optimization. Journal of Artificial Intelligence Research (JAIR), 18:183–215, February 2003,
AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI Press: Menlo Park, CA,
USA. Fully available at http://www.jair.org/media/842/live-842-1969-jair.
pdf [accessed 2009-06-23]. CiteSeerx : 10.1.1.14.6112.
2663. L. G. Tan and Mark C. Sinclair. Wavelength Assignment between the Central Nodes of
the COST 239 European Optical Network. In UKPEW’95 [2591], pages 235–247, 1995.
Fully available at http://uk.geocities.com/markcsinclair/ps/ukpew11_tan.ps.
gz [accessed 2009-09-02]. CiteSeerx : 10.1.1.55.3604.
2664. Ming Tan, Hong-Bin Fang, Guo-Liang Tian, and Gang Wei. Testing Multivariate Normality
in Incomplete Data of Small Sample Size. Journal of Multivariate Analysis (JMVA), 93(1):
164–179, March 2005, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Academic Press
Professional, Inc.: San Diego, CA, USA. doi: 10.1016/j.jmva.2004.02.014.
2665. Ying Tan, Yuhui Shi, and Kay Chen Tan, editors. Advances in Swarm Intelligence – Proceed-
ings of the First International Conference on Swarm Intelligence, Part I (ICSI’10-I), June 12–
15, 2010, Běijı̄ng, China, volume 6145/2010 in Theoretical Computer Science and General
Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-642-13495-1. isbn: 3-642-13494-7. Library of Congress
Control Number (LCCN): 2010927598.
2666. Ying Tan, Yuhui Shi, and Kay Chen Tan, editors. Advances in Swarm Intelligence – Pro-
ceedings of the First International Conference on Swarm Intelligence, Part II (ICSI’10-
II), June 12–15, 2010, Běijı̄ng, China, volume 6146/2010 in Theoretical Computer Science
and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag
GmbH: Berlin, Germany. doi: 10.1007/978-3-642-13498-2. isbn: 3-642-13497-1. Library
of Congress Control Number (LCCN): 2010927598.
2667. Jing Tang, Meng Hiot Lim, and Yew-Soon Ong. Diversity-Adaptive Parallel Memetic Algo-
rithm for Solving Large Scale Combinatorial Optimization Problems. Soft Computing – A
Fusion of Foundations, Methodologies and Applications, 11(9):873–888, July 2007, Springer-
Verlag: Berlin/Heidelberg. doi: 10.1007/s00500-006-0139-6.
2668. Ke Tang, Xin Yao, Ponnuthurai Nagaratnam Suganthan, Cara MacNish, Ying-Ping Chen,
Chih-Ming Chen, and Zhenyu Yang. Benchmark Functions for the CEC’2008 Special Session
and Competition on Large Scale Global Optimization. Technical Report, University of Science
and Technology of China (USTC), School of Computer Science and Technology, Nature In-
spired Computation and Applications Laboratory (NICAL): Héféi, Ānhuı̄, China, 2007. Fully
available at http://nical.ustc.edu.cn/cec08ss.php and http://sci2s.ugr.es/
programacion/workshop/Tech.Report.CEC2008.LSGO.pdf [accessed 2009-08-15].
2669. Ke Tang, Yi Mei, and Xin Yao. Memetic Algorithm with Extended Neighbor-
hood Search for Capacitated Arc Routing Problems. IEEE Transactions on Evo-
lutionary Computation (IEEE-EC), 13(5):1151–1166, October 2009, IEEE Computer
Society: Washington, DC, USA. doi: 10.1109/TEVC.2009.2023449. Fully avail-
able at http://staff.ustc.edu.cn/˜ketang/papers/TangMeiYao_TEVC09.pdf [accessed
2011-10-01]. CiteSeerx : 10.1.1.160.9949.
2670. Ke Tang, Xiaodong Li, Ponnuthurai Nagaratnam Suganthan, Zhenyu Yang, and Thomas
Weise. Benchmark Functions for the CEC’2010 Special Session and Competition on Large-
Scale Global Optimization. Technical Report, University of Science and Technology of China
(USTC), School of Computer Science and Technology, Nature Inspired Computation and
Applications Laboratory (NICAL): Héféi, Ānhuı̄, China, January 8, 2010. Fully available
at http://www.it-weise.de/documents/files/TLSYW2009BFFTCSSACOLSGO.pdf.
2671. Ke Tang, Pu Wang, and Tianshi Chen. Towards Maximizing The Area Under The
ROC Curve For Multi-Class Classification Problems. In AAAI’11 [3], pages 483–488,
2011. Fully available at http://aaai.org/ocs/index.php/AAAI/AAAI11/paper/
download/3485/3882 [accessed 2011-09-08].
2672. Eiichi Taniguchi and Russell G. Thompson, editors. Recent Advances in City Logis-
tics. Proceedings of the 4th International Conference on City Logistics, July 12–14, 2005,
Langkawi, Malaysia. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
isbn: 0-08-044799-6. Google Books ID: dMMQb83KH-EC.
2673. Eiichi Taniguchi, Russell G. Thompson, and Tadashi Yamada. Data Collection for Modelling,
Evaluating, and Benchmarking City Logistics Schemes. In Recent Advances in City Logistics.
Proceedings of the 4th International Conference on City Logistics [2672], pages 1–14, 2005.
2674. Ajay Kumar Tanwani and Muddassar Farooq. Performance Evaluation of Evolutionary Al-
gorithms in Classification of Biomedical Datasets. In GECCO’09-II [2343], pages 2617–
2624, 2009. doi: 10.1145/1570256.1570371. Fully available at http://nexginrc.org/
nexginrcAdmin/PublicationsFiles/iwlcs09-ajay.pdf [accessed 2010-07-26].
2675. Proceedings of the 12th International Conference on Parallel Problem Solving From Nature
(PPSN’12), September 1–5, 2012, Taormina, Italy.
2676. José Juan Tapia, Enrique Morett, and Edgar E. Vallejo. A Clustering Genetic Algorithm
for Genomic Data Mining. In Foundations of Computational Intelligence – Volume 4:
Bio-Inspired Data Mining [13], pages 249–275. Springer-Verlag: Berlin/Heidelberg, 2009.
doi: 10.1007/978-3-642-01088-0_11.
2677. Jonathan Tate, Benjamin Woolford-Lim, Ian Bate, and Xin Yao. Comparing Design of
Experiments and Evolutionary Approaches to Multi-Objective Optimisation of Sensornet
Protocols. In CEC’09 [1350], pages 1137–1144, 2009. doi: 10.1109/CEC.2009.4983074.
Fully available at ftp://ftp.cs.york.ac.uk/papers/rtspapers/R:Tate:2009a.
pdf and http://www.cercia.ac.uk/projects/research/SEBASE/pdfs/2009/
jt-bwl-ib-xy-cec2009.pdf [accessed 2009-11-26]. INSPEC Accession Number: 10688669.
2678. Advance Papers of the Fourth International Joint Conference on Artificial Intelligence (IJ-
CAI’75), September 3–8, 1975, Tbilisi, Georgia, USSR, volume 1 and 2. Fully available
at http://dli.iiit.ac.in/ijcai/IJCAI-75-VOL-1&2/CONTENT/content.htm [ac-
cessed 2008-04-01].
2679. Technical Committee CEN/TC 119 “Swap body for combined transport”: Brussels, Belgium.
Swap Bodies – Non-Stackable Swap Bodies of Class C – Dimensions and General Require-
ments, EN-284:2006. CEN-CENELEC: Brussels, Belgium, November 10, 2006. See also:
DIN 70013/14, BS 3951:Part 1, ISO 668, ISO 1161, ISO 6346, 88/218/EEC, UIC 592-4, UIC
596-6.
2680. Y. S. Teh and G. P. Rangaiah. Tabu Search for Global Optimization of Continuous Func-
tions with Application to Phase Equilibrium Calculations. Computers and Chemical Engi-
neering, 27(11):1665–1679, November 15, 2003, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0098-1354(03)00134-0.
2681. Astro Teller. Genetic Programming, Indexed Memory, the Halting Problem, and other Cu-
riosities. In FLAIRS’94 [1360], pages 270–274, 1994. Fully available at http://www.
cs.cmu.edu/afs/cs/usr/astro/public/papers/Curiosities.ps [accessed 2009-07-21].
CiteSeerx : 10.1.1.39.5589.
2682. Astro Teller. Turing Completeness in the Language of Genetic Programming with Indexed
Memory. In CEC’94 [1891], pages 136–141, 1994. doi: 10.1109/ICEC.1994.350027. Fully
available at http://www.astroteller.net/pdf/Turing.pdf and http://www.cs.
cmu.edu/afs/cs/usr/astro/public/papers/Turing.ps [accessed 2010-11-21].
2683. Marc Terwilliger, Ajay K. Gupta, Ashfaq A. Khokhar, and Garrison W. Greenwood. Local-
ization Using Evolution Strategies in Sensornets. In CEC’05 [641], pages 322–327, volume
1, 2005. doi: 10.1109/CEC.2005.1554701. Fully available at http://twig.lssu.edu/
publications/LESS.pdf [accessed 2009-09-18]. INSPEC Accession Number: 9004167.
2684. Igor V. Tetko, David J. Livingstone, and Alexander I. Luik. Neural Network Studies. 1.
Comparison of Overfitting and Overtraining. Journal of Chemical Information and Computer
Sciences, 35(5):826–833, September 1995, American Chemical Society: Washington, DC,
USA. doi: 10.1021/ci00027a006.
2685. Andrea G. B. Tettamanzi and Marco Tomassini. Soft Computing: Integrating Evolu-
tionary, Neural, and Fuzzy Systems. Springer-Verlag GmbH: Berlin, Germany, 2001.
isbn: 3-540-42204-8. Google Books ID: 86jQw6v30KoC.
2686. Sam R. Thangiah. Vehicle Routing with Time Windows using Genetic Algorithms. In
Practical Handbook of Genetic Algorithms: New Frontiers [522], pages 253–277. CRC Press,
Inc.: Boca Raton, FL, USA, 1995. CiteSeerx : 10.1.1.23.7903.
2687. Proceedings of the 4th International Multiconference on Computer Science and Information
Technology (CSIT’06), April 5–7, 2006, Amman, Jordan. The Computing Research Reposi-
tory (CoRR): Ithaca, NY, USA.
2688. First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery, March 8–10, 2006, Stanford Univer-
sity, David Packard EE Building: Stanford, CA, USA. The Digital Enterprise
Research Institute (DERI) Stanford: Stanford, CA, USA. Fully available at http://
sws-challenge.org/wiki/index.php/Workshop_Stanford [accessed 2010-12-12].
2689. Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery, June 15–16, 2006, Hotel Maestral:
Przno, Sveti Stefan, Budva, Republic of Montenegro. The Digital Enterprise Research Insti-
tute (DERI) Stanford: Stanford, CA, USA. Fully available at http://sws-challenge.
org/wiki/index.php/Workshop_Budva [accessed 2010-12-12].
2690. Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automat-
ing Web Services Mediation, Choreography and Discovery, November 10–11, 2006, Geor-
gia Center: Athens, GA, USA. The Digital Enterprise Research Institute (DERI) Stanford:
Stanford, CA, USA. Fully available at http://sws-challenge.org/wiki/index.php/
Workshop_Athens [accessed 2010-12-12].
2691. Proceedings of The 10th Conference on Pattern Languages of Programs (PLoP’03), Septem-
ber 8–12, 2003, University of Illinois, Allerton House: Monticello, IL, USA. The Hill-
side Group: Champaign, IL, USA. Fully available at http://hillside.net/plop/
plop2003/papers.html [accessed 2010-08-19].
2692. Dimitri Theodoratos and Wugang Xu. Constructing Search Spaces for Materialized View
Selection. In DOLAP’04 [2557], pages 112–121, 2004. doi: 10.1145/1031763.1031783.
2693. Guy Théraulaz and Eric W. Bonabeau. Coordination in Distributed Building. Science Mag-
azine, 269(5224):686–688, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.269.5224.686.
2694. Guy Théraulaz and Eric W. Bonabeau. Swarm Smarts. Scientific American, pages 72–79,
March 2000, Scientific American, Inc.: New York, NY, USA. Fully available at http://
www.santafe.edu/˜vince/press/SciAm-SwarmSmarts.pdf [accessed 2008-06-12].
2695. Lothar Thiele, Kaisa Miettinen, Pekka J. Korhonen, and Julian Molina. A Preference-
based Interactive Evolutionary Algorithm for Multiobjective Optimization. HSE Working
Paper W-412, Helsinki School of Economics (HSE, Helsingin kauppakorkeakoulu): Helsinki,
Finland, January 2007. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/people/
thiele/paper/TMKM07.pdf and http://hsepubl.lib.hse.fi/pdf/wp/w412.pdf
[accessed 2009-07-18]. issn: 1235-5674.
2696. Dirk Thierens. On the Scalability of Simple Genetic Algorithms. Technical Report UU-CS-
1999-48, Utrecht University, Department of Information and Computing Sciences: Utrecht,
The Netherlands, 1999. Fully available at http://igitur-archive.library.uu.nl/
math/2007-0118-201740/thierens_99_on_the_scalability.pdf and http://
www.cs.uu.nl/research/techreps/repo/CS-1999/1999-48.pdf [accessed 2008-08-12].
CiteSeerx : 10.1.1.40.5295. urn: URN:NBN:NL:UI:10-1874/18986.
2697. Dirk Thierens and David Edward Goldberg. Convergence Models of Genetic Algorithm Se-
lection Schemes. In PPSN III [693], pages 119–129, 1994. doi: 10.1007/3-540-58484-6 256.
Fully available at http://www.cs.uu.nl/groups/DSS/publications/convergence/
convMdl.ps and http://www.csie.ntu.edu.tw/˜b91069/PDF/convMdl.pdf [ac-
cessed 2010-12-04].
2698. Dirk Thierens, David Edward Goldberg, and Ângela Guimarães Pereira. Domino Conver-
gence, Drift, and the Temporal-Salience Structure of Problems. In CEC’98 [2496], pages
REFERENCES 1159
535–540, 1998. doi: 10.1109/ICEC.1998.700085. CiteSeerx : 10.1.1.46.6116.
2699. Dirk Thierens, Hans-Georg Beyer, Josh C. Bongard, Jürgen Branke, John Andrew Clark,
Dave Cliff, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Tim Kovacs, Sanjeev
Kumar, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Riccardo
Poli, Kumara Sastry, Kenneth Owen Stanley, Thomas Stützle, Richard A. Watson, and Ingo
Wegener, editors. Proceedings of 9th Genetic and Evolutionary Computation Conference
(GECCO’07-I), July 7–11, 2007, University College London (UCL): London, UK. ACM Press:
New York, NY, USA. isbn: 1-59593-697-1. OCLC: 184827588, 232392364, 271369878,
and 288957084. ACM Order No.: 910070. See also [2700].
2700. Dirk Thierens, Hans-Georg Beyer, Josh C. Bongard, Jürgen Branke, John Andrew Clark,
Dave Cliff, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Tim Kovacs, Sanjeev
Kumar, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Riccardo
Poli, Kumara Sastry, Kenneth Owen Stanley, Thomas Stützle, Richard A. Watson, and Ingo
Wegener, editors. Genetic and Evolutionary Computation Conference – Companion Material
(GECCO’07-II), July 7–11, 2007, University College London (UCL): London, UK. ACM
Press: New York, NY, USA. ACM Order No.: 910071. See also [2699].
2701. Herve Thiriez and Stanley Zionts, editors. Proceedings of the 1st International Conference on
Multiple Criteria Decision Making (MCDM’75), May 21–23, 1975, Jouy-en-Josas, France,
volume 130 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 0387077944. OCLC: 2332357. Published in 1976.
2702. James J. Thomas, editor. Proceedings of the 18th Annual Conference on Computer Graphics
and Interactive Techniques (SIGGRAPH’91), July 28–August 2, 1991, Las Vegas, NV, USA.
SIGGRAPH: ACM Special Interest Group on Computer Graphics and Interactive Techniques,
ACM Press: New York, NY, USA. isbn: 0-89791-436-8. Order No.: 428911.
2703. Richard K. Thompson and Alden H. Wright. Additively Decomposable Fitness Func-
tions. Technical Report, University of Montana, Computer Science Department: Missoula,
MT, USA, 1996. Fully available at http://www.cs.umt.edu/u/wright/papers/add_
decomp.ps.gz [accessed 2007-11-27].
2704. Henk Tijms. Understanding Probability: Chance Rules in Everyday Life. Cambridge Uni-
versity Press: Cambridge, UK, August 16, 2004. isbn: 0521540364. Google Books
ID: y2361S44XLsC. OCLC: 54079595.
2705. Jonathan Timmis and Peter J. Bentley, editors. Proceedings of the 1st International Con-
ference on Artificial Immune Systems (ICARIS’02), University of Kent: Canterbury, Kent,
UK. isbn: 1902671325.
2706. Jonathan Timmis, Peter J. Bentley, and Emma Hart, editors. Proceedings of the 2nd In-
ternational Conference on Artificial Immune Systems (ICARIS’03), September 1–3, 2003,
Edinburgh, Scotland, UK, volume 2787 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-40766-9.
2707. Ville Tirronen, Ferrante Neri, Tommi Kärkkäinen, Kirsi Majava, and Tuomo Rossi. An
Enhanced Memetic Differential Evolution in Filter Design for Defect Detection in Paper
Production. Evolutionary Computation, 16(4):529–555, Winter 2008, MIT Press: Cambridge,
MA, USA. doi: 10.1162/evco.2008.16.4.529. PubMed ID: 19053498.
2708. Philippe L. Toint. Test Problems for Partially Separable Optimization and Results for the
Routine PSPMIN. Technical Report 83/4, University of Namur (FUNDP), Department of
Mathematics: Namur, Belgium, 1983.
2709. Proceedings of the 6th International Joint Conference on Artificial Intelligence (IJCAI’79-I),
August 20–23, 1979, Tokyo, Japan, volume 1. Fully available at http://dli.iiit.ac.
in/ijcai/IJCAI-79-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2710].
2710. Proceedings of the 6th International Joint Conference on Artificial Intelligence (IJCAI’79-II),
August 20–23, 1979, Tokyo, Japan, volume 2. Fully available at http://dli.iiit.ac.
in/ijcai/IJCAI-79-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2709].
2711. Shisanu Tongchim and Xin Yao. Parallel Evolutionary Programming. In CEC’04 [1369],
pages 1362–1367, volume 2, 2004. doi: 10.1109/CEC.2004.1331055. Fully available
at http://www.cs.bham.ac.uk/˜xin/papers/TongchimYao_CEC04.pdf [accessed 2011-
11-10]. INSPEC Accession Number: 8229086.

2712. Virginia Joanne Torczon. Multi-Directional Search: A Direct Search Algorithm for Parallel
Machines. PhD thesis, Rice University, Department of Computational & Applied Mathe-
matics (CAAM): Houston, TX, USA, May 1989. CiteSeerx : 10.1.1.48.9967. Supervisors:
John E. Dennis, Jr., Andrew E. Boyd, Kenneth W. Kennedy, Jr., and Richard A. Tapia.
2713. P. Tormos, A. Lova, F. Barber, L. Ingolotti, M. Abril, and M. A. Salido. A Ge-
netic Algorithm for Railway Scheduling Problems. In Metaheuristics for Scheduling in
Industrial and Manufacturing Applications [2992], Chapter 10, pages 255–276. Springer-
Verlag: Berlin/Heidelberg, January 2008. doi: 10.1007/978-3-540-78985-7_10. Fully available
at http://arrival.cti.gr/uploads/Documents.0081/ARRIVAL-TR-0081.pdf [ac-
cessed 2011-02-19]. CiteSeerx : 10.1.1.157.9570. STREP (Member of the FET Open Scheme)
Project Number FP6-021235-2: ARRIVAL “Algorithms for Robust and online Railway op-
timization: Improving the Validity and reliAbility of Large scale systems”; ARRIVAL-TR-0081.
2714. Aimo Törn and Antanas Žilinskas. Global Optimization, volume 350/1989 in Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 1989.
doi: 10.1007/3-540-50871-6. isbn: 0-387-50871-6 and 3-540-50871-6.
2715. Proceedings of the 1st International Workshop on Local Search Techniques in Constraint
Satisfaction (LSCS’04), September 27, 2004, Toronto, ON, Canada.
2716. Douglas E. Torres D. and Claudio M. Rocco S. Empirical Models Based on Hybrid Intelligent
Systems for Assessing the Reliability of Complex Networks. In EvoWorkshops’05 [2340],
pages 147–155, 2005.
2717. Paolo Toth and Daniele Vigo, editors. The Vehicle Routing Problem, volume 9 in SIAM
Monographs on Discrete Mathematics and Applications. Society for Industrial and Applied
Mathematics (SIAM): Philadelphia, PA, USA, 2002. isbn: 0898715792. Google Books
ID: TeMgA5S74skC.
2718. Julius T. Tou, editor. Proceedings of the Symposium on Computer and Information Sciences
II, August 22–24, 1966, Columbus, OH, USA. Academic Press: London, New York. Google
Books ID: V NBAAAAIAAJ. GBV-Identification (PPN): 173850669.
2719. Marc Toussaint and Christian Igel. Neutrality: A Necessity for Self-Adaptation.
In CEC’02 [944], pages 1354–1359, 2002. doi: 10.1109/CEC.2002.1004440.
CiteSeerx : 10.1.1.16.8492. arXiv ID: nlin/0204038v1.
2720. Amy J.C. Trappey, Charles V. Trappey, and Chang-Ru Wu. Genetic Algorithm Dy-
namic Performance Evaluation for RFID Reverse Logistic Management. Expert Sys-
tems with Applications – An International Journal, 37(11):7329–7335, November 2010,
Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.eswa.2010.04.026. See also [2721].
2721. Amy J.C. Trappey, Charles V. Trappey, and Chang-Ru Wu. Corrigendum to “Genetic Al-
gorithm Dynamic Performance Evaluation for RFID Reverse Logistic Management” [Expert
Systems with Applications 37 (11) (2010) 7329–7335]. Expert Systems with Applications –
An International Journal, 38(3):2920, March 2011, Elsevier Science Publishers B.V.: Essex,
UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.eswa.2010.10.005. See also [2720].
2722. Ioan Cristian Trelea. The Particle Swarm Optimization Algorithm: Convergence Analysis and
Parameter Selection. Information Processing Letters, 85(6):317–325, March 31, 2003, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific
Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(02)00447-7.
2723. Krzysztof Trojanowski. Evolutionary Algorithms with Redundant Genetic Material for Non-
Stationary Environments. PhD thesis, Warsaw University: Warsaw, Poland, June 21, 2000,
Zbigniew Michalewicz, Advisor.
2724. Michael W. Trosset. I Know It When I See It: Toward a Definition of Direct Search Methods.
SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization, 9:7–
10, Fall 1997, SIAM Activity Group on Optimization – SIAG/OPT: Philadelphia, PA, USA.
Fully available at http://www.mcs.anl.gov/˜leyffer/views/09.pdf [accessed 2010-12-
25].

2725. Constantino Tsallis and Ricardo Ferreira. On the Origin of Self-Replicating Information-
Containing Polymers from Oligomeric Mixtures. Physics Letters A, 99(9):461–463,
December 26, 1983, Elsevier Science Publishers B.V.: Amsterdam, The Nether-
lands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/0375-9601(83)90958-1.
2726. Edward P. K. Tsang, James M. Butler, and Jin Li. EDDIE Beats the
Bookies. International Journal of Software, Practice and Experience, 28(10):
1033–1043, August 1998, John Wiley & Sons Ltd.: New York, NY, USA.
doi: 10.1002/(SICI)1097-024X(199808)28:10<1033::AID-SPE198>3.0.CO;2-1. Fully avail-
able at http://cswww.essex.ac.uk/Research/CSP/finance/Eddie/Overview/ [ac-
cessed 2010-06-25]. CiteSeerx : 10.1.1.63.7597. See also [2727, 2728].

2727. Edward P. K. Tsang, Jin Li, Sheri Marina Markose, Hakan Er, Abdellah Salhi, and Guilia
Iori. EDDIE in Financial Decision Making. Journal of Management and Economics,
4(4), November 2000, Universidad de Buenos Aires, Facultad de Ciencias Económicas:
Buenos Aires, Argentina. Fully available at http://www.bracil.net/finance/papers/
Tsang-Eddie-JMgtEcon2000.pdf [accessed 2010-06-25]. CiteSeerx : 10.1.1.64.7200. See
also [2726, 2728].
2728. Edward P. K. Tsang, Paul Yung, and Jin Li. EDDIE-Automation – A Decision Support Tool
for Financial Forecasting. Decision Support Systems, 37(4):559–565, September 2004, Else-
vier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0167-9236(03)00087-3. Fully avail-
able at http://www.bracil.net/finance/papers/TsYuLi-Eddie-Dss2004.pdf [ac-
cessed 2010-06-25]. See also [2726, 2727].

2729. Christian F. Tschudin. Fraglets – A Metabolistic Execution Model for Communication
Protocols. In AINS’03 [513], 2003. Fully available at http://cn.cs.unibas.ch/
pub/doc/2003-ains.pdf and http://path.berkeley.edu/ains/final/007%20-
%2022-tschudin.pdf [accessed 2008-05-02]. CiteSeerx : 10.1.1.133.8894.
2730. Christian F. Tschudin and Lidia A. R. Yamamoto. Fraglets Instruction Set. Uni-
versity of Basel, Computer Science Department, Computer Networks Group: Basel,
Switzerland, September 24, 2007. Fully available at http://www.fraglets.net/
frag-instrset-20070924.txt [accessed 2009-07-21].
2731. Christian F. Tschudin and Lidia A. R. Yamamoto. A Metabolic Approach to Protocol
Resilience. In WAC’04 [2525], pages 191–206, 2004. doi: 10.1007/11520184_15. Fully
available at http://cn.cs.unibas.ch/pub/doc/2004-wac-lncs.pdf and http://
www.autonomic-communication.org/wac/wac2004/program/presentations/
wac2004-tschudin-yamamoto-slides.pdf [accessed 2008-06-20].
2732. Shigeyoshi Tsutsui and Yoshiji Fujimoto. Solving Quadratic Assignment Problems by Genetic
Algorithms with GPU Computation: A Case Study. In GECCO’09-I [2342], pages 2523–
2530, 2009. doi: 10.1145/1570256.1570355. Fully available at http://www2.hannan-u.
ac.jp/˜tsutsui/ps/gecco/wk3006-tsutsui.pdf [accessed 2011-11-14].
2733. Shigeyoshi Tsutsui and Ashish Ghosh. Genetic Algorithms with a Robust Solution Search-
ing Scheme. IEEE Transactions on Evolutionary Computation (IEEE-EC), 1(3):201–208,
September 1997, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.661550.
2734. Shigeyoshi Tsutsui, Ashish Ghosh, and Yoshiji Fujimoto. A Robust Solution
Searching Scheme in Genetic Search. In PPSN IV [2818], pages 543–552, 1996.
doi: 10.1007/3-540-61723-X_1018. Fully available at http://www.hannan-u.ac.jp/
˜tsutsui/ps/ppsn-iv.pdf [accessed 2008-07-19].
2735. Gunnar Tufte. Phenotypic, Developmental and Computational Resources: Scaling in Artifi-
cial Development. In GECCO’08 [1519], 2008. doi: 10.1145/1389095.1389261.
2736. Proceedings of the 5th Indian International Conference on Artificial Intelligence (IICAI-11),
December 14–16, 2011, Tumkur, India.
2737. Alan Mathison Turing. On Computable Numbers, with an Application to the Entschei-
dungsproblem. Proceedings of the London Mathematical Society, s2-42(1):230–265, 1937,
Oxford University Press: Oxford, UK. doi: 10.1112/plms/s2-42.1.230. Fully available
at http://www.abelard.org/turpap2/tp2-ie.asp and http://www.thocp.net/
biographies/turing_alan.html [accessed 2007-08-11]. See also [2738].
2738. Alan Mathison Turing. On Computable Numbers, with an Application to
the Entscheidungsproblem: A Correction. Proceedings of the London Mathe-
matical Society, s2-43(1):544–546, 1938, Oxford University Press: Oxford, UK.
doi: 10.1112/plms/s2-43.6.544. Fully available at http://www.scribd.com/doc/
2937039/Alan-M-Turing-On-Computable-Numbers [accessed 2010-06-24]. See also [2737].
2739. Peter Turney, L. Darrell Whitley, and Russell W. Anderson. Evolution, Learning,
and Instinct: 100 Years of the Baldwin Effect. Evolutionary Computation, 4(3):iv–viii,
Fall 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.3.iv. Fully avail-
able at http://www.alife.cs.is.nagoya-u.ac.jp/˜reiji/baldwin/editorial.
html and http://www.apperceptual.com/baldwin-editorial.html [accessed 2010-09-
11].
2740. Peter Turney, Joanna Bryson, and Reiji Suzuki. The Baldwin Effect: A Bibliography. Nagoya
University, Graduate School of Computer Science, Department of Complex Systems Science:
Nagoya, Japan, August 2008. Fully available at http://www.alife.cs.is.nagoya-u.
ac.jp/˜reiji/baldwin/ [accessed 2010-09-12].
2741. William Thomas Tutte. Graph Theory, Cambridge Mathematical Library. Cambridge Uni-
versity Press: Cambridge, UK, 2001. isbn: 0521794897. Google Books ID: uTGhooU37h4C.
2742. Proceedings of the 4th Workshop on Parallel Processing: Logic, Organisation, and Technology
(WOPPLOT’92), September 1992, Tutzing, Germany. It is not clear to the author whether
this has actually taken place or not.
2743. Andrew M. Tyrrell, Pauline C. Haddow, and Jim Torresen, editors. Proceedings of the
5th International Conference on Evolvable Systems: From Biology to Hardware (ICES’03),
March 17–20, 2003, Trondheim, Norway, volume 2606/2003 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-36553-2.
isbn: 3-540-00730-X. Google Books ID: LU1bIwGaWIC.
2744. Gwo-Hshiung Tzeng, Hsiao-Fan Wang, Ue-Pyng Wen, and Po-Lung Yu, editors. Pro-
ceedings of the 10th International Conference on Multiple Criteria Decision Making: Ex-
pand and Enrich the Domains of Thinking and Application (MCDM’92), July 19–24, 1992,
Taipei, Taiwan. Springer New York: New York, NY, USA. isbn: 0-387-94297-1 and
3-540-94297-1. OCLC: 30027483, 263351019, 311938982, and 633632538. GBV-
Identification (PPN): 148474152 and 257197648. Published in June 1994.

2745. Hidetoshi Uchiyama, Kanda Runapongsa, and Toby J. Teorey. A Progressive


View Materialization Algorithm, 1999. doi: 10.1145/319757.319786. Fully avail-
able at http://gear.kku.ac.th/˜krunapon/research/pub/progressive.pdf [ac-
x
cessed 2011-04-08]. CiteSeer : 10.1.1.20.9685 and 10.1.1.57.9921.

2746. Tatsuo Unemi. SBART 2.4: An IEC Tool for Creating 2D Images, Movies, and Collage. In
GAVAM’00 [1452], pages 153–157, 2000. CiteSeerx : 10.1.1.31.3280.
2747. Proceedings of the 8th Argentinean Conference of Computer Science, Congreso Argentino
de Ciencias de la Computación (CACIC’02), October 15–17, 2002, Universidad de Buenos
Aires (UBA): Buenos Aires, Argentina.
2748. AISB 2000 Convention: Artificial Intelligence and Society (AISB’00), April 17–21, 2000,
University of Birmingham: Birmingham, UK. Fully available at http://www.aisb.org.
uk/publications/proceedings.shtml [accessed 2008-09-10].
2749. 4th International Conference on Hybrid Artificial Intelligence Systems (HAIS’09), June 10–
12, 2009, Salamanca, Spain, Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture
Notes in Computer Science (LNCS). University of Burgos, Applied Computational Intelli-
gence Group (GICAP): Burgos, Spain, Springer-Verlag GmbH: Berlin, Germany. co-located
with 10th International Work-Conference on Artificial Neural Networks (IWANN2009).
2750. “New State of MCDM in 21st Century” – Proceedings of the 20th International Conference on
Multiple Criteria Decision Making (MCDM’09), June 21–26, 2009, University of Electronic
Science and Technology of China (Shahe Campus): Chéngdū, Sìchuān, China.
2751. Proceedings of the Seventh Conference on Evolutionary and Deterministic Methods for De-
sign, Optimization and Control with Applications to Industrial and Societal Problems (EU-
ROGEN’07), June 11–13, 2007, University of Jyväskylä: Jyväskylä, Finland. Partly available
at http://www.mit.jyu.fi/scoma/Eurogen2007/ [accessed 2007-09-16].
2752. Proceedings of the First International Workshop on Advanced Computational Intelligence
(IWACI’08), June 7–8, 2008, University of Macau: Macau, China.
2753. Genetic Programming Theory and Practice IX, Proceedings of the Ninth Workshop on Genetic
Programming (GPTP’11), May 12–14, 2011, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA.
2754. 2011 International Workshop on Nature Inspired Computation and Its Applications
(IWNICA’11), April 17, 2011, University of Science and Technology of China (USTC): Héféi,
Ānhuī, China. Partly available at http://www.cercia.ac.uk/projects/research/
NICaiA/2011workshopsinfor.php [accessed 2011-03-20].
2755. The 2004 International Workshop on Nature Inspired Computation and Applications
(IWNICA’04), October 24–29, 2004, University of Science and Technology of China (USTC),
School of Computer Science and Technology, Nature Inspired Computation and Applications
Laboratory (NICAL): Héféi, Ānhuī, China.
2756. The 2004 UK-China Workshop on Nature Inspired Computation and Applications
(UCWNICA’04), October 25–29, 2004, University of Science and Technology of China
(USTC), School of Computer Science and Technology, Nature Inspired Computation and
Applications Laboratory (NICAL): Héféi, Ānhuī, China.
2757. The 2008 International Workshop on Nature Inspired Computation and Applications
(IWNICA’08), May 27–29, 2008, University of Science and Technology of China (USTC),
School of Computer Science and Technology, Nature Inspired Computation and Applications
Laboratory (NICAL): Héféi, Ānhuī, China.
2758. The 2010 International Workshop on Nature Inspired Computation and Applications
(IWNICA’10), October 23–27, 2010, University of Science and Technology of China (USTC),
School of Computer Science and Technology, Nature Inspired Computation and Applications
Laboratory (NICAL): Héféi, Ānhuī, China.
2759. AISB 2001 Convention: Agents and Cognition (AISB’01), March 21–24, 2001, University of
York: Heslington, York, UK. isbn: 1902956170, 1902956189, 1902956197, 1902956205,
1902956213, and 1902956221.
2760. Rasmus K. Ursem. Models for Evolutionary Algorithms and Their Applications in System
Identification and Control Optimization. PhD thesis, University of Aarhus, Department of
Computer Science: Århus, Denmark, April 1, 2003, Thiemo Krink and Brian H. Mayoh,
Advisors. Fully available at http://www.daimi.au.dk/˜ursem/publications/RKU_
thesis_2003.pdf [accessed 2009-07-09]. CiteSeerx : 10.1.1.13.4971.

2761. Rob J. M. Vaessens, Emile H. L. Aarts, and Jan Karel Lenstra. A Local Search Template.
In PPSN II [1827], pages 67–76, 1992. CiteSeerx : 10.1.1.54.4457. See also [2762].
2762. Rob J. M. Vaessens, Emile H. L. Aarts, and Jan Karel Lenstra. A Local Search
Template. Computers & Operations Research, 25(11):969–979, November 1, 1998,
Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0305-0548(97)00093-2. See also [2761].
2763. Grégory Valigiani, Evelyne Lutton, Cyril Fonlupt, and Pierre Collet. Optimisation
par “Hommilière” de Chemins Pédagogiques pour un Logiciel d’e-Learning. Technique
et Science Informatique (TSI), 26(10):1245–1267, 2007, TSI: Cachan cedex, France.
doi: 10.3166/tsi.26.1245-1267.
2764. Satyanarayana R. Valluri, Soujanya Vadapalli, and Kamalakar Karlapalem. View Relevance
Driven Materialized View Selection in Data Warehousing Environment. In ADC’02 [3081],
2002. doi: 10.1145/563932.563927. Fully available at http://crpit.com/confpapers/
CRPITV5Valluri.pdf [accessed 2011-03-29]. See also [2765].
2765. Satyanarayana R. Valluri, Soujanya Vadapalli, and Kamalakar Karlapalem. View Relevance
Driven Materialized View Selection in Data Warehousing Environment. Australian Computer
Science Communications, 24(2), January–February 2002, IEEE Computer Society Press: Los
Alamitos, CA, USA. doi: 10.1145/563932.563927. Fully available at http://crpit.com/
confpapers/CRPITV5Valluri.pdf [accessed 2011-03-29]. CiteSeerx : 10.1.1.18.9497. See
also [2764].
2766. Werner Van Belle. Automatic Generation of Concurrency Adaptors by Means of Learning
Algorithms. PhD thesis, Vrije Universiteit Brussel, Faculteit van de Wetenschappen, Departe-
ment Informatica: Brussels, Belgium, August 2001, Theo D’Hondt and Tom Tourwé, Advi-
sors. Fully available at http://borg.rave.org/adaptors/Thesis6.pdf [accessed 2008-06-
23]. See also [2767].

2767. Werner Van Belle, Tom Mens, and Theo D’Hondt. Using Genetic Programming
to Generate Protocol Adaptors for Interprocess Communication. In ICES’03 [2743],
pages 67–73, 2003. Fully available at http://prog.vub.ac.be/Publications/
2003/vub-prog-tr-03-33.pdf and http://werner.yellowcouch.org/Papers/
genadap03/ICES-online.pdf [accessed 2008-06-23]. See also [2766].
2768. Klemens van Betteray. Gesetzliche und handelsspezifische Anforderungen an die
Rückverfolgung. In Vorträge des 7. VDEB-Infotags 2004 – Mobile Computing, RFID und
Rückverfolgbarkeit [2800], 2004. EU Verordnung 178/2002.
2769. Alex Van Breedam. An Analysis of the Behavior of Heuristics for the Vehicle Routing Prob-
lem for a Selection of Problems with Vehicle-related, Customer-related, and Time-related
Constraints. PhD thesis, University of Antwerp (RUCA): Antwerp, Belgium, 1994.
2770. Alex Van Breedam. An Analysis of the Behavior of Heuristics for the Vehicle Routing Prob-
lem for a Selection of Problems with Vehicle-Related, Customer-Related, and Time-Related
Constraints. PhD thesis, University of Antwerp (RUCA): Antwerp, Belgium, 1994.
2771. Gertrudis Van de Vijver, Stanley N. Salthe, and Manuela Delpos, editors. Evolutionary Sys-
tems – Biological and Epistemological Perspectives on Selection and Self-Organization. Kluwer
Academic Publishers: Norwell, MA, USA, 1998. isbn: 0792352602. OCLC: 39556318. Li-
brary of Congress Control Number (LCCN): 98036791. LC Classification: QH360.5 .E96
1998.
2772. J. Koen van der Hauw. Evaluating and Improving Steady State Evolutionary Algo-
rithms on Constraint Satisfaction Problems. Master’s thesis, Leiden University: Leiden,
The Netherlands, August 9, 1996, Ágoston E. Eiben and Han La Poutré, Supervisors.
CiteSeerx : 10.1.1.29.1568.
2773. Christiaan M. van der Walt and Etienne Barnard. Data Characteristics that Determine Clas-
sifier Performance. In ICAPR’05 [2506], pages 160–165, 2005. Fully available at http://
www.meraka.org.za/pubs/CvdWalt.pdf and http://www.patternrecognition.
co.za/publications/cvdwalt_data_characteristics_classifiers.pdf [ac-
cessed 2009-09-09].

2774. Bas C. van Fraassen. Laws and Symmetry, Clarendon Paperbacks. Oxford University Press,
Inc.: New York, NY, USA, 1989. isbn: 0198248601. Google Books ID: LzXuE1Sri wC.
2775. Jano I. van Hemert and Carlos Cotta, editors. Proceedings of the 8th European Conference on
Evolutionary Computation in Combinatorial Optimization (EvoCOP’08), March 26–28, 2008,
Naples, Italy, volume 4972/2008 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-78603-1. Library of Congress Control Number (LCCN): 2008922955.
2776. Peter J. M. van Laarhoven and Emile H. L. Aarts, editors. Simulated Annealing: Theory
and Applications, volume 37 in Mathematics and its Applications. Kluwer Academic Pub-
lishers: Norwell, MA, USA, 1987. isbn: 90-277-2513-6. Google Books ID: -IgUab6Dp IC
and URDXYgEACAAJ. OCLC: 15548651, 230902948, 471714266, 489996631, and
638812897. Library of Congress Control Number (LCCN): 87009666. GBV-Identification
(PPN): 020442645 and 043026036. LC Classification: QA402.5 .L3 1987. Reviewed in
[1316].
2777. Jan van Leeuwen, editor. Computer Science Today – Recent Trends and Developments,
volume 1000/1995 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany, 1995. doi: 10.1007/BFb0015232. isbn: 3-540-60105-8. Google Books
ID: YHxQAAAAMAAJ.
2778. Erik van Nimwegen and James P. Crutchfield. Optimizing Epochal Evolutionary Search:
Population-Size Dependent Theory. Machine Learning, 45(1):77–114, October 2001, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands,
Leslie Pack Kaelbling, editor. doi: 10.1023/A:1012497308906.
2779. Erik van Nimwegen, James P. Crutchfield, and Martijn A. Huynen. Neutral Evolution of
Mutational Robustness. Proceedings of the National Academy of Science of the United States
of America (PNAS), 96(17):9716–9720, August 17, 1999, National Academy of Sciences:
Washington, DC, USA. Fully available at http://www.pnas.org/cgi/reprint/96/
17/9716.pdf [accessed 2008-07-02].
2780. Erik van Nimwegen, James P. Crutchfield, and Melanie Mitchell. Statistical Dynamics of the
Royal Road Genetic Algorithm. Theoretical Computer Science, 229(1–2):41–102, November 6,
1999, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0304-3975(99)00119-X.
CiteSeerx : 10.1.1.43.3594.
2781. Maarten van Someren and Gerhard Widmer, editors. 9th European Conference on Ma-
chine Learning (ECML’97), April 23–25, 1997, Prague, Czech Republic, volume 1224/1997
in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-62858-4.
isbn: 3-540-62858-4. Google Books ID: BC4- V0D-qUC and w4lfPwAACAAJ.
OCLC: 36676064, 56867902, and 82399375.
2782. Harry L. Van Trees. Detection, Estimation, and Modulation Theory, Part I. Wiley
Interscience: Chichester, West Sussex, UK, 1968–February 2007. isbn: 0471093904,
0471095176, and 0471899550. Google Books ID: 2kkHAAAACAAJ, NSddAAAACAAJ,
RywEAAAACAAJ, Xzp7VkuFqXYC, and udD3AQAACAAJ. OCLC: 49665206,
85820539, 234257633, and 249067915.
2783. David A. van Veldhuizen and Laurence D. Merkle. Multiobjective Evolutionary Algorithms:
Classifications, Analyses, and New Innovations. PhD thesis, Air University, Air Force Insti-
tute of Technology: Wright-Patterson Air Force Base, OH, USA, June 1999, Gary B. Lamont,
Supervisor, Richard F. Deckro and Thomas F. Reid, Committee members. Fully avail-
able at http://citeseer.ist.psu.edu/old/vanveldhuizen99multiobjective.
html, http://handle.dtic.mil/100.2/ADA364478, and http://neo.lcc.uma.es/
emoo/veldhuizen99a.ps.gz [accessed 2009-08-05]. ID: AFIT/DS/ENG/99-01.
2784. Leonardo Vanneschi and Marco Tomassini. A Study on Fitness Distance Correlation
and Problem Difficulty for Genetic Programming. In GECCO’02 GSWS [1790], pages
307–310, 2002. Fully available at http://personal.disco.unimib.it/Vanneschi/
GECCO_2002_PHD_WORKSHOP.pdf and http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/vanneschi_2002_gecco_workshop.html [accessed 2008-07-20]. See also [590].
2785. Leonardo Vanneschi, Manuel Clergue, Philippe Collard, Marco Tomassini, and Sébastien
Vérel. Fitness Clouds and Problem Hardness in Genetic Programming. In GECCO’04-
II [752], pages 690–701, 2004. doi: 10.1007/b98645. Fully available at http://hal.
archives-ouvertes.fr/hal-00160055/en/ [accessed 2007-10-14], http://hal.inria.
fr/docs/00/16/00/55/PDF/fc_gp.pdf, http://www.cs.bham.ac.uk/˜wbl/
biblio/gp-html/vanneschi_fca_gecco2004.html [accessed 2008-07-20], and http://
www.i3s.unice.fr/˜verel/publi/gecco04-FCandPbHardnessGP.pdf [accessed 2007-
10-14].

2786. Leonardo Vanneschi, Marco Tomassini, Philippe Collard, and Sébastien Vérel. Negative
Slope Coefficient. A Measure to Characterize Genetic Programming. In EuroGP’06 [607],
pages 178–189, 2006. doi: 10.1007/11729976_16. Fully available at http://www.cs.bham.
ac.uk/˜wbl/biblio/gp-html/eurogp06_VanneschiTomassiniCollardVerel.
html [accessed 2008-07-20].
2787. Leonardo Vanneschi, Steven Matt Gustafson, Alberto Moraglio, Ivanoe de Falco, and
Marc Ebner, editors. Proceedings of the 12th European Conference on Genetic Program-
ming (EuroGP’09), April 15–17, 2009, Eberhard-Karls-Universität Tübingen, Fakultät für
Informations- und Kognitionswissenschaften: Tübingen, Germany, volume 5481/2009 in
Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01181-8.
isbn: 3-642-01180-2.
2788. Vladimir Naumovich Vapnik. The Nature of Statistical Learning Theory, Information Science
and Statistics. Springer-Verlag GmbH: Berlin, Germany, 1995–1999. isbn: 0-387-98780-0.
Google Books ID: sna9BaxVbj8C. OCLC: 41959173 and 247524710.
2789. Francisco J. Varela and Paul Bourgine, editors. Toward a Practice of Autonomous Sys-
tems: Proceedings of the First European Conference on Artificial Life (ECAL’91), De-
cember 11–13, 1991, Paris, France, Bradford Books. MIT Press: Cambridge, MA, USA.
isbn: 0-262-72019-1. Published in 1992.
2790. Himanshu Vashishtha, Michael Smit, and Eleni Stroulia. Moving Text Analysis Tools to the
Cloud. In SERVICES Cup’10 [2457], pages 107–114, 2010. doi: 10.1109/SERVICES.2010.91.
INSPEC Accession Number: 11535163.
2791. Athanasios V. Vasilakos, Kostas G. Anagnostakis, and Witold Pedrycz. Application of
Computational Intelligence Techniques in Active Networks. Soft Computing – A Fusion of
Foundations, Methodologies and Applications, 5(4):264–271, August 2001, Springer-Verlag:
Berlin/Heidelberg. doi: 10.1007/s005000100100.
2792. Vesselin K. Vassilev and Julian Francis Miller. The Advantages of Landscape Neu-
trality in Digital Circuit Evolution. In ICES’00 [1902], pages 252–263, 2000.
doi: 10.1007/3-540-46406-9_25. CiteSeerx : 10.1.1.41.3777.
2793. Miguel A. Vega-Rodríguez, Juan A. Gómez-Pulido, Enrique Alba Torres, David Vega-
Pérez, Silvio Priem-Mendes, and Guillermo Molina. Evaluation of Different Metaheuris-
tics Solving the RND Problem. In EvoWorkshops’07 [1050], pages 101–110, 2007.
doi: 10.1007/978-3-540-71805-5_11.
2794. Miguel A. Vega-Rodríguez, David Vega-Pérez, Juan A. Gómez-Pulido, and Juan M.
Sánchez-Pérez. Radio Network Design Using Population-Based Incremental Learning
and Grid Computing with BOINC. In EvoWorkshops’07 [1050], pages 91–100, 2007.
doi: 10.1007/978-3-540-71805-5_10.
2795. Goran Velinov, Danilo Gligoroski, and Margita Kon-Popovska. Hybrid Greedy and Genetic
Algorithms for Optimization of Relational Data Warehouses. In AIAP’07 [776], pages 470–
475, 2007.
2796. Goran Velinov, Boro Jakimovski, Darko Cerepnalkoski, and Margita Kon-Popovska. Improve-
ment of Data Warehouse Optimization Process by Workflow Gridification. In ADBIS [143],
pages 295–304, 2008. doi: 10.1007/978-3-540-85713-6_21.
2797. Manuela M. Veloso, editor. “AI and its benefits to society” – Proceedings of the 20th Inter-
national Joint Conference on Artificial Intelligence (IJCAI’07), January 6–12, 2007, Hyder-
abad, India. AAAI Press: Menlo Park, CA, USA. Fully available at http://dli.iiit.
ac.in/ijcai/IJCAI-2007/CONTENT/contents2007.html and http://www.ijcai.
org/papers07/contents.php [accessed 2008-04-01]. See also [1296].
2798. Gerhard Venter and Jaroslaw Sobieszczanski-Sobieski. Particle Swarm Optimization. In 43rd
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference
and Exhibit [84], 2002. CiteSeerx : 10.1.1.57.7431. See also [2799].
2799. Gerhard Venter and Jaroslaw Sobieszczanski-Sobieski. Particle Swarm Optimization. AIAA
Journal, 41(8):1583–1589, August 2003, American Institute of Aeronautics and Astronautics:
Reston, VA, USA. Fully available at http://pdf.aiaa.org/getfile.cfm?urlX=2
%3CWIG7D%2FQKU%3E6B5%3AKF5%2B%5CQ%3AK%3E%0A&urla=%26%2B%22L%2F%22P
%22%40%0A&urlb=%21%2A%20%20%20%0A&urlc=%21%2A0%20%20%0A&urld=%21%2A0
%20%20%0A&urle=%27%28%22H%22%23PJAT%20%20%20%0A [accessed 2010-12-24]. See also
[2798].
2800. Vorträge des 7. VDEB-Infotags 2004 – Mobile Computing, RFID und Rückverfolgbarkeit,
July 28, 2004. Verband der EDV-Software- und Beratungsunternehmen (VDEB): Aachen,
North Rhine-Westphalia, Germany and VDEB Verband IT-Mittelstand e.V.: Stuttgart, Ger-
many.
2801. Proceedings of the Workshop SAKS 2007, March 1, 2007, Bern, Switzerland. Verband der
Elektrotechnik, Elektronik und Informationstechnik e.V. (VDE): Frankfurt am Main, Ger-
many.
2802. Sébastien Vérel, Philippe Collard, and Manuel Clergue. Measuring the Evolvabil-
ity Landscape to Study Neutrality. In GECCO’06 [1516], pages 613–614, 2006.
doi: 10.1145/1143997.1144107. arXiv ID: 0709.4011v1.
2803. Proceedings of the Second European Congress on Fuzzy and Intelligent Technologies (EU-
FIT’94), September 20–23, 1994, Aachen, North Rhine-Westphalia, Germany. Verlag der
Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany.
2804. Proceedings of the First European Congress on Fuzzy and Intelligent Technologies (EU-
FIT’93), September 7–9, 1993, Aachen, North Rhine-Westphalia, Germany. Verlag der Au-
gustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany and ELITE Foundation:
Aachen, North Rhine-Westphalia, Germany.
2805. Proceedings of the Third European Congress on Intelligent Techniques and Soft Computing
(EUFIT’95), August 28–31, 1995, Aachen, North Rhine-Westphalia, Germany. Verlag der
Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany, Verlag und Druck
Mainz GmbH: Aachen, North Rhine-Westphalia, Germany, and ELITE Foundation: Aachen,
North Rhine-Westphalia, Germany.
2806. Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing
(EUFIT’98), September 7–10, 1998, Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen: Aachen, North Rhine-Westphalia, Germany. Verlag und Druck Mainz GmbH:
Aachen, North Rhine-Westphalia, Germany. isbn: 3-89653-500-5.
2807. Proceedings of the 7th European Congress on Intelligent Techniques and Soft Computing
(EUFIT’99), September 13–16, 1999, Aachen, North Rhine-Westphalia, Germany. Verlag
und Druck Mainz GmbH: Aachen, North Rhine-Westphalia, Germany.
2808. William T. Vetterling, Saul A. Teukolsky, William H. Press, and Brian P. Flannery. Numerical Recipes in C++ – Example Book – The Art of Scientific Computing. Cambridge University Press: Cambridge, UK, second edition, February 7, 2002. isbn: 0521750342. Google
Books ID: gwijz-OyIYEC. OCLC: 48265610, 314335535, 423496209, 469630365,
and 610691802. Library of Congress Control Number (LCCN): 2001052753. GBV-
Identification (PPN): 336225350 and 374254729. LC Classification: QA76.73.C153 N83
2002.
2809. Sixth European Conference on Machine Learning (ECML’93), April 5–7, 1993, Vienna, Aus-
tria.
2810. 6th Metaheuristics International Conference (MIC’05), August 22–26, 2005, Vienna, Austria.
Partly available at http://www.mic2005.org/ [accessed 2007-09-12].
2811. Savings Ants for the Vehicle Routing Problem. Technical Report 63, Vienna University of
Economics and Business Administration: Vienna, Austria, December 2001. ID: oai:epub.wu-
wien.ac.at:epub-wu-01 1f1. See also [802].
2812. Daniele Vigo. VRPLIB: A Vehicle Routing Problem LIBrary. Università degli
Studi di Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS),
Operations Research Group (Ricerca Operativa): Bológna, Emilia-Romagna, Italy,
October 3, 2003. Fully available at http://or.ingce.unibo.it/research/
vrplib-a-vehicle-routing-problem-resources-library and http://www.or.
deis.unibo.it/research_pages/ORinstances/VRPLIB/VRPLIB.html [accessed 2011-
10-12].
2813. Daniele Vigo and Maria Battarra. Database of Heterogeneous Vehicle Routing Problems.
Università degli Studi di Bologna, Dipartimento di Elettronica, Informatica e Sistemistica
(DEIS), Operations Research Group (Ricerca Operativa): Bológna, Emilia-Romagna, Italy,
January 19, 2007. Fully available at http://apice54.ingce.unibo.it/hvrp/index.
php [accessed 2011-10-12].
2814. Frank J. Villegas, Tom Cwik, Yahya Rahmat-Samii, and Majid Manteghi. A Parallel Elec-
tromagnetic Genetic-Algorithm Optimization (EGO) Application for Patch Antenna Design.
IEEE Transactions on Antennas and Propagation, 52(9):2424–2435, September 3, 2004, IEEE
Antennas & Propagation Society: Marsfield, NSW, Australia. doi: 10.1109/TAP.2004.834071.
2815. Andrei Vladimirescu. The SPICE Book. John Wiley & Sons Ltd.: New York, NY, USA,
1994. isbn: 0471609269.
2816. Bernhard Friedrich Voigt. Der Handlungsreisende – wie er sein soll und was er zu thun hat,
um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein –
von einem alten Commis-Voyageur. Voigt: Ilmenau, Germany, 1832. OCLC: 258013333.
Excerpt: “. . . Durch geeignete Auswahl und Planung der Tour kann man oft so viel Zeit
sparen, daß wir einige Vorschläge zu machen haben. . . . Der wichtigste Aspekt ist, so viele
Orte wie möglich zu erreichen, ohne einen Ort zweimal zu besuchen. . . . ”. Newly published
as [2817].
2817. Bernhard Friedrich Voigt. Der Handlungsreisende – wie er sein soll und was er zu thun
hat, um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu
sein – von einem alten Commis-Voyageur. Verlag Bernd Schramm: Kiel, Germany, 1981.
isbn: 3-921361-24-9. New edition of [2816].
2818. Hans-Michael Voigt, Werner Ebeling, Ingo Rechenberg, and Hans-Paul Schwefel, editors. Pro-
ceedings of the 4th International Conference on Parallel Problem Solving from Nature (PPSN
IV), September 22–24, 1996, Berlin, Germany, volume 1141/1996 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-61723-X.
isbn: 3-540-61723-X. Google Books ID: uN0mgUx-VRUC. OCLC: 35331240, 150419867,
and 246323673.
2819. Christian Voigtmann. Integration Evolutionärer Klassifikatoren in Weka. Bachelor’s thesis,
University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group:
Kassel, Hesse, Germany, October 26, 2008, Thomas Weise, Advisor, Kurt Geihs and Heinrich
Werner, Committee members. See also [2855, 2902].
2820. A. Volgenant. Solving Some Lexicographic Multi-Objective Combinatorial Problems. Eu-
ropean Journal of Operational Research (EJOR), 139(3):578–584, June 16, 2002, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific
Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(01)00214-4.
2821. Paul T. von Hippel. Mean, Median, and Skew: Correcting a Textbook Rule. Journal
of Statistics Education (JSE), 13(2), July 2005, American Statistical Association (ASA):
Alexandria, VA, USA. Fully available at http://www.amstat.org/publications/jse/
v13n2/vonhippel.html [accessed 2007-09-15].
2822. Richard von Mises. Probability, Statistics, and Truth. W. Hodge & Co.: Glasgow, Scot-
land, UK, Dover Publications: Mineola, NY, USA, and Allen & Unwin: Crows Nest, NSW,
Australia, 2nd edition, 1939–1957. isbn: 0486242145. Google Books ID: KSE4AAAAMAAJ,
enxeAAAAIAAJ, and wFFK P8Cpk0C. OCLC: 8320292 and 256165308.
2823. Richard von Mises. Mathematical Theory of Probability and Statistics. Academic Press
Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers B.V.: Essex, UK,
1964. isbn: 0127273565. Google Books ID: 3s8-AAAAIAAJ and 7gnFJQAACAAJ.
2824. Paul von Ragué Schleyer and Johnny Gasteiger, editors. Encyclopedia of Computational
Chemistry. Volume III: Databases and Expert Systems. John Wiley & Sons Ltd.: New York,
NY, USA, September 1998. isbn: 0-471-96588-X. OCLC: 490503997. Library of Congress
Control Number (LCCN): 98037164. LC Classification: QD39.3.E46 E53 1998.
2825. Matthias von Randow. Güterverkehr und Logistik als tragende Säule der Wirtschaft zukun-
ftssicher gestalten. In Das Beste Der Logistik: Innovationen, Strategien, Umsetzungen [234],
pages 49–53. Springer-Verlag GmbH: Berlin, Germany and Bundesvereinigung Logistik e.V.
(BVL): Bremen, Germany, 2008.
2826. Michael D. Vose. Generalizing the Notion of Schema in Genetic Algorithms. Artificial
Intelligence, 50(3):385–396, August 1991, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(91)90019-G.
2827. Michael D. Vose and Alden H. Wright. Form Invariance and Implicit Parallelism.
Evolutionary Computation, 9(3):355–370, Fall 2001, MIT Press: Cambridge, MA,
USA. doi: 10.1162/106365601750406037. Fully available at http://www.cs.umt.
edu/u/wright/papers/vose_wright2001form_invariance.ps.gz [accessed 2010-07-30].
CiteSeerx : 10.1.1.7.4294. PubMed ID: 11522211.
2828. Stefan Voß, Silvano Martello, Ibrahim H. Osman, and Cathérine Roucairol, editors. 2nd
Metaheuristics International Conference – Meta-Heuristics: Advances and Trends in Local
Search Paradigms for Optimization (MIC’97), July 21–24, 1997, Sophia-Antipolis, France.
Kluwer Publishers: Boston, MA, USA. isbn: 0-7923-8369-9. Published in 1998.
2829. Conrad Hal Waddington. Canalization of Development and the Inheritance of Acquired Char-
acters. Nature, 150(3811):563–565, November 14, 1942. doi: 10.1038/150563a0. Fully available
at http://www.nature.com/nature/journal/v150/n3811/pdf/150563a0.pdf [accessed 2008-09-10]. See also [2832].
2830. Conrad Hal Waddington. Genetic Assimilation of an Acquired Character. Evolution – In-
ternational Journal of Organic Evolution, 7(2):118–128, June 1953, Society for the Study
of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK. JSTOR Stable
ID: 2405747.
2831. Conrad Hal Waddington. The “Baldwin Effect”, “Genetic Assimilation” and “Homeostasis”.
Evolution – International Journal of Organic Evolution, 7(4):386–387, December 1953, Soci-
ety for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
JSTOR Stable ID: 2405346.
2832. Conrad Hal Waddington. Canalization of Development and the Inheritance of Acquired Char-
acters. In Adaptive Individuals in Evolving Populations: Models and Algorithms [255], pages
91–97. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview
Press, 1996. See also [2829].
2833. Andreas Wagner. Robustness and Evolvability in Living Systems, volume 9 in Prince-
ton Studies in Complexity. Princeton University Press: Princeton, NJ, USA, 2005–2007.
isbn: 0691122407 and 0691134049. Partly available at http://press.princeton.
edu/titles/8002.html [accessed 2009-07-10]. Google Books ID: 7O1bGwAACAAJ.
2834. Andreas Wagner. Robustness, Evolvability, and Neutrality. FEBS Letters, 579(8):1772–
1778, March 21, 2005, Federation of European Biochemical Societies (FEBS): London, UK
and Elsevier Science Publishers B.V.: Essex, UK, Robert Russell and Giulio Superti-Furga,
editors. doi: 10.1016/j.febslet.2005.01.063. PubMed ID: 15763550.
2835. Günter P. Wagner and Lee Altenberg. Complex Adaptations and the Evolution of Evolv-
ability. Evolution – International Journal of Organic Evolution, 50(3):967–976, June 1996,
Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex,
UK. Fully available at http://dynamics.org/Altenberg/PAPERS/CAEE/ [accessed 2009-07-10]. CiteSeerx : 10.1.1.28.1524.
2836. Michael Wagner, Dieter Hogrefe, Kurt Geihs, and Klaus David, editors. First Workshop
on Global Sensor Networks (GSN’09), March 6, 2009, University of Kassel, Fachbereich 16:
Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, volume 17.
European Association of Software Science and Technology (EASST; Universität Potsdam,
Institute for Informatics): Potsdam, Germany. Fully available at http://eceasst.cs.
tu-berlin.de/index.php/eceasst/issue/view/24 [accessed 2009-09-07]. Partly available
at http://ipvs.informatik.uni-stuttgart.de/vs/gsn09/ [accessed 2011-02-28].
2837. Tobias Wagner, Nicola Beume, and Boris Naujoks. Pareto-, Aggregation-, and Indicator-
Based Methods in Many-Objective Optimization. Reihe Computational Intelligence: De-
sign and Management of Complex Technical Processes and Systems by Means of Computa-
tional Intelligence Methods CI-217/06, Universität Dortmund, Collaborative Research Cen-
ter (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, Septem-
ber 2006. Fully available at http://hdl.handle.net/2003/26125 [accessed 2009-07-19].
issn: 1433-3325. See also [2838].
2838. Tobias Wagner, Nicola Beume, and Boris Naujoks. Pareto-, Aggregation-, and Indicator-
Based Methods in Many-Objective Optimization. In EMO’07 [2061], pages 742–756, 2007.
doi: 10.1007/978-3-540-70928-2_56. See also [2837].
2839. Markus Wälchli and Torsten Braun. Efficient Signal Processing and Anomaly Detec-
tion in Wireless Sensor Networks. In EvoWorkshops’09 [1052], pages 81–86, 2009.
doi: 10.1007/978-3-642-01129-0_9.
2840. Karl-Heinz Waldmann and Ulrike M. Stocker, editors. Selected Papers of the Annual Inter-
national Conference of the German Operations Research Society (GOR), Jointly Organized
with the Austrian Society of Operations Research (ÖGOR) and the Swiss Society of Op-
erations Research (SVOR), September 6–8, 2006, University of Karlsruhe: Karlsruhe, Ger-
many, volume 2006 in Operations Research Proceedings. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-540-69995-8. isbn: 3-540-69994-5. Google Books ID: HOwaluJXznkC.
OCLC: 185135009, 255964381, 300964495, 307478719, and 315261951.
2841. Donald E. Walker and Lewis M. Norton, editors. Proceedings of the 1st International Joint
Conference on Artificial Intelligence (IJCAI’69), May 7–9, 1969, Washington, DC, USA.
isbn: 0-934613-21-4. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-69/
CONTENT/content.htm [accessed 2008-04-01]. OCLC: 15386096.
2842. David Alexander Ross Wallace. Groups, Rings and Fields, Undergraduate Texts in Mathe-
matics. Springer New York: New York, NY, USA, February 2004. isbn: 3540761772.
2843. David Wallin and Conor Ryan. Maintaining Diversity in EDAs for Real-Valued Optimisation
Problems. In FBIT’07 [1277], pages 795–800, 2007. doi: 10.1109/FBIT.2007.132. INSPEC
Accession Number: 9979463.
2844. David Wallin and Conor Ryan. On the Diversity of Diversity. In CEC’07 [1343], pages
95–102, 2007. doi: 10.1109/CEC.2007.4424459. INSPEC Accession Number: 9889406.
2845. David L. Waltz, editor. Proceedings of the National Conference on Artificial Intelligence
(AAAI’82), August 18–20, 1982, Carnegy Mellon University (CMU): Pittsburgh, PA, USA.
American Association for Artificial Intelligence: Los Altos, CA, USA and John W. Kaufmann
Inc.: Washington, DC, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai82.php [accessed 2007-09-06]. OCLC: 159789798.
2846. Chunzhi Wang and Hongwei Chen, editors. Proceedings of the 2nd International Workshop
on Intelligent Systems and Applications (ISA’10), May 22–23, 2010, Wǔhàn, Húběi, China.
IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://www.
icaiss.org/isa2010/ [accessed 2011-04-10]. Catalogue no.: CFP1060G-ART.
2847. Chunzhi Wang and Zhiwei Ye, editors. Proceedings of the International Workshop on
Intelligent Systems and Applications (ISA’09), May 23–24, 2009, Wǔhàn, Húběi, China.
IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://
www.icaiss.org/isa2009/indexen.html [accessed 2011-04-10]. Library of Congress Con-
trol Number (LCCN): 2009900554. Catalogue no.: CFP0960G.
2848. Haiying Wang, Kay Soon Low, Kexin Wei, and Junqing Sun, editors. Proceedings of the 5th
International Conference on Natural Computation (ICNC’09), August 14–16, 2009, Tiānjı̄n,
China. IEEE Computer Society: Piscataway, NJ, USA. Library of Congress Control Number
(LCCN): 2009903793. Order No.: P3736. Catalogue no.: CFP09CNC-PRT.
2849. Lipo Wang, editor. Support Vector Machines: Theory and Applications, volume 177/2005
in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, Au-
gust 2005. doi: 10.1007/b95439. isbn: 3-540-24388-7. OCLC: 60800790, 316265870,
318299016, and 403848183. Library of Congress Control Number (LCCN): 2005921894.
2850. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International
Conference on Advances in Natural Computation, Part I (ICNC’05-I), August 27–29, 2005,
Chángshā, Húnán, China, volume 3610 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11539087. isbn: 3-540-28323-4.
See also [2851, 2852].
2851. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International
Conference on Advances in Natural Computation, Part II (ICNC’05-II), August 27–29, 2005,
volume 3611 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/11539117. isbn: 3-540-28325-0. See also [2850, 2852].
2852. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International
Conference on Advances in Natural Computation, Part III (ICNC’05-III), August 27–29,
2005, volume 3612 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/11539902. isbn: 3-540-28320-X. See also [2850, 2851].
2853. Paul P. Wang, editor. Proceedings of the Fifth Joint Conference on Information Science (JCIS
2000), Section: The Third International Workshop on Frontiers in Evolutionary Algorithms
(FEA’00), 2000, Atlantic City, NJ, USA. Part of [142].
2854. Pu Wang, Edward P. K. Tsang, Thomas Weise, Ke Tang, and Xin Yao. Using GP to
Evolve Decision Rules for Classification in Financial Data Sets. In ICCI’10 [2630], pages
722–727, 2010. doi: 10.1109/COGINF.2010.5599820. Fully available at http://mail.
ustc.edu.cn/˜wuyou308/paper1.pdf and http://www.it-weise.de/documents/
files/WTWTY2010UGPTEDRFCIFDS.pdf [accessed 2010-12-09]. CiteSeerx : 10.1.1.170.3794.
INSPEC Accession Number: 11579674. Ei ID: 20105013469298.
2855. Pu Wang, Thomas Weise, and Raymond Chiong. Novel Evolutionary Algo-
rithms for Supervised Classification Problems: An Experimental Study. Evolution-
ary Intelligence, 4(1):3–16, January 12, 2011, Springer-Verlag GmbH: Berlin, Ger-
many. doi: 10.1007/s12065-010-0047-7. Fully available at http://www.it-weise.de/
documents/files/WWC2011NEAFSCPAES.pdf. Ei ID: IP51227209. See also [2819, 2854,
2894].
2856. Tzai-Der Wang, Xiaodong Li, Shu-Heng Chen, Xufa Wang, Hussein A. Abbass, Hitoshi
Iba, Guoliang Chen, and Xin Yao, editors. Proceedings of the 6th International Conference
on Simulated Evolution and Learning (SEAL’06), October 15–18, 2006, University of Sci-
ence and Technology of China (USTC): Héféi, Ānhuı̄, China, volume 4247 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11903697. isbn: 3-540-47331-9.
OCLC: 496560934. Library of Congress Control Number (LCCN): 318289226.
2857. Yingxu Wang, Du Zhang, Jean-Claude Latombe, and Witold Kinsner, editors. Proceedings of
the Seventh IEEE International Conference on Cognitive Informatics (ICCI’08), August 14,
2008, Stanford University: Stanford, CA, USA. IEEE Computer Society: Piscataway, NJ,
USA.
2858. Yingxu Wang, Bernard Widrow, Bo Zhang, Witold Kinsner, Kenji Sugawara, Fuchun Sun,
Thomas Weise, Yixin Zhong, and Du Zhang. Perspectives on Cognitive Informatics and Its
Future Development: Summary of Plenary Panel II of IEEE ICCI’10. In ICCI’10 [2630],
pages 17–25, 2010. doi: 10.1109/COGINF.2010.5599693. Fully available at http://
www.it-weise.de/documents/files/WWZKSSWZZ2010POCIAIFDSOPPIIOII.pdf [accessed 2010-12-09]. Ei ID: 20105013469178. See also [2859, 2890].
2859. Yingxu Wang, Bernard Widrow, Bo Zhang, Witold Kinsner, Kenji Sugawara, Fuchun Sun,
Jianhua Lu, Thomas Weise, and Du Zhang. Perspectives on the Field of Cognitive Informatics
and Its Future Development. The International Journal of Cognitive Informatics and Natural
Intelligence (IJCINI), 5(1):1–17, January–March 2011, Idea Group Publishing (Idea Group
Inc., IGI Global): New York, NY, USA. doi: 10.4018/jcini.2011010101. See also [2858, 2890].
2860. Yu Wang, Bin Li, and Thomas Weise. Estimation of Distribution and Differential Evo-
lution Cooperation for Large Scale Economic Load Dispatch Optimization of Power Sys-
tems. Information Sciences – Informatics and Computer Science Intelligent Systems Ap-
plications: An International Journal, 180(12):2405–2420, June 2010, Elsevier Science Pub-
lishers B.V.: Essex, UK. doi: 10.1016/j.ins.2010.02.015. Fully available at http://www.
it-weise.de/documents/files/WLW2010EODADECFLSELDOOPS.pdf [accessed 2010-06-19].
Ei ID: 20101412821019. IDS (SCI): 593PM.
2861. Yu Wang, Bin Li, Thomas Weise, Jianyu Wang, Bo Yuan, and Qiongjie Tian. Self-
Adaptive Learning Based Particle Swarm Optimization. Information Sciences – Infor-
matics and Computer Science Intelligent Systems Applications: An International Jour-
nal, 181(20):4515–4538, October 15, 2011, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.ins.2010.07.013. Ei ID: IP51031102.
2862. Zai Wang, Tianshi Chen, Ke Tang, and Xin Yao. A Multi-Objective Approach to Redun-
dancy Allocation Problem in Parallel-Series Systems. In CEC’09 [1350], pages 582–589,
2009. doi: 10.1109/CEC.2009.4982998. Fully available at http://nical.ustc.edu.cn/
uploads/cec2009/P252.pdf [accessed 2009-09-05]. INSPEC Accession Number: 10688610.
2863. Zai Wang, Ke Tang, and Xin Yao. Multi-objective Approaches to Optimal Test-
ing Resource Allocation in Modular Software Systems. IEEE Transactions on
Reliability, 59(3):563–575, September 2010, IEEE Reliability Society: Lafayette,
CO, USA. doi: 10.1109/TR.2010.2057310. Fully available at http://staff.
ustc.edu.cn/˜ketang/papers/WangTangYao_TR10a.pdf [accessed 2011-02-22].
CiteSeerx : 10.1.1.174.7029. INSPEC Accession Number: 11489129.
2864. Zhao-Xia Wang, Zeng-Qiang Chen, and Zhu-Zhi Yuan. QoS Routing Optimization Strategy
using Genetic Algorithm in Optical Fiber Communication Networks. Journal of Computer
Science & Technology (JCS&T), 19(2):213–217, March 2004, Iberoamerican Science & Tech-
nology Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM, USA.
doi: 10.1007/BF02944799.
2865. Ziqiang Wang and Dexian Zhang. Optimal Genetic View Selection Algorithm Un-
der Space Constraint. International Journal of Information Technology (IJIT), 11
(5):44–51, 2005, Singapore Computer Society (SCS): Singapore. Fully available
at http://www.cs.hcmus.edu.vn/elib/bitstream/123456789/83427/1/1684.
pdf and http://www.icis.ntu.edu.sg/scs-ijit/115/115_6.pdf [accessed 2011-03-28].
CiteSeerx : 10.1.1.103.5672.
2866. 18th European Conference on Machine Learning (ECML’07), September 17–21, 2007, War-
saw, Poland.
2867. Proceedings of the 6th International Conference on Hybrid Artificial Intelligence Systems
(HAIS’11), May 23–25, 2011, Warsaw, Poland.
2868. Satoshi Watanabe. Knowing and Guessing: A Quantitative Study of Inference and Infor-
mation. John Wiley & Sons Ltd.: New York, NY, USA, 1969. isbn: 0471921300. Google
Books ID: fNXjAQAACAAJ. OCLC: 10971, 258565060, and 264995516.
2869. Donald Arthur Waterman and Frederick Hayes-Roth, editors. Selected Papers from the
Workshop on Pattern Directed Inference Systems. Academic Press: New York, NY, USA,
1977, Honolulu, HI, USA. isbn: 0127375503. Google Books ID: 1npQAAAAMAAJ and
xHtQAAAAMAAJ. OCLC: 3843456, 84255208, 247410354, and 310813094.
2870. James Dewey Watson, Tania A. Baker, Stephen P. Bell, Alexander Gann, Michael L.
Levine, and Richard Losick. Molecular Biology of the Gene. Pearson Education: Up-
per Saddle River, NJ, USA. Imprint: Benjamin/Cummings Publishing Company, Inc.:
San Francisco, CA, USA, fifth edition, 1987–2003. isbn: 0321223683, 0321248643,
080534635X, 0805346422, and 0805346430. OCLC: 51861844, 56972646, 423675724,
and 488848386. Library of Congress Control Number (LCCN): 2003042863. LC Classifi-
cation: QH506 .M6627 2004.
2871. Geoffrey I. Webb and Xinghuo Yu, editors. Advances in Artificial Intelligence – Proceedings
of the 17th Australian Joint Conference on Artificial Intelligence (AI’04), December 4–6,
2004, Central Queensland University: Cairns, Australia, volume 3339/2004 in Lecture Notes
in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/b104336. isbn: 3-540-24059-4. Google Books
ID: bIWyepRUTKkC. OCLC: 57209655, 57313586, 67000961, 76556374, 224922858,
300264487, 318299687, 473571883, and 496631785. Library of Congress Control Num-
ber (LCCN): 2004116040.
2872. Bonnie Lynn Webber and Nils J. Nilsson, editors. Readings in Artificial Intelligence: A Col-
lection of Articles. Tioga Publishing Co.: Wellsboro, PA, USA and Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA, 1981. isbn: 0934613036 and 0935382038.
Google Books ID: 2X9QAAAAMAAJ, J2xkAAAAMAAJ, NxTUAQAACAAJ, YzJzHgAACAAJ, and
ZX9QAAAAMAAJ. OCLC: 7732055, 12722542, 263999188, 299402077, and 310662867.
2873. Bruce H. Weber and David J. Depew, editors. Evolution and Learning: The Baldwin
Effect Reconsidered, Bradford Books. MIT Press: Cambridge, MA, USA, July 2003.
isbn: 0-262-23229-4. Google Books ID: yBtRzBilw1MC. OCLC: 50440849. Library of
Congress Control Number (LCCN): 2002031992. GBV-Identification (PPN): 353052698.
LC Classification: BF698.95 .E95 2003.
2874. David C. Wedge and Douglas B. Kell. Rapid Prediction of Optimum Population Size in
Genetic Programming using a Novel Genotype-Fitness Correlation. In GECCO’08 [1519],
pages 1315–1322, 2008. Fully available at http://www.cs.bham.ac.uk/˜wbl/biblio/
gp-html/Wedge_2008_gecco.html [accessed 2009-07-10].
2875. Bill Wedley, editor. Proceedings of the 17th International Conference on Multiple Criteria
Decision Making (MCDM’04), August 5–8, 2004, Whistler Conference Center: Whistler, BC,
Canada. Partly available at http://www.bus.sfu.ca/events/mcdm/ [accessed 2007-09-10].
2876. Ingo Wegener. Komplexitätstheorie – Grenzen der Effizienz von Algorithmen. Springer-Verlag
GmbH: Berlin, Germany, 2003. isbn: 3-540-00161-1. Google Books ID: rFvDwDP0hmkC.
Newly published as [2877].
2877. Ingo Wegener. Complexity Theory: Exploring the Limits of Efficient Algorithms. Springer-
Verlag: Berlin/Heidelberg, 2005, Randall Pruim, Translator. isbn: 3-540-21045-8. Google
Books ID: u7DZSDSUYlQC. See also [2876].
2878. Yaxing Wei, Peng Yue, Upendra Dadi, Min Min, Chengfang Hu, and Liping Di. Active
Acquisition of Geospatial Data Products in a Collaborative Grid Environment. In SCCon-
test’06 [554], pages 455–462, 2006. doi: 10.1109/SCC.2006.46. INSPEC Accession Num-
ber: 9165409.
2879. Karsten Weicker. Evolutionäre Algorithmen, Leitfäden der Informatik. B. G. Teubner:
Leipzig, Germany, March 2002. isbn: 3-519-00362-7. Google Books ID: 3-519-00362-7.
OCLC: 51984239. GBV-Identification (PPN): 098084860.
2880. Karsten Weicker and Nicole Weicker. Burden and Benefits of Redundancy. In
FOGA’00 [2565], pages 313–333, 2001. Fully available at http://www.imn.
htwk-leipzig.de/˜weicker/publications/redundancy.pdf [accessed 2008-11-02].
2881. Edward D. Weinberger. Correlated and Uncorrelated Fitness Landscapes and How to
Tell the Difference. Biological Cybernetics, 63(5):325–336, September 1993, Springer-Verlag:
Berlin/Heidelberg. doi: 10.1007/BF00202749.
2882. Edward D. Weinberger. Local Properties of Kauffman’s NK Model, a Tuneably Rugged
Energy Landscape. Physical Review A, 44(10):6399–6413, November 1991, American Physical
Society: College Park, MD, USA. doi: 10.1103/PhysRevA.44.6399.
2883. Edward D. Weinberger. NP Completeness of Kauffman’s N-k Model, A Tuneable Rugged
Fitness Landscape. Working Papers 96-02-003, Santa Fe Institute: Santa Fé, NM, USA,
February 1996. Fully available at http://www.santafe.edu/media/workingpapers/
96-02-003.pdf [accessed 2010-12-04].
2884. Edward D. Weinberger and Peter F. Stadler. Why Some Fitness Landscapes are Fractal.
Journal of Theoretical Biology, 163(2):255–275, July 21, 1993, Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego,
CA, USA. doi: 10.1006/jtbi.1993.1120. Fully available at http://www.tbi.univie.ac.
at/papers/Abstracts/93-01-01.ps.gz [accessed 2008-06-01].
2885. Thomas Weise. Herleitungen zu Warteschlangenmodellen (Queuing Theory). University
of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kas-
sel, Hesse, Germany, November 22, 2005. Fully available at http://www.it-weise.de/
documents/files/W2005WS.pdf [accessed 2010-12-07].
2886. Thomas Weise. Genetic Programming for Sensor Networks. Technical Report, Univer-
sity of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group:
Kassel, Hesse, Germany, January 2006. Fully available at http://www.it-weise.de/
documents/files/W2006DGPFa.pdf. See also [2896].
2887. Thomas Weise. SIGOA+DGPF: Evolutionary Computation and Genetic Programming for
Distributed Computing. University of Kassel, Fachbereich 16: Elektrotechnik/Informatik,
Distributed Systems Group: Kassel, Hesse, Germany, August 6, 2007.
2888. Thomas Weise. Evolving Distributed Algorithms with Genetic Programming. PhD the-
sis, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Sys-
tems Group: Kassel, Hesse, Germany, May 4, 2009, Kurt Geihs, Supervisor, Kurt
Geihs, Christian F. Tschudin, Birgit Vogel-Heuser, and Albert Zündorf, Committee mem-
bers. Fully available at http://www.it-weise.de/documents/files/W2009DISS.
pdf. urn: urn:nbn:de:hebis:34-2009051127217. Won the Dissertation Award
of The Association of German Engineers (Verein Deutscher Ingenieure, VDI). See also
[2892, 2898, 2899].
2889. Thomas Weise. Evolving Distributed Algorithms with Genetic Programming. University of
Basel, Computer Science Department, Computer Networks Group: Basel, Switzerland, Jan-
uary 15, 2009. See also [2888].
2890. Thomas Weise. CI/CC and Evolutionary Computation. July 9, 2010. Fully available
at http://www.it-weise.de/documents/files/W2010CCAEC.pdf. Position Paper.
See also [2858].
2891. Thomas Weise. Genetic Programming. University of Science and Technology of China
(USTC), School of Computer Science and Technology, Nature Inspired Computation and
Applications Laboratory (NICAL): Héféi, Ānhuı̄, China, August 11, 2010.
2892. Thomas Weise. Global Optimization Algorithms – Theory and Application. it-weise.de (self-
published): Germany, 2009. Fully available at http://www.it-weise.de/projects/
book.pdf.
2893. Thomas Weise and Raymond Chiong. Evolutionary Approaches and Their Applications
to Distributed Systems. In Intelligent Systems for Automated Learning and Adaptation:
Emerging Trends and Applications [561], Chapter 6, pages 114–149. Information Science
Reference: Hershey, PA, USA, 2009. doi: 10.4018/978-1-60566-798-0.ch006.
2894. Thomas Weise and Raymond Chiong. Evolutionary Data Mining Approaches for
Rule-based and Tree-based Classifiers. In ICCI’10 [2630], pages 696–703, 2010.
doi: 10.1109/COGINF.2010.5599821. Fully available at http://www.it-weise.de/
documents/files/WC2010EDMAFRBATBC.pdf [accessed 2010-06-24]. INSPEC Accession Num-
ber: 11579675. Ei ID: 20105013469299. See also [2855].
2895. Thomas Weise and Raymond Chiong. A Novel Extremal Optimization Algorithm for the
Template Design Problem. International Journal of Organizational and Collective Intelligence
(IJOCI), 2(2):1–17, April–June 2011, Idea Group Publishing (Idea Group Inc., IGI Global):
New York, NY, USA. See also [566].
2896. Thomas Weise and Kurt Geihs. Genetic Programming Techniques for Sensor Networks.
In 5. GI/ITG KuVS Fachgespräch “Drahtlose Sensornetze” [1833], pages 21–25, 2006.
Fully available at http://elib.uni-stuttgart.de/opus/volltexte/2006/2838/
and http://www.it-weise.de/documents/files/WG2006DGPFb.pdf [accessed 2007-11-07]. CiteSeerx : 10.1.1.152.8029 and 10.1.1.61.6782. See also [2886, 2897].
2897. Thomas Weise and Kurt Geihs. DGPF – An Adaptable Framework for Distributed Multi-
Objective Search Algorithms Applied to the Genetic Programming of Sensor Networks. In
BIOMA’06 [919], pages 157–166, 2006. Fully available at http://www.it-weise.de/
documents/files/WG2006DGPFc.pdf. CiteSeerx : 10.1.1.74.8243. See also [2896].
2898. Thomas Weise and Ke Tang. Evolving Distributed Algorithms with Genetic Programming.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 16(2), September 26, 2011–
2012, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2011.2112666.
See also [2888].
2899. Thomas Weise and Michael Zapf. Evolving Distributed Algorithms with Genetic Program-
ming: Election. In GEC’09 [3000], pages 577–584, 2009. doi: 10.1145/1543834.1543913. Fully
available at http://www.it-weise.de/documents/files/WZ2009EDAWGPE.pdf. Ei
ID: 20093012219539. See also [2888].
2900. Thomas Weise, Volker Fickert, Thomas Ziegs, Rico Roßberg, René Kreiner, Roland Fischer,
Stefan Gelbke, Harry Briem, et al. VisioOS – Visual Simulation of Operating Systems.
Chemnitz University of Technology, Faculty of Computer Science, Operating Systems Group:
Chemnitz, Sachsen, Germany, May 15, 2005, Winfried Kalfa, Supervisor. Fully available
at http://www.it-weise.de/documents/files/WEA2005VISIOS.pdf [accessed 2010-12-
07]. See also [910].
2901. Thomas Weise, Mirko Dietrich, Marc Kirchhoff, Lado Kumsiashvili, Stefan Niemczyk, and
Alexander Podlich. DGPF – Various other Documents and Slides. University of Kassel, Fach-
bereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany,
October 26, 2006. Fully available at http://www.it-weise.de/documents/files/
W2006DGPFo.pdf [accessed 2010-12-07]. See also [2888].
2902. Thomas Weise, Stefan Achler, Martin Göb, Christian Voigtmann, and Michael Zapf. Evolv-
ing Classifiers – Evolutionary Algorithms in Data Mining. Kasseler Informatikschriften
(KIS) 2007, 4, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel,
Hesse, Germany, September 28, 2007. Fully available at http://www.it-weise.de/
documents/files/WAGVZ2007DMC.pdf [accessed 2010-12-07]. CiteSeerx : 10.1.1.89.6165.
urn: urn:nbn:de:hebis:34-2007092819260.
2903. Thomas Weise, Steffen Bleul, and Kurt Geihs. Web Service Composition Sys-
tems for the Web Service Challenge – A Detailed Review. Technical Report
2007, 7, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel,
Hesse, Germany, November 19, 2007. Fully available at http://www.it-weise.de/
documents/files/WBG2007WSCb.pdf [accessed 2007-11-20]. CiteSeerx : 10.1.1.86.7774.
urn: urn:nbn:de:hebis:34-2007111919638. See also [334, 335, 2908].
2904. Thomas Weise, Kurt Geihs, and Philipp Andreas Baer. Genetic Program-
ming for Proactive Aggregation Protocols. In ICANNGA’07-I [257], pages 167–
173, 2007. doi: 10.1007/978-3-540-71618-1. Fully available at http://www.
it-weise.de/documents/files/W2007DGPFb.pdf. CiteSeerx : 10.1.1.152.9906. Ei
ID: 20080311022959. IDS (SCI): BGC96. See also [2911].
2905. Thomas Weise, Michael Zapf, and Kurt Geihs. Rule-based Genetic Programming. In
BIONETICS’07 [1342], pages 8–15, 2007. doi: 10.1109/BIMNICS.2007.4610073. Fully
available at http://www.it-weise.de/documents/files/WZG2007RBGP.pdf. Ei
ID: 20084111632894. IDS (SCI): BKM77.
2906. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Genetic
Programming meets Model-Driven Development. Kasseler Informatikschriften (KIS)
2007, 2, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel,
Hesse, Germany, July 2, 2007. Fully available at http://www.it-weise.de/
documents/files/WZKG2007DGPFd.pdf [accessed 2007-09-17]. CiteSeerx : 10.1.1.94.317.
urn: urn:nbn:de:hebis:34-2007070218786. See also [2907, 2916].
2907. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Genetic Pro-
gramming meets Model-Driven Development. In HIS’07 [1572], pages 332–335, 2007.
doi: 10.1109/HIS.2007.11. Fully available at http://www.it-weise.de/documents/
files/WZKG2007DGPFg.pdf. Ei ID: 20082911386590. See also [2906, 2916].
2908. Thomas Weise, Steffen Bleul, Diana Elena Comes, and Kurt Geihs. Different Ap-
proaches to Semantic Web Service Composition. In ICIW’08 [1862], pages 90–96,
2008. doi: 10.1109/ICIW.2008.32. Fully available at http://www.it-weise.de/
documents/files/WBCG2008ICIW.pdf. INSPEC Accession Number: 10067008. Ei
ID: 20083711536637. IDS (SCI): BRC96. See also [2903].
2909. Thomas Weise, Steffen Bleul, Marc Kirchhoff, and Kurt Geihs. Semantic Web Service
Composition for Service-Oriented Architectures. In CEC/EEE’08 [1346], pages 355–358,
2008. doi: 10.1109/CECandEEE.2008.148. Fully available at http://www.it-weise.de/
documents/files/WBKG2008SWSCFSOA.pdf. INSPEC Accession Number: 10475110. Ei
ID: 20091512027447. IDS (SCI): BJF01. See also [204, 334, 335].
2910. Thomas Weise, Stefan Niemczyk, Hendrik Skubch, Roland Reichle, and Kurt Geihs.
A Tunable Model for Multi-Objective, Epistatic, Rugged, and Neutral Fitness Land-
scapes. In GECCO’08 [1519], pages 795–802, 2008. doi: 10.1145/1389095.1389252.
Fully available at http://www.it-weise.de/documents/files/WNSRG2008GECCO.
pdf. CiteSeerx : 10.1.1.152.8952 and 10.1.1.153.308. Ei ID: 20085111786051.
2911. Thomas Weise, Michael Zapf, and Kurt Geihs. Evolving Proactive Aggregation Proto-
cols. In EuroGP’08 [2080], pages 254–265, 2008. doi: 10.1007/978-3-540-78671-9_22. Fully
available at http://www.it-weise.de/documents/files/WZG2008DGPFa.pdf. Ei
ID: 20083011392084. IDS (SCI): BHN52. See also [2904].
2912. Thomas Weise, Alexander Podlich, and Christian Gorldt. Solving Real-World Vehicle Routing
Problems with Evolutionary Algorithms. In Natural Intelligence for Scheduling, Planning
and Packing Problems [564], Chapter 2, pages 29–53. Springer-Verlag: Berlin/Heidelberg,
2009. doi: 10.1007/978-3-642-04039-9_2. Fully available at http://www.it-weise.de/
documents/files/WPG2009SRWVRPWEA.pdf [accessed 2010-06-19]. See also [2188, 2913, 2914].
2913. Thomas Weise, Alexander Podlich, Manfred Menze, and Christian Gorldt. Optimierte
Güterverkehrsplanung mit Evolutionären Algorithmen. Industrie Management – Zeitschrift
für industrielle Geschäftsprozesse, 10(3):37–40, June 2, 2009, GITO mbH – Verlag für Indus-
trielle Informationstechnik und Organisation: Berlin, Germany. Fully available at http://
www.it-weise.de/documents/files/WPMG2009OGMEA.pdf and http://www.
logistics.de/logistik/branchen.nsf/D4FF2D93CA69B47FC12575CF004870C8/
$File/gueterverkehrsplanung_optimierung_algorithmen_weise_podlich_
menze_gorldt_gitopdf.pdf [accessed 2009-09-07]. See also [2912, 2914].
2914. Thomas Weise, Alexander Podlich, Kai Reinhard, Christian Gorldt, and Kurt Geihs. Evo-
lutionary Freight Transportation Planning. In EvoWorkshops’09 [1052], pages 768–777,
2009. doi: 10.1007/978-3-642-01129-0_87. Fully available at http://www.it-weise.de/
documents/files/WPRGG2009EFTP.pdf. Ei ID: 20093012218527. IDS (SCI): BJH22. See
also [2912].
2915. Thomas Weise, Michael Zapf, Raymond Chiong, and Antonio Jesús Nebro Urbaneja. Why Is
Optimization Difficult? In Nature-Inspired Algorithms for Optimisation [562], Chapter 1,
pages 1–50. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-642-00267-0_1.
Fully available at http://www.it-weise.de/documents/files/WZCN2009WIOD.pdf.
2916. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Combining Genetic
Programming and Model-Driven Development. International Journal of Computational Intel-
ligence and Applications (IJCIA), 8(1):37–52, March 2009, World Scientific Publishing Co.:
Singapore and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026809002436.
Fully available at http://www.it-weise.de/documents/files/WZKG2009DGPFz.
pdf. Ei ID: 20092012083173. See also [2906, 2907].
2917. Thomas Weise, Stefan Niemczyk, Raymond Chiong, and Mingxu Wan. A Framework for
Multi-Model EDAs with Model Recombination. In EvoNUM’11 [2588], pages 304–313,
2011. doi: 10.1007/978-3-642-20525-5_31. Fully available at http://www.it-weise.de/
documents/files/WNCW2011AFFMMEWMR.pdf. See also [2038].
2918. August Weismann. The Germ-Plasm – A Theory of Heredity. Charles Scribner’s Sons:
New York, NY, USA and Electronic Scholarly Publishing (ESP): Seattle, WA, USA, 1893,
W. Newton Parker and Harriet Rönnfeld, Translators. Fully available at http://www.esp.
org/books/weismann/germ-plasm/facsimile/ [accessed 2008-09-10].
2919. Gerhard Weiß and Sandip Sen, editors. Adaption and Learning in Multi-Agent Systems, IJ-
CAI’95 Workshop Proceedings (IJCAI-95 WS), August 21, 1995, Montréal, QC, Canada, vol-
ume 1042/1996 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-60923-7.
isbn: 3-540-60923-7. Google Books ID: joFQAAAAMAAJ. OCLC: 34318834, 150397844,
and 246327094. See also [1836, 1861].
2920. Justin Werfel and Radhika Nagpal. Extended Stigmergy in Collective Construction. IEEE
Intelligent Systems Magazine, 21(2):20–28, March–April 2006, IEEE Computer Society Press:
Los Alamitos, CA, USA. doi: 10.1109/MIS.2006.25. Fully available at http://hebb.mit.
edu/people/jkwerfel/ieeeis06.pdf [accessed 2008-06-12].
2921. Justin Werfel, Yaneer Bar-Yam, Daniela Rus, and Radhika Nagpal. Distributed Construction
by Mobile Robots with Enhanced Building Blocks. In ICRA’06 [1389], pages 2787–2794, 2006.
doi: 10.1109/ROBOT.2006.1642123. Fully available at http://www.eecs.harvard.edu/
~rad/ssr/papers/icra06-werfel.pdf [accessed 2008-06-12]. CiteSeerx : 10.1.1.87.6701.
INSPEC Accession Number: 9120372.
2922. Gregory M. Werner and Michael G. Dyer. Evolution of Communication in Artificial Organ-
isms. In Artificial Life II [1668], pages 659–687, 1990. Fully available at http://www.isrl.
uiuc.edu/~amag/langev/paper/werner92evolutionOf.html [accessed 2008-07-28].
2923. A. Wetzel. Evaluation of the Effectiveness of Genetic Algorithms in Combinatorial Opti-
mization. University of Pittsburgh: Pittsburgh, PA, USA, 1983. Unpublished manuscript,
technical report.
2924. James F. Whidborne, Da-Wei Gu, and Ian Postlethwaite. Algorithms for the Method of
Inequalities – A Comparative Study. In ACC [1325], pages 3393–3397, volume 5, 1995. FA19
= 9:15.
2925. James M. Whitacre, Philipp Rohlfshagen, Axel Bender, and Xin Yao. The Role of Degenerate
Robustness in the Evolvability of Multi-Agent Systems in Dynamic Environments. In PPSN
XI-1 [2412], pages 284–293, 2010. doi: 10.1007/978-3-642-15844-5_29.
2926. R. C. White, jr. A Survey of Random Methods for Parameter Optimization. Simu-
lation – Transactions of The Society for Modeling and Simulation International, SAGE
Journals Online, 17(5):197–205, 1971, Society for Modeling and Simulation International
(SCS): San Diego, CA, USA. doi: 10.1177/003754977101700504. Fully available at http://
alexandria.tue.nl/extra1/erap/publichtml/7101031.pdf [accessed 2009-07-04]. TH-
Report 70-E-16.
2927. L. Darrell Whitley, editor. Proceedings of the Second Workshop on Foundations of Genetic
Algorithms (FOGA’92), July 26–29, 1992, Vail, CO, USA. Morgan Kaufmann Publishers
Inc.: San Francisco, CA, USA. isbn: 1-55860-263-1. OCLC: 27723077, 312001104,
438787312, 475768457, 490655463, and 612351520. issn: 1081-6593. Published Febru-
ary 1, 1993.
2928. L. Darrell Whitley, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO’00 LBP), July 10–12, 2000, Riviera Hotel and Casino: Las Vegas, NV,
USA. AAAI Press: Menlo Park, CA, USA. See also [2935, 2986].
2929. L. Darrell Whitley. The GENITOR Algorithm and Selection Pressure: Why Rank-
Based Allocation of Reproductive Trials is Best. In ICGA’89 [2414], pages 116–121,
1989. Fully available at http://citeseer.ist.psu.edu/531140.html [accessed 2007-08-
21]. CiteSeerx : 10.1.1.18.8195.
2930. L. Darrell Whitley. A Genetic Algorithm Tutorial. Technical Report CS-93-103,
Colorado State University, Computer Science Department: Fort Collins, CO, USA,
March 10, 1993. Fully available at ftp://ftp.cse.cuhk.edu.hk/pub/EC/GA/
papers/tutor93.ps.gz, http://www.cs.iastate.edu/~honavar/ga_tutorial.
pdf, http://www.cs.uga.edu/~potter/CompIntell/ga_tutorial.pdf, http://
www.emunix.emich.edu/~evett/AI/ga_tutorial.pdf, http://www.stat.ucla.
edu/~yuille/courses/stat202c/ga_tutorial.pdf, and http://www.wins.uva.
nl/~mes/psdocs/ga-tr103.ps.gz [accessed 2010-08-03]. CiteSeerx : 10.1.1.38.4806. See
also [2931].
2931. L. Darrell Whitley. A Genetic Algorithm Tutorial. Statistics and Computing, 4(2):65–85,
June 1994, Springer Netherlands: Dordrecht, Netherlands and Samizdat Press: Golden, CO,
USA. doi: 10.1007/BF00175354. Fully available at http://samizdat.mines.edu/ga_
tutorial/ga_tutorial.ps [accessed 2007-08-12]. See also [2930].
2932. L. Darrell Whitley and Michael D. Vose, editors. Proceedings of the Third Workshop on
Foundations of Genetic Algorithms (FOGA 3), July 31–August 2, 1994, Estes Park, CO,
USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-356-5.
Google Books ID: SMuOAAAACAAJ and z2vTGQAACAAJ. OCLC: 32999554, 60271926, and
312293288. GBV-Identification (PPN): 194472310. Published June 1, 1995.
2933. L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution,
The Baldwin Effect and Function Optimization. In PPSN III [693], pages 5–15, 1994.
doi: 10.1007/3-540-58484-6_245. CiteSeerx : 10.1.1.18.2428.
2934. L. Darrell Whitley, Soraya Rana, John Dzubera, and Keith E. Mathias. Evaluating Evolution-
ary Algorithms. Artificial Intelligence, 85(1-2):245–276, August 1996, Elsevier Science Pub-
lishers B.V.: Essex, UK. doi: 10.1016/0004-3702(95)00124-7. CiteSeerx : 10.1.1.53.134.
2935. L. Darrell Whitley, David Edward Goldberg, Erick Cantú-Paz, Lee Spector, Ian C. Parmee,
and Hans-Georg Beyer, editors. Proceedings of the Genetic and Evolutionary Computation
Conference (GECCO’00), July 8–12, 2000, Riviera Hotel and Casino: Las Vegas, NV, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-708-0. Google
Books ID: GYRQAAAAMAAJ and o-8HAAAACAAJ. A joint meeting of the Ninth International
Conference on Genetic Algorithms (ICGA-2000) and the Fifth Annual Genetic Programming
Conference (GP-2000). See also [2928, 2986].
2936. Dirk Wiesmann, Ulrich Hammel, and Thomas Bäck. Robust Design of Multilayer Optical
Coatings by Means of Evolutionary Algorithms. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), 2(4):162–167, November 1998, IEEE Computer Society: Washington,
DC, USA. doi: 10.1109/4235.738986. INSPEC Accession Number: 6149966. See also [2937].
2937. Dirk Wiesmann, Ulrich Hammel, and Thomas Bäck. Robust Design of Multilayer Opti-
cal Coatings by Means of Evolutionary Strategies. Technical Report, Universität Dort-
mund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North
Rhine-Westphalia, Germany, March 31–November 8, 1998. Fully available at http://hdl.
handle.net/2003/5348 [accessed 2009-07-11]. CiteSeerx : 10.1.1.36.5624. See also [2936].
2938. Wikipedia – The Free Encyclopedia. Wikimedia Foundation, Inc.: San Francisco, CA, USA,
2009. Fully available at http://en.wikipedia.org/ [accessed 2009-06-08].
2939. Frank Wilcoxon. Individual Comparisons by Ranking Methods. Biometrics Bulletin, 1(6):80–
83, December 1945, Blackwell Publishing Ltd: Chichester, West Sussex, UK. Fully available
at http://sci2s.ugr.es/keel/pdf/algorithm/articulo/wilcoxon1945.pdf
and http://webspace.ship.edu/pgmarr/Geo441/Readings/Wilcoxon%201945
%20-%20Individual%20Comparisons%20by%20Ranking%20Methods.pdf [accessed 2009-
07-27].
2940. Herbert S. Wilf. Algorithms and Complexity. A K Peters, Ltd.: Natick, MA, USA, second
edition, December 2002. isbn: 1-56881-178-0. Google Books ID: jGD9pNFKI2UC. Library
of Congress Control Number (LCCN): 2002072765. LC Classification: QA63 .W55 2002.
2941. Claus O. Wilke. Evolutionary Dynamics in Time-Dependent Environments. PhD thesis,
Ruhr-Universität Bochum, Fakultät für Physik und Astronomie: Bochum, North Rhine-
Westphalia, Germany, Shaker Verlag GmbH: Aachen, North Rhine-Westphalia, Germany,
July 1999. isbn: 3-8265-6199-6. Fully available at http://wlab.biosci.utexas.
edu/~wilke/ps/PhD.ps.gz [accessed 2007-08-19].
2942. Claus O. Wilke. Adaptive Evolution on Neutral Networks. Bulletin of Mathemat-
ical Biology, 63(4):715–730, July 2001, Springer New York: New York, NY, USA.
doi: 10.1006/bulm.2001.0244. arXiv ID: physics/0101021v1. PubMed ID: 11497165.
2943. Daniel N. Wilke, Schalk Kok, and Albert A. Groenwold. Comparison of Linear and Classical
Velocity Update Rules in Particle Swarm Optimization: Notes on Diversity. International
Journal for Numerical Methods in Engineering, 70(8):962–984, 2007, Wiley Interscience:
Chichester, West Sussex, UK. doi: 10.1002/nme.1867.
2944. George C. Williams. Pleiotropy, Natural Selection, and the Evolution of Senescence. Evo-
lution – International Journal of Organic Evolution, 11(4):398–411, December 1957, Society
for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.2307/2406060. Reprinted in [2946].
2945. George C. Williams. Adaptation and Natural Selection: A Critique of Some Current Evo-
lutionary Thought. Princeton University Press: Princeton, NJ, USA, September 28, 1966.
isbn: 0-691-02615-7. Google Books ID: TztkT5lIGOwC. OCLC: 35230452. GBV-
Identification (PPN): 083639411 and 223362034.
2946. George C. Williams. Pleiotropy, Natural Selection, and the Evolution of Senescence. Science
of Aging Knowledge Environment (SAGE KE), 2001(1), October 3, 2001, American Associ-
ation for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press
(Stanford University): Cambridge, MA, USA. Reprint of [2944].
2947. Dominic Wilson and Devinder Kaur. Using Quotient Graphs to Model Neutrality in Evolu-
tionary Search. In GECCO’08 [1519], pages 2233–2238, 2008. doi: 10.1145/1388969.1389051.
2948. Edward Osborne Wilson. Sociobiology: The New Synthesis. Belknap Press: Cambridge, MA,
USA and Harvard University Press: Cambridge, MA, USA, 1975. See also [2949].
2949. Edward Osborne Wilson. Sociobiology: The New Synthesis. Belknap Press: Cambridge, MA,
USA, twenty-fifth anniversary edition edition, March 2000. isbn: 0674002350. See also
[2948].
2950. Garnett Wilson and Wolfgang Banzhaf. Discovery of Email Communication Networks
from the Enron Corpus with a Genetic Algorithm using Social Network Analysis. In
CEC’09 [1350], pages 3256–3263, 2009. doi: 10.1109/CEC.2009.4983357. INSPEC Acces-
sion Number: 10688952.
2951. P. B. Wilson and M. D. Macleod. Low Implementation Cost IIR Digital Filter Design using
Genetic Algorithms. In NASP’93 [1323], pages 4/1–4/8, 1993.
2952. Stewart W. Wilson. ZCS: A Zeroth Level Classifier System. Evolution-
ary Computation, 2(1):1–18, Spring 1994, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1994.2.1.1. Fully available at ftp://ftp.cs.bham.ac.uk/pub/
authors/T.Kovacs/lcs.archive/Wilson1994a.ps.gz and http://www.eskimo.
com/~wilson/ps/zcs.pdf [accessed 2010-12-15]. CiteSeerx : 10.1.1.38.6456.
2953. Stewart W. Wilson. Classifier Fitness Based on Accuracy. Evolutionary Computation, 3(2):
149–175, Summer 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.2.149.
Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/lcs.
archive/Wilson1995a.ps.gz and http://www.eskimo.com/~wilson/ps/xcs.
pdf [accessed 2010-12-15]. CiteSeerx : 10.1.1.38.6508.
2954. Stewart W. Wilson. Generalization in the XCS Classifier System. In GP’98 [1612], pages 665–
674, 1998. Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/
lcs.archive/Wilson1998a.ps.gz [accessed 2010-12-15]. CiteSeerx : 10.1.1.38.6472.
2955. Stewart W. Wilson. State of XCS Classifier System Research. In IWLCS’00 [1678], pages
63–82, 2000. Fully available at http://www.eskimo.com/~wilson/ps/state.ps.gz
[accessed 2010-12-15]. CiteSeerx : 10.1.1.54.1204.
2956. Stewart W. Wilson and David Edward Goldberg. A Critical Review of Classifier Systems.
In ICGA’89 [2414], pages 244–255, 1989. Fully available at http://www.eskimo.com/
~wilson/ps/wg.ps.gz [accessed 2007-09-12].
2957. Øjvind Winge. Wilhelm Johannsen: The Creator of the Terms Gene, Genotype, Phe-
notype and Pure Line. Journal of Heredity, Oxford Journals, 49(2):83–88, March 1958,
American Genetic Association: Newport, OR, USA. Fully available at http://jhered.
oxfordjournals.org/cgi/reprint/49/2/83.pdf [accessed 2008-08-21]. See also [1448].
2958. Hans Winkler. Verbreitung und Ursache der Parthenogenesis im Pflanzen- und Tier-
reiche. Verlag von Gustav Fischer: Jena, Thuringia, Germany, 1920. Fully available
at http://www.archive.org/details/verbreitungundur00wink [accessed 2009-06-28].
Google Books ID: Y7xUAAAAMAAJ and kZwZAAAAIAAJ. asin: B0018HWPMU.
2959. Gabriel Winter, Jacques Periaux, M. Galán, and P. Cuesta, editors. Genetic Algorithms
in Engineering and Computer Science, European Short Course on Genetic Algorithms and
Evolution Strategies (EUROGEN’95), December 1995, University of Las Palmas: Las Pal-
mas de Gran Canaria, Spain, Wiley Series in Computational Methods in Applied Sciences.
Wiley Interscience: Chichester, West Sussex, UK. isbn: 0-471-95859-X. Google Books
ID: WPRQAAAAMAAJ. OCLC: 35318862, 174238271, 246825979, and 639660095. Library
of Congress Control Number (LCCN): 96177148. GBV-Identification (PPN): 191484377
and 27870042X. LC Classification: QA76.87 .G46 1995.
2960. Paul C. Winter, G. Ivor Hickey, and Hugh L. Fletcher. Instant Notes in Genetics. BIOS
Scientific Publishers Ltd.: Oxford, UK, Taylor and Francis LLC: London, UK, and Science
Press: Běijīng, China, 1998–2006. isbn: 0-387-91562-1, 041537619X, 1-85996-166-5,
1-85996-262-9, and 703007307X. Google Books ID: -yhJHgAACAAJ, 966qAQAACAAJ,
VOjyJQAACAAJ, iKYFAAAACAAJ, and pB7PAQAACAAJ.
2961. Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations, The Morgan Kaufmann Series in Data Manage-
ment Systems. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, Octo-
ber 1999. isbn: 1558605525. Google Books ID: 6lVEKlrTq8EC, PtBQAAAAMAAJ, and
b9BQAAAAMAAJ. OCLC: 42420775, 42621057, and 300306645. Library of Congress Con-
trol Number (LCCN): 99046067. LC Classification: QA76.9.D343 W58 2000.
2962. Martin Wolpers, Ralf Klamma, and Erik Duval, editors. Proceedings of the EC-TEL
2007 Poster Session (EC-TEL’07 Posters), September 17–20, 2007, Crete, Greece, vol-
ume 280 in CEUR Workshop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Tech-
nische Hochschule (RWTH) Aachen, Sun SITE Central Europe: Aachen, North Rhine-
Westphalia, Germany. Fully available at http://sunsite.informatik.rwth-aachen.
de/Publications/CEUR-WS/Vol-280/ [accessed 2009-09-05]. See also [2962].
2963. David H. Wolpert and William G. Macready. No Free Lunch Theorems for Search. Technical
Report SFI-TR-95-02-010, Santa Fe Institute: Santa Fé, NM, USA, February 6, 1995. Fully
available at http://www.santafe.edu/research/publications/workingpapers/
95-02-010.pdf [accessed 2009-06-30]. CiteSeerx : 10.1.1.33.5447.
2964. David H. Wolpert and William G. Macready. No Free Lunch Theorems for Op-
timization. IEEE Transactions on Evolutionary Computation (IEEE-EC), 1(1):67–82,
April 1997, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.585893.
CiteSeerx : 10.1.1.39.6926.
2965. Laurence A. Wolsey. Integer Programming, volume 52 in Estimation, Simulation, and
Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley In-
terscience: Chichester, West Sussex, UK, 1998. isbn: 0-471-28366-5. Google Books
ID: x7RvQgAACAAJ. OCLC: 38976179 and 472310378. Library of Congress Control Num-
ber (LCCN): 98007296. GBV-Identification (PPN): 244286620. LC Classification: T57.74
.W67 1998.
2966. Jan Wolters. Simulated Annealing. GRIN Verlag GmbH: Munich, Bavaria, Germany, 2007.
doi: 10.3239/9783640298761. isbn: 3640303830. Google Books ID: pI9LGg8u09IC.
2967. Andreas Wombacher, Christian Huemer, and Markus Stolze, editors. Proceedings of 2006
IEEE Joint Conference on E-Commerce Technology and Enterprise Computing, E-Commerce
and E-Services (CEC/EEE’06), June 26–29, 2006, Westin San Francisco Airport: Mill-
brae, CA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2511-3.
Partly available at http://semanticweb.org/wiki/CEC/EEE2006 [accessed 2011-02-28].
OCLC: 72439855, 84651797, 232341468, 289020000, and 326786916. Library of
Congress Control Number (LCCN): 2006927609. Order No.: P2511. LC Classifica-
tion: HF5548.32 .I337 2006.
2968. D. F. Wong, Hon Wai Leong, and Chung Laung Liu. Simulated Annealing for VLSI Design,
volume 24 in Kluwer International Series in Engineering and Computer Science, Frontiers
in Logic Programming Architecture and Machine Design, Kluwer International Series in
Engineering and Computer Science: VLSI, Computer Architecture, and Digital Signal Pro-
cessing. Springer Netherlands: Dordrecht, Netherlands. isbn: 0898382564. Google Books
ID: 1ldJR6BDjkIC.
2969. Man Leung Wong and Kwong Sak Leung. Data Mining Using Grammar Based Genetic
Programming and Applications, volume 3 in Genetic Programming Series. Springer Sci-
ence+Business Media, Inc.: New York, NY, USA, January 2000. isbn: 079237746X. Google
Books ID: j8hQAAAAMAAJ. OCLC: 43031203.
2970. John R. Woodward. Evolving Turing Complete Representations. In CEC’03 [2395], pages
830–837, volume 2, 2003. doi: 10.1109/CEC.2003.1299753. Fully available at http://www.
cs.bham.ac.uk/~wbl/biblio/gp-html/Woodward_2003_Etcr.html [accessed 2008-11-
08]. CiteSeerx : 10.1.1.3.6859.
2971. Nimit Worakul, Wibul Wongpoowarak, and Prapaporn Boonme. Optimization in Develop-
ment of Acetaminophen Syrup Formulation. Drug Development and Industrial Pharmacy, 28
(3):345–351, March 2002, Informa Healthcare: London, UK. doi: 10.1081/DDC-120002850.
PubMed ID: 12026227. See also [2972].
2972. Nimit Worakul, Wibul Wongpoowarak, and Prapaporn Boonme. Optimization in Develop-
ment of Acetaminophen Syrup Formulation – Erratum. Drug Development and Industrial
Pharmacy, 28(8):1043–1045, September 2002, Informa Healthcare: London, UK. See also
[2971].
2973. Robert P. Worden. A Speed Limit for Evolution. Journal of Theoretical Biology, 176(1):137–
152, September 7, 1995, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Im-
print: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1006/jtbi.1995.0183.
Fully available at http://dspace.dial.pipex.com/jcollie/sle/ [accessed 2009-07-10].
2974. 5th Online World Conference on Soft Computing in Industrial Applications (WSC5), 2000.
World Federation on Soft Computing (WFSC).
2975. 6th Online World Conference on Soft Computing in Industrial Applications (WSC6), 2001.
World Federation on Soft Computing (WFSC).
2976. 7th Online World Conference on Soft Computing in Industrial Applications (WSC7), 2002.
World Federation on Soft Computing (WFSC).
2977. 8th Online World Conference on Soft Computing in Industrial Applications (WSC8), 2003.
World Federation on Soft Computing (WFSC).
2978. 9th Online World Conference on Soft Computing in Industrial Applications (WSC9), 2004.
World Federation on Soft Computing (WFSC).
2979. 10th Online World Conference on Soft Computing in Industrial Applications (WSC10), 2005.
World Federation on Soft Computing (WFSC).
2980. 11th Online World Conference on Soft Computing in Industrial Applications (WSC11),
September 18–October 6, 2006. World Federation on Soft Computing (WFSC).
2981. Alden H. Wright. Genetic Algorithms for Real Parameter Optimization. In FOGA’90 [2562],
pages 205–218, 1990. CiteSeerx : 10.1.1.25.5297.
2982. Alden H. Wright, Richard K. Thompson, and Jian Zhang. The Computational Com-
plexity of N-K Fitness Functions. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 4(4):373–379, November 2000, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.887236. Fully available at http://www.cs.umt.edu/u/wright/
papers/nkcomplexity.ps.gz [accessed 2009-02-26]. CiteSeerx : 10.1.1.40.3093. INSPEC
Accession Number: 6791340.
2983. Alden H. Wright, Michael D. Vose, Kenneth Alan De Jong, and Lothar M. Schmitt, editors.
Revised Selected Papers of the 8th International Workshop on Foundations of Genetic Al-
gorithms (FOGA’05), January 5–9, 2005, Aizu-Wakamatsu City, Japan, volume 3469/2005
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/11513575_1. Partly available at http://www.cs.umt.edu/foga05/ [ac-
cessed 2007-09-01]. Published August 22, 2005.
2984. Margaret H. Wright. Direct Search Methods: Once Scorned, Now Respectable. In Numer-
ical Analysis – Proceedings of the 16th Dundee Biennial Conference in Numerical Analy-
sis [1138], pages 191–208, 1995. Fully available at http://cm.bell-labs.com/cm/cs/
doc/96/4-02.ps.gz and http://www.math.wm.edu/~buckaroo/pubs/LeToTr00a/
LeToTr00a.pdf [accessed 2010-12-24]. CiteSeerx : 10.1.1.80.9399.
2985. Sewall Wright. The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evo-
lution. In Proceedings of the Sixth Annual Congress of Genetics [1459], pages 356–366,
volume 1, 1932. Fully available at http://www.blackwellpublishing.com/ridley/
classictexts/wright.pdf [accessed 2009-06-29].
2986. Annie S. Wu, editor. Proceedings of the 2000 Genetic and Evolutionary Computation Con-
ference Workshop Program (GECCO’00 WS), July 8, 2000. GECCO-2000 Bird-of-a-feather
workshops. See also [2928, 2935].
2987. Hsien-Chung Wu. Evolutionary Computation. self-published, February 2005. Fully available
at http://nknucc.nknu.edu.tw/~hcwu/pdf/evolec.pdf [accessed 2010-08-01].
2988. Nelson Wu. Differential Evolution for Optimisation in Dynamic Environments. Techni-
cal Report, Royal Melbourne Institute of Technology (RMIT) University, School of Com-
puter Science and Information Technology: Melbourne, VIC, Australia, November 2006.
CiteSeerx : 10.1.1.93.3234.
2989. Tin-Yu Wu, Guan-Hsiung Liaw, Sing-Wei Huang, Wei-Tsong Lee, and Chung-Chi Wu. A
GA-based Mobile RFID Localization Scheme for Internet of Things. Personal and Ubiquitous
Computing, Springer-Verlag London Limited: London, UK. doi: 10.1007/s00779-011-0398-9.
2990. Zixin Wu, Karthik Gomadam, Ajith Ranabahu, Amit P. Sheth, and John A. Miller. Auto-
matic Composition of Semantic Web Services Using Process Mediation. In ICEIS’07 [494],
pages 453–462, 2007. Fully available at http://knoesis.cs.wright.edu/library/
publications/download/SWSChallenge-TR-METEOR-S-Feb2007.pdf [accessed 2010-12-
16]. See also [1098].
2991. Roman Wyrzykowski, Jack Dongarra, Marcin Paprzycki, and Jerzy Waśniewski, editors.
Proceedings of the 5th International Conference on Parallel Processing and Applied Math-
ematics, revised papers (PPAM’03), September 7–10, 2003, Częstochowa, Poland, volume
3019/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b97218. isbn: 3-540-21946-3. Library of Congress Control Number
(LCCN): 2004104391. GBV-Identification (PPN): 386648921. LC Classification: QA76.58
.P69 2003.
2992. Fatos Xhafa and Ajith Abraham, editors. Metaheuristics for Scheduling in Indus-
trial and Manufacturing Applications, volume 128 in Studies in Computational Intel-
ligence. Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-78985-7.
isbn: 3-540-78984-7. Google Books ID: sn4ksQOnE5MC. Library of Congress Control
Number (LCCN): 2008924363. Libri-Number: 3608646.
2993. Fatos Xhafa, Francisco Herrera Triguero, Mario Köppen, and Jose Manuel Bénitez, editors.
8th International Conference on Hybrid Intelligent Systems (HIS’08), October 10–12, 2008,
Barcelona, Catalonia, Spain. IEEE Computer Society: Piscataway, NJ, USA. Partly available
at http://his2008.lsi.upc.edu/ [accessed 2009-03-02]. Library of Congress Control Number
(LCCN): 2008928438. IEEE Computer Society Order Number P3326. BMS Part Number
CFP08360-CDR.
2994. Bowei Xi, Zhen Liu, Mukund Raghavachari, Cathy H. Xia, and Li Zhang. A Smart Hill-
Climbing Algorithm for Application Server Configuration. In WWW’04 [898], pages 287–296,
2004. doi: 10.1145/988672.988711. CiteSeerx : 10.1.1.2.1211. Session: Server performance
and scalability.
2995. Proceedings of the Third International Workshop on Advanced Computational Intelligence
(IWACI’10), August 25–27, 2010, Xi’an Jiaotong-Liverpool University: Sūzhōu, Jiāngsū,
China.
2996. Huayang Xie. An Analysis of Selection in Genetic Programming. PhD thesis, Vic-
toria University of Wellington: Wellington, New Zealand, 2009. Fully available
at http://researcharchive.vuw.ac.nz/bitstream/handle/10063/837/thesis.
pdf [accessed 2011-11-25].
2997. Shengwu Xiong, Weiwu Wang, and Feng Li. A New Genetic Programming Approach in Sym-
bolic Regression. In ICTAI’03 [1367], pages 161–167, 2003. doi: 10.1109/TAI.2003.1250185.
INSPEC Accession Number: 7862118.
2998. Bin Xu, Tao Li, Zhifeng Gu, and Gang Wu. SWSDS: Quick Web Service Dis-
covery and Composition in SEWSIP. In CEC/EEE’06 [2967], pages 449–451, 2006.
doi: 10.1109/CEC-EEE.2006.85. INSPEC Accession Number: 9189345. See also [318, 1146,
1797, 3010, 3011].
2999. Lei Xu, Adam Krzyżak, and Erkki Oja. Rival Penalized Competitive Learning for Clus-
tering Analysis, RBF Net, and Curve Detection. IEEE Transactions on Neural Net-
works, 4(4):636–649, July 1993, IEEE Computer Society Press: Los Alamitos, CA,
USA. doi: 10.1109/72.238318. Fully available at http://www.cse.cuhk.edu.hk/˜lxu/
papers/journal/IEEETNNrpcl93.PDF [accessed 2009-08-28].
3000. Lihong Xu, Erik D. Goodman, and Yongsheng Ding, editors. Proceedings of the First
ACM/SIGEVO Summit on Genetic and Evolutionary Computation (GEC’09), June 12–14,
2009, Hua-Ting Hotel & Towers: Shànghǎi, China. ACM Press: New York, NY, USA. Partly
available at http://www.sigevo.org/gec-summit-2009/. ACM Order No.: 910093.
3001. Yong Xu, Sancho Salcedo-Sanz, and Xin Yao. Editorial to the Special Issue on Na-
ture Inspired Approaches to Networks and Telecommunications. International Journal
of Computational Intelligence and Applications (IJCIA), 5(2):iii–viii, June 2005, World
Scientific Publishing Co.: Singapore and Imperial College Press Co.: London, UK.
doi: 10.1142/S1469026805001532.
3002. Yong Xu, Sancho Salcedo-Sanz, and Xin Yao. Metaheuristic Approaches to Traffic Groom-
ing in WDM Optical Networks. International Journal of Computational Intelligence and
Applications (IJCIA), 5(2):231–249, June 2005, World Scientific Publishing Co.: Singapore
and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026805001593. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/IJCIA_TrafficGrooming2005.
pdf [accessed 2009-08-10].
3003. XXX Seminário Integrado de Software e Hardware in Proceedings of XXIII Congresso da
Sociedade Brasileira de Computação (SEMISH’03), August 4–5, 2003.

3004. S. Yaacob, Meenakshi Nagarajan, and A. Chekima, editors. Proceedings of Interna-
tional Conference on Artificial Intelligence in Engineering and Technology (ICAIET’02),
June 17–18, 2002, University of Malaysia Sabah: Kota Kinabalu, Sabah, Malaysia.
isbn: 983-2188-92-X. Google Books ID: ejBaPgAACAAJ.
3005. Hirozumi Yamaguchi, Kozo Okano, Teruo Higashino, and Kenichi Taniguchi. Synthesis of
Protocol Entities’ Specifications from Service Specifications in a Petri Net Model with Reg-
isters. In ICDCS’95 [1382], pages 510–517, 1995. CiteSeerx : 10.1.1.57.1320. See also
[872].
3006. Lidia A. R. Yamamoto and Christian F. Tschudin. Experiments on the Automatic Evo-
lution of Protocols Using Genetic Programming. In WAC’05 [2603], pages 13–28, 2005.
doi: 10.1007/11687818_2. Fully available at http://cn.cs.unibas.ch/people/ly/doc/
wac2005-lyct.pdf [accessed 2008-06-20]. See also [3007].
3007. Lidia A. R. Yamamoto and Christian F. Tschudin. Experiments on the Automatic Evo-
lution of Protocols Using Genetic Programming. Technical Report CS-2005-002, Univer-
sity of Basel, Computer Science Department, Computer Networks Group: Basel, Switzer-
land, April 21, 2005. Fully available at http://cn.cs.unibas.ch/people/ly/doc/
wac2005tr-lyct.pdf [accessed 2008-06-20]. See also [3006, 3008].
3008. Lidia A. R. Yamamoto and Christian F. Tschudin. Genetic Evolution of Pro-
tocol Implementations and Configurations. In SelfMan’05 [1839], 2005. Fully
available at http://cn.cs.unibas.ch/pub/doc/2005-selfman.pdf and http://
ksuseer1.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.8032 [accessed 2009-07-22].
CiteSeerx : 10.1.1.86.8032. See also [3007].
3009. Lidia A. R. Yamamoto, Daniel Schreckling, and Thomas Meyer. Self-Replicating
and Self-Modifying Programs in Fraglets. In BIONETICS’07 [1342], 2007.
doi: 10.1109/BIMNICS.2007.4610104. Fully available at http://cn.cs.unibas.
ch/people/ly/doc/bionetics2007-ysm.pdf [accessed 2008-05-04]. INSPEC Accession
Number: 10186239.
3010. Yixin Yan, Bin Xu, and Zhifeng Gu. Automatic Service Composition Using AND/OR Graph.
In CEC/EEE’08 [1346], pages 335–338, 2008. doi: 10.1109/CECandEEE.2008.124. INSPEC
Accession Number: 10475105. See also [204, 1146, 1797, 2998, 3011].
3011. Yixin Yan, Bin Xu, Zhifeng Gu, and Sen Luo. A QoS-Driven Approach for Semantic Service
Composition. In CEC’09 [1245], pages 523–526, 2009. doi: 10.1109/CEC.2009.44. INSPEC
Accession Number: 10839127. See also [1146, 1571, 1797, 2998, 3010].
3012. Yong Yan, Peng Wang, Chen Wang, Haofeng Zhou, Wei Wang, and Baile Shi. ANNE: An
Efficient Framework on View Selection Problem. In APWeb’04 [3045], pages 384–394, 2004.
doi: 10.1007/978-3-540-24655-8_41.
3013. Yuhong Yan and Xianrong Zheng. A Planning Graph Based Algorithm for Semantic Web
Service Composition. In CEC/EEE’08 [1346], pages 339–342, 2008. doi: 10.1109/CECan-
dEEE.2008.135. INSPEC Accession Number: 10475106. See also [204].
3014. Ang Yang, Yin Shan, and Lam Thu Bui, editors. Success in Evolutionary Computation,
volume 92/2008 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Hei-
delberg, January 16, 2008. doi: 10.1007/978-3-540-76286-7. isbn: 3-540-76285-X and
3-540-76286-8. Google Books ID: wytCwGYoAowC. OCLC: 181090645, 244024657,
244040858, 261325352, and 300248166. Library of Congress Control Number
(LCCN): 2007939404.
3015. Jian Yang, Kamalakar Karlapalem, and Qing Li. A Framework for Designing Material-
ized Views in Data Warehousing Environment. In ICDCS’97 [1363], pages 458–465, 1997.
doi: 10.1109/ICDCS.1997.603380. CiteSeerx : 10.1.1.54.5417. INSPEC Accession Num-
ber: 5622820.
3016. Jian Yang, Kamalakar Karlapalem, and Qing Li. Algorithms for Materialized View
Design in Data Warehousing Environment. In VLDB’97 [1434], pages 136–145,
1997. Fully available at http://www.vldb.org/conf/1997/P136.PDF [accessed 2011-04-13].
CiteSeerx : 10.1.1.102.838.
3017. Jin-Hyuk Yang and Chan-Jin Chung. ASVMRT: Materialized View Selection Algo-
rithm in Data Warehouse. Journal of Information Processing Systems (JIPS), 2(2):67–75,
June 2006, Korea Information Processing Society (KIPS): Seoul, South Korea. Fully available
at http://jips-k.org/dlibrary/JIPS_v02_no2_paper1.pdf [accessed 2006-04-12].
3018. Shengxiang Yang and Changhe Li. A Clustering Particle Swarm Optimizer for Locating and
Tracking Multiple Optima in Dynamic Environments. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 14(6):959–974, December 2010, IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/TEVC.2010.2046667. INSPEC Accession Number: 11675042.
3019. Shengxiang Yang, Yew-Soon Ong, and Yaochu Jin, editors. Evolutionary Computa-
tion in Dynamic and Uncertain Environments, volume 51/2007 in Studies in Compu-
tational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2007. isbn: 3-540-49772-2
and 3-540-49774-9. Partly available at http://www.soft-computing.de/Jin_
CEC04T.pdf.gz [accessed 2009-07-12]. Google Books ID: 8w HGQAACAAJ and ciktAAAACAAJ.
OCLC: 77013308, 180965995, and 318293101.
3020. Zhenyu Yang, Ke Tang, and Xin Yao. Large Scale Evolutionary Optimization using Coop-
erative Coevolution. Information Sciences – Informatics and Computer Science Intelligent
Systems Applications: An International Journal, 178(15), August 1, 2008, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.ins.2008.02.017. Fully available at http://
nical.ustc.edu.cn/papers/yangtangyao_ins.pdf [accessed 2009-08-07].
3021. Ziheng Yang and Joseph P. Bielawski. Statistical Methods for Detecting Molecular Adapta-
tion. Trends in Ecology and Evolution (TREE), 15(12):496–503, December 1, 2000, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands and Cell Press: St. Louis, MO, USA,
Katrina A. Lythgoe, editor. doi: 10.1016/S0169-5347(00)01994-7. Fully available at http://
citeseer.ist.psu.edu/old/yang00statistical.html [accessed 2009-07-10].
3022. Jinhui Yao, Shiping Chen, Chen Wang, David Levy, and John Zic. Accountability as a Service
for the Cloud: From Concept to Implementation with BPEL. In SERVICES Cup’10 [2457],
pages 91–98, 2010. doi: 10.1109/SERVICES.2010.79. INSPEC Accession Number: 11535248.
3023. Xin Yao, editor. Progress in Evolutionary Computation, AI’93 (Melbourne, Victoria, Aus-
tralia, 1993-11-16) and AI’94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on
Evolutionary Computation, Selected Papers, 1993–1994, Australia, volume 956/1995 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-60154-6. isbn: 3-540-60154-6. Google Books ID: -17wqjv133EC.
OCLC: 32892128, 150398021, 246326423, and 633901751.
3024. Xin Yao. An Empirical Study of Genetic Operators in Genetic Algorithms. Micropro-
cessing and Microprogramming, 38(1-5):707–714, September 1993, Elsevier Science Pub-
lishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publish-
ers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0165-6074(93)90215-7. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/euro93_final.pdf [accessed 2011-11-10].
CiteSeerx : 10.1.1.24.7090.

3025. Xin Yao, editor. Evolutionary Computation: Theory and Applications. World Scientific
Publishing Co.: Singapore, January 28, 1996. isbn: 981-02-2306-4. Google Books
ID: GP7ChxbJOE4C.
3026. Xin Yao, Yong Liu, and Guangming Lin. Evolutionary Programming Made Faster. IEEE
Transactions on Evolutionary Computation (IEEE-EC), 3(2):82–102, July 1999, IEEE Com-
puter Society: Washington, DC, USA. doi: 10.1109/4235.771163. Fully available at http://
eprints.kfupm.edu.sa/38413/ and http://www.u-aizu.ac.jp/˜yliu/
publication/tec22r2_online.ps.gz [accessed 2009-08-07]. CiteSeerx : 10.1.1.14.1492.
INSPEC Accession Number: 6290499.
3027. Xin Yao, Feng-Sheng Wang, K. Padmanabhan, and Sancho Salcedo-Sanz. Hybrid Evolution-
ary Approaches to Terminal Assignment in Communications Networks. In Recent Advances in
Memetic Algorithms [1205], pages 129–159. Springer-Verlag GmbH: Berlin, Germany, 2005.
doi: 10.1007/3-540-32363-5_7.
3028. Xin Yao, Edmund K. Burke, José Antonio Lozano, Jim Smith, Juan Julián Merelo-
Guervós, John A. Bullinaria, Jonathan E. Rowe, Peter Tiño, Ata Kabán, and Hans-
Paul Schwefel, editors. Proceedings of the 8th International Conference on Parallel Prob-
lem Solving from Nature (PPSN VIII), September 18–22, 2004, Birmingham, UK, volume
3242/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b100601. isbn: 3-540-23092-0. Google Books ID: dsI61Gy0 yAC.
OCLC: 56583354, 57501425, and 249904829. Library of Congress Control Number
(LCCN): 2004112163.
3029. Yiyu Yao, Zhongzhi Shi, Yingxu Wang, and Witold Kinsner, editors. Proceedings of the
Fifth IEEE International Conference on Cognitive Informatics (ICCI’06), July 17–19, 2006,
Běijı̄ng, China. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-0475-4.
3030. Frank Yates. The Design and Analysis of Factorial Experiments. Imperial Bureau
of Soil Science, Commonwealth Agricultural Bureaux: Harpenden, England, UK, 1937–
1952. isbn: 0851982204. Google Books ID: 5-7RAAAAMAAJ and YW1OAAAAMAAJ.
asin: B00086SIZ0. Technical Communication No. 35.
3031. Tao Ye. Large-Scale Network Parameter Configuration using On-line Simulation Framework.
PhD thesis, Rensselaer Polytechnic Institute, Computer and System Engineering, Depart-
ment of Electrical: Troy, NY, USA, ProQuest: Ann Arbor, MI, USA, March 2003, Shiv-
kumar Kalyanaraman, Advisor, Shivkumar Kalyanaraman, Biplab Sikdar, Christopher D.
Carothers, and Aparna Gupta, Committee members. Fully available at http://www.
ecse.rpi.edu/Homepages/shivkuma/research/papers/tao-phd-thesis.pdf [ac-
cessed 2008-10-16]. Order No.: AAI3088530. See also [3032].

3032. Tao Ye and Shivkumar Kalyanaraman. An Adaptive Random Search Algorithm for Opti-
mizing Network Protocol Parameters. Technical Report, Rensselaer Polytechnic Institute,
Computer and System Engineering, Department of Electrical: Troy, NY, USA, 2001. Fully
available at http://www.ecse.rpi.edu/Homepages/shivkuma/research/papers/
tao-icnp2001.pdf [accessed 2008-10-16]. CiteSeerx : 10.1.1.24.8138. See also [3031].
3033. Gary G. Yen, Simon M. Lucas, Gary B. Fogel, Graham Kendall, Ralf Salomon, Byoung-Tak
Zhang, Carlos Artemio Coello Coello, and Thomas Philip Runarsson, editors. Proceedings
of the IEEE Congress on Evolutionary Computation (CEC’06), 2006, Sheraton Vancouver
Wall Centre Hotel: Vancouver, BC, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-9487-9. Google Books
ID: 4j5kNwAACAAJ and ZP7RPQAACAAJ. OCLC: 181016874 and 192047219. Catalogue
no.: 04TH8846. Part of [1341].
3034. John Jung-Woon Yoo, Soundar R. T. Kumara, and Dongwon Lee. A Web Service Composi-
tion Framework Using Integer Programming with Non-functional Objectives and Constraints.
In CEC/EEE’08 [1346], pages 347–350, 2008. doi: 10.1109/CECandEEE.2008.144. INSPEC
Accession Number: 10475108. See also [204, 662, 1998, 1999, 2064, 2066, 2067].
3035. Hee Yong Youn and We-Duke Cho, editors. Proceedings of the 10th International Conference
on Ubiquitous Computing (UbiComp’08), September 21–24, 2008, COEX, World Trade Cen-
ter: Gangnam-gu, Seoul, Korea, volume 344 in ACM International Conference Proceeding
Series (AICPS). ACM Press: New York, NY, USA.
3036. Marshall C. Yovits, editor. Advances in Computers, volume 31 in Advances in Computers.
Academic Press Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands, October 1990. isbn: 0-12-012131-X. Google Books
ID: hIZDUKGj1w8C. OCLC: 23189502, 444245898, and 493579034. Library of Congress
Control Number (LCCN): 59-15761.
3037. Marshall C. Yovits, George T. Jacobi, and Gordon D. Goldstein, editors. Self-Organizing
Systems (Proceedings of the conference sponsored by the Information Systems Branch of the
Office of Naval Research and the Armour Research Foundation of the Illinois Institute of
Technology.), May 22–24, 1962, Chicago, IL, USA. Spartan Books: Washington, DC, USA.
asin: B000GXZFFG.
3038. Fei Yu, editor. Proceedings of the International Symposium on Electronic Commerce and Secu-
rity (ISECS’08), August 3–5, 2008, Guǎngzhōu, Guǎngdōng, China. isbn: 0-7695-3258-6.
Google Books ID: soBiPgAACAAJ. OCLC: 253559152 and 262691080.
3039. Gwoing Tina Yu. Program Evolvability Under Environmental Variations and Neutrality. In
ECAL’07 [852], pages 835–844, 2007. doi: 10.1007/978-3-540-74913-4_84. See also [3040].
3040. Gwoing Tina Yu. Program Evolvability Under Environmental Variations and Neutrality.
In GECCO’07-II [2700], pages 2973–2978, 2007. doi: 10.1145/1274000.1274041. Workshop
Session – Evolution of natural and artificial systems - metaphors and analogies in single and
multi-objective problems. See also [3039].
3041. Gwoing Tina Yu and Peter J. Bentley. Methods to Evolve Legal Phenotypes. In PPSN
V [866], pages 280–291, 1998. doi: 10.1007/BFb0056843. Fully available at http://www.
cs.ucl.ac.uk/staff/p.bentley/YUBEC2.pdf [accessed 2010-08-06].
3042. Gwoing Tina Yu and Julian Francis Miller. Finding Needles in Haystacks Is Not Hard
with Neutrality. In EuroGP’02 [977], pages 46–54, 2002. doi: 10.1007/3-540-45984-7_2.
Fully available at http://www.cs.mun.ca/˜tinayu/index_files/addr/public_
html/EuroGP2002.pdf [accessed 2009-07-10]. CiteSeerx : 10.1.1.5.1303.
3043. Gwoing Tina Yu, Lawrence Davis, Cem M. Baydar, and Rajkumar Roy, editors. Evolu-
tionary Computation in Practice, volume 88/2008 in Studies in Computational Intelligence.
Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-75771-9. Google Books
ID: om-uG-vk6GgC. Library of Congress Control Number (LCCN): 2007940149. Series
editor: Janusz Kacprzyk.
3044. Jeffrey Xu Yu, Xin Yao, Chi-Hon Choi, and Gang Gou. Materialized View Selection as Con-
strained Evolutionary Optimization. IEEE Transactions on Systems, Man, and Cybernetics –
Part C: Applications and Reviews, 33(4):458–467, November 2003, IEEE Systems, Man, and
Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCC.2003.818494. Fully avail-
able at http://www.cs.bham.ac.uk/˜xin/papers/published_tsmc_view_nov03.
pdf [accessed 2009-04-09]. CiteSeerx : 10.1.1.6.4190. INSPEC Accession Number: 7782484.
3045. Jeffrey Xu Yu, Xuemin Lin, Hongjun Lu, and Yanchun Zhang, editors. Proceedings of
the 6th Asia-Pacific Web Conference on Advanced Web Technologies and Applications (AP-
Web’04), April 14–17, 2004, Hángzhōu, Zhèjiāng, China, volume 3007 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96838.
isbn: 3-540-21371-6. Google Books ID: yHylerhe4uIC. Library of Congress Control
Number (LCCN): 2004102546.
3046. Tian-Li Yu, Scott Santarelli, and David Edward Goldberg. Military Antenna Design Using
a Simple Genetic Algorithm and hBOA. In Scalable Optimization via Probabilistic Model-
ing – From Algorithms to Applications [2158], Chapter 12, pages 275–289. Springer-Verlag:
Berlin/Heidelberg, 2006. doi: 10.1007/978-3-540-34954-9.
3047. Jing Yuan, Yu Zheng, Chengyang Zhang, Wenlei Xie, Xing Xie, Guangzhong Sun, and Yan
Huang. T-Drive: Driving Directions Based on Taxi Trajectories. In ACM SIGSPATIAL GIS
2010 [35], pages 99–108, 2010. doi: 10.1145/1869790.1869807.
3048. Deniz Yuret and Michael de la Maza. Dynamic Hill Climbing: Overcoming the
Limitations of Optimization Techniques. In TAINN’93 [1643], pages 208–212, 1993.
Fully available at http://www.denizyuret.com/pub/tainn93.html [accessed 2010-07-23].
CiteSeerx : 10.1.1.53.6367.

3049. Lotfi A. Zadeh, Youakim Badr, Ajith Abraham, Yukio Ohsawa, Richard Chbeir, Fernando
Ferri, Mario Köppen, and Dominique Laurent, editors. Proceedings of the 5th IEEE/ACM
International Conference on Soft Computing as Transdisciplinary Science and Technology
(CSTST’08), October 27–31, 2008, University of Cergy-Pontoise: Cergy-Pontoise, France.
Association for Computing Machinery (ACM): New York, NY, USA. Library of Congress
Control Number (LCCN): 2009417688.
3050. Vladimir Zakian. New Formulation for the Method of Inequalities. Proceedings of the Insti-
tution of Electrical Engineers, 126:579–584, 1979, Institution of Electrical Engineers (IEE):
London, UK and Institution of Engineering and Technology (IET): Stevenage, Herts, UK.
See also [3051].
3051. Vladimir Zakian. New Formulation for the Method of Inequalities. In Madan G. Singh,
editor, Systems and Control Encyclopedia [2504], pages 3206–3215, volume 5. Pergamon
Press: Oxford, UK, 1987. See also [3050].
3052. Vladimir Zakian. Perspectives on the Principle of Matching and the Method of Inequali-
ties. Control Systems Centre Report 769, University of Manchester Institute of Science and
Technology (UMIST), Control Systems Centre: Manchester, UK, 1992. See also [3053].
3053. Vladimir Zakian. Perspectives on the Principle of Matching and the Method of Inequalities.
International Journal of Control, 65(1):147–175, September 1996, Taylor and Francis LLC:
London, UK. doi: 10.1080/00207179608921691. See also [3052].
3054. Vladimir Zakian and U. Al-Naib. Design of Dynamical and Control Systems by the Method
of Inequalities. IEE Proceedings D: Control Theory – Control Theory and Applications, 120
(11):1421–1427, 1973, Institution of Engineering and Technology (IET): Stevenage, Herts,
UK and Institution of Electrical Engineers (IEE): London, UK.
3055. Ali M. S. Zalzala, editor. First International Conference on Genetic Algorithms in Engineer-
ing Systems: Innovations and Applications (GALESIA’95), September 12–14, 1995, Sheffield,
UK, volume 414 in IET Conference Publications. Institution of Engineering and Technology
(IET): Stevenage, Herts, UK.
3056. Ali M. S. Zalzala, editor. Proceedings of the Second IEE/IEEE International Conference
On Genetic Algorithms in Engineering Systems: Innovations and Applications (GALE-
SIA’97), September 2–4, 1997, University of Strathclyde: Glasgow, Scotland, UK, volume
1997 (CP446) in IET Conference Publications. Institution of Engineering and Technology
(IET): Stevenage, Herts, UK. isbn: 0-85296-693-8. Google Books ID: uBPdAAAACAAJ.
OCLC: 37864166, 38989931, 84640200, 288957978, and 312682751. coden: IECPB4.
3057. Maciej Zaremba, Tomas Vitvar, Matthew Moran, Thomas Haselwanter, and Adina Sirbu.
WSMX Discovery for SWS Challenge. In Third Workshop of the Semantic Web Service
Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and
Discovery [2690], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Athens/papers/DERI-sws-challenge-3.pdf [accessed 2010-12-16]. See also [582,
1210, 1940, 3058, 3059].
3058. Maciej Zaremba, Tomas Vitvar, Matthew Moran, Marco Brambilla, Stefano Ceri, Dario
Cerizza, Emanuele Della Valle, Federico Michele Facca, and Christina Tziviskou. Towards
Semantic Interoperability – In-depth Comparison of Two Approaches to Solving Seman-
tic Web Service Challenge Mediation Tasks. In ICEIS’07 [494], pages 413–421, volume
4, 2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/2-DERI-Milano%20comparison.pdf [accessed 2010-12-16]. See also [582,
1210, 1940, 3057, 3059].
3059. Maciej Zaremba, Maximilian Herold, Raluca Zaharia, and Tomas Vitvar. Data and Pro-
cess Mediation Support for B2B Integration. In EON-SWSC’08 [1023], 2008. Fully avail-
able at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/ [accessed 2010-12-16]. See also [582, 1210, 1940, 2166, 3057, 3058].
3060. Marvin Zelen and Norman C. Severo. Probability Functions. In Handbook of Mathemati-
cal Functions with Formulas, Graphs, and Mathematical Tables [2606], Chapter 26. Dover
Publications: Mineola, NY, USA, 1964.
3061. Zhi-hui Zhan and Jun Zhang. Discrete Particle Swarm Optimization for Multiple
Destination Routing Problems. In EvoWorkshops’09 [1052], pages 117–122, 2009.
doi: 10.1007/978-3-642-01129-0_15.
3062. Chuan Zhang and Jian Yang. Genetic Algorithm for Materialized View Selec-
tion in Data Warehouse Environments. In DaWaK’99 [1924], pages 116–125, 1999.
doi: 10.1007/3-540-48298-9_12.
3063. Chuan Zhang, Xin Yao, and Jian Yang. Evolving Materialized Views in Data Warehouse.
In CEC’99 [110], pages 823–829, 1999. doi: 10.1109/CEC.1999.782507. Fully available
at http://www.cs.bham.ac.uk/˜xin/papers/ZhangYaoYangCEC99.pdf [accessed 2011-
04-09]. CiteSeerx : 10.1.1.45.4743 and 10.1.1.63.5130. INSPEC Accession Num-
ber: 6338891.
3064. Chuan Zhang, Xin Yao, and Jian Yang. An Evolutionary Approach to Materialized Views
Selection in a Data Warehouse Environment. IEEE Transactions on Systems, Man, and
Cybernetics – Part C: Applications and Reviews, 31(3):282–294, August 2001, IEEE Sys-
tems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/5326.971656.
CiteSeerx : 10.1.1.13.9065. INSPEC Accession Number: 7141147.
3065. Du Zhang, Yingxu Wang, and Witold Kinsner, editors. Proceedings of the Sixth IEEE Interna-
tional Conference on Cognitive Informatics (ICCI’07), August 6–8, 2007, Lake Tahoe, CA,
USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-1327-3.
3066. Guoqiang Peter Zhang. Neural Networks for Classification: A Survey. IEEE Trans-
actions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, 30(4):
451–462, 2000, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/5326.897072. Fully available at http://vis.lbl.gov/˜romano/mlgroup/
papers/neural-networks-survey.pdf [accessed 2010-07-24].
3067. Jing Zhang, Weiran Nie, Mark Panahi, Yichin Chang, and Kwei-Jay Lin. Business
Process Composition with QoS Optimization. In CEC’09 [1245], pages 499–502, 2009.
doi: 10.1109/CEC.2009.81. INSPEC Accession Number: 10839133. See also [1571, 2264,
3073, 3074].
3068. Liang-Jie Zhang, editor. Proceedings of the 5th World Conference on Services: Part I
(SERVICES’09-I), July 6–10, 2009, Los Angeles, CA, USA. IEEE Computer Society: Pis-
cataway, NJ, USA.
3069. Liang-Jie Zhang, Wil van der Aalst, and Patrick C. K. Hung, editors. Proceedings of the IEEE
International Conference on Services Computing (SCC’07), July 9–13, 2007, Salt Lake City,
UT, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2925-9.
Library of Congress Control Number (LCCN): 2007927952. Order No.: P2925.
3070. P. Zhang and A.H. Coonick. Coordinated Synthesis of PSS Parameters in Multi-Machine
Power Systems Using the Method of Inequalities Applied to Genetic Algorithms. IEEE Trans-
actions on Power Systems, 15(2):811–816, May 2000, IEEE Power & Energy Society (PES):
Piscataway, NJ, USA. doi: 10.1109/59.867178. Fully available at http://www.lania.mx/
˜ccoello/EMOO/zhang00.pdf.gz [accessed 2008-11-14]. CiteSeerx : 10.1.1.8.9768.
3071. Qingzhou Zhang, Xia Sun, and Ziqiang Wang. An Efficient MA-Based Materialized Views
Selection Algorithm. In CASE’09 [2660], pages 315–318, 2009. doi: 10.1109/CASE.2009.111.
INSPEC Accession Number: 10813304.
3072. Shichao Zhang and Ray Jarvis, editors. Advances in Artificial Intelligence. Proceedings of the
18th Australian Joint Conference on Artificial Intelligence (AI’05), December 5–9, 2005, Uni-
versity of Technology, Sydney (UTS): Sydney, NSW, Australia, volume 3809/2005 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11589990. isbn: 3-540-30462-2.
Library of Congress Control Number (LCCN): 2005936732.
3073. Yue Zhang, Tao Yu, Krishna Raman, and Kwei-Jay Lin. Strategies for Efficient Syntactical
and Semantic Web Services Discovery and Composition. In CEC/EEE’06 [2967], pages 452–
454, 2006. doi: 10.1109/CEC-EEE.2006.84. INSPEC Accession Number: 9189346. See also
[318, 2264, 3067, 3074].
3074. Yue Zhang, Krishna Raman, Mark Panahi, and Kwei-Jay Lin. Heuristic-based Service Com-
position for Business Processes with Branching and Merging. In CEC/EEE’07 [1344], pages
525–528, 2007. doi: 10.1109/CEC-EEE.2007.53. INSPEC Accession Number: 9868639. See
also [319, 2264, 3067, 3073].
3075. R. T. Zheng, N. Q. Ngo, P. Shum, S. C. Tjin, and L. N. Binh. A Staged Continuous Tabu
Search Algorithm for the Global Optimization and its Applications to the Design of Fiber
Bragg Gratings. Computational Optimization and Applications, 30(3):319–335, March 2005,
Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht,
Netherlands. doi: 10.1007/s10589-005-4563-9.
3076. Shijue Zheng, Shaojun Xiong, Yin Huang, and Shixiao Wu. Using Methods of Association
Rules Mining Optimization in Web-Based Mobile-Learning System. In ISECS’08 [3038],
pages 967–970, 2008. doi: 10.1109/ISECS.2008.223. INSPEC Accession Number: 10183507.
3077. Yu Zheng, Quannan Li, Yukun Chen, Xing Xie, and Wei-Ying Ma. Understand-
ing Mobility based on GPS Data. In UbiComp’08 [3035], pages 312–321, 2008.
doi: 10.1145/1409635.1409677. See also [1892, 3078].
3078. Yu Zheng, Like Liu, Longhao Wang, and Xing Xie. Learning Transportation Mode from
Raw GPS Data for Geographic Applications on the Web. In WWW’08 [140], pages 247–256,
2008. doi: 10.1145/1367497.1367532. Fully available at http://www2008.org/papers/
pdf/p247-zhengA.pdf [accessed 2011-02-19]. See also [1892, 3077].
3079. Jinghui Zhong, Xiaomin Hu, Jun Zhang, and Min Gu. Comparison of Performance
between Different Selection Strategies on Simple Genetic Algorithms. In CIMCA-
IAWTIC’06 [1923], pages 1115–1121, 2006. doi: 10.1109/CIMCA.2005.1631619. Fully avail-
able at http://www.ee.cityu.edu.hk/˜jzhang/papers/zhongjinghui06.pdf [accessed 2010-08-03].
CiteSeerx : 10.1.1.140.3747. INSPEC Accession Number: 9109532.

3080. Chi Zhou, Weimin Xiao, Thomas M. Tirpak, and Peter C. Nelson. Evolving Accurate and
Compact Classification Rules with Gene Expression Programming. IEEE Transactions on
Evolutionary Computation (IEEE-EC), 7(6):519–531, December 2003, IEEE Computer So-
ciety: Washington, DC, USA. doi: 10.1109/TEVC.2003.819261. INSPEC Accession Num-
ber: 7826902.
3081. Xiaofang Zhou, editor. Proceedings of the 13th Australasian Database Conference (ADC’02),
January–February 2002, Monash University: Melbourne, VIC, Australia, volume 5 in Con-
ferences in Research and Practice in Information Technology (CRPIT). ACM Press: New
York, NY, USA. isbn: 0-909-92583-6.
3082. Xiaofang Zhou, Maria E. Orlowska, and Yanchun Zhang, editors. Proceedings of the 5th Asia-
Pacific Web Conference (APWeb’03), April 23–25, 2003, Xı̄’ān, Shǎnxı̄, China, volume 2642
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-36901-5. isbn: 3-540-02354-2.
3083. Jianping Zhu, Pete Bettinger, and Rongxia (Tiffany) Li. Additional Insight into the Per-
formance of a New Heuristic for Solving Spatially Constrained Forest Planning Problems.
Silva Fennica, 41(4):687–698, Finnish Society of Forest Science, The Finnish Forest Research
Institute: Finland, Eeva Korpilahti, editor. Fully available at http://www.metla.fi/
silvafennica/full/sf41/sf414687.pdf [accessed 2008-08-23].
3084. Kenny Qili Zhu. A Diversity-Controlling Adaptive Genetic Algorithm for the Vehi-
cle Routing Problem with Time Windows. In ICTAI’03 [1367], pages 176–183, 2003.
doi: 10.1109/TAI.2003.1250187. INSPEC Accession Number: 7862120.
3085. Liming Zhu, Roger L. Wainwright, and Dale A. Schoenefeld. A Genetic Algorithm
for the Point to Multipoint Routing Problem with Varying Number of Requests. In
CEC’98 [2496], pages 171–176, 1998. doi: 10.1109/ICEC.1998.699496. Fully available
at http://euler.mcs.utulsa.edu/˜rogerw/papers/Zhu-ICEC98.pdf [accessed 2009-09-03].
CiteSeerx : 10.1.1.32.5655.

3086. Weihang Zhu. A Study of Parallel Evolution Strategy – Pattern Search on a GPU Computing
Platform. In GEC’09 [3000], pages 765–771, 2009. doi: 10.1145/1543834.1543939. See also
[3087].
3087. Weihang Zhu. Nonlinear Optimization with a Massively Parallel Evolution Strategy-
Pattern Search Algorithm on Graphics Hardware. Applied Soft Computing, 11(2):1770–1781,
March 2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2010.05.020.
See also [3086].
3088. Karin Zielinski and Rainer Laur. Stopping Criteria for Constrained Optimization with Parti-
cle Swarms. In BIOMA’06 [919], pages 45–54, 2006. Fully available at http://www.item.
uni-bremen.de/staff/zilli/zielinski06stopping_PSO.pdf [accessed 2007-09-13]. See
also [3089].
3089. Karin Zielinski and Rainer Laur. Stopping Criteria for a Constrained Single-Objective
Particle Swarm Optimization Algorithm. Informatica, 31(1):51–59, 2007, Slovensko
društvo Informatika (Slovenian Society Informatika). Fully available at http://
www.informatica.si/vols/vol31.html and http://www.item.uni-bremen.de/
staff/zilli/zielinski07informatica.pdf [accessed 2007-09-13]. Extension of [3088].
3090. Antanas Žilinskas. Algorithm AS 133: Optimization of One-Dimensional Multimodal Func-
tions. Journal of the Royal Statistical Society: Series C – Applied Statistics, 27(3):367–375,
1978, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
doi: 10.2307/2347182.
3091. Hans-Jürgen Zimmermann, editor. Proceedings of the 5th European Congress on Intelligent
Techniques and Soft Computing (EUFIT’97), September 8–11, 1997, Aachen, North Rhine-
Westphalia, Germany. ELITE Foundation: Aachen, North Rhine-Westphalia, Germany and
Wissenschaftsverlag Mainz: Germany.
3092. Uwe Zimmermann, Ulrich Derigs, Wolfgang Gaul, Rolf H. Möhring, and Karl-Peter Schuster,
editors. Operations Research Proceedings 1996, Selected Papers of the Symposium on Oper-
ations Research (SOR’96), September 3–6, 1996, Braunschweig, Lower Saxony, Germany.
Springer-Verlag: Berlin/Heidelberg. isbn: 3540626301. Google Books ID: FK5gAQAACAAJ.
OCLC: 36865524, 174184822, and 243863641. Published 1997.
3093. Lyudmila Zinchenko, Matthias Radecker, and Fabio Bisogno. Application of the Univariate
Marginal Distribution Algorithm to Mixed Analogue - Digital Circuit Design and Optimisa-
tion. In EvoWorkshops’07 [1050], pages 431–438, 2007. doi: 10.1007/978-3-540-71805-5_48.
3094. Stanley Zionts, editor. Proceedings of the 2nd International Conference on Multiple Crite-
ria Decision Making (MCDM’77), August 22–26, 1977, Buffalo, NY, USA, volume 155 in
Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg.
isbn: 0-387-08661-7 and 3-540-08661-7.
3095. Christian Zirpins, Guadalupe Ortiz, Winfried Lamersdorf, and Wolfgang Emmerich, editors.
First International Workshop on Engineering Service Compositions (WESC’05). Technical
Report RC23821, IBM Research Division: Yorktown Heights, NY, USA, December 12, 2005,
Amsterdam, The Netherlands. Partly available at http://fresco-www.informatik.
uni-hamburg.de/wesc05/ [accessed 2011-02-28].
3096. Eckart Zitzler and Simon Künzli. Indicator-based Selection in Multiobjective Search. In
PPSN VIII [3028], pages 832–842, 2004. Fully available at http://www.tik.ee.ethz.
ch/sop/publicationListFiles/zk2004a.pdf [accessed 2009-07-18].
3097. Eckart Zitzler and Lothar Thiele. An Evolutionary Algorithm for Multiobjective Op-
timization: The Strength Pareto Approach. TIK-Report 43, Eidgenössische Technis-
che Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engi-
neering and Networks Laboratory (TIK): Zürich, Switzerland, May 1998. Fully avail-
able at http://www.tik.ee.ethz.ch/sop/publicationListFiles/zt1998a.pdf
[accessed 2009-07-10]. CiteSeerx : 10.1.1.40.7696.
3098. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of Multiobjective Evo-
lutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173–195, Sum-
mer 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600568202. Fully avail-
able at http://sci2s.ugr.es/docencia/cursoMieres/EC-2000-Comparison.pdf
and http://www.lania.mx/˜ccoello/EMOO/zitzler99b.ps.gz [accessed 2009-08-07].
CiteSeerx : 10.1.1.30.5848. PubMed ID: 10843520.
3099. Eckart Zitzler, Kalyanmoy Deb, Lothar Thiele, Carlos Artemio Coello Coello, and
David Wolfe Corne, editors. Proceedings of the First International Conference on
Evolutionary Multi-Criterion Optimization (EMO’01), March 7–9, 2001, Eidgenössische
Technische Hochschule (ETH) Zürich: Zürich, Switzerland, volume 1993/2001 in Lec-
ture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-44719-9. isbn: 3-540-41745-1. Google Books ID: F29RAAAAMAAJ
and d52UjQ5pfcIC. OCLC: 46240282, 59550134, 76254334, 77755314, 150396610,
243488365, and 313725155.
3100. Eckart Zitzler, Marco Laumanns, and Lothar Thiele. SPEA2: Improving the Strength
Pareto Evolutionary Algorithm. TIK-Report 101, Eidgenössische Technische Hochschule
(ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Net-
works Laboratory (TIK): Zürich, Switzerland, May 2001. Fully available at http://
www.tik.ee.ethz.ch/sop/publicationListFiles/zlt2001a.pdf [accessed 2009-07-10].
CiteSeerx : 10.1.1.17.2440. Errata added 2001-09-27.
3101. Mark Zlochin, Mauro Birattari, Nicolas Meuleau, and Marco Dorigo. Model-Based
Search for Combinatorial Optimization: A Critical Survey. Annals of Operations Re-
search, 132(1-4):373–395, November 2004, Springer Netherlands: Dordrecht, Nether-
lands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1023/B:ANOR.0000039526.52305.af.
3102. Constantin Zopounidis, editor. Proceedings of the 18th International Conference on Multiple
Criteria Decision Making (MCDM’06), June 9–13, 2006, MAICh (Mediterranean Agronomic
Institute of Chania) Conference Centre: Chania, Crete, Greece. Partly available at http://
www.dpem.tuc.gr/fel/mcdm2006/ [accessed 2007-09-10].
3103. Blaz Zupan, Elpida Keravnou, and Nada Lavrac, editors. Proceedings Intelligent Data Anal-
ysis in Medicine and Pharmacology (IDAMAP’01), September 4, 2001, Custom House Hotel
at ExCeL: London, UK, The Springer International Series in Engineering and Computer
Science. Springer US: Boston, MA, USA.
3104. Jacek M. Zurada, Gary G. Yen, and Jun Wang, editors. Computational Intelligence: Re-
search Frontiers – IEEE World Congress on Computational Intelligence – Plenary/Invited
Lectures (WCCI), June 1–6, 2008, Hong Kong Convention and Exhibition Centre: Hong
Kong (Xiānggǎng), China, volume 5050/2008 in Theoretical Computer Science and General
Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-68860-0. isbn: 3-540-68858-7 and 3540688609. Google
Books ID: X-WH9-BP6JoC. OCLC: 228414687, 243453135, 244010459, and 315620902.
Library of Congress Control Number (LCCN): 2008927523.
3105. Katharina Anna Zweig. On Local Behavior and Global Structures in the Evolu-
tion of Complex Networks. PhD thesis, Eberhard-Karls-Universität Tübingen, Fakultät
für Informations- und Kognitionswissenschaften: Tübingen, Germany, July 2007.
Fully available at http://www-pr.informatik.uni-tuebingen.de/mitarbeiter/
katharinazweig/downloads/Diss.pdf [accessed 2008-06-12]. See also [3106].
3106. Katharina Anna Zweig and Michael Kaufmann. Evolutionary Algorithms for the
Self-Organized Evolution of Networks. In GECCO’05 [304], pages 563–570, 2005.
doi: 10.1145/1068009.1068105. Fully available at http://www-pr.informatik.
uni-tuebingen.de/mitarbeiter/katharinazweig/downloads/299-lehmann.
pdf [accessed 2008-05-28]. See also [3105].
Index
— Symbols — Adaptive Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
(1, 1)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 fitter dynamics . . . . . . . . . . . . . . . . . . . . . . . . 116
(1+1)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 greedy dynamics . . . . . . . . . . . . . . . . . . . . . . 116
(µ′ , λ′ (µ, λ)γ )-EAs . . . . . . . . . . . . . . . . . . . . . . . . . 267 one-mutant. . . . . . . . . . . . . . . . . . . . . . . . . . . .116
(µ, λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 ADF . . . . . . . . . . . . . . . . . . . . . . . 273, 400 f., 403, 406
(µ, λ)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Adjacency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
(µ/ρ, λ)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 adjacent neighbors . . . . . . . . . . . . . . . . . . . . . . . . . 554
(µ/ρ+λ)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 ADM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401, 403
(µ+1)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Admissible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
(µ+λ)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 AE/EA . . 319, 353, 374, 384, 414, 421, 429, 460,
1/5th Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368 468, 474, 484
χ2 Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684 AGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
µGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1/5th Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368 Aggregation
σ-algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656 linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
τ -EO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494 AGI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
k-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . 454 AI . . . . . . . . . . . . . . . . . . . . . . . . . 31, 35, 76, 130, 536
N P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 AICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
N P-Completeness . . . . . . . . . . . . . . . 117, 147, 247 AIIA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
N P-Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 AIM-GP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 AIMGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
1-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 AIMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
2-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 AINS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
80x86 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 Aircraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
AISB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
—A— AJWS . . . 321, 355, 375, 385, 416, 423, 430, 461,
A⋆ Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 519 f. 469, 475, 485
AAAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Abiogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 bucket brigade . . . . . . . . . . . . . . . . . . . . . . . . 458
Ackley determinism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617 deterministic . . . . . . . . . . . . . . . . . . . . . 147, 511
ACO . . . . . . . . . . . . . 3, 32, 37, 207, 471 f., 477, 945 evolutionary . . . . . . . . . . . . . . . . . . . . . . . 37, 253
Acquisition generational . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 genetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 Las Vegas. . . . . . . . . . . . . . . . . . . . .41, 106, 148
self . . . . . . . . . . . . . . . . . . . . . . . . . . 360, 370, 419 memetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

Monte Carlo . . . . . . . . . . . . . . . . . . . . . . 106, 148 trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
optimization . . . . . . . . . . . . . . . . . . . . . . . 23, 103 Best-First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 Best-Limited Search . . . . . . . . . . . . . . . . . . . . . . . . 519
probabilistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 BFS . . . . . . . . . . . . . . . . . . . . . . . . 32, 115, 515 ff., 519
randomized. . . . . . . . . . . . . . . . . . . . . . . . . . . . .33 Bias. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .692
stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 BibTeX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
Algorithmic Chemistry . . . . . . . . . . . . . . . . . . . . . 327 Big-O notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
ALIO/EURO . 133, 228, 235, 237, 251, 321, 355, Bijective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
375, 385, 416, 423, 430, 461, 469, 475, Bijectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
479, 485, 491, 496, 500, 502, 506, 518, Bin Packing . . . . . . . . . . . . . . . 25, 39, 181, 183, 238
521, 524, 528 BinInt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 f., 562 f.
Allele . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 f. Binomial Distribution . . . . . . . . . . . . . . . . . . . . . . 674
Amplifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 BIOMA . . 320, 354, 374, 384, 415, 422, 429, 460,
ANN . . . 187, 191 f., 204, 273, 389, 396, 406, 536, 468, 474, 484
541 BIONETICS. .130, 320, 354, 374, 384, 415, 422,
Annealing 429, 460, 468, 474, 484
simulated . . . . . . . . . . . . 3, 25, 32 f., 36, 156 f., BitCount. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .562
160, 181, 231, 243 – 249, 252, 308, 412, Black Box . . . . . . . . . . . . . . . . . . . . 35 f., 42, 91, 1209
463 f., 493, 541, 557, 716, 740, 945 Bloat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Ant Block
artificial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Ant Colony Optimization . 3, 32, 37, 207, 471 f., BLUE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .696
477, 945 BOA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Antenna Boolean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Branch And Bound . . . . . . . . . . . . . . . . . . . . . 32, 523
Antisymmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 Breadth-First Search . . . . . 32, 115, 515, 517, 519
ANTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474, 484 Breeding Selection . . . . . . . . . . . . . . . . . . . . . . . . . 289
ANZIIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 Brute Force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
AO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Bucket Brigade Algorithm . . . . . . . . . . . . . . . . . . 458
Artificial Ant57, 61, 64, 69, 76, 78, 107, 380, 566 Building Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Artificial Development . . . . . . . . . . . . . . . . . . . . . 272 Building Block Hypothesis . . . 84, 121, 183, 217,
Artificial Embryogeny . . . . . . . . . . . . . . . . . . . . . . 272 341, 346, 558 f., 566
Artificial Intelligence . . . . . . . . . . . . 31, 35, 76, 536 building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Artificial Neural Network . . 187, 191 f., 204, 273, Bypass
389, 396, 406, 536, 541 extradimensional . . . . . . . . . . . . . . . . . . . . . . 219
Artificial Ontogeny . . . . . . . . . . . . . . . . . . . . . . . . . 272
ASC 134, 321, 355, 375, 385, 416, 423, 430, 461,
—C—
C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 f.
469, 475, 485
C 745 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
Asexual Reproduction . . . . . . . . . . . . . . . . . 307, 325
CACSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Assimilation
Candidate
genetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
AST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Capacitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Asymmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 CarpeNoctem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Automatically Defined Function.273, 400 f., 403 Cartesian Genetic Programming . . . . . . . . . . . . 174
Automatically Defined Macro. . . . . . . . . .401, 403 Catastrophe
complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
—B— Causality . . . . . . . . . . . . . . . . . . . . . 84, 162, 218, 571
Baldwin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 CDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658, 660
effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
Basin-δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
Bayesian Optimization Algorithm . . . . . . . . . 184 CEC 317, 353, 373, 383, 414, 421, 429, 459, 467,
BB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 523 474, 484
BBH . . . . . 84, 121, 183, 217, 341, 346, 558 f., 566 CEGNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
bcGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 Cellular Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Bernoulli Central Limit Theorem . . . . . . . . . . . . . . . . . . . . . 681
distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 674 CGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674 cGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439 ff.
CGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Completeness . . . . . . . . . . . . 85, 146, 157, 218, 514
CGPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 f. weak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Change Complexity. . . . . . . . . . . . . . . . . . . . . . . . . . . . 143, 203
non-synonymous . . . . . . . . . . . . . . . . . . . . . . 173 complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
synonymous . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 Complexity Catastrophe . . . . . . . . . . . . . . . . . . . 555
Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 509 Computational Embryogeny . . . . . . . . . . . . . . . . 272
Chi-square Distribution . . . . . . . . . . . . . . . . . . . . 684 Computing
Chromosome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326 evolutionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Chromosomes soft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
string Confidence
fixed-length . . . . . . . . . . . . . . . . . . . . . . . . . 328 coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
variable-length . . . . . . . . . . . . . . . . . . . . . . 339 interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386 Constrained-domination . . . . . . . . . . . . . . . . . . . . . 75
CI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696 Constraint Handling . . . . . . . . . . . . . . . . . . . . . . . . . 70
CIMCA . . 320, 354, 374, 385, 415, 422, 430, 461, Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 22, 70, 509
468, 475, 485 Continuous Distributions . . . . . . . . . . . . . . . . . . . 676
Circuit Continuous Optimization . . . . . . . . . . . . . . . . . . . . 27
analog Continuous Problem. . . . . . . . . . . . . . . . . . . . . . . . .27
design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Controller
Circuit Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
CIS . 320, 355, 374, 385, 415, 422, 430, 461, 468, Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
475, 485 domino . . . . . . . . . . . . . . . . . . . . 153 f., 218, 562
CISC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 non-uniform . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Class premature . . . . . . . . . . . . . . . . . 151 f., 482, 547
equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 prevention . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 Convergence Prevention . . . . . . . . . . . . . . . . . . . . 546
Classifier System . . . . . . . . . . . . . . . . . . . . 339, 457 f. Cooperation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .463
learning . . . . . . . . . 3, 32, 264, 457 f., 536, 945 Copyleft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
Classifier Systems Copyright . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947, 955 f.
learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 Correlation
Clearing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 fitness distance . . . . . . . . . . . . . . . . . . . . . . . . 163
Climbing genotype-fitness . . . . . . . . . . . . . . . . . . . . . . . 163
hill. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32 f., operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
41, 85, 96, 99, 114, 116, 157, 160, 165, Count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
167, 169, 172, 176, 181, 210 f., 229 – Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
232, 238 ff., 243, 245 f., 248, 252, 266, estimate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .664
296, 359 f., 377, 400, 412, 425, 463 f., CPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
482, 501, 559, 563, 565, 735, 737, 945 Creation . . . . . . . . . . . . . . . . . . . . . 306, 328, 340, 389
random restarts . . . . . . . . . . . . . . . . . . . . . 501 Criterion
CLT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681 termination . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
CO2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 Criticality
Co-Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493 self-organized . . . . . . . . . . . . . . . . . . . . . . . . . 493
Code CRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 Crossover . . . . . . . . . . . . . 262, 307, 335, 340 f., 399
Code Bloat. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .534 1-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 2-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Coefficient average . . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 362
negative slope . . . . . . . . . . . . . . . . . . . . . . . . . 163 headless chicken . . . . . . . . . . . . . . . . . . 339, 399
Coefficient of Variation . . . . . . . . . . . . . . . . . . . . . 663 strong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Coevolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 weak. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .400
cooperative. . . . . . . . . . . . . . . . . . . . . . . . . . . .205 homologous . . . . . . . . . . . . . . . . . . 338, 340, 407
COGANN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 k-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655 MPX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
Combinatorial Optimization . . . . . . . . . . . . . . . . 181 multi-point . . . . . . . . . . . . . . . . . . . . . . . 335, 338
Combinatorial Problem . . . . . . . . . . . . . . . . . . . . . . 25 point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 random. . . . . . . . . . . . . . . . . . . . . . . . . . .339, 399
Competition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 single-point . . . . . . . . . . . . . . . . . . . . . . . 335, 400
Complete Enumeration . . . . . . . . . . . . . . . . . . . . . 212 SPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
sticky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408 DFS . . . . . . . . . . . . . . . . . . . . . . . . 32, 115, 516 f., 519
strong context preserving . . . . . . . . . . . . . . 400 DHL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 f., 550
subtree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398 Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . 3,
TPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 27, 32, 84, 191, 207, 262 f., 419 f., 425,
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398 444, 463, 580, 771, 794, 796
two-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 Differentiation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
uniform . . . . . . . . . . . . . . . . . 337, 346, 362, 377 Difficult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
UX . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 346, 362 Dilemma
CS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339, 457 f. prisoner’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
CSTST . . 132, 320, 355, 374, 385, 415, 422, 430, Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
461, 468, 475, 485 Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Cumulative Distribution Function . . . . . . . . . 658 Dinosaur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Customer Relationship Management . . . . . . . 536 Diodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Cut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 Direct Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Cut & Splice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340 Discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Cutting-Plane Method . . . . . . . . . . . . . . . . . . . . . . 32 Discrete Distributions . . . . . . . . . . . . . . . . . . . . . . 668
CVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 Discrete Optimization . . . . . . . . . . . . . . . . . . . . . . . 29
Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651 Discrete Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
—D— Distribution . . . . . . . . . . . . . . . . . . . . . . 658, 668, 676
Darwin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .464 χ2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . 154, 536 Binomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
DE . . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, chi-square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
444, 463, 580, 771, 794, 796 continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
Death Penalty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Deceptiveness . . . . . . . . . . . . . . . . 115, 167, 182, 557 estimation of, algorithm . . . . . . . . . . . . . . . 27,
Deceptivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 182 32, 267, 367, 427, 431, 442, 447, 452 f.,
Decile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 455, 554, 775
Decision Maker exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
external . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Decision Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 multivariate . . . . . . . . . . . . . . . . . . . . . . . . 680
Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Decreasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
monotonically . . . . . . . . . . . . . . . . . . . . . . . . . 647 Student’s t . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
Defined Length. . . . . . . . . . . . . . . . . . . . . . . . . . . . .341 t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
DELB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420 uniform . . . . . . . . . . . . . . . . . . . . . . . . . . 668, 676
Delivery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .544 continuous . . . . . . . . . . . . . . . . . . . . . . . . . . 676
Depth-First Search . . . . . . . . . . 32, 115, 516 f., 519 discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
iterative deepening . . . . . . . . . . . . . . . . 32, 516 Diversification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Depth-Limited Search . . . . . . . . . . . . . . . . . . . . 516 f. Diversity . . . . . . . . . . . . 70, 154, 216, 330, 452, 547
DERL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420 Divisor
DES. . .3, 27, 32, 84, 191, 207, 262 f., 419 f., 425, greatest common . . . . . . . . . . . . . . . . . . . . . . 180
444, 463, 580, 771, 794, 796 DLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516 f.
Design DNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272, 338
antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 DOE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
circuit DoE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
analog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Domination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Design Of Experiments . . . . . . . . . . . . . . . . . . . . . 545 constrained . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Design of Experiments . . . . . . . . . . . . . . . . . . . . . 178 rank. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67
Determinism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 resistance . . . . . . . . . . . . . . . . . . . . . . . . . . 69, 198
Deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511 Domino
Deterministic Selection . . . . . . . . . . . . . . . . . . . . . 289 convergence . . . . . . . . . . . . . . . . 153 f., 218, 562
Deutsche Post . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 Domino Convergence . . . . . . . . . . . . . . . . . . . . . . . 154
Development Don’t Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
artificial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Downhill Simplex . . . 32, 104, 264, 420, 463, 501,
Developmental Approach . . . . . . . . . . . . . . . . . . . 272 945
Deviation DPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662 Drunkard’s Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
1194 INDEX
DTM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145, 147, 149 Enterprise Resource Planning . . . . . . . . . . . . . . 536
Duplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306 f. Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
DVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
—E— information . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
E-Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 f., 945 Enumeration
E-code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
EA . . . . . . . . . . . . . . . . . . . . . . . . . . . 3, 32, 36 f., 75 ff., exhaustive . . . 113, 116 f., 143, 168, 210, 212
84, 86, 93 ff., 121, 155 ff., 162 f., 172 f., Environment Selection . . . . . . . . . . . . . . . . . . . . . 261
183 f., 207, 210, 212, 229 f., 245, 253 ff., Environmental Selection . . . . . . . . . . . . . . . . . . . . 260
258, 261 – 267, 269 f., 272 ff., 277, 279, EO . . . . . . . . . . . . . . . . . . . 3, 25, 32, 172, 493 f., 945
285 ff., 289, 294, 302, 306 f., 309, 312, GEO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
324 f., 327, 335, 359 f., 377, 379 f., 393, Eoarchean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
412, 419, 427, 434, 452 f., 457, 463 f.,
EP . . . . . . . . . . . . . . . . . . . 27, 32, 264, 380, 413, 415
466, 512, 538, 541 f., 544 ff., 551, 562,
Ephemeral Random Constants . . . . . . . . . . . . . 532
566 ff., 577, 622, 699, 717, 746, 945
EPIA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
hybrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Epistacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
EARL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Epistasis . 163, 177, 181, 204, 389, 404, 554, 560,
EBCOA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
562, 569, 577 f.
EC . . . 3, 24, 31 f., 36, 48, 70, 156, 180, 253, 379,
Epistatic Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
537, 553, 945
Epistatic Variance. . . . . . . . . . . . . . . . . . . . . . . . . .163
ECML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132, 461
Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
ECML PKDD . . . . . . . . . . . . . . . . . . . . . . . . . 135, 462
punctuated . . . . . . . . . . . . . . . . . . . . . . . 172, 494
Ecological Selection . . . . . . . . . . . . . . . . . . . . . . . . 260
Equivalence
Economics
class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
mathematical . . . . . . . . . . . . . . . . . . . . . . 23, 509
EDA . . 27, 32, 267, 367, 427, 431, 434, 442, 447, relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
452 f., 455, 554, 775 Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 f.
Edge Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 ERL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
EDI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407 ERP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
EDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 α . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Effect β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Baldwin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 mean square . . . . . . . . . . . . . . . . . . . . . . . . . . 692
hiding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466 threshold . . . . . . . . . . . . . . . . . . . . . . . . . 163, 562
Effectiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 type 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 type 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Error Threshold . . . . . . . . . . . . . . . . . . . . . . . 163, 562
Elitism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267, 546 ES . . . 3, 27, 32, 37, 155, 162, 188, 191, 216, 229,
Embryogeny . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 263 f., 266 f., 286, 289, 302, 359 f., 362,
Embryogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 364, 367 f., 370 ff., 377, 419, 425, 427,
Embryogenic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 444, 481, 566, 580 f., 760, 786, 788, 945
Embryogeny differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
artificial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 27, 32, 84, 191, 207, 262 f., 419 f., 425,
computational . . . . . . . . . . . . . . . . . . . . . . . . . 272 444, 463, 580, 771, 794, 796
EMO 55, 65, 202, 253, 257, 259 f., 279, 311, 319, Estimation Of Distribution Algorithm . . . . . . 27,
353, 374, 384, 414, 421, 429, 460, 467, 32, 267, 367, 427, 431, 442, 447, 452 f.,
474, 484 455, 554, 775
EMOO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 Estimation Theory . . . . . . . . . . . . . . . . . . . . . . . . . 691
EN 284 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539 Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691 f.
Encapsulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 f. best linear unbiased . . . . . . . . . . . . . . . . . . . 696
Encoding maximum likelihood . . . . . . . . . . . . . . . . . . . 695
cellular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
Endnote . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945 unbiased . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
Endogenous . . . . . . . . . . . . . . . . 90, 360, 365 ff., 481 EUFIT . . 132, 320, 355, 375, 385, 415, 422, 430,
EngOpt . . 132, 228, 234, 237, 251, 320, 355, 375, 461, 468, 475, 485
385, 415, 422, 430, 461, 468, 475, 478, EUROGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . 354, 374
485, 490, 496, 500, 502, 506, 518, 521, EuroGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
524, 528 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Event Explicit Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 270
certain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 Explicit Representation . . . . . . . . . . . . . . . . . . . . 270
conflicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 Exploitation . . . . . . . . . . . . . . . . . . . . . . 156, 162, 216
elementary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653 Exploration . . . . . . . . . . . . . . . . . . 155, 216, 360, 512
impossible. . . . . . . . . . . . . . . . . . . . . . . . . . . . .654 Exponential Distribution . . . . . . . . . . . . . . . . . . . 682
random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 External Decision Maker . . . . . . . . . . . . . 75 ff., 281
EvoCOP . 321, 355, 375, 385, 416, 422, 430, 461, Extinctive Selection . . . . . . . . . . . . . . . . . . . 183, 265
468, 475, 485 left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Evolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21, 509 right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Baldwinian . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 Extradimensional Bypass . . . . . . . . . . . . . . . . . . . 219
chemical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 Extrema Selection . . . . . . . . . . . . . . . . . . . . . . . . . . 174
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Extremal Optimization 3, 25, 32, 172, 493 f., 945
grammatical . . . . . . . . . . . . . . . . . 204, 273, 403 generalized . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Lamarckian . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 Extremum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Evolution Strategy . . . . . . . . . . . . . . . . . . . 3, 27, 32, global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
37, 155, 162, 188, 191, 216, 229, 263 f., local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
266 f., 286, 289, 302, 359 f., 362, 364,
367 f., 370 ff., 377, 419, 425, 427, 444,
—F—
Factorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
481, 566, 580 f., 760, 786, 788, 945
False Negative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
False Positive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
27, 32, 84, 191, 207, 262 f., 419 f., 425,
FDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
444, 463, 580, 771, 794, 796
FDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947
Evolutionary Algorithm . . . . . . . 3, 32, 36 f., 75 ff.,
FEA . 320, 354, 374, 384, 415, 422, 429, 460, 468
84, 86, 93 ff., 121, 155 ff., 162 f., 172 f.,
Feasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
183 f., 207, 210, 212, 229 f., 245, 253 ff.,
Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
258, 261 – 267, 269 f., 272 ff., 277, 279,
Fibonacci Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
285 ff., 289, 294, 302, 306 f., 309, 312,
Filter
324 f., 327, 335, 359 f., 377, 379 f., 393,
Butterworth . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
412, 419, 427, 434, 452 f., 457, 463 f.,
Chebycheff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
466, 512, 538, 541 f., 544 ff., 551, 562,
Finite State Machine . . . . . . . . . . . . . . . . . . . . . . . 380
566 ff., 577, 622, 699, 717, 746, 945
First Hitting Time . . . . . . . . . . . . . . . . . . . . . . . . . . 38
generational . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94, 257, 259
hybrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
assignment process . . . . . . . . . . . . . . . . . . . . 274
multi-objective . . . . . . . . . . . . . . . . . . . . . . . . 253
nature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
steady-state . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 optimization . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Evolutionary Computation . . 3, 24, 31 f., 36, 48, Fitness Assignment . . . . . . . . . . . . . . . . . . . . . . . . 274
70, 156, 180, 253, 379, 537, 553, 945 Pareto ranking . . . . . . . . . . . . . . . . . . . . . . . . 275
Evolutionary Multi-Objective Optimization 253 Prevalence ranking . . . . . . . . . . . . . . . . . . . . 275
Evolutionary Operation . . . . . . . . . . . . . . . 264, 501 Tournament . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
randomized . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 weighted sum . . . . . . . . . . . . . . . . . . . . . . . . . 275
Evolutionary Programming 27, 32, 264, 380, 413 Fitness Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Evolutionary Reinforcement Learning . . . . . . 457 Fitness Landscape . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Evolvability . . . . . . . . . . . . . . . . . . . . . . . . . . . 163, 172 deceptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
EvoNUM 321, 355, 375, 386, 416, 423, 430, 461, ND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
469, 502 neutral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
EVOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 NK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
EvoRobots. . . .322, 356, 375, 386, 416, 423, 431, NKp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
462, 469, 476, 486 NKq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
EvoWorkshops 318, 353, 373, 383, 414, 421, 429, p-Spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
459, 467, 474, 484 rugged . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
EWSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134, 461 technological . . . . . . . . . . . . . . . . . . . . . . . . . . 556
Exhaustive Enumeration . . 113, 116 f., 143, 168, FLAIRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
210, 212 FOCI . . . . 321, 355, 375, 386, 416, 423, 431, 461,
Exogenous. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90, 360 469, 475, 485
expand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512 FOGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Expected value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661 Forma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 f., 163
Experiment analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Miller-Urey . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Free Lunch Gene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
no . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Freight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 Generality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Frequency Generalized Extremal Optimization. . . . . . . . 495
absolute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655 Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
relative. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .655 Generational. . . . . . . . . . . . . . . . . . . . . . . . .265, 546 f.
Full . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 Genetic Algorithm . . . . 3, 25, 32, 37, 43, 48, 82 –
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 85, 90, 95, 101, 105, 119, 139, 154 f.,
automatically-defined . . . . . . 273, 400 f., 403 163, 167, 174, 183, 188, 207, 215, 217,
benchmark . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580 248, 262 – 265, 271, 278, 290 f., 296,
cumulative distribution . . . . . . . . . . . . . . . . 658 325 – 329, 331, 335, 337 ff., 341 f., 345 –
fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 348, 350, 357, 377, 380 f., 389, 398,
Griewangk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 407, 427, 434, 439, 454 f., 457 f., 463 ff.,
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621 541, 554, 558 f., 562 f., 565 f., 568, 573,
Griewank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 580, 945
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621 compact. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439
heuristic . . . . . . . . . . . . . . . . . . . . . . 34, 274, 518 human based . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
monotone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 interactive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
objective . . . . . . . . . . . . . . . . . . . 22, 36, 44, 509 messy . . . . . . . . . . . . . . . . . . . . . . 184, 347 f., 395
conflicting . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 380
harmonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 real-coded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
independent . . . . . . . . . . . . . . . . . . . . . . . . . 56 Genetic Assimilation . . . . . . . . . . . . . . . . . . . . . . . 465
penalty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Genetic Programming. . . . . . . . . . . . . . . . . . . . . . . .3,
adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 32, 37, 107, 155, 162 f., 173, 175, 180,
dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 187, 192, 195, 262 ff., 273, 302, 304 f.,
probability density . . . . . . . . . . . . . . . . . . . . 659 379 ff., 386 f., 389, 396, 398, 400 f., 403,
probability mass . . . . . . . . . . . . . . . . . . . . . . 659 405 f., 408, 411 f., 427, 434, 447, 531,
royal road . . . . 154, 173, 455, 559 – 562, 566 534, 536, 561, 566, 568, 824, 945
sphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .368 byte code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
trap . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 557 f. cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Weierstraß . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 compiling system . . . . . . . . . . . . . . . . . . . . . . 405
Functional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 crossover
Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 homologous . . . . . . . . . . . . . . . . . . . . 338, 407
FWGA . . . . . . . . . . . . . . . . . . . . . . 134, 355, 475, 485 function nodes . . . . . . . . . . . . . . . . . . . . . . . . 387
grammar-guided . . . . . . . . 273, 381, 403, 409
—G— graph-based . . . . . . . . . . . . . . . . . . . . . . . 32, 409
G3P . . . . . . . . . . . . . . . . . . . . . . . . . 273, 381, 403, 409 linear . . . . 32, 327, 381, 400, 403 – 408, 412
GA . . . . . . . . . . . . . . . . . . . 3, 25, 32, 37, 43, 48, 82 – page-based. . . . . . . . . . . . . . . . . . . . . . . . . .408
85, 90, 95, 101, 105, 119, 139, 154 f., non-terminal nodes . . . . . . . . . . . . . . . . . . . . 387
163, 167, 174, 183, 188, 207, 215, 217, stack-based . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
248, 262 – 265, 271, 278, 290 f., 296, Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
325 – 329, 331, 335, 337 ff., 341 f., 345 – standard32, 273, 386 f., 389, 399, 403 f., 412
348, 350, 357, 377, 380 f., 389, 398, strongly-typed . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
407, 427, 434, 439, 454 f., 457 f., 463 ff., 32, 37, 107, 155, 162 f., 173, 175, 180,
541, 554, 558 f., 562 f., 565 f., 568, 573, 187, 192, 195, 262 ff., 273, 302, 304 f.,
580, 945 379 ff., 386 f., 389, 396, 398, 400 f., 403,
compact. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439 405 f., 408, 411 f., 427, 434, 447, 531,
messy . . . . . . . . . . . . . . . . . 184, 327, 347 f., 395 534, 536, 561, 566, 568, 824, 945
Gads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . 387
GALESIA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355 tree-based . . . . . . . . . . . . . . . . . . . 381, 386, 389
Gauss-Markov Theorem . . . . . . . . . . . . . . . . . . . . 696 Genetic Repair . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 f.
GCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 GENITOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
GE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204, 273, 403 Genome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82, 326
GEC 321, 355, 375, 385, 416, 422, 430, 461, 468, Genomes
475, 485 string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
GECCO . 317, 353, 373, 383, 414, 421, 429, 459, tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
467, 473, 484 Genotype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39, 82
GEM 321, 355, 375, 385, 416, 422, 430, 461, 469 Genotype-Phenotype Mapping . . . . . . . . 39 ff., 71,
86 f., 90 f., 95 f., 101, 103 f., 111, 113, stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
119, 162, 168, 174 f., 179, 181 ff., 204, Grammar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
209, 215 ff., 219, 254, 258, 263, 270 – Grammar-Guided Genetic Programming . . 273,
273, 325, 327, 348 ff., 357, 360, 377, 381, 403
380, 404, 406 f., 433 f., 437, 443 f., 481, Grammatical Evolution . . . . . . . . . . . 204, 273, 403
542, 569, 580, 711, 834 Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 512, 651
direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 acyclic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
GEO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GEWS . . . . . . . . . . . . . . . . . . . . . . 132, 228, 321, 385 directed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GFC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GGGP . . . . . . . . . . . . . . . . . . . 32, 273, 381, 403, 409 infinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GICOLAG. . . .134, 228, 235, 237, 251, 321, 356, path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
375, 386, 416, 423, 431, 461, 469, 475, theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
479, 485, 491, 496, 500, 502, 506, 518, undirected . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
521, 524, 528 weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
Glass GRASP . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 157, 499
spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93, 557 Gray Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Global Optimization . . . . . . . . . . . . . . 3, 22, 31, 48, Greedy Search . . . . . . . . . . . . . . 32, 177, 464, 519 f.
51, 55, 90, 93, 99, 105, 107, 139, 151, Griewangk Function . . . . . . . . . . . . . . . . . . . . . . . . 601
154 f., 172, 174, 187 f., 215, 231, 243, shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
267, 509, 511, 542, 553, 566, 649, 945 Griewank Function . . . . . . . . . . . . . . . . . . . . . . . . . 601
Goal Attainment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Goal Programming . . . . . . . . . . . . . . . . . . . . . . 72, 75 Grow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
GP . . . . 3, 32, 37, 107, 155, 162 f., 173, 175, 180, GSM/GPRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 384, 386 f., 389, 396, 398, 400 f., —H—
403, 405 f., 408, 411 f., 427, 434, 447, HAIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
531, 534, 536, 561, 566, 568, 824, 945 Halting Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 f.
grammar-guided . . . . . . . . . . . . . 273, 381, 403 Harmony Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
graph-based . . . . . . . . . . . . . . . . . . . . . . . 32, 409 Haystack
linear . . . . 32, 327, 381, 400, 403 – 408, 412 needle in a . . . . . . . . . 103, 168, 175, 180, 465
standard 32, 273, 386 f., 389, 399, 403 f., 412 HBGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
STGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3, hBOA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .453
32, 37, 107, 155, 162 f., 173, 175, 180, HC . 32 f., 41, 85, 96, 99, 114, 116, 157, 160, 165,
187, 192, 195, 262 ff., 273, 302, 304 f., 167, 169, 172, 176, 181, 210 f., 229 –
379 ff., 386 f., 389, 396, 398, 400 f., 403, 232, 238 ff., 243, 245 f., 248, 252, 266,
405 f., 408, 411 f., 427, 434, 447, 531, 296, 359 f., 377, 400, 412, 425, 463 f.,
534, 536, 561, 566, 568, 824, 945 482, 501, 559, 563, 565, 735, 737, 945
GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 ff., 71, with random restarts . . . . . . . . . . . . . . . . . . 501
86 f., 90 f., 95 f., 101, 103 f., 111, 113, HCwL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
119, 162, 168, 174 f., 179, 181 ff., 204, Headless Chicken . . . . . . . . . . . . . . . . . . . . . . 339, 399
209, 215 ff., 219, 254, 258, 263, 270 – Herman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
273, 325, 327, 348 ff., 357, 360, 377, Hessian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
380, 404, 406 f., 433 f., 437, 443 f., 481, Heuristic . . . . . . . . . . . . . . . . . . . . . . . 34, 36, 274, 518
542, 569, 580, 711, 834 admissible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Headless Chicken . . . . . . . . . . . . . . . . . . . . . 339, 399
direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 Hiding Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
explicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 f.,
indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 41, 85, 96, 99, 114, 116, 157, 160, 165,
ontogenic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 167, 169, 172, 176, 181, 210 f., 229 –
GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 232, 238 ff., 243, 245 f., 248, 252, 266,
GPTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 296, 359 f., 377, 400, 412, 425, 463 f.,
GR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 482, 501, 559, 563, 565, 735, 737, 945
GR Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 multi-objective . . . . . . . . . . . . . . . . . . . . . . . . 230
Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53, 104 randomized restarts . . . . . . . . . . . . . . . . . . . 232
descend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Gradient Descent with random restarts . . . . . . . . . . . . . . . . . . 501
Hill Climbing With Random Restarts . . . . . . 501 mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
HIS . 128, 227, 234, 236, 250, 319, 353, 374, 384, Intel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
415, 421, 429, 460, 468, 474, 478, 484, Intelligence
490, 496, 500, 502, 506, 518, 521, 524, artificial . . . . . . . . . . . . . . . . . . . . 31, 35, 76, 536
528 swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37
Homologous Crossover . . . . . . . . . . . . 338, 340, 407 Intensification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Homology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338 Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
HRO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 confidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
HS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Intrinsic Parallelism . . . . . . . . . . . . . . . . . . . . . . . . 263
HX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Intron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Hybridization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 f. explicitly defined . . . . . . . . . . . . . . . . . . . . . . 407
Hyperplane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 implicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Hypothesis Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
building block . . . . . . . . . . . . . . . . . . . . . . . . . 346 IPO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Irreflexiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
—I— ISA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 475, 485
IAAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128, 474, 484 isGoal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
ICAART . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 ISICA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
ICANNGA . . . 319, 353, 374, 384, 415, 422, 429, ISIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
460, 468, 474, 484 Isolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
ICARIS . . . . . . . . . . . . . . . . . . . . . . . . . . 130, 227, 320
ICCI 128, 319, 353, 374, 384, 415, 422, 429, 460,
Isotropic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
468, 474, 484
Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
ICEC . . . . 322, 356, 376, 386, 416, 423, 431, 462,
Iteratively Deepening Depth-First Search . . 516
469, 476, 486
IUMDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
ICGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
IWACI . . . 322, 356, 376, 386, 417, 423, 431, 462,
ICML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129, 460
469, 476, 486
ICMLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134, 462
IWLCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
ICNC . . . . 319, 354, 374, 384, 415, 422, 429, 460,
IWNICA.321, 356, 375, 386, 416, 423, 431, 462,
468, 474, 484
469, 476, 486
ICSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475, 485
ICTAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
IDDFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 516 f.
—J—
JAPHET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
IEA/AIE . . . . . . . . . . . . . . . . . . . . . . . . . 133, 475, 485
JBGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
IGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
JME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
IICAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Job Shop Scheduling . . . . . . . . . . . . . . . . . . . . . . . . 26
IJCAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
JSS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 f.
IJCCI . . . 321, 356, 375, 386, 416, 423, 431, 462,
Juxtapositional . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
469, 476, 486
JVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
Implicit Parallelism . . . . . . . . . . . . . . . . . 263, 345 f.
in.west . . . . . . . . . . . . . . . . . . . . . . . . . . . 537, 543, 549 k-PK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .338
Increasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 k-PX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
monotonically . . . . . . . . . . . . . . . . . . . . . . . . . 647 Kauffman NK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 Keys
Indirect Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 272 random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Indirect Representation . . . . . . . . . . . . . . . . . . . . 272 Knapsack Problem . . . . . . . . . . . . . . . . . . . . . . . . . 147
Individual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 Kung-Fu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Inductor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
inequalities excess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
method of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Infeasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 —L—
Infix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 Lamarck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164, 464
Informed Search . . . . . . . . . . . . . . . . . . . . . . . . 32, 518 Lamarckism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Injective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 Landscape
Injectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Input-Processing-Output . . . . . . . . . . . . . . . . . . . 379 problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Integer Programming . . . . . . . . . . . . . . . . . . . . . . . . 29 Laplace
INDEX 1199
assumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 LLN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Large Numbers Local Search . . . . . . . . . . . . . . . . . 229, 243, 463, 514
law of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668 Locality . . . . . . . . . . . . . . . . . . . . . . . . . . . 84, 162, 218
Large-scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Las Vegas Algorithm. . . . . . . . . . . . . . .41, 106, 148 Logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537, 622
Layout Long k-Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Long Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
LCS . . . . . . . . . . . . . . . . . 3, 32, 264, 457 f., 536, 945 LS-1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Michigan-style . . . . . . . . . . . . . . . . . . . . . . . . 458 LSCS . . . . 133, 228, 235, 237, 251, 321, 355, 375,
Pitt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 385, 416, 423, 430, 461, 469, 475, 478,
Pittsburgh-style . . . . . . . . . . . . . . . . . . . . . . . 458 485, 491, 496, 500, 502, 506, 518, 521,
Learning 524, 528
linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Lunch
Learning Classifier System . . . . 3, 32, 264, 457 f., no free . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
536, 945
Michigan-style . . . . . . . . . . . . . . . . . . . . . . . . 458
—M—
MA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37, 463 f.
Pittsburgh-style . . . . . . . . . . . . . . . . . . . . . . . 458
Machine
Left-total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
finite state . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Length
Turing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
deterministic. . . . . . . . . . . . . . . . . . . . . . . .145
Lexicographic Optimization . . . . . . . . . . . . . . . . . 59
non-deterministic . . . . . . . . . . . . . . . . . . . 146
LGP . . . . . . . . . . 32, 327, 381, 400, 403 – 408, 412
Machine Learning. . . . . . . . . . . .187, 212, 380, 536
page-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Macro
LGPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
automatically-defined . . . . . . . . . . . . . 401, 403
License
Management
FDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947
customer relationship. . . . . . . . . . . . . . . . . .536
LGPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Many-Objectivity . . . . . . . . . . . . 197 f., 201 f., 1213
Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 f. Mapping
Likelihood Genotype-Phenotype . . . . . . . . . . . . . . . . . . 270
function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693 genotype-phenotype . . . . . . . . . . . . . 39 ff., 71,
Linear Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . 62 86 f., 90 f., 95 f., 101, 103 f., 111, 113,
Linear Genetic Programming . 32, 327, 381, 400, 119, 162, 168, 174 f., 179, 181 ff., 204,
403 – 408, 412 209, 215 ff., 219, 254, 258, 263, 270 –
page-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408 273, 325, 327, 348 ff., 357, 360, 377,
Linear Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 380, 404, 406 f., 433 f., 437, 443 f., 481,
Linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 542, 569, 580, 711, 834
Linkage Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 ontogenic . . . . . . . . . . . . . . . . . . . . . . . . . . 86, 273
LION . . . . 133, 228, 234, 237, 251, 321, 355, 375, phenotype-genotype . . . . . . . . . . . . . . . . . . . 464
385, 416, 422, 430, 461, 469, 475, 478, MAS . . 37, 59, 125, 190, 207, 226, 315, 352, 382,
485, 490, 496, 500, 502, 506, 518, 521, 459, 473
524, 528 Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 defined length . . . . . . . . . . . . . . . . . . . . . . . . . 341
addItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
appendList . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Mating Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Maximum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89, 660
createList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
deleteItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 89
deleteRange . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 Maximum Likelihood Estimator . . . . . . . . . . . . 695
insertItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648 MCDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
listToSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 MCDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76, 129
removeItem . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 MDVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
reverseList . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 Mean
setToList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
subList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 Median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665, 667
search (sorted) . . . . . . . . . . . . . . . . . . . . . . . . 650 Meme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
search (unsorted) . . . . . . . . . . . . . . . . . . . . . . 650 Memetic Algorithm . . . . . . . . . . . . . . . 32, 37, 463 f.
sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 Memory Consumption. . . . . . . . . . . . . . . . . . . . . .514
sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 MENDEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Messy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Multi-Criterion Decision Making . . . . . . . . . . . . 76
Messy GA . . . . . . . . . . . . . . . . . . . . . . 184, 347 f., 395 Multi-Modality 139, 151, 161, 165, 255, 452, 454
Messy Genome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 Multi-Objective Evolutionary Algorithm 55, 65,
META . . . 228, 235, 237, 251, 322, 356, 376, 386, 202, 253, 257, 259 f., 279, 311
417, 423, 431, 462, 469, 476, 479, 486, Multi-Objectivity . 39, 54 – 57, 59 – 62, 64 f., 70,
491, 497, 500, 506 76 f., 89, 93 ff., 101, 104, 106, 158, 182,
Metaheuristic . . . . . . . . . . . . . . . . . 3, 24, 31 f., 34 ff., 197 f., 201, 213, 230, 232, 238, 253 ff.,
39, 41, 109, 112 f., 139, 141, 143, 165, 274, 308, 434, 453, 482, 534, 537 f.,
168 f., 197 f., 213, 225, 253 f., 262, 367, 554, 566, 570, 716, 728, 737, 841, 906,
489, 540 f., 1209 1213, 1221 ff.
Method Multi-Objectivization . . . . . . . . . . . . . . . . . . . . . . 158
raindrop . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 235 Mutation . . . . . . . 262, 306 f., 330, 340, 393 f., 543
Method Of Inequalities . . . . . . . . . . . . . 72 – 75, 77 bit-flip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Metropolis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 ff. model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
mGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184, 347, 349 multi-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
MIC 227, 234, 236, 250, 320, 354, 374, 384, 415, sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
422, 429, 460, 468, 474, 478, 484, 490, single-point . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
496, 500, 506 strength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
MICAI. . .131, 320, 354, 374, 384, 415, 422, 429, tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
460, 468, 474, 484 vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
MicroGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Miller-Urey experiment. . . . . . . . . . . . . . . . . . . . .258
—N—
NaBIC . . . 322, 356, 375, 386, 416, 423, 431, 462,
Min-Max
469, 476, 486
weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Naiveté . . . . . . . . . . . . . . . . . . . . . . . 279, 536, 639
Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Natural Selection. . . . . . . . . . . . . . . . . . . . . . . . . . .260
Minimum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89, 660
ND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 172 f., 557
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
ND Landscape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 89
Needle-In-A-Haystack . . 103, 168, 175, 180, 465,
Mining
563
data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Negative
MIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
false. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .700
ML . . . . . . . . . . . . . . . . . . . . . . . . . . 187, 212, 380, 536
Negative Slope Coefficient . . . . . . . . . . . . . . . . . . 163
MLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695 Neighborhood Search . . . . . . . . . . . . . . . . . . . . . . . 512
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 Neighboring Relation . . . . . . . . . . . . . . . . . . . . . . . . 86
Modularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Nelder & Mead . 32, 104, 264, 420, 463, 501, 945
Module Network
acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 artificial neural . . 187, 191 f., 204, 273, 389,
MOEA55, 65, 72, 202, 253, 257, 259 f., 274, 279, 396, 406, 536, 541
311 Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
MOI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 – 75, 77 neutral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Moment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 random Boolean. . . . . . . . . . . . . . . . . . . . . . .174
about mean . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Neural Network
central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 artificial187, 191 f., 204, 273, 389, 396, 406,
standardized . . . . . . . . . . . . . . . . . . . . . . . . . . 664 536, 541
statistical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Neutral
Monotone network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Neutrality . . . . . . . . . . . . . 171, 556 f., 559, 569, 578
heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 NFL . . . . . . . . . . . . . . . . . . . . 95, 114, 168, 209 – 213
Monotonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 NIAH . . . . . . . . . . . . . . . . . . . 103, 168, 175, 180, 465
Monotonicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Niche Count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Monte Carlo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Niche Preemption Principle . . . . . . . . . . . . . . . . 151
Monte Carlo Algorithm . . . . . . . . . . . 106, 148, 243 NICSO . . 131, 227, 234, 236, 250, 320, 354, 374,
MOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 197 385, 415, 422, 430, 460, 468, 474, 478,
Morphogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 484, 490, 496, 500, 502, 506, 518, 521,
MPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335, 338, 407 524, 528
MSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692 NK . . . . . . . . . . . . . . . . . . . . . . . . . . 161, 179, 553, 556
Multi-Agent System . 37, 59, 125, 190, 207, 226, NKp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 556
315, 352, 382, 459, 473 NKq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 556
No Free Lunch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 difficulties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
No Free Lunch Theorem 95, 114, 168, 209 – 213 discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Node Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392 extremal . . . . . . . . . 3, 25, 32, 172, 493 f., 945
nodeWeight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 generalized . . . . . . . . . . . . . . . . . . . . . . . . . 495
Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187, 192 f. global . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 31, 509
Non-Decreasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 interactive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Non-Dominated 67 ff., 73, 79, 197 f., 200 ff., 277, lexicographic . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
279, 284, 1210 multi-objective . . . . . . . . . . . . . . . . . . . . . . . . . 55
Non-Increasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Non-Synonymous Change . . . . . . . . . . . . . . . . . . 173 prevalence . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Nonile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 weighted sum . . . . . . . . . . . . . . . . . . . . . . . . 62
Nonseparability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
full . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Norm overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Tschebychev . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 particle swarm.3, 27, 32, 37, 155, 188, 191,
Normal Distribution. . . . . . . . . . . . . . . . . . . . . . . .678 207, 463, 481 f., 580, 945
standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678 problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 103
Notation random . . . . . . . . . . . . 3, 32, 36, 116, 232, 505
reverse polish . . . . . . . . . . . . . . . . . . . . . . . . . 404 setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
NSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 structure of . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
NSGA-II . . . . . . . . . . . . . . . . . . . . 198, 202, 266, 453 taxonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
NTM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146, 149 termination criterion . . . . . . . . . . . . . . . . . . 105
Numbers Optimum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51, 89
complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 isolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
law of large . . . . . . . . . . . . . . . . . . . . . . . . . . . 668 lexicographical . . . . . . . . . . . . . . . . . . . . . . . . . 60
natural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51, 88 f.
rational . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 optimal set . . . . . . . . . . . . . . . . . . . . . . . . . 54, 57
real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 extracting . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Numerical Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 27 obtain by deletion . . . . . . . . . . . . . . . . . . 309
NWGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356 pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
updating. . . . . . . . . . . . . . . . . . . . . . . . . . . .308
—O— updating by insertion . . . . . . . . . . . . . . 309
Objective Function . . . . . . . . . . . . . . . . . 36, 44, 544
Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
conflicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
harmonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 645
independent . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
simple. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
OBUPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Off-The-Shelf . . . . . . . . . . . . . . . . . . . . . . . . . . . 43, 146
Overfitting . . . . . . . . . . . . . . 191, 193, 534, 536, 568
OneMax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Ontogenic Mapping. . . . . . . . . . . . . . . . . . . . .86, 273 Overgeneralization . . . . . . . . . . . . . . . . . . . . . . . . . 194
Ontogeny Oversimplification . . . . . . . . . . . . . . . . . . . . . 194, 568
artificial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Overspecification . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Operations Research . . . . . . . . . . . . . . . 23, 197, 509
Operator —P—
search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 p-Spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161, 179, 556
set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 PADO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Operator Correlation . . . . . . . . . . . . . . . . . . . . . . . 163 PAES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Optimality PAKDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Parallelism
Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514 implicit . . . . . . . . . . . . . . . . . . . . . 263, 345 f.
Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 intrinsic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 23, 103 Parameter
classification of . . . . . . . . . . . . . . . . . . . . . . 31 endogenous . . . . . . . . . . . . . . 90, 360, 370, 481
bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 exogenous . . . . . . . . . . . . . . . . . . . . . . . . . 90, 360
combinatorial . . . . . . . . . . . . . . . . . . . . . . . . . 181 Parental Selection . . . . . . . . . . . . . . . . . . . . . . . . 260 f.
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
derivative-based . . . . . . . . . . . . . . . . . . . . . . . . 53 frontier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
derivative-free . . . . . . . . . . . . . . . . . . . . . . . . . . 53 optimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 PPSN. . . .318, 353, 374, 384, 414, 421, 429, 459,
set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 467, 474, 484
Pareto Ranking . . . 68, 76, 260, 275 ff., 279, 281, PPT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
284, 300, 546, 1210 Preemption
niche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Partial Order . . . . . . . . . . . . . . . . . . . . . . . 65, 645 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Prehistory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Particle Swarm Optimization . 3, 27, 32, 37, 155, Premature Convergence . . . . . . . . . . 151, 482, 547
188, 191, 207, 463, 481 f., 580, 945 Preservative Selection . . . . . . . . . . . . . . . . . . . . . . 265
Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651 Prevalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 f.
Fibonacci . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566 Prevention
long . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563 convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
long k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565 Primordial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
PBIL . . . . . . . . . . . . . . . . . . . . . . . . . . 438, 442 ff., 452 Principle
real-coded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444 niche preemption . . . . . . . . . . . . . . . . . . . . . . 151
PBILC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444 Prisoner’s Dilemma . . . . . . . . . . . . . . . . . . . . . . . . 380
PDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659 f. Probabilistic Algorithm . . . . . . . . . . . . . . . . . . . . . 33
Penalty Probability
adaptive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71 von Mises [2822]. . . . . . . . . . . . . . . . . . . . . . .655
Death . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Bernoulli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Conditional . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Kolmogorov . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
Percentile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 of success. . . . . . . . . . . . . . . . . . . . . . . . .163, 172
Permission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .656
Permutation . . . . . . . . . . . . . 25, 333, 395, 643, 655 of a random variable . . . . . . . . . . . . . . . . 657
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395 Probability Density Function . . . . . . . . . . . . . . . 659
Pertinency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Probability Mass Function . . . . . . . . . . . . . . . . . 659
Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Probit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
PESA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Problem
PGM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464 bin packing. . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
Phase combinatorial . . . . . . . . . . . . . . . . . . . . . . 25, 472
juxtapositional . . . . . . . . . . . . . . . . . . . . . . . . 349 continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
primordial. . . . . . . . . . . . . . . . . . . . . . . . . . . . .348 discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Phenome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Phenotype . . . . . . . . . . . . . . . . . . . . . . . . . . . 39, 43, 82 numerical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
plasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465 optimization . . . . . . . . . . . . . . . . . . . . . . . 22, 103
Phenotype-Genotype Mapping . . . . . . . . . . . . . 464 multi-objective . . . . . . . . . . . . . . . . . . . . . . . 55
PIPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447, 449, 452 satisfiability . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Pitfall traveling salesman 25, 34, 45 f., 55, 91, 118,
siren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 122, 147, 209, 243, 327, 333, 350, 377,
Pitt approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 425, 463, 621
Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540 Vehicle Routing . . . . . . . . . . . . . . . . . . . . . . . 537
enterprise resource . . . . . . . . . . . . . . . . . . . . 536 vehicle routing . . . . 25, 147, 266, 350, 537 f.,
Plasticity 540 f., 548, 622, 634, 914
phenotypic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465 Problem Landscape . . . . . . . . . . . . . . . . . . . . . . . . . 95
Pleiotropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177, 179 Problem Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
PMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659 f. Product
Point Cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 Production Systems . . . . . . . . . . . . . . . . . . . . . . . . 457
saddle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Programming
Point Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692 genetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671 32, 37, 107, 155, 162 f., 173, 175, 180,
Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671 187, 192, 195, 262 ff., 273, 302, 304 f.,
Poisson Distribution . . . . . . . . . . . . . . . . . . . . . . . 671 379 ff., 386 f., 389, 396, 398, 400 f., 403,
Population. . . . . . . . . . . . . . . . . . . . . . . . .90, 253, 265 405 f., 408, 411 f., 427, 434, 447, 531,
Positive 534, 536, 561, 566, 568, 824, 945
false. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .700 cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . .174
Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700 grammar-guided. . . . . . . . . . .273, 381, 403
INDEX 1203
linear . . 32, 327, 381, 400, 403 – 408, 412 Recombination . . . . . . . . . . 262, 307, 335, 399, 544
standard . 32, 273, 386 f., 389, 399, 403 f., discrete . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 362
412 dominant . . . . . . . . . . . . . . . . . . . . 337, 362, 377
strongly-typed . . . . . . . . . . . . . . . . . . . . . . . . 3, homologous . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
32, 37, 107, 155, 162 f., 173, 175, 180, intermediate . . . . . . . . . . . . . . . . . . . . . . 338, 362
187, 192, 195, 262 ff., 273, 302, 304 f., multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
379 ff., 386 f., 389, 396, 398, 400 f., 403, tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
405 f., 408, 411 f., 427, 434, 447, 531, Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
534, 536, 561, 566, 568, 824, 945 Redundancy . . . . . . . . . . . . . . . . . . . . . . . 86, 174, 569
integer . . . . . . . . . . . . . . . . . . . . . . . . . 29 Reflexiveness . . . . . . . . . . . . . . . . . . . . . . . . 645
mixed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30 Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 logistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
constant innovation . . . . . . . . . . . . . . . . . . . 174 symbolic . . 119, 121, 187, 191 f., 380 f., 387,
Proximity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 401, 403, 411 f., 531 – 535, 905 f., 908,
Pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 910
PSO . . 3, 27, 32, 37, 155, 188, 191, 207, 463, 481 f.,
580, 945 binary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .644
psoReproduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482 types and properties of . . . . . . . . . . . . . 644
PVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
—Q— partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
QBB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Quadric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Repairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Quantile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
Repairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Quartile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Representation
Quenching
explicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
simulated . . . . . . . . . . . . . . . . . . . . . . 247 ff., 252
indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Quintile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Reproduction. . . . . . . . . . . . . . . . . . . . . . . . . .306, 542
asexual . . . . . . . . . . . . . . . . . . . . . . . . . . . 307, 325
—R—
Rail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 binary. . . . . . . . .335, 338, 340, 348, 398, 407
Raindrop Method . . . . . . . . . . . . . . . . 36, 235 f., 945 nullary . . . . . . . . . . . . . . . . . . . . . . 328, 340, 389
RAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 sexual . . . . . . . . . . . . . . . . . . 257, 262, 307, 325
Ramped Half-and-Half . . . . . . . . . . . . . . . . . . . . . 390 unary . . . . . 330, 333, 340, 347 f., 393, 395 ff.
Random Resistance
event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654 dominance . . . . . . . . . . . . . . . . . . . . . . . . 69, 198
Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . 653 Resistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656 Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657 REVOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
continuous . . . . . . . . . . . . . . . . . . . . . . . . . 658 RFD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 477
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658 RISC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Random Access Machine . . . . . . . . . . . . . . . . . . . 145 River Formation Dynamics . . . . . . . . . . . . . 32, 477
Random Boolean Network . . . . . . . . . . . . . . . . . . 174 RO . . . . . . . . . . . . . . . . . . . . . 3, 32, 36, 116, 232, 505
Random Keys. . . . . . . . . . . . . . . . . . .349 f., 377, 834 Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
random neighbors . . . . . . . . . . . . . . . . . . . . . 455, 554 royal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
Random Optimization . . 3, 32, 36, 116, 232, 505 variable-length . . . . . . . . . . . . . . . . . . . . . . 559
Random Sampling . . . . 113 ff., 118, 181, 240, 729 VLR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Random Selection . . . . . . . . . . . . . . . . . . . . . . . . . . 302 RoboCup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Random Walk . . . . 113 – 116, 118, 163, 168, 172, Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
174 f., 181, 210, 212, 240, 245 f., 267, Root2path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563, 565
302, 514, 732 Root2paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
Randomized Algorithm . . . . . . . . . . . . . . . . . . . . . . 33 Royal Road . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 558 f.
Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646, 661 variable-length . . . . . . . . . . . . . . . . . . . . . . . . 559
interquartile . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 VLR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Ranking Royal Road Function . . 154, 173, 455, 558 – 562,
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 566, 1221
RBN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Royal Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
Reachability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 RPCL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .454
Real-Coded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 RPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Ruggedness . . . . . . . . 161, 554, 556, 571, 573, 578 Search Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
—S— Search Procedure
S-Expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403 greedy randomized adaptive . . . . . 157, 499
SA . . . . . . . . . . . . . . . . . . . . . . . 3, 25, 32 f., 36, 156 f., Search Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
160, 181, 231, 243 – 249, 252, 308, 412, Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
463 f., 493, 541, 557, 716, 740, 945 Seeding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254, 258
Saddle Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 f., 285
Salience breeding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
high . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562 deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . 289
low. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .562 ecological . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Sample space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .653 elitist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Sampling environment. . . . . . . . . . . . . . . . . . . . . . . . . . .261
random . . . . . . . . . . 113 ff., 118, 181, 240, 729 environmental . . . . . . . . . . . . . . . . . . . . . . . . . 260
SAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 extinctive . . . . . . . . . . . . . . . . . . . . . . . . 183, 265
SC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 34 left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Scalability Requirement . . . . . . . . . . . . . . . . . . . . 219 extrema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
large . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Fitness Proportionate . . . . . . . . . . . 290, 292 f.
small . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 mating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Schedule natural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 parental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 f.
exponential . . . . . . . . . . . . . . . . . . . . . . . . . 249 preservative . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
logarithmic . . . . . . . . . . . . . . . . . . . . . . . . . 248 ranking
polynomial . . . . . . . . . . . . . . . . . . . . . . . . . 249 linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 Roulette Wheel . . . . . . . . . . . . . . . . . 290, 292 f.
job shop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 sexual . . . . . . . . . . . . . . . . . . . . . . . . . . 260 f., 287
Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . 119, 163, 341 survival . . . . . . . . . . . . . . . . . . . . 260 f., 287, 302
Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 Tournament . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Schema Theorem . . . . . 119, 167, 217, 263, 341 f., non-deterministic . . . . . . . . . . . . . . . . . . . 300
344 ff., 558 with replacement. . . . . . . . . . . . . . .298, 300
Schwefel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585 without replacement. . . . . . . . . . . . . . 298 f.
SCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 f., 305, 546 tournament . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
SEA 135, 228, 235, 237, 251, 322, 356, 376, 386, truncation. . . . . . . . . . . . . . . . . . . . . . . . . . . . .289
417, 423, 431, 462, 469, 476, 479, 486, Self-Adaptation . . . . . . . . . . . . . . 219, 360, 370, 419
491, 497, 500, 502, 506, 518, 521, 525, Self-Organization. . . . . . . . . . . . . . . . . . . . . . . . . . .420
528 criticality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
SEAL . . . . 130, 320, 354, 374, 384, 415, 422, 429, Self-Organized Criticality . . . . . . . . . . . . . . . . . . 493
460, 468, 474, 484 SEMCCO . . 322, 356, 376, 386, 417, 423, 431, 462,
Search 470, 476, 486
A⋆ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 519 Sensor Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
best-first . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519 Separability . . . . . . . . . . . . . . . . . . . . . . . . . 178, 204 f.
breadth-first . . . . . . . . 32, 115, 515, 517, 519 non . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
depth-first. . . . . . . . . . . . . .32, 115, 516 f., 519 partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
iterative deepening . . . . . . . . . . . 32, 516 Separability . . . . . . . . . . . . . . . . . . . . . . . . . 178
depth-limited . . . . . . . . . . . . . . . . . . . . . . . . 516 f. Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
greedy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 519 Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 639
harmony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 cardinality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
informed . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 518 Cartesian product . . . . . . . . . . . . . . . . . . . . . 642
local . . . . . . . . . . . . . . . . . . . . 229, 243, 463, 514 complement . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
Neighborhood . . . . . . . . . . . . . . . . . . . . . . . . . 512 connected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
State Space . . . . . . . . . . . . . . . . . . . . . . . . 32, 511 countable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
tabu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
uninformed . . . . . . . . . . . . . . . . . . . . 32, 34, 514 empty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . 641 state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647 Sparc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
membership . . . . . . . . . . . . . . . . . . . . . . . . . . . 639 SPEA 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
operations on . . . . . . . . . . . . . . . . . . . . . . . . . 641 SPEA-2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
optimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54, 57 Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368, 581
Power set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642 shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
relations between . . . . . . . . . . . . . . . . . . . . . . 640 SPICE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
special . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 Spin Glass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640 Spin-Glass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639 Splice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643 SPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
uncountable . . . . . . . . . . . . . . . . . . . . . . . . . . . 642 SQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 ff., 252
union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641 SSCI 322, 356, 376, 386, 417, 423, 431, 462, 470,
Sexual Reproduction . . . . . . . . . 257, 262, 307, 325 476, 486
Sexual Selection . . . . . . . . . . . . . . . . . . . . . 260 f., 287 SSEA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266, 324
SGP . . . . . . . . 32, 273, 386 f., 389, 399, 403 f., 412 SSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
SH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . 662 f.
Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277, 546 Standard Genetic Programming . 32, 273, 386 f.,
Variety Preserving . . . . . . . . . . . . . . . . . . . . 279 389, 399, 403 f., 412
SHCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 f. State Machine
SHCLVND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
SI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37 State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Simple Order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646 Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 511
Simplex State Space Search . . . . . . . . . . . 34 f., 511 f., 514 f.
downhill . . . 32, 104, 264, 420, 463, 501, 945 Static-φ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Simulated Annealing. . . . .3, 25, 32 f., 36, 156 f., Satisfiability Problem . . . . . . . . . . . . . . . . . 147
160, 181, 231, 243 – 249, 252, 308, 412, Statistical Independence. . . . . . . . . . . . . . . . . . . .657
463 f., 493, 541, 557, 716, 740, 945 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
Simulated Quenching . . . . . . . . . . . . . . . . . . 247 Steady-State 157, 255, 266, 286, 289, 324, 546 ff.
temperature schedule . . . . . . . . . . . . . . . . . . 247 Steady-State Evolutionary Algorithm . 266, 324
Simulated Quenching . . . . . . . . . . . 156, 247 ff., 252 STeP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .134
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 STGP . 3, 32, 37, 107, 155, 162 f., 173, 175, 180,
annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 187, 192, 195, 262 ff., 273, 302, 304 f.,
Simulation Dynamics . . . . . . . . . . . . . . . . . . . . . . . 163 379 ff., 386 f., 389, 396, 398, 400 f., 403,
Single-Objectivity . . . . . . . . . . . . . . . . . . . 39, 54, 56, 405 f., 408, 411 f., 427, 434, 447, 531,
60 ff., 70 f., 88, 93, 95, 100 f., 106, 109, 534, 536, 561, 566, 568, 824, 945
158, 160, 198, 201 f., 209, 238, 253 f., Sticky Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
274, 278, 287, 325, 360, 362, 434, 439, Stigmergy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
501, 541, 580, 713, 724, 905, 1221 ff. sematectonic . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Siren Pitfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 sign-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
SIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476, 486 Stochastic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 33
Skewness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665 Stochastic Gradient Descent . . . . . . . . . . . . . . . . 232
Small-scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Stopping Criterion . . . . . . . . . . . . . . . . . . . . . . . . . 105
SMC 131, 227, 234, 236, 251, 320, 354, 374, 385, Strategy
415, 422, 430, 460, 468, 474, 478, 484, evolutionary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
490, 496, 500, 502, 506, 518, 521, 524, Strongly-Typed Genetic Programming . . . . 387,
528 392 f., 398
SOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
SoCPaR . 133, 321, 355, 375, 385, 416, 423, 430, Student’s t-Distribution . . . . . . . . . . . . . . . . . . . . 687
461, 469, 475, 485 Subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
Soft Computing . . . . . . . . . . . . . . . . . . . . . . . . . 31, 34 Subtree . . . . . . . . . . 181, 393, 395 – 400, 404, 1211
Solution Success
candidate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 probability of . . . . . . . . . . . . . . . . . . . . . 163, 172
Space Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 sqr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
problem . . . . . . . . . . . . . . . . . . . . . . . . . . 43 weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653 Support Vector Machine . . . . . . . . . . . . . . . . . . . 536
search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Surjective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Surjectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 Tree Genomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Survival Selection . . . . . . . . . . . . . . . 260 f., 287, 302 Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 Truck-Meets-Truck . . . . . . . . . . . . . . . . . . . . . . . . . 544
Swap Body . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539, 542 Truncation Selection . . . . . . . . . . . . . . . . . . . . . . . 289
Swarm TS . . 3, 25, 32, 36, 155, 231, 463 f., 489, 512, 541
particle/optimization . . . . 3, 27, 32, 37, 155, TSoR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
188, 191, 207, 463, 481 f., 580, 945 TSP . . 25, 34, 45 f., 55, 91, 118, 122, 147, 209, 243,
Swarm Intelligence. . . . . . . . . . . . . . . . . . . . . . .32, 37 327, 333, 350, 377, 425, 463, 621
Symbolic Regression 119, 121, 187, 191 f., 380 f., TSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
387, 401, 403, 411 f., 531 – 535, 905 f., Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
908, 910 Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534 Turing Completeness . . . . . . . . . . . . . . . . . . . . . . . 406
Symmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 Turing Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Symmetry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646 deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Synonymous Change . . . . . . . . . . . . . . . . . . . . . . . 173 non-deterministic. . . . . . . . . . . . . . . . . . . . . .146
System Two-Point Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
classifier . . . . . . . . . . . . . . . . . . . . . . . . 339, 457 f. Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
learning . . . . . . 3, 32, 264, 457 f., 536, 945 Type 1 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
multi-agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Type 2 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700

—T— —U—
Ugly Duckling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .212
t-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
UMDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434, 437
TAAI . . . . . . . . . . . . . . . . . . . . . . . . 134, 462, 476, 486
Unbiasedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Tabu Search . 3, 25, 32, 36, 155, 231, 463 f., 489,
Underspecification . . . . . . . . . . . . . . . . . . . . . . . . . 348
512, 541
Uni-Modality . . . . . . . . . . . . . . . . . . . . . 165, 452, 454
TAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Uniform Distribution
TAINN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
Technological Landscape . . . . . . . . . . . . . . . . . . . 556
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Telematics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .549 f.
uniformSelectNode . . . . . . . . . . . . . . . . . . . . . . . . . 392
Temperature Schedule . . . . . . . . . . . . . . . . . . . . . . 247
Uninformed Search . . . . . . . . . . . . . . . . . . . . . 32, 514
Termination Criterion . . . . . . . . . . . . . . . . . . . . . . 105
UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 346, 362
Test
statistical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
—V—
TGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381, 386, 389 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
Theorem epistatic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .163
central limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 681 estimator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .662
Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . 341, 558 Variation
ugly duckling. . . . . . . . . . . . . . . . . . . . . . . . . .212 coefficient of . . . . . . . . . . . . . . . . . . . . . . . . . . 663
Threshold Variety Preserving . . . . . . . . . . . . . . . . . . . . . . . . . 279
error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163, 562 Vehicle Routing Problem . . . . . 25, 147, 266, 350,
Threshold Selection . . . . . . . . . . . . . . . . . . . . . . . . 289 537 f., 540 f., 548, 622, 634, 914
Time Consumption . . . . . . . . . . . . . . . . . . . . . . . . . 514 Capacitated . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
TODO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Total Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646 Multiple Depot . . . . . . . . . . . . . . . . . . . . . . . . 538
Tournament Selection . . . . . . . . . . . . . . . . . . . . . . 573 Periodic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
TPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 with Backhauls . . . . . . . . . . . . . . . . . . . . . . . . 538
Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 with Pick-up and Delivery . . . . . . . . . . . . . 538
Train . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539 VIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Transistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 VRP . . 25, 147, 266, 350, 537 f., 540 f., 548, 622,
Transitive . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 634, 914
Transitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645 VRPB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Trap Function . . . . . . . . . . . . . . . . . . . . . . . 167, 557 f. VRPPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Traveling Salesman Problem . 25, 34, 45 f., 55, 91,
118, 122, 147, 209, 243, 327, 333, 350, —W—
377, 425, 463, 621 Walk
Tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 Adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 adaptive
royal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 fitter dynamics . . . . . . . . . . . . . . . . . . . . . 116
greedy dynamics . . . . . . . . . . . . . . . . . . . . 116
one-mutant . . . . . . . . . . . . . . . . . . . . . . . . . 116
drunkard’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
random . . . . . 113 – 116, 118, 163, 168, 172,
174 f., 181, 210, 212, 240, 245 f., 267,
302, 514, 732
WCCI . . . 318, 353, 374, 384, 414, 421, 429, 460,
467, 474, 484
Wechselbrücksteuerung . . . . . . . . . . . . . . . . . . . . . 537
Weierstraß Function . . . . . . . . . . . . . . . . . . . . . . . . 165
Weighted Sum . . . . . . . . . . . . . . . . . . . . . . 62, 78, 275
WHCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Wildcard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
WOMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
WOPPLOT . . 133, 228, 235, 237, 251, 321, 355,
375, 385, 416, 423, 430, 461, 469, 475,
478, 485, 491, 496, 500, 502, 506, 518,
521, 524, 528
WPBA . . 322, 356, 375, 386, 416, 423, 431, 462,
469, 476, 486
Wrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 f.
WSC . 131, 320, 354, 374, 385, 415, 422, 430, 461,
468, 475, 484

—X—
XCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458

—Y—
Yellow Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550

—Z—
Z80 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
ZCS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .458
List of Figures

1.1 Terms that hint: “Optimization Problem” (inspired by [428]) . . . . . . . . . . . . . 21


1.2 One example instance of a bin packing problem. . . . . . . . . . . . . . . . . . . . . 25
1.3 An example for circuit layouts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4 The graph of g(x) as given in Equation 1.3 with the four roots x⋆1 to x⋆4 . . . . . . . . 28
1.5 A truss optimization problem like the one given in [18]. . . . . . . . . . . . . . . . . 29
1.6 A rough sketch of the problem type relations. . . . . . . . . . . . . . . . . . . . . . . 30
1.7 Overview of optimization algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.8 Black box treatment of objective functions in metaheuristics. . . . . . . . . . . . . . 35
1.9 The robotic soccer team CarpeNoctem (on the right side) on the fourth day of the
RoboCup German Open 2009. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.1 A sketch of the stone’s throw in Example E2.1. . . . . . . . . . . . . . . . . . . . . . 44


2.2 An example for Traveling Salesman Problems, based on data from GoogleMaps. . . . 45

3.1 Global and local optima of a two-dimensional function. . . . . . . . . . . . . . . . . . 52


3.2 Examples for functions with multiple optima . . . . . . . . . . . . . . . . . . . . . . 54
3.3 The possible relations between objective functions as given in [2228, 2230]. . . . . . 56
3.4 Two functions f1 and f2 with different maxima x̂1 and x̂2 . . . . . . . . . . . . . . . . 58
3.5 Two functions f3 and f4 with different minima x̌1 , x̌2 , x̌3 , and x̌4 . . . . . . . . . . . . 58
3.6 Optimization using the weighted sum approach (first example). . . . . . . . . . . . . 62
3.7 Optimization using the weighted sum approach (second example). . . . . . . . . . . 63
3.8 A problematic constellation for the weighted sum approach. . . . . . . . . . . . . . . 64
3.9 Examples of Pareto fronts [2915]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.10 Optimization using the Pareto Frontier approach. . . . . . . . . . . . . . . . . . . . . 67
3.11 Optimization using the Pareto Frontier approach (second example). . . . . . . . . . 68
3.12 Optimization using the Pareto-based Method of Inequalities approach (first example). 73
3.13 Optimization using the Pareto-based Method of Inequalities approach (second example). 74
3.14 An external decision maker providing an Evolutionary Algorithm with utility values. 76

4.1 Examples for real-coded candidate solutions. . . . . . . . . . . . . . . . . . . . . . . 82


4.2 The relation of genome, genes, and the problem space. . . . . . . . . . . . . . . . . . 83
4.3 Neighborhoods under non-injective genotype-phenotype mappings. . . . . . . . . . . 87
4.4 Minima of a single objective function. . . . . . . . . . . . . . . . . . . . . . . . . . . 88

5.1 An example optimization problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96


5.2 The problem landscape of the example problem derived with searchOp1 . . . . . . . . 97
5.3 The problem landscape of the example problem derived with searchOp2 . . . . . . . . 98

6.1 Spaces, Sets, and Elements involved in an optimization process. . . . . . . . . . . . . 102

7.1 Use dedicated algorithms! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109


7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
8.1 Some examples for random walks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

9.1 A graph coloring-based example for properties and formae. . . . . . . . . . . . . . . 120


9.2 Example for formae in symbolic regression. . . . . . . . . . . . . . . . . . . . . . . . 121

11.1 Different possible properties of fitness landscapes (minimization). . . . . . . . . . . . 140

12.1 Illustration of some functions and their rising speed, gleaned from [2365]. . . . . . 145
12.2 Turing machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

13.1 Premature convergence in the objective space. . . . . . . . . . . . . . . . . . . . . . . 152


13.2 Pareto front approximation sets [2915]. . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.3 Loss of Diversity leads to Premature Convergence. . . . . . . . . . . . . . . . . . . . 155
13.4 Exploration versus Exploitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
13.5 Multi-Objectivization à la Sum-of-Parts. . . . . . . . . . . . . . . . . . . . . . . . . . 159

14.1 The landscape difficulty increases with increasing ruggedness. . . . . . . . . . . . . . 161

15.1 Increasing landscape difficulty caused by deceptivity. . . . . . . . . . . . . . . . . . . 167

16.1 Landscape difficulty caused by neutrality. . . . . . . . . . . . . . . . . . . . . . . . . 171


16.2 Possible positive influence of neutrality. . . . . . . . . . . . . . . . . . . . . . . . . . 173

17.1 Pleiotropy and epistasis in a dinosaur’s genome. . . . . . . . . . . . . . . . . . . . . . 178


17.2 The influence of epistasis on the fitness landscape. . . . . . . . . . . . . . . . . . . . 180
17.3 Epistasis in the permutation-representation for bin packing. . . . . . . . . . . . . . . 182

18.1 A robust local optimum vs. an “unstable” global optimum. . . . . . . . . . . . . . . 189

19.1 Overfitting due to complexity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192


19.2 Fitting noise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
19.3 Oversimplification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

20.1 Approximated distribution of the domination rank in random samples for several
population sizes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
20.2 The proportion of non-dominated candidate solutions for several population sizes ps
and dimensionalities n. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

23.1 A visualization of the No Free Lunch Theorem. . . . . . . . . . . . . . . . . . . . . . 211


23.2 Fitness Landscapes and the NFL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
23.3 The puzzle of optimization algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . 214

24.1 Examples for an increase of the dimensionality of a search space G (1d) to G′ (2d). . 220

27.1 A sketch of annealing in metallurgy as role model for Simulated Annealing. . . . . . 244
27.2 Different temperature schedules for Simulated Annealing. . . . . . . . . . . . . . . . 248

28.1 The basic cycle of Evolutionary Algorithms. . . . . . . . . . . . . . . . . . . . . . . . 254


28.2 The family of Evolutionary Algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . 264
28.3 The configuration parameters of Evolutionary Algorithms. . . . . . . . . . . . . . . . 269
28.4 An example scenario for Pareto ranking. . . . . . . . . . . . . . . . . . . . . . . . . . 276
28.5 The sharing potential in the Variety Preserving Example E28.17. . . . . . . . . . . . 282
28.6 Selection with and without replacement. . . . . . . . . . . . . . . . . . . . . . . . . . 286
28.7 The four example fitness cases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
28.8 The number of expected offspring in truncation selection. . . . . . . . . . . . . . . . 290
28.9 Examples for the idea of roulette wheel selection. . . . . . . . . . . . . . . . . . . . . 294
28.10 The number of expected offspring in roulette wheel selection. . . . . . . . . . . . . 295
28.11 The number of expected offspring in tournament selection. . . . . . . . . . . . . . . 297
28.12 The expected numbers of occurrences for different values of n and c. . . . . . . . . 305
29.1 A Sketch of a DNA Sequence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
29.2 Value-altering mutation of string chromosomes. . . . . . . . . . . . . . . . . . . . . . 330
29.3 Mutation in binary- and real-coded GAs . . . . . . . . . . . . . . . . . . . . . . . . . 332
29.4 Permutation applied to a string chromosome. . . . . . . . . . . . . . . . . . . . . . . 333
29.5 Crossover (recombination) operators for fixed-length string genomes. . . . . . . . . . 335
29.6 The points sampled by different crossover operators. . . . . . . . . . . . . . . . . . . 338
29.7 Search operators for variable-length strings (additional to those from Section 29.3.2
and Section 29.3.3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
29.8 Crossover of variable-length string chromosomes. . . . . . . . . . . . . . . . . . . . . 341
29.9 An example for schemata in a three bit genome. . . . . . . . . . . . . . . . . . . . . 342
29.10 Two linked genes and their destruction probability under single-point crossover. . . 347
29.11 An example for two subsequent applications of the inversion operation [1196]. . . . 348

30.1 Sampling Results from Normal Distributions . . . . . . . . . . . . . . . . . . . . . . 364

31.1 Genetic Programming in the context of the IPO model. . . . . . . . . . . . . . . . . 379


31.2 Some important works on Genetic Programming illustrated on a time line. . . . . . 380
31.3 The formula e^(sin x) + 3√|x| expressed as a tree. . . . . . . . . . . . . . . . . . . . 387
31.4 The AST representation of algorithms/programs. . . . . . . . . . . . . . . . . . . . . 388
31.5 Tree creation by the full method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
31.6 Tree creation by the grow method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
31.7 Possible tree mutation operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
31.8 Mutation of a mathematical expression represented as a tree. . . . . . . . . . . . . . 395
31.9 Tree permutation – (asexually) shuffling subtrees. . . . . . . . . . . . . . . . . . . . . 395
31.10Tree editing – (asexual) optimization. . . . . . . . . . . . . . . . . . . . . . . . . . . 396
31.11An example for tree encapsulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
31.12An example for tree wrapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
31.13An example for tree lifting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
31.14Tree recombination by exchanging subtrees. . . . . . . . . . . . . . . . . . . . . . . . 399
31.15Recombination of two mathematical expressions represented as trees. . . . . . . . . . 399
31.16Comparison of functions and macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
31.17The impact of insertion operations in Genetic Programming . . . . . . . . . . . . . . 405

34.1 Comparison between information in a population and in a model. . . . . . . . . . . . 428


34.2 A rough sketch of the UMDA process. . . . . . . . . . . . . . . . . . . . . . . . . . . 435

36.1 The Baldwin effect. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465

46.1 An example for a possible search space sketched as acyclic graph. . . . . . . . . . . . 513

49.1 An example genotype for symbolic regression with x = x ∈ R1 . . . . . . . . . . . . 532


49.2 ϕ(x), the evolved ψ1⋆ (x) ≡ ϕ(x), and ψ2⋆ (x). . . . . . . . . . . . . . . . . . . . . . . . 535
49.3 The freight traffic on German roads in billion tons*kilometer. . . . . . . . . . . . . . 537
49.4 Different flavors of the VRP and their relation to the in.west system. . . . . . . . . . 539
49.5 The structure of the phenotypes x. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
49.6 Some mutation operators from the freight planning EA. . . . . . . . . . . . . . . . . 543
49.7 Two examples for the freight plan evolution. . . . . . . . . . . . . . . . . . . . . . . . 548
49.8 An overview of the in.west system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
49.9 The YellowBox – a mobile sensor node. . . . . . . . . . . . . . . . . . . . . . . . . . 550

50.1 Ackley’s “Trap” function [19, 1467]. . . . . . . . . . . . . . . . . . . . . . . . . . . . 558


50.2 The perfect Royal Trees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
50.3 Example fitness evaluation of Royal Trees . . . . . . . . . . . . . . . . . . . . . . . . 561
50.4 The root2path for l = 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
50.5 An example for the fitness landscape model. . . . . . . . . . . . . . . . . . . . . . . . 567
50.6 An example for the epistasis mapping z → e4 (z). . . . . . . . . . . . . . . . . . . . . 570
50.7 An example for rγ with γ = 0..10 and f̂ = 5. . . . . . . . . . . . . . . . . . . . . . . . 571
50.8 The basic problem hardness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
50.9 Experimental results for the ruggedness. . . . . . . . . . . . . . . . . . . . . . . . . . 574
50.10 Experiments with ruggedness and deceptiveness. . . . . . . . . . . . . . . . . . . . . 576
50.11 Experiments with epistasis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
50.12 The results from experiments with neutrality. . . . . . . . . . . . . . . . . . . . . . . 578
50.13 Expectation and reality: Experiments involving both epistasis and neutrality . . . . 579
50.14 Expectation and reality: Experiments involving both ruggedness and epistasis . . . 579
50.15 Examples for the sphere function (v = 100). . . . . . . . . . . . . . . . . . . . . . . . 581
50.16 Examples for Schwefel’s problem 2.22. . . . . . . . . . . . . . . . . . . . . . . . . . . 583
50.17 Examples for Schwefel’s problem 1.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
50.18 Examples for Schwefel’s problem 2.21. . . . . . . . . . . . . . . . . . . . . . . . . . . 587
50.19 The 2d-example for the generalized Rosenbrock’s function. . . . . . . . . . . . . . . . 589
50.20 The 1d-example for the step function (v = 10). . . . . . . . . . . . . . . . . . . . . . 591
50.21 Examples for the quartic function with noise. . . . . . . . . . . . . . . . . . . . . . . 593
50.22 Examples for Schwefel’s problem 2.26. . . . . . . . . . . . . . . . . . . . . . . . . . . 595
50.23 Examples for the generalized Rastrigin’s function. . . . . . . . . . . . . . . . . . . . 597
50.24 Examples for Ackley’s function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
50.25 Examples for the generalized Griewank function. . . . . . . . . . . . . . . . . . . . . 601
50.26 Examples for the generalized penalized function 1. . . . . . . . . . . . . . . . . . . . 604
50.27 Examples for the generalized penalized function 2. . . . . . . . . . . . . . . . . . . . 606
50.28 Examples for Levy’s function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
50.29 A close-to-real-world Vehicle Routing Problem . . . . . . . . . . . . . . . . . . . . . 622

51.1 Set operations performed on sets A and B inside a set A. . . . . . . . . . . . . . . . 641


51.2 Properties of a binary relation R with domain A and codomain B. . . . . . . . . . . 644

53.1 Examples for the discrete uniform distribution . . . . . . . . . . . . . . . . . . . . . 670


53.2 Examples for the Poisson distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 672
53.3 Examples for the binomial distribution. . . . . . . . . . . . . . . . . . . . . . . . . . 675
53.4 Some examples for the continuous uniform distribution. . . . . . . . . . . . . . . . . 677
53.5 Examples for the normal distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 679
53.6 Some examples for the exponential distribution. . . . . . . . . . . . . . . . . . . . . . 683
53.7 Examples for the χ2 distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
53.8 Examples for Student’s t-distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 688
53.9 The PMF and CMF of the dice throw . . . . . . . . . . . . . . . . . . . . . . . . . . 690
53.10 The numbers thrown in the dice example . . . . . . . . . . . . . . . . . . . . . . . . 691
List of Tables

2.1 The unique tours for Example E2.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.1 Applications and Examples of Multi-Objective Optimization. . . . . . . . . . . . . . 59


3.2 Applications and Examples of Constraint Optimization. . . . . . . . . . . . . . . . . 70

10.1 Applications and Examples of Optimization. . . . . . . . . . . . . . . . . . . . . . . . 123


10.2 Conferences and Workshops on Optimization. . . . . . . . . . . . . . . . . . . . . . . 126

18.1 Applications and Examples of robust methods. . . . . . . . . . . . . . . . . . . . . . 189

20.1 Applications and Examples of Many-Objective Optimization. . . . . . . . . . . . . . 197

21.1 Applications and Examples of Large-Scale Optimization. . . . . . . . . . . . . . . . . 203

22.1 Applications and Examples of Dynamic Optimization. . . . . . . . . . . . . . . . . . 207

25.1 Applications and Examples of Metaheuristics. . . . . . . . . . . . . . . . . . . . . . . 226


25.2 Conferences and Workshops on Metaheuristics. . . . . . . . . . . . . . . . . . . . . . 227

26.1 Applications and Examples of Hill Climbing. . . . . . . . . . . . . . . . . . . . . . . 234


26.2 Conferences and Workshops on Hill Climbing. . . . . . . . . . . . . . . . . . . . . . . 234
26.3 Applications and Examples of Raindrop Method. . . . . . . . . . . . . . . . . . . . . 236
26.4 Conferences and Workshops on Raindrop Method. . . . . . . . . . . . . . . . . . . . 236
26.5 Bin Packing test data from [2422], Data Set 1. . . . . . . . . . . . . . . . . . . . . . 239

27.1 Applications and Examples of Simulated Annealing. . . . . . . . . . . . . . . . . . . 250


27.2 Conferences and Workshops on Simulated Annealing. . . . . . . . . . . . . . . . . . . 250

28.1 The Pareto domination relation of the individuals illustrated in Figure 28.4. . . . . . 277
28.2 An example for Variety Preserving based on Figure 28.4. . . . . . . . . . . . . . . . . 283
28.3 The distance and sharing matrix of the example from Table 28.2. . . . . . . . . . . . 283
28.4 Applications and Examples of Evolutionary Algorithms. . . . . . . . . . . . . . . . . 314
28.5 Conferences and Workshops on Evolutionary Algorithms. . . . . . . . . . . . . . . . 317

29.1 Applications and Examples of Genetic Algorithms. . . . . . . . . . . . . . . . . . . . 351


29.2 Conferences and Workshops on Genetic Algorithms. . . . . . . . . . . . . . . . . . . 353

30.1 Applications and Examples of Evolution Strategies. . . . . . . . . . . . . . . . . . . . 373


30.2 Conferences and Workshops on Evolution Strategies. . . . . . . . . . . . . . . . . . . 373

31.1 Applications and Examples of Genetic Programming. . . . . . . . . . . . . . . . . . . 382


31.2 Conferences and Workshops on Genetic Programming. . . . . . . . . . . . . . . . . . 383
31.3 Applications and Examples of Linear Genetic Programming. . . . . . . . . . . . . . . 408
31.4 Applications and Examples of Grammar-Guided Genetic Programming. . . . . . . . 409
31.5 Applications and Examples of Graph-based Genetic Programming. . . . . . . . . . . 409

32.1 Applications and Examples of Evolutionary Programming. . . . . . . . . . . . . . . . 414


32.2 Conferences and Workshops on Evolutionary Programming. . . . . . . . . . . . . . . 414

33.1 Applications and Examples of Differential Evolution. . . . . . . . . . . . . . . . . . . 421


33.2 Conferences and Workshops on Differential Evolution. . . . . . . . . . . . . . . . . . 421

34.1 Applications and Examples of Estimation Of Distribution Algorithms. . . . . . . . . 428


34.2 Conferences and Workshops on Estimation Of Distribution Algorithms. . . . . . . . 429

35.1 Applications and Examples of Learning Classifier Systems. . . . . . . . . . . . . . . . 459


35.2 Conferences and Workshops on Learning Classifier Systems. . . . . . . . . . . . . . . 459

36.1 Applications and Examples of Memetic Algorithms. . . . . . . . . . . . . . . . . . . 467


36.2 Conferences and Workshops on Memetic Algorithms. . . . . . . . . . . . . . . . . . . 467

37.1 Applications and Examples of Ant Colony Optimization. . . . . . . . . . . . . . . . . 473


37.2 Conferences and Workshops on Ant Colony Optimization. . . . . . . . . . . . . . . . 473

38.1 Applications and Examples of River Formation Dynamics. . . . . . . . . . . . . . . . 478


38.2 Conferences and Workshops on River Formation Dynamics. . . . . . . . . . . . . . . 478

39.1 Applications and Examples of Particle Swarm Optimization. . . . . . . . . . . . . . . 483


39.2 Conferences and Workshops on Particle Swarm Optimization. . . . . . . . . . . . . . 483

40.1 Applications and Examples of Tabu Search. . . . . . . . . . . . . . . . . . . . . . . . 490


40.2 Conferences and Workshops on Tabu Search. . . . . . . . . . . . . . . . . . . . . . . 490

41.1 Applications and Examples of Extremal Optimization. . . . . . . . . . . . . . . . . . 496


41.2 Conferences and Workshops on Extremal Optimization. . . . . . . . . . . . . . . . . 496

42.1 Applications and Examples of GRASPs. . . . . . . . . . . . . . . . . . . . . . . . . . 500


42.2 Conferences and Workshops on GRASPs. . . . . . . . . . . . . . . . . . . . . . . . . 500

43.1 Applications and Examples of Downhill Simplex. . . . . . . . . . . . . . . . . . . . . 502


43.2 Conferences and Workshops on Downhill Simplex. . . . . . . . . . . . . . . . . . . . 502

44.1 Conferences and Workshops on Random Optimization. . . . . . . . . . . . . . . . . . 506

46.1 Applications and Examples of Uninformed Search. . . . . . . . . . . . . . . . . . . . 518


46.2 Conferences and Workshops on Uninformed Search. . . . . . . . . . . . . . . . . . . . 518
46.3 Applications and Examples of Informed Search. . . . . . . . . . . . . . . . . . . . . . 520
46.4 Conferences and Workshops on Informed Search. . . . . . . . . . . . . . . . . . . . . 521

47.1 Applications and Examples of Branch And Bound. . . . . . . . . . . . . . . . . . . . 524


47.2 Conferences and Workshops on Branch And Bound. . . . . . . . . . . . . . . . . . . 524

48.1 Applications and Examples of Cutting-Plane Method. . . . . . . . . . . . . . . . . . 528


48.2 Conferences and Workshops on Cutting-Plane Method. . . . . . . . . . . . . . . . . . 528

49.1 Sample Data A = {(xi , yi ) : i ∈ [0..8]} for Equation 49.8 . . . . . . . . . . . . . . . . 534


49.2 The configurations used in the full-factorial experiments. . . . . . . . . . . . . . . . . 546
49.3 The measurements taken during the experiments. . . . . . . . . . . . . . . . . . . . . 546
49.4 The best and the worst evaluation results in the full-factorial tests. . . . . . . . . . . 547

50.1 Some long Root2paths for l from 1 to 11 with underlined bridge elements. . . . . . 564
50.2 Properties of the sphere function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
50.3 Results obtained for the sphere function with τ function evaluations. . . . . . . . . . 581
50.4 Properties of Schwefel’s problem 2.22. . . . . . . . . . . . . . . . . . . . . . . . . . . 583
50.5 Results obtained for Schwefel’s problem 2.22 with τ function evaluations. . . . . . . 583
50.6 Properties of Schwefel’s problem 1.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
50.7 Results obtained for Schwefel’s problem 1.2 with τ function evaluations. . . . . . . . 585
50.8 Properties of Schwefel’s problem 2.21. . . . . . . . . . . . . . . . . . . . . . . . . . . 587
50.9 Results obtained for Schwefel’s problem 2.21 with τ function evaluations. . . . . . . 587
50.10 Properties of the generalized Rosenbrock’s function. . . . . . . . . . . . . . . . . . . 589
50.11 Results obtained for the generalized Rosenbrock’s function with τ function evaluations. 590
50.12 Properties of the step function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
50.13 Results obtained for the step function with τ function evaluations. . . . . . . . . . . 591
50.14 Properties of the quartic function with noise. . . . . . . . . . . . . . . . . . . . . . . 593
50.15 Results obtained for the quartic function with noise with τ function evaluations. . . 593
50.16 Properties of Schwefel’s problem 2.26. . . . . . . . . . . . . . . . . . . . . . . . . . . 595
50.17 Results obtained for Schwefel’s problem 2.26 with τ function evaluations. . . . . . . 595
50.18 Properties of the generalized Rastrigin’s function. . . . . . . . . . . . . . . . . . . . 597
50.19 Results obtained for the generalized Rastrigin’s function with τ function evaluations. 597
50.20 Properties of Ackley’s function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
50.21 Results obtained for Ackley’s function with τ function evaluations. . . . . . . . . . . 600
50.22 Properties of the generalized Griewank function. . . . . . . . . . . . . . . . . . . . . 601
50.23 Results obtained for the generalized Griewank function with τ function evaluations. 601
50.24 Properties of the generalized penalized function 1. . . . . . . . . . . . . . . . . . . . 604
50.25 Results obtained for the generalized penalized function 1 with τ function evaluations. 605
50.26 Properties of the generalized penalized function 2. . . . . . . . . . . . . . . . . . . . 606
50.27 Results obtained for the generalized penalized function 2 with τ function evaluations. 607
50.28 Properties of Levy’s function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
50.29 Results obtained for Levy’s function with τ function evaluations. . . . . . . . . . . . 608
50.30 Properties of the shifted sphere function. . . . . . . . . . . . . . . . . . . . . . . . . 610
50.31 Results obtained for the shifted sphere function with τ function evaluations. . . . . 610
50.32 Properties of the shifted and rotated elliptic function. . . . . . . . . . . . . . . . . . 611
50.33 Results obtained for the shifted and rotated elliptic function with τ function evaluations. 611
50.34 Properties of shifted Schwefel’s problem 2.21. . . . . . . . . . . . . . . . . . . . . . . 612
50.35 Properties of Schwefel’s modified problem 2.21 with optimum on the bounds. . . . . 613
50.36 Results obtained for Schwefel’s modified problem 2.21 with optimum on the bounds with τ function evaluations. 613
50.37 Properties of the shifted Rosenbrock’s function. . . . . . . . . . . . . . . . . . . . . . 614
50.38 Results obtained for the shifted Rosenbrock’s function with τ function evaluations. . 614
50.39 Properties of shifted Ackley’s function. . . . . . . . . . . . . . . . . . . . . . . . . . . 616
50.40 Properties of the shifted and rotated Ackley’s function. . . . . . . . . . . . . . . . . 617
50.41 Results obtained for the shifted and rotated Ackley’s function with τ function evaluations. 617
50.42 Properties of the shifted Rastrigin’s function. . . . . . . . . . . . . . . . . . . . . . . 618
50.43 Results obtained for the shifted Rastrigin’s function with τ function evaluations. . . 618
50.44 Properties of the shifted and rotated Rastrigin’s function. . . . . . . . . . . . . . . . 619
50.45 Results obtained for the shifted and rotated Rastrigin’s function with τ function evaluations. 619
50.46 Properties of Schwefel’s problem 2.13. . . . . . . . . . . . . . . . . . . . . . . . . . . 620
50.47 Results obtained for Schwefel’s problem 2.13 with τ function evaluations. . . . . . . 620
50.48 Properties of the shifted Griewank function. . . . . . . . . . . . . . . . . . . . . . . . 621

53.1 Special Quantiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667


53.2 Parameters of the discrete uniform distribution. . . . . . . . . . . . . . . . . . . . . . 669
53.3 Parameters of the Poisson distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 671
53.4 Parameters of the Binomial distribution. . . . . . . . . . . . . . . . . . . . . . . . . . 674
53.5 Parameters of the continuous uniform distribution. . . . . . . . . . . . . . . . . . . . 676
53.6 Parameters of the normal distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 678
53.7 Some values of the standardized normal distribution. . . . . . . . . . . . . . . . . . . 680
53.8 Parameters of the exponential distribution. . . . . . . . . . . . . . . . . . . . . . . . 682
53.9 Parameters of the χ2 distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
53.10 Some values of the χ2 distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
53.11 Parameters of the Student’s t-distribution. . . . . . . . . . . . . . . . . . . . . . . . 687
53.12 Table of Student’s t-distribution with right-tail probabilities. . . . . . . . . . . . . . 689
53.13 Parameters of the dice throw experiment. . . . . . . . . . . . . . . . . . . . . . . . . 691
53.14 Example for paired samples (a, b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
53.15 Example for unpaired samples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702

54.1 The precise developer environment used for implementing and testing. . . . . . . . . 705
List of Algorithms

1.1 x ←− “optimizeBinPacking”(k, b, a) . . . . . . . . . . . . . . . . (see Example E1.1) . . . . . . . . 40

6.1 “Example Iterative Algorithm” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

8.1 x ←− randomSampling(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114


8.2 x ←− randomWalk(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.3 x ←− exhaustiveSearch(f, y̆) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

26.1 x ←− hillClimbing(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230


26.2 X ←− hillClimbingMO(“OptProb”, as) . . . . . . . . . . . . . . . . . . . . . . . . . . 231
26.3 X ←− hillClimbingMORR(“OptProb”, as) . . . . . . . . . . . . . . . . . . . . . . . . 233

27.1 x ←− simulatedAnnealing(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245

28.1 X ←− simpleEA(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255


28.2 X ←− simpleMOEA(“OptProb”, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . 256
28.3 X ←− elitistMOEA(“OptProb”, ps, as, mps) . . . . . . . . . . . . . . . . . . . . . . . . 268
28.4 v2 ←− assignFitnessParetoRank(pop, cmpf ) . . . . . . . . . . . . . . . . . . . . . . . . 276
28.5 v ←− assignFitnessVariety(pop, cmpf ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
28.6 v ←− assignFitnessTournament(q, r, pop, cmpf ) . . . . . . . . . . . . . . . . . . . . . . 285
28.7 mate ←− truncationSelectionr (k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . . . . 289
28.8 mate ←− truncationSelectionw (mps, pop, v) . . . . . . . . . . . . . . . . . . . . . . . . 289
28.9 mate ←− rouletteWheelSelectionr (pop, v, mps) . . . . . . . . . . . . . . . . . . . . . . 292
28.10 mate ←− rouletteWheelSelectionw (pop, v, mps) . . . . . . . . . . . . . . . . . . . . . . 293
28.11 mate ←− tournamentSelectionr (k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . . . 298
28.12 mate ←− tournamentSelectionw (k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . . . 298
28.13 mate ←− tournamentSelectionw (k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . . . 299
28.14 mate ←− tournamentSelectionNDr (η, k, pop, v, mps) . . . . . . . . . . . . . . . . . . . 300
28.15 mate ←− randomSelectionr (pop, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
28.16 v′ ←− clearingFilter(pop, σ, k, v) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
28.17 P ←− scpFilter(pop, c, f ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
28.18 pop ←− createPop(ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
28.19 pop ←− reproducePopEA(mate, ps, mr, cr) . . . . . . . . . . . . . . . . . . . . . . . . . 308
28.20 Pnew ←− updateOptimalSet(Pold , pnew , cmpf ) . . . . . . . . . . . . . . . . . . . . . . 309
28.21 Pnew ←− updateOptimalSet(Pold , pnew , cmpf ) (nd Version) . . . . . . . . . . . . . . . 309
28.22 Pnew ←− updateOptimalSetN(Pold , P, cmpf ) . . . . . . . . . . . . . . . . . . . . . . . 310
28.23 P ←− extractBest(pop, cmpf ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
28.24 Pnew ←− pruneOptimalSetc (Pold , k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
28.25 (Pl , lst, cnt) ←− divideAGA(Pold , d) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
28.26 Pnew ←− pruneOptimalSetAGA(Pold , d, k) . . . . . . . . . . . . . . . . . . . . . . . . 314

29.1 g ←− createGAFLBin(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328


29.2 g ←− createGAPerm(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
29.3 g ←− mutateGAFLBinRand(gp , n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
29.4 g ←− mutateSinglePerm(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
29.5 g ←− mutateMultiPerm(gp ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
29.6 g ←− recombineMPX(gp1 , gp2 , k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
29.7 g ←− recombineUX(gp1 , gp2 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
29.8 x ←− gpmRK (g, S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
30.1 X ←− generalES(f, µ, λ, ρ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
30.2 g′ ←− recombineDiscrete(P ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
30.3 g′ ←− recombineIntermediate(P ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
30.4 g ←− mutateG,σ (g′ , w = σ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
30.5 g ←− mutateG,σ (g′ , w = σ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
30.6 g ←− mutateG,M (g′ , w = M) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
30.7 x ←− (1+1)-ES1/5 (f, L, a, σ0 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
30.8 σ ′ ←− mutateW,σ (σ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
31.1 g ←− createGPFull(d̂, N, Σ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
31.2 g ←− createGPGrow(d̂, N, Σ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
31.3 g ←− createGPRamped(d̂, N, Σ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
31.4 tc ←− selectNodeuni (tr ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
31.5 g ←− mutateGP(gp ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
31.6 g ←− recombineGP(gp1 , gp2 ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
34.1 x ←− PlainEDA(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
34.2 X ←− EDAMO(“OptProb”, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . 433
34.3 x ←− UMDA(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
34.4 pop ←− sampleModelUMDA(M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
34.5 M ←− buildModelUMDA(mate) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
34.6 x ←− PBIL(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
34.7 M ←− buildModelPBIL(mate, M′ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
34.8 x ←− cGA(f, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
34.9 M ←− buildModelCGA(mate, M′ , ps) . . . . . . . . . . . . . . . . . . . . . . . . . 440
34.10(true, false) ←− terminationCriterionCGA(M) . . . . . . . . . . . . . . . . . . . . 440
34.11x ←− transformModelCompGA(M) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
34.12x ←− SHCLVND(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
34.13pop ←− sampleModelSHCLVND(M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . 443
34.14 M ←− buildModelSHCLVND(mate, M′ ) . . . . . . . . . . . . . . . . . . . . . . . . 443
34.15x ←− RCPBIL(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
34.16 M ←− sampleModelRCPBIL(mate, M) . . . . . . . . . . . . . . . . . . . . . . . . . 445
34.17 M ←− buildModelRCPBIL(mate, M′ ) . . . . . . . . . . . . . . . . . . . . . . . . . 446
34.18x ←− PIPE(f, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
34.19N ←− newModelNodePIPE(N, Σ, PΣ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
34.20 pop ←− sampleModelPIPE(M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
34.21 (N, t) ←− buildTreePIPE(N′ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
34.22 M ←− buildModelPIPE(mate, M′ , x) . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.23 N ←− increasePropPIPE(N′ , t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.24 N ←− pruneTreePIPE(N′ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
39.1 x⋆ ←− PSO(f, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
46.1 P ←− expandP(g) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
46.2 X ←− BFS(r, isGoal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
46.3 X ←− DFS(r, isGoal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
46.4 X ←− DLS(r, isGoal, d) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
46.5 x ←− IDDFS(r, isGoal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
46.6 X ←− greedySearch(r, isGoal, φ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
50.1 r ←− flp (x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
50.2 rγ ←− “buildRPermutation”(γ, f̂ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
50.3 γ ←− “translate”(γ ′ , f̂ ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
List of Listings
1 The CVS command for anonymously checking out the complete book sources. . . . . 4
17.1 A program computing the greatest common divisor. . . . . . . . . . . . . . . . . . . 180
31.1 A genotype of an individual in Brameier and Banzhaf’s LGP system. . . . . . . . . . 407
31.2 The data for Task 91 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
50.1 An example Royal Road function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
50.2 A Java code snippet computing the sphere function. . . . . . . . . . . . . . . . . . . 581
50.3 A Java code snippet computing Schwefel’s problem 2.22. . . . . . . . . . . . . . . . 583
50.4 A Java code snippet computing Schwefel’s problem 1.2. . . . . . . . . . . . . . . . . 585
50.5 A Java code snippet computing Schwefel’s problem 2.21. . . . . . . . . . . . . . . . 587
50.6 A Java code snippet computing the generalized Rosenbrock’s function. . . . . . . . 589
50.7 A Java code snippet computing the step function. . . . . . . . . . . . . . . . . . . 591
50.8 A Java code snippet computing the quartic function with noise. . . . . . . . . . . . 593
50.9 A Java code snippet computing Schwefel’s problem 2.26 function. . . . . . . . . . . 595
50.10A Java code snippet computing the generalized Rastrigin’s function. . . . . . . . . . 597
50.11A Java code snippet computing Ackley’s function. . . . . . . . . . . . . . . . . . . . 599
50.12A Java code snippet computing the generalized Griewank function. . . . . . . . . . 601
50.13A Java code snippet computing the generalized penalized function 1. . . . . . . . . 604
50.14A Java code snippet computing the generalized penalized function 2. . . . . . . . . 606
50.15A Java code snippet computing the shifted sphere function. . . . . . . . . . . . . . . 610
50.16A Java code snippet computing shifted Schwefel’s problem 2.21. . . . . . . . . . . . 612
50.17A Java code snippet computing the shifted Rosenbrock’s function. . . . . . . . . . . 614
50.18A Java code snippet computing shifted Ackley’s function. . . . . . . . . . . . . . . . 616
50.19A Java code snippet computing the shifted Rastrigin’s function. . . . . . . . . . . . 618
50.20A Java code snippet computing the shifted Griewank function. . . . . . . . . . . . . 621
50.21The location data of the first example problem instance. . . . . . . . . . . . . . . . . 626
50.22The order data of the first example problem instance. . . . . . . . . . . . . . . . . . 627
50.23The truck data of the first example problem instance. . . . . . . . . . . . . . . . . . 628
50.24The container data of the first example problem instance. . . . . . . . . . . . . . . . 629
50.25The driver data of the first example problem instance. . . . . . . . . . . . . . . . . . 629
50.26The distance data of the first example problem instance. . . . . . . . . . . . . . . . . 631
50.27The move data of the solution to the first example problem instance. . . . . . . . . . 633
50.28An example output of the features of an example plan (with errors). . . . . . . . . . 634
55.1 The basic interface for everything involved in optimization. . . . . . . . . . . . . . . 707
55.2 The interface for all nullary search operations. . . . . . . . . . . . . . . . . . . . . . . 708
55.3 The interface for all unary search operations. . . . . . . . . . . . . . . . . . . . . . . 708
55.4 The interface for all binary search operations. . . . . . . . . . . . . . . . . . . . . . . 709
55.5 The interface for all ternary search operations. . . . . . . . . . . . . . . . . . . . . . 710
55.6 The interface for all n-ary search operations. . . . . . . . . . . . . . . . . . . . . . 711
55.7 The interface common to all genotype-phenotype mappings. . . . . . . . . . . . . . . 711
55.8 The interface for objective functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.9 The interface for termination criteria. . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.10An interface to all single-objective optimization algorithms. . . . . . . . . . . . . . . 713
55.11An interface to all multi-objective optimization algorithms. . . . . . . . . . . . . . . 716
55.12The interface for temperature schedules. . . . . . . . . . . . . . . . . . . . . . . . . . 717
55.13The interface for selection algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . 717
55.14The interface for fitness assignment processes. . . . . . . . . . . . . . . . . . . . . . . 718
55.15The interface for comparator functions. . . . . . . . . . . . . . . . . . . . . . . . . . 719
55.16The interface for model building and updating. . . . . . . . . . . . . . . . . . . . . . 719
55.17The interface for sampling points from a model. . . . . . . . . . . . . . . . . . . . . . 720
56.1 The base class for everything involved in optimization. . . . . . . . . . . . . . . . . . 723
56.2 A base class for all single-objective optimization methods. . . . . . . . . . . . . . . . 724
56.3 A base class for all multi-objective optimization methods. . . . . . . . . . . . . . . . 728
56.4 The random sampling algorithm “randomSampling” given in Algorithm 8.1. . . . . . 729
56.5 The random walk algorithm “randomWalk” given in Algorithm 8.2. . . . . . . . . . 732
56.6 The Hill Climbing algorithm “hillClimbing” given in Algorithm 26.1. . . . . . . . . . 735
56.7 The Multi-Objective Hill Climbing algorithm “hillClimbingMO” given in
Algorithm 26.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
56.8 The Simulated Annealing algorithm “simulatedAnnealing” given in Algorithm 27.1. 740
56.9 The logarithmic temperature schedule as specified in Equation 27.16. . . . . . . . . . 743
56.10The logarithmic temperature schedule as specified in Equation 27.17. . . . . . . . . . 744
56.11A base class providing the features Algorithm 28.1 in a generational way (see
Section 28.1.4.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
56.12A base class providing the features Algorithm 28.2 in a generational way. . . . . . . 750
56.13The truncation selection method given in Algorithm 28.8 on page 289. . . . . . . . . 754
56.14The tournament selection method given in Algorithm 28.11 on page 298. . . . . . . . 756
56.15The random selection method given in Algorithm 28.15 on page 302. . . . . . . . . . 758
56.16Pareto ranking fitness assignment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
56.17An example implementation of the Evolution Strategy Algorithm 30.1 on page 361. . 760
56.18An individual record also holding endogenous information. . . . . . . . . . . . . . . . 770
56.19An example implementation of the Differential Evolution algorithm which uses
ternary crossover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
56.20The Estimation Of Distribution Algorithm algorithm “PlainEDA” given in
Algorithm 34.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
56.21The creator for the initial model to be used in UMDA. . . . . . . . . . . . . . . . . . 781
56.22The model building method used in UMDA. . . . . . . . . . . . . . . . . . . . . . . . 782
56.23Sample points from the UMDA model. . . . . . . . . . . . . . . . . . . . . . . . . . . 783
56.24Create uniformly distributed real vectors. . . . . . . . . . . . . . . . . . . . . . . . . 784
56.25Modify a real vector by adding normally distributed random numbers with a given
standard deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
56.26A strategy mutation operator for Evolution Strategy. . . . . . . . . . . . . . . . . . . 787
56.27Modify a real vector by adding uniformly distributed random numbers. . . . . . . . 789
56.28Modify a real vector by adding normally distributed random numbers. . . . . . . . . 791
56.29A weighted average crossover operator. . . . . . . . . . . . . . . . . . . . . . . . . . . 792
56.30The binomial crossover operator from Differential Evolution. . . . . . . . . . . . . . 794
56.31The exponential crossover operator from Differential Evolution. . . . . . . . . . . . . 796
56.32The dominant recombination operator defined in Section 30.3.1. . . . . . . . . . . . . 798
56.33 The intermediate recombination operator defined in Section 30.3.2. . . . . . . . . . 799
56.34A uniform random string generator. . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
56.35A single bit-flip mutator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
56.36The uniform crossover operator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
56.37Create uniformly distributed permutations. . . . . . . . . . . . . . . . . . . . . . . . 804
56.38Modify a permutation by swapping two genes. . . . . . . . . . . . . . . . . . . . . . . 806
56.39Modify a permutation by repeatedly swapping two genes. . . . . . . . . . . . . . . . 808
56.40The base class for search operations for trees. . . . . . . . . . . . . . . . . . . . . . . 809
56.41 A utility class to construct random paths in a tree, i. e., to select random nodes. . . 811
56.42The ramped-half-and-half creation procedure. . . . . . . . . . . . . . . . . . . . . . . 814
56.43A simple sub-tree replacement mutator. . . . . . . . . . . . . . . . . . . . . . . . . . 815
56.44A binary search operation for trees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
56.45A multiplexing nullary search operation. . . . . . . . . . . . . . . . . . . . . . . . . . 818
56.46A multiplexing unary search operation. . . . . . . . . . . . . . . . . . . . . . . . . . . 820
56.47A multiplexing binary search operation. . . . . . . . . . . . . . . . . . . . . . . . . . 822
56.48A base class for tree nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
56.49A type description for tree nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
56.50A set of tree node types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
56.51The identity mapping for cases where G = X. . . . . . . . . . . . . . . . . . . . . . . 833
56.52The Random Keys genotype-phenotype mapping introduced in Algorithm 29.8. . . . 834
56.53The step-limit termination criterion. . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
56.54A comparator realizing the lexicographic relationship. . . . . . . . . . . . . . . . . . 837
56.55A comparator realizing the Pareto dominance relationship. . . . . . . . . . . . . . . . 838
56.56The individual record. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
56.57The individual record for multi-objective optimization. . . . . . . . . . . . . . . . . . 841
56.58A base class for archiving non-dominated individuals. . . . . . . . . . . . . . . . . . 843
56.59Some constants useful for optimization. . . . . . . . . . . . . . . . . . . . . . . . . . 850
56.60A small and handy class for computing some useful statistics. . . . . . . . . . . . . . 851
56.61Some simple utilities for converting objects to strings. . . . . . . . . . . . . . . . . . 854
57.1 The Sphere Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
57.2 f1 as defined in Equation 26.1 on page 238. . . . . . . . . . . . . . . . . . . . . . . . 860
57.3 f2 as defined in Equation 26.2 on page 238. . . . . . . . . . . . . . . . . . . . . . . . 861
57.4 f3 as defined in Equation 26.3 on page 238. . . . . . . . . . . . . . . . . . . . . . . . 862
57.5 Testing the function from Task 64. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
57.6 An example output of the Function test program. . . . . . . . . . . . . . . . . . . . . 868
57.7 Solving the Sphere functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
57.8 An example output of the Sphere test program. . . . . . . . . . . . . . . . . . . . . . 887
57.9 An Instance of the Bin Packing Problem. . . . . . . . . . . . . . . . . . . . . . . . . 888
57.10The first objective function f1 from Task 67. . . . . . . . . . . . . . . . . . . . . . . . 892
57.11The second objective function f2 from Task 67. . . . . . . . . . . . . . . . . . . . . . 893
57.12The third objective function f3 from Task 67. . . . . . . . . . . . . . . . . . . . . . . 894
57.13The fourth objective function f4 from Task 67. . . . . . . . . . . . . . . . . . . . . . 896
57.14The fifth objective function f5 from Task 67. . . . . . . . . . . . . . . . . . . . . . . . 897
57.15Solving the Bin Packing problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
57.16An example output of the Bin Packing test program. . . . . . . . . . . . . . . . . . . 903
57.17The data of the att48 problem instance. . . . . . . . . . . . . . . . . . . . . . . . . . 904
57.18A single-objective test program for symbolic regression. . . . . . . . . . . . . . . . . 905
57.19A multi-objective test program for symbolic regression. . . . . . . . . . . . . . . . . . 906
57.20An objective function for Symbolic Regression. . . . . . . . . . . . . . . . . . . . . . 908
57.21An objective function based on Listing 57.20, but also including tree-size information. 910
57.22An objective function for Symbolic Regression. . . . . . . . . . . . . . . . . . . . . . 910
57.23Some example functions to match with regression. . . . . . . . . . . . . . . . . . . . 912
57.24An instance of this class holds the information of one location. . . . . . . . . . . . . 914
57.25An instance of this class holds the information of one order. . . . . . . . . . . . . . . 915
57.26An instance of this class holds the information of one truck. . . . . . . . . . . . . . . 918
57.27An instance of this class holds the information of one container. . . . . . . . . . . . . 920
57.28An instance of this class holds the information of one driver. . . . . . . . . . . . . . 921
57.29 This class holds the complete information about the distances between locations. . 923
57.30An object which holds a transportation move. . . . . . . . . . . . . . . . . . . . . . . 925
57.31An object which can be used to compute the features of a candidate solution of the
benchmark problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931