
Adrian Thompson

Hardware Evolution
Automatic design of electronic circuits in
reconfigurable hardware by artificial evolution.
April 8, 1998

Springer-Verlag

Berlin Heidelberg New York


London Paris Tokyo
Hong Kong Barcelona
Budapest
Preface

Evolution through natural selection has been going on for a very long time.
Evolution through artificial selection has been practiced by humans for a
large part of our history, in the breeding of plants and livestock. Artificial
evolution, where we evolve an artifact through artificial selection, has been
around since electronic computers became common: about 30 years.
Right from the beginning, people have suggested using artificial evolution
to design electronics automatically.¹ Only recently, though, have suitable re-
configurable silicon chips become available that make it easy for artificial
evolution to work with a real, physical, electronic medium: before them, ex-
periments had to be done entirely in software simulations. Early research
concentrated on the potential applications opened up by the raw speed ad-
vantage of dedicated digital hardware over software simulation on a general-
purpose computer. This book is an attempt to show that there is more to
it than that. In fact, a radically new viewpoint is possible, with fascinating
consequences.
This book was written as a doctoral thesis, submitted in September 1996.
As such, it was a rather daring exercise in ruthless brevity. Believing that
the contribution I had to make was essentially a simple one, I resisted being
drawn into peripheral discussions. In the places where I deliberately drop a
subject, this implies neither that it's not interesting, nor that it's not relevant:
just that it's not a crucial part of the tale I want to tell here.

¹ Thanks to Moshe Sipper and Ed Rietman for the following early references:
Atmar, J. W. (1976). Speculation on the evolution of intelligence and its possible
realization in machine form. Doctor of Science thesis. Las Cruces: New Mexico
State University, April, 1976.
Wolfram, S. (1986). Approaches to Complexity Engineering. Physica 22D,
pp. 385–399.

Since writing the thesis, things have been going nicely. In the Centre for
Computational Neuroscience & Robotics at the University of Sussex, we now
have a small `evolutionary electronics' group, and others around the world
are taking interest and starting related research projects. The `Future Work'
chapter of this book is not idle talk: it's now current work, and is starting
to produce interesting and promising results. Rather than try to update my
1996 writing, I refer the interested reader to our World Wide Web pages,
which are permanently up to date:

http://www.cogs.susx.ac.uk/users/adrianth/
[2014: Not any more. Try:
https://sites.google.com/site/thompsonevolvablehardware/]
I owe it all to Phil Husbands and the School of Cognitive and Computing
Sciences. Special thanks also to the following people and organisations: Dave
Cliff, Harry Barrow, Inman Harvey; Steve Trimberger, Dennis Segers, Raj
Patel, John Watson, Bart Thielges, Dennis Rose et al. at Xilinx, Inc. (San
Jose, California); Jerry Mitchell, Tony Simpson, Martin Nock, Paul Swan,
David Fogel, Giles Mayley, Tony `Monty' Hirst, EPSRC, Chris Winter and
British Telecom, Graeme Proudler and Hewlett Packard Ltd., Ian Macbeth
and Motorola Inc., Jon Stocker and Zetex plc.
I'm especially grateful for the kindness, support, and silicon of John Gray
and all at the Xilinx Development Corp. (Edinburgh, Scotland): without
them this book would have to have been about something else.
I think that's enough prefacing. Enjoy the book!

Adrian Thompson
University of Sussex, UK
Spring 1998
Summary

In reconfigurable hardware, the behaviours and interconnections of the con-
stituent electronic primitives can be repeatedly changed. Artificial evolution
can automatically derive a configuration causing the system to exhibit a pre-
specified desired behaviour. A circuit's evolutionary fitness is given according
to its behaviour when physically instantiated as a hardware configuration:
`intrinsic' hardware evolution.
There is no distinction between design and implementation, nor are design
abstractions used: evolution proceeds by taking account of changes in the
overall physical behaviour of the system when variations are made to its
internal structure. This contrasts with top-down design methodologies, where
hardware details are mainly considered only in the final stages. It would
be infeasible for conventional methods to consider all of the semiconductor
physics of the components and their interactions at all stages of the design
process, but this is the essence of intrinsic hardware evolution.
After removing the constraints on circuit structure and dynamics nor-
mally needed to permit design abstractions, evolution explores beyond the
scope of conventional design into the entire repertoire of behaviours that the
physical hardware can manifest. A series of experiments is used to explore
the practicalities, culminating in a simple but non-trivial application. The
circuits may seem bizarre, but are highly efficient in their use of silicon. The
experiments include the first intrinsically evolved hardware for robot control,
and the first intrinsic evolution of the configuration of a Field-Programmable
Gate Array (FPGA). There is great potential for real-world applications:
some hurdles remain, but a promising solution is proposed.
It is also shown that effects arising from evolutionary population dynamics
can exert an influence towards compact circuits, or give some degree of fault-
tolerance. Additionally, fault-tolerance requirements can be incorporated into
fitness criteria. Evolved fault-tolerance is integrated into the way the system
operates, rather than explicitly relying on spare parts (redundancy).
Table of Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Topic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Hardware Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 An Example of Reconfigurable Hardware . . . . . . . . . . . . 2
1.2.2 Evolving the Circuit Configuration . . . . . . . . . . . . . . . . . 4
1.2.3 Intrinsic/Extrinsic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 The Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2. Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Inspiration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 Mead et al.: Analog neural VLSI . . . . . . . . . . . . . . . . . . . 9
2.1.2 Pulse-stream Neural Networks . . . . . . . . . . . . . . . . . . . . . 11
2.1.3 Other Neural Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.4 Reconfigurable Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.5 Self-Timed Digital Design . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.6 Analogies with Software: Ray's Tierra . . . . . . . . . . . . . . . 16
2.1.7 A Dynamical Systems Perspective . . . . . . . . . . . . . . . . . . 16
2.2 Evolutionary Algorithms for Electronic Design:
Other approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.1 ETL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.2 de Garis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.3 EPFL & CSEM: `Embryonics' . . . . . . . . . . . . . . . . . . . . . 23
2.2.4 A Sophisticated Extrinsic Approach: Hemmi et al. . . . . 24
2.2.5 Evolving Analogue Circuits . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.6 A Silicon Neuromorph – The First Intrinsic Hardware
Evolution? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.7 Loosely Related Evolutionary Hardware Projects . . . . . 27
2.3 Multi-Criteria EAs: Area, Power, Speed and Testability . . . . . 27
2.4 A Philosophy of Artificial Evolution . . . . . . . . . . . . . . . . . . . . . 29
2.4.1 Domain Knowledge, Morphogenesis, Encoding Schemes
and Evolvability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4.2 Species Adaptation Genetic Algorithms (SAGA) . . . . . 32
2.5 The Position of this Book Within the Field . . . . . . . . . . . . . . . . 33
3. Unconstrained Structure and Dynamics . . . . . . . . . . . . . . . . . . 35
3.1 The Relationship Between Intrinsic Hardware Evolution
and Conventional Design Techniques . . . . . . . . . . . . . . . . . . . . . 35
3.2 Unconstrained Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3 Unconstrained Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.1 Unconstrained Evolutionary Manipulation of Timescales
I: Simulation study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.2 II: Using a real FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3.3 A Showpiece for Unconstrained Dynamics:
An Evolved Hardware Sensorimotor Control Structure . 48
3.4 The Relationship Between Intrinsic Hardware
Evolution and Natural Evolution . . . . . . . . . . . . . . . . . . . . . . . . 56
4. Parsimony and Fault Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.1 Insensitivity to Genetic Mutations . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Engineering Consequences of Mutation-Insensitivity . . . . . . . . 62
4.3 Explicitly Specifying Fault-Tolerance Requirements . . . . . . . . . 66
4.4 Adaptation to Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.5 Fault Tolerance Through Redundancy . . . . . . . . . . . . . . . . . . . . 71
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5. Demonstration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.1 The Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.4 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6. Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.1 Engineering Tolerances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Appendix A. Circuit Diagram of the DSM Evolvable
Hardware Robot Controller . . . . . . . . . . . . . . . . . . . . . 97
Appendix B. Details of the Simulations used in the
`Mr Chips' Robot Experiment . . . . . . . . . . . . . . . . . . . 99
B.1 The Motor Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
B.2 The Movement Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
B.3 The Sonar Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
List of Figures

1.1 A simplified view of the XC6216 FPGA . . . . . . . . . . . . . . . . . . . . . 3
1.2 Evolving an FPGA configuration using a simple genetic algorithm . 5
3.1 Output of the oscillator evolved in simulation . . . . . . . . . . . . . . . . 42
3.2 The 4kHz oscillator circuit evolved in simulation . . . . . . . . . . . . . . 43
3.3 The experimental arrangement for oscillator evolution with the
real FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Frequency of oscillation of individuals over the GA run (real FPGA) 47
3.5 The robot known as "Mr Chips" . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6 The hardware implementation of the evolvable DSM robot controller 50
3.7 An alternative representation of the DSM as used in the experiment 52
3.8 The evolved wall-avoidance behaviour . . . . . . . . . . . . . . . . . . . . . . 54
3.9 A representation of one of the wall-avoiding DSMs . . . . . . . . . . . . 54
4.1 Mean population distribution after evolution . . . . . . . . . . . . . . . . . 59
4.2 Calculation of a genotype's fitness on an NK landscape . . . . . . . . . 60
4.3 The NK landscape modification algorithm . . . . . . . . . . . . . . . . . . . 61
4.4 ê and the mean fitness of evolved optima as the mutation
probability is varied . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.5 Tolerance of the evolved robot controller to SSA faults . . . . . . . . . 65
4.6 Max and mean fitnesses over time, with faults being present after
generation 85 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.7 Fault tolerance of the robot controller: before and after . . . . . . . . . 68
4.8 The evolution of fault tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 The apparatus for the tone discriminator experiment . . . . . . . . . . . 75
5.2 The circuitry to evolve the tone discriminator . . . . . . . . . . . . . . . . 75
5.3 Photographs of the oscilloscope screen . . . . . . . . . . . . . . . . . . . . . 77
5.4 Population statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.5 The final evolved circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.6 The pruned circuit diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.7 The functional part of the circuit . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.8 The frequency response of the final circuit . . . . . . . . . . . . . . . . . . 85
6.1 The frequency response measured at three different temperatures . . 89
6.2 Moving the circuit to a different region of the FPGA . . . . . . . . . . . 89
6.3 The miniature Khepera robot, with onboard FPGA evolvable
controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
A.1 Circuit diagram for the DSM evolvable hardware robot controller . 98

List of Tables

3.1 Node functions for the oscillator evolved in simulation . . . . . . . . . . 40
3.2 Genotype segment for one node . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Acronyms

All acronyms are defined where first used, either in the main text or (when
the reader is likely to be familiar with it already) in a footnote.
AI Artificial Intelligence
ALN Adaptive Logic Network
ASIC Application-Specific Integrated Circuit
ATR HIP Advanced Telecommunications Research institute,
Human Information Processing laboratories
CAD Computer Aided Design
CAM Cellular Automaton Machine
CCD Charge-Coupled Device
CMOS Complementary Metal Oxide Semiconductor
CSEM Centre Suisse d'Electronique et de Microtechnique SA
DFG Data Flow Graph
DSM Dynamic State Machine
EA Evolutionary Algorithm
EEPROM Electrically Erasable and Programmable Read Only Memory
EHW Evolvable HardWare
ETL Electrotechnical Laboratory
FPGA Field-Programmable Gate Array
FSM Finite State Machine
GA Genetic Algorithm
HDL Hardware Description Language
IC Integrated Circuit
IOB Input/Output Block (of an FPGA)
ISA Industry Standard Architecture
KL Kernighan & Lin (graph-partitioning heuristic)
LSL EPFL Logic Systems Laboratory,
École Polytechnique Fédérale de Lausanne
MIMD Multiple Instruction, Multiple Data
NEWS North, East, West, South
PLD Programmable Logic Device
PLN Probabilistic Logic Neuron
RAM Random Access Memory
ROM Read Only Memory

SAGA Species Adaptation Genetic Algorithm


SGA Simple Genetic Algorithm
SSA Single Stuck-At (fault)
VGA Variable length chromosome GA
VLSI Very Large Scale Integration
WISARD WIlkie, Stonham and Aleksander's Recognition Device
1. Introduction

1.1 Topic
There exist reconfigurable VLSI¹ silicon chips for which the behaviours and
interconnections of the constituent electronic primitives can be repeatedly
changed. Artificial evolution can be used to derive a configuration causing the
device to exhibit a pre-specified desired behaviour, without the intervention
of a human designer. This book will argue that if, during evolution, each new
variant configuration is assigned its fitness score according to the behaviour
it induces in the real reconfigurable hardware, then evolution can be allowed
to explore new kinds of circuits that are not within the scope of conventional
design methods.
More strongly, I shall argue that evolution should be allowed to explore
circuits having a richer structure and dynamical behaviour than usual, and
having more respect for the physical properties of the medium in which they
are implemented. By removing constraints on circuit structure and dynamics
normally applied to make simulation or the use of designers' abstract models
viable, evolution can be allowed to exploit the entire repertoire of behaviours
that the hardware can manifest. The full power of the available silicon –
even the detailed semiconductor physics of the components – is thus released
to be brought to bear on the problem at hand. We shall see examples of
small evolved circuits displaying surprisingly sophisticated behaviours, which
conventional design would need more silicon to achieve.
The evolution of circuit designs that are inherently tolerant to hardware
faults, and the evolution of parsimonious (area-efficient) circuits, will both
be investigated. Under certain conditions, evolutionary population dynamics
can have a positive influence on these without any special measures being in-
troduced, and in the case of fault tolerance several other evolutionary mecha-
nisms will be demonstrated. Fault tolerance is an example of a non-functional
requirement that is difficult to integrate within conventional design method-
ologies, but using evolution it can exert an influence at all times during the
automatic design process.
This introduction will provide the necessary background to state the above
claims in precise terms. They are simple claims, but it is hard to believe that
¹ VLSI = Very Large Scale Integration.

they could be true and of practical use. Electronic circuits of the rich struc-
ture and dynamics advocated here have not existed before, and can appear
bizarre to those schooled in design techniques. For this reason, the theoretical
arguments will be reinforced with extensive experiments, to illustrate their
practical worth.

1.2 Hardware Evolution


The Xilinx XC6216 (Xilinx, Inc., 1996b) Field-Programmable Gate Array
(FPGA) (Oldfield & Dorf, 1995) is a reconfigurable VLSI silicon chip partic-
ularly suitable for evolutionary work. It will appear later in the main demon-
stration (Chapter 5), but I will describe here how it was used, in order to give
a concrete example of how hardware might be evolved.
1.2.1 An Example of Reconfigurable Hardware
Figure 1.1 gives a simplified representation of the device. Both logically and
physically, this VLSI chip consists of a two-dimensional array of 64 × 64
reconfigurable logic cells, each of which is connected to its four neighbours:
North, East, West and South (NEWS) as shown. There is also a hierarchical
arrangement of wires spanning 4, 16 and 64 cells, but these – along with
many other features – will not be used in this work. Each cell contains a
function unit that can be configured to perform any Boolean function of two
inputs, or multiplexer functions of three inputs. Each of a function unit's
three inputs (not all of which are necessarily used) can be configured to be
sourced by any of the four NEWS neighbours. The output of a cell in each of
the NEWS directions can be configured to be driven either by the output F
of its function unit, or by the signal arriving at any one of the other NEWS
faces. This allows a cell to connect some of its NEWS neighbours directly
together at the same time as performing a function; a cell can `route across
itself' in some directions while giving the output of function F in others.
The cells are configured independently (they do not all perform the same
function), so even using only the nearest-neighbour links a very large range
of possible circuits can be implemented.
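A cell's configuration as just described (input sourcing, the function unit, and the `route across itself' outputs) can be captured in a small model. The following Python sketch is purely illustrative: the names and the uniform three-input function representation are my own, and it simplifies the real function unit, which offers two-input Boolean functions and three-input multiplexer functions selected by the configuration memory.

```python
from dataclasses import dataclass
from typing import Callable

DIRS = ("N", "E", "W", "S")

@dataclass
class CellConfig:
    # Which NEWS neighbour feeds each of the function unit's three inputs
    # (not all of which are necessarily used by the configured function).
    in_src: tuple
    # The configured function F of the (up to) three inputs.
    func: Callable[[int, int, int], int]
    # For each output face: "F" to drive it from the function unit, or one
    # of the other faces to route that incoming signal across the cell.
    out_src: dict

def cell_outputs(cfg: CellConfig, incoming: dict) -> dict:
    """Compute the four NEWS outputs of one cell from its four inputs."""
    a, b, c = (incoming[d] for d in cfg.in_src)
    f = cfg.func(a, b, c)
    return {face: (f if src == "F" else incoming[src])
            for face, src in cfg.out_src.items()}

# Example: a cell computing AND of its N and W inputs, driving F south,
# while routing W straight through to E (and E to W, S to N).
cfg = CellConfig(in_src=("N", "W", "N"),
                 func=lambda a, b, c: a & b,
                 out_src={"N": "S", "E": "W", "W": "E", "S": "F"})
outs = cell_outputs(cfg, {"N": 1, "E": 0, "W": 1, "S": 0})
# outs["S"] carries F = 1 AND 1; outs["E"] carries the routed W input.
```

The point of the model is the last two fields: function and routing are configured independently per face, which is what lets one cell both compute and act as wiring at the same time.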
Around the periphery of the array of cells are Input/Output Blocks (IOBs)
and pads that interface the signals at the edge of the array to the external
pins of the chip's package. This is done in a more complex and flexible way
than shown in the figure: there are in fact not as many pins as there are cells
at the edges of the array, and there are a variety of ways in which the signal
at the pin can be interfaced to the array. But it is possible to connect any
edge cell to a pin (either for input or output), and that is all that will be
important here. For simplicity, in this book the designation of certain edge
cells as inputs or outputs will be done by hand (rather arbitrarily) at the
start of each experiment.

Fig. 1.1. A simplified view of the XC6216 FPGA. Only those features used later
in the experiments are shown. Top: A 10 × 10 corner of the 64 × 64 array of blocks;
Below: the internals of an individual cell, showing the function unit at its centre.
The symbol represents a multiplexer: which of its four inputs is connected
to the output (via an inversion) is controlled by the configuration memory. Similar
multiplexers are used to implement the user-configurable function F.

At any time, the configuration of the chip is determined by the bits held
in an on-chip memory, which can be written to by software running on a host
computer. By controlling the multiplexers shown in the figure, these bits
regulate the function performed in each cell and how the cells are connected
together. By changing these bits from software, the setting of the electronic
switches distributed throughout the chip is altered to form a new circuit.
Even though the structure of that circuit has been determined by software,
it is physically instantiated on the chip and behaves in real-time according
to the laws of physics. Thus the chip is configured, not programmed: the
configuration bits do not specify a program of instructions to be executed by
a fixed processor, but actually cause a new circuit to be created on the chip
which then behaves according to semiconductor physics.² Remember that
the XC6216 FPGA described here is just one example of a reconfigurable
hardware system – several quite different types will be described in the next
chapter.
1.2.2 Evolving the Circuit Configuration
To evolve a circuit to perform some pre-specified task, each individual in the
population of an evolutionary algorithm corresponds to a setting of the con-
figuration bits, and hence to a physical circuit. In the examples I will show,
a simple genetic algorithm was used, with the configuration bits directly
encoded bit-for-bit onto the linear bit-string genotype of an individual. An
overview of the evolutionary process is given in Figure 1.2. A population of
typically 50 individuals was maintained, the genotypes of which were initially
generated completely at random. Then the evolutionary fitness of each in-
dividual was evaluated in turn, by taking the matrix of configuration bits
derived from an individual's genotype and using it to configure a real FPGA.
The circuit now instantiated on the FPGA was then automatically given a
score according to how closely it approximated the desired behaviour, and
that score was the individual's fitness. I will use the word `phenotype' to refer
to the instantiated circuit.³
Once the fitness of each individual had been evaluated, an entire new
population (the `Next Generation' in the figure) was formed. First, the single
best-scoring individual's genotype was copied once into the next generation,
without any alterations at all (this is called `elitism'). The remaining mem-
bers of the new population were formed by stochastically selecting (with
replacement) parents from the old population with a probability determined
by a linear function of their rank within the population, as given by the
² It is possible to attack this distinction, but it seems more useful to retain it.
Reconfiguration and programming are two different viewpoints, which should be
adopted appropriately for the system in question.
³ The use of the word `phenotype' to refer to behaviour (Dawkins, 1990) can be
useful in other discussions of hardware evolution (Harvey & Thompson, 1997),
but here it means the circuit itself.
Fig. 1.2. Evolving an FPGA configuration using a simple genetic algorithm.
A population of (initially random) bit-string genotypes is maintained, each
individual coding for a possible FPGA configuration. Fitness evaluation: each
individual is taken in turn and used to configure a real FPGA, which is then
scored at how well it performs the desired task. A new population is then
formed, made of the offspring of the fitter (on average) members of the old
one: higher-scoring individuals are more likely to parent offspring (selection);
offspring are formed by stochastically combining segments from each parent
(crossover), and by randomly inverting a few bits (mutation). The cycle is
repeated until satisfactory.

ordering of the fitness scores (`linear rank selection'). The least fit individ-
ual had zero probability of being selected, while the most fit individual had
twice the probability of the median. When a `parent' had been selected in
this way, an offspring was formed by copying it into the new population
with the addition of random mutations: each bit of its genotype was inverted
with a certain small `mutation probability' (or rate), applied independently at
each bit position (or locus). Alternatively, with a certain `crossover probabil-
ity' (or rate), this offspring individual was formed by selecting two parents,
randomly selecting a `crossover point' between two loci, and taking the bits
before this point from one parent, and the bits after the crossover point from
the other. Only after this `sexual recombination' was mutation then applied
and the offspring inserted into the new population. The cycle of evaluation
and `breeding' was repeated until either a satisfactory circuit was found or
the experiment was abandoned.
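The generational scheme just described (elitism, linear rank selection in which the least fit individual has zero selection probability and the best has twice that of the median, single-point crossover, and per-locus mutation) can be sketched in Python. This is an illustrative reconstruction, not the code used in the experiments: the fitness function is a toy stand-in for configuring and scoring a real FPGA, and the genome length and rate constants are placeholders; only the population size of 50 is taken from the text.

```python
import random

GENOME_LEN = 32          # illustrative; real genotypes encode FPGA config bits
POP_SIZE = 50
MUTATION_RATE = 0.02     # per-locus bit-flip probability (placeholder value)
CROSSOVER_RATE = 0.7     # placeholder value

def fitness(genome):
    # Stand-in for configuring a real FPGA and scoring its behaviour.
    return sum(genome)   # toy task: maximise the number of 1s

def next_generation(pop):
    scored = sorted(pop, key=fitness)        # ascending order of fitness
    # Linear rank weights 0..n-1: the least fit has weight 0 (never chosen),
    # and the best has twice the weight of the median, as in the text.
    weights = list(range(len(scored)))
    new_pop = [scored[-1][:]]                # elitism: copy the best unaltered
    while len(new_pop) < len(pop):
        parent = random.choices(scored, weights=weights)[0]
        if random.random() < CROSSOVER_RATE:
            # Sexual recombination: single crossover point between two loci.
            other = random.choices(scored, weights=weights)[0]
            point = random.randrange(1, GENOME_LEN)
            child = parent[:point] + other[point:]
        else:
            child = parent[:]
        # Mutation applied independently at each locus, after recombination.
        child = [b ^ (random.random() < MUTATION_RATE) for b in child]
        new_pop.append(child)
    return new_pop

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(40):                          # repeat until satisfactory
    pop = next_generation(pop)
best = max(pop, key=fitness)
```

Because the elite individual is copied without mutation, the best fitness in the population can never decrease from one generation to the next.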
This is a fairly standard genetic algorithm (GA), with the fundamentally
important novelty that fitnesses are evaluated according to the behaviour of
physically real, genetically specified circuits. A conventional GA was chosen
so that any results would be of general relevance, rather than being artifacts
of clever problem-specific `hacks' to the evolutionary algorithm. See Goldberg
(1989) for more details of the standard GA terms and techniques used.
Numerous other evolutionary algorithms are in common use: as well as Ge-
netic Algorithms (Holland, 1975), the main families are Genetic Programming
(Koza, 1992), Evolutionary Programming (Fogel, Owens, & Walsh, 1966), and
Evolution Strategies (Schwefel & Rudolph, 1995). Much of this book applies
to many of these techniques, if properly applied. In the author's view, how-
ever, it is Harvey's Species Adaptation Genetic Algorithm (SAGA) (Harvey,
1992b) framework which is best suited to the evolution of electronic systems.
This theory (covered in the next chapter) was used here merely to set the
mutation rate of the basic GA described above. For clarity, the name `Evo-
lutionary Algorithm' (EA) will be used to refer to SAGA in particular, but
with the understanding that another evolutionary technique could have been
used. It is left to the reader to infer from context which evolutionary algo-
rithms would be appropriate, and where this is of particular importance it
will be explicitly stated.
1.2.3 Intrinsic/Extrinsic
What I have just described has been dubbed `Intrinsic' hardware evolution
(de Garis, 1993b). In the contrasting `Extrinsic' case, the phenotype circuits
are evaluated in a software simulation during evolution, and only the final
product is eventually implemented as a real circuit. If all of the detailed char-
acteristics of the implementation could be simulated perfectly, then these two
approaches would be equivalent, except for practical considerations such as
speed/cost trade-offs, and the availability of suitable reconfigurable hardware.
Such a simulation – which I will call a physical simulation because it captures

all of the physical behaviour of the implementation medium – is extremely
computationally expensive, and is currently out of the question for all but
small systems. Consequently, other kinds of simulation make use of the fact
that circuits are usually designed under some methodology that allows their
behaviour to be predicted by a more abstract model. For example, a digital
logic simulation assumes that the circuit has been designed under a method-
ology such that its behaviour can be correctly predicted in this way. It will
be a central theme of this book (Chapter 3) that the constraints convention-
ally applied to circuits to allow this abstraction are not needed for intrinsic
hardware evolution. When an accurate physical simulation is possible, the
same applies to the extrinsic case.

1.3 Motivation
The aim is to use artificial evolution to produce electronic circuits that are
useful, and in such a way that it is preferable to conventional design methods.
Nothing will be said about intelligence, cognitive science, artificial life, natural
evolution or biology, though many concepts will be taken from these fields.
This work may have consequences for those areas, but it is not the purpose
of this book to identify them. This is an engineering enterprise.

1.4 The Thesis


With the above background, my thesis can be accurately stated.
For intrinsic hardware evolution:
1. Evolution can be allowed to explore circuits that are beyond the scope
of conventional design. With their less constrained spatial structure and
richer dynamical behaviour, these circuits can be of a different nature to
the way electronics is normally envisaged.
2. There is a potential benefit in allowing evolution to do this. The increased
freedom allows evolution to exploit the properties of the implementation
medium more effectively in achieving the task. Consequently, the result-
ing circuits can be better tailored to the characteristics of the resources
available.
Points 1 & 2 also apply for extrinsic evolution when the simulation is an
accurate physical one, but become less relevant as the simulation is made
more abstract.
3. In certain kinds of evolutionary algorithm that can be used for hardware
evolution, there is an effect whereby the phenotype circuits produced
tend to be relatively unaffected by small amounts of mutation to their
8 1. Introduction

genotypes. This e ect can be turned to engineering use, such as encour-


aging parsimonious solutions or giving a degree of graceful degradation in
the presence of certain hardware faults. There are other mechanisms by
which evolution can be more explicitly induced to produce fault-tolerant
circuits.
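As a toy illustration of the effect described in point 3 (a hypothetical sketch, not the analysis of Chapter 4; the genotype, fitness function and threshold here are all invented for illustration), one crude measure is the fraction of single-bit mutants whose fitness stays above a threshold:

```python
def mutational_robustness(genotype, fitness, threshold):
    """Fraction of single-bit mutants whose fitness stays at or
    above the threshold: a crude measure of how unaffected the
    phenotype is by small amounts of mutation."""
    survivors = 0
    for i in range(len(genotype)):
        mutant = list(genotype)
        mutant[i] ^= 1                      # flip one bit
        if fitness(mutant) >= threshold:
            survivors += 1
    return survivors / len(genotype)

# Toy fitness: only the first half of the genotype matters, so
# mutations in the second half are neutral.
toy_fitness = lambda g: sum(g[:8])
print(mutational_robustness([1] * 16, toy_fitness, 8))   # 0.5
```

Here half the single-bit mutants leave fitness untouched; an evolved population sitting in such a region of genotype space degrades gracefully under mutation.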
In the next chapter, I set the context for these ideas by overviewing the
sources of their inspiration and contrasting this with the other work using
evolutionary methods for electronics. Then, in Chapter 3 (`Unconstrained
Structure and Dynamics'), I give a verbal argument for points 1 & 2 above,
and then present experiments carefully designed to explore the key issues.
Chapter 4 deals exclusively with point 3 above. The climax of the book is
the practical demonstration in Chapter 5, which shows all of the ideas in
action in an application. The approach promoted here is not without its
unsolved problems: it is necessary to find a good balance between exploiting
detailed properties of the medium and being robust to their variations. The
penultimate chapter gives a proposal for how this may be achieved (with
promising preliminary results), and identifies application domains.
The conclusion will be that points 1-3 of the thesis have been demon-
strated by experiment to be true, even though the circuits produced are very
unconventional. The concepts have a potential impact in real-world applica-
tions after more research on the exploitation/robustness trade-off. The book
provides fundamental foundations for the new field of hardware evolution,
however it develops.
2. Context

In this chapter, I first show how the approach I will develop grows from roots
outside of what has been considered in other studies of hardware evolution.
After this `Inspiration' section, I go on to consider the existing body of re-
search directly concerning hardware evolution. Following that, the next major
section considers multi-criteria evolution with multiple constraints, which is
of general importance to hardware evolution. Fundamental evolutionary is-
sues are then discussed before finally summing up the position of this book
and clarifying its originality.

2.1 Inspiration
2.1.1 Mead et al.: Analog neural VLSI
Many of the ideas in this book are a fusion of artificial evolution with two key
factors in the design philosophy developed by Mead et al. for `analog neural
VLSI' (Mead, 1989):
- Respect for the physics of the medium. Rather than deciding what functions
will be required and then coming to implement them in silicon, the way
in which the system is designed is driven by the behaviours that are natu-
rally exhibited by various small silicon structures. Design is the process of
composing these natural physical behaviours such that the required overall
system behaviour emerges through their interactions. This is contrary to
standard top-down methodologies, where detailed implementation issues
are mainly not considered until the very last stages, after the structure of
the system has already been decided. By taking the properties of the im-
plementation medium into account at all stages during the design process,
there is the opportunity for it to be used more effectively or efficiently.
- Emphasis on the role of time. The system's temporal dynamics arises from
the coupling together of the natural dynamical behaviours of the compo-
nent silicon structures, rather than by implementing abstract atemporal
computations derived through top-down design (in which, perhaps, time
would be represented as a variable like any other, and on which dynam-
ics might be artificially enforced through mechanisms like clocking). This
releases the full power of the available resources: they are physical compo-
nents that behave over time, and now the full behaviours are put to use in
performing the desired function.
(The above is partially a re-interpretation of the work from my own view-
point.)
Of course, designing in this way is extremely difficult: that is why con-
ventional top-down design proceeds differently. Mead et al. attain successful,
impressive and useful systems by modelling neural mechanisms associated
with the early stages of vision and audition in particular species of animals.
This modelling is done to a greater degree of biological realism than is typi-
cal in the field of artificial neural networks, especially with respect to neural
dynamics, but yet with regard for the natural behaviours of small configura-
tions of silicon components as described above. Natural evolution has done a
large part of the design, which is then re-cast into the new implementation
medium. Thus natural evolution was crafting a structure suited for the bi-
ological medium, not for VLSI: the physics of the silicon medium were not
taken into account at all stages of the design process, but these are cleverly
incorporated by humans at the last minute.
The resulting silicon systems are successful because the behaviours of
certain groups of silicon components resemble important aspects of the rele-
vant neural dynamics. However, the great differences between the biological
and VLSI media are explicitly analysed (Faggin & Mead, 1990), especially in
terms of speed and connectivity of the components. These differences become
crucial if one wishes to build VLSI analogues of neural structures other than
those with a rather regular structure and short-range connections between
components, as has been done so far. Multiplexing schemes are then proposed
in order to use the speed of silicon to compensate for the limited connectivity
(Douglas, Mahowald, & Mead, 1995; Craven, Curtis, & Hayes-Gill, 1994),
effectively making one fast physical wire operate as many slower virtual ones
(particularly important in multi-chip systems, where the number of pins on
a chip is limited (Tessier, Babb, Dahl, et al., 1994)). The fact remains that
biological neural structures evolved with regard to a different implementation
medium than silicon VLSI, so are not best suited to it.
At the heart of this book is the observation that intrinsic hardware evo-
lution is the solution to the problem. It proceeds by taking account of the
overall physical behaviour of the real silicon as new variant configurations
are tried. The philosophy of considering the properties of the medium at all
stages of the `design' process and of exploiting the natural coupled dynam-
ics of the components can be followed to the full. While this is too difficult
for a human designer to do, it is the way that intrinsic hardware evolution
naturally works. This argument and its radical implications will be fleshed
out in the next chapter, and will be seen in action in the demonstration of
Chapter 5.

2.1.2 Pulse-stream Neural Networks


Signal values in pulse-stream neural networks (Murray, Tarassenko, Reekie,
et al., 1991; Murray, 1992) are represented by the timings of fixed-amplitude
digital pulses. For example, the signal value could be the frequency of short
fixed-duration `spikes,' loosely resembling (but not intended to model) those
observed in the nervous systems of animals. The rationale is that ana-
logue neural networks can then be made using a standard, essentially dig-
ital, CMOS[1] VLSI fabrication process: the analogue operations happen over
the time dimension. The demonstrated effectiveness of this technique gives
another reason to take seriously the role of continuous time (see previous sec-
tion), even in a binary system. In Chapter 3, we shall see a spiking strategy
spontaneously evolve in circuits in an asynchronous (continuous time) logic
simulation, without this having to be enforced.
The original formulation of pulse-stream neural networks, although im-
plemented in CMOS VLSI, still used some analogue elements. Probabilistic
(or stochastic) bit-stream neural networks were developed to allow implemen-
tation in a purely digital paradigm, `to exploit fully the strengths of existing
digital VLSI technology' (van Daalen, Jeavons, & Shawe-Taylor, 1991). Here,
a signal value is represented by the probability of any bit within a stream of
bits being set to 1. A highly efficient way of implementing such networks
in look-up table based FPGAs[2] is given by Bade and Hutchings (1994). By
staying firmly in the digital domain, these networks retain the tolerance to
process variations between chips, to temperature variations, and so on, nor-
mally associated with conventional digital systems. This is not the case for
the part-analogue pulse-stream networks, but corrective mechanisms can be
built in more easily than for fully analogue circuits (Murray et al., 1991). A
similar problem will arise for the circuits evolved in this book; the issue is
considered in depth and a solution proposed in Chapter 6.
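The bit-stream representation can be illustrated with a small sketch (my own toy example, not code from the systems cited): if two independent streams carry values p and q as the probability of any bit being 1, a single AND gate yields a stream whose value is approximately the product pq, so multiplication costs one gate:

```python
import random

def bitstream(p, length, rng):
    """A stream in which each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stream_value(stream):
    """Decode a stream back to a signal value: the fraction of 1s."""
    return sum(stream) / len(stream)

rng = random.Random(42)
a = bitstream(0.8, 10000, rng)
b = bitstream(0.5, 10000, rng)
product = [x & y for x, y in zip(a, b)]   # one AND gate per bit
print(stream_value(product))              # close to 0.8 * 0.5 = 0.4
```

The accuracy improves with stream length, trading time for precision: the analogue operation again happens over the time dimension.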
2.1.3 Other Neural Hardware
There are numerous artificial neural network chips available (see Lindsey and
Lindblad (1995) for a good survey), and some of them could have intrinsic
hardware evolution applied to derive the connection weights and/or the net-
work topology. However, I have already hinted that perhaps evolution can
find an architecture better tailored to the properties of silicon than mimicry
of the neural structures that evolved for the biological medium. This argu-
ment will be developed in detail in Chapter 3, and is the reason why I shall
concentrate on the use of fine-grained FPGAs (as in the example given in the
Introduction) rather than chips of a predefined neural construction.
[1] CMOS = Complementary Metal Oxide Semiconductor.
[2] Look-up table based FPGAs implement the configurable logic functions by look-up
tables rather than by multiplexers as in the XC6216 example of the previous
chapter.

The use of a fine-grain reconfigurable device (meaning that there are no
large predefined building-blocks, neural or otherwise) allows evolution, rather
than human prejudice, to solve the problem, as will be advocated in 2.4.1 be-
low. However, the only suitable fine-grain reconfigurable devices currently
available are digital FPGAs, so the relationship between logic systems and
artificial neural networks (which are undeniably useful) is of interest. In fact,
feedforward logic networks have been shown to share some of the desirable
properties of binary feedforward neural networks (Andree, Barkema, Lourens,
et al., 1993). This has long been practically demonstrated by Armstrong's
Adaptive Logic Networks (ALNs) (Armstrong, 1991; Armstrong & Thomas,
1994; Armstrong, Chu, & Thomas, 1995), where there is also a learning pro-
cedure that operates on the ALN tree structure. These results suggest that,
given an appropriate structuring mechanism (in our case, evolution), feed-
forward networks of logic gates can perform behaviours normally associated
with non-recurrent neural networks. The issue of recurrent networks will be
considered in 2.4.1 below.
Another successful line of research relating logic circuits to neural net-
works is the RAM-based[3] approach championed by Aleksander, and typified
by the WISARD[4] (Aleksander & Morton, 1990). Here, the logic functions
are implemented in look-up tables stored in off-the-shelf RAM chips. In later
work, by modifying the binary RAM model to include a third state in which
a 0 or a 1 is emitted at random (a `Probabilistic Logic Neuron (PLN)'), the
learning procedure has been extended to operate on multi-layer associative
PLN networks (Kan & Aleksander, 1989) of broad applicability in neural net-
work applications. RAM-based neural networks were the inspiration behind
the evolvable RAM-based architecture to be presented in one of the studies
of Chapter 3.
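The probabilistic logic neuron can be sketched as follows (a minimal illustration of the node behaviour only; the addressing scheme is a simplification of my own, and the learning procedure of Kan and Aleksander is not reproduced):

```python
import random

U = 'u'   # the third, `undecided' state added to the binary RAM model

def pln_read(ram, inputs, rng):
    """One probabilistic logic neuron: the input bits form a RAM
    address, and an undecided entry emits a 0 or a 1 at random."""
    address = 0
    for bit in inputs:
        address = (address << 1) | bit
    stored = ram[address]
    return rng.randint(0, 1) if stored == U else stored

# A 2-input node part-way through training towards AND:
# address order is 00, 01, 10, 11, with address 10 still undecided.
ram = [0, 0, U, 1]
rng = random.Random(0)
print(pln_read(ram, [1, 1], rng))   # 1: a trained entry
```

Reading a trained address is an ordinary look-up; reading the undecided address produces random exploratory behaviour, which is what the multi-layer learning procedure exploits.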
2.1.4 Reconfigurable Hardware
There are many reconfigurable devices on the market (Oldfield & Dorf, 1995),
and many more architectures are conceivable. However, the requirements for
intrinsic hardware evolution are different to the intentions behind many com-
mercial products designed for use in the electronics industry:
- Reconfigurable an Unlimited Number of Times. Clearly, write-once devices
based on fuse or anti-fuse technologies are not suitable. Even some products
with a seemingly large limit upon the number of reconfigurations are still
unsuitable. For example, Intel's 80170NX Electrically Trainable Analog
Neural Network chip uses floating-gate transistors (EEPROM[5] technology)
to store the synaptic strengths: these are only specified for 10^4 weight
changing cycles per synapse (Intel Corp., 1993). Beyond this, there is the
possibility of permanent physical degradation of the gate oxide. A single
evolutionary run for 1000 generations with a population size of 100, if
using a single chip for the intrinsic fitness evaluations, would take 10^5
reconfigurations: too many for this device.
[3] RAM = Random Access Memory.
[4] WISARD = `WIlkie, Stonham and Aleksander's Recognition Device.'
[5] EEPROM = Electrically Erasable and Programmable Read Only Memory.
- Fast Reconfiguration. The time taken to configure the hardware with each
individual in the population should be small compared to the time taken by
the fitness evaluations, the selection of parents, and the genetic operations.
Otherwise, the overall speed of evolution would suffer from this overhead.
The ability to only partially reconfigure could help here: because of the
similarity between individuals in the population, not all of the configuration
will need to be changed between individuals, in practice.
- Indestructibility or Possible Validity Checking. Ideally, it should not be pos-
sible to configure the hardware such that it damages itself. If it is possible
to configure the hardware illegally, then there must be some efficient way
either to restrict the set of possible genotypes to those encoding legal con-
figurations, or to identify which configurations are illegal so that they may
be discarded. A high level of confidence in the correctness of the software
would be required for this approach, so indestructible hardware is prefer-
able. As an example of the problem, consider the XC30xx/40xx families
of FPGAs (Xilinx, Inc., 1996a). Their architecture supports 3-state busses
and wire-OR, which means that the outputs of 3-state components can be
directly connected together. It is the responsibility of the user and CAD[6]
software to make sure that outputs do not simultaneously attempt to drive
the same signal to opposite logic levels: this would result in large currents,
potentially damaging the device. However, the way in which the configura-
tion bits determine what circuit is present on the chip is kept a proprietary
secret of the manufacturer, in order to protect the user's designs from com-
petitors seeking to reverse-engineer the design from the configuration bits.
This means that any validity checking or restriction must be done at the
level of input to the proprietary software which takes a user's design and
produces the configuration bits. The execution of this software introduces
a severe time overhead into the configuration process.
- Flexible Input/Output. The way in which inputs are supplied to the evolving
circuits, and the outputs extracted, could have a strong influence on the
chance of success. Experimentation is likely to be required, along with
the possibility of placing aspects of the input/output configuration under
evolutionary control. For these reasons, as well as to give a wide range of
possible applications, a flexible reconfigurable input/output architecture is
useful.
- Observability. When attempting to analyse an evolved circuit, the more
facilities for monitoring the internal activity of the circuit the better.
[6] CAD = Computer Aided Design.

- Fine Grain Reconfigurability. Section 2.4.1 will argue against enforcing the
use of large predefined building-blocks. Note that such building-blocks do
not directly correspond to the `coarse' or `fine' grain-size of reconfigurable
cells referred to in FPGA parlance. A reconfigurable cell could be large, but
yet have its properties controllable at a fine level of detail, so not presenting
a large predefined building-block to evolution. In such cases, the boundaries
of reconfigurable cells can appear more as part of a hierarchy of different
types of interconnections between components.
- Low Cost. Not only is this desirable for academic researchers, but it broad-
ens the range of commercial applications towards which the research can
be aimed.
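The point above about partial reconfiguration can be sketched with a hypothetical word-by-word comparison (real devices differ in how their configuration interfaces work; this shows only the underlying idea):

```python
def words_to_rewrite(parent_config, child_config):
    """Indices of configuration words that differ between two
    individuals: with partial reconfiguration, only these words
    need to be written when moving from one to the other."""
    return [i for i, (a, b) in enumerate(zip(parent_config, child_config))
            if a != b]

parent = [0x3A, 0x00, 0xFF, 0x17, 0x80]
child  = [0x3A, 0x01, 0xFF, 0x17, 0x81]   # a mutation touching two words
print(words_to_rewrite(parent, child))    # [1, 4]
```

Since a child typically differs from its parent by only a few mutations, the rewrite list is short, and the configuration overhead per fitness evaluation stays small.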
The preferences above are not completely shared by the traditional uses of
FPGAs, so commercial FPGA chips tend to be ill-suited to our purpose. How-
ever, recent theoretical and technological advances in the concept of custom
and dynamically reconfigurable computing (Oldfield & Dorf, 1995; DeHon,
1994; Tau, Chen, Eslick, et al., 1995) are beginning to stimulate the pro-
duction of new devices with suitable characteristics for evolution. In custom
computing, computations are performed by special purpose circuits imple-
mented on general purpose devices: one or more FPGAs. In the dynamic
reconfiguration case, the FPGA is used to instantiate different circuits at
various stages of the computation, perhaps being used as a reconfigurable
co-processor to a host microprocessor: circuits are rapidly `swapped' in and
out of hardware (see Eldredge and Hutchings (1994) for an example). The
requirements of these paradigms have much in common with those of intrinsic
hardware evolution, and are beginning to prompt the production of suitable
commercial FPGAs. The XC6216 used in this book is such a chip, and it can
be expected that more will follow. At the time of writing, the XC62xx family
are by far the best suited commercially available devices.
Although evolvable FPGAs have only recently become available, it has
long been possible to construct evolvable hardware systems out of other read-
ily available components. We saw in Section 2.1.3 how systems resembling
neural networks could be constructed with RAM chips; by placing the RAM
contents under evolutionary control, these become evolvable hardware. Ad-
ditionally, some of the interconnections between components at the circuit-
board level can be placed under evolutionary control by using analogue switch
ICs[7] (e.g. the 4053 chip in the long-standing 4000 CMOS series) or by using
more recent digital `field programmable interconnect' devices (I-Cube, Inc.,
1996; Aptix Corp., 1996). The first part of the work reported in this book
was done before the XC6216 became available (even then, a β-test part was
used), so in the next chapter we will indeed study a RAM/analogue-switch
based evolvable system. Other architectures based on RAM chips and/or re-
configurable board-level interconnect are possible, and this may still be a
fruitful line of research even now suitable FPGAs are available.
[7] IC = Integrated Circuit.
By using analogue switch ICs interconnecting analogue components, an
evolvable analogue hardware system could be constructed. Of greater interest
are the VLSI analogue counterparts of the FPGA that are emerging; variously
known as the `Electrically Programmable Analog Circuit' (IMP, Inc., 1996),
the `Field Programmable Analog Array' (Bratt & Macbeth, 1996; Motorola,
Inc., 1998) and the `Totally Reconfigurable Analog Circuit' (Zetex plc, 1996).
The elementary repeated unit in these devices is an operational amplifier
(op-amp), and the configuration determines attributes of each op-amp and
its local circuitry (for instance, to determine its frequency response) as well
as some aspects of how the op-amps are interconnected. While evolutionary
experiments with these devices would undoubtedly be informative, they are
not used in this book because the op-amps present large predefined building
blocks, which is against the evolutionary philosophy I chose to follow (see 2.4.1
below for the justification). However, a digital gate is just a simple high-gain
amplifier made of a few transistors (a much smaller unit than an op-amp),
normally kept in saturation by observing various design constraints: these
are absent in the evolutionary experiments to be presented, and the XC6216
operates as an analogue device made of these high-gain amplifiers, rather
than in the digital way intended by the manufacturers.
The main points of this book are independent of the choice of reconfig-
urable medium: what is important is that evolution is intrinsic -- individuals
are evaluated as configurations of the real medium (or a highly accurate
physical simulation of it if this is possible), not in an abstract simulation.
I will concentrate on digital FPGAs as these are the most suitable and so-
phisticated reconfigurable devices currently available, but with an open mind
even to radically new architectures (e.g. the Evolvable Electro-Biochemical
Systems proposed by Kitano (1996b)).

2.1.5 Self-Timed Digital Design


In self-timed or asynchronous digital design (see Gopalakrishnan and Akella
(1992) for a brief introduction), the global clock of synchronous design is re-
placed by point-to-point handshaking between subcircuits to be co-ordinated
in time. The subcircuits are still locked in compute-communicate-compute-
communicate... cycles, so this is very different from the circuits with rich
dynamics and interactions between parts that I promote in this book, as
will become clear through examples. It is worth noting, though, that asyn-
chronous designs can be implemented in FPGAs (for example Oldfield and
Kappler (1991), Brunvand (1991), Payne (1995)), so in principle it would be
possible for intrinsic hardware evolution to produce such circuits if appropri-
ate.

2.1.6 Analogies with Software: Ray's Tierra


Ray (1995) gives an account of how a comparative biology may be founded
through `inoculating evolution by natural selection into the medium of the
digital computer,' and comparing the resulting phenomena with natural bi-
ology. In the famous Tierra system, evolving machine-language computer
programs inhabit the storage-space of a virtual digital computer. `Evolution
is then allowed to find the natural forms of living organisms in the artifi-
cial medium.' The paper discusses how the programs evolve tailored to the
`digital physics' of their `world,' consisting of the way in which the machine
language instructions are executed and the structure of the virtual machine's
memory. A particular approach is promoted: `to understand and respect the
natural form of the artificial medium, to facilitate the process of evolution in
generating forms that are adapted to the medium, and to let evolution find
forms and processes that naturally exploit the possibilities inherent in the
medium.'
Although the primary goals and media are different (synthetic biology in
software rather than engineering in hardware) -- resulting in different conse-
quences, problems, and possibilities -- the same approach will be argued for in
this book; this time by considering how artificial evolution may best be used
to perform engineering design automatically, especially considering removal
of conventionally imposed design constraints.

2.1.7 A Dynamical Systems Perspective


There is a `new wave' in attempting to understand and create intelligent
systems (such as autonomous mobile robots), which is centered around a be-
havioural decomposition of the system, rather than the functional decomposi-
tion characterising `classical' AI[8] (Brooks, 1991, 1995). Part of this movement
has been a revival of concepts from the Cybernetics endeavour, which was
based around the application of modern control theory and dynamical sys-
tems theory to intelligent agents (Ashby, 1960). This revival is partly linked
to the development and recognised importance of Chaos Theory -- an exten-
sion to the dynamical system framework -- and to the use of neural network
learning techniques and/or artificial evolution; in these fields dynamical sys-
tems theory has sometimes been found a more appropriate explanatory or
constructive framework than computational or rule-based approaches (Kolen,
1994; Beer, 1995; Smithers, 1995; Husbands, Harvey, & Cliff, 1995).
Throughout this book, I will talk of evolution crafting the dynamical be-
haviour of electronic systems, and I will not attempt a detailed interpretation
of their operation other than to sketch out the broad mechanisms that seem
to be at work. Even though it will not be formally applied, I shall tacitly take
a dynamical systems perspective in order to facilitate the contemplation of
the largest possible set of behaviours and mechanisms. This is worthwhile,
because I want to allow artificial evolution to explore new regions of `design
space' without being encumbered by intellectual constraints arising from a
possibly more restrictive conceptual framework.
[8] AI = Artificial Intelligence.

2.2 Evolutionary Algorithms for Electronic Design: Other Approaches
Having given the grounding of my own approach in the previous `Inspiration'
section, I now consider the existing body of work related specifically to hard-
ware evolution. At the time of writing, the best sources of overview material
are Hirst (1996b), Sanchez and Tomassini (1996) (already rather outdated)
and Higuchi and Iwata (1997).
2.2.1 ETL
The idea of applying artificial evolution to the automatic design of config-
urations for reconfigurable VLSI electronic devices was first investigated by
a group working at ETL (Electrotechnical Laboratory, Tsukuba, Japan) in
1992 (Higuchi, Niwa, Tanaka, et al., 1993a), and dubbed `EHW' (standing for
`Evolvable HardWare'). One of their first experiments was to perform extrin-
sic evolution of 108 of the configuration bits of a GAL16V8 Programmable
Logic Device (PLD) (made by Lattice Corp.) to perform the 6-multiplexer
function. At the end of evolution these configuration bits could be used to
configure a small part of the whole chip, which is based around a fixed AND-
OR architecture (Green, 1985), to compute the desired feedforward Boolean
function of six inputs and one output.
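The 6-multiplexer target function itself is easy to state (a sketch with an assumed bit ordering of two address bits followed by four data bits; ETL's encoding of the task may differ):

```python
def multiplexer6(bits):
    """The 6-multiplexer: the two address bits select which of the
    four data bits appears at the output."""
    a0, a1 = bits[0], bits[1]
    data = bits[2:]
    return data[(a0 << 1) | a1]

# The exhaustive truth table that extrinsic evolution must match:
table = [multiplexer6([(x >> i) & 1 for i in range(5, -1, -1)])
         for x in range(64)]
print(len(table), sum(table))   # 64 32: half of all input vectors give a 1
```

An extrinsic fitness function can simply count how many of the 64 rows of this table a candidate configuration reproduces.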
The group (with various members, but led by Tetsuya Higuchi) went on
to propose an architecture for a VLSI ASIC[9] (Higuchi, Iba, & Manderick,
1994b). In this architecture, a whole population of individual systems would
be instantiated at once on a single chip, and it was assumed that the environ-
ment could simultaneously supply a performance measure for each individual.
Each individual consisted of a reconfigurable logic device, to which artificial
evolution was applied, followed by a reinforcement learning component also
in hardware. The learning component was not described, but a high-speed
implementation of an on-chip parallel GA was proposed, involving a bitwise
parallelisation of the genetic operators.
Higuchi et al. (1994b) noted that in general, fitness evaluations are con-
sidered to dominate the total GA execution time. However, it was observed
that in applications such as function optimisation, where the evaluation can
be extremely rapid, then the time taken to perform selection, crossover and
mutation can dominate. It was acknowledged that only in such situations is
it worthwhile speeding up selection and genetic operations by implementing
the GA in hardware. I do not consider hardware implementation of the GA
in this book, but in applications where this would be desirable, it is clear
that it is a straightforward piece of digital design, as demonstrated by Scott,
Samal, and Seth (1995), Turton and Arslan (1995).
[9] ASIC = Application-Specific Integrated Circuit.
The ASIC was not built, but extrinsic experiments using a simulation of
the GAL16V8 PLD continued (Higuchi, Niwa, Tanaka, et al., 1993b; Higuchi
et al., 1994b), by applying the evolved combinatorial circuits as the state-
transition functions for a Finite State Machine (FSM). First, a 3-bit counter
was evolved by directly comparing the outputs of three evolved combinato-
rial functions of three inputs with the desired state-transition function, over
all eight possible input combinations to the functions. This was extended
to evolving the state-transition and output functions for a Mealy FSM. In
their example, the functions for a four state, single-input, single-output ma-
chine were evolved, by comparing the output of the evolved machine, over
a sequence of random inputs, with the ideal required output. For this ex-
periment, the genotype length was 157 bits. In later experiments, a more
compact genetic encoding scheme (involving variable length genotypes) was
adopted (Kajitani, Hoshino, Iwata, et al., 1996), because an increase in geno-
type length was thought to increase GA execution time prohibitively.
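The fitness evaluation for the 3-bit counter can be sketched like this (illustrative only: `candidate` stands in for any evolved set of three combinatorial functions, and this is not the GAL16V8 simulation used at ETL):

```python
def counter_fitness(candidate):
    """Score a candidate next-state function for a 3-bit counter:
    one point per correct output bit, over all eight current states."""
    score = 0
    for state in range(8):
        desired = (state + 1) % 8
        produced = candidate(state)
        for bit in range(3):
            if (produced >> bit) & 1 == (desired >> bit) & 1:
                score += 1
    return score   # maximum 8 states x 3 bits = 24

print(counter_fitness(lambda s: (s + 1) % 8))   # 24: a perfect counter
print(counter_fitness(lambda s: 0))             # a poor candidate scores less
```

Because the function is feedforward and the input space is tiny, the comparison can be exhaustive; the Mealy-machine experiment replaced this with scoring over a sequence of random inputs.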
Higuchi and Hirao (1995), Higuchi, Iwata, Kajitani, et al. (1996a, 1996b)
present an interesting application: hardware is evolved to duplicate an exist-
ing control system, receiving the same inputs, and being scored according to
how closely its outputs approximate that of the existing system. The idea is
that if the main controller fails, then an evolved duplicate will take over. As
an example, a combinatorial circuit was evolved to emulate a 4-bit compara-
tor as part of the controller of a welding arm. The structures being evolved
were very similar to that of the GAL16V8 PLD simulated in earlier exper-
iments, but this time they were emulated on Xilinx XC4025 FPGAs rather
than being simulated in software. The configuration of the real FPGA was
still not itself evolved.
In another project (Higuchi et al., 1996a; Iwata, Kajitani, Yamada, et al.,
1996), hardware is evolved to recognise three characters drawn by hand on
an input tablet divided into 8 × 8 pixels. Again, the structure subjected to
evolution was the PLD AND-OR type similar to the earlier experiments: as
in the comparator experiment, it was emulated using an FPGA, but the con-
figuration of the FPGA itself was not actually evolved. An interesting feature
of the experiment was that the inclusion of a `minimum description length'
component in the fitness function (to favour `simple' solutions) was found
to improve the generalisation of the evolved recognisers for noisy characters
(deviating from those on which it was trained). This work highlights ETL's
emphasis on speed of operation, on-line adaptation (see also Higuchi, Iba,
and Manderick (1994a)), and understandability of the final result; the set of
Boolean functions that was evolved was considered to be more comprehen-
sible than a neural network with its weights adapted to perform the same
task.
In the character recognition task, their variable-length GA (which they
call VGA) was again used in preference to the less compact encoding scheme
(which they call a Simple GA, SGA). The two are compared by Iwata et al.
(1996):
"The main advantage of VGA in pattern recognition is that we can
handle larger inputs than using SGA. For example, EHW could learn three
patterns of 16 inputs by SGA with the chromosome length of 840. On the
other hand, by VGA, EHW can learn three patterns of 64 inputs with the
chromosome length of 187.6 in average. In addition, the learning by VGA
is much faster than SGA; 416.7 generation [sic] by VGA, 4053 by SGA."
A radically different solution to the perceived genotype-length problem is
proposed by Higuchi et al. (1996b), stating:
"Present EHW research [is] all based on gate-level evolution. However,
the size of a circuit allowed at gate-level evolution is not so large because
of the increase of GA execution time. Low-level hardware functions given
by such an EHW would be insufficient for practical applications
However, if hardware is genetically synthesized from higher level hard-
ware functions (e.g. adder, subtracter, sine generator, etc.) than primitive
gates (e.g. AND gates) in gate-level evolution, more useful hardware func-
tions can be provided by EHWs. Thus, function-level EHWs aim at more
practical applications than gate-level EHWs."
This response may indeed permit immediate application to industrial problems. However, I disagree with the damnation of gate-level (and, in general, fine-grain) evolution. In Chapter 5, I demonstrate successful evolution at the gate level with a genotype length of 1800 bits over the course of 5000 generations. There were no indications that an upper bound on genotype length was being approached. The evolutionary algorithm was a Species Adaptation Genetic Algorithm (see Section 2.4.2 below): a standard GA is not suitable for the evolution of complex systems. In addition, the structure being evolved was very different from ETL's EHW feedforward logic functions, and it may be of a more `evolvable' nature (see Section 2.4.1). I will present a general argument in favour of fine-grain evolution in Section 2.4.1 below, taking the view that biasing or more gently restricting the genetic encoding scheme is a more appropriate way to give domain knowledge to the EA than enforcing the use of large predefined building-blocks.
Nevertheless, the function-level architecture proposed is interesting in its own right, and may have useful applications. The project is to build as an ASIC a very coarse-grained FPGA which can then be used with evolutionary methods (Murakawa, Yoshizawa, Kajitani, et al., 1996). The basic repeated unit can perform one of seven 16-bit floating-point functions: add, subtract, if-then, sine, cosine, multiply, divide. These are arranged in columns five units high, and the columns are interconnected by crossbar switches. The genotype will be used to encode which function is to be performed by each unit, and the settings of the crossbar interconnections; simulation studies have shown success on some benchmark problems. The function units occupy so much silicon area that only two columns of five will be implemented on the ASIC, but multiple chips may be tiled together.
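A minimal sketch of how such a function-level genotype might be decoded. The function set matches the seven listed above, but the simple chaining of units, the zero-guards for if-then and divide, and the broadcast of the second operand are all simplifying assumptions standing in for the real crossbar routing and data formats.

```python
import math

# Seven unit functions, as in the Murakawa et al. function-level FPGA.
# The exact semantics of `if-then` and the divide-by-zero behaviour are
# assumptions made for this sketch.
FUNCS = [
    ("add",    lambda x, y: x + y),
    ("sub",    lambda x, y: x - y),
    ("ifthen", lambda x, y: y if x > 0 else 0.0),
    ("sin",    lambda x, y: math.sin(x)),
    ("cos",    lambda x, y: math.cos(x)),
    ("mul",    lambda x, y: x * y),
    ("div",    lambda x, y: x / y if y != 0 else 0.0),
]

def decode(genotype):
    """Map each gene (an integer) to one of the seven unit functions."""
    return [FUNCS[g % len(FUNCS)] for g in genotype]

def run_column(genotype, x, y):
    """Chain the decoded units: each unit's output feeds the next unit's
    x input, while y is broadcast to every unit (a stand-in for the
    crossbar interconnections)."""
    for _name, f in decode(genotype):
        x = f(x, y)
    return x
```

For example, a column of five `add` units (`genotype = [0, 0, 0, 0, 0]`) applied to `(1.0, 1.0)` accumulates to 6.0.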
2.2.2 de Garis
Hugo de Garis was one of the founding members of the ETL EHW team in 1992, but after one year left for ATR HIP (Advanced Telecommunications Research institute, Human Information Processing research laboratories, Kyoto, Japan). There, he has pursued a long-term project to develop a technology for constructing very large artificial neural networks (the aim is one billion neurons), which could, in principle, have their structure determined through artificial evolution. However, only very limited evolutionary feasibility studies have been conducted (de Garis, 1996), with the focus of the project being the implementation, not the design, of huge neural networks.
de Garis (1993b) considers this `CAM-Brain' project to be a form of intrinsic evolvable hardware. The neural networks are implemented in a large three-dimensional Cellular Automaton Machine (CAM). The CAM (Toffoli & Margolus, 1991) is a high-speed and cost-effective implementation of a cellular automaton, being based on the very rapid parallel update of cell states held in commercial RAM chips. This machine was conceived as a general form of `programmable matter,' where the state transition rules of the cells are thought of as defining the laws of physics for this cellular medium.^10 For example, fluid flow can be simulated by giving the cells state-transition rules that describe the interactions of small elements of the fluid. In this spirit, the CAM-Brain project gives the cells functions so that an artificial neural network can `grow' in the cellular space. Starting from an initially simple `embryo,' growth signals travel down a trail of cells, this trail being a connected path of cells having a particular class of internal state. The growth signals themselves are also just an alteration of the cells' state, and the state-transition rules have been crafted (by a human with software tools, not by evolution) to allow these signals to travel down the trails. On reaching the end of the trail, the particular value of the growth signal determines what happens next: the trail can undergo various kinds of branching and joining. Thus, the sequence of growth signals determines the final structure of the pattern of trails, and this could be genetically determined.
After this `growth phase,' the network of trails is used in a `neural signalling phase.' Now, the signals passing down trails are the interactions between `synapses' and `neurons' formed at trail junctions. Whether the evolution of such a network's structure should be considered hardware evolution is a matter of semantics (I would say not), but it is definitely not the evolution of the structure of an electronic circuit as is the topic of this book.
^10 Beware: there also exists an alternative, significantly different, use of the term `programmable matter' (Rasmussen, Knudsen, & Feldberg, 1991).
de Garis has also published some proposals pertaining to the evolution of
electronic circuits (Higuchi et al., 1993a; de Garis, 1993a). There is a vision of evolution `at electronic speeds' (de Garis, 1995) underlying both the CAM project and these proposals. A `Darwin Machine' is proposed, where reconfigurable devices (on which circuit designs are to be intrinsically evolved) are situated next to fitness-measuring and genetic-operation devices, and the whole arrangement replicated to give a high degree of parallelism. The entire system could be implemented in hardware, perhaps even on a single chip, or on a chip that could be tiled to form an arbitrarily large parallel machine.
The phrase `evolution at electronic speeds' conjures up an image of very rapid evolution indeed. But it is never stated how this is to come about: there seems to be an assumption that by using hardware rather than software there will automatically be a huge increase in the speed of the evolutionary process. This assumption neglects the fact that the goal is to evolve circuits that perform some real-world task, and that time is therefore not something that can arbitrarily be manipulated.^11 So how, in fact, can the process of evolution be made to go faster? There seem to be three main ways:

1. By developing more efficient evolutionary algorithms. Perhaps by doing this, the number of fitness evaluations required can be reduced. However, there must be some upper limit on the amount of `design work' that can be done by evolution on the basis of a given number of fitness evaluations (Worden, 1995). To make a guess, let's say we can reduce the number of fitness evaluations by a factor of ∼ 10. (I use the symbol `∼' to mean `on the order of.')
2. By evaluating the fitnesses of many individuals at once, using parallel hardware. Depending on the EA developed in (1), which may be an asynchronous (`steady-state') non-generational type, this could increase the speed of evolution by a factor of up to the population size. It is also difficult to predict the population size of future EAs, but a guess of ∼ 100 lies about midway between the possible extremes of orders of magnitude, as represented by the current Evolution Strategy and Genetic Programming EAs.
3. By performing the fitness evaluations in faster than real time. To evolve a circuit to perform a particular real-world task, it could be evaluated in a high-speed hardware or software emulation of its final operating environment, and given fitness scores according to its behaviour at the accelerated timescale. At the end of evolution, when it is time to use the evolved circuit in the real world, it must be possible to slow down all of its dynamics that crucially affect behaviour. If the circuit was evolved for an emulated environment running k times faster than real time, then the circuit must be slowed down by a factor of k to operate correctly in the real world. The obvious way to do this is to evolve a synchronous digital circuit, where the clock speed can easily be varied. There is a constraint that only hardware structures with a controllable speed of behaviour can be evolved: this may have an impact on the evolvability of the system (see Section 2.4.1), which may end up increasing the number of fitness evaluations required.

^11 Thanks to Inman Harvey for this point of view.
A second constraint is that it must actually be possible to simulate/emulate the circuit's environment adequately, so that the final evolved (slowed down) circuits work in the real world. For many electronic applications, this will be possible, but de Garis concentrates on the most problematic case conceivable: the evolution of control systems for autonomous mobile robots. For this robot case, there exists a school of thought that only the use of real robots (no simulation of the robot-environment interactions) will be sufficient in the long term (Mondada & Floreano, 1996), but there is no general agreement on this question.
A third constraint is that if the maximum speed at which the hardware can operate is S_max, and the speed at which it must finally operate in the real world to perform the task it was evolved for is S_needed, then the maximum factor by which the fitness evaluations can be accelerated is k = S_max / S_needed. Thus, a speed-up can only be achieved if the full speed of the hardware is not required during final operation in the real world. As an example, consider the evolution of a synchronous digital circuit as a configuration of the XC6216 FPGA. Say that the final circuit is required to operate with a clock speed of 1 MHz in the real-world application, and that the maximum depth of combinatorial logic between clocked elements is constrained to be 10 function units. Allowing 5 ns per function unit, this gives a maximum clock frequency of 20 MHz. Hence the maximum speedup that can be obtained through the use of a high-speed environment emulation is k = 20 MHz / 1 MHz = 20 in this application. This was a rather arbitrary example, but not unrealistic.
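The arithmetic of this third factor can be captured in a few lines. The 10-unit depth and 5 ns per unit reproduce the illustrative XC6216 figures above; they are example numbers, not device guarantees.

```python
def max_clock_hz(depth, delay_per_unit_s):
    """Maximum clock rate set by the worst-case combinatorial path."""
    return 1.0 / (depth * delay_per_unit_s)

def emulation_speedup(depth, delay_per_unit_s, f_needed_hz):
    """k = S_max / S_needed: the headroom usable for accelerated evaluation."""
    return max_clock_hz(depth, delay_per_unit_s) / f_needed_hz

# With the figures from the text: S_max = 1/(10 * 5 ns) = 20 MHz,
# against a 1 MHz real-world requirement, giving k = 20.
k = emulation_speedup(depth=10, delay_per_unit_s=5e-9, f_needed_hz=1e6)
```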
Factors 1-3 taken together do give a considerable increase in the speed of the evolutionary process (by a factor of ∼ 10^4, but note that this is based on very vague guesses indeed), but none of them seem to live up to the image conjured by `evolution at electronic speeds.'
A final alternative, explored by de Garis' earlier work (de Garis, 1990), is to evolve separately subcircuits (or neural modules) for separate components of the behaviour of the final desired system. Once these components have been evolved (perhaps in parallel), they are `frozen' and used as building blocks in the next stage of evolution; there could be many stages, leading to a hierarchy of building blocks. This approach is dependent on the ability to decompose the desired behaviour into independent elements that can later be combined: it depends on the application and on the human experimenter's skills and understanding of it. The speed-up factor in the evolutionary process obtained through the simultaneous evolution of subsystems is equal to the number of subsystems being evolved in parallel, with a potential penalty when it is time to integrate them if they have evolved in such a way as not to piece together easily (i.e. the elements were not truly independent).
2.2.3 EPFL & CSEM: `Embryonics'

The word `embryonics' was coined by de Garis (1993a) to mean `embryological electronics'; in other words, electronic systems that are in some respects analogous to embryological processes in nature. In a collaboration between LSL EPFL (Logic Systems Laboratory, École Polytechnique Fédérale de Lausanne) and CSEM (Centre Suisse d'Électronique et de Microtechnique SA, Neuchâtel), a large group has explored possibilities for new FPGA architectures possessing `quasi-biological' properties of self-repair and self-reproduction. The techniques used are strongly inspired by nature, where the level of analogy is to compare the repeated blocks of the FPGA with the cells of a multi-cellular organism.
The basic idea is that the description of the circuit to be implemented on the FPGA is like the genotype of a multi-cellular organism. In the organism/FPGA, each cell differentiates (that is, becomes committed, not necessarily irreversibly, to a specific mode of behaviour out of its repertoire) according to both the context in which it is situated, and to information from the genotype relating to that context. In the FPGA case, the context is the state of neighbouring cells; in biology it is more complicated. By constructing a circuit specification in terms of what each cell should do depending on the state of its neighbours, rather than on its absolute physical position, the possibility for highly robust self-repair mechanisms is introduced. Cells are equipped with built-in self-test, and if a faulty region of silicon is identified, the circuit can dynamically redistribute itself over the remaining functional cells (assuming there were some unused cells that can be recruited). This is simplified by having a (row, column) index as part of the state of each cell: this index gives the position of the cell within the circuit design, not its absolute position, and when a fault is detected, a whole row and/or column of the FPGA can be skipped by not incrementing the appropriate counter in the faulty row/column. If the index arithmetic is done modulo n for one or both of the counters, then the circuit will be repeated every n non-faulty FPGA cells in that direction, giving the possibility for multiple-mode redundancy as well as self-repair within each repetition.
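The row-skipping index scheme can be sketched in a few lines of Python. This is a toy model: real Embryonics cells compute these indices locally with neighbour-to-neighbour signals, not with a global loop.

```python
def logical_rows(n_physical, faulty, n=None):
    """Map each physical row to its logical index (None if faulty).
    The counter is simply not incremented through faulty rows, and
    taking it modulo n makes the design repeat every n healthy rows."""
    out, idx = [], 0
    for row in range(n_physical):
        if row in faulty:
            out.append(None)              # faulty row: counter unchanged
        else:
            out.append(idx if n is None else idx % n)
            idx += 1
    return out
```

For example, `logical_rows(6, {2})` gives `[0, 1, None, 2, 3, 4]`: the design shifts past the faulty row. With `n=3` the healthy rows carry indices `[0, 1, None, 2, 0, 1]`, so the same design repeats every three non-faulty rows, giving the multiple-mode redundancy described above.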
The FPGA architectures developed are aimed towards the implementation of logic circuits described as binary decision trees (Akers, 1978) and their derivatives. This description could be placed under the control of artificial evolution, and intrinsic (or extrinsic) hardware evolution performed. The resulting evolved system would possess the high level of robustness given by the self-repair mechanisms of the medium. In this way, the Embryonics approach is complementary to the evolutionary fault-tolerance mechanisms I will develop in Chapter 4, where robustness evolved into the design itself can build on top of the robustness of the medium. For full details of the Embryonics project, see Mange (1993), Mange, Stauffer, Sanchez, et al. (1993), Durand and Piguet (1994), Marchal and Stauffer (1994), Marchal, Piguet, Mange, et al. (1994b, 1994a), Mange and Stauffer (1994), Marchal, Nussbaum, Piguet, et al. (1996), Mange, Goeke, Madon, et al. (1996).
2.2.4 A Sophisticated Extrinsic Approach: Hemmi et al.

Hemmi et al. (also at ATR) have been working to push forward the boundaries of the extrinsic approach to hardware evolution. The aim is to evolve a description of circuit behaviour, and then to use existing automatic synthesis tools to produce a circuit schematic, an FPGA configuration, or the masks to make an ASIC (Hemmi, Mizoguchi, & Shimohara, 1996b). A variant of the Genetic Programming EA technique is used (Koza, 1992, 1994) to manipulate tree-structured genotypes. These genotypes, through a sophisticated process of development using formal grammar theory, map to a behaviour-level Hardware Description Language (HDL) description of the circuit. The fitness is then evaluated by feeding the behavioural-HDL description into a behavioural digital logic simulator; such simulators already exist as parts of CAD suites. The process of grammatical development of the genotype has been carefully contrived so as to allow regularities of the task to be exploited through the repetition of substructures within the whole design: see Hemmi, Mizoguchi, and Shimohara (1994, 1996a), Mizoguchi, Hemmi, and Shimohara (1994), Hikage, Hemmi, and Shimohara (1996) for details. (See also the independent work of Seals and Whapshott (1994) for the problems encountered if an HDL description is encoded directly onto the genotype.)
If, for every fitness evaluation, the genotype was expressed to produce the HDL description, this was then run through the automatic synthesis tools to produce an FPGA configuration, and the fitness assigned according to the behaviour of the real FPGA, then this would be intrinsic hardware evolution. The transformation between HDL description and FPGA configuration would be subsumed as part of the genotype→phenotype mapping (although it may be nondeterministic, because automatic synthesis tools can use methods like simulated annealing (van Laarhoven & Aarts, 1987)). In this hypothetical case, evolution would be manipulating the primitives of the real FPGA, but via the mapping imposed by the genotype→HDL→FPGA process. In practice, automatic synthesis tools are too slow for this to be practical.
I raised the above imaginary intrinsic version to illustrate the difference from Hemmi's actual extrinsic technique, where the individuals receive a fitness score according to the performance of a digital logic behavioural simulator operating from the HDL description. The advantages of the extrinsic approach are firstly that there is the potential for the behavioural simulation (which includes no details of electronic timings) to be executed very quickly, faster than real time, on a powerful computer; the second advantage is that the final design could be implemented in a variety of different silicon technologies. The disadvantage is that evolution never `sees' any of the characteristics of the implementation medium: it must operate solely at the abstract level of a behavioural description. Evolution cannot, therefore, take account of any of those hardware characteristics in forming the circuit design, so could produce a circuit that does not use the hardware resources well. In addition, the structure of the circuits must be tightly constrained in order to allow the simulation to be an adequate model of the final implemented system. These two observations will form the basis of my argument in favour of `unconstrained' intrinsic evolution, to be developed in the next chapter.
2.2.5 Evolving Analogue Circuits

It was mentioned in Section 2.1.4 above that reconfigurable analogue VLSI devices exist, showing the potential for the intrinsic evolution of analogue circuits. At the time of writing, this has never been performed, to my knowledge. Notice, though, that later in this book I will allow evolution to break the digital design constraints, so that even though using reconfigurable devices intended for digital operation, evolution is demonstrably building continuous-time analogue systems out of them.
The difficulty with the extrinsic approach is that the simulation of analogue circuits can be highly computationally expensive. Grimbleby (1995) found the limits of what could be evolved with a 33 MHz 486DX personal computer to be active or passive linear networks of up to 12 components. For passive linear networks, filters were successfully evolved to both frequency-domain and time-domain specifications by using a GA to determine the network's topology, and then using numerical optimisation to set the component values. It was concluded that to evolve nonlinear networks (the simulation of which is far more computationally expensive than linear analysis), either a breakthrough in nonlinear analysis methods or a new generation of computers must be awaited.
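The division of labour Grimbleby used (evolution picks the topology, numerical optimisation sets the values) can be sketched for the value-setting stage alone. Everything here is an assumption for illustration: the topology is hard-wired as a single RC low-pass, the target is an ideal first-order response with a 1 kHz cutoff, and a crude multiplicative hill climber stands in for his actual numerical method.

```python
import math
import random

FREQS = [100.0, 300.0, 1e3, 3e3, 10e3]             # sample frequencies (Hz)

def rc_response(r, c, f):
    """Magnitude response of a single RC low-pass at frequency f."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * f * r * c) ** 2)

def error(r, c, target_fc):
    """Squared error against an ideal first-order response, cutoff target_fc."""
    return sum((rc_response(r, c, f)
                - 1.0 / math.sqrt(1.0 + (f / target_fc) ** 2)) ** 2
               for f in FREQS)

def fit_values(target_fc=1e3, iters=2000, seed=1):
    """Crude hill climber over (R, C) for the fixed topology."""
    rng = random.Random(seed)
    r, c = 1e3, 1e-6                                # arbitrary starting values
    best = error(r, c, target_fc)
    for _ in range(iters):
        nr = r * math.exp(rng.uniform(-0.2, 0.2))   # scale-free steps
        nc = c * math.exp(rng.uniform(-0.2, 0.2))
        e = error(nr, nc, target_fc)
        if e < best:
            r, c, best = nr, nc, e
    return r, c, best
```

Any (R, C) pair with RC = 1/(2π · 1000 Hz) is optimal here, since the response depends only on the product; the climber converges towards that product.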
A year later, Koza et al. were able to follow the second of these two options, using a MIMD^12 computer consisting of 64 80 MHz Power PC 601 processors. The simulator used is the industry-standard SPICE simulator, which can simulate active nonlinear networks but would normally be too slow to be used for evolutionary evaluations. Using Genetic Programming, circuits which (in the simulation) satisfy various quite difficult filter specifications have been produced, with both topology and component values being evolved (Koza, Bennett III, Andre, et al., 1996c). Circuits of impressive complexity and performance resembling operational amplifiers (but not satisfying all of the requirements for a really useful op-amp) have also been evolved (Koza, Andre, Bennett III, et al., 1996a). These experiments benefited from the use of `Automatically Defined Functions' (Koza, Andre, Bennett III, et al., 1996b; Koza, 1994), a feature of Genetic Programming whereby the genotype consists of several trees, and some of these trees can be repeatedly `called' as functions by other trees. This allows phenotypes containing repeated structures to be evolved. Another way in which this was facilitated was through the use of Gruau's cellular encoding; see Section 2.4.1 below.

^12 MIMD = Multiple Instruction, Multiple Data.
This use of the SPICE simulator is close to what I called an `accurate physical simulation' in the introduction. The more closely extrinsic evolution's simulator comes to perfectly capturing all of the properties of the hardware, the more closely does extrinsic evolution approximate the intrinsic approach, apart from in respects like speed and cost. Much of what I have to say under the banner of `intrinsic hardware evolution' also applies to circuits evolved in a SPICE simulation. It has yet to be clarified to what extent SPICE really does accurately model the behaviour of the evolved analogue circuits (which may contain unusual structures) if actually built: none of them ever have been.
In fact, to construct one of Koza et al.'s circuits would be difficult, because only particular `preferred' component values (e.g. for resistors) are readily available. In independent work, Horrocks et al. have shown that evolution can successfully be constrained to use only a particular series of preferred component values (Horrocks & Spittle, 1993; Horrocks & Khalifa, 1994, 1995). In these experiments, only the component values for predefined filter network structures were evolved, but presumably this could be combined with a genetic design of the structure itself.
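The preferred-value constraint is easy to state concretely. As an assumed example, take the E12 series (the standard twelve-values-per-decade set for resistors); Horrocks et al. built the restriction into the genetic encoding itself, whereas this helper merely snaps an unconstrained evolved value onto the series.

```python
import math

# E12 preferred-value mantissas (one decade); standard series, used here
# as an illustrative stand-in for whichever series the encoding enforces.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def snap_to_series(value, series=E12):
    """Return the series value (at the appropriate decade) nearest to value."""
    decade = 10.0 ** math.floor(math.log10(value))
    # Include the first value of the next decade so e.g. 9500 snaps to 10000.
    candidates = [m * decade for m in series] + [series[0] * decade * 10]
    return min(candidates, key=lambda cand: abs(cand - value))
```

So an evolved resistance of 5000 Ω would be realised as 4.7 kΩ, and 9500 Ω as 10 kΩ.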
A second difficulty in physically constructing the circuits evolved extrinsically in SPICE might be the inaccuracies and drift in the component values, as well as parasitic effects: for example, a resistor has some capacitance, and an inductor has some resistance. For the former, it may be possible to incorporate a component-value sensitivity analysis into the fitness function; for the latter, Horrocks and Khalifa (1996) show that component models including the parasitics can be used in the simulation to evolve circuits that would work well in the real world.
Although there is every reason to think that the extrinsic hardware evolution methods described here will go far, it should be noted that only circuits that can readily be simulated within the experimenter's financial budget can be evolved in this way. In Chapter 5 I intrinsically evolve a circuit of a type beyond current simulation techniques (no matter how expensive): evolution is given a completely free hand to explore the full repertoire of behaviours available from the FPGA provided, and is not constrained by issues of simulatability. This potentially allows the FPGA to be used more effectively, as we shall see in the next chapter.
2.2.6 A Silicon Neuromorph: The First Intrinsic Hardware Evolution?

The first experiment of which I am aware that might be classed as intrinsic hardware evolution is reported by Elias (1992). A `Silicon Neuromorph', a spatially extensive model of a dendritic tree connected to a single spike-generating soma, was implemented in real silicon. A hybrid of a GA with simulated annealing was used to evolve the points of attachment of input signals to the dendritic tree, where the inputs were to come from a CCD^13 camera, and the output was to control a motorised camera-orientation system: the task was to keep a moving object in the centre of the camera's field of view. Fitness trials were the evaluation of the behaviour of the real silicon neuromorph with the genetically specified pattern of input connections: intrinsic hardware evolution. Good results were attained for this, and some other simple tasks (Elias, 1994; Northmore & Elias, 1994).
Although this was never identified as a case of intrinsic hardware evolution by its authors, it pre-dates my own paper (Thompson, 1995a), which as far as I know was the first example of someone deliberately setting out to `do' intrinsic hardware evolution. There may be other cases of this sort, where the authors have not highlighted the hardware evolution aspect of their experiments.
2.2.7 Loosely Related Evolutionary Hardware Projects

There have been reports of other projects to evolve feedforward combinatorial logic networks in simulation (e.g. Louis and Rawlins (1991), Naito, Odagiri, Matsunaga, et al. (1996)), but they will not be considered here because, though interesting in themselves, they have little impact upon this book.
Similarly, much of the work applying evolutionary optimisation to various stages of VLSI synthesis, such as logic optimisation or placement and routing (Fourman, 1985; Benten & Sait, 1994; Lienig & Brandt, 1994; Schnecke & Vornberger, 1995; Miller & Thomson, 1995; Miller, Bradbeer, & Thomson, 1996), is not directly relevant to the arguments I shall put forward, but could become significant in later developments.

Another field I do not wish to enter here is the application of EAs to high-level computer architecture (Burgess, 1995; Teich, Blickle, & Thiele, 1996; Hirst, 1996a). Although this book may have implications for that area, I want to stay clearly focussed on the fundamentals of evolving electronic circuits.

2.3 Multi-Criteria EAs: Area, Power, Speed and Testability
It has been shown in a number of separate experiments that, in some circumstances, an EA can manipulate an electronic circuit to optimise it with respect to several fitness criteria, such as low area, low mean power consumption, low peak power consumption, high speed (low delay through combinatorial sections), and testability (being able to identify internal faults by applying test inputs). The ability to do this will probably be an important part of all approaches to hardware evolution.

^13 CCD = Charge-Coupled Device.
Martin and Knight (1993, 1995) use a GA to perform high-level behavioural synthesis tasks. Taking a Data Flow Graph (DFG) representation of the system and a library of predefined modules, the GA assigns modules to operations in the DFG. There are many alternative modules for each operation, and each alternative gives different characteristics of area, delay, and power use. The GA must also perform scheduling, whereby the same physical module implements more than one operation in the DFG by being used at different times. The fitness function is a requirement to minimise delay, area, average power, peak power, or weighted sums of the latter three. Additionally, multiple constraints can be placed on these factors: in the case of a constraint violation, a fixed penalty plus a penalty increasing with the degree of violation is added to the fitness. Bright and Arslan (1996) describe how a GA can also be used to manipulate the DFG itself, inserting delays and parallel branches to achieve various kinds of re-timing, pipelining, and parallelism.
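The penalty scheme described can be written down directly. The weights and penalty constants below are illustrative assumptions, not Martin and Knight's actual values.

```python
def cost(metrics, weights, constraints, fixed_penalty=1000.0, slope=100.0):
    """Weighted-sum cost over criteria (area, delay, power, ...), where
    each violated constraint adds a fixed penalty plus a further penalty
    growing with the degree of violation.
    metrics/weights: dicts keyed by criterion name;
    constraints: dict mapping criterion name -> maximum allowed value."""
    c = sum(weights.get(name, 0.0) * value for name, value in metrics.items())
    for name, limit in constraints.items():
        excess = metrics[name] - limit
        if excess > 0:
            c += fixed_penalty + slope * excess
    return c
```

For example, with `metrics = {"area": 10, "delay": 5}`, unit and double weights, and a delay limit of 4, the base cost of 20 picks up a 1000 fixed penalty plus 100 per unit of excess delay.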
Moving to a lower level, Hill and Kang (1994) show how a GA can select modules from a standard-cell library for individual logic gates, again subject to multiple criteria and constraints. There, the logic network is fixed and the GA selects the implementation of each gate; Arslan, Ozdemir, Bright, et al. (1996c), Arslan, Horrocks, and Ozdemir (1996b, 1996a) go a stage further and put the structure of the logic network itself under genetic control as well. The fitness function then not only has weighted components for area and delay (resulting from the library cell selection and from the network structure), but also for the correctness with which the desired pre-specified Boolean function is performed by the network. Drechsler, Becker, and Gockel (1996) demonstrate that a metric of testability can be included in a multi-criteria fitness function.
There is a consensus that holistic approaches are best: ideally, the criteria (functionality, power, area, etc.) should be taken into account at all stages of design and implementation. Intrinsic hardware evolution gives an opportunity to do this. There is no distinction between design and implementation: evolution constructs the circuits as physical objects. Criteria such as power consumption can be directly measured during the fitness evaluations, and included in the fitness function. These criteria will then be respected in all aspects of the final evolved system, from its use of the components available through to the mechanisms by which the task is accomplished. In the case of the criterion of fault-tolerance, there are practical difficulties in this scheme, but these are identified and resolved (at least partially) in Chapter 4.
2.4 A Philosophy of Artificial Evolution

Artificial evolution can be seen from several different perspectives, from viewing it as an engineering optimisation technique, through using it as a tool for producing systems otherwise too complex to design, to thinking of it as a model of some of the processes that take place in natural evolution. In this book, it is used in the engineering design of complex circuits that are well tailored to their reconfigurable hardware medium. Therefore, `Natural Evolution in an Artificial Medium' (Ray, 1995) is called for: I wish to allow evolution to explore the natural forms in the electronic substrate, being cautious not to constrain it with inappropriate preconceptions, taken either from conventional electronic design or from biology (see next chapter). This section sketches out a framework in which that might be done.
2.4.1 Domain Knowledge, Morphogenesis, Encoding Schemes and Evolvability

The evolution of complex systems can take a long time, but for engineering purposes we desire a satisfactory solution in minimum time. It can therefore seem sensible to provide some application- or domain-specific knowledge about known ways in which the problem can be solved, so that evolution will not have to waste time rediscovering them. This is sensible if this information inevitably would have to be rediscovered; but if the information represents just some ways of setting about solving the problem (perhaps ways suitable for human designers, or for evolution in biology, but not suitable for evolution in the electronic medium), then forcing evolution to use this information unnecessarily restricts the space of possible solutions. Even worse, it could steer evolution in ways incompatible with the nature of the evolutionary process or of the reconfigurable medium.
Several times in the discussions above, I have criticised the enforcement of large predefined building-blocks upon evolution. Their justification is either that these building-blocks have been shown to be useful in similar situations (usually not in the scenario of hardware evolution, but in human design or connectionist networks), or that by "doing some of evolution's work for it" in designing these blocks, the time to a satisfactory solution will be reduced. If the building-blocks really are good for evolution, the task, and the electronic medium, then this could work. The danger is that providing only building-blocks, which must be used, rigidly enforces the domain knowledge, which therefore really must be right.
Evolution proceeds by taking account of changes in behaviour caused by applications of the genetic operators. If most genetic changes result in radical changes in behaviour, then the evolutionary process degenerates to random search (Kauffman, 1993). In other words, the fitness landscape (the assignment of fitness values over the space of all possible genotypes) is too `rugged,' in that the fitnesses of genotypes separated by small amounts of genetic change are not sufficiently correlated to guide evolution. This could start to occur if the phenotypic primitives effectively manipulated by the genetic operators are too large, as noted by Cliff, Harvey, and Husbands (1993):
"Any high-level semantic groupings... necessarily restrict the possibilities
available to the evolutionary process insofar as they favor particular
paths at the expense of others, compared to letting the lowest-level
primitives be manipulated by genetic operators. The human designer's
prejudices are incorporated with his or her choice of high-level semantics,
and these restrictions give rise to a much more coarse-grained fitness
landscape, with steeper precipices. It might be thought that the use of
low-level primitives necessitates very many generations of evolution with
vast populations before any interesting high-level behaviour emerges, but
our simulations show that this is not necessarily the case."
So the imposition of large predefined building-blocks could impede evolution
(reduce the system's evolvability) by causing the fitness landscape to be too
rugged.
The way in which the genetic operators effectively manipulate the phenotypic
primitives is determined by the genetic encoding scheme: the mapping between
genotype and phenotype. This mapping is therefore of crucial influence on the
fitness landscape, and hence on evolvability. In a direct encoding, there is
a one-to-one mapping between phenotypes and genotypes. In an unrestricted
direct encoding, every possible phenotype (hardware configuration) is
represented by exactly one genotype. Jakobi (1996a) shows that by restricting
or biasing the genotype→phenotype mapping, domain knowledge can be given to
the evolutionary process.
In a restricted encoding, there is still a one-to-one mapping from genotypes
to phenotypes, but there are fewer possible genotypes than possible
phenotypes: some of the phenotypes have no corresponding genetic
representation, so can never be generated by the evolutionary process. In a
biased encoding, the genotype→phenotype mapping is many-to-one, so that if a
random genotype is selected then it is more likely to code for some
phenotypes than others. Domain knowledge can be introduced by using one or
both of these: restriction to exclude some phenotypes (presumed bad)
altogether, and biasing to make some phenotypes (presumed to be on average
better than the others) more likely.
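The distinction can be sketched in a few lines of code. The 3-bit genotype,
the five-phenotype alphabet, and the mapping table below are all hypothetical,
chosen only to illustrate restriction (some phenotypes are unreachable) and
bias (a many-to-one mapping makes some phenotypes more probable under random
genotype selection).

```python
from collections import Counter
from itertools import product

# Hypothetical phenotype space: five circuit configurations A-E.
PHENOTYPES = ["A", "B", "C", "D", "E"]

def biased_restricted_map(genotype):
    """Map a 3-bit genotype to a phenotype.

    Restriction: phenotype "E" has no genetic representation at all.
    Bias: four of the eight genotypes code for "A", so a random
    genotype is four times more likely to yield "A" than "B".
    """
    table = {
        (0, 0, 0): "A", (0, 0, 1): "A", (0, 1, 0): "A", (0, 1, 1): "A",
        (1, 0, 0): "B", (1, 0, 1): "C", (1, 1, 0): "C", (1, 1, 1): "D",
    }
    return table[tuple(genotype)]

# Enumerate all genotypes to see the induced distribution over phenotypes.
counts = Counter(biased_restricted_map(g) for g in product([0, 1], repeat=3))
print(counts)          # Counter({'A': 4, 'C': 2, 'B': 1, 'D': 1})
print("E" in counts)   # False: "E" is excluded by the restriction
```

Enumerating the genotype space like this makes the induced prior over
phenotypes explicit, which is exactly the sense in which such an encoding
embodies domain knowledge.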
The enforcement of large predefined building-blocks is effectively an extreme
form of restriction: it is impossible for a genotype to represent a circuit
that is not entirely constructed from them. Jakobi (1996a) recommends
encoding schemes that are both biased and restricted, and I agree. Restriction
can be used to exclude phenotypes that the experimenter is absolutely sure
are not of interest; biasing can be used to give evolution `hints' that
certain kinds of phenotypes may be more interesting than others. I do not
think that the state of knowledge in hardware evolution is sufficient to
justify the imposition of particular large building-blocks, and I suggest
that a less extreme form of restriction, along with biasing, would be
appropriate.
A particularly general biasing heuristic is that the use of repeated
substructures might be beneficial. Another, perhaps less general, heuristic
is to pay particular attention to circuits having symmetries: this has
certainly been shown to be of importance in evolving control systems for
robots with left/right symmetry (Cliff & Miller, 1996). One way of having an
encoding biased by these heuristics is to have the genotype influence a
developmental process that gives rise to the phenotype. Biases can readily be
built in to the developmental (or morphogenetic) process, but it is difficult
to design the process so as to retain evolvability. The first use of a
morphogenetic process in artificial evolution was the graph generation
grammar system of Kitano (1990), and it is suggested (Kitano, 1996a) that
this same process could be applied to electronic circuits. In common with
many proposed morphogenetic encodings, the genotype encodes production rules
that are applied in the manner of a formal grammar, repeatedly transforming
the system, starting from an initial `embryonic' object (analogous to the
starting symbol) and finally giving the phenotype. This modelling of a growth
process by formal language theory springs from the work of Lindenmayer in
modelling biological development (Prusinkiewicz & Lindenmayer, 1990).
Hemmi's HDL generation grammar encoding and Koza's application of Gruau's
cellular encoding (Gruau, 1994) (see above) are both examples of a
sophisticated morphogenetic process (designed through huge amounts of human
effort) being applied to hardware evolution. Certainly in Koza's case, the
benefits are apparent to an electronic engineer's eye: the circuits use
repeated substructures in a sensible way, and these substructures did not
have to be evolved independently; the repeated unit is only coded for once on
the genotype. In this way, evolution designs its own building-blocks, which
can be crafted to suit the evolutionary process, the medium, and the
application. In the examples of this book, I use an unrestricted direct
encoding for simplicity, but acknowledge the importance of restriction and
biasing for larger systems.
I conclude this discussion on evolvability with a speculation on the
suitability of different classes of circuits for evolution. Consider an
experiment to evolve a circuit to compute the 2-input Boolean exclusive-OR
(XOR) function, using AND, OR, and NOT gates as primitives. There are four
possible test cases, corresponding to the four possible combinations of the
two inputs. The evolutionary fitness will be a measure of how many of the
test cases the circuit gives a correct output on. Almost immediately,
evolution will discover that a single OR gate gives the correct output for
three out of the four test cases. From this time onwards, evolution will
perform at least as badly as random search in finding a circuit that gets
all four of the test cases correct.
The problem can be even worse for more complex digital systems: what sort
of evaluation technique allows evolution a gradual path of improvements to
evolve a microprocessor? Part of the answer would be to use incremental
evolution (see below), and perhaps the injection of noise could help to
smooth the fitness landscape (Thompson, Harvey, & Husbands, 1996; Higuchi
et al., 1996a). However, it could be that continuous-time analogue dynamical
systems (e.g. the continuous-time recurrent networks of logic gates
demonstrated later) are inherently more evolvable than discrete-time digital
computational systems. More research is needed to clarify this issue.
2.4.2 Species Adaptation Genetic Algorithms (SAGA)
It would be difficult to develop a fitness evaluation technique that allowed
a gradual path of improvements starting from an initial random population and
finally arriving at a very complex pre-specified target behaviour. One way to
incorporate human skill in such cases is to break the target behaviour down
into a sequence of tasks of increasing complexity and difficulty: incremental
evolution. Harvey, Husbands, and Cliff (1994) give a good example: the task
was to evolve a neural controller and a visual morphology that caused a
robot to navigate towards a white triangle, avoiding a white rectangle, while
not bumping into the black walls of the rectangular arena (the two shapes
were fixed on one of the walls). First, a random population was formed,
and a single individual was picked out by human judgement as displaying
vaguely `interesting' (but totally stupid) behaviour. The initial population
of the evolutionary experiment was then made up of clones of that single
individual. Starting from this population, the first subtask was to navigate
robustly towards an entire wall of the arena coloured white. The next
subtask, starting from the final population of the previous one, was to
navigate towards a much smaller rectangular target. Finally, the task was to
find the triangle and avoid the rectangle.
In the long term, incremental evolution must be the paradigm for the
artificial evolution of complex systems. It allows an experiment to start
from the best previous result rather than always starting from zero, partly
side-stepping criticisms about the time taken by the evolutionary approach.
However, as pointed out by Jakobi (1996b),
"A GA based on or requiring one-way change (such as population
convergence) contains a built-in stopping point where that one-way change
goes to its limit. If we are after an open-ended evolutionary process that
is truly limitless in terms of the behavioural complexity it is capable of
producing, therefore, we cannot rely on traditional GA optimization
techniques."
During a run of either a conventional Genetic Algorithm or Genetic
Programming, the amount of genetic variation in the population decreases. It
is at a maximum in the initial random population, and eventually decreases
to a small value, at which time evolution ceases and the experiment is over
(whether or not the goal was reached). These EAs are therefore not suitable
for the long-term open-ended incremental evolution of arbitrarily complex
systems. If the goal is reached just as the population becomes genetically
converged, then a slightly more complex problem, requiring a few more
generations, could not be solved using the same experimental parameters;
most definitely this final population could not be used as the starting
population of the next most complex task in an incremental sequence.
Inman Harvey's Species Adaptation Genetic Algorithm (SAGA) (Harvey,
1992b, 1992a; Harvey, Husbands, & Cliff, 1993; Harvey, 1995) was developed
as an extension to the GA, and as a conceptual framework, specifically
to deal with this issue. It casts artificial evolution as a process of
continual adaptation of a relatively genetically converged `species'. In this
process, mutation is the primary genetic operator: crossover, though still
useful, does not play the fundamental role that it does in conventional GAs.
The per-bit mutation probability in SAGA is set with the aim of finding an
optimal balance between mutation (which increases exploration and population
divergence) and selection (which maintains progress made so far, and
increases population convergence). Theoretical and empirical investigations
show the optimal rate of mutation to be one which makes, on average, of the
order of one fitness-affecting mutation per genotype. (This applies to common
experimental conditions used with GAs, but see the references for full and
general details.)
details.) SAGA theory also gives a way for the genotypes to increase gradu-
ally in size, to match the increasing complexity of an incremental evolution
scenario. Putting aside the possibility of increasing genotype size, the only
modi cation to the standard GA algorithm needed by SAGA is to maintain
a constant selection pressure (e.g. by using rank-based selection), to set the
mutation rate appropriately, and to allow the experiment to continue after
any initial transient phase in the amount of genetic convergence.
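The rate-setting heuristic can be sketched in a few lines, under the
simplifying assumption (mine, not the book's) that a known fraction of the
bits of an L-bit genotype affect fitness: roughly one fitness-affecting
mutation per genotype then means a per-bit probability of about 1/L when
every bit matters.

```python
def saga_mutation_rate(genotype_length, fitness_affecting_fraction=1.0):
    """Per-bit mutation probability giving, on average, about one
    fitness-affecting mutation per genotype.

    fitness_affecting_fraction is the (usually unknown) fraction of bits
    whose mutation affects fitness; 1.0 is the pessimistic default.
    """
    expected_hits_per_genotype = 1.0
    return expected_hits_per_genotype / (genotype_length * fitness_affecting_fraction)

# For the 2424-bit genotype of the oscillator experiment below
# (101 segments x 24 bits each), this gives roughly 4e-4 per bit --
# the same order of magnitude as the 6.0e-4 rate used there.
print(saga_mutation_rate(101 * 24))
```

The agreement in order of magnitude is consistent with the book's remark that
the trial-and-error rate was later found to be in line with SAGA theory.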
In my view, SAGA is the most suitable EA for the evolution of complex
systems. It may be that other kinds of EA can be modi ed in a similar way
to how SAGA modi es the conventional GA, but this remains to be seen.
However, most of the points of this book are independent of the choice of EA,
as long as it works. In the experiments, I use SAGA ideas to set the mutation
rate of a completely standard GA with rank selection, which is allowed to
continue after the initial phase of genetic convergence. I will not focus on
SAGA further, because I do not want my arguments to be predicated on the
choice of EA. See Thompson et al. (1996) and Harvey and Thompson (1997)
for a detailed discussion of SAGA applied to hardware evolution, including
analysis of the experiments seen in this book from an evolutionary theory
perspective.
2.5 The Position of this Book Within the Field
It will be clear to the reader of the preceding pages that `The Field' is some-
thing that I have constructed especially for this work. Other studies of hard-
ware evolution do not build upon the same foundations described in the
`Inspiration' section. Of course, I acknowledge the importance of the other
hardware evolution projects: there are many avenues to be explored, and this
book walks down just one. The central idea of allowing `natural evolution
in an artificial medium' of reconfigurable hardware, by means of removing
conventional constraints on structure and dynamics (see next chapter), is, to
my knowledge, original. I am not aware of other work placing an emphasis
on the use of evolution to exploit the natural physical properties of the
silicon medium: usually the reconfigurable hardware is merely viewed as a
high-speed implementation of logic that could easily be done more slowly in
software. The engineering use of population-dynamic effects to be introduced
in Chapter 4 is also new, though it builds on previous work in theoretical
biology. The concept of intrinsic hardware evolution is not mine, but the
experiments to be presented do achieve some milestones: the first real
experiment in intrinsic hardware evolution deliberately performed as such,
the first evolved hardware robot control system, and the first intrinsic
evolution of the configuration of an FPGA.
3. Unconstrained Structure and Dynamics
In this chapter, the concept of unconstrained intrinsic hardware evolution,
the central idea of the book, is developed.1 This will be done by clarifying
the relationship between evolution and conventional design techniques, and
then following a sequence of three experiments to see if the conclusions are
a practical proposition. The experiments are deliberately simple, in order to
investigate specific hypotheses: wait until Chapter 5 for a full-scale
demonstration of the entire thesis in a more practical application. To close
the chapter, the relationship between intrinsic hardware evolution and
natural evolution is also discussed.
3.1 The Relationship Between Intrinsic Hardware
Evolution and Conventional Design Techniques
Human design of anything but the smallest circuits must proceed through the
use of abstract models. It would be infeasible for a designer to consider the
detailed semiconductor physics of every component and their interactions at
all stages of the design process. Instead, the following very general strategy
is used:
1. Break the system into parts that can be understood individually.
2. Restrict the interactions between these parts so that they can be
   understood.
3. Apply 1-2 repeatedly, allowing design at hierarchically increasing levels
   of abstraction.
Take synchronous digital design as an example. The first step is to define
logic gates as the primitive elements to be used in the next level of the ab-
straction hierarchy. Each logic gate is constructed from just a few transistors,
in a circuit that can be analysed, designed, and well understood. However,
if arbitrary networks of these gates were constructed, then their collective
behaviour would be too complex to analyse, design, and understand. The
interactions between the gates must be restricted, so that the behaviour of
1 Most of this chapter's material also appears in: Thompson (1995a),
Thompson et al. (1996), Thompson (1996c).
the whole system can be readily understood using knowledge of the individ-
ual gates. This restriction is to demand that the logic gates are arranged in
entirely feedforward networks: there must be no recurrent (feedback) connec-
tions. The behaviour of these feedforward networks can then be understood
using Boolean logic.
Having composed feedforward networks out of logic gates made out of
transistors, we now want to build something out of a collection of feedforward
networks. Again, though, an arbitrary composition of feedforward networks
(allowing recurrent connections) would be too complex to understand. This
time we allow recurrent paths to exist, but insist that they operate only in
discrete time, on the ticking of a global clock.
We have now arrived at a situation where the system is compartmentalised
into feedforward modules, which are only allowed to communicate with each
other at the discrete instants given by the global clock. When the clock ticks,
the inputs to a feedforward module can change. There is then a flurry of
dynamical activity at the level of the logic gates (known as hazards and
races (Green, 1985)), and within each gate as the transistors follow the
analogue, continuous-time laws of semiconductor physics. However, all this
dynamical activity is not allowed to affect the rest of the system, because
it is not until the next clock tick that the feedforward module's output is
used by any other module, and by then all of this `transient' activity has
died away, leaving the module's output at a digital logic state predictable
by Boolean logic.
To allow the designer to work at the abstract level of Boolean logic, in-
stead of thinking about the physics of every transistor, the possible circuits
have become highly constrained. There is the structural constraint that the
system must be constructed of modules, each being a feedforward network of
primitive subcircuits called logic gates. Then there is the temporal constraint
that these modules must only influence each other at the discrete time
instants given by the ticking of the global clock. In asynchronous or
self-timed logic design (Section 2.1.5), a slightly less rigid temporal
constraint is used to the same effect: a module is allowed to influence the
rest of the system as soon as its internal transients have died away.
Digital logic is just one example. It is universally the case that if a
system is to be designed without always having to think about the detailed
physics of the semiconductor components, then the abstraction steps 1-3
above must be followed, resulting in constraints on the circuit's structure
and/or dynamics. So for anything but tiny circuits, the need for design
abstractions imposes circuit constraints that are there for the designer's
benefit, and may not be necessary to the application or to the operation of
electronic circuits in general.
Heretofore, all electronic circuits have been designed. Thus, general con-
ceptions about what sort of a thing an electronic circuit can be are heavily
biased by the constraints resulting from the design methodologies that have
been developed. Intrinsic hardware evolution never uses abstractions: there
is no distinction between design and implementation. Evolution proceeds by
taking account of changes in the overall behaviour of the real physical elec-
tronic system when variations are made to its internal structure. Thus any
circuit constraints that are normally applied by designers solely to facilitate
abstractions are unnecessary and should be removed. It takes considerable
imagination to envision the new structures, dynamical behaviours, and mech-
anisms that could be evolved once released from these constraints.
As introduced in the previous chapter, intrinsic hardware evolution can
exploit the natural behaviours exhibited by the reconfigurable medium. This
contrasts with top-down design, where operations are arrived at which then
have to be implemented somehow. The advantage of evolution is greatest if
unnecessary constraints on structure and dynamics are removed; the medium
can support complex structures with rich dynamics, and now all of the
dynamical behaviour that naturally arises from semiconductor physics can
potentially be put to use in achieving the target overall behaviour. This
gives the potential for systems that are better tailored to the medium, that
exploit its natural characteristics, and which can therefore use the
available silicon resources more effectively or efficiently than can design
methods.
I have slipped into speaking as if evolution may be allowed to explore
every possible configuration, every possible behaviour within the repertoire
of the reconfigurable medium. Is that really the case? The rest of the
chapter is devoted to this question. In the following, unconstrained spatial
structure is first considered, and then (at greater length) unconstrained
dynamics.
3.2 Unconstrained Structure
An enforced spatial or topological structure is not required by evolution to
support design abstractions, because it uses none. However, another way in
which structure is influenced by the design methodology is through the
problem decomposition. In conventional top-down design, the problem is
iteratively broken down into subproblems, until the subproblems are simple
enough to be solved. In this, and even in more bottom-up approaches such as
the Subsumption Architecture (where the decomposition is into behaviours
(Brooks, 1991)), the structure of the final circuit will tend to reflect the
problem decomposition used.
Does the evolutionary process require some kind of modularity in the
phenotype, in an analogous way? (Here, `module' is used loosely to mean a
cohesive substructure.) It has been argued that evolution benefits from an
"independent genetic representation of functionally distinct character
complexes" (Wagner, 1995; Wagner & Altenberg, 1996). This would prevent small
mutational variations applied at one point from having large-scale
ramifications throughout the whole system, so that parts of it could be
improved semi-independently. It makes sense that evolution can work best if
the nature of the problem allows such an organisation, but to what extent is
structural modularity of evolved circuits implied? `Functionally distinct
character complexes' are components of behaviour, and are not necessarily
cohesive substructures of the circuit's spatial or topological organisation;
for example, they might inhere in the basins of attraction, which could be a
global property of the whole circuit.
This issue is unresolved, and is interwoven with the issues of evolvability,
morphogenesis, and genotype→phenotype encoding discussed in Section 2.4.1
of the previous chapter. Some kind of modularity in the genotype→behaviour
mapping will aid the process of evolution using a particular encoding scheme,
a particular reconfigurable medium, and for a particular application.
Probably, there are general principles applying across classes of such
instances. This modularity in the encoding of `character complexes' may not
be directly reflected in the spatial or topological structure of the circuit.
The appropriate structuring of evolved/evolvable circuits may therefore be
different to the modules arising from conventional design methods. I suggest
that preconceptions from design practice should be applied only very
tentatively to evolutionary systems, and then via biases in the encoding
scheme rather than through hard constraints. In fact, when formulating an
encoding scheme, evolutionary biology may be a more suitable source of
inspiration than electronics design; but the characteristics of the
reconfigurable electronic medium, and of the application, should not be
neglected.
3.3 Unconstrained Dynamics
Real physical electronic circuits are continuous-time dynamical systems. They
can display a broad range of dynamical behaviour, of which discrete-time
systems, digital systems, and even computational systems are but subsets.
These subsets are much more amenable to design techniques than dynamical
electronic systems in general, because design abstractions are supported by
the constraints on structure and dynamics that each subset brings. Intrinsic
hardware evolution does not require abstract models, so there is no need
artificially to constrain the natural dynamics of the reconfigurable medium:
all of this dynamical behaviour can be released, and put to use in performing
the task.
In particular, there no longer needs to be an enforced method of controlling
the phase (temporal co-ordination) in reconfigurable hardware originally
intended to implement digital designs. The phase of the system does not have
to be advanced in lock-step by a global clock, nor need the more local
handshaking mechanisms of asynchronous digital design methodologies be
imposed. In the previous chapter, the survey of Analogue Neural VLSI and of
pulse-stream neural networks showed how profitable an excursion into the
space of general dynamical electronic systems can be.
In some applications, dynamics on more than a single timescale are needed
in an evolved circuit. For example, a real-time control system needs to
behave on a timescale suited to the actuators (typically in the range of
milliseconds to seconds), while the underlying dynamics of the controller's
electronic components might be measured in nanoseconds. Behaviour on more
than one timescale can also be significant in other ways; indeed, on-line
adaptation (`learning') can be thought of as a dynamic on a slower timescale
than individual task-achieving behaviours.
There are several ways in which high-speed electronic components can
give rise to much slower behaviour:
- The phase can be governed by one or more external signals, a digital clock
  being the prime example.
- Large time-constant resources can be provided. Large capacitors or
  inductors cannot be made in VLSI, so either external components can be
  made available, or special techniques can be used to make the most of
  smaller on-chip components (Kinget, Steyaert, & van der Spiegel, 1992).
- The high-speed components can somehow be assembled to give rise to slower
  dynamics, without explicitly providing large time-constant resources or
  slow-speed clocks.
Is the last of these possibilities feasible in an evolutionary framework? To
generalise the issue, can evolution craft the dynamics of the system so that
the overall behaviour is appropriate even when dynamical constraints have
been completely removed? Without evidence, it would be too much of a leap of
faith to expect this to work. For example, surely a complex continuous-time
recurrent network of logic gates is doomed to fall into useless,
uncontrollable high-frequency oscillation? If evolution is able to control
the dynamics of such seemingly unmanageable networks, then there is great
potential for it to usefully exploit those very dynamics it has harnessed:
dynamics normally precluded from contributing to the way an electronic
system operates.
These issues are now explored in a sequence of three experiments. In the
first, the question of unconstrained evolutionary manipulation of timescales
is addressed in a simulation study, by attempting (successfully) to evolve a
low-frequency oscillator from a network of very high-speed components. In the
second experiment, the result is shown to hold for the intrinsic evolution of
a real FPGA. Having verified that evolution can manipulate at least a single
timescale, the final experiment is a show-piece for unconstrained dynamics:
a standard electronic architecture is taken, and some of the dynamical
constraints needed by designers are removed. Evolution is then able to
exploit the rich new dynamics released, eliciting remarkably sophisticated
behaviour from very little hardware. (The main demonstration in Chapter 5
will later provide a more overwhelming exposition of the possibilities by
applying unconstrained evolution to an FPGA in a real application.)
3.3.1 Unconstrained Evolutionary Manipulation of Timescales
I: Simulation study
The task was to evolve a network of high-speed logic gates, in a simulation
loosely based on the structure of the XC6216 FPGA, to oscillate regularly at
a much slower timescale than the gate delays. The task was simple, but
success would demonstrate evolution's ability to manipulate dynamical
timescales, without the imposition of dynamical constraints.
The number of logic nodes available was fixed at 100, and the genotype
determined which of the Boolean functions of Table 3.1 was instantiated by
each node, and how the nodes were connected. The nodes were analogous to
the reconfigurable logic blocks of an FPGA, but an input could be connected
to the output of any node without restriction. The linear bit-string
genotype consisted of 101 segments (numbered 0 to 100 from left to right),
each of which directly coded for the function of a node and the sources of
its inputs, as shown in Table 3.2. (Node 0 was a special `ground' node, the
output of which was always clamped at logic zero.) This encoding is based on
that used by Cliff et al. (1993). The source of each input was specified by
counting forwards/backwards along the genotype (according to the `Direction'
bit) a certain number of segments (given by the `Length' field), either
starting from one end of the string, or starting from the current segment
(dictated by the `Addressing Mode' bit). When counting along the genotype,
if one end was reached, then counting continued from the other.
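The wrap-around pointer arithmetic can be made concrete with a small sketch.
The field layout follows Table 3.2, but the function name is mine, and the
exact off-by-one conventions (e.g. whether counting starts at the current
segment or the next) are assumptions made for illustration.

```python
NUM_SEGMENTS = 101  # segments 0..100; node 0 is the clamped `ground' node

def decode_pointer(direction, addressing_mode, length, current_segment):
    """Resolve an input pointer to the index of a source segment.

    direction:       0 = count forwards, 1 = count backwards
    addressing_mode: 0 = start from an end of the string (absolute),
                     1 = start from the current segment (relative)
    Counting wraps around when it runs off either end of the genotype.
    """
    step = 1 if direction == 0 else -1
    if addressing_mode == 1:
        start = current_segment
    else:
        start = 0 if direction == 0 else NUM_SEGMENTS - 1
    return (start + step * length) % NUM_SEGMENTS

# Relative pointer, 5 segments backwards from segment 2: wraps to 98.
print(decode_pointer(direction=1, addressing_mode=1, length=5, current_segment=2))
```

The modulo operation implements the rule that counting continues from the
other end when one end of the string is reached.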
At the start of the experiment, each node was assigned a real-valued
propagation delay, selected uniformly randomly from the continuous range
[1...5] nanoseconds, and held to double-precision accuracy. These delays were
to be the input→output delays of the nodes during the entire experiment, no
matter which functions the nodes performed. There were no delays on the
interconnections. To commence a simulation of a network's behaviour, all of
the outputs were set to logic zero. From that moment onwards, a standard
asynchronous event-based logic simulation was performed (Miczo, 1987), with
real-valued time being held to double-precision accuracy. The logic
simulator program was written especially for this experiment. An equivalent
time-slicing simulation would have had a time-slice of 10^-24 seconds, so
the underlying synchrony of the simulating computer was only manifest at a
timescale 15 orders of magnitude smaller than the node delays, allowing the
asynchronous dynamics of the network to be seen in the simulation. A
low-pass filter mechanism meant that pulses shorter than 0.5 ns never
happened anywhere in the network.

Table 3.1. Node functions.
  BUFFER, NOT, AND, OR, XOR, NAND, NOR, NOT-XOR

Table 3.2. Genotype segment for one node.
  Bits    Meaning
  0-4     Junk
  5-7     Node Function
          Pointer to First Input:
  8         Direction
  9         Addressing Mode
  10-15     Length
          Pointer to Second Input:
  16        Direction
  17        Addressing Mode
  18-23     Length
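A standard asynchronous event-based logic simulation of the kind used here
can be sketched as a priority queue of (time, node, value) events. This is a
generic illustration, not the author's simulator; in particular the 0.5 ns
low-pass pulse filter is omitted, and all names are mine.

```python
import heapq

def simulate(netlist, delays, duration):
    """Event-driven simulation of a gate network.

    netlist: {node: (func, [input_nodes])}; func maps input bits to a bit.
    delays:  {node: propagation delay} (any consistent time unit).
    All node outputs start at logic 0. Returns a list of
    (time, node, new_value) transitions, in time order.
    """
    state = {n: 0 for n in netlist}
    fanout = {n: [m for m, (_, ins) in netlist.items() if n in ins] for n in netlist}
    # Seed: every node evaluates its function on the all-zero state.
    events = [(delays[n], n, f(*[0] * len(ins))) for n, (f, ins) in netlist.items()]
    heapq.heapify(events)
    transitions = []
    while events:
        t, node, value = heapq.heappop(events)
        if t > duration:
            break
        if value == state[node]:
            continue  # no transition, so nothing to propagate
        state[node] = value
        transitions.append((t, node, value))
        for m in fanout[node]:
            f, ins = netlist[m]
            heapq.heappush(events, (t + delays[m], m, f(*[state[i] for i in ins])))
    return transitions

# Tiny example: a NOT gate fed by its own output oscillates with a period
# of twice its propagation delay (here 2 ns, so 250 MHz).
net = {"a": (lambda x: 1 - x, ["a"])}
for t, node, v in simulate(net, {"a": 2.0}, duration=11.0):
    print(f"{t:.0f} ns: {node} -> {v}")
```

Because events only exist where something changes, simulated time advances
in irregular real-valued steps rather than fixed slices, which is what lets
the asynchronous dynamics of the network appear in the simulation.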
The objective was for node number 100 to produce a square-wave oscillation
of 1 kHz, which means alternately spending 0.5 x 10^-3 seconds at logic 1
and at logic 0. If k logic transitions were observed on the output of node
100 during the simulation, with the nth transition occurring at time t_n
seconds, then the average error in the time spent at each level was
calculated as:

    \text{average error} = \frac{1}{k-1} \sum_{n=2}^{k}
        \left| (t_n - t_{n-1}) - 0.5 \times 10^{-3} \right|        (3.1)

For the purpose of this equation, transitions were also assumed to occur
at the very beginning and end of the trial, which lasted for 10 ms of
simulated time. The fitness was simply the reciprocal of the average error.
Networks that oscillated far too quickly or far too slowly (or not at all)
had their evaluations aborted after less time than this, as soon as a good
estimate of their fitness had been formed. The genetic algorithm was the one
described in the introduction,2 with population size 30, crossover
probability 0.7, and mutation probability 6.0 x 10^-4 per bit. At the time,
this mutation rate was found through trial and error, but later calculations
showed it to be in line with SAGA theory (Section 2.4.2).
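Equation 3.1 translates directly into code. The sketch below is my
illustration, with hypothetical transition times; it also applies the text's
convention of assuming transitions at the very beginning and end of the
10 ms trial.

```python
def average_error(transition_times, trial_length=10e-3, half_period=0.5e-3):
    """Average error in the time spent at each logic level (Equation 3.1)."""
    # Transitions are also assumed at the very beginning and end of the trial.
    t = [0.0] + sorted(transition_times) + [trial_length]
    k = len(t)
    return sum(abs((t[n] - t[n - 1]) - half_period) for n in range(1, k)) / (k - 1)

def fitness(transition_times):
    """Fitness is simply the reciprocal of the average error."""
    return 1.0 / average_error(transition_times)

# A perfect 1 kHz square wave toggles every 0.5 ms, giving zero error (and
# unbounded fitness). A hypothetical jittery oscillator toggling every
# 0.4 ms instead has an average error of about 1e-4 s per level:
times = [0.4e-3 * n for n in range(1, 25)]
print(average_error(times))
```

Note how the error measure rewards regularity of the intervals between
transitions, not a particular waveform shape, which is consistent with the
evolved solutions being regular spike trains rather than square waves.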
Figure 3.1 shows that the output of the best individual in the 40th
generation (Figure 3.2) was, at 4kHz, approximately 4 1/2 thousand times
slower than the best of the random initial population, and was six orders of
magnitude slower than the propagation delays of the nodes. In fact, fitness
was still rising at generation 40 when the experiment was stopped because of
the excessive processor time needed to simulate this kind of network. This
result suggests that it is possible for evolution to arrange for a network of
high-speed components to generate much slower behaviour, without having
to have constraints applied to the dynamics.
The evolved oscillators produced spike trains rather than the desired
square wave. (A square wave could have been produced by the addition of
a toggling flip-flop to the output, but this did not arise within the 40
generations.) Probing internal nodes indicated that beating between spike trains
of slightly different frequencies was being used to generate a much lower
frequency; beating only works for spikes, not for square waves. This does not
2
For no good reason, in this experiment the rank-selection method included
truncation of the five least-fit individuals (they never have offspring). This is
not thought to be significant, and was dropped in all later experiments.

[Traces between logic '0' and logic '1': top at ~18MHz, spanning two millionths of a second; bottom at ~4kHz, spanning two thousandths of a second.]
Fig. 3.1. Output of the oscillator evolved in simulation. Top: Best of the initial
random population of 30 individuals; Below: best of generation 40. Note the
different time axes. A visible line is drawn for every output spike, and in the lower
picture each line represents a single spike.

mean that the task was easy: it is difficult for beats to reduce the frequency by
the massive factor required and yet produce an output as regular as that seen
in Figure 3.1. Beating does not just occur at a few nodes, but is distributed
throughout the network: I was unable to attribute functions to particular
subnetworks of components. By examining the causative chain of events
being scheduled by the logic simulator, it was seen that soon after initialisation
all of the 68 gates seen in the figure had affected the output.
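The beat mechanism rests on simple arithmetic: two spike trains of slightly different frequencies coincide at the difference frequency, which can be many orders of magnitude lower than either train. (The frequencies below are illustrative only, not measured from the evolved circuit.)

```python
def beat_frequency(f1, f2):
    """Frequency at which two periodic spike trains drift in and out
    of coincidence: the difference of their frequencies."""
    return abs(f1 - f2)

# e.g. two spike trains near 18MHz, 4kHz apart, beat at 4kHz --
# a reduction of more than three orders of magnitude
```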
The layout of the circuit diagram, Figure 3.2, was done by hand. It would
be quite possible for there to be interesting structures in the topology of the
network, without these being apparent to the eye from this diagram. There
do exist methods for automatic network-diagram layout that could draw
better diagrams (e.g. Kosak, Marks, and Shieber (1991)). Rather than
concentrating on visual analysis, the well-known Kernighan & Lin (KL) heuristic
graph-partitioning algorithm (Kernighan & Lin, 1970) was adapted to search
directly for substructures with properties that could aid an understanding of
the circuit. The modified KL algorithm was used to divide the evolved
network into two subnetworks (A and B, with A containing fewer nodes than B),
while attempting to maximise a measure of the quality of this bipartition.
Two different quality measures were used:
Type 1: The quality of a partition was the total number of links between
nodes within A, plus the total number of links between nodes within B,
minus the number of nodes in A having output connections to B, minus
the number of nodes in B having output connections to A. An output
link from one subnetwork that fanned-out to connect to more than one
input in the other subnetwork was counted as a single crossing of the
partition. This quality metric was intended to cause the network to be
divided into two parts, each with high internal connectivity, but with few

Fig. 3.2. The 4kHz oscillator circuit evolved in simulation. Gates having no
connected path by which they could influence the output are not shown, leaving
the 68 seen above. The index of the genotype segment coding for a node is written
inside its symbol.

connections crossing the partition. The smaller subnetwork (A) might
then be considered as a `module' worth analysing in (partial) isolation
from the rest of the network.
Type 2: The quality of a partition was the total number of links between
nodes within A, plus the number of nodes in B having output connections
to A, minus the number of nodes in A having output connections to B.
This quality metric was intended to cause a subnetwork A to be identified,
having high internal connectivity, and receiving a large number of inputs
from many parts of the rest of the network, but having a small number of
outputs (although these could fan-out to a large number of inputs in B).
Such a subnetwork, if identified, would be worth inspecting to see if it
was performing some important co-ordinating function.
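The two quality measures can be written down directly for a directed graph (a sketch under my own conventions: the network as an edge list of (source, sink) pairs, and A, B as node sets; function names are mine):

```python
def type1_quality(edges, A, B):
    """Links internal to A plus links internal to B, minus the number of
    nodes in A with outputs crossing to B and vice versa. Fan-out from
    one node into the other subnetwork counts as a single crossing."""
    internal = sum(1 for u, v in edges
                   if (u in A and v in A) or (u in B and v in B))
    a_crossing = len({u for u, v in edges if u in A and v in B})
    b_crossing = len({u for u, v in edges if u in B and v in A})
    return internal - a_crossing - b_crossing

def type2_quality(edges, A, B):
    """Links internal to A, plus nodes in B feeding A, minus nodes in A
    feeding B: favours a highly connected A with many inputs from the
    rest of the network but few outputs."""
    internal_a = sum(1 for u, v in edges if u in A and v in A)
    inputs = len({u for u, v in edges if u in B and v in A})
    outputs = len({u for u, v in edges if u in A and v in B})
    return internal_a + inputs - outputs
```

The modified KL algorithm would then search over bipartitions (A, B) to maximise one of these scores.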
The algorithm was run many times on the final evolved circuit of
Figure 3.2, searching for each type of partition, and with the number of nodes
in partition A ranging from 1 up to half the total number of nodes. The
algorithm and quality metrics worked well in tests, but were unable to identify
any interesting substructures in the evolved network, other than those that
can be seen with a little effort by eye. This is not very surprising, because
there was no bias in the genetic encoding towards Type 1 or 2 substructures.
It is reassuring to note, though, that in the absence of any structural
constraints, evolution has solved the problem without re-creating `modules' or
anything like them.
A human designer would probably have attempted the design of this
circuit by constructing a ring oscillator (Mead & Conway, 1980, page 235)
followed by frequency-division stages. In the absence of structural or dynamical
constraints, evolution has found an alternative mechanism based on the use
of beats: an essentially continuous-time phenomenon, not contained in the
toolbox of conventional design. Evolution was able to harness the high-speed
dynamics of the simulated digital components, and to put this natural
behaviour to work at a different timescale. Both this evolved circuit and
pulse-stream neural networks (Section 2.1.2) capitalise on the use of continuous
time in a digital system; both use spikes, but here evolution was able to
invent this temporal co-ordination mechanism without it having to be built in
as a preconception.
3.3.2 Unconstrained Evolutionary Manipulation of Timescales
II: Using a real FPGA
In the simulation experiment, we saw that evolution could craft the
dynamics of a continuous-time noise-free purely digital network. However, real
recurrent continuous-time logic networks are far from noise-free and digital.
Even though the gates are nominally digital, they are essentially very high
gain analogue amplifiers, and this can become a more appropriate description
of their behaviour. Consequently, although the simulation work was a

good way of investigating the unconstrained manipulation of timescales in
a well-controlled experiment, the results are not directly applicable to the
isomorphic real hardware. Here, we repeat the previous experiment, but with
all fitness evaluations taking place on a real FPGA. There is no simulation
of the circuit, just real semiconductor physics.
The apparatus is shown in Figure 3.3. As described in the Introduction
(Section 1.2), a subset of the functionality of the XC6216 FPGA was used. To
configure a single cell, there were 18 multiplexer control bits to be set, and
these bits were directly encoded onto the linear bit-string genotype. Only
the 10 x 10 array of cells in the extreme north-west corner of the FPGA
was used. The genotype of length 1800 bits was formed from left to right by
taking the cells in a raster fashion, from west to east along each row, and
taking the rows from south to north. There was no input, and the output
was taken from a cell on the north edge, as shown. The basic GA was again
used, with a population size of 50, a crossover probability of 0.7, and a
per-bit mutation probability such that the expected number of mutations per
genotype was 1.45. The mutation rate was arrived at through a combination
of SAGA theory and experimentation.
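The genotype layout just described can be sketched as follows (an illustration only: the helper names, and the exact row/column indexing within the raster order, are my own inferences from the text):

```python
CELL_BITS = 18   # multiplexer control bits per cell
GRID = 10        # 10x10 corner of the XC6216

def cell_config(genotype, row, col):
    """Extract the 18 configuration bits for one cell from the
    1800-bit genotype. Here row 0 is taken as the southernmost row
    and col 0 the westernmost column: cells are read west-to-east
    within a row, and rows south-to-north."""
    start = (row * GRID + col) * CELL_BITS
    return genotype[start:start + CELL_BITS]

# per-bit mutation probability giving ~1.45 expected mutations/genotype
MUTATION_RATE = 1.45 / (GRID * GRID * CELL_BITS)  # 1.45 / 1800
```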
Each individual, once downloaded as a configuration onto the real
hardware, was given a single one-second evaluation. During this time, the output
was monitored by a counter (an HC11 micro-controller) capable of counting
positive-going logic transitions at a rate of up to 1MHz. Its count at
the end of the one-second evaluation was taken as a direct measurement

[Diagram: a desktop computer (PC) downloads each configuration to the 10x10 corner of the FPGA, whose output is monitored by a high-frequency counter.]
Fig. 3.3. The experimental arrangement for oscillator evolution with the real
FPGA.

of frequency. Taking the fitness to be the inverse error as before, we have
fitness = 1 / |f - n|, where f is the desired frequency and n is the number
of positive transitions counted over one second. If no transitions at all were
counted, then the fitness was set to zero, and if f - n was zero then the fitness
was declared to be a million.
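As a direct transcription of this evaluation (the function name is mine; the special cases are exactly as described above):

```python
def oscillator_fitness(f, n):
    """f: target frequency in Hz; n: positive-going logic transitions
    counted over the one-second evaluation."""
    if n == 0:
        return 0.0   # no transitions at all: fitness set to zero
    if n == f:
        return 1e6   # exact match: fitness declared to be a million
    return 1.0 / abs(f - n)
```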
The initial population was hand-seeded by randomly generating a large
number of circuits, inspecting their output by eye using an oscilloscope, and
simply selecting those for which the output was not always constant. Starting
from this same initial population, the GA was run for 40 generations for each
of the target frequencies f = 10Hz, 1kHz, 100kHz, 1MHz. The measured
frequency of oscillation of each individual over the course of each run is
shown in Figure 3.4. As the diagram clearly shows, the population quickly
converges on the desired frequency. The population is not just converging
upon an individual already present in the initial population: the maximum
fitness in the population, as well as the mean, increases over time. It must be
admitted that there were always individuals in the initial population with a
frequency close to the target, but the wide spectrum of behavioural timescales
present in the initial population is an interesting observation in itself.
Although using the counter to measure frequency is crude, there is evi-
dence that the evolved individuals do indeed display the desired behaviour:
if they are allowed to run for two seconds instead of one, then the number
of counts measured during the second second is very close to that measured
during the first. This implies a certain degree of regularity in the oscillation.
Repeated evaluations over extended periods of time produce very similar
readings.
How is the behaviour being achieved? Visual inspection of the output
on an oscilloscope shows that all of the evolved solutions produce very high
frequency waveforms of some kind, but that this high frequency component
is not crossing the digital logic threshold, so is not being registered by the
counter. Only at just the desired frequency is the signal making an excursion
across the threshold long enough to be captured by the counter. All that can
be seen on a cheap oscilloscope is a blur of high-frequency activity centered
around an analogue voltage a short distance away from the logic threshold,
with occasional faint traces of large-amplitude excursions.
Evolution really does seem to have tuned the circuit as a continuous-time
analogue dynamical system, even though the manufacturers of the chip had
discrete logic in mind. As an analogue system, the solution appears to be
found more easily than in the equivalent purely digital noise-free simulation
of the previous section. The physical characteristics of the FPGA have been
exploited without any constraint whatsoever on the con gurations allowed.
The four runs show evolution manipulating the overall dynamics of the system
on a timescale varying over five orders of magnitude.

[Four scatter plots: frequency (Hz, logarithmic scale from 1e+0 to 1e+6) against generations 0-40, one plot per target frequency.]
Fig. 3.4. Frequency of oscillation of individuals over the GA run (real FPGA).
The objectives were: Top left - 10Hz; Top right - 1kHz; Bottom left - 100kHz;
Bottom right - 1MHz. Individuals with constant output (frequency = 0Hz) are
not shown. Frequencies higher than 1MHz appear as exactly 1MHz due to the
limited rate of the counter. Where many points are overlaid, they appear as one.
Note that the frequencies are shown on a logarithmic scale. Each of these runs took
just a few minutes to complete, due to the shortness of the evaluations.

3.3.3 A Showpiece for Unconstrained Dynamics:
An Evolved Hardware Sensorimotor Control Structure
The preceding experiments have suggested that evolution can craft the
behaviour of a complex dynamical electronic system, but for a rather `toy' task.
This section aims to highlight the potential usefulness of the rich dynamics
released when unnecessary design constraints are removed. The experiment
takes a standard electronic architecture, removes some of the dynamical
constraints used to make conventional design tractable, and subjects the
resulting dynamical electronic system to intrinsic hardware evolution. When first
reported (Thompson, 1995a), this experiment was the first case of intrinsic
hardware evolution deliberately carried out as such, and the first evolved
hardware control system for a real robot.3
The circuit was the on-board controller for the robot shown in Figure 3.5.
This two-wheeled autonomous mobile robot has a diameter of 46cm, a height
of 63cm, and was required to display simple wall-avoiding/room-centering
behaviour in an empty 2.9m x 4.2m rectangular arena. For this scenario, the
d.c. motors were not allowed to run in reverse and the robot's only sensors
were a pair of time-of-flight sonars rigidly mounted on the robot, one pointing
left and the other right. The sonars fire simultaneously five times a second;
when a sonar fires, its output changes from logic 0 to logic 1 and stays there
until the first echo is sensed at its transducer, at which time its output returns
to 0.
Conventional electronic design would tackle the control problem along
the following lines: For each sonar, a timer would measure the length of its
output pulses, and thus the time of flight of the sound, giving an indication
of the range to the nearest object on that side of the robot. These timers
would provide binary-coded representations of the two times of flight to a
central controller. The central controller would be a hardware implementation
of a finite-state machine (FSM), with the next-state and output functions
designed so that it computes a binary representation of the appropriate motor
speed for each wheel. For each wheel, a pulse-width modulator would take
the binary representation of motor speed from the central controller and vary
the mark:space ratio of pulses sent to the motor accordingly.
It would be possible to evolve intrinsically the central controller FSM by
implementing the next-state and output functions as look-up tables held in an
off-the-shelf random access memory (RAM) chip; this is the well-known
`Direct Addressed ROM'4 implementation of an FSM (Comer, 1984). The FSM
would then be specified by the bits held in the RAM, which could be recon-
3
The robot was constructed especially for this project. The author is responsible
for all of the electronic design and construction, and for the physical design of
the upper sections of the robot. The lower section was a pre-existing chassis
borrowed from another project, and the extra metalwork was performed by the
University of Sussex Engineering Workshops.
4
ROM = Read Only Memory.

[Photograph labels: Virtual World Simulator, Sonar Emulator, Evolvable Hardware, Sonars, Wheels, Rotation Sensors.]
Fig. 3.5. The robot known as \Mr Chips."



figured under the control of each individual's genotype in turn. There would
be little benefit in evolving this architecture as hardware, however, because
the electronics is constrained to behave in accordance with the FSM design
abstraction: all of the signals are synchronised to a global clock to give clean,
deterministic state-transition behaviour as predicted by the model.
Consequently, the hardware would behave identically to a software implementation
of the same FSM.
What if the constraint of synchronisation of all signals is relaxed and
placed under evolutionary control? Although superficially similar to the FSM
implementation, the result (shown in Figure 3.6) is a machine of a
fundamentally different nature. Not only is the global clock frequency placed
under genetic control, but the choice of whether each signal is synchronised
(latched) by the clock or whether it is asynchronous (directly passed through
as an analogue voltage) is also genetically determined. These relaxations of
temporal constraints (constraints necessary for a designer's abstraction but
not for intrinsic evolution) endow the system with a rich range of potential
dynamical behaviour, to the extent that the sonar echo pulses can be fed
directly in, and the motors driven directly by the outputs, without any pre-
or post-processing: no timers or pulse-width modulators. (The sonar firing
cycle is asynchronous to the evolved clock.)
[Diagram: the two sonar signals pass through genetic latches (G.L.) to the address inputs of a 1k-by-8-bit RAM (10 address inputs, 8 data outputs) holding the evolved contents; the data outputs pass through a second bank of genetic latches, some feeding back to the address inputs and two driving the motors (M), with the latches clocked by the evolved clock.]
Fig. 3.6. The hardware implementation of the evolvable DSM robot controller.
`G.L.' stands for a bank of genetic latches: it is under genetic control whether each
signal is passed straight through asynchronously as an analogue voltage, or whether
its digital value is latched according to the global clock of evolved frequency.

Let this new architecture be called a Dynamic State Machine (DSM). It
is not a finite-state machine because a description of its state must include
the temporal relationship between the asynchronous signals, which is a
real-valued analogue quantity. In the conventionally designed control system there
was a clear sensory/control/motor decomposition (timers/controller/pulse-width-modulators),
communicating in atemporal binary representations which
hid the real-time dynamics of the sensorimotor systems, and the environment
linking them, from the central controller. Now, the evolving DSM is
intimately coupled to the real-time dynamics of its sensorimotor environment,
so that real-valued time can play an important role throughout the system.
The evolving DSM can explore special-purpose tight sensorimotor couplings
because the temporal signals can quickly flow through the system, being
influenced by, and in turn perturbing, the DSM on their way. The circuit diagram
for the DSM is given in Appendix A.
For the simple wall-avoidance behaviour, only two of the eight feedback
paths seen in Figure 3.6 were enabled. The resulting DSM can be viewed as
the fully connected, recurrent, mixed synchronous/asynchronous logic
network shown in Figure 3.7, where the bits stored in the RAM give a look-up
table implementing any pair of logic functions of four inputs. This
continuous-time dynamical system cannot be simulated in software, because the effects
of the asynchronous variables and their interaction with the clocked ones
depend upon the characteristics of the hardware: meta-stability (Prosser &
Winkel, 1986, pages 505-511) and glitches will be rife, and the behaviour will
depend upon physical properties of the implementation, such as propagation
delays, meta-stability constants, and the behaviour of the RAM chip when
connected in this unusual way. Similarly, a designer would only be able to
work within a small subset of the possible DSM configurations, the ones
that are easier to analyse.
The same basic GA was used, with the contents of the RAM (only 32
bits required for the machine with two feedback paths), the period of the
clock (16 bits in a Gray code, giving a clock frequency from around 2Hz to
several kHz) and the clocked/unclocked condition of each signal all being
directly encoded onto the linear bit-string genotype. The population size was
30, probability of crossover 0.7, and the mutation probability was again set
according to SAGA theory to give an expectation of around one mutation
per genotype.
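A Gray code has the property that adjacent integers differ in a single bit, so single mutations tend to make small changes to the clock period. A standard decode of such a field might look like this (a sketch; the thesis does not give the exact bit ordering, so most-significant-bit-first is my assumption):

```python
def gray_to_int(bits):
    """Decode a Gray-coded bit sequence (MSB first) to an integer:
    each decoded bit is the XOR of all Gray bits up to and including
    that position."""
    value = 0
    acc = 0
    for b in bits:
        acc ^= b
        value = (value << 1) | acc
    return value
```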
If the distance of the robot from the centre of the room in the x and
y directions at time t was cx(t) and cy(t), then after an evaluation for T
seconds, the robot's fitness was a discrete approximation to the integral:
    fitness = (1/T) * INTEGRAL_{0}^{T} [ e^(-kx cx(t)^2) + e^(-ky cy(t)^2) - s(t) ] dt        (3.2)

kx and ky were chosen such that their respective Gaussian terms fell from
their maximum values of 1.0 (when the robot was at the centre of the room)

[Diagram: two four-input logic functions, fed by the left and right sonars and by latched feedback signals, drive the left and right motors (M).]
Fig. 3.7. An alternative representation of the evolvable Dynamic State Machine,
as used in the experiment. Each latch shown is a `Genetic Latch' (see previous figure).

to a minimum of 0.1 when the robot was actually touching a wall in their
respective directions. The function s(t) has the value 1 when the robot is
stationary, otherwise it is 0: this term is to encourage the robot always to
keep moving. Each individual was evaluated for four trials of 30 seconds each,
starting with different positions and orientations. The worst of the four scores
was taken as the fitness (Harvey et al., 1993). For the final few generations,
the evaluations were extended to 90 seconds, to find controllers that were not
only good at moving away from walls, but also staying away from them.
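A discrete approximation to this evaluation might be computed as below. This is an illustrative sketch, not the thesis code: the treatment of s(t) as a subtracted penalty, and the derivation of kx and ky (each Gaussian falling from 1.0 at the room centre to 0.1 when the robot touches a wall), are my readings of the surrounding text; the arena and robot dimensions are as given above.

```python
import math

# assumed geometry: 2.9m x 4.2m arena, robot radius 0.23m
HALF_X, HALF_Y, RADIUS = 2.9 / 2, 4.2 / 2, 0.23
# choose k so that e^(-k*d^2) = 0.1 when the robot touches a wall,
# i.e. when the centre is at distance (half-dimension - radius)
kx = math.log(10.0) / (HALF_X - RADIUS) ** 2
ky = math.log(10.0) / (HALF_Y - RADIUS) ** 2

def robot_fitness(samples, dt, T):
    """samples: (cx, cy, stationary) tuples taken every dt seconds over
    a T-second trial. Rewards being near the room centre in both axes;
    s(t)=1 when stationary acts as a penalty, encouraging movement."""
    total = sum(math.exp(-kx * cx ** 2) + math.exp(-ky * cy ** 2)
                - (1.0 if stationary else 0.0)
                for cx, cy, stationary in samples)
    return total * dt / T
```

A robot sitting exactly at the centre but never moving scores less than one that moves while staying centred, matching the stated intent of s(t).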
For convenience, evolution took place with the robot in a kind of `virtual
reality.' The real evolving hardware controlled the real motors, but the wheels
were just spinning in the air. The photograph of Figure 3.5 was taken during
an actual evolutionary run of this kind. The wheels' angular velocities were

measured, and used by a real time simulation of the motor characteristics to
calculate how the robot would move if on the ground. The sonar echo signals
were then artificially synthesised and supplied in real time to the hardware
DSM. Realistic levels of noise were included in the sensor and motor models,
both of which were constructed by fitting curves to experimental
measurements, including a stochastic model for specular sonar reflections. Details
of the simulation are given in Appendix B: the development of adequate
models was no small task. The GA and the virtual environment simulation
were performed by a laptop PC onboard the robot, and the synthesising of
the sonar waveforms and the generation of the evolved clock by a pair of
micro-controllers. The real DSM hardware connected to the real motors was
used at all times. For operation in the real world, the real sonars were simply
connected in place of the simulated ones, and the robot placed on the ground.
Figure 3.8 shows the excellent performance attained after 35 generations,
with a good transfer from the virtual environment to the real world. The robot
is drawn to scale at its starting position, with its initial heading indicated by
the arrow; thereafter only the trajectory of the centre of the robot is drawn.
The bottom-right picture is a photograph of behaviour in the real world,
taken by double-exposing (1) a picture of the robot at its starting position,
with (2) a long exposure of a light fixed on top of the robot moving in the
darkened arena. If started repeatedly from the same position in the real world,
the robot follows a different trajectory each time (occasionally very different),
because of real-world noise. The robot displays the same qualitative range of
behaviours in the virtual world, and the bottom pictures of Figure 3.8 were
deliberately chosen to illustrate this.
When it is remembered that this minuscule electronic circuit receives the
raw echo signals from the sonars and directly drives the motors (one of which
happens to be more powerful than the other), then this performance is
surprisingly good. It is not possible for the DSM directly to drive the motors
from the sonar inputs (in the manner of Braitenberg's `Vehicle 2'
(Braitenberg, 1984)), because the sonar pulses are too short to provide enough torque.
Additionally, such naive strategies would fail in the symmetrical situations
seen at the top of Figure 3.8. One of the evolved wall-avoiding DSMs was
analysed (see below), and was found to be going from sonar echo signals to
motor pulses using only 32 bits of RAM and 3 flip-flops (excluding clock
generation): highly efficient use of hardware resources, made possible by the
absence of design constraints.
Figure 3.9 attempts to represent one of the wall-avoiders in state-transition
format. This particular individual used an evolved clock frequency of 9Hz
(about twice the sonar pulse repetition rate). Both sonar inputs evolved to
be asynchronous, and both motor outputs clocked, but the internal state
variable that was clocked to become the left motor output was free-running
(asynchronous), whereas that which became the right output was clocked. In
the diagram, the dotted state transitions occur as soon as their input com-

Fig. 3.8. Wall avoidance in virtual reality and (bottom right) in the real world,
after 35 generations. The top pictures are of 90 seconds of behaviour, the bottom
ones of 60.
[State-transition diagram: four states pairing left motor ON/OFF with right motor ON/OFF; transitions are labelled with (left, right) sonar input combinations 00, 01, 10, 11, and a RESET arrow marks the initial state.]
Fig. 3.9. A representation of one of the wall-avoiding DSMs. Asynchronous tran-
sitions are shown dotted, and synchronous transitions solid. The transitions are
labelled with (left, right) sonar input combinations, and those causing no change
of state are not shown. There is more to the behaviour than is seen immediately
in this state-transition diagram, because it is not entirely a discrete-time system,
and its dynamics are tightly coupled to those of the sonars and the rest of the
environment.

bination is present, but the solid transitions only happen when their input
combinations are present at the same time as a rising clock edge. Since both
motor outputs are synchronous, the state can be thought of as being sampled
by the clock to become the motor outputs.
This state-transition representation is misleadingly simple in appearance,
because when this DSM is coupled to the input waveforms from the sonars
and its environment, its dynamics are subtle, and the strategy being used
is not at all obvious. It is possible to convince oneself that the diagram is
consistent with the behaviour, but it would have been very difficult to
predict the behaviour from the diagram, because of the rich feedback through
the environment and sensorimotor systems on which this machine seems to
rely. The behaviour even involves a stochastic component, arising from the
probabilities of the asynchronous echo inputs being present in certain
combinations at the clocking instant, and the probability of the machine being
in a certain state at that same instant (remember that one of the feedback
loops is unclocked).
Even this small system is non-trivial, and performs a difficult task with
minimal resources, by means of its rich dynamics and exploitation of the real
hardware.5 After relaxing the temporal constraints necessary to support the
designers' FSM model, a tiny amount of hardware has been able to display
rather surprising abilities. As a control experiment, three GA runs were
performed under identical conditions, but with all of the genetic latches set to
`clocked' irrespective of the genotype. All three runs failed completely,
confirming that new capabilities have been released from the architecture by
relaxing the dynamical constraints. In another set of three control runs, all
the genetic latches were set to `unclocked.' These runs succeeded but the
behaviour was not so reliable: from time to time the robot would head straight
for a wall and crash into it.
It seems that the clock allowed the mixed synchronous/asynchronous
controllers to move with a slight `waggle' (just visible in the bottom-right
picture in Figure 3.8), and that this prevented them from being disastrously
fooled by specular sonar reflections. This suggests that while removing an
enforced clock can widen the repertoire of dynamical behaviours, providing
an optional clock of evolvable frequency to be used under genetic control
at different points in the system can expand the repertoire of dynamics still
further. The clock becomes a resource, not a constraint.

5
Historical Note: The idea of making a highly efficient control system for an
autonomous mobile robot by allowing electronic components to interact with
each other (and the environment) more freely than is conventional dates back at
least as far as Grey Walter's electromechanical `tortoises' in 1949 (Holland, 1996).
Then, the active components were thermionic valves and relays, and ingenious
design by hand was used rather than artificial evolution.

3.4 The Relationship Between Intrinsic Hardware
Evolution and Natural Evolution
Hardware evolution is a combination of electronics and evolution. We have
discussed the error of adhering too closely to the conventional principles of
electronics, but there are also potential pitfalls in blindly applying ideas from
natural evolution.
Consider biological brains. Compared to electronics, the neuron response
and signal propagation times are extremely slow. On the other hand, there is
high connectivity in three dimensions, contrasting with the highly restricted
planar wiring in VLSI (Faggin & Mead, 1990). The two media, biological cell
based and silicon VLSI based, provide very different resources. A structure
evolved to exploit the former may not efficiently utilise the latter. It may
be possible to evolve parallel distributed architectures better tailored to the
opportunities provided by VLSI than models of biological neural networks
are (Section 2.1.1). Such an architecture might use the high speed of VLSI to
compensate for impoverished connectivity in a more sophisticated way than
the multiplexing schemes commonly seen in VLSI implementations of neural
nets (Douglas et al., 1995; Craven et al., 1994; Tessier et al., 1994). Hence,
having seen the possibilities of intrinsic evolution to offer an unconstrained
exploitation of the physical resources, it would be unwise rigidly to limit
hardware evolution to a neuro-mimetic structure. Of course, one justification
for the use of some connectionist architectures is the existence of effective
learning algorithms, and these are still valuable for that reason.
In the same way that the architecture of natural nervous systems evolved
to be suited to the restrictions and opportunities of biology, so did the
process of natural evolution itself adapt to the resources available ("the
evolution of evolvability"). The large timescale, highly parallel, heterogeneous
distributed co-evolution found in nature is somewhat different to present-day
implementations of artificial evolution. It is thus justifiable to use
biologically-implausible mechanisms where these are effective, for example in the setting
of the mutation rate or in the morphogenesis process. The aim is to arrive
at an implementation of artificial evolution informed by evolutionary
biology, but adapted to the facilities available in the intrinsic hardware evolution
scenario.
To sum up the whole chapter, we wish `to facilitate the process of evolution
in generating forms that are adapted to the medium' (Ray, 1995). I have
formulated unconstrained intrinsic hardware evolution to do just that, and
the feasibility studies have indicated that it is viable and profitable. Chapter 5
will demonstrate this more vividly; before then, the next chapter explores
some other facets of evolution that will prove useful.
4. Parsimony and Fault Tolerance

This chapter investigates how the nature of the evolutionary process itself can
be exploited for engineering purposes. In the first section, a phenomenon
originally observed in molecular evolution, namely the evolution of insensitivity
to genetic mutations, is explored in the context of engineering GAs. Having
seen that the effect is significant, the second section describes how it can be
exploited by the engineer to give a tendency for parsimonious solutions, or
solutions robust to certain kinds of variation. A particularly important
instance of robustness to variations is fault tolerance, and the remainder of the
chapter goes on to study other evolutionary mechanisms by which it can be
achieved.[1]

4.1 Insensitivity to Genetic Mutations


It has been observed in the study of molecular evolution that evolution tends
to produce individuals which not only have high fitness, but are also of a
structure such that the average decrease in fitness caused by genetic mutations
is small (Eigen, 1987; Huynen & Hogeweg, 1994). I deliberately postpone
pointing out the engineering implications of this until the next section, to
make it clear that under the conditions to be identified below, the effect will
occur whether or not it is put to any engineering use: the results from this
section contribute in their own right to the analysis of population dynamics.
To gain an intuitive understanding of the effect, consider a single individual
in the population. The spread of this individual's genetic information
through the population in successive generations depends not only on how
many offspring it produces, but also on how many offspring those offspring
produce, and so on. Each time one of these offspring is produced, however, it
is subject to genetic mutation. Hence a fit individual that is relatively insensitive
to mutations will have mutated offspring that are also fit; its genetic
information can spread through the population more readily than that of an
individual which is equally fit, but which is vulnerable to mutations and so
produces mutant offspring of lower average fitness. Such mutation-insensitive
individuals therefore tend to displace more mutationally-brittle individuals
over time, even if the brittle individuals have slightly higher fitness. (Here
we assumed that the task was not changing over time. If the task is rapidly
changing, then Huynen and Hogeweg (1994) have demonstrated that quite
the opposite outcome can result.)

[1] Most of the material in this chapter also appears in Thompson (1995b, 1996a,
1997).
Eigen (1987) summarised several experiments which demonstrate this
phenomenon, but using a model of molecular evolution significantly different
from EAs used for engineering. As a first illustration, I now adapt one of those
experiments using the type of basic GA described in Section 1.2. Consider a
population of 5-bit genotypes. Let the Hamming distance of an individual i
from the sequence 00000 be h(i), so that h(i) is simply the number of `1's in
i's genotype. Then define i's fitness as:

                 ( 10  if h(i) = 0
    fitness(i) = (  9  if h(i) = 5
                 (  5  if h(i) = 4
                 (  0  otherwise
The `fitness landscape' of an evolutionary problem is the assignment of fitness
values over the space of all possible genotypes. Here, the fitness landscape
consists of two local optima. The first is a global optimum of 10 for the genotype
00000, which is an isolated optimum: all genotypes near it in Hamming
space (within three bit-flips) give zero fitness. The second optimum is for the
sequence 11111, and has the slightly inferior fitness of 9, but is surrounded
by a region of medium fitness, such that all five possible 1-bit mutants of the
optimum have fitness 5. All other genotypes confer zero fitness. To initialise
the population, all of the genotypes were set to the 00000 global optimum,
and then the GA was left to run. Elitism was not used, the population size
was 30, and the bitwise mutation probability was set to give an expectation
of 1.0 mutations per genotype. After 200 generations, the distribution of the
population was measured by counting the number of individuals at each of
h(i) = 0, 1, 2, 3, 4, 5. The measurements were averaged over 100 runs of the
GA.
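This setup is easily reproduced in simulation. The following Python sketch is an illustrative simplification (fitness-proportional selection is assumed here as a stand-in for the basic GA's selection scheme, and a single run is shown rather than the 100 averaged runs):

```python
import random

def fitness(genotype):
    """The two-peak landscape: h(i) is the number of 1s in the genotype."""
    h = sum(genotype)
    if h == 0:
        return 10      # isolated global optimum at 00000
    if h == 5:
        return 9       # slightly inferior optimum at 11111...
    if h == 4:
        return 5       # ...surrounded by medium-fitness 1-bit mutants
    return 0

def mutate(genotype, expected_flips=1.0):
    p = expected_flips / len(genotype)   # bitwise rate giving 1.0 expected flips
    return [b ^ (random.random() < p) for b in genotype]

def select(population):
    """Fitness-proportional selection without elitism."""
    weights = [fitness(g) for g in population]
    if sum(weights) == 0:                # degenerate case: choose uniformly
        return random.choice(population)
    return random.choices(population, weights=weights)[0]

random.seed(0)
pop = [[0] * 5 for _ in range(30)]       # initialise at the global optimum
for _ in range(200):
    pop = [mutate(select(pop)) for _ in range(30)]

# distribution over Hamming distances h(i) after evolution
counts = [sum(1 for g in pop if sum(g) == h) for h in range(6)]
```

Over repeated runs, the final counts typically concentrate around h(i) = 4 and 5 rather than at the abandoned global optimum, mirroring Figure 4.1.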
The results (Figure 4.1) show that the population nearly always moved
away from the isolated global optimum, in favour of the slightly inferior
fitness peak, with its surrounding 1-bit mutant region of medium fitness. In
the figure, the bar for h(i) = 5 is not the highest, even though the population
is converged around this point, because there is only one possible genotype
(11111) for h(i) = 5, but there are more possibilities for h(i) = 4, 3, 2, 1, as
indicated. The outcome was similar even when the elitism mechanism was
re-introduced, as long as there was more than 10% noise added to the fitness
evaluations, or if the two optima were set to be of equal fitness.
In this contrived example, the population abandoned the global optimum
in favour of a slightly less-fit optimum at which the detrimental effects of
genetic mutations were smaller.

Fig. 4.1. Mean population distribution after evolution: the mean number of
individuals at each Hamming distance h(i) from 00000, annotated with the
number of different genotypes possible at each distance.

Can such a tendency to seek smooth regions of fitness landscape occur in
a more realistic model of an engineering application, and be of significant
magnitude? I will now show the answer to be `Yes' on both counts, but not
for all types of GA.
For ease of experimentation, we shall study evolution on the well-known
NK model of fitness landscapes (Kauffman, 1993) rather than on a fitness
landscape arising from a real problem. N is the length, in bits, of the genotype.
In this model, the fitness of each bit can be calculated, and the fitness
of the whole genotype is just the mean fitness of its bits. The fitness of a
bit is determined by its own value (0 or 1) and the values of K other bits
(0 ≤ K ≤ N-1).
To generate a random landscape for particular values of N and K, one
proceeds as follows. For each of the N bits in the genotype in turn, choose at
random K other `influencer' bits which will influence its fitness (this is the
`random neighbours' model). Since the fitness of each bit will be determined
by its own value and that of its K influencers, a bit's fitness can be given by
a look-up table of 2^(K+1) real-valued entries. For each of the N bits, a separate
fitness look-up table is randomly generated, with entries uniformly randomly
drawn from the interval [0.0, 1.0]. This random choice of influencers and
look-up table entries is now held constant, and defines a particular fitness
landscape which can be used in an evolutionary experiment. The calculation
of a genotype's fitness on a landscape with K=2 is illustrated in Figure 4.2.
Fig. 4.2. Calculation of a genotype's fitness on an NK landscape. Each of the
N bits has K (here 2) epistatic interactions; a bit's fitness contribution is
looked up in a table of 2^(K+1) random values in [0, 1], indexed by its own
value and those of its influencers. The fitness of a genotype is the mean of
the fitnesses of its bits.

Low values of K give, on average, `smooth' random landscapes: a genetic
mutation to a bit will not hugely alter the fitness of the genotype, because
that bit influences the fitness contributions of few other bits. In the limit
of K=0, there is a single global fitness optimum with no other local optima.
Conversely, for high values of K a genetic mutation is likely to have a large
effect on fitness (`rugged' landscapes). In the limit of K=N-1, a single mutation
changes the fitness of a genotype to a value which is completely uncorrelated
with the unmutated fitness. For 0 < K < N-1, any particular random landscape
is likely to have some regions which are more rugged than others, and
the value of K determines on average how rugged it is overall.
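The landscape construction and fitness calculation described above can be sketched directly; the following Python rendering of the random-neighbours model is illustrative (function names are my own):

```python
import random

def make_nk_landscape(n, k, rng):
    """Random-neighbours NK landscape: each locus gets K influencer loci and a
    look-up table of 2^(K+1) fitness contributions drawn uniformly from [0, 1]."""
    influencers = [rng.sample([j for j in range(n) if j != i], k)
                   for i in range(n)]
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]
    return influencers, tables

def nk_fitness(genotype, landscape):
    """Genotype fitness = mean of the per-bit fitness contributions."""
    influencers, tables = landscape
    total = 0.0
    for i, table in enumerate(tables):
        idx = genotype[i]                    # index by the bit's own value...
        for j in influencers[i]:
            idx = (idx << 1) | genotype[j]   # ...followed by its influencers
        total += table[idx]
    return total / len(tables)

rng = random.Random(42)
land = make_nk_landscape(n=20, k=2, rng=rng)
g = [rng.randint(0, 1) for _ in range(20)]
f = nk_fitness(g, land)                      # a value in [0, 1]
```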
The experimental method was as follows. With N=20, a random landscape
was generated for a particular value of K. Starting from a randomly generated
population of 100 genotypes, the basic GA was run for 1000 generations
with a particular selection scheme, mutation rate and single-point crossover
probability. At the end of this run, the fittest individual in the population
was taken, and a check was made that it was at a local optimum with respect
to single mutations. If not, then the GA was started again with a new random
population on a new random landscape until the final fittest individual was
a local optimum. We now wish to answer the questions, `Is this evolved optimal
individual less sensitive to single mutations than one would statistically
expect for a local optimum of this fitness, given the current landscape? If so,
by how much?'

Taking the evolved optimal individual, the mean fitness decrease fd caused
by a single mutation was measured, averaged over all N possible single
mutations. The algorithm given in Figure 4.3 was then applied to assign new
random fitnesses to the single-mutation neighbours of the optimum, but such
that (a) local optimality is preserved, and (b) the statistical correlation
between the fitness of the optimum and the fitnesses of its single-mutation
neighbours is preserved. If we now re-measure the mean fitness decrease, fd',
caused by single mutations to the optimum, it will (on average) be typical
of an optimum of this fitness on a landscape of the current N, K and choice
of influencers. The difference in mutation-sensitivity between the optimum
found through evolution and this random typical optimum of the same fitness
is e = fd' - fd.

Taking the locally optimal genotype g found through evolution:

    REPEAT {
        FOR each possible single mutation m {
            FOR each locus l {
                Would mutation m change which location in l's fitness look-up
                table was accessed? If so, then set the entry at this new
                location to a new random value in the interval [0.0, 1.0].
            }
        }
    } UNTIL this process has allocated new fitnesses
      such that g is a local optimum.

Fig. 4.3. The NK landscape modification algorithm.
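A direct Python rendering of this modification procedure might look like the following; it is a sketch under the random-neighbours NK machinery described above, and the helper names are my own:

```python
import random

def table_index(geno, l, influencers):
    """Which entry of locus l's look-up table a genotype accesses."""
    idx = geno[l]
    for j in influencers[l]:
        idx = (idx << 1) | geno[j]
    return idx

def nk_fit(geno, landscape):
    influencers, tables = landscape
    n = len(geno)
    return sum(tables[l][table_index(geno, l, influencers)]
               for l in range(n)) / n

def modify_landscape(g, landscape, rng, max_tries=10000):
    """Re-randomise every table entry that a single mutant of g accesses
    differently from g, repeating until g is again a local optimum.
    The entries g itself accesses are never touched."""
    influencers, tables = landscape
    n = len(g)
    base = [table_index(g, l, influencers) for l in range(n)]
    for _ in range(max_tries):
        for m in range(n):                        # each possible single mutation m
            mutant = g[:]
            mutant[m] ^= 1
            for l in range(n):                    # each locus l
                idx = table_index(mutant, l, influencers)
                if idx != base[l]:                # m changes the entry l accesses
                    tables[l][idx] = rng.random()
        if all(nk_fit([b ^ (i == m) for i, b in enumerate(g)], landscape)
               <= nk_fit(g, landscape) for m in range(n)):
            return landscape                      # local optimality restored
    raise RuntimeError("no suitable reassignment found")

# demonstration on a small landscape in which g starts as a local optimum
rng = random.Random(1)
n, k = 8, 2
influencers = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]
g = [rng.randint(0, 1) for _ in range(n)]
for l in range(n):                                # force g to be optimal initially
    tables[l][table_index(g, l, influencers)] = 0.99
modify_landscape(g, (influencers, tables), rng)
```

Because g's own table entries are left untouched, the correlation between the optimum's fitness and that of its neighbours (which share most entries with it) is preserved, as condition (b) requires.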

For each particular setting of K and the GA parameters, the entire
procedure of the previous two paragraphs was repeated at least 200 times[2] and
the values of e averaged to give the mean ē. The value ē gives the expected
difference between the fitness drop when a single mutation is applied to an
evolved optimal individual and the fitness drop that one would statistically
expect on optima of the same fitness under the same conditions. Below, we
will express ē as a percentage ê of the mean fitness of the final optimal
solutions found by the GA (averaged over all the runs). So where a fitness
drop of k% would normally be expected on optima of a particular fitness
when a single mutation is applied, if such optima are found through evolution
then the actual fitness drop will only be (k - ê)% on average. In summary,
ê is the percentage of unmutated fitness by which the single-mutants of an
evolved optimum are better than one would expect, on average.
This experiment has been performed for over a hundred combinations
of K, mutation rate, crossover probability and selection method. Figure 4.4
shows the results for K=9, crossover probability=1.0, using linear rank selection
without elitism. ê increases with the mutation rate until the `error
threshold' is reached: beyond this, the mutation rate is too high for the GA
to work properly and both ê and the actual fitness attained decrease. Very
similar results are obtained when the selection method is fitness-proportional
with linear scaling. It seems to be generally true that ê peaks at the maximum
mutation rate for which the GA still works well (before the fitness
starts to decrease due to the `error catastrophe'). Fortunately, this maximum
rate of mutation, which depends on the fitness landscape and the selection
pressure, is also the mutation rate which would normally be used for best
performance. The maximum ê observed under any conditions is that seen in
this figure: 11.1%.

[2] Often as many as 1000 runs were performed, as deemed necessary by monitoring
the standard error of the final mean value.
For low or high K, the maximum value of ê is smaller: the effect occurs
most on landscapes of intermediate smoothness/ruggedness, peaking at K=9
in these experiments with N=20. As the crossover probability is reduced
from 1.0, ê is also reduced, with the maximum value of ê without crossover
being about half of that with crossover probability 1.0. If elitism was
introduced into the rank selection method, then although the fitness obtained
by the GA was greatly improved, the maximum ê was reduced to around a
quarter of what it would otherwise be.
In truncation selection with threshold T, the T% best individuals have
equal probability of reproducing, and the others have zero probability. When
truncation selection was used, ê was at least as great as for the other selection
methods, and was maximised at the largest value of T that could be used
without the fitness suffering. Under the particular conditions used, ê was
maximised at T=60%. As T was reduced to 5%, ê fell to around a tenth of
its maximum value even though the fitness obtained was unaffected.
To sum up: for NK landscapes of intermediate smoothness/ruggedness,
and when elitism is not used, evolved optima have been observed to be around
11% less degraded by single mutations than would be statistically expected
for that problem. The GA parameters did not have to be set in an unusual
way to achieve this, although there are common conditions for which the
effect is much attenuated. The magnitude of the effect is not known outside
of the NK model, but it is reasonable to suppose it could be significant for
many (but not all) implementations of GAs for engineering applications.

4.2 Engineering Consequences of Mutation-Insensitivity


Fig. 4.4. ê (top) and the mean fitness of evolved optima (bottom) as the
bitwise mutation probability is varied. K=9, crossover probability=1.0, linear
rank selection, no elitism. The error bars indicate ± the standard error.

There are two ways in which individuals can be insensitive to random genetic
mutations. The first is for most of the loci to affect fitness, but such that
mutation at any single locus has, on average, a small effect. The second way
is for many of the loci not to affect fitness at all (they are `junk'): the average
effect of random mutations can then be small even though mutations to the
functional loci might be catastrophic. Presumably a mixture of both occurs
(depending on the application), and they can be exploited by the engineer
when the genetic encoding scheme is designed: either to give robustness in
the phenotype to variations that are the same as those caused by genetic
mutation, or to encourage a parsimonious use of whatever it is that mutations
manipulate.
Take, for example, the direct encoding of an FPGA's configuration that
was used when evolving oscillators in Section 3.3.2 of the previous chapter.
If there is a tendency for many of the possible genetic mutations to have no
influence on fitness at all, then this could translate into a pressure towards
small solutions, with few of the available cells being involved in generating
the behaviour: mutations to the sections of genotype coding for the others will
not be detrimental to fitness. In the presence of such an inclination towards
parsimonious circuits, a large number of components can be provided as
resources for optional use, without fear of this causing inefficient systems
to be evolved. This removes from the experimenter the burden of estimating
how many components will be required. Harvey and Thompson (1997) suggest
that the surplus components can also facilitate networks of mutations that are
neutral with respect to fitness, percolating large distances through genotype
space; this could fundamentally improve the performance of an EA like
SAGA. The results of the more sophisticated FPGA experiment given in the
next chapter will be seen to support this view.
Insensitivity to genetic mutations, whether through parsimony, or by most
single loci having only a small effect on fitness, is desirable in any situation
where the system is required to function in the presence of some set of variations,
and those variations have exactly the same effect on the phenotype as
do single mutations. It is possible that the encoding scheme (and perhaps the
mutation operator) can be devised specifically with this in mind, using mutation
insensitivity to give the evolved system robustness with respect to specific
phenotypic variations. To illustrate how mutation insensitivity can lead
to phenotypic robustness, consider the DSM robot controller of Section 3.3.3.
A mutation to the section of genotype coding for the RAM contents has the
phenotypic effect of inverting one of the bits in the RAM chip. This is exactly
the same variation to the phenotype as an adverse single-stuck-at (SSA) fault
in the memory array of the RAM chip, causing that single bit to read the
opposite of what it should. Hence, a tendency towards mutation insensitivity
translates directly into a tendency for the DSM robot controllers to display
graceful degradation in the presence of SSA faults in the RAM's memory
array. Note that it is not sufficient for mutations to have the same effect on
behaviour as the phenotypic variation: they must actually manipulate the
phenotype's structure in exactly the same way.
So does the evolved DSM robot controller really display graceful degradation
in the presence of SSA faults? Figure 4.5 shows that the evolved wall-avoider
DSM does have some robustness to adverse SSA faults (observation
of the robot's qualitative behaviour bears this out), but it is not known how
much is due to the effect described above, and how much is simply a property
of the DSM architecture. The 32 possible adverse SSA faults were each emulated
in turn by writing the opposite value to that specified by the genotype
to the RAM bit in question. For each fault, the DSM was then used to control
the robot (in the virtual environment) for sixteen 90-second runs from the
same starting position, and the average fitness was measured to give the data
in the figure. It would be too time-consuming to conduct comparative studies
to ascertain whether the mutation-insensitivity effect really is at play here,
but the results seem consistent with it.

Fig. 4.5. Tolerance of the evolved robot controller to SSA faults. The 32 faults
have been sorted in order of severity. `No Faults' shows the fitness without faults;
`Mean Faulty' shows the fitness in the presence of an SSA fault, averaged over all
32; `Mean Random' shows the mean fitness of individuals with randomly generated
genotypes.
Fault-tolerance and graceful degradation are important. In a harsh environment,
an inaccessible situation, or a safety-critical application, a system
might be required to retain a certain level of ability even if a computer's
memory becomes slightly corrupted, or hardware defects develop. Tolerance
to semiconductor defects also increases both yield and feasible chip size, and
is a necessity for wafer-scale integration (Yasunaga, Masuda, Yagyu, et al.,
1991). Evolution can integrate an ability to function in the presence of faults
into the design itself, rather than relying on the use of spare parts, as is
conventional. If an evolutionary approach to fault-tolerance is to be used,
however, it must operate at the same level of abstraction as the faults to
be tolerated manifest themselves: it would be a mistake to evolve a neural
network to tolerate perturbations to its structure, simulate it on a digital
computer, and expect the system to cope with failures of the computer; the
simulation program would just crash. This makes hardware evolution a
significant part of an evolutionary route to fault tolerance, because hardware
defects are often of primary importance.
Graceful degradation with respect to certain faults, arising out of evolved
mutation insensitivity, can be used to augment other means of fault tolerance
in engineering applications: it arises `for free' out of the nature of the
evolutionary process, without any special measures having to be taken. However,
the effect is limited both in magnitude and in the range of faults to
which it can apply (as determined by the genetic encoding). The next section
considers how evolution can be more explicitly incited to meet pre-specified
fault-tolerance requirements.

4.3 Explicitly Specifying Fault-Tolerance Requirements


The obvious way to induce evolution to produce a fault-tolerant design is
to incorporate a fault-tolerance measure into the fitness function. That way,
fault tolerance is explicitly part of the required behaviour. Ideally, for each
fitness evaluation the individual would be given a trial in the presence of
every possible fault in turn, and the resulting fitness score would be some
measure of performance in the face of any fault. For systems being evolved in
software simulation, it is easy to simulate the effects of faults. If the individuals
are instantiated in reconfigurable hardware for their fitness evaluations,
then many faults can be emulated simply by altering the configuration to
something other than what it would normally be. For instance, in our robot
controller example, an adverse SSA fault in the RAM chip's memory array
can be emulated by writing the wrong value to that bit.
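For the RAM-based controller, this emulation amounts to a bit-flip relative to the genotype-specified contents. A minimal Python sketch follows; the fitness trial here is a toy stand-in for the real robot evaluation, and the names are my own:

```python
def emulate_ssa_fault(ram_bits, fault_index):
    """Emulate an adverse single-stuck-at fault: the addressed RAM bit reads
    the opposite of the value the genotype specified for it."""
    faulty = list(ram_bits)
    faulty[fault_index] ^= 1
    return faulty

def worst_fault(evaluate, ram_bits):
    """Exhaustively trial every adverse SSA fault and return (fitness, index)
    of the one that degrades performance the most."""
    return min((evaluate(emulate_ssa_fault(ram_bits, i)), i)
               for i in range(len(ram_bits)))

# toy stand-in for a real fitness trial (counting set bits, purely illustrative)
ram = [0, 1, 1, 0, 1, 0, 0, 1]
score, fault = worst_fault(lambda bits: sum(bits), ram)
```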
To have each evaluation consist of trials for every possible fault, of which
there are typically many, will be prohibitively time-consuming in general
(but not always (Sebald & Fogel, 1992)). However, if we are interested in
optimising worst-case performance (i.e. minimising the effects of the most
serious fault), there is a potential short-cut. In this case the fitness measure
will be based on performance in the presence of only the single most serious
fault. If some way of predicting which fault is the most serious can be found,
then only this single fault needs to be introduced during the fitness evaluation.
A similar situation arises if only a relatively small subset of the possible
faults seriously degrades the system: only this subset of serious faults need
be considered.
However, which faults are the most serious might be different for each
individual in the population. If the only way to identify the worst faults for
each individual is to test them with each fault in turn, then we are back where
we started. In practice, though, after the first few generations the individuals
are mostly similar and the population as a whole changes gradually over
time. These facts can be used in predicting which faults are the most serious
without having to test every individual with every fault; fortunately, small
errors of prediction are unlikely to be disastrous to the evolutionary process.
To illustrate this idea, we evolve the RAM-based robot controller example
to give satisfactory wall-avoidance behaviour in the presence of any of
the 32 possible adverse SSA faults in its RAM chip. First, the wall-avoider
was evolved as normal, using the basic GA with rank selection, elitism, and a
population size of 50. After 85 generations the GA had stabilised at a good
solution. Then the consensus sequence was generated: the genotype formed by,
for each locus, taking whichever of the values {0, 1} was most common in the
population at that position. The robot controller coded for by this consensus
sequence was then tested in the presence of each of the 32 possible adverse
SSA faults in turn. The fault that caused the consensus individual to behave
the most poorly (lowest fitness score) was nominated as the `current fault.'
Another generation of evolution was then performed, but with the current
fault being present during all of the fitness evaluations. After this generation
the new consensus individual was constructed, tested, and a (possibly) new
current fault nominated for the next generation. The process continued in
this way, with a single fault being present throughout all evaluations within
a generation, this fault being the one that caused the worst performance in
the consensus individual of the previous generation.[3]
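The consensus-based fault nomination can be sketched as follows; the evaluation function below is a hypothetical stand-in for the faulted robot trial, and the helper names are my own:

```python
from collections import Counter

def consensus(population):
    """The consensus sequence: per-locus majority vote over the population."""
    length = len(population[0])
    return [Counter(g[l] for g in population).most_common(1)[0][0]
            for l in range(length)]

def nominate_current_fault(population, evaluate_with_fault, n_faults):
    """Exhaustively test only the consensus individual against every fault;
    the fault on which it scores worst becomes the `current fault' present
    during all of the next generation's fitness evaluations."""
    c = consensus(population)
    return min(range(n_faults), key=lambda f: evaluate_with_fault(c, f))

# toy evaluation: a hypothetical stand-in for the faulted robot trial,
# in which a fault at locus f hurts an individual that has a 1 there
pop = [[1, 1, 0], [1, 0, 0], [1, 1, 1]]
current = nominate_current_fault(pop, lambda g, f: sum(g) - 2 * g[f], 3)
```

Only one individual (the consensus) is tested exhaustively per generation, so the cost is 32 extra trials rather than 32 trials per population member.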
Figure 4.6 shows that the maximum and mean fitnesses dropped sharply
at generation 85 when faults were first introduced, but over the course of the
next 150 generations returned to high values. Figure 4.8 shows that when
the faults were first applied the controller was already tolerant to most SSA
faults, but a few were critical. At various stages afterwards, this tolerance to
most SSA faults is lost in the GA's attempts to improve performance on the
single most serious current fault. Some serious faults are seen to persist over
long periods. Eventually, consensus individuals arose that give satisfactory
performance when any of the SSA faults is present.[4] Figure 4.7 compares the
fault tolerance of the conventionally-evolved consensus individual at generation
85 with that of the first completely-tolerant consensus, which arises at
generation 204. The criterion for `satisfactory performance' was for the real
robot to display what would reasonably be called wall-avoiding behaviour,
and corresponds to a fitness score of ≥ 1.0.
Returning to the general discussion, we can see that this example has
exploited the similarity between individuals in the population by assuming
that a single fault will be the most serious one for all individuals at a particular
generation. This fault was identified by exhaustively testing a single
`average' individual: the consensus. Though this fault-prediction strategy is
not exact, it had the desired effect of catalysing the evolution of a completely
fault-tolerant individual.

[3] It may have been better to have taken the consensus of the current generation
rather than of the previous one.
[4] In fact, if the GA was left to run, then these completely-tolerant solutions would
be lost again as the GA concentrated entirely on improving performance in the
presence of the current most serious fault, even if that performance was already
satisfactory.

Fig. 4.6. Maximum and mean fitness in the population over time. The first 85
generations were in the absence of faults; thereafter all fitness evaluations were in
the presence of the `current fault' (see text).

Fig. 4.7. Fault tolerance of the consensus at generation 85 (`Before'), and then
after 119 generations of evolution in the presence of faults (`After'). In each case,
the faults have been sorted in order of severity. The level marked `Satisfactory
performance' corresponds to a fitness of 1.0.

Fig. 4.8. The evolution of fault tolerance. Results of the exhaustive test over all
possible adverse SSA faults made on the consensus individual of each generation
(horizontal axis: the 32 adverse SSA faults; vertical axis: generation). The darker
a spot, the more serious the fault. Pure white represents satisfactory performance
(fitness ≥ 1.0), and pure black the worst possible performance. At the generations
marked with arrows, the consensus individual is satisfactory in the presence of any
SSA fault.
Many other strategies could be used to decide which faults an individual
should encounter during its evaluation: the example above is just intended
as a simple illustration. If there were a very large number of possible faults,
exhaustive testing of even just the single consensus individual per generation
could take too long. Following an earlier idea (Thompson, 1995b), an
attempt was made to co-evolve (Hillis, 1992) a population of faults, the
hope being that evolution could be used to maintain a population of faults
that concentrates on the weak-spots of the co-evolving target population and
tracks them over time. Unfortunately, there was not enough correlation
between the positions of the most serious faults for evolution to identify them
more efficiently than random search, and the experiment failed. This may be
a general difficulty for such techniques, but more investigation is needed.
Another interesting possibility is the use of a steady-state (rather than
generational) GA (Paredis, 1994). Here, for a successful individual to stay in
the population, it must score well in repeated re-evaluations, which could be
used to build up gradually an accurate picture of performance in the presence
of a set of faults. The difficulty here is that if relatively few out of a large set of
faults are serious, then the population can be dominated by new individuals
that, by chance, have not yet encountered the faults which affect them. A
further embellishment that has proven useful in a similar problem is the use
of a distributed GA (McIlhagga, Husbands, & Ives, 1996).
What has been shown here is that if some way of targeting the most serious
weak-spots of individuals can be found, then subjecting the individuals to
these faults during their fitness evaluations can cause the evolution of systems
tolerant to all of the possible faults. This has been demonstrated in evolving
fault-tolerance in the real-world robot controller. It may be possible to use
an adaptive process such as co-evolution to target the weak-spots, or search
using application-specific heuristics may prove more appropriate.

4.4 Adaptation to Faults


If a particular defect persists for an extended period of time while the system
is evolving, then the behaviour of the faulty part becomes just another
component to be used: the evolutionary algorithm does not `know' that the part
is supposed to do something else. For example, one of the SSA faults (the one
marked with an arrow in Figure 4.5) was introduced as a permanent feature
in the DSM, and the evolved controller was allowed to evolve some more. At
first, the maximum and mean fitnesses of the population were dramatically
lowered, but after only 10 generations they had fully recovered to their previous
values, as if there was no fault. A population has more tolerance to newly
occurring faults than any single individual, because it already contains a
diversity of slightly different solutions. In this case, the faulty part was tolerated
rather than used, but in general this need not be so. This use of evolution as a
corrective mechanism may prove useful when transferring an evolved system
between pieces of hardware having different defects, or to cope with slowly
changing faults in the same hardware. In some applications, it may be possible
to have evolution permanently running `in the background,' to cope with
changing component characteristics automatically. This applies to gradual
drift in component properties (or any other system parameters) as much as
to faults, which are just an extreme case. Schneider and Card (1991) report
an analogous situation in which on-line learning of a VLSI neural network
allowed it to compensate for inaccuracies in the silicon. There are practical
difficulties with on-line evolution, however, because highly unfit individuals
are unavoidably generated with significant frequency.

4.5 Fault Tolerance Through Redundancy


We have been concentrating on how the nature of the evolutionary process
may be used to produce designs that are inherently fault-tolerant. However,
the `Embryonics' architectures described in Section 2.2.3 transparently
provide fault-tolerance through redundancy in a reconfigurable VLSI medium
to which hardware evolution can be applied. In this way, all of the above
evolutionary techniques can augment the more traditional redundancy
approaches, which give predictable levels of fault-tolerance through the use
of spare parts.

4.6 Summary
The tendency for evolution to find solutions that are relatively unaffected
by genetic mutations has been quantified in the context of engineering GAs,
and found to be of a magnitude and applicability worth considering.
Depending on the genetic encoding scheme, it can be manifested as a pressure
towards parsimonious systems, or as robustness to certain phenotypic
variations. Hardware faults are an extreme instance of such a variation, and
some degree of graceful degradation can be achieved. Other evolutionary
methods to deal with fault-tolerance requirements more directly were
proposed.
Using this tool-box of techniques, systems can be evolved which by the
nature of their design exhibit fault tolerance or graceful degradation.
Conventional design methodologies cannot integrate fault-tolerance
requirements into the heart of the design process, and must resort to
providing spare parts (redundancy). Evolution, in contrast, can take account
of fault-tolerance considerations at all stages of the (automatic) design
process.
5. Demonstration

In this chapter, the main ideas of the book are seen in action. The
configuration of an FPGA is placed under the direct control of unconstrained
intrinsic hardware evolution, and evolved for a simple but non-trivial task.
Evolution solves the problem well, using a surprisingly small region of the
FPGA, with rich structure and dynamics; it is demonstrated that unusual
aspects of the semiconductor physics are exploited. When first reported
(Thompson, 1996c), this was the first case of the intrinsic evolution of an
FPGA configuration.1

5.1 The Experiment


The task was to evolve a circuit – a configuration of a 10 × 10 corner of the
XC6216 FPGA – to discriminate between square waves of 1kHz and 10kHz
presented at the input. Ideally, the output should go to +5V as soon as one
of the frequencies is present, and to 0V for the other one. The task was
intended as a first step into the domains of pattern recognition and signal
processing, rather than being an application in itself. One could imagine,
however, such a circuit being used to demodulate frequency-modulated binary
data received over a telephone line.
It might be thought that this task is trivially easy. So it might be, if the
circuit had access to a clock or external resources such as RC time-constants
by which the period of the input could be timed or filtered. It had not.
Evolution was required to produce a configuration of the 100 logic cells to
discriminate between input periods five orders of magnitude longer than the
input → output propagation time of each cell (which is just a few
nanoseconds). No clock, and no off-chip components could be used: a
continuous-time recurrent arrangement of the 100 cells had to be found which
could perform the task entirely on-chip. Many people thought this would not
be possible.
Although the results of Section 3.3.3 suggested the provision of a clock
of evolvable frequency as an optional resource rather than as an imposed
constraint, no clock was provided here. There were two reasons for this. The
first was to continue to investigate the possibility of unconstrained intrinsic
evolution of completely unclocked networks of logic gates. The second was
an engineering reason: the components needed for an external time-reference
would be bulky compared to the 1% of the FPGA's silicon area used by the
final evolved circuit. The fully integrated solution is preferable in terms of
size, mechanical robustness, and the cost of components and manufacturing.

1 The material in this chapter also appears in Thompson (1997a).
The evolutionary algorithm and genetic encoding were the same as described
for the evolved oscillator experiment of Section 3.3.2: a basic GA was used,
and the 18 configuration bits per cell were encoded in a raster fashion onto
the linear genotype of 1800 bits. Refer to the Introduction (Section 1.2) for
a reminder of the subset of the FPGA's functionality used in these
experiments. The population size was 50, the probability of crossover was 0.7,
and the per-bit mutation probability was set such that the expected number
of mutations per genotype was 2.7. This mutation rate was found through
experimentation at this task, and within the SAGA framework.
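For concreteness, the parameter settings just described can be written down directly; this is an illustrative sketch (the names are mine, not the author's code). Note that the per-bit mutation probability is simply 2.7/1800.

```python
import random

GENOTYPE_LEN = 1800                 # 18 configuration bits x 100 cells
POP_SIZE = 50
P_CROSSOVER = 0.7
P_MUTATE_BIT = 2.7 / GENOTYPE_LEN   # expected 2.7 bit-flips per genotype

def random_genotype():
    return [random.randint(0, 1) for _ in range(GENOTYPE_LEN)]

def mutate(genotype):
    # Flip each bit independently with the per-bit probability.
    return [bit ^ (random.random() < P_MUTATE_BIT) for bit in genotype]

def crossover(mum, dad):
    # Single-point crossover, applied with probability P_CROSSOVER.
    if random.random() >= P_CROSSOVER:
        return mum[:]
    cut = random.randrange(1, GENOTYPE_LEN)
    return mum[:cut] + dad[cut:]

population = [random_genotype() for _ in range(POP_SIZE)]
```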
The GA was run on a normal desktop PC interfaced to some simple in-house
electronics2 as shown in Figures 5.1 and 5.2. To evaluate the fitness of an
individual, the hardware-reset signal of the FPGA was first momentarily
asserted to make certain that any internal conditions arising from previous
evaluations were removed. Then the 1800 bits of the genotype were used to
configure the 10 × 10 corner of the FPGA, and the FPGA was enabled. At
this stage, there now exists on the chip a genetically specified circuit behaving
in real-time according to semiconductor physics.
The fitness of this physically instantiated circuit was then automatically
evaluated as follows. The tone generator drove the circuit's input with five
500ms bursts of the 1kHz square-wave, and five of the 10kHz wave. These ten
test tones were shuffled into a random order, which was changed every time.
There was no gap between the test tones. The analogue integrator was reset
to zero at the beginning of each test tone, and then it integrated the voltage
of the circuit's output pin over the 500ms duration of the tone.
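The measurement protocol can be summarised in code. This is a sketch only: `reset_integrator`, `play_tone`, and `read_integrator` are hypothetical stand-ins for the micro-controller interface, not real API calls.

```python
import random

def measure_test_tones(reset_integrator, play_tone, read_integrator):
    """Apply the ten test tones in a fresh random order and return the
    integrator reading i_t for each, tagged with its frequency."""
    tones = [1000] * 5 + [10000] * 5     # five 1kHz and five 10kHz bursts
    random.shuffle(tones)                # order changed every evaluation
    readings = []
    for freq_hz in tones:                # no gap between the test tones
        reset_integrator()               # integrator back to zero
        play_tone(freq_hz, duration_ms=500)
        readings.append((freq_hz, read_integrator()))
    return readings
```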

2 Technical Electronics Notes: All of the electronics fits comfortably on a
single wire-wrapped ISA (Industry Standard Architecture) card (Figure 5.2),
and was designed and constructed by the author for this project. The analogue
integrator was of the basic op-amp/resistor/capacitor type, with a MOSFET
to reset it to zero (Horowitz & Hill, 1989). A MC68HC11A0 micro-controller
operated this reset signal (and that of the FPGA), and performed 8-bit A/D
conversion on the integrator output. A final accuracy of 16 bits in the
integrator reading was obtained by summing (in software) the result of
integration over 256 sub-intervals, with an A/D conversion followed by a
resetting of the analogue integrator performed after each sub-interval. The
same micro-controller was responsible for the generation of the tones.
Locations in the configuration memory of the FPGA and in the dual-port
RAM used by the micro-controller could be read and written by the PC via
some registers mapped into the ISA-Bus I/O space. The XC6216 requires
some small but non-trivial circuitry to allow this; the schematics are subject
to change (a β-test chip was used in this work), so are not included here, but
are available from the author.

[Figure 5.1 schematic: the desktop PC sends the configuration to the XC6216
FPGA; a tone generator drives the circuit's input; the output goes to the
analogue integrator and to an oscilloscope.]
Fig. 5.1. The apparatus for the tone discriminator experiment. The 10 × 10 corner
of cells used is shown to scale with respect to the whole FPGA. The single input
to the circuit was applied as the east-going input to a particular cell on the west
edge, as shown. The single output was designated to be the north-going output of
a particular cell on the north edge.

Fig. 5.2. The circuitry to evolve the tone discriminator. This ISA card plugs
directly into the PC, and no extra circuitry is needed. On the left of the board is
the analogue integrator; in the centre is the micro-controller and its dual-port RAM;
on the right is the FPGA (beneath a fan-cooled heatsink) and its interface chips.

Let the integrator reading at the end of test tone number t be denoted i_t
(t = 1, 2, ..., 10). Let S1 be the set of five 1kHz test tones, and S10 the set
of five 10kHz test tones. Then the individual's fitness was calculated as:

    fitness = (1/5) | k1 Σ_{t∈S1} i_t  −  k2 Σ_{t∈S10} i_t |,
    where k1 = 1/30730.746 and k2 = 1/30527.973.                    (5.1)
This fitness function demands the maximising of the difference between the
average output voltage when the 1kHz input is present and the average
output voltage when the 10kHz input is present. The calibration constants
k1 and k2 were empirically determined, such that circuits simply connecting
their output directly to the input would receive zero fitness. Otherwise, with
k1 = k2 = 1.0, small frequency-sensitive effects in the integration of the
square-waves were found to make these useless circuits an inescapable local
optimum.
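Equation 5.1 transcribes directly into code. This is an illustrative transcription, not the original analysis software; readings is assumed to be a list of (frequency, integrator value) pairs, one per test tone.

```python
K1 = 1 / 30730.746        # calibration constant for the 1kHz tones
K2 = 1 / 30527.973        # calibration constant for the 10kHz tones

def fitness(readings):
    """Eq. 5.1: readings is a list of (frequency_hz, i_t) pairs,
    one per test tone."""
    sum_1k = sum(i for f, i in readings if f == 1000)
    sum_10k = sum(i for f, i in readings if f == 10000)
    return abs(K1 * sum_1k - K2 * sum_10k) / 5.0
```

With k1 and k2 calibrated this way, a circuit that merely copies input to output accumulates sums whose calibrated terms cancel, scoring essentially zero and removing the trivial local optimum.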
It is important that the evaluation method – here embodied in the analogue
integrator and the fitness function (Equation 5.1) – facilitates an evolutionary
pathway of very small incremental improvements. Earlier attempts, where
the evaluation method only paid attention to whether the output voltage was
above or below the logic threshold, met with failure. It should be recognised
that to evolve non-trivial behaviours, the development of an appropriate
evaluation technique can also be a non-trivial task.

5.2 Results
Throughout the experiment, an oscilloscope was directly attached to the
output pin of the FPGA (see Figure 5.1), so that the behaviour of the evolving
circuits could be visually inspected. Figure 5.3 shows photographs of the
oscilloscope screen, illustrating the improving behaviour of the best individual
in the population at various times over the course of evolution.
The individual in the initial random population of 50 that happened to get
the highest score produced a constant +5V output at all times, irrespective
of the input. It received a fitness of slightly above zero just because of noise.
Thus, there was no individual in the initial population that demonstrated
any ability whatsoever to perform the task.
After 220 generations, the best circuit was basically copying the input to
the output. However, on what would have been the high part of the square
wave, a high-frequency component was also present, visible as a blurred
thickening of the line in the photograph. This high-frequency component
exceeds the maximum rate at which the FPGA can make logic transitions, so
the output makes small oscillations about a voltage slightly below the normal
logic-high output voltage for the high part of the square wave. After another
100 generations, the behaviour was much the same, with the addition of
occasional glitches to 0V when the output would otherwise have been high.

[Oscilloscope photographs: the 1kHz and 10kHz inputs (top), with the best
individual's outputs at generations 0, 220, 320, 650, 1100, 1400, 2100, 2550,
2800, and 3500.]

Fig. 5.3. Photographs of the oscilloscope screen. Top: the 1kHz and 10kHz input
waveforms. Below: the corresponding output of the best individual in the
population after the number of generations marked down the side.

Once 650 generations had elapsed, definite progress had been made. For
the 1kHz input, the output stayed high (with a small component of the
input wave still present), only occasionally pulsing to a low voltage. For the
10kHz input, the input was still basically being copied to the output. By
generation 1100, this behaviour had been refined, so that the output stayed
almost perfectly at +5V only when the 1kHz input was present.
By generation 1400, the neat behaviour for the 1kHz input had been
abandoned, but now the output was mostly high for the 1kHz input, and
mostly low for the 10kHz input... with very strange looking waveforms. This
behaviour was then gradually improved. Notice the waveforms at generation
2550 – they would seem utterly absurd to a digital designer. Even though this
is a digital FPGA, and we are evolving a recurrent network of logic gates,
the gates are not being used to `do' logic. Logic gates are in fact high-gain
arrangements of a few transistors, such that the transistors are usually
saturated – corresponding to logic 0 and 1. Evolution does not `know' that
this was the intention of the designers of the FPGA, so it just uses whatever
behaviour these high-gain groups of transistors happen to exhibit when
connected in arbitrary ways (many of which a digital designer must avoid in
order to make digital logic a valid model of the system's behaviour). This is
not a digital system, but a continuous-time, continuous-valued dynamical
system made from a recurrent arrangement of high-gain groups of transistors
– hence the unusual waveforms.
By generation 2800, the only defect in the behaviour was rapid glitching
present on the output for the 10kHz input. Here, the output polarity has
changed over: it is now low for the 1kHz input and high for 10kHz. This
change would have no impact on fitness because of the absolute-value signs in
the fitness function (Eqn. 5.1); in general it is a good idea to allow evolution
to solve the problem in as many ways as possible – the more solutions there
are, the easier they are to find.
In the final photograph, at generation 3500, we see the perfect desired
behaviour. In fact, there were infrequent unwanted spikes in the output (not
visible in the photograph); these were finally eliminated at around generation
4100. The GA was run for a further 1000 generations without any observable
change in the behaviour of the best individual. The final circuit (which I will
arbitrarily take to be the best individual of generation 5000) appears to be
perfect when observed by eye on the oscilloscope. If the input is changed from
1kHz to 10kHz (or vice-versa), then the output changes cleanly between a
steady +5V and a steady 0V without any perceptible delay.
Graphs of maximum and mean fitness, and of genetic convergence, are given
in Figure 5.4. These graphs suggest that some interesting population
dynamics took place, especially at around generation 2660. The experiment
is analysed in depth from an evolution-theoretic perspective in work carried
out jointly with Inman Harvey and reported in Harvey and Thompson (1997);
I will not describe that research here. Crucial to any attempt to understand the

[Graphs: (a) fitness against generations 0–5000; (b) mean Hamming distance
against generations 0–5000.]

Fig. 5.4. Population statistics. (a) Maximum and mean fitnesses of the population
at each generation. (b) Genetic convergence, measured as the mean Hamming
distance between the genotypes of pairs of individuals, averaged over all possible
pairs.

evolutionary process that took place is the observation that the population
had formed a genetically converged `species' before fitness began to increase:
this is contrary to conventional GA thinking, but at the heart of SAGA
theory (Section 2.4.2). Evolution was the process of continual adaptation of
this relatively converged species, with genetic mutation playing a key role in
generating new variants.
The entire experiment took 2–3 weeks. This time was dominated by the five
seconds taken to evaluate each individual, with a small contribution from the
process of calculating and saving data to aid later analysis. The times taken
for the application of selection, the genetic operators, and to configure the
FPGA were all negligible in comparison. It is not known whether the
experiment would have succeeded if the individuals had been evaluated for
shorter periods of time – fitness evaluations should be just accurate enough
that the small incremental improvements in performance that facilitate
evolution are not swamped by noise. An interesting aspect of hardware
evolution is that very high-speed tasks can be tackled, for instance in the
pattern recognition or signal processing domains, where fitness evaluation –
and hence evolution – can be very rapid. The recognition of audio tones, as
in this experiment, is a long-duration task in comparison to many of these,
because it is reasonable to expect that the individuals will need to be
evaluated for many periods of the (slow) input waveforms, especially in the
early stages of evolution. The author was engaged in a different project while
the experiment was running, so it consumed no human time.
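A back-of-envelope check (my arithmetic, assuming evaluation time dominates as stated) confirms the quoted duration is consistent with the population size and generation count:

```python
POP_SIZE = 50
GENERATIONS = 5000
SECONDS_PER_EVAL = 5        # ten 500ms tones, plus reset/readout overhead

total_seconds = POP_SIZE * GENERATIONS * SECONDS_PER_EVAL
total_days = total_seconds / (60 * 60 * 24)
print(f"{total_days:.1f} days")    # about 14.5 days, i.e. roughly 2 weeks
```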

5.3 Analysis
The final circuit is shown in Figure 5.5; observe the many feedback paths.
No constraining preconceptions were imposed on the circuit, so evolution was
given the freedom to explore the full space of possible designs. A bias in the
encoding scheme favouring the use of repeated structures, though potentially
helpful in other scenarios, would have been inappropriate for such a small
circuit in this application.


Fig. 5.5. The final evolved circuit. The 10 × 10 array of cells is shown, along with
all connections that eventually connect an output to an input. Connections driven
by a cell's function output are represented by arrows originating from the cell
boundary. Connections into a cell which are selected as inputs to its function unit
have a small square drawn on them. The actual setting of each function unit is not
indicated in this diagram.

Parts of the circuit that could not possibly affect the output can be pruned
away. This was done by tracing all possible paths through the circuit that
eventually connect to the output. A `path' not only includes wires, but also
passing from an input to the output of a cell's function unit. It was assumed
that all of a function unit's inputs could affect the function unit's output,
even when the actual function performed meant that this should not
theoretically be the case. This assumption was made because it is not known
exactly how function units connected in continuous-time feedback loops
actually do behave. In Figure 5.6, cells and wires are only drawn if there is a
connected path by which they could possibly affect the output, which leaves
only about half of them.
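The pruning step amounts to a reverse reachability search over the cell-connection graph, conservatively treating every function-unit input as able to affect that unit's output. A sketch (the graph representation is invented for illustration):

```python
from collections import deque

def prune(drives, output_cell):
    """Return the set of cells with some connected path to the output.

    `drives` maps each cell to the cells it feeds (via wires, or by being
    selected as a function-unit input); we walk backwards from the output.
    """
    feeds = {}                         # reverse graph: who feeds each cell?
    for src, dests in drives.items():
        for dest in dests:
            feeds.setdefault(dest, set()).add(src)

    reachable = {output_cell}
    frontier = deque([output_cell])
    while frontier:
        cell = frontier.popleft()
        for src in feeds.get(cell, ()):
            if src not in reachable:
                reachable.add(src)
                frontier.append(src)
    return reachable
```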


Fig. 5.6. The pruned circuit diagram: cells and wires are only drawn if there is a
connected path through which they could possibly affect the output.

To ascertain fully which parts were actually contributing to the behaviour,
a search was conducted to find the largest set of cells that could have their
function-unit outputs simultaneously clamped to constant values (0 or 1)
without affecting the behaviour. To clamp a cell, the configuration was altered
so that the function output of that cell was sourced by the flip-flop inside its
function unit (a feature of the chip which has not been mentioned until now,
and which was not used during evolution): the contents of these flip-flops
can be written by the PC and can be protected against any further changes.
A program was written to select a cell at random, clamp it to a random
value, perform a fitness evaluation, and to return the cell to its un-clamped
configuration if performance was degraded, otherwise to leave the clamp in
place. This procedure was iterated, gradually building up a maximal set of
cells that can be clamped without altering fitness.
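The clamping search is a simple greedy stochastic procedure. In this sketch, `clamp`, `unclamp`, and `evaluate` are hypothetical stand-ins for the FPGA interface just described (forcing a cell's function output via its flip-flop, and running a long, rigorous fitness test):

```python
import random

def find_clampable_cells(cells, clamp, unclamp, evaluate, baseline,
                         iterations=500):
    """Greedily build up a set of cells that can be clamped to constant
    values (0 or 1) without measurably degrading fitness."""
    clamped = {}
    for _ in range(iterations):
        cell = random.choice(cells)
        if cell in clamped:
            continue                      # already clamped; pick again
        value = random.randint(0, 1)
        clamp(cell, value)                # force the function output
        if evaluate() < baseline:
            unclamp(cell)                 # performance degraded: undo
        else:
            clamped[cell] = value         # harmless: leave clamp in place
    return clamped
```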
In the above automatic search procedure, the fitness evaluations were more
rigorous (longer) than those carried out during evolution, so that very small
deteriorations in fitness would be detected (made difficult by the noise
present during all evaluations). However, there was still a problem: clamping
some of the cells in the extreme north-west corner produced such a tiny
decrement in fitness that the evaluations did not detect it, but by the time
all of these cells of small influence had been clamped, the effect on fitness was
quite noticeable. In these cases manual intervention was used (informed by
several runs of the automatic method), with evaluations happening by
watching the oscilloscope screen for several minutes to check for any
infrequent spikes that might have been caused by the newly introduced clamp.
Figure 5.7 shows the functional part of the circuit that remains when the
largest possible set of cells has been clamped without affecting the behaviour.
The cells shaded gray cannot be clamped without degrading performance,
even though there is no connected path by which they could influence the
output – they were not present on the pruned diagram of Figure 5.6. They
must be influencing the rest of the circuit by some means other than the
normal cell-to-cell wires: this probably takes the form of a very localised
interaction with immediately neighbouring components. Possible mechanisms
include interaction through the power-supply wiring, or electromagnetic
coupling. Clamping one of the gray cells in the north-west corner has only a
small impact on behaviour, introducing either unwanted pulses into the
output, or a small time delay before the output changes state when the input
frequency is changed. However, clamping the function unit of the most
south-eastern gray cell, which also has two active connections routed through
it, degrades operation severely even though that function output is not
selected as an input to any of the NEWS neighbours: it doesn't go anywhere.
This circuit is discriminating between inputs of period 1ms and 0.1ms using
only 32 cells, each with a propagation delay of less than 5ns, and with no
off-chip components whatsoever: a surprising feat. Evolution has been free to
explore the full repertoire of behaviours available from the silicon resources


Fig. 5.7. The functional part of the circuit. Cells not drawn here can be clamped
to constant values without affecting the circuit's behaviour – see main text.

provided, even being able to exploit the subtle interactions between adjacent
components that are not directly connected. The input/output behaviour of
the circuit is a digital one, because that is what maximising the fitness
function required, but the complex analogue waveforms seen at the output
during the intermediate stages of evolution betray the rich continuous-time,
continuous-value dynamics that are likely to be internally present.3
Only a core of 32 out of the 100 cells is involved in generating the
behaviour, even though there was nothing explicitly to encourage small
solutions. The previous chapter provided a possible explanation: solutions will
3 It has been suggested (Johnson, 1996) that `The Laws of Form'
(Spencer-Brown, 1969) could help in the analysis of this type of circuit. This
is currently an unsubstantiated possibility.

tend to be favoured for which the deleterious effect of genetic mutations is,
on average, small. One way for that to happen here is for few of the cells to
be implicated in the behaviour: mutations to the parts of the genotype coding
for the unused cells will have no effect on fitness (except perhaps for the cells
immediately surrounding the functional core). It is not known if this
mechanism was in fact the cause of the circuit's impressive parsimony.
As noted in Section 4.2, if there is a pressure towards parsimony, then
there need be no fear that evolution will construct inefficient circuits if given
far more components (FPGA cells) than are necessary. It was suggested that
networks of mutations that are neutral with respect to fitness, percolating
large distances through genotype space, can be facilitated by the presence of
surplus components. Such `neutral networks' can vastly improve the
performance of an EA like SAGA. It is noticeable that in this experiment,
evolution did not become trapped at a local optimum of poor fitness, which
is contrary to common expectations of the performance of a GA under these
experimental conditions. See Harvey and Thompson (1997) for further
consideration of this issue.
So far, we have only considered the response of the circuit to the two
frequencies it was evolved to discriminate. How does it behave when other
frequencies of square wave are applied to the input? Figure 5.8 shows the
average output voltage (measured using the analogue integrator over a period
of 5 seconds) for input frequencies from 31.25kHz to 0.625kHz. For input
frequencies ≥ 4.5kHz the output stays at a steady +5V, and for frequencies
≤ 1.6kHz at a steady 0V. Thus, the test frequencies (marked F1 and F2 in
the figure) are correctly discriminated with a considerable margin for error.
As the frequency is reduced from 4.5kHz, the output begins to rapidly pulse
low for a small fraction of the time; as the frequency is reduced further the
output spends more time at 0V and less time at +5V, until finally resting at
a steady 0V as the frequency reaches 1.6kHz. These properties might be
considered `generalisation.'

5.4 Interpretation
The results described in this chapter represent the state of the art in intrinsic
hardware evolution at the time of writing. The circuit is small, but definitely
not trivial. For a human designer to solve this problem using only 32 cells
(each with a propagation delay less than 5ns), and no external components
at all, would be very difficult indeed (if feasible at all). There was no
indication that an upper bound on the complexity of circuits that can be
evolved was being approached, even using a very basic GA (within the SAGA
framework), and using a direct genetic encoding (no restrictions or biases to
incorporate domain-specific knowledge). The final circuit occupied only 1%
of the total area of the FPGA, possibly because of the mutation-insensitivity
effect of the previous chapter, so there is great potential. One can only
speculate about the

[Graph: average output voltage (0–5V) against input period (0–1.6ms), with
the two test frequencies marked F1 and F2.]
Fig. 5.8. The frequency response of the final circuit. F1 and F2 are the two
frequencies that the circuit was evolved to discriminate; in fact, for ease of
implementation, they happen to be of period 0.096ms (10.416kHz) and 0.960ms
(1.042kHz) respectively, rather than exactly 10kHz and 1kHz as mentioned in the
main text.

abilities of the entire chip, when used with a process of incremental intrinsic
unconstrained hardware evolution (Section 2.4.2).
The circuit vividly demonstrates the power of unconstrained intrinsic
hardware evolution. With the freedom to explore rich structures and
dynamics, intrinsic evolution has been able to exploit the natural behaviours
arising from the physics of the device. It has even been proven that
interactions between components that a designer would consider to be
spurious or parasitic have been put to use in achieving the desired overall
behaviour. There is a practical difficulty, though. Some properties of the
device are not constant over time, or between nominally identical chips. If
these properties are used in generating the behaviour, the evolved circuit
could stop working when they vary. This issue is the main subject of the next
chapter, which also goes on to consider application domains.
6. Future Work

6.1 Engineering Tolerances


The theory and experiments of the preceding chapters have shown how
intrinsic hardware evolution can exploit every aspect of a reconfigurable
device's physical characteristics, resulting in a remarkably effective use of
small amounts of silicon to perform surprisingly sophisticated behaviours.
However, not all device properties are constant over time, or between
nominally identical chips. For example, as the temperature varies, so do the
resistances, capacitances, and time-delays in VLSI devices. A commercially
useful circuit must operate over a range of temperatures and power-supply
voltages, and be implemented on many different chips, which are always
slightly different from each other.
The major challenge to be faced by unconstrained intrinsic hardware
evolution is to exploit maximally those properties that are sufficiently stable
over the required range of operating conditions to support a robust behaviour,
while at the same time being tolerant to variations in other aspects of the
medium. As an example of the problem, recall the 4kHz oscillator evolved in
a logic simulation in Chapter 3: if the node propagation delays were reset to
new random values, as if transferring the circuit from one FPGA to another,
then the behaviour of the evolved circuit degraded to that typical of the
initial random population.
Evolving circuits will potentially come to depend upon any properties that
are sufficiently stable during evolution for at least the number of generations
it takes to exploit them. This statement of the problem contains the solution
that I propose: to subject the evolving circuits to the range of conditions in
which they will be required to operate. The apparatus will consist of many
FPGA devices, nominally identical, but chosen from different batches (or
even different foundries where this is possible) to represent the full range
of production variations. These chips will be held at different temperatures
(e.g. by using Peltier-effect heat-pumps), and run from different power-supply
voltages, covering the range of operating conditions with which the evolved
circuits are required to cope. The intrinsic evaluation of an individual circuit's
fitness will be the measurement of its ability to perform under all of these
conditions, with the circuit being instantiated on each of the chips. For a

circuit to be fit, it must cope with the full variety of conditions under which
it is tested.
The apparatus is not prohibitively difficult to build, and the speed of
evolution would not be reduced, because the multiple FPGA chips can be
used to carry out fitness evaluations in parallel. It will not be possible to test
the individuals under every possible combination of VLSI process variations
and external parameter variations, nor is this necessary. What is required is
testing under a sufficient spectrum of combinations so that it is easier for the
evolving circuits to generalise over them than to become specially tuned to
the particular set of combinations present on the chips used. Empirical
investigations will be needed to determine the practicalities of this.
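One concrete way to realise "must cope with the full variety of conditions" is worst-case scoring across the chips and operating points. The aggregation choice (minimum rather than, say, mean) is mine, not specified in the text, and `evaluate_under` is a hypothetical helper:

```python
def robust_fitness(evaluate_under, conditions):
    """Score a circuit by its worst-case performance across conditions.

    `evaluate_under(condition)` instantiates the circuit on the given chip,
    at the given temperature and supply voltage, and returns its measured
    fitness; the evaluations can run in parallel, one per FPGA. Taking the
    minimum means a circuit is only as fit as its weakest operating point,
    pushing evolution towards behaviours that generalise across the range.
    """
    return min(evaluate_under(c) for c in conditions)
```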
There are two preliminary and inconclusive indications that the method I
have just proposed could work. The first is the observation that when
oscillators were evolved on an FPGA (Section 3.3.2) in an evolutionary run
lasting only about five minutes, the resulting circuits were extremely sensitive
to temperature variations. So too were the circuits evolved in the early stages
of the tone-discriminator experiment (Chapter 5). However, by the end of the
tone-discriminator run, the circuits operated perfectly over the full 5°C range
of temperatures that the evolving population had encountered due to
night/day and sun/cloud temperature cycling during the two-week
experiment.
Taking the final evolved tone-discriminator, Figure 6.1 repeats the frequency
response measurement originally shown in Figure 5.8, but this time at high
and low temperatures outside the range that prevailed during evolution. The
high temperature was achieved by placing a 60W light-bulb near the chip,
the low temperature by opening all of the laboratory windows on a cool
breezy evening. Varying the temperature moves the frequency response curve
to the left or right, so once the margin for error is exhausted the circuit no
longer discriminates perfectly between F1 and F2.
In the examples given here, at 43.0°C the output is not steady at +5V
for F1, but is pulsing to 0V for a small fraction of the time. Conversely, at
23.5°C the output is not a steady 0V for F2, but is pulsing to +5V for a small
fraction of the time. However, despite the fact that the only time-reference
the system has is the natural dynamical behaviour of the components
(which is temperature dependent), the circuit operates perfectly over the 10°C
range of temperatures to which the population was exposed during evolution.
Taken together, these observations are suggestive (and nothing more)
that circuits will evolve to operate correctly over the range of temperatures
to which they have been exposed during fitness evaluations.
The second indication comes from taking the final population of the tone-
discriminator experiment (generation 5000), and using it to configure a
completely different 10×10 region of the same FPGA chip, as shown in Figure 6.2.
When used to configure this new region, the individual in the population that
was fittest at the old position deteriorated by approximately 7%. However, there
was another individual in the population which, at the new position, was within
[Figure: average output voltage (0.0V to 5.0V) plotted against input period (0 to 1.6 ms), with the F1 and F2 input periods marked and response curves at case temperatures of 23.5, 31.2 and 43.0 Celsius.]
Fig. 6.1. The frequency response of the evolved tone-discriminator, measured at
three different temperatures.

[Figure: experimental arrangement, showing the tone generator feeding the XC6216 FPGA, the output passing through an analogue integrator to an oscilloscope, and the desktop PC supplying the configuration.]
Fig. 6.2. Moving the circuit to a different region of the FPGA.



0.1% of perfect fitness. Evolution was allowed to continue at the new position,
and after only 100 generations perfect performance was regained. When
this new population was moved back to the original region of silicon, the
transfer again reduced the fitness of the individual that used to be fittest, but
there was another individual in the population that behaved perfectly there.
The ease with which evolution was able to adapt the circuits to a completely
different region of the FPGA is suggestive (and nothing more) that
when fitness evaluations measure the ability to perform on a wide range of
different chips, evolution will be able to find a single circuit that works on all
of them. One way to reduce the number of FPGA devices needed to induce
generalisation over all chips of a particular type is to use translations and/or
rotations of the circuits upon the same FPGA, as in the experiment above.
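This placement trick can be sketched as follows (Python; `evaluate_at` is a hypothetical routine that configures the given region of the chip and measures fitness there, and the region and step sizes are illustrative, though the XC6216 does provide a 64×64 cell array):

```python
def placements(region, grid, step):
    """Top-left corners at which a region x region circuit fits on a
    grid x grid FPGA, stepping by `step` cells in each direction."""
    stops = range(0, grid - region + 1, step)
    return [(x, y) for x in stops for y in stops]

def placement_fitness(genotype, evaluate_at, region=10, grid=64, step=27):
    # Worst score over all placements: the same configuration must work
    # wherever it is put, mimicking variation between devices.
    return min(evaluate_at(genotype, pos)
               for pos in placements(region, grid, step))
```

Rotations could be handled the same way, by adding an orientation to each placement tuple.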
Inducing evolution to produce circuits that cope with `engineering
tolerances,' by actually subjecting the evolving circuits to the conditions with
which they must cope, gives the maximum opportunity for the stable properties
of the medium to be exploited. In contrast, the design constraints
discussed in Chapter 3 cope with engineering tolerances far more indiscriminately,
by precluding large swathes of the natural behaviour of the medium from
being put to use in achieving the desired behaviour.
It is quite possible that the scheme proposed above will prove to be
impractical for some reason, and in the absence of an alternative it may become
necessary to impose constraints on evolution to ensure sufficient robustness
of the evolved circuits. But this will be a matter of adjusting a trade-off
between exploitation and tolerance to variations, and will not be embroiled
with the abstraction issues (and their supporting constraints) that are inherent
in conventional design methodologies: more of the natural abilities of the
medium will still be able to shine through. For example, the highly stable
signal from an off-chip crystal oscillator could be provided to the evolving
circuits as an extra input. The evolved circuits would be free to ignore it,
but could use it to stabilise their dynamics if the selection pressure towards
robustness (resulting from the evaluations in different conditions) justified
it. The crystal oscillator would truly be a `timegiver' (Moore-Ede, Sulzman,
& Fuller, 1982), and would not be an enforced constraint on the circuit's
dynamics as is the clock in synchronous digital design. Indeed, the experiments
with the DSM robot controller (Section 3.3.3) showed how a clock provided
as a resource, rather than as a constraint, can actually enrich the available
dynamics.
There is currently no evidence to suggest that the scheme given above for
completely unconstrained evolution will not succeed: the preliminary results
are promising, and the rewards would be great, for the full power of
unconstrained intrinsic hardware evolution would then be a commercial practicality.

6.2 Applications
The application niches of intrinsic hardware evolution are those for which a
highly efficient custom circuit is desirable, but conventional design methods
are found to be inadequate. Characteristics of suitable applications might
be high speed, small size, fault-tolerance, graceful degradation, low power,
low cost, and low component count (high mechanical robustness). What are
currently considered `neural-network' applications are good targets, because
intrinsically evolved circuits may be able to out-perform neural-network
implementations on the above criteria. The ability to distribute the circuit on
a floppy disk, the internet, or other electronic media, as a configuration for
an off-the-shelf FPGA (with its economies of scale), could broaden the
commercial opportunities.
Of particular interest are very high-speed tasks, where the fitness
evaluations can be very short, allowing evolution to progress rapidly. High-speed
signal-processing (Sharman, Esparcia-Alcazar, & Li, 1995; Esparcia-Alcazar
& Sharman, 1996) and pattern recognition are promising areas, where a real-
time intrinsic hardware evolution fitness evaluation could take just a fraction
of a second. The tone-discriminator experiment of the previous chapter was
a step in this direction, but still relatively slow compared to video image
recognition, for instance.
To exemplify some other application domains, consider intrinsically evolving
a circuit to control an autonomous mobile robot. Even though the task is
of long duration, with the fitness evaluations possibly taking minutes, there
are still niches for hardware evolution. Typically, evolutionary roboticists
evolve neural networks simulated in software. But what if the robot is to be
very small, or consume very little battery power, or be highly fault-tolerant
(perhaps the robot is subject to nuclear radiation on a space mission)?
Figure 6.3 shows that intrinsic hardware evolution is possible, even onboard the
`Khepera' miniature robot, which is a commonly used tool in evolutionary
robotics (Mondada & Floreano, 1996).
A simple wall-avoidance behaviour has been evolved for this robot, with
a fitness evaluation measuring the behaviour of the real robot moving in the
real world, being controlled by a real genetically specified circuit instantiated
in the onboard XC6216 FPGA. The GA was run on an off-board PC
attached to the robot by a serial cable during evolution, which could later
be disconnected. The PC received reports of the Khepera's wheel speeds,
and the fitness function was simply to maximise the forward speed of each
wheel. This involves not getting stuck on walls, so a wall-avoiding behaviour
evolved. This is the first intrinsically evolved FPGA controller for a robot.
Naito et al. (1996) describe a very similar experiment, but with the evolving
logic circuits implemented in a software simulation.
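The fitness function just described is simple enough to sketch (Python; the sampling scheme and speed units are hypothetical, not the experiment's actual code):

```python
def khepera_fitness(wheel_speed_samples):
    """Mean forward speed over an evaluation, given (left, right)
    signed wheel speeds reported over the serial link. A robot
    stalled against a wall, or reversing, scores poorly, so
    wall-avoidance is rewarded without ever being specified directly."""
    total = sum(left + right for left, right in wheel_speed_samples)
    return total / (2 * len(wheel_speed_samples))
```

This is an example of selecting for an implicit behaviour: nothing in the function mentions walls at all.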

Fig. 6.3. The miniature Khepera robot. The top two layers are an FPGA extension
turret allowing onboard intrinsic hardware evolution of electronic control systems.
They were designed by the author, and constructed in collaboration with the Xilinx
Development Corp.

The fact that intrinsic hardware evolution can be carried out even onboard
this tiny robot illustrates that its benefits, based around extreme
efficiency through effective exploitation of the silicon, can be brought to
most applications where artificial evolution is an appropriate technique.
7. Conclusion

As promised at the end of the introduction (Section 1.4), the conclusion is
that points 1-3 of my thesis have been shown to be true, using a combination
of theory, verbal argument, and empirical demonstrations:
For intrinsic hardware evolution:
1. Evolution can be allowed to explore circuits that are beyond the scope
of conventional design. With their less constrained spatial structure and
richer dynamical behaviour, these circuits can be of a different nature to
the way electronics is normally envisaged.
- Chapter 3 considered the basic step of abstraction common to all
conventional design methodologies. To facilitate design at an abstract
level, constraints on the circuit's structure and/or dynamics must be
applied to prevent those aspects of the real semiconductor physics that
have been `abstracted away' from influencing the overall behaviour of
the system.
- In a pair of experiments in Chapter 3, in which oscillators were evolved
in simulation and then on a real FPGA, it was shown that evolution is
capable of crafting the dynamics of a complex network of high-speed
electronic components to display behaviour on a desired timescale,
without the imposition of structural or dynamical constraints. The
tone-discriminator experiment of Chapter 5 clearly demonstrated
evolution's ability to explore rich structures and dynamical behaviours
that are obviously radically different to those produced by conventional
design, but which nevertheless achieve the desired behaviour perfectly.
- This first point of the thesis has thus been shown to be true.

2. There is a potential benefit in allowing evolution to do this. The increased
freedom allows evolution to exploit the properties of the implementation
medium more effectively in achieving the task. Consequently, the resulting
circuits can be better tailored to the characteristics of the resources
available.
- Throughout Chapters 2 and 3, it was argued that by relaxing spatial
and dynamical constraints, evolution can be given the freedom to compose
the overall desired behaviour out of the natural physical dynamical
behaviours of the medium. By exploiting the natural properties of
the medium, rather than restricting some of those natural dynamics in
order to implement more abstract functions, the VLSI resources can
be exploited to the full.
- The evolved DSM robot controller for the `Mr Chips' robot demonstrated
this (Chapter 3). By taking a standard electronic architecture
and removing the temporal constraints needed to support a designer's
model, the capabilities of the system were enriched, and evolution was
able to exploit this in building an effective control system from a
minuscule amount of hardware.
- Again, the tone-discriminator experiment vividly proved this point to
be true. Taking an FPGA that was intended to be used in a digital
way, and removing the constraints needed to support the digital design
methodology, a perfect behaviour was evolved in an extraordinarily
small silicon area. It was shown that subtle physical properties of the
silicon were being exploited. A conventional methodology could not
have produced such an efficient circuit, so well tailored to the natural
properties of its medium. To do so, the set of coupled differential
equations describing every piece of doped silicon, oxide, metal, etc.,
and their interactions would have to have been considered at all stages
of the design process: intractable.
- The second point of the thesis has thus been shown to be true.
3. In certain kinds of evolutionary algorithm that can be used for hardware
evolution, there is an effect whereby the phenotype circuits produced
tend to be relatively unaffected by small amounts of mutation to their
genotypes. This effect can be turned to engineering use, such as encouraging
parsimonious solutions or giving a degree of graceful degradation in
the presence of certain hardware faults. There are other mechanisms by
which evolution can be more explicitly induced to produce fault-tolerant
circuits.
- This was unequivocally shown in Chapter 4, which went on to show
how evolution could integrate fault-tolerance considerations with the
process of automatic design, instead of relying on the use of spare parts
added to the final circuit, as is conventional.

This book has aimed to lay the foundations for the new field of intrinsic
hardware evolution, by investigating the relationships with existing knowledge:
conventional electronics design, and natural evolution. The former is a
different process in the same medium, and the latter is a similar process in a
different medium. They are both rich sources of techniques and inspiration,
but intrinsic hardware evolution is a new combination of process and medium,
and its full potential can only be realised by exploring the new forms that
are natural to it.
Whether or not the potentially great engineering benefits of fully unconstrained
intrinsic hardware evolution turn out to be completely realisable in
practical applications (Chapter 6), the fundamental groundwork developed
herein must provide the basis for the future development of the field.
Appendix A.
Circuit Diagram of the DSM Evolvable
Hardware Robot Controller

This appendix gives the circuit diagram mentioned in Section 3.3.3. Other
hardware details of the `Mr Chips' robot are available from the author: they
are a matter of conventional digital design, so are not of direct importance
to this book.
With reference to the DSM circuit diagram, Figure A.1, the circuit functions
as follows. The RAM chip is in fact an MS6130 1k×8 dual-port RAM.
One port is used to provide read/write access for the PC, and the other
supports the feedback connections of the DSM. The `Genetic Latches' are
implemented by 74HCT4053 analogue switches, which (depending on their
control bits) select signals either directly, or after they have passed through
one of the 74HCT273 latches driven by the genetically specified clock. A separate
pair of 74HCT273 registers, which can be written to by the PC, are used
to hold these switch settings. The clock of genetically specified frequency is
generated by an MC68HC811E2 micro-controller (not shown), referenced to
its crystal oscillator by way of a real-time interrupt service routine.
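The behaviour of one `genetic latch' channel can be modelled in software as follows (a hypothetical Python sketch of the selection logic only, not of the 74HCT4053/74HCT273 hardware itself):

```python
class GeneticLatch:
    """One signal channel: a genetically specified control bit chooses
    between the raw asynchronous signal and a clocked register of it."""

    def __init__(self, clocked):
        self.clocked = clocked  # control bit held in a switch register
        self.q = 0              # state of the D-type latch

    def clock_edge(self, d):
        # The register captures its input on each edge of the clock
        # of genetically specified frequency.
        self.q = d

    def output(self, d):
        # The analogue switch routes either the latched or the raw
        # signal onward to the DSM feedback path.
        return self.q if self.clocked else d
```

With clocked=False the channel is transparent and that part of the DSM runs unconstrained; with clocked=True the channel behaves synchronously.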
[Figure: full schematic, too dense to reproduce in text. It shows the MS6130 dual-port RAM, the 74HCT154 address decoding logic, the 74HCT273 switch-control registers and clocked latches, and the 74HCT4053 analogue switches, together with the left and right sonar pulse-train inputs, the square-wave clock of genetically specified frequency, the DSM enable line, and the left and right motor power outputs. Legend: MA0-MA11 is the address bus used by the PC; MD0-MD7 is the data bus used by the PC; `*' before a signal name indicates it is active low; *WE is a write-enable control signal; *OE is a read (output) enable control signal. For the 4053s, a control bit of 0 selects the clocked (nX) input and a 1 the unclocked (nY) input, with pin 6 (INHIBIT) always held at 0.]
Fig. A.1. Circuit diagram for the DSM evolvable hardware robot controller.
Appendix B.
Details of the Simulations used in the
`Mr Chips' Robot Experiment

This appendix gives details of the models referred to in Section 3.3.3.

B.1 The Motor Model


The relationships between the angular velocities of the wheels when the robot
was moving on the ground and the angular velocities if the robot was lifted
into the air were found to be well described by the equations:
ω_L = β_L1 + β_L2 α_L^β_L3 + β_L4 α_R^β_L5 + Gaussian-noise(σ_L, b_L)   (B.1)
ω_R = β_R1 + β_R2 α_L^β_R3 + β_R4 α_R^β_R5 + Gaussian-noise(σ_R, b_R)   (B.2)
where ω_L & ω_R are the angular velocities of the left and right wheels when
on the ground, α_L & α_R are the angular velocities of the left and right wheels
when spinning in the air, and all the β's are constants to be determined.
The β constants were determined by fitting the above equations (with the
noise terms set to zero) to a set of over 280 experimental measurements at
different combinations of motor power settings, with the real robot moving
over the surface on which it was destined to be used. The Nelder-Mead
algorithm built in to a commercial numerical mathematics software package
was used.
Having determined the β constants, the mean squared error of the
experimental data from the resulting model was calculated, and the noise terms
used to add Gaussian noise to the model with standard deviations equal
to the square root of this mean squared error. For the Gaussian-noise(σ, b)
function in the equations above, random numbers would be generated from
a Gaussian distribution with zero mean and standard deviation σ until the
magnitude of the number was less than the bound b; this number was then
returned. Noise would only be added to the equations if they would otherwise
return a non-negative ω. After the possible addition of noise, an ω less than
zero was set to zero.
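The bounded-noise procedure just described translates directly into code (a Python sketch; the function names are mine):

```python
import random

def bounded_gaussian(sigma, bound, rng=random):
    """Sample N(0, sigma^2) repeatedly until the magnitude is below
    `bound`, as in the Gaussian-noise(sigma, b) terms above."""
    while True:
        x = rng.gauss(0.0, sigma)
        if abs(x) < bound:
            return x

def noisy_omega(omega_model, sigma, bound, rng=random):
    """Add noise only if the noiseless model output is non-negative,
    then clamp the final value at zero."""
    if omega_model >= 0.0:
        omega_model += bounded_gaussian(sigma, bound, rng)
    return max(0.0, omega_model)
```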
The constants used were (to 6 significant figures):
β_L1 = 1.15423
β_L2 = 0.121787
β_L3 = 1.74965
β_L4 = 0.162305
β_L5 = 0.996823
σ_L = 0.534553
b_L = 5.00000
β_R1 = 0.899570
β_R2 = 0.141250
β_R3 = 1.00410
β_R4 = 0.0234597
β_R5 = 2.45655
σ_R = 0.660710
b_R = 5.00000

There is significant asymmetry in the model: with the wheels spinning at
the same speed in the air, if the robot was placed on the ground it did not
move in a straight line.

B.2 The Movement Model


The position and orientation (x, y, φ) of the robot in the virtual world were
updated in a time-slicing simulation according to the following equations:
Δφ = arcsin( (r·Δt/d) (ω_R - ω_L) )   (B.3)
Δx = r·Δt · (1/2)(ω_R + ω_L) sin(φ)   (B.4)
Δy = r·Δt · (1/2)(ω_R + ω_L) cos(φ)   (B.5)
where Δt is the length of the time slice, r is the radius of the wheels, and
d is the separation between the wheels. ω_R & ω_L are the modelled speeds of
the wheels on the ground, given by the equations of the previous section on
the basis of the current speeds of the wheels spinning in the air.
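A time-slicing update following these equations might look like the Python sketch below; whether the translation uses the heading before or after its increment is an implementation choice the equations leave open, and this sketch uses the updated heading.

```python
import math

def step_pose(x, y, phi, omega_l, omega_r, r, d, dt):
    """One simulation time slice: heading change from the wheel-speed
    difference (Equation B.3), then translation at the mean wheel
    speed along the heading (Equations B.4 and B.5)."""
    phi += math.asin(r * dt * (omega_r - omega_l) / d)
    v = r * dt * 0.5 * (omega_r + omega_l)
    return x + v * math.sin(phi), y + v * math.cos(phi), phi
```

With equal wheel speeds the heading is unchanged and the robot translates in a straight line, as expected.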

B.3 The Sonar Model


The time-of-flight sonars were intended to work by detecting diffuse reflections
coming back from the first surface struck by the `ping' of ultrasound.
However, if the reflecting surface is very smooth, then the diffuse reflection
will be too weak to be detected, and the sound will take a longer path after
the specular reflection, finally arriving back at the sonar transducer after
reflecting off more than one surface. The walls of the robot's arena were
quite smooth, and specular reflections were common. The physics of sound
(Rayleigh, 1929, pages 89-96) was not found to be a good predictor of this
effect.
Instead, an empirically determined model for the probability of a specular
reflection as a function of the angle of incidence was formulated. A stochastic
approach was necessary, because the effect was found to be highly dependent
on tiny variations in the texture of the walls. If the angle of incidence was θ
radians from the normal, then the probability p of a specular reflection was
given by:
p = 0.833 if θ > 0.698   (B.6)
p = max(0.0, 1.91θ - 0.417) otherwise
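Equation B.6 is straightforward to implement; this Python sketch is illustrative only:

```python
def specular_probability(theta):
    """Probability of a specular sonar reflection at angle of
    incidence `theta` radians from the normal (Equation B.6):
    a linear ramp, clipped below at zero and capped at 0.833."""
    if theta > 0.698:
        return 0.833
    return max(0.0, 1.91 * theta - 0.417)
```

Near-normal incidence (small θ) thus always gives a diffuse reflection, while glancing angles are usually specular.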
Three rays were traced out from the sonar transducer until each met with
an arena wall, and Equation B.6 was then used to decide if there would be a
specular reflection for any of the rays. The range was taken to be the path length
of the shortest ray giving a diffuse reflection. If the beam within the envelope of
rays contained a corner of the arena, then this always gave a diffuse reflection.
If all three rays underwent specular reflection, then the centre ray was traced
on from the reflecting surface to the second surface it met. The range was
then deemed to be this total path length. In a similar way to the motor
model, bounded Gaussian noise was added to the range readings according
to the empirically determined error between the model and reality. The noise
on the crude model of multiple reflections was much greater than the noise
added if one of the rays returned a diffuse reflection from the first surface.
The noise for when the robot was moving was also treated separately from when
it was stationary. The sonar time of flight was proportional to the range, and
this time was given to the micro-controller which was synthesising the sonar
echo waveforms being fed into the DSM.
A large amount of effort went into the construction of this simulation.
The test of adequacy of the simulation is for a control system evolved in the
virtual world to work similarly in reality with the real sonars connected.
Figure 3.8 shows that the models described here were adequate for this particular
behaviour and environment.
References

Akers, S. B. (1978). Binary decision diagrams. IEEE Transactions on Computers, c-27 (6), 509-516.
Aleksander, I., & Morton, H. (1990). An Introduction to Neural Computing. Chapman & Hall.
Andree, H. M. A., Barkema, G. T., Lourens, W., et al. (1993). A comparison study of binary feedforward neural networks and digital circuits. Neural Networks, 6, 785-790.
Aptix Corp. (1996). MP3 system explorer data sheet. See http://www.aptix.com.
Armstrong, W. W. (1991). Some results concerning adaptive logic networks. Revised version of Tech. Rept. TR90-30, Dept. of Computer Science, The University of Alberta, Edmonton, Alberta, Canada.
Armstrong, W. W., Chu, C., & Thomas, M. M. (1995). Feasibility of using adaptive logic networks to predict compressor unit failure. In Proc. Battelle Pacific Northwest Laboratories Workshop on Environmental and Energy Applications of Neural Networks, Richland, WA, USA.
Armstrong, W. W., & Thomas, M. M. (1994). Control of a vehicle active suspension system model using adaptive logic networks. In Proc. World Congress on Neural Networks (WCNN'94), Program Addendum, pp. 9-14.
Arslan, T., Horrocks, D. H., & Ozdemir, E. (1996a). Structural cell-based VLSI circuit design using a genetic algorithm. In Proc. IEEE Int. Symp. on Circuits and Systems (ISCAS'96), Atlanta, Georgia, USA.
Arslan, T., Horrocks, D. H., & Ozdemir, E. (1996b). Structural synthesis of cell-based VLSI circuits using a multi-objective genetic algorithm. IEE Electronics Letters, 32 (7), 651-652.
Arslan, T., Ozdemir, E., Bright, M. S., et al. (1996c). Genetic synthesis techniques for low-power digital signal processing. In Proc. IEE Colloq. on Digital Synthesis, pp. 7/1-7/5, London, UK.
Ashby, W. R. (1960). Design For A Brain: The origin of adaptive behaviour. Chapman & Hall Ltd., London.
Bade, S. L., & Hutchings, B. L. (1994). FPGA-based stochastic neural networks - implementation. In Proc. IEEE Workshop on FPGAs for Custom Computing Machines, pp. 189-198.
Beer, R. D. (1995). A dynamical systems perspective on agent-environment interaction. Artificial Intelligence, 72, 173-215.
Benten, M. S. T., & Sait, S. M. (1994). GAP: a genetic algorithm approach to optimize two-bit decoder PLAs. Int. J. Electronics, 76 (1), 99-106.
Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology. MIT Press.
Bratt, A., & Macbeth, I. (1996). Design and implementation of a field programmable analogue array. In Proc. ACM/SIGDA 4th Int. Symp. on Field-Programmable Gate Arrays (FPGA'96), pp. 88-93.
Bright, M. S., & Arslan, T. (1996). A genetic framework for the high-level optimisation of low power VLSI DSP systems. IEE Electronics Letters, 32 (13), 1150-1151.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.
Brooks, R. A. (1995). Intelligence without reason. In Steels, L., & Brooks, R. (Eds.), The Artificial Life Route to Artificial Intelligence: Building embodied, situated agents, chap. 2, pp. 25-70. Lawrence Erlbaum Associates.
Brunvand, E. (1991). Implementing self-timed systems with FPGAs. In Moore, W. R., & Luk, W. (Eds.), FPGAs, pp. 312-323. Abingdon EE&CS Books, Abingdon, UK.
Burgess, C. J. (1995). A genetic algorithm for the optimisation of a multiprocessor computer architecture. In Proc. 1st IEE/IEEE Int. Conf. on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'95), pp. 39-44. IEE Conf. Publication No. 414.
Cliff, D., Harvey, I., & Husbands, P. (1993). Explorations in evolutionary robotics. Adaptive Behaviour, 2 (1), 73-110.
Cliff, D., & Miller, G. F. (1996). Co-evolution of pursuit and evasion II: Simulation methods and results. In Maes, P., et al. (Eds.), From Animals to Animats 4: Proc. 4th Int. Conf. on Simulation of Adaptive Behaviour (SAB96), pp. 506-515. MIT Press.
Comer, D. J. (1984). Digital Logic & State Machine Design. Holt, Rinehart and Winston.
Craven, M. P., Curtis, K. M., & Hayes-Gill, B. R. (1994). Consideration of multiplexing in neural network hardware. IEE Proc.-Circuits Devices Syst., 141 (3), 237-240.
Dawkins, R. (1990). The Extended Phenotype: The Long Reach of the Gene. Oxford University Press.
de Garis, H. (1990). Genetic programming: Modular neural evolution for Darwin Machines. In Proc. Int. Joint Conf. on Neural Networks (IJCNN'90), Vol. I, pp. 194-197. Lawrence Erlbaum Assoc., Inc.
de Garis, H. (1993a). Evolvable hardware: Genetic programming of a Darwin Machine. In Albrecht, R., et al. (Eds.), Artificial Neural Nets and Genetic Algorithms: Proc. of the Int. Conf. in Innsbruck, Austria, pp. 441-449. Springer-Verlag.
de Garis, H. (1993b). Growing an artificial brain with a million neural net modules inside a trillion cell cellular automaton machine. In Proc. 4th Int. Symp. on Micro Machine and Human Science, pp. 211-214.
de Garis, H. (1995). The CAM-BRAIN project: The genetic programming of a billion neuron artificial brain by 2001 which grows/evolves at electronic speeds inside a cellular automaton machine. In Proc. Int. Conf. on Artificial Neural Networks and Genetic Algorithms (ICANNGA95), Ales, France.
de Garis, H. (1996). CAM-BRAIN: The evolutionary engineering of a billion neuron artificial brain by 2001 which grows/evolves at electronic speeds inside a Cellular Automaton Machine (CAM). In Sanchez, E., & Tomassini, M. (Eds.), Towards Evolvable Hardware: The evolutionary engineering approach, Vol. 1062 of LNCS, pp. 76-98. Springer-Verlag.
DeHon, A. (1994). DPGA-Coupled microprocessors: Commodity ICs for the early 21st century. In Proc. IEEE Workshop on FPGAs for Custom Computing Machines (FCCM'94), Napa, CA.
Douglas, R., Mahowald, M., & Mead, C. (1995). Neuromorphic analogue VLSI. Annu. Rev. Neurosci., 18, 255-281.
Drechsler, R., Becker, B., & Gockel, N. (1996). A genetic algorithm for the construction of small and highly testable OKFDD-circuits. In Koza, J. R., et al. (Eds.), Genetic Programming 1996: Proc. 1st Annual Conf. (GP96), pp. 473-478. Cambridge, MA: MIT Press.
Durand, S., & Piguet, C. (1994). FPGA with self-repair capabilities. In Proc. 2nd Int. ACM/SIGDA Workshop on Field-Programmable Gate Arrays (FPGA'94), Berkeley, Calif., USA.
Eigen, M. (1987). New concepts for dealing with the evolution of nucleic acids. In Cold Spring Harbor Symposia on Quantitative Biology, Vol. LII. Cold Spring Harbor Laboratory.
Eldredge, J. G., & Hutchings, B. L. (1994). Density enhancement of a neural network using FPGAs and run-time reconfiguration. In Proc. IEEE Workshop on FPGAs for Custom Computing Machines, pp. 180-188.
Elias, J. G. (1992). Genetic generation of connection patterns for a dynamic artificial neural network. In Whitley, L. D., & Schaffer, J. D. (Eds.), Proc. Combinations of Genetic Algorithms and Neural Networks Workshop (COGANN'92), pp. 38-54. IEEE Computer Society Press, Los Alamitos, CA.
Elias, J. G. (1994). Silicon dendritic trees. In Zaghloul, M., Meador, J., & Newcomb, R. (Eds.), Silicon Implementation of Pulse-Coded Neural Networks, chap. 3. Kluwer Academic Press, Norwell, Mass.
Esparcia-Alcazar, A. I., & Sharman, K. C. (1996). Some applications of genetic programming in digital signal processing. In Koza, J. R. (Ed.), Late Breaking Papers at the Genetic Programming 1996 (GP96) Conference, pp. 24-31. Stanford, CA: Stanford University Bookstore.
Faggin, F., & Mead, C. (1990). VLSI implementation of neural networks. In Zornetzer, Davis, & Lau (Eds.), An Introduction to Neural and Electronic Networks, chap. 13, pp. 275-292. Academic Press Inc.
Fogel, L. J., Owens, A. J., & Walsh, M. J. (1966). Artificial Intelligence Through Simulated Evolution. John Wiley & Sons, Inc.
Fourman, M. P. (1985). Compaction of symbolic layout using genetic algorithms. In Grefenstette, J. J. (Ed.), Proc. 1st Int. Conf. on Genetic Algorithms and their Applications, pp. 141-153. Lawrence Erlbaum Associates.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimisation & Machine Learning. Addison Wesley.
Gopalakrishnan, G., & Akella, V. (1992). VLSI asynchronous systems: specification and synthesis. Microprocessors and Microsystems, 16 (10), 517-526.
Green, D. (1985). Modern logic design. Addison-Wesley.
Grimbleby, J. B. (1995). Automatic analogue network synthesis using genetic algorithms. In Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'95), pp. 53-58. IEE Conf. Publication No. 414.
Gruau, F. (1994). Neural network synthesis using cellular encoding and the genetic algorithm. Ph.D. thesis, Ecole Normale Superieure de Lyon.
Harvey, I., Husbands, P., & Cliff, D. (1993). Genetic convergence in a species of evolved robot control architectures. In Forrest, S. (Ed.), Proc. 5th Int. Conf. on Genetic Algorithms, p. 636 (1 page summary). Morgan Kaufmann. (Full paper available as CSRP 267, COGS, University of Sussex, UK.)
Harvey, I., Husbands, P., Cliff, D., Thompson, A., & Jakobi, N. (1997). Evolutionary robotics: the Sussex approach. Robotics and Autonomous Systems, 20, 205-224.
Harvey, I. (1992a). The SAGA cross: The mechanics of recombination for species with variable length genotypes. In Manner, R., & Manderick, B. (Eds.), Proc. Parallel Problem Solving from Nature (PPSN) II, pp. 269-278. North-Holland.
Harvey, I. (1992b). Species Adaptation Genetic Algorithms: A basis for a continuing SAGA. In Varela, F. J., & Bourgine, P. (Eds.), Towards a Practice of Autonomous Systems: Proc. 1st Eur. Conf. on Artificial Life, pp. 346-354. MIT Press.
Harvey, I. (1995). The artificial evolution of adaptive behaviour. DPhil thesis, COGS, University of Sussex, UK.
REFERENCES 107

Harvey, I., Husbands, P., & Cliff, D. (1994). Seeing the light: Artificial
    evolution, real vision. In Cliff, D., et al. (Eds.), From Animals to Animats 3:
    Proc. 3rd Int. Conf. on Simulation of Adaptive Behaviour, pp. 392–401.
    MIT Press.
Harvey, I., & Thompson, A. (1997). Through the labyrinth evolution finds
    a way: A silicon ridge. In Higuchi, T., & Iwata, M. (Eds.), Proc. 1st
    Int. Conf. on Evolvable Systems: From Biology to Hardware (ICES'96),
    Vol. 1259 of LNCS, pp. 406–422. Springer-Verlag.
Hemmi, H., Mizoguchi, J., & Shimohara, K. (1994). Development and evolution
    of hardware behaviours. In Brooks, R., & Maes, P. (Eds.), Artificial
    Life IV: Proc. 4th Int. Workshop on the Synthesis and Simulation of
    Living Systems, pp. 371–376. MIT Press.
Hemmi, H., Mizoguchi, J., & Shimohara, K. (1996a). Development and evolution
    of hardware behaviours. In Sanchez, E., & Tomassini, M. (Eds.),
    Towards Evolvable Hardware: The evolutionary engineering approach,
    Vol. 1062 of LNCS, pp. 250–265. Springer-Verlag.
Hemmi, H., Mizoguchi, J., & Shimohara, K. (1996b). Evolving large scale
    digital circuits. In Langton, C. (Ed.), Artificial Life V: Proc. 5th Int.
    Workshop on the Synthesis and Simulation of Living Systems, pp.
    168–173. Cambridge, MA: MIT Press.
Higuchi, T., & Hirao, Y. (1995). Evolvable hardware with genetic learning.
    In Proc. 2nd Workshop on Synthetic Worlds – Modelizations, Practices,
    Societal Impacts, Paris.
Higuchi, T., Iba, H., & Manderick, B. (1994a). Applying evolvable hardware
    to autonomous agents. In Davidor, Y., Schwefel, H.-P., & Männer, R.
    (Eds.), Proc. Parallel Problem Solving from Nature (PPSN) III, Vol.
    866 of LNCS, pp. 524–533. Springer-Verlag.
Higuchi, T., Iba, H., & Manderick, B. (1994b). Evolvable hardware. In
    Kitano, H. (Ed.), Massively Parallel Artificial Intelligence, pp. 195–217.
    MIT Press.
Higuchi, T., Iwata, M., Kajitani, I., et al. (1996a). Evolvable hardware and
    its application to pattern recognition and fault-tolerant systems. In
    Sanchez, E., & Tomassini, M. (Eds.), Towards Evolvable Hardware, Vol.
    1062 of LNCS, pp. 118–135. Springer-Verlag.
Higuchi, T., Iwata, M., Kajitani, I., et al. (1996b). Evolvable hardware with
    genetic learning. In Proc. of the IEEE Int. Symp. on Circuits and
    Systems (ISCAS96), Atlanta, USA.
Higuchi, T., & Iwata, M. (Eds.). (1997). Proc. 1st Int. Conf. on Evolvable
    Systems: From Biology to Hardware (ICES'96). Vol. 1259 of LNCS.
    Springer-Verlag.
Higuchi, T., Niwa, T., Tanaka, T., et al. (1993a). Evolving hardware with
    genetic learning: A first step towards building a Darwin Machine. In
    Proc. 2nd Int. Conf. on the Simulation of Adaptive Behaviour 1992
    (SAB92), pp. 417–424. MIT Press.
Higuchi, T., Niwa, T., Tanaka, T., et al. (1993b). A parallel architecture for
    genetic based evolvable hardware. In Proc. 13th Int. Joint Conf. on
    Artificial Intelligence (IJCAI93), Workshop on Parallel Processing for
    Artificial Intelligence, pp. 46–52.
Hikage, T., Hemmi, H., & Shimohara, K. (1996). Hardware evolution with
    genetic diversity. In Langton, C. (Ed.), Artificial Life V: Proc. 5th Int.
    Workshop on the Synthesis and Simulation of Living Systems, p. 191
    (poster paper). Cambridge, MA: MIT Press.
Hill, A. M., & Kang, S.-M. (1994). Genetic algorithm based design
    optimization of CMOS VLSI circuits. In Davidor, Y., Schwefel, H.-P., &
    Männer, R. (Eds.), Proc. Parallel Problem Solving from Nature (PPSN)
    III, Vol. 866 of LNCS, pp. 546–555. Springer-Verlag.
Hillis, W. D. (1992). Co-evolving parasites improve simulated evolution as
    an optimization procedure. In Langton, C., et al. (Eds.), Artificial Life
    II, pp. 313–324. Addison-Wesley.
Hirst, A. J. (1996a). Evolving adaptive computer systems. In
    Proc. 1st On-line Workshop on Soft Computing (WSC1). See
    http://www.bioele.nuee.nagoya-u.ac.jp/wsc1/.
Hirst, A. J. (1996b). Notes on the evolution of adaptive hardware. In Parmee,
    I. (Ed.), Proc. 2nd Int. Conf. on Adaptive Computing in Engineering
    Design and Control (ACEDC96). Univ. of Plymouth, UK.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann
    Arbor: University of Michigan Press.
Holland, O. (1996). Grey Walter: the pioneer of real artificial life. In Langton,
    C. (Ed.), Artificial Life V: Proc. 5th Int. Workshop on the Synthesis
    and Simulation of Living Systems, pp. 16–23. Cambridge, MA: MIT
    Press.
Horowitz, P., & Hill, W. (1989). The Art of Electronics (2nd edition).
    Cambridge University Press.
Horrocks, D. H., & Khalifa, Y. M. A. (1994). Genetically derived filter circuits
    using preferred value components. In Proc. IEE Colloq. on Analogue
    Signal Processing, pp. 4/1–4/5, Oxford, UK.
Horrocks, D. H., & Khalifa, Y. M. A. (1995). Genetically evolved FDNR
    and leap-frog active filters using preferred component values. In Proc.
    Eur. Conf. on Circuit Theory and Design (ECCTD'95), pp. 359–362,
    Istanbul, Turkey.
Horrocks, D. H., & Khalifa, Y. M. A. (1996). Genetic algorithm design
    of electronic analogue circuits including parasitic effects. In
    Proc. 1st On-line Workshop on Soft Computing (WSC1). See
    http://www.bioele.nuee.nagoya-u.ac.jp/wsc1/.
Horrocks, D. H., & Spittle, M. C. (1993). Component value selection for
    active filters using genetic algorithms. In Proc. IEE/IEEE Workshop
    on Natural Algorithms in Signal Processing, Vol. 1, pp. 13/1–13/6,
    Chelmsford, UK.
Husbands, P., Harvey, I., & Cliff, D. (1995). Circle in the round: State space
    attractors for evolved sighted robots. Robotics and Autonomous
    Systems, 15, 83–106.
Huynen, M. A., & Hogeweg, P. (1994). Pattern generation in molecular
    evolution: Exploitation of the variation in RNA landscapes. J. Mol.
    Evol., 39, 71–79.
I-Cube, Inc. (1996). IQX family data sheet. See http://www.icube.com.
IMP, Inc. (1996). IMP50E10 EPAC programmable analog signal-conditioning
    circuit: Datasheet. See http://www.impweb.com.
Intel Corp. (1993). 80170NX Electrically Trainable Analog Neural Network.
    Data-sheet 290408-003.
Iwata, M., Kajitani, I., Yamada, H., et al. (1996). A pattern recognition
    system using evolvable hardware. In Voigt, H.-M., et al. (Eds.), Parallel
    Problem Solving from Nature IV: Proc. of the Int. Conf. on Evolutionary
    Computation, Vol. 1141 of LNCS, pp. 761–770. Heidelberg:
    Springer-Verlag.
Jakobi, N. (1996a). Encoding scheme issues for open-ended artificial
    evolution. In Voigt, H.-M., et al. (Eds.), Parallel Problem Solving From
    Nature IV: Proc. of the Int. Conf. on Evolutionary Computation, Vol.
    1141 of LNCS, pp. 52–61. Heidelberg: Springer-Verlag.
Jakobi, N. (1996b). Facing the facts: Necessary requirements for the
    artificial evolution of complex behaviour. CSRP 422, COGS, University of
    Sussex, UK.
Johnson, R. C. (1996). Robot `evolves' without programming – FPGAs
    controlled by genetic algorithm develop new functions. Electronic
    Engineering Times. Issue 854 (June 6th), p. 42.
Kajitani, I., Hoshino, T., Iwata, M., et al. (1996). Variable length
    chromosome GA for evolvable hardware. In Proc. 1996 IEEE Int. Conf. on
    Evolutionary Computation (ICEC96), pp. 443–447, Nagoya, Japan.
Kan, W., & Aleksander, I. (1989). A probabilistic logic neuron network
    for associative learning. In Aleksander, I. (Ed.), Neural Computing
    Architectures: The design of brain-like machines, pp. 156–171. North
    Oxford Academic.
Kauffman, S. A. (1993). The Origins of Order. Oxford University Press.
Kernighan, B. W., & Lin, S. (1970). An efficient heuristic procedure for
    partitioning graphs. Bell System Technical Journal, 49 (2).
Kinget, P., Steyaert, M., & van der Spiegel, J. (1992). Full analog CMOS
    integration of very large time constants for synaptic transfer in neural
    networks. Analog Integrated Circuits and Signal Processing, 2 (4),
    281–295.
Kitano, H. (1990). Designing neural networks using genetic algorithms with
    graph generation system. Complex Systems, 4, 461–476.
Kitano, H. (1996a). Morphogenesis for evolvable systems. In Sanchez, E., &
    Tomassini, M. (Eds.), Towards Evolvable Hardware: The evolutionary
    engineering approach, Vol. 1062 of LNCS, pp. 99–117. Springer-Verlag.
Kitano, H. (1996b). Towards evolvable electro-biochemical systems. In
    Langton, C. (Ed.), Artificial Life V: Proc. 5th Int. Workshop on the
    Synthesis and Simulation of Living Systems, pp. 174–181. Cambridge, MA:
    MIT Press.
Kolen, J. F. (1994). Recurrent networks: State machines or iterated function
    systems? In Mozer, M. C., et al. (Eds.), Proc. 1993 Connectionist
    Models Summer School, Boulder, CO, USA, pp. 203–210. Lawrence
    Erlbaum Associates.
Kosak, C., Marks, J., & Shieber, S. (1991). A parallel genetic algorithm for
    network-diagram layout. In Belew, R. K., & Booker, L. B. (Eds.), Proc.
    4th Int. Conf. on Genetic Algorithms (ICGA-91), pp. 458–465. Morgan
    Kaufmann.
Koza, J. R. (1992). Genetic Programming: On the programming of computers
    by means of natural selection. MIT Press, Cambridge, Mass.
Koza, J. R. (1994). Genetic Programming II: Automatic Discovery of
    Reusable Programs. MIT Press.
Koza, J. R., Andre, D., Bennett III, F. H., et al. (1996a). Evolution of a
    low-distortion, low-bias 60 decibel op amp with good frequency
    generalization using genetic programming. In Koza, J. R. (Ed.), Late
    Breaking Papers at the Genetic Programming 1996 Conference, pp. 94–100.
    Stanford, CA: Stanford University Bookstore.
Koza, J. R., Andre, D., Bennett III, F. H., et al. (1996b). Use of automatically
    defined functions and architecture-altering operations in automated
    circuit synthesis with genetic programming. In Koza, J. R., et al. (Eds.),
    Genetic Programming 1996: Proc. 1st Annual Conf. (GP96), pp.
    132–140. Cambridge, MA: MIT Press.
Koza, J. R., Bennett III, F. H., Andre, D., et al. (1996c). Automated
    WYWIWYG design of both the topology and component values of electrical
    circuits using genetic programming. In Koza, J. R., et al. (Eds.),
    Genetic Programming 1996: Proc. 1st Annual Conf. (GP96), pp. 123–131.
    Cambridge, MA: MIT Press.
Lienig, J., & Brandt, H. (1994). An evolutionary algorithm for the routing
    of multi-chip modules. In Davidor, Y., Schwefel, H.-P., & Männer, R.
    (Eds.), Proc. Parallel Problem Solving from Nature (PPSN) III, Vol.
    866 of LNCS, pp. 588–597. Springer-Verlag.
Lindsey, C. S., & Lindblad, T. (1995). Survey of neural network hardware.
    In Rogers, S. K., & Ruck, D. W. (Eds.), SPIE'95 Proc. Applications
    and Science of Artificial Neural Networks, Vol. 2492. SPIE, Bellingham,
    Wa., USA.
Louis, S. J., & Rawlins, G. J. E. (1991). Designer genetic algorithms:
    Genetic algorithms in structure design. In Belew, R. K., & Booker, L. B.
    (Eds.), Proc. 4th Int. Conf. on Genetic Algorithms (ICGA-91). Morgan
    Kaufmann.
Mange, D. (1993). Wetware as a bridge between computer engineering
    and biology. In Preliminary Proc. 2nd Eur. Conf. on Artificial Life
    (ECAL93), Brussels, pp. 658–667.
Mange, D., Goeke, M., Madon, D., et al. (1996). Embryonics: A new family
    of coarse-grained field-programmable gate array with self-repair and
    self-reproducing properties. In Sanchez, E., & Tomassini, M. (Eds.),
    Towards Evolvable Hardware: The evolutionary engineering approach,
    Vol. 1062 of LNCS, pp. 197–220. Springer-Verlag.
Mange, D., Stauffer, A., Sanchez, E., et al. (1993). Designing programmable
    circuits with biological-like properties. In Annales du Groupe
    CARNAC, EPFL et UNIL, Lausanne, Vol. 6, pp. 53–71.
Mange, D., & Stauffer, A. (1994). Introduction to embryonics: Towards new
    self-repairing and self-reproducing hardware based on biological-like
    properties. In Thalmann, N. M., & Thalmann, D. (Eds.), Artificial
    Life and Virtual Reality, pp. 61–72. John Wiley, Chichester, England.
Marchal, P., Nussbaum, P., Piguet, C., et al. (1996). Embryonics: The birth
    of synthetic life. In Sanchez, E., & Tomassini, M. (Eds.), Towards
    Evolvable Hardware: The evolutionary engineering approach, Vol. 1062
    of LNCS, pp. 166–196. Springer-Verlag.
Marchal, P., Piguet, C., Mange, D., et al. (1994a). Achieving von Neumann's
    dream: Artificial life on silicon. In Proc. IEEE Int. Conf. on Neural
    Networks (ICNN'94), Vol. IV, pp. 2321–2326.
Marchal, P., Piguet, C., Mange, D., et al. (1994b). Embryological
    development on silicon. In Brooks, R., & Maes, P. (Eds.), Artificial Life
    IV: Proc. 4th Int. Workshop on the Synthesis and Simulation of Living
    Systems, pp. 365–370. MIT Press.
Marchal, P., & Stauffer, A. (1994). Binary decision diagram oriented FPGAs.
    In Proc. 2nd Int. ACM/SIGDA Workshop on Field-Programmable
    Gate Arrays (FPGA'94), Berkeley, Calif., USA.
Martin, R. S., & Knight, J. P. (1993). Genetic algorithms for optimization of
    integrated circuits synthesis. In Forrest, S. (Ed.), Proc. 5th Int. Conf.
    on Genetic Algorithms (ICGA93), pp. 432–438. Morgan Kaufmann,
    San Mateo, CA, USA.
Martin, R. S., & Knight, J. P. (1995). Power-profiler: Optimizing ASICs
    power consumption at the behavioural level. In Proc. 32nd Conf. on
    Design Automation (DAC 1995), pp. 42–47, San Francisco, CA, USA.
McIlhagga, M., Husbands, P., & Ives, R. (1996). A comparison of search
    techniques on a wing-box optimisation problem. In Parallel Problem
    Solving from Nature IV: Proc. of the Int. Conf. on Evolutionary
    Computation, Vol. 1141 of LNCS, pp. 614–623. Heidelberg: Springer-Verlag.
Mead, C., & Conway, L. (1980). Introduction to VLSI Systems.
    Addison-Wesley.
Mead, C. A. (1989). Analog VLSI and Neural Systems. Addison-Wesley.
Miczo, A. (1987). Digital Logic Testing and Simulation. Wiley, New York.
Miller, J. F., Bradbeer, P. V. G., & Thomson, P. (1996). Experiences
    of using evolutionary techniques in logic minimisation. In
    Proc. 1st On-line Workshop on Soft Computing (WSC1). See
    http://www.bioele.nuee.nagoya-u.ac.jp/wsc1/.
Miller, J. F., & Thomson, P. (1995). Combinational and sequential logic
    optimisation using genetic algorithms. In Proc. 1st IEE/IEEE Int.
    Conf. on Genetic Algorithms in Engineering Systems: Innovations and
    Applications (GALESIA'95), pp. 34–38. IEE Conf. Publication No. 414.
Mizoguchi, J., Hemmi, H., & Shimohara, K. (1994). Production genetic
    algorithms for automated hardware design through an evolutionary process.
    In Proc. 1st Int. Conf. on Evolutionary Computation (ICEC'94). IEEE
    Press.
Mondada, F., & Floreano, D. (1996). Evolution and mobile autonomous
    robotics. In Sanchez, E., & Tomassini, M. (Eds.), Towards Evolvable
    Hardware: The evolutionary engineering approach, Vol. 1062 of LNCS,
    pp. 221–249. Springer-Verlag.
Moore-Ede, M. C., Sulzman, F. M., & Fuller, C. A. (1982). The Clocks
    That Time Us: Physiology of the Circadian Timing System. Harvard
    University Press.
Motorola, Inc. (1998). MPAA Field Programmable Analog Arrays: Product
    information. See http://mot-sps.com/fpaa/.
Murakawa, M., Yoshizawa, S., Kajitani, I., et al. (1996). Hardware evolution
    at function level. In Voigt, H.-M., et al. (Eds.), Parallel Problem Solving
    from Nature IV: Proc. of the Int. Conf. on Evolutionary Computation,
    Vol. 1141 of LNCS, pp. 62–71. Heidelberg: Springer-Verlag.
Murray, A. F., Tarassenko, L., Reekie, H. M., et al. (1991). Pulsed silicon
    neural networks – following the biological leader. In Ramacher, &
    Rückert (Eds.), VLSI Design of Neural Networks, pp. 103–123. Kluwer
    Academic Publishers.
Murray, A. F. (1992). Analogue neural VLSI: Issues, trends and pulses.
    Artificial Neural Networks, 2, 35–43.
Naito, T., Odagiri, R., Matsunaga, Y., et al. (1996). Genetic evolution of a
    logic circuit which controls an autonomous mobile robot. In Langton, C.
    (Ed.), Artificial Life V: Proc. 5th Int. Workshop on the Synthesis and
    Simulation of Living Systems, pp. 120–124 (poster paper). Cambridge,
    MA: MIT Press.
Northmore, D. P., & Elias, J. G. (1994). Evolving synaptic connections for
    a silicon neuromorph. In Proc. 1st IEEE Conf. on Evolutionary
    Computation, IEEE World Congress on Computational Intelligence, Vol. 2,
    pp. 753–758. IEEE, New York.
Oldfield, J. V., & Dorf, R. C. (1995). Field Programmable Gate Arrays:
    Reconfigurable logic for rapid prototyping and implementation of digital
    systems. Wiley.
Oldfield, J. V., & Kappler, C. J. (1991). Implementing self-timed systems:
    Comparison of a configurable logic array with a full-custom VLSI
    circuit. In Moore, W. R., & Luk, W. (Eds.), FPGAs, pp. 324–331. Abingdon
    EE&CS Books, Abingdon, UK.
Paredis, J. (1994). Steps towards co-evolutionary classification neural
    networks. In Brooks, R., & Maes, P. (Eds.), Artificial Life IV: Proc. 4th
    Int. Workshop on the Synthesis and Simulation of Living Systems, pp.
    102–108. MIT Press.
Payne, R. (1995). Self-timed FPGA systems. In Moore, W., & Luk, W. (Eds.),
    Proc. 5th Int. Workshop on Field-Programmable Logic and Applications
    (FPL'95), Vol. 975 of LNCS. Springer-Verlag.
Prosser, F. P. R., & Winkel, D. W. (1986). The Art of Digital Design: An
    introduction to top-down design (2nd edition). Prentice-Hall, Englewood
    Cliffs.
Prusinkiewicz, P., & Lindenmayer, A. (1990). The algorithmic beauty of
    plants. New York: Springer-Verlag.
Rasmussen, S., Knudsen, C., & Feldberg, R. (1991). Dynamics of
    programmable matter. In Langton, C. G., et al. (Eds.), Artificial Life
    II, SFI Studies in the Sciences of Complexity, Vol. X, pp. 211–253.
    Addison-Wesley.
Ray, T. S. (1995). An evolutionary approach to synthetic biology: Zen and
    the art of creating life. In Langton, C. G. (Ed.), Artificial Life: An
    overview, pp. 179–209. MIT Press.
Rayleigh (1929). Theory of Sound, Vol. II. Macmillan & Co., Ltd.
Sanchez, E., & Tomassini, M. (Eds.). (1996). Towards Evolvable Hardware:
    The evolutionary engineering approach, Vol. 1062 of LNCS.
    Springer-Verlag.
Schnecke, V., & Vornberger, O. (1995). Genetic design of VLSI-layouts. In
    Proc. 1st IEE/IEEE Int. Conf. on Genetic Algorithms in Engineering
    Systems: Innovations and Applications (GALESIA'95), pp. 430–435.
    IEE Conf. Publication No. 414.
Schneider, C., & Card, H. (1991). Analog VLSI models of mean field
    networks. In Delgado-Frias, J. G., & Moore, W. R. (Eds.), VLSI for
    Artificial Intelligence and Neural Networks. Plenum Press, New York.
Schwefel, H.-P., & Rudolph, G. (1995). Contemporary evolution strategies.
    In Moran, F., et al. (Eds.), Advances in Artificial Life: Proc. 3rd Eur.
    Conf. on Artificial Life, Vol. 929 of LNAI, pp. 893–907. Springer-Verlag.
Scott, S. D., Samal, A., & Seth, S. (1995). HGA: A hardware-based genetic
    algorithm. In Proc. ACM/SIGDA 3rd Int. Symp. on Field-Programmable
    Gate Arrays, pp. 53–59.
Seals, R. C., & Whapshott, G. F. (1994). Design of HDL programmes for
    digital systems using genetic algorithms. In Applications of artificial
    intelligence in engineering IX: 9th Int. Conf. Selected Papers, pp.
    331–338. Southampton: Computational Mechanics Publications.
Sebald, A. V., & Fogel, D. B. (1992). Design of fault tolerant neural networks
    for pattern classification. In Fogel, D. B., & Atmar, W. (Eds.), Proc.
    1st Ann. Conf. on Evolutionary Programming, pp. 90–99. EP Society,
    La Jolla, CA.
Sharman, K. C., Esparcia-Alcazar, A. I., & Li, Y. (1995). Evolving signal
    processing algorithms by genetic programming. In Proc. 1st IEE/IEEE
    Int. Conf. on Genetic Algorithms in Engineering Systems: Innovations
    and Applications (GALESIA'95), pp. 473–480. IEE Conf. Publication
    No. 414.
Smithers, T. (1995). Are autonomous agents information processing
    systems? In Steels, L., & Brooks, R. (Eds.), The Artificial Life Route to
    Artificial Intelligence: Building embodied, situated agents, chap. 4, pp.
    123–162. Lawrence Erlbaum Associates.
Spencer-Brown, G. (1969). Laws of Form. George Allen and Unwin Ltd,
    London. (First USA edition by The Julian Press, Inc. 1972).
Tau, E., Chen, D., Eslick, I., et al. (1995). A first generation DPGA
    implementation. In Proc. 3rd Canadian Workshop on Field-Programmable
    Devices, pp. 138–143.
Teich, J., Blickle, T., & Thiele, L. (1996). An evolutionary approach to
    system-level synthesis. In Proc. 1st On-line Workshop on Soft Computing
    (WSC1). See http://www.bioele.nuee.nagoya-u.ac.jp/wsc1/.
Tessier, R., Babb, J., Dahl, M., et al. (1994). The virtual wires emulation
    system: A gate-efficient ASIC prototyping environment. In Proc. 1994
    ACM Workshop on FPGAs (FPGA'94).
Thompson, A. (1995a). Evolving electronic robot controllers that exploit
    hardware resources. In Moran, F., et al. (Eds.), Advances in Artificial
    Life: Proc. 3rd Eur. Conf. on Artificial Life (ECAL95), Vol. 929 of
    LNAI, pp. 640–656. Springer-Verlag.
Thompson, A. (1995b). Evolving fault tolerant systems. In Proc. 1st
    IEE/IEEE Int. Conf. on Genetic Algorithms in Engineering Systems:
    Innovations and Applications (GALESIA'95), pp. 524–529. IEE Conf.
    Publication No. 414.
Thompson, A. (1996a). Evolutionary techniques for fault tolerance. In Proc.
    UKACC Int. Conf. on Control 1996 (CONTROL'96), pp. 693–698. IEE
    Conference Publication No. 427.
Thompson, A. (1996c). Silicon evolution. In Koza, J. R., et al. (Eds.),
    Genetic Programming 1996: Proc. 1st Annual Conf. (GP96), pp. 444–452.
    Cambridge, MA: MIT Press.
Thompson, A., Harvey, I., & Husbands, P. (1996). Unconstrained evolution
    and hard consequences. In Sanchez, E., & Tomassini, M. (Eds.),
    Towards Evolvable Hardware: The evolutionary engineering approach, Vol.
    1062 of LNCS, pp. 136–165. Springer-Verlag.
Thompson, A. (1997a). An evolved circuit, intrinsic in silicon, entwined with
    physics. In Higuchi, T., & Iwata, M. (Eds.), Proc. 1st Int. Conf. on
    Evolvable Systems: From Biology to Hardware (ICES'96), Vol. 1259 of
    LNCS, pp. 390–405. Springer-Verlag.
Thompson, A. (1997b). Evolving inherently fault-tolerant systems. Proc.
    Instn Mech. Engrs, Vol. 211 Part I, 365–371.
Toffoli, T., & Margolus, N. (1991). Programmable matter: Concepts and
    realization. Physica D, 47, 263–272.
Turton, B. C. H., & Arslan, T. (1995). A parallel genetic VLSI architecture
    for combinatorial real-time applications – disc scheduling. In Proc. 1st
    IEE/IEEE Int. Conf. on Genetic Algorithms in Engineering Systems:
    Innovations and Applications (GALESIA'95), pp. 493–498. IEE Conf.
    Publication No. 414.
van Daalen, M., Jeavons, P., & Shawe-Taylor, J. (1991). Probabilistic bit
    stream neural chip: Implementation. In Delgado-Frias, J. G., & Moore,
    W. R. (Eds.), VLSI for Artificial Intelligence and Neural Networks, pp.
    285–294. Plenum Press, New York.
van Laarhoven, P. J. M., & Aarts, E. H. L. (1987). Simulated Annealing:
    Theory and Applications. D. Reidel Publishing Co.
Wagner, G. P., & Altenberg, L. (1996). Complex adaptations and the
    evolution of evolvability. Evolution, 50 (3), 967–976.
Wagner, G. P. (1995). Adaptation and the modular design of organisms.
    In Moran, F., et al. (Eds.), Advances in Artificial Life: Proc. of the
    3rd Eur. Conf. on Artificial Life (ECAL'95), Vol. 929 of LNAI, pp.
    317–328. Springer-Verlag.
Worden, R. P. (1995). A speed limit for evolution. Journal of Theoretical
    Biology, 176 (1), 137–152.
Xilinx, Inc. (1996a). The Programmable Logic Data Book. See
    http://www.xilinx.com.
Xilinx, Inc. (1996b). XC6200 Advanced product specification V1.0,
    June 1996. In The Programmable Logic Data Book. See
    http://www.xilinx.com.
Yasunaga, M., Masuda, N., Yagyu, M., et al. (1991). Design, fabrication and
    evaluation of a 5-inch wafer scale neural network LSI composed of 576
    digital neurons. In Int. Joint Conf. on Neural Networks (IJCNN'91),
    Vol. II, pp. 527–535. IEEE, New York.
Zetex plc (1996). TRAC020 totally reconfigurable analog circuit. Datasheet.
    See http://www.zetex.com.
Index

Abstraction 7, 35–37, 90
Artificial intelligence, A. Life 7, 16
Biology 7, 10, 16, 56, 57
Clocking see Synchronisation
Co-evolution 56, 70
Configuration (vs. programming) 4
Crossover 6, 33, 62
Custom computing 14
Cybernetics 16
Elitism 4, 62
Embryonics 20–24, 71
Evolutionary algorithm
– Applied to FPGA (example) 4–6
– Families 6
– GA 6
– SAGA 6, 19, 32–33, 64, 78–79, 83–84
– Theory 20–23, 29–33, 56–71, 83–84
Extrinsic hardware evolution
– Definition 6
– Examples 17–20, 24–28
Fitness evaluation 4, 17, 31, 76, 79, 91
Fitness landscape 58
FPGAs
– Analogue counterparts 11, 15, 25
– Desirable characteristics 12–14
– Example digital architecture 2–4
– Grain size 12, 14, 19, 29
– Other reconfigurable systems 14–15
Generation 4
Genotype 4, 24, 30–31, 37–38, 40, 45
Intrinsic hardware evolution
– Definition 6
– Examples 26, 44–46, 48–55, 73–85, 91–92
Locus 6
Medium of implementation 9–10, 15, 16, 37, 56, 82–83, 87–90
Modularity 22, 36–38, 42–44, see also Evolutionary alg. – Theory
Morphogenesis 20, 23–24, 26, 31, 56, see also Evolutionary alg. – Theory
Multiplexing 10, 56
Mutation
– Insensitivity to 57–62, 83–84
– Probability (rate) 6, 33, 41, 45, 51, 56, 74, 78–79
Neural systems 9–12, 20–23, 26, 56, 65, 71, 91
NK fitness landscapes 59–62
Phenotype 4, 24, 30–31, 37–38
Population 4, 57, 70
Programmable matter 20
RAM-based systems 12, 14, 20, 48–55
Recombination 6, see also Crossover
Selection
– Fitness proportional 62
– Rank 4, 33
– Truncation 62
Simulation
– Difficulty of 7, 22, 25, 44–45, 53, 101
– Physical circuit 6–7, 26
– Robot movement 22, 53, 99–100
– Sonar 53, 100–101
– SPICE 25
Synchronisation 9, 15, 36, 38–55, 73, 90
Thermal stability 11, 87–90
Top-down design 9–10, 35–37