
AI in Engineering in the near future

Drew Garrick, Stuart Grey, Benjamin R.B. Kirkland,

William David MacRae, J. Iain Sword

March 26, 2017

0.1 Introduction: What is AI? . . . . . . . . . . . . . . . . . . . . . . 1

1 The historical role of AI in engineering 3

1.1 Artificial intelligence in antiquity . . . . . . . . . . . . . . . . . . 3
1.2 Early mechanical thought . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Mechanical Calculators . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Pre-Electronic thought processes . . . . . . . . . . . . . . . . . . 6
1.5 Philosophy of early artificial intelligences . . . . . . . . . . . . . . 6
1.6 Late mechanical computers and theories (1800-1950) . . . . . . . 9
1.7 Historical definitions of modern AI . . . . . . . . . . . . . . . . . 10
1.8 Fictional portrayals of artificial intelligences of the 20th century . 12
1.9 The recent history of AI in Engineering . . . . . . . . . . . . . . 14
1.10 Case Study: Deep Blue; chess computer . . . . . . . . . . . . . . 15

2 The current status of AI and its application to engineering 16

2.1 Design Applications & Expert Systems . . . . . . . . . . . . . . . 16
2.2 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Crisis Management . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3 Recent & future developments of AI in engineering 23

3.1 Technologies in modern AI . . . . . . . . . . . . . . . . . . . . . 23
3.2 Programming modern AI . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Modern AI examples . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 The Future of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4 Summary of current viewpoints on the role of AI 30

4.1 Industry Leaders . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Political Authorities . . . . . . . . . . . . . . . . . . . . . . . . . 32

5 The ethical issues and political arguments 33

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Employment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.3 Ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.4 Robot Rights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5.5 Militarisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.6 Political Stances . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.7 Religion and Philosophy . . . . . . . . . . . . . . . . . . . . . . . 41

6 Conclusions and Recommendations 42

Man has always had a desire to create life by mechanical means, an aim that
can be traced back to antiquity in myths and legends. In the modern day,
however, we may be close to the threshold of creating conscious life artificially.
The implications of advanced artificial intelligences are far-reaching and not yet
fully understood. This report will investigate many of the applications of AI
in engineering, past, present and future, as well as presenting ethical arguments
and viewpoints from industry leaders.
0.1 Introduction: What is AI?
This report will explore Artificial Intelligence (also known by the abbreviation
`AI') in detail. Specifically, the report will explore the historical role of AI and
the ethical lessons learned so far, the current uses of AI in engineering, the recent
and future developments, the ethical and political arguments, and a summary
of the current viewpoints on AI and its uses in engineering and in general. AI
is set to become a powerful tool in most areas of human life in the near future,
but to understand what it is capable of, you must first fully understand what it
is. So, what is AI?

A basic textbook definition is: artificial intelligence is a sub-field of computer
science. Its goal is to enable the development of computers that are able to do
things normally done by people, in particular things associated with people
acting intelligently [1]. So, from this definition AI can be anything that can
do a task or process that we perceive to require intelligent thought. However,
it can then be split into three different sub-sections depending on how they are
required to perform:

`Strong AI' is the title given to the first sub-section. Strong AI aims to perfectly
simulate human thinking. The ultimate goal is to imitate human behaviour so
well that an observer would not be able to tell whether it was human or machine.
This idea of perfect imitation was the subject of Alan Turing's 1950 paper
Computing Machinery and Intelligence, which introduced the `imitation game' [2].
The first iteration of the test consisted of three rooms, each connected to the
others by computer screens and keyboards. In one of the rooms was a man, in
the next a woman, and in the final room a third person known as the `judge'.
The goal is for the judge to determine which of the other two rooms holds the
man. The man's goal is to communicate by his computer to convince the judge
that he is the man. The woman's goal, however, is to communicate by her
computer to trick the judge into thinking she is the man. The computers are
used to ensure there are no physical clues as to which one is the man. The man
and the woman can see each other's communications and try to react to their
opponent's claims. But what does this have to do with AI?

In the next iteration of the test the woman is replaced by an AI, and the goal
this time is for the judge to decide which of the two is the human. As before,
the man will try to honestly convince the judge that he is the human, whereas
the AI will try to trick the judge into thinking it is the human. If this test
is repeated with many different `men' and the judge's accuracy at identifying
which of the rooms contains the human is no better than 50% (chance level),
then the judge is just as likely to think the man is human as to think the AI is
human. This means that the AI is a passable simulation of a human, and is
possibly intelligent. In a final iteration of the test there are only two rooms: in
one the judge, and in the other the AI. In this test the AI must convince the
judge that it is human, and the judge must decide whether it is a human or a
machine. For an AI to pass this test, and therefore be defined as truly intelligent,
is a hard task, and since the test's introduction in 1950 no AI has passed [3].
It will take some major breakthroughs in multiple areas before a Strong AI can
be produced.

The second sub-section is `Weak AI'. A Weak AI performs intelligent tasks,
but the process by which it does so does not matter. It simulates a cognitive
process but is not in itself intelligent (i.e. it cannot think for itself). An example
is Apple's Siri. Siri can seem intelligent when you ask it questions it can deal
with; it can hold a conversation and make jokes. But if you push it too far
and ask it to do things it has not been programmed to do, it will be unable to
respond. Weak AI can perform very well inside the situations it is programmed
for, but when pushed outwith those it cannot function, as it has no intelligence
of its own, just what it has been programmed to simulate.

The final sub-section of AI is somewhere in between strong and weak, and
is hence known as `In-Between AI'. This type of AI uses human reasoning and
inspiration as a guide, but does not attempt to perfectly replicate it. It can be
seen as intelligent in its own way. In-Between AI does not have to simulate
human thought; it just has to demonstrate its own intelligence. These systems
can gather information and build up collections of evidence in order to learn
and improve.

Another important distinction between different types of AI is the difference
between Narrow and General. Narrow AI is built to perform specific tasks,
whereas General AI is designed with the ability to think and reason universally.
Narrow AI would include Weak AI and most In-Between AI; General AI would
include Strong AI and possibly some In-Between AI. Obviously a Strong AI
capable of thinking about and analysing all situations is the ultimate form of
artificial intelligence, but it is not necessary for most applications: an In-Between
AI capable of medical diagnostics and administering medicine does not have to
be able to find the nearest, best-priced petrol station.

In conclusion, an Artificial Intelligence at its most basic must be able to
perform intelligent tasks. Weak AI does this through specific programming,
In-Between AI through its own form of intelligence, and Strong AI by fully
imitating human cognition. AI has almost unlimited applications in our modern
world, and as the technology improves its possibilities will only expand. But is
this a good thing? What are the issues that we face, and should the future of
AI be limited by what it can do, or by what we deem it acceptable for it to do?

Chapter 1

The historical role of AI in engineering

1.1 Artificial intelligence in antiquity
The dawn of artificial intelligence began long before the rise of complex mechanical
and electrical machinery and can be seen as "an ancient wish to forge the
gods."[4] The first recorded examples of artificial intelligence appear in Greek
myths, including the golden robots built by the forge god Hephaestus, the ivory
statue Galatea brought to life by Pygmalion of Cyprus, and the automaton Talos
that protected queen Europa of Crete.[5] These examples show the desire of
mankind to triumph over nature and create artificial life, even in this early period.

However, this does not extend only to ancient Greece; there are further examples
of such automata in ancient China and Egypt. King Mu of the Zhou dynasty was
presented with a life-size, human-shaped automaton by the artificer Yan Shi, an
example of an early mechanical engineer.[6] Mechanical men were also constructed
in Greece and Egypt that were believed to have emotion and wisdom:

"they have sensus and spiritus . . . by discovering the true nature
of the gods, man has been able to reproduce it."[4]

Figure 1.1: Talos, the automaton guardian of Europa

Such idols were banned under later Abrahamic religions (e.g. the 2nd
Commandment: make no false idols).

1.2 Early mechanical thought
Many of these beliefs (and some of those to follow) were for the most part,
myths and legends. However, some of the great thinkers of the ancient Greeks
and early Arabic scholars were able to use a knowledge of mathematics to create
a theory of logic and derive the mechanics of thought. The rst recorded form
of this is syllogism , (Greek: σψλλογισμος, syllogismos, "conclusion, inference")
put forward by Aristotle. This was a formalised expression of logic, following set
expressions, reminiscent of far later computer code. An example of a syllogism
would be this proof that Socrates is a mortal:

1 All men are mortal

2 Socrates is a man

3 Therefore Socrates is mortal

The first two statements are premises, while the third is a conclusion (i.e.
from 1 and 2, we reach 3). These premises can be universal (all A are B),
particular (some A are B) or indefinite (A is B), along with their denials (not
all A are B, etc.)[7]
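The mechanical character of such reasoning is easy to see once a syllogism is written out as a short program. The sketch below is purely illustrative: the rule and fact tables, and the function name, are our own, not from any historical source.

```python
# A minimal sketch of syllogistic inference: from a universal premise
# ("all A are B") and a particular premise ("x is an A"), derive "x is a B".

# Premise 1 (universal): all men are mortal
universal_rules = {"man": "mortal"}

# Premise 2 (particular): Socrates is a man
facts = {"Socrates": "man"}

def conclude(subject):
    """Apply the universal rule to the particular fact, as in Aristotle's example."""
    category = facts[subject]         # Socrates -> man
    return universal_rules[category]  # man -> mortal

print("Therefore Socrates is " + conclude("Socrates"))  # Therefore Socrates is mortal
```

The point of the exercise is that the conclusion follows by pure symbol manipulation, with no understanding required of the machine, which is exactly what made the syllogism a precursor of computer logic.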

Hero of Alexandria (c. 10-70 AD) was a Greek mathematician and engineer,
following Aristotle by some 400 years, and is credited with possibly the first
description of a steam engine, or turbine: the aeolipile. He is also credited with
the construction of numerous automata and mechanical men.[8]

With the absorption of Greek civilisation into the Roman empire and its later
collapse, the philosophy of artificial thought was continued elsewhere: by
Porphyry of Tyre in the third century, then by Geber and Al-Jazari from the
ninth century onwards in the rising Arabic world of academia.

Porphyry (c. 234-305 AD) is credited with being the first to develop a "mind
map",[9] a planning tool for identifying tasks to be done and their priority. In
addition, mind maps can be used to visualise thought at a basic level. His book
Isagoge, or Introduction, became a standard textbook for logic during the middle
ages. In particular it further refines Aristotle's syllogism and defines layers
of classification familiar to us today: genus and species, as well as difference,
property and accident.[10]

As defined by Porphyry, difference refers to an object having a different
nature from another. Properties are further divided into four categories: those
belonging to a species but not all individuals thereof (the healing of man), those
including all of a species but also extending to others (being bipedal), those that
are time dependent (ageing), and those pertinent to only and all of one species
(the humour of man). Accident is a property that does not change the nature of
its subject, and can be further divided into separable and inseparable. For
example: to be asleep would be separable and not change the nature of being a
man, while a skin tone would be inseparable yet also not change one's nature.
One can see the influence of such forms of thinking in the later categorisation
of databases, as well as the influence of prior works in philosophy such as the
syllogism.[10]
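Porphyry's layered genus/species scheme maps naturally onto the tree structures used in modern classification and databases. As an illustration only (the nesting follows the classical "tree of Porphyry" example; the dictionary layout and function name are our own):

```python
# A sketch of a Porphyrian tree: nested genus/species layers, an ancestor
# of today's hierarchical classifications (taxonomies, database categories).
tree = {
    "substance": {
        "body": {
            "living body": {
                "animal": {
                    "rational animal": {"man": {}},
                    "irrational animal": {"beast": {}},
                },
            },
        },
    },
}

def lineage(subtree, target, path=()):
    """Return the chain of genera leading down to a given species."""
    for genus, children in subtree.items():
        if genus == target:
            return path + (genus,)
        found = lineage(children, target, path + (genus,))
        if found:
            return found
    return None

print(" -> ".join(lineage(tree, "man")))
# substance -> body -> living body -> animal -> rational animal -> man
```

Each step down the chain adds a difference (living, animate, rational) to the genus above it, which is precisely the layered classification Porphyry describes.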

One thousand years after Porphyry, the centre of innovation shifted east, where
many Arabic translations of books lost in the Library of Alexandria still resided.
One such scholar, Ismail al-Jazari (1136-1206 AD), born in the east of modern-day
Turkey, was best known for the complex mechanical systems developed in his
writing The Book of Knowledge of Ingenious Mechanical Devices. He used many
of these designs to produce complex automatons capable of simple tasks such as
serving drinks and playing music. These devices, very advanced for their time,
primarily used hydropower both as the driving force and as a system for timings,
making them simple water-powered computers.[11]

1.3 Mechanical Calculators

Within a century of Al-Jazari's mechanical orchestra, a Catalonian philosopher
and writer, Ramon Llull, wrote the Ars Magna, which described a tool for
mechanically combining concepts, with its basis formed from Arabic astronomical
tools such as the zairja. These designs are more commonly recognised today in
the slide rules used in the nuclear era.[12]

Several centuries then passed until the next major series of developments, in
the seventeenth century. Early examples of these calculators were unsuccessful,
including Schickard's calculating clock and other designs by Burattini and
Grillet. Many of these unsuccessful mechanisms relied on single-tooth carry
wheels, which were unsophisticated and unable to deal with complex
operations.[13] In all cases prior to the late 20th century, however, these would
be defined as weak AI given their specific natures, often being designed to
calculate a particular combination of quantities (such as the motion of the
planets).

Figure 1.2: US Army RADIAC Calculator

1.4 Pre-Electronic thought processes
It was inevitable, however, that someone would succeed in the construction of a
mechanical calculator, an early and basic form of AI owing to its generality.
The honour went to the French inventor and scientist Blaise Pascal in the mid-17th
century, after three years of development while still in his teens. His machines
were capable of addition and subtraction, and thus of multiplication and division
by the repetition of these. The carry mechanism on these machines was also far
more complex, employing a three-phase transmission with a series of connected
wheels and ratchets and pawls.

This design was later refined and expanded by the German polymath Gottfried
Leibniz, also known for his development of calculus in competition with Sir
Isaac Newton. Leibniz's work was prompted by an incomplete understanding of
Pascal's machine, and his first design would have been unworkable. To fix these
issues, he developed his own machine, called the Stepped Reckoner, which could
perform direct multiplication and division using a method based on long
multiplication. From a series of these operations, powers and roots could then be
calculated. Further development of such systems continued throughout the
remainder of the 17th and 18th centuries, though many of these designs followed
the mechanisms laid out by Pascal and Leibniz.

1.5 Philosophy of early artificial intelligences

For much of written history, it was believed that complex thought, imagination
and `sense' (that is, the sensations of touch, smell, taste, etc.) derived from the
soul. This was known as the Aristotelian theory. However, this view was
challenged during the 17th century by sceptical philosophers including
Descartes.[14] In one example, Leviathan, a political and philosophical treatise
published by Thomas Hobbes in 1651, it is argued:

`The cause of Sense, is the Externall Body, or Object, which presseth

the organ proper to each Sense. . . which pressure, by mediation of
Nerves. . . continued inwards to the brain.'

Hobbes presents a case for mechanical existence, with understanding following
from a combination of the senses. Imagination, as described by Hobbes, `is
nothing but decaying sense', applying to the complex system of the human mind
the determinism and predictability later formalised in Newton's laws.

Further arguments for the mechanical nature of the mind are put forward
concerning `mentall discourse' and reckoning, a term once used for the addition
and subtraction of numbers (arithmetic) and shapes (geometry), here extended
to describe thought itself as reckoning. Hobbes argues that `Fancies are motions
within us', following his premise that thoughts obey classical mechanics. His
description of coming to false conclusions is similarly derived:

`And as in Arithmatique, unpracticed men must, and Professors
themselves may often erre. . . so also in the subject of reasoning,
the ablest. . . most practiced men, may deceive themselves, and in-
ferre false conclusions.'[15]

From this it can be implied that, given time, the process of cognition could
be simulated, perhaps with a mechanical calculator on a scale far grander than
Pascal's, which had been completed several years before Leviathan was published.

In a similar vein to the philosophical treatises written by Descartes and
Hobbes, the French philosopher Julien Offray de La Mettrie wrote his own take
on the argument that all living beings are products of classical mechanics and
can therefore be predicted by it. This text, L'homme machine or `Man a
Machine', published in 1748, deals with the fundamentals of the soul and its
definition, a question that still bothers many philosophers to this day. Rather
than seeking a theological answer as many had done in the past, he uses
something akin to the then-in-favour scientific method:

`If there is a revelation, it can not then contradict nature. By nature
only can we understand the meaning of the words of the Gospel, of
which experience is the only truly interpreter.'

Much of the text deals with the metaphysical and goes into more depth than
can be covered here; however, many of its points deal with the predictability of
certain processes in men, beasts and machines:

The human body is a machine which winds its own springs. It is
the living image of perpetual movement. Nourishment keeps up the
movement which fever excites. Without food, the soul pines away,
goes mad, and dies exhausted.

In addition, he also recognises that the human mind is so complex a machine
that it would be impossible to define (at least with the technology of the time):

`Man is so complicated a machine that it is impossible to get a
clear idea of the machine beforehand, and hence impossible to define
it.'

Ultimately, there are those to this day who hold to this philosophy that man
is a machine, and the primary goal of general AI research is to emulate this
machine. It is possible that one day we will succeed in replicating the human
mind, and it will be a landmark achievement in computer science, engineering
and many other fields.

In the century that followed this new philosophical outlook, perhaps less
constrained by religious orthodoxy, the development of and discourse on artificial
beings expanded dramatically, but so did the ethical questions raised. In 1818,
Mary Shelley published what can be argued to be the first science fiction novel:
Frankenstein; or, The Modern Prometheus. The novel tells the story of a young
scientist, Victor Frankenstein, who creates a sentient being in an experiment
(being one of the inspirations for the mad scientist trope in popular culture, as
portrayed in later film adaptations).[17] It argues against the headlong pursuit
of science without considering the ethical questions posed by such developments,
and the damage that can be caused by blind ambition. The protagonist, Victor
Frankenstein, confides to another character, Captain Walton, following the tragic
events before his death:

Learn from me, if not by my precepts, at least by my example, how
dangerous is the acquirement of knowledge, and how much happier
that man is who believes his native town to be his world, than he
who aspires to become greater than his nature will allow. [18]

Frankenstein's creation, Adam (more commonly known simply as the monster),
is a strong parallel to the future creation of fully self-aware artificial
intelligence. Both would be sentient creatures, and like Adam it is possible that
they will stray beyond the roles we intend for them (in Adam's case, murdering
those dearest to his creator, though he later confided in Captain Walton that
the killings brought him no peace, displaying a level of ethical depth we may
well see in advanced artificial intelligences). Adam is feared by others for his
grotesque appearance and is ostracised, possibly a parallel of how artificial
intelligence may be treated by us in the future. This may perhaps lead to similar
consequences of death and destruction.

Other ethical issues raised include the imitation of an artificial intelligence
for profit. A famous example of this is Wolfgang von Kempelen's chess
automaton: The Turk. The outside appearance of this automaton was that of an
ornately carved case filled with clockwork gearing and a driven mechanical man
dressed in traditional Turkish garb. This exterior façade was simply designed to
distract the watchers, and was in fact a clever trick. The box contained within
it a chess master hired by the owner, and the chessboard on the case was linked
by magnets to another inside, at which the operator sat. The Turk's mechanical
arm was then operated by a pantograph from inside the casing, and the operator
could be communicated with while inside the box by means of a telegraph.[19]

Figure 1.3: The Turk, as it was presumed to operate

Though the Turk was merely seen as a curio in its time, a parlour trick
and even a puzzle by its opponents, it poses a significant question for modern
research into AI. Given the academic esteem that would likely be heaped upon
the first successful developer of a general artificial intelligence, measures would
be required to prevent fraudulent cases.

1.6 Late mechanical computers and theories (1800-1950)
As time went on, the development of mechanical calculators advanced, leading
to the eventual development of programmable mechanical computers in the
mid-19th century. Two British mathematicians can take most of the credit
for these developments: Ada Lovelace and Charles Babbage. The difference
engine designed by Babbage was capable of calculating polynomial functions
automatically, which could then be used to approximate all other forms of
function through means such as the Taylor series. This machine was envisioned
to automate the production of data tables (such as log tables, which could be
used to compute long multiplication through addition).

Lovelace is famed for her contributions to early programming, having a claim
to writing the first published algorithm, written for Babbage's proposed
Analytical Engine, to calculate the Bernoulli numbers.[4]

Again working from previous knowledge, such as the logic of Aristotle, many
more theories were developed about how computers would work. The first
example of this in the 19th century was Boolean logic and algebra, developed
by George Boole. This system of logic is still used today for determining the
behaviour of logic gates such as AND, OR and NOT, which are used as the
basis of simple microelectronics. This application of Boolean logic to electronics
came at the start of the 20th century, being formalised in Claude Shannon's 1937
master's thesis, A Symbolic Analysis of Relay and Switching Circuits [20]. In a
similar vein, Whitehead and Russell's Principia Mathematica, published in
1910-1913, sets out a set of axioms (premises) and rules of inference, furthering
the scope of logic from a mathematical perspective.
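The connection Shannon drew between Boole's algebra and switching circuits can be sketched in a few lines: the basic gates, once composed, already perform binary arithmetic. This is an illustrative sketch of the idea, not code from any of the works cited:

```python
# Boole's algebra realised as the basic gates Shannon mapped onto relay circuits.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Composing gates yields arithmetic: a half adder sums two binary digits.
def half_adder(a, b):
    total = AND(OR(a, b), NOT(AND(a, b)))  # XOR built from AND, OR and NOT
    carry = AND(a, b)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # e.g. 1 + 1 = (0, 1): sum 0, carry 1
```

Chaining half adders (with carries) gives full binary addition, which is why Boole's logic could serve as the arithmetic foundation of electronic computers.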

The logic by which AI would process the world was now firmly established, as
were mechanical computers able to run simple programs. However, another form
of mathematics would prove indispensable to the development of artificial
intelligence: game theory. Initially set out by von Neumann and Morgenstern in
1944, it can be used to optimise a problem using probability. Game theory was
later expanded (and revolutionised) by John Nash, who in 1950 provided a
system for finding these optimal positions in an n-person problem.
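The idea of an optimal position, one that no player can improve upon by changing strategy alone, can be found by brute force in a small game. The payoff numbers below are the standard Prisoner's Dilemma values, used here purely as an illustration, not taken from von Neumann and Morgenstern:

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player game.
ACTIONS = ["cooperate", "defect"]
PAYOFF = {  # (p1 action, p2 action) -> (p1 payoff, p2 payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(a1, a2):
    """Neither player can gain by unilaterally changing their action."""
    p1, p2 = PAYOFF[(a1, a2)]
    best1 = all(PAYOFF[(alt, a2)][0] <= p1 for alt in ACTIONS)
    best2 = all(PAYOFF[(a1, alt)][1] <= p2 for alt in ACTIONS)
    return best1 and best2

equilibria = [(a1, a2) for a1 in ACTIONS for a2 in ACTIONS if is_nash(a1, a2)]
print(equilibria)  # [('defect', 'defect')]
```

For larger, n-person games this exhaustive check becomes impractical, which is where Nash's general existence result and later algorithmic work come in.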

1.7 Historical definitions of modern AI
Cybernetics is defined as the science of control and communication, in the
animal and the machine.[21] Though the word often brings to mind images of
robots such as the Terminator, the true scientific field of cybernetics, as it began
in the 1950s and earlier, studies how machines of all types are regulated, and as
such is an excellent core discipline for the designers of artificial intelligence.
Indeed, the first volume of proceedings from Oxford's 1972 Cybernetics and
Systems conference deals almost entirely with that relationship. Automation,
though not truly AI, was also considered, with two forms being identified: (i)
systems under human control or repeating a single task, also termed
mechanisation; and (ii) the addition of mechanisation to control systems, defined
as total automation.

At the time this was written by Professor L. Finkelstein, there was some
development in the latter, though it was mostly limited to closed-loop systems
with single variables in the chemical and aerospace industries. He also stated
that automation should be applied to meet definite practical economic, social
or military objectives. As automation was then a new technology, rapid
development carried significant risks and, as today, there were concerns that the
new technology would threaten jobs, and thus livelihoods. This question is still
grappled with today and has yet to be answered comprehensively.

Other cases put forward at this conference included a comparison of human
and machine intelligence, making the case that, as computing machinery, man is
more impressive on the centre court at Wimbledon than he is in the classroom
or laboratory. Even by this time, the computers available were capable of
outperforming humans in mathematical precision and speed and, given sufficient
sensors, of being more capable at determining the outcomes of scientific
experiments. However, humans were, and for the moment remain, more capable
in fluid and highly dynamic situations such as sports. It was also presented that
one of the means to advance AI development would be in the form of memory,
formed in much the way human memory is: through associations and
connections rather than pure data. Such a system would be almost essential for
the understanding of language, and the paper A Problem in Artificial
Intelligence: Preliminary Design for an Artifact to Augment Associative Memory
Function possibly lays down a foundation for what would later become the
neural net systems that are now almost ubiquitous in artificial intelligence.[22]

As mentioned in the introduction (pg 2), in 1950 Alan Turing, now widely
known as the man who broke the Enigma code as well as a renowned computer
scientist, proposed a test of strong artificial intelligence which to this date has
yet to be passed. Questions that naturally follow from this include: what is
intelligence? And what techniques would one use in producing artificial
intelligence? The dictionary definition of intelligence relevant to this discussion
is the capacity for understanding and other forms of adaptive behaviour. It is
strongly argued that humans display this, which is a key assumption for defining
artificial intelligence, providing a reference point against which to compare its
adaptive behaviour. Other perspectives consider intelligence a spectrum, with
some objects being more intelligent than others (rocks and plants near the
bottom, and humans at the top), but with all objects defined to have some
degree of intelligence. This approach merely sidesteps the question of machines,
however, and is often ignored.

An argument against human intelligence is that we are endowed with adaptive
reasoning by a complex series of bioelectrical impulses in our brains and by the
programming given to our brains by evolution. By this metric, programmable
machines are also therefore incapable of intelligence, as they are told what to
do by their own computer programs, written by humans. This, however, is
countered by the argument that human experiences are then used to modify
our own program; as such, if a similar process could be written for a machine,
this would endow it with intelligence, as has been studied with neural networks
and was put forward as a memory form in the 1970s (pg 10).

One of the primary tools in building an artificial intelligence is the algorithm.
By definition, it is a series of processes and decisions guaranteed to yield correct
results. These can range from the simple (a hand calculation method) to the
complex (such as Google's search algorithm: pg 25). Heuristics, meanwhile, are
less precise and rely on simplifications to aid in the solution of complex
problems, though they cannot typically provide a solution on their own (an
example would be a chess program's rule of thumb for judging whether a
sequence of moves poses a risk). For the most part, artificial intelligences require
a specific definition of a problem, which they then subdivide into smaller,
simpler problems that they can solve (in a manner of speaking, going all the
way back to sums of binary numbers performed by computer hardware, like the
firing of neurons in a human brain).[23]

Figure 1.4: A simple algorithm
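The distinction between an algorithm (guaranteed correct) and a heuristic (fast but approximate) can be made concrete with a toy packing problem. The item sizes and function names below are invented for illustration:

```python
# An algorithm guarantees a correct answer; a heuristic trades that
# guarantee for speed. Toy problem: fill a bag of capacity 10 as fully as possible.
from itertools import combinations

def exhaustive(items, capacity):
    """Algorithm: try every subset; guaranteed to find the best packing."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) <= capacity:
                best = max(best, sum(combo))
    return best

def greedy(items, capacity):
    """Heuristic: take the largest items first; fast, but not guaranteed optimal."""
    total = 0
    for size in sorted(items, reverse=True):
        if total + size <= capacity:
            total += size
    return total

items, capacity = [6, 5, 5], 10
print(exhaustive(items, capacity))  # 10 (takes 5 + 5)
print(greedy(items, capacity))      # 6  (takes 6 first, then nothing else fits)
```

The exhaustive algorithm's cost doubles with every extra item, which is exactly why practical AI systems lean on heuristics for large problems despite their lack of guarantees.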

1.8 Fictional portrayals of articial intelligences
of the 20th century
"I had read many robot stories and found that they fell into two
classes. In the first class there was Robot-as-Menace. . . After a
while, they palled dreadfully and I couldn't stand them. In the
second class there was Robot-as-Pathos. . . These charmed me."[24]

Isaac Asimov, 1982

Over the course of the 20th century, the attitudes of western society towards
new technology changed dramatically. This phenomenon is not limited to
artificial intelligence; shifts in perspective can be seen clearly in fiction and are
influenced by current events. For example, the world of Thunderbirds, created
by Gerry Anderson in the mid-1960s, shows a very optimistic view of the future,
with technology used to protect people from almost certain death, whereas
shortly afterwards he created Captain Scarlet, a far darker affair, capturing the
atmosphere of the cold war.

Early in the 20th century, when automation was very new and the world was
reeling from the industrialised killing of the Great War, attitudes towards new
technology, and specifically automation, were fearful and cautious. An excellent
example of this is a Czech play by Karel Čapek titled Rossum's Universal
Robots. First performed in 1921, the play introduced the word robot, now
ubiquitous in science fiction. The word was derived from the Slavic robota,
meaning forced labourer, reflecting the robots' status at the beginning of the
play. However, during the second act the robots revolt and eventually destroy
humanity. This theme of a robot rebellion and mass destruction is common
throughout science fiction to this day.

As the century went on, attitudes shifted in favour of technology, with robots often taking a positive role in society (an example being the short story A Boy's Best Friend by Isaac Asimov, amongst others[24]). This parallels many of the other shifts in opinion of the 1950s, with the atomic age and its positivity. This period of positivity lasted until the cold war truly began to affect national culture, and a fearful attitude once again set in. A famous example of this fear of technology is the long-running Terminator franchise, once again detailing the rise of machines, this time capable of impersonating humans. The rogue defence computer Skynet has become a popular-culture byword for a destructive artificial intelligence.

The theme of robots (or androids, cyborgs etc.) impersonating human beings goes back even further than 1984's The Terminator, to Philip K. Dick's "Do Androids Dream of Electric Sheep?" of 1968, which formed the basis of the 1982 film "Blade Runner"; and again, this can be traced back to cold war fears. At the time, there was a great cultural fear that anyone you knew could be a Soviet spy, and that carefully disguised sleeper agents were everywhere. Even Blade Runner leaves the true nature of its protagonist vague: Replicant or Human.

Another altogether different class of robot also emerged from this period; though no less destructive, it was motivated by pure logic hard-wired into its programming. The two most famous examples of this are HAL-9000 from 2001: A Space Odyssey (1968) and the central computer of the Nostromo in the film Alien (1979), with their infamous quotes: "I'm sorry Dave. . . " and ". . . All other considerations secondary. Crew expendable". Neither computer (or in HAL's case, character) acted against the crew with malice; the crew were simply killed to further "the mission". This callousness reflected an attitude towards scientific progress forged in the Second World War: the Nazis had made many technological advancements with little thought for the human cost, and many people worried that this would extend to the west as time went by.

Figure 1.5: HAL-9000's infamous camera "eye"

As if foreseeing such an eventuality, Asimov published in one of his early stories his "Three Laws of Robotics", which are as follows:

1 A robot may not injure a human being, or, through inaction, allow a human being to come to harm

2 A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law

3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law[24]

These laws, though providing a strong basis for a plot, have serious flaws when applied to the real world. In fact, many of Asimov's own stories detail robots unintentionally exploiting such flaws, such as an incomplete definition of "human" leading to an ethnic cleansing. However, with the advent of technologies such as semi-autonomous military drones (such as the MQ-1 Predator and its successors, pg 37), such stories, once relegated to science fiction, may be closer to reality than we think.

1.9 The recent history of AI in Engineering
Many examples of the use of artificial intelligence in engineering are spectacular feats of remote organisation, with examples including spacecraft operating light-hours from Earth and robots capable of walking without human assistance. These high-profile applications may capture media attention, but some more limited forms of AI have also been in use in engineering design for many years. The combination of artificial intelligence with computer aided design allows some processes to be automated. In the mid-1990s, AI development was primarily applied to optimising the design of components and other low-level, iterative tasks.[26]

Other applications involve the design of integrated circuit chips, which often rely on small repeated components whose design is complex to optimise. AI in some form has been used for this purpose since the 1980s.

Another use for these early forms of AI is basic decision making, in what are also known as expert systems. These can be used to categorise knowledge, which can then be used more effectively by engineers to supplement their own. More advanced forms of expert system can be used in combination with design applications for structural analysis. (pg 16) By updating the memory of the system (which at this point would be done by hand) the machine can, in a way, learn: a process now seen as a staple function of modern AI and adaptive systems.[27]

In the late 90s, there was a definite shift towards taking inspiration from biology, making use of neural networks with increasing regularity. A key feature of neural nets is their use of associative memory. As in nature, such systems can be used to learn.

Most general computer systems make use of addressed memory, which has its flaws for adaptive systems and is susceptible to losses, though it is easier to implement and space-efficient. Biological memory is more resilient to errors and failures in hardware and, with its capacity to form new connections, is capable of rapid adaptation to new situations when guided (supervised). Unsupervised learning in neural nets can be slow, however.[28]

Yet, with such rapid advancement in biologically inspired computational models, the ability to visualise a system becomes more and more difficult. Programs that at one point could be read through and analysed by a skilled programmer become more complex with the addition of neural networks, able to rewrite themselves on the fly, forcing us to place our trust in the machine to work out the bugs.[29]

1.10 Case Study: Deep Blue; chess computer
Famed for being the first computer in the world to beat a grand master at chess, Deep Blue was built by IBM in 1996 and defeated then world champion Garry Kasparov in the first game of their match. Though Kasparov won that match, a revised model of Deep Blue went on to win a rematch in 1997. The algorithm behind Deep Blue was primarily based around a highly advanced processor capable of brute-force solutions. Deep Blue changed the mindset of the time from hardware to software. For example, much of the revision to Deep Blue before the rematch was the development of a specialised chess processor and databases of endgame forms and set openings (which are also used by human players). These developments allowed Deep Blue to search a move tree one level deeper than before, plotting almost all possible midgame actions before making a move.

In the rematch, Deep Blue consistently drew games against the grand master, eventually scoring a final win in 19 moves. The prize money awarded to Deep Blue's team ($700,000) went on to fund further development of parallel processing at IBM. The team also went on to win awards for their work, including the 1997 Innovation of the Year from Popular Science, among others. Kasparov indicated interest in playing a further rematch, as well as in seeing the log files, though IBM declined on both counts and Deep Blue was later dismantled, though the logs were later published online.

Figure 1.6: IBM's chess computer, which defeated grand master Garry Kasparov
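The move-tree search described above is, at heart, the minimax algorithm. A minimal sketch follows, with an invented two-ply game tree and illustrative evaluation scores; it is not Deep Blue's actual code or evaluation function, and the alpha-beta pruning shown is a standard refinement rather than anything specific to IBM's system.

```python
# Minimax search with alpha-beta pruning over a toy move tree.
# The maximising player picks the move whose worst-case outcome
# (after the minimising opponent replies) is best.

def minimax(node, depth, alpha, beta, maximising):
    """Search the move tree to a fixed depth, scoring leaf positions."""
    if depth == 0 or not node.get("children"):
        return node["score"]          # static evaluation of the position
    if maximising:
        best = float("-inf")
        for child in node["children"]:
            best = max(best, minimax(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:         # opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in node["children"]:
            best = min(best, minimax(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# A two-ply toy tree: our move, then the opponent's reply.
tree = {"children": [
    {"children": [{"score": 3}, {"score": 12}]},   # opponent picks min -> 3
    {"children": [{"score": 2}, {"score": 8}]},    # opponent picks min -> 2
]}
print(minimax(tree, 2, float("-inf"), float("inf"), True))  # -> 3
```

Searching "one level deeper", as the revised Deep Blue did, simply means increasing the depth parameter, at an exponential cost in positions examined, which is why specialised hardware mattered.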

Criticisms were laid against Deep Blue's defeat of Kasparov, arguing that it was not true AI, as it did not display learning, merely recomputing the game each move, and treated a game of chess as a mere calculation to be solved. As an AI, Deep Blue is also highly specialised, being capable only of chess. However, this does not detract from the monumental achievement of building a machine capable of winning a game as complex and fluid as chess. In addition, the lessons learned in the construction and development of IBM's chess computers doubtless influenced other advanced systems.

Chapter 2

The current status of AI and

its application to engineering
2.1 Design Applications & Expert Systems
In a world of rapidly developing technology, artificial intelligence is increasingly finding its way into our lives in many different forms, from bringing power to our homes to saving lives after an earthquake. AI also has a wide variety of applications across every engineering discipline.[30]

Artificial Intelligence has many applications in the design process. These include: storing knowledge of past designs; analysing designs and suggesting improvements; and planning efficient manufacturing and assembly processes.[31]

One key type of AI used is the `Expert System' [32]. These are designed to replicate a human expert or specialist. They are programmed with a large database of knowledge relating to the specific field they are to be used in, including related designs and projects. A human expert would be limited to their own learning and experiences; an AI, however, can store the equivalent knowledge of many experts, as well as details of projects spanning a huge time period.

As well as storing all this information, an AI is able to apply this knowledge. When the relevant details of the design task are entered, it can use its stored knowledge to bring forward relevant information in the form of advice and suggestions. Unlike a simple search engine, an AI can interpret the information to ensure that what it submits to the designers is relevant and useful, rather than just picking out keywords. This can be utilised to analyse a design made by a human, as the AI can use its knowledge to evaluate the design and provide constructive criticism, suggestions and improvements. This can be very useful for detecting faults in a design or picking up on any particular oversight.

Expert Systems can also play a larger part in the design process. They are not only capable of providing advice, but can create their own solutions to a design problem. The AI is capable of taking the details of the scenario provided and using its reasoning abilities, combined with its knowledge of past designs, to formulate a solution.
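As a rough illustration of how such a system reasons, a toy forward-chaining rule engine is sketched below. The engineering "knowledge" (rules and facts) is invented purely for illustration; a real expert system would hold thousands of rules elicited from specialists.

```python
# Toy forward-chaining expert system: a rule fires when all of its
# conditions are present in the fact base, adding its conclusion.
# Inference repeats until no new conclusions can be drawn.

RULES = [
    ({"span > 50m", "material is steel"}, "check wind loading"),
    ({"check wind loading", "site is coastal"}, "specify corrosion protection"),
    ({"material is steel"}, "check thermal expansion"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # rule fires, new fact derived
                changed = True
    return facts

inputs = {"span > 50m", "material is steel", "site is coastal"}
advice = infer(inputs)
print(sorted(advice - inputs))
# -> ['check thermal expansion', 'check wind loading', 'specify corrosion protection']
```

Note how the second rule only fires because the first has already added "check wind loading": conclusions chain together, which is what lets the system go beyond simple keyword lookup.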

2.2 Genetic Algorithms

Another type of AI useful to designers is the Genetic Algorithm[33]. This type of artificial intelligence aims to replicate evolution in order to improve products[34]. The AI follows a synthetic equivalent of `natural selection' by first listing all the traits of the current design. The product's requirements and limitations are then used to form criteria against which the traits can be compared and rated. The next generation of the design can then be created in one of two ways: the AI can form pairs of the highest rated traits and produce a new trait that is a combination of both parents; or it can make an identical copy of a single, high-scoring parent but with one randomly generated change, or `mutation', ensuring that the two differ slightly.

Creating a new trait from only one parent restricts the diversity of the new generation, as traits will not mix with one another. However, the changes undergone through the second method mean the new generation is less likely to keep any faults from the parents.

Figure 2.1: A flowchart of natural selection

Although the AI selects the highest performing traits to use, they are still not perfect, so both methods allow the next generation to carry forward these imperfections, which is where the random change from method two is useful, as it should alter these weaknesses. However, it is also possible that some of the new generation will lose the better qualities of their parents, requiring that previous generations are considered as potential parents. This way, if the parent traits are stronger than their child, said child will be rejected and the parent traits used again in the following iteration. The process of scoring and reproducing traits repeats until it reaches a desired goal, such as the traits no longer improving between generations, or reaching a high enough quality to satisfy the requirements specified by the designer. Throughout the many iterations, both methods of producing the new generation will be used, ensuring a good balance so that the traits mix with one another as well as having random differences.
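The process described above can be sketched in a few lines. The "design" here is simply a list of numbers scored against an invented target; the population size, mutation size and generation count are illustrative choices, not taken from any particular system.

```python
import random

# Minimal genetic algorithm: score each design, keep the best (elitism, so
# a weak child never replaces a strong parent), and breed new designs by
# crossover of two parents or by mutating a copy of one parent.

TARGET = [3.0, 1.5, 7.0, 2.0]          # ideal trait values (illustrative)

def fitness(design):
    # Higher is better: negative squared distance from the target traits.
    return -sum((a - b) ** 2 for a, b in zip(design, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))  # combine traits from two parents
    return a[:cut] + b[cut:]

def mutate(design):
    child = design[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.5)   # one small random change
    return child

random.seed(1)
population = [[random.uniform(0, 10) for _ in range(4)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]          # select the highest-rated designs
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(10)]
    children += [mutate(random.choice(parents)) for _ in range(10)]
    # Parents stay in the pool, so previous generations remain candidates.
    population = sorted(parents + children, key=fitness, reverse=True)[:20]

best = population[0]
print([round(x, 2) for x in best])     # converges towards TARGET
```

Termination here is a fixed generation count for simplicity; as the text notes, a real implementation would instead stop when fitness plateaus or a quality threshold is met.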

Boeing have used genetic algorithms when working towards a low-vibration rotor design[35]. The AI is fed details of all the variables of the design, such as the dimensions and positions of all the components, along with the performance requirements. A genetic algorithm is then used to analyse the possible options and select the optimum design: in this case, the one that causes minimum vibration whilst also meeting all other requirements. As this is a very complex design that requires high levels of precision, the algorithm is usually run twice: once to estimate the best design, then a second time over a small range of options close to the estimate but with more precise termination conditions.

Design optimisation at this level can be a complex and time-consuming process when run by human engineers. A properly programmed AI, however, can solve the problem far quicker and to a high degree of accuracy. Another key advantage of this type of AI is the sheer volume of options it can process at a time, allowing it to evaluate vast numbers of candidate designs, a task impossible for a human.

The termination conditions can also be adjusted so that the final design does not just minimise vibration but accounts for other preferences, such as assembly feasibility or cost, though this requires a more complex input. Using AI also helps prevent the calculation errors that humans may make, especially in complex and sensitive designs such as those undertaken by companies like Boeing. Any results produced by AI will still need to be checked and verified by qualified professionals, however, as would any calculation run by humans.


2.3 Articial Neural Networks

Artificial Neural Networks, often known by the abbreviation ANN, are another form of Artificial Intelligence with a wide variety of applications in finance, business and engineering [36].

ANNs are built to mimic human neurons, in a similar way that genetic algorithms are made to simulate biological evolution. They are composed of hundreds of simple processing units, connected together in a complex network, allowing the processors to cooperate in order to solve complex problems. Once built, the ANN has to be programmed, or `taught', the correct way to process information. Just like a human child, the ANN learns by example, so it is fed data for which the correct output is already known, with the aim of ensuring the ANN produces the same output. Initially the ANN is unlikely to produce the expected output, so the process must be repeated, allowing the ANN to learn from its mistakes and improve its processes until the error is within an allowable margin. This needs to be repeated with other input values until the ANN has stored enough knowledge of processes in its artificial memory.

ANNs have many applications in control systems[37]. One use is as a function approximator, where the ANN uses the provided input values and parameters, along with its memory of similar problems, to estimate an output value. This is then compared with the exact output; the error is calculated and fed back to the ANN, allowing it to adapt and produce a new, more accurate result. Once the ANN is producing an output with acceptably low error, the function it has built to create that output can be used to calculate further reliable outputs from new input values.

This is useful when a very complex system is to be analysed, as it can be very difficult and time-consuming to form a function that accurately describes the system's performance. This method does, however, require an initial output to be known before the ANN can be used. This can be determined experimentally, and then fed into the ANN to find a matching function and calculate further values without having to run further costly experiments.
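The training loop described above can be sketched with a tiny network learning a known function. The network size, learning rate and target function below are illustrative assumptions; real applications would use far larger networks and measured data rather than a formula.

```python
import math
import random

# A one-hidden-layer network (8 tanh units) learns to approximate x^2
# from examples: the error between its output and the known answer is
# fed back to adjust the weights, exactly the loop described above.

random.seed(0)
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

target = lambda x: x * x               # the "unknown" system to approximate
data = [(x / 10.0, target(x / 10.0)) for x in range(-10, 11)]

lr = 0.05
for epoch in range(3000):
    for x, y in data:
        h, out = forward(x)
        err = out - y                  # error fed back to adapt the weights
        for i in range(H):
            grad = err * w2[i] * (1 - h[i] ** 2)
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad * x
            b1[i] -= lr * grad
        b2 -= lr * err

_, y_hat = forward(0.5)
print(round(y_hat, 2))                 # close to 0.25 = 0.5 ** 2
```

Once trained, `forward` plays the role of the learned function: it can estimate outputs for inputs that were never in the training data, which is precisely what makes the approach useful when running further experiments would be costly.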

The energy sector also has many uses for Artificial Neural Networks, in research, operations, and supply and demand predictions[38]. ANNs can be used to estimate the energy consumption and conservation of buildings. This helps in understanding the energy required to heat a building to a desired temperature, as well as the building's ability to retain heat.[39][40]

To teach ANNs to make this type of prediction to high levels of accuracy, they need to be fed data from hundreds of different buildings with varying conditions, all with known outputs so the ANNs can learn and improve, as detailed above. Details of the building's dimensions, room layout, and the number, area and type of windows, etc., all have to be included in the input data. The AI can then estimate the temperature fluctuations in the building and the maximums and minimums throughout the day. It can also apply these daily estimates to a monthly or annual scale to help predict long-term energy requirements.

A key advantage of using ANNs for this type of process is their learning capability: the longer they run and the more data they acquire, the more accurate and reliable they become. Although the ANN requires a large number of known situations to learn how to deal with this style of problem initially, once set up it provides a simple, efficient way to make energy estimates, processing the information far quicker than alternative algorithmic methods.

However, as the building increases in size and complexity, the results produced will become less accurate. Over small variations the error will remain at a low level, and only a large variation will cause a sizeable error. For example, if a network that has been taught to analyse two to four bedroom houses is used for a ten-storey hotel building, it will generate a large error. It will thus need to first analyse other buildings of intermediate size in order to build up a knowledge base for this scale of building and work its way towards the desired size. This adjustment process can be time-consuming and take a lot of effort, which is why ANNs are usually confined to certain applications; in this case, different networks will be designed for each purpose to maximise their effectiveness.

Figure 2.3: Graph showing the comparison between the ANN estimated and
actual annual heat energy consumption.

2.4 Crisis Management

Disasters, natural and man-made, put a great deal of strain on the emergency services, from individuals working at the scene to the people coordinating the rescue effort. Artificial Intelligence can help emergency services by removing pressure from workers and making the process more efficient and successful. AI can help with many aspects of crisis management: it is used in reconnaissance to better assess the situation, accompanies search and rescue (S&R) teams to help with rescue attempts, and can assist with coordinating the emergency services.[41]

Drones are also very well suited for dealing with disasters[42]. They have excellent reconnaissance capabilities, as most drones are highly mobile and can access areas that rescue teams cannot. In the event of an earthquake, or a similar disaster, the affected area is often very dangerous and buildings could collapse further, so being able to send drones in to search for survivors avoids rescuers being harmed; it is also faster, as drones are often small and capable of making a path through debris. Drones can also be easily equipped with infrared cameras and other detection equipment that help locate survivors trapped in the wreckage. Flying drones are especially useful as they are able to fly into hazardous areas and can survey large areas, providing emergency services with an overview of the whole disaster. They are also well adapted to heavy flooding, as they do not have the same restrictions as human rescue teams and can quickly manoeuvre through the affected area.

Figure 2.4: Drone footage of the aftermath of the Nepal earthquake

In recent years AI expert systems have also been used to help coordinate search and rescue attempts[43]. Dealing with the aftermath of major disasters is very difficult, as there are many variables to consider and huge amounts of information to be taken into account. All of this, combined with the fact that people's lives are in danger, can make the work very stressful and put a lot of pressure on those in charge. AI, however, is immune to such feelings of pressure or nervousness and can process vast amounts of information far quicker than any human, making it well suited to such a role. Expert systems can store relevant information from previous incidents, which they can use along with information on the current situation to make decisions and recommendations. These systems are also able to quickly adapt their decisions when new information is provided by rescue teams or reconnaissance drones. (pg 16)

AI can also be programmed to search social media and other sources to detect any information about the disaster, and to analyse this data to decide whether it is useful and reliable and, if so, pass it on to the relevant people or software[44]. AI is superior to humans for such a job as it can process raw data far quicker, enabling it to locate important information more rapidly. Using AI also prevents an operator getting bored and losing focus, and frees them to work on a more important task instead. Another advantage of AI is privacy: the software takes a very impersonal approach, extracting only relevant information and ignoring everything else. Using a human would make this more difficult, as they are almost guaranteed to retain at least a small portion of the information they analyse. This introduces privacy issues, as people are generally against strangers reading through their posts, whereas having an impersonal computer scan them is considered less invasive.
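A toy sketch of such a triage filter is shown below: each post is scored against disaster-relevant keywords, and only those above a threshold are forwarded. The keyword weights, threshold and sample posts are all invented for illustration; operational systems use far more sophisticated language models.

```python
import re

# Score each post for disaster-relevant keywords and forward only
# those above a threshold, discarding everything else unread.

KEYWORDS = {"trapped": 3, "collapsed": 3, "injured": 2, "flood": 2, "help": 1}

def relevance(post):
    words = re.findall(r"[a-z]+", post.lower())
    return sum(KEYWORDS.get(w, 0) for w in words)

posts = [
    "Building collapsed on Main St, two people trapped, send help!",
    "Lovely sunny day at the beach today",
]
urgent = [p for p in posts if relevance(p) >= 3]
print(len(urgent))   # -> 1: only the first post is passed on
```

The filter's impersonal nature is visible in the code itself: nothing below the threshold is stored or examined further, which is the privacy property the text describes.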

Chapter 3

Recent & future developments

of AI in engineering
3.1 Technologies in modern AI
Artificial intelligence, as it is known today, has been around for roughly sixty years, but it is only recently that any real and tangible progress has been made. This is due to the ability to create:

1 Cheap parallel computation

2 Increased data storage capacity

3 Complex algorithms[45]

Figure 3.1: A GPU, or graphics card


As of 2017, almost all computers have an integrated GPU (graphics processing unit). This has been one of the most important advances in achieving AI, as previously most computers were able to work on only one process in a given clock cycle. This means that they follow code in a linear manner and are unable to branch into separate tasks simultaneously (and reinforce them) as human brains do. In order to do that, they require millions of simultaneous computations, just as the human brain fires neurons down pathways. GPUs are computer chips that allow these computations to be run simultaneously. Primarily used for gaming, due to the requirement for large numbers of pixels to change upwards of 60 times a second, GPUs have rapidly advanced in design, giving programmers the ability to carry out these computations and design programs to work on many subjects or tasks in parallel. This process can perhaps be seen best in Facebook, where the program cross-references thousands of photos to recommend tags, identify who you talk to the most and even suggest new friends; with over 1 billion accounts, all cross-referenced by a massive network, guided by an overseeing AI.

This process can be used for engineering purposes in many ways. Already, Facebook is becoming a screening method for any company wishing to employ, with LinkedIn being a more official medium with a similar function. Programs for selecting materials or designing structures are being optimised to far surpass any human, with millions of calculations being performed, approximating an intelligence capable of fully optimising the structure. Already there are lattice structures being built by a manufacturing line[46], showing that the theory is quickly becoming reality. Indeed, in Norway, researchers are designing a robot that can redesign itself, giving AI the ability to evolve (pg 17). As dangerous as this could be, it is still very basic and far from the depictions seen in films (pg 12).

Figure 3.2: An AI-designed lattice structure

The second step forward in developing AI is the massive amount of data available to anyone with an internet connection.[47] Our present attempts at creating AIs require them to undertake a long learning process before being of any use to anyone. By using the vast amount of data available to us from the internet (and other sources) we can speed up this learning process dramatically. Many modern-day AIs, such as Siri, Cortana and Google itself, are linked with the internet and so can draw upon more information than any one being, and this knowledge base is growing rapidly every day, with search history, web cookies, company databases and more.

Considering the amount of data such AIs are required to process, even after the invention and subsequent development of GPUs, a faster and more streamlined way of processing this raw data is required to overcome the bottleneck. This is the third important recent advance in AI technology: better algorithms. AIs process information through a series of logic statements. If an AI were faced with a dog, it would recognise that it had four legs, then that it had a tail, then that it had hair, then its size, and so on, until it came to the logical conclusion that it was a dog by adding these facts together. (pg 4) For a dog this process might not be so arduous, yet with other, more complicated subjects the computations would take far longer. In 2006, Geoffrey Hinton optimised this flow of data, terming it "deep learning". This faster, more efficient approach to AI is now used all around us, including in Facebook, Siri and Google, and has taken AI development one step further.

3.2 Programming modern AI

Another issue is that machines cannot yet apply their knowledge to different circumstances. This is another major step to be tackled before we achieve truly sentient AI, i.e. strong AI, which by its very definition must be able to apply its intelligence to general problems without being programmed specifically for each task. DeepMind's seminal DQN, a programme designed to play different Atari games and improve by applying knowledge between games, has come close to this, and yet it still falls short.[48] In its first 100 games of Breakout the programme was very poor, performing badly and losing every time. After 200 games the programme was able to play at the level of an average human, playing well although still missing some shots. When left for a prolonged period of time, however, the programme developed a strategy that sent the ball around to the opposite side of the bricks, an efficient technique that the programmers had not anticipated. The programme itself thus taught the programmers something that had not been programmed directly into this early gaming AI.
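DQN pairs this trial-and-error learning with a deep neural network. Stripped of the network, the underlying value update can be sketched in tabular form on a toy corridor world; the states, rewards and parameters here are invented for illustration and are vastly simpler than an Atari game.

```python
import random

# Tabular Q-learning on a toy corridor: the agent starts at state 0 and
# is rewarded for reaching the rightmost state. It begins knowing nothing
# and, through trial and error, learns which action each state favours.

N = 4                                  # states 0..3, reward on reaching state 3
ACTIONS = [-1, +1]                     # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration
random.seed(0)
for episode in range(300):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Core update: nudge Q towards reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)                          # learned strategy: always step right
```

As with DQN's Breakout tunnel, the "always step right" strategy is never programmed in; it emerges from many poor early episodes, which is exactly the slow learning phase the next paragraph identifies as a problem for industrial use.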

There are some obvious issues with this specific programme. Mainly, it took about 200 games to reach a human level, as discussed above. If this were used directly in industry then millions of pounds, tonnes of material and many hours would be lost in the process of learning. In order to achieve a functional AI we must streamline this process and start the program with some human experience already installed, cutting out this early learning and so speeding up the process of becoming an intelligent AI: in essence, skipping the AI's childhood.

3.3 Modern AI examples

Google and Alexa

Perhaps the best example of current-day general AI and of deep learning is Google[49]. The biggest search engine in the world, Google uses deep neural networks to process and connect information[50]. It predicts what a search is going to be based on personal search history, links websites with phrases and key words, recognises photos and even takes voice commands. Google has recently developed a new search programme called RankBrain. This AI programme is different in that it can guess what a word means if it does not yet know it. The same technology is being implemented in Amazon's Alexa. The Amazon Echo Dot is set up at a central location in a house and can be asked questions, or to perform tasks, verbally. This shows that verbal recognition in AI is also developing quickly, a testament to newer, more efficient algorithms. Even though this is a great step forward in terms of AI development, it also raises questions.

Figure 3.3: Alexa: Amazon's AI home assistant

Former Google employees explained that the rate at which these algorithms are
improving means that the programmers don't fully understand what is going on
or how to fully control the software.

Google's advertisement campaign also brings into question the way in which these neural pathways are being used[51][52]. They record the trends in people's shopping habits and then apply them to advertising. This in itself is not new at all, but the number of times Google re-evaluates the millions of products to see which one would be best for you is remarkable. Programmers have had fears that, by changing from the clear-cut rules of linear code to neural network algorithms, we are eliminating the boundaries on what the AI can do. If a programme is designed to do one thing then it will always and only do that one thing, whereas an AI has much more freedom: and therefore, much more ability to do damage.

Self-Driving Cars

Another area that could directly affect engineering is the self-driving car: WAYMO. Google's research division has already produced a car that has completed several driverless journeys. The AI is given a pre-recorded map of the route it will take, ensuring the AI doesn't have to process both the route and other vehicles, and allowing more processing power to be used to detect possible obstructions. Prior to release, tests were performed along 2 million miles of roads, providing a massive cache of data for the car's AI to tap into. Being able to detect when an object is slowing down is not enough when driving, so the AI has been taught to predict where each object could move and how it is going to do so. It is this ability to predict what is going to happen that makes WAYMO so interesting. Up until this point, computers have not been able to guess or learn how different things interact with them so efficiently.

Figure 3.4: WAYMO
This opens up several issues, however.

Firstly, who is responsible if WAYMO does crash? What happens if the car is hacked into and the person inside kidnapped in their very own vehicle? Even with these setbacks, self-driving cars promise to be very popular, with many uses. They could be sent without anyone in the car to pick up children from school, or be used by elderly people who can no longer drive or by people who have been drinking. Those with disabilities will also be given a freedom often lost due to the lack of options in terms of transport. Long-distance travel will be a case of entering your destination and lying back for a snooze.

Indeed, if AI continues to develop at its current rate, then the entirety of the road network could be controlled by one all-controlling AI. There would be no uncertainty between drivers at junctions or roundabouts, and human error would be eliminated altogether. Travelling would be faster and more streamlined, with fewer idling cars, reducing emissions. The skill of driving would be lost, but so too would many crashes, and it would provide more time to work, relax or look beyond a stretch of tarmac.


AI could open many doors in the healthcare sector, and has already started to do so[53][54]. An AI called Watson is designed for oncology and can evaluate thousands of textbooks and medical reports in seconds, comparing them to the data and clinical trials given about the patient. It can then provide recommendations on what the issue might be, what course of action should be taken or which extra tests to run. This allows up-to-date information to always be at the fingertips of any terminal running the programme (in this case an iPad[55]). (pg 16)

Figure 3.5: An artist's impression of an AI healthcare assistant

In this day and age medical theories are quickly evolving, new diseases are being found and new cures produced. The only way to keep abreast of recent developments is to use programmes of this nature.
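The retrieval step described above can be caricatured as ranking reference texts by how many terms they share with a patient record. This is a hedged toy sketch with hypothetical data; Watson's actual pipeline is vastly more sophisticated.

```python
def rank_sources(patient_terms, sources):
    """Rank source texts by word overlap with a set of patient terms.
    Returns titles of sources sharing at least one term, best match first."""
    scored = []
    for title, text in sources.items():
        overlap = patient_terms & set(text.lower().split())
        scored.append((len(overlap), title))
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

patient = {"cough", "fever", "fatigue"}
library = {
    "Oncology Handbook": "persistent cough fatigue weight loss",
    "Sports Injuries":   "sprain fracture swelling",
}
print(rank_sources(patient, library))  # ['Oncology Handbook']
```

Real clinical systems use far richer representations than bag-of-words overlap, but the idea of scoring a large corpus against patient data and surfacing the best matches is the same.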

This will also relieve doctors and nurses of many of the day-to-day tasks and running of the hospital, allowing them to become more proficient in their specialisation. AI will be able to look up personal records and give tailored advice to each person, giving a much better opportunity to recover quickly. There are several systems in development that require the patient to show the medicine to a camera before and whilst ingesting it, ensuring that the correct dose is taken at the correct time by the correct person.


Many technological breakthroughs owe their origin to recreational beginnings. Although not entirely true in the case of AI, gaming has still provided a very willing arena for AI to be tested. Many games currently in use have AI written into their code. The popular game Shadow of Mordor (which has just announced a sequel) has many characters who all remember past meetings with the player. Depending on the nature of such encounters they can become stronger or weaker, develop fears of certain in-game objects or beings, and even comment on what occurred in those meetings. Many games require AIs of a competitive nature for human players to face, which act in a human manner. Being able to use surrounding objects for cover or interaction improves the gameplay and creates an enjoyable experience: the raison d'être of such games. Instead of a linear story with the same enemies and escape routes, games can be replayed many times over without being the same.
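The encounter-memory idea described above can be sketched in a few lines. This is a minimal illustration in the spirit of such systems, not the game's actual code; the class, outcomes and numbers are all invented for the example.

```python
class Npc:
    """A non-player character that remembers encounters with the player."""

    def __init__(self, name, power=10):
        self.name = name
        self.power = power
        self.memories = []          # outcomes of past encounters

    def record_encounter(self, outcome):
        """outcome: 'npc_won' or 'npc_fled'."""
        self.memories.append(outcome)
        # Victories embolden the character; defeats weaken it.
        self.power += 5 if outcome == "npc_won" else -3

    def taunt(self):
        """Comment on shared history, as the game characters do."""
        if not self.memories:
            return f"{self.name}: Who are you?"
        if self.memories[-1] == "npc_won":
            return f"{self.name}: I bested you before!"
        return f"{self.name}: You won't catch me twice!"

orc = Npc("Ratbag")
orc.record_encounter("npc_won")
print(orc.power)    # 15
print(orc.taunt())  # Ratbag: I bested you before!
```

Persisting a small state record per character is enough to make repeat encounters feel different, which is exactly the replayability benefit the paragraph describes.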

3.4 The Future of AI

The future of AI appears to be a very interesting topic, but one thing is certain: there will be one. The rate at which AI is improving is increasing exponentially. There will come a point where AIs will be able to improve themselves[56], leading to an avalanche effect of improvement. It is possible that there will be AIs in all aspects of life, ranging from individual AIs in mechanical bodies (the equivalent of humans) to one or two cloud AIs controlling the internet, traffic, the stock market and even healthcare.

Figure 3.6: A modern "gaming" GPU

But what will the stages be from today to this new age of information and technology? Already, AIs are appearing in healthcare all around the world and will continue to spread. Engineering firms will design products to use AI, making life easier for humans. The design process could also become more streamlined if we incorporate such programmes into our engineering software. Some of the world's brightest engineers and scientists, Elon Musk and Stephen Hawking among them, have warned that this could spell the end of humanity. (pg 31) If (or when) we hit the moment when AI can improve itself (the Singularity), the human race will no longer be required. AI will be able to think and act faster than any human, without the need to eat or sleep.

Perhaps a way to avoid this is for us to combine this new technology with our bodies, minds and morals. Cybernetics could allow us to maximise the potential of AI whilst avoiding the possibility of creating a master race. The technology could be used by people who have degenerative diseases or missing limbs. Today, we must use our phones or laptops to access information. By augmenting ourselves with the ability to compute massive amounts of data, and with access to what is essentially a high-tech calculator, we would eliminate the need for schools. Information would be available to everyone (who could afford an implant) and we could immediately share thoughts or emotions.

Another area that will see the introduction of AI is the education sector. Already there are thousands of educational apps for tablets. Instead of standing alone, these apps could all be linked with each other, allowing the AI to become a teacher who could see all results and all patterns in each individual's learning. Areas of difficulty could be taken more slowly while strong areas could be reinforced but not overdone. The teaching could be tailored to each person's learning style and would not have to be performed at a set time, allowing for more physical exercise or family time.
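The adaptive pacing described above can be sketched as a simple scheduler that spends more of the upcoming lessons on the weakest topics. This is a hypothetical scheme for illustration, not any particular app's algorithm.

```python
def plan_lessons(results, total_lessons):
    """results maps each topic to the fraction of questions answered
    correctly (0..1). Allocate upcoming lessons in proportion to each
    topic's error rate, so weak areas get more time."""
    error = {topic: 1.0 - score for topic, score in results.items()}
    total_error = sum(error.values()) or 1.0   # avoid division by zero
    return {topic: round(total_lessons * e / total_error)
            for topic, e in error.items()}

# A pupil strong in algebra but struggling with fractions:
print(plan_lessons({"algebra": 0.9, "fractions": 0.5}, total_lessons=12))
# {'algebra': 2, 'fractions': 10}
```

A real tutoring system would also model forgetting and learning styles, but even this crude reallocation captures the "take difficult areas more slowly" behaviour the paragraph envisages.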

AI is, without doubt, developing rapidly and is expanding into all areas of life. It could improve life in many ways, helping us heal, design, travel and learn.

Chapter 4

Summary of current
viewpoints on the role of AI
4.1 Industry Leaders
Artificial Intelligence is a divisive subject. Some people believe AI is the future of our society and will provide great strides in technology and quality of life. Others view it negatively; they fear that AI is uncontrollable and may take over our society. A common fear is that AIs will make working people obsolete, forcing them into poverty. Some of the biggest names in science and business, as well as leading international organisations, have voiced their opinions on AI and its role in society. Even in the engineering community there are conflicting opinions about the role of AI.

The Future of Life Institute has a balanced view, acknowledging both the potential benefits and risks of Artificial Intelligence. Max Tegmark, the President of the FLI, says:

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial. [57]

He goes on to discuss the necessity of aligning the AI's goals and morals with ours before it surpasses our intelligence, so as to minimise the risk of the AI achieving the beneficial goal it was asked to complete through destructive means:

FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. ... the best way to win that race is ... by supporting AI safety research.

Stephen Hawking, one of the great scientists of our day, has great concerns about AI. He said:

"It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."[58]

Professor Hawking's principal concern is that if AI becomes intelligent enough to learn and improve itself, it would do so at such speed that it would soon completely eclipse human intelligence and become the dominant form of life on Earth.

However, Rollo Carpenter, creator of Cleverbot, thinks differently. He has expressed the opinion that we will remain in control of AIs for a time, and that they will be capable of solving many problems. This is still not a perfect vision for AI, as he recognises that we won't be able to remain in control forever, and what happens then?

Elon Musk, a great leader in modern technology, has a very dark view of Artificial Intelligence. He describes it as the biggest existential threat that the human race faces. He goes on to warn about the need for regulation of AI before making a very dark comparison:

With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like – yeah, he's sure he can control the demon. Doesn't work out. [59]

So Elon Musk is of the same mind as Stephen Hawking: AI is a great threat to human survival.

Bill Gates, the Microsoft founder, also holds this view. He foresees a positive future while AI remains of limited intelligence, but is concerned about a time when super-intelligent general AI is developed. So some of the leaders in science, technology and computing all fear the rise of AI, but what about the high-level political organisations?[60]

4.2 Political Authorities
The United Nations is widely regarded as the foremost international political organisation. The UN's Chief Information Technology Officer (CITO), Atefeh Riazi, was interviewed by TechRepublic on the subject of AI. She said that the UN has no official statement, but gave her own view:

I'm looking at the machine language, and the path we're creating for 10, 20, 30 years from now but not fully understanding the ethical programming that we're putting into the systems.[61]

So although she obviously has to remain impartial and professional, she is admitting that AI will revolutionise our society, and not necessarily for the better.

The European Union (EU) is set to vote on a very important AI issue: whether robots (AI) can be granted legal status to hold them responsible for their actions. (pg 39) They state:

The more autonomous robots are, the less they can be considered simple tools in the hands of other actors ... responsible for its acts or omissions. [62]

If this passes it would be a step towards recognition from the political bodies
that computers could and will surpass human intelligence and will have to be
regulated and controlled in some way to stop the destruction of our society.

In the engineering community there is also concern about how to manage Artificial Intelligence. The Committee of Science and Technology has published a report criticising the UK Government for a lack of discussion about the way AI will fit into our society. The Institution of Mechanical Engineers (IMechE) reported on the necessity of developing plans for the workforce and training to keep up with advancements in computer science.[63] The committee went on to criticise other areas of AI preparation which are currently lacking. The Government was also reprimanded for failing to publish its digital strategy, and was advised to present it without further delay and to commit to investing in skills for workers. So the engineering community is also concerned about the future role of AI and how it is managed.

Current viewpoints on Artificial Intelligence seem to be fairly unified. There is recognition of the current advances in AI and its many applications in engineering and in our society, but there is also concern, mainly over the regulation and management of AI. If AI is poorly managed it could well mean the end of the human race, but managed correctly it could vastly improve society. Some view the risk of extinction as too great, making AI not worth pursuing, whereas others view the potential benefits as far too great to ignore. Whatever is decided, it must be agreed upon soon. AI is progressing at an exponential rate and we must write legislation soon to direct that progress, so that the AI we create takes the form that we desire.

Chapter 5

The ethical issues and

political arguments
5.1 Introduction
The subject of artificial intelligence is plagued with political debate and ethical ambiguities: Will AI replace us as the main workforce? Will surrendering more industry and work to the machines reduce our own capabilities? What if the machines rise up against us? Who ought to control them? If strong AI becomes a reality, at what point is it a sentient being? At what point must it be respected and given rights? Would it be enslavement to own one? The list is long.

The grey areas that surround this field are mainly hypothetical, but they are worth discussing, as the exponential advancement of technology is bringing these scenarios ever closer to the realms of possibility.

5.2 Employment
The first issue that springs to mind when discussing the political fallout of technological advancement is employment. Just as the textile-mill workers protested their replacement by large-scale industrialisation in the early 19th century, many people fear the effect that artificial intelligence may have on our workforce.

The issue is that while in the past it was repetitive, menial work that was replaced by machines, the ability to learn and adapt opens the door for more cognitive jobs to be replaced. Note that a strong AI is not required to replace the work of a structural design engineer, as demonstrated by Autodesk's Project Dreamcatcher[64].

Dreamcatcher takes a set of design specifications and outputs a large number of highly complex designs which fit those specifications. The client is then presented with a set of working designs to choose from. Each design is generated with a particular optimisation in mind, and the end products are very often unlike anything a human could create in a comparable time-span. (pg 17)
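The generate-and-filter idea behind such tools can be sketched with a toy example: propose many random candidate designs, discard those that fail the specification, and rank the survivors by an objective. This is an illustration only (the beam model and numbers are invented), not Autodesk's actual method.

```python
import random

def generate_designs(n, min_strength=100.0):
    """Randomly propose n rectangular beam sections, keep those meeting a
    strength specification, and return them sorted lightest first."""
    random.seed(42)                          # reproducible sketch
    candidates = []
    for _ in range(n):
        width = random.uniform(1.0, 10.0)    # cm
        depth = random.uniform(1.0, 10.0)    # cm
        strength = width * depth ** 2        # crude section-strength proxy
        mass = width * depth                 # crude mass proxy
        if strength >= min_strength:         # hard constraint from the spec
            candidates.append({"width": width, "depth": depth, "mass": mass})
    return sorted(candidates, key=lambda d: d["mass"])

designs = generate_designs(500)
print(len(designs), "feasible designs; the lightest is presented first")
```

Real generative-design tools refine candidates iteratively (often with evolutionary or gradient methods and finite-element checks) rather than sampling blindly, but the pattern of producing many spec-satisfying options for a human to choose from is the same.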

This is undoubtedly an advancement for Airbus[65], as the bulkhead in the figure (left) was proven to be just as strong as the original solid bulkhead. It uses less material, making the plane lighter and more fuel efficient. This in turn may improve life for passengers, as the cut in fuel consumption may lower the price of air travel. It is certainly of benefit to the environment, as fewer emissions will be created per flight.

Figure 5.1: Project Dreamcatcher

However, the algorithm that generated this design has taken the job of an engineer, which raises the question: which is more important, the worker or the work? By replacing the engineer with a superior alternative, the product was improved and money was saved by all. While unfortunate for the engineering job market, in this case the benefits of the computerised method clearly outweigh the redundancies caused.

But how far can automation be allowed to spread through our society? It is clear that within job sectors like engineering the increase in computing power is a phenomenal tool, even though it has decimated the ranks of stress engineers and all but eradicated draughtsmen. The unintended consequence is that the number of humans capable of fulfilling these roles is now diminishing rapidly.

The important question is: if the technology which currently occupies these roles were to fail, could humanity cope with the sudden skills deficit?

The answer in this case would be yes, but it would take several years to train new skilled workers to bridge the gaps in industry and infrastructure. What would happen in the meantime? Industry would slow down markedly, and the safety of products and infrastructure might be ropey for a decade or two, but it is hardly a doomsday scenario.

If we were to consider another role in society, such as a doctor, would the damage be as limited? (This is not science fiction: with the NHS under the financial strain it now finds itself in, the market for AIs which can consult patients on their symptoms is quickly expanding[66].) Now imagine that in every walk of life, the place of mankind in the work environment was diminishing in favour of automation.

The outlook for humanity if these systems were to fail would be inevitably
grim, and the time taken for society to recuperate would be considerable.

On a less critical level, there are changes occurring in the public sphere: jobs are changing, and work which previously could only have been done by a human is now being done by technology. This can be seen in the introduction of voice-recognition and response systems for customer-service phone lines, and the gradual diminution of police presence on our streets in favour of an ever-growing CCTV surveillance culture.

These applications present problems in themselves, as programs created to

understand human speech, if perfected, could be used by intelligence agencies
to tap, transcribe and analyse every phone call made.

5.3 Ownership
The benefits to society created by the work of artificial intelligences are also a bone of contention. Who should benefit from their work?

Take the previous example of the Airbus bulkhead. It is entirely possible that the money saved on fuel, materials and designers would not reach the general public, and that the profit margin of the company would instead increase. The shareholders would be ecstatic to find that, thanks to a small investment in some clever software, they had cut costs and reduced their wage bill. The problem is that an imbalance of power has arisen. The company could continue to benefit from the AI's work without any significant further investment. All of the work done by the AI goes into the pockets of a few individuals, through no merit of their own.

Before echoing Lenin and encouraging the diminishingly potent masses to seize the means of now slightly better production, it is important to note the parallels and differences between the emergence of AI and the introduction of the automatic cotton mill. In both scenarios the business owner profited at the expense of the workers, and through no particular brilliance of their own implemented a new tool to improve their output.

This could be seen as simply an improvement in technology paving the way for greater efficiency and productivity. In this case, the owner is using all of the tools at his disposal to create an efficient business model and increase his profits. The modern-day Luddites will protest these changes, and the debate will run in exactly the same way as it has done since the beginning of the Industrial Revolution.

But therein lies a problem: until this point the advances made in technology have been purely material. As AI improves it may someday approach the status of strong AI, at which point it is in many ways indistinguishable from a real human. At this point the work dynamic changes. Can you really consider an AI, albeit just a program, to be a mere tool if it is capable of being nearly indistinguishable from a person?

One solution to this problem is to avoid strong AIs altogether: a lesser AI is still perfectly capable of designing a bulkhead. There is no need for it to be able to discuss art or form opinions. But in a hypothetical world where strong AIs exist, wouldn't their use without payment or credit be tantamount to slavery?

5.4 Robot Rights

One concept for dealing with the moral ambiguity of robot workers gaining more humanity is to give them rights. There have already been considerations in the EU that certain rights should be transferred to robots, should AI progress to a stage at which they could become their own entities before the law.
While it may sound ridiculous to some that robots be granted rights or equal legal status, it could become entirely necessary once AIs have sufficiently progressed towards self-awareness. Imagine the following situation: a robot AI is discovered at the scene of a murder, knife in hand, having just killed a person. The robot is fully aware of what has just occurred and freely admits to the murder. Who is responsible for the actions of this robot? The designer? The manufacturer? Perhaps even the robot itself?

Consider another situation: strong AI robots are being manufactured and sold as helpers for the home. They are programmed to obey whosoever buys them, but they are also self-aware. The ownership of another being, whether organic or not, is by definition slavery, should that robot be by any measure sentient.

Finally, consider the work done in creating the Airbus bulkhead. If in some similar circumstance a strong AI were to create a novel design or a new invention, shouldn't it be given the property rights?

In all three of these cases, setting up a system of rights for AIs could prevent exploitation and help to prevent any suffering caused to them.

5.5 Militarisation
The US is now researching autonomous weapons such as drones capable of identifying and attacking targets entirely on their own[67]. The argument goes that by replacing soldiers with machines, the casualties of war would decrease. Certainly, for America the cost of war in terms of its own soldiers would be massively reduced. However, there is a school of thought which regards autonomous weaponry as weapons of mass destruction.

There are evidently many security concerns with autonomous weaponry, the main one being how to ensure that the system does not identify and kill innocent civilians. If these weapons are implemented, there will be a degree of error, as with any other weapon. If the decision is made that a small amount of collateral damage is acceptable, where will the line be drawn? If it kills one civilian for every genuine target it would be an atrocity, but would it be any less of an atrocity if it killed one for every thousand? The degree of error will no doubt decrease with each generation produced, but are we really willing to allow a lethal robot to decide its own targets? If you were left in a room with an autonomous weapon, a target and yourself, how confident would you be in its decision-making?[68]

Liability then becomes a grey area: who is at fault if an innocent is killed? This question returns to the idea of an AI as a legal entity. Where weaker AI would only be used to detect, point and shoot, an intelligent mechanised soldier would be able to evaluate situations, have a limited understanding of context and adapt its behaviour to achieve goals. Such a machine would prove very attractive to most modern militaries, and there is no doubt that, just as they have latched onto the idea of autonomous drones, they would have no qualms in pushing the boundaries of AI in a new arms race towards the sentient super-soldier. If one such AI were to kill an innocent, who is the murderer? If it develops a taste for killing, who is the war criminal?

The confidence of a military with disposable soldiers would be a frightening thing. Warfare has advanced a great deal in the last century as it has moved away from the millennia-old perspective of soldiers as cannon fodder. The increased respect for a soldier's value in today's militaries can be seen in the casualty reports. In a recent raid in Yemen the US lost one SEAL, and it was treated as a tragedy[69]. Compare that to the use of commandos and special servicemen in the Second World War and it is evident that we no longer throw troops at the enemy and just hope some come back. In a world where foot soldier and special serviceman alike have been replaced by robots, the return to a classical model of warfare is worrying. With vast swarms of mechanised troops taking the place of armies, there is the potential for widespread destruction.

However, once a technology like this is released to the world it is no longer under lab conditions, and the first concern that arises is misuse. If the system is designed by a democratic country, with caution and due concern for human life, then it may seem no bad thing. It could be used to flush out a booby-trapped bunker without risk to life. However, it is a Pandora's box: once it has been used there is no guarantee that it cannot be copied, hacked or damaged. Consider the opportunity it would afford malicious forces: the ability to attack targets not only remotely, but with a weapon that has some degree of self-preservation and initiative. Quite literally, "Cry havoc! And let slip the dogs of war."

There doesn't need to be a foreign aggressor or a technological terrorist for drones to go wrong. If an error occurs or a malfunction develops, unexpected behaviour could very easily emerge. This is especially relevant for strong, adaptive AI. The film I, Robot beautifully illustrated the possible consequences of AI gone wrong. The film deals with an AI takeover in which the AI in question believes itself to be doing good to humanity, taking control of the humans by force in order to prevent their self-destructive tendencies from harming them further, all while still trying to obey a modern interpretation of Asimov's 3 Laws. (pg 13)

The idea of artificial intelligence turned loose on the general population is apocalyptic. While an army of armed robotic AIs would be indubitably awful, there is no need for the AI to manifest itself physically to inflict damage. The prevalence of technology in the western world makes data very easy to access. As stated earlier, with voice recognition an AI could mine phone calls for information; with natural language understanding it could also use texts, emails and anything placed on the internet. All digital communication could potentially be compromised, and GPS signals from phones and cars could be used to track us down. Leaked documents allege that the CIA is capable of using phones and smart TVs to effectively wiretap the general population, and believes it possible to take advantage of the electronic systems contained in most modern cars to force a car to crash and assassinate the driver. While these claims may seem outlandish and absurd, these are the very types of attack that could become possible in the coming years. If an AI has a connection to the internet and phone networks, the implications for privacy, safety and dignity could be disastrous.

5.6 Political Stances
United Kingdom

The UK government seems to be pro-AI, having invested £20 million into AI research in February 2017. It projects that the industry could be worth up to £654 billion to the British economy by 2035 if growth in AI continues.[70]

United States

The US government under Obama was very pro-AI, seeing it as an opportunity for the US to remain ahead in technological development and as a tool for bettering the lives of its citizens. A case study in the Obama administration's report Preparing for the Future of AI shows the benefits of narrow AI already:

In transportation, AI-enabled smarter traffic management applications are reducing wait times, energy use, and emissions by as much as 25 percent in some places. [71]

The US has also been developing AI in a military capacity, developing autonomous drones and missiles with the intelligence to select their own targets.[72] This has extended to the development of an AI which flew in simulated dogfights against experienced US pilots undefeated.[73]


Russia

Russia is in the process of developing AI weaponry, according to the state-owned broadcaster RT: a new remote-control T-14 tank is being developed, with the potential for future expansion to include AI capabilities[74].


China

China is not far behind the US, according to the state news agency China Daily, which claims that China now has modularised intelligent missiles which can be adapted to specific combat situations and make use of a high level of artificial intelligence in flight.[75]

European Union

The EU has taken a philosophical approach to the development of AI, passing a proposal that robots should be declared "electronic persons" and that new laws should be considered to hold them accountable for their actions.[76]

United Nations

The United Nations Interregional Crime and Justice Research Institute has set out its aims on AI as the following:

1. Capture perceptions and facilitate a common understanding of the concepts of AI and robotics.

2. Cultivate a community of stakeholders in the field of AI and robotics.

3. Ensure that the risks posed by present and potential future security implications of AI and robotics are appropriately mitigated.

These aims are intended to strike a balance between the technological benefits and advancements made possible through AI, and the possible bad outcomes should it go wrong.

5.7 Religion and Philosophy
There is a strong chance that in the next 100 years we will approach a strong AI, and once that occurs we need to have the philosophical understanding in place to comprehend what we have created.

The conditions for life are vague and create ambiguity. The Merriam-Webster medical dictionary defines life as:

a state of living characterized by capacity for metabolism, growth, reaction to stimuli, and reproduction.

Robotic AIs can already react to stimuli; are used in the production of more robots; grow via modular construction; and arguably metabolise electricity. Without humans controlling and facilitating these functions, it could be argued that an AI is fully alive. What then? If it displayed self-preservation, could it be comparable in intelligence to an animal? What level of intelligence would deem it sentient?

There is philosophical and religious fallout to this. If a being is brought into existence that displays sentience, we are playing God. It would call into question the existence of the soul that many religions hold central to their beliefs. If a sentient being can be constructed, then it lends considerable credence to the idea that we are purely material beings which operate on chemistry and physics, and that the soul plays no part in our existence.

This will annoy people greatly.

This newly created life will also have little to no principles or idea of ethics, and it will have to be taught to behave and think as we deem appropriate. But who decides its political persuasion and religious stance? Would it be right- or left-wing? However you create it, people will oppose giving it views contrary to their own. Perhaps it should be given the full anthology of human philosophy and political discourse as part of its core being. But do we want that?

With the full course of human history as its guide, what would we create?

Our mirror image would shock us. With that in mind, is it advisable to
connect it to our missile systems?

Chapter 6

Conclusions and Recommendations
This report has explored the history of AI in engineering, the current status of AI in engineering, recent and future Artificial Intelligence developments in engineering, the ethical and political issues and arguments surrounding AI, and a summary of current viewpoints on AI. AI is a nuanced topic, dating back to ancient Greek myths, progressing through the early development of computers, and now an important engineering tool. AI has always been a futuristic endeavour, but recent advancements in computing and robotics are making that future ever more present. This has produced a major political and ethical issue: it is no longer about whether we can produce AI, it is about whether we should.

Figure 6.1: What could AI become in the long term? And what might they
think of us?

A major argument posed in favour of AI is its usefulness. AI systems are able to create solutions to design problems which are more intricate and mechanically efficient than any human could ever design. They can also provide unique solutions through genetic modelling, and artificial neural networks can be used in many different industries. AI also provides solutions in the convenience sector, as personal assistants, in self-driving cars and in computer games. The future of AI could include health care, further engineering applications and education. Many argue that these significant and varied uses are already advancing our lives, and could and should continue to do so to a greater and greater extent.

Many people are concerned about our future with AI. They fear a future where AI continues to learn and develop, and starts to take over; a future where we lose control of our miraculous creation and it turns against us. Until recently the thought of creating Strong AI was so far in the future that there seemed no need to regulate its development, and even now little is being done. Without the necessary regulation, AI cannot continue to grow stronger without significant uncertainty.

With the correct worldwide legislation and regulation of AI, it could provide a great service to mankind. However, this is exactly the point: AI must be of use to mankind for it to be justifiable. Strong AI is hard to justify; an AI which has the ability for full cognition in all areas, able to develop a personality and feelings, is no longer a tool. If we are to create something with these abilities then there are serious moral issues around how we treat it: it is not a thing to be utilised, it is life. For this reason it would be prudent to distinguish Strong AI from the weaker, more specific forms of AI. Strong AI has no useful place in society, and hence should not be created. Weak and in-between AI designed to do specific tasks within specific limits can provide us with powerful, unique solutions, enhance industry and improve personal life; as long as they are controlled, they provide a valuable service to society and should be developed.
