Universal Intelligence
Shane Legg
IDSIA, Galleria 2, Manno-Lugano CH-6928, Switzerland
shane@vetta.org www.vetta.org/shane
Marcus Hutter
RSISE@ANU and SML@NICTA, Canberra, ACT, 0200, Australia
marcus@hutter1.net www.hutter1.net
Abstract
A fundamental problem in artificial intelligence is that nobody really
knows what intelligence is. The problem is especially acute when we need to
consider artificial systems which are significantly different to humans. In this
paper we approach this problem in the following way: We take a number of
well known informal definitions of human intelligence that have been given
by experts, and extract their essential features. These are then mathemat-
ically formalised to produce a general measure of intelligence for arbitrary
machines. We believe that this equation formally captures the concept of
machine intelligence in the broadest reasonable sense. We then show how
this formal definition is related to the theory of universal optimal learning
agents. Finally, we survey the many other tests and definitions of intelligence
that have been proposed for machines.
Contents
1 Introduction
2 Natural Intelligence
2.1 Human intelligence tests
2.2 Animal intelligence tests
2.3 Desirable properties of an intelligence test
2.4 Static vs. dynamic tests
2.5 Theories of human intelligence
2.6 Ten definitions of human intelligence
2.7 More definitions of human intelligence
1 Introduction
“Innumerable tests are available for measuring intelligence, yet no one is quite
certain of what intelligence is, or even just what it is that the available tests
are measuring.” R. L. Gregory [32]
What is intelligence? It is a concept that we use in our daily lives that seems to
have a fairly concrete, though perhaps naive, meaning. We say that our friend who got
an A in his calculus test is very intelligent, or perhaps our cat who has learnt to go into
hiding at the first mention of the word “vet”. Although this intuitive notion of intelligence
presents us with no difficulties, if we attempt to dig deeper and define it in precise terms
we find the concept to be very difficult to nail down. Perhaps the ability to learn quickly
is central to intelligence? Or perhaps the total sum of one’s knowledge is more important?
Perhaps communication and the ability to use language play a central role? What about
“thinking” or the ability to perform abstract reasoning? How about the ability to be
creative and solve problems? Intelligence involves a perplexing mixture of concepts, many
of which are equally difficult to define.
Psychologists have been grappling with these issues ever since humans first became
fascinated with the nature of the mind. Debates have raged back and forth concerning the
correct definition of intelligence and how best to measure the intelligence of individuals.
These debates have in many instances been very heated as what is at stake is not merely a
scientific definition, but a fundamental issue of how we measure and value humans: Is one
employee smarter than another? Are men on average more intelligent than women? Are
white people smarter than black people? As a result intelligence tests, and their creators,
have on occasion been the subject of intense public scrutiny. Simply determining whether
a test, perhaps quite unintentionally, is partly a reflection of the race, gender, culture or
social class of its creator is a subtle, complex and often politically charged issue [30, 43].
Not surprisingly, many have concluded that it is wise to stay well clear of this topic.
In reality the situation is not as bad as it is sometimes made out to be. Although
the details of the definition are debated, in broad terms a fair degree of consensus about
the scientific definition of intelligence and how to measure it has been achieved [27, 95].
Indeed it is widely recognised that when standard intelligence tests are correctly applied
and interpreted, they all measure approximately the same thing [27]. Furthermore, what
they measure is both stable over time in individuals and has significant predictive power, in
particular for future academic performance and other mentally demanding pursuits. The
issues that continue to draw debate are questions such as whether the tests measure only
a part or a particular type of intelligence, or whether they are somehow biased towards
a particular group or set of mental skills. Great effort has gone into dealing with these
issues, but they are difficult problems with no easy solutions.
Somewhat disconnected from this exists a parallel debate over the nature of intelligence
in the context of machines. While the debate is less politically charged, in some ways
the central issues are even more difficult. Machines can have physical forms, sensors,
actuators, means of communication, information processing abilities and environments
that are totally unlike those that we experience. This makes the concept of “machine
intelligence” particularly difficult to get a handle on. In some cases a machine may display
properties that we equate with human intelligence; in such cases it might be reasonable
to describe the machine as also being intelligent. In other situations this view is far too
limited and anthropocentric. Ideally we would like to be able to measure the intelligence
of a wide range of systems: humans, dogs, flies, robots or even disembodied systems such
as chat-bots, expert systems, classification systems and prediction algorithms [55, 1].
One response to this problem might be to develop specific kinds of tests for specific
kinds of entities; just as intelligence tests for children differ from intelligence tests for adults.
While this works well when testing humans of different ages, it comes undone when we
need to measure the intelligence of entities which are profoundly different to each other
in terms of their cognitive capacities, speed, senses, environments in which they operate,
and so on. To measure the intelligence of such diverse systems in a meaningful way
we must step back from the specifics of particular systems and establish the underlying
fundamentals of what it is that we are really trying to measure.
The difficulty of developing an abstract and highly general notion of intelligence is
readily apparent. Consider, for example, the memory and numerical computation tasks
that appear in some intelligence tests and which were once regarded as defining hallmarks
of human intelligence. We now know that these tasks are absolutely trivial for a machine
and thus do not appear to test the machine’s intelligence in any meaningful sense. Indeed
even the mentally demanding task of playing chess can be largely reduced to brute force
search [46]. What else may in time be possible with relatively simple algorithms running
on powerful machines is hard to say. What we can be sure of is that as technology
advances, our concept of intelligence will continue to evolve with it.
How then are we to develop a concept of intelligence that is applicable to all kinds of
systems? Any proposed definition must encompass the essence of human intelligence, as
well as other possibilities, in a consistent way. It should not be limited to any particular
set of senses, environments or goals, nor should it be limited to any specific kind of
hardware, such as silicon or biological neurons. It should be based on principles which
are fundamental and thus unlikely to alter over time. Furthermore, the definition of
intelligence should ideally be formally expressed, objective, and practically realisable as
an effective intelligence test.
In this paper we approach the problem of defining machine intelligence as follows:
Section 2 overviews well known theories, definitions and tests of intelligence that have been
developed by psychologists. Our objective in this section is to gain an understanding of
the essence of intelligence in the broadest possible terms. In particular we are interested
in commonly expressed ideas that could be applied to arbitrary systems and contexts, not
just humans.
Section 3 takes these key ideas and formalises them. This leads to universal intelligence,
our proposed formal definition of machine intelligence. We then examine some of the
properties of universal intelligence, such as its ability to sensibly order simple learning
algorithms and connections to the theory of universal optimal learning agents.
Section 4 overviews other definitions and tests of machine intelligence that have been
proposed. Although surveys of the Turing test and its many variants exist, for example
[81], as far as we know this section is the first general survey of definitions and tests of
machine intelligence. Given how fundamental this is to the field of artificial intelligence,
the absence of such a survey is quite remarkable. For any field to mature as a science,
questions of definition and measurement must be meticulously investigated. We conclude
our survey with a summary comparison of the various proposed tests and definitions of
machine intelligence.
Section 5 ends the paper with discussion, responses to criticisms, conclusions and direc-
tions for future research.
The genesis of this work lies in Hutter’s universal optimal learning agent, AIXI, de-
scribed in 2, 12, 60 and 300 pages in [49, 48, 50, 54], respectively. In this work, an order
relation for intelligent agents is presented, with respect to which the provably optimal
AIXI agent is maximal. The universal intelligence measure presented here is a derivative
of this order relation. A short description of the universal intelligence measure appeared
in [61], from which two articles followed in the popular scientific press [31, 21]. An 8 page
paper on universal intelligence appeared in [63], followed by an updated poster presenta-
tion [62]. In the current paper we explore universal intelligence in much greater detail,
in particular the way in which it relates to mainstream views on human intelligence and
other proposed definitions of machine intelligence.
2 Natural Intelligence
Human intelligence is an enormously rich topic with a complex intellectual, social and
political history. For an overview the interested reader might want to consult “Handbook
of Intelligence” [93] edited by R. J. Sternberg. Our objective in this section is simply to
sketch a range of tests, theories and definitions of human and animal intelligence. We are
particularly interested in common themes and general perspectives on intelligence that
could be applicable to many kinds of systems, as these will form the foundation of our
definition of machine intelligence in the next section.
deaf or people who did not speak the test language as a first language. To address this
problem, he proposed that tests should contain a combination of both verbal and nonverbal
problems. He also believed that in addition to an overall IQ score, a profile should be
produced showing the performance of the individual in the various areas tested. Borrowing
significantly from the Stanford-Binet, the US army Alpha test, and others, he developed
a range of tests targeting specific age groups from preschoolers up to adults [107]. Due
in part to problems with revisions of the Stanford-Binet test in the 1960’s and 1970’s,
Wechsler’s tests became the standard. They continue to be well respected and widely
used.
Owing to a common lineage, modern versions of the Wechsler and the Stanford-Binet
have a similar basic structure [57]. Both test the individual in a number of verbal and
non-verbal ways. In the case of the Stanford-Binet the test is broken up into five key areas:
Fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working
memory. In the case of the Wechsler Adult Intelligence Scale (WAIS-III), the verbal tests
include areas such as knowledge, basic arithmetic, comprehension, vocabulary,
and short term memory. Non-verbal tests include picture completion, spatial perception,
problem solving, symbol search and object assembly.
As part of an effort to make intelligence tests more culture neutral John Raven de-
veloped the progressive matrices test [78]. In this test each problem consists of a short
sequence of basic shapes. For example, a circle in a box, then a circle with a cross in
the middle followed by a circle with a triangle inside. The test subject then has to se-
lect from a second list the image that best continues the pattern. Simple problems have
simple patterns, while difficult problems have more subtle and complex patterns. In each
case however, the simplest pattern that can explain the observed sequence is the one that
correctly predicts its continuation. Thus, not only is the ability to recognise patterns
tested, but also the ability to evaluate the complexity of different explanations and then
correctly apply the philosophical principle of Occam’s razor. We will return to Occam’s
razor and its importance in intelligence testing in Subsection 3.3 when considering machine
intelligence.
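The principle at work here can be sketched in code. The following is a minimal illustration of our own, not part of any actual Raven test: candidate rules for a numeric sequence are ordered from simple to complex, and the simplest rule consistent with the observations is the one used to predict the continuation.

```python
# Minimal sketch of Occam's razor in sequence prediction (our own
# illustration): rank candidate rules by complexity and predict with
# the simplest rule that explains the observed sequence.

def predict_next(seq):
    """Return the prediction of the simplest consistent rule, or None."""
    # Hypotheses ordered from simplest to more complex.
    hypotheses = [
        ("constant",   lambda s: all(x == s[0] for x in s),
                       lambda s: s[0]),
        ("arithmetic", lambda s: len(set(b - a for a, b in zip(s, s[1:]))) == 1,
                       lambda s: s[-1] + (s[1] - s[0])),
        ("geometric",  lambda s: 0 not in s[:-1]
                       and len(set(b / a for a, b in zip(s, s[1:]))) == 1,
                       lambda s: s[-1] * (s[1] // s[0])),
    ]
    for name, fits, extend in hypotheses:
        if len(seq) >= 2 and fits(seq):
            return extend(seq)   # the simplest fitting rule wins
    return None

print(predict_next([2, 4, 6, 8]))   # arithmetic rule applies
print(predict_next([3, 6, 12]))     # geometric rule applies
```

A genuine universal version of this idea would search over all computable hypotheses rather than a fixed list; the paper returns to this in the context of Occam's razor and machine intelligence.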
Today several different versions of the Raven test exist, designed for different age
groups and ability levels. As the tests depend strongly on the ability to identify abstract
patterns, rather than knowledge, they are considered to be some of the most “g-loaded”
intelligence tests available (see Subsection 2.5). The Raven tests remain in common use
today, particularly when it is thought that culture or language bias could be an issue.
The intelligence quotient, or IQ, was originally introduced by Stern [91]. It was
computed by taking the age of a child as estimated by their performance in the intelligence
test, and then dividing this by their true biological age and multiplying by 100. Thus a
10 year old child whose mental performance was equal to that of a normal 12 year old,
had an IQ of 120. As the concept of mental age has now been discredited, and was
never applicable to adults anyway, modern IQ scores are simply normalised to a Gaussian
distribution with a mean of 100. The standard deviation used varies: in the United
States 15 is commonly used, while in Europe 25 is common. For children the normalising
Gaussian is based on peers of the same age.
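The two scoring conventions described above can be sketched as follows. This is an illustrative sketch of ours; the function names and example peer statistics are hypothetical:

```python
# Sketch of the two IQ conventions: Stern's original ratio IQ and the
# modern deviation IQ (illustrative only; names are ours).

def ratio_iq(mental_age, biological_age):
    """Stern's ratio IQ: mental age over biological age, times 100."""
    return 100.0 * mental_age / biological_age

def deviation_iq(raw_score, peer_mean, peer_sd, sd=15):
    """Modern deviation IQ: normalise the raw score against the peer
    distribution, then map onto a Gaussian with mean 100 (here with
    standard deviation 15, as commonly used in the United States)."""
    z = (raw_score - peer_mean) / peer_sd
    return 100 + sd * z

print(ratio_iq(12, 10))            # the 10 year old performing as a 12 year old
print(deviation_iq(130, 100, 20))  # a score 1.5 peer standard deviations up
```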
Whatever normalising distribution is used, by definition an individual’s IQ is always
an indication of their cognitive performance relative to some larger group. Clearly this
would be problematic in the context of machines where the performance of some machines
could be many orders of magnitude greater than others. Furthermore, the distribution of
machine performance would be continually changing due to advancing technology. Thus,
for our purposes, an absolute measure will be more meaningful than a traditional IQ type
of measure.
For an overview of the history of intelligence testing and the structure of modern tests,
see [57].
Statistical variability can also be a problem in short tests. Longer tests help in this regard,
however they are naturally more costly to administer.
Another important reliability factor is the bias that might be introduced by the indi-
vidual administering the test. Purely written tests avoid this problem as there is minimal
interaction between the tested individual and the tester. However this lack of interaction
also has disadvantages as it may mean that other sources of bias, such as cultural dif-
ferences, language problems or even something as simple as poor eyesight, might not be
properly identified. Thus, even in a written test the individual being tested should first
be examined by an expert in order to ensure that the test is appropriate.
Cultural bias in particular is a difficult problem, and tests should be designed to
minimise this problem where possible, or at least detect potential bias problems when
they occur. One way to do this is to test each ability in multiple ways, for example
both verbally and visually. While language is an obvious potential source of cultural
bias, more subtle forms of bias are difficult to detect and remedy. For example, different
cultures emphasise different cognitive abilities, and thus it is difficult, perhaps impossible,
to compare intelligence scores in a way that is truly objective. In part this is a question of
what intelligence is. Indeed the problem of how to weight performance in different areas
is fundamental and we will need to face it again in the context of our formal definition of
machine intelligence.
When testing large numbers of individuals, for example when testing army recruits,
the cost of administering the test becomes important. In these cases less accurate but
more economical test procedures may be used, such as purely written tests without any
direct interaction between the individuals being tested and a psychologist.
An intelligence test should be valid in the sense that it appears to be testing what it
claims it is testing for. One way to check this is to show that the test produces results
consistent with other manifestations of intelligence. A test should also have predictive
power, for example the ability to predict future academic performance. This ensures that
what is being measured is somehow meaningful, beyond just the ability to answer the
questions in the test.
Standard intelligence tests such as a modern Stanford-Binet are thoroughly tested for
years on the above criteria, and many others, before they are ready for widespread use.
Many of these desirable properties, such as reliability, tester bias, cost and validity, are also
relevant to tests of machine intelligence. To some extent they are also relevant to formal
definitions of intelligence. We will return to these desirable properties when analysing
our definition of machine intelligence in Subsection 3.5, and later when comparing tests
of machine intelligence in Subsection 4.3.
What is needed is a more direct test of an individual’s ability to learn and adapt: A
so called “dynamic test”[96] (for related work see also [56]). In a dynamic test the test
subject interacts over a period of time with the tester, who now becomes a kind of teacher.
The tester’s task is to present the individual with a series of problems. After each attempt
at solving a problem, the tester provides feedback to the individual who then has to adapt
their behaviour accordingly in order to solve the next problem.
Although dynamic tests could in theory be very powerful, they are not yet well estab-
lished due to a number of difficulties. One of the drawbacks is that they require a much
greater degree of interaction between the test subject and the tester. This makes dynamic
testing more costly to perform and increases the danger of tester bias.
Dynamic testing is of particular interest to us because in a formal test for machines
it appears that we can overcome these problems by automating the role of the tester.
In this subsection and the next we will overview a range of definitions of intelligence
that have been given by psychologists. Many of these definitions are well known. Although
the definitions differ, there are recurring features; in some cases these are explicitly
stated, while in others they are more implicit. We start by considering ten definitions
that take, in our view, a similar perspective:
Perhaps the most elementary common feature of these definitions is that intelligence
is seen as a property of an individual who is interacting with an external environment,
problem or situation. Indeed, at least this much is common to practically all proposed
definitions of intelligence.
Another common feature is that an individual’s intelligence is related to their ability
to succeed or “profit”. The notion of success or profit implies the existence of some kind
of objective or goal. The goal itself is not specified; indeed, individuals’ goals may
vary. The important thing is that the individual is able to carefully choose their
actions in a way that leads to them accomplishing their goals. The greater this capacity
to succeed with respect to various goals, the greater the individual’s intelligence.
The strong emphasis on learning, adaption and experience in these definitions implies
that the environment is not fully known to the individual and may contain new situations
that could not have been anticipated in advance. Thus intelligence is not the ability to
deal with a fully known environment, but rather the ability to deal with some range of
possibilities which cannot be wholly anticipated. What is important then is that the
individual is able to quickly learn and adapt so as to perform as well as possible over a
wide range of environments, situations, tasks and problems. Collectively we will refer to
these as “environments”, similar to some of the definitions above.
Bringing these key features together gives us what we believe to be the essence of
intelligence in its most general form:
Intelligence measures an agent’s ability to achieve goals in a wide range of
environments.
We take this to be our informal working definition of intelligence. In the next section
we will use this definition as the starting point from which we will construct a formal
definition of machine intelligence. However before we proceed further, the reader may
wish to review the ten definitions above to ensure that the definition we have adopted is
indeed reasonable.
aspects of intelligence. In this subsection we will survey some of these other definitions
and compare them to the position we have taken. For an even more extensive collection
of definitions of intelligence, indeed the largest collection that we are aware of, visit our
online collection [64].
The following is an especially interesting definition as it was given as part of a group
statement signed by 52 experts in the field. As such it obviously represents a fairly
mainstream perspective:
“Intelligence is a very general mental capability that, among other things, involves
the ability to reason, plan, solve problems, think abstractly, comprehend complex
ideas, learn quickly and learn from experience.” [27]
“. . . in its lowest terms intelligence is present where the individual animal, or human
being, is aware, however dimly, of the relevance of his behaviour to an objective.
Many definitions of what is indefinable have been attempted by psychologists, of
which the least unsatisfactory are 1. the capacity to meet novel situations, or to
learn to do so, by new adaptive responses and 2. the ability to perform tests or tasks,
involving the grasping of relationships, the degree of intelligence being proportional
to the complexity, or the abstractness, or both, of the relationship.” J. Drever [18]
This definition has many similarities to ours. Firstly, it emphasises the agent’s ability
to choose its actions so as to achieve an objective, or in our terminology, a goal. It
then goes on to stress the agent’s ability to deal with situations which have not been
encountered before. In our terminology, this is the ability to deal with a wide range of
environments. Finally, this definition highlights the agent’s ability to perform tests or
tasks, something which is entirely consistent with our performance orientated perspective
of intelligence.
“Intelligence is not a single, unitary ability, but rather a composite of several func-
tions. The term denotes that combination of abilities required for survival and
advancement within a particular culture.” A. Anastasi [3]
This definition does not specify exactly which capacities are important, only that they
should enable the individual to survive and advance within the culture. As such this is a
more abstract “success” orientated definition of intelligence, like ours. Naturally, culture
is a part of the agent’s environment.
This is not really much of a definition as it simply shifts the problem of defining
intelligence to the problem of defining abstract thinking. The same is true of many other
definitions that refer to things such as imagination, creativity or consciousness. The
following definition has a similar problem:
The definition of Woodrow is typical of those which emphasise not the current ability
of the individual, but rather the individual’s ability to expand and develop new abilities.
This is a fundamental point of divergence for many views on intelligence. Consider the
following question: Is a young child as intelligent as an adult? From one perspective,
children are very intelligent because they can learn and adapt to new situations quickly.
On the other hand, the child is unable to do many things due to a lack of knowledge
and experience and thus will make mistakes an adult would know to avoid. These need
not just be physical acts, they could also be more subtle things like errors in reasoning
as their mind, while very malleable, has not yet matured. In which case, perhaps their
intelligence is currently low, but will increase with time and experience?
Fundamentally, this difference in perspective is a question of time scale: Must an agent
be able to tackle some task immediately, or perhaps after a short period of time during
which learning can take place, or perhaps it only matters that they can eventually learn to
deal with the problem? Being able to deal with a difficult problem immediately is a matter
of experience, rather than intelligence. While being able to deal with it in the very long
run might not require much intelligence at all, for example, simply trying a vast number
of possible solutions might eventually produce the desired results. Intelligence then seems
to be the ability to adapt and learn as quickly as possible given the constraints imposed
by the problem at hand. It is this insight that we will use to neatly deal with temporal
preference when defining machine intelligence (see Measure of success in Subsection 3.2).
At first this might not look like a definition of intelligence, but it makes an important
point: Intelligence is not really the ability to do anything in particular, rather it is a
very general ability that affects many kinds of performance. Conversely, by measuring
Boring’s famous definition of intelligence takes this idea a step further. If intelligence
is not the ability to do anything in particular, but rather an abstract ability that indirectly
affects performance in many tasks, then perhaps it is most concretely described as the
ability to do the kind of abstract problems that appear in intelligence tests? In which
case, Boring’s definition is not as facetious as it first appears.
This definition also highlights the fact that the concept of intelligence, and how it
is measured, are intimately related. In the context of this paper we refer to these as
definitions of intelligence, and tests of intelligence, respectively.
Figure 1: The agent and the environment interact by sending action, observation
and reward signals to each other.
be built into the agent. The problem with this however is that it limits each agent to just
one goal. We need to allow agents that are more flexible, specifically, we need to be able
to inform the agent of what the goal is. For humans this is easily done using language.
In general however, the possession of a sufficiently high level of language is too strong an
assumption to make about the agent. Indeed, even for something as intelligent as a dog
or a cat, direct explanation is not very effective.
Fortunately there is another possibility which is, in some sense, a blend of the above
two. We define an additional communication channel with the simplest possible semantics:
A signal that indicates how good the agent’s current situation is. We will call this signal
the reward. The agent’s goal is then simply to maximise the amount of reward it receives.
So in a sense its goal is fixed. This is not particularly limiting however, as we have not said
anything about what causes different levels of reward to occur. In a complex setting the
agent might be rewarded for winning a game or solving a puzzle. If the agent is to succeed
in its environment, that is, receive a lot of reward, it must learn about the structure of
the environment and in particular what it needs to do in order to get reward. Thus from
a broad perspective, the goal is flexible. Not surprisingly, this is exactly the way in which
we condition an animal to achieve a goal: By selectively rewarding certain behaviours (see
Subsection 2.2). In a narrow sense the animal’s goal is fixed, perhaps to get more treats
to eat, but in a broader sense it is flexible as it may require doing a trick or solving a
puzzle of our choosing.
In our framework we will include the reward signal as a part of the perception gener-
ated by the environment. The perceptions also contain a non-reward part, which we will
refer to as observations. This now gives us the complete system of interacting agent and
environment, as illustrated in Figure 1. The goal, in the broad flexible sense, is implicitly
defined by the environment as this is what defines when rewards are generated. Thus, in
the framework as we have defined it, to test an agent in any given way it is sufficient to
fully define the environment.
This widely used and very flexible structure is in itself nothing new. In artificial
intelligence it is the framework used in reinforcement learning [97]. By appropriately
renaming things, it also describes the controller-plant framework used in control theory
[6]. The interesting point for us is that this setup follows naturally from our informal
definition of intelligence and our desire to keep things as general as possible. The only
difficulty was how to deal with the notion of success, or profit. This required the existence
of some kind of an objective or goal. The most flexible and elegant way to bring this into
the framework was to use a simple reward signal.
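The interaction cycle of Figure 1 can be sketched in a few lines. The `interact` function and the toy environment below are our own illustration, not a standard API; they only fix the roles of the two parties:

```python
# Minimal sketch of the agent-environment cycle (names are ours).
# The environment emits an observation and a reward; the agent replies
# with an action chosen from the full interaction history.

import random

def interact(agent, environment, cycles):
    """Run the agent-environment loop and return the total reward."""
    history = []          # sequence of (observation, reward, action) triples
    total = 0.0
    for _ in range(cycles):
        observation, reward = environment(history)
        total += reward
        action = agent(history, observation, reward)
        history.append((observation, reward, action))
    return total

# Toy instance: the environment pays reward 1 whenever the agent's previous
# action matched the new observation; the agent naively repeats observations.
def env(history):
    obs = random.choice([0, 1])
    reward = 1.0 if history and history[-1][2] == obs else 0.0
    return obs, reward

def agent(history, observation, reward):
    return observation

random.seed(0)
print(interact(agent, env, 100))   # some total between 0 and 100
```

Note that, exactly as in the text, the goal is defined implicitly by the environment: nothing in `interact` says what behaviour is rewarded.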
3.1 Example. To make this model more concrete, consider the following “Two Coins
Game”. In each cycle two 50¢ coins are tossed. Before the coins settle the player must
guess at the number of heads that will result: either 0, 1, or 2. If the guess is correct
the player gets to keep both coins and then two new coins are produced and the game
repeats. If the guess is incorrect the player does not receive any coins, and the game is
repeated.
In terms of the agent-environment model, the player is the agent and the system that
produces all the coins, tosses them and distributes the reward when appropriate, is the
environment. The agent’s actions are its guesses at the number of heads in each iteration
of the game: 0, 1 or 2. The observation is the state of the coins when they settle, and the
reward is either $0 or $1.
It is easy to see that for unbiased coins the most likely outcome is 1 head and thus the
optimal strategy for the agent is to always guess 1. However if the coins are significantly
biased it might be optimal to guess either 0 or 2 heads depending on the bias. If this were
the case, then after a number of iterations of the game an intelligent agent would realise
that the coins were probably biased and change its strategy accordingly.
With a little imagination, seemingly any sort of game, challenge, problem or test
can be expressed in this simple framework without too much effort. It should also be
emphasised that this agent-environment framework says nothing about how the agent or
the environment actually work; it only describes their roles.
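The Two Coins Game can be sketched as such an environment in a few lines of code. This is our own illustration; the `p_heads` bias parameter is an addition for experimentation, whereas the game as described uses fair coins:

```python
# Sketch of the Two Coins Game as an environment (our own illustration;
# p_heads lets us make the coins biased, the paper's game uses 0.5).

import random

def two_coins_env(guess, p_heads=0.5):
    """Toss two coins with head probability p_heads; return (heads, reward)."""
    heads = sum(random.random() < p_heads for _ in range(2))
    reward = 1.0 if guess == heads else 0.0   # the agent keeps the $1 or not
    return heads, reward

# Always guessing one head is optimal for fair coins:
# P(1 head) = 1/2, versus P(0 heads) = P(2 heads) = 1/4.
random.seed(1)
wins = sum(two_coins_env(1)[1] for _ in range(1000))
print(wins / 1000)   # close to 0.5 for fair coins
```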
The agent. Formally, the agent is a function, denoted by π, which takes the current
history as input and chooses the next action as output. We do not want to restrict the
agent in any way, in particular we do not require that it is deterministic. A convenient way
of representing the agent then is as a probability measure over actions conditioned on the
complete interaction history. Thus, π(a_3 | o_1 r_1 a_1 o_2 r_2) is the probability of action a_3 in the
third cycle, given that the current history is o_1 r_1 a_1 o_2 r_2. A deterministic agent is simply
one that always assigns a probability of 1 to a single action for any given history. As the
history that the agent can use to select its action expands indefinitely, the agent need not
be Markovian. Indeed, how the agent produces its distribution over actions for any given
history is left open. In artificial intelligence the agent will of course be a machine and so
π will be a computable function.
In general however, π could be anything: an algorithm
that generates the digits of √e as outputs, an incomputable function, or even a human
pushing buttons on a keyboard.
3.2 Example. To illustrate this formalism, consider again the Two Coins Game intro-
duced in Example 3.1. Let P := {0, 1, 2} × {0, 1} be the perception space representing the
number of heads after tossing the two coins and the value of the received reward. Likewise
let A := {0, 1, 2} be the action space representing the agent’s guess at the number of heads
that will occur. Assuming two fair coins, we can represent this environment by µ:
µ(o_k r_k | o_1 r_1 a_1 … o_{k−1} r_{k−1} a_{k−1}) :=
    1/4  if o_k = a_{k−1} ∈ {0, 2} ∧ r_k = 1,
    3/4  if o_k ≠ a_{k−1} ∈ {0, 2} ∧ r_k = 0,
    1/2  if o_k = a_{k−1} = 1 ∧ r_k = 1,
    1/2  if o_k ≠ a_{k−1} = 1 ∧ r_k = 0,
    0    otherwise.
A simple agent that performs well in this environment is the deterministic policy that
assigns probability 1 to the action 1 in every cycle; that is, it always guesses that one
head will be the result of the two coins being tossed. A more complex agent might keep
count of how many heads occur in each cycle and then adapt its strategy if it seems that
the coins are sufficiently biased.
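To make Example 3.2 concrete, here is a small Monte Carlo sketch of the Two Coins environment and the guess-one agent (the simulation setup is our own; the game itself is as defined above). For fair coins the agent's average reward should be close to 1/2, since the probability of exactly one head is 1/2.

```python
import random

def two_coins(guess, p_heads, rng):
    """One cycle of the Two Coins Game: toss two coins, observe the
    number of heads, and pay reward 1 if the guess was correct, else 0."""
    heads = sum(rng.random() < p_heads for _ in range(2))
    return heads, (1.0 if guess == heads else 0.0)

def guess_one(history):
    """The simple deterministic agent: always guess one head."""
    return 1

rng = random.Random(0)
history, total = [], 0.0
for _ in range(10000):
    action = guess_one(history)
    obs, reward = two_coins(action, 0.5, rng)
    history.append((obs, reward, action))
    total += reward
average = total / 10000   # close to 0.5 for fair coins
```

Replacing `p_heads` with a biased value shows why an adaptive agent that tracks the observed head counts can eventually do better than this fixed policy.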
Measure of success. Our next task is to formalise the idea of “profit” or “success”
for an agent. Informally, we know that the agent must try to maximise the amount of
reward it receives, however this could mean several different things. For example, one
agent might quickly find a way to get a reward of 0.9 in every cycle. After 100 cycles it
will have received a total reward of about 90 with an average reward per cycle of close to
0.9. A second agent might spend the first 80 cycles exploring different actions and their
consequences, during which time its average reward might only be 0.2. Having done this
exploration however, it might then know a way to get a reward of 1.0 in every cycle. Thus
after 100 cycles its total reward is only 80 × 0.2 + 20 × 1.0 = 36, giving an average reward per
cycle of just 0.36. After 1,000 cycles however, the second agent will be performing much
better than the first.
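The arithmetic of this comparison is easy to check directly (an illustrative sketch; the per-cycle rewards are the ones given in the text):

```python
def total_quick(n):
    """Agent that immediately earns 0.9 reward per cycle."""
    return 0.9 * n

def total_explorer(n):
    """Agent that averages 0.2 over 80 exploratory cycles, then 1.0."""
    return 80 * 0.2 + max(0, n - 80) * 1.0

# After 100 cycles: 90 versus 36; after 1,000 cycles: 900 versus 936.
```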
Which agent is the better one? The answer depends on how we value reward in the
near future versus reward in the more distant future. In some situations we may want
our agent to perform well fairly quickly, in others we might only care that it eventually
reaches a level of performance that is as high as possible.
A standard way of formalising this is to scale the value of rewards so that they decay
geometrically into the future at a rate given by a discount parameter γ ∈ (0, 1). For
example, with γ = 0.95 a reward of 0.7 that is 10 time steps into the future would be
given a value of 0.7 × (0.95)10 ≈ 0.42. At 100 time steps into the future a reward of 0.7
would have a value of just over 0.004. By increasing γ towards 1 we weight long term
rewards more heavily; conversely, by reducing it we weight them less. In other words,
this parameter controls how short term greedy, or long term farsighted, the agent should
be.
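The numbers above are easy to verify (a quick check using the values from the text):

```python
gamma, reward = 0.95, 0.7

value_10 = reward * gamma**10    # about 0.42, ten steps ahead
value_100 = reward * gamma**100  # just over 0.004, a hundred steps ahead
```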
To work out the expected future value for a given agent and environment interacting,
we take the sum of these discounted rewards into the infinite future and work out its
expected value,

V_µ^π(γ) := (1/Γ) E( Σ_{i=1}^∞ γ^i r_i ).    (1)

In the above, r_i is the reward in cycle i of a given history, γ is the discount rate, γ^i is
the discount applied to the ith reward into the future, Γ := Σ_{i=1}^∞ γ^i is the normalising
constant, and the expected value is taken over all possible interaction sequences between
the agent π and the environment µ.
Under geometric discounting an agent with γ = 0.95 will not plan further than about
20 cycles ahead. Thus we say that the agent has a constant effective horizon of 1/(1−γ).
Since we are interested in universal intelligence, a limited farsightedness is not acceptable
because for every horizon there is a task that needs a larger horizon to be solved. For
instance, while a horizon of 5 is sufficient for tic-tac-toe, it is insufficient for chess. Clearly,
geometric discounting has not solved the problem of how to weight near term rewards
versus long term rewards, it has simply expressed this weighting as a parameter. What
we require is a single definition of machine intelligence, not a range of definitions that
vary according to a free parameter.
A more promising candidate for universal discounting is the near-harmonic, or
quadratic, discount, where we replace γ^i in Equation 1 by 1/i² and modify Γ accordingly.
This has some interesting properties; in particular the agent needs to look forward
into the future in a way that is proportional to its current age. This is appealing since it
seems that humans of age k years usually do not plan their lives for more than, perhaps,
the next k years. More importantly, it allows us to avoid the problem of having to choose
a global time scale or effective horizon [50].
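The "horizon proportional to age" property can be seen numerically. In the sketch below (our own illustration), the tail weight Σ_{i≥k} 1/i² is roughly 1/k, and about half of the weight remaining at age k lies between cycles k and 2k, so a 1/i² discounting agent must keep looking ahead on the order of its current age.

```python
def tail_weight(k, n=10**6):
    """Approximate the tail sum of 1/i^2 for i >= k (truncated at n)."""
    return sum(1.0 / (i * i) for i in range(k, n))

t100, t200 = tail_weight(100), tail_weight(200)
fraction_near = (t100 - t200) / t100   # about 0.5: half the remaining
                                       # weight lies between cycles 100 and 200
```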
Although harmonic discounting has a number of attractive properties [51], an even
simpler and more general solution is possible. If we look at the value function in Equa-
tion 1, we see that discounting plays two roles. Firstly, it normalises rewards received
so that their sum is always finite. Secondly, it weights the reward at different points in
the future which in effect defines a temporal preference. A direct way to solve both of
these problems, without needing an external parameter, is to simply require that the total
reward returned by the environment can never exceed 1. For such a reward summable
environment µ, it follows that the expected value of the sum of rewards is also finite and
thus discounting is no longer required,
V_µ^π := E( Σ_{i=1}^∞ r_i ) ≤ 1.    (2)
One way of viewing this is that the rewards returned by the environment now have the
temporal preference already factored in. The cost is that this is an additional condition
that we place on the space of environments. Previously we required that each reward
signal was in a subset of [0, 1] ∩ Q, now we have the additional constraint that the reward
sum is always bounded (see Subsection 5.2 for further discussion about why this constraint
is reasonable).
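As a simple illustration of a reward summable environment (our own example): an environment that never pays more than 2^−k in cycle k satisfies the constraint, since Σ_{k≥1} 2^−k = 1.

```python
# Partial sums of the maximal reward sequence 2^-k stay below 1.
partial_sums = []
total = 0.0
for k in range(1, 51):
    total += 2.0**-k
    partial_sums.append(total)
```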
As the number of observations increases, the set of hypotheses shrinks and hopefully the
remaining hypotheses become increasingly accurate at modelling the true environment.
The problem is that in any given situation there will likely be a large number of
hypotheses that are consistent with the current set of observations. Thus, if the agent
is going to predict which hypotheses are the most likely to be correct, it must resort to
something other than just the observational information that it has. This is a frequently
occurring problem in inductive inference for which the most common approach is to invoke
the principle of Occam’s razor:
Given multiple hypotheses that are consistent with the data, the simplest
should be preferred.
This is generally considered the rational and intelligent thing to do [104]; indeed, IQ
tests often implicitly test an individual’s ability to use Occam’s razor, as pointed out in
Subsection 2.1.
3.3 Example. Consider the following type of question which commonly appears in
intelligence tests. There is a sequence such as 2, 4, 6, 8, and the test subject needs to
predict the next number. Of course the pattern is immediately clear: The numbers are
increasing by 2 each time, or more mathematically, the kth item is given by 2k. An
intelligent person would easily identify this pattern and predict the next number to be 10.
However, the polynomial 2k⁴ − 20k³ + 70k² − 98k + 48 is also consistent with the data,
in which case the next number in the sequence would be 58. Why then, even if we are
aware of the larger polynomial, do we consider the first answer to be the most likely one?
It is because we apply, perhaps unconsciously, the principle of Occam’s razor. The fact
that intelligence tests define this as the “correct” answer shows us that using Occam’s
razor is considered the intelligent thing to do. Thus, although we do not usually mention
Occam’s razor when defining intelligence, the ability to effectively use it is an important
facet of intelligent behaviour.
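The two hypotheses in Example 3.3 can be checked directly (a quick verification of the numbers in the text):

```python
def simple_hypothesis(k):
    """Occam's choice: the k-th item is 2k."""
    return 2 * k

def complex_hypothesis(k):
    """A degree-4 polynomial that also fits the observed data."""
    return 2 * k**4 - 20 * k**3 + 70 * k**2 - 98 * k + 48

observed = [simple_hypothesis(k) for k in (1, 2, 3, 4)]      # 2, 4, 6, 8
also_fits = [complex_hypothesis(k) for k in (1, 2, 3, 4)]    # 2, 4, 6, 8
predictions = (simple_hypothesis(5), complex_hypothesis(5))  # 10 versus 58
```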
In some cases we may even consider the correct use of Occam’s razor to be a more
important demonstration of intelligence than achieving a successful outcome. Consider,
for example, the following game:
3.4 Example. A questioner lays twenty $10 notes out on a table before you and then
points to the first one and asks “Yes or No?”. If you answer “Yes” he hands you the
money. If you answer “No” he takes it from the table and puts it in his pocket. He then
points to the next $10 note on the table and asks the same question. Although you, as an
intelligent agent, might experiment with answering both “Yes” and “No” a few times, by
the 13th round you would have decided that the best choice seems to be “Yes” each time.
However what you do not know is that if you answer “Yes” in the 13th round then the
questioner will pull out a gun and shoot you! Thus, although answering “Yes” in the 13th
round is the most intelligent choice, given what you know, it is not the most successful
one. An exceptionally dim individual may have failed to notice the obvious relationship
between answers and getting the money, and thus might answer “No” in the 13th round,
thereby saving his life due to what could truly be called “dumb luck”.
What is important then, is not that an intelligent agent succeeds in any given situation,
but rather that it takes actions that we would expect to be the most likely ones to lead
to success. Given adequate experience this might be clear, however often experience is
not sufficient and one must fall back on good prior assumptions about the world, such as
Occam’s razor. It is important then that we test the agents in such a way that they are,
at least on average, rewarded for correctly applying Occam’s razor, even if in some cases
this leads to failure.
There is another subtlety that needs to be pointed out. Often intelligence is thought
of as the ability to deal with complexity. Or in the words of the psychologist Gottfredson,
“. . . g is the ability to deal with cognitive complexity — in particular, with complex
information processing.” [28] It is tempting then to equate the difficulty of an environment
with its complexity. Unfortunately, things are not so straightforward. Consider the
following environment:
3.5 Example. Imagine a very complex environment with a rich set of relationships
between the agent’s actions and observations. The measure that describes this will have
a high complexity. However, also imagine that the reward signal is always maximal no
matter what the agent does. Thus, although this is a very complex environment in
which the agent is unlikely to be able to predict what it will observe next, it is also an
easy environment in the sense that all policies are optimal, even very simple ones that do
nothing at all. The environment contains a lot of structure that is irrelevant to the goal
that the agent is trying to achieve.
From this perspective, a problem is thought of as being difficult if the simplest good
solution to the problem is complex. Easy problems on the other hand are those that have
simple solutions. This is a very natural way to think about the difficulty of problems, or
in our terminology, environments.
Fortunately, this distinction does not affect our use of Occam’s razor. When we talk
about an hypothesis, what we mean is a potential model of the environment from the
agent’s perspective, not just a model that is sufficient with respect to the agent’s goal.
From the agent’s perspective, an incorrect hypothesis that fails to model much of the
environment may be optimal if the parts of the environment that the hypothesis fails to
model are not relevant to receiving reward. However, when Occam’s razor is applied, we
apply it with respect to the complexity of the hypotheses, not the complexity of good
solutions with respect to an objective. Thus, to reward agents on average for correctly
using Occam’s razor, we must weight the environments according to their complexity, not
their difficulty.
Our remaining problem now is to measure the complexity of environments. The
Kolmogorov complexity of a binary string x is defined as the length of the shortest
program that computes x:

K(x) := min_p { l(p) : U(p) = x },

where p is a binary string which we call a program, l(p) is the length of this string in bits,
and U is a prefix universal Turing machine called the reference machine.
To gain an intuition for how this works, consider a binary string 0000 . . . 0 that consists
of a trillion 0s. Although this string is very long, it clearly has a simple structure and
thus we would expect it to have a low complexity. Indeed this is the case because we
can write a very short program p that simply loops a trillion times outputting a 0 each
time. Similarly, other strings with simple patterns have a low Kolmogorov complexity.
On the other hand, if we consider a long irregular random string 111010110000010 . . .
then it is much more difficult to find a short program that outputs this string. Indeed it
is possible to prove that there are so many strings of this form, relative to the number
of short programs, that in general it is impossible for long random strings to have short
programs. In other words, they have high Kolmogorov complexity.
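Although K itself is incomputable, a standard compressor gives a crude, computable upper-bound intuition for this contrast. Using zlib as a stand-in is our own illustration, not a substitute for Kolmogorov complexity:

```python
import random
import zlib

# A long but highly regular string compresses to almost nothing...
regular = b"0" * 10**6
# ...while a long pseudo-random string barely compresses at all.
rng = random.Random(0)
irregular = bytes(rng.getrandbits(8) for _ in range(10**6))

size_regular = len(zlib.compress(regular))
size_irregular = len(zlib.compress(irregular))
```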
general communication channels. Finally, the formal definition places no limits on the
internal workings of the agent. Thus, we can apply the definition to any system that is
able to receive and generate information with view to achieving goals. The main drawback,
however, is that the Kolmogorov complexity function K is not computable and can only
be approximated. This is an important point that we will return to later.
A random agent. The agent with the lowest intelligence, at least among those that
are not actively trying to perform badly, would be one that makes uniformly random
actions. We will call this π^rand. Although this is clearly a weak agent, we cannot simply
conclude that the value of V_µ^{π^rand} will always be low as some environments will generate
high reward no matter what the agent does. Nevertheless, in general such an agent will
not be very successful as it will fail to exploit any regularities in the environment, no
matter how simple they are. It follows then that the values of V_µ^{π^rand} will typically be
low compared to other agents, and thus Υ(π^rand) will be low. Conversely, if Υ(π^rand)
is very low, then the equation for Υ implies that for simple environments, and many
complex environments, the value of V_µ^{π^rand} must also be relatively low. This kind of poor
performance in general is what we would expect of an unintelligent agent.
A very specialised agent. From the equation for Υ, we see that an agent could
have very low universal intelligence but still perform extremely well at a few very specific
and complex tasks. Consider, for example, IBM’s Deep Blue chess supercomputer, which
we will represent by π^dblue. When µ^chess describes the game of chess, V_{µ^chess}^{π^dblue} is very high.
However 2^{−K(µ^chess)} is small, and for µ ≠ µ^chess the value function will be low as π^dblue
only plays chess. Therefore, the value of Υ(π^dblue) will be very low. Intuitively, this is
because Deep Blue is too inflexible and narrow to have general intelligence; a characteristic
weakness of specialised artificial intelligence systems.
A general but simple agent. Imagine an agent that performs very basic learning
by building up a table of observation and action pairs and keeping statistics on the rewards
that follow. Each time an observation that it has seen before occurs, the agent takes the
action with highest estimated expected reward in the next cycle with 90% probability,
or a random action with 10% probability. We will call this agent π^basic. It is immediately
clear that many environments, both complex and very simple, will have at least some
structure that such an agent would take advantage of. Thus, for almost all µ we will have
V_µ^{π^basic} > V_µ^{π^rand} and so Υ(π^basic) > Υ(π^rand). Intuitively, this is what we would expect as
π^basic, while very simplistic, is surely more intelligent than π^rand.
Similarly, as π^dblue will fail to take advantage of even trivial regularities in some of
the most basic environments, Υ(π^basic) > Υ(π^dblue). This is reasonable as our aim is to
measure a machine’s level of general intelligence. Thus an agent that can take advantage
of basic regularities in a wide range of environments should rate more highly than a
specialised machine that fails outside of a very limited domain.
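The gap between π^rand and π^basic shows up in even the simplest simulations. The sketch below is our own minimal reading of π^basic (a table of per-(observation, action) reward statistics, exploited 90% of the time) in a toy environment, of our invention, that pays 1 whenever the action matches the observation:

```python
import random
from collections import defaultdict

class PiBasic:
    """Minimal reading of pi^basic: keep reward statistics for each
    (observation, action) pair and act greedily 90% of the time."""
    def __init__(self, actions, rng):
        self.actions, self.rng = actions, rng
        self.stats = defaultdict(lambda: [0.0, 0])  # (obs, act) -> [reward sum, count]

    def act(self, obs):
        if self.rng.random() < 0.1:
            return self.rng.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.stats[(obs, a)][0] / max(1, self.stats[(obs, a)][1]))

    def learn(self, obs, act, reward):
        self.stats[(obs, act)][0] += reward
        self.stats[(obs, act)][1] += 1

rng = random.Random(1)

def evaluate(act, learn=None, cycles=5000):
    """Environment: a random observation in {0, 1}; reward 1 iff the
    action matches it."""
    total = 0.0
    for _ in range(cycles):
        obs = rng.choice([0, 1])
        a = act(obs)
        r = 1.0 if a == obs else 0.0
        if learn:
            learn(obs, a, r)
        total += r
    return total / cycles

basic = PiBasic([0, 1], rng)
v_basic = evaluate(basic.act, basic.learn)         # approaches 0.95
v_rand = evaluate(lambda obs: rng.choice([0, 1]))  # stays near 0.5
```

The learning agent quickly exploits the simple regularity that the random agent ignores, mirroring the ordering Υ(π^basic) > Υ(π^rand).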
A simple agent with more history. The first order structure of π^basic, while very
general, will miss many simple exploitable regularities. Consider the following environment
µ^alt. Let R = [0, 1] ∩ Q, A = {up, down} and O = {ε}, where ε is the empty string.
In cycle k the environment generates a reward of 2^{−k} each time the agent’s action is dif-
ferent to its previous action. Otherwise the reward is 0. We can define this environment
formally,

µ^alt(o_k r_k | o_1 … a_{k−1}) :=
    1  if a_{k−1} ≠ a_{k−2} ∧ r_k = 2^{−k},
    1  if a_{k−1} = a_{k−2} ∧ r_k = 0,
    0  otherwise.
Clearly the optimal strategy for an agent is simply to alternate between the actions up
and down. Even though this is very simple, this strategy requires the agent to correlate
its current action with its previous action, something that π^basic cannot do.
A natural extension of π^basic is to use a longer history of actions, observations and
rewards in its internal table. Let π^2back be the agent that builds a table of statistics
for the expected reward conditioned on the last two actions, rewards and observations.
It is immediately clear that π^2back will exploit the structure of the µ^alt environment.
Furthermore, by definition π^2back is a generalisation of π^basic and thus it will adapt to
any regularity that π^basic can adapt to. It follows then that in general V_µ^{π^2back} > V_µ^{π^basic}
and so Υ(π^2back) > Υ(π^basic), as we would intuitively expect. In the same way we can
extend the history that the agent utilises back further and produce even more powerful
agents that are able to adapt to more lengthy temporal structures and which will have
still higher machine intelligence.
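The µ^alt environment is easy to simulate, and doing so shows why the alternating strategy is optimal while any constant strategy earns nothing (the simulation sketch is ours; the reward rule is as defined above):

```python
def run_mu_alt(policy, cycles):
    """Total reward in mu_alt: cycle k pays 2^-k when the current action
    differs from the previous action, and 0 otherwise."""
    total, prev = 0.0, None
    for k in range(1, cycles + 1):
        a = policy(k)
        if prev is not None and a != prev:
            total += 2.0**-k
        prev = a
    return total

alternating = run_mu_alt(lambda k: ("up", "down")[k % 2], 50)  # near the maximum
constant = run_mu_alt(lambda k: "up", 50)                      # exactly 0
```

The alternating policy collects 2^−k in every cycle from the second onwards, so its total approaches the maximum achievable sum of 1/2 (no reward is possible in the first cycle, as there is no previous action).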
[Figure 2 diagram: states “bottom” and “top”; at the bottom the action rest yields r = 2^−k−4 and climb yields r = 0.0; at the top, rest or climb yields r = 2^−k.]
Figure 2: A simple game in which the agent climbs a playground slide and slides
back down again. A shortsighted agent will always just rest at the bottom of the
slide.
In a similar way agents of increasing complexity and adaptability can be defined which
will have still greater intelligence. However with more complex agents it is usually difficult
to theoretically establish whether one agent has more or less universal intelligence than
another. Nevertheless, in the simple examples above we saw that the more flexible and
powerful an agent was, the higher its universal intelligence.
A very intelligent agent. A very smart agent would perform well in simple environ-
ments, and reasonably well compared to most other agents in more complex environments.
From the equation for universal intelligence this would clearly produce a very high value
for Υ. Conversely, if Υ was very high then the equation for Υ implies that the agent must
perform well in most simple environments and reasonably well in many complex ones also.
A super intelligent agent. Consider what would be required to maximise the value
of Υ. By definition, a “perfect” agent would always pick the action which had greatest
expected future reward. To do this, for every environment µ ∈ E the agent must take into
account how likely it is that it is facing µ given the interaction history so far, and the prior
probability of µ, that is, 2^{−K(µ)}. It would then consider all possible future interactions
that might occur, and how likely they are, and from this select the action in the current
cycle that maximises the expected future reward.
This perfect theoretical agent is known as AIXI. It has been precisely defined and
studied at length in [50] (see [54] for a shorter exposition). The connection between
universal intelligence and AIXI is not coincidental: Υ was originally derived from the
so called “intelligence order relation” (see Definition 5.14 in [50]), which in turn was
constructed to reflect the equations for AIXI. As such we can define the upper bound on
universal intelligence to be Ῡ := max_π Υ(π), a maximum that is attained by AIXI.
AIXI is not computable due to the incomputability of K, and even if K were com-
putable, accurately computing the expectations to maximise future expected rewards
would be practically infeasible. Nevertheless, AIXI is interesting from a theoretical per-
spective as it defines, in an elegant way, what might be considered to be the perfect
theoretical artificial intelligence. Indeed many strong optimality properties have been
proven for AIXI. For example, it has been proven that AIXI converges to optimal perfor-
mance in any environment where this is at all possible for a general agent (see Theorem
5.34 of [50]). This optimality result includes ergodic Markov decision processes, predic-
tion problems, classification problems, bandit problems and many others [60, 59]. These
mathematical results prove that agents with very high universal intelligence are extremely
powerful and general.
concept in the strongest and cleanest way possible, and then to accept that our ability
to test for this ideal has limitations. In other words, our task is to find better and more
effective tests, not to redefine what it is that we are testing for. This is the attitude we
have taken here, though in this paper our focus is almost entirely on the first part, that
is, establishing a strong theoretical definition of machine intelligence.
Although some of the criteria by which we judge practical tests of intelligence are not
relevant to a pure definition of intelligence, many of the desirable properties are similar.
Thus to understand the strengths and weaknesses of our definition, consider again the
desirable properties for a test of intelligence from Subsection 2.3.
Valid. The most important property of any proposed formal definition of intelligence
is that it does indeed describe something that can reasonably be called “intelligence”.
Essentially, this is the core argument of this report so far: We have taken a mainstream
informal definition and step by step formalised it. Thus, so long as our informal definition
is reasonable, and our formalisation argument holds, the result can reasonably be described
as a formal definition of intelligence.
Meaningful. As we saw in the previous section, universal intelligence orders the power
and adaptability of simple agents in a natural way. Furthermore, a high value of Υ implies
that the agent performs well in most simple and moderately complex environments. Such
an agent would be an impressively powerful and flexible piece of technology, with many
potential uses. Clearly then, universal intelligence is inherently meaningful, independent
of whether or not one considers it to be a measure of intelligence.
Wide range. As we saw in the previous section, universal intelligence is able to order
the intelligence of even the most basic agents such as π^rand, π^basic, π^2back and π^2forward. At
the other extreme we have the theoretical super intelligent agent AIXI which has maximal
Υ value. Thus, universal intelligence spans trivial learning algorithms right up to super
intelligent agents. This seems to be the widest range possible for a measure of machine
intelligence.
General. As the agent’s performance on all well defined environments is factored into
its Υ value, a broader performance metric is difficult to imagine. Indeed, a well defined
measure of intelligence that is broader than universal intelligence would seem to contradict
the Church-Turing thesis as it would imply that we could effectively measure an agent’s
performance for some well defined problem that was outside of the space of computable
measures.
many logical puzzle problems, another might be more linguistic in emphasis, while another
stresses visual reasoning. Modern intelligence tests like the Stanford-Binet try to minimise
this problem by covering the most important areas of human reasoning both verbally and
non-verbally. This helps but it is still very anthropocentric as we are still only testing
those abilities that we think are important for human intelligence.
For an intelligence measure for arbitrary machines we have to base the test on some-
thing more general and principled: Universal Turing computation. As all proposed models
of computation have thus far been equivalent in their expressive power, the concept of
computation appears to be a fundamental theoretical property rather than the product
of any specific culture. Thus, by weighting different environments depending on their
Kolmogorov complexity, and considering the space of all computable environments, we
have avoided having to define intelligence with respect to any particular culture, species
etc.
Unfortunately, we have not entirely removed the problem. The environmental distri-
bution 2^{−K(µ)} that we have used is invariant, up to a multiplicative constant, to changes
in the reference machine U. Although this affords us some protection, the relative intel-
ligence of agents can change if we change our reference machine. One approach to this
problem is to limit the complexity of the reference machine, for example by limiting its
state-symbol complexity. We expect that for highly intelligent machines that can deal
with a wide range of environments of varying complexity, the effect of changing from one
simple reference machine to another will be minor. For simple agents, such as those consid-
ered in Subsection 3.4, the ordering of their machine intelligence was also not particularly
sensitive to natural choices of reference machine. Recently attempts have been made to
make algorithmic probability completely unique and objective by identifying which uni-
versal Turing machines are, in some sense, the most simple [74]. Unfortunately however,
an elegant solution to this problem has not yet been found.
Practical. In its current form the definition cannot be directly turned into a test of
intelligence as the Kolmogorov complexity function is not computable. Thus in its pure
form we can only use it to analyse the nature of intelligence and to theoretically examine
the intelligence of mathematically defined learning algorithms.
In order to use universal intelligence more generally we will need to construct a work-
able test that approximates an agent’s Υ value. The equation for Υ suggests how we
might approach this problem. Essentially, an agent’s universal intelligence is a weighted
sum of its performance over the space of all environments. Thus, we could randomly gen-
erate programs that describe environmental probability measures and then test the agent’s
4 DEFINITIONS AND TESTS OF MACHINE INTELLIGENCE 27
performance against each of these environments. After sampling sufficiently many envi-
ronments the agent’s approximate universal intelligence would be computed by weighting
its score in each environment according to the complexity of the environment as given by
the length of its program. Another possibility might be to try to approximate the sum by
enumerating environmental programs from short to long, as the short ones contribute
by far the most to the sum. However, in this case we will need to be able to reset
the state of the agent so that it cannot cheat by learning our environmental enumeration
method. In any case, various practical challenges will need to be addressed before uni-
versal intelligence can be used to construct an effective intelligence test. As this would
be a significant project in its own right, in this paper we focus on the theoretical issues
surrounding universal intelligence.
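To make the sampling idea concrete, here is a toy sketch. It is entirely our own construction: the “programs” are bit strings indexing trivial pattern-prediction environments, and the weighting 2^−l(p) stands in for the complexity weighting; a real test would sample genuine environmental programs.

```python
from itertools import product

def toy_environments(max_len):
    """Enumerate (weight, pattern) pairs, shortest 'programs' first.
    A pattern of n bits stands in for a program of length 2n bits
    (one extra bit per symbol as a crude stop marker), weight 2^-2n."""
    for n in range(1, max_len + 1):
        for bits in product((0, 1), repeat=n):
            yield 2.0 ** (-2 * n), bits

def value(agent, pattern, cycles=100):
    """Average reward: 1 whenever the agent predicts the next symbol
    of the repeating pattern."""
    history, correct = [], 0.0
    for k in range(cycles):
        a = agent(history)
        o = pattern[k % len(pattern)]
        r = 1.0 if a == o else 0.0
        history.append((o, r, a))
        correct += r
    return correct / cycles

def approx_upsilon(agent, max_len=4):
    """Complexity-weighted sum of the agent's value over the toy class."""
    return sum(w * value(agent, pat) for w, pat in toy_environments(max_len))

copycat = lambda h: h[-1][0] if h else 0   # predict the last observation again
always_zero = lambda h: 0                  # always predict 0
```

As a measure of general ability should demand, the simple learner scores higher than the fixed agent under this weighting, since it exploits the heavily weighted short patterns.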
“Intelligence is the computational part of the ability to achieve goals in the world.
Varying kinds and degrees of intelligence occur in people, many animals and some
machines.” J. McCarthy [72]
the achievement of behavioral subgoals that support the system’s ultimate goal.”
J. S. Albus [1]
The position taken by Albus is especially similar to ours. Although the quote above
does not explicitly mention the need to be able to perform well in a wide range of envi-
ronments, at a later point in the same paper he mentions the need to be able to succeed
in a “large variety of circumstances”.
“Intelligent systems are expected to work, and work well, in many different envi-
ronments. Their property of intelligence allows them to maximize the probability
of success even if full knowledge of the situation is not available. Functioning of
intelligent systems cannot be considered separately from the environment and the
concrete situation including the goal.” R. R. Gudwin [33]
While this definition is consistent with the position we have taken, when trying to
actually test the intelligence of an agent Gudwin does not believe that a “black box” be-
haviour based approach is sufficient, rather his approach is to look at the “. . . architectural
details of structures, organizations, processes and algorithms used in the construction of
the intelligent systems” [33]. Our perspective is simply not to care whether an agent looks
intelligent on the inside. If it is able to perform well in a wide range of environments,
that is all that matters. For more discussion on this point see our response to Block’s and
Searle’s arguments in Subsection 5.2.
“We define two perspectives on artificial system intelligence: (1) native intelli-
gence, expressed in the specified complexity inherent in the information content
of the system, and (2) performance intelligence, expressed in the successful (i.e.,
goal-achieving) performance of the system in a complicated environment.” J. A.
Horst [45]
Here we see two distinct notions of intelligence, a performance based one and an infor-
mation content one. This is similar to the distinction between fluid intelligence and crystal-
lized intelligence made by the psychologist Cattell (see Subsection 2.5). The performance
notion of intelligence is similar to our definition, with the exception that performance
is measured in a complex environment rather than across a wide range of environments.
This perspective appears in some other definitions also,
“[An intelligent agent does what] is appropriate for its circumstances and its goal, it
is flexible to changing environments and changing goals, it learns from experience,
and it makes appropriate choices given perceptual limitations and finite computa-
tion.” D. Poole [77]
“. . . in any real situation behavior appropriate to the ends of the system and adap-
tive to the demands of the environment can occur, within some limits of speed and
complexity.” A. Newell and H. A. Simon [76]
“Intelligence is the ability for an information processing agent to adapt to its envi-
ronment with insufficient knowledge and resources.” P. Wang [105]
be applied to any test of intelligence that considers only a system’s external behaviour,
that is, most intelligence tests.
A more common criticism is that passing the Turing test is not necessary to establish
intelligence. Usually this argument is based on the fact that the test requires the machine
to have a highly detailed model of human knowledge and patterns of thought, making
it a test of humanness rather than intelligence [24, 23]. Indeed, even small things like
pretending to be unable to perform complex arithmetic quickly and faking human typing
errors become important, something which clearly goes against the purpose of the test.
The Turing test has other problems as well. Current AI systems are a long way
from being able to pass an unrestricted Turing test. From a practical point of view this
means that the full Turing test is unable to offer much guidance to our work. Indeed,
even though the Turing test is the most famous test of machine intelligence, almost no
current research in artificial intelligence is specifically directed toward being able to pass
it. Unfortunately, simply restricting the domain of conversation in the Turing test to
make the test easier, as is done in the Loebner competition [67], is not sufficient. With
restricted conversation possibilities the most successful Loebner entrants are even more
focused on faking human fallibility, rather than anything resembling intelligence [47].
Finally, the Turing test returns different results depending on who the human judges are.
Its unreliability has in some cases led to clearly unintelligent machines being classified
as human, and at least one instance of a human actually failing a Turing test. When
queried about the latter, one of the judges explained that “no human being would have
that amount of knowledge about Shakespeare”[85].
child has a significant amount of elementary knowledge about how to interact with the
world, this knowledge would be of little use when trying to compress an encyclopedia full
of abstract “adult knowledge” about the world.
Competitive games. The Turing Ratio method of Masum et al. places more emphasis
on tasks and games than on cognitive tests. Similar to our own definition, they propose
that “. . . doing well at a broad range of tasks is an empirical definition of ‘intelligence’.” [71]
To quantify this they seek to identify tasks that measure important abilities, admit a
series of strategies that are qualitatively different, and are reproducible and relevant over
an extended period of time. They suggest a system of measuring performance through
pairwise comparisons between AI systems that is similar to that used to rate players
in the international chess rating system. The key difficulty, however, which the authors
acknowledge is an open challenge, is to work out what these tasks should be, and to
quantify just how broad, important and relevant each is. In our view these are some of the
most central problems that must be solved when attempting to construct an intelligence
test. Thus we consider this approach to be incomplete in its current state.
designed for humans, such as the Wechsler Adult Intelligence Scale and Raven’s Progressive
Matrices (see Subsection 2.1).
As effective as these tests are for humans, we believe that they are unlikely to be
adequate for measuring machine intelligence. For a start they are highly anthropocentric.
Another problem is that they embody basic assumptions about the test subject that are
likely to be violated by computers. For example, consider the fundamental assumption
that the test subject is not simply a collection of specialised algorithms designed only
for answering common IQ test questions. While this is obviously true of a human, or
even an ape, it may not be true of a computer. The computer could be nothing more
than a collection of specific algorithms designed to identify patterns in shapes, predict
number sequences, write poems on a given subject or solve verbal analogy problems —
all things that AI researchers have worked on. Such a machine might be able to obtain a
respectable IQ score [80], even though outside of these specific test problems it would be
next to useless. If we try to correct for these limitations by expanding beyond standard
tests, as Bringsjord and Schimanski seem to suggest, this once again opens up the difficulty
of exactly what, and what not, to test for. Thus we consider Psychometric AI, at least as
it is currently formulated, to only partially address this central question.
C-Test. One perspective among psychologists who support the g-factor view of intelligence is that intelligence is “the ability to deal with complexity” [28]. Thus, in a test of
intelligence, the most difficult questions are the ones that are the most complex because
these will, by definition, require the most intelligence to solve. It follows then that if
we could formally define and measure the complexity of test problems using complexity
theory we could construct a formal test of intelligence. The possibility of doing this was
perhaps first suggested by Chaitin [16]. While numerous difficulties along this path remain
to be dealt with, we believe that it is the most natural approach and offers many advantages: It is
formally motivated, precisely defined and potentially could be used to measure the perfor-
mance of both computers and biological systems on the same scale without the problem
of bias towards any particular species or culture.
Essentially this is the approach that we have taken. Universal intelligence is based
on the universally optimal AIXI agent for active environments, which in turn is based
on Kolmogorov complexity and Solomonoff’s universal model of sequence prediction.
A relative of universal intelligence is the C-Test of Hernández-Orallo, which was also
inspired by Solomonoff induction and Kolmogorov complexity [41, 42]. Glossing over
some technicalities, the essential relationship is this: Solomonoff induction underlies the
C-Test in passive environments, just as AIXI underlies universal intelligence in active
environments.
The C-Test consists of a number of sequence prediction and abduction problems simi-
lar to those that appear in many standard IQ tests. The test has been successfully applied
to humans with intuitively reasonable results [42, 40]. Similar to standard IQ tests, the
C-Test always ensures that each question has an unambiguous answer in the sense that
there is always one hypothesis consistent with the observed pattern that has significantly
lower complexity than the alternatives. Besides making the test easier to score, this has
the added advantage of reducing the test’s sensitivity to changes in the reference machine.
The key difference to sequence problems that appear in standard intelligence tests is
that the questions are based on a formally expressed measure of complexity. To overcome
the problem of Kolmogorov complexity not being computable, the C-Test instead uses
Levin’s Kt complexity [65]. In order to retain the invariance property of Kolmogorov
complexity, Levin complexity requires the additional assumption that the universal Turing
machines are able to simulate each other in linear time. As far as we know, this is the
only formal definition of intelligence that has so far produced a usable test of intelligence.
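In the standard formulation (see Li and Vitányi [66]), the Kt complexity of a string $x$ trades program length against the logarithm of running time:

```latex
Kt(x) \;=\; \min_{p} \bigl\{\, \ell(p) + \log t(U,p,x) \;:\; U(p) = x \,\bigr\}
```

where $\ell(p)$ is the length of program $p$ and $t(U,p,x)$ is the number of steps the universal machine $U$ takes to produce $x$ from $p$. The additive $\log t$ penalty is what makes $Kt$ feasible to work with in practice, unlike plain Kolmogorov complexity.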
To illustrate the C-Test, some example problems, along with their complexities, can be
found in [42]; naturally, more complex patterns are also more difficult.
Our main criticism of the C-Test is that it is a static test limited to passive environ-
ments. As we have argued earlier, we believe that a better approach is to use dynamic
intelligence tests where the agent must interact with an environment in order to solve
problems. As AIXI is a generalisation of Solomonoff induction from passive to active en-
vironments, universal intelligence could be viewed as generalising the C-Test from passive
to active environments.
Smith’s Test. Another complexity based formal definition of intelligence that ap-
peared recently in an unpublished report is due to W. D. Smith [89]. His approach
has a number of connections to our work, indeed Smith states that his work is largely
a “. . . rediscovery of recent work by Marcus Hutter”. Perhaps this overstates the
similarities: while there are some connections, there are also many important
differences.
The basic structure of Smith’s definition is that an agent faces a series of problems
that are generated by an algorithm. In each iteration the agent must try to produce
the correct response to the problem that it has been given. The problem generator then
responds with a score of how good the agent’s answer was. If the agent so desires, it can
submit another answer to the same problem. At some point the agent asks the
problem generator to move on to the next problem, and the score it received
for its last answer to the current problem is then added to its cumulative score. Each
interaction cycle counts as one time step and the agent’s intelligence is then its total
cumulative score considered as a function of time. In order to keep things feasible, the
problems must all be in the complexity class P, that is, decision problems which can be
solved by a deterministic Turing machine in polynomial time.
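The interaction protocol just described might be sketched as follows. The problem stream (squaring), the 0/1 scoring rule and the agents are hypothetical stand-ins of ours, not Smith's, and for brevity the sketch collapses the option of resubmitting answers to the same problem:

```python
# Sketch of a Smith-style test: an agent answers a stream of algorithmically
# generated problems (all meant to be solvable in polynomial time), and its
# intelligence is its cumulative score as a function of time.

def problem_generator():
    """Yield (instance, scorer) pairs; every problem should lie in P."""
    n = 0
    while True:
        n += 1
        yield n, (lambda answer, target=n * n: 1.0 if answer == target else 0.0)

def run_test(agent, steps):
    """Return the agent's cumulative score after each interaction cycle."""
    gen = problem_generator()
    history, total = [], 0.0
    for _ in range(steps):
        instance, scorer = next(gen)
        total += scorer(agent(instance))  # score is banked on moving on
        history.append(total)
    return history

print(run_test(lambda n: n * n, 5))  # an agent that knows the task
print(run_test(lambda n: 0, 5))      # an agent that always answers 0
```

As the sketch makes plain, everything interesting is hidden inside `problem_generator`, which is precisely the part Smith leaves to a common pool of contributed tests.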
We have three main criticisms of Smith’s definition. Firstly, while for practical reasons
it might make sense to restrict problems to be in P, we do not see why this practical
restriction should be a part of the very definition of intelligence. If some breakthrough
meant that agents could solve difficult problems in not just P but sometimes in NP as
well, then surely these new agents would be more intelligent? We had similar objections
to informal definitions of machine intelligence that included efficiency requirements in
Subsection 4.1.
Our second criticism is that the way intelligence is measured is essentially static, that
is, the environments are passive. As we have argued before, we believe that dynamic
testing in active environments is a better measure of a system’s intelligence. To put this
argument yet another way: Succeeding in the real world requires you to be more than an
insightful spectator!
The final criticism is that, while the definition is somewhat formal, it still
leaves open the important question of what exactly the tests should be. Smith suggests
that researchers should dream up tests and then contribute them to some common pool
of tests. As such, this is not a fully specified definition.
The columns of Table 1, from left to right, are: Valid, Informative, Wide Range, General,
Dynamic, Unbiased, Fundamental, Formal, Objective, Fully Defined, Universal, Practical,
and Test vs. Def. The rows, one per intelligence test or definition, are:
Turing Test • · · · • · · · · • · • T
Total Turing Test • · · · • · · · · • · · T
Inverted Turing Test • • · · • · · · · • · • T
Toddler Turing Test • · · · • · · · · · · • T
Linguistic Complexity • • · · · · • • · • • T
Text Compression Test • • · • • • T
Turing Ratio • ? ? ? ? ? · ? ? T/D
Psychometric AI • ? • · • • • · • T/D
Smith’s Test • • · ? · ? • T/D
C-Test • • · T/D
Universal Intelligence · D
Table 1: In the table ✓ means “yes”, • means “debatable”, · means “no”, and ?
means unknown. When something is rated as unknown that is usually because
the test in question is not sufficiently specified.
Practical. A test should be performable quickly and automatically, while from
a definition it should be possible to create an efficient test.
Test vs. Def. Finally, we note whether the proposal is more of a test, more of a
definition, or something in between.
It’s obviously false, there’s nothing in your definition, just a few equa-
tions. Perhaps the most common criticism is also the most vacuous one: It’s obviously
wrong! These people seem to believe that defining intelligence with an equation is clearly
impossible, and thus there must be very large and obvious flaws in our work. Not surpris-
ingly these people are also the least likely to want to spend 10 minutes having the material
explained to them. Unfortunately, none of these people have been able to communicate
why the work is so obviously flawed in any concrete way — despite, in one instance, one
of the authors chasing the poor fellow out of the conference centre and down the street
begging for an explanation. If anyone would like to properly explain their position to us
in the future, we promise not to chase you down the street!
It’s obviously correct, indeed everybody already knows this stuff. Cu-
riously, the second most common criticism is the exact opposite: The work is obviously
right, and indeed it is already well known. Digging deeper, the heart of this criticism
comes from the perception that we have not done much more than just describe rein-
forcement learning. If you already accept that the reinforcement learning framework is
the most general and flexible way to describe artificial intelligence, and not everybody
does, then by mixing in Occam’s razor and a dash of complexity theory, the equation for
universal intelligence follows in a fairly straightforward way. While this is true, the way
in which we have brought these things together has never been done before, although it
does have some connection to other work, as discussed in Subsection 4.2. Furthermore,
simply coming up with an equation is not enough; one must argue that what the equation
describes is in fact “intelligence” in a sense that is reasonable for machines.
We have addressed this question in three main ways: Firstly, in Section 2 we devel-
oped an informal definition of intelligence based on expert definitions which was then
piece by piece formalised leading to the equation for Υ in Subsection 3.3. This chain of
argument strongly ties our equation for intelligence with existing informal definitions and
ideas on the nature of intelligence. Secondly, in Subsections 3.4 and 3.5 we showed that
the equation has properties that are consistent with a definition of intelligence. Finally,
in Subsection 3.4 it was shown that universal intelligence is strongly connected to the
theory of universally optimal learning agents, in particular AIXI. From this it follows that
machines with very high universal intelligence have a wide range of powerful optimality
properties. Clearly then, what we have done goes far beyond merely restating elementary
reinforcement learning theory.
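For reference, the equation being defended here, from Subsection 3.3, weights the expected performance $V^{\pi}_{\mu}$ of an agent $\pi$ in each computable environment $\mu$ by an Occam prior derived from the environment's Kolmogorov complexity $K(\mu)$:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Reinforcement learning contributes only the agent–environment framing and the value $V^{\pi}_{\mu}$; Occam's razor and complexity theory enter through the $2^{-K(\mu)}$ weighting over the space $E$ of computable environments.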
of the universe that is not computable in the above sense. Nor is there any experimental
evidence showing that such a physical law must exist. This includes quantum theory
and chaotic systems, both of which can be extremely difficult to compute for some physical
systems, but which are not fundamentally incomputable theories. Quantum computers
can compute with lower time complexity than classical Turing machines; however, they
are unable to compute anything that a classical Turing machine cannot, given enough
time. Thus, as there is no hard evidence of incomputable processes in
the universe, our assumption that the agent’s environment has a computable distribution
is certainly not unreasonable.
If a physical process were ever discovered that was not Turing computable, then this
would likely result in a new extended model of computation. Just as we have based
universal intelligence on the Turing model of computation, it might be possible to construct
a new definition of universal intelligence based on this new model in a natural way.
Finally, even if the universe were not computable, and we did not update our formal
definition of intelligence to take this into account, the fact that everything in physics so
far is computable means that a computable approximation to our universe would still be
extremely accurate over a huge range of situations. In which case, an agent that could
deal with a wide range of computable environments would most likely still function well
within such a universe.
as it is able to pass the Turing test, but is in fact no more than just a big look-up table of
questions and answers [10] (for a related argument see [35]). Although such a look-up table
based machine would be unfeasibly large, the fact that a finite machine could in theory
consistently pass the Turing test, seemingly without any real intelligence, is worrisome.
Our formal measure of machine intelligence could be challenged in the same way, as could
any test of intelligence that relies only on an agent’s external behaviour.
Our response to this is very simple: If an agent has a very high value of Υ then it
is, by definition, able to successfully operate in a wide range of environments. We simply
do not care whether the agent is efficient, due to some very clever algorithm, or absurdly
inefficient, for example by using an unfeasibly gigantic look-up table of precomputed
answers. The important point for us is that the machine has an amazing ability to solve
a huge range of problems in a wide variety of environments.
that our definition of intelligence is perhaps too broad in its scope. Currently we know of
no such result.
Interestingly, if it could be shown that an upper limit on Υ existed for feasible machines
and that humans performed above this limit, then this would prove that humans have some
incomputable element to their operation, perhaps consciousness, which is of real practical
significance to their performance.
5.3 Conclusion
“. . . we need a definition of intelligence that is applicable to machines as well
as humans or even dogs. Further, it would be helpful to have a relative
measure of intelligence, that would enable us to judge one program more or
less intelligent than another, rather than identify some absolute criterion.
Then it will be possible to assess whether progress is being made . . . ” W. L.
Johnson [55]
Despite the obvious significance of formal definitions of intelligence for research, and
despite calls for more direct measures of machine intelligence to replace the problematic
Turing test and other imitation based tests, little work has been done in this area. In this
paper we have attempted to tackle this problem by taking an informal definition of
intelligence modelled on expert definitions of human intelligence, and then generalising
and formalising it. We believe that the resulting mathematical definition captures the
concept of machine intelligence in a very powerful and yet elegant way. Furthermore, by considering
alternative, more tractable measures of complexity, practical tests that estimate universal
intelligence should be possible. Developing such tests will be the next major task in this
direction of research.
The fact that we have stated our definition of machine intelligence in precise mathe-
matical terms, rather than the more usual vaguely worded descriptions, means that there
is no reason why criticisms of our approach should not be equally clear and precise. At
the very least we hope that this in itself will help raise the debate over the definition and
nature of machine intelligence to a new level of scientific rigour.
Acknowledgements
This work was supported by the Swiss NSF grant 200020-107616.
References
[1] J. S. Albus. Outline for a theory of intelligence. IEEE Trans. Systems, Man and
Cybernetics, 21(3):473–509, 1991.
[2] N. Alvarado, S. Adams, S. Burbeck, and C. Latta. Beyond the Turing test: Per-
formance metrics for evaluating a computer simulation of the human mind. In
Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD, USA,
2002. North-Holland.
[3] A. Anastasi. What counselors should know about the use and interpretation of
psychological tests. Journal of Counseling and Development, 70(5):610–615, 1992.
[4] A. Asohan. Leading humanity forward. The Star, October 14, 2003.
[5] T. C. Bell, J. G. Cleary, and I. H. Witten. Text compression. Prentice Hall, 1990.
REFERENCES 40
[63] S. Legg and M. Hutter. A formal measure of machine intelligence. In Annual Ma-
chine Learning Conference of Belgium and The Netherlands (Benelearn’06), Ghent,
2006.
[64] S. Legg and M. Hutter. A collection of definitions of intelligence. In B. Go-
ertzel and P. Wang, editors, Advances in Artificial General Intelligence: Concepts,
Architectures and Algorithms, volume 157 of Frontiers in Artificial Intelligence
and Applications, pages 17–24, Amsterdam, NL, 2007. IOS Press. Online version:
www.vetta.org/shane/intelligence.html.
[65] L. A. Levin. Universal sequential search problems. Problems of Information Trans-
mission, 9:265–266, 1973.
[66] M. Li and P. M. B. Vitányi. An introduction to Kolmogorov complexity and its
applications. Springer, 2nd edition, 1997.
[67] H. G. Loebner. The Loebner prize — The first Turing test.
http://www.loebner.net/Prizef/loebner-prize.html, 1990.
[68] M. Looks, B. Goertzel, and C. Pennachin. Novamente: An integrative architecture
for general intelligence. In AAAI Fall Symposium, Achieving Human-level intelli-
gence, 2004.
[69] E. M. Macphail. Vertebrate intelligence: The null hypothesis. In L. Weiskrantz,
editor, Animal Intelligence, pages 37–50. Clarendon, Oxford, 1985.
[70] M. V. Mahoney. Text compression as a test for artificial intelligence. In AAAI/IAAI,
1999.
[71] H. Masum, S. Christensen, and F. Oppacher. The Turing ratio: Metrics for open-
ended tasks. In GECCO 2002: Proceedings of the Genetic and Evolutionary Compu-
tation Conference, pages 973–980, New York, 2002. Morgan Kaufmann Publishers.
[72] J. McCarthy. What is artificial intelligence?
www-formal.stanford.edu/jmc/whatisai/whatisai.html, 2004.
[73] M. Minsky. The Society of Mind. Simon and Schuster, New York, 1985.
[74] M. Müller. Stationary algorithmic probability. Technical report, TU Berlin, Berlin,
2006. http://arXiv.org/abs/cs/0608095.
[75] U. Neisser, G. Boodoo, T. J. Bouchard, Jr., A. W. Boykin, N. Brody, S. J. Ceci,
D. F. Halpern, J. C. Loehlin, R. Perloff, R. J. Sternberg, and S. Urbina. Intelligence:
Knowns and unknowns. American Psychologist, 51(2):77–101, 1996.
[76] A. Newell and H. A. Simon. Computer science as empirical enquiry: Symbols and
search. Communications of the ACM 19, 3:113–126, 1976.
[77] D. Poole, A. Mackworth, and R. Goebel. Computational Intelligence: A logical
approach. Oxford University Press, New York, NY, USA, 1998.
[78] J. Raven. The Raven’s progressive matrices: Change and stability over culture and
time. Cognitive Psychology, 41:1–48, 2000.
Zh. I. Reznikova and B. Ya. Ryabko. Analysis of the language of ants by information-theoretic
methods. Problems of Information Transmission, 22:245–249, 1986.
[80] P. Sanghi and D. L. Dowe. A computer program capable of passing I.Q. tests. In
Proc. 4th ICCS International Conference on Cognitive Science (ICCS’03), pages
570–575, Sydney, NSW, Australia, 2003.
[81] A. Saygin, I. Cicekli, and V. Akman. Turing test: 50 years later. Minds and
Machines, 10, 2000.
[82] J. Schmidhuber. The Speed Prior: a new simplicity measure yielding near-optimal
computable predictions. In Proc. 15th Annual Conference on Computational Learn-
ing Theory (COLT 2002), Lecture Notes in Artificial Intelligence, pages 216–228,
Sydney, Australia, July 2002. Springer.
[83] P. Schweizer. The truly total Turing test. Minds and Machines, 8:263–272, 1998.
[84] J. Searle. Minds, brains, and programs. Behavioral & Brain Sciences, 3:417–458,
1980.
[85] S. Shieber. Lessons from a restricted Turing test. CACM: Communications of the
ACM, 37, 1994.
[86] D. K. Simonton. An interview with Dr. Simonton. In J. A. Plucker, editor, Hu-
man intelligence: Historical influences, current controversies, teaching resources.
http://www.indiana.edu/~intell, 2003.
J. Sattler. Assessment of children: Cognitive applications. Jerome M. Sattler Publisher
Inc., San Diego, 4th edition, 2001.
[88] B. M. Slotnick and H. M. Katz. Olfactory learning-set formation in rats. Science,
185:796–798, 1974.
[89] W. D. Smith. Mathematical definition of “intelligence” (and consequences).
http://math.temple.edu/~wds/homepage/works.html, 2006.
[90] C. E. Spearman. The abilities of man, their nature and measurement. Macmillan,
New York, 1927.
[91] W. L. Stern. Psychologischen Methoden der Intelligenz-Prüfung. Barth, Leipzig,
1912.
R. J. Sternberg. Beyond IQ: A triarchic theory of human intelligence. Cambridge
University Press, New York, 1985.
[93] R. J. Sternberg, editor. Handbook of Intelligence. Cambridge University Press, 2000.
[94] R. J. Sternberg. An interview with Dr. Sternberg. In J. A. Plucker, editor, Hu-
man intelligence: Historical influences, current controversies, teaching resources.
http://www.indiana.edu/~intell, 2003.
[95] R. J. Sternberg and C. A. Berg. Quantitative integration: Definitions of intelli-
gence: A comparison of the 1921 and 1986 symposia. In R. J. Sternberg and D. K.
Detterman, editors, What is intelligence? Contemporary viewpoints on its nature
and definition, pages 155–162, Norwood, NJ, 1986. Ablex.
[96] R. J. Sternberg and E. L. Grigorenko, editors. Dynamic Testing: The nature and
measurement of learning potential. Cambridge University Press, 2002.
[97] R. Sutton and A. Barto. Reinforcement learning: An introduction. Cambridge, MA,
MIT Press, 1998.