
How Close Is AI to Human-level Intelligence Here in April 2018?

Jack Krupansky

Apr 28, 2018

https://medium.com/@jackkrupansky/how-close-is-ai-to-human-level-intelligence-here-in-april-2018-9a6ceaff2f9d
Table of Contents
How Close Is AI to Human-level Intelligence Here in April 2018?
Topics to be covered in this paper
Task-specific and domain-specific AI
Learning and machine learning
Learning concepts
Robotics vs. intellectual capacities
Is it AI or just automation?
Are all heuristics AI?
Robotics
Conclusion
Is it AI or just a heuristic?
Progress on gaming
Is it AI or just machine intelligence?
Tasks beyond what people and animals traditionally performed
Big learning and little learning
Training
Deep learning and guided learning
Advanced machine learning
Automation
Patterns
Computer vision
Internet of Things (IoT)
Cybersecurity
Data analytics and business intelligence
Scientific and engineering calculation and modelling
Very complex data patterns and connections within data
Summary
What fraction of strong AI is needed for your particular app?
Areas of intelligence
Areas of human intelligence
Areas of AI research
Levels of function
Specific human-level functions of intelligence
Degree of competence
Common use of natural language processing (NLP)
Autonomy, principals, agents, and assistants
Intelligent agents
Intelligent digital assistants
The robots are coming, to take all our jobs?
How intelligent is an average worker?
No sign of personal AI yet (strong AI)
AI is generally not yet ready for consumers
Meaning and conceptual understanding
Emotional intelligence
Wisdom, principles, and values
Extreme AI
Ethics and liability
Dramatic breakthroughs needed
Fundamental computing model
How many years from research to practical application?
Turing test for strong AI
How to score the progress of AI
Links to my AI papers
Conclusion: So, how long until we finally see strong AI?
What Is AI (Artificial Intelligence)?
What is intelligence?
Artificial intelligence is what we don’t know how to do yet
Emotional intelligence
Autonomy, agency, and assistants
AI areas and capabilities
Neural networks and deep learning
Animal AI
Robotics
Artificial life
Ethics
Historical perspective by John McCarthy
Can machines think?
What’s the IQ of an AI?
Turing test
And so much more
What Are Autonomy and Agency?
Dictionary definitions
Intelligent entities
Computational entities
Entities
Actions and operations
Tasks, objectives, purposes, and goals
Principals and agents
Delegation of responsibility and authority
Principal as its own agent
Agent as principal for subgoals
Authority
Responsibility, expectation, and obligation
General obligations
Ethics
Liability
Elements of a goal
Relationship between principal and agent
Contracts
Capacity for agency
Assistants
Full autonomy of a principal
Limited autonomy or partial autonomy of agents
Assistants have no autonomy
Assistants have responsibility but no authority
Control
Robots
Robots and computers out of control with full autonomy?
Mission and objectives
Mission and operational autonomy
Independence — mission and operational
Michael Luck and Mark d’Inverno: A Formal Framework for Agency and Autonomy
Motivation
Sociology and philosophy
Agent-based modeling (ABM) and agent-based simulation (ABS)
Definitions
Autonomous systems
Lethal autonomous weapons (LAWs)
Sovereignty
Summary
What Is an Assistant?
Definition
Specializations of the term
Virtual assistant — remote or software
Related terms
Principal
Level of expertise and responsibility — simple, specialized, executive
Tasks vs. goals
Primary types of assistant
Personal services
Many other tasks
Software service — intelligent digital assistant
Synthesized definition
Intelligent Entities: Principals, Agents, and Assistants
What’s the point of an intelligent entity?
How much intelligence is needed?
Is a dog an intelligent entity?
Solving bigger problems
General meaning of entity
Definition of sapient entity
Definition of intelligent entity
How intelligent?
Definition of computational entity
Dictionary definitions of entity, principal, agent, and assistant
Definitions of autonomy and agency
More depth on autonomy and agency
Degrees of autonomy
Full autonomy for principals
Science fiction for robot and AI autonomy
Limited autonomy for robots and AI in the real world
Dictionary definitions of independent
Dictionary definitions of independence
Freedom of action
Definition of independent and independence
Independence and autonomy as synonyms
Dictionary definitions of dependence
Dictionary definitions of dependent
Definition of dependent, dependence, and dependency
Dependence of a principal
Dependence of an agent
Dependence of an assistant
Dictionary definitions of control
Definition of control
Controlling entities
Dictionary definitions of mission
Dictionary definitions of objective
Dictionary definitions of goal
Dictionary definitions of task
Dictionary definitions of delegation
Dictionary definitions of responsibility
Dictionary definitions of contract
Dictionary definitions of capability
Dictionary definitions of reputation
Dictionary definitions of requirement
Definition of mission
Mission of a principal
Mission of an agent
Mission of an assistant
Definition of objective
Definition of delegation
Definition of responsibility
Responsibility of principal or contracting party
Responsibility of agent or contracted party
Responsibility of an assistant
Definition of contract
Definition of capabilities
Working with capabilities
Definition of reputation
Definitions of task, purpose, goal, subgoal
Definitions of motivation and intention
Definitions of actions and operations
Goals, tasks, and actions
Definitions of principal, agent, and assistant
Distinctive roles of principals, agents, and assistants
Contract between principal and agent
Definition of requirement
Matching requirements with capabilities
Assistant
Technicians
Organizations
Other categories of intelligent entities
Interactions
Relationships
Partners, allies, friends, enemies, adversaries, competitors, and antagonists
Connections
Legal liability
Ethics
Morality and ethics of an agent or assistant
Future work
What Is an Intelligent Digital Assistant?
Purpose
Key distinguishing features
Features
How intelligent are they?
The Big Four
Connected intelligence, Internet-enabled
Privacy, security, and personal data
Software and hardware
Smart speakers
Equivalent terms
Related terms
What is the proper term?
Personal digital assistant
Tasks vs. goals
Proactive
Online customer service
Plugin modules for websites and services
Smart cars
Virtual assistant
The one question a digital assistant can’t answer
History
Future directions for digital assistants
Human in the loop
Crowdsourcing
Crowdsourcing questions
Crowdsourcing tasks
Group crowdsourcing
Video
Therapeutic assistants
How Much of Robotics Is AI?
Reader’s choice
Robotic prosthetics
How intelligent is your robot?
What is intelligence?
Dumb robots
Is it simply automation or is intelligence required?
Computer vision
Robotic sonar
Specialized hardware
Artificial life (A-Life)
Biologically inspired technology
Criteria
Scoring?
Summary
Vocabulary of Knowledge, Thought, and Reason ..................................................................... 147
Meaning ................................................................................................................................ 148
Truth ...................................................................................................................................... 148
Reality.................................................................................................................................... 148
Popper’s three worlds for reality .......................................................................................... 149
Relation to intelligence ......................................................................................................... 149
Relation to logic..................................................................................................................... 149
Relation to science ................................................................................................................ 149
Relation to epistemology ...................................................................................................... 150
Relation to metaphysics ........................................................................................................ 150
Relation to ethics .................................................................................................................. 150
Relation to communication ................................................................................................... 150
Relation to media .................................................................................................................. 150
Relation to language ............................................................................................................. 151
Relation to knowledge representation and knowledge artifacts ......................................... 151
Artificial intelligence (AI) ....................................................................................................... 151
Domains of truth ................................................................................................................... 151
Entities................................................................................................................................... 151
Short of details ...................................................................................................................... 152
Entity details — metadata..................................................................................................... 154
Sapient entity — people and robots ..................................................................................... 154
Sentient entity — animals and dumb robots ........................................................................ 154
Why not simply quote from the dictionary? ......................................................................... 154
Basic terms for knowledge, thought, and reason ................................................................. 154
Terms related to knowledge, thought, and reason .............................................................. 157
Work in progress ................................................................................................................... 194
Frontier AI: How far are we from artificial “general” intelligence, really? ............................... 194
More AI research, resources and compute than ever to figure out AGI .............................. 195
AI algorithms, old and new ................................................................................................... 198
The fusion of AI and neuroscience ........................................................................................ 200
Conclusion ............................................................................................................................. 201

How Close Is AI to Human-level Intelligence Here in April 2018?
Jack Krupansky
Apr 28, 2018
Artificial intelligence (AI) is progressing rapidly, but how close is it to true human-level
intelligence here in April 2018? Not very, in my own estimation. It’s got a long way to
go.

This informal paper won’t delve so much into specific AI projects or features, but will
endeavor to explore elements of a conceptual framework for judging progress of AI
towards full, human-level intelligence, also known as Strong AI.

Even super-optimist Ray Kurzweil of The Singularity Is Near: When Humans
Transcend Biology fame was still touting 2029 last fall as his target date for machines
achieving human-level intelligence, on their way towards his technological singularity
of exponential superintelligence in 2045.

So, here we are, still over a decade short of the super-optimist’s forecast for human-
level intelligence.

But where are we really?

Are we on track for human-level intelligence in 2029?

Are we behind?

Or are we possibly ahead of schedule, on a path to achieve human-level intelligence in
5–7 years rather than 11 years?

First off, all bets are off, or maybe I should say that all bets are on since nobody, even
Kurzweil, has any clue where in that 5–11 year time horizon we really are.

Generally, progress proceeds in fits and starts, with occasional short bursts of
phenomenal breakthroughs, but interspersed with prolonged periods of disappointingly
slow progress.

Superficially, we seem to be in one of those rare periods of rapid advance, but who’s to
say how long it will last.

And who’s to say how long we will have to wait for the next burst.

And who’s to say how many bursts and breakthroughs we will need before we finally
do break through to true human-level intelligence.

To recap, there are two very distinct questions:

1. How close is AI in April 2018 to human-level intelligence?
2. Is AI on track to achieve human-level intelligence in 2029, 11 years from now?

And, I would submit that there is another pair of questions which are more urgent:

1. What fraction of applications which are crying out for AI capabilities have those
needs met with off-the-shelf AI packages — or at least custom AI which could
be completed within no more than a few months by no more than a few people?
2. What fraction of off-the-shelf AI capabilities are truly ready for prime-time
deployment in production applications?

On the matter of actual progress, there are two aspects:

1. Specific progress. Actual technical accomplishments.
2. A more abstract framework for criteria to judge progress.

I won’t recount specific technical accomplishments myself, but I would refer
curious readers to a recent post entitled Frontier AI: How far are we from artificial
“general” intelligence, really? by venture capitalist Matt Turck of First Mark Capital,
which mentions quite a few of the recent accomplishments. His overall conclusion:

• So, how far are we from AGI [Artificial General Intelligence]? This high level
tour shows contradictory trends. On the one hand, the pace of innovation is
dizzying — many of the developments and stories mentioned in this piece
(AlphaZero, new versions of GANs, capsule networks, RCNs breaking
CAPTCHA, Google’s 2nd generation of TPUs, etc.) occurred just in the last 12
months, in fact mostly in the last 6 months. On the other hand, many in the AI
research community itself, while actively pursuing AGI, go to great lengths to
emphasize how far we still are — perhaps out of concern that the media hype
around AI may lead to dashed hopes and yet another AI nuclear winter.

As far as a more abstract framework for criteria to judge progress, I’ll defer that for a
future paper, but many of the elements of such a framework will be explored in the
remainder of this paper, and are already discussed to a fair degree in a companion paper,
Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and
Machine Learning.

A simplified framework for evaluating AI systems relative to strong AI has several
dimensions:

1. Areas of intelligence.
2. Levels of function. In each area.
3. Degree of competence. At each level of function, in each area.

Unfortunately it is impractical to characterize current AI systems according to such a
framework today since the structure and competence of such systems doesn’t really
parallel human-level intelligence to any significant degree.

Instead, what we have in AI systems today is either:

1. Small chunks of intelligence embedded in larger software systems. Not really
resembling a larger form of intelligence.
2. Narrow towers of intelligence. Where the machine performs comparable to or
even outperforms human-level intelligence. Such as data analysis or playing
games such as Go and chess.
3. Relatively weak forms of learning that require significant human assistance or
training. Such as image or pattern recognition. Commonly called machine
learning (ML).
4. So-called deep learning. The machine does extremely well, but requires very
careful setup with ground rules and preprogramming of basic logic. Formerly
called neural networks.
5. Preprogrammed intelligence. Such as natural language processing (NLP) or data
analysis.
6. Relatively thin layers of intelligence. The current crop of intelligent digital
assistants are quite amazing, but only in a rather superficial sense, answering
only basic questions and performing basic tasks.
7. Just automation, billed as AI. There is no law as to what can be called AI, so
many algorithms can be billed as AI even though they don’t really involve
anything comparable to higher-order human intellect. Data analysis,
scheduling, optimization.

So, the basic problem we have today is that we can’t even begin to compare any
machine to the pathway of human intellectual development and ask the basic question:

• If a typical AI system were a person, what age or stage of intellectual
development has it achieved?

Sure, some of these towers of intelligence and preprogrammed intelligence have
achieved a mature human level of skill, but at the same time they lack a lot of the basic
skills of human intelligence that we expect from even small children.

And true general learning — as mastered by even small children — is well beyond
even the most advanced of AI systems today.

If I had to summarize today’s AI systems in a single statement, I would say:

• The state of the art for AI today is primarily in task-specific and domain-specific
AI systems.

And a follow-on statement regarding learning:

• The best AI systems today hinge critically on some degree of preprogrammed
basic intelligence and relatively narrow task-specific or domain-specific
supervised training or limited, narrow learning.

Granted, there is some preliminary research in unsupervised learning, but that is the rare
exception rather than the general rule, and the whole thrust of this paper is on what is
general and common today rather than fringe, atypical, or coming further down the
road.

In short, AI is now quite common but still quite primitive, with rare exceptions.

If you have a problem which you wish to solve using AI, you absolutely cannot just go
out to the store (or Amazon) and order an off-the-shelf solution, except in a relatively
small number of areas.

Some of the areas where fairly sophisticated AI can be bought or downloaded for free
include:
• Basic natural language transcription. Speech recognition.
• Basic natural language commands.
• Basic but relatively primitive automatic natural language translation. And
detection of language. Google Translate, built into Google Search.
• Particular games. Play chess against the machine, online, for free.
• Intelligent digital assistants. Alexa, Siri, Google.

Topics to be covered in this paper


The topics to be covered in this informal paper include:

1. Task-specific and domain-specific AI.
2. Learning and machine learning.
3. Learning concepts.
4. Robotics vs. intellectual capacities.
5. Is it AI or just automation?
6. Is it AI or just a heuristic?
7. Progress on gaming.
8. Is it AI or just machine intelligence?
9. What fraction of strong AI is needed for your particular app?
10. Areas of intelligence.
11. Areas of human intelligence.
12. Areas of AI research.
13. Levels of function.
14. Specific human-level functions of intelligence.
15. Degree of competence.
16. Common use of natural language processing (NLP).
17. Autonomy, principals, agents, and assistants.
18. Intelligent agents.
19. Intelligent digital assistants.
20. The robots are coming, to take all our jobs?
21. How intelligent is an average worker?
22. No sign of personal AI yet (strong AI).
23. AI is generally not yet ready for consumers.
24. Meaning and conceptual understanding.
25. Emotional intelligence.
26. Wisdom, principles, and values.
27. Extreme AI.
28. Ethics and liability.
29. Dramatic breakthroughs needed.
30. Fundamental computing model.
31. How many years from research to practical application?
32. Turing test for strong AI.
33. How to score the progress of AI.
34. Links to my AI papers.
35. Conclusion: So, how long until we finally see strong AI?

For more detail, see:

• What Is AI (Artificial Intelligence)?
• List of My Artificial Intelligence (AI) Papers (Web links)
• Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning (Web link)

Task-specific and domain-specific AI


Many of the recent advances in AI have been in task-specific and domain-specific AI.

And when an advance does indeed transcend multiple tasks or multiple domains, it is
usually fairly narrow.

Learning and machine learning


Learning is a fundamental component of intelligence. Machine learning is all the rage,
but despite impressive achievements at learning, the state of the art involves significant
supervised training, directed training, or an explicit focus on a narrow problem with
preprogrammed foundation concepts.
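To make the supervised-training point concrete, here is a minimal, hand-rolled nearest-neighbor classifier in Python. It is purely illustrative (the fruit data and labels are made up): the machine can classify only because a human first supplied labeled examples.

```python
# Minimal 1-nearest-neighbor classifier: a toy illustration of
# supervised learning. It is helpless without human-labeled examples.
def nearest_neighbor(train, query):
    """train is a list of (features, label) pairs supplied by a human."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train, key=lambda pair: distance(pair[0], query))
    return best[1]

# Human-supplied labeled examples: (size, weight) of small vs. large fruit.
labeled = [((1.0, 1.2), "cherry"), ((1.1, 0.9), "cherry"),
           ((7.5, 8.0), "melon"), ((8.2, 7.4), "melon")]

print(nearest_neighbor(labeled, (1.3, 1.0)))  # cherry
print(nearest_neighbor(labeled, (7.9, 7.7)))  # melon
```

With no labeled examples, the function has nothing to compare against; the "learning" here is entirely a product of human curation, which is the narrowness being described above.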

Learning concepts
As of today, there has been no real breakthrough on the ability of a machine to
independently and without human direction discover concepts, especially foundation
concepts.

Granted, even genius-level humans come preprogrammed with a vast array of
conceptual and intuitive knowledge, encoded in our DNA, as well as culturally
programmed when we are young and in school, so the precise nature of what it
means to learn as an adult does not easily or quickly translate into how a machine
should learn.

Even the learning process for children is still well beyond the capabilities of even the
most capable AI systems with the most advanced machine learning algorithms.

Robotics vs. intellectual capacities


One important distinction to draw in the progress of AI is robotic versus intellectual
capabilities.

Physical bodies and movement in the real world have a lot less to do with human-level
intelligence and more to do with the combination of:

• Animal-level body structure and mechanics.
• Animal-level movement.
• Animal-level intelligence.
• Mechanical and electrical aspects of robotics.
Granted, intellectual capacities are needed to decide where to move and what actions to
take when you get there, but the basic mechanics of moving a physical body from point
A to point B requires only the same capabilities as possessed by most animals. That’s
the primary job of robotics.

Whether the non-intellectual aspects of robotics should be considered intelligence or AI
is a matter of debate. See a companion paper, How Much of Robotics Is AI?

This paper does not get into robotics per se, focusing more strictly on intellectual
capacities.

Is it AI or just automation?
Technically, just about anything a person can do in their head can be considered
intelligence, but do we really want to label relatively simple tasks such as the following
as artificial intelligence:

1. Ability to add a few numbers and compute an average.
2. Organize a list of names, addresses, and phone numbers.
3. Recognize printed text.
4. Schedule or optimize deliveries for a business.
5. Sorting or searching a very large dataset.
6. Organizing photos based on heuristics such as colors or superficial features.

I think not.

Automation, yes, but rising to the level of higher-order human intelligence, no.
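For perspective, the first and fifth items on that list reduce to a few lines of ordinary code, with no learning or reasoning involved. A sketch in Python, using made-up data:

```python
# Plain automation: averaging numbers and sorting a small dataset.
# Nothing here learns, reasons, or adapts.
def average(numbers):
    return sum(numbers) / len(numbers)

# A toy contact list: (name, phone) pairs.
contacts = [("Smith", "555-0101"), ("Jones", "555-0102"), ("Adams", "555-0103")]

print(average([2, 4, 6, 8]))   # 5.0
print(sorted(contacts))        # alphabetical by name, Adams first
```

Tasks of this kind have been routine computing since the earliest business machines, which is exactly why labeling them AI dilutes the term.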

I have a full paper on this topic: Is It Really AI or Just Automation?

Yes, incredible progress has been made on automating tasks performed by people.

But I would assert that none of that constitutes progress towards Strong AI, the
automation of higher-order intellectual capacities.

Is It Really AI or Just Automation?

A lot of digital devices, software, and features are being billed as AI (Artificial
Intelligence), but are they really true AI or just traditional automation, much as
computers have been doing for over 50 years (UNIVAC I in 1951) or since the 1890
census with Hollerith punched cards or even the punched cards of the Jacquard loom in
1804. At what point, level, or stage does computing cease being merely automation and
suddenly constitute Artificial Intelligence?

Alas, there is no magic answer, other than to say that it is a purely subjective
characterization to claim that a given form or level of automation is magically Artificial
Intelligence.

Six criteria will be used in this informal paper to try to distinguish mere automation
from more sophisticated AI:
1. Any function that a human or an animal can perform. That’s the starting point,
the initial filter.
2. Distinguish functions that a human can perform that are distinct from functions
that almost any animal can perform.
3. Distinguish animal functions that are not easy, simple, and straightforward to
program on a computer.
4. Distinguish higher-order human intellectual functions that are not easy, simple,
and straightforward to program on a computer.
5. Distinguish difficult human intellectual functions which are merely complex
information processing rather than complex from an intellectual perspective.
6. Distinguish manual tasks performed exclusively by humans which are non-trivial
to automate.

First we should consider possible definitions of automation and AI.

Automation could be a computer or other machine performing any of the following:

1. Any task which a human or animal could do.
2. Any task which a human could do.
3. Any intellectual task which a human could do.
4. Any task which a human or animal could do that requires some degree of
intelligence, including interpreting sensory perceptions, sensing the
environment, and fine motor control that requires sensing the environment.

The definition of AI which I have previously used in my companion paper What is AI
(Artificial Intelligence)? is:

• AI is the capacity of a computer to approximate some fraction of the intellectual
capacity of a human being.

I stick with the focus of that definition on intellectual capacity, but for purposes in this
paper we’ll consider a somewhat broader and looser spectrum of possibilities:

1. Anything that requires higher-order intellectual activity. Reasoning, logic,
complex decisions, carefully considered judgment, planning, creativity,
imagination, speculation, etc.
2. Anything that requires any intellectual activity. Anything that even a 4-year-old
child could do.
3. Anything that requires fine motor control requiring some degree of intelligence.
4. Anything that requires interpretation of sensory perception of the environment.

Part of the intent there is to include robots, even those which mimic only animals rather
than limiting the focus to the mimicking of only higher-order human intellectual
capacities.

Just to get a few simpler cases out of the way, the following forms of automation would
probably not be considered AI:

• Washing machine.
• Dish washer.
• Vacuum cleaner.
• Thermostat.
• Garage door opener.
• Textile and clothing power looms capable of being programmed for patterns.
• Numerical control machines (CNC — Computer Numerical Control).
• Player piano. Or any form of playing recorded music, speech, or video.

Granted, any of these could also be supplemented with AI features, but absent some
specific AI features, their main functions don’t suggest AI per se.

A device or appliance such as a Roomba would represent the crossover from mere
automation to the integration of AI due to its navigation abilities — interpreting sensory
perception of the environment.
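The crossover can be sketched in a few lines of Python. The thermostat is a fixed rule; the navigation logic, however crude, must interpret sensor readings of the environment. (Both functions are hypothetical illustrations, not any real product's algorithm.)

```python
# A thermostat is a fixed rule applied to one number: mere automation.
def thermostat(temperature, setpoint=20.0):
    return "heat on" if temperature < setpoint else "heat off"

# Crude obstacle avoidance must interpret a sensor reading of the
# environment, the suggested crossover point into AI territory.
# (Hypothetical logic, not any actual robot's algorithm.)
def next_move(front_distance_cm):
    if front_distance_cm < 10:
        return "back up"
    if front_distance_cm < 30:
        return "turn"
    return "forward"

print(thermostat(18.5))  # heat on
print(next_move(25))     # turn
```

Both are trivially simple, of course; the point is only that the second kind of rule is driven by perception of the surroundings rather than a single fixed setpoint.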

As a starting point for further discussion, consider:

1. Human behavior and activities.
2. Human social behavior and group activities.
3. Human movement.
4. Human sensory perception.
5. Human intelligence.
6. Human intellectual activities.
7. Higher-order human intellectual activities.
8. Human communication.
9. Animal behavior and activities. Much of it shared with humans.
10. Animal social behavior and group activities. Some degree shared with humans.
11. Animal movement. Much of it shared with humans. Although we can’t fly like
birds or swim like fish.
12. Animal sensory perception. Much of it shared with humans. Animals do have
some more refined senses though.
13. Animal intelligence. Shared with humans.
14. Animal intellectual activities? None that I am aware of.
15. Animal communication. Some of it shared with humans.
16. Distinctly human behavior and activities. Not shared with animals.
17. Distinctly human social behavior and group activities. Not shared with animals.
Large and complex organizations, including governments.
18. Distinctly human movement. What little is not shared with animals. Ease of
walking upright. Dance. Space travel. What else?
19. Distinctly human sensory perception. Not shared with animals. Is there any?
20. Distinctly human intelligence. Not shared with animals.
21. Distinctly human intellectual activities. They are all distinctly human. Nothing
in common with animals beyond animal-level intelligence.
22. Distinctly human communication. Especially natural language, including the
spoken and written word. Knowledge artifacts, including books.
23. Common manual tasks of humans. From daily life.
24. Less common manual tasks of humans. Work life. Manufacturing. Office tasks.
Packing, shipping, and delivery.
25. Advanced manual tasks. Complex assembly. Detail work. Laboratory work.
Medicine.
26. Basic information processing capabilities of humans. Stuff even high school
students can be expected to handle.
27. Advanced information processing. Stuff that may require a computer science
degree.
28. Scientific data processing. Stuff requiring a science degree or even a PhD.
29. Complex information processing. A lot of data and complex relationships are
involved, but not necessarily requiring any strictly higher-order intellect in its
processing.
30. The intelligence of children, even those barely able to speak.
31. The intelligence of mature adults.
32. The intelligence of trained professionals.
33. The intelligence of so-called intellectuals.
34. The intelligence of true geniuses.
35. The intelligence of insects.
36. The intelligence of small rodents.
37. The intelligence of dogs. And monkeys, dolphins, pigs, horses, and elephants.
Anything beyond the capacities of small rodents and reptiles.
38. The intelligence of primates.
39. Simulating behavior and activities of animals. Much in common with humans.
40. Simulating social behavior and group activities of animals. Much in common
with humans.
41. Simulating movement of animals. Much in common with humans.
42. Simulating sensory perception of animals. Much in common with humans.
43. Simulating intelligence of animals. Subset of human intelligence.
44. Simulating communication of animals. Some in common with humans.
45. Simulating distinctly human behavior and activities. Beyond animals.
46. Simulating distinctly human social behavior and group activities. Beyond
animals.
47. Simulating distinctly human movement. Beyond animals.
48. Simulating distinctly human sensory perception. Beyond animals.
49. Simulating distinctly human intelligence. Beyond animals.
50. Simulating human intellectual activities. All distinct from animals.
51. Simulating human communication. Speaking, understanding, writing, and
reading natural language.

Those items constitute the menu for both automation and AI. The rest of this paper
focuses on sorting out how those items can or should be categorized as mere automation
in contrast with true artificial intelligence.

Some things that aren’t considered to be part of that list and aren’t really relevant to AI
and automation include some but not all basic biological functions, otherwise
considered part of behavior:

1. Breathing.
2. Eating.
3. Hydration.
4. Excretion.
5. Bone structure.
6. Blood.
7. Secretions.
8. Organs.
9. Cell metabolism.
10. Regulation of metabolic function.
11. Reproduction and mating.

Although the goal of AI overall is not to simulate animals per se, it seems clear that
doing so is an essential precursor to simulating humans, in terms of behavior (except the
exclusions listed above), social behavior, movement, sensory perception, and many
aspects of intelligence.

Granted, simulating animals will not get you even close to simulating distinctly human
intelligence and certainly not higher-order human intellectual functions, but it will
provide a solid and rich foundation as a starting point, especially when considering
robotics.

The items on that main list that are especially human include:

1. Distinctly human behavior and activities. Not shared with animals.
2. Distinctly human social behavior and group activities. Not shared with animals.
Large and complex organizations, including governments.
3. Distinctly human movement. What little is not shared with animals. Ease of
walking upright. Dance. Space travel. What else?
4. Distinctly human sensory perception. Not shared with animals. Is there any?
5. Distinctly human intelligence. Not shared with animals.
6. Distinctly human intellectual activities. They are all distinctly human. Nothing
in common with animals beyond animal-level intelligence.
7. Distinctly human communication. Especially natural language, including the
spoken and written word. Knowledge artifacts, including books.
8. Common manual tasks of humans. From daily life.
9. Less common manual tasks of humans. Work life. Manufacturing. Office tasks.
Packing, shipping, and delivery.
10. Advanced manual tasks. Complex assembly. Detail work. Laboratory work.
Medicine.
11. Basic information processing capabilities of humans. Stuff even high school
students can be expected to handle.
12. Advanced information processing. Stuff that may require a computer science
degree.
13. Scientific data processing. Stuff requiring a science degree or even a PhD.
14. Complex information processing. A lot of data and complex relationships are
involved, but not necessarily requiring any strictly higher-order intellect in its
processing.

Again, we’re back at the starting point that unless you accept that a computer
performing any task that a human is capable of is implicitly and by definition AI, we
need to consider which tasks or functions may be mere automation rather than AI per
se.

Some candidates for mere automation include:

1. Tasks around the home.
2. Tasks around the office.
3. Tasks in a manufacturing plant.
4. Tasks for a distribution center.
5. Tasks around the community.
6. Tasks in government.
7. Organizing information.
8. Arithmetic, statistics, and other calculations on collections of numbers.
9. Sorting and searching through information.
10. Querying and updating information.
11. Spreadsheet calculations.
12. Scientific and engineering calculations.
13. Complex modeling.
14. Word processing and document management.
15. Photo, audio, and video editing.
16. Archiving and libraries.

Many of the devices, machines, and software to handle or facilitate the tasks on that list
can be reasonably sophisticated, but commonly won’t be considered AI, although
elements of AI can be included in any of them.

The primary intent of this informal paper is to focus attention on intellectual activity
as opposed to manual activity, although a lot of the interest, especially for robotics,
includes manual activity, much of which is shared with animals.

But what about the smart car feature of self-parking (parallel parking)? How intellectual
is that really? Seems like mostly a manual process, right? But I think most people
would currently consider it an AI feature.

Or anti-lock brakes on cars. There is certainly some clever real-time processing
going on that would otherwise require a very alert driver, but most people today would
not refer to this otherwise hidden feature of average cars as some magical AI feature.

Or a self-driving car. It simply goes from point A to point B with lots of turning,
accelerating, and braking, which are primarily manual operations that hardly require
any significant intellectual effort when a human is driving a normal car. But a
self-driving vehicle would certainly be considered AI, at least today, although much of
that may really be due primarily to its novelty and sophistication rather than any true
intelligence or higher-order intellectual capacity.

Or look at the movement and actions of animals, whether dogs, higher primates,
rodents, small birds, or even minuscule insects. How much intellectual effort do they
need, all without big brains like us humans? Still, building machines to
mimic such movements and actions is commonly accepted as being in the domain of
AI.

Robots are almost uniformly considered AI when so much of what they do, try to do, or
hope to eventually do is hardly more than the behavior of relatively simple animals, even
small rodents and insects.
Somehow, we currently consider movement and navigation in the real world as being
AI, despite the fact that little in the way of higher-order intellect is required.

As discussed in my previous paper, What Is AI (Artificial Intelligence)?, intelligence
or intellectual capacity includes quite a few mental functions and mental processes, not
all of which are focused on higher-order intellectual activity such as reasoning or
anything resembling what we consider human-level thinking:

• Perception
• Attention
• Recognition
• Communication
• Processing
• Memory
• Following rules
• Decision
• Volition or will
• Movement and motor control
• Behavior

Even relatively simple animals (or machines) require many of these mental functions
and mental processes.

Most people would accept or expect that these capabilities should be considered AI,
even if the behavior is not substantially more than that of animals and even insects.

Even then, it is a bit of a stretch to assert that these rudimentary functions and processes
constitute intelligence per se.

Hmmm… can it really be AI if it is not intelligence? Good point. Artificial life (A-Life)
would be a better term for a lot of what is being considered with robots and driverless
vehicles, especially when it comes to robotics and sensory perception, but it’s probably
easier to accept that AI covers a fair bit of A-Life, rather than confuse a lot of people
who have enough trouble understanding artificial intelligence.

Maybe the real point is that although that list of functions and processes isn’t
intelligence alone, they are precursors or requirements in order for a machine to
interact and behave in an intelligent manner. After all, what good is intelligence without
the ability to perceive, communicate, and act in some intelligent manner?

The remaining mental functions and mental processes from the What is AI list are more
clearly higher-order intellectual capacities, which appear to distinguish human
intelligence from animal intelligence:

1. Natural language. Communication beyond that of animals.
2. Learning. Beyond the basic forms of learning of even animals. Concepts.
Knowledge.
3. Analysis.
4. Speculation, imagination, and creativity.
5. Synthesis.
6. Reasoning.
7. Following rules. More complex rules, especially with conditions and nuances,
beyond what animals can do. Includes games (checkers, chess, Go, Jeopardy.)
8. Applying heuristics. In an intelligent manner, rather than in a rote, blind,
mechanical manner common in mere automation.
9. Intuitive leaps.
10. Mathematics.
11. Decision. More complex decisions and consequences, beyond what animals can
do.
12. Planning.

Granted, designers and engineers may use such intellectual capacities to produce
machines, devices, and software which automate tasks, but it is only when the created
system exhibits such higher-order intellectual capacities that we would feel obligated to
characterize the system as possessing AI.

Finishing off that thought, we will define higher-order intellectual activity as the use of
the higher-order intellectual capacities on that list, those special capacities which
distinguish humans from animals.

Beyond basic manual tasks and automation of basic information processing, more
complex tasks which are clearly automation and clearly involve very sophisticated
processing but just don’t seem to warrant classification as AI per se include:

1. Computer chip design.
2. Electronic circuit design.
3. Integrated circuit layout.
4. Chip and circuit simulation.
5. Chip and circuit testing.
6. Exoplanet detection.
7. Nuclear weapon design and simulation.
8. High energy physics.
9. Human flight.
10. Space flight.
11. Space missions.
12. Space stations.
13. Protein folding. Even the simplest creatures do it so easily and with no
apparent intelligence per se!
14. Computer operating systems.
15. Computer networking software.
16. Word processor software.
17. Spreadsheet software.
18. Photo and video editing software.
19. Accounting.
20. Payroll.
21. Inventory management.
22. Supply chain management.
23. Personnel records.
24. Scheduling.
25. Operations research.
26. Optimization.
27. Machine control.
28. Biomedical devices.
29. Real-time monitoring of environmental conditions.
30. Real-time control and feedback. May or may not include simple devices such
as a thermostat.
31. Database systems.
32. Search engines. The basics. Addition of AI-like features becoming common.
33. Elevators.

Granted, specific, niche AI features can find application within any of those automated
tasks, but the tasks themselves don’t seem to provide evidence that the computer is
engaging in higher-order intellectual activity as would be expected for an AI system.

The software is certainly an artifact of higher-order human intellectual activity, but
doesn't seem able to engage in such activity itself.

There are plenty of borderline tasks which aren’t necessarily AI per se, but are more
than mere automation of simple or straightforward tasks, including:

1. Spelling checkers and correctors.
2. Grammar checkers and suggesters.
3. Auto-suggest search engine keywords.
4. Goal-seeking numeric problem solvers. Such as in spreadsheet software.
5. Anti-lock brakes.
6. Auto-focus cameras.
7. Acoustic echo cancellation.
8. Steadicam and other camera stabilization systems.
9. Automatic color and brightness correction.
10. Delivery routing.
11. Basic automated online customer service chat.
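As an illustration of just how modest the machinery behind one of these borderline features can be, here is a minimal sketch of a goal-seeking numeric solver of the kind found in spreadsheet software, using simple bisection. This is a hypothetical illustration, not any particular product's algorithm:

```python
def goal_seek(f, target, lo, hi, tolerance=1e-9, max_iterations=200):
    """Find x in [lo, hi] such that f(x) is approximately target,
    using bisection. Requires that f be continuous and that the
    target value be bracketed by f(lo) and f(hi)."""
    g = lambda x: f(x) - target          # distance from the goal
    if g(lo) * g(hi) > 0:
        raise ValueError("target is not bracketed by [lo, hi]")
    for _ in range(max_iterations):
        mid = (lo + hi) / 2.0
        if abs(g(mid)) < tolerance:
            return mid
        if g(lo) * g(mid) <= 0:          # goal lies in the lower half
            hi = mid
        else:                            # goal lies in the upper half
            lo = mid
    return (lo + hi) / 2.0

# Example: what monthly payment p makes 12 payments total 1200?
print(goal_seek(lambda p: 12 * p, 1200, 0, 1000))  # approximately 100.0
```

Bisection is among the simplest root-finding methods; real spreadsheet goal-seek features typically use faster schemes. But the point stands: the feature can feel intelligent while the mechanism is entirely mechanical.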

And then there are borderline tasks which are now commonly accepted as automation
but at least have roots in AI, including:

1. Document scanning and character recognition.
2. Letter address recognition.
3. Voice recognition.
4. Recommendation software. Based on mutual interests.
5. Aircraft collision alert.
6. Aircraft height alert.

And finally we have tasks which are not so dissimilar from some of these mere
automation tasks but are commonly accepted as AI, at least at the present moment:

1. Vehicle collision alert.
2. Self-parking cars.
3. Facial recognition.
4. Photo searching.
5. Searching for similar documents.
6. Heuristics that approximate intelligent activity without being truly intelligent.
7. Basic question and answer user interfaces. They can seem intelligent, but
generally rely heavily or primarily on heuristics.
8. More sophisticated automated online customer service chat.
9. Intelligent digital assistants.
10. Driverless vehicles.
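To make the heuristic character of items like question-and-answer interfaces concrete, here is a minimal sketch of a keyword-matching responder. The rules and replies are entirely hypothetical; real systems are more elaborate, but the principle is the same:

```python
# A toy question-and-answer responder that merely matches keywords.
# It can seem intelligent in narrow cases, but there is no understanding,
# reasoning, or higher-order intellectual activity involved.
RULES = [
    ({"refund", "return"}, "You can request a refund within 30 days."),
    ({"hours", "open"}, "We are open 9am-5pm, Monday through Friday."),
    ({"password", "reset"}, "Use the 'Forgot password' link to reset it."),
]

def respond(question):
    words = set(question.lower().replace("?", "").split())
    best_reply, best_overlap = None, 0
    for keywords, reply in RULES:
        overlap = len(keywords & words)   # count matching keywords
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    return best_reply or "Let me connect you with a human agent."

print(respond("How do I reset my password?"))
```

A handful of keyword rules produces plausible answers for anticipated questions and falls flat on everything else, which is exactly the limitation of heuristics discussed below.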

Are all heuristics AI?

The use of heuristics is common in most disciplines. They are shortcuts or
approximations. They provide most of the benefit of true intelligence at a fraction of the
cost and without requiring deep, higher-order intellectual capacity.

Heuristics have the allure of seeming comparable to intelligence, but they have
limitations that a true, human-level intelligence would not.

They constitute a gray area between mere automation and true intelligence.
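As one concrete illustration of that gray area, consider the nearest-neighbor heuristic for routing deliveries: it approximates an intelligently planned route at a fraction of the cost, with no reasoning at all, and with no guarantee of optimality. A minimal sketch, with made-up coordinates:

```python
import math

def nearest_neighbor_route(stops, start=0):
    """Greedy routing heuristic: from each stop, go to the closest
    unvisited stop. Cheap and often good, but not guaranteed optimal."""
    unvisited = set(range(len(stops))) - {start}
    route = [start]
    while unvisited:
        here = stops[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, stops[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Four delivery stops on a simple grid (hypothetical data).
stops = [(0, 0), (1, 5), (5, 5), (5, 0)]
print(nearest_neighbor_route(stops))  # → [0, 3, 2, 1]
```

Nothing here deliberates about the route; each step is a rote "pick the closest" rule. Yet the output often looks like the product of planning, which is precisely why heuristics blur the line between automation and intelligence.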

Robotics

Much of robotics revolves around sensors and mechanical motions in the real world,
seeming to have very little to do with any intellectual activity per se, so one could
question how much of robotics is really AI.

Alternatively, one could say that sensors, movement, and activity enable acting on
intellectual interests and intentions, thus meriting coverage under the same umbrella as
AI.

In addition, it can be pointed out that a lot of fine motor control requires a distinct level
of processing that is more characteristic of intelligence than mere rote mechanical
movement.

In summary, the reader has a choice as to how much of robotics to include under the
umbrella of AI:

1. Only those components directly involved in intellectual activity.
2. Also sensors that provide the information needed for intellectual activity.
3. Also fine motor control and use of end effectors. Including grasping delicate
objects and hand-eye coordination.
4. Also any movement which enables pursuit of intellectual interests and
intentions.
5. Any structural elements or resource management needed to support the other
elements of a robotic system.
6. Any other supporting components, subsystems, or infrastructure needed to
support the other elements of a robotic system.
7. All components of a robotic system, provided that the overall system has at least
some minimal intellectual capacity. That's the point of an AI system. A
mindless, merely mechanical robot with no intelligence would not constitute an
AI system.

In short, it's not too much of a stretch to include virtually all of robotics under the rubric
of AI — provided there is at least some element of intelligence in the system, although
one may feel free to be more selective in specialized contexts.

Conclusion

You, the reader, have several choices from which to pick:

1. The primary criterion for whether a system or feature is AI is whether it exhibits
higher-order intellectual capacity.
2. A secondary criterion for whether a system or feature is AI is whether it requires
interpretation of sensory perception to recognize objects in the real world and
possibly exhibits fine motor control to navigate relative to observed objects.
This covers robots. This should more properly be called artificial life (A-Life),
but common usage considers this AI.
3. The non-intellectual aspects of robotics, if they seem to directly or indirectly
enable and support intellectual activity of the robot.
4. Any form of automation that automates any task that can be performed by a
human (or an animal) is by definition AI.
5. Automation of any task that requires any fraction of intelligence, human or
animal is by definition AI. Including sensory perception and fine motor control,
even if no complex reasoning, advanced learning, or complex planning is
involved.
6. Any automation which requires a significant degree of complexity can be
considered AI, especially if it is simulating the activity of a person when they
are using higher-order intellectual capacities.
7. Whether the use of heuristics to approximate intelligence is sufficient to be
considered intelligence.

Personally, I prefer the first three choices, including robots which mimic animals, but I
also accept that some may prefer the other choices, as stated or with additional nuances.

So, I personally would not consider most traditional software or even a lot of modern or
even smart software unless it exhibits higher-order intellectual capacities.

I accept robots and driverless vehicles as being AI, although we should start calling
them artificial life (A-Life.)

Intelligent digital assistants are in a gray area. They are generally more heuristic or use
specialized, niche AI features rather than being broad AI systems capable of a wide
range of higher-order intellectual activity. They’re more about automation than higher-
order intelligence. That said, I’ll accept that they fall under the AI rubric, at least for
now.

Of course, what current AI systems evolve into in the coming years and decades is
another matter.

Ten to twenty years from now people will look back and probably call current AI
systems mere toys, and laugh that we considered them to be intelligent.

But thirty years from now people won’t be laughing anymore about our current
technology at all. That’s because (at least according to Ray Kurzweil’s Singularity) the
robots will have taken over. Instead, it will be the robots laughing that we considered
ourselves to be intelligent.

But I’ll try to limit my analysis and speculation to the present and near future.

Is it AI or just a heuristic?


Computer scientists have done a great job over the years of coming up with relatively
simple rules and shortcuts or heuristics which have the effect of mimicking human-level
intelligence.

But merely getting comparable results to a human for a given task does not necessarily
mean that the machine has human-level intelligence.

Besides, a lot of tasks performed by people are relatively simple in the first place, so
that they aren’t necessarily tapping into the core of the higher-order intellectual
capacities which may be present but not necessarily used.

Heuristics and mental shortcuts are highly valued and to be applauded, but they most
certainly are not the same as higher-order intellectual capacities.

Progress on gaming

Some of the highest profile and most impressive advances in AI have been on the
gaming front, including:

• Chess.
• Go.
• Learning classic video games.

While these advances are impressive and achieve human-level performance, there are
difficulties in asserting that they constitute progress towards Strong AI. Notably:

1. These are niches, or what I call towers of intelligence. Excellence in one of these
areas implies nothing about general intelligence in disparate areas.
2. Humans must preprogram basic knowledge and basic logic, such as ground
rules. These AI systems are not strictly learning from a completely blank tabula
rasa.
3. Heuristics and statistics rather than true, higher-order human-level intellectual
capacities are being exploited.

In short, such advances are significant progress in machine intelligence, but not artificial
higher-order human intelligence per se.

Is it AI or just machine intelligence?

Although some (many) people treat machine intelligence and artificial intelligence as
synonyms, I would strongly advise treating the terms as distinct.

We can and should strongly applaud advances in machine intelligence without any need
or obligation to assert that such advances necessarily constitute advances in artificial
intelligence of the higher-order human-level intellectual capacities kind.

I have another companion paper on this topic, Is It AI or Machine Intelligence?

Artificial Intelligence (AI) and machine intelligence are commonly used synonymously,
but there is a nuanced difference — AI is more properly intended to focus on simulating
the features of the human mind, while machine intelligence can also include intelligent
information processing that is distinct from or beyond the capabilities of the human
mind. Machine intelligence is also sometimes used to refer to machine learning (ML), a
subset of machine intelligence — and AI as well, depending on how you interpret the
terms AI and machine intelligence. This informal paper will endeavor to distinguish AI
and machine intelligence.

Whether discussing humans or machines, we have three related concepts:

1. Physical activity. Movement, observation, manipulation, and communication.
2. Intelligence. Processing of information and acquiring and applying knowledge.
Perception and recognition of objects, scenes, and qualities from the
environment, reasoning, generating new information, learning, and deciding,
planning, initiating, and guiding physical activity.
3. Learning. Acquisition of information and knowledge. This is part of intelligence,
but worth calling out specially. Recognition of patterns, rules, objects, and
qualities from information acquired from the environment, development of
concepts and principles from observations, reasoning, and existing knowledge.

Again, learning is properly part of intelligence but worth calling out since learning
commonly occurs distinctly from applying learned knowledge. As discussed later,
learning occurs on an ongoing basis in daily life — so-called little learning — while
more complex learning of concepts, principles, and methods — so-called big learning
— tends to require a more specialized and more dedicated effort. The main reason for
calling out learning here is since we have this specialized field of AI and machine
intelligence called machine learning.

Back to those larger concepts, for people we have:

1. Human physical activity.
2. Human intelligence.
3. Human learning.

And for machines we have:

1. Robotics or machine activity.
2. Artificial intelligence or machine intelligence.
3. Machine learning.

There is no requirement that machines directly parallel or directly mimic people, but
that is a common interest and noteworthy endeavor. It is worth calling out the
distinction:

1. Mimicking or paralleling human activity, intelligence, and learning. More
commonly referred to as AI.
2. Activity, intelligence, and learning for machines which may be distinct from that
of humans. Whether it is still proper to call this AI or whether it is distinctly
machine intelligence is a matter of debate.

A purist would argue that AI should only be used when the intelligence is directly
comparable to human intelligence.

A different class of purist could assert that everything a machine does is artificial, so
that any purported machine intelligence is by definition artificial intelligence.

Another purist could also assert that any intelligence of a machine is by definition
machine intelligence, so that even AI which directly mimics human intelligence should
be called machine intelligence, even if the first purist would call it artificial intelligence.

The reader is free to take their pick. My job is simply to highlight the issues and the
distinctions.

Context is important as well. It depends on whether the emphasis is on how well a
machine can mimic a person or whether the emphasis is on what the machine can do
better than a person.

Just to recap the related concept areas:

1. Human physical activity. Things people do. Mostly manual tasks, guided by
intelligence.
2. Human intelligence. Human mental activity. Things people do in their minds.
Thinking, reasoning, calculating, planning, imagining, learning, communicating,
and guiding physical activity.
3. Human learning. Little learning, such as names and faces of people we meet,
places we go, things we observe, and activities we engage in, as well as big
learning, such as school and other forms of education, studying, reading, and
research to learn and develop concepts, principles, and methods.
4. Robotics. Physical activity of machines.
5. Artificial intelligence. Machines simulating human intelligence.
6. Machine intelligence. May be artificial intelligence, a subset of artificial or
human intelligence, or forms of reasoning, calculating, planning, imagining,
learning, communicating, guiding machine activity, which are distinct from the
mental activities of people.
7. Machine learning. Specialized forms of pattern and rule recognition that either
mimic analogous human recognition, or are distinct from human recognition.
Generally, a very limited subset or minimal approximation of human
recognition, or fairly distinct from human recognition. More advanced forms can
develop concepts, principles, and methods, but that is generally beyond the
abilities of most current machines.

Generally, machine intelligence refers to one of the following, sometimes dependent on
context and usage:

1. Artificial intelligence (AI). Exact synonym.
2. Machine learning. Exact synonym. A subset of AI.
3. AI for mechanical and electromechanical machines, as distinct from AI for
artificial biological life forms.
4. Intelligence beyond that of a human being. Superintelligence. Includes
Kurzweil’s Singularity.
5. Intelligence distinct from that of a human being. Such as processing of sensors
and forms of data beyond or very different from the five human senses.
6. Information processing involving complex data patterns and relationships for
which the algorithms can be comprehended by humans but carrying out the
volume or complexity of operations is impractical for a mere mortal. Includes
big data.

A fuller list of the nuanced meanings of machine intelligence:

1. Any degree of approximation of human mental capabilities, even if not advanced
AI.
2. Exact synonym for artificial intelligence.
3. Artificial intelligence specific to electronic digital machines (computers), as
opposed to, say, an artificial biological system.
4. Synonym for machine learning.
5. Artificial intelligence that may have elements of intelligence that have no
counterpart in humans.
6. Subset of human intelligence that machines excel at. Such as working with very
large amounts of data or performing complex calculations very quickly.
7. Any complex or sophisticated algorithm that accomplishes some task that
impresses a human as being a task that humans are good at and that seems like it
would be difficult for a machine.
8. Intelligence beyond that of human beings (or animals.) Ranging from a modest
improvement to true superintelligence.
9. Forms of intelligence that machines are capable of but are beyond, different
from, or difficult for humans to accomplish.
10. Intelligence that is merely different from human intelligence. Such as processing
data from a wider range of sensors or more specialized sensors than the five
human senses.
11. Intelligence needed for a robot to navigate and engage in activities in the real
world.
12. Some nuance or difference from AI that will have to be determined from the
context of usage.
13. Software that is beyond mere automation of rote mechanical operations and
basic numerical and information processing.

The remainder of this paper will focus on specific aspects of machine intelligence and
machine learning. The final summary offers some questions that will help the reader
decide whether any given piece of technology is better classified as AI or machine
intelligence — or machine learning.

Additional detail can be found in these related papers:

1. Untangling the Definitions of Artificial Intelligence, Machine Intelligence,
and Machine Learning
2. Is It Really AI or Just Automation?

And for a more brief, light introduction to AI, see What Is AI (Artificial Intelligence)?

Tasks beyond what people and animals traditionally performed

The label AI makes most sense when considering automated tasks which people (or
animals) have traditionally performed.

The label machine intelligence makes most sense when considering tasks which are not
traditionally associated with human (or animal) mental or physical activity, including:

1. High volumes of data.
2. Data that doesn't correspond to the traditional five human senses.
3. Relatively complex processing of data which does correspond to the traditional
five human senses.
4. Relatively intelligent processing needed to accomplish robotic activity. Dealing
with the details and nuances of electrical, electronic, and mechanical systems —
in a fairly intelligent manner.

Big learning and little learning

For the purposes of clarification, I would distinguish what I call big learning from what
I call little learning.

The former (big learning) is the difficult form of learning where complex concepts,
principles, and methods are learned with great effort. This is the world of school and
other forms of education, studying, tests, research, and struggling. It is usually a distinct
activity from daily life.

The latter (little learning) is the learning we do in everyday daily life, like meeting new
people, learning their names and interests, and remembering their faces. Or facts we
learn as we travel and engage in activities in unfamiliar places. Not as much effort or as
much of a struggle as big learning. No new concepts, principles, or methods involved.

In between we have a hybrid, focused more on categorical distinctions and nuances. The
world of patterns and rules.

Training

Another in-between category is training, where the individual is not expected to
understand the full depth of all concepts and principles as in big learning, but instead is
instructed in methods and just enough details and nuances to allow them to complete a
designated subset of tasks in a particular area.

A lot of what passes for machine learning is focused on a combination of the latter three
subsets of learning — little learning, hybrid of little and big learning, and training.
Sometimes the machine can learn completely on its own, but very commonly some
degree of human intervention and so-called training is required.

Deep learning and guided learning

Although AI researchers speak of deep learning, it would be more accurate to speak of
guided learning, where a human technician or subject matter expert establishes some
baseline hard-wired knowledge and points the machine to a particular focus of activity.

Then, the machine can begin to discover patterns and infer rules for that focused
domain.

The machine may indeed succeed at ferreting out what is happening (what the patterns
and rules are), but not why the activity is happening or what the objective of the activity
is. Very limited learning.

So, big learning focuses on deep concepts, broad principles, complex methods, and the
bigger picture context, while little learning is much more superficial, either being
relatively trivial or requiring hard-wired a priori knowledge (including human
genetically-encoded abilities or hard-coded knowledge for a machine) or a limited
ability to discover patterns and rules.

Advanced machine learning

Although much of the emphasis of machine learning today is on relatively basic patterns
and rules rather than the concepts, principles, and methods of human learning, it is
actually quite possible that machines can or could perform some forms of learning even
better than many humans, such as:

1. Processing large volumes of data.
2. Operating in dangerous physical environments.
3. Operating for extended lengths of time which would tax even the most patient
human.
4. Discovering extremely complex patterns and relationships.
5. Discovering patterns, rules, and relationships which require performing
extremely complex mathematical calculations and modeling.

As usual, a hybrid is possible, so that the best of both worlds, man and machine, can be
combined in a synergistic manner.

Automation

Technically, any task that a person could do could be considered AI or machine
intelligence if a machine were programmed to perform the same or comparable task, but
many more common, trivial, or easily programmed tasks are generally considered
automation rather than artificial or machine intelligence.

Adding a column of numbers or sorting a list of names does require some intelligence,
but we normally don’t consider arithmetic and basic information processing to be AI or
even machine intelligence.

Some forms of automation that are in a gray area where it’s a fielder’s choice whether to
consider it just automation or true machine intelligence include:

1. Optimization.
2. Scheduling.
3. Data analytics.
4. Business intelligence.
5. Simulation.
6. Scientific and engineering calculation and modeling.

A key question is whether there is some higher-order intellectual capacity required.

Or is it just automating relatively rote, mechanical processing?

See a companion paper, Is It Really AI or Just Automation?, for more on this
distinction.

Patterns

There are really four distinct aspects of working with patterns:

1. Detecting or recognizing that a certain sequence or arrangement of data
conforms to a known pattern. Finding instances of a known pattern.
2. Discovering a new pattern, an abstract pattern. Deciding that an otherwise
random, chaotic, or even regular sequence or arrangement of data should be
considered to be an instance of a new pattern. Further, generalizing from that
instance or a collection of similar instances to a more general or abstract form of
pattern that can recognize a broader class of instances of the abstracted pattern.
3. Recognizing or discovering the significance of a newly discovered pattern. Its
cause. Its consequences. Its relationship to other patterns.
4. Discerning the semantic significance of a newly discovered pattern. The
concepts or principles that involve the abstract pattern. Connecting to human-
level concepts and principles. Or, possibly even transcending even human
knowledge.

For the purposes of this paper, AI would relate to patterns that a typical person could
recognize, like names, faces, objects, and concepts, while machine intelligence would
relate to patterns more easily recognized by machines than people, as well as the
simpler forms of patterns that even an average person can recognize quite quickly.
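The gap between the first two aspects can be made concrete with a small sketch: detecting instances of a known pattern is mechanical, while discovery requires the machine to propose candidate patterns itself. Here discovery is approximated by toy frequent-substring counting; a real pattern-mining system would be far more sophisticated:

```python
from collections import Counter

def detect(pattern, data):
    """Aspect 1: find all positions where a known pattern occurs."""
    return [i for i in range(len(data) - len(pattern) + 1)
            if data[i:i + len(pattern)] == pattern]

def discover(data, length=3, min_count=2):
    """Aspect 2 (toy version): propose candidate 'new' patterns by
    counting substrings of a given length that recur."""
    counts = Counter(data[i:i + length]
                     for i in range(len(data) - length + 1))
    return [(s, n) for s, n in counts.most_common() if n >= min_count]

data = "abcxxabcyyabc"
print(detect("abc", data))   # → [0, 5, 10]
print(discover(data))        # → [('abc', 3)]
```

Note that aspects 3 and 4 — grasping the significance and semantics of a discovered pattern — have no counterpart in this sketch at all, which is exactly the current state of most machine intelligence.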

Recognizing concepts and principles is generally beyond the common machine
intelligence methods of today, and only relevant to AI in relatively limited, niche, and
specialized domains at this time.

Both AI and machine intelligence will gradually and sometimes dramatically move up
the complexity curve in the coming years and decades.

Right now, people are generally pleasantly surprised whenever machines are able to
recognize even relatively simple patterns.

Granted, people are amazed by specialized cases such as DNA and fingerprint
matching, facial recognition, voice recognition, chess, Go, ping pong, and Jeopardy.
But, that’s the point here, that each of those is very specialized and requires careful and
complex hard-wiring of basic knowledge.

It may be another few years or more before people begin to be wowed and blown away
by machines recognizing concepts and principles.

Computer vision

Flip a coin whether to consider computer vision to be AI or machine intelligence.

In truth, most people would not consider visual recognition of objects, scenes, and
qualities to require much in the way of intelligence — even small children can do it. Or
even animals, for that matter.

Computer vision can legitimately be considered AI as well as machine intelligence. The
reader is welcome to choose for themselves.

Personally, I’d consider computer vision more associated with machine intelligence
than AI.

Generally, perception and recognition are inputs to intelligence rather than exemplifying
intelligence itself — reasoning, working with concepts and principles, planning, guiding
activity, etc.

A lot of applications of computer vision encompass types of objects, scenes, and
qualities that an average person might not have much interest or particular skill in. For
example, recognizing heat signatures from infrared light, or detecting defects in objects,
or small movements or small changes in objects and scenes.

For sure, there are plenty of applications for human-like vision, whether for robots in
the home or driverless vehicles, but there are whole other broad categories for computer
vision that are more distinct from or beyond the vision that people generally possess. I
would classify the latter under machine intelligence, although technically some would
insist that this is still AI.

Internet of Things (IoT)

There is a significant opportunity for development and deployment of intelligence to
intelligently deal with large numbers of specialized devices for the so-called Internet of
Things (IoT).

This can include detecting patterns in data from individual devices as well as patterns in
data across many devices, as well as patterns across many different types of devices.

I would definitely classify such intelligence as machine intelligence, especially since it
is highly unlikely that many people would engage in such intelligent activity
themselves. In fact, people are likely to be unable to engage in such activity even if they
wanted to.

Again, technically, this could still be considered AI, but you gain no conceptual benefit
from labelling it AI rather than what it really is — machine intelligence.

Still, if the processing directly parallels human intellectual activity, then by all means
label it as AI as well.

Cybersecurity

Although skilled individuals can do a fair amount of interesting work in the area of
cybersecurity, automation is desperately needed. Some of that may be relatively simple
automation, but a fair amount could easily be classified as AI.

But at some stage, the types of data, its volume, its complexity, and the relationships
within and between the data begins to take on a character that is distinctly beyond what
even a highly-motivated technical specialist could muster. Enter the world of machine
intelligence.

Again, technically, it can still be considered AI, but it begins to look so different from
what any normal person would do that it just makes a lot more sense to label it what it
really is — machine intelligence.

That said, I wouldn’t want to go so far as to label all forms of automated cybersecurity
as machine intelligence. I’d prefer to reserve the term for cases such as discovery of
new patterns of data, new patterns of activity and behavior, and new threats, rather than
detecting instances of data, activity, behavior, and threats which are already known and
manually hard-coded by skilled human operators.

Generally, processing in cybersecurity will fall into one of four general categories:

1. Basic processing. But just vast amounts of data.
2. Automation. Of fairly mundane tasks.
3. AI. Automation that parallels or mimics sophisticated human thought.
4. Machine intelligence. Complex, intelligent processing that doesn’t have a direct
parallel in normal human thought. Or does have a parallel, but the complexity is
beyond that of a human or even a team of people.
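The line between category 2 (automating known signatures) and category 4 (discovering new patterns) can be sketched with a toy anomaly detector that flags activity deviating statistically from an observed baseline, rather than matching hard-coded threat signatures. The data and threshold here are purely illustrative:

```python
import statistics

def find_anomalies(observations, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the
    mean of the observed baseline -- a crude way of discovering new
    patterns of activity rather than matching known signatures."""
    mean = statistics.mean(observations)
    stdev = statistics.pstdev(observations)
    if stdev == 0:
        return []                      # no variation, nothing stands out
    return [x for x in observations
            if abs(x - mean) / stdev > threshold]

# Login attempts per hour (hypothetical): a steady baseline and one spike.
print(find_anomalies([12, 11, 13, 12, 10, 11, 12, 13, 95]))  # → [95]
```

No human hard-coded "95 logins is suspicious" here; the machine derived what counts as unusual from the data itself, which is the seed of the machine-intelligence end of the spectrum.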

The point here for machine intelligence is to emphasize the brain of the computer, not
just its brawn.

As usual, a hybrid is possible and welcome as well — man and machine, each
contributing their own strengths in intelligence.

Data analytics and business intelligence

Again, we don’t want to merely label all forms of automation as machine intelligence,
but there are indeed more advanced forms of data analytics and business intelligence
which contribute more than just raw number crunching and basic statistical processing.

But any time that the machine can discover new, previously unknown patterns, the label
of machine intelligence can become warranted.

That said, if the machine capabilities are simply aiding a human user in their own
detection of patterns, I’d be more reluctant to trot out the label of machine intelligence.

And if the types of patterns recognized look very little like the patterns that a human
would normally recognize, I’d be more cautious about labeling it AI.

Again, I lean towards labeling an automated activity as AI when it has a fairly direct
analog to human mental activity.

Otherwise, let’s just call it what it is — machine intelligence.

Scientific and engineering calculation and modelling

Generally speaking, I personally wouldn’t categorize most scientific and engineering
calculation and modelling as either machine or artificial intelligence, although clearly a
lot of intelligence went into the mathematics of the calculations and models. To me, it’s
more automation.

But if the calculations endeavor to automatically recognize new patterns, rules, concepts
and principles, or any other activity normally associated with higher-order intellectual
activity, then AI and machine intelligence might become appropriate labels. That is not
commonly the case.

Usually it is up to the researcher to use their own mind to recognize patterns, rules,
concepts, and principles from the data output from the calculations or modelling
process.

Very complex data patterns and connections within data

There are a number of applications, domains, and forms of data which have a fairly
dramatic level of complexity, such as:

1. Graphs.
2. Networks.
3. Time series.
4. Complex database joins.
5. Relationships.
6. Complex relationships.

The mere complexity would not automatically confer the label of machine intelligence,
but to the degree that the algorithms working with such data are discovering new
patterns, rules, or even concepts, principles, and methods, it becomes more appropriate
to label such processing as machine intelligence.

Although merely labeling such processing machine intelligence can also bring it under
the larger umbrella of AI, I’d opt to stick with labeling it machine intelligence rather
than labeling it AI unless there are distinctly human patterns, rules, concepts, and
principles involved.

If the machine is doing something that a human would traditionally have done (before
computers were invented), then it could well be considered AI proper.

Summary

The questions one should ask when considering whether a technology is AI or machine
intelligence are:

1. Is it simple, basic information processing? Then it’s probably just automation.
2. Is it a task that humans have traditionally done? If not, it’s probably machine
intelligence.
3. Is it something comparable to working with the five human senses? It’s probably
AI.
4. Is it working with sensor data that is very unlike the five human senses? It’s
probably machine intelligence.
5. Does it involve complex data relationships that no normal human would
personally relate to? Then it’s probably machine intelligence.
6. Does it involve learning that is rather different from the learning of humans?
Then it’s machine intelligence. Or, more specifically, machine learning.

When in doubt, unless it’s basic automation, it’s probably AI unless it’s processing data
that no mere mortal would relate to.
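As a rough illustration, the checklist above can be sketched as a small labeling heuristic. The question flags and category names here are my own framing of this paper's informal criteria, not any established taxonomy or API:

```python
from enum import Enum

class Label(Enum):
    AUTOMATION = "automation"
    AI = "AI"
    MACHINE_INTELLIGENCE = "machine intelligence"

def classify(basic_processing: bool,
             traditional_human_task: bool,
             humanlike_senses: bool,
             humanlike_data: bool) -> Label:
    """Apply the summary questions in order; when in doubt, default to AI."""
    if basic_processing:
        return Label.AUTOMATION                # question 1
    if not traditional_human_task:
        return Label.MACHINE_INTELLIGENCE      # question 2
    if humanlike_senses:
        return Label.AI                        # question 3
    if not humanlike_data:
        return Label.MACHINE_INTELLIGENCE      # questions 4-6
    return Label.AI                            # when in doubt

# Face recognition: a traditional human task working with a human sense.
print(classify(False, True, True, True).value)  # AI
```

Of course, the judgments behind each flag are themselves fuzzy; the sketch only captures the order in which the questions are applied.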

What fraction of strong AI is needed for your particular app?
Not every application requires full human-level intelligence. A mere fraction of human-
level intelligence may be sufficient.

This is not unlike the simple fact that even for jobs staffed by people, not all positions
require a genius or even more than a very modest fraction of what the staff are really
capable of. Think of Einstein working as a patent clerk.

The fraction has three dimensions:

1. Area of intelligence.
2. Level of intelligence. In a given area of intelligence.
3. Degree of competence. For a given level in a given area of intelligence.

Not every application requires all areas of human-level intelligence. Not every app
needs to be a chess grandmaster. Not every app requires facility with quantum
mechanics.

Even in a given area of intelligence, not every app requires all levels of function in that
area. An auto mechanic doesn’t need to be able to design a new engine. A roadside
assistance technician doesn’t need to be able to tear down and rebuild an engine.

Even for a given level of function in a given area of intelligence, a given app doesn’t
require the maximum level of competence. Basic competence may be quite sufficient
and readily achievable, while expert or genius level competence may be expensive,
difficult, or even impractical.
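One way to make the three dimensions concrete is to treat an application's requirement as a set of (area, level, competence) triples rather than a single scalar "intelligence" number. This is a hypothetical sketch; the area names and numeric scales are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    """One slice of the 'fraction of strong AI' an application needs."""
    area: str        # area of intelligence, e.g. "diagnosis"
    level: int       # level of function within that area
    competence: int  # degree of competence needed at that level

def satisfies(profile: dict, reqs: list) -> bool:
    """Does a system's competence profile cover every requirement?"""
    return all(profile.get((r.area, r.level), 0) >= r.competence
               for r in reqs)

# Hypothetical roadside-assistance app: modest levels, basic competence.
roadside = [Requirement("diagnosis", level=3, competence=2),
            Requirement("repair", level=2, competence=3)]

# A system's profile maps (area, level) to its degree of competence.
mechanic_ai = {("diagnosis", 3): 4, ("repair", 2): 3}
print(satisfies(mechanic_ai, roadside))  # True
```

The point of the triple structure is that a system can exceed the requirement in one area while having no competence at all in another, which a single score would hide.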

Areas of intelligence

There are two rather distinct ways to look at areas of intelligence:

1. Abstract human intelligence. Nothing to do with AI per se, but certainly
applies to Strong AI.
2. Areas of research in AI. The areas that AI researchers feel are fruitful for
progress in automating human-level intelligence.

Areas of human intelligence

Even ignoring efforts in AI, human intelligence can be summarized as a variety of
major distinct areas:

1. Perception. The senses or sensors. Forming a raw impression of something in
the real world around us.
2. Attention. What to focus on.
3. Recognition. Identifying what is being perceived.
4. Communication. Conveying information or knowledge between two or more
intelligent entities.
5. Processing. Thinking. Working with perceptions and memories.
6. Memory. Remember and recall.
7. Learning. Acquisition of knowledge and know-how.
8. Analysis. Digesting and breaking down more complex matters.
9. Speculation, imagination, and creativity.
10. Synthesis. Putting simpler matters together into a more complex whole.
11. Reasoning. Logic and identifying cause and effect, consequences, and
preconditions.
12. Following rules. From recipes to instructions to laws and ethical guidelines.
13. Applying heuristics. Shortcuts that provide most of the benefit for a fraction of
the mental effort.
14. Intuitive leaps.
15. Mathematics. Calculation, solving problems, developing models, proving
theorems.
16. Decision. What to do. Choosing between alternatives.
17. Planning.
18. Volition. Will. Deciding to act. Development of intentions. When to act.
19. Movement. To aid perception or prepare for action. Includes motor control and
coordination. Also movement for its own sake, as in communication, exercise,
self-defense, entertainment, dance, performance, and recreation.
20. Behavior. Carrying out intentions. Action guided by intellectual activity. May
also be guided by non-intellectual drives and instincts.

Communication includes a variety of subareas:

1. Natural language.
2. Spoken word.
3. Written word.
4. Gestures. Hand, finger, arm.
5. Facial expressions. Smile, frown.
6. Nonlinguistic vocal expression. Grunts, sighs, giggles, laughter.
7. Body language.
8. Images.
9. Music.
10. Art.
11. Movement.
12. Creation and consumption of knowledge artifacts — letters, notes, books,
stories, movies, music, art.
13. Ability to engage in discourse. Discussion, conversation, inquiry, teaching,
learning, persuasion, negotiation.
14. Discerning and conveying meaning, both superficial and deep.

Recognition includes a variety of subareas:

1. Objects
2. Faces
3. Scenes
4. Places
5. Names
6. Voices
7. Activities

The measure of progress in AI in the coming years will be the pace at which additional
areas from those lists are ticked off, as well as improvements in the degree of
competence at each level of function in each area.

Progress in AI will likely continue to be uneven, with both strength and weakness in
distinct areas, levels of functions, and degrees of competence.

Areas of AI research

Areas of research in replication of human intelligence and human behavior in which AI
researchers feel they can make fruitful progress:

1. Reasoning.
2. Knowledge and knowledge representation.
3. Optimization, planning, and scheduling.
4. Learning.
5. Natural language processing (NLP).
6. Speech recognition and generation.
7. Automatic language translation.
8. Information extraction.
9. Image recognition.
10. Computer vision.
11. Moving and manipulating objects.
12. Robotics.
13. Driverless and autonomous vehicles.
14. General intelligence.
15. Expert systems.
16. Machine learning.
17. Pattern recognition.
18. Theorem proving.
19. Fuzzy systems.
20. Neural networks.
21. Evolutionary computation.
22. Intelligent agents.
23. Intelligent interfaces.
24. Distributed AI.
25. Data mining.
26. Games (chess, Go, Jeopardy).

For more depth in these areas, see Untangling the Definitions of Artificial Intelligence,
Machine Intelligence, and Machine Learning.

Levels of function

In a given area of intelligence, we can also discern levels of function — what is the AI
system actually accomplishing, relative to what a human might be able to accomplish.

Here is a list of generalized, abstract, but informal levels of function that can be
applied to any area of intelligence:

1. Non-functional. No apparent function. Noise. Twitches and vibrations.
2. Barely functional. The minimum level of function that we can discern. No
significant utility. Not normally considered AI. Automation of common trivial
tasks.
3. Merely, minimally, or marginally functional, tertiary function. Seems to
have some minimal, marginal value. Marginally considered AI. Automation of
non-trivial tasks. Not normally considered intelligence per se.
4. Minor or secondary function. Has some significance, but not in any major
way. Common behavior for animals. Common target for AI. Automation of
modestly to moderately complex tasks. This would also include involuntary and
at least rudimentary autonomous actions. Not normally considered intelligence
per se.
5. Major, significant, or primary function. Fairly notable function. Top of the
line for animals. Common ideal for AI at the present time. Automation of
complex tasks. Typically associated with consciousness, deliberation, decision,
and intent. Autonomy is the norm. Bordering on what could be considered
intelligence, or at least a serious portion of what could be considered
intelligence.
6. Highly functional, high function. Highly notable function. Common for
humans. Intuition comes into play. Sophisticated enough to be considered
human-level intelligence. Characterized by integration of numerous primary
functions.
7. Very high function. Exceptional human function, such as standout creativity,
imagination, invention, and difficult problem solving and planning. Exceptional
intuition.
8. Genius-level function. Extraordinary human, genius-level function, or
extraordinary AI function.
9. Super-human function. Hypothetical AI that exceeds human-level function.
10. Extreme AI. Virtuous spiral of learning how to learn and using AI to create new
AI systems ever-more capable of learning how to learn and how to teach new AI
systems better ways to learn and teach how to learn.
11. Ray Kurzweil’s Singularity. The ultimate in Extreme AI, combining digital
software and biological systems.
12. God or god-like function. The ultimate in function. Obviously not a realistic
research goal.

Specific human-level functions of intelligence

At a more detailed level, the mental functions and mental processes of intelligence or
intellectual capacity include:

• Sentience — to be able to feel, to be alive and know it.
• Sapience — to be able to think, exercise judgment, reason, and acquire and
utilize knowledge and wisdom.
• Ability, capability, and capacity to pursue knowledge (information and
meaning.)
• Sense the real world. Sight, sound, and other senses.
• Observe the real world.
• Direct and focus attention.
• Experience, sensation.
• Recognize — objects, plants, animals, people, faces, gestures, words,
phenomena.
• Listen, read, parse, and understand natural language.
• Identification after recognition (e.g., recognize a face and then remember a
name).
• Read people — what information or emotion are they expressing or conveying
visually or tonally.
• Detect lies.
• Take perspective into account for cognition and thought.
• Take context into account for cognition and thought.
• Adequately examine evidence and judge the degree to which it warrants beliefs
to be treated as proof of strong knowledge.
• Compare incoming information to existing knowledge, supplementing,
integrating, and adding as warranted.
• Understand phenomena and processes based on understanding evidence of their
components and stages.
• Assess whether a new belief is strong knowledge or weak knowledge.
• Judge whether fresh knowledge in conjunction with accumulated knowledge
warrants action.
• Learn by reinforcement — seeing the same thing repeatedly.
• Significant degree of self-organization of knowledge and wisdom.
• Form abstractions as knowledge.
• Form concepts as knowledge.
• Organize knowledge into taxonomies and ontologies that represent similarities
and relationships between classes and categories of entities.
• Acquire knowledge by acquaintance — direct experience.
• Acquire knowledge by description — communication from another intelligent
entity.
• Commit acquired knowledge to long-term memory.
• Conscious — alert, aware of surroundings, and responsive to input.
• Feel, emotionally.
• Cognition in general.
• Think — form thoughts and consider them.
• Assess meaning.
• Speculate.
• Conjecture.
• Theorize.
• Imagine, invent, and be creative.
• Ingenuity.
• Perform thought experiments.
• Guess.
• Cleverness.
• Approximate, estimate.
• Fill in gaps of knowledge in a credible manner consistent with existing
knowledge, such as interpolation.
• Extrapolation — extend knowledge in a sensible manner.
• Generalize — learn from common similarities, in a sensible manner, but refrain
from over-generalizing.
• Count things.
• Sense correspondence between things.
• Construct and use analogies.
• Calculate — from basic arithmetic to advanced math.
• Reason, especially using abstractions, concepts, taxonomies, and ontologies.
• Discern and discriminate, good vs. bad, useful/helpful vs. useless, relevant vs.
irrelevant.
• Use common sense.
• Problem solving.
• Pursue goals.
• Foresight — anticipate potential consequences of actions or future needs.
• Assess possible outcomes for the future.
• Exercise judgment and wisdom.
• Attitudes that affect interests and willingness to focus on various topical areas
for knowledge acquisition and action.
• Intuition.
• Maintain an appropriate sense of urgency for all tasks at hand.
• Sense of the passage of time.
• Sense of the value of time — elapsed, present value, and future value.
• Understand and assess motivations.
• Be mindful in thought and decisions.
• Formulate intentions.
• Decide.
• Make decisions in the face of incomplete or contradictory information.
• Sense of volition — sense of will and independent agency controlling decisions.
• Exercise free will.
• Plan.
• Execute plans.
• Initiate action(s) and assess the consequences.
• Assess feedback from actions and modify actions accordingly.
• Iterate plans.
• Experiment — plan, execute, assess feedback, and iterate.
• Formulate and evaluate theories of law-like behavior in the universe.
• Intentionally and rationally engage in trial and error experiments when no
directly rational solution to a problem is available.
• Explore, sometimes in a directed manner and sometimes in an undirected
manner to discover that which is unknown.
• Ability and willingness to choose to flip a coin, throw a dart, or otherwise
introduce an element of randomness into reasoning and decisions.
• Discover insights, relationships, and trends in data and knowledge.
• Cope with externalities — factors, the environment, and other entities outside of
the immediate contact, control, or concern of this intelligent entity.
• Adapt.
• Coordinate thought processes and activities.
• Organize — information, activities, and other intelligent entities.
• Collaborate, cooperate, and compete with other intelligent entities.
• Remember.
• Assert beliefs.
• Build knowledge, understanding (meaning), experience, skills, and wisdom.
• Assess desires.
• Assert desires.
• Exercise control over desires.
• Be guided or influenced by experiences, skills, beliefs, desires, intentions, and
wisdom.
• Be guided (but not controlled) by drives.
• Be guided (but not controlled) by emotions.
• Be guided by values, moral and ethical, personal and social group.
• Adhere to laws, rules, and recognized authorities.
• Selectively engage in civil disobedience, when warranted.
• Recall memories.
• Recognize correlation, cause and effect.
• Reflection and self-awareness.
• Awareness of self.
• Know thyself.
• Express emotion.
• Heartfelt sense of compassion.
• Empathy.
• Act benevolently, with kindness and compassion.
• Communicate with other intelligent entities — express beliefs, knowledge,
desires, and intentions.
• Form thoughts and intentions into natural language.
• Formulate and present arguments as to reasons, rationale, and justification for
beliefs, decisions, and actions.
• Persuade other intelligent entities to concur with beliefs, decisions, and actions.
• Judge whether information, beliefs, and knowledge communicated from other
intelligent entities are valid, true, and worth accepting.
• Render judgments about other intelligent entities based on the information,
beliefs, and knowledge communicated.
• Render judgments as to the honesty and reliability of other intelligent entities.
• Act consistently with survival — self-preservation.
• Act consistently with sustaining health.
• Regulate thoughts and actions — self-control.
• Keep purpose, goals, and motivations in mind when acquiring knowledge and
taking action.
• Able to work autonomously without any direct or frequent control by another
intelligent entity.
• Adaptability.
• Flexibility.
• Versatility.
• Refinement — make incremental improvements.
• Resilience — able to react, bounce back, and adapt to shocks, threats, and the
unexpected.
• Understand and cope with the nature of oneself and entities one is interacting
with, including abilities, strengths, weaknesses, drives, innate values, desires,
hopes, and dreams.
• Maintain a healthy balance between safety and adventure.
• Balance long-term strategies and short-term tactics.
• Positive response to novelty.
• Commitment to complete tasks and goals.
• Respect wisdom.
• Accrue wisdom over time.
• Grow continuously.
• Tell the truth at all times — unless there is a socially-valid justification.
• Refrain from lying — unless there is a socially-valid justification.
• Love.
• Dream.
• Seek a mate to reproduce.
• Engage in games, sports, and athletics to stimulate and rejuvenate both body and
mind.
• Engage in humor, joking, parody, satire, fiction, and fairy tales, etc. to relax,
release tension, and rejuvenate the mind.
• Seek entertainment, both for pleasure and to rejuvenate both body and mind.
• Selectively engage in risky activities to challenge and rejuvenate both body and
mind.
• Experience excitement and pleasure.
• Engage in music and art to relax and to stimulate the mind.
• Day dream (idly, for no conscious, intentional purpose) to relieve stress and
rejuvenate the mind.
• Seek to avoid boredom.
• Engage in disconnected and undirected thought, for the purpose of seeking
creative solutions to problems where no rational approach is known, or simply in
the hope of discovering something of interest and significant value.
• Brainstorm.
• Refrain from illegal, immoral, or unfair conduct.
• Resist corruption.
• Maintaining and controlling a healthy level of skepticism.
• Maintaining a healthy balance between engagement and detachment.
• Accept and comprehend that our perception and beliefs about the world are not
necessarily completely accurate.
• Accept and cope with doubt.
• Accept and cope with ambiguity.
• Resolve ambiguity, when possible.
• Solve puzzles.
• Design algorithms.
• Program computers.
• Pursue consensus with other intelligent entities.
• Gather and assess opinions from other intelligent entities. Are they just opinion,
or should they be treated as knowledge?
• Develop views and positions on various matters.
• Ponder and arrive at positions on matters of politics and public policy.
• Decide how to vote in elections.
• Practice religion — hold spiritual beliefs, pray, participate in services.
• Respond to questions.
• Respond to commands or requests for action.
• Experience and respond to pain.
• Sense to avoid going down rabbit holes — being easily distracted and difficult to
get back on track.
• Able to reason about and develop values and moral and ethical frameworks.
• Be suspicious — without being paranoid.
• Engage in philosophical inquiry.
• Critical thinking.
• Authenticity. Thinking and acting according to a strong sense of an autonomous
self rather than according to any external constraints, cultural conditioning, or a
preprogrammed sense of self.

That’s a very large amount of intellectual capacity which an AI system will need to
possess to be truly classified as Strong AI.

But as this paper suggests, we can partition intelligence into distinct areas, distinct
levels of function in each area, and distinct degrees of competence for each level of
function in each area.

Degree of competence

A given AI system will have some degree of competence for some level of function in
some area of intelligence. That AI system may have differing or even nonexistent
competence for other levels of function or in other areas of intelligence.

Levels of competence include:

1. Nothing. No automation capabilities in a particular area or level of function.
User is completely on their own.
2. Minimal subset of full function. Something better than nothing, but with severe
limits.
3. Rich subset. A lot more than minimal, but with substantial gaps.
4. Robust subset. Not complete and maybe not covering all aspects of a level of
function in an area, but close to complete in all aspects that it covers.
5. Near-expert. Not quite all there, but fairly close and good enough to fool the
average user into thinking an expert is in charge.
6. Expert-level. All there.
7. Elite expert-level. Best of human experts.
8. Super-expert level. More than even the best human experts.
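Since these levels form an ordered scale, they can be modeled as directly comparable values. A minimal sketch, assuming the simple numbering below; the function names in the example map are hypothetical:

```python
from enum import IntEnum

class Competence(IntEnum):
    """The eight degrees above, numbered so they can be compared directly."""
    NOTHING = 0
    MINIMAL_SUBSET = 1
    RICH_SUBSET = 2
    ROBUST_SUBSET = 3
    NEAR_EXPERT = 4
    EXPERT = 5
    ELITE_EXPERT = 6
    SUPER_EXPERT = 7

# Per-function competence for a hypothetical current AI system.
system = {
    "speech recognition": Competence.NEAR_EXPERT,
    "commonsense reasoning": Competence.MINIMAL_SUBSET,
}

# Most functions fall in the minimal-to-rich range, as noted above.
below_robust = [f for f, c in system.items() if c < Competence.ROBUST_SUBSET]
print(below_robust)  # ['commonsense reasoning']
```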

Levels of competence for current AI systems are all over the map.

In some towers of intelligence current AI systems may indeed be at the expert, elite
expert, or even super-expert level.

But as a general proposition, current AI systems tend to be in the minimal to rich subset
range of competence for most functions.

Common use of natural language processing (NLP)

One of the brighter spots of AI in recent years has been the widespread use of fairly
competent natural language processing (NLP).

The recent widespread popularity of intelligent digital assistants which focus on natural
language processing exemplifies this progress.

There are several unfortunate blemishes on this progress:

1. Most intelligent digital assistants require an extremely complex service in the
cloud. All of this NLP progress is still well beyond the capabilities of a typical
personal computing device.
2. Too much of the progress has been with proprietary systems rather than with
open source software.
3. Too much of the progress is beyond the reach of average software developers.
Only elite AI professionals need apply.

I expect these blemishes to be overcome without too much difficulty, but it may still be
another two to five years before natural language processing becomes a slam dunk and
second nature for computing in general. For now, it remains more of a special feature
rather than a presumed general feature.

Autonomy, principals, agents, and assistants

Autonomy is a key requirement for true strong AI. This means that the AI system would
be able to set its own goals, not merely do the bidding of a human master.

I have identified three levels of autonomy:

1. Principals. Full autonomy. Entity can set its own goals without approval or
control from any other entity.
2. Agents. Limited autonomy. Goals are set by another entity, a principal. Entity
has enough autonomy to organize its own time and resources to pursue tasks
needed to achieve the goals which it has been given.
3. Assistants. No significant autonomy to speak of. Unable to set its own goals. In
fact an assistant is given specific, relatively narrow tasks to perform, with no
real latitude as to how to complete each task.

A strong AI system would have full autonomy. It would be able to act as a principal.

Note that a driverless car would be an agent. It cannot decide for itself where to go, but
given a destination, it is free to choose the route to get there.
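The three levels reduce to two questions: can the entity choose *what* to pursue, and can it choose *how*? A minimal sketch of that framing, with the driverless car as the worked example (the flag and function names are my own):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    name: str
    sets_own_goals: bool     # can it choose *what* to pursue?
    chooses_own_means: bool  # can it choose *how* to pursue it?

PRINCIPAL = AutonomyLevel("principal", True, True)
AGENT = AutonomyLevel("agent", False, True)
ASSISTANT = AutonomyLevel("assistant", False, False)

def autonomy_level(sets_own_goals: bool,
                   chooses_own_means: bool) -> AutonomyLevel:
    """Classify an entity by its answers to the two questions."""
    if sets_own_goals:
        return PRINCIPAL
    return AGENT if chooses_own_means else ASSISTANT

# A driverless car cannot pick its destination but does pick the route.
print(autonomy_level(False, True).name)  # agent
```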

Technically, we shouldn’t expect full autonomy for AI systems in the foreseeable future.
That would mean citizen robots which control their own destiny rather than merely
doing our bidding, which would limit them to being agents. Think HAL in 2001
or Skynet in Terminator.

Drones might be either agents or assistants, depending on whether their flight is
completely automated (including scheduling) or remotely controlled at all times by a
human pilot. The former constitutes an agent, the latter an assistant.

For more on autonomy, principals, agents, and assistants see these two papers:

• Intelligent Entities: Principals, Agents, and Assistants
• What Are Autonomy and Agency?

Intelligent agents

Although agents by definition do not have full autonomy, it is very helpful if they have
a significant degree of autonomy so that the user can request that a goal be pursued
without needing to expend any significant energy detailing how the agent should
achieve that goal.

Intelligent agents don’t really exist today. Rather, we are seeing significant activity and
progress with intelligent assistants, but these AI systems are focused on narrow,
specified tasks with much less latitude as to how to achieve them, rather than broader
goals.

As mentioned previously, a driverless car is effectively an intelligent agent. It exercises
a significant degree of discretion to achieve the requested goal.

Intelligent digital assistants

Although we don’t have much in the way of principals and agents (besides driverless
cars), we are seeing significant activity and progress with intelligent assistants or
intelligent digital assistants, such as Alexa and Siri, able to complete relatively simple
tasks requested in spoken natural language.

For more on intelligent digital assistants, see What Is an Intelligent Digital Assistant?

The robots are coming, to take all our jobs?

Are massive waves of workers about to lose their jobs due to automation and intelligent
robots?

Uh, in a word, no.

Yes, robots are getting incrementally more sophisticated as every year goes by, but they
are still quite primitive.

We are definitely seeing significant progress in machine intelligence, but not so much
progress yet on higher-order human-level intellectual capacities.

Yes, incrementally, small numbers of workers will be displaced by robots, but nothing
major anytime soon.

Ongoing innovation will tend to create new forms of employment as quickly as older
jobs are eliminated. Granted, training and relocation may be required, but that’s the
world we live in.

How intelligent is an average worker?

Although robots are nowhere close to being capable of replacing large numbers of
workers, at some point in the more distant future that may indeed be the case.

Besides, an average worker doesn’t really utilize more than a tiny fraction of their
intelligence for the tasks they are commonly assigned.

So, how intelligent does a machine have to be to replace an average worker?

Not very.

But, still, probably significantly more intelligent than the current crop of robots.

Besides, even if 90% to 99% of the tasks performed by an average worker require
minimal intelligence, that other 1% to 10% of their work may require significantly more
intelligence, such as:

1. How to deal with equipment which fails.
2. How to deal with balky equipment which behaves in an inconsistent manner on
occasion.
3. Ability to cope with special requests.
4. Flexibility and adaptability.

Granted, even many of those areas are also significant opportunities for automation, but
progress will continue to be inconsistent, undependable, and problematic.

Sure, eventually, most issues will be resolved. But not so soon.

Incrementally, more and more workers will have their full jobs automated, but that
gradual process will likely be compatible with the incremental appearance of new forms
of work.

Even if it is indeed theoretically possible to innovate all existing jobs away, there will
always be practical reasons that this process will not occur rapidly.

And to the extent that average workers are only using a small fraction of their
intellectual capacity, they have excellent potential to be trained for new jobs.

No sign of personal AI yet (strong AI)

The appearance of the personal computer (PC) was a major revolution. Ditto for the cell
phone and smartphone. We haven’t seen such a revolution for personal AI yet, in the
sense of strong AI. Weak AI, yes; strong AI no.

Okay, we have Alexa and Siri and other intelligent digital assistants, but they are too
minimal and too specialized to constitute a true revolution to what I would call personal
AI.

Alexa and Siri remind me more of the very early personal computers, such as the Altair,
Commodore, and Atari computers, which were truly mere toys and more suitable for
playing games and having fun than any serious computing. Even the Apple II was in
that category.

It wasn’t until the advent of the IBM PC and the Apple Macintosh that personal
computers could finally be counted on to do serious work.

That’s the kind of transition we are still waiting for in AI. From toylike, simple,
single-task features to broader and richer goal-oriented activities.

More significantly, we need AI systems that can figure out our needs and automatically
address them without requiring us to explicitly and carefully detail individual tasks.

AI systems and features currently provide plenty of automation, but are not yet offering
any significant higher-order human-level intellectual capacities.

Another benefit of the personal computer was that it offered a fairly rich set of
features right out of the box without requiring an expensive connection to an external
service. Alexa and Siri are interesting, but most of their function is accomplished in the
networked cloud rather than locally.

The four main qualities which I am looking for in personal AI, a breakthrough
comparable to the personal computer (IBM PC and Apple Macintosh), are:

1. Fairly rich set of features. 10X to 100X what Alexa provides. Covers a much
broader swath of the average person’s daily life.
2. Very easy for average user to set up, configure, control, monitor, and
understand. No degree in rocket science required. No technical sophistication
required.
3. No network connection required. Yes, a network connection may provide
additional features and power, but would not be required. Or at least not always
be required.
4. Based on open source software. Users should not be held hostage by vendors
and should be able to view and even enhance their systems. Even if a user
doesn’t wish to do this themselves, they can at least take advantage of the work
of other users who are willing and capable of taking advantage of open source
capabilities.

AI is generally not yet ready for consumers

AI systems are currently oriented more towards high-end applications than consumer
features, except for the most basic capabilities.

Sure, web sites may use lots of AI under the hood or have AI chat bots for automated
customer service, but none of this is a direct benefit for the consumer.

Generally, AI is not yet ready for consumers for any higher-order human-level
intellectual capacities.

Only limited, narrow niches seem particularly ripe for consumer AI, such as:

1. Simple robotic animals. May be fun, amusing, and interesting, but offer little in
the way of intellectual capacity.
2. Task-specific AI or domain-specific features.
3. Broader but shallow features, such as intelligent digital assistants.
4. Photo manipulation and management.
5. Gaming.
6. Driving automation.

Driving automation has significant impact, but is generally weak to moderate AI, well
short of strong AI, with relatively narrow, specific functions such as:

1. Self-parking cars.
2. Self-driving vehicles.
3. Automated navigation.
4. Collision avoidance. Still problematic, but showing promise.

Meaning and conceptual understanding

A major shortfall of current AI systems relative to the metric of strong AI is a complete
lack of comprehension of human-level meaning and concepts.
An advanced AI system may be able to associate identities of objects, but the system
has no notion of how an object is important to a person, why it is important, or even
what its human-level importance is. AI systems may have mastered a lot of the details
of objects but the concepts are still out of reach on anything more than a superficial
basis.

Strong AI will of necessity and by definition need to comprehend human-level meaning.

Emotional intelligence

One important area where most AI systems are severely lacking is emotional
intelligence.

Emotional intelligence is not even needed for many weak AI applications.

But for AI systems to operate effectively in social environments, emotional intelligence
will be essential.

The mere fact that emotional intelligence does not even get mentioned for most AI
systems underscores that these systems are not aimed at strong AI.

That will be a key indicator of true progress towards strong AI — that emotional
intelligence begins to play a more significant and even essential role.

Wisdom, principles, and values

I personally subscribe to the four-level model for knowledge, DIKW:

1. Data
2. Information
3. Knowledge
4. Wisdom

The first two levels are handled very adequately by current digital computing.

Knowledge is a mixed bag, with some fairly decent advances, but still some gaps. Facts
are a slam-dunk for the most part. Know-how can still be problematic, depending more
on hardcoded preprogramming than on true machine learning of concepts and
human-level meaning.
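To make the distinction concrete, here is a minimal sketch in Python, using invented temperature readings, of how current software comfortably handles the first three levels while wisdom remains out of reach:

```python
# Minimal DIKW sketch (invented temperature readings).
# Data: raw, uninterpreted values.
data = [21.5, 22.0, 35.5, 21.8]

# Information: data given context and structure.
information = [{"sensor": "room", "celsius": c} for c in data]

# Knowledge: a rule (know-how) applied to information.
def too_hot(reading, threshold=30.0):
    return reading["celsius"] > threshold

alerts = [r for r in information if too_hot(r)]
print(len(alerts))  # → 1 (only the 35.5 reading exceeds the threshold)

# Wisdom: deciding whether an alert actually matters, and why, based on
# abstract principles and values. No code for that yet.
```

The first three levels reduce to data structures and rules; the fourth does not.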

But wisdom is a whole other category, virtually untouched by current AI systems. And
considered essential for a mature human being.

Beyond basic facts and practical know-how, wisdom includes principles and values,
both more abstract than concrete. And the ability to apply abstract principles and values
in other domains where concrete knowledge, facts, and know-how may be minimal.

A companion paper has a proposed list of core principles of general wisdom for any AI
system which wishes to qualify as Strong AI:
• Proposal for Level 0 General Wisdom

We may well be more than a few years away from AI systems that exhibit human-level
wisdom, or even a small fraction of human-level wisdom.

Extreme AI

I refer to a concept called extreme AI, which would be a significant steppingstone to
Kurzweil’s Singularity.

Extreme AI indicates that a machine has several key capabilities:

1. It can learn how to learn. Far beyond preprogrammed knowledge and
intelligence or even so-called machine learning. Not only is the machine capable
of learning, but it is also capable of learning how to learn. In other words, the
capacity to learn is not limited to preprogrammed capabilities.
2. It can generate new AI systems on its own, without human intervention.
3. It can teach another AI system how to learn. And how to learn how to learn.
4. It can learn how to learn how to teach how to learn how to learn. That closes the
loop, allowing new generations of AI systems to be significantly more powerful,
without human intervention.

For more on extreme AI, see the companion paper Extreme AI: Closing the Loop and
Opening the Spiral, as well as the larger paper, Untangling the Definitions of Artificial
Intelligence, Machine Intelligence, and Machine Learning.

Current AI systems are not even beginning to exhibit extreme AI capabilities.

Extreme AI doesn’t even appear to be on the more distant horizon.

Ethics and liability

This paper is not intended to delve into ethical and legal issues of technology, focusing
primarily on the technology itself. These matters are covered a little in the companion
paper Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and
Machine Learning.

Other than to simply say that yes, there will be lots of ethical issues that will arise as
the age of strong AI begins to dawn, but we’re not even close yet, so not to worry.

There will be legal liability issues as well, but many of them already apply to existing
software systems.

Lethal autonomous weapons (LAWs) will present interesting ethical and legal
challenges, including international law and the law of war.

Fully autonomous or near-fully autonomous AI systems will present serious ethical and
legal challenges as well.

I don’t want to downplay these matters, but they are somewhat beyond the scope of this
paper. They warrant a separate paper or papers, but the unfortunate fact is that we won’t
be able to address ethical and legal issues in any meaningful depth until we understand
much better the capabilities of such systems, which we don’t.

Attempting to solve a problem which doesn’t yet exist is always problematic. Yes, we
can and should anticipate potential problems, but over-anticipating could cause more
problems than it might solve.

Dramatic breakthroughs needed

There is no question that a wide variety and range of dramatic breakthroughs are needed
to achieve true strong AI.

But what are these breakthroughs?

And how do we get them to happen?

The unfortunate reality of breakthroughs in general is that they have:

1. Unpredictable pace.
2. Unpredictable timing.
3. Unpredictable impact.

The best and most valuable breakthroughs tend to come out of nowhere, when you least
expect them.

Personally, I have no faith in:

1. Manhattan-style projects.
2. Moonshot programs.

Those approaches do work, but only on rare occasion, when there is a critical mass and
all of the fundamental elements are essentially in place.

Yes, significant research is still needed, and that means money, people, and priority, but
the emphasis should always be on patient effort, not some insane belief that if we throw
enough money at the problem a magical solution will appear overnight.

Patience is the watchword.

That said, when breakthroughs do eventually come, they always come fast and
furiously.

But when?

That’s the eternal — and unanswerable — question.

But with AI as it is today, far too few of the fundamental elements are in place, or
anything close to it.
Fundamental computing model

All of our great progress in digital computing has been based on the power of Turing
machines, but there is no great clarity as to whether a Turing machine has sufficient
conceptual power to simulate the human brain and mind.

Turing himself hypothesized what he called a B-type u-machine which could rival the
computational power of the human mind. The operations of his unorganized machine
(u-machine) more closely parallel the neurons of the human brain. A B-type machine
has the ability to dynamically reconfigure the connections between those computing
elements.

Whether today’s neural network computing models have sufficient power to simulate
the human brain is unclear, especially as many of them are simulated on Turing
machines. There have been efforts to build specialized hardware for AI, and neural
networks in particular, but there is no clarity about the effectiveness or limitations of
such efforts. Such hardware may be faster than a Turing machine, but the essential
question is whether it can compute anything that a Turing machine cannot. Certainly
there is great hope, but hope itself is not a solution.

Some researchers believe there is a lot more going on within neurons and their
connections that we do not yet fully fathom.

In the end, the fundamental computing model will matter, but we are not there yet, here
in April 2018.

How many years from research to practical application?

When we do achieve conceptual breakthroughs, how long will it take to get them from
the lab to the hands of consumers?

Unfortunately, the answer is that the time to market is unknown, unknowable, and
highly variable.

Stages of the process include:

1. Conceptual understanding of an area of intelligence. The theoretical, conceptual
essence of the breakthrough.
2. Development of strategies for implementing that conceptual understanding.
3. High-end research lab implementation.
4. High-end government application.
5. High-end business application.
6. Average business application.
7. General business application.
8. High-end consumer application.
9. Above-average consumer application.
10. Average consumer application.
11. General, low-cost consumer application.

Some of those stages can be slow and laborious and depend on multiple breakthroughs,
while others may happen in rapid succession, in parallel, or may even be skipped in
some cases.

There is always the chance that some guy in his garage might engineer a breakthrough
that can be taken directly to market, but that is more of a fluke than something we
should depend upon. And usually the guy in his garage is building upon a lot of
foundational work which was done previously for high-end applications.

Turing test for strong AI

How do we actually test, measure, or evaluate whether a given AI system does or
doesn’t possess human-level intelligence?

Unfortunately, there is no great clarity for measuring intelligence.

The two main measures have been:

1. Standardized IQ tests.
2. The Turing test.

Human intelligence has traditionally been measured using standardized IQ tests. People
commonly recognize that an IQ of 140 is a genius and 160 is a super-genius.

Granted, there are a variety of disputes over standardized IQ tests, but they are the gold
standard.

But today’s AI systems could not even take an IQ test, let alone score high. That alone
says something about where we are relative to strong AI.

Admittedly, a clever AI researcher could preprogram an AI system to be able to parse
and answer a wide enough range of typical IQ test questions so that the AI system could
not only take the test, but score fairly highly.

But, that exemplifies the state of affairs for AI today, namely that an AI system can be
fairly readily preprogrammed for a specific, relatively narrow application, but that does
not mean that such an AI system is capable of general intelligence applicable to a wide
range of problems.

There have been some proposals for specialized IQ tests designed specifically for AI
systems, but that only proves the point, that machine intelligence is rather distinct from
higher-order human-level intelligence.

The so-called Turing test, which Turing himself referred to as The Imitation Game, is
more of a metaphor for arriving at a true/false answer as to whether a human or a
machine is on the other end of a communication link (in another room from the
observer).

The basic concept of the Turing test is to construct questions such that the answer will
provide a clue as to whether the responding entity is intelligent or not. So, ask a bunch
of questions, evaluate the responses, and decide whether the entity might be a person or
likely merely a machine which is imitating a person.
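That procedure can be sketched as a toy judging harness. Everything here is invented for illustration, the canned responder and the crude keyword judge alike; a real Turing test would involve a human interrogator:

```python
# Toy sketch of the Turing test procedure: ask questions, collect
# responses, and let a judge score each as human-like or not.
# The responder and judge are hypothetical stand-ins.

def canned_responder(question):
    # A machine imitating a person with canned answers.
    canned = {
        "What is your favorite color?": "Blue, I suppose.",
        "How do you feel today?": "ERROR: FEELING NOT FOUND",
    }
    return canned.get(question, "I do not understand.")

def judge(response):
    # Crude heuristic judge: flag obviously mechanical replies.
    mechanical_signs = ["ERROR", "does not compute", "I do not understand"]
    return not any(sign in response for sign in mechanical_signs)

def imitation_game(responder, questions):
    human_like = sum(judge(responder(q)) for q in questions)
    # Verdict: a majority of answers must seem human-like.
    return "human" if human_like > len(questions) / 2 else "machine"

questions = [
    "What is your favorite color?",
    "How do you feel today?",
    "Where did you grow up?",
]
print(imitation_game(canned_responder, questions))  # → machine
```

In practice the judge is a human, and constructing questions that reveal the machine is the hard part.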

Technically, it’s a very hard problem.

I haven’t heard of any AI system that can credibly pass, consistently. There have been
some claims of success, but there have also been rebuttals to such claims. So, it remains
a matter of dispute.

And even if a claim to pass a given Turing test were to hold up, it merely means that the
entity was intelligent for that particular test, with no guarantee that the entity would
respond intelligently for other tasks.

Again, there is the risk that the AI system might be preprogrammed to pass the test
rather than truly intelligent and able to learn on its own.

In short, the traditional Turing test is too weak and vague to be a technically robust test
of true, higher-order human-level intelligence.

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans
Apart) is a variation on Turing’s original test which focuses more on visual perception
rather than being an intellectual challenge. The test presents a small amount of arbitrary
text that is artificially distorted to make it very difficult to apply traditional optical
character recognition techniques. It does indeed work fairly well, although it has some
limits and there have been a number of recent efforts that do seem to be able to defeat or
pass CAPTCHA tests automatically, or by diverting the test query to a pool of users
who earn a small fee for correctly responding to the challenge.
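The distortion idea can be sketched in a few lines of Python. This toy mangles plain text with invented noise characters, whereas real CAPTCHAs distort rendered images:

```python
import random

# Toy CAPTCHA-style challenge: distort a short text so that naive
# character matching fails, while a human can still read it.
# Purely illustrative; real CAPTCHAs distort rendered images, not strings.

def distort(text, seed=42):
    rng = random.Random(seed)  # seeded, so the output is repeatable
    noisy = []
    for ch in text:
        # Randomly vary case and inject visual noise characters.
        ch = ch.upper() if rng.random() < 0.5 else ch.lower()
        noisy.append(ch)
        if rng.random() < 0.3:
            noisy.append(rng.choice("~^'`"))
    return "".join(noisy)

challenge = distort("hello42")
print(challenge)  # distorted, but still human-readable
```

Stripping the noise characters and case recovers the original text, which is exactly what CAPTCHA-defeating programs learn to do.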

But just because an algorithm does indeed defeat particular CAPTCHA tests does not
mean that it can defeat all CAPTCHA tests.

Worse, CAPTCHA was never really a test of intelligence in the sense of higher-order
intellectual capacities. At best, you could claim a tower of intelligence which works
well for a narrow range of tasks but has no real applicability to a wider range of tasks.

The main issue with algorithms that defeat CAPTCHA is that they represent a
preprogrammed or trained skill rather than true human-level learning.

Worse, a CAPTCHA-defeating system doesn’t even comprehend the human-level
meaning of what it is doing, whereas a true human-level intelligence would in fact
fathom the nature of its tasks.

There has also been some work in training AI systems to pass various college-level
tests.

Once again, that is impressive as a heuristic or machine intelligence, but the AI system
doesn’t comprehend the meaning and concepts behind the subject matter being tested.

The AI system may well be able to pass the test, but wouldn’t necessarily be able to
succeed at applying the subject matter to solving real-world problems.

A better test would be to provide the AI system with only a raw PDF of the textbook
and any related materials, and then have it take the test. Or even better, have it answer
detailed questions on the subject from a sophisticated user.

Even that would not assure that the AI system actually comprehended the full depth of
the meaning and concepts of the subject matter.

Maybe that’s the indicator of our status with AI, that current AI systems mostly depend
on preprogrammed knowledge and limited, domain-specific learning so that they are not
truly facile with the concepts and their true meaning.

In short, we are not yet in a position to have reliable tests to evaluate the intelligence of
an AI system.

How to score the progress of AI

I haven’t worked out any precise, numerical scoring system for AI progress towards
strong AI, but there are some possibilities.

I think it would make sense to have four levels of scoring:

1. A score for each degree of competence in each level of function for each area of
intelligence. This would be the finest grain of scoring. Very specific.
2. An overall score for each level of function for each area of intelligence.
3. An overall score for each area of intelligence.
4. An overall score across all areas of intelligence. The total score. The IQ of the
AI system.

How to score AI systems that are towers of intelligence, that focus on particular tasks
or domains, or that are deficient in some areas while excelling in others is problematic.
Having separate scores for each area would make it easier to tell what the real story is.
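Those four levels of scoring could roll up hierarchically. Below is a rough sketch in Python, with made-up scores on a 0 to 100 scale for hypothetical areas and functions:

```python
# Hierarchical AI scoring sketch: competence scores roll up from
# degree -> function -> area -> overall. All numbers are invented.

scores = {
    "perception": {"image recognition": [80, 70], "speech": [60]},
    "reasoning":  {"logic": [40], "planning": [30, 20]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Level 2: a score per function (average over degrees of competence).
function_scores = {
    area: {fn: mean(degrees) for fn, degrees in fns.items()}
    for area, fns in scores.items()
}

# Level 3: a score per area of intelligence.
area_scores = {a: mean(list(f.values())) for a, f in function_scores.items()}

# Level 4: the overall score -- the "IQ" of the AI system.
overall = mean(list(area_scores.values()))
print(area_scores, overall)  # → {'perception': 67.5, 'reasoning': 32.5} 50.0
```

Keeping the per-area scores alongside the overall score is what exposes a tower of intelligence: a high total can hide a near-zero area.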

Links to my AI papers

The briefest introduction to AI:

• What Is AI (Artificial Intelligence)?

I have a single master list of internet links to all of my AI-related papers:

• List of My Artificial Intelligence (AI) Papers

The most in-depth coverage of AI:

• Untangling the Definitions of Artificial Intelligence, Machine Intelligence,
and Machine Learning

Conclusion: So, how long until we finally see strong AI?


Okay, I’ll throw caution to the wind and put a stake in the ground. I’ll say that we are at
least ten years from widespread application of strong AI. And that’s being very
optimistic.

Ten years from now would be 2028, a year short of Kurzweil’s target of 2029.

So, I’ll say ten to fifteen years, or 2028 to 2033. That’s not grossly out of step with
Kurzweil’s target.

Written by

Jack Krupansky

Freelance Consultant

What Is AI (Artificial Intelligence)?

Jack Krupansky

Jan 5, 2018

Artificial Intelligence, commonly known as AI, is everywhere these days. Or so it
seems. Or so they say. But what is AI really? This short, informal paper will provide
the casual reader with a very brief explanation that should be readily digestible for those
without the patience to read my full, 150-page paper that explores this topic in much
greater depth — Untangling the Definitions of Artificial Intelligence, Machine
Intelligence, and Machine Learning.

First, the obvious. AI means that the following elements are involved:

1. A machine.
2. A computer.
3. Computer software.
4. Some degree of intelligence that is suggestive of the intelligence of a human.

The operative definition of AI is fairly simple:

• AI is the capacity of a computer to approximate some fraction of the intellectual
capacity of a human being.

What about robotics, so much of which is merely mechanical and seemingly unrelated
to any intellectual activity — is it really AI per se? There is a section on Robotics later
in this paper to explore this question a little deeper. The short answer is that it’s a
fielder’s choice how much of robotics should be considered AI. If it enables or supports
intellectual activity or the carrying out of intellectual intentions, then it’s fair to be
considered under the rubric of AI.

The operative word there is suggestive — meaning that AI doesn’t require achieving the
full range of human cognitive and behavioral capabilities, but merely enough of a
fraction of the full range that at least hints at or gives the appearance of human-level
intelligence.

What fraction of human-level intelligence is required to merit the AI label? There is no
gold standard. It’s a matter of debate. And it’s very subjective.

Traditionally AI has held a two-level distinction about that fraction:

1. Weak AI. Only a relatively small, limited fraction of human intelligence.
2. Strong AI. Much closer if not at or above human intelligence.

You can also read articles about superintelligence, far beyond even human intelligence.
But that’s more the realm of science fiction and speculation at this stage.

In fact, even strong AI remains far beyond our technological reach at this stage.

What we have to settle for today is a variety of levels of weak AI.

In my longer paper I settled on five levels of intelligence:

1. Weak AI or Light AI. Individual functions or niche tasks, in isolation. Any
learning is limited to relatively simple patterns.
2. Moderate AI or Medium AI. Integration of multiple functions and tasks, as in
a robot, intelligent digital assistant, or driverless vehicle. Possibly some
relatively limited degree of learning.
3. Strong AI. Incorporates roughly or nearly human-level reasoning and some
significant degree of learning.
4. Extreme AI. Systems that learn and can produce even more capable systems
that can learn even more capably, in a virtuous spiral.
5. Ultimate AI. Essentially Ray Kurzweil’s Singularity or some equivalent of
superhuman intelligence. Also called superintelligence.

Weak AI is generally categorized as task-specific or domain-specific. The AI system
must be preprogrammed with task or domain-specific knowledge and skill, with
minimal ability to learn very much on its own other than patterns, even with so-called
machine learning.

Current intelligent digital assistants have achieved a minimal level of Moderate AI,
but they still fall far short of Strong AI.

Many current consumer and industrial products have some level of Weak AI,
occasionally bordering on minimal Moderate AI. It is common now to use the adjectives
intelligent or smart to indicate the presence of Weak AI in a product, system, service, or
feature, such as:

• Intelligent digital assistants.
• Smart appliances.
• Smart devices.
• Smart homes.
• Smart vehicles.

Again, these systems and devices exhibit some fraction of a human-level function, but
usually only in some relatively modest sense. And certainly nothing approaching
human-level Strong AI.

My longer paper also discusses levels of competence, or how robust and capable a given
implementation is in any particular area of function, relative to a fully-functional
human. I call this competent AI. The levels of automation competence are:

1. Nothing. No automation capabilities in a particular area. User is completely on
their own.
2. Minimal subset of full function. Something better than nothing, but with severe
limits.
3. Rich subset. A lot more than minimal, but with substantial gaps.
4. Robust subset. Not complete and maybe not covering all areas, but close to
complete in areas that it covers.
5. Near-expert. Not quite all there, but fairly close and good enough to fool the
average user into thinking an expert is in charge.
6. Expert-level. All there.
7. Super-expert level. More than an average human expert.

That’s it, the starting point for an understanding of AI. Continue reading if you need a
little more depth.

My longer paper also discusses the spectrum of functional behavior, to categorize how
functional a system is overall. The point of this model is that:

1. Behavior of both human and digital systems, as well as animals, can be
classified based on level of function.
2. Functional behavior spans a broad spectrum of levels.
3. Functional behavior must reach the level of being highly functional or high
function in order to be considered comparable to human-level intelligence or
behavior.
4. Integration and coordination of functions is requisite for high function and true,
human-level intelligence.

The levels of function in this spectrum are:

1. Non-functional. No apparent function. Noise. Twitches and vibrations.
2. Barely functional. The minimum level of function that we can discern. No
significant utility. Not normally considered AI. Automation of common trivial
tasks.
3. Merely, minimally, or marginally functional, tertiary function. Seems to
have some minimal, marginal value. Marginally considered AI. Automation of
non-trivial tasks. Not normally considered intelligence per se.
4. Minor or secondary function. Has some significance, but not in any major
way. Common behavior for animals. Common target for AI. Automation of
modestly to moderately complex tasks. This would also include involuntary and
at least rudimentary autonomous actions. Not normally considered intelligence
per se.
5. Major, significant, or primary function. Fairly notable function. Top of the
line for animals. Common ideal for AI at the present time. Automation of
complex tasks. Typically associated with consciousness, deliberation, decision,
and intent. Autonomy is the norm. Bordering on what could be considered
intelligence, or at least a serious portion of what could be considered
intelligence.
6. Highly functional, high function. Highly notable function. Common for
humans. Intuition comes into play. Sophisticated enough to be considered
human-level intelligence. Characterized by integration of numerous primary
functions.
7. Very high function. Exceptional human function, such as standout creativity,
imagination, invention, and difficult problem solving and planning. Exceptional
intuition.
8. Genius-level function. Extraordinary human, genius-level function, or
extraordinary AI function.
9. Super-human function. Hypothetical AI that exceeds human-level function.
10. Extreme AI. Virtuous spiral of learning how to learn and using AI to create new
AI systems ever-more capable of learning how to learn and how to teach new AI
systems better ways to learn and teach how to learn.
11. Ray Kurzweil’s Singularity. The ultimate in Extreme AI, combining digital
software and biological systems.
12. God or god-like function. The ultimate in function. Obviously not a realistic
research goal.

What is intelligence?

Unfortunately there is no concise, crisp, and definitive definition for intelligence,
especially at the human level. But a number of elements of intelligence are readily
identified.

When we refer to human intelligence we are referring to the intellectual capacity of a
human being.

See my longer AI paper for a lot more depth, but at a superficial level intelligence
includes a significant variety of mental functions and mental processes:

1. Perception. The senses or sensors. Forming a raw impression of something in
the real world around us.
2. Attention. What to focus on.
3. Recognition. Identifying what is being perceived.
4. Communication. Conveying information or knowledge between two or more
intelligent entities.
5. Processing. Thinking. Working with perceptions and memories.
6. Memory. Remember and recall.
7. Learning. Acquisition of knowledge and know-how.
8. Analysis. Digesting and breaking down more complex matters.
9. Speculation, imagination, and creativity.
10. Synthesis. Putting simpler matters together into a more complex whole.
11. Reasoning. Logic and identifying cause and effect, consequences and
preconditions.
12. Following rules. From recipes to instructions to laws and ethical guidelines.
13. Applying heuristics. Shortcuts that provide most of the benefit for a fraction of
the mental effort.
14. Intuitive leaps.
15. Mathematics. Calculation, solving problems, developing models, proving
theorems.
16. Decision. What to do. Choosing between alternatives.
17. Planning.
18. Volition. Will. Deciding to act. Development of intentions. When to act.
19. Movement. To aid perception or prepare for action. Includes motor control and
coordination. Also movement for its own sake, as in communication, exercise,
self-defense, entertainment, dance, performance, and recreation.
20. Behavior. Carrying out intentions. Action guided by intellectual activity. May
also be guided by non-intellectual drives and instincts.

Communication includes:

1. Natural language.
2. Spoken word.
3. Written word.
4. Gestures. Hand, finger, arm.
5. Facial expressions. Smile, frown.
6. Nonlinguistic vocal expression. Grunts, sighs, giggles, laughter.
7. Body language.
8. Images.
9. Music.
10. Art.
11. Movement.
12. Creation and consumption of knowledge artifacts — letters, notes, books,
stories, movies, music, art.
13. Ability to engage in discourse. Discussion, conversation, inquiry, teaching,
learning, persuasion, negotiation.
14. Discerning and conveying meaning, both superficial and deep.

Recognition includes:

1. Objects
2. Faces
3. Scenes
4. Places
5. Names
6. Voices
7. Activities
8. Identities
9. Intentions
10. Meaning

Only a Strong AI system would possess all or most of these characteristics. A Weak or
Moderate AI system may only possess a few or a relatively narrow subset.

The measure of progress in AI in the coming years will be the pace at which additional
elements from those lists are ticked off, as well as improvements in the level of
competence in these areas of function.

Artificial intelligence is what we don’t know how to do yet

From the dawn of computing, the essential purpose of a computer has been to automate
some task that people normally do. Since such tasks always involve information, some
degree of intelligence has always been required.

When capabilities seem beyond what a computer can easily do, it is easy to ascribe
them to intelligence. As if the tasks we have already automated didn’t require
intelligence.

Once we do manage to figure out how to automate some seemingly difficult task, we
assert that this is artificial intelligence. Or at least until it becomes widely accepted that
computers can obviously do a particular task and do it quite well. Only then will we
gradually and quietly cease using the AI label for those tasks that we no longer have a
need to refer to explicitly.

Maybe the issue is that, having already automated so much of the low-hanging fruit,
we are finally bumping into the knee of the difficulty curve, so that it takes
increasingly intense levels of effort and resources to advance up the intelligence
spectrum, and each advance comes more slowly and therefore seems so much more
spectacular.

Robots, driverless cars, and even intelligent digital assistants certainly seem spectacular
right now, but once they get all the wrinkles worked out and they become common and
mundane rather than rare and special, the urge to label them AI will quickly fade.

Anti-lock brakes, optical character recognition, spelling and grammar checkers and
correctors, and auto-focusing cameras were once quite unusual and exceptional, and
hence noteworthy as AI, but these days they are taken for granted as common features,
no longer warranting the label of AI.

Emotional intelligence

Usually, when people are discussing AI or even Strong AI they are referring to
relatively mechanical operations and calm, dispassionate reasoning, the non-emotional
side of intelligence. Emotional intelligence, in contrast, includes the ability to read the
emotional state of another intelligent entity, whether human or machine.

Although there have been experimental efforts to imbue machines with some sense of
emotive capabilities, that is still more of a science fiction fantasy than current or
imminent reality. It may exist in some relatively narrow or specialized niches, but not in
any broad and general sense.

Yes, someday AI systems will have at least some emotive capabilities, so-called
emotional intelligence, but not in the near future.

My longer paper delves into this matter a bit more.

Autonomy, agency, and assistants


The true test of Strong AI is the ability to have a robot or AI system which operates
completely on its own without any human supervision or control. This is called
autonomy.

Short of full autonomy, agency is the capacity for an AI system to pursue a goal on
behalf of another entity, whether it be a human or some other digital system. An AI
system with agency is free to decide on its own how to achieve its given goal, but is not
free to set goals of its own, other than in a manner that is subsidiary to its assigned goal.

Note: In philosophy and sociology, agency is taken to mean what autonomy means
here, while computer science and AI use this alternative meaning for agency:
acting on behalf of another.

A driverless vehicle would fit the definition of agency but not true, full autonomy. The
vehicle might choose between alternative routes, but wouldn’t have the autonomy to
choose its own destination or to decide whether or not to do as it is told by its owner or
operator.

An assistant has even more limited autonomy and agency, being given a specific task
and instructions and having very little freedom.
An intelligent digital assistant fits this definition for assistant.

For more on autonomy, agency, and assistants see the companion papers:

• What Are Autonomy and Agency?
• What Is an Assistant?
• Intelligent Entities: Principals, Agents, and Assistants
• What Is an Intelligent Digital Assistant?

Although it would be technically feasible to have a truly, fully autonomous AI system,
the human race is not ready to have robots and AI systems running around completely
independent of human control. Check out the Skynet AI computer network in the
Terminator movies or the HAL 9000 AI computer in the 2001: A Space Odyssey movie
— once these AI systems take charge, things don’t end well. Agency or
semi-autonomous operation is the more practical and desirable mode of operation
relative to full autonomy for the foreseeable future.

AI areas and capabilities


This is not an exhaustive or ordered list, but illustrates the range of capabilities pursued
by AI researchers and practitioners:

1. Reasoning
2. Knowledge and knowledge representation
3. Optimization, planning, and scheduling
4. Learning
5. Natural language processing (NLP)
6. Speech recognition and generation
7. Automatic language translation
8. Information extraction
9. Image recognition
10. Computer vision
11. Moving and manipulating objects
12. Robotics
13. Driverless and autonomous vehicles
14. General intelligence
15. Expert systems
16. Machine learning
17. Pattern recognition
18. Theorem proving
19. Fuzzy systems
20. Neural networks
21. Evolutionary computation
22. Intelligent agents
23. Intelligent interfaces
24. Distributed AI
25. Data mining
26. Games (chess, Go, Jeopardy)

Most of these are covered in my longer paper.


Neural networks and deep learning
This paper won’t delve into any detail on neural networks and the related concept of
deep learning, but simply notes that they relate to machine learning: the limited degree
to which AI systems can seem to learn, mostly relatively simple patterns or images and
even some rules, mostly by correlating lots of examples. That stands in contrast with
human children, for whom seeing even a single cat or dog is enough to learn how to
recognize similar creatures.

A little more detail is provided in my longer paper.

Animal AI
We tend to focus on human intelligence when discussing AI, but AI can be applied to
the animal world as well, such as a personable robot dog, a robotic bird, or a robotic
flying insect, although the focus in these cases is far less about higher-level cognitive
abilities such as reasoning, mathematics, and creativity, and more about the physics,
physiology, sensory perception, object recognition, and motor control of biological
systems.

Robotics
Much of robotics revolves around sensors and mechanical motions in the real world,
seeming to have very little to do with any intellectual activity per se, so one could
question how much of robotics is really AI.

Alternatively, one could say that sensors, movement, and activity enable acting on
intellectual interests and intentions, thus meriting coverage under the same umbrella as
AI.

In addition, it can be pointed out that a lot of fine motor control requires a distinct level
of processing that is more characteristic of intelligence than mere rote mechanical
movement.

In summary, the reader has a choice as to how much of robotics to include under the
umbrella of AI:

1. Only those components directly involved in intellectual activity.


2. Also sensors that provide the information needed for intellectual activity.
3. Also fine motor control and use of end effectors. Including grasping delicate
objects and hand-eye coordination.
4. Also any movement which enables pursuit of intellectual interests and
intentions.
5. Any structural elements or resource management needed to support the other
elements of a robotic system.
6. Any other supporting components, subsystems, or infrastructure needed to
support the other elements of a robotic system.
7. All components of a robotic system, provided that the overall system has at least
some minimal intellectual capacity. That’s the point of an AI system. A
mindless, merely mechanical robot with no intelligence would not constitute an
AI system.

In short, it’s not too much of a stretch to include virtually all of robotics under the rubric
of AI — provided there is at least some element of intelligence in the system, although
one may feel free to be more selective in specialized contexts.

Artificial life
Technically, some day scientists may be able to create artificial life forms in the lab that
have many of the qualities of natural biological life, but have possibly rather distinct
chemical bases, structures, and forms. Such artificial life could conceptually be imbued
with some form of intelligence as well — artificial intelligence for artificial life.

But, for now, such artificially intelligent artificial life remains the realm of speculation
and science fiction. Still, it would be very interesting and potentially very useful.
Granted, there might be more than a few ethical considerations.

An exception is virtual reality (VR), where even the laws of physics can be conveniently
ignored, if desired. Traditional chemistry and biology present no limitations to the
creativity of designer worlds in the realm of VR. In fact, one could say that all forms of
life in a VR world are artificial, by definition. One can even imbue otherwise inanimate
objects with any degree of life one chooses.

Ethics
Consult my longer paper for a discussion of ethical considerations for AI.

Historical perspective by John McCarthy


To get a sense of the roots and evolution of AI, consult AI pioneer John McCarthy’s
own response to the question of What is AI?:

• What is AI?
• What is Artificial Intelligence?
• Basic Questions

Can machines think?


An AI system may indeed possess a fraction of the cognitive abilities of a human, but is
that enough to claim that the machine is indeed thinking?

I have some comments and questions on that topic in a companion paper, Can
Machines Think?

Diving even deeper, I have a longer list of questions designed to spur thought on this
matter in another companion paper, Questions about Thinking.
What’s the IQ of an AI?
Next question. Seriously, there is no clarity as to how the human concept of IQ
could be adapted to machines. Some people have ideas about how to do it, but there is
no consensus. It’s almost moot until we actually achieve Strong AI or something
fairly close.

Besides, given the malleable nature of software, the code of an AI system could be
quickly revised to adapt to whatever new test came along so that an AI would score
significantly higher than if the code hadn’t been tuned to the test.

But that’s the nature of AI today — it is relatively easy to identify specific and
relatively narrow niche cases and code up heuristics that work fairly well for those
narrow niches, making the software appear quite intelligent, even while it is far more
difficult or even near impossible with today’s technology to achieve true, full, Strong AI
which works equally well for all niches.

Still, it would be good to have a more objective measure of the level of intelligence of
an AI than simply weak or strong, or even my moderate level or my spectrum of
functional behavior and levels of competence.

Turing test
In theory, the so-called Turing test (also called The Imitation Game) can detect whether
a machine or AI is able to interact in such a human-like manner that no human observer
could tell that it was a machine by asking a finite set of questions.

There is some significant dispute about both whether the test is indeed a valid binary
test of intelligence (always arriving at the correct conclusion as to whether the test
subject has human-level intelligence or not) and whether claims to have passed the test
are truly valid.

The real bottom line is that as a thought experiment the test highlights the great
difficulty of definitively defining human-level intelligence in any deeply objective and
measurable sense.

That’s really only an issue for defining and testing for Strong AI. Weak AI has no such
strong testing requirements — even if only a fraction of human-level capability, or
seeming only partially human-like, that’s good enough for many applications.

More details can be found in my longer paper.

And so much more


There is a lot more to AI than offered here. My longer paper — Untangling the
Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning —
dives down a few more levels for those who want more than is covered here but aren’t
prepared to invest the time, energy, attention, and money in a shelf full of dense
textbooks and academic papers.
For more of my writings on artificial intelligence, see List of My Artificial Intelligence
(AI) Papers.

What Are Autonomy and Agency?

Jack Krupansky

Dec 4, 2017

When considering robots, intelligent agents, and intelligent digital assistants, questions
of autonomy and agency arise. This informal paper attempts to define these key
concepts more clearly and to explore what they are, how they differ, and how they are
related.

Synthesized definitions for autonomy and agency will be provided after discussing all
the relevant aspects of these concepts.

A companion paper, Intelligent Entities: Principals, Agents, and Assistants, will
build on these concepts, but essential concepts of intelligent entities, principals, agents,
and assistants will be introduced here as well, since these two sets of concepts are
closely related and intertwined. You can’t have one without the other.

Note that this paper is more focused on people, robots, and software agents rather than
on countries or autonomous regions within countries (e.g., Catalonia in Spain or
Kurdistan in Syria and Iraq), although the basic definition of autonomy still applies to
those cases as well.

Also note that sociology, philosophy, and agent-based modeling and simulation use the
terms agency and agent as the terms autonomy and autonomous entity are used in this
paper (freedom of choice and action, unconstrained by any other entity).

For quick reference, see the section entitled Definitions of autonomy and agency for
the final definitions of these terms.

Dictionary definitions
A later section of this paper will come up with synthesized definitions for autonomy and
agency that are especially relevant to discussion of intelligent agents and intelligent
digital assistants, but the starting point is the traditional dictionary definitions of these
terms.

Definition entries from Merriam-Webster definition for autonomy:


1. self-directing freedom and especially moral independence
2. the state of existing or acting separately from others
3. the quality or state of being independent, free, and self-directing
4. the quality or state of being self-governing

Definition entries from Merriam-Webster definition for agency:

1. the relationship between a principal and that person’s agent
2. the capacity, condition, or state of acting or of exerting power — operation
3. a person or thing through which power is exerted or an end is achieved —
instrumentality
4. a person or thing through which power is used or something is achieved
5. a consensual fiduciary relationship in which one party acts on behalf of and
under the control of another in dealing with third parties
6. the power of one in a consensual fiduciary relationship to act on behalf of
another
7. general agency — an agency in which the agent is authorized to perform on
behalf of the principal in all matters in furtherance of a particular business of
the principal
8. special agency — an agency in which the agent is authorized to perform only
specified acts or to act only in a specified transaction
9. the law concerned with the relationship of a principal and an agent

Three related terms are entity, principal, and agent.

Definition entries from Merriam-Webster definition for entity:

1. independent, separate, or self-contained existence
2. something that has separate and distinct existence and objective or conceptual
reality

There are other meanings for entity, but those are the senses relevant to this paper.

Definition entries from Merriam-Webster definition for principal:

1. a person who has controlling authority or is in a leading position
2. a chief or head man or woman
3. the chief executive officer of an educational institution
4. one who engages another to act as an agent subject to general control and
instruction
5. the person from whom an agent’s authority derives
6. the chief or an actual participant in a crime
7. the person primarily or ultimately liable on a legal obligation
8. a leading performer — star

Definition entries from Merriam-Webster definition for agent:

1. one that acts or exerts power
2. something that produces or is capable of producing an effect
3. a means or instrument by which a guiding intelligence achieves a result
4. one who is authorized to act for or in the place of another
5. a computer application designed to automate certain tasks (such as gathering
information online)
6. a person who does business for another person
7. a person who acts on behalf of another
8. a person or thing that causes something to happen
9. something that produces an effect
10. a person who acts or does business for another
11. someone or something that acts or exerts power
12. a moving force in achieving some result
13. a person guided or instigated by another in some action
14. a person or entity (as an employee or independent contractor) authorized to act
on behalf of and under the control of another in dealing with third parties

Intelligent entities
Autonomy and agency are all about intelligent entities and their freedom to make
decisions and take actions, and their authority, responsibilities, and obligations.

Generally, an entity is any person, place, or thing. In the context of autonomy and
agency, an intelligent entity is a person or thing which is capable of action or operation
and at least some fraction of perception and cognition — thought and reason, coupled
with memory and knowledge.

More specifically, an intelligent entity has some sense of intelligence and judgment, and
is capable of making decisions and pursuing a course of action.

Whether or not an intelligent entity has autonomy or agency is not given:

1. Some entities may have autonomy, but not agency.
2. Some entities may have agency but not autonomy.
3. Some entities may have both autonomy and agency.
4. Some entities may have neither agency nor autonomy.

Computational entities
An intelligent entity can be a person or a machine or software running on a machine.

The latter are referred to as computational entities or digital entities. They include:

• Robots
• Driverless vehicles
• Smart appliances
• Software agents
• Intelligent agents
• Digital assistants
• Intelligent digital assistants
• Apps
• Web services
How much autonomy or agency a given computational entity has will vary greatly, at
the discretion of the people who develop and deploy such entities, based on needs,
requirements, desires, preferences, and available resources and costs.

Sometimes people want more control over their machines, and sometimes they value
greater autonomy, agency, or automation to free themselves from being concerned over
details.

Entities
As a convenience and for conciseness, this paper will sometimes use the shorter term
entity as implicitly referring to an intelligent entity, either a person or a computational
entity.

Actions and operations


Definitions:

1. Action. Something that can be done by an entity. An observable effect that can
be caused in the environment.
2. Operation. Generally a synonym for action. Alternatively, an action that
persists for some period of time.

For example, flipping a switch to turn on a machine is an action, while the ongoing
operation of the machine is an operation. The flipping of the switch was an operation
too, only of a very short duration.

If a machine would operate only while a button was depressed, the pressing and holding
of the button as well as the operation of the machine would both be actions and
operations.

Tasks, objectives, purposes, and goals


Definitions:

1. Task. One or more actions or operations intended to achieve some purpose.
2. Purpose. The reason or desired intent for something.
3. Goal. A destination or state of affairs that is desired or intended, but without a
plan for a set of tasks to achieve it.
4. Objective. Synonym for goal.
5. Subgoal. A portion of a larger goal. A goal can be decomposed into any number
of subgoals.
6. Motivation. The rationale for pursuing a particular objective or goal.
7. Intentions. Desired objective or goal. What is desired, not why or how.

Principals and agents


A separate paper, Intelligent Entities: Principals, Agents, and Assistants, will delve
deeper into principals and agents, but here are definitions for the purposes of this paper:

1. Principal. An intelligent entity which has the will and desire to formulate an
objective or goal.
2. Agent. An intelligent entity which has the capacity and resources to pursue and
achieve an objective or goal on behalf of another intelligent entity, its principal.

A given entity may be either:

1. A principal but not an agent. Does all actions itself, without any delegation to
agents.
2. An agent but not a principal.
3. Both a principal and an agent. A principal for subgoals.
4. Neither a principal nor an agent. Possibly an assistant for specific tasks, but not
any goals.

Delegation of responsibility and authority


The essence of the relationship between principal and agent is delegation. The principal
may delegate responsibility and possibly even authority for one or more objectives or
goals to one or more agents.

Principal as its own agent


Some intelligent entities may act as both principal and agent, doing their own work
rather than delegating it to one or more agents.

Agent as principal for subgoals


For more complex objectives, an agent may decompose a larger goal into subgoals, with
each subgoal delegated to yet another agent for whom this agent acts as principal.
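The delegation chain described above, including an agent acting as principal for subgoals, can be sketched roughly like this (a toy model; the goal strings and the splitting rule are purely illustrative):

```python
# Sketch: an agent pursues a goal set by its principal. If the goal
# decomposes into subgoals, the agent acts as principal for each
# subgoal, delegating it to another agent.
class Agent:
    def __init__(self, name):
        self.name = name

    def pursue(self, goal):
        subgoals = self.decompose(goal)
        if not subgoals:
            # A leaf goal: the agent performs it directly.
            return [f"{self.name} did '{goal}'"]
        results = []
        for sub in subgoals:
            # For this subgoal, the current agent is the principal
            # and a freshly enlisted agent is its agent.
            results.extend(Agent(f"{self.name}/sub").pursue(sub))
        return results

    def decompose(self, goal):
        # Trivial decomposition rule for illustration: a goal written
        # as "a + b" splits into subgoals "a" and "b".
        return [g.strip() for g in goal.split("+")] if "+" in goal else []

# The principal sets the goal; the agent decides how to achieve it.
results = Agent("agent").pursue("book flight + reserve hotel")
print(results)
```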

Authority
The authority of an intelligent entity is the set of actions that the entity is permitted to
take.

A principal would have unlimited authority.

An agent would have limited authority related to the goal(s) that the principal is
authorizing the agent to pursue.

In the real world, many principals are in fact agents since they act on behalf of other
principals. A company has a board of directors, investors, and shareholders. Robots
have owners.

Responsibility, expectation, and obligation


The responsibility of an intelligent entity is the set of expectations and obligations of the
entity in terms of actions.

A principal has no responsibility, expectations, or obligations per se. A principal may
act as it sees fit.

An agent has responsibility, expectations, and obligations as set for it by its principal.
An agent may act as it sees fit, provided that its actions satisfy any limitations or
constraints set by its principal.

In the real world, many principals are in fact agents since they act on behalf of other
principals. A company has a board of directors, investors, and shareholders. Robots
have owners. So a company or robot may have responsibilities, expectations, and
obligations set by somebody else.

General obligations
Regardless of obligations which result from autonomy and agency, all intelligent
entities will have general obligations which spring from:

• Physics. Obey the laws of physics. Reality. The real world. Natural law. For
example, gravity, entropy, and the capacity of batteries.
• Limited resources and their cost. For examples, the availability and cost of
electricity, storage, computing power, and network bandwidth.
• Laws. Obey the laws of man. Including regulations and other formalized rules.
• Ethics. Adhere to ethical codes of conduct. Including professional and industry
codes of conduct.

Ethics
Just to reemphasize from the previous section: intelligent entities will have to
adhere to ethical considerations in the real world.

Liability
A principal may be exposed to liability to the extent that it enlists the aid of an agent
and that agent causes harm or loss or violates laws or rules while acting on behalf of the
principal.

Requested goals might have unintended consequences which incur unexpected liability.

An agent may be exposed to liability if it naively follows the guidance of its principal
without carefully reviewing whether specified goals, expectations, or obligations, might
cause harm or loss or violate laws or rules when carried out.

Elements of a goal
A goal must be:
1. Formulated. Clearly stated.
2. Planned. A strategy developed. A plan developed. Resources allocated. Tasks
identified.
3. Pursued. Individual tasks performed. Decisions may need to be made or revised
and the original plan adapted based on results of individual tasks.
4. Achieved or not achieved. The results or lack thereof.
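The four elements above can be sketched as a minimal state progression (a toy model with hypothetical names, not a prescription for real systems):

```python
# Sketch: the four elements of a goal as a simple state progression.
from enum import Enum

class GoalState(Enum):
    FORMULATED = 1    # clearly stated
    PLANNED = 2       # strategy, plan, resources, tasks identified
    PURSUED = 3       # individual tasks performed, plan adapted
    ACHIEVED = 4      # the results
    NOT_ACHIEVED = 5  # the lack thereof

class Goal:
    def __init__(self, statement):
        self.statement = statement
        self.state = GoalState.FORMULATED
        self.tasks = []

    def plan(self, tasks):
        self.tasks = list(tasks)
        self.state = GoalState.PLANNED

    def pursue(self, perform):
        self.state = GoalState.PURSUED
        # Decisions may be made or revised as tasks are performed;
        # here we simply check whether every task succeeded.
        ok = all(perform(t) for t in self.tasks)
        self.state = GoalState.ACHIEVED if ok else GoalState.NOT_ACHIEVED
        return self.state

goal = Goal("get to the airport")
goal.plan(["hail car", "ride", "arrive"])
final = goal.pursue(lambda task: True)  # pretend all tasks succeed
print(final)
```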

Relationship between principal and agent


Power, action, control, and responsibility are involved in formulating a plan for setting
and pursuing objectives and goals.

1. Power. The principal has the power to set the objectives and goals to be
pursued. The agent has only the delegated power to select tasks to achieve the
objectives and goals set by the principal and to pursue them through actions, but
no power to change the objectives or goals themselves.
2. Action. The agent is responsible for performing the actions or tasks needed to
achieve the objectives and goals set by the principal. The agent is also
responsible for deciding what tasks and actions must be performed to achieve
the objectives and goals, and for coming up with a plan for performing them.
3. Control. The principal controls what objectives and goals are to be pursued.
The agent controls what tasks and actions must be performed to achieve the
objectives and goals and how to perform them.

The principal is in charge. The principal is the boss.

The agent is subservient to the principal.

The principal delegates to agents or assistants.

Contracts
Generally there is a contract of some form between a principal and its agent, which
clearly sets out the objectives and goals, responsibilities, expectations, and obligations
of both parties, both the principal and the agent.

The contract details what is expected of the agent, what the agent is expected to deliver,
what the agent needs to pursue the specified goals, including resources, and what
compensation the agent will receive in exchange for achieving the goals.

The contract also details any limitations or restrictions that will apply to the agent and
its work.

The contract authorizes and empowers the agent.

Contracts are needed both for human entities and for computational entities.

Capacity for agency


There are really two distinct senses of agency:

1. The capacity to act or exert power.
2. The relationship between a principal and an agent that empowers the agent to
operate on behalf of the principal.

The latter requires that there is a principal involved, doing the empowerment, the
authorization to act on its behalf.

The former can exist even if there is no principal present. An intelligent entity can act
on its own interests, on its own behalf, being its own principal. An entity can be self-
empowering. That’s what it means for an entity to have agency in a traditional,
sociological or philosophical sense.

The first sense applies in both cases: when a principal is present as an external
entity, and when no principal is present.

In the context of intelligent agents and intelligent digital assistants, agency usually
refers to the latter sense, that the agent is acting on behalf of the principal, which is
commonly a human user, but may also be some other computational entity, such as
another intelligent agent or a robot.

Assistants
A separate companion paper, Intelligent Entities: Principals, Agents, and Assistants,
will introduce the concept of an assistant, which is quite similar to an agent in the sense
that it is capable of performing the tasks needed to achieve goals, but can only perform
specific tasks as dictated by its principal without any sense of any larger goal or
objective that the task is needed to achieve, and with much less room for discretion as to
how to perform the tasks.

An assistant has limited agency in that it performs tasks on behalf of a principal but it
lacks the authority or capacity to decide which tasks to perform in the context of a goal
or objective.

Full autonomy of a principal


A principal has full autonomy or complete autonomy, the full freedom to formulate,
choose, and pursue goals and objectives.

An agent does not have such full autonomy.

Limited autonomy or partial autonomy of agents


Generally speaking, agents do not have autonomy in the same sense as the full
autonomy of a principal, but agents do have limited autonomy or partial autonomy in
the sense that they are free to choose what tasks to perform to achieve the goals or
objectives chosen by the principal, and how to pursue those tasks.
Assistants have no autonomy
Unlike principals and agents, assistants have no autonomy whatsoever. They don’t get
to choose anything. Their only job is to perform the tasks given to them by an agent or
their principal.

Okay, technically, assistants do have a modest degree of autonomy, but very modest
and very minimal. Any system that doesn’t require a principal to be directly controlling
every tiny movement by definition is delegating at least a small amount of autonomy.
But not enough for the term autonomy to have any significant relevance to the freedom
of action of such a system. That’s the point of distinguishing assistants from agents —
to indicate the almost complete lack of autonomy.

Assistants have responsibility but no authority


A principal can delegate to both agents and assistants. Both will have responsibilities,
but only agents have even a limited sense of authority, the authority to decide how to
turn an objective or goal into specific tasks or actions.

An assistant has no authority, simply the responsibility for a specified task or action, as
specified, with little or no room for discretion or decision.
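A rough sketch of that contrast (all goal and task names are hypothetical): the agent has the authority to turn a goal into tasks of its own choosing, while the assistant simply performs the one task it is given.

```python
# Sketch: agent vs. assistant. The agent decides which tasks will
# achieve a goal; the assistant performs a dictated task as specified.
def agent_handle(goal):
    # Limited authority: the agent chooses the tasks for the goal.
    task_plans = {
        "clean kitchen": ["clear counters", "wash dishes", "mop floor"],
    }
    return [f"did: {t}" for t in task_plans.get(goal, [])]

def assistant_handle(task):
    # Responsibility only: one specified task, performed as specified,
    # with no room for discretion or decision.
    return f"did: {task}"

print(agent_handle("clean kitchen"))
print(assistant_handle("wash dishes"))
```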

Control
A principal always has control over agents to which it has delegated responsibility for
goals, and control over assistants to which it has assigned specific tasks.

A principal could change, revise, or even cancel goals, and agents would be obligated
to comply with such instructions.

A principal can at any time request a status report on progress that an agent is making
on a goal or objective.

Robots
Superficially, robots would seem to be fully autonomous, but in reality they have the
more limited, partial autonomy of agents. After all, robots are owned and
work on behalf of their owners, performing tasks and pursuing goals as their owners see
fit and dictate.

That said, as with an agent, a robot can be granted a significant level of autonomy and
be given fairly open-ended goals, so that they could actually be fairly autonomous even
if not absolutely fully autonomous.

Robots and computers out of control with full autonomy?
That would make for a fairly scary science fiction story, a world in which robots and
computers could be granted complete autonomy and not have to answer to anybody. But
I wouldn’t expect that reality anytime soon.

But it’s also possible that someone might mistakenly grant a robot complete autonomy
and it might be difficult to regain control over the robot. Although, it would be possible
to make it illegal to grant a robot full autonomy.

The HAL computer in the 2001: A Space Odyssey movie and the Skynet AI network of
computers and machines in the Terminator movies were in fact machines which
somehow gained full autonomy — with quite scary consequences.

It would be interesting to see a science fiction movie in which fully autonomous robots
have a strictly benign and benevolent sense of autonomous responsibility. But maybe
that violates the strict definition of autonomy — if they act as if to serve people, then
they aren’t truly autonomous.

Maybe robots would need to exist in colonies or countries or planets or space stations of
their own, with full autonomy there, rather than coexisting within our human societies.
Robot societies and human societies could coexist separately and could interact, but
respecting the autonomy of each other, with neither in charge or dominating the other.
Maybe.

Mission and objectives


A mission is a larger context than discrete goals. Think of the mission of an enterprise
or organization. Its purpose. Its market or area of interest.

The mission will break down into objectives, which will break down into discrete goals.

The enterprise or organization may periodically review and adjust, revise, or even
radically change its mission and objectives. At its own discretion. That’s autonomy.

An agent is given a discrete goal to pursue. A small part of a larger mission and its
objectives. An agent does indeed have a mission and objective, but they are set by its
principal. An agent has no control over its mission or objective.

A principal has a larger mission and associated objectives for which discrete goals are
periodically identified and assigned to discrete agents. A principal sets its own mission
and objectives.

For more discussion of mission and objectives, see the companion paper, Intelligent
Entities: Principals, Agents, and Assistants.

Mission and operational autonomy


There are two categorical distinctions concerning the autonomy of an entity:
1. Mission autonomy. The entity can choose and control its own missions and
objectives rather than be constrained to pursue and follow a mission or objective
set for it by another entity, a principal. This is closer to true autonomy.
2. Operational autonomy. The entity can decide for itself how to accomplish
operational requirements. This is independent of control of the overall mission
and objectives. This is characteristic of an agent, although an autonomous entity
would tend to also have operational autonomy as well.

So:

1. Principals have mission autonomy. And generally operational autonomy as well.
2. Agents have operational autonomy. But no mission autonomy.
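That division can be captured as two independent flags (a toy classification, nothing more; the labels are my own shorthand from this paper):

```python
# Sketch: mission autonomy and operational autonomy as two
# independent properties of an entity.
def describe(entity):
    kinds = {
        (True, True): "principal (full autonomy)",
        (False, True): "agent (operational autonomy only)",
        (False, False): "assistant (no meaningful autonomy)",
    }
    key = (entity["mission_autonomy"], entity["operational_autonomy"])
    return kinds[key]

print(describe({"mission_autonomy": False, "operational_autonomy": True}))
# → agent (operational autonomy only)
```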

Independence — mission and operational


Autonomy is roughly a direct synonym for independence.

We can speak of two categorical distinctions concerning the independence of an entity:

1. Mission independence. The entity can choose and control its own missions
rather than be constrained to pursue and follow a mission set for it by another
entity, a principal. This is closer to true autonomy.
2. Operational independence. The entity can decide for itself how to accomplish
operational requirements. This is independent of control of the overall mission.
This is characteristic of an agent, although an autonomous entity would tend to
also have operational independence as well.

So:

1. Principals have mission independence. And generally operational independence
as well.
2. Agents have operational independence.

Luck and Mark d’Inverno: A Formal Framework for Agency and Autonomy
Michael Luck and Mark d’Inverno published a paper back in 1995 entitled A Formal
Framework for Agency and Autonomy which examined agency and autonomy as this
paper does but focused strictly on software agents and multi-agent systems in particular:

• Abstract: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.4431
• PDF: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.25.8941&rep=rep1&type=pdf

The abstract:

• With the recent rapid growth of interest in MultiAgent Systems, both in artificial
intelligence and software engineering, has come an associated difficulty
concerning basic terms and concepts. In particular, the terms agency and
autonomy are used with increasing frequency to denote different notions with
different connotations. In this paper we lay the foundations for a principled
theory of agency and autonomy, and specify the relationship between them.
Using the Z specification language, we describe a three-tiered hierarchy
comprising objects, agents and autonomous agents where agents are viewed as
objects with goals, and autonomous agents are agents with motivations.

The relevant phrases:

• agents are viewed as objects with goals
• autonomous agents are agents with motivations
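Loosely paraphrasing the cited three-tiered hierarchy as a class sketch (my informal rendering, not the paper's Z specification; the attribute names and the goal-generation rule are my own):

```python
# Informal sketch of the Luck/d'Inverno three-tiered hierarchy:
# objects have attributes and actions; agents are objects with goals;
# autonomous agents are agents with motivations, from which goals arise.
class Object:
    def __init__(self, attributes, actions):
        self.attributes = attributes
        self.actions = actions

class Agent(Object):
    def __init__(self, attributes, actions, goals):
        super().__init__(attributes, actions)
        self.goals = goals  # set for it by some principal

class AutonomousAgent(Agent):
    def __init__(self, attributes, actions, motivations):
        # An autonomous agent starts with no assigned goals;
        # it generates its own from its motivations.
        super().__init__(attributes, actions, goals=[])
        self.motivations = motivations

    def generate_goal(self):
        # Toy rule: each motivation gives rise to one goal.
        for m in self.motivations:
            self.goals.append(f"satisfy {m}")
        return self.goals

rock = Object(attributes={"mass": 1.0}, actions=[])
vacuum = Agent({"battery": 0.8}, ["move", "suck"], goals=["clean room"])
robot = AutonomousAgent({"battery": 0.9}, ["move", "plan"], ["curiosity"])
print(robot.generate_goal())  # goals arise from motivations
```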

I cite this reference here neither to blindly accept it nor to quibble with it, but simply to
provide a published foundation which readers can reference.

That said, I’ll offer a couple of relatively minor quibbles, more along the lines of how to
define terms:

1. I’d prefer to use the term entity or even intelligent entity rather than object. To
my mind, objects include trees, rocks, and mechanical machines, but generally
do not include people and animals per se. Technically, intelligent entities are
indeed objects, but the term object doesn’t capture the essential meaning of an
intelligent entity.
2. The cited paper defines an object as an entity that comprises a set of actions and
a set of attributes.
3. This notion of having a set of actions, or being capable of acting, is a bit more
than the traditional, real-world, non-computer-science sense of the concept of an
object.
4. A machine is capable of acting in some sense, but unless it has some sort of
robotic brain, it has no way of sensing its environment or making decisions
about how to interact with it. A sense of agency is needed.
5. A washing machine or refrigerator would fit the meaning of an object in the
sense of the cited paper, although I would refer to them as assistants rather than
mere objects in a real-world sense. They have no agency, with no ability to
choose how to pursue a goal rather than to blindly perform a specific task. That
ability to perform tasks does fit the definition of an assistant used in this paper.
6. A driverless car would be a good fit for what I would call an intelligent entity
and would fit the concept of agent used in the cited paper. You tell the car where
you want to go and it figures out the rest, coming up with a plan and figuring out
what tasks are needed to get you to your objective.
7. Said driverless car would superficially seem to have a sense of autonomy, in that
it can move around without a person at the controls, but it lacks the ability to set
its own goals. It can pursue and follow goals given to it, but not set them. In that
sense, both in my terms and in those of the cited paper, said driverless car does
not have autonomy.
8. Driverless cars did not exist back in 1995, but I think even now the authors of
the cited paper would likely agree that a driverless car lacks the motivation or
ability to set goals that is required to meet the definition for autonomy.
9. As the paper would seem to agree, goals are set from motivations.
10. As the paper would seem to agree, being an agent does not automatically confer
the presence of motivations. Agents don’t need to be motivated. They just need
to be able to pursue and achieve goals.
11. In the context of software agents, which was indeed the context of that 1995
paper, I’d refer to degree of autonomy, meaning the extent to which the agent is
free to make its own choices, as opposed to the degree to which the agent’s
principal has already made choices and has decided to constrain the choices or
autonomy of the agent.
12. An upcoming companion paper, Intelligent Entities: Principals, Agents, and
Assistants, will explore this notion of principal with respect to autonomy.
13. The cited paper uses the term motivation to essentially mean that the agent has
the ability to set its own goals.
14. I agree with the cited paper that agents are all about goals.
15. The open issue is who sets the goals for a given agent.
16. In my terms, it is the principal which sets goals. That could be a person, or some
piece of software or even a robot. And this paper does allow for the prospect of
subgoals so that an agent can act as principal for a subgoal.
17. In the terms of the cited paper, an autonomous agent would correspond to my
concept of principal.
18. A key difference between the terminology of the cited paper and of this paper is
that this paper first seeks to ground the terms in the real world of human entities
or people before extending the terms and concepts to the world of machines and
software.

Motivation
Motivation is a greater factor in autonomy, but can be relevant to agency as well.

A principal should clearly have some good reason for its choices in setting objectives
and goals. Its motivation.

An agent might have some minor motivation for its choices as to what tasks to perform
to pursue and achieve the goals given to the agent by its principal, but those minor
motivations pale in significance to the larger motivation for why the goal should be
pursued at all, something only the principal can know.

The contract between principal and agent may likely express the motivation for each
goal or objective, although that expression may have dubious value to the agent.

One exception is when the specification for the objectives might be technically weak
and too vague, incomplete, or ambiguous, leaving the agent with the job of deducing the
full specification of objectives by parsing the motivation. That’s not the best approach,
but may be the only viable or sane approach.

Sociology and philosophy


The concept of agency takes on a different meaning in sociology and philosophy — it is
used as a synonym for what is defined as autonomy in this paper and in the context of
robots, intelligent agents, and intelligent digital assistants.
The relevant dictionary sense is:

• the capacity, condition, or state of acting or of exerting power

With no mention of any principal or other external intelligent entity setting objectives
for the agent to follow.

That would be more compatible with the sense of principal as agent used in this paper,
where the agent is indeed setting its own objectives and goals.

That’s an unfortunate ambiguity, but that’s the nature of natural language.

For more information on these usages, consult the Wikipedia:

• Wikipedia article on Agency (sociology).


• Wikipedia article on Agency (philosophy).

Agent-based modeling (ABM) and agent-based simulation (ABS)
One other field in which agency is defined as being synonymous with autonomy is
agent-based modeling (ABM), also known as agent-based simulation (ABS), in which
agents have a distinct sense of independence, autonomy. These agents are more like the
principals defined in this paper.

ABM/ABS is a hybrid field, a mix of computer science and social science, though not
limited to either. In fact, it can be applied to any field where there are discrete,
autonomous entities that interact and can have some sort of aggregated effect.
ABM/ABS is more of a tool or method than a true field per se.

For all intents and purposes, ABM/ABS could be considered part of social science and
sociology.

Definitions
As promised, here are the synthesized definitions of autonomy and agency as used in
this paper:

1. Autonomy. Degree to which an intelligent entity can set goals, make decisions,
and take actions without the approval of any other intelligent entity. The extent
to which an entity is free to exert its own will, independent of other entities. Can
range from the full autonomy of a principal to the limited autonomy or partial
autonomy of an agent to no autonomy for an assistant. The entity can decide
whether to take action itself or delegate responsibility for specific goals or
specific tasks to other intelligent entities, such as agents and assistants.
2. Agency. Ability of an intelligent entity, an agent, to plan, make decisions, and
take actions or perform tasks in pursuit of objectives and goals provided by a
principal. The agent has limited autonomy or partial autonomy to decide how to
pursue objectives and goals specified by its principal. A contract between
principal and agent specifies the objectives and goals to be pursued, authorizing
action and obligations, but leaving it to the agent to decide how to plan, define,
and perform tasks and actions. The agent may decompose given objectives and
goals into subgoals which it can delegate to other agents for whom this agent is
their principal. Note: In sociology and philosophy agency refers to autonomy or
the extent to which an entity is free to exert its own will, independent of other
entities.
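To make the distinction between the two definitions concrete, they can be modeled in code. This is a minimal, hypothetical sketch of my own (the paper prescribes no implementation); it distinguishes who may set goals from who merely pursues them:

```python
# Minimal, hypothetical model of the definitions above: a principal has
# autonomy (it sets its own goals); an agent has agency (it decides how to
# pursue goals specified by its principal, but cannot set its own).
# All names here are illustrative, not prescribed by this paper.

class Entity:
    def __init__(self, name, principal=None):
        self.name = name
        self.principal = principal  # None means no one sets goals for us
        self.goals = []

    @property
    def has_autonomy(self):
        # full autonomy: no other entity approves this entity's goals
        return self.principal is None

    def set_goal(self, goal):
        if not self.has_autonomy:
            raise PermissionError(f"{self.name}'s goals are set by its principal")
        self.goals.append(goal)

    def delegate(self, goal, agent):
        # the principal specifies the goal; the agent decides how to pursue it
        agent.goals.append(goal)

owner = Entity("owner")                # principal: full autonomy
car = Entity("driverless car", owner)  # agent: agency but not autonomy

owner.set_goal("get to the airport")
owner.delegate("get to the airport", car)
```

The driverless-car example from earlier falls out naturally: the car can accept and pursue a delegated goal, but any attempt to set its own goal fails, which is precisely the sense in which it lacks autonomy.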

Some derived terms:

1. Degree of autonomy. Quantification of how much autonomy an entity has.


2. Limited autonomy. Partial autonomy. Some degree of autonomy short of full
autonomy.
3. Weak autonomy. Entity with limited autonomy, constrained by goals set by
other entities. Roughly comparable to agency.
4. Autonomous intelligent entity. Intelligent entity that has some degree of
autonomy.
5. Autonomous entity. Synonym for autonomous intelligent entity. Or any entity
which acts autonomously, even if not strictly intelligent.
6. Full autonomy. Complete autonomy. Absolute autonomy. True autonomy.
Unlimited, unrestricted autonomy. No other entity is able to exert any
meaningful control over such an autonomous entity.
7. Mission autonomy. The entity can choose and control its own missions rather
than be constrained to pursue and follow a mission set for it by another entity, a
principal. This is closer to true autonomy.
8. Operational autonomy. The entity can decide for itself how to accomplish
operational requirements. This is independent of control of the overall mission.
This is characteristic of an agent, although an autonomous entity would tend to
also have operational autonomy as well.
9. Limited agency. Some degree of agency short of full agency. Some degree of
autonomy short of full autonomy.
10. Full agency. Unlimited, unrestricted agency, limited only by the contract
between the agent and its principal. Still only a limited degree of autonomy,
constrained by its contract with its principal.
11. Degree of agency. Quantification of how much agency an entity has.
12. Agent. Any entity with some degree of agency, but lacking full autonomy.
13. Autonomous agent. Improper term, in the view of this paper. An agent would,
by definition, not be fully autonomous. Nonetheless, the term is somewhat
commonly used in computer science to indicate an agent with a relatively high
degree of autonomy.

These definitions should apply equally well to human and computational entities, or at
least be reasonably compatible between those two domains.

Terms used within those definitions are defined elsewhere in this paper, including:

• Entity
• Intelligent entity
• Principal
• Agent
• Assistant
• Objective
• Goal
• Task
• Action
• Subgoal
• Responsibility
• Authority
• Delegation
• Contract

Autonomous systems
Generally and loosely speaking, people speak of autonomous systems, whether it be a
robot, a software application, a satellite, a deep space probe, or a military weapon.

This is not meant to imply that such systems are fully, completely, and absolutely
autonomous, but simply that they have a high degree of autonomy. Or what we call
limited autonomy or partial autonomy in this paper.

And to draw a contrast to directly or remotely controlled systems such as drones where
every tiny movement is controlled by a human operator.

Lethal autonomous weapons (LAWs)


A very special case is what is called a lethal autonomous weapon or LAW. These
weapons are of significant ethical concern since they largely take human judgment,
human discretion, and human compassion out of the equation.

As noted for autonomous systems in general, even so-called lethal autonomous weapons
will not typically be fully, completely, and absolutely autonomous.

They may have a significantly higher degree of autonomy, but not true, full autonomy.

There is some significant effort to assure that at least some minimal degree of human
interaction occurs, what they call meaningful human control. That’s still a somewhat
vague term, but the concept is still in the early stages.

Even an automatic rifle or machine gun has a trigger, causing it to stop firing when a
person decides to stop holding the trigger. That’s meaningful human control.

Even before we start getting heavily into artificial intelligence (AI), there are already
relatively autonomous systems such as the Phalanx CIWS close-in weapon system gun
for defense against anti-ship missiles. It is fully automated, but with oversight by a
human operator. It can automatically detect, track, and fire on incoming missiles, but
the operator can still turn it off.
A big ethical concern for lethal autonomous weapons is the question of accountability
and responsibility. Who is responsible when an innocent is harmed by such an
autonomous weapon when there is no person pulling the trigger?

A practical, but still ethical, concern is the technical capability of discriminating
between combatants and civilians. Granted, even people have difficulty discriminating
sometimes. Technical capabilities are evolving. They may still be too primitive today by
today’s standards, but further evolution is likely. In fact, there may come a day when
autonomous systems can do a much better job of discrimination than human operators.

The only truly fully autonomous lethal weapon I know of is the minefield. Granted it
has no AI or even any digital automation, and the individual mines are not connected,
but collectively it acts as a system and is fully, absolutely autonomous. It offers both the
best and worst of military and ethical qualities. It has no discrimination. It is fully
autonomous. It is quite reliable. It is quite lethal. It is quite humane. It has absolutely no
compassion. It has no accountability. No responsibility. And no human operator can
even turn it off other than by laboriously and dangerously dismantling the system one
mine at a time. Somebody put the mines there, but who?

Now, take that rather simple conception of a minefield and layer on robotics, digital
automation, and even just a little AI, and then you have mountains of technical,
logistical, and ethical issues. That’s when people start talking about killer robots and
swarms.

Sovereignty
Another related term which gets used in some contexts as a rough synonym for both
autonomy and independence is sovereignty.

From the Merriam-Webster definition of sovereignty:

• freedom from external control

One can refer to an entity as being sovereign if it is autonomous or independent.

But generally, it won’t be necessary to refer to sovereignty rather than autonomy.

Summary
To recap:

Autonomy refers to the freedom of an intelligent entity to set its own objectives and
goals and pursue them, either by acting directly itself or delegating goals to agents.

An autonomous intelligent entity (principal) controls its own destiny.

Agency refers to the freedom of an intelligent entity (agent) to pursue goals delegated to
it by its principal as it sees fit, although subject to expectations and obligations
specified by its principal in the contract which governs their relationship.
An agent owes its allegiance to its principal.

Note that in sociology, philosophy, and agent-based modeling and simulation, the terms
agency and agent are used and defined as the terms autonomy and autonomous entity
are in this paper.

One can also refer to degree of autonomy, so that an agent has some limited degree of
autonomy and so-called autonomous systems have a fair degree of autonomy even
though they do not have full, complete, and absolute autonomy.

Lethal autonomous weapons? Coming, but not here yet, and not in the very near future.

What Is an Assistant?

Jack Krupansky

Nov 30, 2017

As a prelude to writing about intelligent digital assistants, this informal paper
summarizes what it means to be an assistant in general — traditional human assistants.
This will facilitate a more comprehensive discussion of how well digital assistants
subsume the role(s) of human assistants.

There is a subsequent paper, What is an Intelligent Digital Assistant?

Definition
Unfortunately, there is no single, universal definition for what it means to be an
assistant.

Some of the definitional fragments…

From Google:

• a person who ranks below a senior person.


• a person who helps in particular work.

From Merriam-Webster:

• a person who assists someone.


• helper.
• a person holding an assistantship.
• a device or product that provides assistance.
• a person who helps someone.
• a person whose job is to help another person to do work.
• a person whose job is to help the customers in a store.
• acting as a helper to another.
• a person who assists another.

From Dictionary.com:

• a person who assists or gives aid and support.


• helper.
• a person who is subordinate to another in rank, function, etc.
• one holding a secondary rank in an office or post.
• something that aids and supplements another.
• a faculty member of a college or university who ranks below an instructor and
whose responsibilities usually include grading papers, supervising laboratories,
and assisting in teaching.
• serving in an immediately subordinate position; of secondary rank.
• a person who assists, esp in a subordinate position.
• (archaic) helpful or useful as an aid.

Specializations of the term


Not all assistants are created equal. Each assists in some specialized sense.

From Wikipedia:

• Assistant district attorney


• Certified Nursing Assistant
• Graduate assistant
• Office Assistant
• Personal assistant
• Physician assistant
• Production assistant
• Research assistant
• Teaching assistant

Some others, many from job listings:

• Administrative assistant
• Artificially intelligent assistant
• Assistant account executive
• Assistant coach
• Assistant commissioner
• Assistant facility manager
• Assistant manager
• Assistant operations manager
• Assistant professor
• Assistant program director
• Assistant secretary
• Call center customer care assistant
• Customer care assistant
• Chatbot/virtual assistants
• Community assistant
• Dental assistant
• Deputy assistant secretary
• Digital assistant
• Digital virtual assistant
• Editorial assistant
• Executive assistant
• Executive virtual assistant
• Finance assistant
• Health assistant
• Intelligent automated virtual assistant
• Intelligent digital assistant
• Intelligent personal assistant
• Intelligent virtual assistant
• Integration assistant
• Lab assistant, laboratory assistant
• Legal assistant
• Marketing administrative assistant
• Medical assistant
• Nursing assistant
• Outside advertising sales assistant
• Program and executive assistant
• Project assistant
• Recruiter assistant
• Retail assistant
• Sales assistant
• Staff assistant
• Student assistant
• Teacher assistant
• Team Assistant
• Virtual assistant
• Virtual digital assistant (VDA)
• Virtual office assistant

Virtual assistant — remote or software


Virtual assistant is an ambiguous term, with two distinct meanings:

1. Remote, telecommuting assistant. The common meaning in job search listings.


2. Software or artificial intelligence-based assistance. An intelligent digital
assistant.

The first is a real live person, a human, who just happens to work from a location other
than an office of the company or organization being served, such as from home or at a
third-party firm providing such services. They may work using the telephone, email, or
online chat.

The second is a digital simulation of a person, able to respond to a subset of the requests
that a normal person would be able to handle.
Alternatively, the software may be able to handle a much deeper or broader range of
requests than any single human being could be readily trained to handle.

More depth on digital assistants will be covered in a subsequent paper, What is an
intelligent digital assistant?

Related terms
Synonyms from Thesaurus.com:

• abettor
• accessory
• accomplice
• adherent
• adjunct
• aide
• aide-de-camp
• ally
• appointee
• apprentice
• associate
• attendant
• auxiliary
• backer
• backup
• coadjutor
• coadjutant
• collaborator
• colleague
• companion
• confederate
• cooperator
• deputy
• fellow worker
• flunky
• follower
• friend
• gofer
• help
• helper
• helpmate
• mate
• partner
• patron
• peon
• representative
• right-hand person
• secretary
• subordinate
• supporter
• temp
• temporary worker

Other related terms:

• Chatbot
• Customer care representative
• Customer service representative
• Medical scribe
• Personal digital assistant
• Project management specialist
• Receptionist
• Registration and scheduling specialist
• Representative
• Scribe
• Social bot
• Software assistant
• Technical assistance

Principal
This paper will use the term principal to refer to the boss or manager to whom an
assistant reports and who assigns them work or tasks to be completed. The person for
whom the assistant works. The person whom the assistant assists.

Level of expertise and responsibility — simple, specialized, executive
Assistant roles come in a wide variety of colors and stripes, but there are three
significant distinctions based on expertise and responsibility:

1. Simple assistant. Your basic, run of the mill assistant. Minimal education,
minimal prior experience, minimal technical knowledge. And minimal level of
responsibility, a well-defined collection of administrative-type tasks. The
administrative assistant is representative of this level.
2. Specialized or technical assistant. May require a more specialized degree or
training. Requires the ability to master particular subject matter. Requires the
ability to perform specialized tasks, beyond simple administrative tasks. Medical
assistants and research assistants are representative of this level.
3. Executive assistant. Not so much a matter of education or specialized
knowledge, but rather able to handle more significant responsibilities. Able to
accomplish tasks or pursue goals of their principal without being explicitly
directed to perform each task. Almost literally able to read their principal’s
mind, or at least able to frequently and commonly anticipate predictable
requests. More significantly, is able to serve their principal better than their
principal could directly and explicitly instruct them.

Tasks vs. goals


Tasks are relatively simple operations that may require a lot of effort, but generally do
not require much in the way of complex reasoning, judgment, or careful decision. Only
limited planning required.

Goals are more complex collections of tasks that require some significant level of
complex reasoning, judgment, and careful decision. And significant planning.

A task is specified by detailing the operations to be performed. How to achieve the
objective.

A goal is specified by stating the objective to be achieved. The objective itself rather
than the details of how to achieve the objective.

Tasks generally don’t require much deep thought, just slogging through the work.

Goals tend to require deeper, more careful, and more insightful thought.
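The task/goal distinction above amounts to imperative versus declarative specification: a task spells out the how, a goal states only the what. A hypothetical sketch (the names are my own, purely illustrative):

```python
# Hypothetical illustration of the distinction: a task dictates *how*
# (an explicit sequence of operations), while a goal states only *what*
# (the objective), leaving the planning to whoever pursues it.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    steps: list  # the principal dictates every operation

@dataclass
class Goal:
    objective: str  # only the desired outcome is stated
    plan: list = field(default_factory=list)

    def pursue(self, planner):
        # the pursuer must do its own planning to satisfy the objective
        self.plan = planner(self.objective)
        return self.plan

# A task: every step is handed down, no judgment required.
file_expenses = Task("file expenses",
                     steps=["collect receipts", "fill form", "submit form"])

# A goal: only the objective is given; the plan is the pursuer's own.
cut_costs = Goal("reduce travel spending by 10%")
cut_costs.pursue(lambda obj: ["analyze spending", "negotiate rates"])
```

An administrative assistant, in these terms, receives `Task` objects; a research or executive assistant increasingly receives `Goal` objects and supplies the planner itself.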

Administrative assistants would tend to be task-oriented.

Research assistants would have a significant degree of goal-oriented work. Possibly
with some degree of task-oriented administrative work as well.

An executive assistant would of course have a fair degree of task-oriented
administrative work, although they might also have administrative assistants of their
own to handle much of the mundane tasks, but a fair portion of their time would be
dedicated to more goal-oriented activities to anticipate and further the objectives of
their principal, without requiring ongoing, explicit direction.

Primary types of assistant


There are many types of assistant, as many as there are types of work and types of
human activity, but there are a relatively few primary categories of types:

1. Assistant. Generic, the full range. Could be merely administrative or be
technical and very capable, but simply dedicated to serving the needs of a single
senior manager or executive.
2. Gofer. Probably the simplest form of assistant. An employee whose duties
include running errands. Lackey. Not necessarily dedicated to a particular
individual.
3. Personal assistant. May simply be a gofer, or something more, but dedicated to
a particular individual. Frequently focused on personal services.
4. Concierge. Makes arrangements, such as reservations or purchasing tickets.
Traditionally in a hotel, but clubs, businesses, and other organizations now use
the concept.
5. Butler. Valet. Personal assistant in a home or club. Provides personal services.
6. Administrative assistant. Most common in businesses. As an office assistant,
covering basic office tasks such as paperwork, filing, basic scheduling and
calendar, travel arrangements, and basic computer skills. Possibly some degree
of database and spreadsheet skills. Duties may include assisting visitors and
guests, coffee, refreshments, meals, and logistical support for meetings and
conferences. No special degree, training, or subject matter knowledge required.
7. Research assistant. Requires half a brain, or more. Probably specialized
education or training. Probably subject matter expertise. Excellent written
communication skills.
8. Technical assistant. Technical qualifications. Specialized education and
training. Specialized skills and experience. Specialized tasks.
9. Executive assistant. Extensive experience working with senior managers and
executives and prestigious visitors. Anticipate needs without being asked.
Resolve problems without asking for help. Work well with important people
outside the organization. Ability to make decisions, rather than wait or ask.
10. House staff. Hotel staff. To the extent that they personally assist guests or
residents. Such as doorman, bellhop, concierge, room service, operators, valet,
or butler.
11. Virtual assistant. An assistant who telecommutes or works remotely or off-site,
possibly for a third-party contractor. Alternatively, an intelligent digital
assistant.
12. Intelligent digital assistant. Software service running on a digital computing
device which provides information and some interesting subset of the features of
a traditional, human assistant.

Personal services
Performing services of a personal rather than professional nature for the principal is a
primary function of a personal assistant, although specialized assistants as summarized
in the preceding section may focus more heavily on professional services.

Personal services may include:

• Clothing
• Appearance
• Hygiene
• Housekeeping
• Food
• Drink
• Dining
• Transportation and travel
• Entertainment

Many other tasks


The categories of types of assistants given in the primary types section give only a
subset of the full spectrum of tasks which can be performed by assistants. The spectrum
is unlimited.

About the only limitation is that the nature of the task needs to be clearly and simply
articulated so that the task can be performed in a direct, straightforward manner by the
assistant, who may not have the detailed education and knowledge to understand the
deeper why of the task. Their function is simply to excel at the how of the task.
Software service — intelligent digital assistant
A subsequent paper will dive into greater detail, but an intelligent digital assistant is a
software service, possibly coupled with a specialized hardware device, such as a smart
speaker, or merely a feature offered on a general purpose computing device such as a
personal computer, tablet, smartphone, or wearable computer (such as a digital
wristwatch), which offers some interesting set of the abilities of a traditional, human
assistant, most notably answering questions and performing tasks using voice and
natural language processing (NLP) backed by artificial intelligence (AI).

Examples include Amazon Alexa/Echo, Apple Siri, Google Assistant, and Microsoft
Cortana.

Synthesized definition
After digesting available material on the topic, I have come up with the following
synthesized definition for assistant:

• An assistant is a person, device, or software service who or which takes on some
portion of the workload or tasks of an individual or group.
• An assistant facilitates the activities and life of an individual or group.
• An assistant performs tasks or operations on behalf of and at the request of an
individual or a group.
• An assistant has very little or limited sense of autonomy or agency, deferring to
the explicit direction of the individual or group whom they serve. Some
assistants may have a greater degree of autonomy.

Intelligent Entities: Principals, Agents, and Assistants

Jack Krupansky

Dec 15, 2017

Not all entities are created equal, whether they are people, robots, software agents, or
intelligent digital assistants. This informal paper explores the relative roles of
principals, agents, and assistants. These respective roles apply to both the real world of
people as well as the digital world of intelligent agents and intelligent digital assistants.

A prior paper, What Are Autonomy and Agency, explored autonomy and agency to
some degree. This paper goes deeper, focusing on the roles of the entities themselves as
well as their autonomy and agency.
Although the ultimate goal of this paper is to get at the computational aspects of
intelligent entities, principals, agents, and assistants, most of the concepts should apply
equally well to the human world and the world of computers.

The essential concepts explored in this paper relate to agency, which itself relates to
autonomy.

This paper is intended to serve as a baseline, fundamental reference on the essential
relationship between agents and entities which interact with agents. That may make it a
bit harder to read, but the focus is on developing a solid foundation and to serve as a
reference rather than merely a light introduction.

Much of the motivation for the depth in this paper is to enable the concepts of
intelligent entities, agency, and autonomy to be successfully implemented in artificial
intelligence systems. Outside of AI systems, people can get by with casual and intuitive
interpretations of these powerful concepts, but no computer system is going to figure
out all of the nuances of these concepts on its own without sufficient depth being
programmed in from the get-go.

The key concepts are:

1. Intelligent entities. These can be people or machines. Capable of thought and
organized behavior. They may be highly intelligent, or just barely, or anything
in between.
2. Principals. These are the entities which have true autonomy, free to do
whatever they want. They don’t have bosses, but they may have customers,
clients, and users. They have some larger mission or purpose, which causes them
to define and pursue objectives, resulting in goals and tasks which they seek to
offload or delegate to others, namely agents and assistants, so that they can
achieve something larger than their personal, individual efforts.
3. Agents. They do the actual work of principals, limited only by their contractual
obligations to those principals, known as goals, but otherwise are free to organize
their time and efforts to achieve their principal’s goals as they see fit. They have
limited autonomy, but maximal agency.
4. Assistants. Workers who focus on specific tasks, to pursue some larger goal,
objective, or mission of their boss, a principal or an agent.
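The division of labor among these roles can be sketched as a delegation chain. This is a purely illustrative sketch of my own (the class and method names are hypothetical, not a prescribed design): the principal sets an objective, delegates it as a goal to an agent, and the agent decomposes the goal into tasks it hands to an assistant.

```python
# Hypothetical sketch of the delegation chain described above.
# All names are my own invention, for illustration only.

class Assistant:
    """Performs specific, explicitly directed tasks; no autonomy."""
    def __init__(self):
        self.completed = []

    def perform(self, task):
        self.completed.append(task)

class Agent:
    """Pursues delegated goals as it sees fit: limited autonomy, full agency."""
    def __init__(self, assistant):
        self.assistant = assistant

    def pursue(self, goal):
        # the agent, not the principal, decides the task breakdown
        tasks = [f"{goal}: step {i}" for i in (1, 2)]
        for task in tasks:
            self.assistant.perform(task)
        return tasks

class Principal:
    """Sets its own objectives: true autonomy, no boss."""
    def __init__(self, agent):
        self.agent = agent
        self.objectives = []

    def set_objective(self, objective):
        self.objectives.append(objective)
        return self.agent.pursue(objective)  # delegate the goal to the agent

helper = Assistant()
worker = Agent(helper)
boss = Principal(worker)
boss.set_objective("launch the product")
```

Note the asymmetry the sketch makes explicit: only the principal originates objectives, the agent alone decides the breakdown into tasks, and the assistant sees only tasks, never the goal behind them.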

Again, agency is the central focus of this paper, from two aspects:

1. The leveraging of effort that agents provide to principals.
2. What it takes to make a successful and useful agent.

Not all intelligent entities are locked into the strict roles of principals, agents, and
assistants, but those other roles are not related to the concept of agency, which is the
focus of this paper. See the section on Other categories of intelligent entities.

Other key concepts and terms that will be defined and explored in this paper:

1. Agency
2. Autonomy, autonomous
3. Independence, independent
4. Dependence, dependent, dependency
5. Control
6. Mission
7. Purpose
8. Objective
9. Contract
10. Capability
11. Reputation
12. Requirement
13. Delegation
14. Goal
15. Task
16. Action
17. Operation

Most concepts and terms will be provided with both their traditional dictionary
definitions as well as refined definitions which more closely capture the essential
meaning of the concepts and terms in the context of intelligent entities and their
relationships, especially agency.

What’s the point of an intelligent entity?


Why would an entity need to be intelligent? What’s the point of intelligence?

Simply put, intelligence is required to do interesting things that cannot be done with a
mere pre-programmed sequence of rote, mechanical steps.

Some degree of some of the following may be required to do anything interesting:

1. Defining a mission.
2. Defining objectives to achieve the mission.
3. Planning.
4. Making decisions.
5. Dealing with the unexpected.
6. Dealing with ambiguity.
7. Coping with the vagaries of human nature.
8. Coping with the vagaries of weather and other natural phenomena.
9. Dexterity that cannot be easily or cheaply replicated by a machine.
10. Pattern recognition that cannot be easily or cheaply replicated by a machine.
11. Creativity.
12. Imagination.

In some (or even many) cases it may be theoretically possible to arrange for a dumb
machine (or person) to be trained to accomplish a task or pursue a goal, but the cost or
risk of doing so might cause one to fall back on an intelligent entity, human or machine,
rather than deal with the complexities and vagaries of dumb entities, whether human or
machine.
How much intelligence is needed?
Enough to respond to common obstacles and variations in the operating environment.

Various levels and a spectrum of intelligence are explored in a companion paper,


Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and
Machine Learning.

Is a dog an intelligent entity?


Interesting question! A few key points in their favor:

1. Dogs do have utility, including guarding.


2. Dogs can perform simple tasks or at least actions upon request.
3. They can sometimes deal with the unexpected.
4. They have dexterity, to some degree.
5. They can do some forms of pattern recognition.
6. They have some very minimal communication skills.
7. They can be great companions.
8. They can be specially trained for specific tasks or activities such as leading the
blind, bomb sniffing, tracking, fetching certain types of objects, etc.
9. They can be assistants to at least a limited degree.
10. They can help to assist in solving simple problems and performing simple tasks.

A few key points against them:

1. Nothing in the way of creativity or imagination.


2. Nothing in the way of planning.
3. No ability to perform more complex tasks.
4. No complex problem solving ability.
5. Very limited communication skills.
6. No apparent ability to consider alternatives and make decisions.
7. No sense of ethics or morality, except perhaps simple loyalty to their owner.

In short, I would say that they can be considered limited assistants.

In defense of dogs, it is worth noting that even some human assistants are not very
competent at some of those tasks and activities that are beyond the abilities of a dog, so
we shouldn’t necessarily hold those limitations against dogs per se.

I hesitate to grant them the full status of intelligent entity and assistant, but I don’t want
to completely dismiss their value either.

Maybe I’ll just leave it up to the discretion of the reader — you have my permission to
confer or deny full status of intelligent entity and assistant, as you see fit.

Me, I’ll grant our canine friends provisional status as intelligent entities until such time
as someone can offer a convincing and satisfying argument against such status.
Solving bigger problems
The essential rationale for principals, agents, and assistants is to enable entities to
address significantly larger problems than they could if working alone.

The principal defines the larger mission and breaks it down into manageable objectives,
which can in turn be broken down into narrower goals, each of which can be delegated
to an agent using a contract that specifies the details of the goal, or decomposed into
more discrete tasks, each of which can be delegated or assigned to an assistant.

Each agent studies and analyzes the goal or objective it was assigned by its principal,
comes up with a strategy for how to achieve the goal, comes up with a plan to pursue
that strategy by decomposing the goal into individual tasks, and then parcels each task
out to an assistant, or if simple enough, performs the individual tasks itself.

Each assistant then focuses on a single task, sequencing through the specific actions or
operations needed to complete its task.

One principal can employ any number of agents. And possibly assistants as well.

Each agent can employ any number of assistants.

Even beyond that, an agent can decompose its assigned goal into subgoals and then
delegate each subgoal to yet another agent, each with its own contract for its specific
subgoal.

As well, an assistant can partition a large assigned task into smaller tasks and then
delegate or assign the smaller tasks to yet other assistants.
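This decomposition hierarchy can be sketched in code. The following is a minimal sketch, not a real implementation; the class and method names (`Principal`, `Agent`, `Assistant`, `pursue`, `perform`) are illustrative choices of mine, and goals and tasks are reduced to plain strings:

```python
from dataclasses import dataclass, field

@dataclass
class Assistant:
    name: str

    def perform(self, task: str) -> str:
        # An assistant focuses on a single task, sequencing through
        # the actions needed to complete it (modeled here as one step).
        return f"{self.name} completed: {task}"

@dataclass
class Agent:
    name: str
    assistants: list = field(default_factory=list)

    def pursue(self, goal: str) -> list:
        # The agent decomposes its contracted goal into tasks and
        # parcels each task out to one of its assistants.
        tasks = [f"{goal} / task {i + 1}" for i in range(len(self.assistants))]
        return [a.perform(t) for a, t in zip(self.assistants, tasks)]

@dataclass
class Principal:
    mission: str
    agents: list = field(default_factory=list)

    def run(self) -> list:
        # The principal breaks its mission into one goal per agent and
        # delegates each goal under an implied contract.
        results = []
        for i, agent in enumerate(self.agents, start=1):
            results.extend(agent.pursue(f"{self.mission} / goal {i}"))
        return results

# One principal, one agent, two assistants:
p = Principal("launch product",
              [Agent("agent-1", [Assistant("helper-a"), Assistant("helper-b")])])
print(p.run())
```

A real agent would of course analyze the goal and devise its own strategy rather than split it mechanically, and agents could recursively act as principals for subagents, but the sketch shows the shape of the delegation chain.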

General meaning of entity


The general meaning of entity most relevant to this paper is person or computational
entity. The latter covers software and data in computers, tablets, smartphones,
intelligent agents, intelligent digital assistants, and any kind of device with embedded
digital capabilities, such as smart appliances.

That said, it is instructive to look at the full range of meaning of the term entity first.

Definition entries from Merriam-Webster definition of entity:

1. Being, existence — independent, separate, or self-contained existence


2. the existence of a thing as contrasted with its attributes
3. something that has separate and distinct existence and objective or conceptual
reality
4. an organization (such as a business or governmental unit) that has an identity
separate from those of its members

From a companion paper, Vocabulary of Knowledge, Thought, and Reason, the


definition of entity:
• Entity. An object that has some sort of significance. Commonly a person, place,
or thing. May be a computational entity. May also be an idea, concept, topic,
area, event, matter, action, phenomenon, or any thing of unspecified or vaguely
specified nature. A group of closely related entities can also be considered
collectively as a larger entity, such as a family, partnership, team, business,
nonprofit organization, or a country. Animals, people, organizations, and robots
are entities.

The definition of object from that paper as well:

• Object. Something that exists or at least appears to have form, substance, shape,
or can be detected in some way, or can be experienced with the senses or
imagination, or manipulated by a computer, either as a real-world object or an
imaginary object, such as a media object, mental object, or computational
object, and can be distinguished from its environment. See also: entity, a subset
of which are objects. Whether liquid and gaseous matter should be considered to
be objects is debatable, but they are under this definition. A storm could
certainly be treated as an object even though it consists only of air and water.
Alternatively, the entity or matter at which an action is being directed — see
also: subject.

Technically, an entity does not even have to be a person or smart machine, so for the
context of this paper we need to restrict the definition to the subset of entities that are
people and smart machines — sapient entities, alternatively known as intelligent
entities. And smart machines are also known as computational entities.

Definition of sapient entity


Another key definition from that companion paper, Vocabulary of Knowledge,
Thought, and Reason, is that of sapient entity:

• Sapient entity. An intelligent entity, capable of wisdom. A person or an


intelligent machine or robot.

The intention in the context of this paper is that principals, agents, and assistants are all
sapient entities.

Granted, wisdom is a bit of a stretch for current AI, but anything better than really dumb
machines is worth at least partial credit.

This paper focuses on sapient entities, but for convenience and conciseness simply
refers to them as entities, with sapience implied, or as intelligent entities.

Definition of intelligent entity


Another key definition from that companion paper, Vocabulary of Knowledge,
Thought, and Reason, is the definition of intelligent entity:
• Intelligent entity. Entity capable of perception and cognition — thought and
reason, coupled with memory and knowledge. Synonym for sapient entity.

How intelligent?
How exactly should we distinguish dumb entities (human or machine) from intelligent
entities? That’s an open matter of great debate.

For starters, review the extensive explorations of the nature of intelligence (both human
and machine) in the companion paper, Untangling the Definitions of Artificial
Intelligence, Machine Intelligence, and Machine Learning, particularly the sections
Levels of Artificial Intelligence and Spectrum of Functional Behavior.

To oversimplify, you have Weak AI and Strong AI, with plenty of shades of gray
between.

In short, you can credibly claim that you or your computer software is intelligent if it is
at least somewhat intelligent, exhibiting behavioral qualities that are at least
quasi-human-like, even if not all that sophisticated.

Put another way, an agent or assistant really only needs to be able to do something
useful, so that you can feel that it has taken some interesting, significant burden off
of your shoulders and made your life at least somewhat better, or at least easier. And
even if it is only a trivial degree of improvement, that is likely good enough as well.

Sure, ten years from now intelligent entities are going to be really intelligent, but we
should be content to crawl before we walk, let alone run and sprint.

Definition of computational entity


An intelligent entity can be a person or a computational entity, as defined in the
companion paper, Vocabulary of Knowledge, Thought, and Reason:

• Computational entity. An imaginary entity created as a computational object. It


may be intended to accurately or approximately represent a real-world object,
mental object, or media object, or be entirely imaginary and exist only in the
computing environment.

Dictionary definitions of entity, principal, agent, and


assistant
Before providing refined definitions of the terms entity, principal, agent, and assistant,
the starting point is to review the traditional dictionary definitions of these terms.

Definition entries from Merriam-Webster definition of entity:

1. independent, separate, or self-contained existence


2. something that has separate and distinct existence and objective or conceptual
reality

There are other meanings for entity, but those are the senses relevant to this paper.

Definition entries from Merriam-Webster definition of principal:

1. a person who has controlling authority or is in a leading position


2. a chief or head man or woman
3. the chief executive officer of an educational institution
4. one who engages another to act as an agent subject to general control and
instruction
5. the person from whom an agent’s authority derives
6. the chief or an actual participant in a crime
7. the person primarily or ultimately liable on a legal obligation
8. a leading performer — star

Definition entries from Merriam-Webster definition of agent:

1. one that acts or exerts power


2. something that produces or is capable of producing an effect
3. a means or instrument by which a guiding intelligence achieves a result
4. one who is authorized to act for or in the place of another
5. a computer application designed to automate certain tasks (such as gathering
information online)
6. a person who does business for another person
7. a person who acts on behalf of another
8. a person or thing that causes something to happen
9. something that produces an effect
10. a person who acts or does business for another
11. someone or something that acts or exerts power
12. a moving force in achieving some result
13. a person guided or instigated by another in some action
14. a person or entity (as an employee or independent contractor) authorized to act
on behalf of and under the control of another in dealing with third parties

Definition entries from Merriam-Webster definition of assistant:

1. a person who assists someone


2. helper
3. a person holding an assistantship
4. a device or product that provides assistance
5. a person who helps someone
6. a person whose job is to help another person to do work
7. a person whose job is to help the customers in a store
8. acting as a helper to another
9. a person who assists another

Definition entries from Dictionary.com definition of assistant:


1. a person who assists or gives aid and support
2. helper
3. a person who is subordinate to another in rank, function, etc.
4. one holding a secondary rank in an office or post
5. something that aids and supplements another
6. a faculty member of a college or university who ranks below an instructor and
whose responsibilities usually include grading papers, supervising laboratories,
and assisting in teaching
7. serving in an immediately subordinate position; of secondary rank
8. a person who assists, esp in a subordinate position
9. (archaic) helpful or useful as an aid

Definitions of autonomy and agency


Definitions of autonomy and agency from What Are Autonomy and Agency?:

1. Autonomy. Degree to which an entity can set goals, make decisions, and take
actions without the approval of any other entity. Can range from the full
autonomy of a principal to the limited autonomy of an agent to no autonomy for
an assistant. The entity can decide whether to take action itself or delegate
responsibility for specific goals or specific tasks to other entities, such as agents
and assistants.
2. Agency. Ability of an entity, an agent, to plan, make decisions, and take actions
or perform tasks in pursuit of objectives and goals provided by a principal. The
agent has limited autonomy to decide how to pursue objectives and goals
specified by its principal. A contract between principal and agent specifies the
objectives and goals to be pursued, authorizing action and obligations, but
leaving it to the agent to decide how to plan, define, and perform tasks and
actions. The agent may decompose given objectives and goals into subgoals
which it can delegate to other agents for whom this agent is their principal.

More depth on autonomy and agency


For more detail on autonomy and agency, see the companion paper, What Are
Autonomy and Agency?.

Degrees of autonomy
In the real world, autonomy is not a binary all or nothing proposition. It’s a spectrum
with unlimited gradations, the most common and significant in the context of this paper
being:

1. No autonomy. All actions dictated by other entities.


2. Limited autonomy. Some degree of autonomy, but constrained by some
combination of other entities, commonly the principal of a contract between
entities, and external forces.
3. Semi-autonomous. More than merely some limited autonomy, but short of full
autonomy.
4. Full autonomy. Absolutely no limits. Well, other than the laws of physics and
statutory law.

And gradations between those gross levels of autonomy.

Full autonomy for principals


The general idea is that principals have autonomy while agents and assistants are more
significantly constrained in their freedom of action.

But can real principals in the real world ever have absolutely full autonomy?

In a technically purist sense, no. Real, practical principals will be constrained by:

• The laws of physics.


• The limitations of their bodies and their minds.
• National, local, and international law.
• Moral and ethical commitments, including professional ethics.
• Limited resources.
• Competition for resources with other entities.

But other than that, we can consider principals to have full autonomy.

That’s fine for people, but what about robots and AI? Well…

Science fiction for robot and AI autonomy


In the imaginary world of science fiction, full autonomy of robots and AI is quite
possible, if not expected.

The HAL computer in the 2001: A Space Odyssey movie and the Skynet AI network of
computers and machines in the Terminator movies were in fact machines which
somehow gained full autonomy — with quite scary consequences.

It would be interesting to see a science fiction movie in which fully autonomous robots
have a strictly benign and benevolent sense of autonomous responsibility. But maybe
that violates the strict definition of autonomy — if they act as if to serve people, then
they aren’t truly autonomous.

Maybe robots would need to exist in colonies or countries or planets or space stations of
their own, with full autonomy there, rather than coexisting within our human societies.
Robot societies and human societies could coexist separately and could interact, while
respecting the autonomy of each other, with neither in charge or dominating the other.
Maybe. But no time soon.

Even if you did manage to put a robot on an uninhabited island, on an unmanned ship,
or even launched into space never to return, it’s not clear how you could give up legal
ownership and responsibility so that the robot was truly autonomous. There would have
to be a change in our laws to permit such an emancipation of property, a la the concept
of emancipation of minors (children). Merely abandoning or freeing a robot would not
address the legal aspect of ownership.

Limited autonomy for robots and AI in the real world


In the real world as we know it today and expect it for the indefinite future, there is no
current prospect that robots or AI could have the full and unlimited autonomy that is
permitted in science fiction.

For now, and the indefinite future, robots and AI systems will have owners, who have
control over them, which is inconsistent with full autonomy.

For now, and the indefinite future, robots will not be citizens or have the rights of
citizens.

For now, and the indefinite future, driverless cars will go where their owners or
occupants tell them to go. As such, a driverless car would be more of an agent rather
than a principal.

So, the proper characterization is that robots and AI systems can be semi-autonomous
with limited autonomy.

Dictionary definitions of independent


Independence and autonomy are closely related, if not synonyms.

Definition entries from Merriam-Webster definition of independent:

1. not dependent
2. not subject to control by others — self-governing
3. not affiliated with a larger controlling unit
4. not requiring or relying on something else — not contingent
5. not looking to others for one’s opinions or for guidance in conduct
6. not bound by or committed to a political party
7. not requiring or relying on others (as for care or livelihood)
8. being enough to free one from the necessity of working for a living
9. showing a desire for freedom
10. not determined by or capable of being deduced or derived from or expressed in
terms of members (such as axioms or equations) of the set under consideration
11. having the property that the joint probability (as of events or samples) or the
joint probability density function (as of random variables) equals the product of
the probabilities or probability density functions of separate occurrence
12. main
13. neither deducible from nor incompatible with another statement
14. one that is independent
15. one that is not bound by or definitively committed to a political party
16. someone or something that is not connected to others of the same kind
17. a person who does not belong to a political party
18. not under the control or rule of another
19. not connected with something else
20. not depending on anyone else for money to live on
21. thinking freely : not looking to others for guidance

Definition entries from Dictionary.com definition of independent:

1. not influenced or controlled by others in matters of opinion, conduct, etc.


2. thinking or acting for oneself
3. not subject to another’s authority or jurisdiction; autonomous — free
4. not influenced by the thought or action of others
5. not dependent
6. not depending or contingent upon something else for existence, operation, etc.
7. not relying on another or others for aid or support
8. rejecting others’ aid or support
9. refusing to be under obligation to others
10. possessing a competency
11. an independent person or thing
12. a small, privately owned business
13. a person who votes for candidates, measures, etc., in accordance with his or her
own judgment and without regard to the endorsement of, or the positions taken
by, any party

Dictionary definitions of independence


Definition entries from Merriam-Webster definition of independence:

1. the quality or state of being independent


2. freedom from outside control or support
3. the state of being independent
4. the time when a country or region gains political freedom from outside control
5. the quality or state of not being under the control of, reliant on, or connected
with someone or something else

Definition entries from Dictionary.com definition of independence:

1. the state or quality of being independent


2. freedom from the control, influence, support, aid, or the like, of others

Freedom of action
Autonomy and independence are terms for referring to the degree of freedom of action
of an entity.

That includes free will or the ability to make decisions without external constraint as
well.

Definition of independent and independence


1. Independent. An entity whose actions are not dependent on or controlled by any
other entity. Generally a synonym for autonomous, except in the political sphere
where autonomy conveys some limited sense of control or dependence relative
to a larger state or country, but does not constitute true independence.
2. Independence. Degree to which an entity is independent or autonomous.
Degree to which an entity can make decisions and act without consulting with or
being authorized by some other entity or entities. May be full independence or
limited independence.
3. Full independence. No limitations to the autonomy or freedom of action of an
entity. Synonym for full autonomy.
4. Limited independence. There are restrictions on the degree to which an entity
can make decisions and act. The entity is dependent on information, support,
guidance, direction, permission, or authorization from one or more other entities.
Synonym for limited autonomy.

A principal can be fully independent, although technically it could be dependent on


agents or assistants to which it has delegated work or dependent on resources which it
does not directly control.

Generally, agents have only limited independence since they are acting on behalf of a
principal.

Ditto for assistants, which depend on their principal.

Independence and autonomy as synonyms


Independence can simply be treated as a synonym for autonomy, at least in the context
of this paper, with degrees of independence corresponding to degrees of autonomy.

Full autonomy would be identical to full independence.

That said, autonomy and independence are quite distinct in the political domain, with no
sense of degrees, levels, or gradations. For example, an autonomous region does not
have independence regardless of how autonomous it is. If it had full autonomy in the
sense used in this paper, it would be considered independent rather than merely
autonomous.

Dictionary definitions of dependence


Definition entries from Merriam-Webster definition of dependence:

1. the quality or state of being dependent


2. the quality or state of being influenced or determined by or subject to another
3. reliance
4. trust
5. one that is relied on
6. the state of needing something or someone else for support, help, etc.
7. a condition of being influenced and caused by something else
8. a state of having to rely on someone or something
9. the quality or state of being dependent upon or unduly subject to the influence of
another

Definition entries from Dictionary.com definition of dependence:

1. the state of relying on or needing someone or something for aid, support, or the
like
2. reliance
3. confidence
4. trust
5. an object of reliance or trust
6. the state of being conditional or contingent on something, as through a natural
or logical sequence
7. subordination or subjection

Dictionary definitions of dependent


Definition entries from Merriam-Webster definition of dependent:

1. hanging down
2. determined or conditioned by another — contingent
3. relying on another for support
4. subject to another’s jurisdiction
5. subordinate
6. not mathematically or statistically independent
7. equivalent
8. one that is dependent
9. a person who relies on another for support
10. relies on someone else for most or all of his or her financial support
11. decided or controlled by something else
12. needing someone or something else for support, help, etc.
13. a person (such as a child) whose food, clothing, etc., you are responsible for
providing
14. determined by something or someone else
15. relying on someone else for support
16. a person who depends upon another for support

Definition entries from Dictionary.com definition of dependent:

1. relying on someone or something else for aid, support, etc.


2. conditioned or determined by something else — contingent
3. subordinate
4. subject
5. not used in isolation; used only in connection with other forms
6. hanging down — pendent
7. having values determined by one or more independent variables
8. having solutions that are identical to those of another equation or to those of a
set of equations
9. (of an event or a value) not statistically independent
10. a person who depends on or needs someone or something for aid, support,
favor, etc.
11. a child, spouse, parent, or certain other relative to whom one contributes all or
a major amount of necessary financial support

Definition of dependent, dependence, and dependency


Cooperation between entities, as between principals and agents or assistants, implies
some degree of dependence.

1. Dependent. An entity requires information, support, guidance, direction,


permission, or authorization from some other entity in order to make a decision
or take an action.
2. Dependence. What information, support, guidance, direction, permission, or
authorization an entity requires from some other entity in order to make a
decision or take an action. Alternatively, the degree to which an entity depends
on other entities for information, support, guidance, direction, permission, or
authorization to make decisions and take actions.
3. Dependency. Dependencies. Specific technical details of the entities,
information, support, guidance, direction, permissions, and authorizations that
an entity is dependent on.

Dependence of a principal
A principal can be dependent on:

• Other principals.
• Agents to which it has delegated goals.
• Assistants to which it has delegated tasks.
• Resources needed to pursue its mission.
• Customers, clients, and users for their business or patronage.

Dependence of an agent
An agent can be dependent on:

• A principal for information, support, guidance, direction, permission, or


authorization, as well as for its patronage in the first place.
• Other agents to which it has delegated subgoals or goals of its own.
• Assistants to which it has delegated tasks.
• Resources needed to achieve its goals.

Dependence of an assistant
An assistant can be dependent on:

• A principal or agent for its task assignments, the resources needed to complete
its tasks, and for its overall employment.
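The dependence relationships enumerated in the three lists above form a directed graph, and since cooperation chains through delegation, dependence can also be treated as transitive. A minimal sketch, with the entity names and helper functions as illustrative assumptions of mine:

```python
# Dependencies as a directed graph: entity -> entities it depends on,
# following the three lists above (simplified to one of each role).
dependencies = {
    "principal": ["agent", "assistant", "resources", "customers"],
    "agent": ["principal", "assistant", "resources"],
    "assistant": ["principal", "agent"],
}

def depends_on(entity: str, other: str) -> bool:
    # Direct dependence only.
    return other in dependencies.get(entity, [])

def transitively_depends_on(entity: str, other: str, seen=None) -> bool:
    # Depth-first reachability; the visited set guards against the
    # principal/agent cycle in the graph.
    seen = seen if seen is not None else set()
    if entity in seen:
        return False
    seen.add(entity)
    direct = dependencies.get(entity, [])
    return other in direct or any(
        transitively_depends_on(d, other, seen) for d in direct)

print(depends_on("assistant", "resources"))               # False
print(transitively_depends_on("assistant", "resources"))  # True, via its agent
```

The last two lines illustrate the transitivity: an assistant has no direct dependence on resources in the list above, but it inherits that dependence through the agent that assigns its tasks.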
Dictionary definitions of control
Definition entries from Merriam-Webster definition of control:

1. to incorporate suitable controls in


2. to exercise restraining or directing influence over — regulate
3. to have power over — rule
4. to reduce the incidence or severity of especially to innocuous levels
5. to incorporate controls in an experiment or study
6. to direct the behavior of (a person or animal)
7. to cause (a person or animal) to do what you want
8. to have power over (something)
9. to direct the actions or function of (something)
10. to cause (something) to act or function in a certain way
11. to have power over
12. to direct the actions or behavior of
13. to keep within bounds — restrain
14. to direct the function of
15. to exercise restraining or directing influence over especially by law
16. to have power or authority over
17. to have controlling interest in

Definition entries from Dictionary.com definition of control:

1. to exercise restraint or direction over — dominate, command


2. to hold in check — curb, restrain
3. to test or verify (a scientific experiment) by a parallel experiment or other
standard of comparison
4. to eliminate or prevent the flourishing or spread of

Definition of control
1. Control. To limit, restrain, direct, guide, or influence the decisions or actions of
another entity.

Controlling entities
A principal controls agents and assistants.

A principal is controlled by its internal management — owners, investors, board of


directors.

An agent is controlled by a principal or another agent.

An assistant is controlled by a principal or an agent.

Dictionary definitions of mission


Definition entries from Merriam-Webster definition of mission:

1. a body of persons sent to perform a service or carry on an activity


2. a specific task with which a person or a group is charged
3. a preestablished and often self-imposed objective or purpose
4. Calling, vocation

Definition entries from Dictionary.com definition of mission:

1. any important task or duty that is assigned, allotted, or self-imposed


2. an important goal or purpose that is accompanied by strong conviction
3. a calling or vocation

Dictionary definitions of objective


Definition entries from Merriam-Webster definition of objective:

1. something toward which effort is directed — an aim, goal, or end of action


2. a strategic position to be attained or a purpose to be achieved by a military
operation

Definition entries from Dictionary.com definition of objective:

1. something that one’s efforts or actions are intended to attain or accomplish —


purpose, goal, target

Dictionary definitions of goal


Definition entries from Merriam-Webster definition of goal:

1. the terminal point of a race


2. the end toward which effort is directed — aim
3. something that you are trying to do or achieve
4. purpose
5. the point at which a race or journey is to end

Definition entries from Dictionary.com definition of goal:

1. the result or achievement toward which effort is directed — aim, end


2. the terminal point in a race

Dictionary definitions of task


Definition entries from Merriam-Webster definition of task:

1. a usually assigned piece of work often to be finished within a certain time


2. something hard or unpleasant that has to be done
3. duty, function
4. a piece of work that has been given to someone
5. a job for someone to do
6. a piece of work that has been assigned, needs to be done, or presents a
challenge

Definition entries from Dictionary.com definition of task:

1. a definite piece of work assigned to, falling to, or expected of a person — duty
2. any piece of work
3. a matter of considerable labor or difficulty

Dictionary definitions of delegation


The verb sense of delegate is the more relevant meaning in this paper.

Definition entries from Merriam-Webster definition of delegation:

1. the act of empowering to act for another — the delegation of responsibilities


2. the act of giving control, authority, a job, a duty, etc., to another person
3. the act of giving someone authority or responsibility for
4. one or more persons chosen to represent others
5. the act of delegating

Definition entries from Dictionary.com definition of delegation:

1. the act of delegating


2. the state of being delegated

Definition entries from Merriam-Webster definition of delegate:

1. to entrust to another
2. to appoint as one’s representative
3. to assign responsibility or authority
4. to give (control, responsibility, authority, etc.) to someone
5. to trust someone with (a job, duty, etc.)
6. to choose (someone) to do something
7. to make responsible for getting something done
8. to entrust or transfer (as power, authority, or responsibility) to another
9. to transfer (one’s contractual duties) to another
10. to empower a body (as an administrative agency) to perform (a governmental
function)

Definition entries from Dictionary.com definition of delegate:

1. to send or appoint (a person) as deputy or representative


2. to commit (powers, functions, etc.) to another as agent or deputy
3. entrust
4. assign

Dictionary definitions of responsibility


Definition entries from Merriam-Webster definition of responsibility:

1. the quality or state of being responsible


2. moral, legal, or mental accountability
3. reliability, trustworthiness
4. something for which one is responsible
5. the state of being the person who caused something to happen
6. a duty or task that you are required or expected to do
7. something that you should do because it is morally right, legally required, etc.

Definition entries from Dictionary.com definition of responsibility:

1. the state or fact of being responsible, answerable, or accountable for something
within one’s power, control, or management
2. an instance of being responsible
3. a particular burden of obligation upon one who is responsible
4. a person or thing for which one is responsible
5. reliability or dependability, especially in meeting debts or payments

Definition entries from Merriam-Webster definition of responsible:

1. liable to be called on to answer


2. liable to be called to account as the primary cause, motive, or agent
3. being the cause or explanation
4. liable to legal review or in case of fault to penalties
5. able to answer for one’s conduct and obligations — trustworthy
6. able to choose for oneself between right and wrong
7. marked by or involving responsibility or accountability
8. politically answerable — especially to the electorate
9. having the job or duty of dealing with or taking care of something or someone
10. able to be trusted to do what is right or to do the things that are expected or
required
11. involving important duties, decisions, etc., that you are trusted to do
12. getting the credit or blame for acts or decisions
13. reliable
14. needing a dependable person
15. characterized by trustworthiness, integrity, and requisite abilities and resources
16. marked by or involving accountability

Definition entries from Dictionary.com definition of responsible:

1. answerable or accountable, as for something within one’s power, control, or
management (often followed by to or for)
2. involving accountability or responsibility, as in having the power to control or
manage
3. chargeable with being the author, cause, or occasion of something (usually
followed by for)
4. having a capacity for moral decisions and therefore accountable; capable of
rational thought or action
5. able to discharge obligations or pay debts
6. reliable or dependable, as in meeting debts, conducting business dealings, etc.
7. (of a government, member of a government, government agency, or the like)
answerable to or serving at the discretion of an elected legislature or the
electorate

Dictionary definitions of contract


Definition entries from Merriam-Webster definition of contract:

1. a binding agreement between two or more persons or parties, especially one
legally enforceable
2. a business arrangement for the supply of goods or services at a fixed price
3. the act of marriage or an agreement to marry
4. a document describing the terms of a contract
5. a document on which the words of a contract are written
6. a legal agreement between people, companies, etc.
7. a legal agreement
8. a written document that shows the terms and conditions of a legal agreement

Definition entries from Dictionary.com definition of contract:

1. an agreement between two or more parties for the doing or not doing of
something specified
2. an agreement enforceable by law
3. the written form of such an agreement
4. the formal agreement of marriage — betrothal

Dictionary definitions of capability


Definition entries from Merriam-Webster definition of capability:

1. the quality or state of being capable — ability


2. a feature or faculty capable of development
3. the facility or potential for an indicated use or deployment
4. the ability to do something
5. ability

Definition entries from Dictionary.com definition of capability:

1. the quality of being capable — capacity, ability


2. the ability to undergo or be affected by a given treatment or action
3. Usually, capabilities — qualities, abilities, features, etc., that can be used or
developed — potential
Definition entries from Merriam-Webster definition of capable:

1. having attributes (such as physical or mental power) required for performance
or accomplishment
2. having traits conducive to or features permitting something
3. having the qualities or abilities that are needed to do or accomplish something
4. having legal right to own, enjoy, or perform
5. having or showing general efficiency and ability
6. able to do something
7. having the qualities or abilities that are needed to do something
8. skilled at doing something
9. able to do something well

Definition entries from Dictionary.com definition of capable:

1. having power and ability


2. efficient
3. competent
4. having the ability or capacity
5. open to influence or effect — susceptible
6. predisposed, inclined
7. having ability, especially in many different fields
8. competent

Definition entries from Merriam-Webster definition of ability:

1. the quality or state of being able, especially physical, mental, or legal power to
do something
2. natural aptitude or acquired proficiency, natural talent or acquired skill
3. the power or skill to do something

Definition entries from Dictionary.com definition of ability:

1. power or capacity to do or act physically, mentally, legally, morally, financially,
etc.
2. competence in an activity or occupation because of one’s skill, training, or other
qualification
3. abilities, talents; special skills or aptitudes
4. expertness

Definition entries from Merriam-Webster definition of able:

1. having sufficient power, skill, or resources to do something


2. having enough power, skill, or resources to do something
3. having the freedom or opportunity to do something
4. not prevented from doing something
5. having the power, skill, money, etc., that is needed to do something
6. having a quality or nature that makes something possible
7. used to say that the quality or condition of something makes something possible
8. susceptible to some action or treatment
9. marked by intelligence, knowledge, skill, or competence
10. having skill or talent
11. skillful
12. competent

Definition entries from Dictionary.com definition of able:

1. having necessary power, skill, resources, or qualifications — qualified


2. having unusual or superior intelligence, skill, etc.
3. showing talent, skill, or knowledge
4. legally empowered, qualified, or authorized
5. fit

Definition entries from Merriam-Webster definition of competence:

1. the ability to do something well


2. the quality or state of being capable
3. the quality or state of being functionally adequate
4. a sufficiency of means for the necessities and conveniences of life
5. the quality or state of being competent
6. the knowledge that enables a person to speak and understand a language
7. possession of sufficient knowledge or skill
8. legal authority, ability, or admissibility

Definition entries from Dictionary.com definition of competence:

1. the quality of being competent — adequacy


2. possession of required skill, knowledge, qualification, or capacity
3. an income sufficient to furnish the necessities and modest comforts of life
4. sufficiency; a sufficient quantity
5. legal capacity or qualification based on the meeting of certain minimum
requirements of age, soundness of mind, citizenship, or the like
6. the implicit, internalized knowledge of a language that a speaker possesses and
that enables the speaker to produce and understand the language

Definition entries from Merriam-Webster definition of competent:

1. proper or rightly pertinent


2. having requisite or adequate ability or qualities — fit
3. having or showing requisite or adequate ability or qualities
4. legally qualified or adequate
5. having the capacity to function or develop in a particular way
6. having the necessary ability or skills
7. able to do something well or well enough to meet a standard
8. capable, efficient
9. free from addiction or mental defect that renders one incapable of taking care of
oneself or one’s property
10. capable of understanding one’s position as a criminal defendant and the nature
of the criminal proceedings and able to participate in one’s defense
11. intelligent

Definition entries from Dictionary.com definition of competent:

1. having suitable or sufficient skill, knowledge, experience, etc., for some purpose
2. having sufficient skill, knowledge, etc. — capable
3. properly qualified
4. adequate but not exceptional
5. having legal competence, as by meeting certain minimum requirements of age,
soundness of mind, or the like
6. proficient
7. suitable or sufficient for the purpose
8. properly or sufficiently qualified
9. capable of performing an allotted or required function
10. legally qualified or fit to perform an act
11. able to distinguish right from wrong and to manage one’s affairs

Dictionary definitions of reputation


Definition entries from Merriam-Webster definition of reputation:

1. overall quality or character as seen or judged by people in general


2. recognition by other people of some characteristic or ability
3. a place in public esteem or regard — good name
4. the common opinion that people have about someone or something
5. the way in which people think of someone or something
6. overall quality or character as judged by people in general
7. notice by other people of some quality or ability
8. overall quality or character as seen or judged by people in general within a
community

Definition entries from Dictionary.com definition of reputation:

1. the estimation in which a person or thing is held, especially by the community or
the public generally
2. favorable repute — good name
3. a favorable and publicly recognized name or standing for merit, achievement,
reliability, etc.
4. the estimation or name of being, having, having done, etc., something specified

Dictionary definitions of requirement


Definition entries from Merriam-Webster definition of requirement:

1. something required
2. something wanted or needed — necessity
3. something essential to the existence or occurrence of something else —
condition
4. something that is needed or that must be done
5. something that is necessary for something else to happen or be done
6. something that is necessary

Definition entries from Dictionary.com definition of requirement:

1. that which is required


2. a thing demanded or obligatory
3. an act or instance of requiring
4. a need or necessity
5. some quality or performance demanded of a person in accordance with certain
fixed regulations

Definition of mission
1. Mission. The larger and core purpose, general focus, and target audience for an
intelligent entity, primarily a principal, beyond immediate and specific
objectives, goals, and tasks. Objectives follow from the mission. Alternatively,
the immediate objective, goal, or task for an intelligent entity.

In the context of this paper, mission is generally used in the former sense, the larger
and core purpose of a principal.

Mission of a principal
Generally, a principal will have a clearly defined mission. Its objectives follow from
that mission.

Mission of an agent
Generally, an agent will not have a mission of its own with regard to its work on a
specific goal, deferring to the mission of the principal on whose behalf it is pursuing the
goal.

That said, an agent will commonly have the implied mission of serving principals,
satisfying their requirements, successfully completing contracts, and otherwise
establishing a solid track record of satisfying the requirements of principals.

Mission of an assistant
Generally, an assistant does not have a mission of its own. The mission of an assistant
in the traditional sense is simply to serve its principal by successfully completing tasks.

Definition of objective
1. Objective. Larger or more general target or aim to be pursued or achieved by an
entity as part of its mission. Not as specific as a goal, but not uncommonly goal
and objective are used as synonyms. The intent here is that a goal has a narrower
scope while an objective has a broader scope. Sometimes referred to as a
strategic objective.

The intention is that the mission of an entity comprises a set of objectives.

A principal will translate its mission into objectives, and then translate each objective
into one or more goals or tasks that can then be delegated to agents and assistants.

An agent would be contracted to pursue a goal which is needed to achieve an objective,
but the agent would not necessarily even be aware of the larger objective. Sometimes it
will, in which case that information would be included as part of the goal or detailed in
the contract for the goal, but generally the larger picture is beyond the scope of an agent.

In a simpler sense, the objective of an agent is simply the goal for which the agent has
been contracted.

Definition of delegation
1. Delegation. The process of an entity identifying a subset of its goals and tasks
that can be offloaded to another entity, or to multiple entities.

Delegation might be:

1. From a principal to an agent.


2. From a principal to an assistant.
3. From an agent to an assistant.
4. From an agent to another agent.

Definition of responsibility
1. Responsibility. Obligation or expectation that one entity will deliver goods,
services, information, or guidance to another entity, or perform or achieve
something on its behalf, typically as a result of delegation under a contract.

Both (or all) parties in a contract will have responsibilities.

Responsibility of principal or contracting party


The contracting party is responsible for supplying requirements, information, guidance,
and possibly resources needed by the contracted party to proceed with the contracted
work, including a statement of the goal and expectations for any work products.

Responsibility of agent or contracted party


The contracted party is responsible for performing the contracted work, achieving the
contracted goal, and delivering the contracted work products.

Responsibility of an assistant
Assistants would generally not operate under a formal contract for a given work request,
other than a general contract of employment that covers all of their work.

Rather, their responsibilities would be informally given with each task that they are
assigned.

Definition of contract
1. Contract. Agreement between two intelligent entities specifying a relationship
in which goods, services, or payments are exchanged according to agreed upon
terms and conditions, with the intention of pursuing and achieving a specified
goal. Commonly between a principal and agent, or between a principal or agent
and an assistant.

Definition of capabilities
1. Capabilities. Detailed list of what an intelligent entity can do and accomplish.
And in some cases non-intelligent entities. What types of objectives, goals,
tasks, actions, and operations the entity can perform. What abilities, skills,
knowledge, expertise, talents, aptitudes, competencies, and proficiencies the
entity has. Also what education, training, experience, credentials, qualifications,
and licensing the entity has.

Principals, agents, and assistants all have capabilities.

Working with capabilities


There are several aspects to capabilities.

1. Acquiring capabilities. Learning, developing, refining, honing.


2. Awareness of capabilities. Knowledge and confidence.
3. Detailing capabilities. Cataloging and expressing them.
4. Advertising capabilities. Letting other entities know what the entity has to
offer. Registering an entity and its capabilities in catalogs or listings.
5. Searching for capabilities. An entity seeks the services of another entity which
offers desired capabilities. And which has a reputation for delivering on those
capabilities. Issuing RFPs (Request For Proposal) for desired capabilities and
requirements. Or posting or advertising job requirements. Searching catalogs or
listings of entities which have registered as providing such capabilities.
6. Contracting for capabilities. Negotiating a contract for delivering specified
services from one entity to another, such as from an agent to a principal, an
assistant to a principal or agent, or between two agents or two principals.

See also reputation.
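Steps 4 and 5 above, advertising capabilities and searching for them, can be sketched as a minimal registry. The CapabilityRegistry class, its methods, and the entity names are hypothetical illustrations, not part of any real agent framework:

```python
class CapabilityRegistry:
    """Catalog in which entities advertise what they can do."""

    def __init__(self):
        # Maps an entity name to the set of capability labels it advertises.
        self._listings = {}

    def advertise(self, entity, capabilities):
        """Register (or update) an entity's advertised capabilities."""
        self._listings[entity] = set(capabilities)

    def search(self, required):
        """Return entities whose advertised capabilities cover all requirements."""
        needed = set(required)
        return [entity for entity, caps in self._listings.items()
                if needed <= caps]  # subset test: every requirement is offered


registry = CapabilityRegistry()
registry.advertise("agent-a", ["translation", "summarization"])
registry.advertise("agent-b", ["translation"])

print(registry.search(["translation", "summarization"]))  # → ['agent-a']
```

A real registry would also weigh reputation when ranking matches, as discussed next.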

Definition of reputation
1. Reputation. Track record for an entity delivering on the terms and conditions of
contracts for its capabilities. Knowledge shared by entities which have
previously contracted with the entity.

Reputation includes:

1. Delivering of contracted work products, both goods and services. Achieving


goals.
2. Quality of work products.
3. Timely delivery.
4. Cost effectiveness.
5. Performance. How fast did the entity act and how quickly did it complete the
contract.
6. Capacity. How much or how big a job was the agent or assistant able to handle.
7. Quality of relationship between the entity and entities which contracted with it.
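The aspects above suggest what a machine-readable reputation record might track. The following is a rough sketch under assumed field names, covering only delivery, failure, and timeliness, not a standard scheme:

```python
from dataclasses import dataclass


@dataclass
class Reputation:
    """Running track record of an entity's contract outcomes."""
    contracts_completed: int = 0
    contracts_failed: int = 0
    on_time_deliveries: int = 0

    def record(self, completed, on_time):
        """Fold the outcome of one contract into the track record."""
        if completed:
            self.contracts_completed += 1
            if on_time:
                self.on_time_deliveries += 1
        else:
            self.contracts_failed += 1

    @property
    def success_rate(self):
        total = self.contracts_completed + self.contracts_failed
        return self.contracts_completed / total if total else 0.0


rep = Reputation()
rep.record(completed=True, on_time=True)
rep.record(completed=False, on_time=False)
print(rep.success_rate)  # → 0.5
```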

Definitions of task, purpose, goal, subgoal


1. Task. One or more actions or operations intended to achieve some purpose.
2. Purpose. The reason or desired intent for something.
3. Goal. A destination or state of affairs that is desired or intended, but without a
plan for a set of tasks to achieve it.
4. Subgoal. A portion of a larger goal. A goal may be decomposed into any
number of subgoals.

Definitions of motivation and intention


1. Motivation. The rationale for pursuing a particular objective or goal.
2. Intention. Desired objective or goal. What is desired, not why or how.

Definitions of actions and operations


1. Action. Something that can be done by an entity. An observable effect that can
be caused in the environment.
2. Operation. Generally a synonym for action. Alternatively, an action that
persists for some period of time.

For example, flipping a switch to turn on a machine is an action, while the ongoing
operation of the machine is an operation. The flipping of the switch was an operation
too, only of a very short duration.

If a machine would operate only while a button was depressed, the pressing and holding
of the button as well as the operation of the machine would both be actions and
operations.

Goals, tasks, and actions


Accomplishing anything of note requires three levels of processing:
1. Setting or defining goals — by the principal.
2. Elaborating tasks to achieve a goal — by the agent.
3. Sequencing through discrete actions or operations to complete tasks — by the
assistant.

A higher level of intellectual effort is needed for the first two levels.

A higher level of skill is needed for the latter two levels.
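The three levels above can be sketched as a simple nested structure, with a goal elaborated into tasks and each task into discrete actions. The structure and every name in it are illustrative assumptions:

```python
goal = {
    "goal": "publish report",          # level 1: set by the principal
    "tasks": [                         # level 2: elaborated by the agent
        {"task": "gather data",
         "actions": ["query database", "clean records"]},  # level 3: the assistant
        {"task": "write draft",
         "actions": ["outline sections", "fill in text"]},
    ],
}


def flatten_actions(goal):
    """List every discrete action needed to achieve the goal, in order."""
    return [action
            for task in goal["tasks"]
            for action in task["actions"]]


print(flatten_actions(goal))
# → ['query database', 'clean records', 'outline sections', 'fill in text']
```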

Definitions of principal, agent, and assistant


1. Principal. An intelligent entity which has the will and desire to formulate an
objective or goal, and has unlimited autonomy as well as full agency.
Alternatively, the entity which delegates a task to an assistant.
2. Agent. An intelligent entity which has the capability and resources to pursue
and achieve an objective or goal on behalf of another entity, its principal, and
has agency but only limited autonomy, as well as responsibility to another agent,
an assistant, or a principal.
3. Assistant. An intelligent entity which has the capability and resources to
perform tasks as explicitly directed by a principal or an agent, but has neither
agency nor autonomy, only responsibility to its principal.

Distinctive roles of principals, agents, and assistants


The roles have distinctive differences:

1. Principals. Excel at identifying and detailing specific goals needed to achieve
the objectives that follow from the mission of the principal, collectively referred
to as setting goals; finding properly qualified agents capable of achieving those
goals and having acceptable reputations and track records; delegating or
assigning the goals to those agents after negotiating carefully detailed contracts
with those agents; and finally assuring that the agents have fulfilled their
contracts.
2. Agents. Advertising themselves as being available to principals, including their
capabilities in terms of goals that they can pursue, negotiating contracts with
principals that detail the goals, and then achieving the contracted goal(s) by
identifying the sequences of tasks or subgoals needed to achieve the goals and
either delegating those tasks to assistants or other agents or completing them
itself. An agent creates its reputation and track record by successfully
completing contracts according to their terms and conditions.
3. Assistant. Able to efficiently perform designated tasks to a desired level of
quality.
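The division of labor among the three roles can be sketched as follows. The classes and methods are hypothetical; a real agent would negotiate a contract and plan its tasks rather than follow a fixed script:

```python
class Assistant:
    """Performs explicitly directed tasks; no agency or autonomy."""
    def perform(self, task):
        return f"done: {task}"


class Agent:
    """Elaborates a goal into tasks and delegates them."""
    def __init__(self, assistant):
        self.assistant = assistant

    def achieve(self, goal):
        # Break the goal into tasks (trivially here), then delegate each one.
        tasks = [f"{goal} - step {i}" for i in (1, 2)]
        return [self.assistant.perform(t) for t in tasks]


class Principal:
    """Sets the goal and delegates its pursuit to an agent."""
    def __init__(self, agent):
        self.agent = agent

    def pursue(self, goal):
        return self.agent.achieve(goal)


principal = Principal(Agent(Assistant()))
print(principal.pursue("file taxes"))
# → ['done: file taxes - step 1', 'done: file taxes - step 2']
```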

Contract between principal and agent


An agent cannot properly do its job without a contract between it and its principal. This
paper won’t go into depth on contracts, but will simply note the essential elements of
any good contract:
1. Set expectations of the principal. What the principal expects from the agent.
2. Set expectations of the agent. What the agent expects from the principal.
3. Obligations of the principal.
4. Obligations of the agent.
5. Responsibilities of the principal.
6. Responsibilities of the agent.
7. Statement of work and definition of the work product. The details.
8. Timeframe. When. When to start. When to complete.
9. Timeline. Milestones. Sequencing of work and work products.
10. Reporting. When, how often, and how agent is expected to report status.
11. Limitations and restrictions. How open-ended is the contract.
12. Capacity. How much is the agent or its work product expected to handle?
13. Performance. How fast is the agent or its work product expected to respond?
14. Compensation. For completion of contract, in full or in part.
15. Penalties for failure to complete the contract, in full or in part.
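A few of the essential elements listed above (statement of work, timeframe, reporting, compensation, penalties) can be captured in a simple contract record. The field names and values are illustrative assumptions, not a standard contract schema:

```python
from dataclasses import dataclass


@dataclass
class Contract:
    principal: str
    agent: str
    statement_of_work: str       # element 7: the details
    start: str                   # element 8: when to start
    deadline: str                # element 8: when to complete
    reporting_interval_days: int # element 10: how often to report status
    compensation: float          # element 14
    late_penalty: float          # element 15


contract = Contract(
    principal="Acme Corp",
    agent="research-agent",
    statement_of_work="Produce a market analysis report",
    start="2018-05-01",
    deadline="2018-06-01",
    reporting_interval_days=7,
    compensation=5000.0,
    late_penalty=500.0,
)
print(contract.agent)  # → research-agent
```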

Definition of requirement
1. Requirement. A capability that is needed by an entity to achieve some goal.

Requirements are the foundation of a contract between two entities. They specify what
the contracting party of the contract needs and expects.

A contract is negotiated between a contracting party and a contracted party, such as
between a principal and an agent, where the contracted party (e.g., agent) is offering
capabilities which satisfy the requirements of the contracting party (e.g., principal).

Matching requirements with capabilities


To a significant degree individual requirements will line up with individual capabilities.
In fact, if a requirement doesn’t match up with a capability, there will not be a match
between a contracting party and any entity.

One difference between requirements and capabilities is that a requirement may specify
a range of acceptable capabilities such that any entity with a capability that falls
somewhere in that range will be acceptable.

Or, a requirement may be optional so that even if entities with that capability might be
preferred, entities without that capability would still be acceptable.
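This matching process, covering both wrinkles noted above (a requirement that accepts a range of capability values, and a requirement that is optional), can be sketched as follows. The dictionary representation of requirements and capabilities is an assumption for illustration:

```python
def matches(requirements, capabilities):
    """True if the entity's capabilities satisfy every mandatory requirement."""
    for req in requirements:
        value = capabilities.get(req["name"])
        lo, hi = req["range"]
        satisfied = value is not None and lo <= value <= hi
        # A missing or out-of-range capability only disqualifies the entity
        # if the requirement is mandatory.
        if not satisfied and not req.get("optional", False):
            return False
    return True


requirements = [
    {"name": "throughput", "range": (100, 1000)},                  # mandatory
    {"name": "certification", "range": (2, 5), "optional": True},  # preferred
]

print(matches(requirements, {"throughput": 250}))  # → True
print(matches(requirements, {"throughput": 50}))   # → False
```

An entity ranking scheme would prefer entities that also satisfy the optional requirements, rather than treating all matches equally.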

Assistant
Synthesized definition of assistant from What Is an Assistant?:

• An assistant is a person, device, or software service who or which takes on some
portion of the workload or tasks of an individual or group.
• An assistant facilitates the activities and life of an individual or group.
• An assistant performs tasks or operations on behalf of and at the request of an
individual or a group.
• An assistant has very little or limited sense of autonomy or agency, deferring to
the explicit direction of the individual or group whom they serve. Some
assistants may have a greater degree of autonomy.

Technicians
A technician is simply an assistant who has specialized technical training for a set of
tasks in some specialized area.

Organizations
In addition to individual intelligent entities acting in the roles of principal, agent, and
assistant, entire organizations can act in each role as well.

A business may contract out to a service company such as a law firm, accounting firm,
or marketing firm to act as its agent.

A service firm could hire or contract out to another service firm to perform specific
tasks.

Each firm would of course have individual staff members performing the duties of
principals, agents, and assistants.

Other categories of intelligent entities


In addition to principals, agents, and assistants, there are a variety of other categories of
intelligent entities, including but not limited to:

1. Customers. Entities who purchase goods or services from principals.


2. Clients. Entities who purchase services from principals.
3. Users. Entities who purchase or use the services of principals.
4. Services. Entities which provide services that are used by principals, agents, or
assistants.
5. Workers. Entities who regularly perform similar tasks. May also or sometimes
be assistants.
6. Visitors and guests. Invited or otherwise welcomed.
7. Strangers. Chance encounters. Paths simply happened to cross.
8. Students.
9. Children.
10. Law enforcement.
11. Government officials.
12. Government services.
13. General population. The full set of intelligent entities in some specified area or
region, each of whom may or may not fall into one or more of the other
categories.
14. Past population. The full set of intelligent entities that ever existed in some
specified area or region, some of which exist now, the remainder of which no
longer exist.
15. Future population. The full set of possible intelligent entities which will exist
in the future, some of which exist now, the remainder of which have not yet
come into existence.

Interactions
Intelligent entities can interact both with other intelligent entities and non-intelligent
entities as well.

In the context of principals, agents, and assistants, the interactions can be:

1. Between principals and agents.


2. Between principals and assistants.
3. Between principals and other principals.
4. Between agents and assistants.
5. Between agents and other agents.
6. Between assistants and other assistants.

Interactions can be initiated in either direction.

Other interactions between intelligent entities include:

1. With customers and clients.


2. With prospective customers and clients.
3. With users.
4. With workers.
5. With service workers.
6. With visitors and guests.
7. With strangers.
8. With law enforcement.
9. With government officials.
10. With government services.
11. With students.
12. With children.

Interactions between intelligent entities and non-intelligent entities:

1. With traditional computer services.


2. With dumb machines.
3. With supplies.
4. With paperwork.
5. With logistical details.
6. With raw materials.
7. With weather and other natural phenomena.
8. With animals.
9. With pests.
10. With disease.
11. With one’s own body.
12. With a patient’s body. Medical treatment, surgery, first responders.
13. With nature, physical challenges.
Relationships
Interactions might be transient or they may be part of a larger pattern as part of a
relationship.

In the context of principals, agents, and assistants, the relationships can be between:

1. Principal and agent.


2. Principal and assistant.
3. Principal and principal.
4. Agent and assistant.
5. Agent and agent. Independent of principals.

Other relationships with principals, agents, and assistants include:

1. With customers and clients.


2. With prospective customers and clients.
3. With users.
4. With workers.
5. With service workers.
6. With visitors and guests.
7. With strangers.
8. With students.
9. With children.

Other relationships between intelligent entities include:

1. Teachers with students.


2. Parents, siblings and relatives with children.

Relationships with non-intelligent entities:

1. With animals.
2. With pests.
3. With disease.
4. With one’s own body.
5. With a patient’s body. Medical treatment.
6. With nature.

Partners, allies, friends, enemies, adversaries, competitors, and antagonists
Other forms of relationships include:

1. Partners. Two or more principals can be partners who work very closely
together for specific objectives or a shared mission.
2. Allies. Two or more principals or agents can be allies, having shared interests,
and related objectives, missions, or goals.
3. Friends. Entities whose interests are roughly compatible even if they do not
directly facilitate achieving their respective missions, objectives, goals, or tasks.
4. Enemies. Two or more principals can be enemies seeking the same mission or
objective but with opposing interests.
5. Adversaries. Less severe synonym for enemy.
6. Competitors. Less severe synonym for adversary.
7. Antagonists. Two or more principals, agents, or assistants can be antagonists
whose missions, objectives, goals, or tasks conflict in ways that make it
difficult for each entity to achieve what it seeks, even if they are not outright
enemies, competitors, or adversaries. For example, landlords, vendors, criminals.

Connections
In addition to interactions and relationships, entities can have connections, such as:

1. Familiarity. Know about.


2. Friends.
3. Colleagues.
4. Met at some event.
5. Shared some experience.
6. Affiliation.
7. Past affiliation.
8. Similar background.
9. Same industry.
10. Same or similar career.
11. Same or similar community.
12. Same demographic.
13. Same country.
14. Same or similar interests.
15. Complementary interests.

Legal liability
Legal liability can arise in various ways:

1. Liability of a principal for its own actions.


2. Liability of an agent for its own actions.
3. Liability of an assistant for its own actions.
4. Liability of an agent for its actions performed by it on behalf of its principal.
5. Liability of a principal for actions performed on its behalf by an agent.
6. Liability of a principal for actions performed on its behalf by an assistant.
7. Liability of an assistant for actions performed by it on behalf of its principal.
8. Liability of an assistant for actions performed by it on behalf of an agent.
9. Liability of an agent for actions performed on its behalf by an assistant.
10. Liability of an agent for actions performed on its behalf by another agent.
11. Liability of a principal for actions performed on its behalf by another agent
working on behalf of an agent working on its behalf.
12. Contractual obligations on the part of the principal and agent.
Ethics
This paper will not explore ethics of principals, agents, and assistants, although it is a
very interesting and very relevant topic area.

Morality and ethics of an agent or assistant


An agent (or assistant) will not need to make moral or ethical decisions of its own per
se, but it will need to be able to ensure that its actions or inaction are compatible with
the moral and ethical frameworks of the principal.

Future work
The theory, technology, implementation, and practice of intelligent entities, principals,
agents, assistants, autonomy, and agency will remain works in progress for the
foreseeable future.

This paper will be updated as progress occurs in all of those areas.

What Is an Intelligent Digital Assistant?

Jack Krupansky

Nov 30, 2017

An intelligent digital assistant is a software service, possibly coupled with a specialized
hardware device, such as a smart speaker, or simply a feature offered on a general
purpose computing device such as a personal computer, tablet, smartphone, or
wearable computer (such as a digital wristwatch), which offers some interesting set of
the abilities of a traditional human assistant, most notably answering questions and
performing tasks using voice and natural language processing (NLP) backed by
artificial intelligence (AI).

Examples include Amazon Alexa/Echo, Apple Siri, Google Assistant, and Microsoft
Cortana.

A previous paper detailed the traditional role of human assistants: What Is an Assistant?

This informal paper will briefly explore the nature and capabilities of intelligent digital
assistants.

Note that the concept of an intelligent digital assistant is alternatively referred to by terms such as:
• Digital assistant
• Intelligent personal assistant
• Intelligent virtual assistant
• Personal digital assistant
• Virtual assistant
• Virtual digital assistant
• Voice-enabled digital assistant

Technically, a digital assistant does not need to use voice or even natural language, but
in the context of this paper, the term digital assistant will be used as a shorthand for
intelligent digital assistant and presume that it is voice-enabled with natural language
processing.

Purpose
What is the purpose of a digital assistant? As Google puts it:

• Find info and get things done

Those are the twin purposes:

• Request information.
• Perform tasks.

The job of any good assistant, machine or human.

Key distinguishing features

Traditional digital devices and Internet services already served those same purposes, but the new devices and services focus on voice-enabled natural language interaction:

• Voice input.
• Natural language processing (NLP).
• Voice output.

Two other distinguishing qualities are that execution of requests can be based not only on the raw input request or command from the user, but also on:

• Personal data of the user.
• Past history of usage by the user.

That's where machine learning can come into play.

And, as with most devices and services, personal preferences of the user will be taken
into account.
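The idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of personalization, not any vendor's API: the function names, the profile fields, and the `genre:` history convention are all invented for this example.

```python
# Hypothetical sketch: a response biased by the user's profile and
# usage history rather than a purely canned reply.

def respond(request: str, profile: dict, history: list) -> str:
    """Answer a request, taking the user's data into account."""
    if request == "play some music":
        # Prefer the genre the user has asked for most often in the past.
        past_genres = [h for h in history if h.startswith("genre:")]
        if past_genres:
            genre = max(set(past_genres), key=past_genres.count).split(":", 1)[1]
        else:
            # Fall back to a stated preference from the user's profile.
            genre = profile.get("default_genre", "pop")
        return f"Playing {genre} for {profile['name']}."
    return "Sorry, I can't help with that yet."

print(respond("play some music",
              {"name": "Alex", "default_genre": "pop"},
              ["genre:jazz", "genre:jazz", "genre:rock"]))  # Playing jazz for Alex.
```

The same request produces different results for different users, which is the essence of the personalization described above.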

Features
This generic list of features of digital assistants is not intended to be absolutely
comprehensive, but should be fairly representative:

1. Voice-enabled, voice control, voice interaction, voice queries.
2. Natural language interaction. Commands. Results.
3. Find information. Weather. Traffic. News.
4. Answer questions. Digital encyclopedia.
5. Make recommendations.
6. Perform simple actions around the home, controlling devices. Home automation.
7. Media control. Selecting content, controlling volume. Music. Audio. Video.
Movies. TV shows.
8. Make and take phone calls.
9. Send and receive messages.
10. Chat. Converse with the machine.
11. Foreign language translation.
12. Dictionary lookup.
13. Managing to-do lists.
14. Setting alarms, timers, reminders, and alerts.
15. Shopping.
16. Ordering take out for delivery.
17. E-commerce.
18. Concierge functions. Reservations. Tickets. Services.
19. Access specialized Internet services. Open-ended, modules developed by third
parties.
20. Proactive. Perform tasks or provide information without being explicitly asked.
To only a limited extent today.
21. Support for multiple users on a single device, distinguishing one user's voice from another's. For example, Google Assistant Voice Match.
22. Personalization. Adaptation. Responses and actions take the user’s data
(personal data, preferences, usage history) into account, rather than purely
canned responses.

How intelligent are they?
The nature of intelligence, especially in the context of artificial intelligence (AI), is a
fairly complex topic, which is explored in much greater depth in my paper Untangling
the Definitions of Artificial Intelligence, Machine Intelligence, and Machine
Learning.

For the purposes of this paper, the following points can be made about intelligence and
intelligent digital agents:

• Weak AI is the current state of the art.
• Some aspects of human-level intelligence are employed, but only in a very limited sense.
• Strong AI or General Intelligence is well beyond the current state of the art.
• Human-level intelligence in any general sense won’t be available any time soon.
• A fair amount of intelligence in digital computing is simply rules, patterns, and heuristics rather than any deep understanding of the human meaning of concepts.
• Natural language processing (NLP), the ability to analyze voice input, parse natural language requests, and synthesize voice output, is the primary reason current, voice-activated digital assistants are referred to as intelligent.
• Beyond the NLP, most of the functions performed by these digital assistants can
be performed by a non-AI computer interface and web-based services.
• Few of the functions performed by these digital assistants require AI per se.
• Some amount of machine learning is employed in the services performed in the
cloud, but only in terms of weak AI, not strong AI. The machine is recognizing
simple patterns, but not concepts or what they mean in any deep, human sense.
• The voice match feature is interesting, but once again either solely a matter of
heuristics or weak AI rather than human-level intelligence. Even dogs can
recognize voices.
• It’s not clear if any of the current crop of intelligent digital assistants recognize
and respond to tone of voice, something that even dogs can do.

In short, the current crop of intelligent digital assistants exhibit some significant
qualities normally associated with intelligence, and even seem human-like, but only in a
fairly minimal and superficial sense. It is certainly better than nothing, but just a start
rather than anywhere near the finish line.

The Big Four
Although there have been digital assistants in the past and there are smaller and niche
players, the Big Four of the current wave of products include:

• Amazon Alexa/Echo
• Apple Siri
• Microsoft Cortana
• Google Assistant

Samsung Bixby is a new entrant in the market.

It is beyond the scope of this paper to delve into specific product features or
recommendations for such products.

The Wikipedia pages for the Big Four:

• Amazon Alexa/Echo — Echo devices
• Apple Siri
• Microsoft Cortana
• Google Assistant — Google Home smart speakers

The company web pages for the Big Four products/services:

• Amazon Alexa/Echo
• Apple Siri
• Microsoft Cortana
• Google Assistant — Google Home devices
Connected intelligence, Internet-enabled
A key aspect of the design of this latest wave of digital assistants is that they are services running on servers in the cloud: most of the AI capabilities live in the cloud, with the connected device seen and used by the user simply serving as an input and output device.

Privacy, security, and personal data
Since these digital assistants are online and all relevant user data is online, there are
some significant privacy, security, ownership, and ethical issues. This paper won’t delve
into this important topic deeply, but simply note some of the top concerns:

• Who actually owns the user’s data and records of all requests and actions made
by the user?
• What exactly can and can’t the digital assistant vendors do with any of that user
data?
• Can the vendor give any third-parties access to that user data?
• How secure is that user data, really? Says who?
• Are user interactions with digital assistants vulnerable to man in the middle
attacks or using malware installed in the user device?
• How often is security and privacy of user data audited, and by what technical
means?
• What level of technical skill might be sufficient to hack into user data?
• Might government, foreign government, or intelligence services possess the
technical skills and means to hack user data?
• What assurances does a user have that vendor staff could not theoretically hack user data as an inside job? For financial gain or to pursue a social or political agenda.
• Can any of that user data be sold?
• Does the user have any way to get access to all of the data on them?
• Can a user move their data, including complete usage history to another vendor
or different type of device?
• Does the user have any way to scrub or delete some or all of the data on them?
• Is there a retention policy for user data?
• What rights does the user retain or forfeit with regard to court orders to access
their data? Both criminal and civil.
• How vigorously will vendors defend the rights of the user in the face of court
orders? Says who?
• In what legal jurisdiction(s) does the user data reside? Servers and data centers.
• Does the user have any control or ability to select a jurisdiction? Especially with
regard to court orders and actions of law enforcement in those jurisdictions.
• Might the user data be kept in more than one legal jurisdiction? Multiple copies
or distributed between servers in different data centers.
• Is location data given the same protection as interaction data?
• Can a user shield their location even if their interaction data is accessed, such as
through a court order?
• Can a parent or legal guardian get access to user data of children or relatives?
• Can a user allow another user to access their data?
• Can users share data?

Software and hardware
The software for the various digital assistants is capable of running on a wide range of
hardware platforms:

• Desktop computers
• Laptop computers
• Tablet computers
• Smartphones
• Smart wristwatches
• Wearable computers
• Smart speakers
• Smart TVs
• Smart appliances, smart kitchen appliances

As mentioned in the previous section, the real intelligence is off in the cloud, with the
user’s device or computer used only to communicate with the cloud-based services.

Smart speakers
Smart speakers are all the rage right now, with Amazon Echo, Google Home, and soon Apple HomePod.

It’s a bit of a misnomer to say that the speakers themselves are smart, since the actual speakers are simply output devices, and the real smarts are driven by microphones included in the same physical box as the speakers.

The microphones pick up your voice and send it off to servers in the cloud to be
processed by the actual AI algorithms, before sending audio back to the actual speakers
for you to listen to the result.
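The round trip just described can be sketched in code. This is a toy illustration of the device-to-cloud architecture, with every function a hypothetical placeholder; real systems would stream audio over the network and run speech-to-text, NLP, and text-to-speech in the cloud.

```python
# Toy sketch of the smart-speaker round trip: the device only captures
# and plays audio; the "smarts" live in the cloud service.

def capture_audio() -> bytes:
    # Stand-in for the microphones picking up the user's voice.
    return b"raw waveform of: what is the weather"

def cloud_assistant(audio: bytes) -> bytes:
    # Stand-in for the cloud: speech-to-text, NLP, service calls,
    # then text-to-speech on the way back.
    text = audio.decode().split(": ", 1)[1]
    reply = "It is sunny." if "weather" in text else "Sorry?"
    return reply.encode()  # "synthesized speech"

def play_audio(audio: bytes) -> str:
    # Stand-in for the actual speakers playing the result.
    return audio.decode()

print(play_audio(cloud_assistant(capture_audio())))  # It is sunny.
```

Note that the device-side functions contain no intelligence at all, which is exactly the point made above.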

Equivalent terms
As with any new and evolving technology, the terminology around intelligent digital assistants is fluid and still unsettled.

All of the following terms are roughly equivalent to intelligent digital assistant, or at
least used as if equivalent despite nuances of differences:

• AI assistant
• AI digital workforce platform
• AI voice assistant
• AI-powered virtual agent
• AI-powered voice assistant
• Artificial intelligence voice assistant
• Artificial-intelligence assistant
• Artificially intelligent assistant
• Bot
• Chatbot
• Chatterbot
• Connected assistant
• Connected intelligent assistant
• Digital agent
• Digital assistant
• Digital virtual assistant
• Digital voice assistant
• Intelligent assistant
• Intelligent digital assistant
• Intelligent personal assistant
• Intelligent virtual assistant
• Personal AI assistant
• Personal assistant
• Personal assistant voice apps
• Personal digital assistant
• Smart assistant
• Smart digital assistant
• Socialbot
• Virtual assistance
• Virtual assistant
• Virtual customer assistant
• Virtual digital assistant
• Virtual personal assistant
• Voice AI capabilities
• Voice AI–capable device
• Voice assistant
• Voice-enabled digital assistant
• Voice-powered digital assistant

Not all bots, chatbots, socialbots, or digital or virtual assistants are necessarily voice-
activated or use voice response. They may use text.

Not all bots or socialbots recognize natural language. They may simply act in a way that
mimics human behavior using a variety of heuristics such as recognizing keywords that
are significant for the particular subject matter domain which the bot is designed for.
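The keyword heuristic mentioned above can be shown in a few lines. This is a deliberately naive sketch; the keyword table and replies are invented for illustration, and a real bot would be far more elaborate, but the principle is the same: pattern matching, not understanding.

```python
# Toy keyword-matching bot: it mimics conversation with no grasp of
# language, just a lookup of domain-specific keywords.

KEYWORD_REPLIES = {
    "refund": "I can help with refunds. What is your order number?",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "We are open 9am-5pm, Monday through Friday.",
}

def bot_reply(message: str) -> str:
    for keyword, reply in KEYWORD_REPLIES.items():
        if keyword in message.lower():
            return reply
    # No keyword recognized: hand off rather than fake understanding.
    return "Let me connect you with a human representative."

print(bot_reply("When will my shipping arrive?"))
```

A bot like this can seem responsive within its narrow domain while having no intelligence in any deeper sense.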

Also see the online customer service section.

Related terms
Some other terms that might sometimes be used to refer to digital assistants:

• Agent
• Digital agent
• Intelligent agent
• Software agent
What is the proper term?
Alas, there is no single, widely acknowledged proper term for the products and services
covered by this paper. To wit, here are the common characterizations of the Big Four
products:

• Apple Siri — intelligent personal assistant.
• Amazon Alexa/Echo — intelligent personal assistant.
• Google Assistant — virtual personal assistant.
• Microsoft Cortana — virtual assistant.

Those are the terms used in the respective Wikipedia articles for those products and
services.

Given how fluid and unsettled the use of the terminology is, this paper arbitrarily settled
on the use of the term intelligent digital assistant, or digital assistant for convenience
and conciseness when the context is reasonably clear.

Personal digital assistant
The term personal digital assistant or PDA seems like such a natural candidate to use for these new devices and services, but the term is already taken, or at least was taken, as exemplified by the classic Palm Pilot PDA device that was so popular back in the late 1990s and early 2000s, until smartphones with similar capabilities eclipsed the handheld personal information management market.

Its primary function was contact management with names, phone numbers, addresses, and notes. A vest-pocket rolodex and notebook, to be used in conjunction with a non-smart cell phone. No question/answer or task capabilities. Actually, there were a variety of apps, games, and the like that could be downloaded to the device, but nothing like a voice or natural language interface for those functions.

Maybe with time the term will be reclaimed as a synonym for intelligent digital
assistant.

In fact, the current web page for Microsoft Cortana uses the term at one point:

• Cortana is your truly personal digital assistant.

Although on a support page for Cortana they use the term digital agent:

• Cortana is your digital agent.

Thus illustrating how fluid and unsettled the terminology is for this new product/service
category.

Tasks vs. goals
The current crop of digital assistants are quite amazing, but still quite limited. Despite
their AI features, they still can’t compete with many of the qualities of an old-fashioned
human assistant.

In particular, digital assistants are task-oriented rather than goal-oriented.

As discussed in the preceding paper, What Is an Assistant?, tasks are relatively simple
operations that may require a lot of effort, but generally do not require much in the way
of complex reasoning, judgment, careful decision, and planning, while goals are more
complex collections of tasks that require some significant level of complex reasoning,
judgment, careful decision, and planning.

Granted, as that paper pointed out, much of the work of many assistants really is simply
task-oriented, but more specialized or capable assistants are capable of goal-oriented
work.

The AI in the current wave of digital assistants has barely enough capability to parse basic natural language and recognize an interesting but rather limited set of patterns of meaning, well short of the more complex understanding exhibited by more capable human assistants.

A task is specified by detailing the operations to be performed. How to achieve the objective.

A goal is specified by stating the objective to be achieved. The objective itself rather
than the details of how to achieve the objective. In fact, and generally, the specific tasks
needed to achieve a goal might not be known in detail in advance and only become
apparent as work towards the goal progresses.
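The task/goal distinction above can be made concrete with a small sketch. The class names and the trivial "planner" are invented for illustration: a task carries its steps up front, while a goal carries only the objective, with steps discovered as work progresses.

```python
# Sketch of the task vs. goal distinction: "how" vs. "what".
from dataclasses import dataclass, field

@dataclass
class Task:
    steps: list  # the operations are fully known in advance

@dataclass
class Goal:
    objective: str          # only the outcome is stated
    steps: list = field(default_factory=list)  # discovered over time

    def plan_next_step(self, state: str) -> str:
        # In a real system this step would require reasoning, judgment,
        # and planning; here it is a trivial stand-in.
        step = f"work toward '{self.objective}' given state '{state}'"
        self.steps.append(step)
        return step

task = Task(steps=["set timer for 10 minutes"])
goal = Goal(objective="book a trip to Paris")
print(task.steps[0])
print(goal.plan_next_step("no flights searched yet"))
```

Current digital assistants live almost entirely in the `Task` half of this sketch.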

Current digital assistants generally perform a single operation at a time. Google, do this. Alexa, do that. One question or task at a time.

Tasks generally don’t require much deep thought, just slogging through the work.

Goals tend to require deeper, more careful, and more insightful thought. And planning.

Current digital assistants can handle relatively simple tasks, but not more complex tasks
or complex reasoning.

As discussed in my AI paper Untangling the Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning, these digital assistants offer weak AI, but are well short of strong AI.

Proactive
Current digital assistants have only limited capabilities at best for being proactive,
doing things for you without being explicitly asked. Reminders and alerts, and learning
from personal data and usage is about the best they can currently muster.
That said, future iterations of digital assistants are likely to become much more
proactive, even to the point of providing us with information and services before we are
even consciously aware that we might want or need them.

But that’s the future, not the present.

Online customer service
Many websites now feature some level of online customer service. Sometimes this is
simply online chat with a real, human customer service representative, but more and
more website operators are using AI-based chat using natural language.

This is a close cousin to the technology utilized by the kind of intelligent digital
assistants covered by this paper, but websites are focused more on commercial
customer service types of questions and tasks rather than consumer-oriented questions
and tasks.

That said, a website chat may offer more and deeper insight into narrow niches of your
online life than one of the general purpose intelligent digital assistants.

Plugin modules for websites and services
The intelligent digital assistant vendors are currently offering support for developers so
that websites can in theory develop plugin modules that would permit the general
purpose intelligent digital assistants to have more access to more aspects of the online
services that users utilize.

That’s not common today, but will likely become more common as adoption of
intelligent digital assistants grows, not unlike the fact that many websites also offer apps
for smartphones.

Smart cars
Even before the advent of driverless vehicles, cars in recent years have incorporated quite a few smart features that automate actions that previously had to be done manually by the human driver and that involve some degree of sensing and judgment by the vehicle itself. Whether or not these features constitute intelligence per se is a matter of debate, but at a minimum they do assist the driver, so in a very real sense they can be considered a digital assistant. It’s no great stretch, then, to consider them an intelligent digital assistant, especially when these features perform the kind of proactive tasks that even home digital assistants cannot currently muster.

Driverless vehicles are too new and unproven to draw any strong conclusions about yet.
In fact, part of the problem is that they still are relatively dumb, limited, and more
focused on heuristics and other weak AI capabilities rather than anything even remotely
resembling strong AI.

But in coming years, AI, smart car features, and driverless vehicles will each be
evolving so that it is not that big a stretch to consider cars of the future to be intelligent
digital assistants. After all, personal transportation is a personal service, traditionally
performed by a human assistant called a driver.

Virtual assistant
One nit is that the term virtual assistant is ambiguous. In addition to referring to one of
these new voice-activated digital assistants, the term also refers to a human assistant
who works remotely, such as from home or for a third-party contractor. That latter
usage is common for job listings.

The one question a digital assistant can’t answer
None of the Big Four, or any other connected assistant, can answer the following question:

• Why can’t I connect to the Internet?

Why not? Because answering any question requires a network connection. No connection, no answer.

History
The long history of digital assistants is interesting but beyond the scope of this paper.

Wikipedia has some background on personal digital assistants.

Future directions for digital assistants
For starters, the full range of capabilities of traditional human assistants are great fodder
for the future of digital assistants.

A previous paper, What Is an Assistant?, details many of the capabilities of traditional human assistants.

In addition, there are likely to be a wide range of enhancements that are based on the
unique capabilities of digital computing which are very different from human
capabilities.

Still, it is likely to be quite some time before digital assistants can surpass human
assistants. But since so many consumers are unable to hire an army of human assistants,
enhanced digital assistants are a promising future even at only a small fraction of
human-level capabilities.

A key gating factor for the evolution of intelligent digital assistants will be the pace of
advances in AI itself, which is discussed at much greater length in my paper Untangling
the Definitions of Artificial Intelligence, Machine Intelligence, and Machine
Learning.
Human in the loop
One prospect not exploited in the current crop of digital assistants is the ability to integrate a human into the loop: not the user, but a third party, an expert or company representative who can add value from their human intellect and subject matter expertise that current digital assistant technology can’t quite muster at this stage.

Put simply, the digital assistant would do the vast bulk of the easy tasks, falling back to
human intervention only for the harder tasks.
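This fallback pattern is simple to express in code. The sketch below is a hypothetical illustration: the confidence scores, threshold, and request strings are all invented, and a real system would estimate confidence from its NLP pipeline rather than a lookup table.

```python
# Sketch of human-in-the-loop routing: the assistant handles requests
# it is confident about and escalates the rest to a human expert.

CONFIDENCE = {
    "set a timer": 0.98,
    "what's the weather": 0.95,
    "negotiate my cable bill": 0.20,
}
THRESHOLD = 0.8  # below this, a human takes over

def handle(request: str) -> str:
    if CONFIDENCE.get(request, 0.0) >= THRESHOLD:
        return f"assistant handles: {request}"
    return f"escalated to human expert: {request}"

print(handle("set a timer"))
print(handle("negotiate my cable bill"))
```

The threshold is the key design choice: set it high and humans see more traffic; set it low and the assistant answers more often, at the cost of more mistakes.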

Crowdsourcing
Crowdsourcing is another way of putting people in the loop, lots of them rather than
one, to answer more complex or subjective or current questions that a simple lookup or
real-time data reference can’t answer. Or to perform tasks.

Crowdsourcing questions
To the best of my knowledge, there are no digital assistants on the market using
crowdsourcing to respond to questions.

The best you can do today is post a question on a question/answer website such as
Quora or StackExchange and patiently wait for an answer.

Crowdsourcing tasks
There are a variety of crowdsourcing services on the Internet for tasks, but to the best of my knowledge none are integrated with the top intelligent digital assistants at this time. It’s probably only a matter of time before such integrations start springing up.

There is a skill module that can be added to Amazon Alexa to invoke TaskRabbit, but
the integration seems a bit primitive.

What is needed is for each of the major digital assistants to have generic features for crowdsourcing tasks that don’t require user knowledge of specific task services. Crowdsource the crowdsourcing.

Even further, the user should simply be able to state the nature of their task or problem
and the digital assistant should be able to deduce what task is needed. Like:

• Alexa, my faucet is leaking. [a plumber or the building maintenance guy is needed]
• Siri, my job sucks. [career or work counseling may be needed]
• Google, my head hurts and my vision is blurry. [maybe medical referral or even 911]
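A crude version of that deduction can be sketched as keyword-to-service routing. The cue table below is invented purely for illustration; real deduction would require far richer language understanding than this.

```python
# Naive sketch of deducing the needed service from a stated problem.

PROBLEM_TO_SERVICE = {
    "faucet": "plumber or building maintenance",
    "job": "career or work counseling",
    "head hurts": "medical referral, possibly 911",
}

def deduce_service(statement: str) -> str:
    for cue, service in PROBLEM_TO_SERVICE.items():
        if cue in statement.lower():
            return service
    # No cue matched: a real assistant should ask a follow-up question.
    return "unknown; ask a clarifying question"

print(deduce_service("my faucet is leaking"))
```

The gap between this table lookup and genuinely understanding "my head hurts and my vision is blurry" is exactly the gap between weak AI and the goal-oriented assistants discussed earlier.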

Group crowdsourcing
Crowdsourcing is generally very open-ended and unlimited — anybody anywhere can
participate, but for some activities there may be a desire or even an advantage to restrict
participation to a smaller or more select group.

Call it group crowdsourcing.

It might be a group of friends or relatives. Or experts in some area. Or members of an organization. Or a selected demographic group. Or a local community, or possibly nearby communities. Any imaginable subset of everybody everywhere.

Granted, selectivity at least partially defeats the whole point of wide-open crowdsourcing — that you never really know where the real and most valuable expertise is located, but for some types of tasks that may be an acceptable choice.

Video
The current voice-activated consumer digital assistants don’t incorporate video in their
operation, but that is likely to change in the coming years. They will eventually sense
motion and activity in the room, and eventually be able to recognize objects, people,
and pets and incorporate that information into their actions.

Smart cars and driverless vehicles already have video capabilities.

Therapeutic assistants
One promising avenue for the future is specialized digital assistants, therapeutic assistants, to help people with mental and behavioral problems, to guide them towards more useful thinking and behavior, and also to monitor them and alert their mental health professionals or guardians to any problematic symptoms. They could also permit mental health professionals to guide the actions of the digital assistant.

Granted, this area has lots of ethical issues, a virtual minefield. Still, it does have real
potential, to actually help people lead better lives.

A more wide-ranging but more ethically challenging application would be for a plug-in
module for everyday digital assistants to detect when an otherwise normal user might be
exhibiting mental or behavioral symptoms which should be brought to the attention of a
mental health professional. In this scenario there would not be any explicit action by the
user or some health authority to enable such monitoring, although the setup for the
digital assistant might have a simple opt-out or opt-in configuration setting to monitor
for potential health concerns.

Parents with at-risk children could more readily make a decision to explicitly enable
such monitoring for their children or family members or relatives who they worry might
be at risk.

How Much of Robotics Is AI?

Jack Krupansky

Jan 20, 2018

Anything concerning robots and robotics is commonly lumped in with artificial intelligence (AI) these days, but how much of robotics is really AI? Is robotic hardware implicitly AI? Not all robots are smart robots with even a modest fraction of human-level intelligence. Should dumb robots be considered AI at all? Should robotic animals be considered AI? Industrial robots? Robotic prosthetic limbs? These and related questions will be explored in this informal paper.

Beyond this paper, these and many related issues are explored in greater depth in a
much larger (but still informal) paper, Untangling the Definitions of Artificial
Intelligence, Machine Intelligence, and Machine Learning. This paper is primarily an
excerpt from that paper, although some new material is presented as well.

Robotics is a rather broad category, not just humanoid, human-looking intelligent machines. The full range includes:

1. Human-like intelligent robots. More the realm of science fiction for now.
2. Human-like semi-smart machines. On the near horizon, a few here already.
3. Human-like dumb machines. Stiff, mechanical movements. Very limited
intellectual capacity.
4. Animal-like semi-smart machines. Some examples today. More on the near
horizon.
5. Animal-like dumb machines. Stiff, mechanical movements, although getting
better. Minimal function compared to comparable animals.
6. Non-mammal machines. No intelligence required. Varying degrees of
competence for motion and activity compared to comparable animals. Birds,
insects, reptiles, fish.
7. Industrial robots. Including computer numerical control (CNC) robotic arms. Precision motor control, but not the dexterity of true AI robots.
8. Warehouse robots. Great for moving material over fixed, gridlike patterns.
9. Household convenience robots. Niche functions. Precision movement. Precision
sensors. Mostly heuristic rather than true, human-level intelligence.
10. Driverless vehicles.
11. Smart cars. Automation and AI for selected, niche functions, from anti-lock brakes and cruise control to parallel parking, lane monitoring, and collision avoidance.
12. Prosthetic limbs. Arms, hands, legs, feet. No higher-order intellectual function,
but precision motor control, sensor feedback, and end effector control (e.g.,
grasping with fingers.)
13. Remotely-piloted vehicles. Including drones. Human is in the loop. Not truly
autonomous.
14. Robotic arms. For amplifying human motions and actions, or working in
physically challenging environments or at a distance. Including surgery. Human
in the loop. Not truly autonomous. But can require fine motor control and
dexterity.

And more. But that’s a fairly comprehensive range. At least it’s representative.

To be clear, a robot possessing a significant fraction of human-level intellectual capacity would certainly fall under the umbrella of artificial intelligence. Even a fair fraction of animal-level intelligence would qualify as well.

Whether or not a robot possesses intelligence, there is so much more to robotics than
intellectual capacity. Just to mimic the movement capabilities of simple animals or even
insects is a lot of effort, most of it not directly associated with the kind of intellectual
effort associated with the human mind.

A lot of robotics, particularly those aspects related to physical structure, movement, sensors, and manipulation of real-world objects, would be more appropriately referred to as artificial life (A-Life) than AI per se. In fact, robots could be designed to mimic animals such as reptiles and even insects, which would not normally be considered intelligent. Robotic dogs are popular, but their intellectual capacity is minimal.

Motion, movement, and manipulation of objects in the real world are a real challenge.
They require sophisticated software. They certainly qualify as artificial life, but whether
to consider them AI as well is a matter of debate. The mental aspects are more likely AI
(what do you wish to move, to where, and why), but the physical aspects not so much,
or at least a very gray area.

A key distinction of robotics from traditional, non-robotic AI systems is the fact that the
robotic system is continuously monitoring and reacting to the environment on a real-
time basis.

Much of robotics revolves around sensors and mechanical motions in the real world,
seeming to have very little to do with any intellectual activity per se, so one could
question how much of robotics is really AI.

Alternatively, one could say that sensors, movement, and activity enable acting on
intellectual interests and intentions, thus meriting coverage under the same umbrella as
AI.

In addition, it can be pointed out that a lot of fine motor control requires a distinct level
of processing that is more characteristic of intelligence than mere rote mechanical
movement.

Reader’s choice
In summary, the reader has a choice as to how much of robotics to include under the
umbrella of AI:

1. Only those components directly involved in intellectual activity.
2. Also sensors that provide the information needed for intellectual activity interacting with the environment.
3. Also fine motor control and use of end effectors (e.g., fingers.) Including
grasping fragile objects and hand-eye coordination.
4. Also any movement which enables pursuit of intellectual interests and
intentions.
5. Any structural elements or resource management needed to support the other
elements of a robotic system.
6. Any other supporting components, subsystems, or infrastructure needed to
support the other elements of a robotic system.
7. All components of a robotic system, provided that the overall system has at least
some minimal intellectual capacity. That’s the point of an AI system. A
mindless, merely mechanical robot with no intelligence would not constitute an
AI system.

In short, it’s not too much of a stretch to include virtually all of robotics under the rubric
of AI — provided there is at least some element of intelligence in the system, although
one may feel free to be more selective in specialized contexts.

The section of the Untangling paper entitled Artificial Life (A-Life) also has some
interesting insight on this matter.

Robotic prosthetics
How to categorize robotic prosthetic limbs is an interesting edge case.

Since a limb, even for a sapient creature such as a human, has no direct role in intellectual activity, it’s quite a stretch to call it AI by itself.

But when a robotic prosthetic limb is attached to a (presumably) human body, it acts as
if it were a real limb.

Since movement, positioning, fine motor control, end effector control (e.g., use of fingers), touch, pushing, hitting, kicking, grasping, and carrying are activities peripherally related to intellectual activity, it is not too much of a stretch to associate them with that intellectual activity, at least indirectly.

This makes it a fielder’s choice whether you want to categorize robotic prosthetic limbs
as AI per se.

Subsidiary mechanical and electrical activities that are needed to support the intellectual
intentions of the attached body could be considered at least tangential to the intentional
intellectual activity.

For the purposes of this paper, it is reasonably fair to conclude that robotic prosthetic
limbs could qualify as AI, unless they are particularly simplistic and very limited in
their function and reaction to stimuli.

They could be considered AI at least to the extent that they contain and depend on some
sort of embedded computer chip that participates in the interaction between sensors and
fine motor control.

How intelligent is your robot?
Intelligence of machines is rather limited today, so we haven’t had to fully grapple with
measurement of machine intelligence, but one can usually get a quick sense of how
intelligent an AI system or robot at least appears to be.

The point here is simply that if the robot is grossly lacking any significant sense of
intelligence, then it is questionable whether it could be considered AI.

On the flip side, if the robot has some interesting fraction of human level intelligence, it
would then qualify as AI.

There are really two distinct categories for judging intelligence:

1. Communication skills. Carrying on a conversation in natural language. The
Turing Test.
2. Intelligent behavior. Able to competently and adaptively carry out tasks which
would otherwise require human (or animal) intelligence, without any human
intervention required. Autonomous behavior.

What is intelligence?
Intelligence is discussed at much greater depth in the Untangling the Definitions of
Artificial Intelligence, Machine Intelligence, and Machine Learning paper.

A more succinct discussion can be found in the What Is AI (Artificial Intelligence)?
paper.

Just to quickly summarize, there are a variety of levels of intelligence and a variety of
elements of intelligence. Elements include:

1. Perception. The senses or sensors. Forming a raw impression of something in
the real world around us.
2. Attention. What to focus on.
3. Recognition. Identifying what is being perceived.
4. Communication. Conveying information or knowledge between two or more
intelligent entities.
5. Processing. Thinking. Working with perceptions and memories.
6. Memory. Remember and recall.
7. Learning. Acquisition of knowledge and know-how.
8. Analysis. Digesting and breaking down more complex matters.
9. Speculation, imagination, and creativity.
10. Synthesis. Putting simpler matters together into a more complex whole.
11. Reasoning. Logic and identifying cause and effect, consequences and
preconditions.
12. Following rules. From recipes to instructions to laws and ethical guidelines.
13. Applying heuristics. Shortcuts that provide most of the benefit for a fraction of
the mental effort.
14. Intuitive leaps.
15. Mathematics. Calculation, solving problems, developing models, proving
theorems.
16. Decision. What to do. Choosing between alternatives.
17. Planning.
18. Volition. Will. Deciding to act. Development of intentions. When to act.
19. Movement. To aid perception or prepare for action. Includes motor control and
coordination. Also movement for its own sake, as in communication, exercise,
self-defense, entertainment, dance, performance, and recreation.
20. Behavior. Carrying out intentions. Action guided by intellectual activity. May
also be guided by non-intellectual drives and instincts.

That full list is way too much to expect from robots at this juncture, but even a very
modest fraction of those elements could still yield a robot that can act reasonably
intelligently, by today’s standards.

Dumb robots
Beyond intelligent and seemingly smart robots and relatively dumb robotic prosthetic
limbs, we also have relatively dumb robots.

For our purposes here, a dumb robot is simply one that does not have any interesting
degree of intelligence or intellectual capacity.

It’s a fielder’s choice whether robots designed to mimic animals should be considered
intelligent, since they may or may not actually possess the ability to mimic animal
intelligence.

For our purposes here, a robot capable of mimicking some significant degree of animal
intelligence could be considered under the category of AI.

For example, if the robot can recognize, interact with, or navigate around objects, those
abilities could indeed be considered a reasonable subset of intelligence.

But if the robot’s movements and actions are strictly pre-programmed, rote, and
mechanical, that would not seem sufficient to warrant being considered AI.

A computer numerical control (CNC) robotic arm would not seem to qualify as AI by itself.

But if a CNC robotic arm had additional features, such as recognizing different types of
objects or being able to deftly manipulate fragile objects, the door to characterization as
AI would be opened. It would still all depend. It’s a gray area. But absent such
advanced features, a CNC robotic arm would not seem to qualify as AI.

That said, if the same CNC robotic arm could be used in different applications and
deployments as either a dumb, rote, mechanical arm or alternatively as a smart, human-
like hand, then the underlying robotic arm/hand technology could credibly be
considered AI since it is enabling the AI application of the technology.

In short, a dumb robot is not AI per se, but the addition of even a single AI-related
function can indeed make an otherwise dumb robot or component of robot technology a
candidate for being considered AI.

That said, it isn’t always possible for a mere mortal to judge what is happening under
the hood of a robot. Sometimes even sophisticated intelligence can seem trivial, while at
other times a function that seems fairly intelligent may be based on simple, mechanical,
heuristic technology that doesn’t have any true AI at all.

In short, we are forced to accept the twin propositions that sometimes a dumb machine
will seem smart and sometimes a smart machine will seem dumb.

The reality is simply that if the robot seems smart it will be considered AI, and if the
robot seems dumb it will be considered to not be AI.

Is it simply automation or is intelligence required?


The whole point of a robot is to accomplish some task that would otherwise require the
involvement of a person (or possibly an animal.) The question is whether that
involvement was simply a rote, mechanical, manual task or requires something
approximating human-level intelligence (or animal intelligence, at a minimum.)

A traditional industrial robot or even a modern warehouse robot doesn’t require any
obvious intelligence. Some specific instances may, but not as a general proposition.

But if the tasks require fine motor control, dexterity, handling of fragile objects, or
visual recognition of objects, the line is being crossed from mere automation to
intelligence.

A robotic vacuum cleaner is in the gray zone, not having any dramatic level of
intelligence, but the ability to detect and navigate around obstacles suggests at least a
borderline level of intelligence.

A lot of judgment will still be required, and a lot of cases will remain a fielder’s choice
as to whether the robot in question is mere automation of manual tasks or exhibits at
least minimal intelligence.

Computer vision
Any machine which has a video camera with software to detect and identify objects or
scenes, and which autonomously takes action based on what it detects and identifies, is a
decent candidate for being classified as AI.

Robotic sonar
Any machine which has a sonar sensor that can be used to detect objects using sound
waves and then take action based on location and distance of objects is a decent
candidate for being classified as AI.

Some applications may be too simple to consider AI, but to the degree that a significant
level of processing is performed using the sonar data the AI label would be more
warranted.
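The detect-and-act behavior described above can be sketched in a few lines. This is a minimal illustration only; the speed-of-sound constant is a physical fact, but the function names and the 0.5 m avoidance threshold are invented for this example.

```python
# Minimal sketch of sonar-driven obstacle avoidance.
# Assumptions: function names and the 0.5 m threshold are illustrative.

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 C

def echo_to_distance_m(echo_time_s: float) -> float:
    """Convert a round-trip sonar echo time to a one-way distance in meters."""
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0

def choose_action(distance_m: float, threshold_m: float = 0.5) -> str:
    """Pick an action based on how close the detected object is."""
    if distance_m < threshold_m:
        return "turn"     # object too close: steer away
    return "advance"      # path clear: keep moving

# Example: an echo returning after 2 ms implies an object about 0.34 m away.
distance = echo_to_distance_m(0.002)
action = choose_action(distance)
```

The point of the sketch is that even this trivial processing of sonar data is a decision informed by sensing; the more elaborate the processing between echo and action, the more warranted the AI label becomes.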

Specialized hardware
Generally speaking, most hardware components would not be considered AI, but an
exception might be appropriate for any components which were specially designed to
facilitate robotic applications, especially when these specialized components fairly
directly facilitate intelligence.

This could include, for example:

1. Better sensors which provide the robot’s mind with more refined sensory data.
2. Better motors and motor controls, such as facilitating finer motor control for
greater dexterity and handling of fragile objects.
3. Lighter weight and stronger structural materials which permit more intelligent
motion.
4. More efficient or cheaper batteries and motors which facilitate more adventurous
activity.
5. Customized processor, memory, and control chips which enable greater
intellectual activity.
6. Smaller, cheaper, and more efficient processor, memory, and control chips that
enable design, construction, and operation of robots that might be too expensive
or unwieldy with off-the-shelf electronic components.

Artificial life (A-Life)


Robots are a specialized subcategory of a larger category called artificial life or A-Life. The
categories within A-Life are:

1. Mechanical robots. Electromechanical, actually.
2. Artificial biological lifeforms. Genetically engineered. May or may not closely
follow natural genetic lifeforms. May or may not closely follow natural carbon-
based lifeforms. This area is also known as synthetic life.
3. Virtual reality worlds. No physical manifestation. Don’t even have to follow
the laws of physics or deal with resource limitations of the real world.

Most of the questions and points made in this paper about robotics would apply to the
full category of A-Life.

A-Life is generally considered a branch of AI, at least to the extent that concerns about
the role of intelligence are considered.

Generally, the point of A-Life is to support intelligent activity that merely happens to be
artificial in some sense.

For more on A-Life, see the Artificial Life (A-Life) section in the Untangling the
Definitions of Artificial Intelligence, Machine Intelligence, and Machine Learning
paper.

Biologically inspired technology


Leading edge robotics and A-Life research and development is pursuing approaches that
are inspired by natural biological systems.

Of course, even a two-legged walking robot with two arms, two hands, and two eyes
can be said to be biologically inspired.

The main thrust of bio-inspired technology is to look at, model, or mimic the muscles,
cells, materials, structures, and control systems of natural biological systems.

And the mind as well, both the human mind and the minds or at least brains of lesser
animals.

And social systems and social behavior of the natural biological world as well.

More properly, bio-inspired technology should be associated with the larger A-Life
category (artificial life), of which robotics (mechanical or electromechanical robots) is
only one part.

Generally, any technology which was specially designed to mimic or model biological
systems of the natural world would get a free pass to be considered under the category
of A-life. Whether it should also be categorized as AI would depend on the degree to
which it is either directly involved with intelligence (intellectual activity) or aids or
facilitates intellectual activity or carrying out actions that resulted from intellectual
activity. And of course there is plenty of room for discretion as to where to draw the
line on how closely related the bio-inspired activity is to any intellectual activity.

Bio-inspired technology might be referred to using such terms as:

• Biologically inspired robotics or bio-inspired robotics
• Biologically inspired technology or bio-inspired technology
• Biologically inspired engineering or bio-inspired engineering
• Biologically inspired computing or bio-inspired computing
• Biologically inspired systems or bio-inspired systems
• Biologically inspired design or bio-inspired design
• Biologically inspired cognitive architectures or bio-inspired cognitive
architectures
• Biologically inspired vision systems or bio-inspired vision systems
• Biologically inspired computer vision or bio-inspired computer vision
• Biologically inspired products or bio-inspired products
• Biologically inspired product design or bio-inspired product design
• Biologically inspired microrobots or bio-inspired microrobots
• Biologically inspired algorithms or bio-inspired algorithms
• Biologically inspired materials or bio-inspired materials
• Biologically inspired learning or bio-inspired learning
• Biologically inspired artificial intelligence or bio-inspired artificial intelligence
• Biologically inspired intelligence or bio-inspired intelligence

Criteria
To be categorized as AI, the robotic technology would need to satisfy at least one of the
following criteria:

1. Contains embedded intelligence (intellectual capacity) of its own.
2. Provides fine motor control, under the control of an intelligent entity.
3. Provides gross movement or positioning, under the control of an intelligent
entity, to enable the intelligent entity to observe the environment or act on its
intellectual intentions and plans.
4. Provides sensor input to an intelligent entity to inform its intellectual activities.
Including vision, sound, and touch.
5. Provides end effector function (e.g., fingers grasping), under the control of an
intelligent entity.
6. Provides dexterity, under control of an intelligent entity.
7. Provides supporting or enabling infrastructure for any of the above.
8. Is a critical component needed to construct an intelligent robot.
9. Is a component which was specially designed for robotic applications — and
facilitates intelligence of the robot.
10. Falls clearly under the A-Life (artificial life) umbrella.
11. Is clearly bio-inspired.

Even then, some amount of judgment is required. For example:

1. Is a dumb video camera AI?
2. Is a pair of pliers AI?
3. Is a battery AI?
4. Is an electrical outlet AI?
5. Is a power supply AI?
6. Is a piece of wire AI?
7. Is a bolt AI?
8. Is a wheel, roller, or pulley AI?
9. Is a standard computer chip or memory chip AI?

Generally, the answer is that there needs to be a fairly obvious, direct, or at least not too
indirect connection between the technology in question and at least some sort of
intellectual activity.
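The criteria above can be treated as a simple checklist: a technology is a candidate for the AI label if at least one criterion applies. The sketch below illustrates this; the flag names and the any-one-criterion rule are my own shorthand for the list above, not a standard taxonomy.

```python
# Hypothetical sketch: the criteria from the list above as boolean flags.
# Flag names are illustrative shorthand, not from any standard.

AI_CRITERIA = (
    "embedded_intelligence",      # 1. intellectual capacity of its own
    "fine_motor_control",         # 2
    "gross_movement_for_intent",  # 3
    "sensor_input",               # 4. vision, sound, touch
    "end_effector_function",      # 5
    "dexterity",                  # 6
    "supporting_infrastructure",  # 7
    "critical_component",         # 8
    "robot_specific_component",   # 9
    "a_life",                     # 10
    "bio_inspired",               # 11
)

def is_ai_candidate(technology: dict) -> bool:
    """Return True if the technology satisfies at least one criterion."""
    return any(technology.get(flag, False) for flag in AI_CRITERIA)

# A bolt meets no criteria; a camera-plus-recognition package meets at least one.
bolt = {}
camera_package = {"sensor_input": True}
```

Of course, a real judgment is not this mechanical; the point is only that a single satisfied criterion opens the door, which is exactly where the need for further discretion begins.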

Scoring?
Conceptually, one could render a score for how close a given piece of technology comes
to being considered AI.

Basic, off-the-shelf hardware components, like bolts, batteries, and pieces of metal
would get a score of zero.

A fully-functional, human-like robot including natural language functions comparable
to an intelligent digital assistant would get a score of 100%.

Or even a dog-like robot could score near 100% if its function were close enough to that
of a real dog.

A relatively dumb robotic prosthetic limb might get a score down near 25%, while a
more advanced limb with dexterity and smooth movement that is easily controlled by
the wearer might score 50% or even 75% or higher.

This idea of scoring hasn’t been pursued any deeper, to the best of my knowledge, but
seems promising.
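One very crude way to realize the scoring idea is to assign weights to a handful of capabilities and sum them into a 0–100 score. The capability names and weights below are invented purely for illustration, assuming a weighted-checklist approach.

```python
# Illustrative sketch of the scoring idea: weighted capability flags summing
# to a 0-100 score. The capabilities and weights are invented assumptions.

CAPABILITY_WEIGHTS = {
    "natural_language": 30,        # conversation, digital-assistant functions
    "object_recognition": 20,
    "autonomous_navigation": 20,
    "dexterity": 15,
    "fine_motor_control": 15,
}

def ai_score(capabilities: set) -> int:
    """Sum the weights of the capabilities the system exhibits (0 to 100)."""
    return sum(w for cap, w in CAPABILITY_WEIGHTS.items() if cap in capabilities)

# A bolt scores 0; a dexterous prosthetic limb lands somewhere in the middle;
# a fully capable humanoid robot approaches 100.
bolt_score = ai_score(set())
limb_score = ai_score({"dexterity", "fine_motor_control"})
```

Any real scoring scheme would need far more dimensions and far more careful weighting, but even this toy version captures the intuition that AI-ness is a matter of degree rather than a binary label.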

Summary
The mere labeling of a technology as robotic does not immediately inform us as to
whether the technology should be categorized as AI.

In short, if the technology directly results in intelligent behavior, directly informs such
behavior, or fairly directly enables such behavior, then it seems fair to consider it under
the umbrella of AI.

Vocabulary of Knowledge, Thought, and Reason

Jack Krupansky

Nov 3, 2017 · 78 min read

This informal paper attempts to clarify the meanings and usages of the terms related to
knowledge, thought, and reason. Too frequently, public discourse is horribly marred by
really sloppy vocabulary and misuse of terms.

Part of the problem is that traditional definitions are somewhat vague and sloppy, even
in a decent dictionary.

Actual usage has been even sloppier, with a lot of these terms treated and used as
synonyms.

It is not the intention of this paper to give a full treatment of thought and reason, but
only enough to support the full treatment of knowledge, since knowledge, the primary
focus of this paper, is so dependent on thought and reason.

Meaning
Knowledge, by definition, includes meaning, at least basic meaning and various levels
or layers of meaning.

There may be additional levels or layers beyond those included in knowledge per se,
such as subjective meaning that is not shared by everyone who shares the basic,
objective meaning of particular knowledge.

Or layers of subjective meaning that are shared by some individuals or groups but not
by others.

This paper takes the position that all of those layers or levels of meaning are still by
definition part of knowledge, even though they may not be shared by all who possess
that same knowledge. That may feel a little odd, but the simple fact of life is that
knowledge possessed by more than one person is not necessarily exactly 100.000%
identical for each of those persons. Even if they all read the same exact definition, they
may each interpret it slightly differently.

Truth
Truth itself is a very slippery topic. It is indeed touched on in a variety of ways in this
paper, but not in great depth. At best, beliefs, facts, and knowledge seek to approximate
truth, but frequently fall short or even entirely miss the mark. In any case, aspects of
truth are included in this paper to the extent that they hinge on knowledge itself.

Various theories of truth, such as correspondence, coherence, realism, and pragmatism
are not covered per se, but elements of all such frameworks that have filtered into
everyday and professional discourse will of course be covered, just not tied directly to
any particular framework of truth.

It is worth noting that truth and knowledge are not strict synonyms. The nuances are
beyond the scope of this paper. Sometimes ultimate truth is not accessible by even the
best of human intentions. And sometimes interpretations are more important in human
discourse than actual reality.

Reality
Reality refers to all that exists, the physical world, the natural world, life, human life,
human social structures, and any artifacts created by the efforts of humans.

Technically, reality would include human knowledge, but simply as any other physical
artifacts, rather than asserting that the ideas embedded within human knowledge are
real per se.

Imagination or the products of imagination would not be considered reality unless or to
the extent the imagined products are actually created in the real world.

Truth and reality are not strict synonyms. There are a number of theories concerning the
relationship between truth, knowledge, and reality. The nuances are beyond the scope of
this paper, which is focused on knowledge. But certainly elements of truth and reality
will be included as they relate directly to knowledge.

Generally, the best we can hope for is that our knowledge approximates reality.
Sometimes we can get very close or even occasionally happen to be exactly correct, but
very frequently we are far off base.

Popper’s three worlds for reality


Philosopher Karl Popper had a three-world model of how we relate to reality:

1. World 1. The physical world. Reality. Animals and people exist in the physical
world, but merely as objects which happen to move around, and their knowledge
exists only as electrical signals and markings on objects, devoid of any meaning
in World 1 per se.
2. World 2. The mental world. Our models of what we perceive the physical world
to be and how we think about it. Our beliefs, models, theories, feelings, creative
urges, and imagination.
3. World 3. Knowledge and media artifacts which represent our World 2 beliefs,
which we believe correspond to the objective reality of the physical world. Real-
world objects which are products of the human mind.

Knowledge exists in our minds, in World 2.

Knowledge can be represented, communicated, and shared as World 3 knowledge
artifacts and media artifacts.

Relation to intelligence
Knowledge, thought, and reason are inexorably intertwined with intelligence. As such, a
fair number of terms related to intelligence are included in this paper. But not all terms
related to intelligence will be included here, only those which reasonably intersect with
knowledge, thought, and reason per se.

Relation to logic
Knowledge and reason certainly relate to logic. A modest number of terms related to
logic are included in this paper, but not all terms related to logic will be included here,
only those which are reasonably necessary to understand and discuss knowledge and
reason per se.

Relation to science
Knowledge, thought, and reason are also very intertwined with science. A modest
number of terms related to science are included in this paper, but not all terms related to
science will be included here, only those which are reasonably necessary to understand
and discuss knowledge, thought, and reason per se.

Relation to epistemology
Epistemology is the branch of philosophy concerned with the nature of knowledge. The
terms discussed in this paper will certainly overlap with epistemology, but there is no
intention to fully explore epistemology here, just to the extent that it hinges on the
vocabulary of knowledge, thought, and reason.

Relation to metaphysics
Metaphysics is the branch of philosophy concerned with the nature of existence. The
terms discussed in this paper will certainly overlap with metaphysics, but there is no
intention to fully explore metaphysics here, just to the extent that it hinges on the
vocabulary of knowledge, thought, and reason.

Deeper discussion of existence and essence are contained in a companion paper, Model
for Existence and Essence.

Relation to ethics
Ethics is the branch of philosophy concerned with the nature of human conduct and social
interaction. The terms discussed in this paper will certainly overlap with ethics, but
there is no intention to fully explore ethics here, just to the extent that it hinges on the
vocabulary of knowledge, thought, and reason.

Relation to communication
Communication is essential to the sharing of knowledge, but the means and methods of
communication are neutral with respect to the actual knowledge and meaning
transmitted and received via communication. A subset of the terms related to
communication will be included in this paper, but only to the extent that they directly
hinge on the knowledge, thought, and reason itself.

Relation to media
Media and communication are tightly related.

Media has only one real purpose, to enable and facilitate communication.

Actually, media has two roles, to communicate knowledge and to record knowledge, but
the recording of knowledge is for the purpose of communicating that knowledge.

Books, papers, audio recordings, and videos simultaneously record and communicate
knowledge, content, and meaning.

This paper does not endeavor to provide a full treatment of media, but simply to cover it
enough to discuss its relationship to knowledge.

Relation to language
Language is clearly essential to sharing of knowledge, but it is meaning that is most
relevant here, not the syntax, grammar, punctuation, or spelling and pronunciation of
words. A subset of the terms related to language will be included in this paper, but only
to the extent that they directly hinge on issues related to meaning and knowledge.

Relation to knowledge representation and knowledge artifacts
Knowledge representation is an important, even essential, matter.

Knowledge is commonly represented as knowledge artifacts such as words in language
and imagery and other forms of media, embodied in media artifacts.

Communication requires knowledge representation and knowledge artifacts and media
artifacts.

But a full treatment of knowledge representation and knowledge artifacts is well beyond
the scope of this paper. Knowledge representation and knowledge artifacts are treated
here only to the degree needed to provide a full treatment of knowledge.

Artificial intelligence (AI)


The notions of human knowledge and intelligence intersect with machine intelligence,
also known as artificial intelligence or AI. The intersection is explored in greater depth
in a companion paper, Untangling the Definitions of Artificial Intelligence, Machine
Intelligence, and Machine Learning.

To be clear, the vocabulary of knowledge elaborated here is equally applicable to the
human and machine domains.

Domains of truth
There are many distinct domains of knowledge, each having its own distinctive
concepts and vocabulary, even if they all still share the same basic natural language
(English or whatever.) Truth and meaning in one domain don’t necessarily mean the
same thing in different domains.

This matter is discussed in much greater detail in a companion paper, Domains of
Truth.

Entities
This paper takes an expansive view of entity, treating the concept as referring to
anything that could be referred to in the propositions of knowledge. More than solely
tangible objects, entity is used to refer to intangibles as well, including ideas and
concepts — anything someone might seek to refer to in a statement or proposition.

A partial list of things that would be considered entities in the vocabulary of this paper
include:

• Object
• Inanimate object
• Machine
• Living creature
• Person
• Place
• Thing
• Idea
• Concept
• Thought
• Decision
• Plan
• Topic
• Area
• Event
• Matter
• Action
• Phenomenon
• Situation
• Environment
• Conditions
• Computational entity. Computer software program, object, or data. Robotic or
artificially intelligent computer software.
• Anything of unspecified or even vaguely specified nature that has some sort of
significance.
• A group of closely related entities can also be considered collectively as a larger
entity, such as a family, partnership, team, business, nonprofit organization, or a
country.
• Entities with some characteristics in common can constitute a category or class,
which itself is an entity.
• Qualities, characteristics, attributes, details, and metadata of entities are
themselves entities.

Generally, an entity is anything that would be referred to using a noun.

All of that said, the term entity has a more strict meaning for entities which have a
significant sense of independence, in contrast with subsidiary entities which have a
significant meaning only within the context of a larger, umbrella entity.

Short of details
The goal here is to define the vocabulary needed to talk about knowledge, thought, and
reason, but not to detail knowledge about anything other than knowledge, thought, and
reason themselves. As such, the vocabulary stops at the level of the concept of detail, so
that language needed to elaborate detail is excluded, such as:

• Color
• Size
• Height
• Width
• Depth
• Weight
• Texture
• Shape
• Structure
• Purpose
• Function
• Substance
• Emotions
• Intentions
• Gender
• Types of objects
• Types or forms of life
• Details of life
• Specific actions
• Specific activities
• Specific values
• Specific senses
• Specific intentions
• Specific emotions
• Specific feelings
• Specific drives
• Specific attitudes
• Details of logic
• Details of truth
• Structure of the universe.
• Nature of the physical world.

Deeper discussion of existence and essence are contained in a companion paper, Model
for Existence and Essence.

This paper does not delve into:

• Specific entities.
• Specific classes or types of entities.
• Specific details or metadata of classes of entities.
• Specific details of specific entities.
• Any domain-specific entities.
• Any domain-specific metadata.

The goal here is to be as general and abstract as possible.


Entity details — metadata
Details of entities are referred to as metadata or entity metadata in this paper.

The point of this paper is not to define the details of entities, but to treat such details in
an abstract manner and simply to acknowledge that entities have details which
themselves are referred to as entities and that those details are referred to as entity
metadata in an abstract sense — no detail to be described in this paper.

Sapient entity — people and robots


This paper aims to address knowledge, thought, and reason from both the perspective of
people or human beings, and intelligent robots or artificial intelligence (AI) as well. The
common term for both people and intelligent robots is sapient entity. Sapient meaning
intelligent or wise, and entity meaning a person or object.

Many terms defined in this paper will use this term, sapient entity, rather than human
being, person, people, or individual.

Sentient entity — animals and dumb robots


Not all perception and communication strictly requires human-level intelligence.
Animals and dumb robots can both perceive, sense, feel, and react to the world around
them. That ability is known as sentience.

People and intelligent robots are sapient (intelligent and wise) as well, but some forms
of perception and basic information about the world only require sentience rather than
full-blown sapience.

Your personal robot could whip out an umbrella for you when it starts raining, but that
requires only sentience rather than sapience.

Why not simply quote from the dictionary?


As stated in the introduction, many terms in the vocabulary of knowledge, thought, and
reason are vague, sloppy, and frequently misused. The raw dictionary simply adds to the
confusion. This paper attempts to correct the flaws of the dictionary and common usage
by providing term definitions that are tailored and streamlined to focus on the needs of
engaging in concise and accurate discourse about knowledge, thought, and reason.

Basic terms for knowledge, thought, and reason


Before diving into the full, detailed list of terms, here are the basic terms for knowledge,
thought, and reason, in alphabetical order:

1. Assertion. An assumption that is strongly believed to be true.
2. Assumption. A proposition which is presumed to be true, without offering proof.
3. Basic fact. A proposition which can be verified by simple, direct observation,
measurement, simple calculation, a statistic, or looking it up in commonly
accepted reference materials. No reasoning or faith required. Should also include
the details and method of the observation, measurement, calculation, or
reference and their provenance (source.) See also: fact, conclusion.
4. Belief. One or more related propositions which are believed to be true. Includes
all meaning associated with those propositions. A willingness to accept or agree
with a purported fact, not necessarily backed by a strong justification. Includes
all knowledge and facts which a sapient entity accepts as true, even if the truth
of such matters is more firmly supported than the individual who believes in
them can demonstrate themselves. All knowledge is a belief, while not all
beliefs are knowledge.
5. Claim. A statement or proposition made as if fact, and not allowing for any
possibility that the claim might not be true.
6. Communicate. Communication. The process of conveying beliefs, opinions,
information, knowledge and meaning, or feelings between sapient entities
(people or robots), represented in some language, nonverbal gestures, or
nonlinguistic vocal expression. May be direct, from sapient entity to sapient
entity, or indirectly via knowledge artifacts (e.g., books, videos, or other media
artifacts.)
7. Concept. An organized and formalized notion with associated deep meaning
developed from thought. Beyond an unorganized thought or rough idea. May be
represented as a term or one or more propositions.
8. Conclusion. A proposition that is considered true as a result of reasoning and
possibly experimentation. Follows from premises. A conclusion is the
knowledge produced by reasoning. It is the objective of reasoning. See also:
foregone conclusion.
9. Conjecture. A proposition which may be believed to be true on reasonable
argument, but which has not been substantiated.
10. Data. Raw information without any significant structure, significance, or deep meaning.
11. Definition. The basic meaning given for a word or term.
12. Description. Elaboration of observable and measurable details or characteristics
of an entity. See also: adjective.
13. Detail. Some aspect of an entity. Synonyms are aspect, feature, attribute,
characteristic, quality, and property. Part of metadata for an entity. See also:
adjective, minutiae, technicality.
14. Entity. An object, person, place, thing, idea, concept, thought, decision, plan,
topic, area, event, matter, action, phenomenon, situation, environment,
conditions, or anything of unspecified or even vaguely specified nature that has
some sort of significance. Details and statistics about an entity are themselves
entities, although in a more strict sense an entity would tend to have a relatively
independent existence rather than being wholly dependent on a larger entity.
Something to be referred to in a statement, proposition, or thought, commonly
using a noun. Commonly a person, place, or thing. May be a computational
entity. A group of closely related entities can also be considered collectively as a
larger entity, such as a family, partnership, team, business, nonprofit
organization, or a country. Animals, people, organizations, and robots are
entities. Entities with some characteristics in common can constitute a category
or class, which itself is an entity. Details about an entity are referred to as entity
metadata.
15. Evidence. Basic evidence, fact, testimony, argument, or proposition, or the
collection of basic evidence, facts, arguments, testimony, and propositions
which supports a proposition or conclusion. May not be definitive proof. May be
weak or strong. Alternatively, a widely accepted conclusion. See also:
circumstantial evidence, direct evidence.
16. Explanation. The causal factors for a phenomenon or event — focus on why and
how rather than merely what happened. Description may be included with an
explanation, but is incidental to the causal factors.
17. Express. Expression. The ability of a sapient entity to communicate or convey
knowledge, meaning, and feelings, commonly using language and gestures,
either directly to other sapient entities such as with speech or through the
creation of knowledge artifacts such as written text that can later be examined
and understood by a sapient entity.
18. Fact. Either a basic fact or a conclusion that is relatively widely recognized and
accepted as true. Should also include the reasoning, evidence, and justification
in support of the fact, as well as its provenance (source.) Alternatively, a belief
that is part of general knowledge, which is presumed to be fact but whose
provenance is unclear. See also: asserted fact.
19. Hypothesis. A proposition made with an expectation of evaluation as soon as
possible, typically to validate a theory.
20. Hypothetical. Possible or suggested entity or proposition. May or may not be an
actual entity or proposition. See also: naive hypothetical.
21. Information. A collection, sequence, or structure of symbols and images. May be
a representation of knowledge and meaning. Knowledge stripped of its
significance or meaning. Sometimes used as a synonym for knowledge.
22. Just plain wrong. An emotional subjective reaction to a statement which is
considered unacceptable and treated as absolutely false without question. The
statement is likely to be false, but the reaction may simply indicate a
disagreement over interpretation.
23. Justification. Reasoning and evidence used to arrive at a conclusion. May be
weak or strong.
24. Justified true belief. JTB. Belief with strong enough justification to warrant
status as knowledge. A belief that is justified and happens to be true.
25. Knowledge. What is known by one or more sapient entities, including both
information and skills. What is believed to be true. True beliefs. More formally,
justified true beliefs (JTB). Alternatively, the sum total of the beliefs, facts, and
memories of an individual. Or all individuals. Or all members of a group.
Includes meaning, to some degree, but there may be additional levels of
meaning based on interpretation and context in which the knowledge is
considered. May be detailed knowledge, limited knowledge, or only casual
knowledge. May be objective knowledge or subjective knowledge.
26. Language. The means by which knowledge and meaning can be represented by a
sapient entity, either to simply record that knowledge and meaning, or to
communicate it to another sapient entity. Language can be spoken or written or
transmitted in electronic form. See also: writing, communication.
27. Meaning. The understanding, significance, and feeling that a sapient entity
associates with knowledge, a concept, feeling, or an event. May have any
number of levels, both shallow and deep. The simplest meaning being the
dictionary meaning of a term. See also: multiple meanings.
28. Measurement. An assessment of the number of units of some physical quantity
such as size, weight, or length.
29. Object. Something that exists or at least appears to have form, substance, shape,
or can be detected in some way, or can be experienced with the senses or
imagination, or manipulated by a computer, either as a real-world object or an
imaginary object, such as a media object, mental object, or computational object,
and can be distinguished from its environment. See also: entity, a subset of
which are objects. Whether liquid and gaseous matter should be considered to be
objects is debatable, but they are under this definition. A storm could certainly
be treated as an object even though it consists only of air and water.
Alternatively, the entity at which an action is being directed — see also: subject.
30. Observation. What can be seen, heard, felt, or otherwise directly experienced by
a sentient entity.
31. Promise. Commitment, assurance, assertion, or claim that some entity will be in
a designated state or some proposition will remain or become true for some
definite or indefinite point or period of time in the future. Can vary in clarity or
specificity from vague to crystal clear and precise.
32. Proof. Evidence which is sufficient to indicate that a conclusion is justified.
33. Proposition. A statement which may or may not be true. Including the meaning
of the statement.
34. Reason. Abstract process of reasoning through rational thought, to reach a
conclusion, result, goal, decision, judgment, assessment, understanding, or other
outcome that is thoroughly and convincingly justified by the reasoning process.
Alternatively, a proposition which provides specific support for an argument,
conclusion, or explanation for a fact. Alternatively, a credible explanation,
ground, or motive for an action or belief, as opposed to a mere excuse which
may be based on nothing more than emotion.
35. Sapient. Intelligent, capable of wisdom. Primarily people, but may include some
but not all robots.
36. Sentient. Able to perceive, feel, and experience the real world. Includes animals
and robots.
37. Statement. Any declarative sentence in natural language. No implications as to
its truth per se. Includes the meaning of the statement.
38. Strong belief. A belief that one has a lot of passion for, although that passion
may or may not be matched with an equally strong justification.
39. Theory. A coherent explanation of a phenomenon, capable of fully explaining
the phenomenon, consistent with past observations, and able to predict future
observations. Alternatively, and more loosely, a proposed explanation for some
matter, such as who or what caused a particular outcome, without necessarily
offering definitive proof of that explanation.
40. Truth. Truth of propositions or truth of existence. A proposition that is true, in
accord with reality. Reality as it exists.
41. Wisdom. Knowledge, experience, and judgment which permit sound reasoning
and intelligent behavior.

Terms related to knowledge, thought, and reason


The terms related most directly to knowledge, thought, and reason are listed here in
alphabetical order:

1. Accept on faith. Acceptance of a belief on the word of another with absolutely
no reliance on evidence, proof, reason, or justification. See also: faith.
2. Abstraction. A concept which represents more than one entity, either because the
concept represents what the entities have in common, or that the entities are
parts of a whole. Excludes details which are not in common for the former. The
former is a generalization abstraction, the latter an aggregation abstraction.
3. Acceptance. Willingness to go along with a claim that a proposition is true, possibly
without proof and possibly with reservations. See also: agreement.
4. Account. Synonym for story or record. Alternatively, the cause, origin, or
explanation for some entity.
5. Action. Movement or motion of an entity.
6. Activity. A set of actions that a sentient entity engages in for some purpose.
7. Actual truth. The real truth of any matter, in contrast with speculation,
perception, or interpretation. Synonym for ultimate truth. Close synonym for
ground truth, but ground truth can still be only an approximation of actual or
ultimate truth. See also: eternal truth, ground truth, objective truth. May still be
subjective and may not be eternal.
8. Adjective. One or more words representing an attribute, property, or
characteristic of an entity. See also: noun, verb.
9. Aggregation abstraction. An abstraction (concept) which represents more than
one entity which are parts of a whole.
10. Agreement. Concurrence with the judgment of others on some matter or
proposition. Alternatively, a commitment between two or more entities to act or
behave in a designated manner in the future.
11. Allegation. Synonym for claim, assertion.
12. Alternative fact. Fact based on a different source, perspective, or reasoning
process. Alternatively, euphemism for a false statement or lie. See also: disputed
fact.
13. Ambiguity. Ambiguous. Statement or proposition which may have more than
one interpretation as to its meaning, particularly conflicting interpretations as
opposed to mere nuances of interpretation. Alternatively, an entity which may
have more than one interpretation as to its meaning, particularly conflicting
interpretations as opposed to mere nuances of interpretation. Alternatively, a
proposition which explicitly represents the inherent ambiguity of some matter
that is ambiguous by its nature no matter how unambiguous statements and
propositions about it are. See also: unambiguous.
14. Analogy. Comparison between two things, either for the purpose of illustrating
or explaining something, or as a form of argument to justify a conclusion.
15. Analyze. Analysis. Study a matter to determine the facts.
16. Anecdote. Simple, short story about a single incident of what is considered a
larger class of events.
17. Anecdotal evidence. Asserting a larger truth by generalizing from one or more
anecdotes. Alternatively, the specific evidence from those particular anecdotes.
18. Answer. Response to a question. See also: exchange.
19. Antithesis. Opposite or contrast to something, especially a concept.
20. Any. A nonspecific entity or member of a class or category. Alternatively, a
nonspecific number of entities or members of some class or category.
21. Anything. Any entity.
22. Apocryphal. Indicates that a story or proposition has dubious authenticity but is
still widely circulated and treated as true due to its appeal.
23. Apprehension. Understanding, comprehension, or grasp of something.
24. Approximate. Approximation. Specification or quantification of something
without full precision or exact accuracy. Close but not exact. See also: estimate,
heuristic.
25. Area. Spatial region or topic.
26. Argument. Loose, casual reasoning. A list of reasons cited to support some
belief or conclusion. Alternatively, a single reason or step of a larger argument.
An argument may be weak or strong, in whole or in any part. Commonly
rhetorical and possibly even heated or inflammatory. Alternatively, a reference
to reasoning as well.
27. Argumentation. Process of persuading another party to accept a belief or
conclusion by the presentation of a sequence of arguments, commonly to
influence a decision or judgment. See also: reason.
28. Article. A discourse concerning an entity.
29. Artifact. An object created by a sapient entity for some utilitarian purpose. See
also: knowledge artifact, media artifact.
30. Aspect. A clearly discernible subset of something, some entity. Synonyms are
detail, feature, attribute, characteristic, quality, and property. Part of metadata
for an entity.
31. Asserted fact. A proposition that is being asserted as true, but without
justification that fully validates its veracity. May not have any justification other
than the assertion of its validity.
32. Assertion. An assumption that is strongly believed to be true.
33. Assessment. Conclusion or propositions about some matter based on reasoning,
including analysis and judgment.
34. Assumption. A proposition which is presumed to be true, but without offering
proof or a strong justification.
35. Attribute. Synonym for detail. Part of metadata for an entity.
36. Authoritative source. Source of information or a proposition which is widely
recognized as being reliable for knowledge in some area. See also: official
source.
37. Authority. Person or organization with the power and control to set standards of
behavior as well as to permit specific instances of behavior. Also, an
authoritative source for information and knowledge.
38. Awareness. Perception or knowledge of something by a sentient entity. Its
existence at a minimum, but possibly details as well.
39. Axiom. A proposition accepted as true without proof, serving as a starting
point for reasoning in formal logic.
40. Basic evidence. An object, observation, measurement, image, recording, imprint,
raw information or data, calculation, or technical report which is purported to
support or disprove a proposition or conclusion. See also: direct evidence,
testimonial evidence, and circumstantial evidence. Basic evidence does not
include testimonial evidence, but direct evidence does include testimonial
evidence. Essentially a synonym for basic fact.
41. Basic fact. A proposition which can be verified by simple, direct observation,
measurement, simple calculation, a statistic, or looking it up in commonly
accepted reference materials. No reasoning or faith required. Should also include
the details and method of the observation, measurement, calculation, or
reference and their provenance (source.) See also: fact, conclusion. Essentially a
synonym for basic evidence. Similar to direct evidence, except that the latter
includes witness testimony.
42. Basis. Synonym for precondition. More properly the full set of preconditions for
something.
43. Behavior. Action of a sentient entity. Alternatively, action of a phenomenon.
44. Belief. One or more related propositions which are believed to be true. Includes
all meaning associated with those propositions. A willingness to accept or agree
with a purported fact, not necessarily backed by a strong justification. Includes
all knowledge and facts which a sapient entity accepts as true, even if the truth
of such matters is more firmly supported than the individual who believes in
them can personally demonstrate. All knowledge is a belief, while not all
beliefs are knowledge.
45. Belief system. Collection of beliefs and principles held by a particular sapient
entity, a group, or general consensus for some topic, area, or field of study. Also,
as the basis for a religion or moral or ethical code. See also: ideology, dogma.
46. Best. Better than all other alternatives.
47. Beyond doubt. Beyond a doubt. Beyond all doubt. Beyond a shadow of a doubt.
Beyond all shadow of a doubt. Belief in the certitude of a proposition, with no
doubts or even minor reservations.
48. Beyond a reasonable doubt. Significant level of certainty in a proposition, even
if not quite absolute certitude. No significant doubts. At worst, only minor
reservations. In contrast to beyond all doubt.
49. Bias. Preference driven by prejudice rather than reason.
51. Big lie. A big lie. The big lie. An outrageous falsehood, a gross
misrepresentation of facts, stated with such boldness, confidence, and intensity
as well as extended repetition that people feel compelled to accept it, afraid,
unwilling, or unable to match its intensity and persistence. Commonly in the
form of social and political propaganda. Alternatively, a more modest falsehood
which gradually gains belief through extended repetition that either goes
unchallenged or where the challenge gradually dissipates as the repetition
continues.
52. Bold statement. Synonym for claim. Insistent about truth, but not necessarily
with strong justification.
53. Bright line. Bright line distinction. Very clear distinction between two or more
entities, in contrast to a fuzzy distinction.
54. Blurred distinction. Synonym for fuzzy distinction.
55. Calculation. Developing a fact based on a mathematical calculation from
existing facts.
56. Casual knowledge. Knowledge limited to the existence of something and no
more than only a modest amount of its details.
57. Category. Subset of entities which share some specified characteristics. See also:
abstraction, class. Commonly named.
58. Category name. Noun or name for a category.
59. Cause. Causation. Causality. Causal relationship. Causal link. A correlation or
relationship between entities in which an event involving one causes an event
involving another. More than simple correlation, and more than mere influence.
Might the second event have occurred without the first event? Yes, which makes
causality difficult to prove. Or, a third event might have caused the second event
anyway. Or, maybe both the first and third events might have to occur to cause
the second event. It can get complicated. Correlation is much easier to
demonstrate; it can be evidence of a causal relationship, but is not proof.
60. Certain. Certainty. Facts and truth of some matter are known with a high degree
of confidence, beyond any real doubt, and without any real dispute. Preferably
with validation, and corroboration. More about confidence than truth per se.
May or may not rise to the absolute level of certitude.
61. Certitude. Highest and most perfect level of certainty. Absolute certainty. No
doubt.
62. Characteristic. Synonym for detail. Part of metadata for an entity.
63. Characterize. Characterization. Description of an entity in terms of its most
notable characteristics. Alternatively, to describe an entity in less than favorable
terms by focusing on characteristics which show the entity in less than the most
favorable light. See also: mischaracterize.
64. Cherry picking. Selectively choosing a subset of data, facts, or anecdotes to fit a
preferred conclusion.
65. Choose. Choice. Select from alternatives. Something which is selected as a
result of a decision.
66. Chronicle. Synonym for story, record, or article.
67. Circumstances. Synonym for situation or context.
68. Circumstantial evidence. Evidence which requires an inference to connect it to
the asserted conclusion. In contrast to direct evidence.
69. Claim. A statement or proposition made as if fact. May or may not allow for any
possibility that the claim might not be true.
70. Class. Synonym for category.
71. Class name. Noun or name for a class or category of entities.
72. Class of entities. A group or category of entities which share some common
characteristics. See also: particular entity.
73. Clause. The major structural unit of a statement. A sequence of phrases.
74. Close-minded. A sapient entity unwilling to consider new ideas, existing ideas in
new ways, or reconsider previous judgments. See also: open-minded.
75. Clue. Proposition or fact that suggests a larger proposition but is insufficient to
provide definitive proof or sufficient justification. Synonym for evidence.
76. Cognition. Cognitive skills. Cognitive processes. Process of acquiring
knowledge. Synonym for knowledge acquisition. Debatable whether it includes
all mental processes, such as creativity, reflection, speculation, imagination,
memory, planning, speech, and motor control.
77. Coherent. Coherence. A proposition, argument, reasoning, theory, or
explanation which is logical, rational, consistent, and sufficiently transparent
and easy to follow that a sapient entity can easily and fully comprehend it.
78. Collaboration. Two or more individuals cooperate in a joint effort to perform
some activity or develop knowledge in some area, sharing knowledge in the
process.
79. Collateral fact. A fact which is unconnected, or only remotely connected, with
the issue or matter in dispute. Something that is secondary and subordinate to
the main issue. As no fair and reasonable inference can be drawn from such
facts, they are inadmissible in evidence in a court matter.
80. Command. Statement directing an action to be performed. May be an
authoritative order. Do this. Synonym for directive. See also: instruction.
81. Common sense. Sound judgment of even average individuals, based on general
experience, general knowledge, and good sense.
82. Communicate. Communication. The process of conveying beliefs, opinions,
information, knowledge and meaning, or feelings between sapient entities
(people or robots), represented in some language, nonverbal gestures, or
nonlinguistic vocal expression. May be direct, from sapient entity to sapient
entity, or indirectly via knowledge artifacts (e.g., books, videos, or other media
artifacts.) See also: media.
83. Comprehend. Generally a synonym for understand.
84. Computing environment. A computer or a computer embedded within a device,
machine, or other object, including desktop computers, laptop computers,
tablets, smartphones, wearable computers, servers, the cloud, smart devices,
smart appliances, smart vehicles, smart homes, smart buildings, Internet of
Things (IoT) devices, or network of any of the above.
85. Computational environment. Synonym for computing environment.
86. Computational entity. An imaginary entity created as a computational object. It
may be intended to accurately or approximately represent a real-world object,
mental object, or media object, or be entirely imaginary and exist only in the
computing environment.
87. Computational object. An imaginary object constructed within a computing
environment using software and data. It may be intended to accurately or
approximately represent a real-world object, mental object, or media object, or
be entirely imaginary and exist only in the computing environment.
88. Concept. An organized and formalized notion with associated deep meaning
developed from thought or received via communication. Beyond an unorganized
thought or rough idea. May be represented as a term or one or more
propositions.
89. Conceptual framework. Collection of related concepts that describe and explain
some larger entity, as well as context for framing those concepts and their
relationships. Includes theory and organizing principles for the collection.
90. Concern. An issue or a matter surrounded by some degree of uncertainty, such
that either the issue needs to be resolved or the uncertainty needs to be dispelled.
91. Conclusion. A proposition that is considered true as a result of reasoning and
possibly experimentation. Follows from premises. A conclusion is the
knowledge produced by reasoning. It is the objective of reasoning. See also:
foregone conclusion.
92. Condition. Qualitative evaluation of the state of an entity.
93. Conditions. State, environment, situation, or circumstances in which an entity
exists.
94. Confidence. The psychological and emotional strength of belief in a proposition.
May be based on reasoning, evidence, or intuition, or even bias. Confidence
does not imply truth of the matter. Alternatively, confidence may be a technical
assessment of the strength of a proposition — technical confidence. See also:
emotional confidence, psychological confidence, and technical confidence.
95. Conflation. Treating two or more concepts as if they were the same. May be
acceptable and reasonable in some situations, but not in others. Generally risky
and to be avoided.
96. Confusion. Wide variance in the uncertainty as to the truth of a proposition or
belief. Discomfort over the truth of a matter.
97. Conjecture. A proposition which may be believed to be true on reasonable
argument, but which has not been substantiated or validated.
98. Connect the dots. A rudimentary form of inference in which an inferential leap
is made to connect one dot (proposition) to another dot (proposition) based
merely on the fact that the two propositions seem connected, possibly merely by
no more than physical or semantic proximity or to fit a perceived pattern.
Commonly used when available information is incomplete. Commonly more
wishful thinking than strong reasoning. A dubious form of inference at best.
99. Connection. A relationship between two or more entities or phenomena. May be
a strong correlation or even a causal link, but none is necessarily implied. May
simply be an informal, psychological or emotional association.
100. Consensus. A majority of the individuals in a group share certain
knowledge and beliefs.
101. Consequence. Result or effect of an action, event, or phenomenon.
102. Constraint. Limitation or restriction for some proposition, belief, or
knowledge.
103. Contemplate. Contemplation. Consider knowledge possessed by an
individual, or some matter.
104. Contemporaneous event. Event that is either currently in progress or
occurred recently. Could include events that are expected to occur in the near
future.
105. Content. Information, knowledge, and meaning transmitted via
communication or in knowledge artifacts or media artifacts.
106. Context. The situation, environment, or conditions immediately
surrounding an entity, or in which a proposition is presented or evaluated.
107. Contradiction. A proposition which appears to indicate the opposite of
another proposition, or at least suggests some degree of inconsistency between
propositions.
108. Contrast. Comparison between two or more entities, highlighting both
similarities and differences. Alternatively, emphasizing differences — synonym
for opposite.
109. Controversy. Synonym for dispute.
110. Convention. Norm of behavior, not enforced, but expected and typical.
111. Conventional wisdom. Generally accepted beliefs. May or may not be
strong knowledge or even wisdom per se, but simply beliefs which are strongly
held by even average individuals. Alternatively, generally accepted beliefs of
experts in some field, regardless of whether they are indeed strong knowledge or
even wisdom per se. Frequently used in a pejorative sense to acknowledge
weakness in the justification for such presumed knowledge or wisdom.
112. Conversation. Casual, informal interaction and communication between
two or more parties about some matter or possibly no specific matter at all, with
no specific intention of any party learning anything. Sometimes a euphemism
for more serious discussion under the guise of being casual and informal to
encourage one or more parties to let their guard down in order to get them to
disclose more valuable information or to get them to be more receptive to input,
feedback, and advice, or possibly even for purposes of deception. See also:
discussion, interaction.
113. Convey. Transmit information or meaning between entities, or movement
of entities from one location to another.
114. Conviction. Firmly held belief or opinion, independent of its truth. Strong
psychological and emotional confidence. Confidence in a belief.
115. Correlation. A correspondence between two or more entities or phenomena,
which tend to occur or vary together. Weaker than causation. See also: cause.
116. Corroboration. More than one source for a given piece of information or
proposition.
117. Count. Integral quantity of objects or phenomena.
118. Counterargument. An argument intended to rebut or refute another argument.
119. Creativity. The ability to create new beliefs and images, unconstrained by
memories of what exists and is known.
120. Creature. Synonym for animal. Sentient by definition, but may or may
not be sapient.
121. Credibility. Confidence in a source.
122. Credible. Believable. Rational. Inspiring confidence. Based on more than
a very minimal degree of reason. A belief may be credible. A source of
information may be credible. In stark contrast to incredible.
123. Credible rationale. Rationale which is reasonably credible.
124. Criteria. Propositions which must be satisfied to make a decision or
select something.
125. Crowdsource. Collaboration involving a potentially very large number of
individuals, such as over the Internet.
126. Culture. Values, qualities, practices, and norms of a society.
127. Curiosity. Desire and passion for discovering and acquiring new
knowledge.
128. Cyberwarfare. Using information technology to disrupt information
systems and information of an adversary. Alternatively, the use of information
technology to disrupt military and civilian infrastructure and services. See also:
psychological warfare.
129. Data. Raw information without any significant structure or significance
or deep meaning. Raw observations and raw measurements.
130. Data tables. Data organized in tabular form of columns and rows, for ease
of visual processing and publication. Also a form used for processing by a
computational entity.
131. Dataset. Collection or list of data or information relevant to some entity
or class of entities. May vary in size but could be very large. See also: statistic.
132. Deceived. An individual or group that has an incorrect, inaccurate, or
misleading perception of some matter due to malevolent intent of another party.
Alternatively, a direct synonym for misinformed, although being misinformed
can also be due to being ignorant or incompetent, or the misinformation may
simply be due to an inadvertent misunderstanding.
133. Deception. Intent to convey information which is not factually true.
Intent to lead a sapient entity into accepting a false belief. To deceive. See also:
disinformation. May be a synonym for disinformation, but the latter may be
misleading but true, while deception is false by definition.
134. Decide. Decision. Transition from contemplating some action or choice
between alternatives to forming an intention to take the action or accept the
chosen alternative. Culmination of formulation of intention. Alternatively, to
simply make a choice about an action or to choose between alternatives.
135. Decision-making. The process leading up to the making of a decision.
May or may not include the decision itself, but the focus is on the process
leading up to the decision.
136. Deduction. A conclusion reached by identifying facts and proven
propositions that justify the conclusion.
137. Deductive reasoning. Reasoning which employs deduction. See also:
inductive reasoning.
138. Deep meaning. All levels of meaning associated with something or
knowledge.
139. Definition. Basic meaning for a word, term, or idiom. There may be
additional levels of meaning beyond the basic meaning. Commonly recorded in
a dictionary or glossary. See also: vocabulary.
140. Definition truth. Proposition which is simply a definition for a term.
141. Definitive proof. A redundant phrase, since proof is definitive by
definition, but it emphasizes that evidence alone is not necessarily proof.
142. Denial. Claim that a proposition is false, especially when the proposition
is true.
143. Derivation. Developing knowledge on the basis of existing knowledge.
144. Description. Elaboration of observable and measurable details of an
entity.
145. Detail. Some aspect of an entity. Synonyms are aspect, feature, attribute,
characteristic, quality, and property. Part of metadata for an entity. See also:
adjective, minutiae, technicality.
146. Detailed knowledge. Knowledge of many or most of the details of
something.
147. Detection. Able to sense the existence of something.
148. Determiner. Word or phrase which specifies which of something is being
referred to in a sentence. Includes all quantifiers as well. Examples include the,
a, some, all, my, your, which, this, that, both. See also: parts of speech.
149. Dictionary. Catalog of words, terms, and idioms with their definitions or
basic meanings.
150. Dictionary meaning. The basic meaning of a word, term, or idiomatic
phrase as commonly found in a dictionary. Excludes deeper meaning or detail
such as found in an encyclopedia or book on the subject.
151. Dictum. Noteworthy, authoritative statement or principle. Common
wisdom.
152. Different. Two or more entities which are not identical or the same.
Some fraction of their characteristics are not the same. See also: identical, same,
similar, equal, equivalent, dissimilar, distinct.
153. Difference. Characteristic which is not identical between two or more
entities.
154. Direct evidence. Evidence which directly implies the asserted conclusion.
In contrast with circumstantial evidence which requires an inference. Includes
basic evidence and testimonial evidence. See also: basic fact.
155. Directive. Statement directing an action to be performed. May be an
authoritative order. Do this. Synonym for command. See also: instruction.
156. Discourse. A somewhat organized collection of propositions on one or
more related topics.
157. Discover. Find something unexpected.
158. Discussion. Interaction and communication between two or more parties
about some matter(s), with the intent for at least one party to learn something.
See also: conversation, interaction.
159. Dishonest. Dishonesty. Intent to deceive. Tendency to communicate
untruthfully.
160. Disinformation. False or misleading information. May be technically true
but presented in a way or context that misleads. Deliberate attempt to mislead,
regardless of whether true. Alternatively, it may not involve conscious intent to
deceive per se, but beyond being mere misinformation it may be colored by a
very cavalier disregard for whether or not it is true — it may simply be hoped to
be true with no checking. May be treated as a synonym for deception, but
deception is usually false, while disinformation could be misleading but true.
See also: red herring, misinformation.
161. Disinformation campaign. An intentional effort to disrupt a group,
society, or country by sowing disinformation. See also: cyberwarfare,
psychological warfare.
162. Dispute. Disputed. Not everyone agrees as to the certainty of a matter or
proposition. Synonym: controversy.
163. Disputed fact. Fact which lacks a universal consensus, typically because a
fairly significant minority challenges the fact. See also: alternative fact.
164. Dissimilar. Two or more entities differ in some fraction of their
characteristics so that they are not similar. See also: same, identical, equivalent,
equal, different, distinct.
165. Distinct. Distinction. Difference between two or more entities. Extent or
degree to which two or more entities can be distinguished. May be a bright line
distinction or a fuzzy distinction.
166. Dogma. Belief system for a particular group, established by the
authorities of that group, decreed to be taken as articles of faith by members of
that group.
167. Domain. Discrete area which is very distinct from other areas.
168. Doubt. Limitation on confidence in the veracity of a proposition. May be
specific or merely vague anxiety. See also: reservation, beyond doubt.
169. Education. Facilitated learning, through a teacher, classes, textbooks,
reading, and guided experiences.
171. Effect. Synonym for result.
172. Electronic communication. Communication using wires, optical cable, or
radio waves, as opposed to speech or physical knowledge artifacts such as paper,
books, and newspapers.
173. Electronic knowledge artifact. Knowledge artifact represented in
electronic or digital form, such as a file on computer or the Internet. See also:
physical knowledge artifact.
174. Electronic data. Data existing within a computer, computer network, or
electronic device, represented using electrons, photons, or magnetism, as
opposed to some non-electronic medium such as paper.
175. Electronic medium. Use of electrons, photons, or magnetism to represent
information, as opposed to non-electronic medium such as paper.
176. Electronic object. Synonym for computational object, or media object in
an electronic medium.
177. Emotional confidence. Confidence in a proposition that is based on the
emotional strength of belief in the proposition. See also: psychological
confidence and technical confidence.
178. Emphasis. Higher level of importance, meaning, prominence, or
conviction associated with something or a belief or knowledge. May or may not
be warranted by facts and reason.
179. Empirical evidence. Evidence gathered from observation, measurement,
and experimentation. Direct sensory evidence.
180. Empirical validation. Confirmation of a proposition or theory by testing it
in the real world.
181. Encyclopedia. A collection of essays or articles on a wide and
comprehensive range of topics, by experts in those topics.
182. Enlightenment. Great knowledge, insight, and wisdom in some area.
183. Entity. An object, person, place, thing, idea, concept, thought, decision,
plan, topic, area, event, matter, action, phenomenon, situation, environment,
conditions, or anything of unspecified or even vaguely specified nature that has
some sort of significance. Details and statistics about an entity are themselves
entities, although in a more strict sense an entity would tend to have a relatively
independent existence rather than being wholly dependent on a larger entity.
Something to be referred to in a statement, proposition, or thought, commonly
using a noun. Commonly a person, place, or thing. May be a computational
entity. A group of closely related entities can also be considered collectively as a
larger entity, such as a family, partnership, team, business, nonprofit
organization, or a country. Animals, people, organizations, and robots are
entities. Entities with some characteristics in common can constitute a category
or class, which itself is an entity. Details about an entity are referred to as entity
metadata.
184. Entity details. Synonym for entity metadata.
185. Entity metadata. See metadata.
186. Environment. All entities, phenomena, features, and conditions of the
natural world in either a relatively small area or a much larger area.
Alternatively, includes manmade structures and artifacts as well. Alternatively,
includes the human social environment as well.
187. Epistemology. Study of the nature of knowledge. A branch of
philosophy.
188. Equal. Two or more entities which are either identical or have very
comparable value. See also: same, similar, equivalent, different, distinct.
189. Equivalent. Two or more entities which are not identical or equal but
have a reasonable degree of comparable value. See also: same, similar, equal,
different, dissimilar, distinct.
190. Error. Synonym for mistake.
191. Essay. A discourse that focuses on selected aspects of some topic.
192. Essence. Core details of an entity that are essential for its existence and
that distinguish it from other entities or other types of entities.
193. Estimate. Estimation. Specification or quantification of something
without full precision or exact accuracy. May be close but not exact. See also:
approximate, heuristic.
194. Eternal truth. Propositions which are true for all things in all places in
all situations for all times, in contrast with subjective and objective truth. See
also: objective truth, ultimate truth.
195. Ethics. Study of the nature of human nature and social interaction.
Including morality. A branch of philosophy.
196. Etiology. Study of causation, origination, and reasons for the state of
entities. Alternatively, the specific cause, origin, or reason for the particular state
of a particular entity. See https://en.wikipedia.org/wiki/Etiology.
197. Evaluation. Synonym for assessment.
198. Event. An instance of a phenomenon or a specific element or subset of
elements of behavior within a phenomenon. Classically characterized with the
Five W’s — the five questions Who, What, When, Where, and Why.
199. Evidence. Basic evidence, fact, testimony, argument, or proposition, or
the collection of basic evidence, facts, arguments, testimony, and propositions
which supports a proposition or conclusion. May not be definitive proof. May be
weak or strong. Alternatively, a widely accepted conclusion. See also:
circumstantial evidence, direct evidence, inculpatory evidence, exculpatory
evidence.
200. Exaggerate. Exaggeration. A proposition which misleads by asserting
that something is significantly more or significantly less than it really is.
Amplify positive qualities and minimize negative qualities.
201. Example. Examples. Sample from a population of entities which exhibits
a reasonably representative set of aspects common to that population. Multiple
examples help to show the diversity of the population. Also used to exemplify a
principle or rule. Synonym for illustration.
202. Exchange. Simple, brief interaction, typically between only two parties.
Not as involved or extensive as a conversation or discussion. Possibly as simple
as a question and an answer. May be a nonverbal interaction, such as using
gestures.
203. Exculpate. Exculpatory evidence. Evidence which tends to disprove or
undermine a proposition, primarily to free an entity from responsibility or blame
for some particular outcome or incident. In legal proceedings, evidence which
exonerates or frees a defendant from responsibility or blame for some misdeed.
In contrast with inculpatory evidence, which tends to indicate that a particular
entity is responsible for or the cause of some particular outcome or incident.
204. Excuse. An expression of justification for a belief or action or inaction
which is too weak to constitute a fully justified reason.
205. Expected. In accord with one’s beliefs.
206. Experience. Exposure of an individual or group to sensory input and
involvement in actions and activities.
207. Experience. The collective sensory perception and emotional response to
an event or phenomenon.
208. Experimentation. Attempt to validate a hypothesis. Or possibly simply an
attempt to discover facts and make observations about a phenomenon.
209. Expertise. Practical knowledge, experience, and judgment possessed by
an individual or group.
210. Expert. Individual with a recognized level of knowledge and expertise in
a field or area which is well above the average for a group, stemming from some
significant combination of native ability, specialized education, general
education, specialized training, reading, study, research, work experience,
general experience, developed skills, sound judgment, track record of recognized
accomplishments, college degrees, credentials, certifications, licenses, sterling
reputation, very positive ratings, professional or academic accolades, and
glowing recommendations.
211. Explanation. The causal factors for a phenomenon or event — focus on
why and how rather than merely what happened. Description may be included
with an explanation, but is incidental to the causal factors.
212. Explicit. Proposition which is stated literally, to avoid any possible
confusion, in contrast to implicit.
213. Express. Expression. The ability of a sapient entity to communicate or
convey knowledge, meaning, and feelings, commonly using language and
gestures, either directly to other sapient entities such as with speech or through
the creation of knowledge artifacts such as written text that can later be
examined and understood by a sapient entity.
214. Extrapolation. Inferring a purported fact as an extension from one or
more known facts, either because of close proximity or a linear relationship
between two or more known facts. Logically equivalent to extending a line
between X and Y to a point Z which lies on the line beyond X and Y.
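The linear case of extrapolation can be sketched in a few lines of code. This is a hypothetical illustration added for clarity; the function name and the sample values are invented, not from the original text.

```python
# Hypothetical sketch of linear extrapolation: given two known points
# (x1, y1) and (x2, y2), estimate y at an x lying beyond them by
# extending the straight line through the known points.
def extrapolate(x1, y1, x2, y2, x):
    slope = (y2 - y1) / (x2 - x1)   # rate of change between the known facts
    return y1 + slope * (x - x1)    # extend the line to the new point

# Known facts: y = 10 at x = 1 and y = 20 at x = 2. Extrapolating to
# x = 4 presumes the linear trend continues beyond the known points.
print(extrapolate(1, 10, 2, 20, 4))  # 40.0
```

As with any extrapolation, the result is only as trustworthy as the presumption that the relationship stays linear beyond the known facts.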
215. Fabricate. Construct a statement or story that is not true in significant
ways.
216. Facsimile object. An object which is intended to be a copy of all or a
significant portion of the essential qualities of another object. Includes photos
and other media objects.
217. Fact. Either a basic fact or a conclusion that is relatively widely
recognized and accepted as true. Should also include the reasoning, evidence,
and justification in support of the fact, as well as its provenance (source).
Alternatively, a belief that is part of general knowledge, which is presumed to be
fact but whose provenance is unclear. See also: asserted fact.
218. Fact pattern. Fact situation. All of the facts relevant to a particular matter,
such as a court case or life situation. Alternatively, the essential or most
important facts of a matter that describe it in the abstract so that other matters
with different specific details but the same overall fact pattern can be treated in a
similar manner.
219. Factoid. A fact which is relatively trivial but has some appeal to the
general populace. Alternatively, a purported fact which is not necessarily true
but is treated as if true due to its appeal to either the general populace or some
significant segment of the population.
220. Faith. Acceptance of a belief on the word of another with absolutely no
reliance on evidence, proof, reason, or justification.
221. Fake news. Satirical take on news, for entertainment. Alternatively,
deliberate intent to mislead under the guise and appearance of news.
222. Fallacy. Flaw in reasoning (logical fallacy) or mistaken belief. See
Fallacy.
223. Familiarity. Knowledge of something. May be detailed knowledge or
casual knowledge.
224. Fantasy. Speculation not intended to have practical utility.
225. Feature. Synonym for detail.
226. Features of the natural world. Aspects of land, water, atmosphere, life,
planets, stars, and galaxies. From the subatomic to the microscopic to the
geologic to the astronomical. Can loosely be treated as objects.
227. Feeling. How an individual feels about some matter.
228. Fiction. Fantasy that is not intended to become reality, but may be useful
as a story. Could be intended for entertainment, education, or to deceive.
229. Figure of speech. Word or phrase used to associate a figurative
(nonliteral) meaning rather than the literal meaning for emphasis or dramatic
effect. See Figure of speech. See also: metaphor.
230. First-order logic. First-order predicate calculus. Predicate logic. Basic
formal logic. See: First-order logic.
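As a toy illustration of what a first-order sentence asserts, one can evaluate a universally quantified implication over a small finite domain. The domain, predicates, and names below are invented for the example; this is only a sketch, not a theorem prover.

```python
# Evaluating the first-order sentence "for all x, Human(x) implies
# Mortal(x)" over a small, invented finite domain.
domain = ["socrates", "plato", "a_rock"]
human = {"socrates", "plato"}
mortal = {"socrates", "plato", "a_rock"}

# ∀x (Human(x) → Mortal(x)): the implication P → Q is (not P) or Q.
all_humans_mortal = all((x not in human) or (x in mortal) for x in domain)
print(all_humans_mortal)  # True

# The negation, ∃x (Human(x) ∧ ¬Mortal(x)): is there a counterexample?
counterexample = any((x in human) and (x not in mortal) for x in domain)
print(counterexample)  # False
```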
231. Five W’s. The five questions Who, What, When, Where, and Why
classically used to characterize an event.
232. Flawed logic. Reasoning whose conclusion cannot be trusted due to
errors or fallacies in its logical structure.
233. Flawed reasoning. Reasoning whose conclusion cannot be trusted due to
mistakes at some stage of the reasoning process, such as logical fallacies.
234. Folk tale. Story of dubious veracity commonly passed down through the
generations by word of mouth. See also: tall tale, old chestnut, old wives’ tale.
235. Folk wisdom. Perceived wisdom in the form of propositions of dubious
veracity accepted in popular culture and commonly passed down through the
generations by word of mouth.
236. Foregone conclusion. An outcome which is strongly believed and
accepted as inevitable, without the need to engage in a formal reasoning process
to justify such a belief.
237. Form. Physical manifestation of an entity, as distinct from its function
and purpose.
238. Formal fallacy. Flaw in the logical structure of a formal argument. See
Formal fallacy.
239. Foundation. Collection of propositions, principles, and concepts used as
the basis for something. See also: conceptual framework, organization.
240. Frame. Framing. Provide context for one or more concepts or
propositions.
241. Framework. See conceptual framework.
242. Function. The reason and benefit of an entity or action. What the entity
does or the action accomplishes.
243. Future. The prospects of events which have not yet occurred.
244. Fuzzy distinction. Inability to clearly distinguish between two or more
entities, in contrast to a bright line distinction.
245. General. Generality. Concerning most entities and matters. Not specific
or particular.
246. General knowledge. Knowledge expected to be possessed by even
average individuals.
247. General wisdom. Wisdom expected to be possessed by even average
individuals. See also: conventional wisdom.
248. Generalization. Proving or presuming that if a proposition is true for one
or more particular instances of a population then the proposition will be true for
all instances within that population which share any characteristics which are
needed for the proof for the particular instances. See also: induction,
mathematical induction.
249. Generalization abstraction. An abstraction (concept) which represents
more than one entity, representing what the entities have in common. Excludes
details which are not in common to all of the entities.
250. Gesture. Simple, nonverbal form of communication, involving facial
expressions, hand movements, or other body language. See also: nonlinguistic
vocal expression.
251. Glossary. An abbreviated dictionary of terms relevant to a particular
topic, area, matter, or discourse.
252. Goal. The desired outcome for an action or activity.
253. Good. Acceptable condition.
254. Good cause. Credible rationale for deciding on a course of action or to
hold a belief. May or may not rise to the level of sound reasoning.
255. Gossip. Propositions obtained informally from an individual’s social
network. Commonly but not exclusively about a person. Dubious veracity, but
not infrequently true. Synonym for rumor.
256. Government. Organizational body and structure which governs society,
creating, enforcing, and administering laws and regulations, and providing a
variety of services to the members of society.
257. Grain of salt. That a claim be viewed with skepticism as to whether it
represents the whole truth or even literal truth. With a grain of salt.
258. Grammar. Rules for constructing sentences in a language, from words,
phrases, clauses, and punctuation, collectively known as syntax, as well as
morphology and phonology, which refer to how individual words and sounds are
formed, but the latter are beyond the scope of this paper, which focuses on
knowledge and meaning. Synonym for syntax, presuming that morphology and
phonology of words are not of interest. See also: parsing.
259. Ground truth. As close to actual truth or ultimate truth for a matter as can
humanly and practically be achieved. Synonym for ultimate truth, but more of a
close approximation for ultimate truth — ultimate truth is the ideal while ground
truth is the best we can do. Sometimes used in contrast to speculation, what we
perceive or estimate truth to be versus what really is true.
260. Group. A collection of individuals who interact in some way involving
activities and communication. Knowledge and meaning can be shared among
members of the group.
261. Half true. Quality of a statement which is partially true but partially false,
in roughly equal measures. Not necessarily an attempt to deceive. See also: half-
truth, mostly true, mostly false, outright false.
262. Half-truth. A statement which is partially true but partially false, in
roughly equal measures, usually in an attempt to deceive. May be mostly false,
but just barely true enough to escape being labeled as outright false. See also:
half true, mostly true, mostly false, outright false.
263. Healthy skepticism. Skepticism that is based on sound reason or good
cause. In contrast to irrational skepticism.
264. Hearsay evidence. A statement given to a party by a second party which
is alleged by the second party to have been made by a third party. The difficulty
is that the first party has no way to validate the veracity of the purported
statement.
265. Heuristic. A mental or computational shortcut for reasoning or
calculation that approximates a correct conclusion or result at a fraction of the
effort of more careful reasoning or a more comprehensive and complete
calculation. No guarantee of absolute correctness, but generally sufficient for
immediate needs. Must be used with care. See also: intuition.
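The trade-off between a heuristic and a more complete calculation can be illustrated with the classic coin-change problem. This is a hypothetical sketch; the coin values are chosen specifically to show the heuristic going wrong.

```python
# A greedy heuristic versus an exhaustive calculation for making change.
def greedy_change(coins, amount):
    """Heuristic shortcut: always take the largest coin that fits.
    Fast, but with no guarantee of using the fewest coins."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

def exact_change(coins, amount):
    """Complete calculation (dynamic programming): guaranteed minimal."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                best[a] = min(best[a], best[a - coin] + 1)
    return best[amount]

coins = [1, 3, 4]
print(greedy_change(coins, 6))  # 3 coins (4 + 1 + 1): quick but suboptimal
print(exact_change(coins, 6))   # 2 coins (3 + 3): the careful answer
```

For ordinary coin systems the greedy shortcut happens to be optimal, which is exactly why heuristics are useful: they are usually good enough for immediate needs, but must be used with care.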
266. Hierarchy. Nested classification of entities into subclasses, with an
arbitrary number of levels of subclasses. See also: taxonomy.
267. Higher-order human intelligence. Higher-order human-level intelligence
specifically limited to human beings, excluding non-human sapient entities, such
as robots or AI systems.
268. Higher-order human-level intelligence. Synonym for higher-order
intellectual capacity.
269. Higher-order intellect. Individual possessing higher-order intellectual
capacity. Synonym for higher-order intellectual capacity.
270. Higher-order intellectual activity. Higher-order intellectual capacity in
action. Synonym for higher-order intellectual capacity.
271. Higher-order intellectual capacity. Human-level intelligence. Beyond
animal intelligence. Includes wisdom, reasoning, planning, creativity,
speculation, intuition, judgment, critical thinking, natural language, and
storytelling. Excludes the more mundane basic human intellectual capacities
such as basic perception, basic communication, basic language skills, simple
information transfer, simple transactions, basic planning, basic reasoning, and
basic decision-making. Synonym for sapience. Synonym for higher-order
human-level intelligence.
272. Hint. Provide a clue or suggestion of a proposition without stating it
explicitly, or to provide a proposition without sufficient justification while
promising or implying that justification will or may come in the future.
273. History. A representation of the flow of events for a collection of
interacting entities. May range from a small number of entities over a short
period of time to a very large number of entities over a very long period of time.
274. Honest mistake. Inadvertent mistake. No intent to mislead. Good faith
attempt to be honest. Not negligent, either.
275. Honesty. Tendency to communicate truthfully.
276. Human creation. Any object or substance which has been created or
constructed by human beings.
277. Human existence. People who are alive now or have lived in the past.
May or may not include their creations. May or may not include their impact.
278. Human impact. Changes to the natural world as a result of human
activity.
279. Human natural language. Natural language of human beings, as distinct
from languages specific to machines.
280. Human social environment. All entities, phenomena, features, and
conditions related to human social activity. In contrast to the features of the
natural world, both inanimate and living. May include manmade structures and
artifacts, or exclude them.
281. Hypothesis. A proposition made with an expectation of evaluation and
validation as soon as possible, typically to validate a theory.
282. Hypothetical. Possible or suggested entity or proposition. May or may
not be an actual entity or proposition. See also: naive hypothetical.
283. Idea. A partially organized notion that results from considering a thought.
Recognition that a thought has some potential value. Synonym for thought,
loosely.
284. Ideal. Best possible alternative, outcome, or goal. Alternatively, highly
valued principle.
285. Identical. Two or more entities share their characteristics so that they
cannot be differentiated. See also: same, similar, equivalent, equal, different,
distinct.
286. Ideology. Synonym for belief system. Commonly associated with
political beliefs.
287. Idiom. A phrase or sequence of words or terms which have a meaning
somewhat different from the meaning of the individual words or terms that
comprise the phrase. For example, it’s raining cats and dogs.
288. Ignorance. Ignorant. Lack of knowledge. May be innocent, negligent,
willful, or malevolent.
289. Illustration. Drawing. Alternatively, synonym for example.
290. Imaginary entity. An entity which exists other than in the real world — a
mental entity, media entity, or computational entity. It may also exist in the
real world.
291. Imaginary object. An object which exists other than in the real world — a
mental object, media object, or computational object, in contrast to a real-world
object. It may also exist in the real world.
292. Imaginary world. A world that we construct in our minds, media, or in a
computing environment, from our imaginations, or in a computer, using a
model.
293. Imagination. Knowledge, beliefs, images, and sounds that we construct in
our minds. The ability to construct knowledge, beliefs, images, and sounds in
our minds.
294. Implicit. Unstated proposition which is assumed, in contrast to explicit.
295. Imply. Lead another party to believe an unstated proposition without
there being an explicit and proven proposition, or to believe a stated proposition
without adequately justifying its truth.
296. Inanimate object. Nonliving object.
297. Inculpate. Inculpatory evidence. Evidence which tends to prove or
support a proposition, primarily to blame or assign responsibility to an entity for
some particular outcome or incident. In legal proceedings, evidence which
incriminates or helps to convict a defendant for some misdeed. In contrast with
exculpatory evidence, which tends to indicate that a particular entity is not
responsible for or the cause of some particular outcome or incident.
298. Incredible. Not credible or difficult to believe. Nonetheless, an incredible
proposition might actually be true.
299. Indirect evidence. Evidence which requires an inference to fully justify
an asserted conclusion (fact). Evidence which by itself does not fully justify a
particular fact, but from which that fact may be inferred (or concluded), possibly
or probably in conjunction with other facts and evidence. In contrast with direct
evidence which by itself justifies a fact. An inference is required. Synonym for
circumstantial evidence.
300. Individual. A single sapient entity. Member of a group. Typically but not
necessarily human. See also: person.
301. Induction. Reasoning which infers a generalized conclusion from one or
more particular instances for which the conclusion can be shown to be true, the
presumption being that what holds true for one instance should necessarily hold
true for all instances of a general pattern. See also mathematical induction.
302. Inductive reasoning. Reasoning which employs induction. See also:
generalization.
303. Inference. A fact which implies the truth of a proposition.
304. Influence. Communicate with the intent of gently causing someone to
believe something or change their mind about something.
305. Inform. Communicate with simply the intention of sharing information or
knowledge and meaning.
306. Informal argument. Casual approach to reasoning. No sense of rigor, but
not necessarily weak either.
307. Information. A collection, sequence, or structure of symbols and images.
May be a representation of knowledge and meaning. Knowledge stripped of its
significance or meaning. Sometimes used as a synonym for knowledge.
308. Information warfare. An intentional effort to disrupt an adversary using
information, such as propaganda, disinformation campaigns, psychological
warfare, espionage, hacking, trolling, and cyberwarfare.
309. Inquiry. Synonym for investigation. Alternatively, to request information
or ask a question.
310. Insight. Deep understanding in some area, especially nuances of which
others are unaware.
311. Instance. Particular entity which is a member of a class of entities.
312. Instruction. Instructions. Sequence of directives to accomplish some task
or to reach some goal.
313. Intellectual. Relating to mental objects and the processes of the mind,
especially reasoning.
314. Intellectual object. Synonym for mental object.
315. Intelligence. Capacity for intellectual thought — perception and
cognition. Beyond the scope of this paper. See Untangling the Definitions of
Artificial Intelligence, Machine Intelligence, and Machine Learning.
316. Intelligent entity. Entity capable of perception and cognition — thought
and reason, coupled with memory and knowledge. Synonym for sapient entity.
317. Intent. Intention. Formulate, accept, and place some degree of focus on a
goal. Purpose or desired outcome for a belief, decision, or action.
318. Intentional mistake. Willful mistake made in an effort to deceive.
319. Interaction. Communication between two or more parties on one or more
matters. Conversation, discussion, or exchange. May be a nonverbal interaction,
such as using gestures.
320. Interpolation. Inferring a purported fact between two or more facts on the
presumption of a linear relationship between those known facts. For example, if
X and Y are true, all points on a presumed logical line between X and Y must be
true.
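The presumed linear relationship in the definition above can be made concrete in a few lines of code. This is a hypothetical sketch added for illustration; the function name and values are invented.

```python
# Hypothetical sketch of linear interpolation: infer a value between
# two known facts by presuming a straight-line relationship.
def interpolate(x1, y1, x2, y2, x):
    fraction = (x - x1) / (x2 - x1)   # how far x lies between x1 and x2
    return y1 + fraction * (y2 - y1)  # the point on the presumed line

# Known facts: y = 10 at x = 1 and y = 30 at x = 3.
print(interpolate(1, 10, 3, 30, 2))  # 20.0, inferred rather than observed
```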
321. Interpretation. An extension of facts, inferring beliefs, based on
subjective considerations.
322. Intimate. Intimation. Hint at a claim or proposition without stating it
explicitly. Synonym for imply and implicit.
323. Intuition. Ability to arrive at a reasonable conclusion without the use of
explicit, conscious reasoning.
324. Investigation. Research, study, and analysis of some entity to ascertain
the facts of the matter.
325. Irrational. Not rational. Thought, belief, expression, or action which is
not based on reasonable thought. Thought that is marred by fallacious reasoning,
flawed logic, or excessive intrusion of emotion and passion. Ill-considered.
326. Irrational belief. A belief that is not based on rational thought.
327. Irrational skepticism. Skepticism that is not based on sound reason or
good cause, such as bias or emotion. See also: healthy skepticism.
328. Irrational thought. Thought that is irrational.
329. Issue. A matter which is considered problematic, a problem in need of a
solution.
330. It stands to reason. Assertion that a fact or conclusion is obvious, that the
assertion or inference should be accepted as true without formally providing the
justification for that belief. Alternatively, asserting that a fully justified
argument could be made in support of a conclusion (fact), but is being skipped
as an expediency.
331. Judgment. Ability, skill, experience, and degree of competence at
reasoning and making decisions. A decision on some matter.
332. Just plain wrong. An emotional subjective reaction to a statement which
is considered unacceptable and treated as absolutely false without question. The
statement is likely to be false, but the reaction may simply indicate a
disagreement over interpretation.
333. Justification. Reasoning and evidence used to arrive at a conclusion. May
be weak or strong.
334. Justified belief. A belief with sufficient justification to be accepted as
true.
335. Justified true belief. JTB. Belief with strong enough justification to
warrant status as knowledge. A belief that is justified and also has been shown to
be true.
336. Know. To possess knowledge about something. Alternatively, to possess
a belief.
337. Knowledge acquisition. Process of developing and accepting knowledge,
through perception, communication, or independent thought. Synonym for
cognition.
338. Knowledge artifact. An object created to represent knowledge. Such as a
book, a letter, a note, a paper, a document, a memo, a song, a picture, a diagram,
a recording, or a video. Frequently using language or some other form of
symbols. May be physical (objects) or electronic. See also: media, media
artifact.
339. Knowledge representation. Knowledge expressed in a form that can be
conveyed to a sapient entity, such as language and knowledge artifacts or
speech.
340. Knowledge. What is known by one or more sapient entities, including
both information and skills. What is believed to be true. True beliefs. More
formally, justified true beliefs (JTB). Alternatively, the sum total of the beliefs,
facts, and memories of an individual. Or all individuals. Or all members of a
group. Includes meaning, to some degree, but there may be additional levels of
meaning based on interpretation and context in which the knowledge is
considered. May be detailed knowledge, limited knowledge, or only casual
knowledge. May be objective knowledge or subjective knowledge.
341. Known. Knowledge possessed by a sapient entity, group, or society.
342. Language. The means by which knowledge and meaning can be
represented by a sapient entity, either to simply record that knowledge and
meaning, or to communicate it to another sapient entity. Language can be
spoken or written or transmitted in electronic form.
343. Law. Behavior which is required or prohibited or otherwise regulated as
a result of legislative action of the government.
344. Learning. Acquiring knowledge through education or experience.
345. Legal entity. Any entity that has legal standing. Namely individuals and
organizations.
346. Lexicon. Synonym for vocabulary or dictionary.
347. Lie. Intentionally make a statement which is known by the speaker to be
false, but with the intent that the listener will believe it to be true. A false
statement which was represented as being true. See also: interpretation.
348. Life. Living entity. Alternatively, all living entities. Alternatively, all
stages and characteristics of a living entity, from the moment of its conception
until it no longer is alive. Alternatively, all stages and characteristics of a
nonliving entity during which it has utility, such as for a machine or structure.
349. Life story. A representation of the flow of events for the entire lifetime,
to date, for a single entity.
350. Limit. Bound for an entity or proposition.
351. Limited knowledge. Knowledge of only some of the details of
something.
352. Limited scope. Propositions which are not universal, only being
applicable for some specified region, area, or parameters.
353. Linguistics. Linguist. Formal or even scientific study of language,
including form, function, structure, sound, meaning, origin, evolution, and
usage.
354. Listening. Receiving communication in the form of the spoken word via
hearing.
355. Living entity. Biota. Plants and animals, including humans. Organisms.
Microorganisms.
356. Living object. Living entity.
357. Living thing. Living entity.
358. Location. Spatial or geographic position. There may or may not be
anything at that position. Synonym for place, where. May be specified
indefinitely as in somewhere, everywhere, nowhere, some place, no place,
almost everywhere, almost nowhere.
359. Logic. Very tight, rule-based reasoning, used to prove a proposition or
matter. May or may not be correct, but can be difficult to understand.
Alternatively, loosely, the thought processes used to arrive at a conclusion, even
if they may not be valid. See also: first-order logic.
360. Logical fallacy. Flaw in the logic used in reasoning, such as a formal
fallacy. See Fallacy.
361. Mark. Synonym for symbol, sign. In some philosophy systems there may
be more nuanced distinctions between symbol, sign, and mark, but that’s beyond
the scope of this paper.
362. Math. Mathematics. Technical means and methods for justifying
propositions and performing calculations or computations to form facts from
raw data, especially numbers, typically representing quantities or measurements
of entities in the real world or imagined worlds. See also: logic.
363. Mathematical induction. Proof of a mathematical theorem for all
elements in a (possibly infinite) sequence based on proving the theorem for the
first element and then proving that if it is true of any element then it is true for
the next element of the sequence. See also: induction, generalization.
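The two steps described in entry 363 (the base case and the inductive step) can be written compactly. A sketch in standard notation, where P is the property being proved over the positive integers:

```latex
\[
\Bigl( P(1) \;\wedge\; \forall n\,\bigl( P(n) \rightarrow P(n+1) \bigr) \Bigr)
\;\Longrightarrow\; \forall n\, P(n)
\]
```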
364. Mathematical methods. Methods which use mathematics, such as to
evaluate and assess confidence in a proposition or theory.
365. Mathematical relationship. Correlation based on formal mathematical
methods, such that results for different inputs can be calculated rather than
requiring reasoning, observation, or measurement.
366. Matter. An entity of interest. Something related to the entity. Anything of
interest for which some inquiry or judgment is desired.
367. Meaning. The understanding, significance, and feeling that a sapient
entity associates with knowledge, a term, a concept, a feeling, an event, or a

matter. May have any number of levels, both shallow and deep. The simplest
meaning being the dictionary meaning of a term. See also: multiple meanings.
368. Measurement. An assessment of the number of units of some physical
quantity such as size, weight, length, or number of entities.
369. Media. Means and methods for communicating. Such as books,
newspapers, magazines, radio, television, telephone calls, recordings, videos,
movies, and the Internet. Alternatively, media outlets. See also: medium.
370. Media artifacts. Knowledge artifacts used in the process of
communicating via media and media outlets. Includes newspapers, books, audio,
videos, shows, magazines, and podcasts. See also: media object.
371. Media entity. An imaginary entity that exists as a representation in
media, such as an image or video. The entity may or may not also exist as a real-
world entity, mental entity, or computational entity, either as an accurate or
approximate representation.
372. Media object. An imaginary object that exists as a representation in
media, such as an image or video. The object may or may not also exist as a
real-world object, mental object, or computational object, either as an accurate
or approximate representation. See also: facsimile object, media artifact.
373. Media outlet. Organization focused on distributing or broadcasting
content, information, knowledge, and meaning in the form of media artifacts to a
relatively broad audience, or possibly a limited, narrow, or specialized audience.
374. Medium. Object used to represent information. May be electronic or non-
electronic, such as paper. Used for media artifacts and knowledge artifacts in
general.
375. Meme. An idea, concept, knowledge, or narrative that spreads rapidly
and widely across a diverse and dispersed audience.
376. Memory. The totality of knowledge, beliefs, images, and sounds as
experienced, learned, or imagined by a single individual. Alternatively,
information stored in a computer, on a computer network, or on an electronic
device.
377. Mental entity. An imaginary entity that exists in the mind of a sentient
entity. The entity may or may not also exist as a real-world entity, media entity,
or computational entity, either as an accurate or approximate representation.
378. Mental object. An imaginary object that exists in the mind of a sentient
entity. The object may or may not also exist as a real-world object, media object,
or computational object.
379. Mental state. Context within the mind in which a sapient entity engages
in cognition.
380. Metadata. Information about an entity that is considered distinct from the
entity itself. Metadata may be embedded, attached, or kept separately. Metadata
can correspond to actual attributes and descriptive characteristics of the entity as
it is, or attributes and descriptive characteristics which are associated with the
entity by a separate intelligent entity, possibly in a subjective rather than clearly
objective manner. Aspect, attribute, characteristic, detail, feature, property, and
quality are all metadata. Also, statistics related to the entity. Common metadata
includes name, title, author, owner, publisher, interesting dates and times in
history of the entity, description, origin, current location, notable physical
characteristics, packaging, summary and indexing, relationships to other entities,
statistics, supplemental information, etc.
381. Metaphor. Figure of speech in which a word or phrase is used to
informally associate its literal meaning with an unrelated entity as an analogy or
for emphasis or dramatic effect.
382. Metaphysics. Study of the nature of existence. A branch of philosophy.
Includes ontology.
383. Method. Technique and intentions used to perform an action or activity.
384. Mind. Source and seat of sapience in a sapient entity.
385. Minutiae. Very small detail, not considered very important or very
relevant. See also: technicality.
386. Mischaracterize. Mischaracterization. Inaccurate description of an entity
by misrepresenting its characteristics. Alternatively, a selective characterization
which is designed to be misleading. See also: characterize.
387. Misinformation. Information that is false, inaccurate, or misleading. May
be intentional deception, or due to ignorance or incompetence, or inadvertent, or
simply a misunderstanding.
388. Misinformed. An individual or group that has an incorrect, inaccurate, or
misleading perception of some matter. They may have been intentionally
deceived or misled, such as through disinformation, or may simply be ignorant
or incompetent, or the misinformation may simply be due to an inadvertent
misunderstanding. See also: deceived.
389. Mislead. Intent for the recipient of a communication to form a perception
that is false.
390. Misperception. A perception that is untrue or inaccurate in some way,
one that is not in absolute correspondence with reality. Alternatively, a
misunderstanding or incorrect, improper, or inappropriate interpretation of some
matter.
391. Misrepresent. Misrepresentation. Convey information which is not
factually true. May or may not be intentional. See also: lie, mislead, deceive.
392. Misunderstanding. Failed communication. The person receiving
information is unable to interpret its meaning as was intended by the sender.
393. Mistake. Incorrect, inappropriate, or missing proposition, or treatment of
a false proposition as true or a true proposition as false. May be honest,
negligent, or intentional. Synonym for error.
394. Model. A representation that approximates how some portion of the real
world or an imaginary world is believed to work. Rarely clear whether the real
world or imaginary world actually works exactly as modeled. May be used to
manually step through simulations of how the real or imaginary world works, or
developed as a computer model which automates operation of the model.
395. Modeling. Using a model to develop facts from existing facts and
assumptions, via simulations of scenarios.
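Entries 394 and 395 describe a computer model that automates simulation of scenarios to develop facts from assumptions. A minimal runnable sketch of that idea; the fair-coin assumption and all of the numbers here are illustrative, not from the source:

```python
import random

# A minimal computer model: we assume (as a modeling assumption) a fair
# coin and simulate many scenarios to estimate a fact: the chance of
# getting at least 7 heads in 10 flips.
def simulate(trials=100_000, flips=10, threshold=7, seed=42):
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    hits = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(flips))
        if heads >= threshold:
            hits += 1
    return hits / trials

estimate = simulate()
# The exact value is (C(10,7)+C(10,8)+C(10,9)+C(10,10)) / 2^10
# = 176/1024, roughly 0.172; the simulated estimate lands close to it.
print(round(estimate, 3))
```

Whether the modeled world works exactly as the model assumes is, as the entry notes, rarely clear; here the "fair coin" is the assumption the facts depend on.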
396. Moment. Discrete point in time, typically when some event occurs.
397. Mostly false. Quality of a statement which is more false than true.
Usually but not necessarily an attempt to deceive. See also: half true, half-truth,
mostly true, outright false.
398. Mostly true. Quality of a statement which is more true than false.
Sometimes but not necessarily an attempt to deceive. See also: half true, half-
truth, mostly false, outright false.
399. Motive. Psychological reason for a belief, decision, or action, which may
lean towards bias rather than strong reasoning. May be based on emotion or self-
interest.
400. Multiple meanings. A proposition, knowledge, word, phrase, or term may
have more than one meaning, due either to multiple senses, multiple
interpretations, or ambiguity.
401. Myth. Traditional story from the history of a culture which may be rooted
in some fact, but has an exaggerated and incredible nature so that it at least
seems false. Alternatively, a widely held belief that is demonstrably false.
402. Naive hypothetical. Overly simplistic hypothetical. A possible or
suggested entity or proposition which is unlikely to be valid.
403. Name. Some combination of words and terms used to identify a
particular entity, to distinguish it from other entities. See also: proper noun.
404. Narrative. A story focusing on a deeper or higher level of meaning than
on superficial details. Also commonly treated as synonym for story.
405. Natural language. Languages used by sapient entities. Generally used to
refer to human language, as distinct from language used by machines or non-
sapient animals. Technically, machines can use human natural language as well,
and specialized machine languages could rival the complexity and power of
human natural language.
406. Natural phenomenon. An integrated pattern of activity or behavior of
objects and processes in the natural world, excluding human activity, such as
weather systems and geologic events.
407. Natural world. All entities, features, and phenomena of the real world
exclusive of sapient entities and anything that they may have created. Synonym
is physical world.
408. Negligent. Negligence. Not exercising proper diligence and good
judgment. Synonym: careless.
409. Negligent mistake. Mistake due to negligence, in contrast to an honest
mistake or intentional mistake.
410. News. Reports of contemporaneous events.
411. Neutral. Not taking a position on the meaning or value of something.
412. Non-electronic medium. Medium other than electronic medium, such as
paper or non-electronic recordings.
413. Nonlinguistic vocal expression. A sound uttered as if it were speech, but
not composed of words, commonly as an expression of emotion or feeling as its
meaning. Such as a sigh, gasp, growl, cry, howl, laugh, giggle, clicking or
clucking sound, or mimicking an animal. See also: nonverbal gestures.
414. Nonrational. Thought, conclusions, and actions which do not require
careful reasoning, but are not marred by irrational thought either. Includes
desires and interests.
415. Nonsense statement. Statement devoid of any significant sensibility.
Incomprehensible statement. Not necessarily false, but may be too
incomprehensible to evaluate its veracity.
416. Nonverbal gestures. Redundant — all gestures are nonverbal, although
gestures can also be combined with speech for reinforcement or nuances of
communication. See also: nonlinguistic vocal expression.
417. Nonverbal interaction. Exchange between entities using only gestures or
nonlinguistic vocal expressions.
418. Norm. Norms. Standard, usual, or typical expected behavior in society,
an organization, or an environment.
419. Nothing. Lack of entities, phenomena, or anything else of significance.
420. Notion. Informal reference to a thought, idea, or concept. No sense of
formality or specificity. Synonym for something. Anything that could be present
in the mind.
421. Noun. Any word used to identify a class of entities. A proper noun would
be used to identify or refer to a particular entity. See also: adjective, verb.
422. Now. Moment of the present.
423. Number. Representation of a quantity. May be integral or fractional.
424. Object. Something that exists or at least appears to have form, substance,
shape, or can be detected in some way, or can be experienced with the senses or
imagination, or manipulated by a computer, either as a real-world object or an
imaginary object, such as a media object, mental object, or computational object,
and can be distinguished from its environment. See also: entity, a subset of
which are objects. Whether liquid and gaseous matter should be considered to be
objects is debatable, but they are under this definition. A storm could certainly
be treated as an object even though it consists only of air and water.
Alternatively, the entity at which an action is being directed — see also: subject.
425. Objective. A proposition is true for all individuals and all situations.
Alternatively, able to separate out personal and group feelings and bias when
judging some matter. Alternatively, synonym for goal.
426. Objective knowledge. Knowledge that is true for all sapient entities, in
contrast to subjective knowledge which may vary between sapient entities.
Levels or layers of meaning may differ between sapient entities, in which case
those layers that are shared can be considered objective knowledge and meaning,
while those layers which are not shared among all sapient entities would be
considered subjective knowledge and meaning.
427. Objective meaning. Meaning which is shared between all sapient entities
which possess the particular knowledge associated with that meaning, in contrast
with subjective meaning, which may vary between sapient entities.
428. Objective truth. All propositions which are true for all individuals in all
situations, in contrast to subjective truth. See also: ultimate truth, eternal truth.
429. Observation. What can be seen, heard, felt, or otherwise sensed or
directly experienced by a sentient entity.
430. Official source. Source of information or a proposition which has been
appointed to have responsibility for activities and knowledge in some area.
431. Old chestnut. Traditional belief of dubious veracity. May have been true
at one time, but now more clearly not true or not as true as it might once have
been. See also: old wives’ tale.
432. Old wives’ tale. Traditional belief of very dubious veracity at best.
Presumed to be true in the past even though it was not likely to have ever been
true, and is now clearly false or of minimal truth, but may remain popular
anyway. See also: old chestnut.
433. Omitted. Omission. Left out or excluded. May be intentional or
inadvertent.
434. Ontology. Study of the nature of existence. Part of the metaphysics
branch of philosophy. Defining classes and hierarchies of entities, their
characteristics, and their relationships and connections.
435. Open-minded. A sapient entity willing to consider new ideas, existing
ideas in new ways, or reconsider previous judgments. See also: close-minded.
436. Opinion. A belief held by an individual that does not require
confirmation from any outside source or any other individual.
437. Opposite. Opposition. Two entities or propositions which are very
different, possibly as different as possible. See also: contrast.
438. Organization. Arrangement of entities or concepts to achieve desired
functions, goals, or purpose.
439. Organizing principle. Principle which provides at least part of the
foundation for developing an organization. See also: conceptual framework.
440. Origin. The time and place of the first instance of something.
Alternatively, the cause of something.
441. Outcome. The result of an action, activity, or event. Either the intended
result or the actual result. May or may not be expected.
442. Outright false. Quality of a statement which is unequivocally false, with
not even a hint of truth (or at least no more than a slight hint of truth.) May or
may not also be a lie depending on whether there is malicious intent to deceive,
or merely ignorance of the truth of a matter. See also: half true, half-truth,
mostly false, mostly true, outright lie.
443. Outright lie. A statement which is unequivocally false and made with the
malicious intent to deceive. See also: half true, half-truth, mostly false, mostly
true, outright false.
444. Paradigm. Model, pattern, method, or example representing a larger class
of entities.
445. Parse. Parsing. Analyze the text of a proposition, statement, or sentence
in a language, to determine its linguistic structure and meaning. See also:
grammar, syntax.
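Entry 445 describes parsing as recovering structure and meaning from flat text. By way of analogy only (a programming-language parser, not a natural-language one), Python's standard ast module shows the idea concretely: structure that is invisible in the raw string is made explicit by the parser:

```python
import ast

# Parsing recovers structure from flat text: Python's own parser turns
# the string "2 + 3 * 4" into a tree that encodes operator precedence.
tree = ast.parse("2 + 3 * 4", mode="eval")

# The root operation is the addition; the multiplication is nested
# inside it. That nesting is structure produced by parsing, not
# present in the raw character sequence.
print(type(tree.body).__name__)           # BinOp
print(type(tree.body.op).__name__)        # Add
print(type(tree.body.right.op).__name__)  # Mult
```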
446. Particular. Instance of a class or a group of entities. Alternatively, a detail
of an entity.
447. Particular entity. Instance of a class of entities. May or may not have a
name (proper noun.)
448. Parts of speech. Categories of words used to structure the meaning of a
sentence in natural language — noun, verb, adjective, adverb, pronoun,
preposition, conjunction, interjection or exclamation, numeral, article, quantifier,
and determiner.
449. Party. One of the sapient entities participating in an interaction.
450. Past. Events which have occurred and entities and context involved with
them.
451. Pattern. Recurrence or regularity of some entity, either spatially or
temporally.
452. Perception. The experience of the real world, from the perspective of a
sentient entity, primarily as experienced directly through their senses, but also
through communication with other sentient entities. Perception by sapient
entities is of more interest here, though the two overlap. In many cases the raw
sensory perception abilities of animals can exceed those of sapient entities, in
particular human beings. Alternatively, a sentient entity’s understanding and
interpretation of some matter. See also: misperception.
453. Perfection. Could not be better. Synonym for ideal.
454. Period. Range of time, with a start and end.
455. Person. Human sapient entity. Individual.
456. Personal. The truth of a proposition will be dependent on the individual.
457. Personal knowledge. The knowledge possessed by an individual, which
may or may not be shared by other entities.
458. Perspective. Variation in context which may affect the meaning or
veracity of a proposition or the view on an entity.
459. Persuade. Persuasion. Stronger form of influence, resulting in a change of
views by the persuaded entity or entities.
460. Phenomenon. An integrated pattern of activity or behavior of one or
more entities. May be a natural phenomenon, such as weather systems and
geologic events. May involve sapient, sentient, or non-sentient entities of the
natural world.
461. Philosophy. Study of the nature of knowledge, existence, human
behavior and social interaction, power, and beauty — epistemology,
metaphysics, ethics, politics, and aesthetics.
462. Phrase meaning. The meaning of a phrase to a sapient entity if different
from the meaning of the individual words and terms that comprise the phrase.
Especially idiom.
463. Phrase. The basic building block for the structure of a statement. A
sequence of words or terms. Part of a clause.
464. Physical knowledge artifact. Knowledge artifact which is a physical
object, such as a letter, book, newspaper, or magazine or journal. See also:
electronic knowledge artifact.
465. Physical manifestation. The extent to which an object has some sensible,
observable, measurable, or detectable presence in the physical world. Quality of
a physical object. Question: Does dark matter have a physical manifestation if it
cannot be either observed or detected?
466. Physical object. Object which has a physical manifestation in the real-
world, as opposed to an imaginary object. It may or may not also exist as an
imaginary object, such as a mental object, media object, or computational object.
Synonym for real-world object.
467. Physical world. Synonym for natural world, the real world exclusive of
sapient creatures (humans.) Technically, the physical world would include
anything created by sapient creatures, just not the sapient creatures themselves.
Whether intelligent robots would be considered part of the physical world is
debatable. Context will determine whether natural world or physical world is the
more appropriate term — whether to include manmade objects, structures, and
materials.
468. Place. Location for an entity. Synonym for where.
469. Plan. Develop and propose a sequence of actions to achieve some goal.
Synonym for scheme.
470. Poll. Approximation of truth by querying a sample of the total
population.
471. Population. Full set of entities of a given type in a given area of interest,
in contrast to a sample. Alternatively, the count or specific quantity of the full
set of entities of a given type in a given area of interest.
472. Postulate. Assumption used as a step in a reasoning process. May or may
not be true. May or may not be proved later in the reasoning process.
473. Practical knowledge. Knowledge which has utility in daily life or is
needed to perform various activities.
474. Practical meaning. Utility of knowledge. Its practical application in
everyday life or specific activities. See also: pragmatism.
475. Pragmatic meaning. Synonym for practical meaning.
476. Pragmatism. Philosophical tradition focused on the pragmatic or practical
meaning of knowledge.
477. Predicate. Basis for a proposition — what it is predicated on.
Alternatively, what a proposition states about its subject — what it is asserting
about its subject. Alternatively, a function or proposition that is being evaluated
relative to the assertion of a proposition. Usage can vary and depends on
context.
478. Predicate logic. Synonym for first-order logic and first-order predicate
calculus. Basic formal logic. See: First-order logic.
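As an illustration of entry 478, the classic textbook syllogism (a standard example, not drawn from the source) can be written in first-order predicate logic:

```latex
% Premises: every human is mortal; Socrates is a human.
\forall x\,\bigl(\mathit{Human}(x) \rightarrow \mathit{Mortal}(x)\bigr),
\qquad \mathit{Human}(\mathit{socrates})
% Conclusion, by universal instantiation and modus ponens:
\vdash\; \mathit{Mortal}(\mathit{socrates})
```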
479. Prediction. The ability of a theory to forecast a phenomenon or event or
details of a phenomenon or event in the future. Alternatively, a belief about
something in the future, but without any firm theory of causation. Alternatively,
a conjecture about what might transpire in the future.
480. Preference. Liking something more than the alternatives.
481. Prejudice. Preconceived, irrational, or unreasonable emotional opposition
to something. Synonym for bias.
482. Premise. Proposition used to reach a conclusion based on reasoning. A
proposition or condition upon which a subsequent proposition, argument, or
conclusion is to be based.
483. Present. Events which are occurring right now and the entities and
phenomena involved with them, as well as the environment right now.
484. Presentation. One-way communication of information, knowledge, and
narrative.
485. Presumption. An assumption based on some reasonable evidence or
belief.
486. Principle. Principles. Highly valued beliefs. Generally shared across a
group, but may be strictly personal as well. Alternatively, a proposition or theory
that supports a broad range of other beliefs or knowledge. See also: ideal.
487. Probability. Likelihood that a proposition might be true. May be based on
evidence or merely based on confidence.
488. Promise. Commitment, assurance, assertion, or claim that some entity
will be in a designated state or some proposition will remain or become true for
some definite or indefinite point or period of time in the future. Can vary in
clarity or specificity from vague to crystal clear and precise.
489. Proof. Evidence which is sufficient to indicate that a conclusion is
justified and true.
490. Propaganda. Information expressed for the purpose of pursuing an
agenda, commonly for a political or social cause. May be false or misleading,
but may also be true and honest but biased in favor of an underlying agenda.
May be part of the truth but frequently not the whole truth.
491. Proper noun. Name of a particular entity.
492. Property. Synonym for detail. Part of metadata for an entity.
493. Proposal. A proposition, conjecture, or theory that is being advanced,
proposed, for consideration, to be evaluated.
494. Proposition. A statement which may or may not be true. Including the
meaning of the statement.
495. Proposition meaning. The meaning that a sapient entity associates with a
proposition.
496. Provenance. Origin of something, information, an entity, a belief,
knowledge, or claim.
497. Psychological confidence. Confidence in a proposition that is based on
the psychological strength of belief in the proposition. See also: emotional
confidence and technical confidence.
498. Psychological warfare. An intentional effort to disrupt an adversary by
presenting information, stories, narratives, and media which prey on
psychological vulnerabilities, most commonly intending to attack morale and
confidence to reduce motivation or willingness to continue fighting. See also:
cyberwarfare, disinformation campaign.
499. Purported. Synonym for claim, assertion.
500. Purported fact. Alleged fact. Claimed fact. Fact which may not yet be
widely accepted or may be subject to controversy.
501. Purpose. The function or meaning of an entity or action. Sometimes
simply a synonym for function, but sometimes a sense of higher meaning more
than mere function. Could be a religious or spiritual sense of purpose,
particularly for a person’s life.
502. Qualify. Qualifications. Specify reservations, limitations, restrictions, or
additional requirements on a proposition, such that the proposition may not be
true or relevant without those qualifications.
503. Quality. Synonym for detail. Part of metadata for an entity.
504. Quantity. Numeric value which represents a one-to-one relationship with
individual instances of some object. Synonym for count.
505. Quantify. Specify exact quantity of some entity, or an indefinite quantity
using a quantifier.
506. Quantifier. Term representing an indefinite quantity, such as none, all,
both, few, many, most, almost all, almost none, virtually all, virtually none,
majority, supermajority, minority, etc. Technically includes specific quantity as
well, such as 23 or twenty-three. Quantifiers are determiners as well. See also:
parts of speech.
507. Question. A request for information or for the truth of a proposition.
508. Rational. Thought, belief, expression, or action based on reasonable
thought, either because it was based on sound reasoning, or at least gives that
appearance. Based on sensible thought, unmarred by any significant degree of
irrationality.
509. Rational belief. A belief resulting from rational thought.
510. Rational thought. The underlying thought process that leads to rational
belief, expression, or action.
511. Rationale. Very informal or weak reasoning which tends not to qualify as
strong reasoning. May or may not be credible. Synonym for reasons. See also:
excuse, credible rationale.
512. Real world. Reality. Synonyms are natural world and physical world,
although sometimes they exclude human existence and possibly human
creations.
513. Real-world entity. An entity which exists in the real world, as opposed to
an imaginary entity. It may also exist as an imaginary entity, such as a mental
entity, media entity, or computational entity, either as an accurate or
approximate representation.
514. Real-world object. An object that exists in the real world, as opposed to
an imaginary object. It may also exist as an imaginary object, such as a mental
object, media object, or computational object, either as an accurate or
approximate representation.
515. Reality. All that exists. Synonym for real world.
516. Reason. Abstract process of reasoning through rational thought, to reach
a conclusion, result, goal, decision, judgment, assessment, understanding, or
other outcome that is thoroughly and convincingly justified by the reasoning
process. Alternatively, a proposition which provides specific support for an
argument, conclusion, or explanation for a fact. Alternatively, a credible
explanation, ground, or motive for an action or belief, as opposed to a mere
excuse which may be based on nothing more than emotion.
517. Reasonable. A belief, conclusion, decision, or action which at least
superficially seems in concordance with sound judgment, fairness, and
sensibility. May or may not be based on strong reasoning. May not even be
based on weak reasoning or any reasoning at all, but simply be considered
acceptable and not in conflict with sound judgment.
518. Reasoning. Rational intellectual process of arriving at a conclusion,
result, goal, decision, judgment, assessment, understanding, or other outcome
that is thoroughly and convincingly justified through a rational thought process
grounded in facts, competent and credible analysis, and sound judgment which
involves intentional, considered, thoughtful, orderly, credible, sensible, and
otherwise rational steps or arguments, such as argumentation or logic, generally
dispassionate but may be informed, influenced, shaped, or constrained or
otherwise limited by emotional, social, or practical considerations, as well as
shortcuts and leaps such as heuristics and intuition. Intuition can count as
reasoning, provided it is based on and informed by some significant degree of
experience and judgment. May also be based on common sense and general
wisdom. The point or purpose of reasoning is to achieve an optimal, best, fair,
just, and persuasive outcome. The process may start with observations,
evidence, facts, assumptions, and possibly a desired or intended goal or
proposed conclusion. The process may proceed wherever the evidence leads it,
or may proceed by driving towards a desired result. May be informal or formal,
weak or strong. May use informal argument, casual or rhetorical argumentation,
or tight, formal logic, or even rigorous mathematical proof. By default,
reasoning is presumed to be fairly rigorous — strong reasoning, but in practice
tends to be somewhat weaker than rigorous logic. There is no universal, purely
objective form of reasoning. Reasoning may be subjective and may be relative to
the values of the individual or group engaged in reasoning. Assumptions and
even logic itself may be influenced, constrained, or driven by values and other
subjective considerations. A conclusion reached through reasoning is considered
knowledge. The steps of the process are themselves knowledge. Whether the
reasoning behind a conclusion should be considered part of the conclusion or
distinct from it is debatable, and may depend on the
context in which the conclusion is used. See also: argumentation. See also:
deductive reasoning, inductive reasoning, scientific reasoning, rationale, and
foregone conclusion.
519. Reasons. Arguments given for belief in a proposition or course of action.
520. Rebut. Rebuttal. To offer a counterargument or response to an argument.
Does not necessarily refute the original argument. See also: refutation.
521. Received information. Information representing knowledge and meaning
conveyed to a sapient entity. Requires mental processing to deduce the
knowledge and meaning represented in the information.
522. Record. A knowledge artifact which represents a memory of one or more
entities or beliefs. As a verb, to create a knowledge artifact.
523. Red herring. A fact that is true but despite its truth is irrelevant,
misleading, or distracting to the immediate situation, possibly or even typically
by design.
524. Reference materials. Books, documents, maps, indexes, databases, etc. in
which a variety of knowledge, information, and data is recorded for easy access
and easy reference. Effectively a large quantity of propositions whose truth is
widely accepted. Includes dictionaries, encyclopedias, handbooks, guidebooks,
atlases, etc.
525. Reflect. Reflection. Synonym for contemplation.
526. Refute. Refutation. To disprove an argument using facts and reason.
Counterarguments may rebut an argument but not necessarily refute it. See also:
rebut.
527. Regulation. Behavior which is required or prohibited or otherwise
regulated as a result of administrative regulatory action of the government.
Alternatively, regulation of an entity by the environment in which it operates.
528. Relation. A correlation that appears to have some meaning.
529. Relationship. The correlation between two or more entities or
phenomena.
530. Reliable. Consistently available and trustworthy as a source of knowledge.
531. Report. Presentation or communication of information, knowledge, and
narrative on some matter.
532. Representation. Knowledge, meaning, or one or more concepts presented
in the form of language, imagery, other forms of media, or action in some
medium, or within the mind. Alternatively, a term represents a concept.
533. Reservation. Reservations. Not fully accepting some matter or
proposition. See also: doubt, concern, beyond doubt.
534. Resources. Sources of information and knowledge that can be used by a
sapient entity to increase its personal knowledge.
535. Result. The state of affairs upon completion of an action, activity, event,
or phenomenon. How the world is different due to the impact of the action,
activity, event, or phenomenon. Synonym for effect, consequence.
536. Rule. Behavior which is required or prohibited or otherwise regulated by
an organization or some other authority, or within a government facility.
537. Rule of thumb. Heuristic for quickly approximating the value of a
calculation.
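For example, the well-known “rule of 72” from finance (used here purely as an illustration, and not otherwise discussed in this paper) approximates the years needed for a quantity to double at a given growth rate. A minimal sketch, with illustrative function names, comparing the heuristic to the exact calculation:

```python
import math

def doubling_time_rule_of_72(annual_rate_percent):
    """Heuristic: years to double is roughly 72 divided by the growth rate."""
    return 72 / annual_rate_percent

def doubling_time_exact(annual_rate_percent):
    """Exact: solve (1 + r)^t = 2 for t."""
    r = annual_rate_percent / 100
    return math.log(2) / math.log(1 + r)

# At 6% annual growth the heuristic gives 12.0 years,
# while the exact answer is about 11.9 years.
approx = doubling_time_rule_of_72(6)
exact = doubling_time_exact(6)
```

The rule of thumb sacrifices a small amount of accuracy for a calculation simple enough to do in one's head.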
538. Rumor. Proposition obtained informally, not from an official or
authoritative source. Dubious veracity, but not infrequently true. Synonym for
gossip.
539. Same. Two or more entities share their characteristics so that they cannot
be differentiated. See also: identical, similar, equivalent, equal, different,
distinct.
540. Sample. A relatively small subset of the entities of a much larger
population, with the expectation that the sample should be fairly representative
of the full population, based on the presumption that the population is fairly well
distributed.
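A minimal sketch of sampling, assuming a simple, well-distributed numeric population (the names and numbers are illustrative only):

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# A large, fairly well-distributed population; its mean is 5000.5.
population = list(range(1, 10001))

# A relatively small subset drawn at random, without replacement.
sample = random.sample(population, 200)

# If the sample is representative, its mean approximates
# the population mean of 5000.5 without examining all 10,000 entities.
sample_mean = statistics.mean(sample)
```

The presumption of a well-distributed population is what licenses treating the sample as representative of the whole.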
541. Sapience. Sapient. Intelligent, capable of wisdom. People and intelligent
robots, but not animals or dumb robots.
542. Sapient entity. An intelligent entity, capable of wisdom — sapience. A
person or an intelligent machine or robot.
543. Scenario. Hypothetical or real sequence of events, including relevant
entities and context.
544. Scheme. Systematic approach to organizing entities or actions. Synonym
for plan.
545. Science. The systematic development and organization of knowledge
about the universe using the methods of science — observation, measurement,
speculation, conjectures, theories, hypotheses, experimentation, empirical
validation, reproducibility, statistical control, peer review, publication, and
transparency.
546. Scientific belief. A belief whose justification is based on the methods of
science.
547. Scientific consensus. Consensus for a group of scientists. Agreement that
a proposition, explanation, or theory is valid.
548. Scientific controversy. Theories, conjectures, propositions, and data for
which there is some significant disagreement by scientists.
549. Scientific knowledge. Knowledge developed as a result of pursuing the
methods of science. See also: scientific belief.
550. Scientific law. Proposition or mathematical relationship concerning the
natural world which has been confirmed through empirical validation.
551. Scientific reasoning. Reasoning in the pursuit of science, which by
definition absolutely excludes emotion, passion, personal, subjective, social, and
practical considerations.
553. Scope. Limits, bounds, range, region, or conditions in which an entity
operates or a proposition is valid.
554. Self-awareness. Reflect on knowledge that one possesses of oneself.
555. Semantics. The association of a symbol with meaning by a sapient entity.
556. Semiotics. Study of how symbols (signs) and associated meanings are
developed and used. See also: pragmatism.
557. Sense. Senses. The various inputs that a sentient entity can perceive,
including sight, hearing, smell, taste, and touch. Animals may have other senses.
Robots may have other sensing devices. As a verb, to perceive sensory input.
Alternatively, the variety of distinct meanings that a word, phrase, or term may
have, as in multiple dictionary entries for a word.
558. Sensory input. Information received directly from the senses for a
sentient entity.
559. Sentence. Unit of expression of knowledge and meaning in natural
language. See also: proposition, statement, word, parts of speech, phrase, clause,
meaning.
560. Sentience. Sentient. Able to perceive, feel, experience, and react to the
real world. Includes animals and robots, and human beings, of course. In
contrast with sapience which adds intelligence and wisdom.
561. Sentient creature. Synonym for sentient entity, although a robot could be
a sentient entity but not a sentient creature since a creature is an animal.
562. Sentient entity. A creature or machine able to perceive, feel, and
experience the real world. A person. Animals as well. May include robots.
Easily confused with sapience — sorry, Buddhists.
563. Sentiment. Feeling about some entity. Not necessarily reducible to
reason. May be loosely represented as knowledge and in language, but not
necessarily reliably since it may involve emotion and feelings.
564. Sequence. Events or actions which succeed each other, one after the
other, with a start and an end.
565. Shallow meaning. Little more than the most rudimentary meaning of
something, such as the dictionary meaning of a term plus optionally a little
significance. No real depth. Antithesis of deep meaning. Synonym for surface
meaning.
566. Shared knowledge. Knowledge which is common to two or more sapient
entities.
567. Shared semantics. Two or more sapient entities comprehend the same
concepts and feelings.
568. Sign. Synonym for symbol, mark. In some philosophy systems there may
be more nuanced distinctions between symbol, sign, and mark, but that’s beyond
the scope of this paper.
569. Sign language. Communication via visual gestures (signs) made with the
hands.
570. Significance. Generally a synonym for meaning. Another level or layer
of meaning.
571. Similar. Two or more entities share a significant fraction of their
characteristics so that they seem related, but are still clearly not identical. See
also: same, identical, equivalent, equal, different, dissimilar, distinct.
572. Similarity. Characteristic which is the same between two or more
entities. Alternatively, degree to which two or more entities have the same
characteristics.
573. Simulation. Running a model which mimics how a scenario from the real
world or an imaginary world plays out. May or may not directly parallel the
actual scenario in the real world or the imaginary world. At best it approximates
how the actual scenario plays out.
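A minimal sketch of a simulation, assuming a simple dice-rolling scenario (the function name and parameters are illustrative): the model mimics rolling two dice many times to approximate the probability of a sum of seven, whose true value is 1/6.

```python
import random

def simulate_dice_sevens(num_trials, rng):
    """Model of a real-world scenario: roll two dice, count sums of seven."""
    hits = 0
    for _ in range(num_trials):
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll == 7:
            hits += 1
    return hits / num_trials

rng = random.Random(7)  # fixed seed so the run is reproducible
estimate = simulate_dice_sevens(100_000, rng)
# The simulation only approximates the true probability of 1/6 (about 0.167).
```

As the definition notes, the model does not directly parallel the actual scenario; at best it approximates how the real scenario plays out, with accuracy improving as trials increase.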
574. Situation. Recent, current, and imminent events in some area, as well as
condition of the environment of that area. See also: context.
575. Skepticism. Hesitance, reluctance, resistance, or refusal to accept the
veracity of a proposition or fact. May be rational, or not — healthy skepticism,
irrational skepticism.
576. Skills. Ability to perform particular tasks and activities. Developed over
time and with education, training, practice, and experience.
577. Social behavior. The interactions between sentient entities. Alternatively,
restricted to sapient entities of the same species. More simply, interactions in a
society.
578. Social environment. All entities, phenomena, features, and conditions
related to social activity of living creatures. May be human or animal. In contrast
to the physical features of the natural world. May include manmade structures
and artifacts, or exclude them.
579. Social media. Online, computer-based communication and collaboration,
permitting rapid and widespread dissemination of information, media, and
knowledge.
580. Social truth. Propositions which are true for a given social group, but
may not be true or even relevant for some other group.
581. Society. The full population of individuals and groups in some
geographical area.
582. Something. Synonym for thing, or characteristic of an entity.
583. Sound basis. Acceptable reasoning and evidence used to support and
justify a proposition.
584. Sound evidence. Evidence in which confidence is very strong.
585. Sound judgment. Strong ability, skill, experience, and degree of
competence at reasoning and making decisions. A decision on some matter
based on sound judgment. Synonym for sound reason.
586. Sound reason. Sound reasoning. Reasoning which is very credible since
it has a very substantial rational basis and lacks any significant weakness.
Synonym for strong reasoning. See also: good cause.
587. Source. Where or who information or a proposition came from.
588. Speech. Communication via the spoken word. See also: listening.
589. Specific. Synonym for particular.
590. Speculation. Thought process used to construct propositions and beliefs
in our minds. Intended to have practical utility.
591. Standard. Firm and significant level for criteria, measurement, and
strength of belief.
592. Standards. Collection of standards shared by a group or held by an individual.
593. State. All aspects of an entity at a particular moment.
594. Statement. Any declarative sentence in natural language. No implications
as to its truth per se. Includes the meaning of the statement. See also:
proposition, command, question.
595. Stat. Stats. Synonym for statistic.
596. Statistic. Element of data which is found or calculated by analyzing a
dataset. Interesting information about one or more entities.
597. Statistical methods. Methods which employ statistics, such as to evaluate
and assess confidence in a proposition or theory.
598. Statistics. Mathematics of collection and analysis of data about entities.
Alternatively, plural of statistic.
599. Story. Representation of the flow of events for a limited time period of a
limited collection of entities. See also: Narrative.
600. Strict scientific reasoning. Redundant — scientific reasoning is strict and
absolute by definition.
601. Strong argument. Argument with solid evidence and strong reasoning.
602. Strong belief. Belief in which one has a lot of passion, although that
passion may or may not be matched with an equally strong justification.
603. Strong correlation. A correlation which is usually, almost always, or
always clear.
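The contrast between a strong and a weak correlation can be sketched with the Pearson correlation coefficient, computed here from first principles on purely illustrative data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance normalized by spreads."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

x = [1, 2, 3, 4, 5, 6, 7, 8]
strong = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 16.2]  # tracks x closely
weak = [5.0, 1.0, 4.0, 2.0, 6.0, 1.5, 5.5, 3.0]       # barely tracks x

r_strong = pearson_r(x, strong)  # near 1.0: almost always clear
r_weak = pearson_r(x, weak)      # near 0: only sometimes apparent
```

A coefficient near 1 (or -1) corresponds to a correlation that is almost always clear, while a coefficient near 0 corresponds to a weak correlation.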
604. Strong feeling. An individual feels passionately about some matter.
605. Strong justification. Robust, solid reasoning that fully supports a
conclusion or belief.
606. Strong reasoning. Robust process for arriving at a conclusion. Inspires
great confidence. Commonly using relatively formal methods, based on sound
evidence. Sufficiently transparent so that it can be understood by anyone.
607. Strongly held belief. A belief in which there is a significant emotional
investment. A strong belief, but it may be absent strong justification.
608. Study. Focused attention on acquiring knowledge about some matter or
area. Part of learning as well.
609. Subclass. Nested class within a class, such that entities in the subclass
share some additional characteristics which they do not share with entities in the
larger class.
610. Subject. Matter of interest, such as an entity. See also: object.
611. Subjective. A proposition may be true for some sapient entities or groups
but not for others.
612. Subjective knowledge. Knowledge and meaning which may vary
between sapient entities, in contrast to objective knowledge which will not vary
between sapient entities which possess that knowledge. Levels or layers of
meaning may differ between sapient entities, in which case those layers that are
shared can be considered objective knowledge and meaning, while those layers
which are not shared among all sapient entities would be considered subjective
knowledge and meaning.
613. Subjective meaning. Meaning which is not shared between all sapient
entities which possess the particular knowledge associated with that meaning, in
contrast with objective meaning, which is shared and does not vary between
sapient entities who share the associated knowledge.
614. Subjective truth. A proposition which is true for some sapient entities or
groups but not for others, in contrast with objective truth and eternal truth.
615. Subjective value. Value whose meaning is determined by the individual
as distinct from other individuals, or by a group as distinct from other groups.
616. Substance. Matter forming an object in the real world. Whether liquid
and gaseous matter should be considered to be substance is debatable, but they
are under this definition.
617. Substantiate. To provide justification or proof for a proposition.
618. Surface meaning. The most rudimentary or shallow meaning of
something, such as the dictionary meaning of a term. No real depth. Antithesis
of deep meaning.
619. Survey. Gathering and summarizing the behavior, preferences, and
opinions of individuals in some area. Alternatively, review existing knowledge
in some area.
620. Suspicion. Feeling that a proposition might be true or false but not with
solid evidence or strong reasoning.
621. Symbol. A word, sign, or other form of marking that is used to associate
that mark with the concept and meaning that a sapient entity associates with that
mark. In semiotics, a symbol is called the signifier,
and the concept is called the signified.
622. Syntax. Rules for constructing sentences in a language, from words,
phrases, clauses, and punctuation. Synonym for grammar. See also: parsing.
Technically, grammar is more than just syntax (morphology and phonology,
how individual words and sounds are formed), but that’s beyond the scope of
this paper, which focuses on knowledge and meaning.
623. Synthesize. Synthesis. Develop a model or knowledge or theory of a
matter from the basic facts and existing knowledge.
624. System of belief. Collection of beliefs used as a foundation. Such as a
religion, a field of study, a profession, or a country or a society.
625. Tacit knowledge. Knowledge or expertise possessed by an individual,
typically an expert, which cannot be readily or easily communicated or
transferred to others.
626. Tall tale. Exaggerated story of dubious veracity, but may be popular due
to its colorful language.
627. Taxonomy. Method for organizing entities into a hierarchy of subclasses.
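A sketch of a taxonomy as nested classes (the animal classes below are purely illustrative): each subclass shares every characteristic of the classes above it and adds some of its own.

```python
# A three-level hierarchy: Animal > Mammal > Dog.
class Animal:
    has_metabolism = True

class Mammal(Animal):
    # Mammals share everything animals have, plus fur.
    has_fur = True

class Dog(Mammal):
    # Dogs share everything mammals have, plus barking.
    barks = True

# Entities lower in the hierarchy inherit every characteristic above them,
# while entities in the larger class lack the subclass-specific ones.
dog_traits = (Dog.has_metabolism, Dog.has_fur, Dog.barks)
```

This mirrors the definition of subclass above: entities in the subclass share additional characteristics which entities in the larger class do not.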
628. Technical confidence. Confidence in a proposition or conclusion that is
based on technical assessment of the propositions, evidence, and reasoning
which purport to support the proposition or conclusion. May be based on
reasoning and evidence, especially mathematical methods, including statistical
methods. See also: emotional confidence and psychological confidence.
629. Technicality. A detail or fact about a matter which is considered true but
relatively minor or insignificant and may be considered relatively irrelevant.
May be a matter of dispute as to how important or relevant it may be. See also:
minutiae.
630. Term. One or more words which signify a concept or other type of entity.
A term is a symbol representing a concept or other type of entity. Commonly
defined in a dictionary of some sort. See also: term definition, term meaning,
vocabulary.
631. Term definition. The basic meaning of a term to a sapient entity. As
found in a dictionary or more specialized catalog of terms. See also: term
meaning, vocabulary, dictionary, glossary.
632. Term meaning. The meaning of a term to a sapient entity. Includes term
definition, as found in a dictionary or more specialized catalog of terms and may
include additional levels or layers of meaning beyond the term definition.
633. Terminology. The terms or vocabulary for a particular domain or area of
interest. All of the terms one must comprehend to engage in discourse about a
domain or area of interest. These terms may or may not be unique to that
particular domain, but are of particular interest to that domain. See also:
glossary.
634. Testable hypothesis. A hypothesis for which empirical validation is being
proposed which should confirm or reject the hypothesis, helping to confirm or
reject a theory.
635. Testimonial evidence. Testimony. Statement by a witness attesting to the
truth of a claim.
636. Text. Representation of information or knowledge as textual information,
words in natural language.
637. Theorem. Proposition and the formal reasoning that proves it in
mathematics.
638. Theoretical knowledge. Knowledge which does not have immediate
practical utility in daily life and is not needed to perform various activities, but
may have utility in the future or in areas other than those of current interest.
639. Theory. A coherent explanation of a phenomenon, capable of fully
explaining the phenomenon, consistent with past observations, and able to
predict future observations. Once validated via empirical validation it becomes a
validated theory. Alternatively, and more loosely, a proposed explanation for
some matter, such as who or what caused a particular outcome, without
necessarily offering definitive proof of that explanation.
640. Thesis. A conjecture or claim that is intended to be developed into a
theory. A proposal that has not yet been developed into a full-blown theory.
Alternatively, the conjectural portion of a theory or explanation.
641. Thing. Synonym for entity. Alternatively a passionate interest in
something. Commonly synonym for object, especially an inanimate object. See
also: something.
642. Think. Thinking. To work through the significance of some received
information, to contemplate existing knowledge and information, or to work
towards a conclusion, form a belief, or develop an opinion.
643. Thought. The basic unit of processing in a conscious mind of a sapient
entity. The mental object of the process of thinking. Alternatively, all processing
in a conscious mind of a sapient entity.
644. Thought experiment. An experiment carried out entirely in the mind or in
words and images. Simulating an experiment in the mind or on paper.
645. Time. Progression of events. Alternatively, the moment in time of a
particular event. Time can also be referenced in indefinite form such as all time,
always, for all time, never, sometimes, most times, usually, rarely, almost
always, almost never, soon, sooner, later, recently, recent past, near future,
distant past, distant future.
646. Time period. Span of time between two moments or events.
647. Topic. An area of interest. A designated subset of the real world or an
imaginary world.
648. True. In accord with reality.
649. Truth. Truth of propositions or truth of existence. A proposition that is
true, in accord with reality. Reality as it exists. See also: Domains of Truth.
650. Truth of existence. Existence of something. Its existence is its own truth.
As it exists in the real world, regardless of our perception of its existence, or
whether we are even aware of its existence.
651. Truth of propositions. Whether a proposition is true or not, whether it is
in accord with reality.
652. Trust. Belief that an individual can be depended on for performance,
judgment, availability, reliability, honesty, integrity, and veracity of knowledge.
653. Trusted source. Source of knowledge which other entities accept without
further justification.
654. Type. Synonym for category, class.
655. Ultimate truth. The actual truth of any matter. May not be known or even
accessible. Not necessarily an eternal truth, and may not apply to more than a
narrow situation. Synonym: actual truth, ground truth. See also: eternal truth,
objective truth.
656. Unambiguous. No ambiguity about a statement, proposition, or matter.
Only a single meaning. See also: uncertain.
657. Uncertain. Uncertainty. Unclear what the truth of some matter is. The
truth may be known or believed to be known, but without sufficient confidence
to be certain.
658. Understand. Ability to ascertain the meaning and significance of an
event, perception, or communication.
659. Unexpected. Not in accord with one’s beliefs and expectations.
660. Unique. Different from all other entities, or at least all entities in some
area or category. One of a kind. Unlike anything else. See also: distinct.
661. Universal. Universal truth. A proposition which is true for all observers, in
all places, at all times, and in all situations. See also: eternal truth.
662. Utilitarian purpose. Function or purpose, exclusive of religious or
spiritual purpose.
663. Valid. A proposition has a sound basis.
664. Validate. Validation. Process of establishing the truth of a proposition
using empirical methods or finding a source which can confirm the proposition.
665. Validated theory. A theory whose predictive ability has been confirmed
through empirical validation.
666. Value judgment. A judgment on some matter based on values.
667. Value. Values. Beliefs and behaviors which are highly valued and shared
by a group or held by an individual sapient entity. Includes principles. See also:
subjective value, virtue.
668. Veracity. Truth of a proposition. Alternatively, how closely a proposition
conforms to the truth of the matter covered by the proposition. Alternatively,
reputation of an individual sapient entity or group for honesty and truth.
669. Verb. Word representing activity, existence, or change of state of an
entity. See also: noun, adjective.
670. Verification. Synonym for validation.
671. View. Belief, position, posture, or attitude adopted, possessed, pursued,
and promoted by an individual entity or group.
672. Virtue. Highly valued quality or behavior of an entity. See also: value.
673. Vocabulary. The words, terms, phrases, and idioms and associated
meanings used in a particular language, that a sapient entity comprehends, or
that a topic or area requires. Sapient entities will have difficulty communicating
and sharing knowledge to the extent that they don’t share the same vocabulary,
and a sapient entity may have difficulty comprehending a matter if it does not
possess the full vocabulary required for the knowledge of that matter. Synonym
for lexicon.
674. Weak argument. Argument with minimal or no evidence or reasoning.
675. Weak belief. A belief based on only weak justification.
676. Weak correlation. A correlation which is only sometimes clear and
frequently unclear.
677. Weak evidence. Evidence in which confidence is not very strong or that
only partially supports a belief.
678. Weak justification. Argument or evidence that only partially supports a
conclusion or belief.
679. Weak reasoning. Weak or dubious process for arriving at a conclusion.
Fails to inspire great confidence. Commonly using relatively informal methods,
based on weak evidence. Insufficiently transparent so that it cannot be easily and
readily understood by everyone.
680. Weasel words. Words used as a caveat to assert the possibility that a
proposition might be true even though there is not definitive evidence and
justification of its truth, and there is a credible risk that the proposition might be
false. For example, presumably, arguably, possibly, may, maybe, could, might,
likely, alleged, purported, apparently, suspected.
681. Where. Location of an entity. Synonym for place.
682. Wiki. Online, computer-based collaboration tool and effort for
developing shared text and incorporating other media as well. For example, the
Wikipedia.
683. Wisdom. Knowledge, experience, and judgment which permit sound
reasoning and intelligent behavior.
684. Wise. An entity possessing knowledge, experience, and good judgment.
A choice made using knowledge, experience, and good judgment.
685. With a grain of salt. To view a claim with skepticism as to whether it
represents the whole truth or even the literal truth.
686. Word. The basic building block of a statement. See also: phrase, clause,
parts of speech.
687. Word meaning. The basic meaning of a word to a sapient entity. As
found in a dictionary.
688. Writing. Knowledge, meaning, and feeling represented in the form of
symbols in some language on some medium such as paper or electronic
communication.

Work in progress
The analysis described here remains a work in progress. It is as complete as I know at
this time, but it will be enhanced and revised as I become aware of new information.

Frontier AI: How far are we from artificial “general” intelligence, really?
Matt Turck

Some call it “strong” AI, others “real” AI, “true” AI or artificial “general” intelligence
(AGI)… whatever the term (and important nuances), there are few questions of greater
importance than whether we are collectively in the process of developing generalized AI
that can truly think like a human — possibly even at a superhuman intelligence level,
with unpredictable, uncontrollable consequences.
This has been a recurring theme of science fiction for many decades, but given the
dramatic progress of AI over the last few years, the debate has been flaring anew with
particular intensity, with an increasingly vocal stream of media and conversations
warning us that AGI (of the nefarious kind) is coming, and much sooner than we’d
think. Latest example: the new documentary Do you trust this computer?, which
streamed last weekend for free courtesy of Elon Musk, and features a number of
respected AI experts from both academia and industry. The documentary paints an
alarming picture of artificial intelligence, a “new life form” on planet earth that is about
to “wrap its tentacles” around us. There is also an accelerating flow of stories pointing
to ever scarier aspects of AI, with reports of alternate reality creation (fake celebrity
face generators and deepfakes, with full video generation and speech synthesis being
likely in the near future), the ever-so-spooky Boston Dynamics videos (latest one:
robots cooperating to open a door), and reports about Google’s AI getting “highly
aggressive”.

However, as an investor who spends a lot of time in the “trenches” of AI, I have been
experiencing a fair amount of cognitive dissonance on this topic. I interact daily with a
number of AI entrepreneurs (both in my portfolio and outside), and the reality I see is
quite different: it is still very difficult to build an AI product for the real world, even if
you tackle one specific problem, hire great machine learning engineers and raise
millions of dollars of venture capital. Evidently, even “narrow” AI in the wild is
nowhere near working just yet in scenarios where it needs to perform accurately 100%
of the time, as most tragically evidenced by self-driving related recent deaths.

So which one is it? The main characteristic of exponential technology accelerations is
that they look like they’re in the distant future, until suddenly they’re not. Are we about
to hit an inflection point?

A lot of my blog posts on AI have been about how to build AI applications and
startups. In this post, I look a bit upstream at the world of AI research to try and
understand who’s doing what work, and what may be coming down the pipe from the
AI research labs. In particular, I was privileged to attend an incredible small-group
workshop ahead of the Canonical Computation in Brains and Machines conference held at NYU a
few weeks ago, which was particularly enlightening and informs some of the content in
this post.

Those are just my notes, intended for anyone in tech and startups generally curious about
AI, as opposed to a technical audience. Certainly a work in progress, and comments are
most welcome.

Here is what I have learned so far.

More AI research, resources and compute than ever to figure out AGI
A lot has been written about the explosion of startup activity in AI, with a reported
$15.2 billion of venture capital going to AI startups in 2017 (CB Insights), but the same
has also been happening upstream in AI research.
The overall number of research papers published on AI has increased dramatically since
2012 – to the point of generating projects like Arxiv Sanity Preserver, a browser to
access some 45,000+ papers, launched by Andrej Karpathy, “because things were
seriously getting out of hand”.

NIPS, a highly technical conference started in 1987, once a tiny and obscure event, had
8,000 participants in 2017.

AI research is an increasingly global effort. In addition to the “usual suspect” US
universities (e.g. the MIT CSAIL lab), some of the most advanced AI research centers are
located in Canada (particularly Toronto, with both University of Toronto and the new
Vector Institute, and Montreal, including MILA), Europe (London, Paris, Berlin), Israel
and, increasingly, China.

(Anecdotally, many in AI academia report increasingly meeting very impressive young
researchers, including some teenagers, who are incredibly technically proficient and
forward thinking in their research, presumably as a result of the democratization of AI
tools and education).

The other major recent trend has been that fundamental AI research has been
increasingly conducted in large Internet companies. The model of the company-
sponsored lab, of course, is not new – think Bell Labs. But it’s taken a new dimension
in AI research recently. Alphabet/Google have both DeepMind (a then startup acquired
in 2014, now a 700-person group focused largely on fundamental AI research, run by
Demis Hassabis) and Google Brain (started in 2011 by Jeff Dean, Greg Corrado and
Andrew Ng, with more focus on applied AI). Facebook has FAIR, headed up by Yann
LeCun, one of the fathers of deep learning. Microsoft has MSR AI. Uber has Uber AI
Labs, which came out of its acquisition of New York startup Geometric
Intelligence. Alibaba has Alibaba A.I. Labs, Baidu has Baidu Research and Tencent has
the Tencent AI Lab. The list goes on.

Those industry labs have deep resources and routinely pay millions to secure top
researchers. One of the recurring themes in conversations with AI researchers is that, if
it is hard for startups to attract students graduating with a PhD in machine learning, it’s
even harder for academia to retain them.

Many of those labs are pursuing, explicitly or implicitly, AGI.

In addition, AI research, particularly in those industry labs, has access to two key
resources at unprecedented levels: data and computing power.
The ever-increasing amount of data available to train AI has been well documented by
now, and indeed Internet giants like Google and Facebook have a big advantage when it
comes to developing broad horizontal AI solutions. Things are also getting
“interesting” in China where massive pools of data are being aggregated to train AI for
face recognition, with unicorn startups like Megvii (also known as Face++) and
SenseTime as beneficiaries. In 2017, a plan called Xue Liang (“sharp eyes”) was
announced that involved centrally pooling and processing footage from surveillance
cameras (both public and private) across over 50 Chinese cities. There are also rumors
of aggregation of data across the various Chinese Internet giants for purposes of AI
training.

Beyond data, another big shift that could precipitate AGI is a massive acceleration in
computing power, particularly over the last couple of years. This is a result of progress
both in terms of leveraging existing hardware, and building new high performance
hardware specifically for AI, resulting in progress at a faster pace than Moore’s law.

To rewind a bit, the team that won the ImageNet competition in 2012 (the event that
triggered much of the current wave of enthusiasm around AI) used 2 GPUs to train their
network model. This took 5 to 6 days, and was considered the state of the art. In 2017,
Facebook announced that it had been able to train ImageNet in one hour, using 256
GPUs. And mere months after it did, a Japanese team from Preferred Networks broke
that record, training ImageNet in 15 minutes with 1024 NVIDIA Tesla P100 GPUs.

But this could be a mere warm-up, as the world is now engaged in a race to produce
ever more powerful AI chips and the hardware that surrounds them. In 2017, Google
released the second generation of its Tensor Processing Units (TPUs), which are
designed specifically to speed up machine learning tasks. Each TPU can deliver 180
teraflops of performance (and be used for both inference and training of machine
learning models). Those TPUs can be clustered to produce super-computers – a 1,000
cloud TPU system is available to AI researchers willing to openly share their work.

There is also tremendous activity at the startup level, with heavily-funded emerging
hardware players like Cerebras, Graphcore, Wave Computing, Mythic and Lambda, as
well as Chinese startups Horizon Robotics, Cambricon and DeePhi.

Finally, there’s emerging hardware innovation around quantum computing and optical
computing. While still very early from a research standpoint, both Google and IBM
announced some meaningful progress in their quantum computing efforts, which would
take AI to yet another level of exponential acceleration.

The massive increase in computing power opens the door to training the AI with ever
increasing amounts of data. It also enables AI researchers to run experiments much
faster, accelerating progress and enabling the creation of new algorithms.

One of the key points that folks at OpenAI (Elon Musk’s nonprofit research lab) make is
that AI already surprised us with its power when the algorithms were running on
comparatively modest hardware a mere five years ago – who knows what will happen
with all this computing power? (see this excellent TWiML & AI podcast with Greg
Brockman, CTO of OpenAI.)

AI algorithms, old and new
The astounding resurrection of AI that effectively started around the 2012 ImageNet
competition has very much been propelled by deep learning. This statistical technique,
pioneered and perfected by several AI researchers including Geoff Hinton, Yann LeCun
and Yoshua Bengio, involves multiple layers of processing that gradually refine results
(see this 2015 Nature article for an in depth explanation). It is an old technique that
dates back to the 1960s, 1970s and 1980s, but it suddenly showed its power when fed
enough data and computing power.
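The core idea, a stack of layers each transforming the previous layer's output into a slightly more abstract representation, can be sketched in a few lines. Below is a toy forward pass with random, untrained weights, purely to illustrate the data flow (the layer sizes are arbitrary and chosen only for the example):

```python
import numpy as np

def relu(x):
    # A common nonlinearity: pass positives through, zero out negatives.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Run the input through a stack of layers; each layer re-represents
    the previous layer's output (the 'gradual refinement' in the text)."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Three layers of random weights: 4 input features -> 8 -> 8 -> 2 outputs.
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]

out = forward(rng.normal(size=(1, 4)), layers)
```

Training replaces those random weights with learned ones, typically via back-propagation; the forward structure stays the same.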

Deep learning powers just about every exciting AI product from Alexa to uses of AI in
radiology to the “hot dog or not” spoof product from HBO’s Silicon Valley. It has
proven remarkably effective at pattern recognition across a variety of problems – speech
recognition, image classification, object recognition and some language problems.

From an AGI perspective, deep learning has stirred imaginations because it does more
than what it was programmed to do, for example grouping images or words (like “New
York” and “USA”) around ideas, without having been explicitly told there was a
connection between such images or words (like “New York is located in the USA”). AI
researchers themselves don’t always know exactly why deep learning does what it
does.
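One way to make that "grouping around ideas" concrete: trained models place related words near each other in a vector space, so relatedness can be measured geometrically. The 2-D vectors below are made up for illustration; real embeddings are learned from raw text and typically have hundreds of dimensions:

```python
import numpy as np

# Hypothetical 2-D embeddings, invented for this example only.
emb = {
    "new_york": np.array([0.9, 0.8]),
    "usa":      np.array([1.0, 0.7]),
    "paris":    np.array([-0.8, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words the model has grouped around the same idea end up close together:
near = cosine(emb["new_york"], emb["usa"])
far = cosine(emb["new_york"], emb["paris"])
```

The striking part, as noted above, is that real models arrive at such geometry without ever being told "New York is located in the USA".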

Interestingly, however, as the rest of the world is starting to widely embrace deep
learning across a number of consumer and enterprise applications, the AI research
world is asking whether it is hitting diminishing returns. Geoff Hinton himself at a
conference in September 2017 questioned back-propagation, the backbone of neural
networks which he helped invent, and suggested starting over, which sent shockwaves
through the AI research world. A January 2018 paper by Gary Marcus presented ten concerns
for deep learning and suggested that “deep learning must be supplemented by other
techniques if we are to reach artificial general intelligence”.

Much of the discussion seems to have focused on “supervised” deep learning – the form
of deep learning that requires being shown large amounts of labeled examples to train
the machine on how to recognize similar patterns.

The AI research community now seems to agree that, if we are to reach AGI, efforts
need to focus more on unsupervised learning – the form of learning where the
machine gets trained without labeled data. There are many variations of unsupervised
learning, including autoencoders, deep belief networks and GANs.
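To make the supervised/unsupervised distinction concrete, here is a classic unsupervised method: k-means clustering. It is far simpler than the autoencoders and GANs named above, but illustrates the same idea, that the algorithm is given only raw points, no labels, and discovers the grouping by itself:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Unsupervised learning in miniature: group points given no labels."""
    # Seed centers from k evenly spaced points (real k-means uses smarter init).
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each point to its nearest center...
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated blobs of points, with no labels attached.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(5.0, 0.5, (20, 2))])
labels, centers = kmeans(pts, 2)
```

A supervised learner would instead be handed the blob memberships as training labels; here the structure is recovered from the data alone.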

GANs, or “generative adversarial networks”, are a much more recent method, directly
related to unsupervised deep learning, pioneered by Ian Goodfellow in 2014, then a PhD
student at University of Montreal. GANs work by creating a rivalry between two neural
nets, trained on the same data. One network (the generator) creates outputs (like photos)
that are as realistic as possible; the other network (the discriminator) compares the
photos against the data set it was trained on and tries to determine whether each
photo is real or fake; the first network then adjusts its parameters for creating new
images, and so on. GANs have had their own evolution, with multiple
versions of GAN appearing just in 2017 (WGAN, BEGAN, CycleGan, Progressive
GAN).
This last approach of progressively training GANs enabled Nvidia to generate high
resolution facial photos of fake celebrities.
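The generator/discriminator rivalry described above can be sketched on a toy 1-D problem. This is a minimal hand-rolled version with scalar parameters and manually derived gradients, nothing like a production GAN (which uses deep networks and automatic differentiation), but the adversarial loop has the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D setup: real data ~ N(4, 1); the generator g(z) = a*z + b starts far
# off; the discriminator d(x) = sigmoid(w*x + c) scores "realness".
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(size=32)
    fake = a * z + b

    # Discriminator step: ascend log d(real) + log(1 - d(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator step: ascend log d(fake), the non-saturating objective.
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# By now b has drifted from 0 toward the real mean: the generator's fakes
# look increasingly "real" to the discriminator.
```

Each side improves only because the other does, which is the rivalry the text describes.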

Another related area that has seen considerable acceleration is reinforcement learning
– a technique where the AI teaches itself how to do something by trying again and
again, separating good moves (that lead to rewards) from bad ones, and altering its
approach each time, until it masters the skill. Reinforcement learning is another
technique that goes back as far as the 1950s, and was considered for a long time an
interesting idea that didn’t work well. However, that all changed in late 2013 when
DeepMind, then an independent startup, taught an AI to play 22 Atari 2600 games,
including Space Invaders, at a superhuman level. In 2016, its AlphaGo, an AI trained
with reinforcement learning, beat the South Korean Go master Lee Sedol. Then just a
few months ago in December 2017, AlphaZero, a more generalized and powerful
version of AlphaGo used the same approach to master not just Go, but also chess and
shogi. Without any human guidance other than the game rules, AlphaZero taught itself
how to play chess at a master level in only four hours. Within 24 hours, AlphaZero was
able to defeat all state of the art AI programs in those 3 games (Stockfish, elmo and the
3-day version of AlphaGo).
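The trial-and-error loop of reinforcement learning can be shown with tabular Q-learning, a much simpler ancestor of the deep methods DeepMind used, on a made-up 5-state corridor where the agent must learn to walk right to reach a reward:

```python
import numpy as np

# Tabular Q-learning on a 5-state corridor: start at state 0, reward at state 4.
# Actions: 0 = step left, 1 = step right. Pure trial and error, no model given.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    for _ in range(100):                      # cap episode length
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))  # explore (or break ties)
        else:
            a = int(np.argmax(Q[s]))          # exploit what was learned
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0
        # Core update: nudge Q[s, a] toward reward + discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == 4:
            break

policy = np.argmax(Q[:4], axis=1)  # greedy policy learned from rewards alone
```

As the text notes, the agent has no idea it is "walking a corridor"; it only separates moves that led to reward from moves that did not.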

How close is AlphaZero to AGI? Demis Hassabis, the CEO of DeepMind, called
AlphaZero’s playstyle “alien”, because it would sometimes win with completely
counterintuitive moves like sacrifices. Seeing a computer program teach itself the most
complex human games to a world-class level in a mere few hours is an unnerving
experience that would appear close to a form of intelligence. One key counter-argument
in the AI community is that AlphaZero is an impressive exercise in brute
force: AlphaZero was trained via self-play using 5,000 first generation TPUs and 64
second generation TPUs; once trained it ran on a single machine with 4 TPUs. In
reinforcement learning, AI researchers point out that the AI has no idea what it is
actually doing (like playing a game) and is limited to the specific constraints that it was
given (the rules of the game). Here is an interesting blog post disputing whether
AlphaZero is a true scientific breakthrough.

When it comes to AGI, or even the success of machine learning in general, several
researchers have high hopes for transfer learning. Demis Hassabis of DeepMind, for
example, calls transfer learning “the key to general intelligence”. Transfer learning is a
machine learning technique where a model trained on one task is re-purposed on a
second related task. The idea is that with this precedent knowledge learned from the
first task, the AI will perform better, train faster and require less labeled data than a new
neural network trained from scratch on the second related task. Fundamentally, the
hope is that it can help AI be more “general” and hop from task to task and domain to
domain, particularly those where labeled data is less readily available (see a good
overview here).
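A minimal sketch of the mechanism, under a deliberately toy setup: a logistic model is trained on a data-rich task A, then its learned weights are frozen and only the bias (the "head") is refit on a small sample from a related task B:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_logistic(X, y, w=None, freeze_w=False, steps=500, lr=0.1):
    """Logistic regression by gradient descent. With freeze_w=True the
    feature weights are kept as-is and only the bias (the 'head') is refit."""
    n, d = X.shape
    w = np.zeros(d) if w is None else w.copy()
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        if not freeze_w:
            w -= lr * X.T @ (p - y) / n
        b -= lr * np.mean(p - y)
    return w, b

# Task A (lots of data): does the sum of the 10 inputs exceed 0?
XA = rng.normal(size=(500, 10)); yA = (XA.sum(axis=1) > 0).astype(float)
wA, bA = train_logistic(XA, yA)

# Task B (related, little data): does the sum exceed 2?
# Transfer: reuse task A's weights and refit only the bias on 50 examples.
XB = rng.normal(size=(50, 10)); yB = (XB.sum(axis=1) > 2).astype(float)
wB, bB = train_logistic(XB, yB, w=wA, freeze_w=True)
acc = float(np.mean((sigmoid(XB @ wB + bB) > 0.5) == yB))
```

Because the two tasks share structure, the transferred model needs far less labeled data; the open research problem, per the text, is making this work when the tasks are far apart.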

For transfer learning to lead to AGI, the AI would need to be able to do transfer learning
across increasingly far apart tasks and domains, which would require increasing
abstraction. According to Hassabis “the key to doing transfer learning will be the
acquisition of conceptual knowledge that is abstracted away from perceptual details of
where you learned it from”. We’re not quite there as of yet. Transfer learning has been
mostly challenging to make work – it works well when the tasks are closely related, but
becomes much more complex beyond that. But this is a key area of focus for AI
research. DeepMind made significant progress with its PathNet project (see a good
overview here), a network of neural networks. As another example of interest from the
field, just a few days ago, OpenAI launched a transfer learning contest that measures a
reinforcement learning algorithm’s ability to generalize from previous experience. The
algorithms will be tested against 30 SEGA “old school” video games.

Recursive Cortical Networks (RCNs) are yet another promising approach. Developed
by Silicon Valley startup Vicarious, RCNs were recently used to solve text-based
CAPTCHAs with a high accuracy rate using significantly less data than their
counterparts – 300x less in the case of a scene text recognition benchmark (see
Science article, December 8, 2017).

There are many more methods being contemplated, developed or re-explored in light of
the most recent technological progress, including in no particular order: Geoff Hinton’s
capsule networks or CapsNets (approachable explanation involving Kim Kardashian
here), neural attention models (approachable explanation without Kim Kardashian
here), one shot learning, differentiable neural computers (DNC), neuroevolution,
evolutionary strategies,… the list goes on, as further testament to the explosive vitality
of AI research.

The fusion of AI and neuroscience


All the techniques described so far are essentially mathematical and statistical in nature
and rely on a lot of computing power and/or data to reach success. While considerable
prowess has been displayed in creating and improving such algorithms, a common
criticism against those methods is that machines are still not able to start from, or learn,
principles. AlphaZero doesn’t know it is playing a game, or what a game is, for that
matter.

A growing line of thinking in research is to rethink core principles of AI in light of how
the human brain works, including in children. While originally inspired by the human
brain (hence the term “neural”), neural networks separated pretty quickly from biology
– a common example is that back propagation doesn’t have an equivalent in nature.

Teaching a machine how to learn like a child is one of the oldest ideas of AI, going
back to Turing and Minsky in the 1950s, but progress is being made as both the field of
artificial intelligence and the field of neuroscience are maturing.

This intersection of AI and neuroscience was very much the theme of the “Canonical
Computation in Brains and Machines” workshop I alluded to earlier. While both fields
are still getting to know each other, it was clear that some of the deepest AI thinkers are
increasingly focused on neuroscience inspired research, including deep learning
godfathers Yann LeCun (video: What are the principles of learning in newborns?) and
Yoshua Bengio (video: Bridging the gap between deep learning and neuroscience).

A particularly promising line of research comes from Josh Tenenbaum, a professor of
Cognitive Science and Computation at MIT. A key part of Tenenbaum’s work has
been to focus on building quantitative models of how an infant or child learns
(including in her sleep!), as opposed to what she inherits from evolution, in particular
what he calls “intuitive physics” and “intuitive psychology”. His work has been
propelled by progress in probabilistic languages (part of the Bayesian world) that
incorporate a variety of methods such as symbolic languages for knowledge
representation, probabilistic inference for reasoning under uncertainty and neural
networks for pattern recognition. (Videos: “Building machines that learn and think like
people” and “Building machines that see, learn, and think like people”)
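As a toy illustration of the "probabilistic inference for reasoning under uncertainty" ingredient: this is textbook Bayes rule on a made-up coin example, not Tenenbaum's actual models, which compose such inferences into much richer probabilistic programs:

```python
# Two hypotheses about a coin, with P(heads) under each, and a uniform prior.
hypotheses = {"fair": 0.5, "loaded": 0.9}
posterior = {"fair": 0.5, "loaded": 0.5}

flips = ["H", "H", "H", "T", "H", "H"]  # observed evidence

# Bayes rule, one observation at a time: multiply each hypothesis's
# probability by the likelihood of the observation, then renormalize.
for flip in flips:
    for h, p_heads in hypotheses.items():
        posterior[h] *= p_heads if flip == "H" else 1 - p_heads
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)  # the best explanation of the flips
```

The appeal for AGI research is that this style of reasoning works from principles and small amounts of evidence, rather than from millions of labeled examples.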

While MIT just launched in February an initiative called MIT Intelligence Quest to help
“crack the code of intelligence” with a combination of neuroscience, cognitive science,
and computer science, all of this is still very much lab research and will most likely
require significant patience to produce results applicable to the real world and industry.

Conclusion
So, how far are we from AGI? This high level tour shows contradictory trends. On the
one hand, the pace of innovation is dizzying — many of the developments and stories
mentioned in this piece (AlphaZero, new versions of GANs, capsule networks, RCNs
breaking CAPTCHA, Google’s 2nd generation of TPUs, etc.) occurred just in the last
12 months, in fact mostly in the last 6 months. On the other hand, many in the AI
research community, while actively pursuing AGI, go to great lengths to emphasize how
far we still are – perhaps out of concern that the media hype around AI may lead to
dashed hopes and yet another AI nuclear winter.

Regardless of whether we get to AGI in the near term or not, it is clear that AI is getting
vastly more powerful, and will get even more so as it runs on ever more powerful
computers, which raises legitimate concerns about what would happen if its power was
left in the wrong hands (whether human or artificial). One chilling point that Elon
Musk was making in the “Do you trust this computer?” documentary was that AI didn’t
even need to want to be hostile to humans, or even know what humans are, for that
matter. In its relentless quest to complete a task by all means, it could be harmful to
humans just because they happened to be in the way, like roadkill.

Leaving aside physical harm, progress in AI leads to a whole series of more immediate
dangers that need to be thoroughly thought through – from significant job losses across
large industries (back offices, trucking) to a complete distortion of our sense of reality
(when fake videos and audio can be easily created).

Photo / chart credits: GOOGLE/CONNIE ZHOU. A row of servers in Google’s data
center (with a cooling system powered by Google Brain) in Douglas County, Ga.
Arxiv-sanity chart found on this blog post.

