
MURANG’A UNIVERSITY OF TECHNOLOGY

COURSE OUTLINE

Unit Code: SCS302: ARTIFICIAL INTELLIGENCE


Department: COMPUTER SCIENCE
Lecturer’s Name: Mr. Muhindi
Lecturer's Tel. No.: 0716123617    Email Address: georgemuhindi@mut.ac.ke
Contact Hours: 45
Semester: II    Academic Year: 2023/2024
Pre-requisites: None
Purpose
To provide an overview of and introduction to the field of Artificial Intelligence. Notions of
rational behavior and intelligent agents will be discussed, with emphasis on
understanding the fundamental concepts and on being able to apply the
corresponding approaches to solving practical problems.
Expected Learning Outcomes of the course
By the end of this course unit, the learners should be able to:
1. Give an overview of the main issues in Artificial Intelligence.
2. Define the main sub-disciplines of Artificial Intelligence.
3. Describe the key ideas and concepts of each sub-discipline of Artificial Intelligence.
4. Appreciate symbolic systems, their processing and application.
Course Content:
Definition of Artificial Intelligence. Problems, problem spaces and search. Knowledge
representation issues. Introduction to knowledge-based and experiential systems.
Intelligent agents and distributed Artificial Intelligence. Introduction to game playing.
Introduction to planning. Introduction to robotics. Introduction to natural language
understanding and speech recognition. Introduction to expert systems. Introduction to
learning and adaptive systems. Introduction to vision. Use of an AI language to write
programs.

Mode of Delivery
Lectures, demonstrations, Group/class discussions and practical exercises

Instructional Materials/Equipment
Computers, Learning Management System, writing boards, writing materials, projectors,
etc.
Course Assessment
Type of Assessment      Weighting
C.A.T 1                 10%
C.A.T 2                 10%
Assignment              10%
Examination             70%
Total Scores            100%
Core Reading Materials for the Course
1. Russell, S. J. & Norvig, P. (2009), Artificial Intelligence: A Modern Approach,
Prentice-Hall Inc.
2. Luger, G. (2009), Artificial Intelligence: Structures and Strategies for Complex
Problem Solving, 6th ed., Addison Wesley.
Recommended Reference Materials
3. Jones, T. M. (2008), Artificial Intelligence: A Systems Approach, Infinity Science Press.
4. Negnevitsky, M. (2004), Artificial Intelligence: A Guide to Intelligent
Systems, 2nd ed., Addison Wesley, Harlow, UK. ISBN 0321204662.
5. Luger, G. F. (2009), Artificial Intelligence, 6th ed., Addison Wesley. ISBN 9780321545893.
6. Journal of Computing Sciences in Colleges, Volume 25, Issue 4, April 2010.

Week 1: ARTIFICIAL INTELLIGENCE - OVERVIEW
 Defining Artificial Intelligence
 Philosophy of AI
 Goals of AI
 Understanding AI Technique
 Applications of AI

Week 2: ARTIFICIAL INTELLIGENCE - INTELLIGENT SYSTEMS
 Understanding Intelligence
 Types of Intelligence
 Composition of Intelligence
o Reasoning
o Learning
o Problem Solving
o Perception
o Linguistic Intelligence
 Difference between Human and Machine Intelligence

Week 3: CAT 1 - Writing Continuous Assessment Test 1

Week 4: ARTIFICIAL INTELLIGENCE - RESEARCH AREAS
 Real-Life Applications of Research Areas
o Expert Systems
o Natural Language Processing
o Neural Networks
o Robotics
o Fuzzy Logic Systems
o Speech and Voice Recognition
 Task Classification of AI

Week 5: AI - AGENTS & ENVIRONMENTS
 Understanding Agent and Environment
 The Structure of Intelligent Agents
 The Nature of Environments
 Turing Test
 Properties of Environment

Week 6: AI - POPULAR SEARCH ALGORITHMS
 Single-Agent Pathfinding Problems
 Search Terminology
 Brute-Force Search Strategies
 Breadth-First Search
 Depth-First Search
 Informed (Heuristic) Search Strategies

Week 7: ARTIFICIAL INTELLIGENCE - FUZZY LOGIC SYSTEMS
 Understanding Fuzzy Logic
 Fuzzy Logic Systems Architecture
 Algorithm and Development
 Application Areas of Fuzzy Logic

Week 8: AI - NATURAL LANGUAGE PROCESSING
 Components of NLP
 Steps in NLP
 Implementation Aspects of Syntactic Analysis

Week 9: CAT 2 - Writing Continuous Assessment Test 2

Weeks 9 & 10: ARTIFICIAL INTELLIGENCE - EXPERT SYSTEMS
 What are Expert Systems?
 Characteristics of Expert Systems
 Capabilities of Expert Systems
 Components of Expert Systems
 Applications of Expert Systems
 Expert System Technology
 Development of Expert Systems: General Steps

Week 11: ARTIFICIAL INTELLIGENCE - ROBOTICS
 What is Robotics?
 Aspects of Robotics
 Difference between a Robot System and Other AI Programs
 Components of a Robot
 Computer Vision
 Hardware of a Computer Vision System
 Tasks of Computer Vision
 Application Domains of Computer Vision
 Applications of Robotics

Week 12: ARTIFICIAL INTELLIGENCE - NEURAL NETWORKS
 What are Artificial Neural Networks?
 Basic Structure of ANNs
 Types of Artificial Neural Networks
 Machine Learning in ANNs
 Back Propagation Algorithm
 Bayesian Networks
 Applications of Neural Networks

ARTIFICIAL INTELLIGENCE - ISSUES
 Threat to Privacy
 Threat to Human Dignity
 Threat to Safety

Week 13: REVISION

Week 14: FINAL EXAMINATION
SCS 302: ARTIFICIAL INTELLIGENCE
LECTURE 1: INTRODUCTION
GOALS OF THIS COURSE
 This class is a broad introduction to artificial
intelligence (AI)

 AI is a very broad field with many subareas


 We will cover many of the primary concepts/ideas
 But in one semester we can't cover everything
TODAY’S LECTURE
 What is intelligence? What is artificial intelligence?

 A very brief history of AI


 Modern successes: Stanley the driving robot; Deep Blue the
chess player; IBM's Watson computer

 An AI scorecard
 How much progress has been made in different aspects of
AI

 AI in practice
 Successful applications

 The rational agent view of AI

WHAT IS INTELLIGENCE?
 Intelligence:
 "the capacity to learn and solve problems" (Webster's
Dictionary)
 in particular,
 the ability to solve novel problems
 the ability to act rationally

 the ability to act like humans

 Artificial Intelligence
 build and understand intelligent entities or agents
 2 main approaches: "engineering" versus "cognitive
modeling"
TYPES OF INTELLIGENCE
According to Howard Gardner’s multiple intelligence theory, there
are various types of intelligence viz:
 General intelligence: -
 Abilities that allow us to be flexible and adaptive thinkers, not
necessarily tied to acquired knowledge.
 Linguistic-verbal intelligence: -
 Use words and language in various forms / Ability to manipulate
language to express oneself poetically
 Logical-Mathematical intelligence: -
 Ability to detect patterns / Approach problems logically / Reason
deductively
 Musical intelligence: -
 Recognize nonverbal sounds: pitch, rhythm, and tonal patterns
 Spatial intelligence: -
 Typically thinks in images and pictures / Used in both arts and
sciences
TYPES OF INTELLIGENCE (2)
 Intrapersonal intelligence: -
 Ability to understand oneself, including feelings and
motivations / Can discipline themselves to accomplish a
wide variety of tasks
 Interpersonal intelligence: -
 Ability to "read people"—discriminate among other
individuals especially their moods, intentions, motivations;
/ Adept at group work, typically assume a leadership role.
 Naturalist intelligence: -
 Ability to recognize and classify living things like plants,
animals
 Bodily-Kinesthetic intelligence: -
 Use one's mental abilities to coordinate one's own bodily
movements
TYPES OF INTELLIGENCE (3)
Note:
 Understanding the various types of intelligence
provides theoretical foundations for recognizing
different talents and abilities in people

 "What makes life interesting, however, is that we


don’t have the same strength in each intelligence
area, and we don’t have the same amalgam of
intelligences. Just as we look different from one
another and have different kinds of personalities,
we also have different kinds of minds."
WHAT IS AI? (2)
 There is no agreed definition of the term artificial intelligence.
However, there are various definitions that have been proposed.
These are considered below.
 AI is a study in which computer systems are made that think like
human beings (Haugeland, 1985; Bellman, 1978).
 AI is a study in which computer systems are made that act like
people.
 AI is the art of creating computers that perform functions that
require intelligence when performed by people (Kurzweil, 1990).
 AI is the study of how to make computers do things which, at the
moment, people are better at (Rich & Knight).
 AI is the study of computations that make it possible to perceive,
reason and act (Winston, 1992).
 AI is considered to be a study that seeks to explain and emulate
intelligent behaviour in terms of computational processes (Schalkoff, 1990).
 AI is considered to be a branch of computer science that is concerned
with the automation of intelligent behavior (Luger & Stubblefield, 1993).
WHAT IS AI? (3)
 Artificial Intelligence is the development of
systems that exhibit the characteristics we
associate with intelligence in human behavior;
 perception,
 natural language processing,
 reasoning,
 planning,
 problem solving,
 learning and adaptation,
 etc.

WHAT’S INVOLVED IN INTELLIGENCE?
 Ability to interact with the real world
 to perceive, understand, and act
 e.g., speech recognition and understanding and synthesis
 e.g., image understanding
 e.g., ability to take actions, have an effect

 Reasoning and Planning


 modeling the external world, given input
 solving new problems, planning, and making decisions
 ability to deal with unexpected problems, uncertainties

 Learning and Adaptation


 we are continuously learning and adapting
 our internal models are always being "updated"
 e.g., a baby learning to categorize and recognize animals
ACADEMIC DISCIPLINES RELEVANT TO AI
 Philosophy: logic, methods of reasoning, mind as physical
system, foundations of learning, language, rationality
 Mathematics: formal representation and proof, algorithms,
computation, (un)decidability, (in)tractability
 Probability/Statistics: modeling uncertainty, learning from data
 Economics: utility, decision theory, rational economic agents
 Neuroscience: neurons as information processing units
 Psychology/Cognitive Science: how people behave, perceive,
process cognitive information, represent knowledge
 Computer engineering: building fast computers
 Control theory: designing systems that maximize an objective
function over time
 Linguistics: knowledge representation, grammars
HISTORY OF AI
 1943: early beginnings
 McCulloch & Pitts: Boolean circuit model of brain

 1950: Turing
 Turing's "Computing Machinery and Intelligence"

 1956: birth of AI
 Dartmouth meeting: "Artificial Intelligence" name adopted

 1950s: initial promise


 Early AI programs, including
 Samuel's checkers program
 Newell & Simon's Logic Theorist

 1955-65: “great enthusiasm”


 Newell and Simon: GPS, general problem solver
 Gelertner: Geometry Theorem Prover
 McCarthy: invention of LISP
HISTORY OF AI
 1966—73: Reality dawns
 Realization that many AI problems are intractable
 Limitations of existing neural network methods identified
 Neural network research almost disappears

 1969—85: Adding domain knowledge


 Development of knowledge-based systems
 Success of rule-based expert systems,
 E.g., DENDRAL, MYCIN
 But were brittle and did not scale well in practice

 1986-- Rise of machine learning


 Neural networks return to popularity
 Major advances in machine learning algorithms and applications

 1990-- Role of uncertainty


 Bayesian networks as a knowledge representation framework

 1995-- AI as Science
 Integration of learning, reasoning, knowledge representation
 AI methods used in vision, language, data mining, etc
SUCCESS STORIES
 Deep Blue defeated the reigning world chess champion
Garry Kasparov in 1997

 AI program proved a mathematical conjecture (Robbins


conjecture) unsolved for decades

 During the 1991 Gulf War, US forces deployed an AI


logistics planning and scheduling program that involved up
to 50,000 vehicles, cargo, and people

 NASA's on-board autonomous planning program controlled


the scheduling of operations for a spacecraft

 Proverb solves crossword puzzles better than most


humans

 Robot driving: DARPA grand challenge 2003-2007

 2006: face recognition software available in consumer
cameras
EXAMPLE: DARPA GRAND CHALLENGE
 Grand Challenge
 Cash prizes ($1 to $2 million) offered to first robots to
complete a long course completely unassisted
 Stimulates research in vision, robotics, planning, machine
learning, reasoning, etc

 2004 Grand Challenge:


 150 mile route in Nevada desert
 Furthest any robot went was about 7 miles
 … but hardest terrain was at the beginning of the course

 2005 Grand Challenge:


 132 mile race
 Narrow tunnels, winding mountain passes, etc
 Stanford 1st, CMU 2nd, both finished in about 6 hours

 2007 Urban Grand Challenge
 November in Victorville, California
HAL: FROM THE MOVIE 2001

 2001: A Space Odyssey
 classic science fiction movie from 1968

 HAL
 part of the story centers around an intelligent computer
called HAL
 HAL is the “brains” of an intelligent spaceship
 in the movie, HAL can
 speak easily with the crew

 see and understand the emotions of the crew

 navigate the ship automatically

 diagnose on-board problems

 make life-and-death decisions

 display emotions

 In 1968 this was science fiction: is it still science
fiction?
HAL AND AI

 HAL’s Legacy: 2001’s Computer as Dream and


Reality
 MIT Press, 1997, David Stork (ed.)
 discusses
 HAL as an intelligent computer
 are the predictions for HAL realizable with AI today?

 The website contains


 full text and abstracts of chapters from the book
 links to related material and AI information
 sound and images from the film
CONSIDER WHAT MIGHT BE INVOLVED IN
BUILDING A COMPUTER LIKE HAL….

 What are the components that might be useful?


 Fast hardware?
 Chess-playing at grandmaster level?
 Speech interaction?
 speech synthesis
 speech recognition

 speech understanding

 Image recognition and understanding ?


 Learning?
 Planning and decision-making?

CAN WE BUILD HARDWARE AS COMPLEX
AS THE BRAIN?

 How complicated is our brain?
 a neuron, or nerve cell, is the basic information processing unit
 estimated to be on the order of 10^12 neurons in a human brain
 many more synapses (10^14) connecting these neurons
 cycle time: 10^-3 seconds (1 millisecond)

 How complex can we make computers?
 10^8 or more transistors per CPU
 supercomputer: hundreds of CPUs, 10^12 bits of RAM
 cycle times: order of 10^-9 seconds

 Conclusion
 YES: in the near future we can have computers with as many
basic processing elements as our brain, but with
 far fewer interconnections (wires or synapses) than the brain
 much faster updates than the brain
 but building hardware is very different from making a computer
behave like a brain!
CAN COMPUTERS TALK?
 This is known as “speech synthesis”
 translate text to phonetic form
 e.g., “fictitious” -> fik-tish-es
 use pronunciation rules to map phonemes to actual sound
 e.g., “tish” -> sequence of basic audio sounds
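As a toy illustration of this lookup pipeline (the mini-lexicon and wave identifiers below are invented for illustration, not from a real TTS system):

# Toy "lookup" synthesis: word -> phonemes -> prerecorded audio units.
# LEXICON and AUDIO are invented placeholders for illustration only.
LEXICON = {"fictitious": ["fik", "tish", "es"]}
AUDIO = {"fik": "<wave_001>", "tish": "<wave_017>", "es": "<wave_009>"}

def synthesize(text):
    sounds = []
    for word in text.lower().split():
        for phoneme in LEXICON.get(word, ["?"]):
            sounds.append(AUDIO.get(phoneme, "<silence>"))
    return sounds

print(synthesize("fictitious"))  # ['<wave_001>', '<wave_017>', '<wave_009>']

Because each unit is looked up independently of its neighbours, the output ignores coarticulation, which is one reason this approach sounds unnatural.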

 Difficulties
 sounds made by this “lookup” approach sound unnatural
 sounds are not independent
 e.g., “act” and “action”
 modern systems (e.g., at AT&T) can handle this pretty well
 a harder problem is emphasis, emotion, etc
 humans understand what they are saying
 machines don’t: so they sound unnatural

 Conclusion:
 NO, for complete sentences
 YES, for individual words
CAN COMPUTERS RECOGNIZE SPEECH?
 Speech Recognition:
 mapping sounds from a microphone into a list of words
 classic problem in AI, very difficult
 "Let's talk about how to wreck a nice beach"

 (I really said "Let's talk about how to recognize speech")

 Recognizing single words from a small vocabulary


 systems can do this with high accuracy (order of 99%)
 e.g., directory inquiries
 limited vocabulary (area codes, city names)

 computer tries to recognize you first, if unsuccessful hands


you over to a human operator
 saves millions of dollars a year for the phone companies
RECOGNIZING HUMAN SPEECH (CTD.)
 Recognizing normal speech is much more difficult
 speech is continuous: where are the boundaries between
words?
 e.g., “John’s car has a flat tire”
 large vocabularies
 can be many thousands of possible words
 we can use context to help figure out what someone said
 e.g., hypothesize and test

 try telling a waiter in a restaurant:


“I would like some dream and sugar in my coffee”
 background noise, other speakers, accents, colds, etc
 on normal speech, modern systems are only about 60-70%
accurate

 Conclusion:
 NO, normal speech is too complex to accurately recognize
 YES, for restricted problems (small vocabulary, single
speaker)
CAN COMPUTERS UNDERSTAND SPEECH?
 Understanding is different to recognition:
 “Time flies like an arrow”
 assume the computer can recognize all the words
 how many different interpretations are there?
 1. time passes quickly like an arrow?

 2. command: time the flies the way an arrow times the flies

 3. command: only time those flies which are like an arrow

 4. “time-flies” are fond of arrows

 only 1. makes any sense,
 but how could a computer figure this out?
 clearly humans use a lot of implicit commonsense knowledge
in communication

 Conclusion: NO, much of what we say is beyond the
capabilities of a computer to understand at present
CAN COMPUTERS LEARN AND ADAPT ?
 Learning and Adaptation
 consider a computer learning to drive on the freeway
 we could teach it lots of rules about what to do
 or we could let it drive and steer it back on course when it
heads for the embankment
 systems like this are under development (e.g., Daimler Benz)
 e.g., RALPH at CMU
 in the mid-90s it drove 98% of the way from Pittsburgh to San
Diego without any human assistance
 machine learning allows computers to learn to do things
without explicit programming
 many successful applications:
 requires some “set-up”: does not mean your PC can learn to
forecast the stock market or become a brain surgeon

 Conclusion: YES, computers can learn and adapt, when


presented with information in the appropriate way
CAN COMPUTERS “SEE”?
 Recognition v. Understanding (like Speech)
 Recognition and Understanding of Objects in a scene
 look around this room
 you can effortlessly recognize objects
 human brain can map 2d visual image to 3d “map”

 Why is visual recognition a hard problem?

 Conclusion:
 mostly NO: computers can only “see” certain types of objects
under limited circumstances
 YES for certain constrained problems (e.g., face recognition)
CAN COMPUTERS PLAN AND MAKE
OPTIMAL DECISIONS?

 Intelligence
 involves solving problems and making decisions and plans
 e.g., you want to take a holiday in Brazil
 you need to decide on dates, flights

 you need to get to the airport, etc

 involves a sequence of decisions, plans, and actions

 What makes planning hard?


 the world is not predictable:
 your flight is canceled or there’s a backup on the 405

 there are a potentially huge number of details


 do you consider all flights? all dates?

 no: commonsense constrains your solutions

 AI systems are only successful in constrained planning problems

 Conclusion: NO, real-world planning and decision-making is still beyond the
capabilities of modern computers
 exception: very well-defined, constrained problems
SUMMARY OF STATE OF AI SYSTEMS IN
PRACTICE
 Speech synthesis, recognition and understanding
 very useful for limited vocabulary applications
 unconstrained speech understanding is still too hard

 Computer vision
 works for constrained problems (hand-written zip-codes)
 understanding real-world, natural scenes is still too hard

 Learning
 adaptive systems are used in many applications: have their limits

 Planning and Reasoning


 only works for constrained problems: e.g., chess
 real-world is too complex for general systems

 Overall:
 many components of intelligent systems are "doable"
 there are many interesting research problems remaining
INTELLIGENT SYSTEMS IN YOUR EVERYDAY LIFE

 Post Office
 automatic address recognition and sorting of mail

 Banks
 automatic check readers, signature verification systems
 automated loan application classification

 Customer Service
 automatic voice recognition

 The Web
 Identifying your age, gender, location, from your Web surfing
 Automated fraud detection

 Digital Cameras
 Automated face detection and focusing

 Computer Games
 Intelligent characters/agents
AI APPLICATIONS: MACHINE TRANSLATION
 Language problems in international business
 e.g., at a meeting of Japanese, Korean, Vietnamese and Swedish investors, no
common language
 or: you are shipping your software manuals to 127 countries
 solution: hire translators to translate
 would be much cheaper if a machine could do this

 How hard is automated translation


 very difficult! e.g., English to Russian
 “The spirit is willing but the flesh is weak” (English)

 “the vodka is good but the meat is rotten” (Russian)

 not only must the words be translated, but their meaning also!
 is this problem “AI-complete”?

 Nonetheless....
 commercial systems can do a lot of the work very well (e.g., restricted vocabularies in
software documentation)
 algorithms which combine dictionaries, grammar models, etc.
 Recent progress using "black-box" machine learning techniques
AI AND WEB SEARCH

WHAT’S INVOLVED IN INTELLIGENCE?
(AGAIN)
 Perceiving, recognizing, understanding the real
world

 Reasoning and planning about the external world

 Learning and adaptation

 So what general principles should we use to
achieve these goals?
CHARACTERISTICS OF AI
 Symbolic Processing
 AI emphasizes the manipulation of symbols rather than numbers.
 The manner in which symbols are processed is non-algorithmic,
since most human reasoning processes do not necessarily follow
a step-by-step (algorithmic) approach.
 Heuristics
 Proceeding to a solution by trial and error, or by rules that are
only loosely defined.
 Similar to rules of thumb: you need not rethink completely
what to do every time a similar problem is encountered.
 Inferencing
 Reasoning with facts and rules, using heuristics or some search
strategy (see the sketch below).
 In modern machine learning, inference is the process of running
live data through a trained model: a test of how well the model
can apply information learned during training to make a
prediction or solve a task.
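A minimal sketch of this rule-based style of inferencing, via forward chaining in Python (the rules and facts are invented for illustration):

# Forward chaining: repeatedly fire any rule whose conditions all hold,
# adding its conclusion as a new fact, until nothing new can be derived.
# The rules and facts here are illustrative only.
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_rash", "unvaccinated"}, RULES)
# derived now also contains 'suspect_measles' and 'recommend_isolation'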
CHARACTERISTICS OF AI (2)
 Pattern matching
 A process of describing objects, events or processes in terms
of their qualitative features and logical and computational
relationships.
 Knowledge Processing
 Knowledge consists of facts, concepts, theories, heuristics
methods, procedures and relationships.
 Knowledge bases
 A collection of knowledge related to a problem or an
opportunity, used in problem solving.
 Reasoning occurs based on this knowledge base.
CONTRASTING AI WITH NATURAL
INTELLIGENCE
 Important commercial advantages of AI are:-
1) AI is permanent as long as computer system and
programs remain unchanged
2) AI offers ease of duplications and dissemination as
compared to long apprenticeship for natural
intelligence.
3) AI can be less expensive than natural intelligence.
4) AI being a computer system is consistent and
thorough; natural intelligence may be erratic since
people are erratic, they don’t perform consistently.
5) AI can execute certain tasks much faster than
humans can.
6) AI can perform certain tasks better than many or
even most people.
CONTRASTING AI WITH NATURAL
INTELLIGENCE (2)
Natural Intelligence has the following advantages
1) Natural intelligence is creative while AI is
uninspired; humans ultimately determine the
knowledge an AI system has.
2) Natural intelligence enables people to benefit
from use of sensory experience directly, while
most AI systems must work with symbolic
knowledge.

DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

1. Modeling exactly how humans actually think

2. Modeling exactly how humans actually act

3. Modeling how ideal agents “should think”

4. Modeling how ideal agents “should act”

 Modern AI focuses on the last definition
 we will also focus on this "engineering" approach
 success is judged by how well the agent performs
ACTING HUMANLY: TURING TEST
 Turing (1950) "Computing Machinery and Intelligence"

 "Can machines think?" → "Can machines behave intelligently?"

 Operational test for intelligent behavior: the Imitation Game

 Suggests major components required for AI:

 Natural language processing - to enable it to communicate successfully in


English
 knowledge representation - to store what it knows or hears
 Automated reasoning - to use the stored information to answer questions
and to draw new conclusions
 Machine learning -to adapt to new circumstances and to detect and
extrapolate patterns.
* Question: is it important that an intelligent system act like a human?
TURING TEST FOR INTELLIGENCE
 Tests the ability of a computer system to act
humanly
 The aim is to determine if the human
interrogator thinks he/she is communicating with
a human.
 To pass Turing Test the computer must:
 Process natural language;
 Represent knowledge;
 Reason;
 Learn and adapt to new situations.

 Total Turing test included vision & robotics.


TURING TEST (THE IMITATION GAME) - 1950
 The Total Turing Test uses a video signal so that the
interrogator can test the subject's perceptual
abilities, as well as the opportunity for the
interrogator to pass physical objects "through the
hatch." To pass the Total Turing Test, the
computer will need:
 computer vision to perceive objects
 robotics to manipulate objects and move about
THINKING HUMANLY
 Cognitive Science approach
 Try to get “inside” our minds
 E.g., conduct experiments with people to try to "reverse-
engineer" how we reason, learn, remember, predict
 The interdisciplinary field of cognitive science brings
together computer models from AI and experimental
techniques from psychology to construct precise and
testable theories of the human mind.

 Problems
 Humans don’t behave rationally
 e.g., insurance

 The reverse engineering is very hard to do

 The brain's hardware is very different to a computer
program
THINKING RATIONALLY
 The “laws of thought” approach
 Represent facts about the world via logic
 The Greek philosopher Aristotle was one of the first to attempt
to codify “right thinking,” that is, irrefutable reasoning
processes.
 Use logical inference as a basis for reasoning about these facts

 Can be a very useful approach to AI


 E.g., theorem-provers

 Limitations
 Does not account for an agent’s uncertainty about the world
 E.g., difficult to couple to vision or speech systems

 Has no way to represent goals, costs, etc (important aspects of
real-world environments)
 There are two main obstacles to this approach.
 First, it is not easy to take informal knowledge
and state it in the formal terms required by
logical notation, particularly when the knowledge
is less than 100% certain.
 Second, there is a big difference between solving
a problem “in principle” and solving it in practice.
Even problems with just a few hundred facts can
exhaust the computational resources of any
computer unless it has some guidance as to
which reasoning steps to try first.
ACTING RATIONALLY
 Decision theory/Economics
 Set of future states of the world
 Set of possible actions an agent can take
 Utility = gain to an agent for each action/state pair

 An agent acts rationally if it selects the action that


maximizes its “utility”
 Or expected utility if there is uncertainty

 Emphasis is on autonomous agents that behave
rationally (make the best predictions, take the best
actions)
 on average over time
 within computational limitations ("bounded rationality")
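A minimal sketch of this decision-theoretic view in Python (the actions, outcome probabilities and utilities are invented for illustration): the agent picks the action a maximizing the expected utility, i.e. the sum over states s of P(s | a) * U(s).

# Expected-utility maximization: pick argmax_a sum_s P(s|a) * U(s).
# The probabilities and utilities below are illustrative only.
actions = {
    "carry_umbrella": {"dry": 0.95, "wet": 0.05},  # P(state | action)
    "no_umbrella":    {"dry": 0.60, "wet": 0.40},
}
utility = {"dry": 10.0, "wet": -20.0}              # U(state)

def expected_utility(action):
    return sum(p * utility[s] for s, p in actions[action].items())

best = max(actions, key=expected_utility)
print(best, expected_utility(best))  # carry_umbrella 8.5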
TWO VIEWS OF AI
SYMBOLIC AI
 Based on Newell & Simon's Physical Symbol System
Hypothesis
 Uses logical operations applied to declarative
knowledge bases in FOPL (First-Order Predicate Logic)
 Commonly referred to as "Classical AI"
 Represents knowledge about a problem as a set of
declarative sentences in FOPL
 Then logical reasoning methods are used to deduce
consequences
 Another name for this type of approach is "the
knowledge-based" approach
 The symbol processing approach uses "top-down" design of
intelligent behavior.
SUB-SYMBOLIC APPROACH
 Based on the Physical Grounding Hypothesis
 “bottom-up” style
 Starting at the lowest layers and working upward.
 In the sub-symbolic approach signals are generally used
rather than symbols
 Proponents believe that the development of machine
intelligence must follow many of the same evolutionary
steps.
 Sub-symbolic approaches rely primarily on interaction
between machine and environment. This interaction
produces an emergent behavior (evolutionary robotics,
Nordin, Lund)
 Some other sub-symbolic approaches are: Evolutionary
Computation, Artificial Immune Systems, and Neural
Networks
MODELLING AN AI SYSTEM
 A typical AI system consists of three subsystems,
i.e.,
 Perception Subsystem
 Reasoning Subsystem
 Action Subsystem(made of actuators/effectors)

AI APPLICATIONS
AI APPLICATION AREAS
 Game Playing
 Much of the early research in state space
search was done using common board games
such as checkers, chess, and the 15-puzzle
 Games can generate extremely large search
spaces. These are large and complex enough
to require powerful techniques for determining
what alternative to explore
 Modern games like God of War Ragnarok and
Call of Duty also rely on AI techniques
AI APPLICATION AREAS
 Automated reasoning and Theorem
Proving
 Theorem-proving is one of the most fruitful
branches of the field
 Theorem-proving research was responsible for
formalizing search algorithms and developing
formal representation languages such as
predicate calculus and the logic programming
language PROLOG
 E.g., the proof of the Robbins conjecture in Boolean algebra
AI APPLICATION AREAS
 Expert System
 One major insight gained from early work in problem
solving was the importance of domain-specific
knowledge
 Expert knowledge is a combination of a theoretical
understanding of the problem and a collection of
heuristic problem-solving rules
 Current deficiencies:
 Lack of flexibility: if a human expert cannot answer a question
immediately, they can return to an examination of first
principles and come up with something
 Inability to provide deep explanations
 Little learning from experience
AI APPLICATION AREAS
 Natural Language Understanding and Semantics

 One of the long-standing goals of AI is the creation of
programs that are capable of understanding and
generating human language
AI APPLICATION AREAS
 Modeling Human Performance

 Capture the human mind (knowledge representation)

AI APPLICATION AREAS
 Robotics

 A robot that blindly performs a sequence of actions
without responding to changes, or being able to detect
and correct errors, could hardly be considered
intelligent
 It should have sensors and algorithms to guide it
AI APPLICATION AREAS
 Machine Learning

 Learning has remained a challenging area in AI
 An expert system may perform extensive and costly
computation to solve a problem; unlike a human, it
usually doesn't remember the solution
 Examples include:
 Decision tree learning
 Genetic algorithms
 Neural networks
APPLICATION DOMAINS OF AI
 Application domain areas include:
 Military
 Medicine
 Industry
 Entertainment
 Education
 Business

SUMMARY OF TODAY’S LECTURE
 Intelligence and types
 Artificial Intelligence involves the study of:
 automated recognition and understanding of signals
 reasoning, planning, and decision-making
 learning and adaptation

 AI has made substantial progress in


 recognition and learning
 some planning and reasoning problems
 …but many open research problems

 AI Applications
 improvements in hardware and algorithms => AI applications in industry, finance,
medicine, and science.

 Rational agent view of AI
 Two views of AI: symbol-based vs. sub-symbolic AI
 Turing test for intelligence and AI applications
Lecture 2. AI Agents

Outline
• What’s an agent?
– Definition of an agent
– Rationality and autonomy
– Types of agents
– Properties of environments

How do you design an intelligent agent?
 Definition: An intelligent agent perceives its environment via sensors
and acts rationally upon that environment with its effectors.
 The main point about agents is they are autonomous: capable of
acting independently, exhibiting control over their internal state
 Thus: an agent is a computer system capable of autonomous action in
some environment in order to meet its design objectives

 A discrete agent receives percepts (input) one at a time, and maps
this percept sequence to a sequence of discrete actions.
 Properties:
 Autonomous
 Reactive to the environment
 Pro-active (goal-directed)
 Interacts with other agents via the environment
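A bare-bones sketch of this percept-to-action mapping in Python (the thermostat-style environment and threshold are invented for illustration):

# An agent program maps the percept sequence seen so far to an action;
# the surrounding loop plays the role of sensors and effectors.
def agent_program(percept_history):
    latest = percept_history[-1]                 # a temperature reading
    return "turn_on_heater" if latest < 18 else "idle"

def run(temperature_readings):
    percepts = []
    for temp in temperature_readings:            # one percept per step
        percepts.append(temp)
        action = agent_program(percepts)
        print(f"percept={temp} -> action={action}")

run([21, 17, 16, 19])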

What do you mean,
sensors/percepts and effectors/actions?

 Humans
Sensors: Eyes (vision), ears (hearing), skin (touch), tongue
(gustation), nose (olfaction), neuromuscular system
(proprioception)
Percepts:
 At the lowest level – electrical signals from these sensors
 After preprocessing – objects in the visual field (location,
textures, colors, …), auditory streams (pitch, loudness,
direction), …
Effectors: limbs, digits, eyes, tongue, …
Actions: lift a finger, turn left, walk, run, carry an object, …
 The Point: percepts and actions need to be carefully
defined, possibly at different levels of abstraction

A more specific example: Automated taxi
driving system

• Percepts: Video, sonar, speedometer, odometer, engine sensors,
keyboard input, microphone, GPS, …
• Actions: Steer, accelerate, brake, horn, speak/display, …
• Goals: Maintain safety, reach destination, maximize profits (fuel, tire
wear), obey laws, provide passenger comfort, …
• Environment: Nairobi streets, highways, traffic, pedestrians, weather,
customers, …

• Different aspects of driving may require
different types of agent programs!
PEAS for self-driving cars
For a self-driving car, the PEAS
representation will be (see the sketch after this list):
• Performance: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, road
signs, pedestrian
• Actuators: Steering, accelerator, brake, signal,
horn
• Sensors: Camera, GPS, speedometer,
odometer, accelerometer, sonar.
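One way to make a PEAS description concrete is to record it as a small data structure; this is just an illustrative sketch, not a standard representation:

# A PEAS description captured as a simple data structure (illustrative).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
print(self_driving_car.actuators)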
What is an Agent?
• Trivial (non-interesting) agents:
– thermostat
– UNIX daemon (e.g., biff, which watches your
mailbox and tells you when mail arrives)
• An intelligent agent is a computer system capable
of flexible autonomous action in some
environment
• By flexible, we mean:
– reactive
– pro-active
– social

Reactivity
 If a program’s environment is guaranteed to be fixed, the
program need never worry about its own success or failure –
program just executes blindly
 Example of fixed environment: compiler
 The real world is not like that: things change, information is
incomplete. Many (most?) interesting environments are
dynamic
 Software is hard to build for dynamic domains: program must
take into account possibility of failure – ask itself whether it is
worth executing!
 A reactive system is one that maintains an ongoing interaction
with its environment, and responds to changes that occur in it
(in time for the response to be useful)
Proactiveness
• Reacting to an environment is easy (e.g.,
stimulus → response rules)
• But we generally want agents to do things
for us
• Hence goal directed behavior
• Pro-activeness = generating and attempting
to achieve goals; not driven solely by
events; taking the initiative
• Recognizing opportunities
Balancing Reactive and Goal-Oriented
Behavior
• We want our agents to be reactive,
responding to changing conditions in an
appropriate (timely) fashion
• We want our agents to systematically work
towards long-term goals
• These two considerations can be at odds with
one another
• Designing an agent that can balance the two
remains an open research problem
Social Ability
The real world is a multi-agent environment: we cannot
go around attempting to achieve goals without taking
others into account
Some goals can only be achieved with the cooperation
of others
Similarly for many computer environments: witness the
Internet
Social ability in agents is the ability to interact with
other agents (and possibly humans) via some kind of
agent-communication language, and perhaps
cooperate with others

Autonomy
• A system is autonomous to the extent that its own
behavior is determined by its own experience.
• Therefore, a system is not autonomous if it is guided
by its designer according to a priori decisions.
• To survive, agents must have:
– Enough built-in knowledge to survive.
– The ability to learn.

Other Properties
• Other properties, sometimes discussed in the context of agency:
• mobility: the ability of an agent to move around an electronic
network
• veracity: an agent will not knowingly communicate false
information; accuracy
• benevolence: agents do not have conflicting goals, and every
agent will therefore always try to do what is asked of it
• rationality: agent will act in order to achieve its goals, and will not
act in such a way as to prevent its goals being achieved — at least
insofar as its beliefs permit
• learning/adaption: agents improve performance over time

Rationality
 An ideal rational agent should, for each possible percept
sequence, do whatever actions will maximize its expected
performance measure based on
(1) the percept sequence, and
(2) its built-in and acquired knowledge.
 Rationality includes information gathering, not "rational
ignorance." (If you don’t know something, find out!)
 Rationality => Need a performance measure to say how
well a task has been achieved.
 Types of performance measures: false alarm (false
positive) and false dismissal (false negative) rates, speed,
resources required, effect on environment, etc.

Agents and Objects
• Are agents just objects by another name?
• Object:
– encapsulates some state
– communicates via message passing
– has methods, corresponding to operations
that may be performed on this state

Agents and Objects
• Main differences:
– agents are autonomous:
agents embody stronger notion of autonomy than objects,
and in particular, they decide for themselves whether or not
to perform an action on request from another agent
– agents are smart:
capable of flexible (reactive, pro-active, social) behavior, and
the standard object model has nothing to say about such
types of behavior
– agents are active:
a multi-agent system is inherently multi-threaded, in that
each agent is assumed to have at least one thread of active
control
Objects do it for free…
• agents do it because they want to
• agents do it for money

Agents and Expert Systems
• Aren’t agents just expert systems by another name?
• Expert systems are typically disembodied 'expertise' about
some (abstract) domain of discourse (e.g., blood
diseases)
• Example: MYCIN knows about blood diseases in humans
– It has a wealth of knowledge about blood diseases, in the form
of rules
– A doctor can obtain expert advice about blood diseases by
giving MYCIN facts, answering questions, and posing queries

Agents and Expert Systems
• Main differences:
– agents situated in an environment:
MYCIN is not aware of the world — only
information obtained is by asking the user
questions
– agents act:
MYCIN does not operate on patients
• Some real-time (typically process control)
expert systems are agents

Intelligent Agents and AI
• Aren’t agents just the AI project?
Isn’t building an agent what AI is all about?
• AI aims to build systems that can
(ultimately) understand natural language,
recognize and understand scenes, use
common sense, think creatively, etc. — all
of which are very hard
• So, don’t we need to solve all of AI to build
an agent…?
Intelligent Agents and AI
When building an agent, we simply want a system
that can choose the right action to perform,
typically in a limited domain
We do not have to solve all the problems of AI to
build a useful agent:
a little intelligence goes a long way!
Oren Etzioni, speaking about the commercial
experience of NETBOT, Inc:
“We made our agents dumber and dumber and
dumber…until finally they made money.”
Examples of Agent Types and their Descriptions

Some Agent Types
 Table-driven agents
 use a percept sequence/action table in memory to find the next action. They
are implemented by a (large) lookup table (see the sketch after this list).
 Simple reflex agents
 are based on condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not have memory of
past world states.
 Agents with memory
 have internal state, which is used to keep track of past states of the world.
 Agents with goals
 are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
 Utility-based agents
 base their decisions on classic axiomatic utility theory in order to act
rationally.
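A tiny sketch of the table-driven idea in Python (the percepts and table entries are invented for illustration); note that the table is indexed by the entire percept sequence, which is why it blows up combinatorially:

# Table-driven agent: the whole percept sequence is the lookup key.
# Entries are illustrative; a real table would be astronomically large.
TABLE = {
    (("A", "dirty"),): "suck",
    (("A", "dirty"), ("A", "clean")): "move_right",
    (("A", "dirty"), ("A", "clean"), ("B", "dirty")): "suck",
}
percepts = []

def table_driven_agent(percept):
    percepts.append(percept)
    return TABLE.get(tuple(percepts), "no_op")

for p in [("A", "dirty"), ("A", "clean"), ("B", "dirty")]:
    print(p, "->", table_driven_agent(p))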

Simple Reflex Agent
Table lookup of percept-action pairs defining
all possible condition-action rules necessary to
interact in an environment
Problems
 Too big to generate and to store (chess has about 10^120
states, for example)
 No knowledge of non-perceptual parts of the current state
 Not adaptive to changes in the environment; requires the entire
table to be updated if changes occur
 Looping: can't make actions conditional on past state
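A minimal sketch of a simple reflex agent in Python, using a two-square vacuum world as an illustrative environment; the agent consults condition-action rules keyed on the current percept only:

# Simple reflex agent: no memory, just condition-action rules on the
# current percept (illustrative two-square vacuum world).
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def reflex_agent(percept):
    location, status = percept
    return RULES[(location, status)]

for percept in [("A", "dirty"), ("A", "clean"), ("B", "dirty")]:
    print(percept, "->", reflex_agent(percept))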
A Simple Reflex Agent: Schema

Reflex Agent with Internal State
Encode "internal state" of the world to remember the
past as contained in earlier percepts
Needed because sensors do not usually give the entire
state of the world at each input, so perception of the
environment is captured over time. "State" used to
encode different "world states" that generate the same
immediate percept.
Requires ability to represent change in the world; one
possibility is to represent just the latest state, but then
can't reason about hypothetical courses of action
Example: Rodney Brooks’s Subsumption Architecture
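A minimal sketch of a reflex agent with internal state, again on the illustrative vacuum world; the internal model here just remembers which squares have been seen clean:

# Reflex agent with internal state: percepts update a world model,
# and actions can depend on that remembered state (illustrative).
class ModelBasedVacuum:
    def __init__(self):
        self.known_clean = set()      # internal model of the world

    def act(self, percept):
        location, status = percept
        if status == "dirty":
            self.known_clean.discard(location)
            return "suck"
        self.known_clean.add(location)
        if {"A", "B"} <= self.known_clean:
            return "idle"             # model says all squares are clean
        return "move_right" if location == "A" else "move_left"

agent = ModelBasedVacuum()
for percept in [("A", "dirty"), ("A", "clean"), ("B", "clean")]:
    print(percept, "->", agent.act(percept))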

Brooks Subsumption Architecture
Main idea: build complex, intelligent robots by
decomposing behaviors into a hierarchy of skills,
each defining a complete percept-
action cycle for one very specific task.
Examples: avoiding contact, wandering,
exploring, recognizing doorways, etc.
Each behavior is modeled by a finite-state
machine with a few states (though each state may
correspond to a complex function or module).
Behaviors are loosely coupled, asynchronous
interactions.
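A toy sketch of subsumption-style arbitration in Python (behaviors and trigger conditions invented for illustration): each layer is tried in priority order, and higher layers subsume lower ones when they fire:

# Toy subsumption arbitration: the first (highest-priority) behavior
# whose trigger fires produces the action; otherwise control falls
# through to lower layers. Behaviors here are illustrative only.
def avoid_contact(percept):
    return "back_up" if percept.get("obstacle_close") else None

def explore(percept):
    return "head_to_unexplored" if percept.get("unexplored_nearby") else None

def wander(percept):
    return "random_walk"              # default, always fires

LAYERS = [avoid_contact, explore, wander]   # highest priority first

def act(percept):
    for behavior in LAYERS:
        action = behavior(percept)
        if action is not None:
            return action

print(act({"obstacle_close": True}))        # back_up
print(act({"unexplored_nearby": True}))     # head_to_unexplored
print(act({}))                              # random_walk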
Agents that Keep Track of the World

Goal-Based Agent
Choose actions so as to achieve a (given or
computed) goal.
A goal is a description of a desirable situation
Keeping track of the current state is often not
enough -- need to add goals to decide which
situations are good
Deliberative instead of reactive
May have to consider long sequences of possible
actions before deciding if goal is achieved --
involves consideration of the future, “what will
happen if I do...?”
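A minimal sketch of that look-ahead in Python (the toy state space is invented for illustration): the agent searches for a sequence of actions that reaches the goal before acting:

# Goal-based agent: breadth-first search for an action sequence that
# reaches the goal state (toy one-dimensional world, states 0..4).
from collections import deque

def plan(start, goal, successors):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None                      # no plan achieves the goal

def successors(s):
    moves = []
    if s < 4: moves.append(("right", s + 1))
    if s > 0: moves.append(("left", s - 1))
    return moves

print(plan(0, 3, successors))        # ['right', 'right', 'right']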
Agents with Explicit Goals

Utility-Based Agent
When there are multiple possible alternatives, how to
decide which one is best?
A goal specifies a crude distinction between a happy
and unhappy state, but often need a more general
performance measure that describes "degree of
happiness"
Utility function U: State --> Reals indicating a measure
of success or happiness when at a given state
Allows decisions comparing choice between conflicting
goals, and choice between likelihood of success and
importance of goal (if achievement is uncertain)
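A minimal sketch of such a utility function in Python (the states and weights are invented for illustration): the agent ranks alternatives that trade off conflicting goals:

# Utility function over states: ranks alternatives when goals conflict.
# The weights and candidate states are illustrative only.
def utility(state):
    return 0.7 * state["safety"] + 0.3 * state["speed"]

candidates = [
    {"name": "highway",    "safety": 0.6, "speed": 0.9},
    {"name": "back_roads", "safety": 0.9, "speed": 0.5},
]
best = max(candidates, key=utility)
print(best["name"], round(utility(best), 2))  # back_roads 0.78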

A Complete Utility-Based Agent

PROPERTIES OF ENVIRONMENTS

Environments – Accessible vs.
inaccessible
• An accessible environment is one in which
the agent can obtain complete, accurate, up-
to-date information about the environment’s
state
• Most moderately complex environments
(including, for example, the everyday
physical world and the Internet) are
inaccessible
• The more accessible an environment is, the
simpler it is to build agents to operate in it
Environments –
Deterministic vs. non-deterministic
• A deterministic environment is one in which
any action has a single guaranteed effect —
there is no uncertainty about the state that
will result from performing an action
• The physical world can to all intents and
purposes be regarded as non-deterministic
• Non-deterministic environments present
greater problems for the agent designer

Environments - Episodic vs. non-
episodic
• In an episodic environment, the performance
of an agent is dependent on a number of
discrete episodes, with no link between the
performance of an agent in different scenarios
• Episodic environments are simpler from the
agent developer’s perspective because the
agent can decide what action to perform
based only on the current episode — it need
not reason about the interactions between this
and future episodes
Environments - Static vs. dynamic
• A static environment is one that can be
assumed to remain unchanged except by the
performance of actions by the agent
• A dynamic environment is one that has other
processes operating on it, and which hence
changes in ways beyond the agent’s control
• Other processes can interfere with the agent’s
actions (as in concurrent systems theory)
• The physical world is a highly dynamic
environment
Environments – Discrete vs. continuous
• An environment is discrete if there are a fixed,
finite number of actions and percepts in it
• Russell and Norvig give a chess game as an
example of a discrete environment, and taxi
driving as an example of a continuous one
• Continuous environments have a certain level
of mismatch with computer systems
• Discrete environments could in principle be
handled by a kind of “lookup table”
Environments – With/Without rational adversaries
Without rationally thinking adversary agents, the
agent need not worry about strategic, game-
theoretic aspects of the environment
Most engineering environments are without
rational adversaries, whereas most social and
economic systems get their complexity from the
interactions of (more or less) rational agents.
As an example of a game with a rational adversary,
try the Prisoner's Dilemma
Characteristics of environments

                    Accessible  Deterministic  Episodic  Static  Discrete
Solitaire               No          Yes           Yes      Yes      Yes
Backgammon              Yes         No            No       Yes      Yes
Taxi driving            No          No            No       No       No
Internet shopping       No          No            No       No       No
Medical diagnosis       No          No            No       No       No
→ Lots of real-world domains fall into the hardest case!


Summary
• An agent perceives and acts in an environment, has an architecture and is
implemented by an agent program.
• An ideal agent always chooses the action which maximizes its expected
performance, given percept sequence received so far.
• An autonomous agent uses its own experience rather than built-in knowledge
of the environment by the designer.
• An agent program maps from percept to action & updates its internal state.
– Reflex agents respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s).
– Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• Some environments are more difficult for agents than others. The most
challenging environments are inaccessible, nondeterministic, non-episodic,
dynamic, and continuous.

