COURSE OUTLINE
Mode of Delivery
Lectures, demonstrations, Group/class discussions and practical exercises
Instructional Materials/Equipment
Computers, Learning Management System, writing boards, writing materials, projectors, etc.
Course Assessment
Type of Assessment    Weighting
C.A.T 1               10%
C.A.T 2               10%
Assignment            10%
Examination           70%
Total Score           100%
Core Reading Materials for the Course
1. Russell, S.J. & Norvig, P. (2009), Artificial Intelligence: A Modern Approach, Prentice-Hall.
2. Luger, G. (2009), Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th ed., Addison Wesley.
Recommended Reference Materials
3. Jones, T.M. (2008), Artificial Intelligence: A Systems Approach, Infinity Science Press.
4. Negnevitsky, M. (2004), Artificial Intelligence: A Guide to Intelligent Systems, 2nd ed., Harlow, UK: Addison Wesley. ISBN 0321204662.
5. Luger, G.F. (2009), Artificial Intelligence, 6th ed., Addison Wesley. ISBN 9780321545893.
6. Journal of Computing Sciences in Colleges, Volume 25, Issue 4, April 2010.
o Linguistic Intelligence
o Difference between Human and Machine Intelligence
Week 3: CAT 1 - Writing Continuous Assessment Test 1
Difference between a Robot System and Other AI Programs
Components of a Robot
Computer Vision
Hardware of a Computer Vision System
Tasks of Computer Vision
Application Domains of Computer Vision
Applications of Robotics
Week 12: ARTIFICIAL INTELLIGENCE - NEURAL NETWORKS
What are Artificial Neural Networks?
Basic Structure of ANNs
Types of Artificial Neural Networks
Machine Learning in ANNs
Back Propagation Algorithm
Bayesian Networks
Applications of Neural Networks
ARTIFICIAL INTELLIGENCE - ISSUES
Threat to Privacy
Threat to Human Dignity
Threat to Safety
Week 13: REVISION
SCS 302: ARTIFICIAL INTELLIGENCE
LECTURE 1: INTRODUCTION
GOALS OF THIS COURSE
This class is a broad introduction to artificial intelligence (AI).
TODAY’S LECTURE
What is intelligence? What is artificial intelligence?
An AI scorecard
How much progress has been made in different aspects of
AI
AI in practice
Successful applications
Artificial Intelligence
build and understand intelligent entities or agents
2 main approaches: "engineering" versus "cognitive modeling"
TYPES OF INTELLIGENCE
According to Howard Gardner’s multiple intelligence theory, there
are various types of intelligence viz:
General intelligence: -
Abilities that allow us to be flexible and adaptive thinkers, not
necessarily tied to acquired knowledge.
Linguistic-verbal intelligence: -
Use words and language in various forms / Ability to manipulate
language to express oneself poetically
Logical-Mathematical intelligence: -
Ability to detect patterns / Approach problems logically / Reason
deductively
Musical intelligence: -
Recognize nonverbal sounds: pitch, rhythm, and tonal patterns
Spatial intelligence: -
Typically thinks in images and pictures / Used in both arts and
sciences
TYPES OF INTELLIGENCE (2)
Intrapersonal intelligence: -
Ability to understand oneself, including feelings and
motivations / Can discipline themselves to accomplish a
wide variety of tasks
Interpersonal intelligence: -
Ability to "read people"—discriminate among other
individuals especially their moods, intentions, motivations;
/ Adept at group work, typically assume a leadership role.
Naturalist intelligence: -
Ability to recognize and classify living things like plants,
animals
Bodily-Kinesthetic intelligence: -
Use one's mental abilities to coordinate one's own bodily movements
TYPES OF INTELLIGENCE (3)
Note:
Understanding the various types of intelligence
provides theoretical foundations for recognizing
different talents and abilities in people
WHAT’S INVOLVED IN INTELLIGENCE?
Ability to interact with the real world
to perceive, understand, and act
e.g., speech recognition, understanding, and synthesis
e.g., image understanding
e.g., ability to take actions, have an effect
HISTORY OF AI
1943: early beginnings
McCulloch & Pitts: Boolean circuit model of brain
1950: Turing
Turing's "Computing Machinery and Intelligence"
1956: birth of AI
Dartmouth meeting: "Artificial Intelligence" name adopted
1995--: AI as Science
Integration of learning, reasoning, knowledge representation
AI methods used in vision, language, data mining, etc.
SUCCESS STORIES
Deep Blue defeated the reigning world chess champion
Garry Kasparov in 1997
HAL (from the movie 2001: A Space Odyssey)
part of the story centers around an intelligent computer called HAL
HAL is the "brains" of an intelligent spaceship
in the movie, HAL can
speak easily with the crew
display emotions
understand speech
CAN WE BUILD HARDWARE AS COMPLEX
AS THE BRAIN?
Conclusion
YES: in the near future we can have computers with as many
basic processing elements as our brain, but with
far fewer interconnections (wires or synapses) than the brain
much faster updates than the brain
but building hardware is very different from making a computer behave like a brain!
CAN COMPUTERS TALK?
This is known as “speech synthesis”
translate text to phonetic form
e.g., “fictitious” -> fik-tish-es
use pronunciation rules to map phonemes to actual sound
e.g., “tish” -> sequence of basic audio sounds
Difficulties
sounds made by this “lookup” approach sound unnatural
sounds are not independent
e.g., “act” and “action”
modern systems (e.g., at AT&T) can handle this pretty well
a harder problem is emphasis, emotion, etc
humans understand what they are saying
machines don’t: so they sound unnatural
Conclusion:
NO, for complete sentences
YES, for individual words
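The "lookup" approach described above can be sketched in a few lines. The tiny lexicon and its phoneme spellings below are invented for illustration, not a real pronunciation dictionary:

```python
# Minimal sketch of lookup-based speech synthesis (text -> phonemes).
# The lexicon entries are hypothetical, for illustration only.
LEXICON = {
    "fictitious": ["f", "ik", "tish", "es"],
    "act": ["a", "k", "t"],
}

def to_phonemes(text: str) -> list[str]:
    """Map each word to its phoneme sequence via dictionary lookup."""
    phonemes = []
    for word in text.lower().split():
        if word in LEXICON:
            phonemes.extend(LEXICON[word])
        else:
            # Fall back to spelling the word out letter by letter -- one
            # reason pure lookup systems sound unnatural on unseen words.
            phonemes.extend(list(word))
    return phonemes

print(to_phonemes("fictitious act"))  # ['f', 'ik', 'tish', 'es', 'a', 'k', 't']
```

Treating each word independently is exactly the limitation the slide notes: sounds are not independent across word boundaries, so concatenated lookups sound robotic.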
CAN COMPUTERS RECOGNIZE SPEECH?
Speech Recognition:
mapping sounds from a microphone into a list of words
classic problem in AI, very difficult
e.g., "Let's talk about how to recognize speech" may be heard as "Let's talk about how to wreck a nice beach"
Conclusion:
NO, normal speech is too complex to accurately recognize
YES, for restricted problems (small vocabulary, single speaker)
CAN COMPUTERS UNDERSTAND SPEECH?
Understanding is different from recognition:
"Time flies like an arrow"
assume the computer can recognize all the words
how many different interpretations are there?
1. time passes quickly, like an arrow?
2. command: time the flies the way an arrow times the flies
3. command: only time those flies which are like an arrow

CAN COMPUTERS SEE?
Conclusion:
mostly NO: computers can only "see" certain types of objects under limited circumstances
YES, for certain constrained problems (e.g., face recognition)
CAN COMPUTERS PLAN AND MAKE
OPTIMAL DECISIONS?
Intelligence
involves solving problems and making decisions and plans
e.g., you want to take a holiday in Brazil
you need to decide on dates, flights
Computer vision
works for constrained problems (hand-written zip-codes)
understanding real-world, natural scenes is still too hard
Learning
adaptive systems are used in many applications, but have their limits
Overall:
many components of intelligent systems are "doable"
there are many interesting research problems remaining
INTELLIGENT SYSTEMS IN YOUR EVERYDAY LIFE
Post Office
automatic address recognition and sorting of mail
Banks
automatic check readers, signature verification systems
automated loan application classification
Customer Service
automatic voice recognition
The Web
Identifying your age, gender, location, from your Web surfing
Automated fraud detection
Digital Cameras
Automated face detection and focusing
Computer Games
Intelligent characters/agents
AI APPLICATIONS: MACHINE TRANSLATION
Language problems in international business
e.g., at a meeting of Japanese, Korean, Vietnamese and Swedish investors, no
common language
or: you are shipping your software manuals to 127 countries
solution: hire translators to translate
would be much cheaper if a machine could do this
not only must the words be translated, but their meaning also!
is this problem "AI-complete"?
Nonetheless...
commercial systems can do a lot of the work very well (e.g., restricted vocabularies in software documentation)
algorithms which combine dictionaries, grammar models, etc.
Recent progress using "black-box" machine learning techniques
AI AND WEB SEARCH
WHAT’S INVOLVED IN INTELLIGENCE?
(AGAIN)
Perceiving, recognizing, understanding the real
world
DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE
Total Turing Test: includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer needs:
computer vision to perceive objects
THINKING HUMANLY
Cognitive Science approach
Try to get “inside” our minds
E.g., conduct experiments with people to try to "reverse-engineer" how we reason, learn, remember, predict
The interdisciplinary field of cognitive science brings
together computer models from AI and experimental
techniques from psychology to construct precise and
testable theories of the human mind.
Problems
Humans don’t behave rationally
e.g., insurance
Limitations
Does not account for an agent’s uncertainty about the world
E.g., difficult to couple to vision or speech systems
AI APPLICATIONS
AI APPLICATION AREAS
Game Playing
Much of the early research in state space
search was done using common board games
such as checkers, chess, and the 15-puzzle
Games can generate extremely large search
spaces. These are large and complex enough
to require powerful techniques for determining
what alternative to explore
AI also drives characters in advanced games such as God of War Ragnarök and Call of Duty
AI APPLICATION AREAS
Automated reasoning and Theorem
Proving
Theorem-proving is one of the most fruitful
branches of the field
Theorem-proving research was responsible for formalizing search algorithms and developing formal representation languages such as the predicate calculus and the logic programming language PROLOG
Boolean conjecture to Boolean logic
AI APPLICATION AREAS
Expert System
One major insight gained from early work in problem
solving was the importance of domain-specific
knowledge
Expert knowledge is a combination of a theoretical
understanding of the problem and a collection of
heuristic problem-solving rules
Current deficiencies:
Lack of flexibility: if a human expert cannot answer a question immediately, they can return to an examination of first principles and come up with something
Inability to provide deep explanations
AI APPLICATION AREAS
Natural Language Understanding and Semantics
Modeling Human Performance
Robotics
Machine Learning
Neural networks
APPLICATION DOMAINS OF AI
Application domain areas include:
Military
Medicine
Industry
Entertainment
Education
Business
SUMMARY OF TODAY’S LECTURE
Intelligence and types
Artificial Intelligence involves the study of:
automated recognition and understanding of signals
reasoning, planning, and decision-making
learning and adaptation
AI Applications
improvements in hardware and algorithms => AI applications in industry, finance,
medicine, and science.
LECTURE 2: INTELLIGENT AGENTS
Outline
• What’s an agent?
– Definition of an agent
– Rationality and autonomy
– Types of agents
– Properties of environments
How do you design an intelligent agent?
Definition: An intelligent agent perceives its environment via sensors
and acts rationally upon that environment with its effectors.
The main point about agents is they are autonomous: capable of
acting independently, exhibiting control over their internal state
Thus: an agent is a computer system capable of autonomous action in
some environment in order to meet its design objectives
What do you mean,
sensors/percepts and effectors/actions?
Humans
Sensors: Eyes (vision), ears (hearing), skin (touch), tongue
(gustation), nose (olfaction), neuromuscular system
(proprioception)
Percepts:
At the lowest level – electrical signals from these sensors
After preprocessing – objects in the visual field (location,
textures, colors, …), auditory streams (pitch, loudness,
direction), …
Effectors: limbs, digits, eyes, tongue, …
Actions: lift a finger, turn left, walk, run, carry an object, …
The Point: percepts and actions need to be carefully
defined, possibly at different levels of abstraction
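The sensors/percepts and effectors/actions vocabulary above amounts to a perceive-decide-act loop. A minimal sketch, in which the thermostat-style sense/decide/act functions are hypothetical stand-ins rather than anything from the lecture:

```python
# Minimal sketch of the perceive -> decide -> act loop implied by the
# agent definition above. All concrete functions here are hypothetical.
from typing import Any, Callable

def run_agent(sense: Callable[[], Any],
              decide: Callable[[Any], str],
              act: Callable[[str], None],
              steps: int) -> None:
    """Couple sensors to effectors through the agent's decision function."""
    for _ in range(steps):
        percept = sense()         # sensors observe the environment
        action = decide(percept)  # agent program chooses an action
        act(action)               # effectors change the environment

# Thermostat-style usage: heat while the room is below 20 degrees.
temperature = [15]
actions_taken = []

def heater(action: str) -> None:
    actions_taken.append(action)
    if action == "heat":
        temperature[0] += 1

run_agent(sense=lambda: temperature[0],
          decide=lambda t: "heat" if t < 20 else "off",
          act=heater,
          steps=3)
print(actions_taken)  # ['heat', 'heat', 'heat']
```

Note how the decision function sees only percepts, never the environment directly: that separation is exactly what the sensors/effectors definition requires.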
A more specific example: Automated taxi driving system
PEAS for self-driving cars
Let's suppose a self-driving car then PEAS
representation will be:
• Performance: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, road signs, pedestrians
• Actuators: Steering, accelerator, brake, signal,
horn
• Sensors: Camera, GPS, speedometer,
odometer, accelerometer, sonar.
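As a quick illustration, the PEAS description above can be captured in a plain data structure; the field contents come from the list above, while the PEAS class itself is only illustrative scaffolding:

```python
# The self-driving-car PEAS description above, as a simple data structure.
# The PEAS class is illustrative; the field values are from the slide.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
```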
What is an Agent?
• Trivial (non-interesting) agents:
– thermostat
– UNIX daemon (e.g., biff, which notifies a user of incoming mail)
• An intelligent agent is a computer system capable
of flexible autonomous action in some
environment
• By flexible, we mean:
– reactive
– pro-active
– social
Reactivity
If a program’s environment is guaranteed to be fixed, the
program need never worry about its own success or failure –
program just executes blindly
Example of fixed environment: compiler
The real world is not like that: things change, information is
incomplete. Many (most?) interesting environments are
dynamic
Software is hard to build for dynamic domains: program must
take into account possibility of failure – ask itself whether it is
worth executing!
A reactive system is one that maintains an ongoing interaction
with its environment, and responds to changes that occur in it
(in time for the response to be useful)
Proactiveness
• Reacting to an environment is easy (e.g.,
stimulus response rules)
• But we generally want agents to do things
for us
• Hence goal directed behavior
• Pro-activeness = generating and attempting
to achieve goals; not driven solely by
events; taking the initiative
• Recognizing opportunities
Balancing Reactive and Goal-Oriented
Behavior
• We want our agents to be reactive,
responding to changing conditions in an
appropriate (timely) fashion
• We want our agents to systematically work
towards long-term goals
• These two considerations can be at odds with
one another
• Designing an agent that can balance the two
remains an open research problem
Social Ability
The real world is a multi-agent environment: we cannot
go around attempting to achieve goals without taking
others into account
Some goals can only be achieved with the cooperation
of others
Similarly for many computer environments: witness the
Internet
Social ability in agents is the ability to interact with
other agents (and possibly humans) via some kind of
agent-communication language, and perhaps
cooperate with others
Autonomy
• A system is autonomous to the extent that its own
behavior is determined by its own experience.
• Therefore, a system is not autonomous if it is guided
by its designer according to a priori decisions.
• To survive, agents must have:
– Enough built-in knowledge to survive.
– The ability to learn.
Other Properties
• Other properties, sometimes discussed in the context of agency:
• mobility: the ability of an agent to move around an electronic
network
• veracity: an agent will not knowingly communicate false
information; accuracy
• benevolence: agents do not have conflicting goals, and every agent will therefore always try to do what is asked of it
• rationality: agent will act in order to achieve its goals, and will not
act in such a way as to prevent its goals being achieved — at least
insofar as its beliefs permit
• learning/adaption: agents improve performance over time
Rationality
An ideal rational agent should, for each possible percept
sequence, do whatever actions will maximize its expected
performance measure based on
(1) the percept sequence, and
(2) its built-in and acquired knowledge.
Rationality includes information gathering, not "rational
ignorance." (If you don’t know something, find out!)
Rationality => Need a performance measure to say how
well a task has been achieved.
Types of performance measures: false alarm (false
positive) and false dismissal (false negative) rates, speed,
resources required, effect on environment, etc.
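"Maximize expected performance" can be sketched as scoring each action by the expectation over its possible outcomes and picking the best. The outcome probabilities and scores below are invented for illustration:

```python
# Rational choice under uncertainty: maximize *expected* performance.
# The two actions and their outcome distributions are hypothetical.
def expected_value(outcomes):
    """outcomes: list of (probability, score) pairs."""
    return sum(p * score for p, score in outcomes)

def rational_choice(action_outcomes):
    """Pick the action with the highest expected score."""
    return max(action_outcomes, key=lambda a: expected_value(action_outcomes[a]))

choices = {
    "act_safe":  [(1.0, 5.0)],                # certain, modest payoff: EV 5.0
    "act_risky": [(0.5, 12.0), (0.5, -4.0)],  # gamble: EV 0.5*12 - 0.5*4 = 4.0
}
print(rational_choice(choices))  # act_safe
```

The performance measure (here, the score) is exactly the ingredient the slide says rationality requires.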
Agents and Objects
• Are agents just objects by another name?
• Object:
– encapsulates some state
– communicates via message passing
– has methods, corresponding to operations
that may be performed on this state
Agents and Objects
• Main differences:
– agents are autonomous:
agents embody stronger notion of autonomy than objects,
and in particular, they decide for themselves whether or not
to perform an action on request from another agent
– agents are smart:
capable of flexible (reactive, pro-active, social) behavior, and
the standard object model has nothing to say about such
types of behavior
– agents are active:
a multi-agent system is inherently multi-threaded, in that
each agent is assumed to have at least one thread of active
control
Objects do it for free…
• agents do it because they want to
• agents do it for money
Agents and Expert Systems
• Aren’t agents just expert systems by another name?
• Expert systems are typically disembodied 'expertise' about some (abstract) domain of discourse (e.g., blood diseases)
• Example: MYCIN knows about blood diseases in humans
– It has a wealth of knowledge about blood diseases, in the form
of rules
– A doctor can obtain expert advice about blood diseases by
giving MYCIN facts, answering questions, and posing queries
Agents and Expert Systems
• Main differences:
– agents situated in an environment:
MYCIN is not aware of the world; the only information it obtains is by asking the user questions
– agents act:
MYCIN does not operate on patients
• Some real-time (typically process control)
expert systems are agents
Intelligent Agents and AI
• Aren’t agents just the AI project?
Isn’t building an agent what AI is all about?
• AI aims to build systems that can
(ultimately) understand natural language,
recognize and understand scenes, use
common sense, think creatively, etc. — all
of which are very hard
• So, don’t we need to solve all of AI to build
an agent…?
Intelligent Agents and AI
When building an agent, we simply want a system
that can choose the right action to perform,
typically in a limited domain
We do not have to solve all the problems of AI to
build a useful agent:
a little intelligence goes a long way!
Oren Etzioni, speaking about the commercial
experience of NETBOT, Inc:
“We made our agents dumber and dumber and
dumber…until finally they made money.”
Examples of Agent Types and their Descriptions
Some Agent Types
Table-driven agents
use a percept sequence/action table in memory to find the next action. They
are implemented by a (large) lookup table.
Simple reflex agents
are based on condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not have memory of
past world states.
Agents with memory
have internal state, which is used to keep track of past states of the world.
Agents with goals
are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
Utility-based agents
base their decisions on classic axiomatic utility theory in order to act
rationally.
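As a sketch, a simple reflex agent is nothing more than condition-action rules applied to the current percept, with no memory. The two-location vacuum world used here is a standard toy example, not from the list above:

```python
# A minimal simple reflex agent: stateless condition-action rules.
# Percepts and actions follow the common two-location vacuum-world toy.
def simple_reflex_agent(percept):
    """percept = (location, status); the action depends on rules alone."""
    location, status = percept
    if status == "dirty":
        return "suck"       # rule 1: if current square is dirty, clean it
    if location == "A":
        return "right"      # rule 2: if in A and clean, move right
    return "left"           # rule 3: if in B and clean, move left

print(simple_reflex_agent(("A", "dirty")))  # suck
```

Because the agent is stateless, two world states that produce the same percept are indistinguishable to it, which is what the "agents with memory" entry above addresses.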
Simple Reflex Agent
A Simple Reflex Agent: Schema
Reflex Agent with Internal State
Encode "internal state" of the world to remember the
past as contained in earlier percepts
Needed because sensors do not usually give the entire
state of the world at each input, so perception of the
environment is captured over time. "State" used to
encode different "world states" that generate the same
immediate percept.
Requires ability to represent change in the world; one
possibility is to represent just the latest state, but then
can't reason about hypothetical courses of action
Example: Rodney Brooks’s Subsumption Architecture
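A reflex agent with internal state can be sketched as a class that folds each percept into a remembered world model before its rules fire; the percept fields and actions here are illustrative, not from the slides:

```python
# Sketch of a reflex agent with internal state: each new percept is fused
# into a remembered world model before the condition-action rules apply.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}  # world model accumulated from past percepts

    def step(self, percept: dict) -> str:
        self.state.update(percept)  # remember what the sensors just reported
        # Rules may consult remembered facts the current percept lacks.
        if self.state.get("status") == "dirty":
            return "suck"
        return "move"

agent = ModelBasedReflexAgent()
print(agent.step({"location": "A", "status": "dirty"}))  # suck
print(agent.step({"status": "clean"}))                   # move
print(agent.state["location"])                           # A (remembered)
```

The second percept omits the location, yet the agent still knows it: that is the point of encoding state, since sensors rarely give the entire world state at each input.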
Brooks Subsumption Architecture
Main idea: build complex, intelligent robots by decomposing behaviors into a hierarchy of skills, each defining a complete percept-action cycle for one very specific task.
Examples: avoiding contact, wandering,
exploring, recognizing doorways, etc.
Each behavior is modeled by a finite-state
machine with a few states (though each state may
correspond to a complex function or module).
Behaviors are loosely coupled, asynchronous
interactions.
Agents that Keep Track of the World
Goal-Based Agent
Choose actions so as to achieve a (given or
computed) goal.
A goal is a description of a desirable situation
Keeping track of the current state is often not
enough -- need to add goals to decide which
situations are good
Deliberative instead of reactive
May have to consider long sequences of possible
actions before deciding if goal is achieved --
involves consideration of the future, “what will
happen if I do...?”
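The "what will happen if I do...?" lookahead of a goal-based agent can be sketched as a breadth-first search over action sequences; the integer toy world below is hypothetical:

```python
# Goal-based lookahead as breadth-first search over action sequences.
# States, actions, and the transition model here are a made-up toy world.
from collections import deque

def plan(start, goal, successors):
    """Return a shortest action sequence from start to the goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # goal unreachable

# Toy world: states are integers, actions move up or down by one.
succ = lambda s: [("inc", s + 1), ("dec", s - 1)]
print(plan(0, 3, succ))  # ['inc', 'inc', 'inc']
```

This is deliberation rather than reaction: the agent simulates futures in its model before committing to the first action.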
Agents with Explicit Goals
Utility-Based Agent
When there are multiple possible alternatives, how to
decide which one is best?
A goal specifies a crude distinction between a happy
and unhappy state, but often need a more general
performance measure that describes "degree of
happiness"
Utility function U: State --> Reals indicating a measure
of success or happiness when at a given state
Allows decisions comparing choice between conflicting
goals, and choice between likelihood of success and
importance of goal (if achievement is uncertain)
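A minimal sketch of the utility-based choice above, assuming a utility function U from states to reals and a hypothetical transition model result(state, action):

```python
# Utility-based action selection: pick the action whose resulting state
# maximizes U. The states, transition model, and U values are invented.
def utility_based_choice(state, actions, result, U):
    """Choose the action leading to the highest-utility successor state."""
    return max(actions, key=lambda a: U(result(state, a)))

# Toy example: states are numbers, and U prefers states near 10.
U = lambda s: -abs(s - 10)      # "degree of happiness" of a state
result = lambda s, a: s + a     # deterministic transition model
print(utility_based_choice(7, [-1, 1, 2], result, U))  # 2 (7+2 = 9 is nearest 10)
```

Unlike a bare goal test, U ranks every state, so the agent can trade off conflicting goals by comparing degrees of happiness rather than a happy/unhappy split.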
A Complete Utility-Based Agent

PROPERTIES OF ENVIRONMENTS
Environments – Accessible vs.
inaccessible
• An accessible environment is one in which
the agent can obtain complete, accurate, up-
to-date information about the environment’s
state
• Most moderately complex environments
(including, for example, the everyday
physical world and the Internet) are
inaccessible
• The more accessible an environment is, the simpler it is to build agents to operate in it
Environments –
Deterministic vs. non-deterministic
• A deterministic environment is one in which
any action has a single guaranteed effect —
there is no uncertainty about the state that
will result from performing an action
• The physical world can to all intents and
purposes be regarded as non-deterministic
• Non-deterministic environments present
greater problems for the agent designer
Environments - Episodic vs. non-
episodic
• In an episodic environment, the performance
of an agent is dependent on a number of
discrete episodes, with no link between the
performance of an agent in different scenarios
• Episodic environments are simpler from the
agent developer’s perspective because the
agent can decide what action to perform
based only on the current episode — it need not reason about the interactions between this and future episodes
Environments - Static vs. dynamic
• A static environment is one that can be
assumed to remain unchanged except by the
performance of actions by the agent
• A dynamic environment is one that has other
processes operating on it, and which hence
changes in ways beyond the agent’s control
• Other processes can interfere with the agent’s
actions (as in concurrent systems theory)
• The physical world is a highly dynamic environment
Environments – Discrete vs. continuous
• An environment is discrete if there are a fixed,
finite number of actions and percepts in it
• Russell and Norvig give a chess game as an
example of a discrete environment, and taxi
driving as an example of a continuous one
• Continuous environments have a certain level
of mismatch with computer systems
• Discrete environments could in principle be
handled by a kind of “lookup table”
Environments: With/Without rational
adversaries
Characteristics of environments

                   Accessible  Deterministic  Episodic  Static  Discrete
Solitaire
Backgammon
Taxi driving           No           No           No       No       No
Internet shopping      No           No           No       No       No
Medical diagnosis      No           No           No       No       No