
Introduction to Artificial Intelligence
Chapter 1: Introduction

Dr. Ahmed Fouad Ali


What is Artificial Intelligence (AI)?

Thinking Humanly
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Thinking Rationally
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
"AI ... is concerned with intelligent behavior in artifacts." (Nilsson, 1998)
Acting humanly: The Turing Test approach
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.

• A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.
Acting humanly: The Turing Test approach (Cont.)
The computer would need to possess the following capabilities:

• Natural language processing to enable it to communicate successfully in English;
• Knowledge representation to store what it knows or hears;
• Automated reasoning to use the stored information to answer questions and to draw new conclusions;
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns;
• Computer vision to perceive objects;
• Robotics to manipulate objects and move about.


Thinking humanly: The cognitive modeling approach
• If we are going to say that a given program thinks like a human, we must have some way of determining how humans think.

• There are three ways to do this:
  • Through introspection: trying to catch our own thoughts as they go by;
  • Through psychological experiments: observing a person in action;
  • Through brain imaging: observing the brain in action.

• If the program's input-output behavior matches corresponding human behavior, that is evidence that some of the program's mechanisms could also be operating in humans.
Thinking rationally: The "laws of thought" approach
• The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes.

• His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises, for example: "Socrates is a man; all men are mortal; therefore, Socrates is mortal."

• There are two main obstacles to this approach:
  • First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain.
  • Second, there is a big difference between solving a problem "in principle" and solving it in practice.
Acting rationally: The rational agent approach
• An agent is just something that acts.

• All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.

• The rational-agent approach has two advantages over the other approaches:
  • First, it is more general than the "laws of thought" approach, because correct inference is just one of several possible mechanisms for achieving rationality.
  • Second, it is more amenable to scientific development than approaches based on human behavior or human thought.
Chapter 2: INTELLIGENT AGENTS
Outline
• Agents and Environments

• The Concept of Rationality

• The Nature of Environments

• The Structure of Agents


Agents and environments
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

A human agent has eyes, ears, and other organs for sensors, and hands, legs, vocal tract, and so on for actuators.

A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.

A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
Agents and environments (Cont.)
We use the term percept to refer to the agent's perceptual inputs at any given instant.

An agent's percept sequence is the complete history of everything the agent has ever perceived.

The agent function maps any given percept sequence to an action.

The agent program is a concrete implementation, running within some physical system.

Percept sequence                               Action
[A, Clean]                                     Right
[A, Dirty]                                     Suck
[B, Clean]                                     Left
[B, Dirty]                                     Suck
[A, Clean], [A, Clean]                         Right
[A, Clean], [A, Dirty]                         Suck
[A, Clean], [A, Clean], [A, Clean]             Right
[A, Clean], [A, Clean], [A, Dirty]             Suck

The concept of rationality
A rational agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function is filled out correctly.

Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not.

Is this a rational agent?

That depends! First, we need to say:
  What the performance measure is
  What is known about the environment
  What sensors and actuators the agent has
The nature of environments (Cont.)
Specifying the task environment

In designing an agent, the first step must always be to specify the task environment as fully as possible: the PEAS (Performance, Environment, Actuators, Sensors) description.

Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
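
As a small illustration (not from the original slides), the PEAS description above can be written down as a plain data structure; the dataclass and field names below are assumptions chosen for readability:

# A minimal sketch: the taxi-driver PEAS description written down as plain data.
# The dataclass and its field names are illustrative assumptions, not a standard API.
from dataclasses import dataclass

@dataclass
class PEAS:
    agent_type: str
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi_driver = PEAS(
    agent_type="Taxi driver",
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)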
The nature of environments (Cont.)
• Properties of task environments

• Fully observable vs. partially observable:
  • A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action.
  • An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.
  • For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.
  • If the agent has no sensors at all, then the environment is unobservable.
The nature of environments (Cont.)
• Single agent vs. multiagent:
  • For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
  • There are, however, some subtle issues. First, we have described how an entity may be viewed as an agent, but we have not explained which entities must be viewed as agents.
  • Does an agent A (the taxi driver, for example) have to treat an object B (another vehicle) as an agent?
The nature of environments (Cont.)
• Deterministic vs. stochastic:
  • If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
  • Thus, a game can be deterministic even though each agent may be unable to predict the actions of the others.
  • Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly.
The nature of environments (Cont.)
• Episodic vs. sequential:
  • In an episodic task environment, the agent's experience is divided into atomic episodes.
  • In each episode the agent receives a percept and then performs a single action.
  • The next episode does not depend on the actions taken in previous episodes.
  • A part-picking robot is an example of an episodic environment.
  • Backgammon is an example of a sequential environment.
The nature of environments (Cont.)
• Static vs. dynamic:
  • If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.
  • Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
  • Dynamic environments, on the other hand, are continuously asking the agent what it wants to do.
  • Taxi driving is clearly dynamic; chess, when played with a clock, is semi-dynamic; crossword puzzles are static.
The Structure of Agents
What is an Intelligent Agent?

• An agent is anything that can perceive its environment through sensors and act upon that environment through actuators.

• Goal: design rational agents that do a "good job" of acting in their environments.
  • Success is determined based on some objective performance measure.
Vacuum Cleaner World
Environment: squares A and B
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

Percept sequence                               Action
[A, Clean]                                     Right
[A, Dirty]                                     Suck
[B, Clean]                                     Left
[B, Dirty]                                     Suck
[A, Clean], [A, Clean]                         Right
[A, Clean], [A, Dirty]                         Suck
[A, Clean], [A, Clean], [A, Clean]             Right
[A, Clean], [A, Clean], [A, Dirty]             Suck
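
To make the example concrete, here is a minimal Python sketch of the two-square vacuum world; the class name and method names are illustrative assumptions, not part of the slides:

# A minimal sketch of the two-square vacuum world (squares A and B).
import random

class VacuumWorld:
    def __init__(self):
        self.location = "A"
        self.status = {"A": random.choice(["Clean", "Dirty"]),
                       "B": random.choice(["Clean", "Dirty"])}

    def percept(self):
        # The agent perceives its current location and that square's contents.
        return (self.location, self.status[self.location])

    def execute(self, action):
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"
        # "NoOp" leaves the world unchanged.

world = VacuumWorld()
print(world.percept())   # e.g. ('A', 'Dirty')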
Agent Function and Agent Program
Agent function (percepts ==> actions)
  Maps from percept histories to actions: f: P* → A

The agent program runs on the physical architecture to produce the function f:
  agent = architecture + program

  Action := Function(Percept Sequence)
  If (Percept Sequence) then do Action

Example: A simple agent function for the vacuum world
  If (current square is dirty) then Suck
  Else move to the adjacent square
Table-Driven Agent Program

function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences, initially fully specified

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
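
A minimal Python sketch of this table-driven scheme, assuming the table is a dictionary keyed by the full percept sequence (the helper names are illustrative):

# A minimal sketch of the table-driven agent program above.
def make_table_driven_agent(table):
    percepts = []                                  # the percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")  # LOOKUP(percepts, table)

    return agent

# Part of the vacuum-world table from the earlier slide.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("A", "Dirty")))   # -> Suck (looked up on the two-percept sequence)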

The disadvantages of this architecture:
• Infeasibility (excessive size): how big would the table have to be? (see the estimate below)
• Lack of adaptiveness: could the agent ever learn from its mistakes?
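
As a rough estimate (following the standard textbook analysis): if P is the set of possible percepts and T is the number of percepts the agent will receive over its lifetime, the lookup table must hold

  |P| + |P|^2 + ... + |P|^T

entries, one for every possible percept sequence, which is astronomically large for any realistic task such as driving or chess.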
Agent Types
• Simple reflex agents
  • are based on condition-action rules and implemented with an appropriate production system.
  • They are stateless devices which do not have memory of past world states.

• Reflex agents with memory (model-based)
  • have internal state which is used to keep track of past states of the world.

• Agents with goals
  • are agents which, in addition to state information, have a kind of goal information which describes desirable situations.
  • Agents of this kind take future events into consideration.

• Utility-based agents
  • base their decisions on classical axiomatic utility theory in order to act rationally.
A Simple Reflex Agent
• Example:
    if car-in-front-brakes
    then initiate braking

• The agent works by finding a rule whose condition matches the current situation (rule-based systems).

• But this only works if the current percept is sufficient for making the correct decision.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
Example: Simple Reflex Vacuum Agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
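
A direct Python version of this reflex vacuum agent (a sketch mirroring the pseudocode; the function name is an assumption):

# A minimal sketch of the reflex vacuum agent from the pseudocode above.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                 # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))   # -> Right
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left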
Model-Based Reflex Agent

• Updating internal state requires two kinds of encoded knowledge:
  • knowledge about how the world changes (independent of the agent's actions);
  • knowledge about how the agent's actions affect the world.

• But knowledge of the internal state is not always enough:
  • how to choose among alternative decision paths (e.g., where should the car go at an intersection)?
  • This requires knowledge of the goal to be achieved.

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition–action rules
              action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
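
A minimal Python sketch of the same model-based structure; update_state, the rule format, and the state representation are all illustrative assumptions:

# A minimal sketch of a model-based reflex agent, mirroring the pseudocode above.
def make_model_based_reflex_agent(update_state, rules):
    state = {}          # the agent's current conception of the world state
    last_action = None  # the most recent action, initially none

    def agent(percept):
        nonlocal state, last_action
        # The transition model is folded into update_state(state, action, percept).
        state = update_state(state, last_action, percept)
        for condition, action in rules:          # RULE-MATCH over condition-action rules
            if condition(state):
                last_action = action
                return action
        last_action = "NoOp"
        return "NoOp"

    return agent

# Example: remember which squares have been observed clean or dirty.
def update_state(state, action, percept):
    location, status = percept
    new_state = dict(state, location=location)
    new_state[location] = status
    return new_state

rules = [
    (lambda s: s.get(s["location"]) == "Dirty", "Suck"),
    (lambda s: s["location"] == "A",            "Right"),
    (lambda s: s["location"] == "B",            "Left"),
]

agent = make_model_based_reflex_agent(update_state, rules)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right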
Agents with Explicit Goals
• Knowing the current state is not always enough.

• State allows an agent to keep track of unseen parts of the world, but the agent must update the state based on knowledge of changes in the world and of the effects of its own actions.

• Goal = description of a desired situation.

• Examples: reasoning about actions
  • The decision to change lanes depends on a goal to go somewhere (and other factors).
  • Search and Planning are concerned with finding sequences of actions to satisfy a goal; a reflexive agent is concerned with one action at a time.
  • Classical Planning: finding a sequence of actions that achieves a goal.

• Notes:
  • Reflex agents act only on pre-computed knowledge (rules).
  • Goal-based (planning) agents act by reasoning about which actions achieve the goal.
  • They are less efficient, but more adaptive and flexible.
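
A minimal sketch of goal-based action selection, using breadth-first search over a predicted state-transition model; every name below is an illustrative assumption, and the example reuses the two-square vacuum world:

# A minimal sketch of goal-based action selection: search for a sequence of
# actions whose predicted outcome satisfies the goal (breadth-first search).
from collections import deque

def plan(start, goal_test, successors):
    """Return a list of actions from start to a state satisfying goal_test.

    successors(state) yields (action, next_state) pairs predicted by the model.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None   # no plan found

# Example: vacuum world where the goal is "both squares clean".
def successors(state):
    loc, dirt = state                     # dirt is a frozenset of dirty squares
    yield "Suck", (loc, dirt - {loc})
    yield "Right", ("B", dirt)
    yield "Left", ("A", dirt)

start = ("A", frozenset({"A", "B"}))
print(plan(start, lambda s: not s[1], successors))   # -> ['Suck', 'Right', 'Suck']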
A Complete Utility-Based Agent

• A preferred world state has higher utility for the agent (utility = the quality of being useful).

• Examples:
  • quicker, safer, more reliable ways to get where you are going;
  • price-comparison shopping.
Utility Function

• Utility function: state ==> U(state) = measure of happiness

• It allows rational decisions in two kinds of situations:
  • evaluation of the tradeoffs among conflicting goals;
  • evaluation of competing goals.
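
A minimal sketch of utility-based action selection: predict the state each action leads to and pick the action whose predicted state has the highest utility. The route example and the weights in the utility function are arbitrary assumptions:

# A minimal sketch of utility-based action selection.
def utility_based_choice(state, actions, result, utility):
    """Pick the action a that maximizes utility(result(state, a))."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: routes that trade off speed against safety.
routes = {"highway":  {"time": 20, "risk": 0.30},
          "backroad": {"time": 35, "risk": 0.05}}

def result(state, action):
    return routes[action]

def utility(outcome):
    # Weighted tradeoff between conflicting goals (the weights are arbitrary assumptions).
    return -outcome["time"] - 100 * outcome["risk"]

print(utility_based_choice(None, list(routes), result, utility))   # -> 'backroad'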
Learning Agents

• All previous agent programs describe methods for selecting actions, yet they do not explain the origin of these programs.

• Learning mechanisms can be used to perform this task: teach the agents instead of instructing them.

• The advantage is the robustness of the program toward initially unknown environments.

• Components of a learning agent:
  • Performance element: selects actions based on percepts; corresponds to the previous agent programs.
  • Critic: provides feedback on the agent's performance based on a fixed performance standard.
  • Learning element: introduces improvements in the performance element.
  • Problem generator: suggests actions that will lead to new and informative experiences (exploration vs. exploitation).
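
A minimal sketch wiring these four components into a single agent step; the interfaces and the simple exploration rule are illustrative assumptions, not from the slides:

# A minimal sketch of the learning-agent architecture described above.
import random

class LearningAgent:
    def __init__(self, performance_element, critic, learning_element,
                 problem_generator, explore_probability=0.1):
        self.performance_element = performance_element   # selects actions from percepts
        self.critic = critic                             # feedback vs. a fixed performance standard
        self.learning_element = learning_element         # improves the performance element
        self.problem_generator = problem_generator       # suggests informative (exploratory) actions
        self.explore_probability = explore_probability

    def step(self, percept):
        action = self.performance_element(percept)
        feedback = self.critic(percept, action)
        # The learning element returns an improved performance element.
        self.performance_element = self.learning_element(feedback, self.performance_element)
        if random.random() < self.explore_probability:
            action = self.problem_generator(percept)     # exploration vs. exploitation
        return action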
