
Artificial Intelligence

Outline
 Agent program
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 Learning agents
 Weak AI vs Strong AI
Agent functions and programs
 An agent is completely specified by the agent function
mapping percept sequences to actions
 The job of AI is to design the agent program that
implements the agent function mapping percepts to
actions.

 agent = architecture + program
Agent Program
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
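
The same idea can be written as a short Python sketch; the percept encoding and the tiny lookup table below are assumptions made purely for illustration, not part of the original pseudocode.

```python
# Minimal sketch of a table-driven agent: the table maps entire
# percept sequences (as tuples) to actions.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # dict: percept sequence -> action
        self.percepts = []      # full percept history

    def __call__(self, percept):
        self.percepts.append(percept)
        # LOOKUP: index the table by the whole percept sequence so far
        return self.table.get(tuple(self.percepts))

# Illustrative (and deliberately tiny) table for a two-square vacuum world
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck
```

Even for this toy world the table must list every possible percept sequence, which is exactly the drawback discussed next.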
Agent Program
 Drawbacks:
 Huge table
 Takes a long time to build the table
 Takes a long time to search the table entries
Agent types
 Four basic kinds of agent programs will be discussed:
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 All these can be turned into learning agents.
1. Simple reflex agents
 Selects an action on the basis of the current percept only.
 Ignores the rest of the percept history.
 E.g. the vacuum agent

 Implemented through condition-action rules


 If dirty then suck
 If the car in front is braking then initiate braking
Simple reflex agents
The vacuum-cleaner world

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left
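
A direct Python transcription of these rules; representing the percept as a (location, status) tuple is an assumption made here for illustration.

```python
# Simple reflex agent for the two-square vacuum world.
# The percept is assumed to be a (location, status) tuple, e.g. ("A", "Dirty").

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```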

Agent types; simple reflex
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)   // abstracted description of the current state from the percept
  rule ← RULE-MATCH(state, rules)    // first rule in the set of rules that matches the state description
  action ← RULE-ACTION[rule]
  return action

 Will only work if the environment is fully observable.
 For example, if whether the car in front is braking cannot be determined from the current percept alone, the agent would either brake continuously and unnecessarily or, worse, never brake at all.
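
A compact Python sketch of the general agent program above; the rule representation (a condition predicate paired with an action) and the percept format are assumptions chosen for illustration.

```python
# Generic simple reflex agent: rules are (condition, action) pairs,
# where the condition is a predicate over the abstracted state.

def interpret_input(percept):
    # INTERPRET-INPUT: abstract the raw percept into a state description.
    # Here the percept is assumed to already be a simple dict.
    return percept

def rule_match(state, rules):
    # RULE-MATCH: return the action of the first rule whose condition matches.
    for condition, action in rules:
        if condition(state):
            return action
    return None

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    return rule_match(state, rules)

# Illustrative condition-action rules
rules = [
    (lambda s: s.get("status") == "Dirty", "Suck"),
    (lambda s: s.get("car_in_front_braking"), "Brake"),
]
print(simple_reflex_agent({"status": "Dirty"}, rules))              # -> Suck
print(simple_reflex_agent({"car_in_front_braking": True}, rules))   # -> Brake
```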
Agent types; simple reflex
 The INTERPRET-INPUT function generates an
abstracted description of the current state from the
percept
 The RULE-MATCH function returns the first rule in
the set of rules that matches the given state
description.
Agent types; simple reflex
 Simple reflex agents have the admirable property of
being simple, but they turn out to be of very limited
intelligence.
 They will work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.
2. Model-based reflex agents
 To tackle partially observable environments.
 Maintain internal state that depends on the percept history
 Reflects at least some of the unobserved aspects of the current state
 Over time, update the state using knowledge of the world
 Model of the world

 An agent that uses such a model is called a model-based agent.

 The current percept is combined with the old internal state to generate the updated description of the current state.
Model-based reflex agents
Agent types; reflex and state
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: rules, a set of condition-action rules
          state, a description of the current world state
          action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept)   // responsible for creating the new internal state description
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
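
A Python sketch of the same structure, again in the vacuum world; the particular internal model (remembering which squares are known to be clean) is an assumption chosen for illustration.

```python
# Model-based reflex agent for the two-square vacuum world.
# Internal state: which squares are believed to be clean.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {"A": None, "B": None}   # None = status unknown
        self.action = None                    # the most recent action

    def update_state(self, percept):
        # UPDATE-STATE: fold the new percept into the internal model.
        location, status = percept
        self.state[location] = status

    def __call__(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            self.action = "Suck"
        elif self.state["A"] == "Clean" and self.state["B"] == "Clean":
            self.action = "NoOp"      # the model says everything is clean
        elif location == "A":
            self.action = "Right"
        else:
            self.action = "Left"
        return self.action

agent = ModelBasedVacuumAgent()
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Clean")))   # -> NoOp, both squares known to be clean
```

Unlike the simple reflex version, this agent can stop once its internal state says both squares are clean, even though a single percept never reveals the whole world.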
3. Goal-based agents
 Knowing about the current state of the environment is not always
enough to decide what to do

 The agent needs a goal to know which situations are desirable, e.g. being at the passenger's destination.

 Typically investigated in search and planning research.

 Major difference: the future is taken into account.

 Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals (a sketch of such an agent follows below).
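
A minimal sketch of a goal-based agent that plans with breadth-first search; the grid world, the successor model, and the goal are all assumptions invented for illustration.

```python
from collections import deque

# Goal-based agent: instead of matching condition-action rules, it
# searches for an action sequence that reaches a goal state.

def successors(state):
    # Illustrative world model: states are (x, y) positions on a 3x3 grid.
    x, y = state
    moves = {"Up": (x, y + 1), "Down": (x, y - 1),
             "Right": (x + 1, y), "Left": (x - 1, y)}
    return {a: s for a, s in moves.items()
            if 0 <= s[0] <= 2 and 0 <= s[1] <= 2}

def plan(start, goal):
    # Breadth-first search for a shortest sequence of actions from start to goal.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no action sequence reaches the goal

print(plan((0, 0), (2, 1)))   # -> ['Up', 'Right', 'Right'], one shortest plan
```

Changing the goal argument changes the behavior without touching any rules, which is the flexibility discussed on the following slides.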
Goal-based agents


Goal-based agents
 Goal-based agents are more flexible because the knowledge that supports their decisions is represented explicitly and can be modified.
 The goal-based agent's behavior can easily be changed
to go to a different location.
 The reflex agent's rules for when to turn and when to
go straight will work only for a single destination;
they must all be replaced to go somewhere new.
4. Utility-based agents
 Certain goals can be reached in different ways.
 Some are better, i.e. have a higher utility: quicker, safer, more reliable, or cheaper than others.

 For example, there are many action sequences that will get the taxi to its destination, but some are quicker, safer, more reliable, and cheaper than others.

 A utility function maps a (sequence of) state(s) onto a real number, which describes the associated degree of happiness.

 Generates high-quality behavior (a sketch of utility-based action selection follows below).
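
A minimal sketch of utility-based action selection; the candidate routes and the utility function (a weighted trade-off of time, risk, and cost) are invented here purely to illustrate the idea.

```python
# Utility-based agent: score each candidate plan with a utility
# function and choose the one with the highest utility.

def utility(route):
    # Illustrative utility: quicker, safer, cheaper routes score higher.
    return -(1.0 * route["minutes"] + 20.0 * route["risk"] + 0.5 * route["cost"])

# Hypothetical routes that all get the taxi to its destination
routes = [
    {"name": "highway",    "minutes": 15, "risk": 0.20, "cost": 6.0},
    {"name": "back roads", "minutes": 25, "risk": 0.05, "cost": 2.0},
    {"name": "downtown",   "minutes": 20, "risk": 0.10, "cost": 4.0},
]

best = max(routes, key=utility)
print(best["name"])   # -> "highway" under these particular weights
```

All three routes achieve the goal; the utility function is what distinguishes the better ones from the worse ones.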


Utility-based agents
5. Learning agents
 All the previous agent programs describe methods for selecting actions.
 Yet they do not explain the origin of these programs.
 Learning mechanisms can be used to perform this task.
 Teach them instead of instructing them.
 An advantage is the robustness of the program toward initially unknown environments.
Learning agents
Learning agents
 Performance element: selects actions based on percepts.
 Corresponds to the previous agent programs.

 Learning element: introduces improvements in the performance element.

 Critic: provides feedback on the agent's performance based on a fixed performance standard.

 Problem generator: suggests actions that will lead to new and informative experiences.
 Suggests experiments
 Used to evolve new theories (a sketch of how these components fit together follows below)
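
A structural sketch of how these four components could fit together; the interfaces and names below are assumptions for illustration, not a standard API.

```python
# Skeleton of a learning agent: the four components and the control
# flow between them. All interfaces here are illustrative.

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions from percepts
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # feedback vs. a fixed performance standard
        self.problem_generator = problem_generator      # proposes informative experiments

    def step(self, percept):
        # The critic judges the outcome against the performance standard.
        feedback = self.critic(percept)
        # The learning element uses that feedback to improve the performance element.
        self.learning_element(self.performance_element, feedback)
        # Occasionally act on an exploratory suggestion from the problem generator.
        exploratory_action = self.problem_generator(percept)
        return exploratory_action or self.performance_element(percept)
```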
Weak AI vs Strong AI
 Weak (narrow) AI is focused on one particular problem or task domain.

 Strong (general) AI focuses on building intelligence that can handle any task or problem in any domain.
Weak AI
 Siri and Alexa could be considered AI, but generally,
they are weak AI programs.

 Even advanced chess programs are considered weak AI.

 Voice-activated assistants and chess programs often have a programmed response.
 They are sensing for things similar to what they know,
and classifying them accordingly.
 This presents a human-like experience, but that is all it
is—a simulation.

 They operate within a limited, pre-defined range of functions.
Strong AI
 Acts more like a brain
 It does not classify, but uses clustering and association
to process data.

 There isn't a programmed answer to your keywords or requests, as there is in weak AI.

 The results of its programming and functions are largely unpredictable.
 With Strong AI, a single system could theoretically handle all the same problems that a single human could.
 Strong AI does not currently exist. Some experts
predict it may be developed by 2030 or 2045.
 Others more conservatively predict that it may be
developed within the next century, or that the
development of Strong AI may not be possible at all.
Differences between strong and
weak AI
 With strong AI, machines can actually think and carry out tasks on their own, just like humans do. With weak AI, the machines cannot do this on their own and rely heavily on human intervention.

 Strong AI has a complex algorithm that helps it act in different situations, while all the actions in weak AI are pre-programmed by a human.
Differences between strong and
weak AI
 Strong AI-powered machines have a mind of their
own. They can process and make independent
decisions, while weak AI-based machines can only
simulate human behavior.
