
Artificial Intelligence

AI

Ch. 2: Agents of Artificial Intelligence


or
Intelligent Agents
PART 1

Lecturer: Omar Ibrahim


Definition of AI Agents
• An intelligent agent (IA) is an autonomous
entity that acts upon an environment, directing
its activity towards achieving goals (i.e. it is an
agent), using observation through sensors and
consequent actuators (i.e. it is intelligent).
• Intelligent agents (IA) may also learn or use
knowledge to achieve their goals.
• A reflex machine, such as a thermostat, is
considered an example of an intelligent agent.
CONT..
• An agent is anything that can perceive its environment
through sensors and act upon that environment through
effectors.
• A human agent has sensory organs such as eyes, ears,
nose, tongue, and skin as its sensors, and other
organs such as hands, legs, and mouth as its effectors.
• A robotic agent substitutes cameras and infrared range
finders for the sensors, and various motors and actuators
for the effectors.
• A software agent has encoded bit strings as its percepts
and actions.
Cont..
• The parts of the agent used to perceive the
environment are called sensors.
• The parts of the agent that take actions are
called actuators.
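The sense–act loop above can be sketched in a few lines of Python. The thermostat environment, the threshold of 20 degrees, and the function names here are illustrative assumptions, not part of the slides.

```python
# A minimal sketch of the percept -> action agent loop.
# Environment, sensor, and actuator are hypothetical stand-ins.

def sense(environment):
    """Sensor: read the current temperature from the environment."""
    return environment["temperature"]

def act(environment, action):
    """Actuator: apply the chosen action back to the environment."""
    if action == "heat_on":
        environment["temperature"] += 1

def agent_step(environment):
    percept = sense(environment)              # perceive through sensors
    action = "heat_on" if percept < 20 else "heat_off"
    act(environment, action)                  # act through actuators
    return action

env = {"temperature": 18}
print(agent_step(env))  # heat_on
```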
Classes of Agents
• Russell & Norvig (2003) group agents into five
classes based on their degree of perceived
intelligence and capability:
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
Simple reflex agents
• Simple reflex agents act only on the basis of
the current percept, ignoring the rest of the
percept history.
• The agent function is based on the condition-
action rule: "if condition, then action."
• This agent function only succeeds when the
environment is fully observable.
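A simple reflex agent can be sketched as a list of condition-action rules scanned against the current percept only. The vacuum-world percepts and rules below are illustrative assumptions for this sketch.

```python
# Sketch of a simple reflex agent: the action depends only on the
# current percept, matched against condition-action rules.
# The vacuum-world rules are assumed for illustration.

RULES = [
    (lambda p: p["status"] == "dirty", "suck"),
    (lambda p: p["location"] == "A",   "move_right"),
    (lambda p: p["location"] == "B",   "move_left"),
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):          # "if condition, then action"
            return action
    return "no_op"

print(simple_reflex_agent({"location": "A", "status": "dirty"}))  # suck
print(simple_reflex_agent({"location": "A", "status": "clean"}))  # move_right
```

Note that nothing here stores percept history: each call starts from scratch, which is exactly why such agents fail in partially observable environments.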
CONT…
• Problems with simple reflex agents:
 Very limited intelligence.
 No knowledge of the non-perceptual parts of
the state.
 The rule table is usually too big to generate
and store.
 If any change occurs in the environment,
the collection of rules needs to be
updated.
2. Model-based reflex agents
• A model-based agent can handle partially
observable environments.
• Its current state is stored inside the agent,
which maintains some kind of structure
describing the part of the world that cannot be
seen.
• This knowledge about "how the world works" is
called a model of the world, hence the name
"model-based agent".
Cont..
• An agent may also use models to describe and
predict the behaviors of other agents in the
environment.
• Updating the state requires information about:
 how the world evolves independently of
the agent, and
 how the agent's actions affect the world.
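The internal state described above can be sketched as follows. The two-room world, the `visited` set, and the action names are assumptions made for illustration; the point is that the agent keeps a model of what it cannot currently see.

```python
# Sketch of a model-based reflex agent: it maintains internal state,
# updated from each percept, so it can act in a partially observable
# world. The two-room world is assumed for illustration.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"visited": set()}   # internal model of the world
        self.last_action = None

    def update_state(self, percept):
        # "How the world evolves" / "how my actions affect the world":
        # here the model simply records which rooms have been seen.
        self.state["visited"].add(percept["location"])

    def choose_action(self, percept):
        self.update_state(percept)
        if percept["status"] == "dirty":
            action = "suck"
        elif "B" not in self.state["visited"]:
            action = "move_to_B"          # explore the unseen part
        else:
            action = "no_op"
        self.last_action = action
        return action

agent = ModelBasedAgent()
print(agent.choose_action({"location": "A", "status": "dirty"}))  # suck
print(agent.choose_action({"location": "A", "status": "clean"}))  # move_to_B
```

Unlike the simple reflex agent, the second call here depends on what was perceived earlier, not just on the current percept.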
3. Goal-based agents
• Goal-based agents further expand on the capabilities
of model-based agents by using "goal"
information.
• Goal information describes situations that are
desirable. This gives the agent a way to choose
among multiple possibilities, selecting the one that
reaches a goal state.
• Search and planning are the subfields of artificial
intelligence devoted to finding action sequences that
achieve the agent's goals.
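The idea of searching for an action sequence that reaches a goal state can be sketched with breadth-first search over a tiny world. The three-state graph and action names are assumptions for illustration; real planners use far richer state spaces.

```python
# Sketch of the goal-based idea: search for an action sequence that
# leads from the current state to a goal state (breadth-first search).
# The tiny three-room world below is assumed for illustration.

from collections import deque

WORLD = {  # state -> {action: next_state}
    "A": {"right": "B"},
    "B": {"right": "C", "left": "A"},
    "C": {"left": "B"},
}

def plan(start, goal):
    """Return a list of actions leading from start to goal, via BFS."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in WORLD[state].items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no action sequence reaches the goal

print(plan("A", "C"))  # ['right', 'right']
```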
4. Utility-based agents
• A more general performance measure should allow
a comparison of different world states according to
exactly how happy they would make the agent. The
term utility can be used to describe how "happy"
the agent is.
• A rational utility-based agent chooses the action
that maximizes the expected utility of the action
outcomes; that is, what the agent expects to derive,
on average, given the probabilities and utilities of
each outcome.
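The expected-utility calculation described above is a probability-weighted sum over outcomes. The actions, probabilities, and utility values below are made up for illustration.

```python
# Sketch of a rational utility-based agent: compute the expected
# utility (sum of probability * utility) of each action and pick the
# maximum. The numbers below are illustrative assumptions.

ACTIONS = {
    # action: [(probability_of_outcome, utility_of_outcome), ...]
    "safe_route":  [(1.0, 50)],
    "risky_route": [(0.6, 100), (0.4, -20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

print(expected_utility(ACTIONS["risky_route"]))  # 52.0
print(choose(ACTIONS))  # risky_route
```

Here the risky route wins (0.6 × 100 + 0.4 × (−20) = 52 > 50), showing how utilities let the agent compare world states by degree, not just goal/non-goal.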
5. Learning agents
• Learning has the advantage that it allows an
agent initially to operate in unknown
environments and to become more competent
than its initial knowledge alone might allow.
• A learning agent in AI is a type of agent that
can learn from its past experiences, i.e. it has
learning capabilities. It starts acting with basic
knowledge and is then able to act and adapt
automatically through learning.
Cont…
• A learning agent has four main conceptual components:
1. Learning element: responsible for making improvements by
learning from the environment.
2. Critic: the learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
3. Performance element: responsible for selecting an external
action, i.e. choosing which action to take.
4. Problem generator: responsible for suggesting
actions that will lead to new and informative experiences.
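The four components above can be sketched as methods of one class. The thermostat-style task, the performance standard of 21, and the update rule are all assumptions made purely to show how the pieces connect.

```python
# Sketch of the four conceptual components of a learning agent.
# The thermostat task and all numeric details are assumed for
# illustration only.

import random

class LearningAgent:
    def __init__(self, standard=21.0):
        self.standard = standard   # fixed performance standard
        self.setpoint = 18.0       # knowledge the agent improves over time

    def performance_element(self, temperature):
        """Select an external action from the current knowledge."""
        return "heat_on" if temperature < self.setpoint else "heat_off"

    def critic(self, temperature):
        """Feedback: how far is the agent from the performance standard?"""
        return self.standard - temperature

    def learning_element(self, feedback):
        """Improve the performance element using the critic's feedback."""
        self.setpoint += 0.5 * feedback

    def problem_generator(self):
        """Suggest an exploratory action for a new, informative experience."""
        return random.choice(["heat_on", "heat_off"])

agent = LearningAgent()
feedback = agent.critic(temperature=19.0)   # 2.0 below the standard
agent.learning_element(feedback)
print(agent.setpoint)                        # 19.0
print(agent.performance_element(18.0))       # heat_on
```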
THE END
THANK YOU
