
Introduction to Artificial Intelligence

Intelligent agents

By:
Boreshban boreshban@gmail.com

1 / 25
Outline
• Intelligent agents
• Structure of intelligent agents
• Rational agents
• Environment types
• Agent types

2 / 25
Agents
• An agent is anything that can be viewed as:
• Sensors: through which it perceives its environment
• Actions: through which it acts upon its environment

3 / 25
Examples of agents
• Human agent: eyes, ears, and other organs for sensors; hands, legs,
mouth, and other body parts for effectors.

• Robotic agent: cameras and infrared range finders for sensors;
various motors for effectors.

4 / 25
Rational agent
• "Do the right thing" based on the percept history and the actions
it can perform.

• Rational Agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent
has.
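This percept-to-action mapping can be sketched with the classic two-square vacuum world; the location names, status values, and action strings below are illustrative assumptions, not from the slides:

```python
# Minimal sketch of an agent function for a two-square vacuum world.
# The percept is a (location, status) pair; names are assumed.

def vacuum_agent(percept):
    """Map the current percept to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"  # cleaning dirt maximizes the performance measure
    # Otherwise move to the other square to check it
    return "Right" if location == "A" else "Left"

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
```

Under the usual performance measure for this toy environment (amount of dirt cleaned), this simple mapping happens to select the rational action for every percept.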

5 / 25
Primary Design Notes (PAGE)
• Perceptions
• Actions
• Goals
• Environments

6 / 25
PAGE Examples

Agent: Automated taxi driver

• Perceptions: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, microphone

• Actions: Steering wheel, accelerator, brake, signal, horn

• Goal: Safe, fast, legal, comfortable trip, maximize profits

• Environment: Roads, other traffic, pedestrians, customers

7 / 25
PAGE Examples
Agent: Medical diagnosis system

• Perceptions: Keyboard (entry of symptoms, findings, patient's
answers)
• Actions: Screen display (questions, tests, diagnoses, treatments,
referrals)
• Goal: Healthy patient, minimize costs
• Environment: Patient, hospital, staff

8 / 25
PAGE Examples
Agent: Part picking robot

• Perceptions: Camera, joint angle sensors
• Actions: Jointed arm and hand
• Goal: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins

9 / 25
PAGE Examples
Agent: Interactive English tutor

• Perceptions: Keyboard
• Actions: Screen display (exercises, suggestions, corrections)
• Goal: Maximize student's score on test
• Environment: Set of students

10 / 25
Autonomy

• An agent is autonomous if its behavior is determined by its own
experience (with the ability to learn and adapt)

• It does not rely only on the prior knowledge of its designer

• It learns to compensate for partial or incorrect prior knowledge
▫ Benefit: it can cope with a changing environment
▫ It starts by acting randomly or based on the designer's knowledge,
and then learns from experience

11 / 25
Environment types
• Fully observable (vs. partially observable): An agent's sensors
give it access to the complete state of the environment at each point
in time.
▫ Example: Chess vs. Taxi driver

• Deterministic (vs. stochastic): The next state of the environment is
completely determined by the current state and the action executed
by the agent.
▫ Example: Chess vs. Taxi driver

12 / 25
Environment types
• Episodic (vs. sequential): The agent's experience is divided into
atomic "episodes" (each episode consists of the agent perceiving
and then performing a single action), and the choice of action in
each episode depends only on the episode itself.
▫ Episodic environments are much simpler because the agent does not
need to think ahead.

• Static (vs. dynamic): The environment is unchanged while the
agent is deliberating.
▫ Example: Chess vs. Taxi driver

13 / 25
Environment types
• Discrete (vs. continuous): A limited number of distinct, clearly
defined percepts and actions.
▫ Example: Chess vs. Taxi driver

• Single agent (vs. multi-agent): An agent operating by itself in
an environment.

14 / 25
Environment types

Environment             Observable  Deterministic  Episodic    Static   Discrete
Chess with a clock      Fully       Deterministic  Sequential  Semi     Discrete
Chess without a clock   Fully       Deterministic  Sequential  Static   Discrete
Backgammon              Fully       Stochastic     Sequential  Static   Discrete
Taxi driving            Partially   Stochastic     Sequential  Dynamic  Continuous
Football                Partially   Stochastic     Sequential  Dynamic  Continuous

15 / 25
Agent Program Types
• Look-up table agents

• Simple reflex agents

• Model-based reflex agents

• Goal-based agents

• Utility-based agents

16 / 25
Look Up Table Agents
• Benefits:
▫ Easy to implement

• Drawbacks:
▫ Huge table
▫ Takes a long time to build the table
▫ No autonomy
▫ Even with learning, it takes a long time to learn the table entries
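The table-driven idea can be sketched in a few lines, again assuming the two-square vacuum world (square names and actions are illustrative). Note that the table is keyed on the entire percept sequence, which is why it grows exponentially:

```python
# Sketch of a table-driven agent: the action is looked up using the
# full percept history as the key. Entries are illustrative.

def make_table_driven_agent(table):
    percepts = []  # the agent's complete percept history

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the entry is missing

    return agent

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

Even this toy table needs one entry per possible percept sequence, so for any realistic environment it becomes infeasibly large, as the drawbacks above state.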

17 / 25
Simple Reflex Agents

18 / 25
Simple Reflex Agents

• Acts only on the current percept: no memory, no planning
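A simple reflex agent can be sketched as a list of condition-action rules matched against the current percept only; the rules below are illustrative assumptions for the two-square vacuum world:

```python
# Sketch of a simple reflex agent: the first condition-action rule
# whose condition matches the current percept fires. No history is kept.

RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
print(simple_reflex_agent({"location": "B", "status": "Clean"}))  # Left
```

Because it sees only the current percept, this agent can loop forever in a partially observable environment: nothing tells it that both squares are already clean.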

19 / 25
Reflex agents with states

20 / 25
Reflex agents with states

• Maintains internal state, but does no longer-term planning
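The difference from a simple reflex agent is the internal state. A minimal sketch, again assuming the two-square vacuum world (square names and the "NoOp" action are illustrative):

```python
# Sketch of a model-based reflex agent: it keeps an internal model of
# the last known status of each square, so it can stop when its model
# says everything is clean, even though the current percept alone
# cannot tell it that.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # last known status per square

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update internal state
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"  # the model says both squares are clean: stop
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Clean")))  # Right (B's status still unknown)
print(agent.act(("B", "Clean")))  # NoOp (model: both squares clean)
```

The state removes the infinite loop of the pure reflex agent, but the agent still reacts step by step: it never plans a sequence of actions toward a goal.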

21 / 25
Goal-based agents

22 / 25
Goal-based agents
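A goal-based agent chooses actions by asking "does this lead toward my goal?", which typically means searching for an action sequence. A minimal sketch using breadth-first search over a toy map of rooms (the map and action names are illustrative assumptions):

```python
# Sketch of the planning core of a goal-based agent: breadth-first
# search for a shortest action sequence from the current state to an
# explicit goal state.
from collections import deque

def plan(start, goal, transitions):
    """Return a list of actions reaching goal from start, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Toy map: three rooms in a row, connected by moves
transitions = {
    "A": [("right", "B")],
    "B": [("left", "A"), ("right", "C")],
    "C": [],
}
print(plan("A", "C", transitions))  # ['right', 'right']
```

Unlike the reflex agents above, the behavior changes automatically when the goal changes: no rules need rewriting, only a different `goal` argument.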

23 / 25
Utility-based agents

24 / 25
Utility-based agents
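A utility-based agent goes beyond binary goals: it ranks outcomes with a utility function and picks the action with the highest expected utility. A minimal sketch; the action names, probabilities, and utilities below are illustrative assumptions:

```python
# Sketch of utility-based action selection: expected utility combines
# outcome probabilities with a utility function, and the agent chooses
# the action that maximizes it.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """actions: dict mapping action name -> list of (prob, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "highway": [(0.9, 10), (0.1, -50)],  # usually fast, occasionally very bad
    "side_road": [(1.0, 6)],             # slower but certain
}
print(choose(actions))  # side_road (EU 6 beats 0.9*10 + 0.1*(-50) = 4)
```

This lets the agent trade off conflicting goals (e.g. speed vs. safety in the taxi example) rather than merely distinguishing goal states from non-goal states.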

25 / 25
