Intelligent Agents
2.1. Intelligent Agents
2.2. Agents and Environments
2.3. Acting of Intelligent Agents (Rationality)
2.4. Structure of Intelligent Agents
Medical diagnosis system
  Percepts: symptoms, findings, patient's answers
  Actions: questions, tests, treatments
  Goals: healthy patient, minimize costs
  Environment: patient, hospital
Satellite image analysis system
  Percepts: pixels of varying intensity, color
  Actions: print a categorization of the scene
  Goals: correct categorization
  Environment: images from an orbiting satellite
Refinery controller
  Percepts: temperature, pressure readings
  Actions: open/close valves; adjust temperature
  Goals: maximize purity, yield, safety
  Environment: refinery
Interactive English tutor
  Percepts: typed words
  Actions: print exercises, suggestions, corrections
  Goals: maximize student's score on test
  Environment: set of students
Questions
Question 1: Which of the following are agents?
(A) Tsegaye G/Medhin
(B) Your dog
(C) Vacuum cleaner
Answer 1:
(A): Yes
(B): Yes
(C): Yes, if it is an autonomous vacuum cleaner; otherwise, no.
Example 2: The percepts, actions, goals, and environment for an automated taxi.
How should an agent act?
Rational Agent:
A rational agent is an agent that has clear preferences, models uncertainty, and acts so as to maximize its performance measure over all possible actions.
A rational agent is said to do the right thing.
AI is concerned with creating rational agents, drawing on game theory and decision theory for various real-world scenarios.
For an AI agent, rational action is most important: in reinforcement learning, for example, the agent receives a positive reward for each best possible action and a negative reward for each wrong action.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:
o The performance measure, which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts to date.
Note: Rationality differs from omniscience. An omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.
How should an agent act? (1)
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
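This selection rule can be sketched in Python. The action names and the expected-performance scores below are invented for illustration (loosely following the exam example), not standard definitions:

```python
# Illustrative sketch of rational action selection: pick the action whose
# expected performance is highest, given the percept history and built-in
# knowledge. Action names and scores are assumptions for this example.

def expected_performance(action, percept_history):
    # Built-in knowledge: a rough performance estimate per action,
    # refined by evidence from the percepts seen so far.
    knowledge = {"study": 8.0, "guess": 2.0, "skip": 0.0}
    bonus = 1.0 if "seen_question_before" in percept_history else 0.0
    return knowledge[action] + (bonus if action == "study" else 0.0)

def rational_choice(percept_history, actions=("study", "guess", "skip")):
    # The rational agent maximizes *expected* performance, not the
    # actual outcome, which it cannot know in advance.
    return max(actions, key=lambda a: expected_performance(a, percept_history))

print(rational_choice(["seen_question_before"]))  # study
```

The key point is that the choice depends only on the percept sequence so far and the built-in knowledge, never on information the agent has no access to.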
E.g. taking an exam: maximize your marks, based on the questions on the paper and your knowledge.
An omniscient agent
  Knows the actual outcome of its actions in advance, so for it there are no other possible outcomes.
  However, omniscience is impossible in the real world.
  E.g. you cross a street after checking carefully, but are killed by a cargo door that falls off a plane at 33,000 ft. Was crossing irrational? No: rationality maximizes expected performance given what the agent knows, not the actual outcome.
[Figure: agent interacting with its environment through sensors and actuators, with a learning component]
In designing an agent, the first step must always be to specify the task environment as
fully as possible.
Task environment (1)
For the automated taxi:
  Environment: a taxi must deal with a variety of roads; with traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.; and it must interact with the customer.
  Sensors: camera, ...
Properties of task environments
Fully observable vs. partially observable
  Fully observable
    If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is effectively fully observable.
  Partially observable
    If an agent's sensors do not give it access to the complete state of the environment, because the sensors are noisy and inaccurate, or because parts of the state are simply missing from the sensor data.
    E.g. a local dirt sensor on a vacuum cleaner cannot tell whether the other squares are clean or not.
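A tiny sketch of this kind of partial observability, using the vacuum-cleaner example (the square names and world layout are illustrative assumptions):

```python
# Illustrative sketch: a vacuum cleaner with only a local dirt sensor.
# The percept reveals the current square only, so the environment is
# partially observable from the agent's point of view.

world = {"A": "dirty", "B": "clean"}   # true state, hidden from the agent

def local_percept(location):
    # The sensor reports only the square the agent is standing on.
    return (location, world[location])

print(local_percept("A"))   # ('A', 'dirty')
# From this percept alone the agent cannot tell whether square B is clean.
```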
Discrete vs. continuous
  Discrete environment: there are a finite number of distinct states, percepts, and actions. E.g. a chess game.
  Continuous environment: states, time, percepts, or actions range over continuous values. E.g. taxi driving.
Known vs. unknown
  If the environment is unknown, the agent will have to learn how it works in order to make good decisions. E.g. a new video game.
Examples of task environments
Structure of agents
2.5. Agent Types
2.5.1. Simple reflex agent
1. Simple reflex agents: these agents make decisions on the basis of the current percept alone and ignore the rest of the percept history. They have very limited intelligence.
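As a rough sketch, a simple reflex agent for the two-square vacuum world can be written as a single percept-to-action mapping; the square names and action names are illustrative assumptions:

```python
# Minimal sketch of a simple reflex agent for the two-square vacuum world.
# It maps the *current* percept directly to an action via condition-action
# rules, and keeps no percept history at all.

def simple_reflex_vacuum(percept):
    location, status = percept          # e.g. ('A', 'dirty')
    if status == "dirty":
        return "suck"                   # rule: dirty square -> suck
    elif location == "A":
        return "move_right"             # rule: clean at A -> go to B
    else:
        return "move_left"              # rule: clean at B -> go to A

print(simple_reflex_vacuum(("A", "dirty")))   # suck
print(simple_reflex_vacuum(("A", "clean")))   # move_right
```

Because it ignores history, this agent would keep shuttling between the two squares forever even when both are clean, which is exactly the limitation that motivates the agent types below.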
2. Model-based reflex agents: agents that keep track of the world; the agent has memory (an internal state).
A model-based agent has two important components:
o Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
o Internal state: a representation of the current state of the world, based on the percept history.
These agents have a model, i.e. knowledge of the world, and perform actions based on that model.
For a world that is partially observable, the agent has to keep track of an internal state that depends on the percept history and reflects some of the unobserved aspects of the world.
E.g. driving a car and changing lanes.
Updating the agent's internal state requires two kinds of knowledge: how the world evolves independently of the agent, and how the agent's own actions affect the world.
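A minimal model-based vacuum agent, sketched under illustrative assumptions (two squares "A" and "B", invented action names), showing an internal state updated from percepts and from the known effects of the agent's own actions:

```python
# Sketch of a model-based reflex agent for the two-square vacuum world.
# It keeps an internal model of each square, updated from the percept
# history and from the known effects of its own actions.

class ModelBasedVacuum:
    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def agent(self, percept):
        location, status = percept
        self.model[location] = status          # fold the new percept in
        other = "B" if location == "A" else "A"
        if status == "dirty":
            action = "suck"
            self.model[location] = "clean"     # effect of our own action
        elif self.model[other] != "clean":     # other square unknown or dirty
            action = "move_right" if location == "A" else "move_left"
        else:
            action = "noop"                    # model says all squares clean
        return action

vac = ModelBasedVacuum()
print(vac.agent(("A", "dirty")))   # suck
print(vac.agent(("A", "clean")))   # move_right
print(vac.agent(("B", "clean")))   # noop
```

Unlike the simple reflex agent, this one can stop once its model says every square is clean, even though its sensor never sees both squares at once.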
3. Goal-based agents: the agent performs actions based on the current state of the environment and its goal; it searches for the action sequences that achieve the goal. Goal-based agents are less efficient than reflex agents, but more flexible.
4. Utility-based agents:
o Act based not only on goals but also on the best way to achieve the goal.
o Consider multiple possible alternatives; the agent has to choose among them in order to perform the best action.
o Choose the action that leads to the best expected utility, which is computed by averaging over all possible outcome states, weighted by the probability of each outcome.
o E.g. choosing a meal in the canteen.
o Use a world model along with a utility function that influences the agent's preferences among the states of that world.
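The expected-utility computation a utility-based agent performs can be sketched as follows, using the canteen example; all meal names, probabilities, and utility numbers are invented for illustration:

```python
# Sketch of expected-utility maximization: for each action, average the
# utility over all possible outcome states, weighted by their probabilities,
# then pick the action with the highest expected utility.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

# Possible meals and their uncertain outcomes as (probability, utility).
actions = {
    "pasta": [(0.8, 7.0), (0.2, 3.0)],   # usually good, sometimes cold
    "stew":  [(0.5, 9.0), (0.5, 1.0)],   # either great or awful
    "salad": [(1.0, 5.0)],               # reliably average
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, round(expected_utility(actions[best]), 2))  # pasta 6.2
```

Note that "stew" has the highest possible utility (9.0) but a lower expected utility (5.0) than "pasta" (6.2), which is why the utility-based agent prefers pasta.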