ARTIFICIAL INTELLIGENCE

UNIT-II
Environment Types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multi-agent)
Fully observable vs. Partially observable
• A fully observable environment is one in which the agent can always see the entire state of the environment, so it does not need memory to make an optimal decision. Example: a game of checkers.
• A partially observable environment is one in which the agent can never see the entire state of the environment, so it needs memory to make optimal decisions. Example: a game of poker.
• When the agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• Acting in a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Examples (contrasted in the sketch after this list):
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not known.
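To make the contrast concrete, here is a minimal Python sketch (the class names and the toy "threat" percept are illustrative assumptions, not from any textbook or library). A fully observable agent can choose its action from the current percept alone, while a partially observable agent must fold each percept into an internal memory:

```python
class FullyObservableAgent:
    """Sees the complete state each turn, so it needs no memory."""
    def act(self, percept):
        # The percept IS the whole state (e.g., a full chess board),
        # so the decision can be computed from it directly.
        return "advance" if percept["threat"] == 0 else "defend"

class PartiallyObservableAgent:
    """Sees only part of the state, so it accumulates a history."""
    def __init__(self):
        self.history = []                    # memory of past percepts

    def act(self, percept):
        self.history.append(percept)         # e.g., cards seen so far in poker
        threats_seen = sum(p["threat"] for p in self.history)
        return "advance" if threats_seen == 0 else "defend"

chess_like = FullyObservableAgent()
poker_like = PartiallyObservableAgent()
print(chess_like.act({"threat": 0}))  # advance: current percept suffices
print(poker_like.act({"threat": 1}))  # defend
print(poker_like.act({"threat": 0}))  # still defend: memory of the past threat
```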
Episodic vs. Sequential
• Sequential environments require memory of past actions to determine the next best action.
• Playing tennis is a good example: a player observes the opponent’s shot and chooses an action based on the rally so far.
• Episodic environments are a series of one-shot actions, and only the current (or most recent) percept is relevant.
• A support bot (agent) answers one question, then another, and so on, so each question-answer pair is a single episode (see the sketch after this list).
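The difference shows up directly in code. In this toy Python sketch (the FAQ table and the tennis rule are invented for illustration), the episodic bot needs no state between calls, while the sequential player's choice depends on the whole rally:

```python
FAQ = {"hours": "9am-5pm", "price": "$10"}

def episodic_support_bot(question):
    # Each question-answer pair is a self-contained episode;
    # earlier answers have no effect on later ones.
    return FAQ.get(question, "I don't know.")

class SequentialTennisPlayer:
    def __init__(self):
        self.rally = []                      # past shots matter

    def return_shot(self, opponent_shot):
        self.rally.append(opponent_shot)
        # A history-dependent rule: after three cross-court shots
        # in a row, change the pattern.
        if self.rally[-3:] == ["cross", "cross", "cross"]:
            return "down-the-line"
        return "cross"

print(episodic_support_bot("hours"))         # "9am-5pm", independent of context
player = SequentialTennisPlayer()
for shot in ["cross", "cross", "cross"]:
    print(player.return_shot(shot))          # cross, cross, down-the-line
```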
Deterministic vs. Stochastic
• An environment is called deterministic when the agent’s actions uniquely determine the outcome. For example, in chess there is no randomness when you move a piece.
• An environment is called stochastic when the agent’s actions do not uniquely determine the outcome. For example, in games with dice you can choose the action of throwing the dice, but not the number that comes up.
• Self-driving cars operate in a stochastic environment: the outcome of an action is not uniquely determined, since traffic, pedestrians, and road conditions vary from moment to moment (see the sketch after this list).
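A minimal Python sketch of the distinction, using toy state transitions that are purely illustrative: the same action from the same state always gives the same result in the deterministic case, but not in the stochastic one:

```python
import random

def deterministic_step(square, move):
    # Chess-like: the chosen move lands the piece exactly where intended.
    return square + move

def stochastic_step(square):
    # Dice-like: the agent controls the act of throwing,
    # not the number that is rolled.
    return square + random.randint(1, 6)

print(deterministic_step(10, 3))   # always 13
print(stochastic_step(10))         # anywhere from 11 to 16, run to run
```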
Static vs. Dynamic
• Static AI environments rely on data and knowledge sources that do not change over time. In contrast, dynamic AI environments deal with data sources that change quite frequently.
• An environment that keeps changing while the agent is deliberating or acting is said to be dynamic.
• A roller coaster ride is dynamic: once it is set in motion, the environment changes every instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static, as there is no change in the surroundings when an agent enters (the sketch after this list shows why dynamics matter).
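The practical consequence of a dynamic environment is that a percept can go stale while the agent is still deciding. This toy Python sketch (the DynamicWorld class is an invented illustration) shows a world that keeps moving on its own:

```python
class DynamicWorld:
    def __init__(self):
        self.position = 0                  # e.g., a roller coaster car

    def tick(self):
        self.position += 1                 # the world changes by itself

world = DynamicWorld()
observed = world.position                  # the agent takes a snapshot
world.tick()                               # the world moves while the
world.tick()                               # agent is still deliberating
print(observed, world.position)            # 0 2: the snapshot is stale

# In a static environment there is no tick(): the state the agent
# observed is still the state when it finally acts.
```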
Discrete vs. Continuous
• A discrete environment is one in which there are finitely many choices you can make and finitely many things you can sense. For example, there are finitely many board positions and moves in a chess game.
• A continuous environment is one in which the possible choices and the things you can sense are infinite.
• If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete, as it has only a finite number of moves. The number of moves might vary from game to game, but it is still finite.
• An environment in which the possible actions cannot be enumerated, i.e. one that is not discrete, is said to be continuous.
• Self-driving cars are an example of a continuous environment: actions such as steering and accelerating take values from continuous ranges, which cannot be enumerated (see the sketch after this list).
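The contrast can be stated as a difference in action spaces. In this illustrative Python sketch (the move list and the numeric ranges are invented examples), the discrete space is a finite list that can be enumerated, while the continuous space can only be sampled from a range:

```python
import random

# Discrete: a finite, listable set of moves, as in chess.
chess_like_actions = ["e2e4", "d2d4", "g1f3"]
move = random.choice(chess_like_actions)       # one of finitely many options

# Continuous: real-valued controls, as in driving.
steering_angle = random.uniform(-30.0, 30.0)   # any real value in [-30, 30]
speed = random.uniform(0.0, 120.0)             # km/h: uncountably many choices

print(move, round(steering_angle, 2), round(speed, 1))
```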
Single Agent vs. Multiple Agent
• In a single-agent environment there is only one agent responsible for the actions, e.g., solving a jigsaw puzzle. In a multi-agent environment, the performance and actions of two or more agents interact.
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of a single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent, as it involves 11 players on each team (see the sketch after this list).
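A simple way to see the difference in code: in the single-agent loop only one agent changes the environment, while in the multi-agent loop every agent's action affects the shared state the others will see. The Python below is a toy illustration with invented rules:

```python
# Single agent: a lone maze solver is the only thing changing the state.
position = 0
for _ in range(3):
    position += 1                   # only this agent acts
print("maze position:", position)   # 3

# Multi agent: several players act on a shared state (the ball),
# so each agent's outcome depends on what the others do.
ball = 0
player_moves = [+1, +1, -2]         # two attackers and one defender
for move in player_moves:
    ball += move                    # every agent influences the ball
print("ball position:", ball)       # 0: the defender undid the attackers
```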
Thank you
