CS302 Unit1-II
ARTIFICIAL INTELLIGENCE
UNIT-II
Environment Types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multi-agent)
Fully observable vs. Partially observable
• A fully observable environment is one in which the agent can always see the entire state of the environment. It does not need memory to make an optimal decision. Example: a game of checkers.
• A partially observable environment is one in which the agent can never see the entire state of the environment. It needs memory to make optimal decisions. Example: a game of poker.
• When an agent's sensors can access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to handle, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not known.
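The contrast can be sketched in Python (the percepts, actions, and state fields below are invented for illustration): a fully observable agent can decide from the current percept alone, while a partially observable agent must keep a memory of what it has seen so far.

```python
# Minimal sketch, not a real agent framework. All names are hypothetical.

def fully_observable_policy(state):
    # The percept IS the complete state, so no history is needed:
    # the same state always suffices for an optimal choice.
    return "advance" if state["opponent_distance"] > 1 else "block"

class PartiallyObservableAgent:
    """Keeps memory because no single percept reveals the whole state."""
    def __init__(self):
        self.history = []          # memory of past percepts

    def act(self, percept):
        self.history.append(percept)
        # Decide from everything seen so far, not just the last percept.
        seen_threat = any(p == "threat" for p in self.history)
        return "retreat" if seen_threat else "explore"

print(fully_observable_policy({"opponent_distance": 3}))  # advance
agent = PartiallyObservableAgent()
print(agent.act("clear"))    # explore
print(agent.act("threat"))   # retreat
print(agent.act("clear"))    # retreat: the remembered threat still matters
```

Note how the partially observable agent's answer to the final "clear" percept differs from its answer to the first one: only its memory distinguishes the two situations.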
Episodic vs. Sequential
• Sequential environments require memory of past actions to determine the next best action.
• Playing tennis is a good example: a player observes the opponent’s shot and acts, and each shot depends on the ones that came before it.
• Episodic environments are a series of one-shot actions, and only the current (or recent) percept is relevant.
• A support bot (agent) answers one question, then another, and so on; each question-answer pair is a single episode.
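A small sketch of this distinction, with invented questions and move names: the episodic bot carries no state between questions, while the sequential player's next move depends on its own history of moves.

```python
# Illustrative sketch only; the FAQ entries and moves are made up.

def episodic_bot(question):
    # Each question is an independent episode; nothing carries over.
    faq = {"hours": "9am-5pm", "price": "$10"}
    return faq.get(question, "unknown")

class SequentialPlayer:
    """Chooses the next move based on all moves played so far."""
    def __init__(self):
        self.moves = []

    def next_move(self):
        # A toy policy: alternate strokes depending on past actions.
        move = "forehand" if len(self.moves) % 2 == 0 else "backhand"
        self.moves.append(move)
        return move

print(episodic_bot("hours"))   # 9am-5pm, regardless of earlier questions
p = SequentialPlayer()
print(p.next_move(), p.next_move())  # forehand backhand
```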
Deterministic vs. Stochastic
• An environment is called deterministic when the agent’s actions uniquely determine the outcome. For example, in chess there is no randomness when you move a piece.
• An environment is called stochastic when the agent’s actions do not uniquely determine the outcome. For example, in games with dice you can choose your throwing action but not the outcome of the dice.
• Self-driving cars – the outcome of a self-driving car’s actions is not unique; it varies from time to time.
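The dice example can be sketched as two transition functions (hypothetical names; a one-dimensional "position" stands in for the environment state): the deterministic one always returns the same next state for a given state and action, while the stochastic one also depends on chance.

```python
import random

def deterministic_step(position, action):
    # Chess-like: the same action from the same state always
    # yields the same next state.
    return position + (1 if action == "right" else -1)

def stochastic_step(position, action, rng):
    # Dice-like: the agent chooses the action, but the outcome
    # also depends on a random roll.
    roll = rng.randint(1, 6)
    return position + (roll if action == "right" else -roll)

rng = random.Random()
print(deterministic_step(0, "right"))    # always 1
print(stochastic_step(0, "right", rng))  # somewhere in 1..6, roll-dependent
```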
Static vs. Dynamic
• Static AI environments rely on data and knowledge sources that do not change over time, so the environment stays the same while the agent deliberates. Example: a crossword puzzle.
• In contrast, dynamic AI environments deal with data sources that change quite frequently, so the environment can change while the agent is still deciding. Example: driving in traffic.
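One way to picture the dynamic case (a toy sketch with an invented counter as the "world state"): the environment keeps evolving on its own, so a percept taken before a slow deliberation may be stale by the time the agent acts.

```python
# Illustrative only: the state is just a counter that the world advances.

class DynamicEnvironment:
    def __init__(self):
        self.state = 0

    def tick(self):
        # The world evolves by itself, independent of the agent.
        self.state += 1

env = DynamicEnvironment()
percept = env.state        # agent observes state 0
env.tick()                 # ...the world moves on while the agent thinks...
env.tick()
print(percept, env.state)  # 0 2: the decision basis is already out of date
```

In a static environment there would be no `tick` happening between observing and acting, so the percept could never go stale.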