
Module 2: Intelligent Agents

Contents:

2.1 Introduction to agents, Structure of an Intelligent Agent, Characteristics of an Intelligent Agent
2.2 Types of Agents: Simple Reflex, Model-Based, Goal-Based, Utility-Based Agents
2.3 Environment Types: Deterministic, Stochastic, Static, Dynamic, Observable, Semi-Observable, Single Agent, Multi-Agent
Introduction to Agents

• An agent is anything that perceives its environment through sensors and acts upon that environment through actuators.
• It runs in a cycle of perceiving, thinking, and acting (a minimal sketch of this cycle follows the list below).
• An agent can be a:
1. Human Agent
2. Robotic Agent
3. Software Agent
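To make the perceive-think-act cycle concrete, here is a minimal Python sketch. The Environment class and agent_program function are hypothetical names introduced only for this illustration.

```python
# A generic perceive-think-act loop. The Environment class and agent_program
# function are hypothetical stand-ins used only to illustrate the cycle.

class Environment:
    """Toy environment: a counter the agent tries to drive down to zero."""

    def __init__(self, value=3):
        self.value = value

    def percept(self):           # what the sensors report
        return self.value

    def execute(self, action):   # what the actuators do
        if action == "decrement":
            self.value -= 1


def agent_program(percept):
    """Think: map the current percept to an action."""
    return "decrement" if percept > 0 else "wait"


env = Environment()
for _ in range(5):               # the agent runs in a cycle
    p = env.percept()            # perceive
    a = agent_program(p)         # think
    env.execute(a)               # act
    print(f"percept={p} -> action={a}")
```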
Intelligent Agent

➢ Agent = Architecture + Agent program


• The structure of an AI agent involves three main components (a table-driven sketch of the agent function follows this list):
• Architecture
• Agent Function
• Agent Program
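One common textbook way to make these three components concrete is a table-driven agent program: the agent function maps each sequence of percepts to an action, and the agent program implements that mapping on the architecture. The sketch below is illustrative only; the table entries and the vacuum-world percepts are assumptions.

```python
# Table-driven agent program: the agent function maps a *sequence* of percepts
# to an action; here that function is stored explicitly as a lookup table.
# The table entries below are illustrative assumptions, not a complete agent.

percept_history = []

TABLE = {
    (("A", "Dirty"),):                "Suck",
    (("A", "Clean"),):                "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    percept_history.append(percept)
    # Look up the full percept sequence; fall back to NoOp if it is not listed.
    return TABLE.get(tuple(percept_history), "NoOp")

print(table_driven_agent(("A", "Clean")))   # -> Right
print(table_driven_agent(("B", "Dirty")))   # -> Suck
```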
Characteristic of Intelligent Agent

• Reactiveness: responding to situations within a stipulated time frame.
• Pro-activeness: taking the initiative to control a situation rather than merely responding to it.

• Social Ability: Intelligent agents can interact with other agents.


Types of Agents

• Agents can be divided into four classes based on their perceived intelligence and ability:
1. Simple reflex agent

2. Model-based reflex agent

3. Goal-based agent

4. Utility-based agent
Simple Reflex Agent
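A simple reflex agent selects actions using condition-action rules on the current percept only, ignoring the rest of the percept history. A minimal sketch, assuming a two-cell vacuum world (the rule set and names are illustrative assumptions):

```python
# Simple reflex agent: chooses an action from the current percept alone,
# using condition-action rules (two-cell vacuum world, illustrative only).

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # rule 1: if the current cell is dirty, clean it
        return "Suck"
    if location == "A":          # rule 2: otherwise move toward the other cell
        return "Right"
    return "Left"                # rule 3

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left
```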
Model Based Reflex Agent
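A model-based reflex agent keeps an internal state, a simple model of the world, so it can act sensibly even when the current percept alone is not enough. A minimal sketch, again assuming the two-cell vacuum world:

```python
# Model-based reflex agent: maintains internal state (a simple world model)
# so it can act even when the environment is only partially observable.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}    # believed status of each cell

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update the model from the percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                      # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Clean")))   # -> Right (B's status is still unknown)
print(agent.act(("B", "Clean")))   # -> NoOp  (model now says both cells are clean)
```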
Goal Based Agent
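A goal-based agent chooses actions by predicting their outcomes and checking how well they move it toward an explicit goal. A minimal sketch, assuming a hypothetical one-dimensional world with a target position:

```python
# Goal-based agent: picks an action by predicting its result and checking
# whether it moves the agent toward an explicit goal (illustrative only).

GOAL = 5                                     # hypothetical target position on a line

def result(state, action):
    """Simple model of what each action does."""
    return state + 1 if action == "forward" else state - 1

def goal_based_agent(state):
    # Choose the action whose predicted result is closest to the goal.
    return min(["forward", "back"], key=lambda a: abs(GOAL - result(state, a)))

print(goal_based_agent(2))   # -> forward
print(goal_based_agent(7))   # -> back
```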
Utility Based Agent
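A utility-based agent replaces the binary goal test with a utility function that scores how desirable each state is, then picks the action with the highest utility. A minimal sketch; the utility values and fuel bookkeeping below are assumptions made for illustration:

```python
# Utility-based agent: ranks candidate actions by a numeric utility function
# rather than a yes/no goal test (the utility values here are assumptions).

def utility(state):
    """Higher is better: prefer being near position 5 while keeping fuel."""
    position, fuel = state
    return -abs(5 - position) + 0.1 * fuel

def result(state, action):
    position, fuel = state
    if action == "forward":
        return position + 1, fuel - 1
    if action == "back":
        return position - 1, fuel - 1
    return position, fuel                      # "wait" costs nothing

def utility_based_agent(state):
    actions = ["forward", "back", "wait"]
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent((3, 10)))   # -> forward (moves closer to 5)
print(utility_based_agent((5, 10)))   # -> wait    (already at the best position)
```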
Agent Environment in AI
• An environment is the situation or surroundings in which an agent operates. Environments can be classified along the following dimensions:
1. Fully observable

2. Partially Observable

3. Static

4. Dynamic

5. Deterministic

6. Stochastic

7. Single-agent

8. Multi-agent
Fully observable

• The agent's sensors can access the complete state of the environment at every point in time.
• The agent can gather all the information it needs directly from its percepts.
• The agent does not need to maintain a record of internal state.
• Example: a Sudoku puzzle.
Partially observable

• The agent cannot see the complete state of the environment at any point in time.
• Sensors may provide noisy, erroneous, or incomplete information.
• An environment is partially observable when the agent's sensors fail to give it access to the complete state.
• Example: an automated car-driving system.
Deterministic

• The next state of the environment is completely determined by the current state and the action executed by the agent.
• The effects of the agent's actions are fully known in advance.
• Example: a game of chess.
Stochastic

• The next state cannot be completely determined from the current state and the agent's action; the outcome involves randomness (see the sketch after this list).
• An automated car-driving system has a stochastic environment, since the agent cannot control the traffic conditions on the road.
• Semi-observable (partially observable) environments are also considered in this category.
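The difference between deterministic and stochastic environments can be shown with two toy transition functions; the states, actions, and probabilities below are assumptions made purely for illustration.

```python
# Deterministic vs. stochastic transitions (toy example, values are assumed).
import random

def deterministic_step(position, action):
    """Chess-like: the outcome follows from the state and action with certainty."""
    return position + 1 if action == "forward" else position - 1

def stochastic_step(position, action):
    """Driving-like: the same action can lead to different next states."""
    intended = position + 1 if action == "forward" else position - 1
    # With probability 0.2 something outside the agent's control interferes.
    return intended if random.random() < 0.8 else position

print(deterministic_step(3, "forward"))   # always 4
print(stochastic_step(3, "forward"))      # usually 4, sometimes still 3
```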
Static vs Dynamic

• Static: the environment does not change while the agent is deliberating or acting.
• Example: a vacuum cleaner world or a Sudoku puzzle.
• Dynamic: the environment keeps changing while the agent is acting.
• Sensors must continuously send the agent updated information about the current state.
• Example: an automated car-driving system.
Single Agent Vs Multi Agent

• An agent may operate on its own (single agent) or alongside other agents (multi-agent).
• An agent playing Tetris by itself is in a single-agent environment.
• A vacuum cleaner world is also single-agent.
• A car-driving agent shares the road with many other agents, so it is in a multi-agent environment.
• A chess game, with two players, is also multi-agent.
Assignment 2

• Make a poster on any AI topic in the classroom.
