
AL3391-ARTIFICIAL INTELLIGENCE

UNIT - I (INTELLIGENT AGENTS)


Agents and Environment
An agent is anything that can perceive its environment
through sensors and act upon that environment through actuators.

Example:
Consider a human as an agent:
sensors – eyes, ears, and other organs
actuators – hands, legs, mouth, and other body parts

Consider a robot as an agent:
sensors – cameras, infrared range finders
actuators – motors
• A human agent has sensory organs such as eyes, ears, nose,
tongue, and skin serving as the sensors, and other organs such
as hands, legs, and mouth as effectors.
• A robotic agent has cameras and infrared range finders
for the sensors, and various motors and actuators for
effectors.
• A software agent has encoded bit strings as its programs and
actions.
• Sensor: A sensor is a device which detects changes in the
environment and sends the information to other electronic devices.
An agent observes its environment through sensors.
• Actuators: Actuators are the components of machines that convert
energy into motion. The actuators are responsible for moving
and controlling a system. An actuator can be an electric motor,
gears, rails, etc.
• Effectors: Effectors are the devices which affect the environment.
Effectors can be legs, wheels, arms, fingers, wings, fins, and display
screen.
Agent Terminology

• Performance Measure of Agent − It is the criterion that
determines how successful an agent is.
• Behavior of Agent − It is the action that an agent performs after any
given sequence of percepts.
• Percept − It is the agent’s perceptual input at a given instance.
• Percept Sequence − It is the history of all that an agent has
perceived to date.
• Agent Function − It is a map from the percept sequence to an
action.
• Agent Program − It is the concrete implementation running on the agent
architecture.
Types of AI Agents

• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple Reflex Agent

• Simple reflex agents are the simplest agents. These agents take
decisions on the basis of the current percept and ignore the rest of
the percept history.
• The simple reflex agent works on the condition-action rule, which
means it maps the current state to an action. For example, a room-cleaner
agent works only if there is dirt in the room.
Problems with the simple reflex agent design approach:
 They have very limited intelligence.
 They have no knowledge of non-perceptual parts of the current
state.
 The rule tables are mostly too big to generate and to store.
 They are not adaptive to changes in the environment.
Eg: if the car in front brakes, then initiate braking (a sketch follows).
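
A minimal Python sketch of a simple reflex agent for the two-location
vacuum world (the location names A and B and the action names are
assumptions for illustration, not a definitive implementation):

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ('A', 'Dirty')
    # Condition-action rules: map the current percept directly to an
    # action, ignoring all percept history.
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                               # location == 'B'
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck
print(simple_reflex_vacuum_agent(('A', 'Clean')))   # -> Right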
Model-based reflex agent

• The model-based agent can work in a partially
observable environment and track the situation.
• A model-based agent has two important factors:
Model: It is knowledge about "how things happen in
the world," so it is called a model-based agent.
Internal State: It is a representation of the current
state based on percept history (see the sketch below).
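
A minimal sketch of a model-based reflex agent for the same assumed
two-square vacuum world; the dictionary is a toy internal state updated
from the percept history, not a definitive implementation:

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.model = {'A': None, 'B': None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status   # update the internal state ("model")
        if status == 'Dirty':
            return 'Suck'
        if all(s == 'Clean' for s in self.model.values()):
            return 'NoOp'               # the model says every square is clean
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuumAgent()
print(agent.act(('A', 'Dirty')))        # -> Suck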
Goal-based agents

• Goal-based agents expand the capabilities of the model-based
agent by having the "goal" information.

• They choose an action so that they can achieve the goal.

• These agents may have to consider a long sequence of
possible actions before deciding whether the goal is achieved
or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.

Eg: a GPS system finding the path to a certain destination

Utility-based agents
• Utility-based agents act based not only on goals but also on the
best way to achieve the goal.

• The utility-based agent is useful when there are multiple
possible alternatives, and the agent has to choose in order to
perform the best action.

• The utility function maps each state to a real number to check
how efficiently each action achieves the goals.

Eg: a GPS system finding the shortest/fastest/safest route to a certain
destination
Learning Agents
A learning agent in AI is the type of agent that can learn from its
past experiences, i.e., it has learning capabilities.

It starts to act with basic knowledge and is then able to act and
adapt automatically through learning.
Four conceptual components:
1. Learning element: makes improvements by learning
2. Critic: gives feedback to the learning element with respect to a
fixed performance standard
3. Performance element: selects external actions
4. Problem generator: responsible for suggesting actions that
will lead to new and informative experiences.
Agents Example

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
• Agent’s function → look-up table
– For many agents this is a very large table
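
A minimal sketch of the table-driven agent program implied above; the
table entries are illustrative and deliberately incomplete, which is
exactly why such look-up tables become very large:

percept_sequence = []

# Maps a complete percept sequence (a tuple of percepts) to an action.
TABLE = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}

def table_driven_agent(percept):
    percept_sequence.append(percept)
    # Look up the full percept history; fall back to NoOp if unlisted.
    return TABLE.get(tuple(percept_sequence), 'NoOp')

print(table_driven_agent(('A', 'Clean')))   # -> Right
print(table_driven_agent(('B', 'Dirty')))   # -> Suck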
Rational agents
• Rationality depends on:
– The performance measure that defines success
– The agent’s prior knowledge of the environment
– The actions that the agent can perform
– The agent’s percept sequence to date

• Rational Agent: For each possible percept sequence, a
rational agent should select an action that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
Properties of Task Environments
• Fully observable vs partially observable
• Single agent vs multiagent
• Static vs dynamic
• Deterministic vs stochastic
• Episodic vs sequential
• Discrete vs continuous
• Known vs unknown
1. Fully Observable vs
Partially Observable
• If an agent’s sensors can sense or access the
complete state of the environment at each
point in time, then it is a fully observable
environment; otherwise it is partially observable.
• An environment is called unobservable when
the agent has no sensors at all.
• Examples:
• Chess – the board is fully observable, and
so are the opponent’s moves.
• Card game – the environment is partially
observable because the opposing team’s cards
are not known.
2. Single Agent vs Multi-Agent
• An environment consisting of only one
agent is said to be a single-agent
environment.
• An environment involving more than one
agent is a multi-agent environment.
• Eg. of single-agent environment - an
agent solving a crossword puzzle by itself.
• Eg. of multi-agent environment - a game of
football involving 11 players on each team.
• Game of chess involving two agents is a
competitive multi agent environment.
3. Static vs Dynamic

• An environment that keeps constantly
changing while the agent is acting
is said to be dynamic.
• A roller coaster ride or taxi driving is
dynamic, as it is set in motion and the
environment keeps changing every instant.
• An idle environment with no change in its
state is called a static environment.
• An empty house or a crossword puzzle is static,
as there is no change in the surroundings when
an agent enters.
4. Deterministic vs Nondeterministic (Stochastic)
• If the next state of the environment is completely
determined by the current state and the action executed
by the agent(s), the environment is said to be
deterministic.
• A stochastic environment is random in nature: the outcome
is not unique and cannot be completely determined by
the agent.
Examples:
 Chess – there are only a few possible moves
for a piece in the current state, and these moves can
be determined.
 Self-driving cars – the actions of a self-driving car
are not unique; they vary from time to time, because one
can never predict the behavior of traffic exactly.
5. Episodic vs Sequential
• Episodic task environment – there is no
dependency between current and previous
episodes.
• In each episode, an agent receives input from the
environment and then performs the corresponding
action, e.g., a pick-and-place robot that detects
defective parts on a conveyor belt.
• Here, the robot (agent) makes each decision
based on the current part alone, i.e., there is no
dependency between current and previous
decisions.
• In a sequential environment, previous decisions
can affect all future decisions.
• The next action of the agent depends on what
action it has taken previously, e.g., chess and taxi
driving, where the previous move can affect all
the following moves.
6. Discrete vs Continuous
• If an environment consists of a finite number
of actions that can be performed in the
environment to obtain the output, it is said to
be a discrete environment.
• The game of chess is discrete, as it has only a
finite number of moves. The number of moves
might vary with every game, but it is still finite.
• An environment in which the actions cannot
be enumerated, i.e., is not discrete, is said
to be continuous, e.g., self-driving cars, whose
actions, such as driving and parking, cannot
be enumerated (they are infinite).
7. Known vs Unknown
• In a known environment, the results of all
actions are known to the agent.

• Example: a card game

• In an unknown environment, the agent needs to learn
how it works in order to perform an action.

• Example: a new video game


Nature of Environments
• The environment is the Task Environment (problem) for which
the Rational Agent is the solution.
• Any task environment is characterized on the basis of PEAS.
– Performance – What are the performance criteria which would
make the agent successful or not? For example, in
the vacuum-cleaner world, a clean floor and optimal energy
consumption might be performance measures.
– Environment – Physical characteristics and constraints expected, for
example, wood floors, furniture in the way, etc.
– Actuators – The physical or logical constructs which take
action. For the vacuum cleaner, the actuators are the suction pumps.
– Sensors – Again, physical or logical constructs which sense the
environment. For example, cameras and dirt sensors.
Example of Agent types and PEAS
description of the task environment
Problem Solving Agents

• Problems can be solved by building an Artificial Intelligence
system.
• To solve any type of problem (task) in the real world, an agent
considers a sequence of actions that form a path to a goal state;
such an agent is called a problem-solving agent, and the
computational process it undertakes is called search.
• In AI, search techniques are universal problem-solving methods.
• Problem-solving agents are goal-based agents which use these
search strategies or algorithms to solve a particular problem and
provide the best result.
• Therefore, a problem-solving agent is a goal-driven agent that
focuses on satisfying the goal.
PROBLEM DEFINITION
• To build a system to solve a particular problem, we need to do
four things:
(i) Define the problem precisely. This definition must include a
specification of the initial situations and also the final situations
which constitute an acceptable solution to the problem.
(ii) Analyze the problem, i.e., identify the important features that
have an immense (huge) impact on the appropriateness of various
techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the
particular problem.
Problem Solving Steps / Four-phase
problem-solving process
Goal Formulation:
• It is the first and simplest step in problem-solving.
• It organizes the steps/sequence required to formulate one goal
out of multiple goals, as well as the actions to achieve that goal.
• Goal formulation is based on the current situation and the
agent’s performance measure.
Problem Formulation
It is the most important step of problem-solving, which decides what
actions should be taken to achieve the formulated goal.
The following five components are involved in problem formulation:
Initial State: It is the starting state or initial step of the agent
towards its goal.
Actions: It is the description of the possible actions available to
the agent.
Transition Model: It describes what each action does.
Goal Test: It determines if the given state is a goal state.
Path Cost: It assigns a numeric cost to each path towards the
goal. The problem-solving agent selects a cost function which reflects
its performance measure. Remember, an optimal solution has the lowest
path cost among all the solutions.
Search solution:
 It identifies the best possible sequence of actions to
reach the goal state from the current state.
 It takes a problem as its input and returns a solution as its
output.
 It finds the best algorithm among various algorithms,
which may be proven to give the optimal solution.

Execution:
It executes the best optimal solution found by the search
algorithm to reach the goal state from the current state.
Example: 8-puzzle problem
 Goal Formulation:

[Figure: the initial state and the goal state of the 8-puzzle]
Problem Formulation
 This is the most important step, which decides what actions
should be taken to achieve the formulated goal.
 Initial State – the starting state of the agent towards its goal:
(123, b46, 758), where b denotes the blank.
 Actions – the possible actions available to the agent: Left, Right,
Up, Down.
 Transition model – describes which state results from applying
each of the four actions.
 Goal Test – determines if the given state is the goal state:
(123, 456, 78b).
 Path Cost – assigns a numeric cost to each path towards the
goal: cost 1 per action.
• Search solution & Execution:
search for a possible solution, execute it, and
reach the goal state (see the sketch below).
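
A sketch of this 8-puzzle formulation in Python, encoding states as 3x3
tuples with 'b' for the blank so they match (123, b46, 758) and
(123, 456, 78b); the encoding and helper names are assumptions for
illustration:

INITIAL = ((1, 2, 3), ('b', 4, 6), (7, 5, 8))
GOAL    = ((1, 2, 3), (4, 5, 6), (7, 8, 'b'))
MOVES   = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}

def goal_test(state):
    return state == GOAL

def result(state, action):
    # Transition model: slide the blank one square in the given direction.
    r, c = next((i, j) for i in range(3) for j in range(3)
                if state[i][j] == 'b')
    dr, dc = MOVES[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < 3 and 0 <= nc < 3):
        return state                    # illegal move: state unchanged
    grid = [list(row) for row in state]
    grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
    return tuple(tuple(row) for row in grid)

# Path cost is 1 per action, so a solution's cost is its number of moves.
print(result(INITIAL, 'Up'))            # blank slides up, swapping with 1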
Example : 8-queens problem:
• The aim of this problem is to place eight queens on a chessboard in
an arrangement where no queen may attack another.
• A queen can attack other queens either diagonally or in the same
row or column.
• From the following figure, we can understand the problem as well
as its correct solution.
Problem formulation types
• For this problem, there are two main kinds of
formulation:
1. Incremental formulation
2. complete state formulation
Incremental formulation:
It starts from an empty state, where the
operator adds a queen at each step.
The following steps are involved in this formulation:
States: Arrangement of any 0 to 8 queens on the
chessboard
Initial State: An empty chessboard
Actions: Add a queen to any empty square
Transition model: Returns the chessboard with the queen
added in a square.
Goal test: Checks whether 8 queens are placed on the
chessboard without any attack.
Path cost: There is no need for a path cost because only
final states are counted.
• Complete-state formulation:
It starts with all 8 queens on the chessboard and
moves them around to save them from attacks.
The following steps are involved in this formulation:
States: Arrangement of all 8 queens on the board
Actions: Move a queen to a location where it is safe
from attack.
Transition model: Moves a queen to a different square.
Goal test: No queen is attacked (see the goal-test sketch below).
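
A minimal sketch of the goal test shared by both formulations, assuming
a state is a tuple of queen columns indexed by row (one queen per row,
which already rules out same-row attacks); the placement in the last
line is one known valid solution:

def attacks(r1, c1, r2, c2):
    # Two queens attack each other in the same column or on a diagonal.
    return c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(state):
    # All 8 queens placed and no pair attacking each other.
    return len(state) == 8 and not any(
        attacks(r1, state[r1], r2, state[r2])
        for r1 in range(8) for r2 in range(r1 + 1, 8))

print(goal_test((0, 4, 7, 5, 2, 6, 1, 3)))   # -> True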
Real Time Example
Water Jug problem
Search Solution:

[Figure: search tree for the water-jug problem, expanding (4-gallon,
3-gallon) jug states level by level from the initial state down to the
goal state]
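
A hedged sketch of the search itself, assuming the classic setup of a
4-gallon and a 3-gallon jug with the goal of measuring exactly 2 gallons
in the 4-gallon jug (the capacities come from the figure's 4g/3g labels;
the goal amount is an assumption):

from collections import deque

def successors(state):
    x, y = state                        # gallons in the 4g and 3g jugs
    return {
        (4, y), (x, 3),                 # fill either jug
        (0, y), (x, 0),                 # empty either jug
        (max(0, x - (3 - y)), min(3, y + x)),   # pour 4g jug into 3g jug
        (min(4, x + y), max(0, y - (4 - x))),   # pour 3g jug into 4g jug
    }

def water_jug_bfs(start=(0, 0), goal_amount=2):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_amount:     # goal test on the 4-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:      # never revisit a generated state
                parents[nxt] = state
                frontier.append(nxt)

print(water_jug_bfs())   # e.g. [(0,0), (0,3), (3,0), (3,3), (4,2), (0,2), (2,0)]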
CRYPT ARITHMETIC PROBLEM
RULES
Searching Algorithm

• Search: Searching is a step-by-step procedure to solve a search
problem in a given search space.
• A search problem can have three main factors:
• Search space
• Start state
• Goal test
Search Space: The search space represents the set of possible solutions
which a system may have.
Start State: It is the state from which the agent begins the search.
Goal Test: It is a function which observes the current state and returns
whether the goal state is achieved or not.
Search tree:
A tree representation of the search problem is called a search tree.
The root of the search tree is the root node, which corresponds to the
initial state.
Actions:
It gives the description of all the actions available to the agent.

Transition model:
A description of what each action does; it can be represented as a
transition model.
Path Cost:
It is a function which assigns a numeric cost to each path.

Solution:
It is an action sequence which leads from the start node
to the goal node.

Optimal Solution:
A solution that has the lowest cost among all solutions.
Properties of Search Algorithms:

The following are the four essential properties of
search algorithms, used to compare their efficiency:

Completeness:
A search algorithm is said to be complete if it guarantees
to return a solution whenever at least one solution exists for any input.
Optimality:
If the solution found by an algorithm is guaranteed to be the
best solution (lowest path cost) among all solutions, then it is
said to be an optimal solution.
Time Complexity:
Time complexity is a measure of the time an algorithm takes to
complete its task.
Space Complexity:
It is the maximum storage space required at any point
during the search.
SEARCH STRATEGIES in AI

There are two types of strategies that describe a solution for a
given problem:

1. Uninformed Search (Blind Search)
2. Informed Search (Heuristic Search)
Uninformed Search (Blind Search)
This type of search strategy does not have any additional
information about the states except the information provided in the
problem definition.
It can only generate the successors and distinguish a
goal state from a non-goal state.
This type of search does not use any problem-specific
knowledge, which is why it is also known as blind search.

Informed Search (Heuristic Search)
This type of search strategy has some additional
information about the states beyond the problem definition.
This search uses problem-specific knowledge to find
more efficient solutions.
This search gets its hints from heuristic
functions, so it is also called heuristic
search.
BREADTH-FIRST SEARCH
Breadth-first search is the most common search strategy for traversing
a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving to nodes
of the next level.
The breadth-first search algorithm is an example of a general-graph
search algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
Example: In the below tree structure, we have shown the traversal of
the tree using the BFS algorithm from the root node S to the goal node
K. The BFS algorithm traverses in layers, so it will follow the path
shown by the dotted arrow, and the traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Time Complexity:
The time complexity of the BFS algorithm can be obtained from
the number of nodes traversed by BFS up to the shallowest node:
O(b^d), where d is the depth of the shallowest solution and b is
the branching factor.

Space Complexity:
The space complexity of the BFS algorithm is given by the
memory size of the frontier, which is O(b^d).
Completeness:
BFS is complete, which means if the shallowest goal
node is at some finite depth, then BFS will find a solution.
Optimality:
BFS is optimal if the path cost is a non-decreasing
function of the depth of the node.
Advantages:
BFS will provide a solution if any solution exists.

If there is more than one solution for a given
problem, then BFS will provide the minimal solution, i.e., the one
requiring the least number of steps.

Disadvantages:
It requires a lot of memory, since each level of the tree
must be saved into memory to expand the next level.
BFS needs a lot of time if the solution is far away from
the root node.
BFS using Queue (FIFO)
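
A minimal BFS sketch using a FIFO queue of paths; the adjacency list is
a small hypothetical graph, not the tree from the figure:

from collections import deque

GRAPH = {
    'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E'],
    'C': [], 'D': ['K'], 'E': [], 'K': [],
}

def breadth_first_search(start, goal):
    frontier = deque([[start]])         # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()       # shallowest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        if node not in explored:
            explored.add(node)
            for child in GRAPH[node]:   # expand every successor at this level
                frontier.append(path + [child])
    return None                         # no solution exists

print(breadth_first_search('S', 'K'))   # -> ['S', 'A', 'D', 'K']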
2. DEPTH-FIRST SEARCH

• Depth-first search is a recursive algorithm for
traversing a tree or graph data structure.
• It is called depth-first search because it starts
from the root node and follows each path to its
greatest-depth node before moving to the next path.
• DFS uses a stack data structure for its
implementation.
• The process of the DFS algorithm is similar to the
BFS algorithm.
• Example: In the below search tree, we have shown the flow of depth-first
search, and it will follow the order:
Root node ---> Left node ---> Right node.

It will start searching from root node S and traverse A, then B, then D
and E; after traversing E, it will backtrack the tree, as E has no other
successor and the goal node has still not been found.
After backtracking, it will traverse node C and then G, where it will
terminate, as it has found the goal node.
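
A minimal DFS sketch using an explicit stack (LIFO); the hypothetical
adjacency list is chosen so the traversal order mirrors the example
above (S, A, B, D, E, backtrack, then C, G):

GRAPH = {
    'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'],
    'C': ['G'], 'D': [], 'E': [], 'G': [],
}

def depth_first_search(start, goal):
    stack = [[start]]                   # LIFO stack of paths
    while stack:
        path = stack.pop()              # deepest (most recent) node first
        node = path[-1]
        if node == goal:
            return path
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(GRAPH.get(node, [])):
            if child not in path:       # avoid cycles along the current path
                stack.append(path + [child])
    return None

print(depth_first_search('S', 'G'))     # -> ['S', 'C', 'G']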
Completeness:
The DFS search algorithm is complete within a finite
state space, as it will expand every node within a limited search
tree.
Time Complexity:
The time complexity of DFS is equivalent to the number of
nodes traversed by the algorithm. It is given by:
T(n) = 1 + b + b^2 + ... + b^m = O(b^m)
where m is the maximum depth of any node, and this can be much
larger than d (the shallowest solution depth).
Space Complexity:
The DFS algorithm needs to store only the single path from
the root node, hence the space complexity of DFS is equivalent to the
size of the fringe set, which is O(b×m).
Advantages:
DFS requires very little memory, as it only needs to
store a stack of the nodes on the path from the root node to the
current node.
It takes less time to reach the goal node than the BFS
algorithm (if it traverses along the right path).

Disadvantages:
There is the possibility that many states keep
re-occurring, and there is no guarantee of finding the solution.
The DFS algorithm goes deep in its search, and it
may sometimes descend into an infinite loop.
Exercise: Apply the DFS algorithm with K as the goal.
3. DEPTH-LIMITED SEARCH ALGORITHM

• A depth-limited search algorithm is similar to depth-first search with a
predetermined limit ℓ.
• Depth-limited search can solve the drawback of the infinite path in
depth-first search. In this algorithm, the node at the depth limit is
treated as if it has no further successor nodes.
• Depth-limited search can be terminated with two conditions of failure:
Standard failure value: It indicates that the problem does not have
any solution.
Cutoff failure value: It indicates that there is no solution for the
problem within the given depth limit.
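
A recursive depth-limited search sketch that returns a path, the cutoff
failure value, or the standard failure value; GRAPH is a small
hypothetical adjacency list:

GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': [], 'C': ['K'], 'K': []}

def depth_limited_search(node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'                 # cutoff failure: depth limit reached
    cutoff_occurred = False
    for child in GRAPH.get(node, []):
        result = depth_limited_search(child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    # Cutoff failure if some branch was cut short, standard failure otherwise.
    return 'cutoff' if cutoff_occurred else None

print(depth_limited_search('S', 'K', 3))   # -> ['S', 'A', 'C', 'K']
print(depth_limited_search('S', 'K', 2))   # -> 'cutoff' (limit too small)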
Completeness:
The DLS search algorithm is complete if the solution is
above the depth limit.
Time Complexity:
The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity:
The space complexity of the DLS algorithm is O(b×ℓ).
Optimality:
Depth-limited search can be viewed as a
special case of DFS, and it is also not optimal,
even if ℓ > d.
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
Depth-limited search also has the disadvantage of
incompleteness.
It may not be optimal if the problem has more than one
solution.
Exercise: Apply depth-limited search with K as the goal.
4. UNIFORM-COST SEARCH ALGORITHM
• Uniform-cost search is a searching algorithm used for traversing a
weighted tree or graph.
• This algorithm comes into play when a different cost is available for
each edge.
• The primary goal of uniform-cost search is to find a path to the goal
node which has the lowest cumulative cost.
• Uniform-cost search expands nodes according to their path costs from the
root node. It can be used to solve any graph/tree where an optimal cost is
in demand.
• The uniform-cost search algorithm is implemented using a priority queue.
It gives maximum priority to the lowest cumulative cost.
• Uniform-cost search is equivalent to the BFS algorithm if the path cost
of all edges is the same.
• Consider Figure 3.10, where the problem is to get from Sibiu to Bucharest.
The successors of Sibiu are Rimnicu Vilcea and Fagaras, with costs 80 and 99,
respectively. The least-cost node, Rimnicu Vilcea, is expanded next, adding
Pitesti with cost 80 + 97 = 177.
• The least-cost node is now Fagaras, so it is expanded, adding Bucharest with
cost 99 + 211 = 310. Bucharest is the goal, but the algorithm tests for goals
only when it expands a node, not when it generates a node, so it has not yet
detected that this is a path to the goal.
The algorithm continues on, choosing Pitesti for expansion next and adding a
second path to Bucharest with cost 80 + 97 + 101 = 278. It has a lower cost,
so it replaces the previous path in reached and is added to the frontier. It
turns out this node now has the lowest cost, so it is considered next, found
to be a goal, and returned (see the sketch below).
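
A uniform-cost search sketch using a priority queue, applied to the
Romania fragment above (the edge costs are taken from that example; note
that the goal is tested on expansion, not generation, which is what lets
the cheaper 278-cost path win):

import heapq

GRAPH = {
    'Sibiu': [('Rimnicu Vilcea', 80), ('Fagaras', 99)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Fagaras': [('Bucharest', 211)],
    'Pitesti': [('Bucharest', 101)],
    'Bucharest': [],
}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]    # priority queue keyed on path cost
    best_cost = {start: 0}              # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest cumulative cost
        if node == goal:                # goal test on expansion
            return cost, path
        for child, step in GRAPH[node]:
            new_cost = cost + step
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost   # cheaper path replaces old one
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

print(uniform_cost_search('Sibiu', 'Bucharest'))
# -> (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])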
Completeness:
Uniform-cost search is complete: if there is a
solution, UCS will find it.

Time Complexity:
Let C* be the cost of the optimal solution and ε the minimum
cost of a step towards the goal node. Then the number of steps is
C*/ε + 1 (we add 1 because we start from state 0 and end at C*/ε).
Hence, the worst-case time complexity of uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:
The same logic applies for space complexity, so the worst-case
space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Optimality:
Uniform-cost search is always optimal, as it only selects the path
with the lowest path cost.
Advantages:
Uniform-cost search is optimal, because at every state
the path with the least cost is chosen.

Disadvantages:
It does not care about the number of steps involved
in the search and is only concerned with path cost.
Because of this, the algorithm may get stuck in an
infinite loop.
Exercise: Apply uniform-cost search and find the path.
5. ITERATIVE DEEPENING DEPTH-FIRST SEARCH
• The iterative deepening algorithm is a combination of the DFS
and BFS algorithms.
• This search algorithm finds the best depth limit by
gradually increasing the limit until a goal is found. It
performs depth-first search up to a certain "depth
limit" and keeps increasing the depth limit after each
iteration until the goal node is found (see the sketch after
the figure).
• This search algorithm combines the benefits of breadth-first
search's fast search and depth-first search's memory efficiency.
• Iterative deepening is a useful uninformed search
when the search space is large and the depth of the goal node is
unknown.

Fig: Four iterations of iterative deepening search for goal M on a binary
tree, with the depth limit varying from 0 to 3.
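
A self-contained iterative deepening sketch: a depth-limited DFS is
rerun with limits 0, 1, 2, ... until the goal is found; the GRAPH is a
small hypothetical example:

GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': [], 'C': ['K'], 'K': []}

def dls(node, goal, limit):
    # Depth-limited DFS returning a path, 'cutoff', or None.
    if node == goal:
        return [node]
    if limit == 0:
        return 'cutoff'
    cut = False
    for child in GRAPH.get(node, []):
        result = dls(child, goal, limit - 1)
        if result == 'cutoff':
            cut = True
        elif result is not None:
            return [node] + result
    return 'cutoff' if cut else None

def iterative_deepening_search(start, goal, max_depth=20):
    for limit in range(max_depth + 1):  # increase the depth limit gradually
        result = dls(start, goal, limit)
        if result != 'cutoff':
            return result               # a solution path, or None (no solution)
    return 'cutoff'

print(iterative_deepening_search('S', 'K'))   # -> ['S', 'A', 'C', 'K']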
Completeness:
This algorithm is complete if the branching factor is finite.

Time Complexity:
Let b be the branching factor and d the depth of the
shallowest goal; then the worst-case time complexity is O(b^d).
Space Complexity:
The space complexity of IDDFS is O(b×d).
Optimality:
The IDDFS algorithm is optimal if the path cost is a non-
decreasing function of the depth of the node.
Advantages:
It combines the benefits of the BFS and DFS search
algorithms in terms of fast search and memory efficiency.

Disadvantages:
The main drawback of IDDFS is that it repeats
all the work of the previous phase.
6. BIDIRECTIONAL SEARCH ALGORITHM
• The bidirectional search algorithm runs two simultaneous searches,
one from the initial state, called the forward search, and the other
from the goal node, called the backward search, to find the goal node.

• Bidirectional search replaces one single search graph with two
small subgraphs: one starts the search from the initial
vertex and the other starts from the goal vertex.

• The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS,
DFS, DLS, etc.
• Example: In the below search tree, the bidirectional search algorithm is
applied. This algorithm divides one graph/tree into two sub-graphs. It
starts traversing from node 1 in the forward direction and from goal node
16 in the backward direction. The algorithm terminates at node 9, where
the two searches meet (see the sketch below).
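
A bidirectional BFS sketch with two frontiers that stop as soon as they
intersect; the small undirected graph is a hypothetical path from node 1
to node 16, chosen so the two searches meet at node 9 as in the example:

from collections import deque

GRAPH = {1: [2], 2: [1, 3], 3: [2, 9], 9: [3, 15], 15: [9, 16], 16: [15]}

def bidirectional_search(start, goal):
    forward, backward = {start}, {goal}         # visited sets per direction
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        # Expand one layer of the forward search.
        for _ in range(len(fq)):
            node = fq.popleft()
            for nbr in GRAPH[node]:
                if nbr in backward:             # frontiers intersect
                    return f'searches meet at {nbr}'
                if nbr not in forward:
                    forward.add(nbr)
                    fq.append(nbr)
        # Expand one layer of the backward search.
        for _ in range(len(bq)):
            node = bq.popleft()
            for nbr in GRAPH[node]:
                if nbr in forward:              # frontiers intersect
                    return f'searches meet at {nbr}'
                if nbr not in backward:
                    backward.add(nbr)
                    bq.append(nbr)
    return None

print(bidirectional_search(1, 16))              # -> searches meet at 9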
Completeness:
Bidirectional search is complete if we use BFS in both searches.
Time Complexity:
The time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity:
The space complexity of bidirectional search is O(b^(d/2)).
Optimality: Bidirectional search is optimal.

Advantages:
Bidirectional search is fast.
Bidirectional search requires less memory.

Disadvantages:
Implementation of the bidirectional search tree is difficult.

In bidirectional search, one should know the goal state
in advance.
