
LECTURE 2 : PROBLEM SOLVING

Delivered by
Joel Anandraj.E
AP/IT
Intelligent Agent
An Intelligent agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through effectors.

Problem solving involves four phases:
"Goal Formulation, Problem Formulation, Search, and Execution."
Goal Formulation:
Goal formulation is the first step in problem solving.
An intelligent agent perceives its environment in terms of a state space.
The goal is one of these states, and actions are transitions that help the agent reach the goal state.
The agent has to find out which actions will get it to a goal state.
PROBLEM FORMULATION

Problem formulation is the process of deciding what actions


and states to consider.

An agent will try with few possible action and choose one
among it in random.
Search:
Searching is the process of finding a sequence of actions that leads to the goal state.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
Execution:
This phase involves executing the action sequence generated during searching.
Formulating a Problem: More detailed analysis

Let's understand the different amounts of knowledge the agent has about its actions and its current/resulting state.
An agent's knowledge of its environment depends upon how well the agent is connected to its environment through its percepts and actions.
There are four essentially different types of problems:
Single-state problems
Multiple-state problems
Contingency problems
Exploration problems
Vacuum World Problem:
The environment contains two locations.
Each location may or may not contain dirt, and the
agent may be in one location or the other.
There are 8 possible world states.
The agent has three possible actions in this version of
the vacuum world: Left, Right, and Suck.
The goal is to clean up all the dirt.
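The state count above can be checked by enumerating the world directly. A minimal sketch, assuming one possible encoding (the tuple layout below is an illustrative choice, not from the slides): a state is (agent location, dirt in left square, dirt in right square).

```python
# Illustrative encoding of the vacuum world: a state is
# (agent_location, dirt_left, dirt_right).
STATES = [(loc, dl, dr) for loc in ("Left", "Right")
          for dl in (False, True) for dr in (False, True)]

def step(state, action):
    """Deterministic transition model for the actions Left, Right, Suck."""
    loc, dirt_left, dirt_right = state
    if action in ("Left", "Right"):          # move the agent
        return (action, dirt_left, dirt_right)
    if action == "Suck":                     # remove dirt at the current square
        if loc == "Left":
            return (loc, False, dirt_right)
        return (loc, dirt_left, False)
    raise ValueError(f"unknown action: {action}")

def is_goal(state):
    return not state[1] and not state[2]     # no dirt anywhere

print(len(STATES))  # 2 locations x 2 x 2 dirt combinations = 8
```

The 8 comes directly from 2 agent positions times 4 dirt configurations.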
Single-state problem
Observable – The agent's sensors give it enough information to tell exactly which state it is in.
Deterministic – The agent can calculate exactly which
state it will be in after any sequence of actions.

This kind of problem is called a single-state problem.
Example
If its initial state is 5, then it can calculate that the action sequence [Right,Suck] will get it to a goal state.

[Figure: initial state and goal state]

Multiple-state problem

Partially observable – The agent knows all the effects of its actions, but has limited access to the world state.
In the worst case, the agent may not even know its initial state.
Deterministic – Since the agent knows what its actions do, it can discover the right action sequence to reach a goal state no matter what the start state is.

Here the agent must reason about sets of states that it might
get to, rather than single states.
Example:
The agent can calculate that the action Right will cause
it to be in one of the states {2,4,6,8}.

Further, the agent can discover that the action sequence [Right,Suck,Left,Suck] is guaranteed to reach a goal state no matter what the start state is.

[Figure: initial state and goal state]
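The belief-state reasoning above can be reproduced in code. A sketch, assuming a state numbering consistent with the example (odd numbers = agent in the left square, even = agent in the right square):

```python
# State = (agent location, dirt in left square, dirt in right square),
# numbered 1..8. This numbering is an assumption consistent with the example.
NUM = {1: ("L", 1, 1), 2: ("R", 1, 1), 3: ("L", 1, 0), 4: ("R", 1, 0),
       5: ("L", 0, 1), 6: ("R", 0, 1), 7: ("L", 0, 0), 8: ("R", 0, 0)}
INV = {v: k for k, v in NUM.items()}

def step(s, action):
    loc, dl, dr = NUM[s]
    if action == "Left":
        loc = "L"
    elif action == "Right":
        loc = "R"
    elif action == "Suck":           # remove dirt at the current square
        if loc == "L":
            dl = 0
        else:
            dr = 0
    return INV[(loc, dl, dr)]

belief = set(NUM)                                   # unknown initial state
belief = {step(s, "Right") for s in belief}
print(sorted(belief))                               # [2, 4, 6, 8]

for action in ["Suck", "Left", "Suck"]:
    belief = {step(s, action) for s in belief}
print(sorted(belief))                               # [7] -- a goal state
```

The belief state shrinks with each action until only a goal state remains, whatever the start state was.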
Contingency problem

Partially observable (initial state not observable)


Non-deterministic : Exact prediction is impossible.
Contigency problem requires sensing during the
execution phase.
Exploration problem
In this problem the agent has no information about the effects of its actions.
The agent needs to discover its states by executing various actions.
Well-defined problems and solutions
A problem is really a collection of information that the
agent will use to decide what to do.
The basic elements of a problem definition are the
states and actions.
These two elements are captured formally as,
• The initial state
• The operator set
Together, these define the state space of the problem.
Goal test
A test carried out by the agent in a state to determine whether it has reached the goal.
A goal can be explicit or abstract.
Explicit: Water Jug Problem – the expected water level is explicitly mentioned as the goal state.
Abstract: Chess Game – the goal state is described abstractly.
Sometimes, one solution is preferable to another, even
though they both reach the goal.

Path cost:
A path cost function is a function that assigns a cost to
a path.
The cost of a path is the sum of the costs of the
individual actions along the path.
The path cost function is often denoted by g.
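As a small sketch (action names are illustrative), with unit step costs g reduces to the path length:

```python
# Path cost g: the sum of the individual step costs along a path.
# With every step costing 1, g is just the length of the path.
def g(path, step_cost=lambda action: 1):
    return sum(step_cost(a) for a in path)

print(g(["Right", "Suck", "Left", "Suck"]))  # 4
```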
“The initial state, operator set, goal test, and path cost
function define a problem.”
Naturally, we can then define a datatype with which to
represent problems:

Instances of this datatype will be the input to our search algorithms.
The output of a search algorithm is a solution, that is, a path from the initial state to a state that satisfies the goal test.
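A minimal sketch of such a datatype and of a search algorithm that consumes it, built from the four components listed above (initial state, operator set, goal test, path cost). The class and function names are illustrative, and breadth-first search is used here only as one concrete example of a search algorithm:

```python
from collections import deque

class Problem:
    """A problem: initial state, operators, goal test, and step costs."""
    def __init__(self, initial, operators, goal_test,
                 step_cost=lambda state, action: 1):
        self.initial = initial
        self.operators = operators    # state -> iterable of (action, next_state)
        self.goal_test = goal_test
        self.step_cost = step_cost

def search(problem):
    """Breadth-first search: return a list of actions reaching a goal state."""
    frontier = deque([(problem.initial, [])])
    visited = {problem.initial}
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path               # the solution: an action sequence
        for action, nxt in problem.operators(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                       # no solution exists

# Toy instance: reach 4 from 0 using +1 and +2 steps.
p = Problem(0, lambda s: [("+1", s + 1), ("+2", s + 2)], lambda s: s == 4)
print(search(p))  # ['+2', '+2']
```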
Measuring problem-solving performance
The effectiveness of a search can be measured in at least
three ways.
First, does it find a solution at all?
Second, is it a good solution (one with a low path
cost)?
Third, what is the search cost associated with the time
and memory required to find a solution?
The total cost of the search is the sum of the path cost
and the search cost.
EXAMPLE PROBLEMS
The range of task environments that can be
characterized by well-defined problems is vast

We can categorize them as:
Toy Problems
Real-World Problems
TOY PROBLEMS
The 8-puzzle
States: integer locations of the tiles
Operators: move the blank left, right, up, down
Path cost: each step costs 1, so the path cost is just the length of the path
Goal test: state matches the goal configuration
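The formulation above can be sketched directly. One common encoding (an assumption, including the particular goal configuration chosen below): a state is a tuple of nine tiles read row by row, with 0 marking the blank.

```python
# 8-puzzle formulation: states, operators, and goal test.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)     # assumed goal configuration

MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}  # blank-index offsets

def successors(state):
    """Yield (operator, next_state) pairs: legal moves of the blank."""
    blank = state.index(0)
    for name, delta in MOVES.items():
        target = blank + delta
        if target < 0 or target > 8:
            continue                      # off the board
        if name in ("Left", "Right") and target // 3 != blank // 3:
            continue                      # would wrap around a row edge
        s = list(state)
        s[blank], s[target] = s[target], s[blank]
        yield name, tuple(s)

def goal_test(state):
    return state == GOAL

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)      # blank in the middle of the top row
print(sorted(name for name, _ in successors(start)))  # ['Down', 'Left', 'Right']
```

Since each step costs 1, the path cost of any solution is simply the number of moves.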
Real-world problems
Real-world problems tend to be more difficult, and their solutions are ones people actually care about.
Route Finding
Touring and travelling salesperson problems.
Robot Navigation
Thank You
State Space
A State Space is a set of all states reachable from the
initial state by any sequence of actions.
A path in the state space is simply any sequence of
actions leading from one state to another.
