
Artificial Intelligence-PROBLEM SOLVING

Problem‐solving agents, Example problems, Searching for Solutions


Uninformed Search Strategies: Breadth First search, Depth First Search
Textbook 1: Chapter 3- 3.1, 3.2, 3.3, 3.4.1, 3.4.3
The AI Problem
The following are some of the problems studied within AI.
1. Game Playing and theorem proving share the property that people who do them well are
considered to be displaying intelligence.
2. Another important problem area in AI is Commonsense Reasoning. It includes reasoning
about physical objects and their relationships to each other, as well as reasoning about actions and
their consequences.
3. To investigate this sort of reasoning, Newell, Shaw, and Simon built the General Problem Solver,
which they applied to several commonsense tasks as well as to the problem of performing symbolic
manipulations of logical expressions. However, no attempt was made to create a program with a large
amount of knowledge about a particular problem domain. Only quite simple tasks were selected.
4. The accompanying figures illustrate some of the tasks that are the targets of work in AI.
PROBLEM SOLVING AGENTS
An agent is anything (a program, a machine assembly) that can be viewed as perceiving its
environment through sensors and acting upon that environment through actuators. A rational agent
is one that does the right thing; here, the right thing is the action that will cause the agent to be most
successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.
A learning agent model has 4 components:
1) Learning element.
2) Performance element.
3) Critic
4) Problem Generator.
The agent program is an important and central part of the agent system. It drives the agent: it analyzes
data and provides probable actions the agent could take. Internally, it implements the agent function.
It takes the current percept from the sensors as input and returns an action to the effectors.
Characteristics of an Intelligent Agent (IA)
The IA must learn and improve through interaction with the environment.
The IA must adapt online and in real time.
The IA must accommodate new problem-solving rules incrementally.
The IA must have a memory that exhibits storage and retrieval capabilities.
The functionality of the agent function
The agent function is a mathematical function that maps every possible percept sequence to an
action. Its major functionality is to generate an action for each percept sequence, which gives the
agent the list of possible actions it can take. The agent function can be represented in tabular form.
The basic agent program.
The basic agent program is a concrete implementation of the agent function which runs on the agent
architecture. The agent program puts a bound on the length of the percept sequence and considers only
the required percept sequences. It implements the mapping from percept sequences to actions, which
is the external characteristic of the agent.
The agent program takes the current percept from the sensors as input and returns an action to the
effectors (actuators).
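As a minimal sketch of these ideas (not from the textbook), the agent function can be stored as a lookup table and the agent program can consult it; the percepts, actions, and table entries below are hypothetical examples chosen only for illustration.

# Python sketch of a table-driven agent (illustrative; entries are made up).
lookup_table = {
    ("dirty",):           "suck",
    ("clean",):           "move_right",
    ("clean", "dirty"):   "suck",
    ("clean", "clean"):   "move_left",
}

percept_history = []              # the agent program keeps the percept sequence

def table_driven_agent(percept):
    """Agent program: record the new percept, then look up an action."""
    percept_history.append(percept)
    return lookup_table.get(tuple(percept_history), "no_op")

print(table_driven_agent("clean"))   # -> move_right
print(table_driven_agent("dirty"))   # -> suck

The table makes the agent function explicit, while the agent program is the small piece of code that maintains the percept history and performs the lookup.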
The four components that define a problem:
1. Initial state: the state in which the agent starts.
2. A description of possible actions: a description of the possible actions available to the agent.
3. The goal test: the test that determines whether a given state is a goal state.
4. A path cost function: the function that assigns a numeric cost (value) to each path. The
problem-solving agent is expected to choose a cost function that reflects its own performance
measure.
The reflex agents are known as the simplest agents because they directly map states into actions.
Unfortunately, these agents fail to operate in an environment where the mapping is too large to store
and learn.
A goal-based agent, on the other hand, considers future actions and the desired outcomes. Here, we
will discuss one type of goal-based agent known as a problem-solving agent, which uses an atomic
representation with no internal states visible to the problem-solving algorithms.
Problem-solving agent
A simple problem-solving agent formulates a goal and a problem, searches for a sequence of actions
that would solve the problem, and then executes the actions one at a time. When this is complete, it
formulates another goal and starts over. It works precisely by defining problems and their solutions.
According to psychology, "problem solving refers to a state where we wish to reach a definite goal
from a present state or condition." According to computer science, problem solving is a part of
artificial intelligence which encompasses a number of techniques, such as algorithms and heuristics,
to solve a problem. Therefore, a problem-solving agent is a goal-driven agent that focuses on
satisfying its goal.
PROBLEM DEFINITION
To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include a specification of the initial situation
and of the final situations that constitute acceptable solutions to the problem.
(ii) Analyze the problem, i.e., identify the important features that have an immense impact on the
appropriateness of the various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular problem.
Steps performed by Problem-solving agent
Goal Formulation: It is the first and simplest step in problem-solving. It organizes the steps or
sequence required to formulate one goal out of multiple goals as well as actions to achieve that goal.
Goal formulation is based on the current situation and the agent’s performance measure.
Well-defined problems and solutions
Problem Formulation: It is the most important step of problem solving, which decides what
actions should be taken to achieve the formulated goal. Such a formulation seems reasonable, but it
is still a model, an abstract mathematical description, and not the real thing; the process of removing
detail from a representation is called abstraction. The following five components are involved in
problem formulation:
a. Initial State: It is the starting state or initial step of the agent towards its goal.
b. Actions: It is the description of the possible actions available to the agent.
c. Transition Model: It describes what each action does.
d. Goal Test: It determines if the given state is a goal state.
e. Path cost: It assigns a numeric cost to each path. The problem-solving agent selects a cost
function that reflects its own performance measure. Remember, an optimal solution has the lowest
path cost among all the solutions.
Note: The initial state, actions, and transition model together define the state space of the problem
implicitly. The state space of a problem is the set of all states reachable from the initial state by any
sequence of actions. The state space forms a directed map or graph where the nodes are the states,
the links between the nodes are actions, and a path is a sequence of states connected by a sequence
of actions.
Search: It identifies the best possible sequence of actions to reach the goal state from the current
state. It takes a problem as input and returns a solution as output.
Solution: It is the action sequence chosen from the various candidates found by the search, ideally
the optimal one (the one with the lowest path cost).
Execution: It executes the best solution found by the search algorithm in order to reach the goal
state from the current state.
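To make the five components concrete, here is a minimal Python sketch of a problem definition in the style described above; the class and method names are illustrative choices, not a prescribed API.

# Illustrative skeleton of a problem definition (names are assumptions).
class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state        # a. initial state
        self.goal_state = goal_state

    def actions(self, state):                     # b. possible actions in a state
        raise NotImplementedError

    def result(self, state, action):              # c. transition model
        raise NotImplementedError

    def goal_test(self, state):                   # d. goal test
        return state == self.goal_state

    def step_cost(self, state, action, next_state):   # e. path cost, per step
        return 1

A search algorithm then explores the state space that is defined implicitly by initial_state, actions, and result.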
Example Problems. Basically, there are two types of problem approaches:
Toy Problem: It is a concise and exact description of the problem, used by researchers to compare
the performance of algorithms.
Real-world Problem: These are problems from the real world that require solutions. Unlike a toy
problem, a real-world problem does not have a single agreed description, but we can give a general
formulation of the problem.
Some Toy Problems
8-Puzzle Problem: Here, we have a 3×3 board with movable tiles numbered from 1 to 8 and one
blank space. A tile adjacent to the blank space can slide into that space. The objective is to reach a
specified goal state, as shown in the figure: our task is to convert the current (start) state into the
goal state by sliding tiles into the blank space. The problem formulation is as follows:
States: It describes the location of each numbered tile and the blank tile.
Initial State: We can start from any state as the initial state.
Actions: Here, the actions of the blank space are defined, i.e., move left, right, up, or down.
Transition Model: It returns the resulting state for a given state and action.
Goal test: It identifies whether we have reached the goal state.
Path cost: The path cost is the number of steps in the path, where the cost of each step is 1.
Note: The 8-puzzle problem is a type of sliding-block problem which is used for testing new search
algorithms in artificial intelligence
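A brief Python sketch of the actions and transition model under this formulation is given below; the state representation (a tuple of nine entries with 0 standing for the blank) is an assumption made for illustration.

# Illustrative 8-puzzle formulation: a state is a tuple of 9 numbers, 0 = blank.
def actions(state):
    """Moves of the blank ('up', 'down', 'left', 'right') that stay on the board."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    return moves

def result(state, action):
    """Transition model: return the state after sliding a tile into the blank."""
    i = state.index(0)
    j = i + {"up": -3, "down": 3, "left": -1, "right": 1}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
print(actions(start))          # ['up', 'down', 'left', 'right']
print(result(start, "left"))   # (1, 2, 3, 0, 4, 5, 6, 7, 8)

The goal test is a simple equality check against the goal tuple, and the path cost adds 1 per move.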
8-queens problem: The aim of this problem is to place eight queens on a chessboard so that no
queen may attack another. A queen can attack another queen if they share the same row, column, or
diagonal. From the figure, we can understand the problem as well as its correct solution: each queen
is placed on the chessboard in a position where no other queen shares its row, column, or diagonal.
Such a placement is therefore a correct solution to the 8-queens problem.
For this problem, there are two main kinds of formulation:
1. Incremental formulation: It starts from an empty state, and the operators add a queen at each
step.
Following steps are involved in this formulation:
States: Arrangement of any 0 to 8 queens on the chessboard.
Initial State: An empty chessboard
Actions: Add a queen to any empty square.
Transition model: Returns the chessboard with a queen added in the chosen square.
Goal test: Checks whether 8-queens are placed on the chessboard without any attack.
Path cost: There is no need for path cost because only final states are counted.
In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate.
2. Complete-state formulation: It starts with all 8 queens on the chessboard and moves them
around to escape attacks. The following steps are involved in this formulation:
States: Arrangement of all 8 queens, one per column, with no queen attacking another queen.
Actions: Move a queen to a location where it is safe from attack.
This formulation is better than the incremental formulation, as it reduces the state space from
1.8 × 10^14 to 2,057, and it is easier to find solutions.
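For the goal test ("no queen attacks another"), here is a small Python sketch; the board representation (a list of column indices, one per row) is an assumption made for illustration, not a prescribed encoding.

# Illustrative 8-queens attack check: placement[r] = column of the queen in row r.
def no_attacks(placement):
    """Return True if no two queens share a column or a diagonal."""
    for r1 in range(len(placement)):
        for r2 in range(r1 + 1, len(placement)):
            c1, c2 = placement[r1], placement[r2]
            if c1 == c2:                       # same column
                return False
            if abs(c1 - c2) == abs(r1 - r2):   # same diagonal
                return False
    return True

print(no_attacks([0, 4, 7, 5, 2, 6, 1, 3]))  # True: a known solution
print(no_attacks([0, 1, 2, 3, 4, 5, 6, 7]))  # False: all queens on one diagonal

In the incremental formulation this check serves as the goal test on the final state; in the complete-state formulation it can also guide which queen to move next.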
Some Real-world problems
Traveling salesperson problem (TSP): It is a touring problem in which the salesperson must visit each
city exactly once. The objective is to find the shortest tour that covers all the cities.
VLSI Layout problem: In this problem, millions of components and connections are positioned on a
chip so as to minimize the area, the circuit delays, and the stray capacitances, and to maximize the
manufacturing yield.
The layout problem is split into two parts:
Cell layout: Here, the primitive components of the circuit are grouped into cells, each performing
its specific function. Each cell has a fixed shape and size. The task is to place the cells on the chip
without overlapping each other.
Channel routing: It finds a specific route for each wire through the gaps between the cells.
Protein Design: The objective is to find a sequence of amino acids that will fold into a three-dimensional
protein with the right properties to cure some disease.
Searching for solutions
We have seen many problems. Now, there is a need to search for solutions to solve them. In this
section, we will understand how searching can be used by the agent to solve a problem. For solving
different kinds of problems, an agent makes use of different strategies to reach the goal; the way in
which this search is carried out is known as the search strategy.
Search Algorithms in Artificial Intelligence / Searching for Solutions
Overview
Search algorithms in artificial intelligence are significant because they provide solutions to many
artificial intelligence-related issues. There is a variety of search algorithms in artificial
intelligence. This section also categorizes search algorithms in artificial intelligence and lists the
most often used ones.
Introduction
Search algorithms in AI are algorithms that aid in the resolution of search problems. A search problem
comprises the search space, the start state, and the goal state. By evaluating scenarios and alternatives,
search algorithms in artificial intelligence help AI agents reach the goal state. The algorithms
provide search solutions by transforming the initial state into the desired state. Therefore, AI
machines and applications can only perform search functions and discover viable solutions with
these algorithms.
AI agents make artificial intelligence easy. These agents carry out tasks to achieve a specific
objective and plan actions that can lead to the intended outcome. The combination of these actions
completes the given task. The AI agents discover the best solution by considering all alternatives or
solutions. Search algorithms in artificial intelligence are used to find the best possible solutions for
AI agents.
Problem-solving Agents
Building agents with rational behavior is the subject of artificial intelligence research. These agents
often use a search algorithm in the background to complete their job. Search techniques are
universal problem-solving approaches in Artificial Intelligence. Rational or problem-solving agents
mostly use these search strategies or algorithms in AI to solve a particular problem and provide the
best result. The goal-based agents are problem-solving agents that use atomic representation. We
will learn about different problem-solving search algorithms in AI in this topic.
Search Algorithm Terminologies
1. Search - Searching solves a search problem in a given search space step by step. Three major
factors can influence a search problem.
 Search Space - A search space is a collection of potential solutions a system may have.
 Start State - The state from which the agent begins the search.
 Goal test - A function that examines the current state and returns whether or not the goal
state has been attained.
2. Search tree - A search tree is a tree representation of a search problem. The node at the root of
the search tree corresponds to the initial state.
3. Actions - It describes all the steps, activities, or operations accessible to the agent.
4. Transition model - It can be used to convey a description of what each action does.
5. Path Cost - It is a function that gives a cost to each path.
6. Solution - An action sequence that connects the start node to the goal node.
7. Optimal Solution - If a solution has the lowest cost among all solutions, it is said to be the
optimal solution.
Properties of Search Algorithms or Measuring problem-solving performance
The four important properties of search algorithms in artificial intelligence for comparing their
efficiency are as follows:
1. Completeness - A search algorithm is said to be complete if it guarantees to return a solution
for any input whenever at least one solution exists.
2. Optimality - A solution discovered for an algorithm is considered optimal if it is assumed to
be the best solution (lowest path cost) among all other solutions.
3. Time complexity - It measures how long an algorithm takes to complete its job.
4. Space Complexity - The maximum storage space required during the search, as determined
by the problem's complexity.
These characteristics are often used to compare the effectiveness of various search algorithms in
artificial intelligence.
Importance of Search Algorithms in Artificial Intelligence
The following points explain how and why the search algorithms in AI are important:
 Solving problems: Using logical search mechanisms, including problem description,
actions, and search space, search algorithms in artificial intelligence improve problem-
solving. Applications for route planning, like Google Maps, are one real-world illustration of
how search algorithms in AI are utilized to solve problems. These programs employ search
algorithms to determine the quickest or shortest path between two locations.
 Search programming: Many AI activities can be coded in terms of searching, which
improves the formulation of a given problem's solution.
 Goal-based agents: The efficiency of goal-based agents is improved through search algorithms in
artificial intelligence. These agents search for the most optimal course of action, the one that
offers the best resolution to the problem at hand.
 Support production systems: Search algorithms in artificial intelligence help production
systems run. These systems help AI applications by using rules and methods for putting
them into practice. Production systems use search algorithms in artificial intelligence to find
the rules that can lead to the required action.
 Neural network systems: The neural network systems also use these algorithms. These
computing systems comprise a hidden layer, an input layer, an output layer, and coupled
nodes. Neural networks are used to execute many tasks in artificial intelligence. For
example, the search for connection weights that will result in the required input-output
mapping is improved by search algorithms in AI.
Types of Search Algorithms in AI
We can divide search algorithms in artificial intelligence into uninformed (Blind search) and
informed (Heuristic search) algorithms based on the search issues.
Un-informed / Blind Search
Uninformed search does not use domain knowledge, such as the proximity or location of the goal. It
works by brute force, because it only has the information needed to traverse the tree and to identify
leaf and goal nodes.
It is a method of searching a search tree using no knowledge about the search space beyond the initial
state, the operators, and the goal test, and it is also known as blind search. It goes through the tree
nodes one by one until it reaches the target node. These algorithms are limited to generating
successors and distinguishing goal states from non-goal states.
1 Breadth-first search - This is a search method for a graph or tree data structure. It starts at the
tree root (or some search key) and visits all the nodes at the current depth level before moving on to
the nodes at the next depth level. It uses the queue data structure, which works on the first-in,
first-out (FIFO) principle. It is a complete algorithm, as it returns a solution whenever a solution
exists.
Imagine searching a uniform tree where every state has b successors. The root of the search tree
generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the
second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on.
Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that
level. Then the total number of nodes generated is

b + b^2 + b^3 + · · · + b^d = O(b^d)
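As a quick numeric illustration of this growth (the numbers b = 10 and d = 5 are my own example, not from the notes):

# Hypothetical example: branching factor b = 10, solution depth d = 5.
b, d = 10, 5
total = sum(b**i for i in range(1, d + 1))
print(total)   # 111110 nodes generated in the worst case, i.e. O(b^d)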

Algorithm:
1) Create a variable called NODE_LIST and set it to the initial state.
2) Until a goal state is found or NODE_LIST is empty, do:
a. Remove the first element from NODE_LIST and call it E. If NODE_LIST was empty, quit.
b. For each way that each rule can match the state described in E, do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE_LIST.

The data structure used in this algorithm is QUEUE.


Explanation of Algorithm:
– Initially, put the (0, 0) state in the queue.
– Apply the production rules and generate the new states.
– If a new state is not the goal state (and has not been generated or expanded before), then add it
to the queue.
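A minimal, runnable Python sketch of breadth-first search over an explicitly given graph is shown below; the adjacency-list representation and the function name are illustrative assumptions, not part of the original notes.

# Illustrative breadth-first search using a FIFO queue of paths.
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a path from start to goal, or None if no path exists."""
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:                 # goal test
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

Because the queue is FIFO, nodes are expanded in order of their depth, so the first path returned uses the fewest steps.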
2. Depth-first search - It is also an algorithm used to explore graph or tree data structures. It starts
at the root node but, in contrast to breadth-first search, follows one branch as deep as possible before
backtracking. It is implemented using a stack data structure, which works on the last-in, first-out
(LIFO) principle.

The depth-first search algorithm is an instance of the general graph-search algorithm; whereas
breadth-first search uses a FIFO queue, depth-first search uses a LIFO queue. A LIFO queue means
that the most recently generated node is chosen for expansion. This must be the deepest unexpanded
node, because it is one deeper than its parent, which in turn was the deepest unexpanded node when
it was selected.
The properties of depth-first search depend strongly on whether the graph-search or the tree-search
version is used. The graph-search version, which avoids repeated states and redundant paths, is
complete in finite state spaces because it will eventually expand every node.
Algorithm:
1) If the initial state is a goal state, quit and return success.
2) Otherwise, do the following until success or failure is signaled:
a. Generate a successor E of the initial state. If there are no more successors, signal failure.
b. Call Depth-First Search with E as the initial state.
c. If success is returned, signal success. Otherwise, continue in this loop.

The data structure used in this algorithm is STACK.


Explanation of Algorithm:
– Initially, put the (0, 0) state on the stack.
– Apply the production rules and generate the new states.
– If a new state is not a goal state (and has not been generated or expanded before), then push it
onto the top of the stack.
– If an already-generated state is encountered, then POP the top element of the stack and search in
another direction.
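The recursive algorithm above can be sketched in Python as follows; the adjacency-list graph representation and the visited set (added to avoid revisiting states, as the explanation suggests) are illustrative choices.

# Illustrative recursive depth-first search (graph as a dict of adjacency lists).
def depth_first_search(graph, state, goal, visited=None):
    """Return a path from state to goal, or None if this branch fails."""
    if visited is None:
        visited = set()
    if state == goal:                          # step 1: goal test
        return [state]
    visited.add(state)
    for successor in graph.get(state, []):     # step 2a: generate successors
        if successor not in visited:
            path = depth_first_search(graph, successor, goal, visited)  # step 2b
            if path is not None:               # step 2c: success propagates up
                return [state] + path
    return None                                # no successor worked: failure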
The time complexity of depth-first graph search is bounded by the size of the state space (which may
be infinite, of course). A depth-first tree search, on the other hand, may generate all of the O(b^m)
nodes in the search tree, where m is the maximum depth of any node; this can be much greater
than the size of the state space. Note that m itself can be much larger than d (the depth of the
shallowest solution) and is infinite if the tree is unbounded.
A variant of depth-first search called backtracking search uses still less memory. In backtracking,
only one successor is generated at a time rather than all successors; each partially expanded node
remembers which successor to generate next. In this way, only O(m) memory is needed rather than
O(bm).
3. Uniform cost search (UCS) - Unlike the breadth-first and depth-first algorithms, uniform cost
search considers the cost of each path. When there are multiple paths to the desired objective, the
solution returned by uniform cost search is the one with the lowest total cost. UCS checks the
expense of moving to the next node and, if there are multiple paths, chooses the path with the least
cost. UCS is complete only for finite state spaces without zero-cost loops, and it is optimal only
when there are no negative step costs. It behaves like breadth-first search if each transition's cost is
the same.
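A short Python sketch of uniform cost search with a priority queue is given below; the weighted adjacency-list format (e.g. {'A': [('B', 2), ('C', 5)]}) and the function name are assumptions made for illustration.

# Illustrative uniform cost search ordered by accumulated path cost.
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) of the cheapest path, or None if none exists."""
    frontier = [(0, start, [start])]         # priority queue keyed on path cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                     # goal test at expansion time
            return cost, path
        for neighbor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return None

Testing the goal only when a node is taken off the priority queue (rather than when it is generated) is what guarantees the returned path is the cheapest one, provided step costs are non-negative.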

4. Iterative deepening depth-first search - It performs a depth-first search to depth limit 1, then
restarts and performs a depth-first search to depth limit 2, and so on until the answer is found. A node
at a deeper level is generated only after all the nodes at the lower limits have been produced, yet the
algorithm stores only a stack of nodes. The algorithm terminates at the depth d at which the goal
node is found.
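A compact Python sketch of this idea (a depth-limited search restarted with an increasing limit) follows; the graph format, function names, and the max_depth cap are illustrative assumptions.

# Illustrative iterative deepening depth-first search.
def depth_limited_search(graph, node, goal, limit):
    """Depth-first search that gives up below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for successor in graph.get(node, []):
        path = depth_limited_search(graph, successor, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):       # restart with limits 0, 1, 2, ...
        path = depth_limited_search(graph, start, goal, limit)
        if path is not None:
            return path
    return None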
5. Bidirectional search - It searches forward from the initial state and backward from the target
state until they meet to identify a common state. The route from the initial state is joined to the path
from the goal state. Each search only covers half of the entire path.

Breadth First Search (BFS)


Breadth-first search is a general technique for traversing a graph. It may use more memory, but it
will always find the shortest path first. In this type of search, the state space is represented in the
form of a tree. The solution is obtained by traversing the tree. The nodes of the tree represent the
start state, the various intermediate states, and the final state. This search uses a queue data structure
and performs a level-by-level traversal. Breadth-first search expands nodes in order of their distance
from the root. It is a path-finding algorithm that is always capable of finding a solution if one exists.
The solution that is found is always the optimal solution, although this is accomplished in a very
memory-intensive manner. Each node in the search tree is expanded breadthwise at each level.
Concept:
Step 1: Traverse the root node.
Step 2: Traverse all neighbours of the root node.
Step 3: Traverse all neighbours of the neighbours of the root node.
Step 4: This process continues until the goal node is reached.
Algorithm:
Step 1: Place the root node inside the queue.
Step 2: If the queue is empty, then stop and return failure.
Step 3: If the FRONT node of the queue is a goal node, then stop and return success.
Step 4: Remove the FRONT node from the queue. Process it and find all its neighbours that are in
the ready state, then place them inside the queue in any order.
Step 5: Go to Step 3.
Step 6: Exit.
Implementation:
Let us implement the above algorithm of BFS by taking the following suitable
example.

Consider the graph shown in the figure, in which we take A as the starting node and F as the goal node (*).
Step 1: Place the root node inside the queue, i.e., A.
Step 2: Now the queue is not empty, and the FRONT node, i.e., A, is not our goal node.
Step 3: So remove the FRONT node A from the queue and find the neighbours of A, i.e., B and C.
Queue: B, C.
Step 4: Now B is the FRONT node of the queue, so process B and find the neighbours of B, i.e., D.
Queue: C, D.
Step 5: Now find the neighbours of C, i.e., E. Queue: D, E.
Step 6: Next find the neighbours of D, as D is the FRONT node of the queue. Queue: E, F.
Step 7: Now E is the FRONT node of the queue. The neighbour of E is F, which is our goal node.
Queue: F.
Step 8: Finally, F, which is our goal node, is the FRONT of the queue. So exit.
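Assuming the breadth_first_search sketch given earlier is in scope, the same trace can be reproduced in Python; the adjacency list below is my reading of the example graph, since the original figure is not reproduced here.

# Adjacency list reconstructed from the trace above (an assumption).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["F"]}
print(breadth_first_search(graph, "A", "F"))   # ['A', 'B', 'D', 'F']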
Advantages:
In this procedure, the goal will eventually be found if one exists.
It does not follow a single unfruitful path for a long time.
It finds the minimal solution when there are multiple paths.
Disadvantages:
BFS consumes a large amount of memory, and its time complexity is high.
It explores many long pathways when all paths to the goal lie at approximately the same search depth.
Depth First Search (DFS)
DFS is also an important type of uninformed search. DFS visits all the vertices in the graph. This type
of algorithm always chooses to go deeper into the graph. After DFS has visited all the reachable
vertices from a particular source vertex, it chooses one of the remaining undiscovered vertices and
continues the search. DFS remedies the space limitation of breadth-first search by always generating
next a child of the deepest unexpanded node. The data structure used for DFS is the stack, or last-in,
first-out (LIFO) structure. One interesting property of DFS is that the discovery and finishing times
of the vertices form a parenthesis structure: if we write an open parenthesis when a vertex is
discovered and a close parenthesis when it is finished, the result is a properly nested set of parentheses.
Concept:
Step 1: Traverse the root node.
Step 2: Traverse any neighbour of the root node.
Step 3: Traverse any neighbour of the neighbour of the root node.
Step 4: This process continues until the goal node is reached.
Algorithm:
Step 1: PUSH the starting node onto the stack.
Step 2: If the stack is empty, then stop and return failure.
Step 3: If the top node of the stack is the goal node, then stop and return success.
Step 4: Else POP the top node from the stack and process it. Find all its neighbours that are in the
ready state and PUSH them onto the stack in any order.
Step 5: Go to Step 3.
Step 6: Exit.
Implementation:
Let us take an example for implementing the above DFS algorithm.

Examples of DFS
Consider A as the root node and L as the goal node in the graph shown in the figure.
Step 1: PUSH the starting node onto the stack, i.e., A.
Step 2: Now the stack is not empty and A is not our goal node, hence move to the next step.
Step 3: POP the top node A from the stack and find the neighbours of A, i.e., B and C.
Stack (top on the right): B, C.
Step 4: Now C is the top node of the stack. Find its neighbours, i.e., F and G. Stack: B, F, G.
Step 5: Now G is the top node of the stack. Find its neighbour, i.e., M. Stack: B, F, M.
Step 6: Now M is the top node. Find its neighbours; since M has no neighbours in the graph, POP it
from the stack. Stack: B, F.
Step 7: Now F is the top node and its neighbours are K and L, so PUSH them onto the stack.
Stack: B, K, L.
Step 8: Now L is the top node of the stack, which is our goal node.
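An iterative, stack-based Python sketch of this procedure is shown below; the adjacency list reflects my reading of the example graph, since the original figure is not reproduced here.

# Illustrative stack-based (LIFO) depth-first search.
def depth_first_search_iterative(graph, start, goal):
    """Return a path from start to goal using an explicit stack, or None."""
    stack = [[start]]                    # stack of paths (LIFO)
    visited = set()
    while stack:
        path = stack.pop()               # POP the top node's path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            stack.append(path + [neighbor])   # PUSH neighbours onto the stack
    return None

# Adjacency list reconstructed from the trace above (an assumption).
graph = {"A": ["B", "C"], "B": [], "C": ["F", "G"],
         "F": ["K", "L"], "G": ["M"], "M": [], "K": [], "L": []}
print(depth_first_search_iterative(graph, "A", "L"))   # ['A', 'C', 'F', 'L']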
Advantages:
DFS consumes much less memory space.
It can reach the goal node in less time than BFS if it happens to traverse the right path.
It may find a solution without examining much of the search space, because the desired solution may
be found on the very first attempt.
Disadvantages:
It is possible that many states keep recurring, and there is no guarantee of finding the goal node.
Sometimes the search may also enter an infinite loop.
