
CS 440/ECE 448
Fall 2020
Margaret Fleck
Uninformed Search 3

Returning paths
When a search algorithm finds the goal, it must be able to return the best path to the goal. So, during search, each
seen state must remember the best path back to the start state. To do this efficiently, each state stores a pointer to
the previous state in the best path. When the goal is found, we retrieve the best path by chaining backwards until
we reach the start state.
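
As a concrete illustration, here is a minimal Python sketch of that backward chaining, assuming the search recorded each state's predecessor in a dictionary called backpointer (the name is just for illustration):

# Follow backpointers from the goal to the start, then reverse to get the
# path in start-to-goal order. Assumes backpointer[start] is None.
def reconstruct_path(backpointer, goal):
    path = []
    state = goal
    while state is not None:
        path.append(state)
        state = backpointer[state]
    path.reverse()
    return path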

Length-bounded DFS
DFS performs very poorly when applied directly to most search problems. However, it becomes the method of
choice when we have a tight bound on the length of the solution path. This happens in two types of situations,
which we'll see later in the course:

The search space has a small natural depth (e.g. constraint-based problems).
Our system resources impose a limit on how far away we can search (e.g. complex games or large robot
environments).

In these situations, DFS can be implemented so as to use very little memory. In AI problems, the set of seen states
can often become very large, so that programs run out of memory or put enough pressure on the system's
memory management that they slow down. Instead of storing and checking this large set, length-bounded
DFS checks only that the current path contains no duplicate states. There is a tradeoff here. Checking only the
current path causes the algorithm to redo some of its search work, because it can't remember what states it has
already searched. In practice, the wisdom of this approach depends on the application. When there are tight
bounds on path length, it's safe to implement DFS using recursion rather than an explicit stack.
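
For example, a recursive length-bounded DFS might look like the following Python sketch; successors and is_goal are assumed to be supplied by the search problem, and the duplicate check looks only at the current path:

# Length-bounded DFS that stores only the current path (a list of states
# from the start up to and including the current state). Memory use is
# proportional to the bound, not to the number of states ever seen.
def bounded_dfs(state, path, bound, successors, is_goal):
    if is_goal(state):
        return path
    if len(path) >= bound:          # path already contains at least `bound` states
        return None
    for nxt in successors(state):
        if nxt in path:             # duplicate check against the current path only
            continue
        result = bounded_dfs(nxt, path + [nxt], bound, successors, is_goal)
        if result is not None:
            return result
    return None

A call would look like bounded_dfs(start, [start], k, successors, is_goal), which returns a goal path with at most k states, or None if no such path exists within the bound.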

Iterative deepening
We can dramatically improve the low-memory version of DFS by running it with a progressively increasing
bound on the maximum path length. This is a bit like having a dog running around you on a leash, and gradually
letting the leash out. Specifically:

For k = 1 to infinity
    Run DFS but stop whenever a path contains at least k states.
    Halt when we find a goal state.
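
A minimal sketch of this loop in Python, reusing the bounded_dfs sketch from the previous section (itertools.count plays the role of "to infinity"):

from itertools import count

# Iterative deepening: rerun length-bounded DFS with an increasing bound k
# until some iteration finds a goal path.
def iterative_deepening(start, successors, is_goal):
    for k in count(1):
        result = bounded_dfs(start, [start], k, successors, is_goal)
        if result is not None:
            return result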

This is called "iterative deepening." Its properties:

Overall search behavior looks like BFS.
Each iteration starts from scratch, forgetting all previous work.
Uses very little memory.

Each iteration repeats the work of the previous one. So iterative deepening is most useful when the new search
area at distance k is large compared to the area searched in previous iterations, so that the search at distance k is
dominated by the paths of length k.

These conditions don't hold for 2D mazes. The search area at radius k is proportional to k, whereas the area
inside that radius is proportional to k^2.

However, suppose that the search space looks like a 4-ary tree. We know from discrete math that there are 4^k
nodes at level k. Using a geometric series, we find that there are (1/3)(4^k − 1) nodes in levels 0 through k-1.
So the nodes in the lowest level dominate the search. This type of situation holds (approximately) for big games
such as chess.
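
For concreteness, the series works out to

    sum_{i=0}^{k-1} 4^i = (4^k − 1) / (4 − 1) = (1/3)(4^k − 1)

so the 4^k nodes at level k outnumber all shallower levels combined by roughly a factor of 3, and the final iteration of iterative deepening does roughly three quarters of the total work.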

Backwards and Bidirectional Search


Backwards and bidirectional search are basically what they sound like. Backwards search is like forwards search,
except that you start from the goal. In bidirectional search, you run two parallel searches, starting from the start and
goal states, in the hope that they will connect at a shared state. These approaches are useful only in very specific
situations.
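
As a rough illustration, here is a sketch of bidirectional breadth-first search in Python. It assumes actions are reversible, so the same successors function can be used from both ends, and it returns only the meeting state (reconstructing the full path would require backpointers, as above):

from collections import deque

# Run BFS from the start and from the goal, alternating between the two
# frontiers, until some state has been reached from both directions.
def bidirectional_search(start, goal, successors):
    if start == goal:
        return start
    frontiers = [deque([start]), deque([goal])]
    seen = [{start}, {goal}]
    side = 0                                  # 0 = forward, 1 = backward
    while frontiers[0] and frontiers[1]:
        state = frontiers[side].popleft()
        for nxt in successors(state):
            if nxt in seen[1 - side]:         # the two searches have met
                return nxt
            if nxt not in seen[side]:
                seen[side].add(nxt)
                frontiers[side].append(nxt)
        side = 1 - side                       # alternate directions
    return None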

Backwards search makes sense when you have relatively few goal states, all known at the start. This condition often
fails: chess, for example, has a huge number of winning board configurations. Also, actions must be easy to run
backwards (which is often true).

Example: James Bond is driving his Lexus in central Boston and wants to get to Dr. Evil's lair in Hawley, MA
(population 337). There's a vast number of ways to get out of Boston but only one road of significant size going
through Hawley. In that situation, working backwards from the goal might be much better than working
forwards from the starting state.
