Linear-Space Best-First Search: Summary of Results

Richard E. Korf
Abstract

Best-first search is a general search algorithm that always expands next a frontier node of lowest cost. Its applicability, however, is limited by its exponential memory requirement. Iterative deepening, a previous approach to this problem, does not expand nodes in best-first order if the cost function can decrease along a path. We present a linear-space best-first search algorithm (RBFS) that always explores new nodes in best-first order, regardless of the cost function, and expands fewer nodes than iterative deepening with a nondecreasing cost function. On the sliding-tile puzzles, RBFS with a weighted evaluation function dramatically reduces computation time with only a small penalty in solution cost. In general, RBFS reduces the space complexity of best-first search from exponential to linear, at the cost of only a constant factor in time complexity in our experiments.

*This research was supported by an NSF Presidential Young Investigator Award, No. IRI-8553925, and a grant from Rockwell International. Thanks to Valerie Aylett for drawing the graphs and figures, to Cindy Mason, Joe Pemberton, and Ed Purcell for their comments on an early draft of this manuscript, and to Stuart Russell for discussions on related work.
Introduction: Best-First Search

Best-first search is a very general heuristic search algorithm. It maintains an Open list of the frontier nodes of a partially expanded search graph, and a Closed list of the interior nodes. Every node has an associated cost value. At each cycle, an Open node of minimum cost is expanded, generating all of its children. The children are evaluated by the cost function, inserted into the Open list, and the parent is placed on the Closed list. Initially, the Open list contains just the initial node, and the algorithm terminates when a goal node is chosen for expansion.
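To make the expansion cycle concrete, the following is a minimal Python sketch of this loop, under the assumption of caller-supplied helpers successors, cost, and is_goal (these names are illustrative, not from the paper). The Open list is a priority queue keyed on node cost, and Closed is the set of expanded nodes.

import heapq
import itertools

def best_first_search(start, successors, cost, is_goal):
    counter = itertools.count()                   # breaks ties among equal costs
    open_list = [(cost(start), next(counter), start)]   # Open: frontier nodes
    closed = set()                                # Closed: interior (expanded) nodes
    while open_list:
        _, _, node = heapq.heappop(open_list)     # Open node of minimum cost
        if is_goal(node):                         # terminate when a goal node
            return node                           # is chosen for expansion
        closed.add(node)
        for child in successors(node):            # generate and evaluate children
            if child not in closed:
                heapq.heappush(open_list, (cost(child), next(counter), child))
    return None

Here cost is treated as an abstract per-node value, in keeping with the description above; the particular choices of cost function discussed next yield the familiar special cases.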
If the cost of a node is its depth in the graph, then best-first search becomes breadth-first search. If the cost of node n is g(n), the sum of the edge costs from the root to node n, then best-first search becomes Dijkstra's single-source shortest-path algorithm [Dijkstra 1959]. If the cost function is f(n) = g(n) + h(n), where h(n) is a heuristic estimate of the cost of reaching a goal from node n, then best-first search becomes the A* algorithm [Hart, Nilsson, & Raphael 1968].

Since best-first search stores all generated nodes in the Open or Closed lists, its space complexity is the same as its time complexity, which is typically exponential. Given the ratio of memory to processing speed on current computers, in practice best-first search exhausts the available memory on most machines in a matter of minutes, halting the algorithm.
Previous Work: Iterative Deepening

The memory limitation of best-first search was first addressed by iterative deepening [Korf 1985]. Iterative deepening performs a series of depth-first searches, pruning branches when their cost exceeds a threshold for that iteration. The initial threshold is the cost of the root node, and the threshold for each succeeding iteration is the minimum node cost that exceeded the previous threshold. Since iterative deepening is a depth-first algorithm, it only stores the nodes on the current search path, requiring space that is only linear in the search depth. As with best-first search, different cost functions produce different iterative deepening algorithms, including depth-first iterative deepening (f(n) = depth(n)) and iterative-deepening-A* (f(n) = g(n) + h(n)).
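A minimal Python sketch of this threshold scheme is given below; it is an illustration under assumptions (the helpers root, successors, cost, and is_goal are not from the paper), showing one cost-bounded depth-first search per iteration, with the next threshold taken as the minimum cost that exceeded the current one.

import math

def iterative_deepening(root, successors, cost, is_goal):
    threshold = cost(root)                    # initial threshold: the root's cost
    while threshold < math.inf:
        next_threshold = math.inf

        def dfs(node):
            nonlocal next_threshold
            c = cost(node)
            if c > threshold:                 # prune, but remember the smallest
                next_threshold = min(next_threshold, c)   # cost that exceeded it
                return None
            if is_goal(node):
                return node
            for child in successors(node):
                found = dfs(child)
                if found is not None:
                    return found
            return None

        found = dfs(root)
        if found is not None:
            return found
        threshold = next_threshold            # minimum cost exceeding the previous threshold
    return None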
If h(n) is consistent [Pearl 1984], all the cost functions described above are monotonic, in the sense that the cost of a child is always greater than or equal to the cost of its parent. With a monotonic cost function, the order in which nodes are first expanded by iterative deepening is best first. Many important cost functions are non-monotonic, however, such as f(n) = g(n) + W * h(n) [Pohl 1970] with W > 1. The advantage of this cost function is that while it returns suboptimal solutions, it generates far fewer nodes than are required to find optimal solutions.
With a non-monotonic cost function, the cost of a child can be less than the cost of its parent, and iterative deepening no longer expands nodes in best-first order. For example, consider the tree fragment in Figure 1B, ignoring the caption and inequalities for now, where the numbers in the nodes represent their costs. A best-first search would expand these nodes in the order 5, 1, 2. With iterative deepening, the initial threshold would be the cost of the root node, 5. After generating the left child of the root, node 2, iterative deepening would expand all descendents of node 2 whose costs did not exceed the threshold of 5, in depth-first order, before expanding node 1. Even if all the children of a node were generated at once, and ordered by their cost values, so that node 1 was expanded before node 2, iterative deepening would explore the subtrees below nodes 4 and 3 before expanding node 2. The real problem here is that while searching nodes whose costs are less than the current threshold, iterative deepening ignores the information in the values of those nodes, and proceeds strictly depth first.

Figure 1: SRBFS with non-monotonic cost function
Recursive Best-First Search

Recursive best-first search (RBFS) is a linear-space algorithm that always expands nodes in best-first order, even with a non-monotonic cost function. For pedagogical reasons, we first present a simple version of the algorithm (SRBFS), and then consider the more efficient full algorithm (RBFS). These algorithms first appeared in [Korf 1991a], the main results were described in [Korf 1991b], and a full treatment appears in [Korf 1991c], including proofs of all the theorems.
Simple Recursive Best-First Search

While iterative deepening uses a global cost threshold, Simple Recursive Best-First Search (SRBFS) uses a local cost threshold for each recursive call. It takes two arguments, a node and an upper bound on cost, and explores the subtree below the node as long as it contains frontier nodes whose costs do not exceed the upper bound. It then returns the minimum cost of the frontier nodes of the explored subtree.

Figure 1 shows how SRBFS searches the tree in Figure 1B in best-first order. The initial call on the root is made with an upper bound of infinity. We expand the root, and compute the costs of the children as shown in Figure 1A. Since the right child has the lower cost, we recursively call SRBFS on the right child. The best Open nodes in the tree will be descendents of the right child as long as their costs do not exceed the value of the left child. Thus, the recursive call on the right child is made with an upper bound equal to the value of its best (only) brother, 2. SRBFS expands the right child, and evaluates the grandchildren, as shown in Figure 1B. Since the values of both grandchildren, 4 and 3, exceed the upper bound on their parent, 2, the recursive call terminates. It returns as its result the minimum value of its children, 3. This backed-up value of 3 is stored as the new value of the right child (Figure 1C), indicating that the lowest-cost Open node below this node has a cost of 3. A recursive call is then made on the new best child, the left one, with an upper bound equal to 3, which is the new value of its best brother, the right child. In general, the upper bound on a child node is equal to the minimum of the upper bound on its parent, and the current value of its lowest-cost brother.
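The local-threshold scheme just described can be summarized in the following Python sketch. The tiny tree mirrors the Figure 1 example (a root of cost 5 with children of cost 2 and 1, the latter with children of cost 4 and 3); the choice of goal, the GoalFound convention, and the helper names are illustrative assumptions, not code from the paper.

import math

# Toy tree from the Figure 1 discussion: each node is labelled by its cost.
CHILDREN = {5: [2, 1], 1: [4, 3], 2: [], 4: [], 3: []}
GOAL = 3                      # assumed goal, so the demonstration terminates

class GoalFound(Exception):
    pass

expansion_order = []          # order in which new nodes are first expanded

def srbfs(node, bound):
    # Explore the subtree below `node` while it contains frontier nodes whose
    # costs do not exceed `bound`; return the minimum frontier cost seen.
    if node == GOAL:
        raise GoalFound(node)                    # goal chosen for expansion
    if node not in expansion_order:
        expansion_order.append(node)
    kids = CHILDREN[node]
    if not kids:
        return math.inf                          # no frontier below a leaf
    costs = list(kids)                           # node labels double as costs here
    while min(costs) <= bound:
        i = costs.index(min(costs))              # lowest-cost child
        others = costs[:i] + costs[i + 1:]
        # Upper bound on the child: min of our bound and its best brother.
        child_bound = min(bound, min(others)) if others else bound
        costs[i] = srbfs(kids[i], child_bound)   # backed-up value replaces f
    return min(costs)

try:
    srbfs(5, math.inf)
except GoalFound as g:
    print("goal reached:", g.args[0])
print("first expansions:", expansion_order)      # best-first order: [5, 1, 2]

Note that node 1 is re-expanded on the way back down: SRBFS forgets the subtrees it abandons, keeping only backed-up values, and regenerates a subtree when its stored value becomes best again.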
Figure 2: SRBFS example with cost equal to depth

Figure 2 shows a more extensive example of SRBFS. In this case, the cost function is simply the depth of a node in the tree, corresponding to breadth-first search. Initially, the stored value of a node, F(n), equals its static value, f(n). After a recursive call on the node returns, its stored value is equal to the minimum value of all frontier nodes in the subtree explored during the last call. SRBFS proceeds down a path until the stored values of all children of the last node expanded exceed the upper bound on their parent.
Theoretical Results

We briefly present the main theoretical properties of SRBFS and RBFS. Space limitations preclude the formal statements and proofs of the theorems found in [Korf 1991c]. The most important result, and the most difficult to establish, is that both SRBFS and RBFS are best-first searches. In other words, the first time a node is expanded, its cost does not exceed that of any other frontier node. What distinguishes these algorithms from classical best-first search is that the space complexity of both SRBFS and RBFS is O(bd), where b is the branching factor, and d is the maximum search depth. The reason is that at any given point, the recursion stack only contains the path to the best frontier node, plus the brothers of all nodes on that path. If we assume a constant branching factor, the space complexity is linear in the search depth.

While the time complexities of both algorithms are linear in the number of nodes generated, this number is heavily dependent on the particular cost function. In the worst case, all nodes have unique cost values, and the asymptotic time complexity of both algorithms is O(b^(2d)). This is the same as the worst-case time complexity of IDA* on a tree [Patrick, Almulla, & Newborn 1989]. This scenario is somewhat unrealistic, however, since in order to maintain unique cost values in the face of an exponentially growing number of nodes, the number of bits used to represent the values must increase with each level of the tree.

As a more realistic example of time complexity, we examined the special case of a uniform tree where the cost of a node is its depth in the tree, corresponding to breadth-first search. In this case, the asymptotic time complexity of SRBFS is O(x^d), where x = (b + 1 + sqrt(b^2 + 2b - 3))/2. For large values of b, this approaches O((b + 1)^d). The asymptotic time complexity of RBFS, however, is O(b^d), showing that RBFS is asymptotically optimal in this case, but SRBFS is not.
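As a quick numerical check (not from the paper), the fragment below evaluates x for a few branching factors, showing how it approaches b + 1 and how it compares with the base b achieved by RBFS.

import math

for b in (2, 3, 5, 10, 100):
    x = (b + 1 + math.sqrt(b * b + 2 * b - 3)) / 2   # SRBFS base when cost = depth
    print(f"b = {b:3d}   x = {x:8.3f}   b + 1 = {b + 1:3d}   RBFS base = {b}")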
Finally, for the important special case of a monotonic cost function, RBFS generates fewer nodes than iterative deepening, up to tie-breaking among nodes of equal cost. With a monotonic cost function, both algorithms expand all nodes of a given cost before expanding any nodes of greater cost. In iterative deepening, each new iteration expands nodes of greater cost. Between iterations, the search path collapses to just the root node, and the entire tree must be regenerated to find the nodes of next greater cost. For RBFS, we can similarly define an "iteration" as the interval of time when those nodes being expanded for the first time are all of the same cost. When the last node of a given cost is expanded, ending the current iteration, the recursion stack contains the path to that node, plus the brothers of all nodes on that path. Many of these brother nodes will have stored values equal to the next greater cost, and the subtrees below these nodes will be explored in the next iteration. Other nodes attached to this path may have greater costs associated with them, and will not be searched in the next iteration. Thus, while iterative deepening must regenerate the entire previous tree during each new iteration, RBFS will only explore the subtrees of brother nodes on the last path of the previous iteration whose stored values equal the upper bound for the next iteration. If nodes of similar cost are highly clustered in the tree, this will result in significant savings. Even in those situations where the entire tree must be explored in each iteration, as in the case of breadth-first search, RBFS avoids regenerating the last path of the previous iteration, although the savings is not significant in that case.
Experimental Results
increase with each level of the tree. We implemented RBFS on the Travelling Salesman
As a more realistic example of time complcxit$y, we Problem (TSP) and the sliding-t,ile puzzles. For TSP,
examined the special case of a uniform tree where using the monotonic A* cost function f(n) = g(n) +
the cost of a node is its depth in the tree, corre- h(?b), with the minimumspa.nning tree heuristic, RBFS
sponding to breadth-first search. In this case, the genera.tes only one-sixth as many nodes as IDA*,
asymptotic time complexity of SRBFS is 0( xd), where while finding optimal solutions. Both algorithms, how-
z = (b+ l+ db2 + 2b - 3)/Z For large values of b, this ever, generate more nodes than depth-first branch-and-
approaches O((b+ l)d). The asymptotic time complex- bound, another linear-space algorithm.
ity of RBFS, however, is O(bd), showing that RBFS is On the Eight Puzzle, with the A* cost function and
asymptotically optimal in this case, but SR.BFS is not. the Manhattan Distance heuristic, RBFS finds optimal
Finally, for the important special cease of a mono- solutions while generating slightly fewer nodes than
tonic cost function, RBFS generates fewer nodes than IDA*. Depth-first branch-and-bound doesn’t work on
iterative deepening, up to tie-brea.king among nodes of this problem, since finding any solution is difficult.
equal cost. With a monotonic cost function, both algo- In order to find sub-optimal solutions more quickly,
rithms expand all nodes of a given cost. before cspand- we used the weighted non-monotonic cost function,
ing any nodes of greater cost. In itemtive deepeuing, m-4 = g(n) + W - h(n), with FV > l[Pohl 19701.
each new iteration expands nodes of greater cost. Be- M7e ran three different algorithms on the Fifteen Puz-
tween iterations, the search path collapses to just the zle with this function: weighted-A* (WA*), weighted-
root node, and the entire tree must be regenerated to IDA* (WIDA*), and RBFS. The solutions returned are
find the nodes of next greater cost. For RBFS, we cau guaranteed to be within a factor of CV of optimal[Davis,
similarly define an “itera.tion” as the int,erval of t8ime Bramanti-Gregor, & Wang 19891, but in practice a.11
when those nodes being expanded for the first time are three algorithms produce significantly better solutions,
all of the same cost. When the last node of a given cost as shown in figure 4. The horizontal a.xis is the rel-
is expanded, ending the current itera.tion, the recursion ative weight on 12.(n), or lOOFV/(IV + l), while the
stack contains the path to that node, plus the brothers vert+al axis shows the solution lengths in moves. All
of all nodes on that path. Many of these brother nodes data points are averages of the 100 problem instances
will have stored values equa,l to the next greater cost, in [Korf 1985]. The bottom line shows the solution
and the subtrees below these nodes will be explored in lengths returned by WA* and RBFS. Since both algo-
the next iteration. Other nodes attached to this path rithms are best-first searches, they produce solutions
Korf 537
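As a small arithmetic illustration of the axis convention above (not from the paper), the conversion between W and the relative weight on the h(n) term is:

def relative_weight(w):
    # Relative weight on the h(n) term for f(n) = g(n) + W * h(n):
    # 100 * W / (W + 1), as used on the horizontal axis of Figure 4.
    return 100.0 * w / (w + 1.0)

# W = 1 (ordinary A*) gives 50%, W = 3 gives 75%, and the relative weight
# approaches 100% as W grows.
for w in (1, 1.5, 3, 9, 99):
    print(w, round(relative_weight(w), 1))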
Conclusions

RBFS is a linear-space algorithm that always expands new nodes in best-first order, even with a non-monotonic cost function. While its time complexity depends on the cost function, for the special case where cost is equal to depth, corresponding to breadth-first search, RBFS is asymptotically optimal, generating O(b^d) nodes. With a monotonic cost function, it finds optimal solutions, while expanding fewer nodes than iterative deepening. With a non-monotonic cost function on the sliding-tile puzzles, both RBFS and iterative deepening generate orders of magnitude fewer nodes than required to find optimal solutions, with only small increases in solution lengths. RBFS consistently finds shorter solutions than iterative deepening with the same cost function, but also generates more nodes in this domain. The number of nodes generated by RBFS was a constant multiple of the nodes that would be generated by standard best-first search on a tree, if sufficient memory were available to execute it. Thus, RBFS reduces the space complexity of best-first search from exponential to linear in general, while increasing the time complexity by only a constant factor in our experiments and analyses.
References

Bratko, I., PROLOG: Programming for Artificial Intelligence, Addison-Wesley, 1986, pp. 265-273.

Chakrabarti, P.P., S. Ghose, A. Acharya, and S.C. de Sarkar, Heuristic search in restricted memory, Artificial Intelligence, Vol. 41, No. 2, Dec. 1989, pp. 197-221.

Davis, H.W., A. Bramanti-Gregor, and J. Wang, The advantages of using depth and breadth components in heuristic search, in Methodologies for Intelligent Systems 3, Z.W. Ras and L. Saitta (Eds.), North-Holland, Amsterdam, 1989, pp. 19-28.

Dijkstra, E.W., A note on two problems in connexion with graphs, Numerische Mathematik, Vol. 1, 1959, pp. 269-271.

Gaschnig, J., Performance measurement and analysis of certain search algorithms, Ph.D. thesis, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Pa., 1979.

Hart, P.E., N.J. Nilsson, and B. Raphael, A formal basis for the heuristic determination of minimum cost paths, IEEE Transactions on Systems Science and Cybernetics, Vol. 4, No. 2, 1968, pp. 100-107.

Korf, R.E., Depth-first iterative-deepening: An optimal admissible tree search, Artificial Intelligence, Vol. 27, No. 1, 1985, pp. 97-109.

Korf, R.E., Real-time heuristic search, Artificial Intelligence, Vol. 42, No. 2-3, March 1990, pp. 189-211.

Korf, R.E., Real-time search for dynamic planning, Proceedings of the AAAI Symposium on Planning in Uncertain, Unpredictable, or Changing Environments, Stanford, Ca., March 1990, pp. 72-76.

Korf, R.E., Best-first search in limited memory, in UCLA Computer Science Annual, University of California, Los Angeles, Ca., April 1991, pp. 5-22.

Korf, R.E., Linear-space best-first search: Extended abstract, Proceedings of the Sixth International Symposium on Computer and Information Sciences, Antalya, Turkey, October 30, 1991, pp. 581-584.

Korf, R.E., Linear-space best-first search, submitted for publication, August 1991.

Mahanti, A., D.S. Nau, S. Ghosh, and L.N. Kanal, An efficient iterative threshold heuristic tree search algorithm, Technical Report UMIACS TR 92-29, CS TR 2853, Computer Science Department, University of Maryland, College Park, Md., March 1992.

Patrick, B.G., M. Almulla, and M.M. Newborn, An upper bound on the complexity of iterative-deepening-A*, in Proceedings of the Symposium on Artificial Intelligence and Mathematics, Ft. Lauderdale, Fla., Dec. 1989.

Pearl, J., Heuristics, Addison-Wesley, Reading, Mass., 1984.

Pohl, I., Heuristic search viewed as path finding in a graph, Artificial Intelligence, Vol. 1, 1970, pp. 193-204.

Rao, V.N., V. Kumar, and R.E. Korf, Depth-first vs. best-first search, Proceedings of the National Conference on Artificial Intelligence (AAAI-91), Anaheim, Ca., July 1991, pp. 434-440.

Ruby, D., and D. Kibler, Learning subgoal sequences for planning, Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, Mich., Aug. 1989, pp. 609-614.

Russell, S., Efficient memory-bounded search methods, Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92), Vienna, Austria, Aug. 1992.

Sarkar, U.K., P.P. Chakrabarti, S. Ghose, and S.C. DeSarkar, Reducing reexpansions in iterative-deepening search by controlling cutoff bounds, Artificial Intelligence, Vol. 50, No. 2, July 1991, pp. 207-221.

Sen, A.K., and A. Bagchi, Fast recursive formulations for best-first search that allow controlled use of memory, Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, Michigan, Aug. 1989, pp. 297-302.

Wah, B.W., MIDA*, an IDA* search with dynamic control, Technical Report UILU-ENG-91-2216, CRHC-91-9, Center for Reliable and High-Performance Computing, Coordinated Science Laboratory, University of Illinois, Urbana, Ill., April 1991.