Search Algorithms

RICHARD E. KORF

Computer Science, University of California, Los Angeles, Los Angeles, CA 90024


The Problem

Many problems, such as one- and two-player games, constraint-satisfaction tasks, and combinatorial optimization problems, can be solved by heuristic search of a problem-space graph. These searches are guided by a heuristic evaluation function that estimates the merit of a node with respect to the goal. Given such a function, a natural algorithm is best-first search, which at each point explores further the node that appears best according to the evaluation function. The simplest example of a best-first search is breadth-first search, where the cost of a node is its depth in the graph. If different edges have different costs, breadth-first search generalizes to Dijkstra's [1959] single-source shortest-path algorithm. If, in addition to the cost from the root to a node, we have a heuristic estimate of the cost from the node to a goal, we get the A* algorithm [Hart et al. 1968]. All these algorithms are special cases of best-first search, differing only in their evaluation functions. In fact, best-first search, with an evaluation function that does not overestimate actual cost, is an optimal algorithm for finding lowest-cost solutions to such problems [Dechter and Pearl 1985]. The main problem with best-first search, however, is that it must store in memory all the frontier nodes in order to determine the best node to expand next. In most cases, the memory required by a best-first search grows linearly with the

running time and exponentially with the problem size. For example, on a 10-MIPS machine with 100 megabytes of memory, a problem that requires 100 instructions to generate a node and four bytes to store it would exhaust the memory available in just over four minutes, unless a solution were found first. As memory capacities increase, processors get faster as well, and best-first search still exhausts memory in minutes. This space limitation has been the focus of a significant body of research over the past decade, which we briefly survey in this article.

Iterative Deepening

One approach to this problem is to use depth-first search (DFS), because its space complexity is only linear in the search depth. There are several drawbacks to DFS, however. One is that it will not terminate on an infinite tree. Furthermore, on a finite tree, the first solution found by DFS is not necessarily optimal, and continuing the search for an optimal solution may take much longer than a breadth-first search. The solution to these problems, called iterative deepening, is to perform a series of depth-first searches, each with a depth limit that is one greater than that of the previous iteration, until a solution is found [Korf 1985; Stickel and Tyson 1985]. In other words, first search depth-first to depth one, then search depth-first to depth two, and so on. The first solution found is a shortest solution, the space

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. © 1995 ACM 0360-0300/95/0900-0337 $03.50




ACM Computing Surveys, Vol. 27, No. 3, September 1995






complexity is linear in the maximum depth, and the overhead of generating some nodes multiple times does not affect the asymptotic complexity of the algorithm. Iterative deepening readily generalizes to more complex cost functions, where the cutoff condition of the depth-first searches is the total cost of a branch. In iterative-deepening-A* (IDA*), for example, a branch is pruned when its total cost exceeds a threshold for that iteration [Korf 1985]. This threshold is incremented in each successive iteration to the minimum value of those nodes that exceeded the threshold on the previous iteration [Korf 1985; Patrick et al. 1992]. Several versions of IDA* set these cost increments less conservatively, in order to minimize the node-regeneration overhead of the algorithm [Rao et al. 1991; Sarkar et al. 1991; Wah and Shang 1995].
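The threshold scheme just described can be sketched as follows. This is a minimal illustration of the idea, not the implementation from [Korf 1985]; the successors, edge_cost, h, and is_goal callbacks are hypothetical interface assumptions.

```python
import math

def ida_star(root, successors, edge_cost, h, is_goal):
    """Iterative-deepening-A*: a series of depth-first searches whose
    cutoff is the total branch cost f(n) = g(n) + h(n).
    Returns (solution_path, cost), or (None, math.inf) if no solution."""
    bound = h(root)  # initial threshold: heuristic estimate at the root

    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f, None          # prune; report the cost that exceeded the bound
        if is_goal(node):
            return f, list(path)    # found a solution along this branch
        minimum = math.inf          # smallest branch cost that exceeded the bound
        for child in successors(node):
            if child in path:       # avoid cycles on the current branch
                continue
            path.append(child)
            t, solution = search(path, g + edge_cost(node, child), bound)
            path.pop()
            if solution is not None:
                return t, solution
            minimum = min(minimum, t)
        return minimum, None

    while True:
        t, solution = search([root], 0, bound)
        if solution is not None:
            return solution, t
        if t == math.inf:
            return None, math.inf   # space exhausted; no solution exists
        bound = t  # next threshold: minimum cost that exceeded the old one
```

The final line is the conservative threshold update from the text; the less conservative variants cited above would increase `bound` by larger amounts, trading some solution-quality bookkeeping for fewer iterations.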

The ideas of iterative deepening and node retraction (described below) have been combined in an algorithm called recursive best-first search [Korf 1993]. This algorithm maintains the path to the current best frontier node, along with all siblings of nodes on this path and the value of the best node under each of the siblings. When the best path changes from one frontier node to another, the algorithm backs up the path to their lowest common ancestor and then proceeds down to the new best node. As a result, its space requirement is only linear in the maximum search depth. Related algorithms include those of Ghosh et al. [1994] and Russell [1992].



Node Retraction

Iterative deepening uses very little of the memory available on most machines. The simplest way to make use of this additional memory is to run best-first search until most of the memory is filled, and then perform further node expansions by iterative deepening below the frontier nodes [Sen and Bagchi 1989]. This approach allocates all the available memory to the first nodes that are generated. A more flexible approach maintains the best nodes in memory [Chakrabarti et al. 1989]. When memory is exhausted, the children of the worst node in memory are deleted, and the value of their parent is updated to the minimum value among the children. This is a more accurate estimate of the value of the parent, and the process is known as node retraction. The algorithm alternates phases of node retraction and expansion until a goal node is chosen for expansion. The initial implementation of this idea was plagued by very high overheads [Chakrabarti et al. 1989], but subsequent implementations have improved its efficiency [Evett et al. 1990; Russell 1992].

Many of these techniques have been applied to two-player games as well. For example, iterative deepening was originally used in two-player games to guarantee that a reasonable move is always available, because it is difficult to predict how long a search to a given depth might take [Slate and Atkin 1977]. Recursive best-first search is easily generalized to two-player minimax trees, resulting in a selective best-first minimax algorithm [Korf and Chickering 1996]. A number of other best-first alternatives to alpha-beta minimax have made use of the techniques described here to reduce the memory requirements of those algorithms [Bhattacharya 1995; Plaat et al. 1995; Reinefeld and Ridinger 1994].

For single-agent search problems, a form of bidirectional search

has been proposed [Dillenburg and Nelson 1994]. A breadth-first search is performed backward from the goal state until most of the memory is filled. Then a linear-space forward search from the initial state is executed until a state on the frontier of the breadth-first search from the goal is reached. This appears to be the most time-efficient way to use more than linear space in a search algorithm






[Kaindl et al. 1995]. In addition, the perimeter around the goal can be used to improve the accuracy of the heuristic evaluation function, leading to further savings [Manzini 1995].

Conclusion

A number of algorithms have been designed to simulate best-first search without exhausting the memory available on existing machines. Surprisingly, although designed to operate using less memory, many of these algorithms actually run faster than standard best-first search, in spite of generating more nodes, because of lower overhead per node generation [Zhang and Korf 1995].

REFERENCES
BHATTACHARYA, S. 1995. Experimenting with revisits in game tree search. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95) (Montreal, Canada, Aug.) (to appear).

CHAKRABARTI, P. P., GHOSE, S., ACHARYA, A., AND DE SARKAR, S. C. 1989. Heuristic search in restricted memory. Artif. Intell. 41, 2 (Dec.), 197-221.

DECHTER, R. AND PEARL, J. 1985. Generalized best-first search strategies and the optimality of A*. J. ACM 32, 3 (July), 505-536.

DIJKSTRA, E. W. 1959. A note on two problems in connexion with graphs. Numer. Math. 1, 269-271.

DILLENBURG, J. F. AND NELSON, P. C. 1994. Perimeter search. Artif. Intell. 65, 1 (Jan.), 165-178.

EVETT, M., HENDLER, J., MAHANTI, A., AND NAU, D. 1990. PRA*: A memory-limited heuristic search procedure for the connection machine. In Third Symposium on the Frontiers of Massively Parallel Computation, 145-149.

GHOSH, R., MAHANTI, A., AND NAU, D. S. 1994. An efficient limited-memory heuristic tree search algorithm. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94) (Seattle, WA, Aug.), 1353-1358.

HART, P. E., NILSSON, N. J., AND RAPHAEL, B. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 4, 2, 100-107.

KAINDL, H., KAINZ, G., LEEB, A., AND SMETANA, H. 1995. How to use limited memory in heuristic search. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95) (Montreal, Canada, Aug.) (to appear).

KORF, R. E. 1985. Depth-first iterative-deepening: An optimal admissible tree search. Artif. Intell. 27, 1, 97-109.

KORF, R. E. 1993. Linear-space best-first search. Artif. Intell. 62, 1 (July), 41-78.

KORF, R. E. AND CHICKERING, D. M. 1996. Best-first minimax search. Artif. Intell. (to appear).

MANZINI, G. 1995. BIDA*: An improved perimeter search algorithm. Artif. Intell. 75, 2 (June), 347-360.

PATRICK, B. G., ALMULLA, M., AND NEWBORN, M. M. 1992. An upper bound on the complexity of iterative-deepening-A*. Ann. Math. Artif. Intell. 5, 265-278.

PLAAT, A., SCHAEFFER, J., PIJLS, W., AND DE BRUIN, A. 1995. Best-first minimax search in practice. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95) (Montreal, Canada, Aug.) (to appear).

RAO, V. N., KUMAR, V., AND KORF, R. E. 1991. Depth-first vs. best-first search. In Proceedings of the National Conference on Artificial Intelligence (AAAI-91) (Anaheim, CA, July), 434-440.

REINEFELD, A. AND RIDINGER, P. 1994. Time-efficient state space search. Artif. Intell. 71, 2 (Dec.), 397-408.

RUSSELL, S. 1992. Efficient memory-bounded search methods. In Proceedings of the Tenth European Conference on Artificial Intelligence, ECAI-92 (Vienna, Austria, Aug.).

SARKAR, U. K., CHAKRABARTI, P. P., GHOSE, S., AND DE SARKAR, S. C. 1991. Reducing reexpansions in iterative deepening search by controlling cutoff bounds. Artif. Intell. 50, 2 (July), 207-221.

SEN, A. K. AND BAGCHI, A. 1989. Fast recursive formulations for best-first search that allow controlled use of memory. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, IJCAI-89 (Detroit, MI, Aug.), 297-302.

SLATE, D. J. AND ATKIN, L. R. 1977. CHESS 4.5: The Northwestern University chess program. In Chess Skill in Man and Machine, P. W. Frey, Ed., Springer-Verlag, New York, 82-118.

STICKEL, M. E. AND TYSON, W. M. 1985. An analysis of consecutively bounded depth-first search with applications in automated deduction. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI-85 (Los Angeles, CA, Aug.), 1073-1075.

WAH, B. W. AND SHANG, Y. 1995. A comparison of a class of IDA* search algorithms. Int. J. Artif. Intell. Tools (to appear).

ZHANG, W. AND KORF, R. E. 1995. Performance of linear-space search algorithms. Artif. Intell., July (to appear).





