
Lecture Note

Artificial Intelligence “Search”

Professor Dr. Hiroyuki Iida


JAIST

2012-11-01
Artificial intelligence (AI) is the intelligence of
machines and the branch of computer science
that aims to create it.
AI textbooks define the field as "the study
and design of intelligent agents" where an
intelligent agent is a system that perceives its
environment and takes actions that maximize
its chances of success.
John McCarthy, who coined the term in
1956, defines it as "the science and
engineering of making intelligent machines."

Wikipedia, Artificial Intelligence


The field was founded on the claim that a central
property of humans, intelligence—the sapience of
Homo sapiens—can be so precisely described that it
can be simulated by a machine.
This raises philosophical issues about the
nature of the mind and the limits of scientific inquiry,
issues which have been addressed by myth, fiction
and philosophy since antiquity.
Artificial intelligence has been the subject of
optimism, but has also suffered setbacks and, today,
has become an essential part of the technology
industry, providing the heavy lifting for many of the
most difficult problems in computer science.

continued, Artificial Intelligence


AI research is highly technical and specialized, deeply
divided into subfields that often fail to communicate with
each other.
Subfields have grown up around particular
institutions, the work of individual researchers, the solution
of specific problems, longstanding differences of opinion
about how AI should be done and the application of widely
differing tools.
The central problems of AI include such traits as
reasoning, knowledge, planning, learning, communication,
perception and the ability to move and manipulate objects.
General intelligence (or "strong AI") is still among the field's
long term goals.

continued, Artificial Intelligence


Search and optimization
Main articles: Search algorithm, Optimization (mathematics), and
Evolutionary computation
Many problems in AI can be solved in theory by intelligently
searching through many possible solutions: Reasoning can be reduced
to performing a search.
For example, logical proof can be viewed as searching for a path
that leads from premises to conclusions, where each step is the
application of an inference rule.

Planning algorithms search through trees of goals and subgoals, attempting to
find a path to a target goal, a process called means-ends analysis.
Robotics algorithms for moving limbs and grasping objects use local searches
in configuration space. Many learning algorithms use search algorithms based
on optimization.

Wikipedia, Search - optimization
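To make "intelligently searching through many possible solutions" concrete, here is a minimal sketch (not from the cited article) of breadth-first search over an abstract state space; the successor function and goal test below are hypothetical placeholders for, e.g., inference-rule applications or planning subgoals.

from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Return a list of states from start to a goal state, or None.
    successors(state) yields the states reachable in one step."""
    frontier = deque([[start]])          # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: shallowest paths first
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # reachable space exhausted

# Tiny example: reach 10 from 1, where one step doubles or adds one.
print(breadth_first_search(1, lambda s: s == 10,
                           lambda s: [s * 2, s + 1]))   # [1, 2, 4, 5, 10]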


Simple exhaustive searches are rarely sufficient for most real world
problems: the search space (the number of places to search)
quickly grows to astronomical numbers. The result is a search that
is too slow or never completes. The solution, for many problems, is
to use "heuristics" or "rules of thumb" that eliminate choices that
are unlikely to lead to the goal (called "pruning the search tree").
Heuristics supply the program with a "best guess" for what path the
solution lies on.
A very different kind of search came to prominence in the
1990s, based on the mathematical theory of optimization. For many
problems, it is possible to begin the search with some form of a
guess and then refine the guess incrementally until no more
refinements can be made. These algorithms can be visualized as
blind hill climbing: we begin the search at a random point on the
landscape, and then, by jumps or steps, we keep moving our guess
uphill, until we reach the top. Other optimization algorithms are
simulated annealing, beam search and random optimization.
continued, Search - optimization
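The "blind hill climbing" picture can be written down in a few lines. A rough sketch, where the one-dimensional integer landscape and step size are made-up stand-ins for a real objective:

import random

def hill_climb(start, neighbours, score, max_steps=10_000):
    """Greedy local search: keep moving to a better neighbour until no
    neighbour improves the current guess (a local optimum)."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=score)
        if score(best) <= score(current):
            return current               # no uphill move left
        current = best
    return current

# Toy landscape: maximize f(x) = -(x - 7)^2 over the integers.
f = lambda x: -(x - 7) ** 2
print(hill_climb(random.randint(-50, 50),
                 neighbours=lambda x: [x - 1, x + 1],
                 score=f))               # climbs to 7

Restarting the climb from several random points, or occasionally accepting downhill moves as in simulated annealing, is the usual way around the local optima that a single blind climb can get stuck on.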
Evolutionary computation uses a form of optimization search. For example,
evolutionary algorithms may begin with a population of organisms (the
guesses) and then allow them to mutate and recombine, selecting only the
fittest to survive each generation (refining the guesses).
Forms of evolutionary computation include
swarm intelligence algorithms (such as ant colony
or particle swarm optimization) and
evolutionary algorithms (such as
genetic algorithms and genetic programming).

continued, Search - optimization
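A toy genetic algorithm along these lines might look as follows; the bit-string encoding, the "count the ones" fitness and all parameter values are arbitrary illustrative choices, not part of the lecture material.

import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      mutation_rate=0.02):
    """Tiny genetic algorithm over bit strings: keep the fitter half,
    recombine pairs by one-point crossover, and mutate single bits."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # only the fittest survive
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in a[:cut] + b[cut:]] # bitwise mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "OneMax": the fittest individual is the all-ones string.
best = genetic_algorithm(fitness=sum)
print(sum(best), best)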


Search: The Central Paradigm of AI
http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/Search

"Search is a problem-solving technique that systematically explores a


space of problem states, i.e., successive and alternative stages in the
problem-solving process.
Examples of problem states might include the different board
configurations in a game or intermediate steps in a reasoning process.
This space of alternative solutions is then searched to find an answer.
Newell and Simon (1976) have argued that this is the essential
basis of human problem solving. Indeed, when a chess player
examines the effects of different moves or a doctor considers a
number of alternative diagnoses, they are searching among
alternatives."
- from Section 1.2 of Chapter One (available online) of George
F. Luger's textbook, Artificial Intelligence: Structures and Strategies for
Complex Problem Solving, 5th Edition (Addison-Wesley; 2005).

Search. From Artificial Intelligence, by David B. Leake, Indiana
University [to appear, Van Nostrand Scientific Encyclopedia,
Ninth Edition, Wiley, New York, 2002].

"In 1976, Newell and Simon [Newell and Simon1976] proposed


that intelligent behavior arises from the manipulation of
symbols--entities that represent other entities, and that the
process by which intelligence arises is heuristic search. Search
is a process of formulating and examining alternatives."

Search. An AI Bite by Simon Colton. Sponsored by, and available from, The
Society for the Study of Artificial
Intelligence and Simulation of Behaviour. "From choosing the
right move to beat a chess grandmaster, to synthesising the
right chemical to dock with a protein, search techniques are
used across AI applications and have been very successful
over the years."
continued
Search Definitions and Search Methods from Patrick Doyle. An extensive
overview covering topics such as Real-time A*, Constraint-Satisfaction
Problems, Game Trees, Alpha-Beta Pruning, Beam Search, Simulated
Annealing and Notable Search Programs (Logic Theorist, General
Problem Solver, and ABSTRIPS). [Made available via CS 44, Artificial
Intelligence, Computer Science Department, Dartmouth College, Winter 1998.]

Cognitive Science 108b: Artificial Intelligence Modeling - Lecture Notes from
John Batali, Associate Professor, Department of Cognitive Science,
University of California at San Diego.

Solving Problems by Searching -and- Informed Search Methods. Lecture notes
from Dr. Martin Johnson, Massey University, Albany, New Zealand.
"This is the essence of search - choosing one option and putting the
others aside for later, in case the first choice does not lead to a solution.
We continue choosing, testing, and expanding until a solution is found or
until there are no more states that can be expanded. The choice of which
state to expand first is determined by the search strategy."
continued
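Johnson's description, "choosing one option and putting the others aside for later", amounts to keeping a frontier of unexpanded states; in the illustrative sketch below (the tree of states is invented), changing only which stored state is expanded next turns breadth-first search into depth-first search.

from collections import deque

def tree_search(start, is_goal, successors, strategy="breadth"):
    """Expand one state, put its alternatives aside, repeat until a goal
    is found or nothing is left to expand.  The strategy only decides
    which stored state to expand next (FIFO = breadth-first,
    LIFO = depth-first).  Assumes a finite tree of states."""
    frontier = deque([start])
    while frontier:
        state = frontier.popleft() if strategy == "breadth" else frontier.pop()
        if is_goal(state):
            return state
        frontier.extend(successors(state))   # set the others aside for later
    return None                              # no more states to expand

# Example tree: each state n < 15 has children 2n and 2n+1.
kids = lambda n: [2 * n, 2 * n + 1] if n < 15 else []
print(tree_search(1, lambda n: n == 11, kids, strategy="breadth"))   # 11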
"Search encompasses a very general class of
algorithms for exploring a problem space in order
to find a solution. Virtually every problem can be
solved using a search algorithm, although
searching can be relatively inefficient for tasks
such as arithmetic. Search methods can be divided
into systematic and nonsystematic algorithms."
From CIRL (The Computational Intelligence
Research Laboratory of the University of Oregon).
Be sure to follow the links to Subareas at the
bottom of their pages for related information.

Searching for Optimal Solutions: the A* Algorithm, IDA* (A Memory-Efficient
Variation of A*), and an
applet illustrating IDA* with sliding-tile puzzles. An
interactive resource from the AIxploratorium.
continued
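As a rough illustration of the A* idea referenced above (the 5x5 grid world, unit step costs and Manhattan-distance heuristic are placeholders, not the sliding-tile applet):

import heapq

def a_star(start, goal, neighbours, cost, h):
    """A*: always expand the node with the lowest f = g + h, where g is
    the cost paid so far and h is an admissible estimate of the cost
    still needed to reach the goal."""
    g = {start: 0}
    frontier = [(h(start), start)]
    came_from = {}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = [node]
            while node in came_from:          # rebuild the path found
                node = came_from[node]
                path.append(node)
            return list(reversed(path))
        for nxt in neighbours(node):
            new_g = g[node] + cost(node, nxt)
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return None

# Toy problem: 4-connected moves on a 5x5 grid from (0,0) to (4,4).
goal = (4, 4)
nbrs = lambda p: [(p[0] + dx, p[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= p[0] + dx < 5 and 0 <= p[1] + dy < 5]
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, nbrs, cost=lambda a, b: 1, h=manhattan))

IDA* trades the priority queue for repeated depth-first searches bounded by increasing f-values, which is what makes it memory-efficient.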
Higher Games - On the 10th anniversary of Deep Blue's triumph over
Garry Kasparov in chess, a prominent philosopher of mind asks, What
did the match mean? By Daniel C. Dennett. Technology Review
Magazine (September/October 2007).

"For a decade, human beings have had to live with the fact that one of
our species' most celebrated intellectual summits--the title of world
chess champion--has to be shared with a machine, Deep Blue, which
beat Garry Kasparov in a highly publicized match in 1997. How could
this be? What lessons could be gleaned from this shocking upset? Did
we learn that machines could actually think as well as the smartest of
us, or had chess been exposed as not such a deep game after all? ...

continued
Silicon machines can now play chess better than any
protein machines can. Big deal. This calm and reasonable
reaction, however, is hard for most people to sustain. They
don't like the idea that their brains are protein machines.

When Deep Blue beat Kasparov in 1997, many commentators were tempted to
insist that its brute-force
search methods were entirely unlike the exploratory
processes that Kasparov used when he conjured up his
chess moves. But that is simply not so. Kasparov's brain is
made of organic materials and has an architecture notably
unlike that of Deep Blue, but it is still, so far as we know, a
massively parallel search engine that has an outstanding
array of heuristic pruning techniques that keep it from
wasting time on unlikely branches."
continued
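In chess programs the "heuristic pruning techniques" Dennett mentions include, above all, alpha-beta pruning of the minimax game tree. A minimal sketch (the toy game tree and evaluation below are invented for illustration, not Deep Blue's search):

def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning: branches that cannot change the
    final decision are cut off, so the program can look deeper without
    visiting the whole tree."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)              # leaf node or depth cutoff
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                       # prune: opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, moves, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break                       # prune: we avoid this line
        return value

# Abstract two-ply game: tuples are internal nodes, integers are scores.
tree = ((3, 5), (2, 9), (0, 1))
moves = lambda s: list(s) if isinstance(s, tuple) else []
score = lambda s: s if isinstance(s, int) else 0
print(alphabeta(tree, 4, float("-inf"), float("inf"), True, moves, score))  # 3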
Search: An Overview. By Ann V. D. L. Gardner. AI
Magazine 2(1): Winter 1980, 2-6, 23.

"This article is the second planned excerpt from the


Handbook of Artificial Intelligence being complied at
Stanford University. This overview of the Handbook chapter
on search ... introduces the important ideas and techniques,
which are discussed in detail later in the chapter."

A Comparison of Fast Search Methods for Real-Time Situated Agents.
By Sven Koenig. In Proceedings of the International Joint Conference
on Autonomous Agents and Multiagent Systems (AAMAS), pages
864-871, 2004.

Abstract: "Real-time situated agents, including characters in real-time


computer games, often do not know the terrain in advance but
automatically observe it within a certain range around them. They
have to interleave planning with movement to make planning tractable
when moving autonomously to user-specified coordinates. Planning
faces real-time requirements since it is important that the agents be
responsive to the commands of the users and move smoothly.
In this paper, we compare two fast search methods for this
task that speed up planning in different ways, namely real-time
heuristic search (LRTA*) and incremental heuristic search (D* Lite),
resulting in the first comparison of real-time and incremental heuristic
search in the literature. We characterize when to choose which search
method, depending on the kind of terrain and the planning objective."
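A simplified sketch of the real-time heuristic search idea (one-step LRTA*): the agent plans only over its immediate neighbours, updates the heuristic of its current state, and moves, interleaving planning with movement. The grid, unit costs and heuristic below are illustrative assumptions, not Koenig's experimental setup.

def lrta_star(start, goal, neighbours, cost, h):
    """One-step LRTA*: pick the neighbour minimizing step cost plus
    heuristic, raise h(current) to that lookahead value, then move."""
    learned = {}                                  # heuristic values learned so far
    H = lambda s: learned.get(s, h(s))
    state, trajectory = start, [start]
    while state != goal:
        best = min(neighbours(state), key=lambda n: cost(state, n) + H(n))
        learned[state] = max(H(state), cost(state, best) + H(best))
        state = best                              # commit to the move now
        trajectory.append(state)
    return trajectory

# Toy problem: 4-connected moves on a 5x5 grid from (0,0) to (4,4).
goal = (4, 4)
nbrs = lambda p: [(p[0] + dx, p[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= p[0] + dx < 5 and 0 <= p[1] + dy < 5]
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(lrta_star((0, 0), goal, nbrs, cost=lambda a, b: 1, h=manhattan))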
Also see Sven Koenig's Fast Replanning project: "Autonomous
agents often need to operate in domains that are only incompletely
known or change dynamically. In this case, they need to replan
quickly as their knowledge changes.

For example, mobile robots or computer-controlled agents in combat games
(such as 'Total Annihilation' or 'Warcraft') might
observe obstacles in the terrain that they did not know about or their
goal locations might change (for example, if they chase moving
targets).

Replanning from scratch is often very time-consuming, but combat games have
very tight time requirements, need to control many
agents, cannot allocate much processor time for the artificial
intelligence, and often run on old computers with slow processors.
We therefore study planning methods that are incremental versions
of heuristic search methods, and analyze their behavior."

All the Needles in a Haystack: Can Exhaustive
Search Overcome Combinatorial Chaos? J.
Nievergelt, R. Gasser, F. Maser, C. Wirth. 1995.
(Available in several formats from CiteSeer.)

"Exhaustive search is truly a creation of the


computer era. Although the history of
mathematics records amazing feats of paper-
and-pencil computation, as a human activity,
exhaustive search is boring, error-prone,
exhausting, and never gets very far anyway."

Processing, bottom-up and top-down. Stuart C. Shapiro.
1987. In S. C. Shapiro, editor, Encyclopedia of Artificial
Intelligence, 779 - 785. New York: John Wiley & Sons, Inc.

"'Bottom-up' vs. 'top-down,' 'forward' vs. 'backward,' and


'data-driven' vs. 'goal-directed' are three pairs of modifiers
for terms such as 'chaining,' 'inference,' 'parsing,'
'processing,' 'reasoning,' and 'search.' They express
essentially the same distinction, their differences lying in
different metaphors drawn from different subareas of
computer science and AI. ...

One general way to consider the distinction is from within the paradigm of
search. The basic issue for all of search is
to find a way to get from where you are to where you want
to be."
"The economist Herbert Simon, who reminded us of the
futility of trying to consider every possible alternative in a
world without end....

It was Simon's ideas - particularly his notion of 'satisficing' - that first
got me interested in fiction-writing machines. ...

Computers are just as subject as humans to Simon's 'bounded rationality.'
Computers cannot create narratives
by using brute computational force to mindlessly try every
alternative." - from Computers as Authors? Literary
Luddites Unite! Essay by Daniel Akst. The New York
Times (November 22, 2004).

Search and Game Playing
http://aima.cs.berkeley.edu/ai.html#search

Computer Chess and Search
http://webdocs.cs.ualberta.ca/~tony/RecentPapers/report.mac.pdf

WCCC 2002: Boris Alterman, Alex Kure, Shay Bushinsky, Amir Ban, Tony Marsland
and Jaap van den Herik [6]

Chess pioneers in Sacher Hotel, Vienna, Austria 1980: Ben Mittman, Monty
Newborn, Tony Marsland, Dave Slate, David Levy, Claude Shannon, Ken Thompson,
Betty Shannon, Tom Truscott [5]
The JAIST 20th Anniversary
Events with the ICGA
(World Computer Chess
Championship, Computer
Olympiad and Conference)
will be held in Ishikawa, the first
time these events take place in Japan.

