# AI

MODULE: 1

BY: ABHINAV KISHORE

THE MONKEY & BANANAS PROBLEM

A monkey is in a cage, and bananas are suspended from the ceiling. The monkey wants to eat a banana but cannot reach them. In the room are a chair and a stick. If the monkey stands on the chair and waves the stick, he can knock a banana down to eat it. What are the actions the monkey should take?

Initial state: monkey on ground, with empty hand, bananas suspended
Goal state: monkey eating
Actions: climb chair / get off, grab X, wave X, eat X

SEARCH

Given a problem expressed as a state space (whether explicitly or implicitly) with operators/actions, an initial state, and a goal state, how do we find the sequence of operators needed to solve the problem? This requires search.

Formally, we define a search space as [N, A, S, GD]:
N = the set of nodes or states of a graph
A = the set of arcs (edges) between nodes that correspond to the steps in the problem (the legal actions or operators)
S = a nonempty subset of N that represents start states
GD = a nonempty subset of N that represents goal states

Our problem becomes one of traversing the graph from a node in S to a node in GD. We can use any of the numerous graph traversal techniques for this, but in general they divide into two categories:
- brute force (unguided search)
- heuristic (guided search)

CONSEQUENCES OF SEARCH

As shown a few slides back, the 8-puzzle has over 40,000 different states. What about the 15-puzzle?

A brute force search means trying all possible states blindly until you find the solution. For a problem requiring n moves, where each move consists of m choices, there are m^n possible states. Two forms of brute force search are depth-first search and breadth-first search.

A guided search examines a state and uses some heuristic (usually a function) to determine how good that state is (how close you might be to a solution), to help determine which state to move to. Examples include hill climbing, best-first search, the A/A* algorithm, and minimax.

While a good heuristic can reduce the complexity from m^n to something tractable, there is no guarantee, so any form of search remains exponential, O(2^n), in the worst case.

FORWARD VS BACKWARD SEARCH

The common form of reasoning starts with data and leads to conclusions. For instance, diagnosis is data-driven: given the patient's symptoms, we work toward disease hypotheses. We often think of this form of reasoning as "forward chaining" through rules.

Backward search reasons from goals to actions. Planning and design are often goal-driven: "backward chaining".

DEPTH-FIRST SEARCH

Starting at node A, our search gives us: A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R

DEPTH-FIRST SEARCH EXAMPLE

TRAVELING SALESMAN PROBLEM

BREADTH-FIRST SEARCH

Starting at node A, our search would generate the nodes in alphabetical order from A to U
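The two traversal orders can be sketched in Python. This is a minimal sketch on a small hypothetical tree, not the slides' A-to-U example (whose figure is not reproduced in this text):

```python
from collections import deque

def dfs(graph, start):
    """Depth-first: follow one branch to the bottom before backtracking."""
    visited, stack = [], [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            # push children in reverse so the leftmost child is expanded first
            stack.extend(reversed(graph.get(node, [])))
    return visited

def bfs(graph, start):
    """Breadth-first: expand all nodes at one depth before the next."""
    visited, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.append(node)
            queue.extend(graph.get(node, []))
    return visited

# a small hypothetical tree
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(dfs(tree, "A"))  # ['A', 'B', 'D', 'E', 'C', 'F']
print(bfs(tree, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

The only difference is the frontier data structure: a stack gives depth-first order, a queue gives breadth-first order.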

BREADTH-FIRST SEARCH EXAMPLE

BACKTRACKING SEARCH ALGORITHM


The monkey and the banana

The purpose of this example is to show the use of variables.

Description: A monkey enters a room via the door. In the room, near the window, is a box. In the middle of the room a banana hangs from the ceiling. The monkey wants to grasp the banana, and can do so after climbing on the box in the middle of the room.

States: For each state, we need to record:
- the position of the monkey (door, window, middle, ...)
- the position of the box
- whether the monkey is on the box
- whether the monkey has the banana

The initial state is (door, window, no, no). The set of goal states is (*, *, *, yes).


Moves:
walk(P): from (M, B, no, H) to (P, B, no, H).
push(P): from (M, M, no, H) to (P, P, no, H).
climb: from (M, M, no, H) to (M, M, yes, H).
grasp: from (middle, B, yes, no) to (middle, B, yes, yes).

State space: Without variables, the state space and search space can be very large (how many positions are there?). With variables, we can represent the reachable part as follows.
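The state space above can also be searched mechanically. A minimal Python sketch: the state tuple and the walk/push/climb/grasp moves follow the slide, while the choice of breadth-first search is an assumption (any complete search strategy would do):

```python
from collections import deque

POSITIONS = ["door", "window", "middle"]

def successors(state):
    """Legal moves from the slide: walk, push, climb, grasp."""
    monkey, box, on_box, has = state
    moves = []
    if not on_box:
        for p in POSITIONS:
            if p != monkey:
                moves.append((f"walk({p})", (p, box, False, has)))
        if monkey == box:  # beside the box: may push it or climb on it
            for p in POSITIONS:
                if p != monkey:
                    moves.append((f"push({p})", (p, p, False, has)))
            moves.append(("climb", (monkey, box, True, has)))
    elif monkey == "middle" and not has:
        moves.append(("grasp", (monkey, box, True, True)))
    return moves

def plan(start):
    """Breadth-first search for any goal state (*, *, *, yes)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state[3]:  # has banana
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan(("door", "window", False, False)))
# ['walk(window)', 'push(middle)', 'climb', 'grasp']
```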


Monkey and Banana Example

There is a monkey at the door of a room. In the middle of the room a banana hangs from the ceiling. The monkey wants it, but cannot jump high enough from the floor. At the window of the room there is a box that the monkey can use.

The monkey can perform the following actions:
- Walk on the floor
- Climb the box
- Push the box around (if it is beside the box)
- Grasp the banana (if it is standing on the box directly under the banana)

We define the state as a 4-tuple: (monkey at, on floor, box at, has banana)

The Prolog representation, as move(State1, Move, State2) facts and a canget/1 relation:

move( state( middle, onbox, middle, hasnot ),
      grasp,
      state( middle, onbox, middle, has ) ).

move( state( P, onfloor, P, H ),
      climb,
      state( P, onbox, P, H ) ).

move( state( P1, onfloor, P1, H ),
      push( P1, P2 ),
      state( P2, onfloor, P2, H ) ).

move( state( P1, onfloor, B, H ),
      walk( P1, P2 ),
      state( P2, onfloor, B, H ) ).

canget( state( _, _, _, has ) ).
canget( State1 ) :-
    move( State1, Move, State2 ),
    canget( State2 ).

Query: ?- canget( state( atdoor, onfloor, atwindow, hasnot ) ).

INTRODUCTORY PROBLEM: TIC-TAC-TOE


Program 1:

Data Structures:
- Board: a 9-element vector representing the board, with positions 1-9 for each square. An element contains 0 if the square is blank, 1 if it is filled by an X, or 2 if it is filled by an O.
- Movetable: a large vector of 19,683 (3^9) elements, each element a 9-element vector.

Algorithm:
1. View the Board vector as a ternary number and convert it to a decimal number.
2. Use the computed number as an index into the Move-Table and access the vector stored there.
3. Set the new board to that vector.
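The ternary-to-decimal conversion in step 1 can be sketched as follows (a hypothetical helper, not part of the original program):

```python
def board_index(board):
    """View the 9-element board vector (0 blank, 1 X, 2 O) as a ternary
    number, most significant digit first, and convert it to decimal."""
    index = 0
    for square in board:
        index = index * 3 + square
    return index

print(board_index([0] * 9))  # 0: the empty board
print(board_index([2] * 9))  # 19682: the largest index, 3**9 - 1
```

Every one of the 3^9 = 19,683 board vectors maps to a distinct index, which is why the Move-Table needs 19,683 entries.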

Comments: This program is very efficient in time. However:
1. It takes a lot of space to store the Move-Table.
2. It takes a lot of work to specify all the entries in the Move-Table.
3. It is difficult to extend.

The nine squares are numbered:
1 2 3
4 5 6
7 8 9

Program 2:

Data Structure: A nine-element vector representing the board, but instead of using 0, 1, and 2 in each element, we store 2 for blank, 3 for X, and 5 for O.

Functions:
- Make2: returns 5 if the center square is blank; otherwise returns any other blank square.
- Posswin(p): returns 0 if player p cannot win on his next move; otherwise it returns the number of the square that constitutes a winning move. If the product of a line's squares is 18 (3x3x2), then X can win; if the product is 50 (5x5x2), then O can win.
- Go(n): makes a move in square n.

Strategy:
Turn = 1: Go(1)
Turn = 2: If Board[5] is blank, Go(5), else Go(1)
Turn = 3: If Board[9] is blank, Go(9), else Go(3)
Turn = 4: If Posswin(X) is not 0, then Go(Posswin(X))
.......
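The Posswin product test can be sketched in Python. The 2/3/5 encoding and the 18/50 products follow the slide; the line table and the function's exact shape are assumptions:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def posswin(board, player):
    """board holds 2 (blank), 3 (X), 5 (O). Return the 1-based square that
    wins for player on the next move, or 0 if none exists. A line's product
    is 18 (3*3*2) iff X has two squares and a blank; 50 (5*5*2) for O."""
    target = 18 if player == "X" else 50
    for line in LINES:
        if board[line[0]] * board[line[1]] * board[line[2]] == target:
            for sq in line:
                if board[sq] == 2:
                    return sq + 1
    return 0

b = [3, 3, 2,  5, 5, 2,  2, 2, 2]   # X on squares 1, 2; O on squares 4, 5
print(posswin(b, "X"))  # 3
print(posswin(b, "O"))  # 6
```

Because 2, 3, and 5 are distinct primes, each product uniquely identifies the mix of marks in a line, so one multiplication replaces several comparisons.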

Comments:
1. Not as efficient in time, since it has to check several conditions before making each move.
2. Easier to understand the program's strategy.
3. Hard to generalize.

A variant renumbers the board as a magic square, so that every row, column, and diagonal sums to 15:

8 3 4
1 5 9
6 7 2

To check whether a player can complete a line, add the player's two squares and subtract from 15, e.g. 15 - (8 + 5) = 2: square 2 completes the line.
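A sketch of the 15-minus-sum check. The magic-square numbering follows the slide; the helper function is hypothetical:

```python
MAGIC = [[8, 3, 4],
         [1, 5, 9],
         [6, 7, 2]]   # every row, column, and diagonal sums to 15

def winning_square(a, b):
    """If a player holds magic-square numbers a and b, the number completing
    a line is 15 - (a + b); it is a legal square only if it lies in 1..9
    and differs from a and b. In a 3x3 magic square, any three distinct
    numbers from 1..9 summing to 15 do form a line."""
    c = 15 - (a + b)
    return c if 1 <= c <= 9 and c not in (a, b) else None

print(winning_square(8, 5))  # 2
print(winning_square(3, 4))  # 8
print(winning_square(9, 6))  # None: no third square completes that pair
```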

Comments:
1. Checking for a possible win is quicker.
2. Humans find the row-scan approach easier, while for the computer the number-counting approach is more efficient.

Program 3: Rate each candidate move:
1. If it is a win, give it the highest rating.
2. Otherwise, consider all the moves the opponent could make next. Assume the opponent will make the move that is worst for us, and assign the rating of that move to the current node.
3. The best node is then the one with the highest rating.

Comments:
1. Requires much more time, since it considers all possible moves.
2. Could be extended to handle more complicated games.

STATE SPACE SEARCH: PLAYING CHESS

Each position can be described by an 8-by-8 array. The initial position is the game opening position. A goal position is any position in which the opponent does not have a legal move and his or her king is under attack.

Legal moves can be described by a set of rules:
- Left sides are matched against the current state.
- Right sides describe the new resulting state.

The state space is the set of legal positions. Starting at the initial state, we use the set of rules to move from one state to another, attempting to end up in a goal state.

STATE SPACE SEARCH: WATER JUG PROBLEM

"You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?"

State: (x, y), where x = 0, 1, 2, 3, or 4 is the amount in the 4-litre jug and y = 0, 1, 2, or 3 is the amount in the 3-litre jug.

Start state: (0, 0). Goal state: (2, n) for any n.

The production rules:

1. (x, y) → (4, y) if x < 4
2. (x, y) → (x, 3) if y < 3
3. (x, y) → (x − d, y) if x > 0
4. (x, y) → (x, y − d) if y > 0
5. (x, y) → (0, y) if x > 0
6. (x, y) → (x, 0) if y > 0
7. (x, y) → (4, y − (4 − x)) if x + y ≥ 4, y > 0
8. (x, y) → (x − (3 − y), 3) if x + y ≥ 3, x > 0
9. (x, y) → (x + y, 0) if x + y ≤ 4, y > 0
10. (x, y) → (0, x + y) if x + y ≤ 3, x > 0
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)

A simple control loop:
1. current state = (0, 0)
2. Loop until reaching the goal state (2, 0):
   - Apply a rule whose left side matches the current state.
   - Set the new current state to be the resulting state.

One solution path: (0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
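The rules and control loop above amount to a small state-space search, sketched here with breadth-first search. The fill, empty, and pour rules follow the slides; rules 3 and 4 (pouring an unmeasured amount onto the ground) are omitted, since with no markers they cannot produce a measurable state:

```python
from collections import deque

def successors(x, y):
    """Apply the production rules: fill, empty, or pour between jugs."""
    candidates = [
        (4, y),                              # fill the 4-litre jug
        (x, 3),                              # fill the 3-litre jug
        (0, y),                              # empty the 4-litre jug
        (x, 0),                              # empty the 3-litre jug
        (min(4, x + y), max(0, x + y - 4)),  # pour 3-litre into 4-litre
        (max(0, x + y - 3), min(3, x + y)),  # pour 4-litre into 3-litre
    ]
    out = []
    for s in candidates:
        if s != (x, y) and s not in out:
            out.append(s)
    return out

def solve():
    """Breadth-first search from (0, 0) to 2 litres in the 4-litre jug."""
    frontier = deque([((0, 0), [(0, 0)])])
    seen = {(0, 0)}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == 2:
            return path
        for nxt in successors(x, y):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())  # a shortest solution: 6 moves, ending with x == 2
```

Breadth-first search guarantees a shortest rule sequence; which of the two symmetric 6-move solutions it returns depends on the rule ordering.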

The role of the conditions on the left side of a rule is to restrict the application of the rule, making the search more efficient, e.g.:
1. (x, y) → (4, y) if x < 4
2. (x, y) → (x, 3) if y < 3

Special-purpose rules capture special-case knowledge that can be used at some stage in solving a problem:
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)

SEARCH STRATEGIES

Requirements of a good search strategy:
1. It causes motion. Otherwise, it will never lead to a solution.
2. It is systematic. Otherwise, it may use more steps than necessary.
3. It is efficient. It should find a good, but not necessarily the best, answer.

1. Uninformed search (blind search): having no information about the number of steps from the current state to the goal.
2. Informed search (heuristic search): more efficient than uninformed search.

[figure: the first levels of the water jug search tree, branching from (0, 0) to (4, 0) and (0, 3) and onward]

SEARCH STRATEGIES: BLIND SEARCH

Breadth-first search: expand all the nodes of one level first.
Depth-first search: expand one of the nodes at the deepest level.

Criterion     Breadth-First   Depth-First
Time          b^d             b^m
Space         b^d             b·m
Optimal?      Yes             No
Complete?     Yes             No

b: branching factor, d: solution depth, m: maximum depth

SEARCH STRATEGIES: HEURISTIC SEARCH

Heuristic: "involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods." (Merriam-Webster's dictionary)

A heuristic technique improves the efficiency of a search process, possibly by sacrificing claims of completeness or optimality.

Heuristics are a response to combinatorial explosion, and optimal solutions are rarely needed.

The Travelling Salesman Problem

"A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities."

[figure: five cities A through E with pairwise road distances]

Nearest neighbour heuristic:
1. Select a starting city.
2. Select the city closest to the current city.
3. Repeat step 2 until all cities have been visited.

This heuristic runs in O(n^2), versus O(n!) for trying every tour.
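The heuristic can be sketched in Python. The distance matrix below is hypothetical, not the slides' five-city figure:

```python
def nearest_neighbour(dist, start):
    """Greedy O(n^2) tour: from each city go to the nearest unvisited one,
    then return to the start. Fast, but not guaranteed optimal."""
    n = len(dist)
    tour, current = [start], start
    unvisited = set(range(n)) - {start}
    length = 0
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])
        length += dist[current][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    length += dist[current][start]   # close the round trip
    tour.append(start)
    return tour, length

# hypothetical symmetric distance matrix for 4 cities
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbour(dist, 0))  # ([0, 1, 3, 2, 0], 18)
```

With n cities the loop runs n times over at most n candidates, which is where the O(n^2) bound comes from.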

HILL CLIMBING

Searching for a goal state is like climbing to the top of a hill: generate-and-test plus a direction to move. A heuristic function estimates how close a given state is to a goal state.

SIMPLE HILL CLIMBING

Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   - Select and apply a new operator.
   - Evaluate the new state:
     goal → quit
     better than current state → new current state

The evaluation function is a way to inject task-specific knowledge into the control process.

STEEPEST-ASCENT HILL CLIMBING (GRADIENT SEARCH)

Steepest-ascent hill climbing considers all the moves from the current state and selects the best one as the next state.

Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
   - SUCC = a state such that any possible successor of the current state will be better than SUCC (the worst state).
   - For each operator that applies to the current state, evaluate the new state:
     goal → quit
     better than SUCC → set SUCC to this state
   - If SUCC is better than the current state → set the current state to SUCC.
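A sketch of steepest-ascent hill climbing on a toy one-dimensional problem; the objective function and move set are hypothetical:

```python
def steepest_ascent(state, value, successors):
    """Steepest-ascent hill climbing: at each step take the best successor;
    stop when no successor improves on the current state (a local maximum)."""
    while True:
        candidates = successors(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state   # no successor is better: local maximum
        state = best

# toy example: maximise f(x) = -(x - 3)**2 over the integers, moving +/- 1
f = lambda x: -(x - 3) ** 2
print(steepest_ascent(0, f, lambda x: [x - 1, x + 1]))  # 3
```

On this single-peaked function the method reaches the global maximum; on a function with several peaks it would stop at whichever local maximum it climbs first, which is exactly the disadvantage discussed next.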

HILL CLIMBING: DISADVANTAGES

Local maximum: a state that is better than all of its neighbours, but not better than some other states farther away.

Plateau: a flat area of the search space in which all neighbouring states have the same value.

Ways out:
- Backtrack to some earlier node and try going in a different direction.
- Make a big jump to try to get into a new section of the search space.
- Move in several directions at once.

Hill climbing is a local method: it decides what to do next by looking only at the "immediate" consequences of its choices. Global information might be encoded in heuristic functions.

BEST-FIRST SEARCH

From depth-first search: not all competing branches have to be expanded. From breadth-first search: we avoid getting trapped on dead-end paths. Combining the two, we follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.

[figure: successive stages of a best-first search tree, expanding the most promising leaf at each step]

OPEN: nodes that have been generated but have not yet been examined. This is organized as a priority queue.

CLOSED: nodes that have already been examined. Whenever a new node is generated, we check whether it has been generated before.

Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
   - Pick the best node in OPEN.
   - Generate its successors.
   - For each successor:
     if new → evaluate it, add it to OPEN, record its parent
     if generated before → change the parent if this path is better, update its successors
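The OPEN/CLOSED scheme can be sketched with a priority queue. The graph and heuristic values below are hypothetical; this simplified version orders OPEN by h alone (i.e. greedy best-first) and skips the parent-updating step:

```python
import heapq

def best_first(start, goal, successors, h):
    """Best-first search: OPEN is a priority queue ordered by the heuristic
    h; CLOSED records nodes already examined so they are not expanded twice."""
    open_heap = [(h(start), start, [start])]   # (priority, node, path)
    closed = set()
    while open_heap:
        _, node, path = heapq.heappop(open_heap)   # best node in OPEN
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nxt in successors(node):
            if nxt not in closed:
                heapq.heappush(open_heap, (h(nxt), nxt, path + [nxt]))
    return None

# hypothetical graph and heuristic estimates of distance to goal 'G'
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": ["G"]}
h = {"A": 3, "B": 2, "C": 1, "D": 1, "G": 0}
print(best_first("A", "G", lambda n: graph.get(n, []), h.get))
# ['A', 'C', 'G']
```

Ordering OPEN by g(n) instead of h(n) would turn this into uniform-cost search, matching the distinction drawn below.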

Greedy search: h(n) = estimated cost of the cheapest path from node n to a goal state. Neither optimal nor complete.

Uniform-cost search: g(n) = cost of the cheapest path from the initial state to node n. Optimal and complete, but can be very inefficient.

PROBLEM REDUCTION

AND-OR graphs decompose a goal into alternative subgoals (OR) or conjunctions of subgoals (AND). For example:

Goal: Acquire TV set
  either: Steal TV set
  or: Earn some money AND Buy TV set

Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980)

PROBLEM REDUCTION: AO*

[figures: successive stages of AO* on an AND-OR graph, illustrating the necessary backward propagation of revised cost estimates]
