
Artificial Intelligence

Search Problem (2)


Uninformed and informed searches
Since we know what the goal state is like, is it
possible to get there faster?
(Breadth-first search expands blindly outward; an
"oracle" path would lead straight to the goal.)
Search Example: Route Finding

Actions: go straight, turn left, turn right


Goal: shortest? fastest? most scenic?
Heuristic Search
Heuristic search means choosing branches in a state
space that are most likely to lead to an acceptable
problem solution, used when no exact solution is
available (as in medical diagnosis) or when the
computational cost of an exact solution is very
high (as in chess).


Informed search
• So far, have assumed that no nongoal state
looks better than another
• Unrealistic
– Even without knowing the road structure, some
locations seem closer to the goal than others
– Some states of the 8-puzzle seem closer to the
goal than others
• Makes sense to expand closer-seeming
nodes first
Heuristic function
• Heuristic function h(n) estimates the cost of
reaching the goal from node n
• Example: an 8-puzzle start state vs. the goal
state (figure)
A heuristic function
• Let the evaluation function be h(n) (the heuristic)
– h(n) = estimated cost of the cheapest path from
node n to a goal node
– If n is a goal, then h(n) = 0

Examples (1):
• Imagine the problem of finding a route on a road
map and that the NET below is the road map:

(figure: a road map NET with nodes S, A, B, C, D, E, F, G and
distances marked on the edges)

Define f(T) = the straight-line distance from T to G:
f(A)=10.4, f(B)=6.7, f(C)=4, f(S)=11, f(D)=8.9, f(E)=6.9, f(F)=3

The estimate can be wrong!
A Quick Review
• g(n) = cost from the initial state to the current
state n

• h(n) = estimated cost of the cheapest path from


node n to a goal node

• f(n) = evaluation function to select a node for


expansion (usually the lowest cost node)

(figures: seven snapshots of a search on a small map with nodes
A, B, C, D, E and edge costs between 50 and 150, as the
accumulated path cost grows through 50, 125, 200, 300 and 450,
while an alternative route reaches 380)
Heuristic Functions
• Estimate of path cost h
– From a state to the nearest solution
– h(state) >= 0
– h(solution) = 0
• Example: straight-line distance
– "As the crow flies" in route finding
• Where does h come from?
– Maths, introspection, inspection,
or programs (e.g. ABSOLVER)
(figure: a map of Liverpool, Leeds, Nottingham, Peterborough
and London with straight-line distances between them)
Romania with straight-line distances
Examples (2): 8-puzzle

(state shown: 1 3 2 / 8 _ 4 / 5 6 7)

f1 = number of tiles out of place = 4

f2 = sum of the distances of the tiles from their
goal positions = 1+1+2+2 = 6

Most often, ‘distance to goal’ heuristics are more useful!
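The two heuristics above can be sketched in Python (a minimal sketch; the 9-tuple state layout with 0 for the blank is a representation chosen here for illustration):

```python
# Sketch of the two 8-puzzle heuristics above; states are 9-tuples
# in row-major order, with 0 standing for the blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h_misplaced(state, goal=GOAL):
    """f1: number of tiles (blank excluded) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_manhattan(state, goal=GOAL):
    """f2: sum of each tile's horizontal + vertical distance
    to its goal cell."""
    total = 0
    for i, tile in enumerate(state):
        if tile:
            j = goal.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

h_manhattan is never smaller than h_misplaced on the same state, which is what makes the 'distance to goal' heuristic the more informative of the two.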
Examples (4): Chess
• f(T) = (value count of black pieces) - (value
count of white pieces)
(the original figure sums the values v(.) of the pieces on
one side and subtracts the values of the pieces on the other)
Heuristic Evaluation Function
• It evaluates how promising a state is for
reaching a solution:
f(n) = g(n) + h(n)
– where f is the evaluation function
– g(n) is the actual length of the path from the
start state to state n
– h(n) estimates the distance from state n to the
goal
Search Methods
• Best-first search

• Greedy best-first search

• A* search

• Hill-climbing search

• Genetic algorithms
Best-First Search
• Evaluation function f gives cost for each state
– Choose state with smallest f(state) (‘the best’)
– Agenda: f decides where new states are put
– Graph: f decides which node to expand next

• Many different strategies depending on f


– For uniform-cost search f = path cost
– greedy best-first search
– A* search
Greedy best-first search
• Evaluation function f(n) = h(n)
(heuristic)
= estimate of cost from n to goal
• Ignores the path cost
• Greedy best-first search expands the
node that appears to be closest to goal

Greedy search
• Use as an evaluation function
f(n) = h(n)
• Selects the node to expand that is
believed to be closest (hence
"greedy") to a goal node (i.e.,
the node with the smallest f value)
• As in the example (figure: a search tree whose nodes
carry h-values, with one branch ending in goal g and
another ending in goal i):
– Assuming all arc costs are 1, greedy
search will find goal g, which has a solution
cost of 5.
– However, the optimal solution is the path to
goal i, with cost 3.
Romania with step costs in km
Greedy best-first search example
Optimal Path
Greedy Best-First Search Algorithm
Input: State Space
Output: failure or a path from a start state to a goal state.
Assumptions:
 L is a list of nodes that have not yet been examined ordered by their h value.
 The state space is a tree where each node has a single parent.
1. Set L to be a list of the initial nodes in the problem.
2. While L is not empty
1. Pick a node n from the front of L.
2. If n is a goal node
1. stop and return it and the path from the initial node to n.
Else
1. remove n from L.
2. For each child c of n
1. insert c into L while preserving the ordering of nodes in L and labelling c with its path from the initial
node as well as its h value.
End for
End if
End while
Return failure
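The steps above can be sketched with a priority queue ordered by h (a minimal sketch; a visited set is added so the code also works on general graphs, relaxing the slide's single-parent tree assumption, and the toy graph at the end is hypothetical):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the open node with the smallest h value (the list L)."""
    frontier = [(h(start), start, [start])]   # L, ordered by h
    visited = set()
    while frontier:
        _, n, path = heapq.heappop(frontier)
        if n == goal:
            return path                       # goal found: return the path
        if n in visited:
            continue
        visited.add(n)
        for c in neighbors(n):
            if c not in visited:
                heapq.heappush(frontier, (h(c), c, path + [c]))
    return None                               # failure

# Hypothetical toy graph and heuristic for illustration:
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h_est = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
```

Calling `greedy_best_first('S', 'G', graph.__getitem__, h_est.__getitem__)` follows the smallest-h node B straight to G.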
Properties of greedy best-first search
• Complete?
– Not unless it keeps track of all states visited
• Otherwise can get stuck in loops (just like DFS)

• Optimal?
– No – we just saw a counter-example

• Time?
– O(b^m): can generate all nodes at depth m before finding a solution
– m = maximum depth of the search space

• Space?
– O(b^m): again, worst case, can generate all nodes at depth m before
finding a solution
Uniform Cost Search
• Let g(n) be the sum of the edges costs from root to
node n. If g(n) is our overall cost function, then the
best first search becomes Uniform Cost Search,
also known as Dijkstra’s single-source shortest-path
algorithm.
• Initially the root node is placed in Open with a cost
of zero. At each step, the next node n to be
expanded is an Open node whose cost g(n) is
lowest among all Open nodes.
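This procedure can be sketched as follows (a minimal sketch; the example graph is the S-to-G graph used in the UCS trace later in these slides):

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """Expand the Open node with the lowest g(n);
    neighbors(n) yields (child, step_cost) pairs."""
    frontier = [(0, start, [start])]   # Open list, ordered by g
    closed = {}                        # best g seen for each expanded node
    while frontier:
        g, n, path = heapq.heappop(frontier)
        if n == goal:
            return g, path
        if n in closed and closed[n] <= g:
            continue
        closed[n] = g
        for c, cost in neighbors(n):
            heapq.heappush(frontier, (g + cost, c, path + [c]))
    return None

# The S..G example used in the UCS trace in these slides:
graph = {'S': [('A', 5), ('B', 2), ('C', 4)],
         'A': [('D', 9), ('E', 4)],
         'B': [('G', 6)],
         'C': [('F', 2)],
         'D': [], 'E': [],
         'F': [('G', 1)],
         'G': []}
```

On this graph the search returns the path S, C, F, G with cost 7, matching the trace.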
Example of Uniform Cost Search
• Assume an example tree with different edge costs, represented
by numbers next to the edges.

(figure: a small tree rooted at a, with children b and c at costs
2 and 1, and grandchildren f, g, d, e; the legend distinguishes
generated nodes from expanded nodes)
Uniform-Cost Search (UCS)

generalSearch(problem, priorityQueue)

(figure: start node S with children A, B, C at step costs 5, 2, 4;
A leads to D and E at costs 9 and 4; B leads to the goal G at cost
6; C leads to F at cost 2; and F leads to G at cost 1)

The frontier after each expansion, ordered by path cost g:

expanded node   Frontier list
(start)         {S:0}
S not goal      {B:2, C:4, A:5}
B not goal      {C:4, A:5, G:2+6=8}
C not goal      {A:5, F:4+2=6, G:8}
A not goal      {F:6, G:8, E:5+4=9, D:5+9=14}
F not goal      {G:4+2+1=7, G:8, E:9, D:14}
G goal          {G:8, E:9, D:14}, no expansion

# of nodes tested: 6, expanded: 5
path: S,C,F,G  cost: 7
Uniform-cost search Sample

(figures: four snapshots of uniform-cost search on a map, always
expanding the cheapest frontier node; the accumulated path costs
shown include 0, 75, 118, 140, 146 and 229)
Uniform-cost search
• Complete? Yes, if every step cost is at least some ε > 0
• Time? # of nodes with g ≤ cost of optimal solution,
O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution,
O(b^⌈C*/ε⌉)
• Optimal? Yes, nodes are expanded in increasing order of g(n)

Hill Climbing & Gradient Descent
Hill-climbing search

• Problem: depending on initial state, can get stuck in local maxima


Hill Climbing - Algorithm
1. Pick a random point in the search space
2. Consider all the neighbors of the current state
3. Choose the neighbor with the best quality and
move to that state
4. Repeat steps 2 and 3 until all the neighboring
states are of lower quality
5. Return the current state as the solution state
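The steps above can be sketched for a discrete maximization problem (a sketch; step 1, picking the random starting point, is left to the caller, and the toy objective at the end is illustrative):

```python
def hill_climb(start, neighbors, quality):
    """Steepest-ascent hill climbing over the given neighborhood."""
    current = start
    while True:
        candidates = list(neighbors(current))              # step 2
        best = max(candidates, key=quality, default=current)  # step 3
        if quality(best) <= quality(current):              # step 4 exit test
            return current                                 # step 5
        current = best

# Toy example: maximize f(x) = -(x - 3)^2 over the integers,
# with neighbors x - 1 and x + 1; from any start it climbs to x = 3.
```

Because it only ever moves uphill, the returned state may be a local rather than global maximum, exactly the problem noted above.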
Example: 8 Queens
• Place 8 queens on board
– So no one can “take” another
• Gradient descent search
– Throw queens on randomly
– e = number of pairs which can attack
each other
– Move a queen out of other’s way
• Decrease the evaluation function
– If this can’t be done
• Throw queens on randomly again
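The evaluation function e above can be sketched directly (a sketch; encoding the board as `queens[i]` = row of the queen in column i is an assumption made here, and it guarantees one queen per column):

```python
from itertools import combinations

def attacking_pairs(queens):
    """e = number of pairs of queens that can attack each other:
    same row, or same diagonal (one queen per column by encoding)."""
    e = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(queens), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            e += 1
    return e
```

Gradient descent then moves a queen within its column whenever doing so decreases e, restarting with a random board if no move helps.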
Hill-climbing search
• Looks one step ahead to determine if any
successor is better than the current state; if
there is, move to the best successor.
• Rule: If there exists a successor s for the current
state n such that
• h(s) < h(n) and

• h(s) ≤ h(t) for all the successors t of n,

then move from n to s. Otherwise, halt at n.


Hill-climbing search
• Similar to Greedy search in that
it uses h(), but does not allow
backtracking or jumping to an
alternative path since it doesn’t
“remember” where it has been.
A* Search Algorithm
Evaluation function f(n) = h(n) + g(n)
h(n) estimated cost to goal from n
g(n) cost so far to reach n
A* uses admissible heuristics, i.e.,
h(n) ≤ h*(n) where h*(n) is the
true cost from n. A* Search finds the
optimal path
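The A* evaluation can be sketched as below (a minimal sketch; the example graph and straight-line heuristic are the S-to-G values from the A* trace in these slides):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the open node with the smallest f(n) = g(n) + h(n).
    With an admissible h, the first goal popped is on an optimal path."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, n, path = heapq.heappop(frontier)
        if n == goal:
            return g, path
        if n in best_g and best_g[n] <= g:
            continue
        best_g[n] = g
        for c, cost in neighbors(n):
            heapq.heappush(frontier,
                           (g + cost + h(c), g + cost, c, path + [c]))
    return None

# The S..G example from the A* trace in these slides:
INF = float('inf')
graph = {'S': [('A', 1), ('B', 5), ('C', 8)],
         'A': [('D', 3), ('E', 7), ('G', 9)],
         'B': [('G', 4)], 'C': [('G', 5)],
         'D': [], 'E': [], 'G': []}
h_sld = {'S': 8, 'A': 8, 'B': 4, 'C': 3, 'D': INF, 'E': INF, 'G': 0}
```

On this graph A* returns the optimal path S, B, G with cost 9, matching the trace.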

A* search
A* Search

f(n) = g(n) + h(n)

(figure: start node S with h=8 and children A, B, C at step costs
1, 5, 8 and heuristic values h(A)=8, h(B)=4, h(C)=3; A leads to
D (h=∞), E (h=∞) and G at costs 3, 7, 9; B leads to G at cost 4;
C leads to G at cost 5; h(G)=0)

The frontier after each expansion, ordered by f = g + h:

expanded node   Frontier list
(start)         {S:8}
S not goal      {A:1+8=9, B:5+4=9, C:8+3=11}
A not goal      {B:9, G:1+9+0=10, C:11, D:1+3+∞=∞, E:1+7+∞=∞}
B not goal      {G:5+4+0=9 (replaces G:10), C:11, D:∞, E:∞}
G goal          {C:11, D:∞, E:∞}, not expanded

# of nodes tested: 4, expanded: 3

• Pretty fast, and the optimal path: S,B,G with cost 9
A* search example
(figures: six snapshots of the search tree)
straight-line distances:
h(S-G)=10, h(A-G)=7, h(B-G)=10, h(C-G)=20,
h(D-G)=1, h(E-G)=8, h(F-G)=1

(figure: a graph of step costs on the paths from the start S to
the goal G through nodes A, B, C, D, E, F)

Try yourself:

The graph above shows the step-costs for different paths going from
the start (S) to the goal (G). On the right you find the
straight-line distances.

1. Draw the search tree for this problem. Avoid repeated states.
2. Give the order in which the tree is searched (e.g. S-C-B-...-G)
for A* search. Use the straight-line distance as the heuristic
function, i.e. h=SLD, and indicate for each node visited what the
value of the evaluation function, f, is.
Properties of A*
• Complete? Yes (unless there are infinitely
many nodes with f ≤ f(G) )
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes




City Map

(figures: successive A* snapshots on a city map from the start
town B to the goal town O; each node is labelled h + g, e.g.
B: 2714 + 0 and the goal O: 0 + 2940; the Open list starts as
{B}, the Close list grows B, P, I, N, M, K, and the goal O is
reached with f = 0 + 2940 = 2940)
IDA* Search
• Problem with A* search
– You have to record all the nodes
– In case you have to back up from a dead-end
• A* searches often run out of memory, not
time
• Use the same iterative deepening trick as IDS
– But iterate over f(state) rather than depth
– Define contours: f < 100, f < 200, f < 300 etc.
• Complete & optimal as A*, but less memory
IDA* Search: Contours
• Find all nodes
– Where f(n) < 100
– Ignore f(n) >= 100
• Find all nodes
– Where f(n) < 200
– Ignore f(n) >= 200
• And so on…
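The contour idea can be sketched as a depth-first search bounded by f, where each iteration raises the bound to the smallest f-value that exceeded it (a sketch; the cycle check on the current path is an addition for safety, not part of the slide's description):

```python
def ida_star(start, goal, neighbors, h):
    """Iteratively deepen on f = g + h instead of depth.
    neighbors(n) yields (child, step_cost) pairs."""
    def bounded_dfs(n, g, bound, path):
        f = g + h(n)
        if f > bound:
            return f, None              # contour exceeded: report f
        if n == goal:
            return f, path
        least = float('inf')            # smallest exceeded f below n
        for c, cost in neighbors(n):
            if c in path:               # avoid cycles on this path
                continue
            t, found = bounded_dfs(c, g + cost, bound, path + [c])
            if found is not None:
                return t, found
            least = min(least, t)
        return least, None

    bound = h(start)
    while bound < float('inf'):
        bound, found = bounded_dfs(start, 0, bound, [start])
        if found is not None:
            return found
    return None
```

Only the current path is stored, which is why IDA* keeps A*'s completeness and optimality (with an admissible h) while using far less memory.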
Genetic algorithms
• A successor state is generated by combining two parent states

• Start with k randomly generated states (population)

• A state is represented as a string over a finite alphabet (often a


string of 0s and 1s)

• Evaluation function (fitness function). Higher values for better


states.

• Produce the next generation of states by selection, crossover, and


mutation
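These ingredients (population, fitness function, selection, crossover, mutation) fit together as sketched below (a minimal sketch using truncation selection; all parameter values are illustrative assumptions):

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=100,
                      mutation_rate=0.05, seed=0):
    """Minimal GA: bit-string individuals, higher fitness is better."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: rank by fitness
        pool = pop[:pop_size // 2]            # fitter half = mating pool
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(pool, 2)
            cut = rng.randrange(1, length)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]        # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

Running it on the "one-max" problem (fitness = number of 1 bits) drives the population toward the all-ones string.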



Theory of Evolution
• Every organism has unique attributes that can
be transmitted to its offspring
• Selective breeding can be used to manage
changes from one generation to the next
• Nature applies certain pressures that cause
individuals to evolve over time
Genetic Algorithms
• What are they?
– Evolutionary algorithms that make use of operations like
mutation, recombination, and selection
• Uses?
– Difficult search problems
– Optimization problems
– Machine learning
– Adaptive rule-bases
Classical GAs
• Representation of parameters is a bit string
– Solutions to a problem represented in binary
– 101010010011101010101
• Start with a population (fairly large set)
– Of possible solutions known as individuals
• Combine possible solutions by swapping material
– Choose the “best” solutions to swap material between
and kill off the worse solutions
– This generates a new set of possible solutions
• Requires a notion of “fitness” of the individual
– Based on an evaluation function with respect to the
problem
Genetic Algorithm Representation

Phenotype space ↔ Genotype space = {0,1}^L
Encoding (representation): e.g. 10010001, 10010010,
010001001, 011101001
Decoding (inverse representation)
GA Representation
• Candidate solutions are represented as genes
• Each population consists of a whole set of
genes
• Using biological-style reproduction, a new
population is created from the old one.
The Initial Population
• Represent solutions to problems
– As a bit string of length L
• Choose an initial population size
– Generate length L strings of 1s & 0s randomly
• Strings are sometimes called
chromosomes
– Letters in the string are called “genes”
– We call the bit-string “individuals”
Initialization

• Initial population must be a representative


sample of the search space

• Random initialization can be a good idea (if


the sample is large enough)
The gene
• Each gene in the population is represented by a bit
string whose fields encode the attributes, e.g.:
Outlook = 001, Wind = 10, play tennis = 10
giving the gene 0011010
Gene Example
• The idea is to use a bit string to describe the
value of an attribute
• The attribute Outlook has 3 values (sunny,
overcast, raining)
• So we use 3 bits to represent the attribute
Outlook
• 010 represents Outlook = overcast
GA

• The fitness function evaluates each solution
and decides whether it will be in the next
generation of solutions
Selection
• Want to give preference to “better”
individuals to add to the mating pool
• If entire population ends up being selected it
may be desirable to conduct a tournament
to order individuals in population
• Would like to keep the best in the mating
pool and drop the worst
Selection methods

Common selection methods used in GAs are


• Fitness Proportionate Selection
• Rank Selection
• Tournament Selection
Rank Selection
• All individuals are sorted according to their
fitness.

• Each individual is then assigned a probability


of being selected from some prior probability
density.
Selection
• Main idea: better individuals get a higher chance
– Chances proportional to fitness
– Implementation: roulette wheel technique
» Assign to each individual a part of the
roulette wheel
» Spin the wheel n times to select n
individuals

Example: fitness(A) = 3, fitness(B) = 1, fitness(C) = 2,
so A gets 3/6 = 50% of the wheel, B gets 1/6 = 17%,
and C gets 2/6 = 33%.
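The wheel can be sketched directly (a sketch; the `rng` parameter is injectable so a spin can be fixed for testing):

```python
import random

def roulette_select(population, fitness, rng=random):
    """Fitness-proportionate selection: each individual owns a
    slice of the wheel equal to fitness / total fitness."""
    total = sum(fitness(ind) for ind in population)
    spin = rng.uniform(0, total)       # spin the wheel once
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= spin:
            return ind
    return population[-1]              # guard against float drift

# With fitness(A)=3, fitness(B)=1, fitness(C)=2 as in the example,
# A is returned about 50% of the time, B 17%, and C 33%.
```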
Roulette Wheel Selection

Example: chromosomes 1..8 with fitness values 1, 2, 3, 1, 3, 5,
1, 2 (total 18). Spinning Rnd[0..18] = 7 lands on Chromosome4
(Parent1); spinning Rnd[0..18] = 12 lands on Chromosome6 (Parent2).
Tournament
Selection
• Select a group of N
(N>1) members.

• Select the fittest member of this group and


discard the rest.
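This can be sketched in a few lines (a sketch; the group size n and the random source are parameters):

```python
import random

def tournament_select(population, fitness, n=3, rng=random):
    """Draw a random group of n > 1 members and keep only the
    fittest; the rest of the group is discarded."""
    group = rng.sample(population, n)
    return max(group, key=fitness)
```

A larger n increases selection pressure; with n equal to the population size, the global best always wins.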
New Population
• To build new population from old one we use
genetic operators to evolve the population of
solutions
• Genetic operators are
– Crossover operator
– Mutation operator
– Production operator
Crossover operator

• It produces two new offspring from two


parent strings by copying selected bits
from each parent.

00001010101 11101001000

11101010101 00001001000
Crossover
• In sexual reproduction the genetic codes of
both parents are combined to create offspring
• Would like to keep 60/40 split between parent
contributions
• 95/5 splits negate the benefits of crossover
(too much like asexual reproduction)
Mutation operator

• It produces offspring from single parent


by small random change in bit string.

11101001000

11100001000
One-point crossover
• Randomly choose a single point in both individuals
– Both have the same point
– Split into LHS1+RHS1 and LHS2+RHS2
• Generate two offspring from the combinations
– Offspring 1: LHS1+RHS2
– Offspring 2: LHS2+RHS1
• Example: (X,Y,a,b are all ones and zeros)
Two-point Crossover
• Two points are chosen in the strings
• The material falling between the two points
– is swapped in the string for the two offspring
• Example:
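Both operators are small sequence manipulations (a sketch; they work on strings or lists alike, with the crossover points supplied by the caller):

```python
def one_point_crossover(p1, p2, point):
    """Offspring 1 = LHS1 + RHS2, Offspring 2 = LHS2 + RHS1."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point_crossover(p1, p2, i, j):
    """Swap the material falling between points i and j."""
    return (p1[:i] + p2[i:j] + p1[j:],
            p2[:i] + p1[i:j] + p2[j:])
```

With point 5, one-point crossover reproduces the earlier bit-string example: parents 00001010101 and 11101001000 yield 00001001000 and 11101010101.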
Mutation

• Mutation is important for maintaining diversity in


the genetic code
• In humans, mutation was responsible for the
evolution of intelligence
• Example: the occasional (low-probability) alteration
of a bit position in a string
Replacement
• Determine when to insert new offspring into
the population and which individuals to drop
out based on fitness
• Steady-state evolution calls for the same
number of individuals in the population, so
each new offspring is processed one at a time,
and fit individuals can remain a long time
The Termination Check
• Termination may involve testing whether an individual
solves the problem to a good enough standard
– Not necessarily having found a definitive answer
• Alternatively, termination may occur after a fixed time
– Or after a fixed number of generations
• Note that the best individual may appear in an
early population and later be lost
• So, your GA should:
– Record the best from each generation
– Output the best from all populations as the answer
GA in Search
• Genetic algorithms are a search method in which
multiple search paths are followed in parallel.
• At each step, current states of different pairs of
these paths are combined to form new paths.
• This way the search paths don't remain
independent; instead they share information
and thus try to improve the overall
performance of the search.
Eight Queens Problem
Fitness Function:

(figure: a board with queens Q1..Q8)
Q1 can attack none
Q2 can attack none
Q3 can attack Q6
Q4 can attack Q5
Q5 can attack Q4
Q6 can attack Q5
Q7 can attack Q4
Q8 can attack Q5

Fitness = no. of queens that can attack none

Fitness = 2
Eight Queens Problem
• Suppose the following individuals are chosen for
crossover:

(figure: two boards whose queen placements are encoded as
85727135 and 45827165, digit i giving the row of the queen
in column i)
Eight Queens Problem
Using crossover (one point, after the fifth digit):
Parents: 85727135 and 45827165
Children: 85727165 and 45827135
Why use genetic algorithms?
• They can solve hard problems
• Easy to interface genetic algorithms to existing
simulations and models
• GA’s are extensible
• GA’s are easy to hybridize
• GA’s work by sampling, so populations can be
sized to detect differences with specified error
rates
• Use little problem specific code
Example

No. A1 A2 Classification
1 T T +
2 T T +
3 T F -
4 F F +
5 F T -
6 F T -
Representation
• A1 ={T, F}  10 = T && 01 = F
• A2={T, F}  10 = T && 01 = F
• Classification = {+, -}  1 = + && 0 = -
• The gene is A1 (2) + A2 (2) +Classification (1) =
5 bits
[1 0 1 0 1]
A1= T & A2=T & Classification = +
Initial Population
• Let us construct 10 genes randomly:
[11101] fitness = 0.5 (matches 4 cases: 2 correctly, 2 incorrectly)
[10001] fitness = 0.66
[01011] fitness = 0.0
[01011]
[01111]
[11111]
[01000]
[00001]
[11110]
[01100]
Crossover operation

[11101] x [01100] → [11100] + [01101]

[01011] x [10001] → [11001] + [01010]
Mutation Operation

• [01011]  [01010]

• [11111]  [11011]
New Population
[11100]
[01010]
[11001]
[01111]
[11011]
[01000]
[00001]
[11110]
[01101]
GA Application
• Searching for the maximum of a function
• Search for the value of x for which y = f(x) is maximum.
• x plays the role of the genes: the x value is binary-coded
in an 8-bit gene (chromosome)
• y plays the role of the fitness
GA Application

• Four bits are required to represent each character:
0: 0000, ..., 9: 1001, +: 1010, -: 1011,
*: 1100, /: 1101
• Gene: 0110 1010 0101 1100 0100 1101 0010 1010 0001
is 6+5*4/2+1 = 23 (evaluated left to right)
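Decoding and evaluating such a gene can be sketched as follows (note the strictly left-to-right evaluation with no operator precedence, which is how 6+5*4/2+1 yields 23):

```python
# 4-bit codons: the digits 0-9, then the four operators.
CODONS = {format(d, '04b'): str(d) for d in range(10)}
CODONS.update({'1010': '+', '1011': '-', '1100': '*', '1101': '/'})

def decode(gene):
    """Split the bit string into 4-bit codons, map each to a char."""
    return [CODONS[gene[i:i + 4]] for i in range(0, len(gene), 4)]

def evaluate(tokens):
    """Apply the operators strictly left to right (no precedence)."""
    value = float(tokens[0])
    for op, num in zip(tokens[1::2], tokens[2::2]):
        n = float(num)
        value = {'+': value + n, '-': value - n,
                 '*': value * n, '/': value / n}[op]
    return value
```

The same left-to-right rule gives 5/2+9*7-5 = 75.5, the target used in the GP example below.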
TSP Application
• To use a genetic algorithm to solve the traveling
salesman problem, we could begin by creating a
population of candidate solutions
• We need to define mutation, crossover, and
selection methods to aid in evolving a solution from
this population
GP Application 1

• Given the digits 0 through 9 and the
operators +, -, *, and /, find a sequence
that represents a given target number.
• Given the number 23, the sequence is
6+5*4/2+1
• If the number is 75.5, then 5/2+9*7-5
Class Exercise:
Local Search for Map/Graph Coloring
G(n) = The cost of each move as the distance between each town
(shown on the map).
H(n) = The distance between any town and town M.

(figure: a map of towns A (START) through M (END) with the
distances marked on the edges)
Search Strategies
Uninformed:
• Breadth-first search
• Depth-first search
• Iterative deepening
• Bidirectional search
• Uniform-cost search
Informed:
• Greedy search
• A* search
• IDA* search
• Hill climbing
G(n) = The cost of each move as the distance between each town
H(n) = The Straight Line Distance between any town and town M.

(figure: a map of towns A through M with the distances marked
on the edges)

Straight-line distances to M:
A 45   E 32   I 12   M 0
B 20   F 23   J 5
C 34   G 15   K 40
D 25   H 10   L 20
The 8-puzzle problem, starting from the initial state

1 3 5        1 2 3
4 2 -   to   4 5 6    (the goal state)
7 8 6        7 8 -
• Consider the following search problem. Assume a state is
represented as an integer, that the initial state is the number 1,
and that the two successors of a state n are the states 2n and
2n+1. For example, the successors of 1 are 2 and 3, the successors
of 2 are 4 and 5, the successors of 3 are 6 and 7, etc. Assume the
goal state is the number 12. Consider the following heuristics for
evaluating the state n, where the goal state is g:
• h1(n) = |n-g|  and  h2(n) = (g - n) if n ≤ g, h2(n) = ∞ if n > g
• Show the search trees generated for each of the following
strategies for the initial state 1 and the goal state 12, numbering
the nodes in the order expanded:
a) Depth-first search  b) Breadth-first search
c) Best-first with heuristic h1  d) A* with heuristic (h1+h2)
• If any of these strategies gets lost and never finds the goal, then
show the first few steps and say "FAILS"
The end!
