
ARTIFICIAL INTELLIGENCE

Dr. Nidhi Kushwaha

Department of Computer Science and Engineering

Indian Institute of Information Technology, Ranchi



OVERVIEW

02/27/2024
Local Search Algorithm
Hill Climbing Search
 Simple Hill Climbing
 Steepest-ascent Hill Climbing
 Stochastic Hill Climbing

Genetic algorithm
Local Beam Search
Simulated Annealing Search
LOCAL SEARCH ALGORITHMS
 In many optimization problems, the path to the goal is
irrelevant; the goal state itself is the solution.
Examples: the N-Queens problem, IC design, job scheduling,
vehicle routing, and cost-minimization problems in general.

 State space = set of complete configurations

 Find configuration satisfying constraints, e.g., n-queens

 In such cases, we can use local search algorithms that keep a
single "current" state and try to improve it by moving to a
neighbouring state.

Advantages:
 Very memory efficient (only remember current state)

 Can often find a reasonable solution in large or infinite state spaces.


LOCAL SEARCH ALGORITHMS
 Local search can be used on problems that can be formulated as
finding a solution maximizing a criterion among a number of
candidate solutions.
 Local search algorithms move from solution to solution in the
search space until a solution deemed optimal is found or a
time bound has elapsed.
 For example, in the travelling salesman problem a candidate
solution is a cycle containing all nodes of the graph, and the
criterion is to minimize the total length of the cycle.
 A local search algorithm starts from a candidate solution in
the search space and then iteratively moves to a neighbouring
solution.
LOCAL SEARCH ALGORITHMS

 Local search algorithms are typically incomplete algorithms, as
the search may stop even if the best solution found by the
algorithm is not optimal.

 State-space landscape
LOCAL SEARCH ALGORITHMS
EXAMPLE: N-QUEENS
 Put n queens on an n × n board with no two queens on
the same row, column, or diagonal
HILL CLIMBING SEARCH
 The hill climbing search algorithm is simply a loop that
continuously moves in the direction of increasing value.
 It stops when it reaches a "peak" where no neighbour has a
higher value.
 This algorithm is considered to be one of the simplest
procedures for implementing heuristic search.
 The name comes from the idea of trying to find the top of a
hill by always going uphill from wherever you are.
 It is sometimes called Greedy Local Search.
HILL CLIMBING SEARCH
 Only records the current state and its objective function value.
 Does not look ahead beyond the immediate neighbours of the
current state.
 This heuristic combines the advantages of both depth-first
and breadth-first searches into a single method.
 Problem: it can get stuck in local maxima.
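As an illustration, a minimal Python sketch of this loop (the names `f` and `neighbours` are illustrative, not part of any library):

```python
def hill_climb(state, f, neighbours):
    """Greedy local search: move to a better neighbour until none exists."""
    while True:
        # Evaluate all immediate neighbours of the current state.
        best = max(neighbours(state), key=f, default=None)
        if best is None or f(best) <= f(state):
            return state  # reached a peak (possibly only a local maximum)
        state = best

# Toy example: maximize f(x) = -(x - 3)^2 over the integers, stepping by 1.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, f, step))  # climbs to the global maximum at x = 3
```

Note that the loop only ever compares the current state with its immediate neighbours, which is exactly why it can stop at a local maximum.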
TYPES OF HILL CLIMBING
 Simple Hill Climbing
It examines the neighbouring nodes one by one and selects the
first neighbouring node which improves the current cost as the
next node.
 Steepest-Ascent Hill Climbing
It first examines all the neighbouring nodes and then selects
the node closest to the solution state as the next node.
 Stochastic Hill Climbing
It does not examine all the neighbouring nodes before deciding
which node to select. It simply selects a neighbouring node at
random and decides (based on the amount of improvement in
that neighbour) whether to move to that neighbour or to
examine another.

STEEPEST-ASCENT HILL-CLIMBING ALGORITHM

Begin
    /* Initially OPEN contains the root node and CLOSE is empty */
    OPEN = [start]
    CLOSE = []
    /* Continue the loop until the OPEN list is empty */
    While OPEN ≠ [] do
    Begin
        Remove the leftmost state from OPEN and call it X
        If X = GOAL then
            Return SUCCESS
        Else
        Begin
            1. Generate the children of X
            2. Put X on CLOSE
            3. Discard the children of X if already on OPEN or CLOSE
            4. Sort the remaining children according to the heuristic value of each state
            5. Put the children on the left side of OPEN in sorted order
        End
        /* Note that the children are sorted first according to heuristic merit
           and are then put on the left side of OPEN */
    End
    Return FAIL
End

The algorithm described here is called steepest-ascent hill climbing.
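The pseudocode above can be sketched in Python as follows (the helper names `children` and `h` are illustrative; the tree data is the example worked through on the next slides):

```python
def steepest_ascent(start, goal, children, h):
    """Steepest-ascent hill climbing with OPEN/CLOSE lists.
    children(x) returns the successors of x; h(x) is the heuristic value."""
    OPEN, CLOSE = [start], []
    while OPEN:
        x = OPEN.pop(0)               # remove the leftmost state from OPEN
        if x == goal:
            return CLOSE + [x]        # SUCCESS: the sequence of expanded states
        CLOSE.append(x)
        # Discard children already on OPEN or CLOSE, sort the rest by
        # heuristic merit (higher first), and put them on the left of OPEN.
        fresh = [c for c in children(x) if c not in OPEN and c not in CLOSE]
        OPEN = sorted(fresh, key=h, reverse=True) + OPEN
    return None                       # FAIL

# The example tree from the following slides: A3 -> B3, C5, D3; C5 -> E8, F7; E8 -> G9.
tree = {'A': ['B', 'C', 'D'], 'C': ['E', 'F'], 'E': ['G']}
hval = {'A': 3, 'B': 3, 'C': 5, 'D': 3, 'E': 8, 'F': 7, 'G': 9}
print(steepest_ascent('A', 'G', lambda x: tree.get(x, []), hval.get))
```

Running this reproduces the expansion order A, C, E, G traced in the worked example below.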
Apply hill climbing to the following tree considering G is the Goal State and
Node A as the initial state.

Step 1: The hill climbing algorithm starts with the initial state.

OPEN = [A3]
CLOSE = []

Note that the leftmost node in the OPEN list is Node A; hence, we
expand Node A in the next step.

Step 2: All the children of Node A are now generated.

OPEN = [C5, B3, D3]


CLOSE = [A]

Note that the leftmost node in the OPEN list is Node C. Hence, we
expand Node C in Step 3.

Step 3: Amongst all the children of root Node A, Node C has the
highest heuristic value. Hence, all the children of Node C are now generated.

OPEN = [E8, F7, B3, D3]


CLOSE = [A, C]

Note that the leftmost node in the OPEN list is Node E. Hence, we expand
Node E in Step 4.

Step 4: The algorithm now proceeds to Node E. Node G is the only child
of Node E and has the highest heuristic merit; hence, Node G is generated.

OPEN = [G9, F7, B3, D3]


CLOSE = [A, C, E]

Note that the leftmost node in the OPEN list is Node G, and Node G is a
GOAL; hence we STOP.
DEMERITS OF HILL CLIMBING

The state-space diagram is a graphical representation of the set of states our search
algorithm can reach versus the value of our objective function (the function which
we wish to maximise).
X-axis: It denotes the state space, that is, the states or configurations our algorithm
may reach.
Y-axis: It denotes the value of the objective function corresponding to a particular
state. The best solution is the state where the objective function has its
maximum value (global maximum).
DIFFERENT REGIONS IN THE STATE-SPACE DIAGRAM

1. Local maximum: A state which is better than its neighbouring
states. However, there exists a state which is better than it (the
global maximum). It is better because the value of the objective
function here is higher than at its neighbours.
2. Global maximum: The best possible state in the state-space
diagram, because the objective function has its highest value at
this state.
3. Plateau/Flat local maximum: A flat region of the state space
where neighbouring states have the same value.
4. Ridge: A region which is higher than its neighbours but
itself has a slope. It is a special kind of local maximum.
5. Current state: The region of the state-space diagram where we
are currently present during the search.
6. Shoulder: A plateau that has an uphill edge.
Problems in Different Regions in Hill climbing
1. Local maximum: At a local maximum, all neighbouring states
have values worse than the current state. Since hill climbing uses
a greedy approach, it will not move to a worse state, and it
terminates itself. The process ends even though a better solution
may exist.

Solutions to the local maximum problem:
(a) One possible solution is backtracking.
We can backtrack to some earlier node and try to go in a different direction to
attain the global peak. We can maintain a list of paths almost taken and go back
to one of them if the path that was taken leads to a dead end.
(b) Another solution is to maintain a list of promising alternative paths.

Problems in Different Regions in Hill climbing

2. Plateau: On a plateau, all neighbours have the same value, so it is not possible
to select the best direction. A plateau is a flat area of the search space in which a
whole set of neighbouring states has the same value. On a plateau, it is not possible
to determine the best direction in which to move by making local comparisons.

To overcome plateaus:
(a) Make a big jump in some direction in order to get to a new
section of the search space. This method is recommended because on a plateau
all neighbouring points have the same value.
(b) Another solution is to apply small steps several times in the same
direction. This depends on the rules available.
Problems in Different Regions in Hill climbing

3. Ridge: Any point on a ridge can look like a peak because
movement in all possible directions is downwards. Hence, the
algorithm stops when it reaches such a state.

To overcome a ridge: try different paths at the same time, i.e.,
move in several directions at once. Bidirectional search can be
useful in such cases.

EXAMPLE (4-QUEEN PROBLEM)

Here, the heuristic cost function h is the number of pairs of queens
that are attacking each other, either directly or indirectly. The global
minimum is h = 0, which occurs only in a perfect solution.
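A minimal sketch of this heuristic in Python, assuming the common representation where `board[i]` gives the row of the queen in column i:

```python
from itertools import combinations

def h(board):
    """Number of pairs of queens attacking each other, directly or
    indirectly. board[i] is the row of the queen in column i."""
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        # Queens attack on the same row or on a common diagonal.
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            pairs += 1
    return pairs

print(h([0, 1, 2, 3]))  # all on one diagonal: every pair attacks -> 6
print(h([1, 3, 0, 2]))  # a perfect 4-queens solution -> 0
```

Because each queen stays in her own column, the column constraint never needs checking; only rows and diagonals can conflict.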

EXAMPLE (8-QUEEN PROBLEM)

HILL CLIMBING (EXAMPLE-1), 4-QUEEN PROBLEM
EXAMPLE: 4-QUEEN PROBLEM
[Board figure: a 4-queens configuration (queens marked *) with the heuristic
value h of each successor state shown in its square:
4 5 5 4
4 4 4 4
4 3 3 4]
Genetic Algorithm (GA)
GENETIC ALGORITHM (GA)
 Inspired by evolutionary biology and natural selection,
such as inheritance.
 Evolves toward better solutions.

 Start with k randomly generated states (the population);
each state is an individual.
 A successor state is generated by combining two parent
states, rather than by modifying a single state.
GENETIC ALGORITHM (GA)
 A state or individual is represented as a string over a
finite alphabet (often a string of 0s and 1s).
 A state or individual is rated using an objective function,
called in GA the evaluation function (fitness function).
Higher values indicate better states.
 For example, in the 8-queens problem we use the number of
non-attacking pairs of queens, which has value 28 for the
best solution.
GENETIC ALGORITHM (GA)

 Produce the next generation of states by using the operators
selection, crossover, and mutation.

 Commonly, the algorithm terminates when either a maximum
number of generations has been produced or a satisfactory
fitness level has been reached for the population.
GENETIC ALGORITHM (GA)
 Two individuals are selected at random for reproduction, in accordance with the
selection probabilities. This phase is called Selection.
 A crossover point is chosen randomly from the positions in the string.
 The offspring are created by crossing over the parent strings at the
crossover point. This phase is called Crossover.
 One or more digits are mutated in randomly chosen offspring. This phase is called
Mutation. For example, in 8-queens this corresponds to choosing a queen at random
and moving it to a random square in its column.
GENETIC ALGORITHM
8 QUEEN PROBLEM
 A better state is generated by combining two parent
states.
8 QUEEN PROBLEM
 Representing individuals (chromosomes): an individual can be
represented by a string of digits 1 to 8, giving the row
positions of the 8 queens in the 8 columns.
8 QUEEN PROBLEM

 Fitness function: a possible fitness function is the number
of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28).
8 QUEEN PROBLEM
 Selection probability: from the fitness values, calculate each
individual's probability of being selected for the next generation.
For example: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
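This fitness-proportionate (roulette-wheel) calculation can be sketched as a small helper:

```python
def selection_probabilities(fitness):
    """Fitness-proportionate (roulette-wheel) selection probabilities:
    each individual's share of the total fitness."""
    total = sum(fitness)
    return [f / total for f in fitness]

# The four individuals from the slide, with fitness 24, 23, 20, 11.
probs = selection_probabilities([24, 23, 20, 11])
print([round(p * 100) for p in probs])  # -> [31, 29, 26, 14]
```

The probabilities always sum to 1, so fitter individuals are proportionally more likely to be chosen as parents.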
8 QUEEN PROBLEM

 Selection: pairs of individuals are selected at random for
reproduction with respect to the selection probabilities. Pick a
crossover point for each pair.
8 QUEEN PROBLEM
 Crossover: A crossover point is chosen randomly in the string.
Offspring are created by crossing the parents at the crossover
point.
8 QUEEN PROBLEM
 Mutation: each element in the string is also subject to
mutation with a small probability.
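A minimal sketch of the crossover and mutation operators for 8-queens strings (the parent strings and the mutation rate below are illustrative choices):

```python
import random

def crossover(p1, p2, point):
    """One-point crossover: swap the tails of two parent strings at `point`."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(individual, rate=0.05):
    """With a small probability per digit, replace it by a random row (1-8):
    i.e., move that column's queen to a random square in its column."""
    return ''.join(str(random.randint(1, 8)) if random.random() < rate else g
                   for g in individual)

c1, c2 = crossover("32752411", "24748552", 3)
print(c1, c2)  # -> 32748552 24752411
```

Each offspring inherits the head of one parent and the tail of the other, while mutation injects small random variation to preserve diversity.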
GENETIC ALGORITHM
(WORKING TOWARDS SOLUTION OF 8-QUEEN
PROBLEM USING GENETIC ALGORITHM)

Local Beam Search
LOCAL BEAM SEARCH

Function Beam-Search(Problem, k) returns a solution state
    Start with k randomly generated states
    Loop:
        Generate all successors of all k states
        If any of them is a solution then return it
        Else select the k best successors
LOCAL BEAM SEARCH
 Start with k randomly generated states.
 Obtain the successors of all k states.
 The states that generate the best successors, in effect, attract
the search towards themselves.
 That is, we select the k best successor states and repeat until
a goal state is found.
 In local beam search, useful information is passed among the
parallel search threads.
 It can suffer from a lack of diversity among the k states.
 Stochastic beam search: choose the k successors at random,
with probability proportional to state quality.
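A minimal sketch of local beam search in Python (the function names `successors`, `f`, and `is_goal`, and the toy problem, are illustrative):

```python
def beam_search(successors, f, is_goal, k, seeds, max_iters=100):
    """Local beam search: each round, pool ALL successors of the current
    k states and keep only the k best of the pool."""
    states = list(seeds)  # k randomly generated starting states
    for _ in range(max_iters):
        pool = [s for st in states for s in successors(st)]
        goals = [s for s in pool if is_goal(s)]
        if goals:
            return goals[0]
        states = sorted(pool, key=f, reverse=True)[:k]  # k best successors
    return None

# Toy example: climb the integers toward 10 from k = 3 seeds.
succ = lambda x: [x - 1, x + 1]
found = beam_search(succ, f=lambda x: -abs(10 - x),
                    is_goal=lambda x: x == 10, k=3, seeds=[0, 2, 4])
print(found)  # -> 10
```

Note how the k best states are chosen from the combined pool, not one per thread; this is how information flows between the parallel searches (and also why diversity can collapse).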
Simulated Annealing Search
SIMULATED ANNEALING SEARCH

 To avoid getting stuck in a local maximum, it tries randomly
(using a probability function) to move to another state; if the
new state is better it moves to it, otherwise it tries another
move, and so on.
SIMULATED ANNEALING SEARCH
ANNEALING
 Annealing is a thermal process for obtaining low-energy
states of a solid in a heat bath.
 The process contains two steps:
 Increase the temperature of the heat bath to a
maximum value at which the solid melts.
 Carefully decrease the temperature of the heat bath
until the particles arrange themselves in the ground
state of the solid. The ground state is a minimum-energy
state of the solid.
 The ground state of the solid is obtained only if
the maximum temperature is high enough and
the cooling is done slowly.
SIMULATED ANNEALING

 It terminates when an acceptably good solution has been found
in a fixed amount of time, rather than searching for the best
possible solution.
 It locates a good approximation to the global minimum of a
given function in a large search space.
 It is widely used in VLSI layout, airline scheduling, etc.
SIMULATED ANNEALING
 To apply simulated annealing for optimization purposes,
we require the following:
 A successor function that returns a "close" neighbouring solution
given the actual one. This works as the "disturbance" for the
particles of the system.
 A target function to optimize that depends on the current state of
the system. This function works as the energy of the system.
 The search is started with a randomized state. In a
polling loop we move to neighbouring states, always
accepting the moves that decrease the energy while only
accepting bad moves according to a probability
distribution dependent on the "temperature" of the
system.
SIMULATED ANNEALING
 Decrease the temperature slowly, accepting fewer bad
moves at each temperature level, until at very low
temperatures the algorithm becomes a greedy hill-climbing
algorithm.
 The distribution used to decide whether we accept a bad
move is known as the Boltzmann distribution:

P(γ) = e^(−E_γ / T) / Z

 This distribution is very well known in solid-state physics
and plays a central role in simulated annealing. Here γ
is the current configuration of the system, E_γ is the
energy related to it, T is the temperature, and Z is a
normalization constant.
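A minimal sketch of simulated annealing with Boltzmann acceptance (the initial temperature, cooling rate, and toy objective are illustrative choices):

```python
import math
import random

def simulated_annealing(state, energy, neighbour, T0=10.0, cooling=0.95,
                        T_min=1e-3):
    """Minimize `energy`. Always accept downhill moves; accept uphill
    ('bad') moves with Boltzmann probability exp(-dE / T)."""
    T = T0
    while T > T_min:
        nxt = neighbour(state)
        dE = energy(nxt) - energy(state)
        if dE < 0 or random.random() < math.exp(-dE / T):
            state = nxt
        T *= cooling  # cool slowly: fewer bad moves are accepted over time
    return state

# Toy example: find the minimum of (x - 7)^2 over the integers.
random.seed(0)
best = simulated_annealing(0, lambda x: (x - 7) ** 2,
                           lambda x: x + random.choice([-1, 1]))
print(best)
```

As T shrinks, exp(-dE / T) vanishes for any uphill dE, so the loop smoothly turns into greedy descent, exactly as described above.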
PROPERTIES OF SIMULATED ANNEALING SEARCH
 The problem with this approach is that the neighbours of a state
are not guaranteed to contain any of the existing better
solutions, which means that failure to find a better solution
among them does not guarantee that no better solution exists.
 It will not get stuck in a local optimum.
 If it runs for an infinite amount of time, the global optimum
will be found.
ISSUES WITH SIMULATED ANNEALING
 The cost function should be fast, as it is going to be called
millions of times.
 Ideally, we only have to calculate the delta produced by each
modification instead of re-evaluating the whole state.
 This is dependent on the application.
Thank You!!
