Uploaded by Varshika Choudhary

Problem Solving Methods


Problem Solving Methods: Search Strategies (Uninformed, Informed), Heuristics, Local Search Algorithms and Optimization Problems, Searching with Partial Observations, Constraint Satisfaction Problems, Constraint Propagation, Backtracking Search, Game Playing, Optimal Decisions in Games, Alpha-Beta Pruning, Stochastic Games.

Outline

- Problem Solving Methods: Search Strategies (Uninformed, Informed)
- Heuristics, Local Search Algorithms and Optimization Problems
- Searching with Partial Observations, Constraint Satisfaction Problems, Constraint Propagation
- Backtracking Search, Game Playing, Optimal Decisions in Games
- Alpha-Beta Pruning, Stochastic Games

Problem Solving in AI

There are two types of problems.

The first type includes problems such as computing the sine of an angle or the square root of a value. These can be solved by a deterministic procedure, and success is guaranteed.

In the real world, very few problems lend themselves to such straightforward solutions; most have no direct, guaranteed procedure for reaching a solution. AI is concerned with solving this second type of problem.

Problem Solving

A problem is characterized by a set of goals, a set of objects, and a set of operations. These could be ill-defined and may evolve during problem solving.

Such problems can rarely be solved by direct methods (going directly from data to solution). Instead, problem solving often needs to use indirect or model-based methods.

To solve a problem, build a system by following these steps:

1. Define the problem precisely: specify the input situations as well as the final situations that count as acceptable solutions to the problem.

2. Analyze the problem: identify the few important features that may affect the appropriateness of the various possible techniques for solving the problem.

3. Choose the best problem-solving technique(s) and apply them to the particular problem.

Problem…….

To provide a formal description of a problem, we need to do following:

a. Define a state space that contains all the possible configurations of

the relevant objects, including some impossible ones.

b. Specify one or more states, that describe possible situations, from

which the problem-solving process may start. These states are called

initial states.

c. Specify one or more states that would be an acceptable solution to the
problem. These states are called goal states.

d. Specify a set of rules that describe the actions (operators) available.

Proceeding with problem solving….

The problem can then be solved by using the rules, in combination with an

appropriate control strategy, to move through the problem space until a path from an initial

state to a goal state is found.

This process is known as search.

Search is fundamental to the problem-solving process.

Search is a general mechanism that can be used when a more direct method is not known.

Search provides the framework into which more direct methods for

solving subparts of a problem can be embedded.

A very large number of AI problems are formulated as search problems.

Search

Search is the systematic exploration of the problem space. Search proceeds with different types of search control strategies; depth-first search and breadth-first search are the two common strategies.

Problem Space

A problem space is represented by a directed graph, where nodes represent search states and edges represent the operators applied to change the state.

It is often convenient to programmatically represent a problem space as a tree. Here, the cost is duplicating nodes on the tree that were linked multiple times in the graph; e.g., nodes B and D in the example below.

A tree is a graph in which any two vertices are connected by exactly one path. Equivalently, any connected graph with no cycles is a tree.

Graph and Tree

How to do Problem Solving

Problem solving is the process of applying a sequence of actions to reach some predefined goal or solution.

A special-purpose method exploits very specific features of the situation in which the problem is embedded. A general-purpose method is applicable to a wide variety of problems.

One general-purpose technique used in AI is "means-ends analysis": a step-by-step, or incremental, reduction of the difference between the current state and the final goal.

Example: the Tower of Hanoi puzzle.

Problem Solution

A State space is the set of all states reachable from the initial state.

Definitions of terms :

◊ A state space forms a graph (or map) in which the nodes are states

and the arcs between nodes are actions.

In the state space, a solution is a path from the initial state to a goal state, or sometimes just a goal state itself.

◊ A path cost function assigns a numeric cost to each path; it also gives the cost of applying the operators to the states.

◊ Solution quality is measured by the path cost function; an optimal solution has the lowest path cost among all solutions.

◊ The required solution may be any solution, an optimal solution, or all solutions.

◊ The importance of cost depends on the problem and the type of solution asked for.

Example to understand Problem

Definition…………..

A game of 8-Puzzle

◊ State space : configuration of 8 - tiles on the

board

◊ Initial state : any configuration

◊ Goal state : tiles in a specific order

◊ Action : "blank tile move"

Condition: the move is within the board

Transformation: the blank moves Left, Right, Up, or Down

◊ Solution : optimal sequence of operators

Search……………

Word "Search" refers to the search for a solution in a

problem space.

strategies".

nodes expand.

dimensions:

Completeness, Time complexity, Space complexity,

Optimality.

It is necessary to compare performance to

be able to select most appropriate

approach for a given situation………..

◊ Performance of an algorithm depends on internal and external factors.

Internal factors
‡ Time required to run
‡ Space (memory) required to run

External factors
‡ Size of the input to the algorithm
‡ Quality of the compiler

Complexity

Complexity measures the internal factors, usually more in time than in space. If A is an algorithm that solves a problem f, then the run time of A is the number of steps taken on an input of length n. The time complexity of f is the run time of the 'best' algorithm A for f; the space complexity of f is the memory used by the 'best' algorithm A for f.

'Big O'

The complexity of an algorithm usually indicates the time or the memory needed, given the problem size n, which is usually the number of items.

◊ Big-O notation

The Big-O notation is used to give an approximation of the run-time efficiency of an algorithm: the number of operations or amount of space needed at run-time.

The Big-O of an Algorithm A

If an algorithm A requires time proportional to f(n), then algorithm A is said to be of order f(n), denoted O(f(n)).

If algorithm A requires time proportional to n^2, then the order of the algorithm is said to be O(n^2).

If algorithm A requires time proportional to n, then the order of the algorithm is said to be O(n).

The function f(n) is called the algorithm's growth-rate function.

If an algorithm has performance complexity O(n), the run time t should be directly proportional to n, i.e. t ∝ n, or t = k·n where k is a constant of proportionality.

Similar interpretations hold for algorithms having performance complexity O(log2(n)), O(log N), O(N log N), O(2^N), and so on.

Determine Big-O of an algorithm:

Example 1 : 1-D Array

Calculate the sum of the n elements in an integer array a[0 . . . . n-1].

For the step-count polynomial (2n + 3), the Big-O is dominated by the first term, so the algorithm is O(n).
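As a sketch of the example above (in Python rather than the array notation used in the text), the loop does one initialization plus a constant amount of work per element, giving a step count of roughly 2n + 3 and therefore O(n):

```python
def array_sum(a):
    """Sum the n elements of a[0 .. n-1]."""
    total = 0          # 1 step: initialization
    for x in a:        # the loop body runs n times ...
        total += x     # ... doing a constant amount of work each time
    return total       # 1 step
```

Doubling the length of `a` roughly doubles the running time, which is exactly what O(n) growth means.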

Uninformed and Informed search algorithms

Uninformed search algorithms (also called brute-force algorithms) search through the search space, checking all possible candidates for the solution and testing whether each candidate satisfies the problem's statement.

Informed search algorithms use heuristic functions that are specific to the problem and apply them to guide the search through the search space, trying to reduce the amount of time spent searching.

Heuristic functions………..

For complex problems, the traditional algorithms are unable to find the solution within

some practical time and space limits.

The algorithms that use heuristic functions are called heuristic algorithms.

Heuristic algorithms

Heuristic algorithms are not really intelligent; they only appear to be, because they achieve better performance. Heuristic algorithms are more efficient because they take advantage of feedback from the data to direct the search path.

Power of a good heuristic

A good heuristic can make feasible a search that would be hopeless with uninformed search. For example, consider the Traveling Salesman Problem (TSP), where the goal is to find a good solution instead of the best solution.

In TSP-like problems, the search proceeds using current information about the problem to predict which path is closer to the goal and follows it, although this does not always guarantee finding the best possible solution. Such techniques help in finding a solution within reasonable time and space.

How do Heuristic Search Algorithms work?

First, generate a possible solution, which can be either a point in the problem space or a path from the initial state.

Then, test to see if this possible solution is a real solution by comparing the state reached with the set of goal states.

Lastly, if it is a real solution, return it; otherwise repeat from the first step.

Common Uninformed search

approaches….

A search is said to be exhaustive if the search is guaranteed to generate

all reachable states (outcomes) before it terminates with failure.

A graphical representation of all possible reachable states, and the paths by which they may be reached, is called a decision tree.

Breadth-first search

A strategy in which each layer of a decision tree is searched completely before proceeding to the next layer is called breadth-first search (BFS).

In this strategy, no viable solution is omitted, so it guarantees that an optimal solution is found (assuming equal step costs).

This strategy is often not feasible when the search space is large.

Depth-first search

A strategy that follows each path as far as possible before backtracking to the last choice point and trying the next alternative path is called depth-first search (DFS).

This strategy does not guarantee that the optimal solution is found.

In this strategy, the search may reach a satisfactory solution more rapidly than breadth-first search, an advantage when the search space is large.

Breadth-first search (BFS) and depth-first search (DFS) are exhaustive and form the foundation for all other search techniques.

Details of BFS………….

www.myreaders.info

This is an exhaustive search technique.

◊ The search generates all nodes at a particular level before proceeding

to the next level of the tree.

◊ The search systematically proceeds testing each node that is reachable

from a parent node before it expands to any child of those nodes.

◊ The control regime guarantees that the space of possible moves is

systematically examined; this search requires considerable memory

resources.

◊ The space that is searched is quite large and the solution may lie a

thousand steps away from the start node. It does, however, guarantee

that if we find a solution it will be the shortest possible.

Search terminates when a solution is found and the test returns true.

Uniform Cost Search: a variation of breadth-first search that expands the node with the least path cost.

Details of DFS………..


This is an exhaustive search technique to an assigned depth.

◊ Here, the search systematically proceeds to some depth d, before

another path is considered.

◊ If the maximum depth of search tree is three, then if this limit is

reached and if the solution has not been found, then the search

backtracks to the previous level and explores any remaining alternatives

at this level, and so on.

◊ It is this systematic backtracking procedure that guarantees that it will

systematically and exhaustively examine all of the possibilities.

◊ If the tree is very deep and the maximum depth searched is less than the maximum depth of the tree, then this procedure is "exhaustive modulo the depth" that has been set.

DFS vs BFS

‡ DFS: the first goal node found is not necessarily a minimum-depth (nearest to the root) goal node; on a large tree, it may take an excessively long time to find even a nearby goal node.

‡ BFS: the goal node found is a minimum-depth goal node; on a large tree, it may require excessive memory.

‡ How to combine the advantages and avoid the disadvantages? Perform depth-first searches with a depth limit.

Depth-first Iterative-Deepening (DFID)

DFID combines the best features of depth-first and breadth-first search.

Algorithm: Steps

1. First perform a depth-first search (DFS) to depth one.
2. Then, discarding the nodes generated in the first search, start over and do a DFS to level two.
3. Then to level three . . . until the goal state is reached.

Example…..

Consider a state space where the start state is number 1 and the successor

function for state n returns two states, numbers 2n and 2n + 1.

Suppose the goal state is 11. List the order in which nodes will be visited by breadth-first search and by depth-first search.

BFS: 1 → 2 → 3 → 4 → 5 → 6 → 7 → 8 → 9 → 10 → 11.

DFS: 1 → 2 → 4 → 8 → 9 → 5 → 10 → 11.
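The two visiting orders can be reproduced with a short sketch. The code assumes the state space is bounded at state 15, so a state n has children 2n and 2n + 1 only while they stay ≤ 15 (this bound makes the DFS order above finite); the function names are ours:

```python
LIMIT = 15  # assumed bound on the state space (states 1..15)

def successors(n):
    return [s for s in (2 * n, 2 * n + 1) if s <= LIMIT]

def visit_order(start, goal, depth_first):
    """Return the order in which states are visited until goal is reached."""
    order = []
    frontier = [start]                    # used as a stack (DFS) or queue (BFS)
    while frontier:
        n = frontier.pop() if depth_first else frontier.pop(0)
        order.append(n)
        if n == goal:
            return order
        kids = successors(n)
        # For DFS, push children reversed so the left child is expanded first.
        frontier.extend(reversed(kids) if depth_first else kids)
    return order
```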

Moving towards heuristic search

For the complex problems presented above, the traditional algorithms are unable to find a solution within practical time and space limits. Consequently, many special techniques have been developed that use heuristic functions.

Blind search is not always feasible, because it requires too much time or space (memory).

Heuristics are rules of thumb; they do not guarantee a solution to a problem. They help find a good solution quickly when applied correctly, and they require domain-specific information.

Characteristics of heuristic searches

‡ They offer a performance advantage over blind search.
‡ They use a heuristic (evaluation) function that gives the estimated merit of a state with respect to the goal.
‡ At each step, they expand the node with the best estimated merit with respect to the goal.
‡ They follow the path believed most likely to be leading to the goal state.
‡ They find a good solution quickly, presuming the heuristic function is efficient.

Comparison: Brute-force vs Heuristic search

Heuristic Algorithms

1. Generate and Test
2. Hill Climbing
   - Simple Hill Climbing
   - Steepest-Ascent Hill Climbing
   - Simulated Annealing
3. Best-First Search / Greedy Search
4. A* Algorithm (OR graph)
5. AO* Algorithm (AND-OR graph) [problem reduction]
6. Constraint Satisfaction Problems
7. Means-Ends Analysis

Generate and Test

    do while goal not accomplished:
        generate a possible solution
        test the solution to see if it is a goal

Heuristics may be used to restrict the rules for solution generation.

Generate-and-test method

The method first guesses a solution and then tests whether this solution is correct, i.e., whether the solution satisfies the constraints/conditions. It requires:

* a Generator, to enumerate possible solutions (hypotheses), and
* a Tester, to evaluate each proposed solution.

◊ The algorithm generates complete assignments of values to variables, which are rejected in the testing phase if they conflict.

◊ The generator leaves conflicting instantiations behind and generates other assignments independently of the conflict.

◊ For better efficiency, the GT approach needs to be supported by a backtracking approach.

Example: opening a combination lock without knowing the combination.

Example: Traveling Salesman Problem (TSP)

• A traveler needs to visit n cities.
• The distance between each pair of cities is known.
• We want to know the shortest route that visits all the cities once.
• For n = 80, solving exhaustively would take millions of years!

[Figure: a 4-city TSP instance on cities A, B, C, D with the pairwise distances labeled on the edges.]

Generate-and-test Example

• For TSP, generation of possible solutions is done in lexicographic order of cities:

  1. A - B - C - D
  2. A - B - D - C
  3. A - C - B - D
  4. A - C - D - B
  . . .

This exhaustive enumeration is not efficient.

• Test with heuristics/knowledge of the problem, e.g. the distance between cities, the closest city from the current city, etc.
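A minimal generate-and-test sketch for a 4-city TSP. The distance matrix below is hypothetical (illustrative values, not necessarily those of the figure above); the generator enumerates tours in lexicographic order and the tester keeps the shortest seen so far:

```python
from itertools import permutations

# Hypothetical pairwise distances for cities A..D (illustrative values only).
DIST = {("A", "B"): 6, ("B", "C"): 3, ("C", "D"): 4,
        ("A", "D"): 5, ("A", "C"): 2, ("B", "D"): 1}

def d(x, y):
    return DIST.get((x, y), DIST.get((y, x)))

def tour_length(tour):
    # Closed tour: return to the starting city at the end.
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def generate_and_test(cities):
    best, best_len = None, float("inf")
    # Generator: enumerate candidate tours in lexicographic order (first city fixed).
    for rest in permutations(cities[1:]):
        tour = (cities[0],) + rest
        # Test: keep the candidate if it beats the best seen so far.
        if tour_length(tour) < best_len:
            best, best_len = tour, tour_length(tour)
    return best, best_len
```

Fixing the first city avoids generating rotations of the same closed tour; the enumeration is still exhaustive over the remaining (n-1)! orderings.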

Hill Climbing

• Variation on generate-and-test:

– generation of next state depends on feedback from the test

procedure.

– Test now includes a heuristic function that provides a guess as to

how good each possible state is.

• There are a number of ways to use the information returned by the test procedure.

Hill climbing

The algorithm:

    select a heuristic function;
    set C, the current node, to the highest-valued initial node;
    loop:
        select N, the highest-valued child of C;
        return C if its value is better than the value of N;
        otherwise set C to N;
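The loop above can be sketched in Python; `children` and `value` are placeholders for the problem's successor function and heuristic, and the toy landscape at the end is ours:

```python
def hill_climb(start, children, value):
    """Repeatedly move to the highest-valued child; stop when no child
    is better than the current node (a peak, possibly only a local one)."""
    current = start
    while True:
        succs = list(children(current))
        if not succs:
            return current                 # dead end: nowhere to go
        best = max(succs, key=value)       # N, the highest-valued child of C
        if value(current) >= value(best):  # C is better: return C
            return current
        current = best                     # otherwise set C to N

# Toy landscape: climb f(x) = -(x - 5)^2 over the integers, stepping +/- 1.
f = lambda x: -(x - 5) ** 2
peak = hill_climb(0, lambda x: [x - 1, x + 1], f)
```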

Simple Hill Climbing

[Figure: function optimization, maximizing y = f(x).]

Hill Climbing

• Visualize the choices as a 2-dimensional space,

apply a heuristic and seek the state that takes you

uphill the maximum amount

• In actuality, many problems will be viewed in

more than 2 dimensions

– In 3-dimensions, the heuristic worth represents “height”

• To solve a problem, pick a next state that moves

you “uphill”

• Variations:

– Simple Hill Climbing

– Steepest Ascent Hill Climbing

– Simulated Annealing

Simple Hill Climbing

• Use heuristic to move only to states that are better than the

current state.

• The process ends when all operators have been applied and

none of the resulting states are better than the current state.


Simple Hill Climbing….Algorithm

• Given initial state perform the following

– Set initial state to current

– Loop on the following until goal is found or no more

operators available

• Select an operator and apply it to create a new state

• Evaluate new state

• If new state is better than current state, perform operator

making new state the current state

– Once the loop is exited, either we have found the goal, or no operators remain and we are stuck (e.g., at a local maximum)

• This algorithm only tries to improve at each selection; it does not try to find the best solution

Simple Hill Climbing Example

• TSP: define the state space as the set of all possible tours. Operators exchange the positions of pairs of cities within the current tour.

[Figure: TSP hill-climb state space starting from the initial tour ABCD, with successors generated by swap operators, e.g. Swap 2,3; Swap 3,4; Swap 4,1.]

Steepest-Ascent Hill Climbing

• A variation on simple hill climbing.
• Instead of moving to the first state that is better, move to the best possible state that is one move away (the best successor).
• The order in which operators are applied does not matter.
• This is analogous to always climbing in the direction of the steepest slope.

Steepest Ascent Hill Climbing

• Here, we attempt to improve on the previous Hill

Climbing algorithm

– Given initial state perform the following

• Set initial state to current

• Loop on the following until goal is found or a complete iteration

occurs without change to current state

– Generate all successor states to current state

– Evaluate all successor states using heuristic

– Select the successor state that yields the highest heuristic value

and perform that operator

• Notice that this algorithm can lead us to a state that has no better move; this is called a local maximum (other such phenomena are plateaus and ridges)

Hill-climbing search

• Problem: depending on initial state, can get stuck in local

maxima

Problems detailed & Solution

1. The local maximum problem: the search reaches a peak, but it is lower than the highest peak in the whole space.

2. The plateau problem: all local moves are equally unpromising, and all peaks seem far away, indicating a flat area.

3. The ridge problem: almost every move takes us down.

Solution:

Random-walk hill climbing keeps walking (accepting sideways moves) until a better state is found.

Random-restart hill climbing is a series of hill-climbing searches with a randomly selected start node whenever the current search gets stuck.

Both random-walk and random-restart help the search escape such traps.
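Random-restart hill climbing can be sketched as below. The landscape `f` is a made-up two-peaked function (global maximum 9 at x = 12, local maximum 4 at x = 3), used only to show a restart escaping the lower peak:

```python
import random

def climb(x, f):
    """Greedy integer ascent of f, stepping +/-1 until no neighbor improves."""
    while True:
        nxt = max((x - 1, x + 1), key=f)
        if f(nxt) <= f(x):
            return x
        x = nxt

def random_restart(f, lo, hi, restarts=10, seed=0):
    """Run climb() from several random start points; keep the best peak found."""
    rng = random.Random(seed)
    peaks = [climb(rng.randint(lo, hi), f) for _ in range(restarts)]
    return max(peaks, key=f)

# Two-peaked landscape: local peak f(3) = 4, global peak f(12) = 9.
f = lambda x: max(4 - (x - 3) ** 2, 9 - (x - 12) ** 2)
best = random_restart(f, 0, 20)
```

A single climb started left of the valley settles on the local peak at x = 3; with several random starts, at least one lands in the basin of the global peak.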

Simulated annealing

The intuition for this search method, an

improvement on hill-climbing, is the process

of gradually cooling a liquid. There is a

schedule of "temperatures", changing in time.

Reaching the "freezing point" (temperature 0)

stops the process.

Instead of a random restart, we use a more systematic method: allow some "bad" (downhill) moves, but gradually decrease their frequency.


Simulated Annealing

• Based on physical process of annealing a

metal to get the best (minimal energy) state.

• It is Hill climbing with a twist:

– allow some moves downhill (to worse states)

– start out allowing large downhill moves (to much

worse states) and gradually allow only small

downhill moves.


Simulated Annealing (cont.)

• The search initially jumps around a lot, exploring many regions of the state space.
• As the temperature drops, the jumps become smaller and the search behaves more like a simple hill climb (a search for a local optimum).

Simulated annealing

    inputs: problem, a problem
            schedule, a mapping from time to "temperature"
    local variables: current, next: nodes
            T, a temperature controlling the probability of downward steps

    current ← MAKE-NODE(INITIAL-STATE[problem])
    for t ← 1 to ∞ do
        T ← schedule[t]
        if T = 0 then return current
        next ← a randomly selected successor of current
        ΔE ← VALUE[next] − VALUE[current]
        if ΔE > 0 then current ← next
        else current ← next only with probability e^(ΔE/T)
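A runnable sketch of the pseudocode above; the names, the toy objective, and the linear cooling schedule are illustrative choices of ours:

```python
import math
import random

def simulated_annealing(start, neighbor, value, schedule, seed=0):
    """Accept every uphill move; accept a downhill move of size dE < 0
    with probability e^(dE/T); stop when the temperature reaches 0."""
    rng = random.Random(seed)
    current = start
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbor(current, rng)
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt

# Maximize f(x) = -(x - 7)^2 over the integers with a linear cooling schedule.
f = lambda x: -(x - 7) ** 2
result = simulated_annealing(
    start=0,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
    value=f,
    schedule=lambda t: 2.0 - 0.002 * t,   # reaches temperature 0 at t = 1000
)
```

Early on (high T) the walk accepts many downhill steps; as T approaches 0 it becomes effectively greedy and settles near the maximum at x = 7.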

Best-first search / Greedy search

– One problem with hill climbing is that you are throwing out old states when you move uphill, yet some of those old states may wind up being better than a few uphill moves.

– The algorithm uses two sets: Open nodes (which can still be selected) and Closed nodes (already selected).

• Start with Open containing the initial state.
• While current ≠ goal and there are nodes left in Open do:
  – Set current = best node in Open, and move current to Closed.
  – Generate current's successors.
  – Add the successors to Open if they are not already in Open or Closed.

[Figure: an example search tree with heuristic values at each node: A (5); B (4), C (3), D (6); G (6), H (4), E (2), F (3); I (3), J (8).]

Best-First Search/Greedy Search

• Combines the advantages of Breadth-First and

Depth-First searches.

– DFS: follows a single path, don’t need to generate

all competing paths.

– BFS: doesn’t get caught in loops or dead-end paths.

• Best-first search: explore the most promising path seen so far.

Best-First Search/Greedy (cont.)

While the goal is not reached:

1. Generate all potential successor states and add them to a list of states.
2. Pick the best state in the list and go to it.

Greedy search uses an evaluation function f(n). Here f(n) is the heuristic h(n), the estimated cost from n to the goal; the search chooses the node with the lowest h(n). (Choosing by the path cost g(n) instead gives uniform-cost search.)

• Similar to steepest-ascent, but states that are not chosen are not thrown away.


Greedy Search

Taking the straight-line (air) distance as the heuristic: the shorter the air distance, the better the heuristic value h(n).

Possibilities from Arad: go to Zerind, Timisoara, or Sibiu?
Air distances: Zerind (374), Timisoara (329), or Sibiu (253).
Sibiu has the smallest air distance, hence go to Sibiu.

Possibilities from Sibiu: Arad, Fagaras, Oradea, or Rimnicu Vilcea?
Air distances: Arad (366), Fagaras (176), Oradea (380), Rimnicu Vilcea (193).
Fagaras has the smallest air distance, hence go to Fagaras.

Possibilities from Fagaras: go to Sibiu or Bucharest?
Air distances: Sibiu (253) or Bucharest (0).
Bucharest has the smallest air distance, hence go to Bucharest.

The route found is Arad → Sibiu → Fagaras → Bucharest: 140 km + 99 km + 211 km = 450 km.
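The walk-through above can be sketched as code. The heuristic values are those quoted above; the road distances for the edges not on the chosen route (Arad–Zerind, Arad–Timisoara, Sibiu–Oradea, Sibiu–Rimnicu Vilcea) are filled in from the standard Romania-map example, since only the three distances on the route itself appear in the text:

```python
# Straight-line ("air") distance to Bucharest: the heuristic h(n).
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193, "Bucharest": 0}

# Road distances, restricted to the cities mentioned above.
ROADS = {
    "Arad":    {"Zerind": 75, "Timisoara": 118, "Sibiu": 140},
    "Sibiu":   {"Arad": 140, "Oradea": 151, "Rimnicu Vilcea": 80, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}

def greedy_route(start, goal):
    """At each city, move to the neighbor with the smallest h(n).
    (No revisit check: sufficient for this example, not in general.)"""
    path, cost = [start], 0
    while path[-1] != goal:
        city = path[-1]
        nxt = min(ROADS[city], key=H.get)   # neighbor with best heuristic
        cost += ROADS[city][nxt]
        path.append(nxt)
    return path, cost
```

Note that greedy search commits to the locally best-looking city at every step; it happens to reach Bucharest here, but in general it can loop or miss shorter routes.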


Variations of Best-first

• A* Algorithm – add to the heuristic the cost

of getting to that node

– For instance, if a solution takes 20 steps and

another takes 10 steps, even though the 10-step

solution may not be as good, it takes less effort

to get there

• Problem-Reduction – use AND/OR graphs

where some steps allow choices and others

require combined steps

– AO* Algorithm – A* variation for AND/OR

graphs

– Alpha-Beta Pruning – use a threshold to remove

any possible steps that look too poor to consider

The A* procedure

Hill-climbing (and its clever versions) may miss an optimal solution. Here is a search method that ensures optimality of the solution.

The algorithm:

    keep a list of partial paths (initially just root-to-root, of length 0);
    repeat
        succeed if the first path P on the list reaches the goal node;
        otherwise remove path P from the list;
        extend P in all possible ways, and add the new paths to the list;
        sort the list by the sum of two values: the real cost of P so far,
            plus an estimate of the remaining distance;
        prune the list by keeping only the cheapest path for each node
            reached so far;
    until success or the list of paths becomes empty;
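A compact sketch of the procedure above using a priority queue; the graph and heuristic are made up for illustration (the heuristic is admissible: it never overestimates the remaining cost):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the partial path with the smallest f = g + h; keep only the
    cheapest known g per node (the pruning step of the procedure above)."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph: edge lists with step costs, plus an admissible heuristic.
G = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
     "B": [("G", 3)], "G": []}
H = {"S": 5, "A": 4, "B": 2, "G": 0}
path, cost = a_star("S", "G", lambda n: G[n], H.get)
```

The direct edge A→G (cost 12) is never taken: its f value stays worse than the S→A→B→G route, so A* finds the optimal path of cost 6.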

A* Algorithm

• The A* algorithm uses a modified evaluation function and a best-first search.
• With an admissible heuristic, A* finds the optimal solution.

A* evaluation function

• The evaluation function f is an estimate of the value of a node n, given by:

  f(n) = g(n) + h'(n)

• g(n) is the cost to get from the start state to state n [the path cost].

• h'(n) is the estimated cost to get from state n to the goal state [the heuristic, e.g. straight-line distance].

– Loops are avoided: we don't expand the same state twice.
– Information about the path to the goal state is retained.

Example: A*

Consider the following map. Use the following cost functions:

g(n) = the cost of each move, given as the distance between towns (shown on the map).

h(n) = the straight-line distance between any town and town M, given in the table below.

Provided: straight-line distance to M

  B 222   F 136   J 60
  C 166   G 122   K 32
  D 192   H 111   L 102

Search tree: the best route to take is A, C, F, K, M, at a cost of 236.

AO* Algorithm

• The AO* algorithm is a type of heuristic search algorithm.
• AO* is used when a problem can be divided into sub-parts whose solutions can then be combined.
• AO* in artificial intelligence is represented using an AND-OR graph (or AND-OR tree).
• An AND-OR graph has one or more AND arcs in it.

[Figure: OR connector vs AND connector in an AND-OR graph.]

Difference between A* and AO*

• The A* algorithm is an OR-graph algorithm that is used to find a single solution path.
• The AO* algorithm is an AND-OR-graph algorithm, whose solution may combine more than one sub-solution.

AND/OR graphs

• Some problems are best represented as achieving subgoals, some of which must be achieved simultaneously and independently (AND).
• Up to now, we have only dealt with OR options.

[Figure: classic AND-OR graph example with the goal "Possess TV set".]

Searching AND/OR graphs

• A solution in an AND-OR tree is a subtree whose leaves are included in the goal set.
• The cost of an AND node is the sum of the costs of its required successors: f(n) = f(n1) + f(n2) + … + f(nk).
• How do we search AND-OR trees? The AO* algorithm.

AND/OR Best-First-Search

• Traverse the graph (from the initial node)

following the best current path.

• Pick one of the unexpanded nodes on that path

and expand it. Add its successors to the graph

and compute f for each of them

• Change the expanded node’s f value to reflect

its successors. Propagate the change up the

graph.

• Reconsider the current best solution and

repeat until a solution is found

AND/OR search

• We must examine several nodes simultaneously when choosing the next move.

[Figure: two AND-OR graphs with node values, e.g. A with successors B, C, D, and leaf values E (5), F (10), G (3), H (4), I (15), J (10).]


AO* algorithm

1. Let G be a graph with only starting node INIT.

2. Repeat the followings until INIT is labeled SOLVED or

h(INIT) > FUTILITY

a) Select an unexpanded node from the most promising path from

INIT (call it NODE)

b) Generate successors of NODE. If there are none, set h(NODE)

= FUTILITY (i.e., NODE is unsolvable); otherwise for each

SUCCESSOR that is not an ancestor of NODE do the

following:

i. Add SUCCESSOR to G.

ii. If SUCCESSOR is a terminal node, label it SOLVED and set h(SUCCESSOR) = 0.

iii. If SUCCESSOR is not a terminal node, compute its h value.

AO* algorithm (Cont.)

c) Propagate the newly discovered information up the graph

by doing the following: let S be set of SOLVED nodes or

nodes whose h values have been changed and need to have

values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following:

i. Select from S a node none of whose descendants in G occurs in S, remove it from S, and call it CURRENT.

ii. Compute the cost of each of the arcs emerging from CURRENT, and assign the minimum cost over its successors as its h.

iii. Mark the best path out of CURRENT by marking the arc that had the minimum cost in step ii.

iv. Mark CURRENT as SOLVED if all of the nodes connected to it through the newly labeled arc have been labeled SOLVED.

v. If CURRENT has been labeled SOLVED or its cost was just

changed, propagate its new cost back up through the graph. So add

all of the ancestors of CURRENT to S.

Constraint Satisfaction Problem

Many AI problems can be viewed as problems of constraint satisfaction (CSPs) and are solved through search.

Examples of constraints:
- The sum of the three angles of a triangle is 180 degrees.
- The sum of the currents flowing into a node must equal zero.

Constraints and Constraint Satisfaction

◊ Constraints relate objects without precisely specifying their positions; if any one object is moved, the relation must still be maintained. Example: "the circle is inside the square".

◊ Constraint satisfaction is the process of finding a solution to a set of constraints. The constraints articulate the allowed values for variables, and finding a solution means finding an evaluation of these variables that satisfies all the constraints.

Constraint Satisfaction Problems (CSPs)

◊ CSPs are all around us: managing work, home life, budgeting expenses, and so on; wherever we do not succeed in finding a solution, we run into problems. We need to find solutions to such problems satisfying all the constraints. CSPs are solved through search.

N-Queens example: the difficulty

In the 8-Queens puzzle, 8 queens must be placed so that no queen can attack any other queen; the puzzle has 92 distinct solutions.

Humans find it hard to solve the N-Queens puzzle as N becomes large. The possible numbers of configurations are:

- For 4-Queens there are 256 different configurations.
- For 8-Queens there are 16,777,216 configurations.
- For 16-Queens there are 18,446,744,073,709,551,616 configurations.
- In general, for N-Queens there are N^N configurations (one queen per column, N row choices for each of N columns).

For N = 16, exhaustive checking would take about 12,000 years on a fast machine.

How do we solve such problems ?

Three computer based approaches or models are stated below. They are

‡ Generate and Test (GT) ,

‡ Backtracking (BT) and

‡ Constraint Satisfaction Problems (CSPs)

Finding a solution

Finding an assignment of values to the variables that satisfies all of the constraints is hard: it is NP-complete in general.

A solution is found by searching through the space of possible assignments of values to variables. If each variable has d possible values and there are n variables, then there are d^n possible assignments.

Constraint Satisfaction Problem (CSP) and its solution

A CSP consists of:

‡ Variables: a finite set X = {x1, . . . , xn},
‡ Domains: a finite set Di of possible values that each variable xi can take,
‡ Constraints: restrictions on the values that the variables can simultaneously take (e.g. x1 ≠ x2).

◊ A solution to a CSP is an assignment of a value from its domain to every variable such that every constraint is satisfied; the requirement could be:

‡ one solution, with no preference as to which one,
‡ all solutions, or
‡ an optimal or good solution [a Constraint Optimization Problem (COP)].

Understanding the Problem

Place N queens on an N×N board satisfying the constraint that no two queens threaten each other (a queen threatens other queens on the same row, column, and diagonal).

Solution: to model this problem,

◊ Assume that each queen is in a different column;
◊ Assign a variable Ri (i = 1 to N) to the queen in the i-th column, indicating the row position of the queen;
◊ Apply the "no-threatening" constraints between each pair Ri and Rj of queens, and run the algorithm.

◊ Example: the 8-Queens puzzle

Example..
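The model above (one variable Ri per column, holding the queen's row) maps directly onto a backtracking search; a sketch:

```python
def n_queens_solutions(n):
    """Count placements of n queens, one per column, with no two attacking.
    rows[c] is the row of the queen in column c (the variable R_c above)."""
    def safe(rows, r):
        c = len(rows)  # index of the column being filled
        # Conflict if same row, or same diagonal (equal row/column offsets).
        return all(r != rr and abs(r - rr) != c - cc
                   for cc, rr in enumerate(rows))

    def extend(rows):
        if len(rows) == n:
            return 1                         # all constraints satisfied
        # Try every row for the next column; backtrack on conflict.
        return sum(extend(rows + [r]) for r in range(n) if safe(rows, r))

    return extend([])
```

For n = 8 this finds the 92 distinct solutions mentioned above while examining far fewer than the 8^8 complete configurations, because conflicting partial assignments are abandoned early.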

Properties of Constraints

Constraints may specify partial information; a constraint need not uniquely specify the values of its variables.

Constraints are non-directional; typically a constraint on (say) V1 and V2 can be used to infer a constraint on V1 given a constraint on V2, and vice versa.

Constraints are declarative; they specify what relationship must hold without specifying a computational procedure to enforce that relationship.

Constraints are additive; the order of imposition of constraints does not matter; all that matters in the end is that the conjunction of constraints is in effect.

Constraints are rarely independent; typically the constraints in the constraint store share variables.

Means-Ends Analysis

Compare the current state to the goal state

Pick the operator which moves the problem most

towards the goal state

Repeat until the goal state has been reached

Forward and backward chaining may be used

Subgoaling required

Generating intermediate states or steps

Means-Ends is often used in planning problems,

but can be applied in other situations too such

as math theorem proving

Means-ends analysis is much like top-down

design in programming

Means-Ends Analysis

Allows both backward and forward searching.

This means we could solve major parts of a problem first and then return to

smaller problems when assembling the final solution.

General Problem Solver was the first AI program to exploit means-ends

analysis.

STRIPS (A robot Planner) is an advanced problem solver that incorporates

means-ends analysis and other techniques.

How it works?

Compare the current state to the goal state

Pick the operator which moves the problem most towards the goal state

Repeat until the goal state has been reached

Forward and backward chaining may be used

Sub-goaling required

Generating intermediate states or steps

Means-Ends is often used in planning problems, but can be

applied in other situations too such as math theorem proving

Means-ends analysis (2)

An example: planning a trip.

[Difference table: rows give the distance still to be covered (> 1000 miles,

100-1000, 10-100, < 1); an X marks which travel procedures apply to each

range. The procedure column labels are not recoverable from the original.]
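The table's idea can be sketched as a small difference-reduction loop. The operator names (fly, drive, walk), their thresholds, and the distances they cover are assumptions for illustration, not taken from the original table.

```python
def plan_trip(distance):
    """Means-ends loop: at each step, apply the applicable operator that
    most reduces the difference (the distance still to be covered)."""
    # (name, minimum distance at which it applies, miles it covers per use);
    # these thresholds are illustrative, not from the original table.
    operators = [("fly", 1000, 1000), ("drive", 100, 100), ("walk", 0, 1)]
    plan = []
    while distance > 0:
        # pick the operator that moves the problem most towards the goal
        name, _, covers = max((op for op in operators if distance >= op[1]),
                              key=lambda op: op[2])
        plan.append(name)
        distance -= min(covers, distance)
    return plan

plan = plan_trip(2345)    # fly twice, drive three times, then walk
```

Each iteration mirrors the "compare current state to goal, pick the best applicable operator, repeat" cycle described above.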

Local Search algorithms

The search algorithms seen so far explore the search space

systematically by keeping one or more paths in memory and by recording which

alternatives have been explored.

When a goal is found, the path to that goal constitutes a

solution.

Local search algorithms can be very helpful if we are

interested in the solution state but not in the path to that

goal. They operate only on the current state and move

into neighboring states .

In computer science, local search is a heuristic method

for solving computationally hard optimization problems.

Local search algorithms move from solution to solution in the

space of candidate solutions (the search space) by

applying local changes, until a solution deemed optimal

is found or a time bound has elapsed.

Local Search algorithms

They use very little memory

They can find reasonable solutions in large or infinite

(continuous) state spaces.

Some examples of local search algorithms are:

Hill-climbing

Simulated annealing

Genetic algorithms.
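As a concrete sketch of the first of these, hill-climbing keeps only the current state and greedily moves to the best neighbor. The generic `hill_climb` function and the toy objective below are illustrative assumptions.

```python
def hill_climb(state, neighbors, score, max_steps=1000):
    """Greedy local search: no paths are kept in memory, only the current
    state; stop at a local maximum (no neighbor improves the score)."""
    for _ in range(max_steps):
        candidates = neighbors(state)
        if not candidates:
            return state
        best = max(candidates, key=score)
        if score(best) <= score(state):
            return state                 # local maximum reached
        state = best
    return state

# Usage: maximize f(x) = -(x - 3)^2 over the integers, starting from 0.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
# peak == 3
```

Note the memory profile matches the claim above: only one state is ever stored, which is also why hill-climbing can get stuck on local maxima (the motivation for simulated annealing and genetic algorithms).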

Genetic Algorithms

GAs are part of evolutionary computing, a rapidly growing area of AI.

■ Genetic algorithms are implemented as a computer simulation, using

techniques inspired by evolutionary biology.

■ Mechanics of biological evolution

◊ Every organism has a set of rules, describing how that organism is built, and encoded in

the genes of an organism.

◊ The genes are connected together into long strings called chromosomes.

◊ Each gene represents a specific trait (feature) of the organism and

has several different settings, e.g. setting for a hair color gene may be black or brown.

◊ The genes and their settings are referred as an organism's genotype.

◊ When two organisms mate they share their genes. The resultant offspring may end up

having half the genes from one parent and half from the other. This process is called

cross over.

◊ A gene may be mutated and expressed in the organism as a completely new trait.

■ Thus, Genetic Algorithms are a way of solving problems by mimicking the processes

nature uses (selection, crossover, mutation and accepting) to evolve a solution to a

problem.

Steps in GA

(1) [Start] Generate random population of n chromosomes (Encode suitable solutions for the problem)

(2) [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.

(3) [New population] Create a new population by repeating following steps until the new population is

complete.

(a) [Selection] Select two parent chromosomes from a population according to their fitness.

(b) [Crossover] With a crossover probability, cross over the parents to form new offspring (children). If

no crossover was performed, offspring is the exact copy of parents.

(c) [Mutation] With a mutation probability, mutate new offspring at each locus (position in

chromosome).

(d) [Accepting] Place new offspring in the new population.

(4) [Replace] Use new generated population for a further run of the algorithm.

(5) [Test] If the end condition is satisfied, stop, and return the best solution in current population.

(6) [Loop] Go to step 2.

■ Genetic Algorithms perform unsupervised learning: the right answer is not known beforehand.
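The six steps above can be sketched as follows. The bit-string encoding, the tournament selection, and the OneMax fitness (`sum`, counting 1-bits) are illustrative assumptions, not prescribed by the steps themselves.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      cx_prob=0.9, mut_prob=0.02):
    """A minimal GA following steps (1)-(6) above, on bit-string chromosomes."""
    # (1) Start: generate a random population of n chromosomes.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # (2)+(3a) Fitness-based selection: fitter of two random parents.
            a, b = random.sample(pop, 2)
            return max(a, b, key=fitness)
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            # (3b) Crossover with probability cx_prob, else copy the parents.
            if random.random() < cx_prob:
                cut = random.randrange(1, length)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                # (3c) Mutation: flip each locus with probability mut_prob.
                for i in range(length):
                    if random.random() < mut_prob:
                        child[i] = 1 - child[i]
                # (3d) Accepting: place the offspring in the new population.
                new_pop.append(child)
        # (4) Replace: use the new population for the next run.
        pop = new_pop[:pop_size]
    # (5) Test: return the best solution in the current population.
    return max(pop, key=fitness)

best = genetic_algorithm(sum)    # maximize the number of 1s in the string
```

With these settings the population typically converges to a near-all-ones chromosome within the 100 generations, showing the "right answer unknown beforehand" character: only the fitness function guides the search.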


Searching with partial observations

So far we have assumed that the environment is fully observable and deterministic, and

that the agent knows what the effects of each action are

Therefore, the agent can calculate exactly which state results from any

sequence of actions and always knows which state it is in

Its percepts provide no new information after each action

In a more realistic situation the agent’s knowledge of states and actions is

incomplete.

If the agent has no sensors at all, then as far as it knows it could be in one of

several possible initial states, and each action might therefore lead to one

of several possible successor states

Searching with partial observations

contd….

An agent without sensors (vacuum cleaner without sensors), thus, must

reason about sets of states that it might get to, rather than single states

At each instant the agent has a belief about which states it might be in

A solution is now a path that leads to a belief state, all of whose members

are goal states

If the environment is partially observable or if the effects of actions are

uncertain, then each new action yields new information

Searching with partial observations

contd….

The cause of uncertainty may be another agent, an adversary

When the states of the environment and actions are uncertain, the agent

has to explore its environment to gather information

In a partially observable world one cannot determine a fixed action

sequence in advance, but needs to condition actions on future percepts

As the agent can gather new knowledge through its actions, it is often not

useful to plan for each possible situation

Rather, it is better to interleave search and execution
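These ideas can be made concrete for the sensorless two-cell vacuum world mentioned above: the agent searches over belief states (sets of physical states) rather than single states. The state encoding and function names below are illustrative assumptions.

```python
from collections import deque

def step(state, action):
    """Physical transition model: state = (location, frozenset of dirty cells)."""
    loc, dirt = state
    if action == "Left":
        return (0, dirt)
    if action == "Right":
        return (1, dirt)
    return (loc, dirt - {loc})          # "Suck" cleans the current cell

def conformant_plan():
    """Breadth-first search over belief states; a solution is a plan whose
    final belief state contains only goal states (no dirt anywhere)."""
    start = frozenset((loc, frozenset(d))
                      for loc in (0, 1)
                      for d in [(), (0,), (1,), (0, 1)])   # could be anywhere
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        belief, plan = frontier.popleft()
        if all(not dirt for _, dirt in belief):
            return plan                 # every state we might be in is a goal
        for action in ("Left", "Right", "Suck"):
            nxt = frozenset(step(s, action) for s in belief)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

plan = conformant_plan()   # a 4-action plan such as Left, Suck, Right, Suck
```

Even with no sensors, the plan is guaranteed to work from every possible initial state, because the search only accepts a belief state all of whose members are goals.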

Games….stochastic games….

The object of a search is to find a path from the starting position to a goal

position

In a puzzle-type problem, you (the searcher) get to choose every move

In a two-player competitive game, you alternate moves with the other

player

The other player doesn’t want to reach your goal

Hence your search technique must take the opponent’s moves into account.

Payoffs

Each game outcome can be summarized by a single

number

By convention, we prefer positive numbers

In some games, the outcome is either a simple win (+1) or a simple

loss (-1)

In some games, you might also tie, or draw (0)

In other games, outcomes may be other numbers (say, the amount

of money you win at Poker)

Zero-sum games

Most common games are zero-sum: What I win ($12), plus what you win (-

$12), equals zero

Not all games are zero-sum games

For simplicity, we consider only zero-sum games

From our point of view, positive numbers are good, negative numbers are

bad

From our opponent’s point of view, positive numbers are bad, negative

numbers are good

Heuristics

A heuristic function computes, for a given node, your best guess as to

what the payoff will be

The heuristic function uses whatever knowledge you can build into

the program

We make two key assumptions:

Your opponent uses the same heuristic function as you do

The more moves ahead you look, the better your heuristic function will

work

PBVs (Preliminary Backed-up Values)

Explore down to a given level using depth-first search

As you reach each lowest-level node, evaluate it using your heuristic function

Back up values to the next higher node, according to the following rules:

If it’s your move, bring up the largest value, possibly replacing a smaller value

If it’s your opponent’s move, bring up the smallest value, possibly replacing a larger

value

Using PBVs

[Worked tree example with leaf values 8, 5, 2, 9, -3; your moves and your

opponent’s moves alternate by level:]

Explore 8 and bring it up; explore 5; smaller than 8, so ignore it

Backtrack; bring 8 up another level

Explore 2; bring it up

Explore 9; better than 2, so bring it up, replacing 2

9 is not better than 8 (for your opponent), so ignore it

Explore -3, bring it up

Etc.

Bringing up values

If it’s your move, and the next child of this node has a larger value than this

node, replace this value

If it’s your opponent’s move, and the next child of this node has a smaller

value than this node, replace this value

At your move, never reduce a value

At your opponent’s move, never increase a value

Alpha cutoffs

[Tree example:] Your move is worth 8 (so far)

If you move right instead, the value there is 1 (so far)

Your opponent will never increase the value at that node; it will

always be less than 8

You can ignore the remaining nodes: this is an alpha cutoff

Alpha cutoffs, in more detail

[In the tree example, the node being examined has a parent with a PBV of 8.]

You have an alpha cutoff when:

You are examining a node at which it is your opponent’s move, and

You have a PBV for the node’s parent, and

You have brought up a PBV that is less than the PBV of the node’s parent, and

The node has other children (which we can now “prune”)

Beta cutoffs

Recall that an alpha cutoff occurs where:

It is your opponent’s turn to move

You have computed a PBV for this node’s parent

The node’s parent has a higher PBV than this node

This node has other children you haven’t yet considered

A beta cutoff occurs where:

It is your turn to move

You have computed a PBV for this node’s parent

The node’s parent has a lower PBV than this node

This node has other children you haven’t yet considered

Using beta cutoffs

Beta cutoffs are harder to understand, because you have to see things

from your opponent’s point of view

Your opponent’s alpha cutoff is your beta cutoff

We assume your opponent is rational, and is using a heuristic function

similar to yours

Even if this assumption is incorrect, it’s still the best we can do

The importance of cutoffs

If you can search to the end of the game, you know exactly how to

play

The further ahead you can search, the better

If you can prune (ignore) large parts of the tree, you can search

deeper on the other parts

Since the number of nodes at each level grows exponentially, the

higher you can prune, the better

You can save exponential time
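The whole scheme (PBVs plus alpha and beta cutoffs) can be sketched as depth-limited minimax with alpha-beta pruning. The generic `alphabeta` signature and the toy tree, which reuses the leaf values 8, 5, 2, 9, -3 from the examples above, are illustrative assumptions.

```python
def alphabeta(node, depth, alpha, beta, maximizing, heuristic, children):
    """Depth-limited minimax with alpha-beta pruning; heuristic(node) is the
    caller-supplied evaluation and children(node) lists the successors."""
    kids = children(node)
    if depth == 0 or not kids:
        return heuristic(node)
    if maximizing:                       # your move: bring up the largest value
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, heuristic, children))
            alpha = max(alpha, value)
            if alpha >= beta:            # cutoff: prune the remaining children
                break
        return value
    else:                                # opponent's move: bring up the smallest
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, heuristic, children))
            beta = min(beta, value)
            if alpha >= beta:            # cutoff: prune the remaining children
                break
        return value

# A toy two-ply tree using the leaf values 8, 5, 2, 9, -3 from the examples:
tree = {"root": ["L", "R"], "L": [8, 5], "R": [2, 9, -3]}
value = alphabeta("root", 2, float("-inf"), float("inf"), True,
                  lambda n: n if isinstance(n, int) else 0,
                  lambda n: tree.get(n, []))
# value == 5: exploring 2 under R triggers a cutoff, so 9 and -3 are pruned
```

The pruning happens exactly as described above: once the opponent's node under R brings up 2, which is below the 5 already guaranteed at the root, its remaining children need never be examined.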
