
Artificial Intelligence

CSL7540
Deepak
(Based on the courses from UCB, UW and IIT Delhi)
Logistics
• Timings: Mon (4 PM) / Wed (2 PM) / Fri (2 PM)

• Join Google classroom: izoj4yc

• Google meet link for lectures: https://meet.google.com/spq-upzz-fqk

• Textbook: Artificial Intelligence: A Modern Approach (3rd edition), Russell and Norvig
Eligibility and prerequisites
• Official prerequisites
• None

• Requirements
• Probability and statistics
• Algorithms
• Coding skills
Evaluation policy (tentative)
• Assignments 2/3 – 20%
• Project – 20%
• Quizzes, reading assignments, and self-study questions – 20%
• Final Exam – 40%
Course objectives
• A brief intro to the philosophy of AI
• A brief intro to the breadth of ideas in AI
• Be able to write from scratch, debug and run (some) AI algorithms.

• Lectures will be oriented toward theory and modelling
• Why, What, How
• Assignments will be balanced toward applications

• Workload – HIGH!!!


Motivation
History of AI
1997: Deep Blue defeated the world chess champion

2005: Self-driving cars
2006: Dawn of a new era – Deep Learning
What will we learn
• Search
• Planning
• Constraint Satisfaction
• Logics
• Knowledge representation
• Learning
• Reasoning
• RL
What is Artificial Intelligence?
Deep Blue

Was Deep Blue intelligent?
Deep Blue
• Brute-force computing power
• Massive parallel computation
• Human-designed search procedure
• Terminal position
• Win/lose/draw
• Certain search depth (e.g. 10 moves ahead)
• Searching up to 200 million positions/sec
Is it human-like intelligence?
Deep Blue

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
– Legg and Hutter

• If the goal was to win at chess against a world champion, Deep Blue was intelligent
Intelligence vs. humans
• Are humans intelligent?

• Are humans rational?

• Can non-human behaviour be intelligent?


What is AI?
What is AI?
• AI is the science of making machines or programs that:
Thinking humanly
• How do humans think?
• Cognitive science
• Neuroscience

• Do we want a machine that beats humans, or a machine that thinks like humans while beating them?

• Eliza - https://www.masswerk.at/elizabot/
Thinking rationally
• Irrefutable reasoning processes
• Ensures that all actions performed by a computer are formally provable from
inputs and prior knowledge
• John is a man
• All men are mortal
• therefore, John is mortal
• Field of logic
• Limitations
• Logical inference does not cover everything
• Reflexes
• Combinatorial explosion (in time and space)
Acting humanly: The Turing Test approach
• An agent would not pass the Turing Test without the following
requirements:
• natural language processing
• knowledge representation
• automated reasoning
• machine learning
• computer vision (total Turing test)
• robotics (total Turing test)
• Problems
• What if people call a human a machine?
• Make human-like errors
• No possible mathematical analysis
Acting rationally
• Rational behaviour: doing the right thing
• Need not always be deliberative
• Reflexive
• A rational agent acts so as to achieve the best (expected) outcome
• Perfect rationality cannot be achieved due to computational limitations
• Goals are expressed in terms of the performance or utility of
outcomes
• Being rational means maximizing its expected performance
Artificial Intelligence = Maximizing expected performance
Artificial Intelligence
CSL7540
Deepak
(Based on the courses from MIT, UW and IIT Delhi)
Agents
• An agent is an entity that perceives its environment through sensors and takes actions through actuators.
• The agent’s behaviour is described by the agent function, or policy, which maps percept histories to actions:

𝑓 ∶ 𝒫∗ → 𝒜

• The agent program runs on the physical architecture to produce 𝑓
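A minimal sketch of this abstraction in Python, anticipating the vacuum-cleaner example below; the class name and the reflex rule are illustrative assumptions, not the slides' specification:

```python
# Sketch of the agent abstraction: the agent program implements f : P* -> A.
# The reflex rule below is an illustrative assumption.
class ReflexVacuumAgent:
    """Toy vacuum-cleaner agent; a percept is (location, status)."""
    def __init__(self):
        self.percept_history = []          # P*: the full percept history

    def agent_function(self, percept):
        self.percept_history.append(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ReflexVacuumAgent()
print(agent.agent_function(("A", "Dirty")))   # -> Suck
print(agent.agent_function(("A", "Clean")))   # -> Right
```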


Agents
• A vacuum-cleaner agent
Goal-based agents
• Atomic environment representation
• Inputs
• Set of states
• Set of actions [costs]
• Start state
• Goal state [test]

• Output
• Path: start state → a state satisfying goal test
• May require shortest path
Search
• Search is a class of techniques for systematically finding or
constructing solutions to problems.
• Many AI problems can be formulated as search problems!
• Examples
• Path planning
• Games
• Natural Language Processing
• Recommendation system

Example: 8-puzzle

States: Locations of the tiles
Actions: Move blank left, right, up, down
Goal: Goal state (given)
Cost: 1 per move
Example: traveling in Romania
• State: the city we are in
• Actions: Going from the current city to the cities that are directly
connected to it.
• Goal test: whether we are in Bucharest.
• Cost: distances between cities
Tree Search strategies
• A search strategy is defined by picking the order of node expansion

• Strategies are evaluated along the following dimensions:


• Completeness: does it always find a solution if one exists?
• Time complexity: number of nodes generated
• Space complexity: maximum number of nodes in memory
• Optimality: does it always find a least-cost solution?
Tree Search strategies
• Time and space complexity are measured in terms of
• b: maximum branching factor of the search tree
• d: depth of the least-cost solution
• m: maximum depth of the state space (may be ∞)
Uninformed search strategies
• Uninformed search strategies use only the information available in
the problem definition

• Breadth-first search
• Depth-first search
• Uniform cost search
• Iterative deepening search

[Figure: example search tree – root a, children b and c, leaves d–h]
Breadth First Search
• Maintain queue of nodes to visit

• Evaluation
• Complete?
• Time Complexity?
• Space Complexity?
• Optimal?
Depth-first search
• Maintain stack of nodes to visit

• Evaluation
• Complete?
• Time Complexity?
• Space Complexity?
• Optimal?

https://www.youtube.com/watch?v=dtoFAvtVE4U
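Both strategies fit one tree-search skeleton that differs only in how the frontier is popped. A minimal sketch; the adjacency of the a–h example tree is an assumption, since the figure does not survive extraction:

```python
from collections import deque

# One tree-search skeleton for both strategies: a FIFO frontier gives BFS,
# a LIFO frontier gives DFS; nothing else changes.
def tree_search(start, goal, successors, fifo=True):
    frontier = deque([[start]])          # the frontier holds partial paths
    while frontier:
        path = frontier.popleft() if fifo else frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        for child in successors.get(node, []):
            frontier.append(path + [child])
    return None

# The a-h example tree; the exact parent of each leaf is an assumption.
tree = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g", "h"]}
print(tree_search("a", "g", tree, fifo=True))    # BFS: ['a', 'c', 'g']
print(tree_search("a", "g", tree, fifo=False))   # DFS: ['a', 'c', 'g']
```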
Iterative deepening search
• Complete?

• Time complexity?

• Space complexity?

• Optimal?
Iterative deepening search
• Complete? - yes

• Time complexity?
• d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)

• Space complexity? – O(b·d)

• Optimal? – yes, if step cost = 1
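A sketch of iterative deepening as repeated depth-limited DFS, reusing the assumed example tree from the earlier sketch:

```python
# Iterative deepening: repeated depth-limited DFS with a growing limit.
def depth_limited(node, goal, successors, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors.get(node, []):
        result = depth_limited(child, goal, successors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None

tree = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g", "h"]}
print(iterative_deepening("a", "g", tree))   # ['a', 'c', 'g']
```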


Uniform cost search (Dijkstra’s algorithm)
• Maintain priority queue of nodes to visit

• Evaluation
• Complete? – yes
• Time Complexity? – O(b^(C*/ε)), where C* is the optimal solution cost and ε the minimum action cost
• Space Complexity? – O(b^(C*/ε))
• Optimal? – yes

[Figure: example tree with edge costs]
https://www.youtube.com/watch?v=z6lUnb9ktkE
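A sketch of uniform cost search with a priority queue ordered by path cost g; the edge costs below are assumptions standing in for the slide's figure:

```python
import heapq

# Uniform cost search: the frontier is a priority queue ordered by
# path cost g; the cheapest partial path is always expanded first.
def uniform_cost_search(start, goal, edges):
    frontier = [(0, start, [start])]          # (g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for child, cost in edges.get(node, []):
            heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

# Edge costs are assumptions standing in for the slide's figure.
edges = {"a": [("b", 1), ("c", 5)], "b": [("d", 2), ("e", 6)],
         "c": [("g", 1), ("h", 4)], "e": [("f", 3)]}
print(uniform_cost_search("a", "g", edges))   # (6, ['a', 'c', 'g'])
```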
Graph (instead of tree) Search: Handling
repeated nodes
• Repeated expansion is a bigger issue for DFS than for BFS or IDDFS
• Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible
• Space becomes exponential
• Duplicate checking can also be expensive

• Partial reduction in repeated expansion can be done by
• Checking whether any child of a node n has the same state as the parent of n
• Checking whether any child of a node n has the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)
Issues
• All these methods are slow (blind)

• Solution – add guidance: informed search
Artificial Intelligence
CSL7540
Deepak
(Based on the courses from MIT, UCB, UW and IIT Delhi)
General tree search
Informed (Heuristic) search
• Be smart about the searching path
What is a Heuristic?
Best-first search
• A search strategy is defined by picking the order of node expansion
• Idea: use an evaluation function f(n) for each node
• Estimate of desirability
• Implementation: Order the nodes in decreasing order of desirability
• Examples:
• Greedy best-first search
• A* search
Uninformed vs. Informed
• Breadth First = Best First
• If f(n) = depth(n)

• Uniform cost search = Best First
• If f(n) = the sum of edge costs from start to n
Greedy best-first search
• Evaluation function f(n) = h(n) = estimate of cost from n to goal

• Romania: hSLD(n) = straight-line distance from n to Bucharest

• Greedy best-first search expands the node that appears to be closest to the goal
What is a Heuristic?
• An estimate of how close a state is to a goal
• A node is selected for expansion based on an evaluation function f(n) that estimates the cost to the goal

Manhattan distance: 10 + 5 = 15
Euclidean distance: 11.2
Actual distance: 2 + 4 + 2 + 1 + 8 = 17
Greedy best-first search
• Complete?
• No, can get stuck in loops

• Time complexity?
• O(b^m), but can be improved with a good h(n)

• Space complexity?
• O(b^m)

• Optimal?
• No
A* search
• Shakey the Robot
• A* was first proposed in 1968 to
improve robot planning

• Goal was to navigate through a room with obstacles
A* search
• Idea: Avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n)

• g(n) = cost so far to reach n (backward cost)


• h(n) = estimated cost from n to goal (forward cost)
• f(n) = estimated total cost of path through n to goal
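A minimal A* sketch ordering the frontier by f(n) = g(n) + h(n); the tiny graph and heuristic values are made-up (and admissible) purely for illustration:

```python
import heapq

# A*: order the frontier by f(n) = g(n) + h(n).
def a_star(start, goal, edges, h):
    frontier = [(h(start), 0, start, [start])]     # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                               # already reached cheaper
        best_g[node] = g
        for child, cost in edges.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None

# Made-up graph and admissible heuristic values, for illustration only.
edges = {"S": [("A", 2), ("B", 5)], "A": [("G", 6)], "B": [("G", 2)]}
h = {"S": 5, "A": 5, "B": 2, "G": 0}.get
print(a_star("S", "G", edges, h))                  # (7, ['S', 'B', 'G'])
```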
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n),
where h*(n) is the true cost to reach the goal state from n

• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
• Example: hSLD(n) (never overestimates the actual road distance)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Consistent Heuristics
• h(n) is consistent if
• for every node n
• for every successor n’ due to legal action a
• h(n) ≤ c(n, a, n’) + h(n’)

• Every consistent heuristic is also admissible


• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal
Proof of Optimality of (Tree) A*
• Suppose some suboptimal goal state G2 has been generated and is on the frontier
• Let n be an unexpanded node on the frontier such that n is on a shortest (optimal) path to the optimal goal state G
• Assume h() is admissible

f(G2) = g(G2) since h(G2) = 0
g(G2) > g(G) since G2 is suboptimal
f(G) = g(G) since h(G) = 0
⟹ f(G2) > f(G)

Since h is admissible and n lies on the optimal path to G:
f(n) = g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G)

⟹ f(G2) > f(G) ≥ f(n)

So A* expands n (and every node on the optimal path) before G2, and never returns the suboptimal goal.
Properties of A*
• Complete?
• Yes (unless there are infinitely many nodes with f ≤ f(G))

• Time complexity?
• Exponential

• Space complexity?
• Keeps all nodes in memory

• Optimal?
• Yes
http://www.youtube.com/watch?v=huJEgJ82360
• Is ‘h’ admissible?
• Is ‘h’ consistent?
• What is the sequence of nodes explored by A* search?
• What is the path returned by A*?
Memory Problem?
• Iterative deepening A* (IDA*)
• Similar to iterative deepening search (IDS)
Admissible heuristics
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance

• h1 = 8
• h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
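Both heuristics in a few lines. The start state is the textbook example that yields h1 = 8 and h2 = 18; the goal layout with the blank first is an assumption consistent with those numbers:

```python
# Two admissible 8-puzzle heuristics. A state is a tuple of 9 entries
# read row by row, 0 = blank. The goal layout (blank first) is an
# assumption consistent with the slide's h1 = 8 and h2 = 18.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    dist = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the textbook's example state
print(h1(start), h2(start))            # 8 18
```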
Pattern databases
• The idea is to store (precompute) the exact solution costs for every possible sub-problem instance
• e.g., every possible configuration of the four tiles and the blank

• Usage
• Use position of chosen tiles as index into DB
• Use lookup value as heuristic, hDB

• Admissible?
Artificial Intelligence
CSL7540
Deepak
(Based on the courses from UCB, UW and IIT Delhi)
Path vs. State
• Till now: Systematic exploration of search space to reach goal
• Optimization of path to the goal

• What if path is irrelevant?


• 8-queens
• factory floor layout
• Integrated circuit design

• Local search?
• Hill-climbing, Gradient methods, Simulated annealing, Genetic algorithms
Local search and optimization
• Idea
• Keep track of single current state
• Move only to neighboring states
• Ignore paths

• Advantages
• Low memory requirements
• Can often find reasonable solutions in large or infinite (continuous) state
spaces
Local search and optimization
Local search and optimization
• Pure optimization problems
• All states have an objective function
• Aim is to find the best state according to an objective function
• State with max (or min) objective value
• Many optimization problems do not fit the “standard” search model
presented earlier
• Path-cost/Goal-state formulation
• Local search can do quite well on these problems
Hill-climbing (Greedy Local Search)

• The min version reverses the inequalities and looks for the lowest-valued successor
Hill-climbing (Greedy Local Search)
• A loop that continuously moves towards increasing value
• Value can be an objective/heuristic function value
• Terminates when a peak is reached (greedy local search)
• Hill climbing does not look ahead beyond the immediate neighbors
• Can randomly choose among the set of best successors
• If multiple have the best value

• “climbing Mount Everest in a thick fog with amnesia”
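A steepest-ascent hill-climbing sketch; the one-dimensional toy objective is an illustrative assumption:

```python
import random

# Steepest-ascent hill climbing: jump to the best neighbor, stop at a peak.
def hill_climb(state, neighbors, value):
    current = state
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current                  # local (possibly global) maximum
        current = best

# Toy landscape: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climb(random.randint(-10, 10), nbrs, f))   # -> 3
```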


State-space landscape
• Location: defined by the state
• Elevation: defined by the value of the heuristic cost function or
objective function
Hill-climbing search: 8-queens problem
• State
• 8 queens on the board in some configuration
• Successor function
• move a single queen to another square in the
same column
• h = number of pairs of queens that are
attacking each other
• h = 17 for the shown state
Hill-climbing search: 8-queens problem
• Randomly generated 8-queens starting states…
• 14% of the time it solves the problem
• 86% of the time it gets stuck at a local minimum

• However…
• Takes only 4 steps on average when it succeeds
• And 3 on average when it gets stuck
• (for a state space with 8^8 ≈ 17 million states)

• Other issues: Shoulders, Plateaus, Ridges


Escaping Shoulders
• If no downhill (uphill) moves are available, allow sideways moves in the hope that the algorithm can escape
• Need to place a limit on the possible number of sideways moves to avoid
infinite loops
• For 8-queens
• Allow sideways moves with a limit of 100
• Raises percentage of problem instances
solved from 14% to 94%
• However….
• 21 steps for every successful solution
• 64 for each failure
Variants of hill climbing
• Stochastic hill climbing
• Chooses at random from among the uphill moves
• The probability of selection can vary with the steepness of the uphill move
• First-choice hill climbing
• Generate successors randomly until one better than the current state is generated
• Random-restart hill climbing
• “If at first you don’t succeed, try, try again.”
• Complete (no local maxima)
• Say each search has probability p of success
• E.g., for 8-queens, p = 0.14 with no sideways moves
• Expected number of restarts? Expected number of steps taken?
• If you want to pick one local search algorithm, learn this one
Simulated Annealing
• Simulated Annealing = physics inspired twist on random walk
• A version of stochastic hill climbing where some downhill moves are
allowed
• Idea
• Identify the quality of the local improvements
• Instead of picking the best move, pick one randomly
• Say the change in objective function is δ
• If δ is positive, then move to that state
• Otherwise
• Move to this state with a probability that shrinks as δ becomes more negative (typically e^(δ/T))
• Thus: worse moves (very large negative δ) are executed less often
• Over time, make it less likely to accept locally bad moves
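A sketch of simulated annealing with the usual exp(δ/T) acceptance rule and a geometric cooling schedule; the schedule constants are assumptions:

```python
import math, random

# Simulated annealing: always accept uphill moves; accept a downhill move
# with probability exp(delta / T), where T follows a cooling schedule.
def simulated_annealing(state, neighbors, value,
                        t0=10.0, cooling=0.995, t_min=1e-3):
    current, T = state, t0
    while T > t_min:
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        T *= cooling            # the "temperature schedule"
    return current

f = lambda x: -(x - 3) ** 2     # same toy objective as the previous sketch
nbrs = lambda x: [x - 1, x + 1]
print(simulated_annealing(0, nbrs, f))   # usually ends at or near 3
```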
Simulated Annealing
• High T: probability of “locally bad” move is higher
• Low T: probability of “locally bad” move is lower
• Typically, T is decreased as the algorithm runs longer
• There is a “temperature schedule” (analogous to physical process of
cooling)

• T must be decreased very gradually to retain optimality


• Slow
Local beam search
• Keeping only one node in memory is an extreme reaction to memory
problems

• Keep track of k states instead of one


• Initialization: k randomly selected states
• Next: determine all successors of k states
• If any of successors is goal – finished
• Else select k best from successors and repeat
Local beam search
• Same as ‘k random-restart searches’ run in parallel?
• NO
• In a local beam search, useful information is passed among the
parallel search threads
• Searches that find good states recruit other searches to join them
• Problem: quite often, all k states end up on same local hill
• Idea: Stochastic beam search
• Choose k successors randomly, biased towards good ones

• Analogy to natural selection?


Genetic algorithms
• Twist on search with k states: Successor is generated by combining two
parent states
• A state is represented as a string over a finite alphabet
• E.g., 8-queens: a string of 8 digits giving the row of the queen in each column
• Start with k randomly generated states (population)
• Evaluation function (fitness function):
• Higher values for better states
• Opposite to heuristic function, e.g., # non-attacking pairs in 8-queens
• Produce the next generation of states by “simulated evolution”
• Random selection
• Crossover
• Random mutation
Genetic algorithms

• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
• 24/(24+23+20+11) = 31%
• 23/(24+23+20+11) = 29% etc.
Genetic algorithms (crossover)

• Has the effect of “jumping” to a completely different new part of the search space (quite non-local)
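A compact GA sketch for 8-queens with fitness-proportional selection, single-point crossover, and random mutation; population size, mutation rate, and generation count are assumptions:

```python
import random

# GA sketch for 8-queens: an individual is a list of 8 digits, digit i =
# row of the queen in column i. Fitness = non-attacking pairs (max 28).
def fitness(s):
    n = len(s)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if s[i] == s[j] or abs(s[i] - s[j]) == j - i)
    return n * (n - 1) // 2 - attacks

def crossover(x, y):
    c = random.randint(1, len(x) - 1)        # single crossover point
    return x[:c] + y[c:]

def mutate(s, p=0.1):
    return [random.randrange(8) if random.random() < p else g for g in s]

pop = [[random.randrange(8) for _ in range(8)] for _ in range(100)]
best = max(pop, key=fitness)
for _ in range(1000):
    weights = [fitness(s) for s in pop]      # fitness-proportional selection
    pop = [mutate(crossover(*random.choices(pop, weights=weights, k=2)))
           for _ in range(100)]
    best = max(pop + [best], key=fitness)
    if fitness(best) == 28:
        break
print(best, fitness(best))
```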
Genetic algorithms
• Genetic algorithm is a variant of “stochastic beam search”
• Random exploration can find solutions that local search can’t
• (via crossover primarily)
• Appealing connection to human evolution

• Issues:
• Large number of “tunable” parameters
• Lack of good empirical studies comparing to simpler methods
• No convincing evidence that GAs are better than hill-climbing
Local search in continuous spaces
• Discretization
• use hill-climbing

• Gradient descent
• Move in the direction of the negative gradient
Local search in continuous spaces
• Objective function: f(x1, x2….. xn)
• Compute the gradient ∂f/∂xi
• Take a small step downhill along the negative gradient:
• xi ← xi − λ · ∂f/∂xi
• Repeat until convergence
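A minimal sketch of this update rule on a toy quadratic objective; the learning rate and step count are assumptions:

```python
# Gradient descent on the toy objective f(x1, x2) = (x1 - 1)^2 + (x2 + 2)^2.
def gradient_descent(grad, x, lr=0.1, steps=200):
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]   # x_i <- x_i - lambda*df/dx_i
    return x

grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]
print(gradient_descent(grad, [5.0, 5.0]))   # converges toward [1.0, -2.0]
```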
Artificial Intelligence
CSL7540
Deepak
(Based on the courses from MIT, UW and IIT Delhi)
Games
• Why do AI researchers study game playing?
• Adversarial search

• What Kinds of Games?


• Mainly games of strategy with the following characteristics
• Sequence of moves to play
• Rules that specify possible moves
• Rules that specify a payment for each move
• Objective is to maximize your payment
Games as Adversarial Search
• States
• Board configurations
• Initial state:
• The board position and which player will move
• Successor function
• Returns list of (move, state) pairs, each indicating a legal move and the resulting
state
• Terminal test
• Determines when the game is over
• Utility function
• Gives a numeric value in terminal states
Minimax search
• Idea: Choose move to position with highest minimax value

Utility values for MAX
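A minimax sketch over an explicit game tree; representing the tree as nested lists with leaf utilities (the values follow the standard textbook example) is an assumption:

```python
# Minimax over an explicit game tree: internal nodes are lists of children,
# leaves are utility values for MAX; MAX and MIN alternate with depth.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):          # terminal: utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]      # standard textbook example
print(minimax(tree))   # 3: MAX picks the branch whose MIN value is largest
```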


Properties of minimax
• Complete?
• Yes (if tree is finite)
• Optimal?
• Yes (against an optimal opponent)
• No (does not exploit opponent weakness against suboptimal opponent)
• Time complexity?
• O(b^m)
• Space complexity?
• O(b·m)
Good enough?
• Chess
• branching factor b ≈ 35
• game length m ≈ 100
• search space b^m ≈ 35^100 ≈ 10^154

• The Universe
• Number of atoms ≈ 10^78
• Age ≈ 10^18 seconds
• 10^8 moves/sec × 10^78 × 10^18 = 10^104
Alpha – Beta search

α = the value of the best (i.e., highest-value) choice we have found so far at any choice point along the path for MAX

β = the value of the best (i.e., lowest-value) choice we have found so far at any choice point along the path for MIN
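An alpha-beta sketch over the same nested-list tree as above; it returns the same value as minimax while skipping branches once α ≥ β:

```python
# Alpha-beta: same result as minimax, but a branch is abandoned as soon
# as alpha >= beta (the opponent would never let the game reach it).
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break              # beta cutoff
        return v
    v = float("inf")
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break                  # alpha cutoff
    return v

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))   # 3; the middle branch's last two leaves are pruned
```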
Alpha – Beta search – Node ordering

• Chess: captures first, then threats, then forward moves, and then backward moves
• Hash table of previously seen positions
Cutting off search
• MinimaxCutoff is identical to MinimaxValue except
• Terminal? is replaced by Cutoff?
• Utility is replaced by Eval

• Evaluation functions?
• For chess/checkers, typically linear weighted sum of features
• Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
The Horizon effect

• Inevitable losses are postponed
• Unachievable goals appear achievable
• Short-term gains mask unavoidable consequences (traps)
Solutions
• Feedover
• Do not cut off search at non-quiescent board positions (dynamic positions)
• Example, king in danger
• Keep searching down that path until reach quiescent (stable) nodes

• Secondary Search
• Search further down selected path to ensure this is the best move

• Progressive Deepening
• Search one ply, then two ply, etc., until run out of time
• Similar to IDS
Quiescence Search
• Cutting off can be dangerous in presence of wild swings

• The evaluation function should be applied only to positions that are quiescent

• Non-quiescent positions can be expanded further until quiescent positions are reached
Additional Refinements
• Probabilistic Cut: cut branches probabilistically based on shallow
search and global depth-level statistics (forward pruning)

• Openings/Endgames: for some parts of the game (especially initial and end moves), keep a catalog of best moves to make

• Singular Extensions: find obviously good moves and try them at cutoff
Deterministic Games in Practice
• Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley
in 1994. Used a precomputed endgame database defining perfect play for all
positions involving 8 or fewer pieces on the board, a total of 444 billion positions.
Checkers is now solved!

• Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game
match in 1997. Deep Blue searches 200 million positions per second, uses very
sophisticated evaluation, and undisclosed methods for extending some lines of
search up to 40 ply. Current programs are even better, if less historic!

• Go: human champions long refused to compete against computers, which were too weak.
In Go, b > 300, so most programs use pattern knowledge bases to suggest
plausible moves, along with aggressive pruning. In 2016, DeepMind’s AlphaGo
defeated Lee Sedol 4-1 to end the human reign.
Success in Go
• Combination of
• Deep Neural Networks
• Monte Carlo Tree Search

• More details later


Games of Chance – Expectiminimax
• Chance nodes
Partially observable games
• Card games

• Idea: For all deals consistent with what you can see
• Compute the minimax value of available actions for each of possible deals
• Compute the expected value over all deals
Summary
• Games illustrate several important points about AI

• Game-playing programs have shown the world what AI can do


Artificial Intelligence
CSL7540
Deepak
(Based on the courses from UCB, UW and IIT Delhi)
Constraint satisfaction problems (CSPs)
• Standard search problem:
• State is atomic (black box)

• CSPs:
• State is factored
• Defined by variables Xi with values from domain Di
• Goal test is a set of constraints specifying allowable combinations of values
for subsets of variables
• A problem is solved when each variable has a value that satisfies all the
constraints on the variable
• CSP search algorithms take advantage of the structure of states and use general-purpose rather than problem-specific heuristics
Example: Map-Colouring
• Variables: WA, NT, Q, NSW, V, SA, T

• Domains: Di = {red, green, blue}

• Constraints: Adjacent regions must have different colors
• E.g., WA ≠ NT (implicit)
• (WA, NT) ∈ {(red, green), (red, blue), (green, red), (green, blue), …} (explicit)
Constraint graph
• Binary CSP: Each constraint relates at most two variables
• Constraint graph: Nodes are variables, arcs show constraints
Varieties of constraints
• Unary constraints involve a single variable,
• E.g., SA ≠ green

• Binary constraints involve pairs of variables
• E.g., SA ≠ WA

• Higher-order constraints involve 3 or more variables
• E.g., cryptarithmetic column constraints

• Preferences (soft constraints)
• E.g., red is better than green
Real-world CSPs
• Assignment problems
• E.g., who teaches what class
• Timetabling problems
• E.g., which class is offered when and where?
• Hardware configuration
• Spreadsheets
• Transportation scheduling
• Factory scheduling
• Floor planning

• Notice that many real-world problems involve real-valued variables


Standard search
• BFS?

• DFS?
Backtracking search
• Backtracking search is the basic uninformed algorithm for CSPs
• Idea 1:
• Variable assignments are commutative, i.e.,
• [WA = red then NT = green] same as [NT = green then WA = red]
• Only need to consider assignments to a single variable at each node

• Idea 2:
• Check constraints as you go
• Incremental goal test

• DFS with these two ideas is called backtracking search
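A backtracking sketch on the Australia map-colouring example; variables are tried in plain dictionary order (MRV/LCV, discussed below, are not used here):

```python
# Backtracking search on the Australia map-colouring CSP: assign one
# variable per level and check constraints as we go.
NEIGHBOURS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
              "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
COLOURS = ["red", "green", "blue"]

def backtrack(assignment):
    if len(assignment) == len(NEIGHBOURS):
        return assignment
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for colour in COLOURS:
        if all(assignment.get(n) != colour for n in NEIGHBOURS[var]):
            result = backtrack({**assignment, var: colour})
            if result is not None:
                return result
    return None        # every value conflicts: backtrack

print(backtrack({}))
```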


Backtracking search
Issues
• Which variable should be assigned next?
• In what order should its values be tried?
• Can we detect inevitable failure early?
• Can we take advantage of problem structure?
Forward checking
• Idea: Keep track of remaining legal values for unassigned variables
• Terminate search when any variable has no legal values
Forward checking
• Forward checking propagates information from assigned to unassigned variables, but doesn’t provide early detection for all failures

• NT and SA cannot both be blue!

Arc consistency – constraint propagation
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
• for every value x of X there is some allowed y

• If X loses a value, neighbors of X need to be rechecked
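An AC-3-style sketch of this propagation. The toy instance (three mutually unequal variables with two values each) is an assumption, chosen to show that arc consistency alone can miss an unsolvable problem, which motivates the next slide:

```python
from collections import deque

# AC-3-style propagation: make every arc X -> Y consistent; whenever X's
# domain shrinks, re-queue the arcs pointing into X.
def ac3(domains, neighbours, constraint):
    queue = deque((x, y) for x in domains for y in neighbours[x])
    while queue:
        x, y = queue.popleft()
        revised = False
        for vx in list(domains[x]):
            if not any(constraint(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)       # vx has no support in Y
                revised = True
        if revised:
            if not domains[x]:
                return False                # a domain is empty: inconsistent
            for z in neighbours[x]:
                if z != y:
                    queue.append((z, x))
    return True

# Three mutually adjacent variables, two values, "not equal" constraints:
# unsolvable, yet every arc is consistent, so AC-3 reports no failure.
domains = {"A": {1, 2}, "B": {1, 2}, "C": {1, 2}}
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(ac3(domains, neighbours, lambda u, v: u != v), domains)
```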


Limitations of arc consistency

• Runs inside backtracking search


Ordering - MRV
• Minimum remaining value (MRV)
• Choose the variable with fewest legal values left in its domain
• Also known as most constrained variable
• Fail-fast ordering
Ordering - LCV
• Least constraining value (LCV)
• The one which rules out fewest values in the remaining variables
• Why least?
Improving backtracking
• Which variable should be assigned next? – MRV
• In what order should its values be tried? – LCV
• Can we detect inevitable failure early? – Arc Consistency
• Can we take advantage of problem structure?
Problem structure
• Independent sub-problems
• Connected components

• Suppose each sub-problem has


c variables out of n total
• Worst-case solution cost is O(n/c · d^c)
• Linear in n
• E.g., n = 80, d = 2, c = 20
• 2^80 = 4 billion years at 10 million nodes/sec
• 4 · 2^20 = 0.4 seconds at 10 million nodes/sec
Tree-structured CSPs

• Theorem: if the constraint graph has no loops, the CSP can be solved in O(n·d^2) time

• Compare to general CSPs, where worst-case time is O(d^n)


Tree-structured CSPs – Algorithm
1. Choose a variable as root, order variables from root to leaves such that
every node’s parent precedes it in the ordering

2. For j from n down to 2, apply RemoveInconsistent(Parent(Xj), Xj)


3. For j from 1 to n, assign Xj consistently with Parent(Xj)
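A sketch of the two passes on a three-variable chain with not-equal constraints; the instance is an assumption (a chain is the simplest tree):

```python
# The two passes on a chain A - B - C with "not equal" constraints.
# Step 2 prunes leaf-to-root; step 3 assigns root-to-leaf, no backtracking.
order = ["A", "B", "C"]                     # topological order from the root
parent = {"B": "A", "C": "B"}
domains = {"A": {1}, "B": {1, 2}, "C": {1, 2}}
ok = lambda u, v: u != v                    # the binary constraint

# Step 2: remove parent values that have no supporting value in the child
for var in reversed(order[1:]):
    p = parent[var]
    domains[p] = {vp for vp in domains[p]
                  if any(ok(vp, vc) for vc in domains[var])}

# Step 3: assign each variable consistently with its parent's assignment
assignment = {}
for var in order:
    values = domains[var]
    if var in parent:
        values = {v for v in values if ok(assignment[parent[var]], v)}
    assignment[var] = min(values)
print(assignment)   # {'A': 1, 'B': 2, 'C': 1}
```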
Tree-structured CSPs
• Advantages:
• O(n·d^2) time
• No backtracking

• Disadvantages:
• Does not work with loops
Improving structure
• Conditioning: Instantiate a variable, prune its neighbors’ domains

• Cutset conditioning: Instantiate (in all ways) a set of variables such that the
remaining constraint graph is a tree
• Cutset size c: runtime O(d^c · (n − c)·d^2), very fast for small c
Iterative algorithms for CSPs
• We have seen Backtracking (modified DFS) for CSPs
• How about local search?
• Hill-climbing, simulated annealing typically work with “complete” states, i.e.,
all variables assigned
• To apply to CSPs:
• Allow states with unsatisfied constraints; operators reassign variable values
• Variable selection: Randomly select any conflicted variable
• Value selection by min-conflicts heuristic:
• Choose value that violates the fewest constraints
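A min-conflicts sketch for n-queens; queen i is fixed to column i and only moves within its column, mirroring the slide's successor function (the step limit is an assumption):

```python
import random

# Min-conflicts for n-queens: queen i stays in column i and only moves
# within its column.
def conflicts(state, col, row):
    return sum(1 for c in range(len(state)) if c != col and
               (state[c] == row or abs(state[c] - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=10000):
    state = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(state, c, state[c])]
        if not conflicted:
            return state                    # no attacking pairs: solved
        col = random.choice(conflicted)     # pick any conflicted variable
        # value selection: the row violating the fewest constraints
        state[col] = min(range(n), key=lambda r: conflicts(state, col, r))
    return None

print(min_conflicts())
```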
Performance of min-conflicts
• Given random initial state, can solve n-queens in almost constant
time for arbitrary n with high probability (e.g., n = 10,000,000)
• The same appears to be true for any randomly-generated CSP except
in a narrow range of the ratio
# "# $"%&'()*%&
•𝑅= # "# +)(*),-.&
Summary
• CSPs are a special kind of problem:
• States defined by values of a fixed set of variables
• Goal test defined by constraints on variable values
• Backtracking = depth-first search with one variable assigned per node
• Variable ordering and value selection heuristics help significantly
• Forward checking prevents assignments that guarantee later failure
• Constraint propagation (e.g., arc consistency) does additional work to constrain values
and detect inconsistencies
• The CSP representation allows analysis of problem structure
• Tree-structured CSPs can be solved in linear time
• Iterative min-conflicts is usually effective in practice
Min-conflicts algorithm
• Variables: Q1, Q2, Q3, and Q4 (4 queens)
• Domain: {A, B, C, D} (positions)
• Given configuration: Q1=C, Q2=C, Q3=D, Q4=B
• Assume that in every step, your algorithm always
chooses the leftmost conflicted queen to reduce
conflict and moves the queen along the column
• If there are ties, choose the topmost square
• What would be the configuration of queens after
I. One step?
II. Two steps?
III. Three steps?
IV. How many more steps are required to reach the solution?
