
Machine Intelligence

and Applications
Yuan Luo
CUHKSZ
ECE4010

Slide credits: Harvard CS50, CMU AI and UC Berkeley AI


Textbook
Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach, 3rd International Edition, 2010.
Goal of This Course
• Get familiar with machine intelligence (AI) techniques, including their implementations
• Be able to develop machine intelligence applications (Python)
• Understand the theory behind the techniques, knowing which techniques to apply when (and why)
• Become familiar with a range of applications of machine intelligence, including “classic” and current systems
Course Evaluation
• Homework Assignments (30%)
• Written questions or programming assignments
• 6 HWs in total (released via BB)
• Late assignments are not accepted
• Deadline: one week after each HW is released

• Final Exam (40%)

• Class Project (30%)
• Proposal presentation
• 5 minutes (possibly 10, depending on final enrollment)
• Individual or team (no more than 3 students)
• All team members receive the same score
Proposal Presentation
• Proposal Presentation
• Present a novel machine intelligence research idea
• We do not require results or a working implementation
• 5 minutes (possibly 10, depending on final enrollment)

• Things to consider:
• An interesting application of an existing machine intelligence method
• What problem are you solving?
• How have people done it before?
• What is your method? What type of model + architecture would you use? Why?
• What is the data for this task? (collected data or a public dataset)
Project Evaluation
• Evaluated by the course staff (70%) and your classmates (30%) on the basis of the following criteria
• novelty and impact
• technical soundness, feasibility, and organization, including the quality of any presented results
• clarity and presentation
• time limits
• You are not allowed to present published work, even if you are one of its authors
Commonly asked questions
Machine Intelligence (AI) vs Machine Learning

Machine Intelligence
Any technique that enables computers
to mimic human behavior

Machine Learning
Ability to learn without explicitly being
programmed

Deep Learning
Extract patterns from data using
neural networks
Search

• Search Problem
• Uninformed/Informed Search (BFS/DFS/Greedy/A*)
• Adversarial Search (minimax/Alpha-Beta pruning)
Knowledge

• Inference via Model Checking and Theorem Proving (model/inference rules)
• Inference by Resolution (clause/conjunctive normal form/conversion to CNF/resolution)
• First-Order Logic
Uncertainty

• Bayesian Network
• Inference
• Approximate Inference
(sampling/rejection
sampling/likelihood
weighting/Gibbs sampling)
• Markov Chain Inference
Optimization

• Local Search (hill climbing/simulated annealing)
• Linear programming (formulation/graphical representation and solving)
• Constraint satisfaction
(formulation/backtracking search)
Learning
• Supervised learning (k-nearest-neighbor
classification/perceptron/SVM)
• Evaluating hypotheses (loss
function/gradient learning methods)
• MDPs and RL (offline/model-based/model-free)
• Unsupervised learning
Neural Networks
• ANN (simple functions)
• CNN (convolution/pooling)
• RNN/LSTM
• Self-attention/Transformer
• Deep Reinforcement Learning


Deep Learning

New Frontiers
What’s more…..
• The way of thinking
• identify a problem
• naïve approach (greedy/enumeration/nearest neighbor…)
• use all the available information (distance/inference rules/location…)
• involve randomness (simulated annealing/page ranking/𝜀-greedy/training…)
• …

• Seek the intuition behind each technique and ask why

• Keep pace wisely with new technology

Flatland-RL: Multi-Agent Reinforcement Learning on Trains (https://arxiv.org/pdf/2012.05893.pdf)


Do We Really Need Deep Learning Models for Time Series Forecasting? (https://arxiv.org/abs/2101.02118)
Cortical ensembles orchestrate social competition through hypothalamic outputs (https://www.nature.com/articles/s41586-022-04507-5)
UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition
(https://arxiv.org/pdf/2311.15599.pdf)
Why open this course?
Flatland

https://flatland.aicrowd.com/intro.html
https://arxiv.org/pdf/2012.05893.pdf
Workload
Programming Assignments
• Complete the implementation of a class/function
• Maze, prediction, and handwriting recognition
Course Information
• Course information (course outline, homework, slides,
project presentation) at Blackboard
(https://bb.cuhk.edu.cn/)
• Discussions: WeChat group
Course Information: TAs

Yifan Luo (罗易凡), Office hours: Tuesday 15:00-16:00, Email: yifanluo@link.cuhk.edu.cn
Jin Cheng (成锦), Office hours: Thursday 16:00-17:00, Email: 222010009@link.cuhk.edu.cn
Wentao Ye (叶文涛), Office hours: Friday 10:30-11:30, Email: 221019027@link.cuhk.edu.cn
Search
(Figure: the 15-puzzle in its solved configuration, tiles 1-15.)
Search Problems
agent
entity that perceives its environment
and acts upon that environment
state
a configuration of the agent and
its environment
(Figure: three scrambled 15-puzzle states.)
initial state
the state in which the agent begins
(Figure: a 15-puzzle state labeled as the initial state.)
actions
choices that can be made in a state
actions
ACTIONS(s) returns the set of actions that
can be executed in state s
(Figure: a 15-puzzle state with its available sliding actions numbered 1-4.)
transition model
a description of what state results from
performing any applicable action in any
state
transition model
RESULT(s, a) returns the state resulting from
performing action a in state s
transition model

(Figure: RESULT applied to a 15-puzzle state and a sliding action yields the successor state.)
state space
the set of all states reachable from the
initial state by any sequence of actions
(Figure: part of the 15-puzzle state space, drawn as a tree of states connected by actions.)
goal test
way to determine whether a given state
is a goal state
path cost
numerical cost associated with a given path
(Figures: a graph of nodes A-M, first with varying edge costs and then with unit costs; with unit costs, the path cost is just the number of edges.)
Search Problems

• initial state
• actions
• transition model
• goal test
• path cost function
solution
a sequence of actions that leads from the
initial state to a goal state
optimal solution
a solution that has the lowest path cost
among all solutions
node
a data structure that keeps track of
- a state
- a parent (node that generated this node)
- an action (action applied to parent to get node)
- a path cost (from initial state to node)
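The node structure above can be sketched in Python; this is a minimal version, and the field and function names are one possible choice rather than the exact course code:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node: a state plus a record of how we reached it."""
    state: Any                       # configuration of agent + environment
    parent: Optional["Node"] = None  # node that generated this node
    action: Any = None               # action applied to parent to get this node
    path_cost: float = 0.0           # cost of the path from the initial state

def solution(node: Node) -> list:
    """Walk parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```

Once the goal node is found, following parent pointers backwards is exactly how the solution (the action sequence) is reconstructed.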
Find a path from A to E.

(Figure: a graph with nodes A-F, with the Frontier shown alongside.)

• Start with a frontier that contains the initial state.
• Repeat:
• If the frontier is empty, then no solution.
• Remove a node from the frontier.
• If node contains goal state, return the solution.
• Expand node, add resulting nodes to the frontier.
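Taken literally, the loop above can be sketched as follows. This is an illustrative version, not the course code: the slide does not say which node `remove` picks, so this sketch pops the most recently added one, and the `neighbors(state)` callable (yielding (action, next_state) pairs) is an assumed helper:

```python
def naive_search(start, is_goal, neighbors):
    """Literal version of the frontier loop: no explored set yet,
    so it can revisit states and loop forever on graphs with cycles."""
    frontier = [(start, [])]  # (state, actions taken so far)
    while frontier:
        state, actions = frontier.pop()  # remove a node from the frontier
        if is_goal(state):
            return actions               # goal reached: return the solution
        for action, next_state in neighbors(state):
            frontier.append((next_state, actions + [action]))
    return None                          # frontier empty: no solution
```

On an acyclic graph this terminates; the next slides show what goes wrong when the graph has a cycle.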
What could go wrong?
(Figures: stepping through the algorithm on a graph with a cycle between A and B. A gets added back to the frontier, so the search can shuttle between A and B forever.)
stack
last-in first-out data type
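In Python a plain list already behaves as a stack. A minimal frontier class in this style (the class and method names are one possible choice, not necessarily the exact course code):

```python
class StackFrontier:
    """Frontier backed by a stack: remove() returns the most recently
    added node, so the search expands the deepest node first."""
    def __init__(self):
        self.frontier = []

    def add(self, node):
        self.frontier.append(node)

    def contains_state(self, state):
        return any(node.state == state for node in self.frontier)

    def empty(self):
        return len(self.frontier) == 0

    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        return self.frontier.pop()  # last in, first out
```

Plugging this frontier into the search loop gives depth-first behavior.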
(Figures: the search repeated with an explored set. Using a stack as the frontier, nodes A, B, D, F, C are expanded in turn before E is found.)
Depth-First Search
depth-first search
search algorithm that always expands the
deepest node in the frontier
Breadth-First Search
breadth-first search
search algorithm that always expands the
shallowest node in the frontier
queue
first-in first-out data type
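Switching the frontier from a stack to a queue, and adding the explored set, gives breadth-first graph search. A sketch, with `neighbors(state)` (yielding (action, next_state) pairs) assumed as before:

```python
from collections import deque

def bfs(start, is_goal, neighbors):
    """Breadth-first graph search: FIFO frontier plus an explored set,
    so the shallowest nodes are expanded first and no state is revisited."""
    frontier = deque([(start, [])])  # FIFO queue of (state, actions)
    explored = {start}               # states already added to the frontier
    while frontier:
        state, actions = frontier.popleft()  # shallowest node first
        if is_goal(state):
            return actions
        for action, next_state in neighbors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None  # frontier empty: no solution
```

The explored set is what prevents the infinite loop from the "what could go wrong?" example: a state is never added to the frontier twice.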
(Figures: the same search with a queue as the frontier. Nodes A, B, C, D are expanded in order, and E is found via the shallower path.)
Depth-First Search

(Figure sequence: DFS exploring a maze from A, committing to one path until it dead-ends and then backtracking.)
Breadth-First Search

(Figure sequence: BFS exploring the same maze, expanding all cells at one distance before moving to the next.)
uninformed search
search strategy that uses no problem-
specific knowledge
informed search
search strategy that uses problem-specific
knowledge to find solutions more efficiently
greedy best-first search
search algorithm that expands the node
that is closest to the goal, as estimated by a
heuristic function h(n)
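Greedy best-first search can be sketched with a priority queue ordered by h(n) alone; the `h` and `neighbors` callables are assumed helpers, not course-provided code:

```python
import heapq

def greedy_best_first(start, is_goal, neighbors, h):
    """Expand the frontier node with the smallest heuristic value h(n).
    Often fast, but not guaranteed to find an optimal path."""
    frontier = [(h(start), start, [])]  # heap ordered by h(n) only
    explored = set()
    while frontier:
        _, state, actions = heapq.heappop(frontier)
        if is_goal(state):
            return actions
        if state in explored:
            continue  # a duplicate heap entry for an expanded state
        explored.add(state)
        for action, next_state in neighbors(state):
            if next_state not in explored:
                heapq.heappush(frontier,
                               (h(next_state), next_state, actions + [action]))
    return None
```

Because the priority ignores the cost already paid, g(n), the search can be lured down a long corridor whose cells happen to have small h values, which is exactly what the maze example below illustrates.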
Heuristic function? Manhattan distance.

(Figures: the maze, then each cell labeled with its Manhattan distance to the goal B, ignoring walls.)
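Manhattan distance counts horizontal plus vertical steps, ignoring walls:

```python
def manhattan(cell, goal):
    """Manhattan distance: |dx| + |dy|, the number of grid moves needed
    with no walls in the way, so it never overestimates the true maze distance."""
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)
```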
Greedy Best-First Search

(Figure sequence: greedy best-first search on two mazes, always expanding the frontier cell with the smallest h(n). On the second maze the smaller heuristic values lure it onto a longer route, showing that greedy search is not optimal.)
A* search

search algorithm that expands the node with the lowest value of g(n) + h(n)

g(n) = cost to reach the node
h(n) = estimated cost to the goal
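A* changes only the priority, from h(n) to g(n) + h(n). A sketch, where `neighbors(state)` now yields (action, next_state, step_cost) triples and is, as before, an assumed helper:

```python
import heapq

def a_star(start, is_goal, neighbors, h):
    """Expand the node with the smallest g(n) + h(n). With an admissible,
    consistent h, the first time the goal is popped the path is optimal."""
    frontier = [(h(start), 0, start, [])]  # (g+h, g, state, actions)
    best_g = {start: 0}                    # cheapest known cost to each state
    while frontier:
        f, g, state, actions = heapq.heappop(frontier)
        if is_goal(state):
            return actions, g
        if g > best_g.get(state, float("inf")):
            continue  # stale entry: a cheaper path to state was found later
        for action, next_state, cost in neighbors(state):
            new_g = g + cost
            if new_g < best_g.get(next_state, float("inf")):
                best_g[next_state] = new_g
                heapq.heappush(frontier, (new_g + h(next_state), new_g,
                                          next_state, actions + [action]))
    return None, float("inf")
```

With h(n) = 0 everywhere this reduces to uniform-cost search (Dijkstra's algorithm); a good heuristic simply steers the same machinery toward the goal sooner.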
A* Search

(Figure sequence: A* on the maze, labeling each expanded cell with g(n) + h(n), e.g. 1+16, 2+15, and so on. Once the running cost g(n) of the greedy route makes its total exceed the alternative, A* switches paths and reaches B along the optimal route.)
A* search
optimal if
- h(n) is admissible (never overestimates the
true cost), and
- h(n) is consistent (for every node n and
successor n' with step cost c, h(n) ≤ h(n') + c)
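For unit-cost grid moves, Manhattan distance satisfies both conditions: each move changes |dx| + |dy| by at most 1, so h(n) ≤ h(n') + 1, and consistency together with h(goal) = 0 implies admissibility along any path. A quick numeric check on a small grid (illustrative, not a proof):

```python
def manhattan(cell, goal):
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

goal = (4, 4)
assert manhattan(goal, goal) == 0  # heuristic is zero at the goal
for x in range(5):
    for y in range(5):
        h = manhattan((x, y), goal)
        # consistency: h(n) <= h(n') + c for every unit-cost (c = 1) move
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nx, ny = x + dx, y + dy
            if 0 <= nx < 5 and 0 <= ny < 5:
                assert h <= manhattan((nx, ny), goal) + 1
print("Manhattan distance is consistent on the 5x5 grid")
```

Adding walls only lengthens true paths, so the same h remains admissible on any maze built over this grid.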
