
Artificial Intelligence

Dr Siby Abraham
Email: siby@mu.ac.in
Website: www.machintelligence.net
• Unit I: Ch 1, 2, 3

• Unit II: Ch 18

• Unit III: Ch 20, 21


Unit I
• What Is AI

• Intelligent Agents

• Problem Solving by searching


What Is AI
• Foundations

• History and State of the Art of AI


Intelligent Agents
• Agents: An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.

• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.

• Environments, Nature of Environments, Structure of Agents
Agents and environments

• The agent function maps from percept histories to actions:


f : P* → A
• The agent program runs on the physical architecture to
produce f
E.g.: the vacuum-cleaner world

• Environment: squares A and B

• Percepts: [location and contents], e.g. [A, Dirty]
• Actions: Left, Right, Suck, and NoOp
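The vacuum-cleaner agent function above can be sketched as a simple reflex agent. This is a minimal illustration, not the aima-python implementation; the percept format follows the slide's [location, status] convention.

```python
# A minimal sketch of a reflex agent for the two-square vacuum world.
# The percept is a (location, status) pair, e.g. ('A', 'Dirty').

def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'        # clean the current square first
    elif location == 'A':
        return 'Right'       # square A is clean: move to B
    else:
        return 'Left'        # square B is clean: move to A

print(reflex_vacuum_agent(('A', 'Dirty')))   # Suck
print(reflex_vacuum_agent(('B', 'Clean')))   # Left
```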
Environments
• PEAS description of the environment:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS: e.g. a fully automated taxi

• Performance measure: safety, destination, profits, legality, comfort, …
• Environment: streets/freeways, other traffic, pedestrians, weather, …
• Actuators: steering, accelerator, brake, horn, speaker/display, …
• Sensors: video, speedometer, engine sensors, keyboard, GPS, …
Problem Solving by searching:
• Problem-Solving Agents

• Example Problems
Problem-solving agents
Example: Romania
Single-state problem formulation

A problem is defined by four items:


1. initial state e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3. goal test, can be
– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)
4. path cost (additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial
state to a goal state
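The four-item problem definition can be sketched as a small class. The Romania distances below are a fragment of the textbook map (Arad–Zerind 75, Arad–Sibiu 140, Arad–Timisoara 118); the `Go(...)` action names are an illustrative convention, not a fixed API.

```python
# A sketch of the four-item problem definition over a fragment of
# the Romania map.

ROMANIA_FRAGMENT = {
    'Arad':   {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
}

class Problem:
    """Initial state, successor function, goal test, and step cost."""
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def successors(self, state):
        # S(x) = set of <action, state> pairs
        return [(f'Go({city})', city) for city in self.graph[state]]

    def goal_test(self, state):
        # explicit goal test, e.g. x = "at Bucharest"
        return state == self.goal

    def step_cost(self, state, action, next_state):
        # c(x, a, y), assumed >= 0
        return self.graph[state][next_state]

problem = Problem('Arad', 'Bucharest', ROMANIA_FRAGMENT)
```

A solution is then any action sequence that a search algorithm finds from `problem.initial` to a state passing `goal_test`.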
Tree search example
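The tree-search loop can be sketched in a few lines: the strategy is determined entirely by the order in which nodes leave the fringe. Here a plain list is used as a LIFO stack (depth-first order); a FIFO queue in its place would give breadth-first search. The interface (a goal predicate plus a successor function) is an assumed convention for this sketch.

```python
# Generic tree search: expand nodes from the fringe until a goal is
# found; the fringe's removal order defines the search strategy.

def tree_search(start, goal_test, successors):
    fringe = [(start, [start])]          # LIFO stack of (state, path)
    while fringe:
        state, path = fringe.pop()       # strategy = order of removal
        if goal_test(state):
            return path
        for child in successors(state):
            fringe.append((child, path + [child]))
    return None                          # no solution found

# Usage on a tiny tree:
tree = {'A': ['B', 'C'], 'C': ['G']}
path = tree_search('A', lambda s: s == 'G', lambda s: tree.get(s, []))
```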
Uninformed search strategies
• Uninformed search strategies use only the information
available in the problem definition
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Breadth-first search

• Expand shallowest unexpanded node


• Implementation:
– fringe is a FIFO queue, i.e., new successors go at end
Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)

• Space is the bigger problem (more than time)
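A minimal breadth-first sketch, assuming the search space is given as an explicit dict of successor lists; the fringe is a FIFO queue, as on the slides, so the shallowest path is always expanded first.

```python
from collections import deque

# Breadth-first search: the fringe is a FIFO queue of paths, so new
# successors go at the end and the shallowest node is expanded first.

def breadth_first_search(graph, start, goal):
    fringe = deque([[start]])            # FIFO queue of paths
    while fringe:
        path = fringe.popleft()          # shallowest path first
        state = path[-1]
        if state == goal:
            return path
        for succ in graph.get(state, []):
            fringe.append(path + [succ])
    return None
```

Note the queue of whole paths makes the memory cost visible: every partial path at the current depth is kept, which is exactly the O(b^(d+1)) space problem noted above.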


Uniform-cost search

• Expand least-cost unexpanded node


• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal solution,
O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes expanded in increasing order of g(n)
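A uniform-cost sketch using Python's `heapq` as the priority queue, assuming the graph is a dict mapping each state to `{successor: step_cost}`. The Romania distances in the usage line are from the textbook map.

```python
import heapq

# Uniform-cost search: the fringe is a priority queue ordered by the
# path cost g(n), so the least-cost node is always expanded first.

def uniform_cost_search(graph, start, goal):
    fringe = [(0, start, [start])]                 # (g, state, path)
    explored = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)     # least-cost first
        if state == goal:
            return g, path
        if state in explored:
            continue                               # cheaper path already found
        explored.add(state)
        for succ, cost in graph.get(state, {}).items():
            heapq.heappush(fringe, (g + cost, succ, path + [succ]))
    return None

romania = {'Arad': {'Zerind': 75, 'Sibiu': 140},
           'Zerind': {'Oradea': 71},
           'Oradea': {'Sibiu': 151}}
print(uniform_cost_search(romania, 'Arad', 'Sibiu'))  # (140, ['Arad', 'Sibiu'])
```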
Depth-first search

• Expand deepest unexpanded node


• Implementation:
– fringe = LIFO queue, i.e., put successors at front
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces, spaces with loops
– Modify to avoid repeated states along path
→ complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
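A depth-first sketch over the same explicit-graph convention: the fringe is a LIFO stack, so successors are taken from the front and the deepest node is expanded first. The repeated-state check along the current path is the modification noted above that makes it complete in finite spaces.

```python
# Depth-first search: the fringe is a LIFO stack, so the most recently
# generated (deepest) node is expanded first.

def depth_first_search(graph, start, goal):
    fringe = [[start]]                   # LIFO stack of paths
    while fringe:
        path = fringe.pop()              # deepest path first
        state = path[-1]
        if state == goal:
            return path
        for succ in graph.get(state, []):
            if succ not in path:         # avoid repeated states along the path
                fringe.append(path + [succ])
    return None
```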
Informed searches
• Informed = use problem-specific knowledge
• Which search strategies?
– Best-first search and its variants
• Heuristic functions?
– How to invent them
• Local search and optimization
– Hill climbing, local beam search, genetic algorithms,…
• Local search in continuous spaces
• Online search agents

Unit II – Learning from examples:
Ch 18
• Forms of Learning,
• Supervised Learning,
• Learning Decision Trees,
• Evaluating and Choosing the Best Hypothesis,
• Theory of Learning,
• Regression and Classification with Linear Models,
• Artificial Neural Networks,
• Nonparametric Models,
• Support Vector Machines,
• Ensemble Learning,
• Practical Machine Learning
Types of learning
• Supervised

• Unsupervised

• Reinforcement
Restaurant waiting problem

• WillWait?
Restaurant problem using a decision tree
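A decision tree for WillWait can be hand-coded as nested attribute tests. The Patrons branches below follow the textbook example; the Full branch is illustrative only, since the exact tree learned from the training data may test further attributes.

```python
# A hand-coded sketch of a WillWait decision tree: each internal node
# tests one attribute, and each leaf returns the decision.

def will_wait(patrons, hungry):
    if patrons == 'None':
        return False            # empty restaurant: leave
    if patrons == 'Some':
        return True             # seated quickly: wait
    # patrons == 'Full': fall back to a further attribute test
    # (illustrative branch; a learned tree may test other attributes)
    return bool(hungry)

print(will_wait('Some', hungry=False))   # True
print(will_wait('Full', hungry=True))    # True
```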
Unit III- Learning probabilistic models
Ch 20, 21
• Statistical Learning,
• Learning with Complete Data,
• Learning with Hidden Variables: The EM Algorithm.
• Reinforcement learning:
– Passive Reinforcement Learning,
– Active Reinforcement Learning,
– Generalization in Reinforcement Learning,
– Policy Search,
– Applications of Reinforcement Learning.
3 by 4 world problem

• Actions: left, right, up, down
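In the textbook's version of this grid world, motion is stochastic: the agent moves in the intended direction with probability 0.8 and slips to either perpendicular direction with probability 0.1 each. That transition model can be sketched as:

```python
# Stochastic motion model for the 3-by-4 grid world: 0.8 for the
# intended direction, 0.1 for each perpendicular direction.

PERPENDICULAR = {'up': ('left', 'right'), 'down': ('left', 'right'),
                 'left': ('up', 'down'), 'right': ('up', 'down')}

def transition_model(action):
    """Return {actual_direction: probability} for an intended action."""
    side_a, side_b = PERPENDICULAR[action]
    return {action: 0.8, side_a: 0.1, side_b: 0.1}

print(transition_model('up'))   # {'up': 0.8, 'left': 0.1, 'right': 0.1}
```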


Practical

• Practicals 1 to 4 – Romanian map problem

• Practicals 5 to 8 – Restaurant waiting problem

• Practicals 9 to 10 – 3 by 4 world problem


• Textbook link: http://aima.cs.berkeley.edu

• Practical link: https://github.com/aimacode/aima-python
