
CS8691 ARTIFICIAL INTELLIGENCE
A. Jeyanthi
Associate Professor / CSE
Misrimal Navajee Munoth Jain Engineering College

CS8691 ARTIFICIAL INTELLIGENCE
• UNIT I INTRODUCTION
Introduction – Definition – Future of Artificial Intelligence – Characteristics of Intelligent Agents – Typical Intelligent Agents – Problem Solving Approach to Typical AI Problems.
• UNIT II PROBLEM SOLVING METHODS
Problem Solving Methods – Search Strategies – Uninformed – Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games
• UNIT III KNOWLEDGE REPRESENTATION
First Order Predicate Logic – Prolog Programming – Unification – Forward Chaining – Backward Chaining – Resolution – Knowledge Representation – Ontological Engineering – Categories and Objects – Events – Mental Events and Mental Objects – Reasoning Systems for Categories – Reasoning with Default Information
• UNIT IV SOFTWARE AGENTS
Architecture for Intelligent Agents – Agent Communication – Negotiation and Bargaining – Argumentation among Agents – Trust and Reputation in Multi-agent Systems.
• UNIT V APPLICATIONS
AI Applications – Language Models – Information Retrieval – Information Extraction – Natural Language Processing – Machine Translation – Speech Recognition – Robot – Hardware – Perception – Planning – Moving

Text Books
• S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", Prentice Hall, Third Edition, 2009.
• I. Bratko, "Prolog: Programming for Artificial Intelligence", Fourth Edition, Addison-Wesley Educational Publishers Inc., 2011.
References
• M. Tim Jones, "Artificial Intelligence: A Systems Approach (Computer Science)", Jones and Bartlett Publishers, Inc., First Edition, 2008.
• Nils J. Nilsson, "The Quest for Artificial Intelligence", Cambridge University Press, 2009.
• William F. Clocksin and Christopher S. Mellish, "Programming in Prolog: Using the ISO Standard", Fifth Edition, Springer, 2003.
• Gerhard Weiss, "Multi Agent Systems", Second Edition, MIT Press, 2013.
• David L. Poole and Alan K. Mackworth, "Artificial Intelligence: Foundations of Computational Agents", Cambridge University Press, 2010.
 

OBJECTIVES of this course:

• To understand the various characteristics of intelligent agents
• To learn the different search strategies in AI
• To learn to represent knowledge in solving AI
problems
• To understand the different ways of designing
software agents
• To know about the various applications of AI.
UNIT I INTRODUCTION
Introduction–Definition – Future of Artificial
Intelligence – Characteristics of Intelligent
Agents–Typical Intelligent Agents – Problem
Solving Approach to Typical AI problems.

Fig: Artificial Device + Intelligence = Artificial Intelligence

Artificial Intelligence: AI is concerned with the design of intelligence in an artificial device.
Example: consider the light in your room. With suitable sensors and code we can give intelligence to the light: it can sense the outside lighting and, based on that, switch itself ON or OFF.
What is Artificial Intelligence?
• Artificial intelligence is a broad branch of computer science
• AI is a large umbrella that covers many fields

Fig:Related Fields of AI

• AI is concerned with the design of intelligence in an artificial device
• Goal of AI: to design systems that can function intelligently and independently
What kinds of intelligence have been given to computers?
• AI is used in our daily life
• Humans can listen and speak in natural language; this is the field of "speech recognition"; computers can also do speech recognition
• Humans can read and write text in a language; this is the field of "natural language processing"; computers can also do natural language processing
• Humans can see and process what they see; this is the field of "computer vision"; computers can also do computer vision
• Humans can move around obstacles; robots can do this too
• Humans can group similar items; this is called "pattern recognition"; computers can also do pattern recognition

History of "Artificial Intelligence"
• John McCarthy coined the term AI at the Dartmouth workshop, 1956
• He stated that AI is the science and engineering of making intelligent machines

Sophia Robot
• Sophia is a social humanoid robot developed by the Hong Kong based company Hanson Robotics
• In 2017 Saudi Arabia granted citizenship to the Sophia robot

Ways of Achieving AI
• Machine Learning
• Deep Learning

Machine learning
• Machine learning is a subset of AI
• Humans can understand and learn only data having a few dimensions
• Machine learning can learn and understand higher-dimensional data (data having more than 100 or 1000 dimensions)

• Consider sales data with only 7 dimensions
• A human may understand and learn such data and make predictions on future input
• But if the data has a larger number of dimensions, then a human cannot
• Machine learning, however, can learn and understand such higher-dimensional data
Three Components of Machine learning

• 1. Datasets. Machine learning systems are trained on special collections of samples called datasets. The samples can include numbers, images, texts or any other kind of data. It usually takes a lot of time and effort to create a good dataset.
• 2. Features. Features are extracted from the data set.
• 3. Algorithm. Using the extracted features, a model is trained using an algorithm.
• The obtained model is then tested on test data.
• However, the accuracy of ML alone is not always adequate.
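As a rough illustration of these three components, here is a minimal sketch using scikit-learn; the dataset (iris) and the classifier chosen below are assumptions made for the example, not part of the syllabus:

```python
# Minimal sketch of the dataset -> features -> algorithm pipeline (illustrative only).
from sklearn.datasets import load_iris                 # 1. Dataset: a small sample collection
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier        # 3. Algorithm: any classifier would do

X, y = load_iris(return_X_y=True)                      # 2. Features: 4 numeric measurements per sample
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)         # train on the training data
print("accuracy on test data:", model.score(X_test, y_test))   # test the obtained model
```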
Types of ML Techniques
Supervised learning
Unsupervised learning
Semi-supervised learning

Machine learning
For example, to train a machine learning model to recognise dog images:
1) a data set consisting of N dog images is collected
2) features are extracted from the images
3) using the extracted features the model is trained, and the trained model is tested on new dog images
Drawback of machine learning: if we give an input for which the model has not been trained, the algorithm may produce a wrong output.
Deep learning
• Deep learning is a class of machine learning algorithms inspired by the structure of the human brain.
• The human brain is a network of neurons, using which humans can understand and learn things.
• In the same way, if we make machines understand complex things, it is called DL, which uses a multi-layered neural network.
• A neural network consists of an input layer, an output layer and hidden layers.
• These layers simulate the function of neurons.
• Neural networks replicate human neurons for cognitive learning.
• If the number of layers is large, the neural network is called deep learning.
• When humans see different types of dogs, the brain creates an abstract view of dogs, so even if we see a particular dog for the first time, we can recognise it as a dog.
• In a similar way, DL extracts the features that are common to all dogs and, based on these, it can recognise a dog.
Fig: Deep learning neural network

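A minimal NumPy sketch of such a multi-layered network, with input, hidden and output layers; the layer sizes and random weights are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(0, x)      # simple non-linearity used between layers

# A tiny feed-forward network: 4 inputs -> 8 hidden units -> 8 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer 1
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden layer 1 -> hidden layer 2
W3, b3 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden layer 2 -> output layer

def forward(x):
    """One forward pass: each layer applies its weights, bias and the non-linearity."""
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3                          # raw output scores

print(forward(np.array([0.1, 0.5, -0.2, 0.7])))
```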
Advantages of A.I.
 The chances of error are almost nil
 It can be used to explore space and the depths of the ocean
 Smartphones are a great example of A.I.
 It can be used to perform time-consuming tasks efficiently
 Algorithms can help doctors assess patients and their health risks
 Machines do not require sleep or breaks and are able to function continuously
A.I. FOR GOOD: analyse satellite images to identify which areas have the highest poverty
AVIATION: gate allocation for planes while landing; ticket price determination
EDUCATION: companies are creating robots to teach subjects
HEALTHCARE: solving a variety of problems of patients, hospitals and the healthcare industry overall; using avatars in place of patients
ROBOTICS: robots have become very common in many industries; they can do repetitive, laborious tasks
FINANCE: algorithmic trading; market analysis and data mining; personal finance; portfolio management
Disadvantages of A.I.
 High cost
 Decrease in demand for human labour
 AI may be programmed to do something devastating
 Machine ethics
 The storage and access are not as effective as human brains
 No improvement with experience
Future of Artificial Intelligence:

 Improved speech, voice, image and video recognition will change the way devices interact with us
 Personal assistants will become more personal and context aware
 More and more systems will run autonomously to a point
 The positive impact AI research can have on humanity will start to be felt across many walks of life, much of it behind the scenes
Two views of Intelligence

• 1. Behaving humanly: behaving intelligently like a human
• 2. Behaving rationally: doing the right thing, or behaving in the best possible manner
Human behaviour does not always satisfy the definition of rationality; therefore it is not always rational

Two types of Behavior
• 1.Thinking Behavior
• 2.Acting Behavior

Four views of AI
• 1.Thinking humanly
• 2.Acting humanly
• 3.Thinking rationally
• 4.Acting rationally
• In First two views, AI is measured against
human performance
• In Last two views, AI is measured against
rationality

Four definitions of AI
• 1. Thinking humanly (the cognitive modelling approach): AI is the effort to make computers think
• 2. Thinking rationally (the laws-of-thought approach): AI is the study of the computations that make it possible to perceive, reason and act
• 3. Acting humanly (the Turing test approach): AI is the study of how to make computers do things at which, at the moment, people are better
• 4. Acting rationally (the rational agent approach): AI is the study of intelligent agents
Turing test?
(Can Machine think? A. M. Turing, 1950)
• There is a closed room containing either a human or a computer
• The interrogator asks a question
• The being inside the room processes the question and answers it
• If the interrogator cannot distinguish whether the answers come from the human or the computer, then the computer possesses intelligence

Typical AI tasks
• AI can do various tasks such as
• 1. Commonplace tasks (can be done by all people)
• 2. Expert tasks (cannot be done by all people; only by skilled specialists)
• Commonplace tasks:
• 1. Recognising people and objects
• 2. Communicating through natural language
• 3. Navigating around obstacles on the street
• Expert tasks:
• 1. Medical diagnosis
• 2. Mathematical problem solving
• 3. Playing games like chess
• AI has achieved considerably more success in expert tasks than in commonplace tasks
What today's AI can do
• Today's AI has achieved limited success in some of the following tasks:
• 1. Autonomous planning and scheduling: NASA's Remote Agent program is an autonomous planning program used to control the operations of a spacecraft
• 2. Game playing: in 1997 IBM's Deep Blue program defeated Garry Kasparov in chess
• 3. Autonomous control: a computer vision system was trained to steer a car to keep it following a lane
• The idea: the camera in the car takes an image, which is given to a trained neural network; based on the image, the neural network tells whether to turn the car to the left or right or go straight

• 4. Diagnosis: medical diagnosis programs can perform at the level of expert physicians
• 5. Logistics planning: during the Persian Gulf crisis of 1991 the US used DART (Dynamic Analysis and Replanning Tool) to do automated logistics planning and scheduling for transportation; it generated a route plan for 50,000 objects, including cargo and people, at a time, considering starting points, destinations and routes
• 6. Medical robotics: many surgeons use robots in microsurgery
• 7. Language understanding and problem solving: PROVERB is a computer program that solves crossword puzzles better than most humans
Agent
• An agent is an entity that perceives its environment through sensors and acts upon that environment through actuators
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
• A software program can be considered a software agent
• For a software agent, keystrokes, file contents and network packets are sensory inputs
• Displaying on the screen, sending network packets and writing files are actuator outputs
• Percept: the perceptual input to an agent at any instant of time
• Percept sequence: the complete history of percepts received so far
• An agent's action depends on the current percept or on the percept sequence
• The agent function maps a percept sequence to an action, f: P* → A; it describes the agent's behaviour
• Agent program: the agent function is implemented by an agent program
• It runs on the physical architecture to produce f
• agent = architecture + program


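A rough Python sketch of the idea (the percepts and the rule used below are assumptions for illustration): the agent program is called by the architecture with one percept at a time and implements the agent function over the accumulated percept sequence:

```python
# Sketch: an agent program implementing an agent function f: P* -> A.
percept_sequence = []                       # the history P* of percepts seen so far

def agent_program(percept):
    """Called by the architecture with the latest percept; returns an action."""
    percept_sequence.append(percept)        # remember the full percept sequence
    location, status = percept              # vacuum-world style percept, e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

# The architecture would repeatedly do: action = agent_program(sensor_reading); actuate(action)
print(agent_program(('A', 'Dirty')))        # -> Suck
```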
• Table lookup approach (or look-up table approach) for agent construction
• In this approach the agent function that maps percept sequences to actions is tabulated
• Given an agent, we can construct this table by trying out all possible percept sequences and recording which action the agent does in response
• Example of the table-driven approach: the vacuum-cleaner world
• The vacuum cleaner has two locations, A and B
• The vacuum agent perceives which square it is in and whether there is any dirt in the square; it can choose the actions move left, move right, suck, or do nothing
• Percepts: location and contents, e.g., [A, Dirty]
• Agent function: if the current square is dirty then suck, otherwise move to the other square
• A partial tabulation of this agent function is shown below
• Actions: Left, Right, Suck, NoOp
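A sketch of the table-driven idea for this vacuum world (only a partial, hand-written table is shown; the helper names are invented for the example):

```python
# Table-driven vacuum agent: the agent function is stored as an explicit lookup table
# indexed by the percept sequence seen so far (only a partial table is shown here).
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('B', 'Clean')): 'Left',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    # ... one entry for every possible percept sequence
}

percepts = []

def table_driven_agent(percept):
    """Look up the action for the whole percept sequence."""
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')   # NoOp if the sequence is not tabulated

print(table_driven_agent(('A', 'Dirty')))       # -> Suck
```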


• Table lookup approach (or look-up table approach)
• Drawbacks:
• Huge table
• The table takes a long time to build
• No autonomy (as all actions are predefined)
• Even with learning, it takes a very long time to learn the table entries (since the table size is so large)

Concept of Rationality
• Rational agent: the agent that does the right thing
• Definition: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
• Rationality is not the same as perfection
• Rationality maximizes expected performance, whereas perfection maximizes actual performance
• A rational agent may have limitations in terms of resources, time and space
• Given its limitations, it is expected to maximize its performance
• A rational agent is not omniscient, as it does not know the actual outcome of all actions; it may not know certain aspects of the environment
• A rational action is the action that maximizes the expected value of the performance measure given the percept sequence

A rational action is the best action to the best of the agent's knowledge and optimal to the best of its ability
Characteristics of intelligent agents
1. Rational: able to act in a rational or intelligent way
2. Autonomous: must be able to act independently, not subject to external control
3. Persistent: able to run continuously
4. Communicative: able to provide information or commands to other agents
5. Cooperative: must be able to work with other agents to achieve goals
6. Mobile: ability to move
7. Adaptive: able to learn and adapt
Performance measure
Performance measure means measuring how successfully the agent performs its function
Examples: speed of the agent, power consumption, time needed, accuracy and money spent
There is no fixed performance measure suitable for all agents
When an agent is placed in an environment, it generates a sequence of actions according to the percepts it receives
This sequence of actions causes the environment to go through a sequence of states; if the sequence is desirable, then the agent has performed well
The performance measure depends on the agent's function
Example: performance measures of a vacuum cleaner are the amount of dirt cleaned up, time taken, power consumed, noise generated, and so on
Task environment
Task environments are the problems to which agents are the solutions
Specifying the task environment
 It is specified by the description PEAS
 PEAS:
 Performance, Environment, Actuators, Sensors

Taxi Driver Example
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: camera, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer

Medical Diagnosis System
Performance measure: healthy patient, minimize costs, lawsuits
Environment: patient, hospital, staff
Actuators: display of questions, tests, diagnoses, treatments, referrals
Sensors: keyboard entry of symptoms, findings, patient's answers

Mushroom-Picking Robot
Performance measure: percentage of good mushrooms in correct bins
Environment: conveyor belt with mushrooms, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors

Properties of Task Environments
1. Fully observable (vs. partially observable) environment:
 Fully observable: the agent's sensors give the complete state of the relevant features of the environment at each point in time
 Such an environment is convenient, because the agent does not need to maintain internal state to keep track of changes in the environment
 Ex: chess-playing program
 Partially observable: due to interference or uncertainty, the action-relevant features of the environment are only partially observable
 Ex: automated taxi
2. Deterministic (vs. stochastic) environment:
 Deterministic: the next state of the environment is determined by the current state and the agent's action
 Ex: an image-analysis system; given the input image and the operation, the output image is determined
 Stochastic: due to interference or uncertainty (for example, if the environment is partially observable), the next state cannot be determined
 Ex: automated taxi
 Strategic: if the next state of the environment is determined by the current state and the actions of other agents, then the environment is strategic
3. Episodic (vs. sequential) environment:
Episodic: the task of the environment can be divided into atomic phases or episodes
Each episode consists of the agent perceiving and then performing a single action
In this environment the current episode does not depend on actions taken in previous episodes; each episode's action depends only on that episode itself
Example: identifying defective parts on a PCB; the current decision of whether a part is defective does not depend on previous decisions
Sequential: the current episode's action depends on previous episodes' actions and affects future episodes
Example: chess, automated taxi
Episodic environments are simpler than sequential ones, because the agent does not need to think ahead

4. Static (vs. dynamic) environment:
Static: the environment can change from one state to the next only due to the agent's action
 Ex: crossword puzzle
Dynamic: the environment changes over time independently of the agent's actions
Ex: taxi driving
5. Discrete (vs. continuous) environment:
 Discrete: percepts and actions occur at discrete time steps and take distinct values
 Ex: chess
 Continuous: percepts and actions are continuous in time
Ex: taxi driving
6. Single-agent vs. multi-agent environment:
A single-agent environment has a single agent; Ex: crossword puzzle
A multi-agent environment has multiple agents; Ex: chess

7. Known vs. unknown environment:
Known environment: the outcomes of all actions are given
Ex: chess playing
Unknown environment: the outcomes of actions are unknown; the agent will have to learn how the environment works in order to make good decisions
Ex: automated taxi
8. Competitive vs. cooperative:
Competitive multi-agent environment: agents work in a competitive manner
Ex: chess
Cooperative multi-agent environment: agents work in a cooperative manner
Ex: automated taxi
Examples
Task Environment – Observable – Deterministic – Episodic – Static – Discrete – Agents
Crossword puzzle – fully – deterministic – sequential – static – discrete – single
Chess with a clock – fully – strategic – sequential – semi – discrete – multi
Taxi driving – partially – stochastic – sequential – dynamic – continuous – multi
Mushroom picking – partially – stochastic – episodic – dynamic – continuous – single
 The environment type largely determines the agent design
 The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous and multi-agent

Typical Intelligent Agents
Structure of an Agent
The job of AI is to design the agent program that implements the agent function mapping percepts to actions
This program runs on some sort of computing device with physical sensors and actuators; this is called the architecture
Agent = Architecture + Program
The architecture makes the percepts from the sensors available to the agent program, runs the program, and feeds the program's action choices to the actuators
Table-driven approach (already discussed)
The agent program gets its input from the sensors and gives its output as actions to the actuators.
Drawback of the table-driven approach
In the table-driven approach, the constructed table contains the appropriate action for every possible percept sequence.
Let P be the set of possible percepts and
let T be the lifetime of the agent (the total number of percepts it will receive).
Then the lookup table contains |P| + |P|^2 + ... + |P|^T entries.
So even for a simple environment like the vacuum-cleaner world, the size of the table is huge.
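A quick sanity check of that formula in Python, assuming the vacuum world's 4 possible percepts ([A,Clean], [A,Dirty], [B,Clean], [B,Dirty]) and a few illustrative lifetimes:

```python
# Number of lookup-table entries: sum of |P|**t for t = 1..T.
def table_size(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Vacuum world: |P| = 4 percepts; even a short lifetime gives a huge table.
for T in (10, 20, 30):
    print(T, table_size(4, T))
```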
Five Basic Agent Types
 Arranged in order of increasing generality:
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents; and
 Learning agents

Simple Reflex Agent
• The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
• It is suitable only for fully observable environments
• Ex: vacuum cleaner agent
• Consider a complex environment like the automated taxi:
• If the car in front brakes and its brake lights come on, then the agent should notice this and initiate braking. For this we establish a condition-action rule, written as "if car-in-front-is-braking then initiate-braking".
• The program in Figure 2.8 is specific to one particular vacuum environment.
• A more general and flexible approach is first to build a general-purpose interpreter for condition-action rules and then to create rule sets for specific task environments.
• Figure 2.9 gives the structure of this general program in schematic form, showing how the condition-action rules allow the agent to make the connection from percept to action.
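A rough sketch of such a general rule interpreter in Python, using vacuum-world rules; the function and rule names below are assumptions for illustration, not the textbook's exact program:

```python
# Simple reflex agent: acts only on the current percept via condition-action rules.
rules = [
    (lambda state: state['status'] == 'Dirty',  'Suck'),
    (lambda state: state['location'] == 'A',    'Right'),
    (lambda state: state['location'] == 'B',    'Left'),
]

def interpret_input(percept):
    """INTERPRET-INPUT: build an abstracted state description from the percept."""
    location, status = percept
    return {'location': location, 'status': status}

def rule_match(state, rules):
    """RULE-MATCH: return the action of the first rule whose condition matches the state."""
    for condition, action in rules:
        if condition(state):
            return action

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rule_match(state, rules)          # RULE-ACTION: execute the matched rule's action

print(simple_reflex_agent(('A', 'Clean')))   # -> Right
```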
Fig: Simple reflex agent (schematic)

• Ex: vacuum cleaner agent
• INTERPRET-INPUT: this function generates an abstracted description of the current state from the percept
• RULE-MATCH: this function returns the first rule in the set of rules that matches the given state description
• RULE-ACTION: the action of the selected rule is executed as the response to the given percept
• Advantage: simple reflex agents have the admirable property of being simple
• Disadvantage: they have limited intelligence; they work only if the environment is fully observable, and even a little unobservability can cause serious trouble
• For example, we assumed earlier that the condition car-in-front-is-braking can be determined from the current single frame of video. Unfortunately this is not always possible: older models have different configurations of taillights, brake lights, and turn-signal lights, and it is not always possible to tell from a single image whether the car is braking
• A simple reflex agent driving behind such a car would either brake continuously and unnecessarily, or, worse, never brake at all
Model-based Reflex Agents
• The most effective way to handle partial observability
• The agent maintains an internal state, or internal representation of the world called a "model", to keep track of changes in the environment; hence it is called a model-based agent
• It stores information about unobserved aspects of the current state
• It combines the current percept with the old internal state to generate an updated description of the current state
• A model-based agent updates its internal state using two kinds of information:
• 1. Information about how the world evolves independently of the agent; for example, an overtaking car generally will be closer behind than it was a moment ago
• 2. Information about how the agent's own actions affect the world; for example, when the agent turns the steering wheel clockwise, the car turns to the right
The function UPDATE-STATE is responsible for creating the new internal state description.

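A sketch of a model-based reflex agent for the vacuum world; the internal state and the model assumption used here are simplifications chosen for illustration:

```python
# Model-based reflex agent: an internal state remembers what has been seen so far,
# including squares that are not currently observed.
state = {'A': 'Unknown', 'B': 'Unknown'}     # internal model of the world
last_action = None

def update_state(state, last_action, percept):
    """UPDATE-STATE: combine the old internal state with the new percept.
    Model assumption (illustrative): squares do not get dirty again by themselves."""
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    return new_state

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    location, _ = percept
    other = 'B' if location == 'A' else 'A'
    if state[location] == 'Dirty':
        action = 'Suck'
    elif state[other] in ('Dirty', 'Unknown'):     # unobserved aspect drives the choice
        action = 'Right' if other == 'B' else 'Left'
    else:
        action = 'NoOp'                            # the model says everything is clean
    last_action = action
    return action

print(model_based_reflex_agent(('A', 'Clean')))    # -> Right (B's status is still unknown)
```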
Fig: Goal-based agent (schematic)

• Goal-Based Agent
• Knowing something about the current state of the environment is not always enough to decide what to do
• For example, at a road junction the taxi can turn left, turn right, or go straight on; the correct decision depends on the destination
• Therefore, in addition to the current state description, the agent needs some sort of goal information in order to reach the goal
• The agent program can combine the goal information with the model to choose actions that achieve the goal
• Therefore a goal-based agent chooses its actions based on the goal
• Goal-based action selection is straightforward when the goal can be achieved by a single action
• If achieving the goal needs a long action sequence, then searching and planning techniques are needed
• Advantage: although the goal-based agent appears less efficient, it is more flexible, because in the automated taxi a goal-based agent can easily be adapted to different destinations

Fig: Utility-based agent (schematic)

Utility-Based Agent
• Goal-based agents alone are not enough to generate high-quality behavior in most environments
• For example, many action sequences will get the taxi to its destination, but some are quicker, safer, more reliable, or cheaper than others
• Goal-based agents provide only a binary distinction between "happy" and "unhappy" states
• A utility-based agent uses a utility function
• The utility function maps each state to a number, which describes the associated degree of happiness
• It is used to measure the agent's preference among the states of the world
• The expected utility of an action is computed by averaging over all possible outcome states, weighted by the probability of each outcome
• The agent then chooses the action that leads to the best expected utility

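The action choice can be sketched directly from that description; the actions, outcome probabilities and utilities below are made-up numbers for illustration:

```python
# Utility-based choice: pick the action with the highest expected utility,
# EU(a) = sum over outcomes s of P(s | a) * U(s).  All numbers are illustrative.
utility = {'at_destination_fast': 1.0, 'at_destination_slow': 0.6, 'accident': 0.0}

outcomes = {   # P(outcome | action)
    'take_highway':   {'at_destination_fast': 0.7, 'at_destination_slow': 0.2, 'accident': 0.1},
    'take_side_road': {'at_destination_fast': 0.2, 'at_destination_slow': 0.78, 'accident': 0.02},
}

def expected_utility(action):
    return sum(p * utility[s] for s, p in outcomes[action].items())

best = max(outcomes, key=expected_utility)
print({a: round(expected_utility(a), 3) for a in outcomes}, '->', best)
```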
Fig: Learning agent (schematic)

Learning Agent
• Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone would allow
• A learning agent can be divided into four conceptual components:
• Performance element: responsible for selecting external actions; it takes in percepts and decides on actions
• Critic: tells the learning element how well the agent is doing with respect to a fixed performance standard
• Learning element: uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future; the learning element is responsible for making improvements
• The design of the learning element depends on the performance element
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences
Ex: Automated Taxi
• The performance element consists of the knowledge and procedures for selecting driving actions
• The taxi goes out on the road and is driven by the performance element
• The critic observes the world and passes information to the learning element
• For example, if the taxi makes a quick left turn across the road, the critic observes the shocking language used by other drivers
• From this experience the learning element formulates a new rule saying this was a bad action, and the performance element is modified by installing the new rule
• The problem generator identifies areas of behaviour that need improvement and suggests experiments, such as trying out braking on different road surfaces under different conditions

Problem Solving Approach to Typical AI Problems
• Problem-solving agents
• A problem-solving agent is one kind of goal-based agent
• It needs to achieve certain goals
• To achieve a goal, it chooses a sequence of actions
• The process of finding a sequence of actions is called searching
• A search algorithm takes a problem as input and returns a sequence of actions as the solution
• Steps in problem solving:
• (i) Goal formulation - based on the current situation, the goal and performance measure are formulated
• (ii) Problem formulation - the process of deciding what actions and states to consider, given a goal
• (iii) Search - the process of finding the different possible sequences of actions that lead to the goal
• (iv) Solution - a search algorithm takes a problem as input and returns a solution in the form of an action sequence
• (v) Execution phase - if a solution exists, the actions it recommends can be carried out
Problem definition
• A problem can be defined formally by four components:
• 1. initial state, 2. successor function, 3. goal test, 4. path cost
• 1. The initial state: the state in which the agent starts
• 2. Successor function (S): given a particular state x, S(x) returns the states reachable from x by any single action
• 3. The goal test, which determines whether a given state is a goal state
• State space - the set of all possible states reachable from the initial state by any sequence of actions
• Path (in the state space) - a sequence of states connected by actions
• 4. A path cost function assigns a numeric cost to each path; it is the sum of the individual action costs along the path
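These four components map naturally onto a small class; a minimal sketch (the interface below is an assumption, loosely following the description above):

```python
# A problem is defined by: initial state, successor function, goal test, path cost.
class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        """Return a list of (action, next_state) pairs reachable by one action."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if state is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one action; the path cost is the sum of step costs along the path."""
        return 1
```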
Example Problems
The problem-solving approach has been applied to two types of problems.
• A toy problem is intended to illustrate various problem-solving methods. It can be given a concise, exact description and can be used easily by different researchers to compare the performance of algorithms
• A real-world problem is one whose solutions people actually care about
• Toy Problems
• i) Vacuum world problem
• States: the agent is in one of two locations, each of which might or might not contain dirt; thus there are 2 × 2^2 = 8 possible world states
• Initial state: any state can be designated as the initial state
• Successor function: returns the states that result from the three actions (Left, Right, and Suck)
• Goal test: checks whether all the squares are clean
• Path cost: each step costs 1, so the path cost is the number of steps in the path

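The same formulation written out as a sketch in Python, encoding a state as (agent location, dirt in A, dirt in B); the encoding is a choice made for illustration:

```python
# Vacuum world as a search problem: 2 locations x 2 x 2 dirt combinations = 8 states.
def vacuum_successors(state):
    loc, dirt_a, dirt_b = state
    return [
        ('Left',  ('A', dirt_a, dirt_b)),
        ('Right', ('B', dirt_a, dirt_b)),
        ('Suck',  (loc, False, dirt_b) if loc == 'A' else (loc, dirt_a, False)),
    ]

def vacuum_goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b            # goal: all squares clean

print(vacuum_successors(('A', True, True)))     # each step has cost 1
```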
• ii) 8-puzzle Problem
• The 8-puzzle problem consists of a 3 x 3 board with eight numbered tiles
and a blank space. A tile adjacent to the blank space can slide into the
space. The objective is to reach a specified goal state
• States: A state description specifies the location of each of the eight tiles
and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state.
• Successor function: This generates the legal states that result from
trying the four actions (blank moves Left, Right, Up, or Down).
• Goal test: This checks whether the state matches the goal configuration
• Path cost: Each step costs 1, so the path cost is the number of steps in
the path

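A sketch of the successor function for this formulation, encoding a state as a tuple of 9 numbers read row by row with 0 for the blank (an encoding chosen for illustration):

```python
# 8-puzzle successors: the blank (0) can move Left, Right, Up or Down within the 3x3 board.
MOVES = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}

def puzzle_successors(state):
    """state is a tuple of 9 numbers read row by row, 0 marking the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for action, delta in MOVES.items():
        if (action == 'Left' and col == 0) or (action == 'Right' and col == 2) \
           or (action == 'Up' and row == 0) or (action == 'Down' and row == 2):
            continue                                  # this move would leave the board
        target = blank + delta
        new_state = list(state)
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        result.append((action, tuple(new_state)))
    return result

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)                    # goal test: state == goal
print(puzzle_successors((1, 0, 2, 3, 4, 5, 6, 7, 8)))
```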
Fig: State space of the 8-puzzle problem

• iii) 8-queens problem
• The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any other (a queen attacks any piece in the same row, column or diagonal)
• Two kinds of formulation:
• 1) Incremental formulation
• 2) Complete-state formulation
• 1) Incremental formulation
• Starts with an empty board; each action adds a queen to an empty square
• States: any arrangement of 0 to 8 queens on the board is a state
• Initial state: no queens on the board
• Successor function: add a queen to any empty square
• Goal test: 8 queens are on the board, none attacked
• In this formulation there are about 1.8 × 10^14 possible sequences to investigate
• 2) Complete-state formulation
• Starts with all 8 queens on the board and moves them around
• States: arrangements of 8 queens, one per column
• A better formulation uses the successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
• With this formulation the number of possible states is reduced to 2,057

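That reduction can be checked with a short enumeration of this formulation (queens added column by column, never attacked); the code is an illustrative sketch:

```python
# Count the states of the improved 8-queens formulation: queens are added one per column,
# left to right, only on squares not attacked by any queen already placed.
def count_states(n=8):
    def extend(rows):                 # rows[i] = row of the queen in column i
        total = 1                     # count the current (partial) placement itself
        col = len(rows)
        if col == n:
            return total
        for row in range(n):
            if all(row != r and abs(row - r) != col - c for c, r in enumerate(rows)):
                total += extend(rows + [row])
        return total
    return extend([])

print(count_states())   # 2057 states (including the empty board), matching the figure above
```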
Fig: State space of the 4-queens problem

Real-world problems
• 1.Route-finding problem is defined in terms of specified locations and transitions along links between
them. Route-finding algorithms are used in a variety of applications, such as routing in computer
networks, military operations planning, and airline travel planning systems

• 2.The traveling salesperson problem (TSP) is a touring problem in which each city must be visited
exactly once. The aim is to find the shortest tour.

• 3. VLSI layout problem requires positioning millions of components and connections on a chip to
minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield.
The layout problem comes after the logical design phase, and is usually split into two parts:
• cell layout and
• channel routing.
• In cell layout, the primitive components of the circuit are grouped into cells, each of which performs
some recognized function. Each cell has a fixed footprint (size and shape) and requires a certain number
of connections to each of the other cells. The aim is to place the cells on the chip so that they do not
overlap and so that there is room for the connecting wires to be placed between the cells.
• Channel routing finds a specific route for each wire
• through the gaps between the cells.

• Robot navigation is a generalization of the route-finding problem
described earlier. Rather than a discrete set of routes, a robot can move
in a continuous space with (in principle) an infinite set of possible actions
and states. For a circular robot moving on a flat surface, the space is
essentially two-dimensional.
• When the robot has arms and legs or wheels that must also be controlled,
the search space becomes many-dimensional. Advanced techniques are
required just to make the search space finite. In addition to the
complexity of the problem, real robots must also deal with errors in their
sensor readings and motor controls.

Searching for Solutions
• Problem solving is done by searching through the state space
• There are several search techniques for searching through the state space
• Search techniques use a search tree
• A search tree is a tree generated during the search process; it is generated by the initial state and the successor function, which together define the state space
• Search node: each node of the search tree is a search node; the root corresponds to the initial state of the problem
• General tree-search algorithm
• 1. Start with the root node
• 2. The first step is to test whether this is a goal state; if it is, return the solution
• 3. Else expand the node: apply the successor function to the current state and generate a new set of nodes
• 4. Choose another node and repeat choosing, testing, and expanding until either a solution is found or there are no more nodes to be expanded
• Note: the choice of which node to expand is determined by the search strategy
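A compact sketch of this general tree-search loop in Python; the toy successor function at the end is invented for illustration:

```python
# General TREE-SEARCH: keep expanding nodes from the frontier until a goal is found.
from collections import deque

def tree_search(initial_state, successors, goal_test):
    frontier = deque([(initial_state, [])])          # (state, actions taken so far)
    while frontier:
        state, path = frontier.popleft()             # which node to pop = the search strategy
        if goal_test(state):                         # step 2: goal test
            return path                              # solution: the sequence of actions
        for action, next_state in successors(state): # step 3: expand via the successor function
            frontier.append((next_state, path + [action]))
    return None                                      # no more nodes to expand: failure

# Example with a toy successor function (illustrative):
succ = {'A': [('go-B', 'B'), ('go-C', 'C')], 'B': [('go-D', 'D'), ('go-E', 'E')],
        'C': [], 'D': [], 'E': []}
print(tree_search('A', lambda s: succ[s], lambda s: s == 'E'))   # -> ['go-B', 'go-E']
```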
• Consider the route-finding problem: find the route from A to F
• Fig: search tree for the route-finding problem (root A, expanded into successors B and C, which are expanded in turn into D, E, ...)
• Frontier (fringe): the collection of nodes that have been generated but not yet expanded
• The search technique chooses the next node for expansion from the frontier
• Leaf node: a node with no successors

GRAPH-SEARCH Algorithm
• A drawback of the tree-search algorithm is the possibility of re-exploring already expanded nodes
• This can be overcome by the GRAPH-SEARCH algorithm
• It is an extension of the tree-search algorithm, adding an "explored set" which remembers every expanded node
• Newly generated nodes that match previously generated nodes (ones in the explored set or on the frontier) can be discarded instead of being added to the frontier

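A sketch of the same loop with the explored set added; only the lines handling the explored set differ from the tree-search sketch above:

```python
# GRAPH-SEARCH: like tree search, but an explored set discards repeated states.
from collections import deque

def graph_search(initial_state, successors, goal_test):
    frontier = deque([(initial_state, [])])
    explored = set()                                  # every state already expanded
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        explored.add(state)
        for action, next_state in successors(state):
            # discard states already in the explored set (or already on the frontier)
            if next_state not in explored and all(next_state != s for s, _ in frontier):
                frontier.append((next_state, path + [action]))
    return None

# Illustrative use on a tiny graph with a cycle:
succ = {'A': [('to-B', 'B')], 'B': [('back-A', 'A'), ('to-C', 'C')], 'C': []}
print(graph_search('A', lambda s: succ[s], lambda s: s == 'C'))   # -> ['to-B', 'to-C']
```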
Data structure for the search tree
• A search tree is a group of nodes; each node n is defined by five components:
• n.STATE: the state in the state space to which the node corresponds
• n.PARENT: the node in the search tree that generated this node
• n.ACTION: the action that was applied to the parent to generate the node
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers
• n.DEPTH: the number of steps along the path from the initial state
• Difference between nodes and states
• A state corresponds to a configuration of the real world
• A node is a bookkeeping data structure used to represent the search tree
• Multiple nodes can contain the same state if the state is generated through different search paths

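A sketch of this node bookkeeping structure as a Python dataclass; the child_node helper is an assumed convenience, not from the slides:

```python
# A search-tree node: bookkeeping structure, distinct from the state it contains.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # n.STATE : the state this node corresponds to
    parent: Optional["Node"] = None  # n.PARENT: the node that generated this one
    action: Any = None               # n.ACTION: action applied to the parent
    path_cost: float = 0.0           # n.PATH-COST: g(n), cost from the initial state
    depth: int = 0                   # n.DEPTH : number of steps from the initial state

def child_node(parent, action, next_state, step_cost=1):
    """Build the child node reached from `parent` by `action`."""
    return Node(next_state, parent, action, parent.path_cost + step_cost, parent.depth + 1)

root = Node(state='A')
child = child_node(root, 'go-B', 'B')
print(child.state, child.path_cost, child.depth)   # B 1 1
```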
• The collection of nodes on the frontier is represented using a queue
• The queue operations are defined as:
• EMPTY?(queue) returns true only if there are no more elements in the queue
• POP(queue) removes the first element of the queue and returns it
• INSERT(element, queue) inserts an element and returns the resulting queue

• Three common variants of the queue:
• FIFO queue, which pops the oldest element of the queue
• LIFO queue (last-in, first-out, also known as a stack), which pops the newest element
• Priority queue, which pops the element of the queue with the highest priority according to some ordering function

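A sketch of the three variants using Python's standard library (deque for FIFO/LIFO, heapq for the priority queue):

```python
from collections import deque
import heapq

# FIFO queue: pop the oldest element.
fifo = deque()
fifo.append('n1'); fifo.append('n2')
print(fifo.popleft())          # -> n1

# LIFO queue (stack): pop the newest element.
lifo = []
lifo.append('n1'); lifo.append('n2')
print(lifo.pop())              # -> n2

# Priority queue: pop the element with the best value of some ordering function
# (here the smallest path cost).
pq = []
heapq.heappush(pq, (5, 'n1')); heapq.heappush(pq, (2, 'n2'))
print(heapq.heappop(pq)[1])    # -> n2
```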
• Measuring problem-solving performance
• Search strategies/algorithms are evaluated on four important criteria:
• (i) Completeness: whether the algorithm/strategy is guaranteed to find a solution when one exists
• (ii) Time complexity: the time taken to find a solution
• (iii) Space complexity: the memory needed to perform the search
• (iv) Optimality: whether the strategy finds the best solution when more than one solution exists
• Complexity is expressed in terms of three quantities:
• b, the branching factor: the maximum number of successors of any node
• d, the depth of the shallowest (shortest) goal node
• m, the maximum length of any path in the state space
• Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory

