
Artificial Intelligence

Chapter One
Introduction

12/04/2021 AI/CSE 3206 1


Introduction

• Chapter Objectives
– Define intelligence
– Define AI
– Describe what an agent is
– State what rational agent is
– Identifying areas and achievements of AI
– Explain AI history and trends



Data, information, knowledge and wisdom
– According to Russell Ackoff, the content of the human mind can be
classified into five categories:
• Data: is a raw fact (symbols)
• Information: data that are processed to be useful(giving meaning to
data)
• Knowledge: application of data and information; answers "how"
• Understanding: appreciation of "why"
• Wisdom: evaluated understanding
Data
• It simply exists and has no significance beyond its existence
• It can exist in any form, usable or not
• It does not have meaning by itself
• A spreadsheet generally starts out by holding data



Data…..contd
• Information
• Is data that has been given meaning by way of relational connection
• This "meaning" can be useful, but does not have to be
• A relational database makes information from the data stored within it





Data…..contd

• Knowledge
• It is the appropriate collection of information, such that its intent is to be useful.
• Knowledge is a deterministic process.
• Most of the applications we use (modeling, simulation, etc.) exercise some type of stored knowledge.
Understanding
• It is a true cognitive and analytical ability
• Understanding is an interpolative and probabilistic process
• It synthesizes new knowledge from previously held knowledge



Data…..contd
• Wisdom
• Is an extrapolative and non-deterministic, non-probabilistic process
• It calls upon all the previous levels of consciousness, and specifically upon special types of human programming (moral and ethical codes, etc.).
• Most people believe that computers do not have, and will never have, the ability to possess wisdom.

• The following diagram represents the transitions from data, to information, to knowledge, and finally to wisdom. It is called the knowledge hierarchy



Data…..contd
Knowledge Hierarchy



Data…..contd
• The first four categories relate to the past.
• Only the fifth category, wisdom, deals with the future because it
incorporates vision and design.
• With wisdom, people can create the future rather than just grasp the
present and past.
• But achieving wisdom isn't easy.
• People must move successively through the other categories.
• Most importantly, it is very hard to represent wisdom in a computer system





Data…..contd

How raw data gets converted to wisdom through various levels of processing; how our brain processes information
Views ….contd
• Intelligence: "the capacity to learn and solve problems" (Webster's dictionary); the ability to act rationally
• Natural Intelligence Versus Artificial Intelligence(?)
• There are different views/definitions to AI
– Views of AI fall into four different perspectives --- two dimensions:



Views ….contd
• Think Like Humans: The cognitive modeling approach

• Involves cognitive modeling


– If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds (a sufficiently precise theory of the mind)
– The human thinking process is difficult to understand:
• How does the mind arise from the brain?
• Think also about unconscious tasks such as vision, speech understanding, and reflex actions
– Humans are not perfect!
• We make a lot of systematic mistakes
Views ….contd

• Act Like Humans: The Turing Test approach


• To be intelligent, a program should simply act like a human
• The Turing Test (an operational test for intelligent behavior: the Imitation Game)
– Indistinguishability from undeniably intelligent entities: human beings.
– Capabilities needed(Suggested Major Components of AI)
• Natural Language Processing -successful communication
• Knowledge representation -store what it knows or hears
• Automated reasoning -Answer questions and make conclusions
• Machine learning-adaptation, detect and extrapolate patterns
• Computer vision-perceive objects
• Robotics-manipulate objects and move about
– Researchers have not devoted much effort to passing the Turing Test itself (just as aeronautical engineers do not aim to build planes that fool pigeons into thinking they are pigeons)



Views ….contd
• Think Rationally : the laws of thought
• Instead of thinking like a human, think rationally
– Find out how correct thinking must proceed
– Syllogism: “Socrates is a man; all men are mortal, therefore Socrates is
mortal.”
– These laws of thought were supposed to govern the operation of the mind;
their study initiated the field called logic
• A traditional and important branch of mathematics and computer science.
– Problem:
• It is not always possible to model thought as a set of rules; sometimes
there is uncertainty.
• Even when a modeling is available, the complexity of the problem may
be too large to allow for a solution.



Views ….contd
• Act Rationally: The rational agent approach
– An agent is an entity that perceives and acts.
– Rational agent: acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
– Logical thinking is only one aspect of appropriate behavior: reactions like pulling your hand away from a hot surface are not the result of careful deliberation, yet they are clearly rational.
– Sometimes there is no provably correct thing to do, yet something must be done.
– Therefore, instead of insisting on how the program should think, it is better to insist on how the program should act: caring only about the final result (goal).
Views ….contd
• Summary of Views of AI



Views ….contd
• Modeling exactly how humans actually think
– cognitive models of human reasoning
• Modeling exactly how humans actually act
– models of human behavior (what they do, not how they think)
• Modeling how ideal agents “should think”
– models of “rational” thought (formal logic)
– NB: humans are often not rational!
• Modeling how ideal agents “should act”
– rational actions but not necessarily formal rational reasoning
– i.e., more of a black-box/engineering approach
• Modern AI focuses on the last definition
– A focus on this “engineering” approach
– Success is judged by how well the agent performs
• Modern methods are also inspired by cognitive science and neuroscience (how people think).
Views ….contd
 AI is an attempt to reproduce human reasoning and intelligent behavior by computational methods
 The goal of AI is to create computer systems (machines) that perform tasks regarded as requiring intelligence when done by humans
 Take a task at which people are better, e.g.:
• Prove a theorem
• Play chess
• Plan a surgical operation
• Diagnose a disease
• Navigate in a building
and build a computer system that does it automatically



Views ….contd
• Therefore,
–AI involves modeling humans (activities, behaviour, thoughts, etc.) and even other animals

• The goal of Artificial Intelligence (AI) is to build software systems that behave "intelligently".

• AI involves building computer systems that "do the right thing" in complex environments

• The systems that are built act optimally given the limited information and computational resources available.



Foundations of Artificial Intelligence
Philosophy: Knowledge representation, logic, foundations of AI (is AI possible?)
Mathematics: Search, analysis of search algorithms, logic
Economics: Expert systems, decision theory, principles of rational behavior
Psychology: Behaviorist insights into AI programs
Brain Science (Neuroscience): Learning, neural nets
Physics: Learning, information theory & AI, entropy, robotics, image processing
Computer Engineering: Systems for AI
Linguistics: Natural language processing (NLP), speech recognition, computational linguistics, knowledge representation, expert systems, etc.



What can AI do today?(Roles of AI)

A concise answer is difficult, because there are so many activities in so many subfields.
• Autonomous planning and scheduling
• Game playing (e.g., Deep Blue defeated world chess champion Garry Kasparov)
• Autonomous control
• Diagnosis
• Logistics Planning
• Robotics
• Language understanding and problem solving



Some Achievements
 Computers have won over world champions in several games, including Checkers, Othello, and Chess; Go resisted these techniques for much longer (until AlphaGo in 2016)
 AI techniques are used in many systems:
formal calculus, video games, route
planning, logistics planning,
pharmaceutical drug design, medical
diagnosis, hardware and software trouble-
shooting, speech recognition, traffic
monitoring, facial recognition,
medical image analysis, part
inspection, etc...
 Stanford’s robotic car, Stanley,
autonomously traversed 132 miles of desert
 Some industries (automobile, electronics)
are highly robotized,
while other robots perform brain
and heart surgery, are rolling
on Mars, fly autonomously, …,
but home robots still remain
a thing of the future



Some Big Open Questions
 AI (especially, the "rational agent" approach) assumes that intelligent behaviors are based only on information processing. Is this a valid assumption?

 If yes, can the human brain machinery solve problems that are
inherently intractable for computers?
 In a human being, where is the interface between “intelligence”
and the rest of “human nature”, e.g.:
• How does intelligence relate to emotions felt?
• What does it mean for a human to “feel” that he/she
understands something?
 Is this interface critical to intelligence? Can there exist a general
theory of intelligence independent of human beings? What is the
role of the human body?



Some…contd
 In the movie I, Robot, the most impressive feature of the robots is not their ability to solve complex problems, but how they blend human-like reasoning with other key aspects of human beings (especially, self-consciousness, fear of dying, distinction between right and wrong)


Some…contd
 AI contributes to building an information processing model of
human beings, just as Biochemistry contributes to building a
model of human beings based on bio-molecular interactions
 Both try to explain how a human being operates
 Both also explore ways to avoid human imperfections (in Biochemistry, by
engineering new proteins and drug molecules; in AI, by designing rational
reasoning methods)
 Both try to produce new useful technologies

 Neither explains (yet?) the true meaning of being human



Main Areas of AI

 Knowledge representation (including formal logic)
 Search, especially heuristic search (puzzles, games)
 Planning
 Reasoning under uncertainty, including probabilistic reasoning
 Learning
 Agent architectures
 Robotics and perception
 Natural language processing
 Constraint satisfaction
 Expert systems
 ...



Bits of History

 1956: The name "Artificial Intelligence" is coined
 60's: Search and games, formal logic and theorem proving
 70's: Robotics, perception, knowledge representation, expert systems
 80's: More expert systems; AI becomes an industry
 90's: Rational agents, probabilistic reasoning, machine learning
 00's: Systems integrating many AI methods, machine learning, reasoning under uncertainty, robotics again



Questions are
Welcome



Artificial Intelligence(AI)

Chapter Two : Intelligent Agent



Chapter Objectives

◦ Defining what an agent is in general
◦ Understanding the concept of rationality
◦ Giving ideas about agent, agent function, agent program and architecture, environment, percept, sensor, actuator (effectors)
◦ Giving ideas on how an agent should act
◦ Explaining agent types as well as agent environments
◦ Identifying ways of measuring agent success
◦ Describing rational agents, autonomous agents and omniscient agents



What is an Agent?

– An agent is anything that can be viewed as perceiving its environment through sensors and acting upon the environment through effectors (actuators)
• A human agent has eyes, ears, and other organs for sensors; and hands, legs, mouth, and other body parts as effectors
• A robotic agent has cameras, sound recorders, and infrared range finders for sensors; and various motors for effectors.
• A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.



What is…..contd

• We use the term percept to refer to the agent's perceptual inputs


at any given instant.
• An agent's percept sequence is the complete history of
everything the agent has ever perceived.
• In general, an agent's choice of action at any given instant can
depend on the entire percept sequence observed to date.
• If we can specify the agent's choice of action for every possible
percept sequence, then we have said more or less everything there
is to say about the agent.
• Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action: f: P* → A
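As a sketch of the mapping f: P* → A, here is a toy agent function in Python. The (location, status) percept encoding and the policy are illustrative assumptions; the point is only that the function receives the entire percept sequence, not just the latest percept.

```python
# Toy agent function for a two-square world: maps a percept sequence
# to an action. Percepts are assumed to be (location, status) pairs.
def agent_function(percept_sequence):
    # The whole history is available; this toy policy uses it to stop
    # once it has ever observed both squares clean.
    seen_clean = {loc for loc, status in percept_sequence if status == "Clean"}
    if seen_clean >= {"A", "B"}:
        return "NoOp"
    location, status = percept_sequence[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```

Note that two agents with identical latest percepts can act differently if their histories differ, which is exactly why the function is defined over P* rather than over single percepts.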



What is…..contd

Agents interact with environments through sensors and actuators.


What is…..contd

• Given an agent to experiment with, it is possible to construct a table by trying out
• all possible percept sequences and
• recording which actions the agent does in response.
• Sometimes the table may be infinite
• The table is an external characterization of the agent.
• Internally, the agent function for an artificial agent will be
implemented by an agent program.
• It is important to keep these two ideas distinct.
• The agent function is an abstract mathematical description
• The agent program is a concrete implementation, running on the
agent architecture.



What is…..contd

• An intelligent agent perceives its environment via sensors and acts


rationally upon that environment with its effectors.
• A discrete agent receives percepts one at a time, and maps this
percept sequence to a sequence of discrete actions.
• Properties
–Reactive to the environment
–Pro-active or goal-directed
–Interacts with other agents through
communication or in the environment
–Autonomous



What is…..contd
• So, any agent consists of two parts:
– Agent architecture
– Agent program
• The architecture is the hardware and the program is the
software.
• The role of the agent program is to implement the agent
function.
• The agent function is a mapping from percept histories to
actions.



What is…..contd
Ideal Example of an Agent
Vacuum-cleaner world
• Percepts: location and contents
– e.g., [A, Dirty] (location ∈ {A, B}, status ∈ {Dirty, Clean})
 Actions: [Left, Right, Suck, Do Nothing]

Partial tabulation of a simple agent function for the vacuum-cleaner world
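The partial tabulation above can be sketched in Python as a dictionary from percept-sequence tuples to actions. The (location, status) encoding and the specific entries are assumptions chosen to match the standard two-square vacuum world:

```python
# Partial tabulation of the vacuum-cleaner agent function.
# Keys are percept sequences (tuples of (location, status) percepts).
vacuum_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def vacuum_agent(percept_sequence):
    """Look up the action for the observed percept sequence."""
    return vacuum_table[tuple(percept_sequence)]
```

The table is only partial: a complete tabulation would need an entry for every possible percept sequence, which grows without bound.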
What is…..contd
• A rational agent is one that does the right thing
• Conceptually speaking, every entry in the table for the agent
function is filled out correctly.
• Obviously, doing the right thing is better than doing the wrong
thing, but what does it mean to do the right thing?
Agents may be rational or human-like
We have seen that how humans act or think is difficult to understand due to the complex structure of human intelligence.
Our agent should therefore be designed from the rationality view: it should act rationally



What is…..contd
How should Agents act?
A rational agent is an agent that does the right thing given the data perceived from the environment
What is right is an ambiguous concept, but we can consider the right thing to be the one that makes the agent more successful.
Success is measured using a performance measure, and a performance measure embodies the criterion for success of an agent's behavior
Question
How and when do you measure success in performance?



What is…..contd
 Performance measure (how?)
– Subjective measure by the agent itself
• How happy is the agent at the end of the action?
• The agent answers based on its own opinion
• Some agents are unable to answer, some delude themselves, some overestimate and some underestimate their success
• Therefore, a subjective measure is not a reliable approach.
– An objective measure imposed by some authority is the alternative
• But the selection of a performance measure is not always easy.



What is…..contd
 Objective Measure
◦ Needs standard to measure success
◦ Provides quantitative value of success measure of an agent
◦ Involves factors that affect performance and weight to each factors
E.g., the performance measure of a vacuum-cleaner agent could be:
 amount of dirt cleaned up (note: rewarding the amount of dirt cleaned can be gamed, e.g., by dumping dirt and cleaning it up again),
 amount of time taken,
 amount of electricity consumed,
 amount of noise generated, etc.
 Time factor in measuring performance is also important for success.
 It may include knowing starting time, finishing time, duration of job, etc

•Which is better-an economy where everyone lives in moderate poverty, or


one in which some live in plenty while others are very poor?
What is…..contd
 Omniscience versus Rational Agent
– An omniscient agent is distinct from a rational agent
– An omniscient agent knows the actual outcome of its actions and can act accordingly
– Omniscience is impossible in reality
– A rational agent, however, is an agent that tries to achieve the most success from its decisions.
– A rational agent can make a mistake because of factors unpredictable at the time of making the decision.
– For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance
What is…..contd

• Rationality is not the same as perfection.


• Rationality maximizes expected performance, while perfection
maximizes actual performance.
 An omniscient agent that acts and thinks rationally never makes a mistake
 The omniscient agent is an idealization; it cannot exist in the real world
 Agents can perform actions in order to modify future percepts so
as to obtain useful information (information gathering,
exploration)
What is…..contd

Factors to measure rationality of agents


1. The percept sequence perceived so far (do we have the entire history of how the world evolved or not?)
2. The set of actions that the agent can perform (agents designed to do the same job with different action sets will have different performance)
3. The performance measure (is it subjective or objective? What are the factors and their weights?)
4. The agent's knowledge about the environment (what kind of sensors does the agent have? Does the agent know everything about the environment or not?)



What is…..contd

Ideal Rational Agent

◦ For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

◦ Implementing an ideal rational agent requires perfection

◦ In real situations such an agent is difficult to achieve

◦ Why do car accidents happen? Because drivers are not perfect agents
What is…..contd

 Autonomy
◦ An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)
◦ An agent lacks autonomy if its actions are based completely on built-in knowledge
◦ Example: a student grade-decider agent:
 Knowledge base given: rules for converting numeric grades to letter grades
 Case 1: the agent always follows the rules (lacks autonomy)
 Case 2: the agent modifies the rules by learning exceptions from the knowledge base as well as the grade distribution (autonomous)
What is…..contd

Structure of an Intelligent Agent

 The structure of an AI agent refers to the design of the intelligent agent program (the function that implements the agent mapping from percepts to actions) that will run on some sort of computing device called the architecture
 This course focuses on intelligent agent program theory, design and implementation
 Designing an intelligent agent needs prior knowledge of:
◦ the Performance measure or Goal the agent is supposed to achieve,
◦ what kind of Environment it operates in,
◦ what kind of Actuators it has (what are the possible Actions),
◦ what kind of Sensors it has (what are the possible Percepts)
 Performance measure, Environment, Actuators, Sensors are abbreviated as PEAS
 Percepts, Actions, Goal, Environment are abbreviated as PAGE



What is…..contd

Examples of agents structure and sample PEAS/PAGE


 Agent: automated taxi driver:

◦ Environment: Roads, traffic, pedestrians, customers


◦ Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors,
keyboard
◦ Actuators: Steering wheel, accelerator, brake, signal, horn
◦ Performance measure: Safe, fast, legal, comfortable trip, maximize profits
 Agent: Medical diagnosis system

◦ Environment: Patient, hospital, physician, nurses, …


◦ Sensors: Keyboard (percept can be symptoms, findings, patient's answers)
◦ Actuators: Screen display (action can be questions, tests, diagnoses,
treatments, referrals)
◦ Performance measure: Healthy patient, minimize costs, lawsuits
What is…..contd

Examples of agents structure and sample PEAS


 Agent: Interactive English tutor
◦ Environment: Set of students, testing agency
◦ Sensors: Keyboard (typed words)
◦ Actuators: Screen display (exercises, suggestions, corrections)
◦ Performance measure: Maximize student's score on test
 Agent: Satellite image analysis system
◦ Environment: Images from an orbiting satellite
◦ Sensors: Pixels of varying intensity, color
◦ Actuators: print categorization of scene
◦ Performance measure: Correct categorization
What is…..contd

Examples of agents structure and sample PEAS


 Agent: Part picking robot
 Environment: Conveyor belt with parts, bins
 Sensors: pixels of varying intensity(Camera, joint angle sensors)
 Actuators: pickup parts and sort into bins(Jointed arm and hand)
 Performance measure: place parts in correct bins
 An agent is completely specified by the agent function that maps
percept sequences into actions
 Aim: find a way to implement the rational agent function concisely



What is…..contd

Agent programs
 Skeleton of the Agent

function SKELETON-AGENT(percept) returns action
    static: memory, the agent's memory of the world
    memory ← UPDATE-MEMORY(memory, percept)
    action ← CHOOSE-BEST-ACTION(memory)
    memory ← UPDATE-MEMORY(memory, action)
    return action

Note:
1. The function gets only a single percept at a time
   Q: how to get the percept sequence?
2. The goal or performance measure is not part of the skeleton
What is…..contd

Table-lookup agent
• A table-lookup agent stores all percept sequence–action pairs in a table
• For each percept, this type of agent appends it to its history, searches for the matching entry, and returns the corresponding action.
• Table lookup is not a practical way to implement a successful agent
• Why?
• Drawbacks:
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, it needs a long time to learn the table entries
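The table-lookup idea can be sketched in a few lines (the class name and "NoOp" fallback are assumptions for illustration). Note how the agent must key the table on its entire percept history, which is what makes the table huge:

```python
class TableDrivenAgent:
    """Keeps the full percept history and indexes the table with it."""

    def __init__(self, table):
        self.table = table    # maps percept-sequence tuples to actions
        self.percepts = []    # history grows without bound

    def act(self, percept):
        self.percepts.append(percept)
        # With P possible percepts and a lifetime of T steps, the table
        # needs sum(P**t for t in 1..T) entries -- the "huge table" drawback.
        return self.table.get(tuple(self.percepts), "NoOp")
```

Even for the tiny vacuum world, only explicitly listed sequences get a real action; everything else falls through to the fallback.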
What is…..contd

Agent types
• Based on the memory of the agent and the way the agent takes actions, we can divide agents into five basic types (in increasing order of generality):
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agent
Notation of model:
• Rectangles: used to represent the current internal state of the agent decision
process
• Ovals: used to represent the background information used in the process
What is…..contd

Simple Reflex Agents


• It is the simplest type of agent.
• It uses a set of condition-action rules.
• It uses only the current percept.
• The rules are of the form "if this is the percept then this is the best action".
• They cannot make decisions about things that they cannot directly perceive, i.e. they have no model of the state of the world.
• Simple reflex agents have the admirable property of being simple, but they turn out to be of very limited intelligence
• They work only if the correct decision can be made on the basis of only the current percept, that is, only if the environment is fully observable.



What is…..contd

Simple reflex agents



What is…..contd

• Simple reflex agent function prototype

function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules, a set of condition-action rules
    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action
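The prototype above can be sketched in Python for the vacuum world. The rule encoding is an assumption; INTERPRET-INPUT is trivial here because the percept already is the state:

```python
# Condition-action rules on the current percept only.
# Each key is a (location, status) state; each value is the action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    state = percept      # INTERPRET-INPUT: the percept is the state
    return RULES[state]  # RULE-MATCH and RULE-ACTION combined
```

Because the agent keeps no memory, it would loop forever between clean squares; that limitation is exactly what the model-based agent below addresses.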



What is…..contd

Model Based Agent


– It is a more complex type of Agent.

– Model based agents maintain an internal model of the world, which


is updated by precepts as they are received.

– In addition, they have built-in knowledge (i.e. prior knowledge) of


how the world tends to evolve.

– It cannot plan to achieve longer-term goals.

– It lives in the present only and does not think about the future.



What is…..contd
Model-based reflex agents (also called a reflex agent
with internal state)



What is…..contd

Model-based reflex agents

function MODEL-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules
    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    state ← UPDATE-STATE(state, action)
    return action
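A minimal sketch of this structure for the vacuum world, under assumed encodings: the internal state remembers the last known status of each square, and both the percept and the chosen action update it.

```python
class ModelBasedVacuum:
    """Reflex agent with internal state for the two-square vacuum world."""

    def __init__(self):
        # Internal model: last known status of each square (None = unknown).
        self.world = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.world[location] = status               # UPDATE-STATE with percept
        other = "A" if location == "B" else "B"
        if status == "Dirty":
            action = "Suck"
        elif self.world[other] == "Dirty":          # remembered, not perceived
            action = "Right" if location == "A" else "Left"
        else:
            action = "NoOp"
        if action == "Suck":
            self.world[location] = "Clean"          # UPDATE-STATE with action
        return action
```

Unlike the simple reflex agent, this one can act on facts it remembers but cannot currently perceive, such as the status of the other square.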



What is…..contd

Goal-based agents(A model-based, goal-based agent)


• Knowing about the current state of the environment is not always enough
to decide what to do.
• For example, at a road junction, the taxi can turn left, turn right, or go straight on.
• The correct decision depends on where the taxi is trying to get to (its goal).
• That is, the agent needs some sort of goal information that describes
situations that are desirable.
• It is a model-based, goal-based agent.
• It keeps track of the world state as well as a set of goals it is trying to
achieve, and chooses an action that will (eventually) lead to the
achievement of its goals.
– Is it easy always?



What is…..contd
Goal-based agents
Notice that decision making of this kind is fundamentally different from the condition-action rules described earlier, in that it involves consideration of the future: both "What will happen if I do such-and-such?" and "Will that make me happy?"



What is…..contd

Goal-based agent structure

function GOAL-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            goal, a description of the goal to achieve (may be in terms of states)
    state ← UPDATE-STATE(state, percept)
    actionSet ← POSSIBLE-ACTIONS(state)
    action ← ACTION-THAT-LEADS-TO-GOAL(actionSet)
    state ← UPDATE-STATE(state, action)
    return action
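ACTION-THAT-LEADS-TO-GOAL can be sketched with a one-step world model for the taxi-junction example above. The model and names are illustrative assumptions; a real agent would search over longer action sequences.

```python
# Assumed one-step world model: state -> {action: resulting state}.
SUCCESSORS = {
    "junction": {"left": "airport", "right": "downtown", "straight": "suburb"},
}

def goal_based_agent(state, goal):
    """Pick an action whose predicted result satisfies the goal."""
    for action, result in SUCCESSORS[state].items():
        if result == goal:   # "What will happen if I do this action?"
            return action
    return None              # no single action reaches the goal
```

The key difference from a reflex agent: the decision uses a prediction of the future (the successor model) plus the goal, not a fixed percept-to-action rule.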



What is…..contd

Utility Based Agents


• Goals alone are not really enough to generate high-quality behavior in most
environments.
• Goal can be useful, but are sometimes too simplistic.
• Clearly there can be many actions that lead to a goal being achieved, but some
are better than others.
• Utility based agents deal with this by assigning a utility to each state of the
world.
– This utility defines how “happy” the agent will be in such a state.
• Goal-based agents implicitly contain a utility function, but goals alone make it difficult to express more complex "desires".
• Explicitly stating the utility function also makes it easier to define the desired
behaviour of utility based agents.
What is…..contd

A complete utility-based agent



What is…..contd
Utility-based agent structure

function UTILITY-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            utility, a function mapping states to real numbers
    state ← UPDATE-STATE(state, percept)
    actionSet ← POSSIBLE-ACTIONS(state)
    action ← BEST-ACTION(actionSet)
    state ← UPDATE-STATE(state, action)
    return action

Remark:
• Utility can be represented as a function that maps states into real numbers. The larger the
number the higher the utility of the state.
• A complete specification of the utility function allows rational decisions in two kinds of
cases where goals have trouble.
• First, when there are conflicting goals, only some of which can be achieved (e.g.,
speed vs. safety), the utility function specifies the appropriate trade-off.
• Second, when there are several goals that the agent can aim for, none of which can be
achieved with certainty, utility provides a way in which the likelihood of success can
be weighed up against the importance of the goals.
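BEST-ACTION can be sketched by mapping outcome states to real-valued utilities, as the remark describes. All state names, action names, and utility values below are illustrative assumptions (loosely echoing the speed-vs-safety trade-off):

```python
# Assumed utilities of outcome states (larger = better)
UTILITY = {"fast_unsafe": 0.2, "slow_safe": 0.6, "fast_safe": 0.9}

# Assumed one-step outcome model: action -> resulting state.
OUTCOME = {"speed_up": "fast_unsafe", "slow_down": "slow_safe",
           "reroute": "fast_safe"}

def best_action(action_set):
    """BEST-ACTION: choose the action whose outcome has maximal utility."""
    return max(action_set, key=lambda a: UTILITY[OUTCOME[a]])
```

Because utilities are real numbers, conflicting goals (speed vs. safety) collapse into a single comparable scale, which is exactly the advantage the remark describes over all-or-nothing goals.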



What is…..contd

Learning Agents
• In many areas of AI, this is now the preferred method for
creating state-of-the-art systems
• A learning agent can be divided into four conceptual
components
• Learning element
– Suggests improvements to any part of the performance element.
– The input to the learning element comes from the critic (feedback on how the agent is doing, which determines how the performance element should be modified to do better in the future)
What is…..contd

• Learning Agent
• Performance element
– Responsible for selecting external actions(it takes in percepts
and decides on actions)
• Critic
– Analyses incoming percepts and decides whether the actions of the agent have been good or not.
– To decide this, it uses an external performance standard.
• Problem Generator
– Responsible for suggesting actions that will result in new
knowledge about the world being acquired.
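The four components above can be wired together in a toy loop. Everything here (the rule table, the critic's standard, the "Explore" action) is an illustrative assumption, not a prescribed API; the point is only how the pieces feed each other:

```python
class LearningAgent:
    """Toy wiring of performance element, critic, learning element,
    and problem generator."""

    def __init__(self):
        self.rules = {"Dirty": "Suck"}   # performance element's rules

    def critic(self, percept, action):
        # External performance standard: sucking dirt is good.
        return 1 if (percept == "Dirty" and action == "Suck") else 0

    def learning_element(self, percept, action, feedback):
        # Suggest an improvement to the performance element on bad feedback.
        if feedback == 0:
            self.rules[percept] = "Explore"

    def problem_generator(self):
        # Suggest an action expected to yield new knowledge.
        return "Explore"

    def step(self, percept):
        # Performance element: select an external action.
        action = self.rules.get(percept, self.problem_generator())
        # Critic grades it; learning element revises the rules.
        self.learning_element(percept, action, self.critic(percept, action))
        return action
```

Even in this toy, the division of labour is visible: the performance element acts, the critic grades, the learning element revises, and the problem generator proposes informative actions.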



What is…..contd
Learning agents



What is…..contd
Types of Environment
• Based on the portion of the environment observable
– Fully observable: An agent's sensors give it access to the complete
state of the environment at each point in time. (chess vs. driving)
– Partially observable
– Fully unobservable
• Based on the effect of the agent action
– Deterministic : The next state of the environment is completely
determined by the current state and the action executed by the
agent.
– Strategic: If the environment is deterministic except for the actions
of other agents, then the environment is strategic
– Stochastic or probabilistic
12/04/2021 AI/CSE 3206 72
What is…..contd

• Types of Environment
• Based on the number of agents involved
– Single agent A single agent operating by itself in an environment.
– Multi-agent: multiple agents are involved in the
environment(Chess(Competitive) versus Taxi(Cooperative or
partially competitive)
• Based on the state, action and percept space pattern
– Discrete: A limited number of distinct, clearly defined state,
percepts and actions.
– Continuous: state, percept and action are continuously changing
variables
– Note: one or more of them can be discrete or continuous

12/04/2021 AI/CSE 3206 73


What is…..contd

Types of Environment cont …


• Based on the effect of time
– Static: The environment is unchanged while an agent is deliberating.
– Dynamic: The environment changes while an agent deliberates
– semi-dynamic: The environment is semi-dynamic if the environment
itself does not change with the passage of time but the agent's
performance score does
• Based on loosely dependent sub-objectives
– Episodic: The agent's experience is divided into atomic "episodes"
(each episode consists of the agent perceiving and then performing a
single action), and the choice of action in each episode depends only
on the episode itself.
– Sequential: the current decision could affect all future decisions; the agent's experience is not divided into independent episodes

12/04/2021 AI/CSE 3206 74


What is…..contd

Environment Types: Example


Environment        Chess with a clock   Chess w/out a clock   Taxi Driving

Fully Observable   Yes                  Yes                   No
Deterministic      Strategic            Strategic             No
Episodic           No                   No                    No
Static             Semi                 Yes                   No
Discrete           Yes                  Yes                   No
Single Agent       No                   No                    No

12/04/2021 AI/CSE 3206 75


What is…..contd

Remark:
• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
• As one might expect, the hardest case is partially observable,
stochastic, sequential, dynamic, continuous, and multi agent.
• It also turns out that most real situations are so complex that
whether they are really deterministic is a moot point.
• For practical purposes, they must be treated as stochastic. Taxi
driving is hard in all these senses.

12/04/2021 AI/CSE 3206 76


What is…..contd
Types of Environment cont …

12/04/2021 AI/CSE 3206 77


Summary
• An agent perceives and acts in an environment, has an architecture, and is
implemented by an agent program
• An ideal agent always chooses the action which maximizes its expected
performance, given its percept sequence so far
• An autonomous agent uses its own experience rather than built-in
knowledge of the environment by the designer
• An agent program maps from percept to action and updates its internal state
– Reflex agents respond immediately to percepts
– Model-based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept
– Goal-based agents act in order to achieve their goal(s)
– Utility-based agents maximize their own utility function(“happiness”)
– All agents types can increase their performance through learning
• Representing knowledge is important for successful agent design
• The most challenging environments are partially observable, stochastic,
sequential, dynamic, and continuous, and contain multiple intelligent agents.

12/04/2021 AI/CSE 3206 78


Questions are
Welcome

12/04/2021 AI/CSE 3206 79


Artificial Intelligence

Chapter Three:
(Problem Solving: Uninformed Search)

12/04/2021 AI/CSE 3206 80


Objectives

Identify the type of agent that solves problems by searching


Problem formulation and goal formulation
Types of problems based on environment type
Discuss various techniques of search strategies (Uninformed
Search)

12/04/2021 AI/CSE 3206 81


Problem…contd

• Four general steps in problem solving:


– Goal formulation
• What are the successful world states
– Problem formulation
• What actions and states to consider given the goal
– Search
• Determine the possible sequence of actions that lead to the
states of known values and then choosing the best
sequence.
– Execute
• Given the solution, perform the actions.

12/04/2021 AI/CSE 3206 82


Problem-solving agent

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action

  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then
      goal ← FORMULATE-GOAL(state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action

• A simple problem-solving agent.
• It first formulates a goal and a problem, searches for a
  sequence of actions that would solve the problem, and
  then executes the actions one at a time.
• When this is complete, it formulates another goal and
  starts over.
• Note that when it is executing the sequence it ignores its
  percepts: it assumes that the solution it has found will
  always work.
12/04/2021 AI/CSE 3206 83
Problem…contd
• A problem can be defined formally by four components
1. The initial state(the agent starts in)
2. A description of the possible actions(uses successor
function)
3. The goal test which determines whether a given state is a
goal state
4. A path cost is function that assigns a numeric cost to each
path(distance, etc)
• Together, the initial state and successor function implicitly define the
state space of the problem-the set of all states reachable from the
initial state.
• Path in the state space is a sequence of states connected by a
sequence of actions.

12/04/2021 AI/CSE 3206 84
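The four components just listed can be sketched as a small Python class. The road graph, its state names and step costs below are invented for illustration; only the structure (initial state, successor function, goal test, path cost) follows the slide.

```python
# A problem defined by the four components on this slide, for a toy
# route-finding graph. States, roads and costs are made up.

ROADS = {  # successor function as a table: state -> {next_state: step cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}

class Problem:
    def __init__(self, initial, goal):
        self.initial = initial            # component 1: initial state
        self.goal = goal

    def successors(self, state):          # component 2: possible actions
        return ROADS[state].items()

    def goal_test(self, state):           # component 3: goal test
        return state == self.goal

    def path_cost(self, path):            # component 4: sum of step costs
        return sum(ROADS[a][b] for a, b in zip(path, path[1:]))

p = Problem("A", "D")
print(p.goal_test("D"), p.path_cost(["A", "B", "C", "D"]))  # True 4
```

The state space is implicitly everything reachable from `initial` through `successors`, exactly as the slide states.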


Problem…contd
– A solution to a problem is a path from the initial state to a goal
state. Solution quality is measured by the path cost function, and
an optimal solution has the lowest path cost among all
solutions.
• Type of agent that solve problem by searching
– Such agent is not reflex or model based reflex agent because
this agent needs to achieve some target (goal)
– It can be goal based or utility based or learning agent
– Intelligent agent knows that to achieve certain goal, the state
of the environment will change sequentially and the change
should be towards the goal
– Intelligent agents are supposed to maximize their
performance measure
12/04/2021 AI/CSE 3206 85
Problem…contd
• Assume a problem is to reach specified place(location) as it is
indicated on the following slide
– A problem is defined by:
• An initial state, e.g. Arad
• Successor function S(X)= set of action-state pairs
– e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}
initial state + successor function = state space
• Goal test, can be
– Explicit, e.g. x = ‘at Bucharest’
– Implicit, e.g. checkmate(x)
• Path cost (additive)
– e.g. sum of distances, number of actions executed, …
– c(x,a,y) is the step cost, assumed to be >= 0
A solution is a sequence of actions from initial to goal state.
Optimal solution has the lowest path cost.
12/04/2021 AI/CSE 3206 86
Problem…contd

States

Actions

Start Solution

Goal

12/04/2021 AI/CSE 3206 87


Problem…contd

• In the preceding section we proposed a formulation of the problem of


getting to Bucharest in terms of the initial state, successor function,
goal test, and path cost
• This formulation seems reasonable, yet it omits a great many aspects
of the real world. Real world is absurdly complex.
– The state of the world(state description) includes so many things
for example , :
• The traveling companions,
• What is on the radio,
• The scenery out of the window,
• Whether there are any law enforcement officers nearby,
• How far it is to the next rest stop,
• The condition of the road, the weather, and so on.
12/04/2021 AI/CSE 3206 88
Problem…contd
– All these considerations are left out of our state descriptions
because they are irrelevant to the problem of finding a route to
Bucharest.
– The process of removing detail from a representation is called
abstraction.
– In addition to abstracting the state description, we must abstract
the actions themselves.
• A driving action has many effects.
– Besides changing the location of the vehicle and its occupants, it takes
up time, consumes fuel, generates pollution, and changes the agent (as
they say, travel is broadening).
– In formulation, in our example, we take into account only the change
in location.

12/04/2021 AI/CSE 3206 89


Problem…contd

• Problem formulation
– For vacuum world problem, the problem formulation involve:
• States: The agent is in one of two locations, each of which
might or might not contain dirt. Thus there are 2 x 2^2 = 8
possible world states.
• Initial state: Any state can be designated as the initial state.
• Successor function: This generates the legal states that result
from trying the three actions (Left, Right, and Suck).
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of
steps in the path.

12/04/2021 AI/CSE 3206 90
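The vacuum-world formulation above can be sketched directly in Python. States are represented here as a tuple (agent location, left square dirty?, right square dirty?) — a representation chosen for this sketch, giving the 2 × 2² = 8 states the slide counts.

```python
# Successor function and goal test for the two-square vacuum world.
# State = (agent_location, left_dirty, right_dirty); 2 x 2^2 = 8 states.

def successors(state):
    loc, left_dirty, right_dirty = state
    return {
        "Left":  ("L", left_dirty, right_dirty),
        "Right": ("R", left_dirty, right_dirty),
        "Suck":  (loc,
                  False if loc == "L" else left_dirty,
                  False if loc == "R" else right_dirty),
    }

def goal_test(state):
    # Goal: all the squares are clean (agent may be anywhere).
    _, left_dirty, right_dirty = state
    return not left_dirty and not right_dirty

s = ("L", True, True)                     # agent on left, both squares dirty
print(successors(s)["Suck"])              # ('L', False, True)
```

With unit step costs, the path cost of a solution is simply the number of actions, matching the slide.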


Problem…contd

• Goal formulation: refers to the understanding


of the objective of the agent based on the state
description of the final environment
• For example, for the vacuum world problem,
the goal can be formulated as
[Clean, Clean, agent at any block]

12/04/2021 AI/CSE 3206 91


Problem…contd
• In problem solving by searching, solution can be described into
two ways.
• Solution can be provided as state sequence or action sequence
• For example consider the vacuum cleaner world with initial
state as shown below
• Solution as state sequence becomes:

Suck Move Right


In general, an agent with several immediate options of unknown value
can decide what to do by first examining different possible sequences
of actions that lead to states of known value, and then choosing the
best sequence.
12/04/2021 AI/CSE 3206 92
Problem…contd

– This process of looking for such a sequence is called search.


– A search algorithm takes a problem as input and returns a
solution in the form of an action sequence.
– Once a solution is found, the actions it recommends can be
carried out. This is called the execution phase.
– Thus, we have a simple "formulate, search, execute" design
for the agent

12/04/2021 AI/CSE 3206 93


Problem…contd

Agent Program

12/04/2021 AI/CSE 3206 94


Problem…contd
Example: Road map of Ethiopia
[Figure: road map of Ethiopia — cities including Aksum, Mekele, Lalibela,
Bahir Dar, Dessie, Debre Markos, Gondar, Dire Dawa, Addis Ababa, Adama,
Jima, Nekemt, Gambela and Awasa, connected by roads labeled with their
distances]
12/04/2021 AI/CSE 3206 95
Problem…contd

Example: Road map of Ethiopia

• Current position of the agent(Initial State): Awasa.


• Needs to arrive to: Gondar
• Formulate goal: be in Gondar
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Awasa, Adama, Addis Ababa, Dessie,
Gondar
12/04/2021 AI/CSE 3206 96
Problem…contd
Types of Problems
• Four types of problems exist in the real situations:
1. Single-state problem
– The environment is Deterministic and fully observable
– Out of the possible state space, agent knows exactly
which state it will be in; solution is a sequence
2. Sensorless problem (conformant problem)
– The environment is non-observable
– It is also called multi-state problem
– Agent may have no idea where it is; solution is a
sequence
12/04/2021 AI/CSE 3206 97
Problem…contd

3. Contingency problem
– The environment is nondeterministic and/or partially observable
– It is not possible to know the effect of the agent action
– percepts provide new information about current state
4. Exploration problem
– The environment is partially observable
– It is also called unknown state space

12/04/2021 AI/CSE 3206 98


Problem…contd
• Problem type as a summary

Environment Type Problem Type


Deterministic, fully-observable Single-state problem

Non-observable, known state space Sensorless/conformant problem

Nondeterministic and/or partially- Contingency problem


observable
Partially observable, unknown state space Exploration problem

12/04/2021 AI/CSE 3206 99


Problem…contd

Example: vacuum world

Single-state
– Starting state is known,
say it is #5.
– What is the Solution?

12/04/2021 AI/CSE 3206 100


Problem…contd

Example: vacuum world


• Single-state, start in #5.
Solution? [Right, Suck]

12/04/2021 AI/CSE 3206 101


Problem…contd

Example: vacuum world


Sensorless,
– It doesn’t know what the
current state is
– So the current start is either of
the following: {1,2,3,4,5,6,7,8}
– What is the Solution?

12/04/2021 AI/CSE 3206 102


Problem…contd

Example: vacuum world

Sensorless
• Right goes to {2,4,6,8}
• Solution: [Right, Suck, Left, Suck]

12/04/2021 AI/CSE 3206 103


Problem…contd

Example: vacuum world


• Contingency
– Nondeterministic:
• Suck may dirty a clean carpet
– Partially observable:
• Hence we have partial information
– Let’s assume the current percept is: [L,
Clean] i.e. start in #5 or #7
– What is the Solution?

12/04/2021 AI/CSE 3206 104


Problem…contd

Example: vacuum world

• Contingency Solution:
[Right, if dirt then Suck]

12/04/2021 AI/CSE 3206 105


Problem…contd

• Real-world Problems to be solved by searching algorithms


– We have seen two such problems:
• The road map problem and the vacuum cleaner world
problem
• Route finding
• Touring problems
• VLSI layout
• Robot Navigation
• Automatic assembly sequencing
• Drug design
• Internet searching

12/04/2021 AI/CSE 3206 106


Problem…contd

Example: vacuum world

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

12/04/2021 AI/CSE 3206 107


Problem…contd

Example: vacuum world

• States?? two locations, each with or without dirt: 2 × 2² = 8 states.


• Initial state?? Any state can be initial
• Actions?? {Left, Right, Suck}
• Goal test?? Check whether squares are clean.
• Path cost?? Number of actions to reach goal.

12/04/2021 AI/CSE 3206 108


Problem…contd

Example: 8-puzzle

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

12/04/2021 AI/CSE 3206 109


Problem…contd

Example: 8-puzzle

• States?? Integer location of each tile


• Initial state?? Any state can be initial
• Actions?? {Left, Right, Up, Down}
• Goal test?? Check whether goal configuration is reached
• Path cost?? Number of actions to reach goal

12/04/2021 AI/CSE 3206 110


Problem…contd

Example: 8-puzzle

Initial state:      Goal state:
8 2 _               1 2 3
3 4 7               4 5 6
5 1 6               7 8 _

12/04/2021 AI/CSE 3206 111


Problem…contd
Example: 8-puzzle

[Figure: expanding the initial state — sliding a tile into the blank
yields three successor boards]

12/04/2021 AI/CSE 3206 112


Problem…contd

Example: 8-puzzle

Size of the state space (enumerated at 10 million states/sec):
• 8-puzzle: 9!/2 = 181,440 states — ≈ 0.18 sec
• 15-puzzle: ≈ 0.65 × 10^12 states — ≈ 6 days
• 24-puzzle: ≈ 0.5 × 10^25 states — ≈ 12 billion years
12/04/2021 AI/CSE 3206 113


Problem…contd

Example: 8-queens
Place 8 queens in a chessboard so that no two queens
are in the same row, column, or diagonal.

A solution Not a solution

12/04/2021 AI/CSE 3206 114


Problem…contd

Example: 8-queens problem

Incremental formulation vs. complete-state formulation


• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

12/04/2021 AI/CSE 3206 115


Problem…contd

Example: 8-queens

Formulation #1:
• States: any arrangement of 0 to 8 queens on
the board
• Initial state: 0 queens on the board
• Actions: add a queen in any square
• Goal test: 8 queens on the board, none
attacked
• Path cost: none

 648 states with 8 queens

12/04/2021 AI/CSE 3206 116


Problem…contd

Example: 8-queens

Formulation #2:
• States: any arrangement of k = 0 to 8
queens in the k leftmost columns with
none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any
square in the leftmost empty column such
that it is not attacked by any other
queen
• Goal test: 8 queens on the board

 2,067 states
12/04/2021 AI/CSE 3206 117
Problem…contd

Example: robot assembly

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??

12/04/2021 AI/CSE 3206 118


Problem…contd

Example: robot assembly

• States?? Real-valued coordinates of robot joint angles; parts of


the object to be assembled.
• Initial state?? Any arm position and object configuration.
• Actions?? Continuous motion of robot joints
• Goal test?? Complete assembly (without robot)
• Path cost?? Time to execute

12/04/2021 AI/CSE 3206 119


Problem…contd
Searching For Solution (Tree search algorithms)
• Given state space, and network of states via actions.
• The network structure is usually a graph
• Tree is a network in which there is exactly one path defined from
the root to any node
• Given state S and valid actions being at S
– the set of next state generated by executing each action is
called successor of S
• Searching for solution is a simulated exploration of state space
by generating successors of already-explored states

12/04/2021 AI/CSE 3206 120


Problem…contd
Searching For Solution (Tree search algorithms)

• A state is a (representation of) a physical configuration


• A node is a data structure constituting part of a search tree
– It includes:
• state,
• parent node,
• action,
• depth and
• one or more costs [like path cost g(x), heuristic cost h(x),
evaluation function cost f(x)]

12/04/2021 AI/CSE 3206 121
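The node data structure listed above can be sketched as a small Python class. The field names follow the slide; the example states and action names are invented. Parent links are what let us recover the solution path once a goal node is found.

```python
# Search-tree node sketch with the fields from the slide: state, parent,
# action, depth and path cost. Example states/actions are made up.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost
        self.depth = 0 if parent is None else parent.depth + 1

    def solution(self):
        # Walk parent pointers back to the root, collecting actions.
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

root = Node("S")
child = Node("A", parent=root, action="go-A", path_cost=3)
print(child.depth, child.solution())  # 1 ['go-A']
```

A heuristic cost h(x) or evaluation cost f(x) would be added as further fields for the informed strategies later in the chapter.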


Problem…contd

Searching For Solution (Tree search algorithms)

• The Successor-Fn generates all the successor states and the actions that
move the current state into each successor state
• The Expand function creates new nodes, filling in the various fields
of the node using the information given by the Successor-Fn and the
input parameters
12/04/2021 AI/CSE 3206 122
Problem…contd

Searching For Solution (Tree search algorithms)


• A search process can be viewed as building a search tree over the
state space
• Search tree is a tree structure defined by initial state and a
successor function.
• Search(root) Node is the root of the search tree representing initial
state and without a parent.
• A child node is a node adjacent to the parent node obtained by
applying an operator or rule.

12/04/2021 AI/CSE 3206 123


Problem…contd
Tree search example
[Figure: partial search tree rooted at Awasa — the first expansion yields
Adama, Addis Ababa and Gambela; expanding these yields their neighbouring
cities (Dire Dawa, Nekemt, Debre Markos, Dessie, Jima, Bahr Dar, Lalibela,
…), and so on until Gondar is reached]

12/04/2021 AI/CSE 3206 124


Problem…contd
Implementation: general tree search

12/04/2021 AI/CSE 3206 125


Problem…contd
Graph vs. Tree

12/04/2021 AI/CSE 3206 126


Problem…contd
Example (1) : Vacuum Cleaner world
State-space tree:
[Figure: starting from the belief state of all eight vacuum-world states,
each node branches on the actions L, R and S — e.g. R leads to {2,4,6,8},
S then to {4,8}, and so on down to the goal state 8]
12/04/2021 AI/CSE 3206 127
Problem…contd
Search Strategies

Searching Strategies in AI
• Un-informed (Blind) Search: Depth-first (DFS), Breadth-first (BFS),
  Cost-first (CFS), Depth-Limited (DLS), Iterative Deepening
• Informed (Heuristic) Search: Best-First Search, A* Search,
  Hill Climbing, Constraint Satisfaction

A very large number of AI problems are formulated as search problems.

12/04/2021 AI/CSE 3206 128


Problem…contd
Search strategies

 A search strategy is defined by picking the order of node expansion


 Strategies are evaluated along the following dimensions:

– completeness: does it always find a solution whenever one exists?


– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞) for DFS
 Generally, searching strategies can be classified in to two as uninformed and informed search
strategies

12/04/2021 AI/CSE 3206 129


Problem…contd

 Blind (or un-informed) strategies


 They do not exploit state descriptions to order FRINGE.
 They only exploit the positions of the nodes in the
search tree
 Heuristic (or informed) strategies
 They exploit state descriptions to order FRINGE
 That is, the most “promising” nodes are placed at the
beginning of FRINGE

12/04/2021 AI/CSE 3206 130


Uninformed search (blind search) strategies
 Uninformed search strategies (Blind Search)
– use only the information available in the problem definition
– They have no information about the number of steps or the path cost from the current
state to the goal
– They can distinguish the goal state from other states
– They are still important because there are problems with no additional information.
 Six kinds of such search strategies will be discussed and each depends on the order of
expansion of successor nodes.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
12/04/2021 AI/CSE 3206 131
Uninformed search (blind search) strategies

[Figure: a search tree with branching factor b, maximum depth m, and a
goal node G]

12/04/2021 AI/CSE 3206 132


Generating action sequences- search trees

• Leaf Node: is a node without successors ( or children).


• Either they have not been expanded yet, or they were expanded but produced no successors.

• Depth (d): of a node is the number of actions required to reach it


from the initial state.

• Frontier or Fringe Nodes: are the collection of nodes that are


waiting to be expanded.

• Path cost: of a node is the total cost leading to this node.


• Branch Factor(b): Max. number of successors for any node.

12/04/2021 AI/CSE 3206 133


Breadth-first search

– Uses no prior information, nor knowledge


– It tracks all nodes because it does not know whether this node
leads to a goal or not
– Keeps on trying until it gets solution
– All nodes are expanded from the root node
– That is it is a simple strategy in which
• the root node is expanded first,
• then all the successors of the root node are expanded next,
• then their successors, and so on.

12/04/2021 AI/CSE 3206 134


Breadth…contd

 In general, all the nodes are expanded at a given depth in the


search tree before any nodes at the next level are expanded.

 That is, BFS expands all nodes at level d before expanding


nodes at level d+1

 It checks all paths of a given length before moving to any


longer path

 Expands the shallowest node first

12/04/2021 AI/CSE 3206 135


Breadth…contd

• The figure shows the progress of the search on a simple binary
tree: BFS trees after 0, 1, 2, 3, and 4 node expansions
12/04/2021 AI/CSE 3206 136
Breadth…contd

[Figure: breadth-first traversal of a search tree — move downwards, level
by level, until the goal G is reached]
12/04/2021 AI/CSE 3206 137
Breadth-First Strategy

New nodes are inserted at the end of FRINGE. For a tree with root 1,
children 2 and 3, and leaves 4, 5, 6 and 7, the fringe evolves as:

FRINGE = (1) → (2, 3) → (3, 4, 5) → (4, 5, 6, 7)

12/04/2021 AI/CSE 3206 138–141


Breadth…contd

Algorithm for Breadth-first search(FIFO)


• Blind search in which the list of nodes is a queue
• To solve a problem using breadth-first search:
1. Set L to be a list containing the initial node of the problem.
2. If L is empty, return failure; otherwise pick the first node n
from L.
3. If n is a goal state, quit and return the path from the initial node
to n.
4. Otherwise remove n from L and add all of n's children to the end of
L. Label each child with its path from the initial node.
5. Return to step 2.

12/04/2021 AI/CSE 3206 142


Breadth....contd
Algorithm for Breadth-first search(FIFO)

• BFS can be implemented using a queuing function that puts


the newly generated states at the end of the queue, after all
previously generated states

1. QUEUE <-- path only containing the root;

2. WHILE QUEUE is not empty


AND goal is not reached

DO remove the first path from the QUEUE;


create new paths (to all children);
reject the new paths with loops;
add the new paths to back of QUEUE;

3. IF goal reached
THEN success;
ELSE failure;

12/04/2021 AI/CSE 3206 143
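The QUEUE algorithm above maps almost line for line onto a Python sketch: a FIFO queue of paths, loop rejection, and new paths appended to the back. The example graph is invented for illustration; only the algorithm's structure follows the slide.

```python
from collections import deque

# FIFO breadth-first search following the QUEUE algorithm above.
# The example graph is made up for illustration.

GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"], "G": []}

def bfs(start, goal):
    queue = deque([[start]])                  # queue of paths, root path only
    while queue:
        path = queue.popleft()                # remove the first path
        if path[-1] == goal:
            return path                       # goal reached
        for child in GRAPH[path[-1]]:         # create new paths to children
            if child not in path:             # reject paths with loops
                queue.append(path + [child])  # add to BACK of queue
    return None                               # failure

print(bfs("S", "G"))  # ['S', 'B', 'G'] -- the shallowest goal path
```

Because whole levels are finished before the next begins, the first goal path popped is always a shallowest one.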


Breadth…contd

Properties of breadth-first search


• Complete? Yes (if b is finite, which is true in most cases)
• Time? 1 + b + b^2 + b^3 + … + b^d = O(b^(d+1))
– at depth i there are b^i nodes expanded, for i ≤ d
• Space? O(b^d) (keeps every node in memory)
– at most this many nodes are held while reaching
the goal node
– This is a major problem for real-world applications
• Optimal? Yes (if cost = constant (k) per step)
• Space is the bigger problem (more than time)

12/04/2021 AI/CSE 3206 144


Breadth....contd
Using the same hypothetical state space find the time and memory
required for a BFS with branching factor b=10 and various values
of the solution depth d

Depth   Nodes    Time            Memory

0       1        1 millisecond   100 bytes
2       111      0.1 second      11 kilobytes
4       11,111   11 seconds      1 megabyte
6       10^6     18 minutes      111 megabytes
8       10^8     31 hours        11 gigabytes
10      10^10    128 days        1 terabyte
12      10^12    35 years        111 terabytes
14      10^14    3500 years      11,111 terabytes

12/04/2021 AI/CSE 3206 145
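The Nodes column in the table follows the geometric sum 1 + b + b² + … + b^d with b = 10, which a one-line Python check confirms:

```python
# Number of nodes BFS generates up to depth d with branching factor b:
# N(b, d) = 1 + b + b^2 + ... + b^d. The table above uses b = 10.

def bfs_nodes(b, d):
    return sum(b**i for i in range(d + 1))

print(bfs_nodes(10, 2), bfs_nodes(10, 4))  # 111 11111
```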


Depth-first Search (DFS)

• Pick one of the children at every node visited, and


work forward from that child
• Always expands the deepest node reached so far
(and therefore searches one path to a leaf before
allowing up any other path)
• Thus, it finds the left most solution

12/04/2021 AI/CSE 3206 146


Depth-first …..contd)

Depth-first search- Chronological backtracking

[Figure: depth-first traversal of a tree rooted at S]
• Select a child
• convention: left-to-right, or alphabetical order
• Repeatedly go to the next child, as long as possible.
• Return to left-over alternatives (higher up) only when needed.

12/04/2021 AI/CSE 3206 147


Depth-first …..contd)
Depth-first search(LIFO) algorithm

1. QUEUE <-- path only containing the root;

2. WHILE QUEUE is not empty


AND goal is not reached

DO remove the first path from the QUEUE;


create new paths (to all children);
reject the new paths with loops;
add the new paths to front of QUEUE;
3. IF goal reached
THEN success;
ELSE failure;
12/04/2021 AI/CSE 3206 148
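The LIFO algorithm above differs from the BFS version only in where new paths go: the front of the queue, i.e. a stack. A Python sketch (example graph invented for illustration):

```python
# LIFO depth-first search: identical in shape to the BFS sketch except
# that new paths go to the FRONT of the fringe. Example graph is made up.

GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"], "G": []}

def dfs(start, goal):
    stack = [[start]]                        # fringe of paths, used as LIFO
    while stack:
        path = stack.pop()                   # take the most recent path
        if path[-1] == goal:
            return path
        # push children right-to-left so the left child is explored first
        for child in reversed(GRAPH[path[-1]]):
            if child not in path:            # reject paths with loops
                stack.append(path + [child])
    return None

print(dfs("S", "G"))  # ['S', 'A', 'C', 'G'] -- the leftmost solution
```

Note it returns the leftmost solution (through A), not the shallowest one (through B) — illustrating why DFS is not optimal.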
Depth-first …..contd)

• Complete: Yes, if the state space is finite;
No, if the state space contains infinite paths or loops
• Time: O(b^m)
• Space: O(bm) (i.e. linear space)
• Optimal: No
The worst-case time complexity is O(b^m).
However, for very deep (or infinite, due to cycles) trees this search may
spend a lot of time (forever) searching down the wrong branch.
Backtracking search uses even less memory: one successor instead of all b.

12/04/2021 AI/CSE 3206 149


Depth-first …..contd)

• Time Requirements of Depth-First Search


– It is also more likely to return a solution path that is longer
than the optimal
– Because it may not find a solution if one exists, this search
strategy is not complete.
– Remarks: Avoid DFS for large or infinite maximum depths.

12/04/2021 AI/CSE 3206 150


Depth-First Strategy

New nodes are inserted at the front of FRINGE. For a tree with root 1,
children 2 and 3, and leaves 4 and 5, the fringe evolves as:

FRINGE = (1) → (2, 3) → (4, 5, 3) → (5, 3) → …

The deepest node is always expanded first; when a branch is exhausted, the
search backtracks to the most recent alternative.

12/04/2021 AI/CSE 3206 151–161


Depth-Limited Strategy(Depth first search with cut off)

• Depth-first with depth cutoff k (maximal depth below which nodes
are not expanded)

• Three possible outcomes:
– Solution
– Failure (no solution)
– Cutoff (no solution within cutoff)

• Solves the infinite-path problem.
• If k < d then incompleteness results.
• If k > d then not optimal.
• Time complexity: O(b^k)
• Space complexity: O(bk)

12/04/2021 AI/CSE 3206 162


Depth-Limited Strategy(Depth first search with cut off)

DFS Evaluation:
• DFS is the method of choice when there is a known (and
reasonable) depth bound, and finding any solution is sufficient.
1. Depth-first search:
IF the search space contains very deep branches without a solution,
THEN depth-first may waste much time in them.
2. Breadth-first search:
Is VERY demanding on memory!
Solution? Iterative deepening:
the order of expansion of states is similar to BFS, except that
some states are expanded multiple times.
12/04/2021 AI/CSE 3206 163
Iterative Deepening Search l = 0

• Limit = 0

12/04/2021 AI/CSE 3206 164


Iterative Deepening Search l = 1

• Limit = 1

12/04/2021 AI/CSE 3206 165


Iterative Deepening Search l = 2

• Limit = 2

12/04/2021 AI/CSE 3206 166


Iterative Deepening Search l = 3

• Limit = 3

• As can be seen from the three iterations, the order of expansion
of states is similar to BFS, except that some states are expanded
multiple times
12/04/2021 AI/CSE 3206 167
Iterative Deepening Search l = 1 to l=4

Stages in Iterative-Deepening Search

12/04/2021 AI/CSE 3206 168


Iterative Deepening Search (IDS)

• It requires little memory (a constant times depth


of the current node)
• Is complete
• Finds a minimum-depth solution, as does BFS
• It is a strategy that avoids (sidesteps) the issue of
choosing the best depth limit by trying all possible
depth limits
• Finds the best depth limit by gradually increase the
limit -> 0, 1, 2, …until goal is found at depth limit d

12/04/2021 AI/CSE 3206 169


Iterative ….contd

Iterative Deepening Search Algorithm

1. DEPTH <-- 1

2. WHILE goal is not reached

DO perform Depth-limited search;


increase DEPTH by 1;

12/04/2021 AI/CSE 3206 170
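The two-line algorithm above — run a depth-limited search, then increase DEPTH by 1 — can be sketched in Python. The example tree is invented for illustration; the structure (cutoff at limit 0, limits 0, 1, 2, …) follows the slides.

```python
# Iterative deepening: repeated depth-limited DFS with limits 0, 1, 2, ...
# The example tree is made up for illustration.

TREE = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}

def depth_limited(node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None                          # cutoff reached
    for child in TREE[node]:
        result = depth_limited(child, goal, limit - 1, path + [child])
        if result is not None:
            return result
    return None

def ids(start, goal, max_depth=10):
    for depth in range(max_depth + 1):       # DEPTH = 0, 1, 2, ...
        result = depth_limited(start, goal, depth, [start])
        if result is not None:
            return result                    # first hit is at minimal depth
    return None

print(ids("S", "G"))  # ['S', 'B', 'G']
```

Because limits are tried in increasing order, the first solution found is at minimal depth, which is where the completeness and optimality claims on the next slide come from.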


Completeness and optimality of Iterative Deepening Search

• Completeness
– It is complete
– It finds a solution if exists
• Optimality
– It is optimal
– Finds the shortest path (like breadth first)
• Guarantee shortest path
• Guarantee for goal node of minimal depth

12/04/2021 AI/CSE 3206 171


Uniform-cost search

• Expand least-cost unexpanded node


• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first if step costs all equal
 Each arc has some cost c ≥ ε > 0
 The cost of the path to each node N is g(N) = Σ costs of arcs
 The goal is to generate a solution path of minimal cost
 The nodes N in the queue FRINGE are sorted in increasing g(N)
• Consider the problem of moving from node S to G, where
S→A costs 1, S→B costs 5, S→C costs 15, A→G costs 10,
B→G costs 5 and C→G costs 5:
– Expanding S gives the fringe (A,1), (B,5), (C,15)
– Expanding A (the cheapest node) adds (G,11)
– Expanding B adds (G,10), which is now the cheapest goal path
– The solution returned is S → B → G with cost 10
12/04/2021 AI/CSE 3206 172
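The S/A/B/C/G example can be run with a small priority-queue sketch, where the fringe is kept sorted by path cost g(N). The graph follows the slide's example; the implementation details (a `heapq` of (cost, path) pairs) are this sketch's choice.

```python
import heapq

# Uniform-cost search over the S-A-B-C-G example, fringe ordered by the
# path cost g(N). Edge costs follow the slide's graph.

GRAPH = {"S": {"A": 1, "B": 5, "C": 15},
         "A": {"G": 10}, "B": {"G": 5}, "C": {"G": 5}, "G": {}}

def uniform_cost(start, goal):
    fringe = [(0, [start])]                       # priority queue on g(N)
    while fringe:
        cost, path = heapq.heappop(fringe)        # least-cost node first
        if path[-1] == goal:
            return cost, path
        for child, step in GRAPH[path[-1]].items():
            if child not in path:                 # reject loops
                heapq.heappush(fringe, (cost + step, path + [child]))
    return None

print(uniform_cost("S", "G"))  # (10, ['S', 'B', 'G'])
```

The cheaper goal path through B (cost 10) is popped before the one through A (cost 11), so the optimal path is returned, not the first goal generated.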


Bidirectional Search
[Figure: bidirectional search — a forward search tree rooted at S and a
backwards search tree rooted at the goal grow toward each other until
they meet]
12/04/2021 AI/CSE 3206 173
Bidirectional…contd

 Bi-directional Search

Initial State Final State

* Completeness: yes
* Optimality: yes
* Time complexity: O(b^(d/2))
* Space complexity: O(b^(d/2))

O(b^d) vs. O(b^(d/2))? With b=10 and d=6 this results in 1,111,111 vs. 2,222 nodes.
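The node counts in that comparison can be checked with a short geometric-series sum, assuming a uniform tree of branching factor b:

```python
def nodes(b, depth):
    """Total nodes in a uniform tree of branching factor b, levels 0..depth."""
    return sum(b ** k for k in range(depth + 1))

b, d = 10, 6
print(nodes(b, d))           # one full-depth search:  1111111
print(2 * nodes(b, d // 2))  # two half-depth searches: 2222
```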



Bidirectional…contd

2 fringe queues: FRINGE1 and FRINGE2



Comparison of Strategies
 Breadth-first is complete and optimal
 But it has high space complexity
 Depth-first is space efficient
 But it is neither complete, nor optimal
 Iterative deepening is complete and optimal
 with the same space complexity as depth-first
 almost the same time complexity as breadth-first



Informed Search

• Section Objectives
– Define informed search algorithms (strategies)
– Differentiate between Blind and Informed search
– Identify types of Informed Search
– Best-first search
– Memory Bound Best First search
– Iterative improvement algorithm (Local search
algorithms)
• Understand the use of an evaluation function f(n)
– Understanding Admissible heuristics



Informed ….contd

 Informed search is a strategy that uses information about the


cost that may be incurred to achieve the goal state from the
current state.
 The information may not be accurate.
 But it will help the agent to make better decision
 This information is called heuristic information



Informed ….contd

• There are several algorithms that belong to this group. Some of
these are:
– Best-first search
1. Greedy best-first search
2. A* search
– Memory Bound Best First search
1. Iterative deepening A* (IDA*) search
– Iterative improvement algorithm (Local search
algorithms)
1. Hill-climbing search
2. Simulated annealing search



Informed ….contd

Best-first search
Idea: use an evaluation function f(n) for each node
Estimate "desirability" using the heuristic and/or path cost
Expand the most desirable unexpanded node (the node n
with the smallest f(n))
The information gives a clue about which node to expand
first
This is done during queuing
The node that looks best according to the evaluation function may
not actually be best
Implementation:
 Order the nodes in the fringe in decreasing order of desirability
(increasing order of the cost evaluation function)



Informed ….contd

Greedy Best-First Search


1. Put the initial node on a list START
2. If (START is empty) or (START = GOAL), terminate the search
3. Remove the first node from START. Call this node n.
4. If (n = GOAL), terminate the search with success.
5. Else, if node n has successors, generate all of them. Estimate how
far each is from the goal node. Sort all the children generated so far by
the remaining distance from the goal. Name this list START1.
6. Replace START with START1
7. Go to Step 2.
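The steps above can be sketched in Python using a priority queue keyed on h(n) alone. A minimal sketch; the tiny graph and heuristic values mirror the S, A, B, C, G example used in these slides.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest heuristic h(n); path cost is ignored."""
    frontier = [(h[start], start, [start])]   # ordered by h only (f = h)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

graph = {'S': ['A', 'B', 'C'], 'A': [], 'B': ['G'], 'C': ['G']}
h = {'S': 8, 'A': 8, 'B': 4, 'C': 3, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'C', 'G']
```

It returns S, C, G because C has the smallest h, even though a cheaper route may exist: fast but not optimal.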
Informed ….contd

Greedy Best-First Search, f(n) = h(n)

[Figure: S (h=8) with successors A (h=8, step cost 1), B (h=4, step cost 5), and C (h=3, step cost 8); C reaches the goal G (h=0) with step cost 5; D and E are dead ends with h=∞.]

Expanded Node      OPEN list
                   (S:8)
S (not goal)       (C:3, B:4, A:8)
C (not goal)       (G:0, B:4, A:8)
G (goal)           (B:4, A:8), no expansion

Nodes tested: 3, expanded: 2
Path: S, C, G      Cost: 13
* Fast, but not optimal
Informed ….contd

Greedy Best-First Search


[Figure: road map of Ethiopia with step costs in km between the cities Aksum, Mekele, Gondar, Lalibela, Bahir Dar, Dessie, Debre Markos, Addis Ababa, Dire Dawa, Adama, Jima, Gambela, Awasa, and Nekemt.]

Straight-line distance to Gondar:
Gondar 0, Aksum 100, Mekele 150, Lalibela 110, Dessie 210,
Bahir Dar 90, Debre Markos 170, Addis Ababa 321, Jima 300,
Dire Dawa 350, Adama 340, Gambela 410, Awasa 500, Nekemt 420
Informed ….contd

Greedy Best First Search

• Evaluation function f(n) = h(n) (heuristic) = estimated cost from n
to the goal
• That means the agent prefers the action that looks best after every
action
• e.g., hSLD(n) = straight-line distance from n to the goal (Bucharest
in the classic Romania example)
• Greedy best-first search expands the node that appears to be closest
to the goal (it tries to minimize the estimated cost to reach the goal)



Informed ….contd
Example Two: Greedy best-first search
• Given the following tree structure, show the content of the open list
and closed list generated by the greedy best-first search algorithm

[Figure: a tree rooted at R with children A, B, C; below them the nodes
D, E, F, G1, H, G2; and below those I, G3, J. G1, G2, G3 are goal nodes.]

Heuristic (estimated distance to a goal G):
R → G: 100    A → G: 60    B → G: 80    C → G: 70
D → G: 65     E → G: 40    F → G: 45    H → G: 10
I → G: 20     J → G: 8     G1, G2, G3 → G: 0


Informed ….contd

Properties of greedy best-first search

• Complete? Yes, if repetition is controlled; otherwise it
can get stuck in loops
• Time? O(b^m), but a good heuristic can give dramatic
improvement
• Space? O(b^m), keeps all nodes in memory
• Optimal? No



Greedy-Best-First-search example



Informed ….contd

A* search
• Idea: avoid expanding paths that are already expensive
• Evaluation function f(n) = g(n) + h(n), where
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to the goal
• f(n) = estimated total cost of the path through n to the goal
• It tries to minimize the total path cost to reach the goal at
every node n.
Exercise
• Using the map of Ethiopia on the previous slide, indicate the flow of
the search from Awasa to Gondar using A*



Informed ….contd
A* Search
Exercise
• Given the following tree structure, show the content of the open list
and closed list generated by the A* search algorithm

[Figure: the same tree as in the greedy example (root R with children
A, B, C, then D, E, F, G1, H, G2, then I, G3, J), now with step costs
on the arcs: 35, 70, 40 from R to its children, and 25, 10, 62, 45,
18, 21 and 15, 20, 5 on the lower levels, as in the original figure.]

Heuristic (estimated distance to a goal G):
R → G: 100    A → G: 60    B → G: 80    C → G: 70
D → G: 65     E → G: 40    F → G: 45    H → G: 10
I → G: 20     J → G: 8     G1, G2, G3 → G: 0


Informed ….contd

A* Search

Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where
h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal,
i.e., it is optimistic
Example: hSLD(n) (never overestimates the actual road distance)

Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal



Informed ….contd
A* Search
Example

[Figure: S (h=8) with successors A (h=8, cost 1), B (h=4, cost 5),
C (h=3, cost 8); A reaches D (cost 3), E (cost 7), and G (cost 9);
B reaches G (cost 4); C reaches G (cost 5); h(D) = h(E) = ∞, h(G) = 0.]

n    g(n)       h(n)   f(n)       h*(n)
S    0          8      8          9
A    1          8      9          9
B    5          4      9          4
C    8          3      11         5
D    4          ∞      ∞          ∞
E    8          ∞      ∞          ∞
G    10/9/13    0      10/9/13    0

Since h(n) ≤ h*(n) for every n, h is admissible.
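The table can be reproduced with a minimal A* sketch. The graph and heuristic values are read off the example; D and E get h = ∞ since they are dead ends.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand by f(n) = g(n) + h(n); graph maps node -> [(child, step_cost)]."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                              # already reached more cheaply
        best_g[node] = g
        for child, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None, float('inf')

graph = {'S': [('A', 1), ('B', 5), ('C', 8)],
         'A': [('D', 3), ('E', 7), ('G', 9)],
         'B': [('G', 4)], 'C': [('G', 5)]}
h = {'S': 8, 'A': 8, 'B': 4, 'C': 3,
     'D': float('inf'), 'E': float('inf'), 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'B', 'G'], 9)
```

The optimal path S, B, G with cost 9 agrees with the h* column above.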


A* algorithm example



Informed ….contd
Find Admissible heuristics for the 8-puzzle?

• h1(n) = number of misplaced tiles


• h2(n) = total Manhattan distance (i.e., the number of squares each
tile is from its desired location). This is also called city block
distance
• h1(S) = ?
• h2(S) = ?
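Both heuristics can be sketched directly. A minimal illustration; the example state below is made up (tiles 5 and 8 out of place), with 0 denoting the blank.

```python
def misplaced_tiles(state, goal):
    """h1: count tiles (excluding the blank, 0) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """h2: sum over tiles of horizontal + vertical distance to the goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state, goal))     # 2
print(manhattan_distance(state, goal))  # 2
```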



Informed ….contd
Iterative Improvement Algorithm (Local search algorithms)
• In many optimization problems, the path to the goal is irrelevant;
the goal state itself is the solution
• State space = set of "complete" configurations
• Find configuration satisfying constraints, e.g., n-queens
• In such cases, we can use local search algorithms
• keep a single "current" state, try to improve it

Example: n-queens
•Put n queens on an n × n board with no two queens on the same row, column, or
diagonal



Informed ….contd

Iterative Improvement Algorithm (Local search algorithms)

• There are two types of Iterative Improvement algorithms


– Hill climbing if the evaluation function is quality
• also called Gradient Descent if the evaluation function is
a cost rather than a quality
– Simulated Annealing



Informed ….contd

Hill-climbing search
• Tries to make changes that improve the current state
• The algorithm is given below
• It continually moves in the direction of increasing value
• The node data structure maintains only the state and its
evaluation cost



Informed ….contd

Hill Climbing - Algorithm


1. Pick a random point in the search space
2. Consider all the neighbors of the current state
3. Choose the neighbor with the best quality and
move to that state
4. Repeat steps 2 and 3 until all the neighboring states
are of lower quality
5. Return the current state as the solution state
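The five steps above can be sketched generically. A minimal illustration; the one-dimensional toy landscape (maximize -(x-7)^2 by stepping ±1) is made up.

```python
def hill_climb(neighbors, value, start, max_steps=1000):
    """Greedy ascent: move to the best neighbor until none improves the state."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current        # local (possibly global) maximum reached
        current = best
    return current

value = lambda x: -(x - 7) ** 2       # single peak at x = 7
neighbors = lambda x: [x - 1, x + 1]  # step left or right
print(hill_climb(neighbors, value, start=0))  # 7
```

On this smooth landscape it finds the peak; the next slide lists the landscapes (local maxima, plateaux, ridges) where this greedy loop fails.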



Hill Climbing - Algorithm
Hill-climbing (Gradient Descent) search
• Tries to make changes that improve the current state cost

Problems:
1. Depending on the initial state, it can get
stuck in local maxima
2. Plateaux (after some progress the
algorithm makes a random
walk)
3. Ridges (a place where two sloping
sides meet). In this case the search
may oscillate from side to side



Informed Search
Hill-climbing search: 8-queens problem

• h = number of pairs of
queens that are attacking each
other, either directly or
indirectly
• h = 17 for the above state

• A local minimum with h = 1


• Improvement techniques
– Random restart hill climbing for N
iteration by saving the best state so far
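The heuristic h used here, the number of attacking queen pairs, can be computed directly. A minimal sketch; the board is encoded as one queen per column, with `queens[c]` giving the row of the queen in column c.

```python
def attacking_pairs(queens):
    """h: pairs of queens attacking each other (same row or shared diagonal)."""
    n = len(queens)
    h = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = queens[i] == queens[j]
            same_diag = abs(queens[i] - queens[j]) == j - i
            if same_row or same_diag:
                h += 1
    return h

print(attacking_pairs([0, 1, 2, 3]))  # 6: all four queens share one diagonal
print(attacking_pairs([1, 3, 0, 2]))  # 0: a solution to 4-queens
```

Hill climbing on n-queens repeatedly moves one queen within its column to whichever row most reduces this h.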



Exercise
1. Consider the search space with start state S and goal state G.
The heuristic value h is shown at each node: h(S) = 1, h(C) = 2,
h(D) = h(G) = 0, and A and B carry h = 3 and h = 5 as shown in the
figure. The cost associated with each arc is not known. However, the
trace of the OPEN list produced by the execution of the A* algorithm
is available (given below) along with the f values.
Determine the cost of each arc.

1. {(S, f=1)}
2. {(B, f=5), (A, f=6)}
3. {(A, f=6), (C, f=7)}
4. {(C, f=6)}
5. {(D, f=5), (G, f=7)}
6. {(G, f=6)}

[Figure: the graph connecting S, A, B, C, D, and G.]


1. List all nodes expanded to reach the goal with greedy best-first
and A* search, and state which one is optimal.

Arc costs:
S–A: 1    S–B: 3    S–C: 5    S–G: 9
A–B: 1    A–C: 3    A–G: 3
B–C: 2    B–G: 4
C–G: 4

[Figure: the graph over S, A, B, C, G; the heuristic values
appear in the figure.]


•Questions



Chapter Four
(Knowledge and Reasoning )
Propositional Logic
and
Knowledge-based agents



• Objectives
 KB agent and KB representation
 General Idea about Logic
 Kinds of logic
 Propositional (Boolean) Logic
 PL connector priority
 Types of sentences in Logic (Equivalence, validity, satisfiability)
 Entailment
 Inference rules and theorem proving
 Logical equivalence
 Forms of logical expression
 Example of PL Knowledge representation and inferencing (The
Wumpus world)
Knowledge …..contd

• The study of human language has a vital role to play in Artificial


Intelligence.
• The complexity of human language, combined with a sense of optimism,
may well have been part of the reason that natural language processing
was such a popular research area in the early days of Artificial Intelligence.
• Some of the optimism surrounding Natural Language Processing
came from the writings of Noam Chomsky,
• In the 1950s Noam Chomsky proposed his theory of Syntactic
Structures, which was a formal theory of the structure of human
language.
• His theory also attempted to provide a structure for human
knowledge, based on the knowledge of language



Knowledge …..contd

• Most AI systems are made up of two basic parts


– Knowledge base: facts about objects in the chosen domain
– Inference mechanism(engine): a set of procedures that are
used to examine the knowledge base in an orderly manner to
answer questions, solve problems or make decisions within
the domain
• Analogous to a database organization (hierarchical, relational,
network) knowledge base can be organized in one or more
configurations (schemes)
• Knowledge in the knowledge base can be organized differently
from inference engine
Knowledge …..contd

• There are a variety of knowledge representation schemes that have
been developed over the years
They share two common features
• They can be programmed with existing computer programming
language and stored in memory
• They are designed so that the facts and other knowledge contained
within them can be used in reasoning
• We could just write down what we are told(natural language) but, as
the information grows, it becomes more and more difficult to keep
track of the relationships between the items.



Knowledge …..contd

• Natural language is an obvious way of representing and handling facts.


However:
• Natural language is often ambiguous
• Syntax and semantics are not fully understood
• There is little uniformity in the structure of sentences
Knowledge Representation methods
• Knowledge captured from experts and other sources must be organized
in such fashion that a computer inferencing program will be able to
access this knowledge whenever needed and draw conclusions
• There are several methods of knowledge representation in AI. Many of
these are pictorial representation



Knowledge …..contd

• There are two general types of knowledge representations:


– Those that support analysis
– Those that are used in actual coding
Knowledge analysis techniques
• Are usually used to support knowledge acquisition during scope
establishment and initial knowledge gathering
• Most of the techniques are pictorial
• They help in the primary analysis of knowledge so that it will be finally
coded with one or more techniques
• Typical analysis techniques are :
– Logics, Semantic networks, Scripts, Lists, Decision trees, Decision
tables
Knowledge …..contd

• Knowledge recorded in any of the analysis techniques can be easily


translated into rules

Knowledge acquisition → Analysis representation → Coding representation → Inference


Knowledge …..contd

Representation in Logic
• The oldest form of knowledge representation
• It is the scientific study of the process of reasoning and the system of
rules and procedures that aid in the reasoning process
• Logic is considered to be subdivision of philosophy
• The development and refinement of its processes are generally
credited to the ancient Greeks
• The general form of the logical process is:
– Information is given, or statements and observations are made;
these are the premises
• The premises are used by the logical process to create the output
which consists of conclusions called inferences
• With this process, facts that are known to be true can be used to derive
new facts that also must be true
Knowledge …..contd
• For a computer to perform reasoning using logic, some method must be
used to convert statements and the reasoning process into a form
suitable for manipulation by a computer
• The result is what is known as symbolic, or mathematical logic
• It is a system of rules and procedures that permit the drawing of
inferences from various premises using a variety of logical techniques
• The two basic forms of computational logic are
– Propositional logic
– Predicate logic (predicate calculus)

[Diagram: Input (premises or facts) → logical process → Output (inferences or conclusions)]
Knowledge …..contd

What is logic?
• Logic is concerned with reasoning and the validity of arguments.
• In general, in logic, we are not concerned with the truth of
statements, but rather with their validity.
• That is to say, although the following argument is clearly logical,
it is not something that we would consider to be true:
– All lemons are blue
– Mary is a lemon
– Therefore, Mary is blue
• This set of statements is considered to be valid because the
conclusion (Mary is blue) follows logically from the other two
statements, which we often call the premises.
Knowledge …..contd

What is logic?
• The reason that validity and truth can be separated in this
way is simple: a piece of reasoning is considered to be
valid if its conclusion is true in cases where its premises are
also true.
• Hence, a valid set of statements such as the ones above can
give a false conclusion, provided one or more of the
premises are also false.
• We can say: a piece of reasoning is valid if it leads to a true
conclusion in every situation where the premises are true.



Knowledge …..contd

• These connectives or operators are designated as AND, OR, NOT, IMPLIES


and EQUIVALENT
• The symbols are the same as those used in Boolean algebra

• In fact, because propositional logic involves only the truth or falsity of
propositions, Boolean algebra and all of the related techniques used in
analyzing, designing or simplifying binary logic circuits can be used in
propositional logic
• Connectives are used to join or modify propositions to make new propositions



Knowledge …..contd
• A truth table can be used to show all possible combinations of this
connective

A = It is raining today
NOT A = It is not raining today

A    NOT A
T    F
F    T

Logical connectives, or operators, and their symbols

Connective       Symbols
AND              ∧, Π, &
OR               ∨, U, +
NOT              ~, -, ¬
IMPLIES          →, ⊃
BI-IMPLICATION   ↔
EQUIVALENT       ≡
Knowledge …..contd

p    q    p OR q    p AND q    p IMPLIES q
F    F    F         F          T
F    T    T         F          T
T    F    T         F          F
T    T    T         T          T

Note: p → q is equivalent to NOT p OR q, and NOT (p → q) to p AND NOT q.
Predicate Calculus
• Although propositional logic is a knowledge representation alternative, it
is not very useful in artificial intelligence.
• Since propositional logic deals primarily with complete statements and
whether they are true or false, its ability to represent real-world
knowledge is limited
• It cannot make assertions about the individual elements that make up
statements
• Consequently, AI uses predicate logic instead
Knowledge Representation
• A predicate is a generalization of a propositional variable
• “Predicate,” or “first-order,” logic is a generalization of propositional
logic
• Predicates are functions of zero or more variables that return Boolean
values.
• Thus predicates can be true sometimes and false sometimes,
depending on the values of their arguments.
• For example, we shall find in predicate logic atomic operands such as
x(C, S, G).
• Here, x is the predicate name, and C, S, and G are arguments. We can
think of this expression as a representation in logic of the database
relation Course-Student-Grade
• It returns the value TRUE whenever the values of C, S, and G are
such that student S got grade G in course C, and it returns FALSE
otherwise
Knowledge Representation

• Using predicates as atomic operands, instead of propositional


variables, gives a more powerful language than expressions
involving only propositions
• In fact, predicate logic is expressive enough to form the basis of a
number of useful programming languages, such as Prolog (which
stands for “Programming in logic”) and the language SQL
• In propositional logic, suppose that we have three propositions: r
(“It is raining”), u (“Joe takes his umbrella”), and w (“Joe gets wet”)
• Suppose further that we have three hypotheses, or expressions that
we assume are true: r → u (“If it rains, then Joe takes his
umbrella”), u → -w (“If Joe takes an umbrella, then he doesn’t get
wet”), and -r → -w (“If it doesn’t rain, Joe doesn’t get wet”)
Knowledge Representation

• What is true for Joe is also true for Mary, and Sue, and Bill, and for any other
persons
• Thus, we might think of the proposition u as uJoe, while w is the proposition
wJoe. If we do, we have the hypotheses r → uJoe, uJoe → - wJoe, and -r →
-wJoe
• If we define the proposition uMary to mean that Mary takes her umbrella, and
wMary to mean that Mary gets wet, then we have the similar set of hypotheses:
 r → uMary, uMary → -wMary, and -r → -wMary
• We could go on like this, inventing propositions to talk about every individual X
we know of and stating the hypotheses that relate the proposition r to the new
propositions uX and wX, namely, r → uX, uX → -wX, and -r → -wX
• We have now arrived at the notion of a predicate.
• Instead of an infinite collection of propositions uX and wX, we can define
the symbol u to be a predicate that takes an argument X
Knowledge Representation
• The expression u(X) can be interpreted as saying “X takes his or her
umbrella.”
• Possibly, for some values of X, u(X) is true, and for other values of X, u(X)
is false. Similarly, w can be a predicate; informally w(X) says “X gets wet.”
• The propositional variable r can also be treated as a predicate with zero
arguments.
• That is, whether it is raining does not depend on the individual X the way u
and w do.
• We can now write our hypotheses in terms of the predicates as follows:
 r → u(X). (For any individual X, if it is raining, then X takes his or her umbrella.)
 u(X) → NOT w(X). (No matter who you are, if you take your umbrella, then you
won’t get wet.)
 NOT r → NOT w(X). (If it doesn’t rain, then nobody gets wet.)
Knowledge Base Agent

• A knowledge-based agent is an agent that performs actions using the knowledge
it has and reasons about its actions using its inference procedure.

• A knowledge base is a set of representations of facts and their relationships,
called rules, about the world.
• Each fact/rule is called a sentence which is represented using a language called
knowledge representation language.
• Declarative approach to building an agent (or other system):
– Tell it what it needs to know(TELL)
• Facts and rules (Knowledge base)
– Ask what it knows(ASK)
• Answers should follow from the KB
• In addition to TELLing the agent what it needs to know, we can provide a
knowledge-based agent with mechanisms that allow it to learn for itself.
Knowledge Bases Agent

• The agent must be able to:


– Represent states of the world, actions, etc.
– Incorporate new percepts (facts and rules)
– Update internal representations of the world
– Deduce hidden properties of the world
– Deduce appropriate actions
Example of KB written in PROLOG
• FACTS
1. female(azieb).
2. male(melaku).
3. female(selam).
4. parent(melaku,selam).
5. parent(azieb,selam).
• RULE
1. father(X,Y):-male(X),parent(X,Y).
2. mother(X,Y):-female(X),parent(X,Y).
3. wife(X,Y):-parent(X,Z),parent(Y,Z).



Knowledge Bases Agent

• Knowledge representation refers to the technique of how to express the
available facts and rules inside a computer so that the agent can use
them to perform well.
• Knowledge representation consists of:
– Syntax (grammar): possible physical configuration that constitute a
sentence (fact or rule) inside the agent architecture.
• For example, one possible syntax rule may be that every sentence
must end with a full stop.
– Semantics (concept): determine the facts in the world to which the
sentence refers
• Without semantics a sentence is just a sequence of characters or
binary sequences
• Semantic defines the meaning of the sentence
• KB for agent program can be represented using programming language
designed for this purpose like LISP and PROLOG
Logic as Knowledge Representation

 Logic includes…
1. Formal system of defining the world
• Syntax
• Semantics
2. A proof theory:
– Rules for determining all entailments (given the hidden
properties of the world)
– A set of rules for deducing the entailments of a set of
sentences.



Logic as …

Syntax
• Recursive definition of well-formed formulas
– An atom is a formula
– If S is a formula, ¬S is a formula (negation)
– If S1 and S2 are formulas, S1 ∧ S2 is a formula (conjunction)
– If S1 and S2 are formulas, S1 ∨ S2 is a formula (disjunction)
– All well-formed formulas are generated by applying the above
rules
• Shortcuts:
– S1 → S2 can be written as ¬S1 ∨ S2
– S1 ↔ S2 can be written as (S1 → S2) ∧ (S2 → S1)


Logic as…

Syntax
Examples of well-formed formulas:
a. p
b. ¬¬p
c. ¬(p ∨ q)
d. (¬(p ∨ q) ∧ p)
e. ¬((p ∨ q) ∧ p)
f. ¬(p ↔ (r ∨ s))
g. (p ↔ ¬(r ∨ s))
h. ((p ∧ q) ∨ (s ∧ r))
i. ((((p → q) → ¬r) ↔ s) ∨ (t ∧ u))

Examples of formulas that are not well-formed:
a. pqr
b. (p
c. p¬
d. ∨q
e. (¬p ↔ r ∨ s)
f. → ∨ ∧
g. pq →
h. (p) ∧ p
i. →∧pq ∨pq
Logic as …
Semantics



Logic as …
PL connector priority

• Priority of logical connectives from highest to lowest


– Parenthesis
– Negation
– Conjunction
– Disjunction
– Implication
– Bi-implication

General principle of KB agent function



Logic as …
Types of sentence

• Given a sentence α, this sentence according to the world


considered can be

– Valid (tautology)

– Invalid (contradiction)

– Satisfiable (neither valid nor invalid)

– Unsatisfiable (equivalent to Invalid)



Logic as …
Validity (tautology)

• A sentence is valid iff it is true under every interpretation in all possible worlds
• A sentence is valid iff it is true in every interpretation (every interpretation is
a model).
• A sentence s is a valid consequence of a set S of sentences if (S => s) is
valid.
– Proof methods: truth tables and inference rules
• Validity is connected to inference via the Deduction Theorem:
KB ╞ α if and only if (KB → α) is valid
– Example: x > 4 or x <= 4
– Water boils at 100 degrees centigrade
– Humans have two legs (may not be valid)
– Books have page numbers (may not be valid)



Logic as …
Satisfiablility

• A sentence is satisfiable iff there is some interpretation in some


world for which it is true.
• A set of sentences is satisfiable if there exists an interpretation in
which every sentence is true (it has at least one model).
– Proof Methods: Truth-Tables
– Every valid sentence is satisfiable
– Example: x + 2 = 20
– Every student of AI is in their class
• A sentence which is not satisfiable is unsatisfiable (contradiction).



Logic as …
Entailment

• Entailment means that one thing follows from another:


• It can be represented by ╞ symbol (double turn style)
• KB ╞ α shows α can be entailed from KB
• Knowledge base KB entails sentence α if and only if α is true in all
worlds where KB is true
– E.g., the KB containing “the Giants won” and “the Reds won”
entails “Either the Giants won or the Reds won”
– E.g., x+y = 4 entails 4 = x+y
– E.g., x = 2 and y = 2 entails x+y = 4 (but x+y = 4 does not
entail x = 2 and y = 2)
– Entailment is a relationship between sentences (i.e., syntax) that is
based on semantics
Inference Procedure

• An inference procedure is a procedure used as reasoning engine.


• It can do:
1. Given KB, generate new sentence α that can be entailed by KB
and we call the inference procedure entail α
2. Given KB and α, it will prove whether α is entailed by KB or
not
• KB ├i α means sentence α can be derived from KB by procedure i
(├ is called the turnstile, or single turnstile)
• The record of operation of a sound inference procedure is called a
proof



Inference Procedure property
• Soundness: inference procedure i is said to be sound:
if whenever KB ├i α, it is also true that KB╞ α
• Completeness: inference procedure i is said to be complete if
whenever KB╞ α, it is also true that KB ├i α
• The soundness of an inference can be established through a truth table
• For example, an inference procedure that entails P from a KB consisting of
P→Q and Q is not sound, as shown below:

   P    Q    P→Q    Remark
1  T    T    T      Q, P→Q, and P are all true
2  T    F    F      Premises not satisfied
3  F    T    T      Premises satisfied, but not the conclusion P
4  F    F    T      Premises not satisfied

Row 3 is the counterexample: both premises hold while the conclusion P is false.
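This truth-table check of soundness can be sketched by enumerating all models. A minimal illustration; encoding premises and conclusions as Python functions of a model is this example's own convention, not notation from the slides.

```python
from itertools import product

def entails(premises, conclusion, symbols):
    """KB |= a iff the conclusion is true in every model where all premises are true."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False        # counterexample model found
    return True

implies = lambda a, b: (not a) or b

# Modus ponens is sound: {P -> Q, P} |= Q
print(entails([lambda m: implies(m['P'], m['Q']), lambda m: m['P']],
              lambda m: m['Q'], ['P', 'Q']))   # True

# Affirming the consequent is not: {P -> Q, Q} does not entail P
print(entails([lambda m: implies(m['P'], m['Q']), lambda m: m['Q']],
              lambda m: m['P'], ['P', 'Q']))   # False
```

The second call fails on exactly the row-3 model above (P false, Q true).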


Rules of inference for PL
• The soundness of an inference rule can be established through a truth table
Example: ((P ∨ H) ∧ ¬H) → P
• To prove the validity of a sentence, there is a set of already identified
patterns called inference rules. These are:
1. Modus Ponens, or implication elimination: from P → Q and P, infer Q
2. Modus Tollens: from P → Q and ~Q, infer ~P
3. And-Elimination
4. And-Introduction
5. Or-Introduction
6. Double-negation elimination
7. Unit resolution
8. Resolution
Logic as …
Some of the most useful inference rules for propositional logic are as
follows. In these rules, A, B, and C stand for any logical expressions.

This rule is very straightforward. It says: Given A and


B, we can deduce A ∧B. This follows from the
definition of ∧.

These rules say that given A ∧ B, we can deduce A


and we can also deduce B separately.
Again, these follow from the definition of ∧.



Logic as …

• These rules say that from A we can deduce the disjunction of


A with any expression.
• For example, from the statement “I like logic,” we can
deduce expressions such as “I like logic or I like cheese,”
“I like logic or I do not like logic,” “I like logic or fish can
sing,” “I like logic or 2 + 2 = 123,” and so on.
• This follows because true ∨ B is true for any value of B.



Logic as …

This rule is usually known as modus ponens and is one of the most
commonly used rules in logical deduction.
It is expressed as follows:

In other words, if A is true and A implies B is true, then we know that B


is true. For example, if we replace A with “it is raining” and B with “I
need an umbrella,” then we produce the following:
It is raining. If it’s raining, I need an umbrella. Therefore, I need an
umbrella. This kind of reasoning is clearly valid.



Logic as …
What rule is used for each conclusion?
1. If world population continues to grow, then cities will become hopelessly
overcrowded; if cities become hopelessly overcrowded, then pollution will become
intolerable. Therefore, if world population continues to grow, then pollution
will become intolerable. (8)
2. Either Yohanes or Thomas was in Ethiopia; Yohanes was not in Ethiopia.
Therefore, Thomas was in Ethiopia. (7)
3. If twelve million children die yearly from starvation, then something is wrong
with food distribution; twelve million children die yearly from starvation.
Therefore, something is wrong with food distribution. (1)
4. If Japan cares about endangered species, then it has stopped killing whales;
Japan has not stopped killing whales. Therefore, Japan does not care about
endangered species. (2)
5. If Napoleon was killed in a plane crash, then Napoleon is dead; Napoleon is
dead. Therefore, Napoleon was killed in a plane crash. (7)
6. If it is snowing, then it is 32 F or below; it is not 32 F or below. Therefore,
it is not snowing. (M.T.) (2)
Logic as …
Logical equivalence

• Two sentences are logically equivalent iff they have the same truth value in all
possible worlds
• Equivalently, α ≡ β iff α ╞ β and β ╞ α

1. Prove that ((P ∨ H) ∧ ¬H) → P is valid

2. Prove S, given that:
(P ∧ Q)
(P → R)
((Q ∧ R) → S)



Logic as …
Forms of logical expression

• There are different standard forms for expressing PL statements. Some of
these are:
1. Clausal normal form: a set of one or more literals connected with the
disjunction operator (a disjunction of literals).
Example: ~P ∨ Q ∨ ~R is in clausal form
2. Conjunctive normal form (CNF): a conjunction of disjunctions of literals, i.e. a
conjunction of clauses.
Example: (A ∨ B) ∧ (C ∨ D)
3. Disjunctive normal form (DNF): a disjunction of conjunctions of literals.
Example: (A ∧ B) ∨ (C ∧ D)
4. Horn form: a conjunction of literals implies a literal.
Example: (A ∧ B ∧ C ∧ D) ⇒ E
5. A BNF (Backus-Naur Form) grammar of sentences in propositional logic:
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | P | Q | R | …
ComplexSentence → (Sentence) | Sentence Connective Sentence | ¬Sentence
Connective → ∧ | ∨ | ⇒ | ⇔
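These forms map naturally onto simple data structures. One common convention (an illustrative choice, not from the slides) represents a literal as a string with "~" marking negation, a clause as a frozenset of literals (implicit disjunction), and a CNF sentence as a set of clauses (implicit conjunction):

```python
# Clausal form: ~P v Q v ~R as a frozenset of literals.
clause = frozenset({"~P", "Q", "~R"})

# CNF: (A v B) ^ (C v D) as a set of clauses.
cnf = {frozenset({"A", "B"}), frozenset({"C", "D"})}

def is_horn(clause):
    """A Horn clause has at most one positive (unnegated) literal."""
    positives = [lit for lit in clause if not lit.startswith("~")]
    return len(positives) <= 1

# (A ^ B ^ C) => E is the clause ~A v ~B v ~C v E: one positive literal, so Horn.
print(is_horn(frozenset({"~A", "~B", "~C", "E"})))  # True
# A v B has two positive literals, so it is not a Horn clause.
print(is_horn(frozenset({"A", "B"})))               # False
```

Sets are a convenient clause representation because duplicate literals collapse automatically, which matters for the resolution rule later in the chapter.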



Logic as …
Inference procedures and normal forms

• The inference procedures that we have seen so far are all sound
• If the KB is represented in CNF, the generalized resolution inference
procedure is complete
• If the KB is represented in Horn form, the generalized modus ponens
procedure is complete
• Every propositional logic sentence can be converted to CNF; this is not
possible in general for Horn form
• Therefore, CNF is the more expressive representation for knowledge
• However, Horn-form knowledge is easy to read and convenient, and it
admits a polynomial-time inference procedure
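Forward chaining is the standard polynomial-time procedure for Horn knowledge bases: repeatedly fire any rule whose premises are all known until the goal appears or nothing new can be derived. A sketch (the `(premises, conclusion)` rule format and the example KB are illustrative choices):

```python
from collections import deque

def forward_chain(rules, facts, goal):
    """Forward chaining over Horn rules.

    rules: list of (premises, conclusion) pairs, e.g. ({"A", "B"}, "C").
    facts: set of atoms known to be true.
    """
    known = set(facts)
    agenda = deque(facts)
    while agenda:
        fact = agenda.popleft()
        if fact == goal:
            return True
        for premises, conclusion in rules:
            # Fire a rule once all of its premises are known.
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                agenda.append(conclusion)
    return goal in known

# Hypothetical Horn KB: (A ^ B) => C, C => D, with facts A and B.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(forward_chain(rules, {"A", "B"}, "D"))  # True
print(forward_chain(rules, {"A"}, "D"))       # False: B is missing, so C never fires
```

Each rule fires at most once, which is where the polynomial bound comes from.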



Logic as …
Generalized Resolution for PL

• Given any two clauses A and B, if there is a literal P1 in A that has a
complementary literal P2 in B, delete P1 and P2 from A and B and construct
the disjunction of the remaining literals.
• The clause constructed is called the resolvent of A and B.
– For example, consider the following clauses:
A: P ∨ Q ∨ R
B: ~P ∨ Q ∨ M
C: ~Q ∨ S
From clauses A and B, removing P and ~P resolves into clause D: Q ∨
R ∨ Q ∨ M ≡ Q ∨ R ∨ M.
If Q of clause D and ~Q of clause C are resolved, we get
clause E: R ∨ M ∨ S
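With clauses represented as sets of literal strings, each resolution step above is just set difference followed by union; duplicate literals collapse automatically. A sketch of the worked example (the string-based literal encoding is an illustrative convention):

```python
# Clauses as frozensets of literal strings; "~" marks negation.
A = frozenset({"P", "Q", "R"})
B = frozenset({"~P", "Q", "M"})
C = frozenset({"~Q", "S"})

# Resolve A and B on the complementary pair P / ~P:
D = (A - {"P"}) | (B - {"~P"})
print(sorted(D))  # ['M', 'Q', 'R'] — the duplicate Q collapses in the set

# Resolve D and C on Q / ~Q:
E = (D - {"Q"}) | (C - {"~Q"})
print(sorted(E))  # ['M', 'R', 'S']
```

This matches the slide's result: D ≡ Q ∨ R ∨ M and E ≡ R ∨ M ∨ S.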



Generalized Resolution for PL

As another example, consider the following clauses:

– A: P ∨ Q ∨ R
– B: ~P ∨ R
– C: ~Q
– D: ~R

Resolving A and B on P gives Q ∨ R; resolving this with C on Q gives R;
resolving R with D yields the empty clause, which is false. This proves the
contradiction.

Note: before applying resolution to prove a theorem, make sure all the
knowledge is in clausal form
Example: Resolution
• Prove that r follows from:
(p ∧ q) ⇒ (r ∨ s) - (1)
p ⇒ ~s - (2)
p ∧ q - (3)
• Solution:
Clause (1) in clausal form:
~(p ∧ q) ∨ (r ∨ s)
≡ {~p ∨ ~q ∨ r ∨ s} - (1)
Clause (2) in clausal form:
{~p ∨ ~s} - (2)
Clause (3) in clausal form:
{p} - (3)
{q} - (4)
Assume not r, which is {~r} in clausal form - (5)
Example: Resolution
Using inference rules: from unit resolution of (1) and (5):
{~p ∨ ~q ∨ s} - (6) (resolve r with ~r and get the resolvent)
From unit resolution of (3) and (6):
{~q ∨ s} - (7) (resolve p with ~p and get the resolvent)
From (4) and (7):
{s} - (8) (resolve q with ~q and get the resolvent)
From (2) and (8):
{~p} - (9) (resolve s with ~s and get the resolvent)
From (3) and (9):
{} - (10)
The empty clause shows the assumption ~r contradicts the KB;
therefore r follows from the original clauses



Converting to CNF

• Convert the following sentence to CNF:

(a ∧ ~b) ⇒ (c ∧ d)
• Steps:
1. Remove the implication
~(a ∧ ~b) ∨ (c ∧ d)
2. Push negations inwards
~a ∨ ~~b ∨ (c ∧ d)
3. Eliminate double negations
~a ∨ b ∨ (c ∧ d)
4. Push the disjunction into the conjunction (distribute ∨ over ∧)
(~a ∨ b ∨ c) ∧ (~a ∨ b ∨ d)
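A quick sanity check on the conversion: since each step preserves logical equivalence, the CNF result must agree with the original formula on all 16 truth assignments. A small sketch:

```python
from itertools import product

def implies(p, q):
    # Material implication.
    return (not p) or q

def original(a, b, c, d):
    # (a ^ ~b) -> (c ^ d)
    return implies(a and not b, c and d)

def cnf(a, b, c, d):
    # (~a v b v c) ^ (~a v b v d)
    return (not a or b or c) and (not a or b or d)

equivalent = all(
    original(*vals) == cnf(*vals)
    for vals in product([True, False], repeat=4)
)
print(equivalent)  # True
```

The same check is a handy way to verify any hand-derived normal form.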
Converting to CNF

Convert the following sentence to CNF:

((a ⇒ b) ⇒ c)
• Eliminate implications
≡ (~a ∨ b) ⇒ c
≡ ~(~a ∨ b) ∨ c
• Push negations inwards (apply De Morgan's law)
≡ (~~a ∧ ~b) ∨ c
• Eliminate double negations
≡ (a ∧ ~b) ∨ c
• Push disjunctions into conjunctions
≡ (a ∨ c) ∧ (~b ∨ c)
• Hence (a ∨ c) ∧ (~b ∨ c) is the CNF of ((a ⇒ b) ⇒ c)



Converting to CNF

• Convert the following sentences to CNF:

1. a ⇒ ((b ∧ c) ∨ d)

2. (a ∨ b) ⇒ c



Practical Example (The Wumpus World)
• Goal: the agent wants to move to the square that holds the gold, grab it,
come back to the original square and release it there
• Initially the agent could be in any square

• Environment
– Squares adjacent to the wumpus are smelly (stench)
– Squares adjacent to a pit are breezy
– Glitter iff the gold is in the same square
– Shooting kills the wumpus if the agent is facing it
– Shooting uses up the only arrow
– Grabbing picks up the gold if in the same square
– Releasing drops the gold in the same square
Practical Example (The Wumpus World)
• Performance measure
– Grabbing the gold scores +1000
– Death by a pit or the wumpus scores -1000
– Using the arrow (shooting) scores -10
– Every other action scores -1
• Sensors: Stench, Breeze, Glitter, Bump, Scream
• Actuators: turn left 90°, turn right 90°, Forward, Grab, Release,
Shoot



Practical Example (The Wumpus World)
Characterization
• Fully observable? No – only local perception
• Deterministic? Yes – outcomes are exactly specified
• Episodic? No – sequential at the level of actions
• Static? Yes – the wumpus and pits do not move
• Discrete? Yes
• Single-agent? Yes – the wumpus is essentially a natural feature



The Wumpus World

Let Sij, Bij, Gij, Wij, Pij be true iff there is a stench, breeze, glitter, wumpus, pit
at row i and column j, respectively
Let Bu be true iff the agent bumps into a wall at the border
Let Sc be true iff the wumpus is killed (scream)
The percept sequence when the agent is at row i and column j is
[Sij, Bij, Gij, Bu, Sc]
Actions: turn right 90°, turn left 90°, Grab, Shoot and go Forward



The Wumpus World

Initial percept sequence: [¬S11, ¬B11, ¬G11, ¬Bu, ¬Sc]

which entails: squares 1,2 and 2,1 are safe
Percept sequence at 1,2: [S12, ¬B12, ¬G12, ¬Bu, ¬Sc]
which entails:
1. There is a wumpus at 1,3 or 2,2 (W13 ∨ W22)
2. There is no pit at 1,3 or 2,2 (¬P13 ∧ ¬P22)
Percept sequence at 2,1: [¬S21, B21, ¬G21, ¬Bu, ¬Sc], which entails:
a) There is no wumpus at 2,1 or 2,2 (¬W21 ∧ ¬W22)
b) There is a pit at 3,1 or 2,2 (P31 ∨ P22)
a) and 2 show that 2,2 is free of both wumpus and pit, and from b) and 2
(P31 ∨ P22, ¬P22) one can entail P31
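The last step is ordinary model checking: P31 is entailed because it holds in every model of the derived facts. A toy sketch restricted to the two relevant propositions, P31 and P22 (everything else abstracted away):

```python
from itertools import product

def kb(p31, p22):
    # Facts derived above: b) P31 v P22, and 2. ~P22.
    return (p31 or p22) and not p22

# The KB entails P31 iff P31 is true in every model of the KB.
entailed = all(
    p31
    for p31, p22 in product([True, False], repeat=2)
    if kb(p31, p22)
)
print(entailed)  # True
```

Here only one assignment satisfies the KB (P31 true, P22 false), and P31 holds in it, so the entailment goes through.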



The Wumpus World

Percept sequence at 2,2: [¬S22, ¬B22, ¬G22, ¬Bu, ¬Sc]

which entails:
1. There is no wumpus at 1,2, 2,1, 3,2 or 2,3 (¬W12 ∧ ¬W21 ∧ ¬W32 ∧ ¬W23)
2. There is no pit at 1,2, 2,1, 3,2 or 2,3 (¬P12 ∧ ¬P21 ∧ ¬P32 ∧ ¬P23)
Percept sequence at 3,2: [¬S32, B32, ¬G32, ¬Bu, ¬Sc], which entails:
a) There is no wumpus at 3,1, 2,2, 3,3 or 4,2 (¬W31 ∧ ¬W22 ∧ ¬W33 ∧ ¬W42)
b) There is a pit at 3,1 or 3,3 (P31 ∨ P33)


