
CHAPTER 3

SOLVING PROBLEMS BY SEARCHING


IN ARTIFICIAL INTELLIGENCE
Problem-solving agent
Four general steps in problem solving:
❖ Goal formulation
✔What are the successful world states?
❖ Problem formulation
✔What actions and states to consider, given the goal?
❖ Search
✔Determine the possible sequences of actions that lead to states of
known value, then choose the best sequence.
❖ Execute
✔Given the solution, perform the actions.
Searching Strategies: A search strategy is defined by picking the order of
node expansion

Every AI program has to do the process of searching for the solution to


be found out.

Basically, to do a search process, the following are needed:

❖ The Initial state description of the problem.

❖ A set of legal operators that change the state.

❖The Final or the Goal state.


Search Strategies
A search strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
⮚Completeness: is the strategy guaranteed to find a solution if one exists?
⮚Time complexity: how long does it take to find a solution (measured by the
number of nodes generated)?
⮚Space complexity: how much memory does the search need (the maximum
number of nodes held in memory)?
⮚Optimality: does the strategy always find a least-cost (highest-quality)
solution when there are several different solutions?
Cont..
• Time and space complexity are measured in terms of problem
difficulty defined by:
⮚b - branching factor of the search tree
⮚ d - depth of the least-cost solution
⮚ m - maximum length/depth of any path in the state space (may be ∞)
Searching process in AI can be broadly classified into two major types.

1. Brute Force Search (Uninformed search strategies)

Uninformed search (or blind search): strategies use only the information
available in the problem definition. The term Uninformed means that they
have no information about the number of steps or the path cost from the
current state to the goal

2. Heuristic Search (Informed (Heuristic) Search strategies )

Informed (or heuristic) search strategies know whether one state is more
promising than another
1. Brute Force Search Algorithms
Brute Force Search: Uninformed strategies (defined by order in which
nodes are expanded):
1. Depth-First Search:
2. Breadth-First Search:
3. Uniform-cost search
4. Iterative deepening
5. Bidirectional search
1. Depth-First Search:

❖This is a very simple type of brute-force search technique. The search


begins by expanding the initial node, i.e., by using an operator,
generate all successors of the initial node and test them.

❖ This algorithm finds whether the goal can be reached or not. But the
path it has to follow has not been mentioned. In order to remember the
path, all that has to be done is to establish a pointer from each generated
node to node “a” (in algorithm)
Algorithm for Depth-First Search:
Step 1: Put the initial node on a list START.

Step 2: If (START is empty) or (START=GOAL) terminate search.

Step 3: Remove the first node from START. Call this node a.

Step 4: If (a=GOAL) terminate search with success.

Step 5: Else, if node a has successors, generate all of them and add
them at the beginning of START.

Step 6: Go to Step 2.
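The steps above can be sketched in Python. This is an illustrative sketch, not the textbook's code: it assumes the state space is given as an adjacency dictionary, and the function name and graph are made up for the example.

```python
def depth_first_search(graph, start, goal):
    """Return a path from start to goal using depth-first search, or None."""
    # START is a LIFO stack of (node, path-so-far) pairs.
    start_list = [(start, [start])]
    visited = set()                        # remember states to avoid loops
    while start_list:                      # Step 2: terminate when START is empty
        node, path = start_list.pop()      # Step 3: remove the first node, call it a
        if node == goal:                   # Step 4: success test
            return path
        if node in visited:
            continue
        visited.add(node)
        # Step 5: generate all successors and add them at the beginning of START.
        for succ in reversed(graph.get(node, [])):
            start_list.append((succ, path + [succ]))
    return None                            # START emptied without reaching GOAL
```

Carrying the path alongside each node plays the role of the pointer back to node "a" mentioned above.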
Cont..
Expand deepest unexpanded node
• Implementation:
• fringe = Last In First Out (LIFO) queue, i.e., put successors at
front.

Is A a goal state?
Cont.…
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front
queue=[B,C]

Is B a goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[D,E,C]

Is D = goal state?
Cont.…
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[H,I,E,C]

Is H = goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[I,E,C]

Is I = goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[E,C]

Is E = goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[J,K,C]

Is J = goal state?
Cont.…
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[K,C]

Is K = goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[C]

Is C = goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[F,G]

Is F = goal state?
Cont..
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[L,M,G]

Is L = goal state?
Cont.…
• Expand deepest unexpanded node
• Implementation:
• fringe = LIFO queue, i.e., put successors at front

queue=[M,G]

Is M = goal state?
❖The important factors to be considered in any searching
procedure are the time- complexity and space-complexity.

❖ Time-complexity refers to the amount of time taken to generate


the nodes

❖ space-complexity refers to the amount of memory needed.

❖ The major drawback of depth-first search is the determination of the


depth until which search has to proceed. This depth is called cut-off
depth.
Cont.
• The value of cut-off depth is essential because otherwise the search
will go on and on. If the cut-off depth is smaller, solution may not be
found if cut-off depth is large, time-complexity will be more.

• NB: Depth-first search has very modest memory requirements


Properties of depth-first search

• Complete? No: fails in infinite-depth spaces

Can modify to avoid repeated states along path

• Time? O(b^m) with m = maximum depth

⮚terrible if m is much larger than d


✔ but if solutions are dense, may be much faster than breadth-first

• Space? O(bm), i.e., linear space! (we only need to remember a single path +
expanded unexplored nodes)

• Optimal? No (It may find a non-optimal goal first)


⮚The drawback of depth-first search is that it can get stuck going down the wrong path.

⮚Many problems have very deep or even infinite search trees, so depth-first search will never
be able to recover from an unlucky choice at one of the nodes near the top of the tree.

⮚The search will always continue downward without backing up, even when a shallow solution
exists. Thus, on these problems depth-first search will either get stuck in an infinite loop and
never return a solution, or it may eventually find a solution path that is longer than the optimal
solution.

⮚That means depth-first search is neither complete nor optimal. Because of this, depth-first
search should be avoided for search trees with large or infinite maximum depths
2. Breadth-First Search:
❖One simple search strategy is a breadth-first search. In this strategy, the
root node is expanded first, then all the nodes generated by the root node
are expanded next, and then their successors, and so on. In general, all the
nodes at depth d in the search tree are expanded before the nodes at depth d
+ 1.

❖Breadth-first search is a very systematic strategy because it considers all


the paths of length 1 first, then all those of length 2, and so on.
Cont..


If there is a solution, breadth-first search is guaranteed to find it, and if there are
several solutions, breadth-first search will always find the shallowest goal state
first.

In terms of the four criteria, breadth-first search is complete, and it is optimal
provided the path cost is a non-decreasing function of the depth of the node.
(This condition is usually satisfied only when all operators have the same cost.)
The memory requirements are a bigger problem for breadth-first search than the
execution time.
Algorithm for BFS:
Step 1: Put the initial node on a list START.

Step 2: If (START is empty) or (START=GOAL) terminate search.

Step 3: Remove the first node from START. Call this node a.

Step 4: If (a=GOAL) terminate search with success.

Step 5: Else, if node a has successors, generate all of them and add them
at the tail of START.

Step 6: Go to Step 2.
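A minimal Python sketch of the algorithm, under the same illustrative adjacency-dictionary representation as before (the names are assumptions, not the textbook's code):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a shallowest path from start to goal, or None."""
    fringe = deque([(start, [start])])     # FIFO queue
    visited = {start}
    while fringe:                          # Step 2: terminate when START is empty
        node, path = fringe.popleft()      # Step 3: remove the first node, call it a
        if node == goal:                   # Step 4: success test
            return path
        # Step 5: successors are added at the tail of the queue.
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                fringe.append((succ, path + [succ]))
    return None
```

The only change from the depth-first sketch is the data structure: a FIFO queue instead of a LIFO stack, which makes the search proceed level by level.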
Cont..
• This is also a brute-force search procedure like depth-first search. Here
searching progresses level by level, unlike depth-first search, which
goes deep into the tree. An operator is employed to generate all
possible children of a node. The algorithm is given above. As in the
case of depth-first search, if a pointer is introduced in the
algorithm, then the entire path scanned can be identified.
Cont..
❖The root node is expanded first Then all successors of the root node
are expanded

❖ Then all their successors

• … and so on

❖ In general, all the nodes of a given depth are expanded before any
node of the next depth is expanded.

❖ Uses a standard queue as data structure


❖Expand shallowest unexpanded node

❖ Fringe is the collection of nodes that have been generated but not
yet expanded.

❖ The set of all leaf nodes available for expansion at any given
point is called the frontier.

❖Implementation:

fringe/frontier= FIFO queue , i.e., new successors go at end of the


queue.
• Expand shallowest unexpanded node
• Implementation:
• fringe is a FIFO queue, i.e., new successors go at end

Is A a goal state?
• Expand shallowest unexpanded node
• Implementation:
• fringe is a FIFO queue, i.e., new successors go at end

Expand:
fringe = [B,C]

Is B a goal state?
• Expand shallowest unexpanded node
• Implementation:
• fringe is a FIFO queue, i.e., new successors go at end

Expand:
fringe=[C,D,E]

Is C a goal state?
• Expand shallowest unexpanded node
• Implementation:
• fringe is a FIFO queue, i.e., new successors go at end

Expand:
fringe=[D,E,F,G]

Is D a goal state?
Properties of breadth-first search
• Complete? Yes, it always reaches the goal (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b^(d+1) = O(b^(d+1))
(this is the number of nodes we generate)
• Space? O(b^(d+1)) (keeps every node in memory,
either in the fringe or on a path to the fringe).
• Optimal? Yes (if we guarantee that deeper solutions are less optimal,
e.g. step cost = 1).

• Space is the bigger problem (more than time)


3. Uniform-cost search

❖Breadth-first search finds the shallowest goal state, but this may not
always be the least-cost solution for a general path cost function.
Uniform cost search modifies the breadth-first Searching strategy by
always expanding the lowest-cost node on the fringe (as measured by
the path cost g(n)), rather than the lowest-depth node. It is easy to see
that breadth-first search is just uniform cost search with g(n) =
DEPTH(n).
Cont..

❖Extension of Breadth-First search:

✔ Expand node with lowest/smallest path cost g(n).

❖ Implementation: fringe = queue ordered by path cost

❖ UC-search is the same as Breadth-First search when all step


costs are equal
Cont.…
❑UCS expands the node n with lowest summed path cost g(n).

❑To do this, the frontier is stored as a priority queue (a sorted list, or
better, a heap data structure).

❑The goal test is applied to a node when selected for expansion (not
when it is generated).

❑ Also, a test is added in case a better path is found to a node
already on the frontier/fringe.
Cost function f(n) applied to each node
• Implementation: fringe = queue ordered by path cost

• Equivalent to breadth-first if all step costs are equal.

• Complete? Yes, if step cost ≥ ε (otherwise it can get stuck in infinite


loops)

• Time? # of nodes with path cost ≤ cost of optimal solution.

• Space? # of nodes on paths with path cost ≤ cost of optimal solution.

• Optimal? Yes: nodes are expanded in order of increasing path cost
(assuming step cost ≥ ε)
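The priority-queue implementation described above can be sketched as follows. This is an illustrative sketch: the weighted graph maps each node to (successor, step cost) pairs, and all names are assumptions.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: node -> list of (successor, step_cost). Returns (cost, path) or None."""
    frontier = [(0, start, [start])]       # heap ordered by path cost g(n)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                   # goal test on expansion, not generation
            return g, path
        for succ, cost in graph.get(node, []):
            new_g = g + cost
            # keep a successor only if this is a cheaper path to it
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None
```

The `best_g` check implements the test mentioned above: a node already on the frontier is replaced when a better path to it is found.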


4. Iterative deepening Searching

❑Iterative deepening search (or iterative deepening depth-first search)


is a general strategy, often used in combination with depth-first tree
search, that finds the best depth limit.

❑It does this by gradually increasing the limit—first 0, then 1, then 2,


and so on—until a goal is found.

❑This will occur when the depth limit reaches d, the depth of the
shallowest goal node.
❑Iterative deepening combines the benefits of depth-first and breadth-
first search.

❑Like depth-first search, its memory requirements are very modest:
O(bd) to be precise. Like breadth-first search, it is complete when the
branching factor is finite, and optimal when step costs are equal.
❑Iterative deepening search may seem wasteful, because states are
generated multiple times. It turns out this is not very costly. The reason
is that in a search tree with the same (or nearly the same) branching
factor at each level, most of the nodes are in the bottom level, so it does
not matter much that the upper levels are generated multiple times.
• Iterative deepening search is analogous to breadth-first search in that
it explores a complete layer of new nodes at each iteration before
going on to the next layer. It would seem worthwhile to develop an
iterative analog to uniform-cost search, inheriting the latter
algorithm’s optimality guarantees while avoiding its memory
requirements.
❖Call limited depth DFS with depth 0;

❖If unsuccessful, call with depth 1;

❖If unsuccessful, call with depth 2; etc.


• In general, iterative deepening is the preferred blind search
method when the search space is large and the solution depth is
unknown.
Iterative-Deepening Search: Efficiency
❖Complete? Yes

❖ Optimal? Same as BFS

❖ Time complexity? Exponential: O(b^d)

❖ Space complexity? Linear: O(bd)
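Iterative deepening is just depth-limited DFS called with growing limits, as the steps above describe. A hedged sketch under the same illustrative adjacency-dictionary representation (names are assumptions):

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that refuses to go deeper than `limit`. Returns a path or None."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:                         # cut-off depth reached
        return None
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, goal, limit - 1, path + [succ])
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    # Gradually increase the depth limit: 0, 1, 2, ... until a goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None
```

Each iteration regenerates the upper levels of the tree, which, as noted above, is cheap because most nodes live in the bottom level.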


Bidirectional Search
❑The idea behind bidirectional search is to run two simultaneous searches—
one forward from the initial state and the other backward from the goal—
stopping when the two searches meet in the middle.

❑It searches forward from initial state and backward from goal state till both
meet to identify a common state. The path from initial state is concatenated
with the inverse path from the goal state. Each search is done only up to
half of the total path.
❑Bidirectional search is implemented by replacing the goal test with a
check to see whether the frontiers of the two searches intersect; if they
do, a solution has been found.

❑Simultaneously search forward from the initial state and backward


from the goal state

❑simultaneously search forward from S and backwards from G

❑ stop when both “meet in the middle”

❑ need to keep track of the intersection of 2 open sets of nodes


❖Alternate searching from the start state toward the goal and from the
goal state toward the start state.

❖Stops when the frontiers intersect. Works well only when there are a
unique start state and a unique goal state.

❖Problem: How do we search backward from the goal state?

❖Requires the ability to generate “predecessor” states.

❖Predecessors of node n = all nodes that have n as successor.


[Figures: step-by-step illustration of forward search alone, then of bidirectional search with the two frontiers meeting in the middle]
❖We can reduce this by roughly half if one of the two searches is done
using iterative deepening, but at least one of the frontiers must be
kept in memory so that the intersection check can be done. This
space requirement is the most significant weakness of bidirectional
search

❖The reduction in time complexity makes bidirectional search
attractive, but how do we search backward? This is not as
easy as it sounds.
Properties of Bidirectional Search
❖Time complexity and Space complexity O(b^(d/2)) rather than O(b^d)

❖ Both actions and predecessors (inverse actions) must be defined

❖ Must test for intersection between the two searches

❖Really a search strategy, not a specific search method

❖Hard to compute predecessors

❖High predecessor branching factor

❖Too many goal states

Often not practical….
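The "two frontiers that stop when they intersect" idea can be sketched with two breadth-first frontiers. This is an illustrative sketch only: it assumes an undirected adjacency dictionary, so that successors and predecessors coincide and the backward search is well defined; all names are made up.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two BFS frontiers; stop when they intersect. Assumes graph is undirected."""
    if start == goal:
        return [start]
    # Parent maps let us reconstruct each half of the path.
    fwd_parents, bwd_parents = {start: None}, {goal: None}
    fwd_queue, bwd_queue = deque([start]), deque([goal])
    while fwd_queue and bwd_queue:
        meet = _expand(graph, fwd_queue, fwd_parents, bwd_parents)
        if meet is None:
            meet = _expand(graph, bwd_queue, bwd_parents, fwd_parents)
        if meet is not None:
            return _join(meet, fwd_parents, bwd_parents)
    return None

def _expand(graph, queue, parents, other_parents):
    """Expand one node; return the meeting node if the frontiers intersect."""
    node = queue.popleft()
    for succ in graph.get(node, []):
        if succ not in parents:
            parents[succ] = node
            if succ in other_parents:      # intersection test replaces the goal test
                return succ
            queue.append(succ)
    return None

def _join(meet, fwd_parents, bwd_parents):
    """Concatenate the start->meet path with the inverse meet->goal path."""
    path, node = [], meet
    while node is not None:
        path.append(node)
        node = fwd_parents[node]
    path.reverse()
    node = bwd_parents[meet]
    while node is not None:
        path.append(node)
        node = bwd_parents[node]
    return path
```

Note how the goal test is replaced by the intersection check, exactly as described above, and how one frontier must stay in memory for that check to be possible.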


2. Heuristic Search strategies (Problem solving by agent)

❑To solve large problems with large number of possible states,


problem-specific knowledge needs to be added to increase the
efficiency of search algorithms.
❑An informed search strategy, one that uses problem-specific knowledge
beyond the definition of the problem itself, can find solutions more
efficiently than an uninformed strategy.

❑In AI, heuristic search has both a general meaning and a more specialized technical sense.

❑It is a practical strategy increasing the effectiveness of complex problem


solving

❑ It leads to a solution along the most probable path, omitting the least
promising ones.

❑ It should enable one to avoid the examination of dead ends, and to use
already gathered data.
Heuristic Evaluation Functions

❖In single-agent path-finding problems, a heuristic evaluation
function estimates the cost of an optimal path between a pair of nodes.

❖A key property of a heuristic evaluation function is that it estimates
the actual cost and that it is inexpensive to compute.

❖A heuristic function is denoted by h(n).


Heuristics are approximations used to minimize the searching process.

Generally two categories of problems use heuristics.

1. Problems for which no exact algorithms are known and one needs to

find an approximate and satisfying solution.

Example: computer vision, speech recognition, etc.

2. Problems for which exact solutions are known, but computationally

infeasible. Ex: Chess, etc.


The Heuristics which are needed for solving problems are generally
represented as a heuristic function which maps the problem states into
numbers. These numbers are used to guide search.

The following algorithms make use of heuristic evaluation functions:


1. A* Algorithm 2. Hill Climbing 3. AO* Algorithm
1. A* search Algorithm
• The most widely-known form of best-first search is called A* search
(pronounced “A-star”).

• Key idea: avoid expanding paths that are already expensive, and
expand the most promising first.

• A* is best-first search in which the cost associated with a node is
f(n) = g(n) + h(n)
⮚ g(n) =the cost of path from the initial state to node n

⮚h(n)= heuristic evaluation cost of path from node n to a goal

⮚ f(n)= estimated total cost of path through n to goal

⮚The sum of the evaluation function value and the cost along the
path leading to that state is called fitness number.
⮚A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where
h*(n) is the true cost to reach the goal state from n.

⮚Equivalently, a heuristic h(n) is admissible if it never overestimates the
cost to reach the goal, i.e., it is optimistic.

⮚A* finds an optimal path to a goal if the heuristic h(n) is admissible:
with such an evaluation function it will find an optimal solution to the
problem.
Algorithm for A* Algorithm :
Step 1: Put the initial node on a list START

Step 2: If (START is empty) or (START = GOAL) terminate search

Step 3: Remove the first node from START. Call this node “a”

Step 4: If (a=GOAL) terminate search with success

Step 5: Else If node “a” has successors, generate all of them. Estimate the fitness
number of the successors by totaling the evaluation function value and the cost
function value. Sort the list by fitness number.

Step 6: Name the new list as START 1
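A compact sketch of A*, ordering the frontier by the fitness number f(n) = g(n) + h(n) as in the steps above. This is an illustrative sketch: the weighted graph of (successor, step cost) pairs and the heuristic table h are assumptions for the example.

```python
import heapq

def a_star_search(graph, h, start, goal):
    """graph: node -> list of (successor, step_cost); h: heuristic estimate per node.
    Expands the node with the lowest fitness number f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                # f = g + h: cost so far plus estimated remaining cost
                heapq.heappush(frontier, (new_g + h[succ], new_g, succ, path + [succ]))
    return None
```

With h(n) = 0 for every node this reduces to uniform-cost search; with an admissible h it returns an optimal path.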


2. Hill-Climbing Search
❖Hill climbing is a DFS with a heuristic measurement that orders
choices as nodes are expanded. The heuristic measurement gives the
value of the estimated remaining distance to the goal state. The
effectiveness of hill climbing is completely dependent upon the
accuracy of the heuristic measurement.

❖It is simply a loop that continually moves in the direction of


increasing value—that is, uphill.
⮚It terminates when it reaches a “peak” where no neighbor has a higher
value.
⮚Hill climbing does not look ahead beyond the immediate neighbors of
the current state.
⮚Remove the first path from the queue.
⮚Create new paths by extending the first path to all the neighbors
of the terminal node.
⮚ Sort the new paths, if any, by the estimated distance between
their terminal nodes and the goal, until the first path in the queue
terminates at the goal node or the queue is empty.
⮚ If the goal node is found, announce success, otherwise announce
failure
Algorithm for Hill-Climbing is given below:

Step 1: Put the initial node on a list START

Step 2: If (START is empty) or (START = GOAL) terminate search

Step 3: Remove the first node from START. Call this node “a”

Step 4: If (a=GOAL) terminate search with success

Step 5: Else, if node “a” has successors, generate all of them. Find out
how far they are from the goal node.

Step 6: Go to Step 2.
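The uphill loop described above can be sketched abstractly. Here `neighbors` and `value` are hypothetical problem-specific functions, not part of any standard library:

```python
def hill_climbing(initial, neighbors, value):
    """Greedy local search: repeatedly move to the best-valued neighbor,
    terminating at a peak where no neighbor has a higher value."""
    current = initial
    while True:
        # Look only at the immediate neighbors of the current state.
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):  # peak (or plateau) reached
            return current
        current = best
```

For example, maximizing value(x) = -(x - 3)^2 over integers with neighbors x - 1 and x + 1 climbs from any start toward x = 3; on a function with a local maximum it would stop there instead, which is exactly the failure mode discussed below.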
Unfortunately, hill climbing search often gets stuck for the following reasons:

✔Local maxima: a local maximum is a peak that is higher than each of its
neighboring states, but lower than the global maximum

✔Ridges: Ridges result in a sequence of local maxima that is very difficult for
greedy algorithms to navigate.

✔Plateau: a plateau is an area of the state-space landscape where the objective


function is flat. It can be a flat local maximum, from which no uphill exit
exists.
AO* Algorithm: (AND-OR Graph)

• AND-OR graph (or Tree) is useful for representing the solution


of problems that can be solved by decomposing them into a set of
smaller problems, all of them to be solved.

• This decomposition or reduction generates arcs that we call AND


arcs. One AND arc may point to any number of successor nodes, all
of which must be solved in order for the arc to point to a solution.
❑Just as in an OR graph, several arcs may emerge from a single node,
indicating a variety of ways in which the original problem might be solved.

❑This structure is called not simply an AND graph but rather an AND-OR
graph. An example of an AND-OR graph is given in the figure. AND arcs are
indicated with a line connecting all the components.

❑This algorithm should find a path from the starting node of the
graph to a set of nodes representing solution states.
The end of chapter three
ARTIFICIAL INTELLIGENCE
CHAPTER 4
Knowledge and Reasoning
Expert system

Compiled By Tedy D.
Introduction
❑Knowledge is a description of the world and it is a progression that
starts with data.
❑By organizing or analyzing the data, we understand what the data
means, and this becomes information

❑The interpretation or evaluation of information yields knowledge

❑An understanding of the principles embodied within the


knowledge is wisdom

Fig. Knowledge Progression
• Data is viewed as collection of disconnected facts
• Example : It is raining

• Information emerges when relationships among facts are established
and understood; provides answers to "who", "what", "where", and
"when"
• Example : The temperature dropped 15 degrees and then it started
raining

• Knowledge emerges when relationships among patterns are identified
and understood; Provides answers as "how“

• Example : If the humidity is very high and the temperature drops
substantially, then the atmosphere is unlikely to hold the moisture, so
it rains

• Wisdom is the ability to think and act using knowledge,


experience, understanding, common sense, and insight.
• Wisdom is the pinnacle of understanding; it uncovers the principles of
relationships that describe patterns. Provides answers as "why"
• Example : Encompasses understanding of all the interactions that
happen between raining, evaporation, air currents, temperature
gradients and changes

• A knowledge model tells that, as the degree of “connectedness” and
“understanding” increases, we progress from data through
information and knowledge to wisdom

• Knowledge is a description of the world

• Representation is the way knowledge is encoded

• In Artificial Intelligence, knowledge representation studies the


formalization of knowledge and its processing within machines

• As a branch of Artificial Intelligence, knowledge representation and


reasoning aims at designing computer systems that reason about a
machine-interpretable representation of the world, similar to human
reasoning
• An answer to the question, "how to represent knowledge", requires an
analysis to distinguish between knowledge “how” and knowledge
“that”
• knowing "how to do something“
• e.g. "how to drive a car" is a Procedural knowledge
• knowing "that something is true or false“
• e.g. "that is the speed limit for a car on a motorway" is a
Declarative knowledge
• Generally Knowledge can be categorized in many different ways

• Domain knowledge: Domain knowledge is usable knowledge for a


particular domain.

• Meta knowledge: can be defined as knowledge about knowledge


(knowledge dictionary).

• Common sense knowledge: knowledge that is generally known.

• Heuristic knowledge: Heuristic is a specific rule-of-thumb or rule


derived from experience.
• Explicit knowledge: can be easily expressed in words/numbers and
shared in the form of data, scientific formulae, product specifications,
manuals, and universal principles.

• Tacit knowledge: not easy to document. Subjective insights,
intuitions, emotions, mental models, values and actions are examples of
tacit knowledge.

• The knowledge can be extracted from its sources, i.e., tacit knowledge
from experts and by observation, and explicit knowledge through
document analysis.
Propositional logic
• Proposition : A proposition is classified as a declarative sentence
which is either true or false.

• e.g. It rained yesterday.

• Represents statements about the world without reflecting this structure


and without modeling these entities explicitly

• Example: 'Peter likes ice-cream' is treated as an atomic entity

• Some knowledge is hard or impossible to encode in propositional
logic.
Sentences are combined by Connectives:

❖∧ ...and [conjunction]

❖∨ ...or [disjunction]

❖⇒ ...implies [implication / conditional]

❖⇔ ..is equivalent [biconditional]

❖¬ ...not [negation]
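The connectives map directly onto Boolean operations. A small illustrative sketch (the propositions P and Q are invented for the example; implication and the biconditional need tiny helper functions, since Python has no built-in operators for them):

```python
def implies(p, q):
    """p ⇒ q is false only when p is true and q is false."""
    return (not p) or q

def iff(p, q):
    """p ⇔ q holds when both sides have the same truth value."""
    return p == q

# P: "it is raining", Q: "the ground is wet" (hypothetical propositions)
P, Q = True, False
conjunction = P and Q          # P ∧ Q
disjunction = P or Q           # P ∨ Q
conditional = implies(P, Q)    # P ⇒ Q
biconditional = iff(P, Q)      # P ⇔ Q
negation = not P               # ¬P
```

Enumerating both helpers over all four truth assignments reproduces the usual truth tables for ⇒ and ⇔.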
Propositional logic is a weak language
⮚Hard to identify “individuals” (e.g., Mary, 3)

⮚Can’t directly talk about properties of individuals or relations


between individuals (e.g., “Bill is tall”)

⮚Generalizations, patterns, regularities can’t easily be represented


(e.g., “all triangles have 3 sides”).
first-order logic

• A term is a logical expression that refers to an object. An atomic
sentence is formed from a predicate symbol followed by a
parenthesized list of terms.

• For example, Brother(Richard, John)

• First-order logic is a formal logical system used in mathematics,


philosophy, linguistics, and computer science.
First-order logic (like natural language) assumes the world contains

• Objects, which are things with individual identities. Like, people,


houses, numbers, theories, colors

• Properties: of objects that distinguish them from other objects.

• Relations: that hold among sets of objects. Like bigger


than, inside, part of, has color, occurred after, owns, comes between

• Functions, which are a subset of relations where there is only one


“value” for any given “input”.
• First-order logic is distinguished from propositional logic by its use
of quantifiers; each interpretation of first-order logic includes a
domain of discourse over which the quantifiers range.

• First order logic contains two standard quantifiers, called universal


and existential.

• (∀ x)P(x) means that P holds for all values of x in the domain


associated with that variable

• E.g., (∀ x) dolphin(x) →mammal(x)


Universal quantification (∀)

• To express this particular rule, we will use the unary predicates Cat and
Mammal; thus, "Spot is a cat" is represented by Cat(Spot), and "Spot
is a mammal" by Mammal(Spot). In English, what we want to say is
that for any object x, if x is a cat then x is a mammal. First-order
logic lets us do this as follows: ∀x Cat(x) ⇒ Mammal(x)

• All 4th-year students are smart. Assume the universe of discourse of x is
4th-year students: ∀x 4th(x) ⇒ smart(x)
Existential quantification(∃)
• (∃ x)P(x) means that P holds for some value of x in the domain associated
with that variable. E.g., (∃ x) mammal(x) ∧ lays-eggs(x)

• Universal quantification makes statements about every object. Similarly, we


can make a statement about some object in the universe without naming it
by using an existential quantifier.

• Some 4th-year students are smart

• Assume x ranges over 4th-year students:

(∃ x) smart(x)
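Over a finite domain of discourse, the two quantifiers behave like Python's built-in all() and any(). The student data below is invented purely to illustrate this correspondence:

```python
# Domain of discourse: a small, hypothetical set of 4th-year students.
students = [
    {"name": "Abel",  "smart": True},
    {"name": "Sara",  "smart": True},
    {"name": "Kidus", "smart": False},
]

def smart(s):
    return s["smart"]

# (∀x) smart(x): the predicate must hold for every element of the domain.
all_smart = all(smart(s) for s in students)

# (∃x) smart(x): the predicate must hold for at least one element.
some_smart = any(smart(s) for s in students)
```

Here all_smart is false (one student falsifies the universal claim) while some_smart is true, matching the readings of ∀ and ∃ above.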
Agents that Reason Logically( Logical Agents):
• The central component of a knowledge-based agent is its
knowledge base. Informally, a knowledge base is a set of
representations of facts about the world. Each individual
representation is called a sentence. (Here "sentence" is used as a
technical term.
Sentence is related to the sentences of English and other natural
languages, but is not identical.) The sentences are expressed in a
language called a knowledge representation language.

Logics are formal languages for representing information such that


conclusions can be drawn

❖ Syntax defines the sentences in the language

❖ Semantics define the "meaning" of sentences;

i.e., define truth of a sentence in a world


Knowledge Representation

• KR is the field of AI dedicated to representing information about


the world in a form that a computer system can utilize to solve
complex tasks such as diagnosing a medical condition or having a
dialog in a natural language
• Knowledge representation incorporates findings from psychology
about how humans solve problems and represent knowledge in
order to design formalisms that will make complex systems easier
to design and build.
• Knowledge Representation is a sub area of artificial intelligence
concerned with understanding, designing and implementing way of
representing information in a computer so that programs (agents) can
use this information
Automated Reasoning

• Knowledge representation goes hand in hand with automated


reasoning because one of the main purposes of explicitly representing
knowledge is to be able to reason about that knowledge, to make
inferences, assert new knowledge, etc.

• Automated reasoning is an area of computer science and mathematical


logic dedicated to understanding different aspects of reasoning.

• The study of automated reasoning helps to produce computer
programs that allow computers to reason completely, or nearly
completely, automatically.
Logic as a formal language
Expert systems

❖An expert system is a computer program that represents and reasons


with knowledge of some specialist subject with a view to solving
problems or giving advice.
Expert systems

❖To solve expert-level problems, expert systems will need efficient


access to a substantial domain knowledge base, and a reasoning
mechanism to apply the knowledge to the problems they are given.

❖Knowledge Based System (KBS) is sometimes used as a synonym


for Expert System.
Characteristics of Expert Systems

Expert systems can be distinguished from conventional computer


systems in that:

❖They simulate human reasoning about the problem domain, rather


than simulating the domain itself.

❖They perform reasoning over representations of human knowledge,


in addition to doing numerical calculations or data retrieval.
Characteristics of Expert Systems

❖They usually have to provide explanations and justifications of their


solutions or recommendations in order to convince the user that their
reasoning is correct

❖They have corresponding distinct modules referred to as the


inference engine and the knowledge base.
Architecture of Expert Systems
• The process of building expert systems is often called knowledge
engineering. The knowledge engineer is involved with all
components of an expert system:

Figure 1. Interactions of the Knowledge Engineer with the Expert System

Figure 2. Expert System Architecture
Components of Expert Systems
• Domain Expert: a person who has comprehensive and authoritative
knowledge or skills in a particular area of endeavor.

• Knowledge Acquisition: the process of acquiring knowledge from
different sources and converting it into a machine-readable form.
❖Knowledge Base: the heart of the knowledge-based system.

It records the factual and causal knowledge from human experience
and scientific studies in more than one way.

It contains the necessary rules and procedures for solving a given
problem: a collection of rules, assertions, and facts about a specific
problem domain, represented in a machine-readable format.
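As a deliberately simplified sketch, a rule-based knowledge base can be held in plain Python data structures. The animal-identification facts and rule contents below are invented for illustration; a real ES shell would use its own rule language.

```python
# Facts: things currently known to be true.
facts = {"has_fur", "gives_milk"}

# Rules: each is (set of conditions, conclusion), i.e. IF conditions THEN conclusion.
rules = [
    ({"has_fur"}, "is_mammal"),
    ({"gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]

def conclusions(facts, rules):
    """Return every conclusion whose conditions are all satisfied by the facts."""
    return {concl for conds, concl in rules if conds <= facts}

print(conclusions(facts, rules))  # {'is_mammal'}
```

Separating the facts and rules (the knowledge base) from the matching logic (the inference engine) is the key design point of the architecture described above.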
❖Inference Engine: the purpose of the inference engine is to seek
information and relationships from the knowledge base and to provide
answers, predictions, and suggestions the way a human expert
would.

❖The terms "reasoning" and "inference" are generally used to cover
any process by which conclusions are reached.

❖It uses the knowledge in the knowledge base and the information
provided by the user to infer new knowledge.
❖The Inference Engine is the most important component of the
knowledge-based system from the user's point of view.

❖Use of efficient procedures and rules by the Inference Engine is
essential in deducing a correct, flawless solution.

❖In a knowledge-based ES, the Inference Engine acquires and
manipulates the knowledge from the knowledge base to arrive at a
particular solution.
The inference engine must find the right facts, interpretations, and
rules and assemble them correctly.

There are two common inference methods:

1) Backward chaining = goal-driven inference

2) Forward chaining = data-driven inference
User Interface: the responsibility of the user interface is to convert
the rules from their internal representation, which the user may not
understand, into a form the user can understand.

• Knowledge Engineer: a person responsible for acquiring knowledge
and encoding it into the knowledge base in a machine-understandable
form.
Other Components
✔Working memory and domain database (case-specific data)
✔Explanation facility
✔Knowledge base editor

❖Working memory:
✔A global database of facts used by the rules

❖Domain database:
✔Contains facts about the ES's subject

❖Explanation facility:
✔Explains the reasoning of the system to a user

❖Knowledge acquisition facility:
✔An automatic way for the expert to enter knowledge into the
system, rather than having the knowledge engineer explicitly
code the knowledge
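A minimal way to support an explanation facility is to have the engine record which rule produced each derived fact, so the system can justify its conclusions to the user. The car-diagnosis rule below is invented for illustration.

```python
def forward_chain_with_trace(facts, rules):
    """Forward chaining that keeps a justification for every fact.
    Returns a dict mapping each fact to how it was obtained."""
    derived = dict.fromkeys(facts, "given")  # initial facts are user-supplied
    changed = True
    while changed:
        changed = False
        for conds, concl in rules:
            if concl not in derived and all(c in derived for c in conds):
                derived[concl] = f"from rule: IF {sorted(conds)} THEN {concl}"
                changed = True
    return derived

trace = forward_chain_with_trace(
    {"engine_wont_start", "battery_dead"},
    [({"battery_dead"}, "charge_battery")],
)
for fact, why in trace.items():
    print(fact, "<-", why)
```

Real shells store richer traces (rule names, bindings, a proof tree), but the principle is the same: explanations fall out of remembering which rules fired.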
Development of Expert Systems
• The process of ES development is iterative. Steps in developing the ES
include:

• Identify the Problem Domain
• The problem must be suitable for an expert system to solve. Find the experts
in the task domain for the ES project. Establish the cost-effectiveness of the system.

• Design the System
• Identify the ES technology.
• Develop the Prototype
• The knowledge engineer works to acquire domain knowledge from the
expert and represent it in the form of IF-THEN-ELSE rules.

• Test and Refine the Prototype
• The knowledge engineer uses sample cases to test the prototype for
any deficiencies in performance. End users test the prototypes of the
ES.
• Develop and Complete the ES
• Test and ensure the interaction of the ES with all elements of
its environment, including end users, databases, and other
information systems.
• Document the ES project well.
• Train the user to use the ES.
• Expert systems evolve: building expert systems is generally an
iterative process. The components and their interaction will be refined
over the course of numerous meetings of the knowledge engineer with
the experts and users.
• The major processes in Expert System development are:
• Knowledge Acquisition
• Knowledge Representation
Knowledge Acquisition

⮚It is the process of "extracting" knowledge from domain experts
and representing it in a suitable form that can be used by a
knowledge-based system.

⮚Knowledge acquisition is the accumulation, transfer, and
transformation of problem-solving expertise from experts and/or
documented knowledge sources to a computer program, for
constructing or expanding the knowledge base.
The knowledge acquisition process usually comprises three
principal stages:
1. Knowledge elicitation: the interaction between the expert and the
knowledge engineer/program to elicit the expert's knowledge in some
systematic way.
2. The knowledge thus obtained is usually stored in some form of
human-friendly intermediate representation.
3. The intermediate representation of the knowledge is then compiled
into an executable form (e.g. production rules) that the inference
engine can process.
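To make stage 3 concrete, here is a hedged sketch of production rules in an executable form processed by a goal-driven (backward-chaining) engine: to prove a goal, either find it among the known facts, or find a rule concluding it and recursively prove each of its conditions. The medical rule is invented for illustration only.

```python
def backward_chain(goal, facts, rules, seen=None):
    """Goal-driven inference sketch over (conditions, conclusion) rules."""
    seen = seen or set()
    if goal in facts:
        return True          # goal is a known fact
    if goal in seen:
        return False         # guard against cyclic rule chains
    seen = seen | {goal}
    for conds, concl in rules:
        # Try any rule that concludes the goal; prove its conditions recursively.
        if concl == goal and all(
            backward_chain(c, facts, rules, seen) for c in conds
        ):
            return True
    return False

rules = [
    ({"has_fever", "has_rash"}, "measles_suspected"),
]
print(backward_chain("measles_suspected", {"has_fever", "has_rash"}, rules))  # True
```

In a real shell, an unprovable condition would typically trigger a question to the user rather than an immediate failure, which is how consultation-style expert systems drive their dialogues.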
• Knowledge Elicitation
The knowledge elicitation process itself usually consists of
several stages:
1. Find out as much as possible about the problem and domain from
books, manuals, etc. In particular, become familiar with any
specialist terminology.
2.Try to characterize the types of reasoning and problem solving
tasks that the system will be required to perform.
3. Find an expert (or set of experts) that is willing to collaborate on
the project.
4. Interview the expert (usually many times during the course of
building the system). Find out how they solve the problems your
system will be expected to solve. Have them check and refine your
intermediate knowledge representation.
Stages of Knowledge Acquisition
• Levels of Knowledge Analysis
❖ Knowledge Identification: Use in-depth interviews in which the
knowledge engineer encourages the expert to talk about how
they do what they do. The knowledge engineer should
understand the domain well enough to know which objects and
facts need talking about.
❖ Knowledge Conceptualization: Find the primitive concepts and
conceptual relations of the problem domain.
❖ Epistemological Analysis: Cover the structural properties of the
conceptual knowledge, such as taxonomic relations
(classifications).
❖ Logical Analysis: Decide how to perform reasoning in the
problem domain. This kind of knowledge can be particularly
hard to acquire.
❖ Implementation Analysis: Work out systematic procedures for
implementing and testing the system.
Benefits of Expert Systems

❖Availability − They are easily available due to mass production of software.

❖Less Production Cost − Production cost is reasonable, which makes them affordable.

❖Speed − They offer great speed and reduce the amount of work an individual puts in.

❖Less Error Rate − The error rate is low compared to human errors.

❖Reduced Risk − They can work in environments dangerous to humans.

❖Steady Response − They work steadily without getting emotional, tense, or fatigued.
• THE END
Individual Assignment

1. Draw any type of structure and write a program for one of the brute-force
(uninformed) search strategies: Depth-First Search, Breadth-First Search,
Uniform-Cost Search, Iterative Deepening, or Bidirectional Search. You may
use one of the following programming languages:

✔Java, Python, C++, or C#

(20%)
Submission date: August 31 G.C. or August 25 E.C.
NB: You have to write it by hand and it must be clear.
Test 1 will be next week, Wednesday: February 25 G.C. / February 17 E.C.
