
Ch6: Knowledge Representation Using Rules

Procedural vs. Declarative Knowledge


Logic Programming
Forward vs. backward reasoning
Matching
Control knowledge

Slide 1
Procedural vs. Declarative Knowledge
• Declarative representation
–Knowledge is specified, but the way it is to be used is not given.
–Needs a program that specifies what is to be done to the knowledge and how.
–Example:
• Logical assertions and a resolution theorem prover
–A different view: logical assertions can be regarded as a program, rather than as data to a program.
=> Logical assertions = procedural representations of knowledge
Slide 2
Procedural vs. Declarative Knowledge
• Procedural representation
–The control information necessary to use the knowledge is considered to be embedded in the knowledge itself.
–Needs an interpreter that follows the instructions given in the knowledge.
–The real difference between the declarative and the procedural views of knowledge lies in where the control information resides.
• Kowalski's equation: Algorithm = Logic + Control

Slide 3
Procedural knowledge
– Knowledge about "how to do something"; e.g., to determine whether Peter or Robert is older, first find their ages.
– Focuses on the tasks that must be performed to reach a particular objective or goal.
– Examples: procedures, rules, strategies, agendas, models.

Declarative knowledge
– Knowledge about "that something is true or false"; e.g., a car has four tyres; Peter is older than Robert.
– Refers to representations of objects and events; knowledge about facts and relationships.
– Examples: concepts, objects, facts, propositions, assertions, semantic nets, logic and descriptive models.

Slide 4
Procedural vs. Declarative Knowledge
The real difference between the declarative and the procedural views of knowledge lies in where the control information resides. Example:
man(Marcus)
man(Caesar)
person(Cleopatra)
∀x: man(x) → person(x)
person(y)?
y is to be bound to a particular value for which person is true. Our knowledge base justifies any of the following answers:
y = Marcus
y = Caesar
y = Cleopatra

Slide 5
Procedural vs. Declarative Knowledge
•Because there is more than one value that satisfies the predicate, but only one value is needed, the answer depends on the order in which the assertions are examined.
•Declarative assertions do not themselves say how they will be examined. Viewed declaratively, y = Cleopatra is as good an answer as any.
•Viewed procedurally, the answer is Marcus. This happens because the first statement that matches the person goal is the inference rule ∀x: man(x) → person(x).

Slide 6
Procedural vs. Declarative Knowledge
•This rule sets up a subgoal: find a man. The statements are again examined from the beginning, and now Marcus is found to satisfy the subgoal and thus also the goal.
•So Marcus is reported as the answer.
•There is no clear-cut answer as to whether declarative or procedural knowledge representation frameworks are better.
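The order dependence described above can be sketched in Python. This is a hypothetical mini-resolver (not Prolog and not from the textbook): the same query person(y)? yields different answers depending on whether the person fact or the inference rule is scanned first.

```python
# Minimal sketch: the answer to person(y)? depends on the order in
# which assertions are examined. "RULE" stands for man(x) -> person(x).

facts = ["man(marcus)", "man(caesar)", "person(cleopatra)"]

def first_person(kb_order):
    """Return the first binding for person(y) under a given scan order."""
    for assertion in kb_order:
        if assertion.startswith("person("):       # a direct person fact
            return assertion[len("person("):-1]
        if assertion == "RULE":                   # man(x) -> person(x):
            for f in facts:                       # scan the man/1 facts
                if f.startswith("man("):
                    return f[len("man("):-1]
    return None

# Declarative-style order: the person fact is reached before the rule.
print(first_person(facts + ["RULE"]))  # -> cleopatra
# Prolog-like order with the rule tried first: Marcus is reported.
print(first_person(["man(marcus)", "man(caesar)", "RULE", "person(cleopatra)"]))  # -> marcus
```

The point is only that control (here, scan order) determines which of the equally justified answers is produced.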

Slide 7
Logic Programming
•Logic programming is a programming language paradigm in which logical assertions are viewed as programs.
•A PROLOG program is described as a series of logical assertions, each of which is a Horn clause.
Prolog program = {Horn clauses}
–Horn clause: a disjunction of literals of which at most one is a positive literal
p, ¬p ∨ q, and p → q are Horn clauses.
=> A Prolog program is decidable
–Control structure: Prolog interpreter = backward reasoning + depth-first search with backtracking
Slide 8
Logic Programming
• Logic:
∀X: pet(X) ∧ small(X) → apartmentpet(X)
∀X: cat(X) ∨ dog(X) → pet(X)
∀X: poodle(X) → dog(X) ∧ small(X)
poodle(fluffy)
• Prolog:
apartmentpet(X) :- pet(X), small(X).
pet(X) :- cat(X).
pet(X) :- dog(X).
dog(X) :- poodle(X).
small(X) :- poodle(X).
poodle(fluffy).
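The Prolog interpreter's control strategy (backward reasoning, depth-first, clauses tried in order) can be sketched in Python for this program. The representation below is a simplification limited to one-argument goals; the function names are illustrative, not a real Prolog API.

```python
# Backward-chaining sketch of the apartmentpet program above:
# each rule is (head, [body goals]), tried top to bottom, depth-first.

rules = [
    ("apartmentpet", ["pet", "small"]),
    ("pet", ["cat"]),
    ("pet", ["dog"]),
    ("dog", ["poodle"]),
    ("small", ["poodle"]),
]
facts = {("poodle", "fluffy")}

def prove(goal, x):
    """Try to prove goal(x), Prolog-style: facts first, then rules in order."""
    if (goal, x) in facts:
        return True
    for head, body in rules:
        if head == goal and all(prove(sub, x) for sub in body):
            return True
    return False

print(prove("apartmentpet", "fluffy"))  # True: fluffy is a poodle, hence a small dog
```

Note how pet(X) :- cat(X). fails first for fluffy, and the interpreter backtracks to the alternative clause pet(X) :- dog(X).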

Slide 9
Logic Programming
• Prolog vs. logic
–Quantification is provided implicitly by the way the variables are interpreted.
• Variables: begin with an UPPERCASE letter
• Constants: begin with a lowercase letter or a number
–There is an explicit symbol for AND (,), but none for OR. Instead, a disjunction must be represented as a list of alternative statements.
–"p implies q" is written as q :- p.

Slide 10
Logic Programming

Logical negation cannot be represented explicitly in pure Prolog.
– Example: ∀x: dog(x) → ¬cat(x) cannot be stated.
=> Problem-solving strategy: NEGATION AS FAILURE
?- cat(fluffy). => false, because the interpreter is unable to prove that Fluffy is a cat.
• Negation as failure requires the CLOSED WORLD ASSUMPTION, which states that all relevant, true assertions are contained in our knowledge base or derivable from assertions that are so contained.
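Negation as failure can be sketched by extending the little prover idea: a negated goal succeeds exactly when the positive goal cannot be proved. This is an illustrative Python sketch, with hypothetical names, of the closed-world behavior described above.

```python
# Negation as failure under the closed world assumption:
# whatever cannot be proved from the KB is treated as false.

facts = {("poodle", "fluffy")}
rules = [("dog", ["poodle"])]

def prove(goal, x):
    """True iff goal(x) is a fact or follows from a rule."""
    if (goal, x) in facts:
        return True
    return any(h == goal and all(prove(s, x) for s in body)
               for h, body in rules)

def naf(goal, x):
    """'not goal(x)' succeeds exactly when goal(x) cannot be proved."""
    return not prove(goal, x)

print(prove("cat", "fluffy"))  # False: Fluffy cannot be proved to be a cat
print(naf("cat", "fluffy"))    # True under the closed world assumption
```

If the KB is incomplete (the closed world assumption fails), naf can report "true" for things that are actually true in the world but missing from the KB.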

Slide 11
Forward vs. Backward Reasoning
•Reason backward from the goal states: Begin building a tree of move sequences that might be solutions by starting with the goal configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose right sides match the root node. Use the left sides of the rules to generate the nodes at this second level of the tree. Generate the next level of the tree by taking each node at the previous level and finding all the rules whose right sides match it. Then use the corresponding left sides to generate the new nodes. Continue until a node that matches the initial state is generated. This method of reasoning backward from the desired final state is often called goal-directed reasoning.

Slide 12
Forward vs. Backward Reasoning
• Forward: from the start states.
• Backward: from the goal states.
•Reason forward from the initial states: Begin building a tree of move sequences that might be solutions by starting with the initial configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose left sides match the root node, and use their right sides to create the new configurations. Continue until a configuration that matches the goal state is generated.
•Reason backward from the goal states: Begin building a tree of move sequences that might be solutions by starting with the goal configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose right sides match the root node.

Slide 13
Forward vs. Backward Reasoning
• Four factors influence whether to reason forward or backward:
–Move from the smaller set of states to the larger set of states.
–Proceed in the direction with the lower branching factor.
–Proceed in the direction that corresponds more closely with the way the user will think.
–Proceed in the direction that corresponds more closely with the way the problem-solving episodes will be triggered.

Slide 14
Forward vs. Backward Reasoning
• To encode the knowledge for reasoning, we need two kinds of rules:
– Forward rules: encode knowledge about how to respond to certain inputs.
– Backward rules: encode knowledge about how to achieve particular goals.

Slide 15
KR Using rules
IF . . THEN rules
ECA (Event Condition Action) rules
APPLICATION EXAMPLES
1. If a flammable liquid was spilled, call the fire department.
2. If the pH of the spill is less than 6, the spill material is an acid.
3. If the spill material is an acid, and the spill smells like vinegar, the spill material is acetic acid.
(IF . . THEN statements are used to represent rules)
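The match-execute cycle the figures below illustrate can be sketched directly from the three spill rules. This is a minimal forward-chaining sketch in Python; the fact strings and rule encoding are illustrative, not from a real production-system shell.

```python
# Match-execute cycle for the spill example: repeatedly fire any rule
# whose preconditions all hold and whose conclusion is not yet known.

facts = {"flammable liquid was spilled",
         "pH of the spill is < 6",
         "spill smells like vinegar"}

rules = [  # (preconditions, conclusion)
    ({"pH of the spill is < 6"}, "spill material is an acid"),
    ({"spill material is an acid", "spill smells like vinegar"},
     "spill material is acetic acid"),
    ({"flammable liquid was spilled"}, "fire department is called"),
]

changed = True
while changed:            # one pass per match-execute cycle
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # executing a rule adds a new fact
            changed = True

print("spill material is acetic acid" in facts)  # True
```

Facts added by one cycle (the acid conclusion) enable rules on the next cycle (the acetic acid rule), exactly as Figs. 2 and 3 show.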
Fig. 1: The rule interpreter cycles through a match-execute sequence between the fact base and the rule base.
Fig. 2: Rule execution can modify the facts in the knowledge base. The rule "If the pH of the spill is less than 6, the spill material is an acid" matches the fact "the pH of the spill is < 6" and adds the new fact "the spill material is an acid" to the KB.
Fig. 3: Facts added by rules can match other rules. The new fact "the spill material is an acid", together with "spill smells like vinegar", matches the rule "If the spill material is an acid and the spill smells like vinegar, the spill material is acetic acid", adding "the spill material is acetic acid".
Fig. 4: Rule execution can affect the real world. The fact "a flammable liquid was spilled" matches "If a flammable liquid was spilled, call the fire department", and executing the rule calls the fire department.
Fig. 5: Inference chain for inferring the spill material: "the pH of the spill is < 6" yields "the spill material is an acid"; that fact plus "spill smells like vinegar" yields "the spill material is acetic acid".


Fig. 6: An example of forward chaining with rules F & B → Z, C & D → F, A → D and initial facts A, B, C (plus E, G, H). Matching A → D adds D; matching C & D → F adds F; matching F & B → Z adds Z.
Fig. 7: Inference chain produced by Fig. 6: A → D; C, D → F; F, B → Z.
Fig. 8: An example of backward chaining with the same rules. Want Z: by F & B → Z, B is already a fact, so we need F. Want F: by C & D → F, C is a fact, so we need D. Want D: by A → D, A is a fact, so D is established, then F, then Z.


Matching
• How do we extract, from the entire collection of rules, those that can be applied at a given point?
=> Matching between the current state and the preconditions of the rules.
Indexing
• One way to select applicable rules is to do a simple search through all the rules, comparing each one's preconditions to the current state and extracting all the ones that match. But there are two problems with this simple solution:
• A large number of rules will be necessary, and scanning through all of them at every step of the search would be hopelessly inefficient.
• It is not always immediately obvious whether a rule's preconditions are satisfied by a particular state.

Slide 25
Matching
• Indexing
–A large number of rules => too slow to find a rule.
–Indexing: use the current state as an index into the rules and select the matching ones immediately.
–There is a trade-off between the ease of writing rules (high-level descriptions) and the simplicity of the matching process.
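A minimal indexing sketch in Python: rules are bucketed by the predicate of their first precondition, so matching only inspects the relevant bucket instead of scanning every rule. The rule format here is a hypothetical simplification.

```python
# Indexing sketch: a dict maps each triggering predicate to the rules
# it can fire, so a new "poodle" fact retrieves only two rules.

from collections import defaultdict

rules = [  # (head, body predicates)
    ("pet", ("cat",)),
    ("pet", ("dog",)),
    ("dog", ("poodle",)),
    ("small", ("poodle",)),
]

index = defaultdict(list)
for head, body in rules:
    index[body[0]].append((head, body))   # key on the first precondition

# Only the rules triggered by poodle facts are retrieved:
print(index["poodle"])  # [('dog', ('poodle',)), ('small', ('poodle',))]
```

The trade-off mentioned above shows up here: the flatter and more uniform the rule format, the easier it is to build such an index, but the harder the rules are to write by hand.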

Slide 26
Matching
– RETE gains efficiency from three major sources:
– The temporal nature of data. Rules usually do not alter the state description radically. Instead, a rule will add one or two elements, or delete one or two elements, but otherwise the state remains the same. RETE maintains a network of rule conditions, and it uses changes in the state description to determine which new rules might apply.
– Structural similarity in rules. E.g., one rule concludes jaguar(x) if mammal(x), feline(x), carnivorous(x), and has-spots(x). Another rule concludes tiger(x) and is identical to the first rule except that it replaces has-spots with has-stripes. If the two rules are matched independently, a lot of work is repeated unnecessarily. RETE stores rules so that they share structure in memory; sets of conditions that appear in several rules are matched once per cycle.

Slide 27
Matching
– Persistence of variable binding consistency: while all the individual preconditions of a rule might be met, there may be variable binding conflicts that prevent the rule from firing. Given son(Mary, Joe) and son(Bill, Bob), the individual preconditions of the rule
son(x, y) ∧ son(y, z) → grandparent(x, z)
can be matched, but not in a manner that satisfies the constraint imposed by the shared variable y.
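The binding-consistency check can be sketched in Python: each son precondition matches some fact on its own, but the rule fires only when the two matches agree on y. The representation and helper name are illustrative.

```python
# son(x, y) and son(y, z) each match a fact individually, but the
# grandparent rule fires only for bindings where y is consistent.

facts = [("son", "mary", "joe"), ("son", "bill", "bob")]

def grandparents(facts):
    """Bindings (x, z) where son(x, y) and son(y, z) share the same y."""
    sons = [(a, b) for p, a, b in facts if p == "son"]
    return [(x, z) for x, y1 in sons for y2, z in sons if y1 == y2]

print(grandparents(facts))  # []: no consistent binding for y
print(grandparents(facts + [("son", "joe", "ann")]))  # [('mary', 'ann')]
```

With only the two original facts, every pairing violates the y constraint, so the conflict set for this rule is empty even though each precondition matches.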

Slide 28
Matching
• Approximate matching
–Rules should be applied if their preconditions approximately match the current situation.
–Example: a speech-understanding program
• Rules: map a description of a physical waveform to phones (a, e, ...)
• Physical signal: varies because of differences in the way individuals speak, background noise, ...

Slide 29
Matching
ELIZA (Weizenbaum, 1966):
Person: Men are all alike.
ELIZA: In what way?
Person: They're always bugging us about something or other.
ELIZA: Can you think of a specific example?
Person: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here.
Person: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.

Slide 30
Matching
ELIZA (Weizenbaum, 1966):
Person: It's true. I am unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
Person: I need some help, that much seems certain.
ELIZA: What would it mean to you if you got some help?
Person: Perhaps I could learn to get along with my brother.
ELIZA: Tell me more about your family.
....

Slide 31
Matching
ELIZA (Weizenbaum, 1966):
(X me Y) → (X you Y)
(I remember X) → (Why do you remember X just now?)
(My {family-member} is Y) → (Who else in your family is Y?)
(X {family-member} Y) → (Tell me more about your family)
....
Person: My brother is mean to me.
ELIZA: Who else in your family is mean to you?
....
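ELIZA-style rules can be sketched with regular expressions. This is not Weizenbaum's original program: the patterns below paraphrase the slide's rules, the family-member list is truncated for illustration, and the function names are hypothetical.

```python
# ELIZA-style pattern matching: try rules in order, reinsert the matched
# segments into a response template, and reflect first-person pronouns.

import re

FAMILY = "father|mother|brother|sister"

rules = [  # (pattern, response template); \1, \2 reinsert matched text
    (rf"my ({FAMILY}) is (.*)", r"Who else in your family is \2?"),
    (rf".*\b({FAMILY})\b.*",    "Tell me more about your family"),
    (r"i remember (.*)",        r"Why do you remember \1 just now?"),
]

def respond(sentence):
    s = sentence.lower().rstrip(".!?")
    for pattern, template in rules:
        m = re.fullmatch(pattern, s)
        if m:
            reply = m.expand(template)
            # (X me Y) -> (X you Y): swap first-person words in the echo
            reply = re.sub(r"\bme\b", "you", reply)
            return re.sub(r"\bmy\b", "your", reply)
    return "Please go on."

print(respond("My brother is mean to me."))  # Who else in your family is mean to you?
```

The matching here is purely syntactic: ELIZA has no understanding of the input, only templates keyed to surface patterns.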

Slide 32
Matching
Conflict resolution:
The result of the matching process is a list of rules whose antecedents have matched. Conflict resolution strategies choose which of them to fire:
–Preferences based on rules:
• Specificity of rules
• Physical order of rules
–Preferences based on objects:
• Importance of objects
• Position of objects
–Preferences based on actions:
• Evaluation of states

Slide 33
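One common rule-based preference, specificity with ties broken by physical order, can be sketched in Python on the spill example. The encoding is illustrative; real production systems combine several such strategies.

```python
# Conflict resolution sketch: among the rules whose antecedents matched,
# prefer the most specific rule (most preconditions); earlier rules win ties.

facts = {"spill material is an acid", "spill smells like vinegar"}

rules = [  # (preconditions, conclusion), in physical order
    ({"spill material is an acid"}, "treat as generic acid"),
    ({"spill material is an acid", "spill smells like vinegar"},
     "treat as acetic acid"),
]

# The conflict set: all rules whose preconditions are satisfied.
conflict_set = [(i, pre, concl) for i, (pre, concl) in enumerate(rules)
                if pre <= facts]
# Most preconditions first; lower index (earlier rule) breaks ties.
best = max(conflict_set, key=lambda r: (len(r[1]), -r[0]))
print(best[2])  # treat as acetic acid
```

Both rules match, but the two-condition acetic-acid rule is more specific, so it is chosen over the generic one.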
Control Knowledge
Knowledge about which paths are most likely to lead quickly to a goal state is often called search control knowledge. It includes:
– Which states are preferable to others.
– Which rule to apply in a given situation.
– The order in which to pursue subgoals.
– Useful sequences of rules to apply.

Search control knowledge = meta-knowledge

Slide 34
Control Knowledge
A number of AI systems represent their control knowledge with rules. Examples: SOAR, PRODIGY.
SOAR is a general architecture for building intelligent systems.

Slide 35
Control Knowledge
PRODIGY is a general-purpose problem-solving system that incorporates several different learning mechanisms.
It can acquire control rules in a number of ways:
– Through hand coding by programmers.
– Through a static analysis of the domain's operators.
– Through looking at traces of its own problem-solving behavior.
PRODIGY learns control rules from its experience, but unlike SOAR, it learns from its failures.
If PRODIGY pursues an unfruitful path, it will try to come up with an explanation of why that path failed. It will then use that explanation to build control knowledge that will help it avoid fruitless search paths in the future.
Slide 36
Control Knowledge
Two issues concerning control rules:
• The first issue is called the utility problem. As we add more and more control knowledge to a system, the system is able to search more judiciously. But if there are many control rules, simply matching them all can be very time-consuming.
• The second issue concerns the complexity of the production system interpreter.

Slide 37
