KE Unit 4 Notes
UNIT - 4 NOTES
In Chapter 4, we presented the problem reduction and solution synthesis paradigm. In this
chapter, we will present how a knowledge-based agent can employ this paradigm to solve
problems and assess hypotheses.
Figure 7.1 shows the architecture of the agent, which is similar to that of a production
system (Waterman and Hayes-Roth, 1978). The knowledge base is the long-term memory,
which contains an ontology of concepts and a set of rules expressed with these concepts.
When the user formulates an input problem, the problem reduction and solution synthesis
inference engine applies the learned rules from the knowledge base to develop a problem
reduction and solution synthesis tree, as was discussed in Chapter 4. This tree is developed
in the reasoning area, which plays the role of the short-term memory.
The ontology from the knowledge base describes the types of objects (or concepts) in
the application domain, as well as the relationships between them. Also included are the
instances of these concepts, together with their properties and relationships.
The rules are IF-THEN structures that indicate the conditions under which a general
problem (or hypothesis) can be reduced to simpler problems (hypotheses), or the solu-
tions of the simpler problems (hypotheses) can be combined into the solution of the more
complex problem (hypothesis).
The applicability conditions of these rules are complex concepts that are expressed by
using the basic concepts and relationships from the ontology, as will be discussed in
Section 7.2. The reduction and synthesis rules will be presented in Section 7.3.

[Figure 7.1. The architecture of the agent (diagram not reproduced): the knowledge base contains an ontology fragment (university employee, staff member, faculty member, instructor, professor, PhD advisor, graduate and undergraduate student, with instances such as John Smith, John Doe, Jane Austin, Joan Dean, and Bob Sharp) and reduction rules of the form "IF the problem to solve is P1g ... THEN solve its sub-problems," each with a main Condition and possible Except-When Conditions; the reasoning area holds the reduction and synthesis tree; the figure also shows the Ontology Development and Rule Learning module, the Solution Browsing module, and the output solution.]

This section
will also present the overall reduction and synthesis algorithm. Section 7.4 will present the
simplified reduction and synthesis rules used for evidence-based hypotheses analysis, and
Section 7.5 will present the rule and ontology matching process. Finally, Section 7.6 will
present the representation of partially learned knowledge, and Section 7.7 will present the
reasoning with this type of knowledge.
7.2. Complex Ontology-based Concepts

Using the concepts and the features from the ontology, one can define more complex
concepts as logical expressions involving these basic concepts and features. For example,
the concept "PhD student interested in an area of expertise" may be expressed as
shown in [7.1]:

?O1    is PhD student
       is interested in ?O2
?O2    is area of expertise                                                    [7.1]
Because a concept represents a set of instances, the user can interpret the preceding
concept as representing the set of instances of the tuple (?O1, ?O2), which satisfy the
expression [7.1], that is, the set of tuples where the first element is a student and the
second one is the research area in which the student is interested. For example, Bob
Sharp, a PhD student interested in artificial intelligence, is an instance of the concept [7.1].
Indeed, the following expression is true:

Bob Sharp                  is PhD student
                           is interested in Artificial Intelligence
Artificial Intelligence    is area of expertise
In general, the basic representation unit (BRU) for a more complex concept has the
form of a tuple (?O1, ?O2, . . . , ?On), where each ?Oi has the structure indicated by [7.2],
called a clause:

?Oi    is Concepti
       featurei1 ?Oi1
       . . .
       featureim ?Oim                                                          [7.2]
Concepti is either an object concept from the object ontology (such as PhD student), a
numeric interval (such as [50 , 60]), a set of numbers (such as {1, 3, 5}), a set of strings
(such as {white, red, blue}), or an ordered set of intervals (such as (youth, mature)). ?Oi1 . . .
?Oim are distinct variables from the sequence (?O1, ?O2, . . . , ?On).
A concept may be a conjunctive expression of form [7.3], meaning that any instance of
the concept satisfies BRU and does not satisfy BRU1 and . . . and does not satisfy BRUp.
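To make this notation concrete, the following is a minimal Python sketch of one possible way to represent clauses, BRUs, and conjunctive concepts of form [7.3]. The class names (Clause, BRU, Concept) and the encoding are illustrative assumptions, not the actual Disciple-EBR data structures.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Clause:
    """One clause of a BRU: ?Oi is Concepti, plus features pointing to other variables."""
    var: str                                   # e.g., "?O1"
    concept: str                               # e.g., "PhD student" or "area of expertise"
    features: List[Tuple[str, str]] = field(default_factory=list)
    # each feature is (feature name, variable), e.g., ("is interested in", "?O2")

@dataclass
class BRU:
    """A basic representation unit: a conjunction of clauses over (?O1, ..., ?On)."""
    clauses: List[Clause]

@dataclass
class Concept:
    """A concept of form [7.3]: instances satisfy `positive` and none of `except_when`."""
    positive: BRU
    except_when: List[BRU] = field(default_factory=list)

# Expression [7.1], "PhD student interested in an area of expertise":
phd_student_interested = Concept(
    positive=BRU([
        Clause("?O1", "PhD student", [("is interested in", "?O2")]),
        Clause("?O2", "area of expertise"),
    ])
)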
For example, expression [7.5] represents the concept “PhD student interested in an area of
expertise that does not require programming.”
7.3. Reduction and Synthesis Rules

An agent can solve problems through reduction and synthesis by using problem reduction
rules and solution synthesis rules. The rules are IF-THEN structures that indicate the
conditions under which a general problem can be reduced to simpler problems, or the
solutions of the simpler problems can be combined into the solution of the more complex
problem.
The general structure of a problem reduction rule Ri is shown in Figure 7.2. This rule
indicates that solving the problem P can be reduced to solving the simpler problems
Pi1, . . ., Pini, if certain conditions are satisfied. These conditions are expressed in two
equivalent forms: one as a question/answer pair in natural language that is easily under-
stood by the user of the agent, and the other as a formal applicability condition expressed
as a complex concept having the form [7.4], as discussed in the previous section.
Consequently, there are two interpretations of the rule in Figure 7.2:
(1) If the problem to solve is P, and the answer to the question QRi is ARi, then one
can solve P by solving the subproblems Pi1, . . . , Pini.
(2) If the problem to solve is P, and the condition CRi is satisfied, and the conditions
ERi1, . . ., ERiki are not satisfied, then one can solve P by solving the subproblems
Pi1, . . . , Pini.
[Figure 7.2. The general structure of a reduction rule Ri: IF the problem to solve is P; Question QRi / Answer ARi; Main Condition CRi; Except-When Conditions ERi1, . . . , ERiki; THEN solve Subproblem Pi1, . . . , Subproblem Pini.]
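The rule structure just summarized can also be sketched as a data structure. The following is a hypothetical Python encoding (the name ReductionRule and its fields are assumptions), showing the question/answer pair, the formal main condition, the Except-When conditions, and the subproblems:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReductionRule:
    """IF the problem to solve is `problem`, and `main_condition` is satisfied, and none of
    the `except_when` conditions is satisfied, THEN solve the `subproblems`."""
    problem: str                       # the IF problem pattern P
    question: str                      # QRi: the condition as a natural-language question
    answer: str                        # ARi: the natural-language answer
    main_condition: object             # a complex concept of form [7.4] (e.g., the Concept sketch above)
    except_when: List[object] = field(default_factory=list)    # Except-When conditions
    subproblems: List[str] = field(default_factory=list)       # Pi1, ..., Pini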
An example of a problem reduction rule is shown in Figure 7.3. It reduces the IF problem
to two simpler problems. This rule has a main condition and no Except-When conditions.
Notice also that the main condition is expressed as the concept (?O1, ?O2, ?O3, ?O4).

[Figure 7.3. An example of a reduction rule from the center of gravity analysis domain (not fully reproduced). Its Main Condition is: ?O1 is industrial economy; ?O2 is industrial capacity, generates essential war material from the strategic perspective of ?O3; ?O3 is multistate force, has as member ?O4; ?O4 is force, has as economy ?O1, has as industrial factor ?O2.]
As discussed in Section 4.2 and illustrated in Figure 4.6 (p. 116), there are two synthesis
operations associated with a reduction operation: a reduction-level synthesis and a
problem-level synthesis. These operations are performed by employing two solution
synthesis rules that are tightly coupled with the problem reduction rule, as illustrated in
Figure 7.4 and explained in the following paragraphs.
For each problem reduction rule Ri that reduces a problem P to the subproblems
Pi1, . . . , Pini (see the left-hand side of Figure 7.4), there is a reduction-level solution
synthesis rule (see the upper-right-hand side of Figure 7.4). This reduction-level solution
synthesis rule is an IF-THEN structure that expresses the condition under which the
solutions Si1, . . . , Sini of the subproblems Pi1, . . . , Pini of the problem P can be combined
into the solution Si of P corresponding to the rule Ri.
Let us now consider all the reduction rules Ri, . . . , Rm that reduce problem P to simpler
problems, and the corresponding synthesis rules SRi, . . . , SRm. In a given situation, some
of these rules will produce solutions of P, such as the following ones: Based on Ri, the
solution of P is Si, . . . , and based on Rm, the solution of P is Sm. The synthesis rule SP
corresponding to the problem P combines all these rule-specific solutions of the problem
P (named Si, . . . , Sm) into the solution S of P (see the bottom-right side of Figure 7.4).
The problem-solving algorithm that builds the reduction and synthesis tree is pre-
sented in Table 7.1.
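Table 7.1 is not reproduced in these notes. The following self-contained Python sketch conveys the general shape of such a recursive reduction and synthesis procedure; the toy knowledge base, the helper names, and the numeric "solutions" are purely illustrative assumptions, not the actual algorithm:

# Toy knowledge base of reduction rules and elementary solutions (assumed for illustration).
RULES = {
    # problem -> list of (subproblems, reduction-level synthesis function)
    "assess hypothesis H": [
        (["assess subhypothesis H1", "assess subhypothesis H2"], min),
    ],
}
ELEMENTARY = {                        # solutions found directly (e.g., from evidence)
    "assess subhypothesis H1": 3,     # toy numeric "solutions"
    "assess subhypothesis H2": 5,
}
PROBLEM_LEVEL_SYNTHESIS = max         # combines the rule-specific solutions of a problem

def solve(problem):
    """Build the reduction and synthesis tree for `problem` and return its solution."""
    if problem in ELEMENTARY:
        return ELEMENTARY[problem]
    rule_specific_solutions = []
    for subproblems, synthesize in RULES.get(problem, []):
        subsolutions = [solve(p) for p in subproblems]           # solve each subproblem recursively
        rule_specific_solutions.append(synthesize(subsolutions))  # reduction-level synthesis
    return PROBLEM_LEVEL_SYNTHESIS(rule_specific_solutions)       # problem-level synthesis

print(solve("assess hypothesis H"))   # -> 3 (min of 3 and 5, then max over the single rule)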
7.4. Reduction and Synthesis Rules for Evidence-based Hypothesis Analysis

In Section 4.3, we discussed the specialization of inquiry-driven analysis and synthesis for evidence-based reasoning, where one assesses the probability of hypotheses based on evidence, such as the following one:

John Doe would be a good PhD advisor for Bob Sharp.

Moreover, the assessment of any such hypothesis has the form "It is <probability> that H," which can be abstracted to the actual probability. The following assessment, for example, can be abstracted to "very likely":

It is very likely that John Doe would be a good PhD advisor for Bob Sharp.
As a result, the solution synthesis rules have a simplified form, as shown in Figure 7.5. In
particular, notice that the condition of a synthesis rule is reduced to computing the
probability of the THEN solution based on the probabilities for the IF solutions.
In the current implementation of Disciple-EBR, the function for a reduction-level
synthesis rule SR could be one of the following (as discussed in Section 4.3): min, max,
likely indicator, very likely indicator, almost certain indicator, or on balance. The
function for a problem-level synthesis can be only min or max.
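As an illustration, these combination functions might operate on an ordered scale of symbolic probabilities roughly as sketched below; the scale values and the readings of "indicator" and "on balance" are assumptions for illustration only, not the exact Disciple-EBR definitions:

# An assumed ordered scale of symbolic probabilities (for illustration).
SCALE = ["no support", "likely", "very likely", "almost certain", "certain"]
RANK = {p: i for i, p in enumerate(SCALE)}

def p_min(probs):                       # "min": as weak as the weakest subsolution
    return min(probs, key=RANK.get)

def p_max(probs):                       # "max": as strong as the strongest subsolution
    return max(probs, key=RANK.get)

def indicator(ceiling):
    """One plausible reading of the 'indicator' functions: the combined support of the
    subsolutions, capped at the indicator's strength ('likely', 'very likely', ...)."""
    def combine(probs):
        return p_min(probs + [ceiling])
    return combine

likely_indicator = indicator("likely")
very_likely_indicator = indicator("very likely")
almost_certain_indicator = indicator("almost certain")

def on_balance(probs):
    """An assumed stand-in for 'on balance': here, the median of the subsolution ranks."""
    ranks = sorted(RANK[p] for p in probs)
    return SCALE[ranks[len(ranks) // 2]]

print(p_min(["certain", "very likely"]))              # very likely
print(likely_indicator(["certain"]))                  # likely
print(on_balance(["likely", "certain", "certain"]))   # certain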
Figure 7.6 shows an example of a reduction rule and the corresponding synthesis rules.
The next section will discuss how such rules are actually applied in problem solving.
7.5. Rule and Ontology Matching

Consider the assessment of the hypothesis "Bob Sharp is interested in an area of expertise of John Doe." The agent (i.e., its inference engine) will look for all the reduction rules from the knowledge base with an IF hypothesis that matches this hypothesis. Such a rule is the
one from the right-hand side of Figure 7.7. As one can see, the IF hypothesis becomes
identical with the hypothesis to be solved if ?O1 is replaced with Bob Sharp and ?O2 is
replaced with John Doe. The rule is applicable if the condition of the rule is satisfied for
these values of ?O1 and ?O2.
[Figure 7.5. Reduction and synthesis rules for evidence-based hypotheses analysis (only partially reproduced). The problem-level synthesis rule has the form: IF "Based on Ri it is <probability pi> that H," . . . , "Based on Rm it is <probability pm> that H," THEN compute the probability of H from pi, . . . , pm.]
The partially instantiated rule is shown in the right-hand side of Figure 7.8. The agent
has to check that the partially instantiated condition of the rule can be satisfied. This
condition is satisfied if there is any instance of ?O3 in the object ontology that satisfies all
the relationships specified in the rule’s condition, which is shown also in the left-hand side
of Figure 7.8. ?SI1 is an output variable that is given the value certain, without being
constrained by the other variables.
The partially instantiated condition of the rule, shown in the left-hand side of Fig-
ures 7.8 and 7.9, is matched successfully with the ontology fragment shown in the right-
hand side of Figure 7.9. The questions are: How is this matching performed, and is it
efficient?
John Doe from the rule’s condition (see the left-hand side of Figure 7.9) is matched with
John Doe from the ontology (see the right-hand side of Figure 7.9).
Following the feature is expert in, ?O3 has to match Artificial Intelligence.
This matching is successful because both ?O3 and Artificial Intelligence are areas of
expertise, and both are the values of the feature is interested in of Bob Sharp. Indeed,
Artificial Intelligence is an instance of Computer Science, which is a subconcept of area
of expertise.
Figure 7.6. Examples of reduction and synthesis rules for evidence-based hypotheses analysis (figure not reproduced).

[Figure 7.7. The hypothesis to assess, "Bob Sharp is interested in an area of expertise of John Doe," the relevant ontology fragment, and the matching reduction rule, with the answer "Yes, ?O3." and the Condition:
?O1    is PhD student
       is interested in ?O3
?O2    is PhD advisor
       is expert in ?O3
?O3    is area of expertise
?SI1   is-in [certain - certain]]
[Figure 7.8. The partially instantiated rule: the hypothesis "Bob Sharp is interested in an area of expertise of John Doe," the conclusion "THEN conclude: It is ?SI1 that Bob Sharp is interested in an area of expertise of John Doe," and the instantiations ?O1 ⟵ Bob Sharp, ?O2 ⟵ John Doe, ?O3 ⟵ Artificial Intelligence, ?SI1 ⟵ certain.]
As the result of this matching, the rule’s ?O3 variable is instantiated to Artificial
Intelligence:
?O3 ⟵ Artificial Intelligence
Also, ?SI1 will take the value certain, which is one of the values of probability, as
constrained by the rule's condition.
The matching is very efficient because the structure used to represent knowledge (i.e.,
the ontology) is also a guide for the matching process, as was illustrated previously and
discussed in Section 5.10.
Thus the rule's condition is satisfied for the following instantiations of the variables:
?O1 ⟵ Bob Sharp, ?O2 ⟵ John Doe, ?O3 ⟵ Artificial Intelligence, ?SI1 ⟵ certain.
Therefore, the rule can be applied to reduce the IF hypothesis to an assessment. This
entire process is summarized in Figure 7.10, as follows:
(1) The hypothesis to assess is matched with the IF hypothesis of the rule, leading to
the instantiations of ?O1 and ?O2.
(2) The corresponding instantiation of the rule’s condition is matched with the
ontology, leading to instances for all the variables of the rule.
(3) The question/answer pair and the THEN part of the rule are instantiated, gener-
ating the following reasoning step shown also in the left-hand side of Figure 7.10:
Hypothesis to assess: Bob Sharp is interested in an area of expertise of John Doe.
Assessment: It is certain that Bob Sharp is interested in an area of expertise of John Doe.

[Figure 7.10. Hypothesis assessment through rule and ontology matching: (1) the hypothesis to assess is matched with the rule's IF hypothesis, binding ?O1 ⟵ Bob Sharp and ?O2 ⟵ John Doe; (2) the rule's condition (?O1 is PhD student, is interested in ?O3; ?O2 is PhD advisor, is expert in ?O3; ?O3 is area of expertise; ?SI1 is-in [certain - certain]) is matched with the ontology, binding ?O3 ⟵ Artificial Intelligence and ?SI1 ⟵ certain; (3) the instantiated assessment is generated.]
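A minimal sketch of how such ontology-guided condition matching could be implemented is given below; the toy ontology mirrors the example in the text, but the representation and the matching procedure are assumptions, not the Disciple-EBR matcher:

SUBCONCEPT_OF = {                       # child concept -> parent concept (assumed fragment)
    "PhD student": "student",
    "Computer Science": "area of expertise",
}
INSTANCE_OF = {                         # instance -> its most specific concept
    "Bob Sharp": "PhD student",
    "John Doe": "PhD advisor",
    "Artificial Intelligence": "Computer Science",
}
FEATURES = {                            # (subject, feature name) -> set of values
    ("Bob Sharp", "is interested in"): {"Artificial Intelligence"},
    ("John Doe", "is expert in"): {"Artificial Intelligence"},
}

def is_a(item, concept):
    """True if `item` is an instance of `concept` or of one of its subconcepts."""
    c = INSTANCE_OF.get(item, item)
    while c is not None:
        if c == concept:
            return True
        c = SUBCONCEPT_OF.get(c)
    return False

# The rule's condition, written as (variable, concept, [(feature, value variable), ...]).
CONDITION = [
    ("?O1", "PhD student", [("is interested in", "?O3")]),
    ("?O2", "PhD advisor", [("is expert in", "?O3")]),
    ("?O3", "area of expertise", []),
]

def match(clauses, bindings):
    """Yield every extension of `bindings` that satisfies all the clauses."""
    if not clauses:
        yield dict(bindings)
        return
    (var, concept, feats), rest = clauses[0], clauses[1:]
    candidates = [bindings[var]] if var in bindings else list(INSTANCE_OF)
    for item in candidates:
        if not is_a(item, concept):
            continue
        new = {**bindings, var: item}
        ok = True
        for feature, value_var in feats:
            values = FEATURES.get((item, feature), set())
            if value_var in new:
                ok = new[value_var] in values
            elif values:
                new[value_var] = next(iter(values))   # simplification: take one value
            else:
                ok = False
            if not ok:
                break
        if ok:
            yield from match(rest, new)

# Step (1) already bound ?O1 and ?O2 by matching the hypothesis with the IF hypothesis;
# ?SI1 is an output variable that simply takes the value "certain".
for b in match(CONDITION, {"?O1": "Bob Sharp", "?O2": "John Doe"}):
    print(b)   # {'?O1': 'Bob Sharp', '?O2': 'John Doe', '?O3': 'Artificial Intelligence'}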
In the preceding example, the agent has found one instance for each rule variable, which
has led to a solution. What happens if it cannot find instances for all the rule’s variables?
In such a case, the rule is not applicable.
But what happens if it finds more than one set of instances? In that case, the agent will
generate an assessment for each distinct set of instances.
What happens if there is more than one applicable reduction rule? In such a case, the
agent will apply each of them to find all the possible reductions.
All the obtained assessments will be combined into the final assessment of the hypoth-
esis, as was discussed in the previous section.
Figure 7.11 shows the successive applications of two reduction rules to assess an initial
hypothesis. Rule 1 reduces it to three subhypotheses. Then Rule 2 finds the assessment of
the first subhypothesis.
[Figure 7.11. Successive rule applications. Rule 1 asks, "Which are the necessary conditions?" and answers, "Bob Sharp should be interested in an area of expertise of John Doe, who should stay on the faculty of George Mason University for the duration of the dissertation of Bob Sharp and should have the qualities of a good PhD advisor." It reduces the initial hypothesis to three subhypotheses: (1) Bob Sharp is interested in an area of expertise of John Doe; (2) John Doe will stay on the faculty of George Mason University for the duration of the dissertation of Bob Sharp; (3) John Doe would be a good PhD advisor with respect to the PhD advisor quality criterion. Rule 2 then assesses the first subhypothesis.]
7.6. Partially Learned Knowledge

A partially learned concept can be represented as a version space defined by two bounds: the upper bound contains the most general concepts from the version space, and the lower bound contains the least general
concepts from the version space. Any concept that is more general than (or as general as) a
concept from the lower bound and less general than (or as general as) a concept from the
upper bound is part of the version space and may be the actual concept to be learned.
Therefore, a version space may be regarded as a partially learned concept.
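The membership test described above can be sketched as follows; the toy subconcept hierarchy and the helper names are assumptions used only to illustrate the "more general than / less general than" checks:

# Assumed toy subconcept hierarchy: child concept -> parent concept.
PARENT = {
    "assistant professor": "professor",
    "associate professor": "professor",
    "professor": "faculty member",
    "faculty member": "university employee",
    "graduate student": "student",
    "student": "person",
    "university employee": "person",
}

def at_least_as_general(general, specific):
    """True if `general` is `specific` or an ancestor of it in the hierarchy."""
    c = specific
    while c is not None:
        if c == general:
            return True
        c = PARENT.get(c)
    return False

def in_version_space(concept, lower_bound, upper_bound):
    """Concept must be >= some lower-bound element and <= some upper-bound element."""
    return (any(at_least_as_general(concept, lb) for lb in lower_bound) and
            any(at_least_as_general(ub, concept) for ub in upper_bound))

# Example: with lower bound {associate professor} and upper bound {university employee},
# "professor" is in the version space, while "student" is not.
print(in_version_space("professor", {"associate professor"}, {"university employee"}))  # True
print(in_version_space("student", {"associate professor"}, {"university employee"}))    # False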
The version spaces built by Disciple-EBR during the learning process are called plaus-
ible version spaces because their upper and lower bounds are generalizations based on an
incomplete ontology. Therefore, a plausible version space is only a plausible approxima-
tion of the concept to be learned, as illustrated in Figure 7.12.
The plausible upper bound of the version space from the right-hand side of Figure 7.12
contains two concepts: “a faculty member interested in an area of expertise” (see expres-
sion [7.6]) and “a student interested in an area of expertise” (see expression [7.7]).
The plausible lower bound of this version space also contains two concepts, “an
associate professor interested in Computer Science,” and “a graduate student interested
in Computer Science.”
The concept to be learned (see the left side of Figure 7.12) is, as an approximation, less
general than one of the concepts from the plausible upper bound, and more general than
one the concepts from the plausible lower bound.
The notion of plausible version space is fundamental to the knowledge representation,
problem-solving, and learning methods of Disciple-EBR because all the partially learned
concepts are represented using this construct, as discussed in the following.
The concepts in these bounds need not be single concepts from the ontology; they could also be complex concepts of the form shown in Section 7.2. Moreover, in the case of
partially learned features, they are plausible version spaces, as illustrated in Figure 5.7 (p. 159).
As will be discussed in Section 9.10, the agent learns general hypotheses with applicability conditions from specific hypotheses, such as the hypothesis "John Doe is a potential PhD advisor for Bob Sharp" in [7.8]. An example of a general hypothesis learned from [7.8] is shown in [7.9].
Name [7.9]
?O1 is a potential PhD advisor for ?O2.
Condition
?O1 instance of faculty member
?O2 instance of person
The condition is a concept that, in general, may have the form [7.4] (p. 203). The purpose
of the condition is to ensure that the hypothesis makes sense for each hypothesis instanti-
ation that satisfies it. For example, the hypothesis from [7.8] satisfies the condition in [7.9]
because John Doe is a faculty member and Bob Sharp is a person. However, the hypothesis
instance in [7.10] does not satisfy the condition in [7.9] because 45 is not a faculty member.
The condition in [7.9] will prevent the agent from generating the instance in [7.10].
A partially learned hypothesis will have a plausible version space condition, as illus-
trated in [7.11].
Name [7.11]
?O1 is a potential PhD advisor for ?O2.
Plausible Upper Bound Condition
?O1 instance of person
?O2 instance of person
Plausible Lower Bound Condition
?O1 instance of {associate professor, PhD advisor}
?O2 instance of PhD student
For example, a plausible lower bound may allow ?O2 to be only a PhD advisor or an associate professor who is an expert in artificial intelligence, while the upper bound allows ?O2 to be any person who is an expert in any area of expertise ?O3 in which ?O1 is interested.
7.7. Reasoning with Partially Learned Knowledge

A learning agent should be able to reason with partially learned knowledge. Figure 7.14
shows an abstract rule with a partially learned applicability condition. It includes a Main
plausible version space condition (in light and dark green) and an Except-When plausible
version space condition (in light and dark red). The reductions generated by a partially
learned rule will have different degrees of plausibility, as indicated in Figure 7.14.
For example, a reduction r1 corresponding to a situation where the plausible lower bound
condition is satisfied and none of the Except-When conditions is satisfied is most likely to
be correct. Similarly, r2 (which is covered by the plausible upper bound, and is not
covered by any bound of the Except-When condition) is plausible but less likely than r1.
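The ranking of reductions by the bounds that cover them might be sketched as follows; the class names, the coverage predicates, and the plausibility labels are illustrative assumptions mirroring the discussion of r1 and r2 above, not the Disciple-EBR implementation:

from dataclasses import dataclass
from typing import Callable, List

Instance = dict  # a variable instantiation, e.g., {"?O1": "Bob Sharp", "?O2": "John Doe"}

@dataclass
class PlausibleVersionSpace:
    lower: Callable[[Instance], bool]   # plausible lower bound coverage test
    upper: Callable[[Instance], bool]   # plausible upper bound coverage test

@dataclass
class PartiallyLearnedRule:
    main: PlausibleVersionSpace
    except_when: List[PlausibleVersionSpace]

def plausibility(rule: PartiallyLearnedRule, inst: Instance) -> str:
    """Rank a reduction generated with instantiation `inst` (labels are illustrative)."""
    if any(ew.lower(inst) for ew in rule.except_when):
        return "most likely incorrect"        # covered by an Except-When lower bound
    if not rule.main.upper(inst):
        return "not generated"                # outside the Main plausible upper bound
    if any(ew.upper(inst) for ew in rule.except_when):
        return "less plausible"               # covered only by an Except-When upper bound
    if rule.main.lower(inst):
        return "most likely correct"          # like r1 in Figure 7.14
    return "plausible"                        # like r2: Main upper bound only

# Illustrative usage with trivial coverage tests:
rule = PartiallyLearnedRule(
    main=PlausibleVersionSpace(lower=lambda i: i.get("?O2") == "John Doe",
                               upper=lambda i: True),
    except_when=[],
)
print(plausibility(rule, {"?O1": "Bob Sharp", "?O2": "John Doe"}))   # most likely correct
print(plausibility(rule, {"?O1": "Bob Sharp", "?O2": "Dan Smith"}))  # plausible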
The way a partially learned rule is used depends on the current goal of the agent. If the
current goal is to support its user in problem solving, then the agent will generate the
solutions that are more likely to be correct. For example, a reduction covered by the
plausible lower bound of the Main condition and not covered by any of the Except-When
conditions (such as the reduction r1 in Figure 7.14) will be preferable to a reduction
covered by the plausible upper bound of the Main condition and not covered by any of the
Except-When conditions (such as the reduction r2), because it is more likely to be correct.
However, if the current goal of the agent is to improve its reasoning rules, then it is
more useful to generate the reduction r2 than the reduction r1. Indeed, no matter how the
user characterizes r2 (either as correct or as incorrect), the agent will be able to use it to
refine the rule, either by generalizing the plausible lower bound of the Main condition to
cover r2 (if r2 is a correct reduction), or by specializing the plausible upper bound of the
Main condition to uncover r2 (if r2 is an incorrect reduction), or by learning an additional
Except-When condition based on r2 (again, if r2 is an incorrect reduction).
7.8. Review Questions

7.1. What does the following concept represent? What would be an instance of it?
7.2. Illustrate the problem-solving process with the hypothesis, the rule, and the ontol-
ogy from Figure 7.15.
7.3. Consider the reduction rule and the ontology fragment from Figure 7.16. Indicate
whether this agent can assess the hypothesis, “Bill Bones is expert in an area of
interest of Dan Moore,” and if the answer is yes, indicate the result.
7.4. Consider the following problem:
7.6. Consider the partially learned concept and the nine instances from Figure 7.20.
Order the instances by the plausibility of being instances of this concept and justify
the ordering.
[Figure 7.17. Reduction rule from the center of gravity analysis domain.
Question: Who or what is a strategically critical element with respect to the ?O1?
Answer: ?O2, because it is an essential generator of war material for ?O3 from the strategic perspective.
Condition:
?O1    is industrial economy
?O2    is industrial capacity
       generates essential war material from the strategic perspective of ?O3
?O3    is multistate force
       has as member ?O4
?O4    is force
       has as economy ?O1
       has as industrial factor ?O2]
7.7. Consider the ontology fragment from the loudspeaker manufacturing domain
shown in Figure 5.24 (p. 172). Notice that each most specific concept, such as dust
or air press, has an instance, such as dust1 or air press1.
Consider also the following rule:
[Figure 7.18. Feature definitions and ontology fragment from the center of gravity analysis domain. Dotted links indicate "instance of" relationships, while unnamed continuous links indicate "subconcept of" relationships. The fragment includes object, multistate alliance (with subconcepts dominant partner multistate alliance and equal partners multistate alliance), multistate coalition, and industrial capacity, with instances such as US 1943 and UK 1943, and the feature definitions: has as member (domain multimember force, range force), has as industrial factor (domain force, range industrial capacity), has as economy (domain force, range economy), and generates essential war material from the strategic perspective of (domain capacity, range force).]
[Figure 7.19. Ontology, hypothesis, and potential rule for assessing it. The ontology fragment includes object, actor, position, person, organization, employee, student, educational organization, university, long-term position, faculty member, and PhD student, linked by "subconcept of" relationships. The hypothesis is "A PhD advisor will stay on the faculty of George Mason University for the duration of the dissertation of Joe Dill." The rule's IF hypothesis is "A PhD advisor will stay on the faculty of ?O2 for the duration of the dissertation of ?O3," with the question "Will a PhD advisor stay on the faculty of ?O2 for the duration of the dissertation of ?O3?" and a Condition including ?O1 is PhD advisor, has as position ?O4, has as employer ?O2.]
[Figure 7.20. A partially learned concept and nine instances, I1 through I9 (their positions relative to the plausible bounds are not reproduced).]