Annals of Mathematics and Artificial Intelligence (2018) 84: 201–220

https://doi.org/10.1007/s10472-018-9605-y

NeuroSymbolic integration with uncertainty

Sreelekha S.1
sreelekha@cse.iitb.ac.in

1 Department of Computer Science & Engineering, Indian Institute of Technology Bombay, Mumbai 400076, India

Published online: 1 November 2018


© Springer Nature Switzerland AG 2018

Abstract
Most tasks that require intelligent behavior have some degree of uncertainty associated with them. Uncertainty may arise for several reasons, such as incomplete domain knowledge, unreliable or ambiguous data due to measurement errors, or inconsistent data representation. Most knowledge-based systems require the incorporation of some form of uncertainty management in order to handle this kind of indeterminacy. In this paper, we present one such method for handling uncertainty in neurules, a neuro-symbolic integration concept in which neurocomputing is used within a symbolic framework to improve the performance of symbolic rules. The uncertainty, i.e., the personal degree of belief that an uncertain event may occur, is managed by computing composite belief values for incomplete or conflicting data. With the implementation of uncertainty management in neurules, the accuracy of the inference mechanism and the generalization performance can be improved.

Keywords Uncertainty reasoning · Decision support · Dempster-Shafer theory · Neuro-symbolic integration · Neurules · Knowledge-based systems · Knowledge-based neurocomputing

1 Introduction

Most intelligent systems' reasoning mechanisms have some degree of uncertainty associated with them. The uncertainty that occurs in knowledge-based systems may be caused by problems with the data, such as missing or unavailable data, unreliable or ambiguous data due to measurement errors, or imprecise or inconsistent representation of the data. If we are able to quantify the uncertainty using proper methods, then the indeterminacy present in the system can be handled. In order to implement an uncertainty reasoning mechanism we must be concerned with three issues: first, how to represent uncertain data;



second, how to combine two or more pieces of uncertain data; and third, how to draw inferences from uncertain data.
According to the principle of minimal commitment in the probability framework, incomplete data should be censored or completed, and the corresponding probability masses equally distributed over each possible state, which corresponds to the maximum entropy principle. According to the minimal commitment principle in the evidence framework, the belief mass associated with incomplete data, i.e., the uncertainty about the state, is allocated to the uncertain state modality [29]. The principle of minimal commitment formalizes the idea that we should never give more support than justified to any subset of the frame of discernment [31]. We can use uncensored data with the Dempster-Shafer theory to take epistemic uncertainty into account in contexts such as incomplete data in databases or incoherence between data and reliability models. Therefore, the framework of evidence theory, and more precisely the Dempster-Shafer theory [14], seems more appropriate to express and process epistemic uncertainty [29].
Whether neural or symbolic approaches alone would be sufficient to provide a general framework for intelligent processing was a major discussion. The field of hybrid neural-symbolic processing has seen a striking development in recent years. The motivation for integrating symbolic and neural models of cognition and intelligent behavior comes from different sources. Since the brain has a neuronal structure and the capability to perform symbolic processing, a symbolic interpretation of artificial neural network architectures is desirable from the perspective of cognitive neuroscience.
From the perspective of knowledge-based processing, hybrid neural-symbolic representations are advantageous for integrating different, mutually complementary properties. Symbolic representations have advantages with respect to easy interpretation, explicit control, fast initial coding, dynamic variable binding and knowledge abstraction. On the other hand, neural representations show advantages in gradual analog plausibility, learning, robust fault-tolerant processing, and generalization to similar inputs. Since these advantages are mutually complementary, a hybrid symbolic-connectionist architecture can be useful if different processing strategies have to be supported [33]. The neuro-symbolic approach, the combination of symbolic and connectionist approaches, is one of the most popular types of integration of different knowledge representation methods and has yielded advanced problem-solving formalisms and systems [3, 5–13, 15, 22, 27, 30, 32].
In a neural-symbolic system, neural networks provide the machinery for efficient computation and robust learning, while logic provides high-level representations, reasoning and explanation capabilities to the network models, promoting modularity, facilitating validation and maintenance, and enabling better interaction with existing systems. Neural-symbolic systems integrate logical reasoning and statistical learning between network and logic models, and this integration contains three main components: (1) knowledge representation and reasoning in neural networks, (2) knowledge evolution and network learning, and (3) knowledge extraction from trained networks. Neural-symbolic systems can produce better models of knowledge acquisition, robust learning and reasoning under uncertainty [10].
Neurules [16–24, 28] are a type of symbolic-neural rule combining symbolic rules (of propositional type) and neurocomputing. Their high-level characteristic is that they give preeminence to the symbolic part of the integration. As a result, they retain the naturalness and modularity of symbolic rules to a large degree and possess an interactive inference mechanism [16, 21]. However, neurules lack proper handling of partial domain knowledge, i.e., incomplete or uncensored data.
In this paper, we have implemented uncertainty management in neurules using the Dempster-Shafer theory. A neurule base consists of a number of adaline units in which each condition takes one of three inputs [1 (true), -1 (false), 0 (unknown)]. We have therefore enhanced the integrated rule-based approach to take uncertain values as a fourth input in the neurule's structure. Thus each input condition takes one of the values from the set [1 (true), -1 (false), 0 (unknown), Un (uncertainty)]. We explain the Dempster-Shafer theory and its rule of combination, and use this theory to calculate the fourth input value, uncertainty. The Neurule Production with Uncertainty Algorithm is used for producing the neurules from empirical data. We also explain the improved reasoning and inference mechanism of neurules under uncertainty. Further, we explain how the accuracy of the inference mechanism is increased by considering uncertain factors, and how the generalization performance is thereby also improved.
The organization of the paper is as follows: Section 2 introduces neurules with uncertainty and the neurule-based knowledge representation scheme; in Section 3, the production of neurules from empirical data using the neurule production with uncertainty algorithm is presented; in Section 4, the Dempster-Shafer theory and its rule of combination for composite belief computation are explained; Section 5 deals with the inference mechanism of neurules under uncertainty; Section 6 presents experimental results regarding the efficiency and generalization capabilities of neurules; and finally, Section 7 concludes.

2 Neurules with uncertainty

2.1 Representation

Neurules are the integration of neurocomputing and symbolic rules [16]. Internally, each neurule is considered an adaline unit (Fig. 1b), which uses the LMS (Least Mean Square) algorithm for learning, converges more safely for nonlinear training sets, and generalizes better than the perceptron [13].
The form of a neurule is represented in Fig. 1a, where I1, I2, ..., In are the input conditions with corresponding weight values sf1, sf2, ..., sfn called the significance factors.
Fig. 1 a Form of a neurule b A neurule as an adaline unit


The bias value sf0 is termed the bias factor of the rule. Each input condition takes a value from the set [1 (true), -1 (false), 0 (unknown), Un (uncertainty)]. Here 'Un' denotes the uncertain factor value to be calculated.
The conclusion (decision) of the rule is represented by the output O, which is calculated via the standard formulae (1) and (2):
$$O = f(v), \qquad v = sf_0 + \sum_{i=1}^{n} sf_i I_i \tag{1}$$
$$f(v) = \begin{cases} 1 & \text{if } v \geq 0 \\ -1 & \text{otherwise} \end{cases} \tag{2}$$
where v is the activation value and the threshold function f(v) is called the activation function. The output can take one of two values (-1, 1), representing failure or success of the rule. The significance factor of a condition represents the significance (weight) of the condition in drawing the conclusion.
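To make the computation in (1) and (2) concrete, the following is a minimal Python sketch (not the authors' code) of a neurule's output evaluation. The factors are taken from rule R1 of Table 5, and the uncertain input value 0.51 is purely illustrative.

```python
# Minimal sketch of a neurule's output per (1) and (2); not the authors' code.
def neurule_output(sf0, sfs, inputs):
    """sf0: bias factor; sfs: significance factors sf1..sfn;
    inputs: condition values, each 1 (true), -1 (false), 0 (unknown),
    or an uncertain value Un in (0, 1) computed via Dempster-Shafer theory."""
    v = sf0 + sum(sf * i for sf, i in zip(sfs, inputs))  # activation value (1)
    return 1 if v >= 0 else -1                           # threshold function (2)

# Rule R1 from Table 5, with HairLoss carrying an illustrative uncertain value:
print(neurule_output(0.3, [3.1, -0.7, 4.7], [1, -1, 0.51]))  # -> 1 (rule fires)
```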

2.2 Enhanced neurule-based system architecture

The functional part of the neurule-based system is shown in Fig. 2. Similar to a conventional rule-based system, the neurule-based runtime system consists of three main modules: the NeuRuleBase (NRB), the Integrated Inference Engine (IIE), and the Working Memory (WM). Using the produced neurules stored in the NRB and the fact assertions from the WM, the IIE performs inference. The WM has four fact assertion modules: true, false, unknown, and uncertainty. A fact assertion has the following structure:
(Fi, ass(Fi))
where Fi is a fact and ass(Fi) is the assertion value related to it, which can take any one of {TRUE, FALSE, UNKNOWN, UNCERTAIN}. A fact has the same format as a condition or a conclusion of a rule:
Fi ≡ "DFi is dFi"
where DFi is the variable and dFi is the corresponding value associated with the fact. During an inference process, fact assertions are either produced as intermediate or final conclusions or provided as initial input data by the user.

Fig. 2 Enhanced Neurule-based System Architecture
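As an illustration, the WM's fact-assertion store can be sketched as follows. This is a minimal sketch assuming a dictionary-backed store; the fact strings are illustrative, not taken from the paper's implementation.

```python
# Working memory as a mapping from facts Fi to assertion values ass(Fi).
ASSERTIONS = {"TRUE", "FALSE", "UNKNOWN", "UNCERTAIN"}
working_memory = {}

def assert_fact(fact, assertion):
    """Store a fact assertion (Fi, ass(Fi)) in the WM."""
    if assertion not in ASSERTIONS:
        raise ValueError(f"unsupported assertion value: {assertion}")
    working_memory[fact] = assertion

assert_fact("SwollenFeet is true", "TRUE")
assert_fact("RedEars is true", "UNCERTAIN")
```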


3 Neurules production with uncertainty algorithm

By applying the neurule production with uncertainty algorithm (NPUA), the neurule base (NRB) is produced. NPUA first calculates the uncertainty factor by considering all possible sub-belief factors using the Dempster-Shafer theory. Then it tries to produce one neurule for each intermediate or output conclusion with all available input combinations. However, due to possible nonlinearity of the data, this is not usually feasible, so more than one neurule having the same conclusion may be produced for each intermediate or output conclusion.

NPUA Algorithm

Input:
(a) A set D of domain variables
(b) A set of sub belief factors Bd
(c) Dependency information f_DVI
(d) A set X of empirical data

Output: A set of neurules (NRB)

Body:
1. Assign the belief factors for all possible elements using Dempster Shafer’s evidence
theory
2. Compute the uncertainty factors using the Dempster’s Rule of Combination
3. Construct initial neurules, based on dependency information.
4. Extract an initial training set for each initial neurule from X
5. Train each initial neurule individually and produce corresponding neurule(s).
In the following, we explain each step of the algorithm in detail, introducing some definitions as well. First, if we encounter an uncertain factor in the preprocessed data, its belief values are assigned and the composite belief value is calculated using the Dempster-Shafer theory. Then, according to the dependency information of the domain variables, initial neurules are constructed for each value of an intermediate or output variable, representing the possible intermediate or final conclusions. Step 1 of NPUA can be analyzed as follows:
1. For each attribute B having uncertainty,
1.1 for each possible set of atomic elements, assign a probability mass Bv.
1.1.1 Associate belief masses to the subsets of each element using the theory of evidence.
From the mass assignments, the upper and lower bounds of a probability interval can be defined. The lower bound, belief, for a set is defined as the sum of all the masses of the subsets of the set of interest. The upper bound, plausibility, is the sum of all the masses of the sets that intersect the set of interest. In step 2, evidences from different sources are combined as follows:
2. Combine independent sets of mass assignments.
2.1 Calculate the combination (called the joint mass) from the sets of masses.
Step 3 of NPUA can be analyzed as follows:
3. For each inferable variable Dv in D


3.1 For each possible value dv of Dv
3.1.1 Construct a neurule with conclusion "Dv is dv" and conditions "Di is di (0)", where (Dv, Di) ∈ f_DVI and di ∈ S_Di.
If each constructed initial rule Nk has mk conditions, then its corresponding initial training set is extracted from X in step 4; formally, it is defined as follows:
4. For each initial rule Nk, extract an initial training set Xk from X. Each pattern in Xk consists of as many values as the number of different variables in Nk. The last value is the goal value.
First, the corresponding training set for each initial neurule is determined, and each neurule is individually trained using the Least Mean Square (LMS) algorithm. When training succeeds, bias and significance factors that classify all the training patterns correctly are calculated and one neurule is produced. If training fails due to inseparability of the training examples, a splitting process is followed and the generated subsets are trained individually; this process repeats until there is no failure. Step 5 of NPUA is analyzed as follows:

5. For each Nk do
5.1 Train Nk using the LMS algorithm with Xk as the training set.
5.2 If training is successful, produce Nk with the calculated sf_i^Nk and stop (success).
5.3 Split Xk into two suitable subsets, X_k1 and X_k2.
5.4 Apply steps 5.1 to 5.3 with Xk = X_k1 and Xk = X_k2 separately.
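As a rough illustration of step 5, the following Python sketch trains each initial neurule with the LMS rule and recursively splits the training set on failure. It assumes numpy, a fixed learning rate, and a naive midpoint split; the actual splitting heuristic in the neurules literature is based on pattern closeness and is more involved.

```python
import numpy as np

def lms_train(X, y, lr=0.05, epochs=1000):
    """Train one adaline on patterns X (rows of condition values) and goals
    y in {1, -1}; return weights [sf0, sf1, ..., sfn] and a success flag."""
    Xb = np.hstack([np.ones((len(X), 1)), X])      # prepend the bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            w += lr * (t - w @ x) * x              # LMS (delta rule) update
    ok = all((1 if w @ x >= 0 else -1) == t for x, t in zip(Xb, y))
    return w, ok

def produce_neurules(X, y):
    """Step 5: train; on failure, split the training set and recurse."""
    w, ok = lms_train(X, y)
    if ok or len(X) == 1:                          # a single pattern always fits
        return [w]
    mid = len(X) // 2                              # naive split, for illustration
    return produce_neurules(X[:mid], y[:mid]) + produce_neurules(X[mid:], y[mid:])
```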

To illustrate NPUA, an example problem is taken from [13, 20], along with its corresponding data set, called ACUTE, which relates to a medical diagnosis problem concerning acute theoretical diseases of the sarcophagus.
There are six symptoms (Swollen Feet, Red Ears, Hair Loss, Dizziness, Sensitive Aretha,
Placibin Allergy), two diseases (Supercilliosis and Namastosis), whose diagnoses are based
on the symptoms, and three possible treatments (Placibin, Biramibio, and Posiboost). One
variable is assigned for each symptom, disease, and treatment: “sf” for “SwollenFeet,” “re”
for “RedEars,” “hl” for “HairLoss,” “dz” for “Dizziness,” “sa” for “SensitiveAretha,” “pa”
for “PlacibinAllergy,” “sc” for “Supercilliosis,” “nm” for “Namastosis,” “pl” for “Placibin,”
“bi” for “Biramibio,” “po” for “Posiboost.” The dependency information is depicted in
Table 1 (where x means “depends on”). The empirical data set of the problem is presented
in Table 2 (where T means “true,” F means “false,” and X means “uncertain”) and the initial
training set is presented in Table 3.

Table 1 Dependency information (where × means "depends on")

      sf  re  hl  dz  sa  pa  sc  nm  pl  bi
sc    ×   ×   ×
nm            ×   ×   ×
pl                        ×   ×   ×
bi            ×               ×   ×
po                                    ×   ×


Table 2 The acute data set


sf re hl dz sa pa sc nm pl bi po

T T T F × F T F T F T
F F F T T F F T T T F
F F T T F T T T F F F
T T F F T F F F F F F
T F × T T T T T F T T
T F F T T F T T T T F
T T T F F T T F F F F
F T T F T T F T F F F

Therefore,
D = {sf, re, hl, dz, sa, pa, sc, nm, pl, bi, po}, X_Di = {true},
f_DVI = {(sc, sf), (sc, re), (sc, hl), (nm, hl), (nm, dz), (nm, sa), (pl, pa), (pl, sc), (pl, nm), (bi, hl), (bi, sc), (bi, nm), (po, pl), (po, bi)}
Here, HairLoss, RedEars and Dizziness are taken as the factors that have uncertain values for some instances. To find the uncertainty value, sub-belief factors are prepared by interviewing experts and by using information from journals. For HairLoss, the considered belief factors are "NM" for "Namastosis," "CS" for "CancerSymptoms," "MS" for "MentalStress," "DA" for "Dandruff," and "BA" for "Baldness." For RedEars, the considered belief factors are "SC" for "Supercilliosis," "WD" for "WaxDeposit," "IN" for "Injury," "AL" for "Allergy," and "CO" for "Cold." For Dizziness, the considered belief factors are "SA" for "Acute Sarcophagus," "SL" for "SugarLevel," "PR" for "PulseRate," "FI" for "FoodIntake," and "PE" for "Physical Exercise."
According to step 1 of NPUA, belief factors are prepared for the uncertain factors, and using the Dempster-Shafer evidence theory, the belief masses, i.e., the belief, disbelief and the uncertainty present in the belief, are assigned and the upper and lower bounds of the probability interval are defined as described in Section 4. Then, by applying Dempster's rule of combination in step 2, we combine the independent sets of mass assignments and

Table 3 Initial training sets (each row lists, side by side, one training pattern per set S1–S5: the condition values followed by the goal value)

S1 S2 S3 S4 S5
T T T 1 T F 0 −1 F T F 1 T F T −1 T F 1
F 0 F −1 F T T 1 F F T 1 F T F 1 T T −1
F T F 1 T T F 1 T T T −1 T T T −1 F F −1
T F T −1 Un Un T 1 F F F −1 F F F −1 F T 1
T Un Un 1 Un T T 1 F T T −1 F T T 1
F T T −1 F Un T 1 T T F −1 Un T T 1
F Un T 1 T Un F −1 0 T T 1 Un T F −1
T T Un −1 T F T 1 T F T 1 T T F −1
F F F −1 T F F −1
T F F 1


Table 4 Belief factor set

        KS1                                  KS2                                  Composite
        NM    CS    MS    BA    DA    θ      NM    CS    MS    BA    DA    θ      belief
HL      0.25  0.2   0.1   0.15  0.05  0.25   0.37  0.03  0.09  0.06  0.13  0.32   0.43
        0.4   0.05  0.2   0.04  0.02  0.29   0.23  0.02  0.05  0.07  0.23  0.4    0.46
        0.38  0.16  0.22  0.03  0.01  0.2    0.19  0.08  0.16  0.09  0.22  0.26   0.39
        0.19  0.1   0.14  0.28  0.09  0.2    0.32  0.12  0.07  0.09  0.13  0.27   0.33
        0.28  0.19  0.12  0.13  0.06  0.22   0.38  0.11  0.13  0.05  0.03  0.3    0.47
        0.42  0.07  0.01  0.09  0.1   0.31   0.35  0.15  0.08  0.02  0.05  0.35   0.64

        SC    WD    AL    CO    IN    θ      SC    WD    AL    CO    IN    θ
RE      0.34  0.08  0.07  0.2   0.19  0.22   0.21  0.15  0.13  0.07  0.09  0.35   0.32
        0.28  0.24  0.06  0.03  0.01  0.38   0.42  0.11  0.16  0.01  0.08  0.22   0.51

        SA    PR    FI    PE    SL    θ      SA    PR    FI    PE    SL    θ
DZ      0.27  0.11  0.07  0.09  0.1   0.36   0.31  0.09  0.02  0.14  0.12  0.32   0.41
        0.19  0.21  0.2   0.18  0.01  0.21   0.25  0.06  0.18  0.11  0.16  0.24   0.28
        0.3   0.12  0.13  0.04  0.23  0.18   0.29  0.15  0.17  0.08  0.1   0.21   0.42

thus the uncertain factor value, i.e., the composite belief value, is calculated as described in Section 4.2. The belief mass assignments formed by applying step 1 of NPUA and the composite beliefs formed by applying step 2 of NPUA are presented in Table 4. Five initial neurules are produced by applying step 3 of NPUA; they are the same as the first five of those depicted in Table 5. The five training sets formed for the initial neurules by applying step 4 of NPUA are depicted in Table 3 (where T means "true," F means "false," 0 means "unknown," and Un means "uncertain"). The neurules presented in Table 5 are obtained by applying step 5 of NPUA.

4 Dempster-Shafer theory

In this paper, we have implemented uncertainty handling using the Dempster-Shafer theory. The Dempster-Shafer theory, or Dempster-Shafer evidence theory (belief function theory), is a mathematical theory that generalizes probability theory by abandoning the additivity constraint [4]. Using the Dempster-Shafer theory, belief masses can be associated with subsets of elements instead of assigning probability masses to atomic elements alone [2, 3]. It also allows information integration by considering both the belief and the disbelief of a proposition [25].
In the Dempster-Shafer theory, the frame of discernment Θ is the set of all possible elements, i.e., it represents the set of all possible states of the system, which denotes the problem domain. Let us consider as an example the set of face cards from a deck of cards. The frame of discernment for this example is given by
$$\Theta = \{\text{H-K, S-K, C-K, D-K, H-Q, S-Q, C-Q, D-Q, H-J, S-J, C-J, D-J}\} \tag{3}$$


Table 5 Produced neurules


R1
(0.3) if SwollenFeet is true(3.1),
RedEars is true(-0.7),
HairLoss is true(4.7)
then Supercilliosis is true
R2
(1.5) if HairLoss is true(0.3),
Dizziness is true(4.5),
SensitiveAretha is true(1.1)
then Namastosis is true
R3
(-1.8) if PlacibinAllergy is true(-3.6),
Supercilliosis is true(4.3),
Namastosis is true(3.0)
then Placibin is true
R4
(-3.6) if HairLoss is true (-4.4),
Supercilliosis is true(2.2),
Namastosis is true(4.0)
then Biramibio is true
R5
(0.5) if Placibin is true(0.1),
Biramibio is true(0.7)
then Posiboost is true
R6
(0.5) if Biramibio is true(-1.2),
Placibin is true(1.5)
then Posiboost is true

where H = Hearts, S = Spades, C = Clubs, D = Diamonds, and K = King, Q = Queen, J = Jack.

In fact, Θ represents the set of all possible outcomes of a random experiment. The power set 2^Θ is the set of all subsets of Θ, including the empty set ∅, and contains 2^n elements, where n = |Θ| is the cardinality of Θ. In the card example, the proposition "the drawn card u is a king (K)" is given by {H-K, S-K, C-K, D-K}.
Every individual element of the power set 2^Θ is assigned a belief mass according to the theory of evidence. The resulting function, which assigns probability masses to the propositions as per the evidence, is called the Basic Belief Assignment (BBA).


Formally, a BBA is represented by a function m, where
$$m : 2^{\Theta} \to [0, 1] \tag{4}$$
The value of m ranges from 0 to 1, with the mass of the empty set being
$$m(\emptyset) = 0 \tag{5}$$
and the BBA of the remaining subsets adding up to a total of one:
$$\sum_{X \in 2^{\Theta}} m(X) = 1 \tag{6}$$

Definition 4.1 The individual elements of 2^Θ with assigned BBA > 0 are termed focal elements.

Definition 4.2 The masses of all the subsets of the set of interest are cumulatively given by the belief function
$$Belief(X) = \sum_{Y \subseteq X} m(Y) \tag{7}$$
As per (4), the total belief present in a system lies in the interval [0, 1]; i.e., the belief and disbelief present in the system lie in this interval. If both belief and disbelief are present, then there is uncertainty about the occurrence of the claim X, and it lies in the same belief interval. We can represent the total belief present in the system using a Venn diagram. Let p% be the belief present in the system and q% the disbelief. Then the uncertainty can be computed by subtracting the belief and the disbelief from the total belief of 1.
Thus the belief region is divided into three sub-regions, Belief(X), Disbelief(X) and Uncertainty(X), where Belief(X) represents the degree to which the current evidence supports X, Disbelief(X) represents the degree to which the current evidence supports ¬X, and Uncertainty(X) represents the degree to which we believe nothing one way or the other about the proposition X. The uncertain region is represented in the joint portion of the Venn diagram, since the uncertain region may shift according to the support it provides to the belief or to the disbelief. Thus from Fig. 3 and (6) we can derive an axiom.

Axiom 4.1
Belief + Uncertainty + Disbelief = 1 (8)

Then the uncertainty can be inferred as,

Uncertainty = 1 − (Belief + Disbelief) (9)

Definition 4.3 The term plausibility (Pl) is used to define the degree to which the belief can be proved to be correct, i.e.,
$$Pl(X) = \sum_{Y \mid Y \cap X \neq \emptyset} m(Y) \tag{10}$$
where m(Y) is the mass of each set Y that supports the set of belief X.
From Fig. 3 and (8), plausibility can be inferred as
$$Pl(X) = Belief(X) + Uncertainty(X) \tag{11}$$


Fig. 3 The belief representation

Definition 4.4 The term doubt (D(X)) is used to define the degree to which the belief can be proved to be incorrect, i.e.,
$$D(X) = \sum_{Y \mid Y \cap X = \emptyset} m(Y) \tag{12}$$
where m(Y) is the mass of each set Y that supports the disbelief in X.
From Fig. 3 and (8), doubt can be inferred as
$$Doubt(X) = Disbelief(X) + Uncertainty(X) \tag{13}$$

From (11) and (13) it can be derived that
$$\text{i)}\ Pl(X) \geq Bel(X), \qquad \text{ii)}\ Pl(X) + Pl(\neg X) \geq 1, \qquad \text{iii)}\ Bel(X) + Bel(\neg X) \leq 1 \tag{14}$$
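The belief and plausibility of Definitions 4.2 and 4.3 can be computed directly from a mass assignment. The following sketch represents focal elements as Python frozensets; the mass values are illustrative, not taken from the paper.

```python
def belief(m, X):
    """Bel(X): total mass of all subsets of X, per (7)."""
    return sum(v for Y, v in m.items() if Y <= X)

def plausibility(m, X):
    """Pl(X): total mass of all sets intersecting X, per (10)."""
    return sum(v for Y, v in m.items() if Y & X)

# Illustrative masses over some causes of RedEars: SC, WD, IN plus ignorance.
m = {frozenset({"SC"}): 0.28,
     frozenset({"WD"}): 0.24,
     frozenset({"SC", "WD"}): 0.10,           # mass committed to "SC or WD"
     frozenset({"SC", "WD", "IN"}): 0.38}     # total ignorance (the frame)
X = frozenset({"SC"})
print(belief(m, X), plausibility(m, X))       # 0.28 0.76
```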

4.1 Dempster’s rule of combination

Combining two independent sets of mass assignments is the next major task. Using Dempster's rule of combination [4, 26], we can combine evidence from different knowledge sources. This rule uses a normalization factor to discard all conflicting evidence and to emphasize the supporting evidence from the different sources.
Let θ1 and θ2 be the frames of discernment submitted by the two knowledge sources KB1 and KB2, respectively. The orthogonal summation for the knowledge sources can be computed using the formula
$$m_{1,2}(X) = \frac{1}{1-K} \sum_{Y \cap Z = X \neq \emptyset} m_1(Y)\, m_2(Z) \tag{15}$$


where m1(Y) and m2(Z) are the masses of the supportive elements of knowledge sources KB1 and KB2, respectively. Here Y ∩ Z ≠ ∅, since it denotes the amount of belief present in both Y and Z; the combined belief therefore accrues to the claim X = Y ∩ Z. In (15) the conflicts are discarded by the normalization factor 1 − K, where
$$K = \sum_{Y \cap Z = \emptyset} m_1(Y)\, m_2(Z) \tag{16}$$
Here Y ∩ Z = ∅, since K denotes the amount of conflict (disbelief) between Y and Z. In (15) the combined belief is
$$Belief(X) = \sum_{X = Y \cap Z \neq \emptyset} m_1(Y)\, m_2(Z) \tag{17}$$

According to (9), the uncertainty Un is
$$Uncertainty(Un) = 1 - (Belief + Disbelief) \tag{18}$$
Hence
$$Un = 1 - \left[ \sum_{X = Y \cap Z \neq \emptyset} m_1(Y)\, m_2(Z) + \sum_{Y \cap Z = \emptyset} m_1(Y)\, m_2(Z) \right] \tag{19}$$
Thus the plausibility Pl, the maximum belief mass enriched with the uncertainty factor, can be denoted with the orthogonal summation operation
$$m = m_1 \oplus m_2 \tag{20}$$
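A minimal sketch of Dempster's rule (15)-(16), with mass functions again represented as dictionaries from frozensets to masses; the normalization by 1 − K redistributes the conflicting mass.

```python
from itertools import product

def combine(m1, m2):
    """Orthogonal sum m1 (+) m2 per (15); K per (16) is the conflict mass."""
    joint, K = {}, 0.0
    for (Y, a), (Z, b) in product(m1.items(), m2.items()):
        X = Y & Z
        if X:                                  # agreeing evidence: Y∩Z ≠ ∅
            joint[X] = joint.get(X, 0.0) + a * b
        else:                                  # conflicting evidence: Y∩Z = ∅
            K += a * b
    return {X: v / (1.0 - K) for X, v in joint.items()}
```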

4.2 Computation of composite belief value using dempster-shafer theory

Let us consider the acute sarcophagus problem described in Section 3 to illustrate the orthogonal summation process. Here we compute the composite belief for 'the RedEars symptom being a cause for sarcophagus' using two knowledge sources KS1 and KS2.
Knowledge source 1 (KS1) claims that RedEars could be a symptom of
Supercilliosis with m1({SC}) = 0.28,
Cold with m1({CO}) = 0.03,
WaxDeposit with m1({WD}) = 0.24,
Injury with m1({IN}) = 0.01,
Allergy with m1({AL}) = 0.06,
and the frame θ, with m1({θ}) = 0.38.
The assignment of BPA = 0.38 to θ represents the uncertainty present in the belief of knowledge source 1 (KS1). Similarly, knowledge source 2 (KS2) claims that RedEars could be a symptom of
Supercilliosis with m2({SC}) = 0.42,
Cold with m2({CO}) = 0.01,
WaxDeposit with m2({WD}) = 0.11,
Injury with m2({IN}) = 0.08,
Allergy with m2({AL}) = 0.16,
and the frame θ with m2({θ}) = 0.22.
Let us compute the composite belief for whether the RedEars symptom is present for Supercilliosis. Instantiating (15) of Dempster's rule of combination with the claim {SC} gives
$$m_{1,2}(\{SC\}) = \frac{m_1(\{SC\})\,m_2(\{SC\}) + m_1(\{SC\})\,m_2(\{\theta\}) + m_1(\{\theta\})\,m_2(\{SC\})}{1 - K}$$
where K, per (16), sums the products of the masses of all pairs of conflicting (disjoint) claims; here K = 0.3289. Thus the composite belief for the RedEars symptom for Supercilliosis is
$$m_{1,2}(\{SC\}) = \frac{0.28 \times 0.42 + 0.28 \times 0.22 + 0.38 \times 0.42}{1 - 0.3289} = \frac{0.1176 + 0.0616 + 0.1596}{0.6711} = \frac{0.3388}{0.6711} \approx 0.51$$

i.e., Bel1({SC}) ⊕ Bel2({SC}) ≈ 0.51


This composite belief value is considered as the uncertain factor value for RedEars for the corresponding input conditions. This value is assigned as the fact assertion value after the evaluation process, as described in Section 5, during inference. Similarly, the composite belief value is calculated for all the uncertain factors for which the input condition evaluates to uncertainty.
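Using the combine() sketch above, the composite belief of this section can be checked numerically with the KS1/KS2 mass assignments; the computed value, about 0.505, matches the composite belief of 0.51 for this row of Table 4 up to rounding.

```python
# Frame of discernment for the causes of RedEars.
TH = frozenset({"SC", "WD", "AL", "CO", "IN"})
m1 = {frozenset({"SC"}): 0.28, frozenset({"WD"}): 0.24,
      frozenset({"AL"}): 0.06, frozenset({"CO"}): 0.03,
      frozenset({"IN"}): 0.01, TH: 0.38}
m2 = {frozenset({"SC"}): 0.42, frozenset({"WD"}): 0.11,
      frozenset({"AL"}): 0.16, frozenset({"CO"}): 0.01,
      frozenset({"IN"}): 0.08, TH: 0.22}
m12 = combine(m1, m2)
print(m12[frozenset({"SC"})])   # ~0.505, the composite belief for SC
```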

5 Integrated inference engine

The inference engine, incorporated in the runtime part of the neurule-based system, processes the input conditions so that conclusions can be derived from the neurules. The neurocomputing part selects appropriate next rules, and by means of the symbolic structures, inference takes place after evaluating the corresponding input conditions of a neurule. The evaluated fact assertions and their corresponding values are stored in the Working Memory. When an input condition is evaluated as uncertain, its composite belief value is computed using the Dempster-Shafer theory as described in Section 4. The requirement for uncertain condition evaluation and the evaluation process are described in the following subsection.

5.1 Evaluation aspects of neurules under uncertainty

The inference engine modules are the major functional part of an expert system. To evaluate a neurule in an efficient inference process, we first have to evaluate its conditions, considering all aspects.

5.1.1 Condition evaluation

For an efficient inference mechanism, evaluation of the associated conditions is necessary. In the IIE module, conditions are evaluated during every inference session. For sibling conditions, all conditions that refer to the same variable are evaluated at the same time. For an efficient inference mechanism in expert systems, we have to consider the imprecision of the data, the uncertainty, and their possible sources during condition evaluation.
In these kinds of situations there is a need for reasoning with uncertain knowledge, and conditions with incomplete knowledge can be evaluated as uncertainty. Thus the efficiency of the inference mechanism of knowledge bases can be improved by evaluating the conditions after considering the possible associated sub-belief factors. In the problem described in Section 3, RedEars is a symptom of the Supercilliosis disease. If the patient notices that RedEars is present, then the condition is evaluated as true, and if it is completely absent, it is evaluated as false. If the patient is unaware of RedEars, it is taken as unknown; but there is a chance that the patient has not noticed RedEars in spite of having it, or is unaware that RedEars is a symptom of Supercilliosis. In this kind of partial-knowledge situation there is a case for a fourth condition value, uncertainty, to capture the indeterminacy present in the input conditions of a neurule. Thus, by considering some sub-belief factors that lead to RedEars, such as WaxDeposit, Injury, Allergy, and Cold, we can calculate the composite belief value of RedEars for the disease Supercilliosis. Using the Dempster-Shafer theory, we can assign the belief, disbelief and uncertainty values for each of the above symptoms, and the composite belief values of each symptom can be computed using the orthogonal summation procedure of Dempster's rule of combination. Thus a fourth condition value, uncertainty, and its value can be evaluated, and we can then make a proper inference for the neurules of the sarcophagus disease.
That is, an input condition Ii can evaluate to TRUE, FALSE, UNKNOWN or UNCERTAINTY. Conditions are evaluated according to the contents of the working memory (WM), the user responses, or the firings of rules. As described earlier,
$$WM = \{(F_i, ass(F_i)),\ i = 1, \ldots, n\} \tag{21}$$
where Fi ≡ "DFi is dFi" is a fact and ass(Fi) ∈ {TRUE, FALSE, UNKNOWN, UNCERTAINTY}.
Based on WM contents, evaluation of an input condition can be done using the following
rules:
1. An input condition Ii evaluates to TRUE, denoted by ass(Ii )=TRUE, if there is a fact
assertion (Fi , ass(Fi )) in WM with Fi ≡ Ii and ass(Fi )≡ TRUE.
2. An input condition Ii evaluates to FALSE, denoted by ass(Ii )= FALSE, if there is a fact
assertion (Fi , ass(Fi )) in WM with Fi ≡ Ii and ass(Fi )=FALSE.
3. An input condition Ii evaluates to UNKNOWN, denoted by ass(Ii )= UNKNOWN, if
there is a fact assertion (Fi , ass(Fi )) in WM with Fi ≡Ii and ass(Fi )= UNKNOWN.
4. An input condition Ii evaluates to UNCERTAINTY, denoted by ass(Ii )= UNCER-
TAINTY, if there is a fact assertion (Fi , ass(Fi )) in WM with Fi ≡Ii and ass (Fi )=
UNCERTAINTY.
A condition evaluates to 'uncertainty' if there is a fact with the same variable and predicate and 'Un' as its value. The input condition Ii of a neurule Nk is evaluated to uncertainty when the assertion of the fact lies between 0 and 1, i.e., 0 < ass(Fi) < 1.
As per Fig. 3 and (11) from Section 4, the total belief present in a system can be categorized into belief, disbelief and uncertainty, and the total belief adds to 1. The composite belief value BDi which supports the claim is computed by combining the belief with the amount of uncertainty that supports the belief, and it will definitely be less than 1. Thus 0 < BDi < 1, and hence the uncertain value of the domain variable Di, value(Di), varies between 0 and 1, i.e., 0 < value(Di) < 1.
Thus, for an input condition Ii, if its value can be computed as a continuous value, then the fact assertion ass(Fi) for the variable "DFi is dFi" can be evaluated as uncertainty, and this fact assertion is stored in the working memory of the neurule-based system for the corresponding condition.
When an input condition is evaluated to uncertainty, the uncertain factor value is calculated using the Dempster-Shafer theory as described in Section 4. For example, consider the RedEars factor for the Supercilliosis disease in the acute sarcophagus problem discussed in Section 3. According to the Dempster-Shafer evidence theory, all possible belief factors leading to RedEars are taken into consideration. Here we have taken four factors: WaxDeposit, Injury, Allergy and Cold. If the patient suffers from Allergy or Cold, then RedEars will be present. Similarly, if the patient has an injury to the ears, RedEars will be present, and if the patient has excess WaxDeposit in the ears, RedEars will be present by default. The belief values for the symptoms that can lead to RedEars are computed using the Dempster-Shafer theory, and the attribute's composite belief value is computed according to the orthogonal summation process of Dempster's rule of combination as described in Section 4.2.
This composite belief value is considered as the uncertain factor value, i.e., the evaluated value, for the corresponding input conditions. In the same way, the composite belief value is calculated for all the uncertain factors for which the input condition evaluates to uncertainty.
The user has to assign a value value(Di) to the corresponding variable Di for evaluating a condition Ii ≡ "Di is di" (in which Di ∈ V_INP), where
$$value(D_i) \in X_{D_i} \cup \{\text{UNKNOWN}, \text{UNCERTAINTY}\} \tag{22}$$
The related sibling conditions Ij ≡ "Di is dj" (where dj ∈ X_Di) are evaluated using the formula below as soon as an input condition value is given by the user.


$$ass(I_j) = \begin{cases} \text{TRUE} & \text{if } d_j = value(D_i) \\ \text{UNKNOWN} & \text{if } value(D_i) = \text{UNKNOWN} \\ \text{UNCERTAINTY} & \text{if } value(D_i) = \text{UNCERTAINTY} \\ \text{FALSE} & \text{otherwise} \end{cases} \tag{23}$$
The sibling fact assertions, i.e., the corresponding fact assertions, are also added to the WM as specified in (24), where ass(Fj) is evaluated via (23):
$$\text{sibl-fact-ass}(D_i, value(D_i)) = \{(F_j, ass(F_j)) : D_{F_j} \equiv D_i,\ d_{F_j} \in X_{D_i}\} \tag{24}$$
The following statements show that the same evaluation procedure, and hence the same conclusion, is produced when a neurule is fired:
• If ass(Ii) = TRUE, then the sibling conditions of Ii evaluate to FALSE, because a variable cannot take more than one value simultaneously.
• If ass(Ii) = UNKNOWN, then the sibling conditions of Ii evaluate to UNKNOWN too, since the value of their common variable is unknown.
• If ass(Ii) = UNCERTAIN, then the sibling conditions of Ii evaluate to UNCERTAIN too.
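A minimal sketch of the sibling-condition rule (23); the variable and value strings are illustrative.

```python
def evaluate_sibling(d_j, value_d_i):
    """ass(Ij) for a sibling condition "Di is dj" given value(Di), per (23)."""
    if value_d_i in ("UNKNOWN", "UNCERTAINTY"):
        return value_d_i                  # propagate unknown/uncertain status
    return "TRUE" if d_j == value_d_i else "FALSE"

print(evaluate_sibling("true", "true"))          # TRUE
print(evaluate_sibling("true", "false"))         # FALSE
print(evaluate_sibling("true", "UNCERTAINTY"))   # UNCERTAINTY
```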

5.1.2 Neurule evaluation

The neurule evaluation has two states: fired and blocked. If the neurule is in the fired state, it is evaluated as '1'; if it is in the blocked state, it is evaluated as '-1'. The set of rules fired during an inference process is denoted by NF, and that of blocked rules by NB. Also, the set of evaluated rules is denoted by NE and that of unevaluated rules by NU. The output of a neurule Nk is computed according to (1) and (2) from Section 2.1. For that, all input conditions of the neurule should be evaluated to compute v(Nk), the activation value of Nk, using the equations below, so that each evaluated input condition's contribution to the current activation value is accounted for:
$$\alpha_c(N_k) = sf_0^{N_k} + \sum_{I_i^{N_k} \in I_E^{N_k}} sf_i^{N_k}\, assv(I_i^{N_k}) \tag{25}$$
where
$$assv(I_i^{N_k}) = \begin{cases} 1 & \text{if } ass(I_i^{N_k}) = \text{TRUE} \\ -1 & \text{if } ass(I_i^{N_k}) = \text{FALSE} \\ 0 & \text{if } ass(I_i^{N_k}) = \text{UNKNOWN} \\ 0.5 & \text{if } ass(I_i^{N_k}) = \text{UNCERTAINTY} \end{cases} \tag{26}$$
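A minimal sketch of (25)-(26), assuming the evaluated conditions and their significance factors are passed in as parallel lists; the names are illustrative.

```python
# Numeric assertion values per (26); UNCERTAINTY contributes 0.5.
ASSV = {"TRUE": 1.0, "FALSE": -1.0, "UNKNOWN": 0.0, "UNCERTAINTY": 0.5}

def activation(sf0, sfs, assertions):
    """Current activation of a neurule over its evaluated conditions, per (25)."""
    return sf0 + sum(sf * ASSV[a] for sf, a in zip(sfs, assertions))

# Rule R1 from Table 5 with RedEars evaluated as UNCERTAINTY:
print(activation(0.3, [3.1, -0.7, 4.7], ["TRUE", "UNCERTAINTY", "TRUE"]))
# 0.3 + 3.1 - 0.35 + 4.7 = 7.75 >= 0, so R1 fires
```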

6 Experimental results and discussion

In this section, experimental results regarding the performance of neurules with uncertainty are presented, based upon the following:
• The neurules-with-uncertainty inference mechanism is compared to the neurules inference mechanism used in [16].
• The generalization capabilities of neurules with uncertainty are calculated and compared to those of neurules and a backpropagation neural network (BPNN).
For the experiments we used a Pentium IV system operating at 4 GHz.
The "neurules with uncertainty" are compared with "neurules without considering uncertainty" in terms of runtime, number of computations and convergent rate. For uncertainty reasoning, we synthetically add uncertainty to some existing UCI Machine Learning Repository datasets [1], since there is no public repository of uncertain datasets available; we also considered the incomplete datasets available. The neurules-with-uncertainty approach was successfully implemented and tested with various datasets, such as breast cancer Wisconsin (diagnosis), acute sarcophagus, acute inflammations, nursery, car, cancer, lenses and iris, all taken from [1]. Two rule bases (RB1 and RB2), related to the diagnosis of bone diseases, were also considered according to [16, 20]; they consist of 59 and 134 symbolic rules, respectively. The rule bases were preprocessed before use, and rules with the same conclusion were grouped. The uncertainty factor for incomplete data is calculated using the Dempster-Shafer theory by considering the sub-belief factors of the corresponding attribute. The sub-belief factors are assigned by interviewing experts, according to the problem's domain knowledge and the available datasets, and also using reports from various journals. The sub-belief factor set is preprocessed and provided for the computations during training. Before training, the data sets were preprocessed and the truth tables formed by grouping rules with the same conclusion; the corresponding rows of the truth tables constituted the patterns of the data sets. The produced neurules performed all inferences correctly.
Table 6 presents experimental results for the computational cost of neurules under uncertainty. The mean computational time required to draw the conclusions is reported as "Runtime" and the mean number of computations required to reach conclusions as "Computations". When considering uncertainty, neurules require somewhat more computations and runtime than neurules without uncertainty, but the overhead is reasonable.
Table 7 presents the experimental results comparing the performance of the inference mechanisms of 'neurules with uncertainty' and 'neurules without uncertainty' in terms of


Table 6 Comparison of inference mechanisms (Computational Cost)

DATA SETS Neurules with Uncertainty Neurules without Uncertainty

Runtime(msec) Computations Runtime(msec) Computations

ACUTE (39) 0.042 12.01 0.041 11.56


NURSERY (12960) 0.189 142 0.187 138.91
CAR (1728) 0.155 118.27 0.154 117.96
RB1 0.298 122.7 0.256 121.5
RB2 0.513 284.01 0.497 282.60

convergent rate. The convergent rate was slightly lower than that of neurules without uncertainty, but the classification accuracy is higher. The "convergent rate" is the ratio of the number of necessary (i.e., the least required) inputs to the total number of asked inputs. Since we consider the set of sub-belief factors when calculating the uncertainty factor, the number of least required inputs is somewhat higher for neurules with uncertainty than for neurules without uncertainty. We used two of the datasets and the two rule bases for the comparison of convergent rates.
If the conditions or criteria used for classification are more specific and accurate for the problem, then the resulting classification will also be more specific and accurate; i.e., the parameters considered for the classification have a large impact on the resulting classification.

6.1 Generalization capabilities of neurules with uncertainty

The generalization capability of a learning method is how well it handles new input data after the system has been trained. Table 8 presents results regarding the classification accuracy (generalization) of neurules with uncertainty, i.e., considering continuous variables, on unseen test examples, compared with those of neurules without uncertain factors, i.e., with discrete factors alone. The presented network generalized quite well to unseen examples, i.e., to new patterns, since the network was trained to handle incomplete data.
From the corresponding datasets, 75 percent of the patterns were used for training and 25 percent for testing in four different runs. The whole data set was divided into four disjoint test sets, one used in each run of the 4-fold cross-validation. The mean of the accuracies obtained from the four tests is reported as the classification accuracy.
Hatzilygeroudis and Prentzas have already shown that neurules generalize quite well [20]. Their results show that neurules clearly outperform the adaline unit and are slightly worse than BPNNs, and they have explained the reason, namely the nature of the
Table 7 Comparison of inference mechanisms (Convergent Rate)

DATASET Neurules with Uncertainty Neurules without Uncertainty

ACUTE 0.635 0.639


LENSES 0.818 0.825
RB1 0.874 0.877
RB2 0.998 1.000


Table 8 Generalization of experimental results

DATASET Neurules with Uncertainty(%) Neurules without Uncertainty(%) BPNN (%)

NURSERY 99.72 99.31


CAR 94.99 93.63 95.72
LENSES 75.23 70.83 70.83
MONKS1 99.90 99.77 100
MONKS3 95.79 93.98 97.22
BREAST CANCER 97.15 96.73 98.10

three approaches: the adaline unit is a single unit performing classification, a neurule base consists of a number of autonomous adaline units (neurules), and a backpropagation neural network is a multilayer network containing hidden nodes useful for the computation of non-separable functions. By considering uncertainty, neurules can to some extent close this generalization gap with BPNNs; i.e., the generalization capabilities of the neurules, and moreover the accuracy of the inference mechanism, are considerably increased by the implementation of uncertainty management.

7 Conclusions

This paper describes the enhancement of neurules to handle uncertain data. We have implemented uncertainty management in neurules using the Dempster-Shafer theory by considering uncertainty as a fourth input value in the neurule's structure.
Sorting the uncertainties in the problem into a priori independent items of evidence and carrying out Dempster's rule computationally are the two closely related stages involved in implementing the Dempster-Shafer theory. In our experiment, we first sorted out the uncertain factors and determined the set of all possible states and its subsets, i.e., the frame of discernment. Then, according to the theory of evidence, belief masses were assigned to individual elements. In the second stage, by applying Dempster's rule of combination, we combined the independent sets of mass assignments.
With the implementation of uncertainty management, neurules can also process continuous values. That means that by adding uncertainty as a fourth input, we can address more problems with partial domain knowledge. Thus the generalization performance of the neurules is increased by considering uncertainty.
Our future work is directed toward implementing uncertainty in neurules in more efficient ways, such as 1) incorporating more efficient methods to implement uncertainty in neurules and 2) incorporating fuzzy logic.

Acknowledgements The author would like to thank Prof. Pushpak Bhattacharyya and Prof. D. Malathi for their support and guidance during this work. The author would also like to thank the Department of Science & Technology, Govt. of India for providing funding under the Woman Scientist Scheme (WOS-A) with the project code SR/WOS-A/ET/1075/2014.

Funding Information This work is funded by the Department of Science & Technology, Govt. of India under the Woman Scientist Scheme (WOS-A) with the project code SR/WOS-A/ET/1075/2014.


Compliance with Ethical Standards

Author’s Contributions The first author is the sole author of this work.

Competing Interests The author declares that there is no competing interest.

References

1. Asuncion, A., Newman, D.J.: UCI Machine Learning Repository, http://www.ics.uci.edu/mlearn/MLRepository.html. School of Information and Computer Science, University of California (2007)
2. Konar, A.: Artificial intelligence and soft computing : behavioral and cognitive modeling of the human
brain. CRC Press, Boca Raton (2000). ISBN 9780849313851-CAT#1385
3. Konar, A.: Computational intelligence principles, Techniques and applications, ISBN 3- 540-20898-4.
Springer, Berlin (2005)
4. Dempster-Shafer theory. Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory (2011)
5. Bader, S., Hitzler, P.: Dimensions of neural-symbolic integration: a structured survey. In: Artemov, S., Barringer, H., d'Avila Garcez, A.S., Lamb, L.C., Woods, J. (eds.) We Will Show Them: Essays in Honour of Dov Gabbay, vol. 1, pp. 167–194. College Publications (2005)
6. Bader, S., Hitzler, P., Holldobler, S., Witzel, A.: A fully connectionist model generator for covered first-
order logic programs. In: Proceedings of the 20th Int’l Joint Conf.Artificial Intelligence(IJCAI ’07),
pp. 666–671 (2007)
7. Bader, S., Hitzler, P., Holldobler, S.: Connectionist model generation: a first-order approach. Neurocom-
puting 71, 2420–2432 (2008)
8. Bookman, L., Sun, R.: Connection Science, special issue on integrating neural and symbolic processes,
eds., vol. 5, nos. 3/4 (1993)
9. Cloete, I., Zurada, J.M. (eds.): Knowledge-based neurocomputing. MIT Press, Cambridge (2000).
ISBN:9780262032742502
10. d'Avila Garcez, A.S., Broda, K., Gabbay, D.M.: Symbolic knowledge extraction from trained neural networks: a sound approach. Artif. Intell. 125, 155–207 (2001)
11. Fu, L.M.: Knowledge-Based Connectionism for Revising Domain Theories. IEEE Trans. Syst. Man
Cybern. 23(1), 173–182 (1993)
12. Gallant, S.I.: Connectionist expert systems. Comm. ACM 31(2), 152–169 (1988)
13. Gallant, S.I.: Neural network learning and expert systems. MIT Press, Cambridge (1993).
ISBN:9780262071451382
14. Shafer, G.: Dempster-Shafer theory. fitelson.org/topics/Shafer.pdf (1990)
15. Hammer, B., Hitzler, P.: Perspectives of Neural-Symbolic Integration, vol. 77. Springer, Berlin (2007).
ISBN:978-3-540-73954-8. https://doi.org/10.1007/978-3-540-73954-8
16. Hatzilygeroudis, I., Prentzas, J.: Neurules: Improving the performance of symbolic rules. Int’l J. AI Tools
(IJAIT) 9(1), 113–130 (2000)
17. Hatzilygeroudis, I., Prentzas, J.: Using a hybrid Rule-Based approach in developing an intelligent tutoring
system with knowledge acquisition and update capabilities. J. Expert Syst. Appl. 26(4), 477–492 (2004)
18. Hatzilygeroudis, I., Prentzas, J.: An efficient hybrid rule based inference engine with explanation
capability. American Association for Artificial Intelligence http://www.aaai.org (2001)
19. Hatzilygeroudis, I., Prentzas, J.: Constructing modular hybrid rule bases for expert systems. Published
in the International Journal of Artificial Intelligence Tools (IJAIT) 10(1-2), 87–105 (2001)
20. Hatzilygeroudis, I., Member, I., Prentzas, J.: Integrated rule-based learning and inference. IEEE Trans.
Knowl. Data Eng. 22(11), 1549–1562 (2010)
21. Hatzilygeroudis, I., Prentzas, J.: Multi-inference with Multi-neurules SETN 2002, LNAI 2308, pp. 30–
41. Springer, Berlin (2002)
22. Hatzilygeroudis, I., Prentzas, J.: Neuro-symbolic approaches for knowledge representation in expertSys-
tems. Published in the International Journal of Hybrid Intelligent Systems 1(3-4), 111–126 (2004)
23. Prentzas, J., Hatzilygeroudis, I., Tsakalidis, A.: Updating a hybrid rule base with new empirical source knowledge. In: Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (2002)
24. Prentzas, J., Hatzilygeroudis, I.: Rule-based update methods for a hybrid rule base. Data Knowl. Eng. 55, 103–128 (2005)
25. Sentz, K.: Jan./Feb combination of evidence in Dempster-Shafer theory. Binghamton University, April
(2002)


26. Ling, X., Rudd, W.G.: Combining opinions from several experts. Applied Artificial Intelligence 3, 439-
452 (1989)
27. Hilario, M.: An overview of strategies for neurosymbolic integration. In: Sun, R., Alexandre, F. (eds.) Connectionist-Symbolic Integration: From Unified to Hybrid Approaches. Lawrence Erlbaum (1997)
28. Prentzas, J., Hatzilygeroudis, I.: Construction of Neurules from training examples: a thorough inves-
tigation. In: Garcez, A., Hitzler, P., Tamburini, G. (eds.) Proceedings of the ECAI- 06 Workshop
Neural-Symbolic Learning and Reasoning (NeSy ’06), pp. 35–40 (2006)
29. Simon, C., Weber, P.: Bayesian networks implementation of the Dempster-Shafer theory to model reliability uncertainty. In: Proceedings of the First International Conference on Availability, Reliability and Security, pp. 788–793 (2006). ISBN 0-7695-2567-9
30. Sima, J., Cervenka, J. In: Cloete, I., Zurada, J.M. (eds.): Neural knowledge processing in expert systems,
Knowledge- based neurocomputing, pp. 419–466. MIT Press, Cambridge (2000)
31. Smets, P., Kennes, R.: The transferable belief model. Artif Intell 66, 191–243 (1994)
32. Xianyu, J.C., Juan, Z.C., Gao, L.J.: Knowledge-based neural networks and its application in discrete choice analysis, pp. 491–496 (2008)
33. Wermter, S., Sun, R.: Hybrid neural symbolic integration. In: Workshop as part of International Confe-
rence on Neural Information Processing Systems, Breckenridge, Colorado, December 4 and 5 (1998)
