
Domain   LFC             Assistant-I     Assistant-R     Naive Bayes     k-NN

Artificial data
INF1     86.0 +/- 5.1    90.1 +/- 3.5    88.8 +/- 3.8    91.6 +/- 3.1    89.0 +/- 3.6
INF2     67.1 +/- 6.3    55.4 +/- 9.8    68.7 +/- 7.8    32.1 +/- 4.5    56.8 +/- 6.3
TREE     75.8 +/- 5.4    79.2 +/- 5.7    78.8 +/- 6.2    69.0 +/- 5.9    68.2 +/- 5.3
PAR2     93.6 +/- 3.3    74.9 +/- 7.9    95.7 +/- 2.8    56.7 +/- 5.7    79.4 +/- 4.3
PAR3     84.1 +/- 10.1   65.6 +/- 11.0   95.7 +/- 2.1    55.5 +/- 5.2    60.4 +/- 6.7
PAR4     69.4 +/- 13.8   59.3 +/- 6.3    94.8 +/- 1.6    55.1 +/- 3.4    61.9 +/- 3.8
BOOL     89.8 +/- 1.6    89.8 +/- 1.6    89.8 +/- 1.6    66.6 +/- 2.5    89.8 +/- 1.6
LED      70.8 +/- 2.3    71.7 +/- 2.4    71.7 +/- 2.2    73.9 +/- 2.1    73.9 +/- 2.1
KRK1     98.7 +/- 1.2    98.6 +/- 1.2    98.6 +/- 1.2    91.6 +/- 1.4    92.2 +/- 1.9
KRK2     86.0 +/- 2.1    66.6 +/- 3.1    70.1 +/- 3.3    64.8 +/- 2.1    70.7 +/- 1.7

Real-life medical data
PRIM     37.1 +/- 4.9    40.8 +/- 5.1    38.9 +/- 4.7    48.6 +/- 4.1    42.1 +/- 5.0
BREA     76.1 +/- 4.3    76.8 +/- 4.6    78.5 +/- 4.5    78.7 +/- 4.5    79.5 +/- 2.7
LYMP     82.4 +/- 5.2    77.0 +/- 5.5    77.0 +/- 5.9    84.7 +/- 4.2    82.6 +/- 5.7
RHEU     60.6 +/- 4.7    64.8 +/- 4.0    63.8 +/- 4.9    66.5 +/- 4.0    66.0 +/- 3.6
HEPA     79.0 +/- 5.3    77.2 +/- 5.3    82.3 +/- 4.9    86.1 +/- 3.9    82.6 +/- 4.9
DIAB     69.2 +/- 3.0    71.1 +/- 2.8    71.5 +/- 2.6    76.3 +/- 2.4    73.9 +/- 4.9
HEAR     77.3 +/- 5.2    75.4 +/- 4.0    77.6 +/- 4.5    84.5 +/- 3.0    82.9 +/- 3.7

Real-life non-medical data
SOYB     83.6 +/- 4.8    91.9 +/- 2.5    89.6 +/- 2.7    89.4 +/- 1.6    84.0 +/- 1.9
IRIS     95.0 +/- 3.8    95.7 +/- 3.7    95.2 +/- 2.6    96.6 +/- 2.6    97.0 +/- 2.1
SAT      81.9 +/- 0.9    77.8 +/- 0.7    81.5 +/- 0.9    80.1 +/- 0.6    90.5 +/- 0.6
VOTE     93.2 +/- 2.6    95.9 +/- 1.5    95.5 +/- 1.5    90.1 +/- 1.8    92.5 +/- 2.0

Results from:
Igor Kononenko, Edvard Šimec, Marko Robnik-Šikonja, "Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF", Applied Intelligence 7, 39-55 (1997).

Comments on results:
When SAT is run with the full set of examples, k-NN achieves 92.5% accuracy. LFC, Assistant-I, and Assistant-R are machine learning algorithms that build decision trees.
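The "accuracy +/- deviation" figures above are the usual output of cross-validation: train on all folds but one, score on the held-out fold, then report the mean and spread of the per-fold accuracies. The sketch below reproduces that protocol with a minimal stdlib-only k-NN on a hypothetical two-cluster dataset (the data, fold count, and k are illustrative assumptions, not taken from the paper):

```python
import random
import statistics

def knn_predict(train, x, k=3):
    """Majority vote among the k training points nearest to x (squared Euclidean)."""
    neighbours = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    labels = [lab for _, lab in neighbours]
    return max(set(labels), key=labels.count)

def cv_accuracy(data, folds=10, k=3, seed=0):
    """k-fold cross-validation; returns (mean, std) of per-fold accuracy,
    the two numbers reported as 'acc +/- dev' in the table above."""
    data = data[:]
    random.Random(seed).shuffle(data)
    scores = []
    for f in range(folds):
        test = data[f::folds]                                  # every folds-th example
        train = [d for i, d in enumerate(data) if i % folds != f]
        acc = sum(knn_predict(train, x, k) == y for x, y in test) / len(test)
        scores.append(acc)
    return statistics.mean(scores), statistics.pstdev(scores)

# Hypothetical, well-separated two-class data (not one of the paper's domains).
rng = random.Random(42)
data = [((rng.gauss(0, 1), rng.gauss(0, 1)), 0) for _ in range(50)] \
     + [((rng.gauss(5, 1), rng.gauss(5, 1)), 1) for _ in range(50)]

mean, std = cv_accuracy(data)
print(f"{100 * mean:.1f} +/- {100 * std:.1f}")
```

On such an easy synthetic problem k-NN scores near 100%; on the table's hard parity domains (PAR2-PAR4) the same protocol yields the much lower figures shown, which is the myopia the paper addresses.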
