
Methods and algorithms for determining the complexity of test questions for forming the question database of an adaptive knowledge test-control system
Ch.M. Khidirova
Tashkent University of Information Technologies, Tashkent, Uzbekistan
khcharos@gmail.com

Abstract – This paper proposes methods and algorithms for determining the complexity of test questions when forming the question database of an adaptive test-control system, so that the knowledge of students (pupils) can be assessed objectively in the course of training with learning systems.

Keywords: information weight of a feature, estimation algorithms, test control of knowledge, complexity of test questions.

I. INTRODUCTION

Since computers were first used in the learning process, special attention has been paid to the control of knowledge. Knowledge control consists of three stages: forming the questions for knowledge control from the control tasks stored in the database; presenting them to the student and receiving his response, possibly with feedback; and assessing the response for control purposes.

The set of questions of an adaptive knowledge control system is formed by determining their complexity. The difficulty of test questions in an adaptive test-control system can be determined using algorithms for calculating estimates. The algorithm based on the calculation of estimates was first proposed by Yu.I. Zhuravlev [2] and was later used to classify learners by level of preparedness [3,5,6].

The purpose of the work is the development of mathematical models, algorithms and the corresponding software for an adaptive test-control system using estimation algorithms.

II. THE PROGRESS OF RESEARCH

Computational algorithms for obtaining information weights and assessing question complexity [1]. Consider the set of objects S_1, S_2, ..., S_m summarized in the table T_nml (the partition of the objects into classes and the assignment of objects to the classes is given by the scheme shown in Table I).
TABLE I. PARTITION OF TEST TASKS (OBJECTS) INTO CLASSES BY FEATURE VALUES

The features D_1, ..., D_n describe each test question (e.g. multiple-choice type, weighted experts' marks, students' responses); X_uqi denotes the value of the i-th feature for the q-th test task of the u-th class K_u.

Objects (S_m)                               Features (D_n): 1, 2, ..., i, ..., n                              Classes (K_l)
1st test task (S_1)                         X_111   X_112   ...  X_11i   ...  X_11n                           1st class
2nd test task (S_2)                         X_121   X_122   ...  X_12i   ...  X_12n                           1st class
...                                         ...                                                               ...
m_1-th test task (S_m1)                     X_1m_11  X_1m_12  ...  X_1m_1i  ...  X_1m_1n                      1st class
(m_1+1)-th test task (S_{m_1+1})            X_2(m_1+1)1  X_2(m_1+1)2  ...  X_2(m_1+1)i  ...  X_2(m_1+1)n      2nd class
...                                         ...                                                               ...
m_2-th test task (S_m2)                     X_2m_21  X_2m_22  ...  X_2m_2i  ...  X_2m_2n                      2nd class
...                                         ...                                                               ...
(m_{l-1}+1)-th test task (S_{m_{l-1}+1})    X_l(m_{l-1}+1)1  X_l(m_{l-1}+1)2  ...  X_l(m_{l-1}+1)i  ...  X_l(m_{l-1}+1)n    l-th class
(m_{l-1}+2)-th test task (S_{m_{l-1}+2})    X_l(m_{l-1}+2)1  X_l(m_{l-1}+2)2  ...  X_l(m_{l-1}+2)i  ...  X_l(m_{l-1}+2)n    l-th class
...                                         ...                                                               ...
m_l-th test task (S_ml)                     X_lm1   X_lm2   ...  X_lmi   ...  X_lmn                           l-th class


Applying the voting procedures to the rows of the table, we calculate the values:

\Gamma_1(S_1), \Gamma_1(S_2), \ldots, \Gamma_1(S_{m_1})
\Gamma_2(S_1), \Gamma_2(S_2), \ldots, \Gamma_2(S_{m_1})        (1)
\ldots
\Gamma_l(S_1), \Gamma_l(S_2), \ldots, \Gamma_l(S_{m_1})

\Gamma_1(S_{m_1+1}), \Gamma_1(S_{m_1+2}), \ldots, \Gamma_1(S_{m_2})
\Gamma_2(S_{m_1+1}), \Gamma_2(S_{m_1+2}), \ldots, \Gamma_2(S_{m_2})        (2)
\ldots
\Gamma_l(S_{m_1+1}), \Gamma_l(S_{m_1+2}), \ldots, \Gamma_l(S_{m_2})

\ldots

\Gamma_1(S_{m_{l-1}+1}), \Gamma_1(S_{m_{l-1}+2}), \ldots, \Gamma_1(S_m)
\Gamma_2(S_{m_{l-1}+1}), \Gamma_2(S_{m_{l-1}+2}), \ldots, \Gamma_2(S_m)        (3)
\ldots
\Gamma_l(S_{m_{l-1}+1}), \Gamma_l(S_{m_{l-1}+2}), \ldots, \Gamma_l(S_m)

Now we delete the i-th column from the table T_nml and also remove it from the voting sets. The rows S_1, S_2, ..., S_m turn into the rows S_1^i, S_2^i, ..., S_m^i (the superscript i indicates that the i-th feature has been deleted from the row).

Carrying out the voting procedure for the reduced table, we calculate the values \Gamma_u(S_q^i), q = 1, 2, \ldots, m; u = 1, 2, \ldots, l, and order them into a scheme similar to schemes (1)-(3), the only difference being that S_q^i appears instead of S_q.

The number of votes \Gamma_u(S_q^i) will be smaller than the corresponding \Gamma_u(S_q). If the feature i (the deleted column i) is significant, the number of votes is, on average, greatly reduced, and vice versa. The degree of this reduction, treated properly, should be considered a measure of the importance of the studied feature.

The information weight of the i-th feature is introduced as follows. We form the differences

\Gamma_1(S_1) - \Gamma_1(S_1^i),\ \Gamma_1(S_2) - \Gamma_1(S_2^i),\ \ldots,\ \Gamma_1(S_{m_1}) - \Gamma_1(S_{m_1}^i)

and introduce the value

\Delta_1 = \frac{1}{m_1}\left\{\left[\Gamma_1(S_1) - \Gamma_1(S_1^i)\right] + \ldots + \left[\Gamma_1(S_{m_1}) - \Gamma_1(S_{m_1}^i)\right]\right\}.

We proceed similarly with the other classes:

\Delta_2 = \frac{1}{m_2 - m_1}\left\{\left[\Gamma_2(S_{m_1+1}) - \Gamma_2(S_{m_1+1}^i)\right] + \ldots + \left[\Gamma_2(S_{m_2}) - \Gamma_2(S_{m_2}^i)\right]\right\},
\ldots
\Delta_l = \frac{1}{m - m_{l-1}}\left\{\left[\Gamma_l(S_{m_{l-1}+1}) - \Gamma_l(S_{m_{l-1}+1}^i)\right] + \ldots + \left[\Gamma_l(S_m) - \Gamma_l(S_m^i)\right]\right\}.

The value

p(i) = \Delta_1 + \Delta_2 + \ldots + \Delta_l = \sum_{u=1}^{l}\Delta_u        (4)

is called the information weight of the i-th feature. Finally, expression (4) can be written as

p(i) = \sum_{u=1}^{l}\frac{1}{m_u - m_{u-1}}\sum_{q=m_{u-1}+1}^{m_u}\left[\Gamma_u(S_q) - \Gamma_u(S_q^i)\right].        (5)

If the votes are counted using the formula

\Gamma_u(\tilde{S}) = \sum_{q=m_{u-1}+1}^{m_u} r(\tilde{S}, S_q),

the minimal number of operations is required.
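To make the computation concrete, the following Python sketch implements formulas (1)-(5) under a simplifying assumption: the proximity function r counts the number of coinciding feature values between two rows. The paper does not fix a particular r, so this choice, the helper names (r, gamma, information_weight) and the toy table at the end are purely illustrative.

from itertools import compress

def r(a, b, mask):
    # proximity: number of coinciding feature values among the kept features
    return sum(x == y for x, y in zip(compress(a, mask), compress(b, mask)))

def gamma(s, class_objects, mask):
    # Gamma_u(S): total votes given to row s by the objects of one class
    return sum(r(s, q, mask) for q in class_objects)

def information_weight(objects, labels, i):
    # p(i) of formula (5): per-class average loss of votes when the i-th column is deleted
    n = len(objects[0])
    full_mask = [True] * n
    reduced_mask = [True] * n
    reduced_mask[i] = False                      # delete the i-th feature (column)
    p = 0.0
    for u in sorted(set(labels)):
        class_objs = [s for s, lab in zip(objects, labels) if lab == u]
        loss = sum(gamma(s, class_objs, full_mask) - gamma(s, class_objs, reduced_mask)
                   for s in class_objs)
        p += loss / len(class_objs)
    return p

# toy table T_nml: four test tasks, three features, two classes (invented data)
objects = [(1, 0, 1), (1, 1, 1), (0, 0, 0), (0, 1, 0)]
labels = [1, 1, 2, 2]
print([information_weight(objects, labels, i) for i in range(3)])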
Definitions of the difficulty of test questions by evaluating the information weight (objectivity) of experts [4,5]. The complexity of test questions can also be determined in a simpler way, without using estimation algorithms. Each expert assesses the difficulty of the test questions in his own way (Table II).

TABLE II. DETERMINATION OF THE INFORMATION WEIGHT (OBJECTIVITY) OF EXPERTS FOR FORMING THE DATABASE OF AN ADAPTIVE TEST-CONTROL SYSTEM

Experts            1st expert  2nd expert  3rd expert  ...  (n-1)th expert  nth expert
1st expert         10          X12         X13         ...  X1(n-1)         X1n
2nd expert         X21         10          X23         ...  X2(n-1)         X2n
3rd expert         X31         X32         10          ...  X3(n-1)         X3n
...                ...         ...         ...         ...  ...             ...
(n-1)th expert     X(n-1)1     X(n-1)2     X(n-1)3     ...  10              X(n-1)n
nth expert         Xn1         Xn2         Xn3         ...  Xn(n-1)         10
Experts' rating    Y1          Y2          Y3          ...  Y(n-1)          Yn

The rating of each expert is calculated as

Y_i = \frac{1}{10\,n}\sum_{j=1}^{n} x_{ij},        (6)

where x_ij is the rating given to the i-th expert by the j-th expert, and x_ii = 10 (i = 1, ..., n).

Using (6), the objectivity of the experts is determined, and the assessments of the test questions are taken from the experts who received the maximum rating.
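A minimal sketch of formula (6) follows; the 3x3 rating matrix is invented for illustration, and the function name expert_ratings is not from the paper.

# Formula (6): the objectivity rating Y_i of the i-th expert is the sum of the
# ratings x_ij given to him by all experts (x_ii = 10), normalised by 10*n.

def expert_ratings(x):
    """x[i][j] is the rating given to the i-th expert by the j-th expert."""
    n = len(x)
    return [sum(x[i]) / (10.0 * n) for i in range(n)]

x = [
    [10,  8,  9],   # ratings received by the 1st expert (self-rating 10)
    [ 7, 10,  8],   # ratings received by the 2nd expert
    [ 9,  9, 10],   # ratings received by the 3rd expert
]
y = expert_ratings(x)
best = max(range(len(y)), key=lambda i: y[i])
print(y, "-> most objective expert:", best + 1)
# The difficulty assessments of the expert(s) with the maximum rating
# are then taken into the question database.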
III. THE RESULTS AND DISCUSSION

Checking the quality of test items using students' responses. In all known theories of testing, testing is regarded as a process of confrontation between a subject and the task offered to him. If the subject is denoted by the symbol i and the task by the symbol j, then the outcome of the confrontation is estimated by the score X_ij. The value of this score depends on the ratio of the subject's level of knowledge to the difficulty level of the task, on the chosen units, and on a pre-accepted agreement (convention) about what is considered a "victory" of the subject or of the task, and whether a draw is acceptable.

In a simplified approach, only two outcomes are usually considered: victory or defeat. If the subject copes with the task, he is given one point for the win; in such cases we write X_ij = 1. If he cannot cope, he is given a score of zero, X_ij = 0.

To check the properties of the test tasks and, as part of transforming the test forms into test tasks, some calculations were performed [5]. The results are shown in Table III.
TABLE III. TEST RESULTS.
Subject i  X1 X2 X3 X4 X5 X6 X7 X8 X9 X10  Yi  pi  qi  pi/qi  ln(pi/qi)
1 1 1 1 0 1 1 1 1 1 1 9 .90 .10 9 2.20
2 1 1 0 1 1 1 1 1 1 0 8 .80 .20 4 1.39
3 1 1 1 1 0 1 1 0 1 0 7 .70 .30 2.33 0.85
4 1 1 1 1 0 1 0 1 0 0 6 .60 .40 1.50 0.40
5 1 1 1 1 1 1 0 0 0 0 6 .60 .40 1.50 0.40
6 1 1 1 1 0 0 1 0 0 0 5 .50 .50 1.00 0
7 1 1 0 1 1 0 1 0 0 0 5 .50 .50 1.00 0
8 1 1 1 1 1 0 0 0 0 0 5 .50 .50 1.00 0
9 1 0 1 0 1 1 0 0 0 0 4 .40 .60 0.66 -0.42
10 0 1 1 0 0 0 0 1 0 1 4 .40 .60 0.66 -0.42
11 1 1 1 0 0 0 0 0 0 0 3 .30 .70 0.43 -0.84
12 1 1 0 0 0 0 0 0 0 0 2 .20 .80 0.25 -1.39
13 1 0 0 0 0 0 0 0 0 0 1 .10 .90 0.11 -2.21
Rj          12     11      9      7      6      6      5      4      3      2     65
Wj           1      2      4      6      7      7      8      9     10     11
pj        .923   .846   .692   .538   .462   .462   .385   .308   .231   .154      5
qj        .077   .154   .308   .462   .538   .538   .615   .692   .769   .846
pjqj      .071   .130   .213   .248   .248   .248   .236   .213   .178   .130
qj/pj     .083   .182   .445   .859  1.164  1.164  1.597  2.246  3.329  5.493
ln(qj/pj) -2.489 -1.704 -0.810 -0.152  0.152  0.152  0.468  0.809  1.202  1.703

In this matrix, two orderings have been carried out:

• one concerns the subjects: the first row shows the scores of the most successful subject, the second row those of the next most successful, and so on, with the total score decreasing down the table;

• the other ordering concerns the tasks: in the first place is the easiest task, the one with the greatest number of correct answers, then the next, and so forth, down to the last, hardest task, with the fewest correct answers.

Because of its simplicity the index Rj is convenient, but only as long as groups with different numbers of subjects (N) are not compared. Therefore, to obtain a comparable characteristic, Rj is divided by the number of subjects in each group:

pj = Rj / N.

The result is a statistical indicator normalized by the number of subjects: the proportion of correct answers, pj.

In recent years, the opposite statistic has come to be associated with the index of task difficulty: the share of incorrect answers (qj), calculated as the ratio of the number of wrong answers (Wj) to the number of subjects (N):

qj = Wj / N.

Naturally, it is assumed that

pj + qj = 1.
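As a sketch of how these per-subject and per-task statistics are obtained, the following Python fragment recomputes Yi, pi, Rj, Wj, pj and qj from the binary response matrix of Table III (the 0/1 values are copied from the table; the variable names are illustrative).

# Recomputing the derived columns of Table III from the binary matrix X
# (rows = subjects, columns = tasks): Yi, pi = Yi/k, Rj, Wj, pj = Rj/N, qj = Wj/N.

X = [
    [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0, 1, 0, 1],
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]
N, k = len(X), len(X[0])                 # 13 subjects, 10 tasks

Y   = [sum(row) for row in X]            # correct answers per subject (Yi)
p_i = [y / k for y in Y]                 # proportion of correct answers per subject
R   = [sum(col) for col in zip(*X)]      # correct answers per task (Rj)
W   = [N - r for r in R]                 # wrong answers per task (Wj)
p_j = [r / N for r in R]                 # proportion of correct answers per task
q_j = [w / N for w in W]                 # proportion of wrong answers per task

print("Yi:", Y)                          # 9 8 7 6 6 5 5 5 4 4 3 2 1
print("Rj:", R)                          # 12 11 9 7 6 6 5 4 3 2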
The modern approach to the concept of "difficulty". Modern technology of adaptive learning and control uses a different measure of task difficulty, equal to ln(qj/pj). This measure of difficulty, obtained on the scale of natural logarithms, is called the logit of task difficulty. A symmetric logarithmic evaluation of the level of knowledge is introduced as well, the so-called logit of the level of knowledge, equal to ln(pi/qi), where pi is the proportion of correct answers of subject i, calculated by the formula pi = Yi / k, with Yi the number of correct answers of subject i and k the total number of tasks.
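Continuing the sketch above, the logit measures follow directly from these proportions; the two helper functions below are illustrative, not part of the paper.

# Logit difficulty of a task and logit level of knowledge of a subject,
# as defined in the text: ln(qj/pj) and ln(pi/qi).
import math

def logit_difficulty(p_j):
    """Task difficulty on the logit scale, ln(qj/pj) with qj = 1 - pj."""
    return math.log((1.0 - p_j) / p_j)

def logit_knowledge(p_i):
    """Subject's level of knowledge on the logit scale, ln(pi/qi)."""
    return math.log(p_i / (1.0 - p_i))

# e.g. the first task of Table III (pj = .923) and the first subject (pi = .90):
print(round(logit_difficulty(0.923), 3))   # -2.484; Table III lists -2.489 (rounded inputs)
print(round(logit_knowledge(0.90), 3))     #  2.197; listed as 2.20 in Table III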
The logarithmic evaluation of such seemingly disparate phenomena as the actual level of knowledge of each subject and the level of difficulty of each task led to the apparently simple idea of comparing them by subtraction. The effectiveness of such a comparison, however, has had an enormous influence on the development of foreign pedagogical theory and practice.
For the first time it became possible to directly compare any set of tasks with any number of subjects. The computer compares the logit of the task with the logit of knowledge and, on this basis, selects the next task in systems of adaptive learning and knowledge control.
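The selection step can be sketched as follows. Picking the not-yet-presented task whose difficulty logit is closest to the learner's current knowledge logit is one plausible reading of the comparison described above, not the paper's prescribed algorithm; the function name next_task is invented for the example.

# A hedged sketch of adaptive task selection: choose the unused task whose
# difficulty logit is closest to the learner's knowledge logit.

def next_task(knowledge_logit, task_difficulty_logits, used):
    """Return the index of the closest-matching unused task, or None."""
    candidates = [j for j in range(len(task_difficulty_logits)) if j not in used]
    if not candidates:
        return None
    return min(candidates,
               key=lambda j: abs(task_difficulty_logits[j] - knowledge_logit))

# difficulty logits ln(qj/pj) of the ten tasks of Table III
difficulties = [-2.489, -1.704, -0.810, -0.152, 0.152,
                 0.152,  0.468,  0.809,  1.202, 1.703]
print(next_task(0.40, difficulties, used={0, 1}))   # -> task index 6 (logit 0.468)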
The requirement of increasing difficulty is the most important system-forming feature of a test task. If the test is built as a system of tasks of increasing difficulty, then there is no place in it for tasks whose measure of difficulty is unknown.

IV. CONCLUSION

The computational schemes described above for calculating information weights by means of voting procedures can be implemented in software in different ways, depending on the type of computer and its mathematical software.

REFERENCES
[1] Yu.I. Zhuravlev, M.M. Kamilov, Sh.E. Tulyaganov. Algorithms for calculating estimates and their application. Tashkent: Fan, Uzbek SSR, 1974, 120 p. (in Russian).
[2] L.V. Zaitseva, L.P. Novitsky, V.A. Gribkova. Development and application of computer-based automated training systems. Ed. by L.V. Nitsetsky. Riga: Zinatne, 1989, 174 p. (in Russian).
[3] Ya.A. Fomin. Pattern recognition: theory and applications. 2nd ed. Moscow: FAZIS, 2012, 429 p. (in Russian).
[4] P. Brusilovsky. Adaptive and Intelligent Technologies for Web-based Education. In C. Rollinger and C. Peylo (eds.), Special Issue on Intelligent Systems and Teleteaching, Künstliche Intelligenz, 2004.
[5] Ch.M. Khidirova, Sh.R. Davronov. Building systems of quality analysis adaptive test control of knowledge. Young Scientist, Kazan: Publisher Young Scientist, no. 19 (part 1), 2016, pp. 111-114.
[6] S. Maslovskyi, A. Sachenko. Adaptive test system of student knowledge based on Neural Networks. The 8th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 24-26 September 2015, Warsaw, Poland, pp. 940-944.
