
This article has been accepted for publication in IEEE Transactions on Learning Technologies. This is the author's version, which has not been fully edited; content may change prior to final publication. Citation information: DOI 10.1109/TLT.2023.3267518

An Innovative Strategy to Anticipate Students' Cheating: The Development of Automatic Essay Assessment on The "MoLearn" Learning Management System

Julianto Lemantara, Bambang Hariadi, Dewiyani Sunarto, Tan Amelia, Tri Sagirani

Julianto Lemantara, Dewiyani Sunarto, Tan Amelia, and Tri Sagirani are with the Department of Information System, Dinamika University, Surabaya, East Java, Indonesia (e-mail: julianto@dinamika.ac.id; dewiyani@dinamika.ac.id; meli@dinamika.ac.id; tris@dinamika.ac.id). Bambang Hariadi is with the Department of Film and Television Production, Dinamika University, Surabaya, East Java, Indonesia (e-mail: bambang@dinamika.ac.id).
Abstract – A quick and effective learning assessment is needed to evaluate the learning process. Many tools currently offer automatic assessment for subjective and objective questions; however, there is no free tool that provides plagiarism detection among students for subjective questions in a Learning Management System (LMS). This research aims to create an automatic essay assessment in the MoLearn LMS that can check students' answers. At the same time, the LMS can also anticipate online cheating. This research employed two methods: (1) the System Development Life Cycle (SDLC) waterfall model and (2) the Latent Semantic Analysis (LSA) method. The former was used to design and create the essay assessment application, and the latter was employed to check the essay answers and automatically detect students' plagiarism. The results showed that the MoLearn LMS successfully detects students' plagiarism, leading to better LMS innovation. The automatic essay assessment takes 5.52 seconds per question, accelerating grading by about 8.04 times compared with manual grading.

Index Terms – Plagiarism, Automatic Essay Assessment, MoLearn LMS, Latent Semantic Analysis.

I. INTRODUCTION

In the education world, cheating is a wrong or unjustifiable thing to do [1], [2]. Cheating has been a serious issue for many years around the world, including in Indonesia [2]. Therefore, one of the positive values cultivated in many schools in Indonesia is avoiding cheating on examinations. This notion has been introduced because of the massive number of cheating cases found at schools during examination periods [3]. According to Marksteiner and Jamaluddin, this cheating behaviour emerges due to the absence of an effective system that can accurately identify cheating. Information technology might become the answer to establishing honest learning because it can detect cheating easily [3], [4].

Easy access to information through gadgets is the primary element in seeking materials. This causes adverse effects at schools. For instance, students can easily copy others' works and claim the works as theirs [5]. Meanwhile, Kocdar et al. state that students feel comfortable submitting examinations online, although online tests could also lead to cheating and plagiarism [6]. Sorea et al. support this notion [7]. In their research, the tendency to cheat and plagiarise someone's work is higher in online examinations than when the test is conducted offline. Cheating and plagiarism are increasing due to technology that provides users with access to information and evaluation [8]. Easy access to information is the primary reason for cheating and plagiarism cases, especially when students copy others' completed works [9]. A survey conducted by Fish and Hura found surprising facts [10]. When asked about cheating, 51.2% of students stated that plagiarising their friends' answers is not a big problem. This is problematic because those students do not believe that cheating or plagiarism is wrong. Ultimately, cheating and plagiarism betray academic integrity.

Two kinds of tests are often used in an examination: multiple-choice objective tests and subjective essay tests [11], [12]. Scouller and Febrita state that essay tests have more benefits than multiple-choice tests [13], [14]. This is because essay tests can measure students' understanding and train their higher-order thinking skills. Nevertheless, essay tests require more time to assess. Additionally, teachers need more concentration in the assessing process to maintain the consistency and objectivity of the results. The assessing process, which takes longer than for multiple-choice tests, could trigger boredom, especially when teachers need to assess many students [15], [16].

Detecting plagiarism in an essay test also demands that teachers spend more time and effort [16]. This exhausting process leads to three main problems when teachers manually assess essays: (1) the process takes a long time, (2) the objectivity and consistency of the assessment cannot be fully guaranteed, and (3) plagiarism and cheating cannot be easily detected [17], [18]. On the other hand, quick assignment feedback is needed. The stimulus theory posited by Thorndike supports this. Thorndike postulates three laws of learning: the law of readiness, the law of exercise, and the law of effect [19]. Regarding the law of effect, feedback on students' assignments becomes a trigger for students to learn more, especially when they get positive results [20]. Some studies support this law of learning theory. For instance, Wening found that one of the learning elements that can enhance learning quality is the effectiveness of the feedback given to the students [21]. Hattie stated that immediate feedback is beneficial at the process level (i.e., engaging in processing classroom activities) [22].


In addition, the effects of immediate feedback are likely to be more powerful for immediate error correction during task acquisition.

Many educational institutions have used and developed information technology for learning during the COVID-19 pandemic [23]–[25]. Information technology holds a prominent role in education sustainability in this pandemic era. All learning activities should depend on technology to overcome the social distance between students and teachers [26]–[28]. Therefore, since the pandemic era, awareness of educational technology has increased [29], [30]. Educators should be able to use technology to prepare, conduct, and evaluate learning activities [31].

Currently, Learning Management Systems (LMSs), such as Moodle, Edmodo, Canvas, Blackboard, etc., do not provide tools that can be used to automatically check students' essays [14], [32]. If teachers want automated essay assessment in their LMS, they need to install plugins. However, the plugins have several limitations in assessing extended essays automatically. For instance, a possible plugin used in Moodle is called iAssign. This plugin's limitations are: (1) the authoring process is still complicated, (2) iAssign integrates interactive learning modules (ILM) into Moodle without enclosing the Moodle questionnaire, and (3) the quiz database for questions and questionnaires is not integrated with the Moodle repositories [33]. As another example, the "Essay (Auto-Grade)" plugin for Moodle can only assess an essay based on designated keywords or phrases, and it cannot detect plagiarism done by students [34]. On the other hand, there are many tools for checking plagiarism, such as Turnitin, iThenticate, and others, but they are paid and not integrated with an LMS.

Some research has investigated these plugins. For instance, a study conducted by Febrita and Mahmudy analysed the plugins using Latent Semantic Analysis (LSA) [14]. In the pre-processing phase, stop-word removal and synonym checking were employed. As a result, this study found that the accuracy of automatic essay assessment was not high; it was only about 54.93%. Additionally, Thomas et al. investigated an automatic essay assessment plugin using LSA, positional indexing, and spell-checking [32]. The findings show that the automatic essay assessment system can be successfully integrated with an existing LMS; however, no accuracy level was reported in that study. Another study that investigates automated essay systems through the LSA method is the study by Rao et al. [15]. This study reported that the accuracy of the plugins was 83.36% after assessing more than 600 essay answers.

Some studies develop the LSA method to better measure the accuracy of automatic essay assessment systems or plugins. One study created an automated assessment system called SAGE [35]; this system's accuracy of 83.25% is higher than that of the other studies. Zhang et al. also conducted research to measure the accuracy of automatic essay assessment by using incremental LSA rather than traditional LSA [36]. As a result, the accuracy was about 88%. The incremental LSA method proposed in that research achieved shorter processing time and used memory more efficiently than traditional LSA.

Based on the results of previous studies, LSA has a good accuracy level in automatic essay assessment. Several other studies have also stated that LSA has better assessment results than other methods, such as Latent Dirichlet Allocation (LDA) and the Vector Space Model (VSM). The research by Hoblos stated that LSA-based modeling had more promising (better) results than LDA-based modeling [37]; that study compared the assessment results between human graders and automated assessment systems. Furthermore, the research by Kalmukov in 2022 showed that, in most cases, LSA outperforms VSM and can even slightly outperform explicit document description by a taxonomy of keywords if the term-document matrix is composed of TF-IDF values [38].

Automatic essay test assessment was successfully implemented in these previous studies, and several of them also integrated the automated essay assessment system into an LMS. However, the systems established in the previous studies only focus on using the LSA method for the grading system; thus, their focus is only on how accurate the system is. The previous studies do not shift their focus to exploring the detection of plagiarism. Therefore, this research focuses on the measurement of essay tests using the LSA method, but it also attempts to detect plagiarism in students' answers. With the MoLearn LMS established, teachers can assess essays quickly and objectively, so students can receive test results faster. MoLearn is the name of an LMS created by the researchers that is capable of conducting classroom management, learning content management, and evaluation of learning outcomes.

II. METHODS

In developing the application in this research, the System Development Life Cycle (SDLC) waterfall model was applied. The waterfall model was selected because the development team was small and the project did not involve rapidly changing user requirements [39]–[41]. Moreover, this method was frequently applied and found effective in previous research carried out by the research team [12], [42]. The stages of the waterfall model are described as follows [39]–[41], [43]:

Fig. 1. The Waterfall Model

A. Requirements Gathering and Analysis Stage

In this stage, the data required for the research was collected; data collection was carried out using survey and interview techniques. Then, an analysis of what the system needed, in terms of both functional and non-functional needs, was conducted.

After conducting a survey by distributing questionnaires to 20 teachers and 30 students, the strengths and weaknesses of MoLearn were obtained, as shown in Table I.


TABLE I
THE STRENGTHS AND WEAKNESSES OF MOLEARN

Weaknesses of MoLearn
Teacher:
1. Essay correction is still done manually, which makes the assessment process longer.
2. Plagiarism among students is difficult to detect.
3. The application interface is not user-friendly, particularly for senior teachers.
Student:
1. Students are curious about the test result, as only the objective test result is generated.
2. Questions are slow to load during the exam.

Strengths of MoLearn
Teacher:
1. The sequence of questions and the answer options is randomized, which minimizes cheating attempts.
2. The exam history and its results are well recorded.
3. Online test results can be exported to Excel or PDF files.
Student:
1. The application is easy to learn and use.
2. The application is easily accessed; it can be accessed anytime and from any place, from both the website and the Android application.

Based on the survey, two primary challenges in assessing essays were found. These are:
a) Teachers need more time to assess essays, because they must carefully check students' answers, which are often long.
b) Teachers struggle to detect plagiarism because there is no tool to see the similarities between students' answers.

Other issues of the MoLearn LMS were not directly related to the functions of a system. Therefore, this research only focused on the two main issues mentioned. The solution proposed to solve the issues was to build a Learning Management System (LMS) with the following two main features:
1) Assessment of the essay test supported with a score recommendation, whereby the recommendation is given based on the level of similarity between the student's answer and the solution using the LSA method.
2) A report of students' plagiarism that applies the LSA method. The difference between this feature and the essay test assessment feature lies in the content that is compared. While the essay test assessment compares a student's essay with the teacher's solution, the plagiarism report compares one student's essay answer with other students' answers.

These two main features/functions are the strengths of this research and distinguish it from other, similar research. They are discussed further in the functional requirements analysis; see Table II and Table III.

TABLE II
ANALYSIS OF FUNCTIONAL REQUIREMENT: AUTOMATIC ESSAY TEST ASSESSMENT

Function: Automatic Essay Test Assessment
Description: To recommend the score of the essay test to the main actor; the actor then finalizes the essay test assessment
Actor: Teacher
Priority: High
Input: Data of students' answers, solution, data of final essay result
Initial condition: The essay test result is not available yet
Normal flow:
1. Actor selects the test menu
2. Actor selects the class whose essays will be assessed
3. Actor presses the button to check the answers
4. System displays temporary results of all students attempting the exam
5. Actor can press the button to see the details of the students' answers
6. System displays the questions, students' answers, solution, result of each objective-test item, and a score recommendation for each essay answer; the recommendation is produced with the LSA method by identifying the similarity between the student's answer and the solution
7. Actor verifies the final result of each essay answer
8. Actor presses the button to save the test result
Final condition: The recommendation of the essay test result is displayed

TABLE III
ANALYSIS OF FUNCTIONAL REQUIREMENT: STUDENTS' PLAGIARISM REPORT

Function: Plagiarism report of each student's answer
Description: To inform the main actor of the list of students suspected of cheating because their answers are identical while the level of the students' correct answers relative to the solution is relatively low
Actor: Teacher
Priority: High
Input: Data of students' answers, solution, maximum score of a student's correct answer, and minimum score of a student's plagiarism level
Initial condition: The result of the essay test is not available yet
Normal flow:
1. Actor selects the test menu
2. Actor selects the class whose plagiarism level will be checked
3. Actor presses the answer assessment button
4. System displays the results of all students attempting the exam
5. Actor can set the maximum mark of the students' correct answer and the minimum mark of the plagiarism level
6. Actor presses the plagiarism report button
7. System displays the student's number, the student's name, the question number, the student's essay answer to the particular question, and the names of other students who have similar answers; the system adjusts the report based on the marks set by the actor
Final condition: The plagiarism report is displayed

In addition to the two main features/functions, the MoLearn LMS is equipped with other features to support the implementation of tests or evaluations.


Fig. 2 shows the flowchart of the overall solution proposed in this study; the two main features are marked with a blue background.

Fig. 2. The Overall Proposed Solutions

Besides the functional needs, there were other, non-functional needs of the MoLearn LMS, namely:
1) The frequency of downtime in one semester is at most two times.
2) The essay test score recommendation takes at most 10 seconds for each question and each student.
3) User authentication uses a username and a password that are encrypted with the md5 function (a brief illustration follows this list).
4) MoLearn can only be accessed by teachers registered in the MoLearn application and by students who are registered by the school admin.
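As a minimal illustration of the md5-based credential hashing mentioned in item 3 (shown here in Python for consistency with the other sketches in this article; the production system is written in PHP, where the built-in md5() function produces the same digest, and the example password is hypothetical):

```python
import hashlib

def md5_hash(password: str) -> str:
    """Return the hex md5 digest that would be stored instead of the plain-text password."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

print(md5_hash("rahasia123"))  # hypothetical password -> a 32-character hexadecimal string
```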
B. Design Stage

In this stage, the design of the system, consisting of the data flow design, database design, and user interface design, was carried out. Three designs were employed in this research:
a) Data flow design with Data Flow Diagrams (DFD). At the DFD context level, two entities have a role in processing online tests: students and teachers. Teachers provide the primary data in the form of questions, test settings (the test schedule, the number of questions, the value of each question, and the question format), and the final score for the essay test. Meanwhile, students are given lists of questions and need to provide answers, which are submitted to the system. The system then provides the score of the objective test to the students and teachers. Further, teachers are given a recommended essay score by the system, which they can use directly or adjust first. After the teachers save the score, students receive the final score accumulated from the objective test and essay scores.
b) Database design. Eight tables were used to support all functions: Soal ("Questions"), Kelas_Ujian ("Exam_Class"), Kelompok_Soal ("Questions_Group"), Soal_Ujian ("Exam_Questions"), Ujian_siswa ("Student_Exam"), Jawaban_Ujian_Siswa ("Student_Exam_Answers"), Sisipan ("Inserts"), and Stopwords.
c) User interface design. To stay focused, the user interface design in this research only covered the automated essay assessment and the plagiarism report between students' answers using the LSA method.

C. Coding/Implementation Stage

The activity in this stage was implementing the results of the analysis and design stages in a programming language. The programming language used in this research was PHP with the CodeIgniter framework, as the application built was a website-based application. The database used was MySQL. In this stage, the LSA algorithm was implemented to perform two tasks, i.e., automatic essay assessment and plagiarism detection in students' answers.

LSA is a technique that has been applied in numerous fields and industries, such as bioinformatics, language processing, signal processing, etc. [36]. LSA is a mathematical or statistical technique that is implemented to disclose the hidden correlation of contextual word usage in a document [44]. Another study, by Perkasa et al., also claimed that LSA is a method that is applied to a large and structured set of documents [45]. It is used to extract and represent the term or word meaning by using linear algebra calculations and statistics. LSA is a method of making vector-based term representations that is capable of capturing the semantics of documents (sentences) [11]. The stages of the LSA method implementation are as follows (a consolidated code sketch is given after the list, following Fig. 3):
1) Preprocessing. This is the most important stage in text mining. Preprocessing is capable of changing unstructured data into more structured data. This stage is done in order to clear the data of noise and reduce dimensionality, so that it is easier to process further [46].


The preprocessing stage consists of several processes, namely case folding, tokenizing, stopword removal (filtering), and stemming. In this research, case folding refers to the process of converting the text or document to lowercase, whereas tokenizing refers to splitting the text or document into the words that construct it. Filtering is the process of selecting the important words and omitting words that have only a general meaning, such as 'di' (in/on/at), 'yang' (that), 'dengan' (with), etc. Stemming refers to the process of changing each term into its base word; the stem (root word) is the part of a word that is left after the suffixes are removed.
2) Performing the Term Frequency-Inverse Document Frequency (TF-IDF) calculation. TF-IDF is a method to quantify the weight of each word, and it is generally used in information retrieval. It quantifies the relationship between a word/term and a document. TF-IDF is a statistical measurement used to evaluate the importance of a word or term in a document or corpus. In this method, each sentence is considered a document. The frequency of a word appearing in a document shows the importance of the word in that particular document, while the number of documents containing the word shows how common the word is. The weight of a word or term is higher if the word frequently appears in a document, and smaller if it appears in many documents [47]. The TF-IDF formula is used to measure the weight $W$ of each document with respect to a keyword:

$W_{dt} = tf_{dt} \times idf_t$

where:
$W_{dt}$ = weight of document $d$ with respect to term $t$,
$tf_{dt}$ = number of occurrences of the searched term in document $d$,
$idf_t$ = inverse document frequency, $\log(N/df)$,
$N$ = total number of documents,
$df$ = number of documents containing the searched term.
3) Conducting matrix decomposition with Singular Value Decomposition (SVD). SVD is the main algorithm in LSA for creating low-rank approximations; LSA uses SVD in its calculations [11]. SVD represents the semantic space in a matrix that is smaller than the original matrix but preserves similar values. SVD is a theorem of linear algebra that is capable of breaking a matrix into three new matrices: an orthogonal matrix $U$, a diagonal matrix $S$, and the transpose of an orthogonal matrix $V$. If $E$ is a real matrix, then the SVD of $E$ is:

$E = U S V^{T}$

4) Conducting a degree-2 (rank-2) approximation of the matrices $U$, $S$, and $V$. The values kept are all rows of the first two columns of $U$ and $V$, and the first two rows and first two columns of $S$.
5) Calculating the value of the $q$ matrix in the degree-2 approximation with the following formula:

$new\,q = q^{T} \times U_k \times S_k$

where:
$q^{T}$ = transpose of the $q$ matrix,
$U_k$ = the $U$ matrix restricted to its first two columns,
$S_k$ = the $S$ matrix restricted to its first two singular values.
6) Calculating the similarity values of the documents using the cosine similarity technique. Cosine similarity is a technique used to measure the similarity between two objects based on the similarity of their vector-space representations [48]. If two documents are identical, then the angle is 0 degrees and the similarity value is 1. On the other hand, if the two documents are totally different, then the angle is 90 degrees and the similarity value is 0 [11]. The cosine similarity formula for measuring the similarity between two documents is:

$\cos C = \dfrac{a_x b_x + a_y b_y}{\sqrt{a_x^2 + a_y^2}\,\sqrt{b_x^2 + b_y^2}}$

Furthermore, the LSA stages are summarized in Fig. 3.

Fig. 3. LSA Stages
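To make the six stages concrete, the sketch below walks through them for a student answer and an answer key. This is only an illustrative Python/NumPy sketch, not the MoLearn source code (the production system is written in PHP with CodeIgniter); the stopword subset, the naive suffix rules, the use of raw term counts for the query vector, and the corpus composition are all assumptions made for the example.

```python
import math
import re
from collections import Counter

import numpy as np

STOPWORDS = {"di", "yang", "dengan", "dan", "pada"}   # illustrative subset; MoLearn reads its own Stopwords table
SUFFIXES = ("kan", "nya", "an", "i")                  # very rough stand-in for a real Indonesian stemmer

def preprocess(text):
    """Stage 1: case folding, tokenizing, stopword filtering, naive suffix stemming."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    tokens = [t for t in tokens if t not in STOPWORDS]
    stems = []
    for t in tokens:
        for suf in SUFFIXES:
            if t.endswith(suf) and len(t) > len(suf) + 2:
                t = t[: -len(suf)]
                break
        stems.append(t)
    return stems

def tfidf_matrix(docs):
    """Stage 2: term-document matrix with W_dt = tf_dt * log(N / df_t)."""
    vocab = sorted({t for d in docs for t in d})
    n = len(docs)
    df = {t: sum(1 for d in docs if t in d) for t in vocab}
    counts = [Counter(d) for d in docs]
    return vocab, np.array([[counts[j][t] * math.log(n / df[t]) for j in range(n)] for t in vocab])

def lsa_similarity(answer, key, corpus):
    """Stages 3-6: rank-2 SVD of the corpus, fold-in of the two vectors, cosine similarity."""
    docs = [preprocess(d) for d in corpus]
    vocab, E = tfidf_matrix(docs)
    U, s, _ = np.linalg.svd(E, full_matrices=False)   # stage 3: E = U S V^T
    k = min(2, len(s))
    U_k, S_k = U[:, :k], np.diag(s[:k])               # stage 4: keep the first two singular dimensions

    def fold(text):                                   # stage 5: new_q = q^T U_k S_k, as written above
        q = np.array([preprocess(text).count(t) for t in vocab], dtype=float)
        return q @ U_k @ S_k

    a, b = fold(answer), fold(key)
    denom = np.linalg.norm(a) * np.linalg.norm(b)     # stage 6: cosine similarity
    return float(a @ b / denom) if denom else 0.0
```

In practice the corpus would contain all answers submitted for the item plus the answer key, e.g. `lsa_similarity(student_answer, answer_key, corpus=all_answers + [answer_key])`, yielding a similarity close to 1 for an answer that matches the key and close to 0 for an unrelated one. How the pieces are wired together in MoLearn is not spelled out in the paper, so this composition should be read as one plausible arrangement rather than the actual implementation.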


D. Testing Stage

In this stage, the testing of the application was conducted. Testing is an important activity to maintain the quality of the software being developed [49]. The test conducted on this application was an application functionality test, which used the black-box testing method. According to Nidhra and Dondeti, the software whose functionality is tested is observed as a 'black box' [50]. The test cases for the functionality test were chosen based on the requirements or design specification of the software entity being tested. This is in line with Larrea, who claims that in all black-box testing techniques the test cases are designed based on the specification of the software [51].
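As a small illustration of specification-driven (black-box) test cases of the kind described here, one could check the externally visible behaviour of the similarity function without inspecting its internals. The two pytest-style functions below are hypothetical (the paper does not list MoLearn's actual test cases) and reuse the lsa_similarity sketch given after Fig. 3:

```python
def test_identical_answer_gets_full_similarity():
    # Specification: identical documents have an angle of 0 degrees and a similarity of 1.
    key = "mengesahkan UUD 1945, menetapkan pancasila sebagai dasar negara"
    assert abs(lsa_similarity(key, key, corpus=[key, "jawaban lain yang berbeda"]) - 1.0) < 1e-6

def test_unrelated_answer_gets_low_similarity():
    # Specification: totally different documents approach 90 degrees and a similarity of 0.
    key = "mengesahkan UUD 1945, menetapkan pancasila sebagai dasar negara"
    other = "fotosintesis membutuhkan cahaya matahari"
    assert lsa_similarity(other, key, corpus=[key, other]) < 0.5
```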
E. Deployment and Maintenance Stage

In this stage, the tested application was handed over to the end users. Training was provided so that the users could familiarize themselves with the operation of the application. Defects and errors that had not been found in the testing stage were fixed periodically.

III. RESULTS AND DISCUSSION

After conducting the steps of the SDLC waterfall model, the results were as follows.

A. Results of Implementation Stage

After analysing and designing the system, coding was conducted. From this phase, the MoLearn LMS with all its features was created. However, in this study we focus only on the two essential features: the automated essay test assessment and the plagiarism report among students' answers using the LSA method. Fig. 4 illustrates the coding results of the automatic essay assessment.

As shown in Fig. 4, teachers are given a recommendation score for the essay test; thus, teachers can accelerate the process of checking and grading the essays. With this function in operation, the score generated from this process is more objective and consistent because the grading process has been computerised. In this process, the LSA method compares the similarities between the students' essay answers and the teachers' answer keys. The similarities were calculated using the steps explained in the research method section, starting from the text pre-processing phase up to the cosine similarity calculation. The cosine similarity value is the basis of the recommendation scores for the students. The score recommendation is obtained by multiplying the cosine similarity score by the maximum score of the essay test.
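The scaling described in this paragraph amounts to one line of arithmetic. A hedged sketch follows; the 10-point maximum is a hypothetical item setting, and 0.7983 is the answer-key similarity reported for Aan Tri Wardana in Table IV:

```python
def recommend_score(similarity: float, max_score: float) -> float:
    """Recommended essay score = cosine similarity with the answer key, scaled by the item's maximum score."""
    return round(similarity * max_score, 2)

print(recommend_score(0.7983, 10))  # -> 7.98 on a hypothetical 10-point item
```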

Fig. 4. Automatic Essay Assessment

Besides the automated essay assessment system, another feature that cannot be found in other research is the plagiarism checker among students' answers. In detail, the coding results for the plagiarism checker can be seen in Fig. 5. In this plagiarism report, a student's answers are compared with all answers in a classroom. Thus, teachers can detect a student whose answers are similar to another student's. With this report, teachers can also give fair punishment to the students so that they will not consider plagiarising answers in the future. In the plagiarism report, there are two main columns, namely "Kemiripan Dengan Kunci" (Similarity with Answer Key) and "Kemiripan Antar Siswa" (Similarity among the Students). The percentage value in the "Similarity with Answer Key" column shows the degree of similarity between the student's answer and the answer key: the higher the percentage, the better the student's answer. In contrast, the percentage value in the "Similarity among the Students" column shows the degree of similarity of a student's answers to other students'. For example, Aan Tri Wardana's answer is 100% similar to Ayu Siti Fadilah's answer and 98.71% similar to Angela Vony Yuniar Cristy's answer. The higher the percentage value among the students, the higher the level of plagiarism, which indicates a worse condition. This plagiarism report proves that the MoLearn LMS can assess essay answers automatically based on the answer key. In addition, the MoLearn LMS can also detect the level of plagiarism among the students in a class. The raw data related to the plagiarism detection process can be seen in Table IV.

Fig. 5. Plagiarism Report

TABLE IV
RAW DATA FOR THE PLAGIARISM DETECTION PROCESS

Essay question (in Indonesian): "Apa saja hasil sidang PPKI pada tanggal 18 Agustus 1945?" ("What were the results of the PPKI session on 18 August 1945?")
Answer key (in Indonesian): "mengesahkan UUD 1945, menetapkan pancasila sebagai dasar negara" ("ratifying the 1945 Constitution, establishing Pancasila as the foundation of the state")
Aan Tri Wardana's answer (in Indonesian): "mengesahkan dan menetapkan pancasila sebagai dasar negara" (79.83% similar to the answer key)

Student's name | Student's answer (in Indonesian) | Similarity with Aan Tri's answer
Ayu Siti Fadilah | "mengesahkan dan menetapkan pancasila sebagai dasar negara" | 100%
Angga Adi Setiawan | "mengesahkan, menetapkan pancasila sebagai dasar negara" | 100%
Angela Vony Yuniar | "mengesahkan dan menetapkan pancasila sebagai dasar negara Indonesia" | 98.71%
Alfred Jonathan | "mensahkan dasar negara Republik Indonesia yaitu Pancasila" | 70.71%
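A simplified sketch of how such a report can be assembled from the similarity values above is shown below. The two thresholds mirror the inputs listed in Table III (maximum correct-answer score and minimum plagiarism level), but their default values, the function name, and the flat pairwise loop are illustrative assumptions rather than the MoLearn implementation; it reuses the lsa_similarity sketch from Section II-C:

```python
def plagiarism_report(answer_texts, answer_key, corpus, max_correct=0.8, min_pair_sim=0.9):
    """Flag pairs of students whose answers are highly similar to each other while
    both stay below the teacher-set maximum similarity to the answer key."""
    key_sim = {name: lsa_similarity(text, answer_key, corpus) for name, text in answer_texts.items()}
    names = sorted(answer_texts)
    flagged = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pair_sim = lsa_similarity(answer_texts[a], answer_texts[b], corpus)
            if pair_sim >= min_pair_sim and key_sim[a] <= max_correct and key_sim[b] <= max_correct:
                flagged.append((a, b, round(pair_sim, 4)))
    return flagged
```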
report, students’ answers will be compared to all answers in a
classroom. Thus, teachers can detect a student whose answers
are similar to another student. With this report, teachers can also B. Results of Testing Stage
give fair punishment to the students so that they will not The testing of MoLearn LMS has been conducted. The
consider plagiarising their answers in the future. In the testing was to measure LMS functionality by using black-box
plagiarism report, there are 2 main columns, namely testing. The results of black-box testing showed that all
"Kemiripan Dengan Kunci” (Similarity with Answer Key) and functions in the application worked well. There is no evidence
“Kemiripan Antar Siswa” (Similarity among The Students). of bugs and errors found in the MoLearn LMS. The black box


The black-box testing was conducted by all the researchers and by eight students who were recruited as testers. For the automated essay assessment function, the test was conducted by asking all the students to answer four essay questions. Their answers were then graded both manually and by the application, and the results of the manual assessment were compared with the computerised assessment. Table V shows the comparison of the manual and automated assessment.

TABLE V
THE COMPARISON OF MANUAL AND COMPUTERISED ASSESSMENT BY USING THE APPLICATION (PHASE I)

Results | Question 1: Geographical Aspects (Man / App) | Question 2: Chemical Weathering (Man / App) | Question 3: Foehn Wind (Man / App) | Question 4: Climate Zone (Man / App)
Average | 58.75 / 83.5 | 61.88 / 83.6 | 61.25 / 83.8 | 58.75 / 82.89
Deviation | 24.75 | 21.725 | 22.55 | 24.1375
Accuracy level | 57.87% | 64.89% | 63.18% | 58.91%
Accuracy level average | 61.21%

The accuracy score was obtained by using this formula:

$\text{Accuracy score} = \left(1 - \dfrac{|\text{Manual score} - \text{Application score}|}{\text{Manual score}}\right) \times 100\%$
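For example, plugging the phase I figures for Question 1 from Table V (manual average 58.75, application average 83.5) into this formula reproduces the reported accuracy level for that question:

\[
\text{Accuracy} = \left(1 - \frac{|58.75 - 83.5|}{58.75}\right) \times 100\% = \left(1 - \frac{24.75}{58.75}\right) \times 100\% \approx 57.87\%
\]

and the overall 61.21% is the mean of the four per-question accuracy levels.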
Based on Table V, the phase I testing result indicates that the accuracy level of the automatic essay test assessment was 61.21%. Besides the accuracy test of the grading score of the automatic essay assessment system using the LSA method, a test was also conducted to measure the system's speed in providing essay score recommendations to MoLearn users. Four essays were used to analyse the system's speed in delivering recommendation scores. The results can be seen in Table VI.

TABLE VI
RECOMMENDATION SCORE TIMETABLE

Results | Time for the system to provide a recommendation score (seconds)
Average time (4 questions) | 50.475
Average time (1 question) | 12.61875 ≈ 12.62

From these two tests, problems and solutions were identified, as provided in Table VII.

TABLE VII
PROBLEMS AND SOLUTIONS

No. 1. Problem: The accuracy level of the essay test is still low. The score issued by the application tends to be higher due to the miscalculation of the q matrix value in the degree-2 approximation and errors in the cosine technique. Solution: The source code for calculating the value of the q matrix in the degree-2 approximation and the similarity calculation process with the cosine technique was improved. Thus, the similarity results, which are the basis for determining the recommended essay test scores, are better and more accurate, raising the level of accuracy from the original 61.21% to 84.11%.

No. 2. Problem: The time to produce a recommendation for essay test scores is quite long (12.62 seconds per question) due to the length of the text pre-processing. Solution: There is a technical improvement in the text pre-processing, which initially compared each word by performing a query to the database. This was replaced with a single query whose results are held in an array; each word is then compared with the contents of the array. Since there is no need to query many times, the text pre-processing is faster, which also speeds up the emergence of score recommendations from the original 12.619 to 5.522 seconds per question.

After the repairs were made, the essay test assessment process with the application is better than before. This is evident from the better accuracy results in the phase II testing shown in Table VIII. Based on Table VIII, the second stage of the test showed that the accuracy of the automatic essay test assessment was 84.11%, an increase in accuracy compared with the 61.21% of the first stage of testing.

TABLE VIII
COMPARISON OF TEST SCORES MANUALLY AND TEST SCORES USING THE APPLICATION (PHASE II)

Results | Question 1: Geographical Aspects (Man / App) | Question 2: Chemical Weathering (Man / App) | Question 3: Foehn Wind (Man / App) | Question 4: Climate Zone (Man / App)
Average | 58.75 / 68.56 | 61.88 / 72.7 | 61.25 / 70.8 | 58.8 / 66.84
Deviation | 9.8125 | 10.825 | 9.55 | 8.0875
Accuracy level | 83.30% | 82.51% | 84.41% | 86.23%
Accuracy level average | 84.11%

In addition to the better level of accuracy, the processing time for generating recommendation scores in phase II is also faster, from 12.62 seconds to 5.52 seconds per question. This is evident from the quicker processing time shown in Table IX. When the automatic essay test assessment time with the application in Table IX is compared with the manual correction time in Table X, the assessment time with the application is 8.04 times faster than the manual assessment.

TABLE IX
COMPARISON OF THE EMERGENCE TIME OF RECOMMENDATIONS

Results | Recommendation appear time (seconds) – Phase I | Recommendation appear time (seconds) – Phase II
Average time (4 questions) | 50.475 | 22.0875
Average time per question | 12.61875 ≈ 12.62 | 5.521875 ≈ 5.52

TABLE X
MANUAL CORRECTION TIME

Manual correction results | Question 1 | Question 2 | Question 3 | Question 4
Correction time (seconds) | 43.125 | 35.625 | 36.375 | 62.375
Average correction time per question | 44.375 ≈ 44.38
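The reported 8.04 factor follows directly from Tables IX and X:

\[
\frac{44.375\ \text{s per question (manual, Table X)}}{5.522\ \text{s per question (application, Table IX)}} \approx 8.04
\]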


E. Results of Deployment and Maintenance Stage

The MoLearn LMS that had been tested and ran well was uploaded to a web server with the domain https://molearn.net. The MoLearn LMS was then extensively introduced to teachers and students by giving them training so that they could operate it well. The feedback received from both teachers and students has been accommodated for continuous improvement. Furthermore, an evaluation was conducted by distributing questionnaires to 32 teachers. The questionnaires were related to the two main features, i.e., the automatic essay test assessment and the report of students' plagiarism. The evaluation was carried out using descriptive statistics and a 1-to-5 Likert scale. The teachers' responses were scored as follows: 'strongly agree' was 5, 'agree' was 4, 'quite agree' was 3, 'disagree' was 2, and 'strongly disagree' was 1. For the classification of questionnaire scores, see Table XI.

TABLE XI
CLASSIFICATION OF QUESTIONNAIRE SCORES

Score range | Score description
1.00 – 1.80 | Strongly Disagree
1.81 – 2.60 | Disagree
2.61 – 3.40 | Quite Agree
3.41 – 4.20 | Agree
4.21 – 5.00 | Strongly Agree
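As an illustration of how the averages below are obtained and classified, take the response distribution recovered for Fig. 6 (8 strongly agree, 18 agree, 5 quite agree, 1 disagree, 0 strongly disagree, from 32 teachers):

\[
\bar{x} = \frac{5 \cdot 8 + 4 \cdot 18 + 3 \cdot 5 + 2 \cdot 1 + 1 \cdot 0}{32} = \frac{129}{32} \approx 4.03,
\]

which falls in the 3.41–4.20 range of Table XI and is therefore classified as "Agree".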
After the averages of the questionnaire results were calculated and compared with Table XI, the evaluation results were as follows:
a) Teachers agree that the MoLearn application provides an accurate recommendation of the essay test score. The average value is 4.03 on a scale of 1 to 5. The results and percentages of the teachers' responses are presented in Fig. 6.

Fig. 6. Accuracy of Automatic Essay Assessment (pie chart "MoLearn can provide score recommendation of essay test accurately": Strongly Agree 8 (25%), Agree 18 (56%), Quite Agree 5 (16%), Disagree 1 (3%), Strongly Disagree 0 (0%))

b) Teachers strongly agree that the essay test score recommendation helps them speed up the process of essay test assessment. The average value generated is 4.31 on a scale of 1 to 5. The results and percentages of the teachers' responses are presented in Fig. 7.

Fig. 7. MoLearn Can Speed Up the Essay Assessment (pie chart "The score recommendation of essay test helps teachers to speed up the essay test assessment process": Strongly Agree 12 (38%), Agree 18 (56%), Quite Agree 2 (6%), Disagree 0 (0%), Strongly Disagree 0 (0%))

c) Teachers strongly agree that the MoLearn LMS assists them in detecting students' plagiarism in their answers. The average value generated is 4.25 on a scale of 1 to 5. The results and percentages of the teachers' responses are presented in Fig. 8.

Fig. 8. MoLearn Can Detect Plagiarism (pie chart "MoLearn can assist teachers in detecting plagiarism in answers between students": Strongly Agree 9 (28%), Agree 22 (69%), Quite Agree 1 (3%), Disagree 0 (0%), Strongly Disagree 0 (0%))

d) Teachers strongly agree that the MoLearn LMS has added value compared with other LMSs. This is due to the essay test score recommendation feature and the plagiarism report of students' answers. The average value generated is 4.63 on a scale of 1 to 5. The results and percentages of the teachers' responses are presented in Fig. 9.

Fig. 9. MoLearn Has Added Value Compared with Other LMSs (pie chart "MoLearn has added value compared to other LMS": Strongly Agree 21 (66%), Agree 10 (31%), Quite Agree 1 (3%), Disagree 0 (0%), Strongly Disagree 0 (0%))


IV. CONCLUSIONS

Based on the results and discussions described previously, the following conclusions can be drawn:
1) In this case, for small samples, the MoLearn LMS can automatically evaluate essay tests using the LSA method in a relatively fast time of 5.52 seconds per question and with an assessment accuracy rate of 84.11%. The test results prove that the automatic essay test assessment with the MoLearn LMS is 8.04 times faster than manual assessment. The questionnaire results show that the teachers strongly agree that the automatic essay test score recommendation can speed up the assessment process, with an average value of 4.31 on a scale of 1 to 5.
2) The MoLearn LMS can generate reports of plagiarism between students, making it easier for teachers to detect cheating committed by students in a class. The questionnaire results show that teachers strongly agree that MoLearn can help detect plagiarism, with an average score of 4.25 on a scale of 1 to 5.
3) The MoLearn LMS provides a novelty compared with LMSs in general, which do not yet have an automatic essay test assessment feature or a plagiarism report feature for answers among students.

REFERENCES
[1] J. M. Stephens, "How to Cheat and Not Feel Guilty: Cognitive Dissonance and its Amelioration in the Domain of Academic Dishonesty," Theory Pract., vol. 56, no. 2, pp. 111–120, 2017, doi: 10.1080/00405841.2017.1283571.
[2] H. Habiburrahim, I. K. Trisnawati, Y. Yuniarti, Z. Zainuddin, and S. Muluk, "Scrutinizing cheating behavior among EFL students at Islamic higher education institutions in Indonesia," Qual. Rep., vol. 26, no. 3, pp. 1033–1053, 2021, doi: 10.46743/2160-3715/2021.4683.
[3] G. Jamaluddin, F. Samudera, and Lufiyanto, "2500-Article Text-9962-1-10-20210301.pdf," ANIMA Indones. Psychol. J., vol. 36, no. 1, pp. 7–35, 2021.
[4] T. Marksteiner, M. A. Reinhard, O. Dickhäuser, and S. L. Sporer, "How do teachers perceive cheating students? Beliefs about cues to deception and detection accuracy in the educational field," Eur. J. Psychol. Educ., vol. 27, no. 3, pp. 329–350, 2012, doi: 10.1007/s10212-011-0074-5.
[5] N. Hasan and N. Khan, "Internet and Increasing Issues of Plagiarism," Shrinkhla Ek Shodhparak Vaicharik Patrika, vol. 5, no. 12, pp. 125–131, 2018. [Online]. Available: https://www.researchgate.net/publication/332696789
[6] S. Kocdar, A. Karadeniz, R. Peytcheva-Forsyth, and V. Stoeva, "Cheating and Plagiarism in E-Assessment: Students' Perspectives," Open Prax., vol. 10, no. 3, pp. 221–235, 2018, doi: 10.5944/openpraxis.10.3.873.
[7] D. Sorea, G. Roşculeţ, and A. M. Bolborici, "Readymade solutions and students' appetite for plagiarism as challenges for online learning," Sustain., vol. 13, no. 7, 2021, doi: 10.3390/su13073861.
[8] P. Šprajc, M. Urh, J. Jerebic, D. Trivan, and E. Jereb, "Reasons for plagiarism in higher education," Organizacija, vol. 50, no. 1, pp. 33–45, 2017, doi: 10.1515/orga-2017-0002.
[9] C. De Maio, K. Dixon, and S. Yeo, "Academic staff responses to student plagiarism in universities: A literature review from 1990 to 2019," Issues Educ. Res., vol. 29, no. 4, pp. 1131–1142, 2019.
[10] R. Fish and G. Hura, "Students' perceptions of plagiarism," J. Scholarsh. Teach. Learn., vol. 13, no. 5, pp. 33–45, 2013.
[11] J. Lemantara, M. J. Dewiyani Sunarto, B. Hariadi, T. Sagirani, and T. Amelia, "Prototype of Online Examination on MoLearn Applications Using Text Similarity to Detect Plagiarism," Proc. 2018 5th Int. Conf. Inf. Technol. Comput. Electr. Eng. (ICITACEE), pp. 131–136, 2018, doi: 10.1109/ICITACEE.2018.8576922.
[12] J. Lemantara, M. J. Dewiyani Sunarto, B. Hariadi, T. Sagirani, and T. Amelia, "Prototype of Automatic Essay Assessment and Plagiarism Detection on Mobile Learning 'molearn' Application Using GLSA Method," 2019 2nd Int. Semin. Res. Inf. Technol. Intell. Syst. (ISRITI), pp. 314–319, 2019, doi: 10.1109/ISRITI48646.2019.9034652.
[13] K. Scouller, "The Influence of Assessment Method on Students' Learning Approaches: Multiple Choice Question Examination versus Assignment Essay," High. Educ., vol. 35, no. 4, pp. 453–472, 1998.
[14] R. E. Febrita and W. F. Mahmudy, "Pre-Processed Latent Semantic Analysis," J. Ilm. KURSOR, vol. 8, no. 4, pp. 175–180, 2016.
[15] M. Varaprasad Rao et al., "Automated Evaluation of Telugu Text Essays Using Latent Semantic Analysis," Turkish J. Comput. Math. Educ., vol. 12, no. 5, pp. 1888–1890, 2021, doi: 10.17762/turcomat.v12i5.2267.
[16] J. Zeniarta, A. Salam, and I. Achsamu, "Sistem Koreksi Jawaban Esai Otomatis (E-Valuation) dengan Vector Space Model pada Computer Based Test (CBT)," Semin. Nas. Din. Inform., pp. 91–96, 2020.
[17] R. Bawarith, D. Abdullah, D. Anas, and P. Dr., "E-exam Cheating Detection System," Int. J. Adv. Comput. Sci. Appl., vol. 8, no. 4, pp. 176–181, 2017, doi: 10.14569/ijacsa.2017.080425.
[18] F. Noorbehbahani, A. Mohammadi, and M. Aminazadeh, A systematic review of research on cheating in online exams from 2010 to 2021. Springer US, 2022.
[19] Y. Ni and J. Lu, "Research on Junior High School English Reading Class Based on the Principle of Timing and Thorndike's Three Laws of Learning," J. Lang. Teach. Res., vol. 11, no. 6, pp. 962–969, 2020, doi: 10.17507/jltr.1106.13.
[20] J. W. Santrock, Educational Psychology, 6th ed. New York: McGraw-Hill, 2017.
[21] O. Abdurakhman and R. Rusli, "Teori Belajar dan Pembelajaran Inovatif," Didakt. Tauhidi J. Pendidik. Guru Sekol. Dasar, vol. 2, no. 1, p. 33, 2015.
[22] J. Hattie and H. Timperley, "The Power of Feedback," Rev. Educ. Res., vol. 77, no. 1, pp. 81–112, 2007, doi: 10.3102/003465430298487.
[23] S. Ceria and G. Rossi, "IT education in Argentina," IT Prof., vol. 16, no. 3, pp. 6–9, 2014, doi: 10.1109/MITP.2014.31.
[24] I. Mustapha, N. T. Van, M. Shahverdi, M. I. Q., and N. K., "Effectiveness of Digital Technology in Education During," iJIM Int. J. Interact. Mob. Technol., vol. 15, no. 8, pp. 136–154, 2021.
[25] J. Koziel, A. Wac-Włodarczyk, and M. Śniadkowski, "IT education at the faculty of electrotechnology: Quality analysis and evaluation," Proc. 29th Annu. Conf. Eur. Assoc. Educ. Electr. Inf. Eng. (EAEEIE), pp. 3–6, 2019, doi: 10.1109/EAEEIE46886.2019.9000442.
[26] D. Khanna and A. Prasad, "Problems Faced by Students and Teachers during Online Education Due to COVID-19 and How to Resolve Them," Proc. 2020 6th Int. Conf. Educ. Technol. (ICET), pp. 32–35, 2020, doi: 10.1109/ICET51153.2020.9276625.
[27] D. F. Murad, R. Hassan, Y. Heryadi, B. D. Wijanarko, and Titan, "The Impact of the COVID-19 Pandemic in Indonesia (Face to face versus Online Learning)," Proc. 2020 3rd Int. Conf. Vocat. Educ. Electr. Eng. (ICVEE), pp. 4–7, 2020, doi: 10.1109/ICVEE50212.2020.9243202.
[28] T. Supriyatno and F. Kurniawan, "A New Pedagogy and Online Learning System on Pandemic COVID 19 Era at Islamic Higher Education," Proc. 2020 6th Int. Conf. Educ. Technol. (ICET), pp. 97–101, 2020, doi: 10.1109/ICET51153.2020.9276604.
[29] S. Xue and H. Crompton, "Educational technology research during the COVID-19 pandemic," Interact. Technol. Smart Educ., vol. ahead-of-print, no. ahead-of-print, Jan. 2022, doi: 10.1108/ITSE-05-2022-0067.
[30] I. Kusmaryono, J. Jupriyanto, and W. Kusumaningsih, "A Systematic Literature Review on the Effectiveness of Distance Learning: Problems, Opportunities, Challenges, and Predictions," Int. J. Educ., vol. 14, no. 1, pp. 62–69, 2021, doi: 10.17509/ije.v14i1.29191.
[31] M. Li and Y. Su, "Evaluation of online teaching quality of basic education based on artificial intelligence," Int. J. Emerg. Technol. Learn., vol. 15, no. 16, pp. 147–161, 2020, doi: 10.3991/ijet.v15i16.15937.


[32] N. T. Thomas, A. Kumar, and K. Bijlani, "Automatic Answer Assessment in LMS Using Latent Semantic Analysis," Procedia Comput. Sci., vol. 58, pp. 257–264, 2015, doi: 10.1016/j.procs.2015.08.019.
[33] J. R. A. Rodrigues et al., "IQuiz: Integrated assessment environment to improve Moodle Quiz," Proc. Front. Educ. Conf. (FIE), pp. 293–295, 2013, doi: 10.1109/FIE.2013.6684834.
[34] G. Bateson, "Moodle plugins directory: Essay (auto-grade)," 2022. https://moodle.org/plugins/qtype_essayautograde (accessed May 19, 2022).
[35] K. Zupanc and Z. Bosnić, "Automated essay evaluation with semantic analysis," Knowledge-Based Syst., vol. 120, pp. 118–132, 2017, doi: 10.1016/j.knosys.2017.01.006.
[36] M. Zhang, S. Hao, Y. Xu, D. Ke, and H. Peng, "Automated essay scoring using incremental latent semantic analysis," J. Softw., vol. 9, no. 2, pp. 429–436, 2014, doi: 10.4304/jsw.9.2.429-436.
[37] J. Hoblos, "Experimenting with Latent Semantic Analysis and Latent Dirichlet Allocation on Automated Essay Grading," Proc. 2020 7th Int. Conf. Soc. Netw. Anal. Manag. Secur. (SNAMS), 2020, doi: 10.1109/SNAMS52053.2020.9336533.
[38] Y. Kalmukov, "Comparison of Latent Semantic Analysis and Vector Space Model for Automatic Identification of Competent Reviewers to Evaluate Papers," Int. J. Adv. Comput. Sci. Appl., vol. 13, no. 2, pp. 77–85, 2022, doi: 10.14569/IJACSA.2022.0130209.
[39] S. Barjtya, A. Sharma, and U. Rani, "A detailed study of Software Development Life Cycle (SDLC) Models," Int. J. Eng. Comput. Sci., vol. 6, no. 7, pp. 22097–22100, 2017.
[40] A. Alshamrani and A. Bahattab, "A Comparison Between Three SDLC Models: Waterfall Model, Spiral Model, and Incremental/Iterative Model," IJCSI Int. J. Comput. Sci. Issues, vol. 12, no. 1, pp. 106–111, 2015. [Online]. Available: https://www.academia.edu/10793943/A_Comparison_Between_Three_SDLC_Models_Waterfall_Model_Spiral_Model_and_Incremental_Iterative_Model
[41] S. Kaur, "A Review of Software Development Life Cycle Models," Int. J. Adv. Res. Comput. Sci. Softw. Eng., vol. 5, no. 11, pp. 354–360, 2015. [Online]. Available: www.ijarcsse.com
[42] J. Lemantara, "Sistem Pendukung Keputusan Pengoptimalan Pembagian Tugas dengan Kombinasi Metode Hungarian dan Permutasi," J. Nas. Tek. Elektro dan Teknol. Inf., vol. 6, no. 2, pp. 152–161, 2017.
[43] J. Abbas, "Quintessence of Traditional and Agile Requirement Engineering," J. Softw. Eng. Appl., vol. 9, no. 3, pp. 63–70, 2016, doi: 10.4236/jsea.2016.93005.
[44] P. Kherwa and P. Bansal, "Latent Semantic Analysis: An Approach to Understand Semantic of Text," Int. Conf. Curr. Trends Comput. Electr. Electron. Commun. (CTCEEC), pp. 870–874, 2018, doi: 10.1109/CTCEEC.2017.8455018.
[45] D. A. Perkasa et al., "Sistem Ujian Online Essay Dengan Penilaian Menggunakan Metode Latent Sematic Analysis (LSA)," J. Rekayasa dan Manaj. Sist. Inf., vol. 1, no. 1, pp. 1–9, 2015. [Online]. Available: http://ejournal.uin-suska.ac.id/index.php/RMSI/article/view/1313
[46] F. S. Jumeilah, "Penerapan Support Vector Machine (SVM) untuk Pengkategorian Penelitian," J. RESTI (Rekayasa Sist. dan Teknol. Informasi), vol. 1, no. 1, pp. 19–25, 2017, doi: 10.29207/resti.v1i1.11.
[47] R. Melita, V. Amrizal, H. B. Suseno, and Dirjam, "Penerapan Metode Term Frequency Inverse Document Frequency (TF-IDF) Dan Cosine Similarity Pada Sistem Temu Kembali Informasi Untuk Mengetahui Syarah Hadits Berbasis Web (Studi Kasus: Hadits Shahih Bukhari-Muslim)," J. Tek. Inform., vol. 11, no. 2, pp. 149–164, 2018, doi: 10.15408/jti.v11i2.8623.
[48] O. Nurdiana, J. Jumadi, and D. Nursantika, "Perbandingan Metode Cosine Similarity Dengan Metode Jaccard Similarity Pada Aplikasi Pencarian Terjemah Al-Qur'an Dalam Bahasa Indonesia," J. Online Inform., vol. 1, no. 1, p. 59, 2016, doi: 10.15575/join.v1i1.12.
[49] P. M. Jacob and M. Prasanna, "A Comparative analysis on Black box testing strategies," Proc. 2016 Int. Conf. Inf. Sci. (ICIS), pp. 1–6, 2017, doi: 10.1109/INFOSCI.2016.7845290.
[50] S. Nidhra and J. Dondeti, "Black Box and White Box Testing Techniques – A Literature Review," Int. J. Embed. Syst. Appl., vol. 2, no. 2, pp. 29–50, 2012, doi: 10.5121/ijesa.2012.2204.
[51] M. Larrea, "Black-Box Testing Technique for Information Visualization. Sequencing Constraints with Low-Level Interactions," J. Comput. Sci. Technol. (La Plata), vol. 17, no. 1, pp. 37–48, 2017.

Julianto Lemantara is a researcher and lecturer in the Department of Information System, Dinamika University. He graduated from STMIK Surabaya with a bachelor's degree in Information Systems and completed his Master's degree in Information Technology at Gadjah Mada University. His research interests are in information systems, decision support systems, and data mining. Up to 2022, he has published his research in Scopus-indexed international journals and proceedings.

Bambang Hariadi is Vice Rector for Student Affairs at Universitas Dinamika, Surabaya. His undergraduate degree is in educational administration, and his postgraduate and Ph.D. studies were consistently pursued in educational technology. He is also a researcher and lecturer in the Department of Film and Television Production, Universitas Dinamika, Surabaya, Indonesia.

M. J. Dewiyani Sunarto is a researcher and lecturer in the Department of Information System, Universitas Dinamika, Surabaya, Indonesia. She holds a doctorate in Mathematics Education. Her research interests are in the education field, such as the utilization of technology in education, the latest learning models, and mathematics education. She is currently Head of Research and Community Service at Universitas Dinamika.

Tan Amelia is a researcher and lecturer in the Department of Information System, Universitas Dinamika. Her research interests include software engineering, with a particular interest in requirements prioritization. She graduated from Universitas Dinamika with a bachelor's degree in Information Systems and then completed her Master's degree in Technology Management at Institut Teknologi Sepuluh Nopember. She is currently pursuing a Ph.D. degree with the Faculty of Computing, Universiti Malaysia Pahang, Kuantan, Malaysia.

Tri Sagirani is a researcher and lecturer at the Department of Information System, Universitas Dinamika, Surabaya, Indonesia. She received her Master's degree from Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia. She is currently the dean of the Faculty of Technology and Informatics, Universitas Dinamika. She is interested in human-computer interaction, such as computing for health and learning, technology for education, technology for special education, and user experience in computer applications.

