
Studies in Educational Evaluation 58 (2018) 97–111


The construction and validation of a usability evaluation survey for mobile learning environments

Nadia Parsazadeh (a,*), Rosmah Ali (b), Mehran Rezaei (a), Sanaz Zolfaghar Tehrani (c)

a Department of Computer Engineering, University of Isfahan, Isfahan, Iran
b Advanced Informatics School, Universiti Teknologi Malaysia, Kuala Lumpur, Malaysia
c Department of Management, California State University, Los Angeles, CA, United States

ARTICLE INFO

Keywords: Program evaluation; Student evaluation; Evaluation methods

ABSTRACT

The advent of mobile technologies in the learning context has increased the need for an appropriate usability model aligned with mobile learning applications. Although mobile learning has been studied from the perspectives of pedagogy, environment, and technology acceptance, there is little published scientific research on the usability of mobile learning applications. To fill this gap, this study develops a usability evaluation model that includes timeliness to assess the usability of mobile learning applications. Timeliness, or response time, is an important feature of mobile learning that influences learning satisfaction and can be used to evaluate the timeliness of peers' and instructors' responses. The main objective of this study is to construct and validate a usability evaluation survey for mobile learning environments. The study employed a two-round Delphi method to empirically verify the usability questionnaire by obtaining a consensus from fourteen experts on the questionnaire items. Results indicate that over 88% of the experts agreed on all usability items represented in the questionnaire. A usability evaluation survey for mobile learning applications can help improve user satisfaction and reduce training costs. This cost reduction encourages researchers, interface designers, and project managers to employ usability evaluation when designing the interfaces of mobile learning applications.

Abbreviations: CIMLA, cooperative and interactive mobile learning application; OIES, online information evaluation skills; PACMAD, People At Center of Mobile Application Development; DB, database; SUS, System Usability Scale
* Corresponding author. E-mail addresses: n.parsazadeh@comp.ui.ac.ir, pnadia4@live.utm.my (N. Parsazadeh).
https://doi.org/10.1016/j.stueduc.2018.06.002
Received 2 August 2017; Received in revised form 9 May 2018; Accepted 4 June 2018
0191-491X/ © 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Mobile learning promotes active learning and classroom accountability, and encourages the interaction and contribution of shy students (Markett, Sánchez, Weber, & Tangney, 2006). Mobile learning permits educators to generate online learning solutions that learners can use anywhere and anytime, achieving results that cannot be reached with existing educational models (Jou, Tennyson, Wang, & Huang, 2016). Mobile devices employed in mobile learning have several limitations, including limited processing power, small screen size, connectivity, and data entry methods (Nielsen & Budiu, 2013). Because of these issues, usability evaluation methods for mobile applications have to be studied specifically (Nielsen & Budiu, 2013).

Usability is significant not only for increasing accuracy but also for decreasing the response time of the range of tasks accomplished by the users of a system. Usability is also imperative where an application is used to control interactive processes, for example in mobile learning (Taharim et al., 2013). Usability has been specified as one of the important fundamentals of mobile-learning applications (Albert & Tullis, 2013; Anani, 2008; Capretz, Ali, & Ouda, 2012). Assessing the usability of mobile technology has been identified as one of the main challenges in mobile learning and has a high priority for mobile learning evaluation (Vavoula & Sharples, 2009). A previous literature review of mobile-learning studies showed a lack of specific usability metrics for mobile-learning environments (Ivanc, Vasiu, & Onita, 2012). Specifying the characteristics and required attributes of usability has become a challenging issue that depends on the context in which the product is used (Ivanc et al., 2012; Witold et al., 2003). Evaluating mobile learning projects poses both theoretical and methodological challenges (Traxler & Vosloo, 2014).

The advent of mobile learning applications has introduced new usability metrics that are difficult to measure using traditional models of usability.

Current usability evaluation methods are based on techniques that were designed for traditional computer systems, not for emerging mobile computing technologies (Swart, Bere, & Mafunda, 2017). In other words, the usability evaluation of mobile device apps has become a significant issue because several software products that previously ran on desktops and laptops now run on smartphones (Hussain, Mkpojiogu, Musa, & Mortada, 2017).

Although there are many usability models for desktop applications (Bevan, 1998; Nielsen, 1994b), a study by Harrison, Flood, and Duce (2013) in particular reported the limitations of existing usability models when applied to mobile devices. The usability model presented by Bevan (1998), as well as the one constructed by Nielsen (1994a, 1994b), was basically designed for traditional desktop applications. For instance, Nielsen's model was largely based on the design of telecoms systems rather than computer software. In addition, there is a singular lack of reliable usability guidelines specifically meant for designing and developing m-learning with user-friendly interfaces. In fact, usability has been less extensively covered than the technological aspects of m-learning (Ali, Alrasheedi, Ouda, & Capretz, 2015). The PACMAD (People At Center of Mobile Application Development) usability model developed by Harrison et al. (2013) is designed for the usability evaluation of mobile applications, but it does not consider the required features of mobile-learning applications.

From the previous literature it is evident that many existing usability models do not consider timeliness (interactive response time) as an attribute of usability. To cope with this issue, our study included timeliness as a feature of usability to augment existing usability models for use in the mobile learning context. In addition, a usability evaluation questionnaire was developed to assess the usability of mobile learning applications.

2. Literature review

This section provides the foundation for this research through a review of the literature on usability and timeliness, which forms the groundwork of the study.

2.1. Usability

Many recent researchers have identified the benefits of a commitment to usability in the application development life cycle (Harrison et al., 2013; Iacob, Harrison, & Faily, 2013; Shitkova, Holler, Heide, Clever, & Becker, 2015). Investigating usability and its contribution or integration to the learning procedure is valuable (Anani, 2008). Nielsen states that "usability is a necessary condition for survival on the web" (Nielsen, 1994b). Usability is an important issue for the success of a mobile application. Usability or "ease of use" means making products and systems easier to use and adapting them more closely to learners' requirements. Poor usability reduces user and student productivity and consequently causes users and students to drop out (Shitkova et al., 2015). Advantages of usability include reductions in training costs, enhanced quality of work, increased productivity, and improved user satisfaction (ISO13407 ISO, 1999). The decrease in costs has led many interface designers and project managers to apply usability theory when designing interfaces.

International Organization for Standardization ISO 9241-11 (1998) defined usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (Bevan, 1998). Further, ISO/IEC 9126-1 (2001) states that usability is "the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions" (ISO, 2001).

Harrison et al. (2013) conducted a systematic literature review of mobile usability evaluation models based on the usability attributes developed by Nielsen (1994a, 1994b) and the International Organization for Standardization (ISO) as reported by Bevan (1998). A new usability model named PACMAD was developed by Harrison et al. (2013) to evaluate the usability of mobile applications. The PACMAD usability model identifies seven attributes for measuring the usability of a mobile application: "Effectiveness, Efficiency, Satisfaction, Learnability, Memorability, Errors and Cognitive load" (Harrison et al., 2013). Each of these attributes has an important effect on the usability of an application, so they can be used to assist in assessing it. The PACMAD usability model can be used to assess the usability of mobile applications.

2.2. Timeliness

Timeliness can be considered part of system quality as response time (Gorla, Somers, & Wong, 2010). Timeliness (response time) is defined as whether students perceive that instructors responded promptly to their problems (Sun, Tsai, Finger, Chen, & Yeh, 2008). Lan and Sie (2010) identified timeliness as "the degree to which users think a received message is time-sensitive or has immediate feedback" (Lan & Sie, 2010). For example, when the teacher posts a class announcement, students can receive the message immediately, or when a peer replies to a discussion topic, students can receive the reply automatically (Lan & Sie, 2010). Timeliness concerns timely, current, and up-to-date messages (Cheung & Thadani, 2012).

Timeliness has an understandable link to responsiveness. A timely response to students' questions or requests is certainly beneficial to students (Sun et al., 2008). When information or questions are delivered on time, the teacher is able to respond to a student's question very quickly. Immediate feedback to students' questions increases student satisfaction, and student satisfaction has become more significant in today's competitive environment.

Previous research indicated that timely responses from the teacher significantly affect students' satisfaction (Arbaugh & Duray, 2002; Thurmond, Wambach, Connors, & Frey, 2002). The rationale is that when learners face problems in an online course, timely support from the teacher encourages them to continue their learning. A teacher's failure to respond to students' problems in time has a negative effect on students' learning (Soon, Sook, Jung, & Im, 1999). Hence, if a teacher is capable of handling online learning activities and responding to learners' questions and problems promptly, learning satisfaction will improve (Arbaugh & Duray, 2002; Ryan, Carlton, & Ali, 1999; Thurmond et al., 2002).

Mobile learning represents a revolution in the future of learning because the characteristics of mobility and timeliness enable anywhere and anytime learning, which makes it easier to access information and to join discussions freely (Lin, Huang, Zhao, & Dai, 2013). Some researchers have found that interactive response time has a positive effect on user satisfaction (Jalal & Al-Debei, 2013; Wixom & Todd, 2005; Wu & Wang, 2006; Xu, Benbasat, & Cenfetelli, 2013; Zelazny, Belanger, & Teagarden, 2012).

From the above studies, it can be seen that timeliness is an important attribute of usability that needs to be considered in the context of mobile learning. This attribute has an impact on the overall usability of mobile learning applications and as such can be used to help assess their usability. Thus, to develop a usable model for evaluating mobile-learning applications, timeliness (response time) should be considered in order to evaluate the timeliness of peers' and instructors' responses. Timeliness, an important feature of mobile learning that influences learning satisfaction, has not been considered in previous usability models. In order to apply prior usability models in the mobile-learning context, timeliness must be included as a feature of usability.

Thus, in this study, timeliness was added to the PACMAD usability model in order to provide a usability model that can be used to assess the usability of mobile applications in an educational context. The usability model proposed for this study extends the PACMAD model so that it is useful in the mobile learning context. Fig. 1 shows the proposed usability model for mobile learning applications.


Fig. 1. Proposed Mobile Learning Application Usability Model.

2.3. Usability evaluation

Evaluation is the process of collecting information about the worth and merit of a program for the purpose of improving the program or making decisions about its effectiveness. Evaluation, as a phase of education research, is concerned with understanding how the processes of learning can be enhanced, mediated, and transformed (Sharples, 2009). Once an online resource is made available to users, feedback and evaluation are essential to judge whether it is meeting the exact requirements of the students (Hutchings, Hadfield, Howarth, & Lewarne, 2007).

Usability evaluation attributes should be considered when developing a mobile application to assist in making it more usable (Ali, 2013; Saleh, Isamil, & Fabil, 2015). Usability testing is a technique that emphasizes and examines how usable an application is (Hussain, 2017; Sharpe, Rogers, & Preece, 2007). It specifies whether a product or application meets a quantifiable usability rate when a specific user executes specific tasks with that product or application (Khomokhoana, 2011).

Using engaging technologies along with appropriate evaluation techniques is a powerful way of making learning efficient (Mohamadi, 2018). Usability evaluation is recognized as one of the success factors of mobile learning (Ali et al., 2015; Hussain, 2017). A number of formative usability evaluations need to be done for all products before a product is released, and even after release, several summative usability evaluations should still be performed (Hussain et al., 2017; Kuhnel, Seiler, Honal, & Ifenthaler, 2017). Usability tests are techniques based on user perception that identify usability problems (Coutinho, Couto, Biase, Fernandes, & Bonifacio, 2015). The user-based method (asking users their opinions) is one of the most commonly adopted methods for evaluating the usability of an application (Anani, 2008; Khomokhoana, 2011). In this study, we developed a questionnaire to ask for students' perceptions and evaluate the usability of the mobile application.

3. Methodology

An overview of the system design is presented in the first subsection. The next subsections describe the system architecture of the student page and the teacher page. The remaining subsections describe the construction and validation of a usability questionnaire using the Delphi method.

3.1. System design

The Cooperative and Interactive Mobile Learning Application (CIMLA) was designed and developed according to the instructional systems design model (ADDIE) in five phases: analysis, design, development, implementation, and evaluation (Fardoun, Montero, & Jaquero, 2009; Peterson, 2003). CIMLA was designed based on the mobile-learning framework of Parsazadeh, Ali, and Rezaei (2018). The application adopts the Jigsaw-based cooperative learning model from Aronson (1978) and three types of interactive learning, namely learner-learner, learner-lecturer, and learner-content interaction, from Moore (1989).

Fig. 2. System architecture of the student page in CIMLA.


The mobile learning application was developed to demonstrate how cooperative and interactive learning approaches help to improve students' online information evaluation skills. In order to design a more usable mobile application, the usability attributes adopted from PACMAD, namely effectiveness, efficiency, timeliness, satisfaction, learnability, memorability, error, and cognitive load, were considered during the design phase of the mobile application.

User, task, and context are three factors adopted from PACMAD that affect the usability of mobile learning applications and were considered in this study when developing CIMLA. User describes the computer science diploma students who used the mobile application in an experimental study and participated in the pre- and post-test. Context describes that the developed mobile application was based on the Jigsaw-based cooperative learning model and the three types of interaction theory. Task is identified as the online information evaluation skills of students that should improve after students have used the mobile application. The five online information evaluation criteria were adopted from Association (2000).

Based on the information literacy standard, there are five criteria that students should consider in the credibility assessment of web-based information: currency, relevance, authority, accuracy, and purpose (Association, 2000). According to Metzger (2007), accuracy refers to the degree to which a Web site is free from errors, whether the information can be verified offline, and the reliability of the information on the site. The authority of a Web site may be assessed by noting who authored the site, whether contact information is provided for that person or organization, what the author's credentials, qualifications, and affiliations are, and whether the Web site is recommended by a trusted source. Purpose or bias involves identifying the purpose of the site and whether the information provided is fact or opinion. Currency or timeliness refers to whether the information is up to date. Relevance or validity refers to the comprehensiveness or depth of the information provided on the site (Metzger & Flanagin, 2013; Metzger, 2007).

To identify problems in the user interface design of CIMLA, an evaluation was conducted by five experts using an adapted scoring rubric; Nielsen (1994a) recommended using three to five experts to evaluate and find the problems in a user interface. In the expert validation of CIMLA, most experts confirmed the validity of the CIMLA user interface. Some parts of CIMLA were changed according to the experts' comments to improve the accuracy and appropriateness of the content, examples, and practices for the students' level. Font sizes and styles were improved, and additional images were added to make CIMLA more attractive.

The Java language and the Android Software Development Kit (SDK) were used to develop and implement the application for smartphones. The web-based learning system operates on a Microsoft Internet Information Services (IIS) web server. Online learning activities of readings, discussion, and homework submission, and some special features for the teacher, are available on the learning application. The web learning system uses the SQL Server database management system (DBMS) as a repository of students' learning behaviors. The mobile application designed for this study has the flexibility to run on all platforms, including Android, iOS, Windows Phone, and tablets. Students who use an Android phone can connect to the application using an Android app, while other students and teachers who use other kinds of cell phones, such as the iPhone, can connect to the application using any standard web browser, for example Firefox, Chrome, Safari, or Internet Explorer.

Finally, an online tutorial using the final version of CIMLA was offered to a group of student volunteers who had not been previously exposed to it. Thirty-five first-year diploma students in the computer science department at an international university in Kuala Lumpur took part as the experimental group in the tutorial. The students spent 120 min learning the online information evaluation skills (OIES) in three phases of reading, discussion, and knowledge sharing. Fig. 11 shows the research procedure conducted in this study.

Fig. 3. Screen snapshot of the Home page of the mobile application.
Fig. 4. Screen snapshot of the Assignment page of the mobile application.
Fig. 5. Screen snapshot of the login page to Go Class.


Fig. 6. Screen snapshot of the page details in the Go Class process.

In order to determine the extent of students' abilities in evaluating online information, students' assignments were collected in the pre- and post-test. The pre-test and post-test consisted of a multiple-choice assignment related to online information evaluation skills. The multiple-choice assignment consisted of 5 multiple choice questions, with each question worth 0–3 points according to a rubric table, for a total score of 15. The pre- and post-test results were analyzed using the Wilcoxon signed-ranks test to determine whether there were any significant differences in the students' OIES after participation in the learning activities.

3.2. System architecture

CIMLA includes a student page and a teacher page. The student page, which consists of eight modules and five databases, administers the pre-test and post-test to measure the Online Information Evaluation Skills (OIES) of students before and after learning with the mobile application. Moreover, the student page helps students to improve their OIES in the Go Class module according to the Jigsaw-based cooperative learning theory and the interaction theory. The teacher page, which has five databases and nine intelligent modules, sets the student list, sets the assignments for the pre-test and post-test, defines the class timing, and monitors all student groups to answer their questions related to the OIES course content.

Based on the system architecture, the system operation procedures have three major processes: reading, discussion, and transfer. The system operation procedures of the student page in Fig. 2 and the teacher page in Fig. 7 are summarized as follows:

3.2.1. System architecture of student page

Step A1. The student gets into the system via a login interface module. When a user logs in, the login interface checks his/her account in the user database. If the user is a teacher, the system opens the teacher page and guides the teacher to the main menu to add the student list, upload pre-test and post-test assignments, arrange class timing, prepare readings, and manage students in Go Class. Students can log in to the application via the username and password defined by the teacher.

Step A2. If the user is a student, the system opens the student page and guides the student to the main menu to view assignments, save assignment answers, and enter the Go Class process.

Step B1 and B2. The student selects the assignment button. The assignment menu includes pre and post assignments. The student can download the attached question file from the Assignment database to answer the questions.

Step B3. When the student answers the question in the answer box and saves it, the answer is stored in the Assignment database so that it is viewable by the teacher.

Step C1, C2, and C3. When the student selects the Go Class button, the Class Definition database gives the class time to the Classroom Time Check to authorize the login time, or a message is sent that the class is finished.

Step C4 and C5. When the student clicks login in Go Class, the Group Detection module gets the group size from the Class Definition database. In Group Detection, the student is allocated to a home group and an expert group according to the login order of the students. As there are 35 students and five criteria (currency, relevance, authority, accuracy, and purpose) for information literacy learning, the first 5 students who log in are allocated to home group 1 and the expert group currency, and so on; in the end there are seven groups with five students in each group (a sketch of this allocation logic follows the step list below). These features make possible the cooperative learning (Jigsaw-based cooperative learning model) and interaction (interactive learning) between students and teacher for better learning of OIES.

Step C6. The group detection information is saved in the Class Real-time Information database.

Step C7. The Chat Hub module obtains the grouping information from the Class Real-time Information database. Students can have real-time discussions with their team-mates and ask questions of their teacher in the chat hub.

Step C8. The Change State Timer module changes the time allocated for the reading phase, discussion phase, and transfer phase in the chat hub and informs students of the starting time of the next phase.

Step C9. Finally, the Change State Timer module notifies students that the class has finished. The student is automatically logged out from Go Class. The student then returns to Step 4 for the post-test or logs out of the application, terminating the learning process.

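To make the Group Detection step concrete, the following is a minimal Python sketch of one way the allocation in Steps C4 and C5 could be implemented. It is an illustration only: the paper's module is part of the Java/Android and web application, the function and variable names here are hypothetical, and the exact mapping of login order to home and expert groups is an assumption based on the textual description (consecutive logins fill one home group while rotating through the five evaluation criteria).

```python
# Illustrative sketch (not the paper's Java implementation) of the Group
# Detection logic in Steps C4 and C5: 35 students are assigned, in login
# order, to 7 home groups of 5, and each member of a home group receives a
# different expert topic (one of the five evaluation criteria).
CRITERIA = ["currency", "relevance", "authority", "accuracy", "purpose"]

def assign_groups(login_order, group_size=5):
    """Map student IDs (in login order) to (home_group, expert_topic)."""
    assignments = {}
    for position, student_id in enumerate(login_order):
        home_group = position // group_size + 1            # groups 1..7 for 35 students
        expert_topic = CRITERIA[position % len(CRITERIA)]   # rotate through the criteria
        assignments[student_id] = (home_group, expert_topic)
    return assignments

if __name__ == "__main__":
    students = [f"S{i:02d}" for i in range(1, 36)]          # 35 hypothetical student IDs
    groups = assign_groups(students)
    print(groups["S01"])   # (1, 'currency')
    print(groups["S35"])   # (7, 'purpose')
```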

The Log module records all student activities, including the login time, assignment upload time, Go Class login and logout times, and application logout time, and stores these activities in the Log database. The student can see all these activities in the Events menu of the application. Figs. 3–6 show screen snapshots of the mobile application on the student side to illustrate the operation process of CIMLA.

3.2.2. System architecture of teacher page

Fig. 7 presents the system architecture of the teacher, or administrator, page in CIMLA, which includes nine intelligent modules and five databases. The teacher page is built on the website, allowing a teacher to monitor learning material easily. A teacher can define the student list and class timing, send an assignment to every student and see their answers to assess the learning states of students through the pre-test and post-test, upload learning content, and log in to Go Class to monitor and answer students' questions related to the learning content.

Step A1. The teacher gets into the mobile application through the login interface module. As a user logs in to the system, the login interface checks her account in the user account database. If the user is a student, the system opens the student page and guides the student to the main menu to accomplish a pre-test and then execute the Go Class process for improving her OIES; otherwise, the user is viewed as a teacher.

Step A2. If the user is a teacher, the system opens the teacher page and guides the teacher to the main menu to set the student list, set assignments, define class timing, and manage the Go Class process.

Step B1 and B2. The teacher defines the student list for the application, including name, username, password, and matric number, which is stored in the user account database. Students can log in to the application via the defined username and password.

Step C1 and C2. The teacher uploads the assignment in the pre-test and post-test parts for every student and allocates a deadline for answers. The uploaded assignments are stored in the Assignment database and are available for students to download.

Step C3. The teacher uploads the assignment in the pre-test and post-test parts for every student and allocates a deadline for answers. The uploaded assignments are stored in the Assignment database and are available for students to download.

Step D1. The teacher allocates and sets the appropriate times for the Go Class activity, including the start of class, maximum login time, reading time, discussion time, transfer time, and group size.

Step D2 and D3. The allocated class timing is stored in the Class Definition database, and the Classroom Time Check then gets the allocated class time to display the login time for the student and the teacher.

Step E1 and E2. When the teacher selects the Go Class button, the Classroom Time Check displays a message according to the class time received from the Class Definition database. It shows the "Login" menu for entering the Go Class process, or sends a message that the class is finished if the teacher tries to connect after the time arranged in Define Class Timing (a sketch of this timing check follows the Fig. 7 caption below).

Step E3 and E4. When the teacher clicks Login in Go Class, the teacher enters the Chat Hub to answer students' questions. The Class Real-time Information database stores students' questions, students' discussion information, and grouping information, and the Chat Hub gets this information from the Class Real-time Information database to make it available to the teacher and other students for answering the questions.

Step E5 and E6. The Monitor All Groups module obtains the grouping information from the Class Real-time Information database, and the teacher can monitor all the grouping information of the students.

Step E7. Finally, the Change State Timer notifies the teacher that the class is finished. The teacher is automatically logged out from Go Class. The teacher then returns to Step 4 to send the post-test to students and view the assignment answers, and then logs out of the application, terminating the monitoring process.

The Log interface records all teacher activities, including the login time, set-assignment time, Go Class login and logout times, and application logout time, and stores these activities in the Log database. The teacher can see all these activities in the Events menu of the application. Figs. 8–10 show screen snapshots of the mobile application on the teacher side to illustrate the operation process of CIMLA.

Fig. 7. System architecture of the teacher page in CIMLA.
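As an illustration of the Classroom Time Check described in Steps C1–C3 and D1–E2, the following minimal Python sketch shows one way the login-window decision could be expressed. It is not the paper's implementation (which runs in the Java/IIS/SQL Server system), and the field names and durations of the hypothetical class definition record are assumptions.

```python
# Illustrative sketch of the Classroom Time Check: given the class timing
# stored by the teacher (Step D1), decide whether a login attempt falls
# inside the allowed window or the class is already finished.
from datetime import datetime, timedelta

class_definition = {
    "start": datetime(2018, 5, 9, 10, 0),   # hypothetical class start time
    "max_login_minutes": 10,                 # latest login after the start
    "reading_minutes": 40,                   # phase durations are placeholders
    "discussion_minutes": 50,
    "transfer_minutes": 30,
}

def check_login(now, cd):
    """Return 'login' while the login window is open, else 'class is finished'."""
    login_deadline = cd["start"] + timedelta(minutes=cd["max_login_minutes"])
    if cd["start"] <= now <= login_deadline:
        return "login"              # show the Login menu for the Go Class process
    return "class is finished"      # message sent for connections outside the window

print(check_login(datetime(2018, 5, 9, 10, 5), class_definition))   # 'login'
```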


Fig. 8. Screen snapshot of the post-test assignment settings on the teacher side.

Fig. 9. Screen snapshot of the reading material preparation on the teacher side.

Fig. 10. Screen snapshot of the class timing settings on the teacher side.

3.3. Usability questionnaire development

In order to assess students' perceptions of the usability of the mobile learning application (CIMLA), a usability evaluation questionnaire was developed based on adapted questionnaires used previously in other studies. The developed usability questionnaire consists of 42 five-point Likert scale items on 8 usability attributes, namely effectiveness, efficiency, timeliness, satisfaction, learnability, memorability, error, and cognitive load. Cronbach's α for the reliability of the usability questionnaire was 0.95 in this research, indicating that the questionnaire was reliable (a sketch of this reliability computation follows this subsection). Appendix A displays the list of adapted items used in the usability evaluation questionnaire together with their sources.

Effectiveness is "the ability of a user to complete a task in a specified context" (Harrison et al., 2013). Efficiency is "the ability of the user to complete their tasks with speed and accuracy" (Harrison et al., 2013). Timeliness can be considered part of system quality as response time (Gorla et al., 2010). Satisfaction is "the perceived level of comfort and pleasantness afforded to the user through the use of the software" (Harrison et al., 2013). Learnability is "the ease with which a user can gain proficiency with an application" (Harrison et al., 2013). Memorability is "the ability of a user to retain how to use an application effectively" (Harrison et al., 2013). Errors means that "the user can complete the desired tasks without errors" (Harrison et al., 2013).

Cognitive load refers to the extent of cognitive processing required of the user to use the application (Harrison et al., 2013). Cognitive load includes mental load and mental effort, reflecting intrinsic load in combination with extraneous and germane load (Sweller, Van Merriënboer, & Paas, 1998). The cognitive load measure is employed to investigate the effects of the proposed application, CIMLA, on improving the OIES of the students. The measure contains 5 items for the mental load and mental effort dimensions.
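For readers who want to reproduce the kind of reliability figure reported above, the following is a minimal Python sketch of Cronbach's α for a response matrix of 35 respondents by 42 items. The data shown are random placeholders, not the study's responses; the reported value of 0.95 comes from the authors' own analysis.

```python
# Minimal sketch of Cronbach's alpha for a Likert response matrix
# (rows = respondents, columns = questionnaire items). Placeholder data only.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: shape (n_respondents, n_items), e.g. values 1-5."""
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)        # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(35, 42)).astype(float)    # 35 students x 42 items (dummy)
print(round(cronbach_alpha(demo), 2))
```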


3.4. Usability questionnaire validation method

A two-round Delphi method was conducted to validate the usability evaluation questionnaire with a group of 14 experts. The experts provided comments on the items, the purpose, the Likert scale, and other parts, which were used to review and correct parts of the questionnaire in order to obtain better results. Appendix A shows the final items of the questionnaire after adding, eliminating, and improving some items. After that, the corrected usability evaluation questionnaire was prepared for the main study, in which an experimental group of 35 students who participated in a mobile learning experiment assessed their perceptions of the usability of the mobile learning application.

The Delphi technique is a widely accepted method for gathering data from survey participants within a particular domain of expertise (Vidal, Marle, & Bocquet, 2011). This technique has been used in several studies in the fields of education and engineering, which encouraged us to use it in our research work. The Delphi method has been used frequently in computing and IT-related studies, such as online education (Rice, 2009) and identifying research priorities in educational technology (Pollard & Pollard, 2004). Practical examples of previous mobile-learning and educational research conducted using Delphi include Cheong, Bruno, and Cheong (2012), Chiu and Huang (2016), Green (2014), Hamann (2015), and Hsu, Ching, and Snelson (2014).

The main rationale behind the Delphi approach is that two or more heads are better than one: a group response gets closer to the truth than the judgment of a single individual, and inputs generated by experts based on their logical reasoning are superior to simple guessing. The Delphi technique is therefore well suited to this study, which validates the mobile-learning application usability questionnaire.

3.4.1. The Delphi process

This part describes the procedures that were used to validate the usability questionnaire. The purpose of the Delphi validation was to ascertain that the items used for each attribute of the usability questionnaire are valid and to establish the validity of the usability questionnaire as a whole. The specific Delphi process used for this study, adapted from Vidal et al. (2011), is shown in Fig. 11.

The research process of this study started with a literature review on the usability attributes of mobile applications, followed by the development of a usability questionnaire for the mobile learning application. In order to validate the developed usability questionnaire, a two-round Delphi validation was conducted with fourteen participants: a panel of fourteen experts took part in a two-round Delphi process designed to validate the usability questionnaire.

A complete summary of the validation survey is presented in Table 1. In Delphi round-1, data were collected in two parts: demographic information was requested, and the questionnaire items for each attribute were validated on a 5-point scale, meaning that the 42 questionnaire items that affect the usability evaluation of the developed mobile application were investigated. Delphi round-2 was performed on the revised usability questionnaire, and the relevance and importance of the overall usability questionnaire to the usability attributes of the mobile-learning application were assessed using 5-point scale questions.

3.4.2. Delphi panel selection

The three requirements for experts to qualify for a Delphi study are i) sufficient experience and knowledge of the survey issues; ii) willingness, capacity, and enough time to participate in the Delphi; and iii) good communication skills (Skulmoski, Hartman, & Krahn, 2007). Thus, the success of a Delphi study depends on the quality of the participants. The Delphi panelists in this study were all experienced in questionnaire development and/or mobile learning. Their areas of expertise include human-computer interaction, usability measurement, mobile computing, and instructional design.

3.4.3. Delphi panel qualifications

The Delphi panel was organized from the academic community, consisting of lecturers, a librarian dean, and reputable authors. The panel initially comprised 38 invited experts, 19 of them university lecturers and 19 journal article authors. Of those 38 requested experts, 14 actually took part in the study from the beginning to the end, of whom 9 were university lecturers and 5 were journal article authors.

Fig. 11. Research process adapted from Vidal et al. (2011).


Table 1
Summary of the Delphi validation process for the usability questionnaire.

Round 1 — Instrument: Questionnaire 1. Data collection: data were collected in two parts; demographic information was requested, and the questionnaire items for each attribute were validated on a 5-point scale, meaning that the questionnaire items affecting the usability evaluation of the developed mobile application were investigated. Data analysis: compute the means and determine the level of agreement for each questionnaire item using percentages; tally votes to determine the level of consensus on each of the usability questionnaire items being verified; prepare Questionnaire 2 using the items that received the highest ratings.

Round 2 — Instrument: Questionnaire 2. Data collection: assessment of the relevance and importance of the overall usability questionnaire to the usability evaluation model of the mobile-learning application, asked using 5-point scale questions. Data analysis: compute the means and determine the level of agreement for each questionnaire item using percentages; tally votes to determine the level of consensus on each of the usability questionnaire items being verified; draw conclusions based on the results.

3.4.4. The Delphi panel size

The recommended Delphi survey group size varies in the literature, from 10 to 18 (Bourgeois, Pugmire, Stevenson, Swanson, & Swanson, 2006). For a homogeneous group, such as a group of experts from the same general discipline area, a Delphi panel of ten to fifteen experts should be sufficient (Delbecq, Van de Ven, & Gustafson, 1975). Vidal recommended a group size of 9 to 18 participants to satisfy the relevant assumptions while avoiding the difficulty of reaching consensus among experts (Vidal et al., 2011). The panel size of fourteen for this study fits within the guidelines suggested for Delphi studies.

3.4.5. Number of Delphi rounds

For this study, a two-round Delphi survey was carried out, as group consensus was desirable. The recommended number of rounds for Delphi varies and depends on the purpose of the research. For instance, Hsu and Sandford (2007) claimed that a small number of iterations is sufficient to reach consensus and to collect the needed information (Hamann, 2015; Hsu & Sandford, 2007). Delbecq, Van de Ven, and Gustafson recommend that a Delphi of two or three iterations is adequate for most research (Delbecq et al., 1975).

4. Results and analysis

This part reports the results of the Delphi analysis of the usability questionnaire, the learning achievements, and students' perceptions of the usability of CIMLA.

4.1. Delphi validation results and analysis

In Delphi, decision rules must be established to assemble, analyze, and summarize the judgments and insights offered by the participants. Consensus on a topic can be determined if the returned responses on that specific topic reach a prescribed or a priori range. In situations in which rating or rank ordering is used to codify and classify data, the definition of consensus is at the discretion of the investigator(s). One example of Delphi consensus from the literature is that 70% of Delphi subjects rate three or higher on a 4-point scale (Green, 1982).

In this Delphi validation study, fourteen experts responded positively to the Delphi round-1 survey. The same fourteen experts took part in the second round of Delphi. The validation was performed in two rounds, and the results are described in the following subsections.

4.1.1. Delphi round-1 results

In this round, the demographic information of the experts was obtained. After a general description of the developed mobile-learning application and an explanation of the objective of the usability questionnaire, a 5-point scale survey asked experts their degree of agreement or disagreement with the structure and items chosen in the usability questionnaire. Each point on the scale carried a 20% weight. Frequencies were calculated for all questions, and combined frequencies on scales 4 and 5 (Agree and Strongly Agree) greater than 70% were considered a positive response in support of the question asked. Appendix A presents the content validity index of the Usability Evaluation Questionnaire in Delphi Panel Round-1.

In the literature, the use of the median score, based on a Likert-type scale, is strongly favored (Hsu et al., 2014; Jacobs, 1996). The use of means and medians is suitable when reporting data in the Delphi process in order to present information regarding the collective judgments of respondents (Hsu & Sandford, 2007).

In this study, only those items that were rated 3.5 or above (on the 5.0 scale) have been included in the discussion and results section, as Austin, Lee, and Getz (2008) suggested that items below 3.0 are not appropriate for professional practice on the rating system (Austin et al., 2008). Basically, consensus on a topic can be decided if a certain percentage of the votes falls within a prescribed range (Miller, 2006). Green (1982) suggests that at least 70% of Delphi subjects need to rate three or higher on a four-point Likert-type scale and that the median has to be 3.25 or higher (Green, 1982). This round validated each of the eight attributes of the usability questionnaire (a sketch of these screening rules follows the list of comments below).

All experts responded with their comments and recommendations for improvement within three weeks. The following are representative suggestions received on the items to be added, deleted, and improved:

1 Some comments on spelling errors.
2 Some comments on revising the wording of some questions to make them more understandable.
3 Some comments on revising items in order to avoid overlap between items.
4 Changing the sentences describing the purpose of the questionnaire.
5 Some comments on changing the sentences explaining the developed mobile-learning application used in this study.
6 Changing Neutral to Somewhat Agree in the Likert scale to be clearer.
7 To design one negative question for every attribute.
8 Provide a simple definition for each attribute.
9 They agreed that the items represent the usability attributes of the mobile-learning application.

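To make the screening rules above concrete, here is a minimal Python sketch, under the assumption that each item's ratings are available as a list of the fourteen experts' scores on the 1–5 scale. The thresholds (below 70% of ratings at 4–5, mean below 3.5, median below 3.25, all three leading to deletion) follow the criteria described above and applied in Section 4.1.2; the example ratings are placeholders, not the panel's data.

```python
# Minimal sketch of the Delphi item-screening rules described in Sections
# 4.1.1-4.1.2: an item is dropped when the share of 4-5 ("Agree"/"Strongly
# Agree") ratings is below 70%, the mean is below 3.5, and the median is
# below 3.25. Placeholder ratings only.
from statistics import mean, median

def screen_item(ratings, pct_threshold=70.0, mean_threshold=3.5, median_threshold=3.25):
    """Return (keep, stats) for one item's 1-5 ratings from the expert panel."""
    pct_agree = 100.0 * sum(r >= 4 for r in ratings) / len(ratings)
    item_mean, item_median = mean(ratings), median(ratings)
    delete = (pct_agree < pct_threshold
              and item_mean < mean_threshold
              and item_median < median_threshold)
    return not delete, {"pct_agree": round(pct_agree, 1),
                        "mean": round(item_mean, 2),
                        "median": item_median}

demo_ratings = [5, 4, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 4, 3]   # one hypothetical item
print(screen_item(demo_ratings))
# -> (True, {'pct_agree': 78.6, 'mean': 4.07, 'median': 4.0})
```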

In the end, only the items with a mean score higher than 3.5 were kept in the refined usability questionnaire.

4.1.2. Delphi round-2 results

For deleting or improving items according to the experts' comments, the Delphi consensus rules in the literature on the mean, median, and percentage of votes were considered. Thus, items that met all three of the following conditions were deleted from the usability questionnaire: agree and strongly agree rates lower than 70%, a mean lower than 3.5, and a median under 3.25. The other items were improved according to the experts' comments in order to reach a high degree of consensus in the second round.

After receiving all reviews from the Delphi panel, the questionnaire was modified to include all the recommendations and feedback from the panel. This was referred to as Round 2 of the reviews. An email sent to the Delphi panel included two attached documents: one attachment was the summary of changes from Round 1, and the second was the modified usability evaluation questionnaire. The email also requested further feedback from the Delphi panel if required, or a response in support of the usability questionnaire if the reviewer deemed it complete.

All other parts of the questionnaire were left as previously stated, as there were no suggestions or comments for change. After three weeks, replies were received with the statement that the "final usability questionnaire is complete as presented", indicating that over 79% of the experts had reached consensus and the usability questionnaire was completed. Table 2 shows the percentage of experts' consensus in Delphi round-1 and round-2. The results indicate that over 88% of the experts consented to all usability items represented in the round-2 usability questionnaire.

Table 2
Percentage of experts' consensus in Delphi round-1 and round-2.

Usability Attribute — % Consensus Round-1; Mean Round-1; % Consensus Round-2; Mean Round-2
Effectiveness — 77; 3.97; 88; 4.21
Efficiency — 84; 4.12; 90; 4.57
Timeliness — 77; 4.18; 92; 4.61
Satisfaction — 90; 4.31; 94; 4.58
Learnability — 86; 4.12; 86; 4.14
Memorability — 83; 4.28; 89; 4.43
Error — 71; 3.95; 90; 4.64
Cognitive Load — 71; 4.14; 93; 4.72

4.2. Learning achievements

Table 3 displays the results of the Wilcoxon signed-ranks test comparing the pre- and post-test assignment scores of the students. A Wilcoxon signed-rank test revealed a statistically significant increase in the students' C, R, A, AC, and P scores for the multiple-choice assignment following participation in the mobile learning program (Z = −5.09, −5.27, −5.17, −5.16, −5.32, p = .000 < .001). The median score on the C, R, A, AC, and P criteria increased from pre-test (Md = 1, 1, 1, 1, 1 respectively) to post-test (Md = 3, 3, 3, 3, 3 respectively). Based on these results, it could be argued that the use of the developed mobile learning application significantly improved the OIES level of the students.

Table 3
Results of the Wilcoxon signed-ranks test comparing the pre- and post-test assignment scores of the students (N = 35 for each test).

Criteria — Pre-test Md; Post-test Md; Z; P
Currency — 1.00; 3.00; −5.09**; .000
Relevance — 1.00; 3.00; −5.27**; .000
Authority — 1.00; 3.00; −5.17**; .000
Accuracy — 1.00; 3.00; −5.16**; .000
Purpose — 1.00; 3.00; −5.32**; .000

Note: N: number of students; Md: median; P: asymp. sig. (2-tailed).
** The difference is highly significant since p < .001.
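As a minimal illustration of the analysis reported in Table 3, the following Python sketch runs a Wilcoxon signed-rank test on one criterion's pre- and post-test scores using scipy. The score vectors shown are placeholders for 35 students, not the study's data, so the resulting statistic will differ from the values in Table 3.

```python
# Minimal sketch of the pre/post comparison in Section 4.2: a Wilcoxon
# signed-rank test per criterion (currency, relevance, authority, accuracy,
# purpose). Placeholder scores (0-3 per criterion) for 35 students.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
pre = rng.integers(0, 2, size=35)                          # mostly low pre-test scores (dummy)
post = np.clip(pre + rng.integers(1, 3, size=35), 0, 3)    # improved post-test scores (dummy)

stat, p_value = wilcoxon(pre, post)                        # paired, two-sided by default
print(f"W = {stat}, p = {p_value:.4f}")
```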

Fig. 12. Students’ perceptions on the usability of CIMLA.


4.3. Students' perceptions on the usability of the mobile application

To obtain accurate results from the usability questionnaire, the students' responses were calculated for each question and each usability attribute. Fig. 12 shows the results of the usability evaluation of CIMLA by the students.

4.3.1. Effectiveness

According to the effectiveness part of Fig. 12, 52% of students agree and 26% strongly agree, indicating that the opinion of nearly 78% of students is above the average level and that they are satisfied with this attribute. 20% of students consider the effectiveness of the developed mobile application average. Only 2% disagree, and no one strongly disagreed with this attribute of the mobile application. The high percentage of agreement on the effectiveness attribute indicates that the mobile application, CIMLA, provided personalized access to course contents and helped to improve communication between student and teacher. In addition, CIMLA helped students to produce class assignments of higher quality.

4.3.2. Efficiency

In response to the efficiency questions, 39% of students agree and 33% strongly agree, indicating that 72% of students consider it above the average level. Only 1% strongly disagree and 5% disagree with this feature. A high percentage of students agreed with the efficiency attribute of CIMLA. This shows that the course structure in CIMLA is visually clear and that the vocabulary used in the tutorial is appropriate for students. In addition, the course learning objectives were achieved within a short period. Downloading the course content in CIMLA was speedy. Students agreed that the steps button in CIMLA guided them clearly and that they did not need the support of a technical person to be able to use the mobile application.

4.3.3. Timeliness

The third part of the questionnaire reflects this important attribute of usability. Students' responses to the timeliness attribute of the application give the following output: almost 73% of the students think that the timeliness of the application is above the average standard, including 33% who strongly agree and 40% who agree with this attribute. Only 1% were in disagreement. A high percentage of students agreed with the timeliness feature of CIMLA. This means that when the instructor posted a message in the mobile application, students could receive the message immediately. They also received feedback on their performance promptly.

4.3.4. Satisfaction

Useful information was obtained from part 4 of the questionnaire to measure this attribute. The results for this part are as follows: 1% of the students strongly disagree, nearly 4% disagree, and 26% considered the satisfaction level of the application average. 69% of the students rated the application above the average level on this attribute, including 31% who agree and 38% who strongly agree. The results for the satisfaction attribute indicate that most students believed CIMLA changed their way of studying online courses from studying individually to studying interactively in a group. They indicated interest in participating in future mobile-learning activities. Students strongly agreed that CIMLA was a good discussion application and that they would recommend CIMLA to other students.

4.3.5. Learnability

The fifth part of the questionnaire, "learnability", provides the following analytical data: 42% of the students agreed and almost 30% strongly agreed, so 72% as a whole consider that the application has learnability. 22% consider this attribute at the average level. 7% of the students think the learnability attribute is below average, which means that 6% disagree and only 1% strongly disagree with the learnability of the application. This means students agreed that the steps button in CIMLA guided them to use the mobile application easily and that they did not need the support of a technical person to be able to use CIMLA. They could download the course content in CIMLA easily.

4.3.6. Memorability

The sixth part of the questionnaire relates to "memorability". The statistical outcome for this feature is as follows: 48% of students agree and 29% strongly agree with memorability, which means that 77% of the students consider the application above the average level regarding memorability, 22% slightly agree as the average level, and only 1% of students consider this attribute below that level. The results of students' perceptions of the memorability attribute show that students felt it was easy to remember how to use the mobile application, and they felt confident that they could use the mobile application with ease in future sessions. They believed the use of special signs on the application buttons helped them to recall how to use CIMLA effectively. This feature indicates the high usability of CIMLA.

4.3.7. Error

The statistical outcome for the error attribute is as follows: 37% of students agree and 31% strongly agree, which means that 68% of the students consider the application above the average level regarding error. These results show that students agreed that they made few errors during the use of CIMLA.

4.3.8. Cognitive load

This portion of the questionnaire determines the students' responses analytically as follows: 39% of students agree and 32% strongly agree, indicating that 71% of the students rate the application above the average level regarding cognitive load, whereas 26% consider this attribute at the average level. Only 2% consider the cognitive load below average.

Cognitive load (including mental load and mental effort) is concerned with the performance of users of mobile applications when performing additional tasks, such as walking, while using the mobile device. Mental load (or intrinsic cognitive load) refers to the interactions between the learning tasks, subject characteristics, and subject materials. It is highly related to the complexity of the learning materials that the students need to handle and how much information the working memory needs to deal with at the same time (Verhoeven, Schnotz, & Paas, 2009). In this study, the students were arranged to learn the subject materials of OIES, which they did not find complex to handle, and they could also perform additional tasks such as walking while they participated in group discussion using CIMLA. Mental effort, by contrast, is related to the learning approaches or strategies used in the learning activities (Verhoeven et al., 2009). Mental effort reflects whether the instructional design is poor (extraneous cognitive load) or good (germane cognitive load) enough (Paas & Van Merriënboer, 1994).

Consequently, the high percentage of students' agreement with the cognitive load questions shows that the learning effectiveness is due to the use of the cooperative and interactive mobile-learning approach, which decreased the mental effort and improved the OIES. The students' perceptions indicate that they could use CIMLA while walking, so the attention required for learning CIMLA tasks was low. In addition, they found that the instructions in CIMLA were not difficult to follow, so their stress when completing CIMLA tasks was low. The above results indicate that this approach improves students' OIES with appropriate cognitive load.

5. Discussion

The proposed model in this study can be used for the usability evaluation of mobile learning applications with the inclusion of timeliness as a contribution to knowledge; previous studies (Abdullah, Hussin, & Zakaria, 2013; Mohammadi, 2015; Ng & Nicholas, 2013; Sha, Looi, Chen, & Zhang, 2012) did not include a model and method for evaluating mobile learning applications. The proposed model of this study includes eight usability attributes and three factors that affect the whole usability evaluation model, which differs from the ISO usability model presented by Bevan (1998) and the Nielsen usability model presented by Nielsen (1994a,b), which were basically designed for traditional desktop applications. Nielsen's model was derived from telecoms systems rather than computer software. On the other hand, the PACMAD usability model by Harrison et al. (2013) is designed for the usability evaluation of mobile applications but did not consider the required features of mobile-learning applications. From the previous literature it is noted that many existing usability models do not consider timeliness (interactive response time) as an important attribute of usability, even though it can improve the usability evaluation model for use in the mobile learning context. Thus, the contribution to knowledge in this study is the inclusion of timeliness as an important attribute in the usability evaluation of cooperative and interactive mobile learning applications.

The results of this study indicate that the proposed mobile application (CIMLA) provides an efficient and effective mobile learning mechanism by improving OIES. With the growth of the mobile learning approach, there is much research on developing and implementing mobile learning applications, but none of it includes both Jigsaw-based cooperative learning and different types of interaction in the developed mobile applications (Chu, Hwang, Tsai, & Tseng, 2010; Huang, Liao, Huang, & Chen, 2014; Hwang & Chang, 2011).

CIMLA provided students with the opportunity to discuss content (OIES) with their classmates and lecturer.

107
N. Parsazadeh et al. Studies in Educational Evaluation 58 (2018) 97–111

student-centered approach that is a more effective learning method in project managers to employ the usability evaluation model when de-
compare to other learning approaches such as teacher-centered learning signing the interfaces for mobile learning applications. Within the
or self-directed learning. The previous studies by Leeder (2014) and theoretical context, the main deliverable of this research is the new
Thornes (2012) improved information literacy skills of students model for usability of mobile learning applications, by incorporating
through self-directed learning without social cultural benefits for stu- timeliness as an important attribute of usability, needs to be con-
dents. While, the previous usability tests such as the Nielsen heuristic sidering in the context of mobile learning. Timeliness of the response
evaluation by Nielsen (1994a,b) and the System Usability Scale (SUS) time is very important because it affects user satisfaction and usability
by Brooke (1996) were used for self-directed learning. This study de- of mobile learning application.
veloped a usability evaluation questionnaire with the inclusion of Then the system architecture of a mobile application called CIMLA
timeliness as a usability attribute to measure usability in cooperative was designed that aimed to improve online information evaluation
and interactive learning contexts. skills of students using cooperation and interaction learning. It was
The results of usability evaluation in previous studies indicated that developed to implement the Jigsaw-based cooperative and interactive
timeliness or responding to students’ questions promptly, increases learning theories for improving OIES of students using the mobile ap-
learning and user satisfaction (Jalal & Al-Debei, 2013; Wixom & Todd, plication. Having a cooperative and interactive mobile-learning appli-
2005; Wu & Wang, 2006; Xu et al., 2013; Zelazny et al., 2012). The cation of information evaluation skills also will help in providing a
results of this study also indicate that 73% of the students think that more systematic integration of these skills into the curriculum. The
timeliness of the application is above the average standard, a factor that interactive and cooperative setting in the learning-teaching environ-
enhances students’ satisfaction. ment can enhance students’ motivation for learning and foster greater
Most previous studies in mobile learning, used Delphi techniques as students’ exchange. The successful usage of these strategies enables
a research tool for validation of mobile learning frameworks and instructors to incorporate mobile learning into their classrooms.
guidelines, such as the studies conducted by (Cheong et al., 2012; Technology supported learning system such as mobile learning appli-
Hamann, 2015; Hsu et al., 2014). The contribution of this study is using cation not only aids to amplify the feature of the learning theory, but
the Delphi method to validate the constructed usability questionnaire also motivates students’ learning emotion to learn enthusiastically.
for mobile learning. The experimental results indicate that students were initially in poor
Many studies examined the most frequently used usability evalua- level in evaluating online information in the pre-test. After using the
tion metrics without taking into account which usability attributes were online tutorial, most of students achieved accomplished level in the
included. For instance, Rabi’u, Ayobami, and Hector (2012) identified post-test. These results indicate that the extent of students’ abilities to
that the usability dimensions expected to be inherent in any mobile evaluate online information was enhanced after utilizing CIMLA.
applications to determine its characteristics are usability, reliability, The results of the usability questionnaire showed that most of the
flexibility, portability, functionality, efficiency, maintainability, acces- students that participated in online tutorial using CIMLA and filled up
sibility and responsiveness which iterated with the users’ requirements the questionnaire agreed with the usability of the mobile application
(Rabi’u et al., 2012). Lettner and Holzmann (2012) and Hussain, and agreed that using the CIMLA could improve their learning perfor-
Hashim, Nordin, and Tahir (2013) provided analytical approach for mance. The usability questionnaire developed for this study can be
usability evaluation metrics such as accuracy, features, time taken, considers as a unique usability testing method which contributes to the
safety, simplicity, error rates hit counts and navigational graphing extant literature in the context of mobile learning by identifying us-
(Hussain et al., 2013; Lettner & Holzmann, 2012). Følstad, Law, and ability evaluation features and providing a usability questionnaire to
Hornbæk (2012) examined the most frequently used measures (task make possible the usability evaluation of mobile learning applications
completion, error rate, satisfaction and task time) (Følstad et al., 2012). through ask the users directly to recognize usability problems of ap-
Treeratanapon (2012) adopted ISO-9241 and Technology Acceptance plications. Future researches will be conducted to compare the pros and
Model (TAM) to develop usability evaluation framework based on main cons between the questionnaire proposed in this study and other sys-
usability attributes (satisfaction, efficiency, and effectiveness) from tems such as the Nielsen’s heuristic evaluation and the System Usability
ISO-9241 standard but not considered the usability attributes which Scale (SUS). In order to increase the reliability and generalization of the
proposed in this study (Treeratanapon, 2012). results further research should expand the research scope to students in
other fields and universities. Moreover, further studies need to collect
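To make the agreement and consensus figures discussed above concrete, the short Python sketch below shows how an item-level agreement percentage and a Delphi-style consensus check can be computed from five-point Likert ratings. The response values, the 75% threshold and all variable names are illustrative assumptions for this sketch only; they are not the study's actual data or its exact decision rule.

from collections import Counter

# Hypothetical five-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for a single questionnaire item; illustrative values only, not the study's data.
responses = [4, 5, 4, 3, 4, 5, 2, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 2, 4]

counts = Counter(responses)
n = len(responses)

# Percentage of respondents at each response level, as reported in Section 4.3.
percentages = {level: 100 * counts.get(level, 0) / n for level in range(1, 6)}

# "Above average" is read here as the share of agree (4) plus strongly agree (5).
above_average = percentages[4] + percentages[5]

# A Delphi-style consensus check: the share of raters scoring an item 4 or 5 is
# compared against an agreed threshold (75% is an assumed example value).
CONSENSUS_THRESHOLD = 75.0
consensus_reached = above_average >= CONSENSUS_THRESHOLD

print(f"Distribution (%): {percentages}")
print(f"Agree + strongly agree: {above_average:.1f}%")
print(f"Consensus reached: {consensus_reached}")

In a multi-round Delphi procedure, an item whose agreement falls below the chosen threshold would typically be revised or returned to the expert panel in the next round.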
6. Conclusion

This study first reviewed the literature extensively, which indicates that there is little scientific, published research on the usability of mobile learning applications. The usability evaluation framework for mobile learning applications can help to increase productivity, improve user satisfaction, enhance the quality of work, and reduce training costs. The decrease in costs attracts many interface designers and project managers to employ the usability evaluation model when designing the interfaces for mobile learning applications. Within the theoretical context, the main deliverable of this research is the new model for the usability of mobile learning applications, which incorporates timeliness as an important usability attribute that needs to be considered in the context of mobile learning. The timeliness of responses is very important because it affects user satisfaction and the usability of a mobile learning application.

The system architecture of a mobile application called CIMLA was then designed with the aim of improving students' online information evaluation skills through cooperative and interactive learning. It was developed to implement the Jigsaw-based cooperative and interactive learning theories for improving students' OIES using the mobile application. Having a cooperative and interactive mobile-learning application for information evaluation skills will also help provide a more systematic integration of these skills into the curriculum. The interactive and cooperative setting in the learning-teaching environment can enhance students' motivation for learning and foster greater exchange among students. The successful use of these strategies enables instructors to incorporate mobile learning into their classrooms. A technology-supported learning system such as a mobile learning application not only helps to amplify the features of the learning theory, but also motivates students to learn enthusiastically.

The experimental results indicate that students initially performed at a poor level in evaluating online information in the pre-test. After using the online tutorial, most students achieved an accomplished level in the post-test. These results indicate that the extent of students' abilities to evaluate online information was enhanced after utilizing CIMLA.

The results of the usability questionnaire showed that most of the students who participated in the online tutorial using CIMLA and completed the questionnaire agreed with the usability of the mobile application and agreed that using CIMLA could improve their learning performance. The usability questionnaire developed for this study can be considered a unique usability testing method that contributes to the extant literature on mobile learning by identifying usability evaluation features and providing a questionnaire that makes the usability evaluation of mobile learning applications possible by asking users directly to identify usability problems. Future research will compare the pros and cons of the questionnaire proposed in this study against other instruments such as Nielsen's heuristic evaluation and the System Usability Scale (SUS). In order to increase the reliability and generalizability of the results, further research should expand the scope to students in other fields and universities. Moreover, further studies need to collect empirical data on the timeliness and other usability attributes of mobile learning applications using formative assessment.

Acknowledgement

This work was supported by the National Elites Foundation of Iran (BMN).

Appendix A. Results of the Descriptive Analysis for the Usability Questionnaire Items

Questions Source Mean Median Mode

Effectiveness
1. CIMLA makes possible personalized access to course contents. (Costabile, De Marsico, Lanzilotti, Plantamura, & Roselli, 2005) 3.85 4.00 4.00
2. CIMLA provides more flexible method of learning as it can be done anywhere. (Al-Fahad, 2009) 4.28 4.00 4.00
3. Using CIMLA to read course materials can increase my learning effect. (Tan & Liu, 2004) 4.20 4.00 4.00
4. CIMLA helps me to produce class assignments of higher quality. (Ellis & Hafner, 2008; Hrastinski, 2009) 3.74 4.00 4.00
5. CIMLA helps to improve communication between student and teacher. (Al-Fahad, 2009) 4.02 4.00 4.00
6. CIMLA cannot be used for learning due to expenses involved in mobile-learning. (Fozdar & Kumar, 2007) 3.74 4.00 4.00
7. CIMLA cannot be used for learning due to poor networking in the city. (Fozdar & Kumar, 2007) 4.22 4.00 4.00

Efficiency
8. Course learning objectives (improving online information evaluation skills) can be met by CIMLA. (Georgieva, Smrikarov, & Georgiev, 2011) 4.14 4.00 5.00
9. Course structure in CIMLA is clearly visualized. (Costabile et al., 2005) 3.82 4.00 4.00
10. CIMLA provides an impressive communication with other students. (Georgieva et al., 2011) 3.80 4.00 4.00
11. Vocabulary and terminology used in CIMLA are appropriate for student. (Georgieva et al., 2011) 4.17 4.00 5.00
12. Downloading course content in CIMLA is slow. (Wendeson, Ahmad, Fatimah, & Haron, 2010) 3.91 4.00 4.00

Timeliness
13. When the instructor posts a message in CIMLA, students could receive this message immediately. (Lan & Sie, 2010) 4.00 4.00 3.00
14. When my peer replies to my question in CIMLA, I could receive the replied message in appropriate time. (Lan & Sie, 2010) 3.80 4.00 4.00
15. I can see the instant question and answer discussions among my teammates in CIMLA automatically. (Lan & Sie, 2010) 4.17 4.00 5.00
16. When I ask a question from instructor in CIMLA, I cannot receive the instructor's response in appropriate time. (Sun et al., 2008) 4.31 4.00 5.00
17. Overall, I think that the messages in CIMLA are received immediately. (Lan & Sie, 2010) 3.91 4.00 4.00

Satisfaction
18. CIMLA was a good discussion application. (Motiwalla, 2007) 4.08 4.00 5.00
19. CIMLA change my habit of studying online courses alone. (Wang, Shen, Novak, & Pan, 2009) 4.00 4.00 4.00
20. I would participate in future mobile-learning activities. (Wang et al., 2009) 3.88 4.00 3.00
21. I would recommend CIMLA to other students. (Wang, 2003) 4.31 5.00 5.00
22. I am not pleased enough with CIMLA for improving online information evaluation skills. (Motiwalla, 2007) 3.82 4.00 3.00

Learnability
23. It is easy to learn how to use CIMLA. (Qureshi & Irfan, 2009) 4.00 4.00 4.00
24. It is easy to become skillful to control CIMLA. (Qureshi & Irfan, 2009) 3.77 4.00 4.00
25. The steps button in CIMLA guides me to easily use the mobile application. (Brooke, 1996) 4.00 4.00 4.00
26. I can download the course content in CIMLA easily. (Kiget, Wanyembi, & Peters, 2014) 3.80 4.00 4.00
27. I think that I would need the support of a technical person to be able to use CIMLA. (Brooke, 1996) 4.17 4.00 5.00

Memorability
28. I feel it is easy to remember how to use CIMLA. (Nacenta, Kamber, Qiang, & Kristensson, 2013) 4.11 4.00 4.00
29. It is easy to reuse the options like (home, assignment, etc. button) in CIMLA in the next time. (Chiou, Lin, Perng, & Tsai, 2009) 4.22 4.00 5.00
30. The use of special signs in CIMLA buttons helped me to remember how to use application effectively. (Santosa, 2009) 3.91 4.00 4.00
31. It is difficult for me to remember how to use the courseware in CIMLA. (Othman, 2012) 4.11 4.00 4.00
32. I can use CIMLA more easily in future session. (Nielsen, 1994b) 3.91 4.00 4.00

Error
33. When I login the Go Class in incorrect time, CIMLA provides proper feedback about the start time of class. (self-developed) 4.62 4.00 3.00
34. I can logout of the CIMLA whenever I desire and can easily return to the closest logical point in the application. (Reeves et al., 2002) 4.05 4.00 4.00
35. The academic feedback to my incorrect answers in online course was useful. (Reeves et al., 2002) 3.85 4.00 4.00
36. The academic feedback explanation in CIMLA online course provides too much information on the screens, which confuses me. (De Villiers, 2004) 4.08 4.00 4.00
37. When I fill up the multiple choice test as pre-test, CIMLA highlighted the required question to answer it. (Ardito et al., 2004) 3.82 4.00 5.00

Cognitive Load
38. The attention required for learning CIMLA tasks is low for me. (Shih, Chuang, & Hwang, 2010) 3.80 4.00 4.00
39. When completing CIMLA tasks, my stress is high. (Shih et al., 2010) 4.14 4.00 5.00
40. I can use the CIMLA while walking. (Harrison et al., 2013) 3.74 4.00 3.00
41. I experienced the instructions of CIMLA as not difficult. (Burkes, 2007) 4.22 4.00 5.00
42. The hurried pace of tasks in CIMLA is suitable for me. (Windell & Wiebe, 2007) 4.14 4.00 5.00
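
The mean, median and mode reported for each item above can be reproduced from raw five-point Likert responses with standard descriptive statistics. The following minimal Python sketch illustrates the computation on made-up data; the item labels and scores are hypothetical and are not the responses collected for CIMLA.

from statistics import mean, median, mode

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree) for a few
# items, grouped by the usability attributes used in Appendix A; illustrative values only.
responses = {
    "Timeliness": {
        "Q13": [4, 5, 3, 4, 4, 5, 4, 3, 4, 4],
        "Q17": [4, 4, 3, 5, 4, 4, 3, 4, 4, 4],
    },
    "Learnability": {
        "Q23": [4, 4, 5, 4, 3, 4, 4, 5, 4, 4],
        "Q25": [5, 4, 4, 3, 4, 4, 4, 5, 4, 4],
    },
}

for attribute, items in responses.items():
    all_scores = []
    for item, scores in items.items():
        all_scores.extend(scores)
        # The same three statistics reported per item in the table above.
        print(f"{attribute} {item}: mean={mean(scores):.2f} "
              f"median={median(scores):.2f} mode={mode(scores)}")
    # An attribute-level mean aggregated over the attribute's items.
    print(f"{attribute} overall mean: {mean(all_scores):.2f}")

Grouping the items by the eight usability attributes in this way also allows an attribute-level score to be reported alongside the item-level statistics.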

References

Abdullah, M. R. T. L., Hussin, Z., & Zakaria, A. R. (2013). MLearning scaffolding model for undergraduate English Language learning: Bridging formal and informal learning. TOJET: The Turkish Online Journal of Educational Technology, 12(2).
Albert, W., & Tullis, T. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics. Newnes.
Al-Fahad, F. N. (2009). Students' attitudes and perceptions towards the effectiveness of mobile learning in King Saud University, Saudi Arabia, Online Submission. The Turkish Online Journal of Educational Technology, 8(2).
Ali, A. A. (2013). A framework for measuring the usability issues and criteria of mobile learning applications.
Ali, A., Alrasheedi, M., Ouda, A., & Capretz, L. F. (2015). A study of the interface usability issues of mobile learning applications for smart phones from the users perspective. arXiv preprint arXiv:1501.01875.
Anani, A. (2008). M-learning in review: Technology, standard and evaluation. Journal of Communication and Computer, 5(11), 1–6.
Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with web-based courses: An exploratory study of two on-line MBA programs. Management Learning, 33(3), 331–347.
Ardito, C., De Marsico, M., Lanzilotti, R., Levialdi, S., Roselli, T., Rossano, V., et al. (2004). Usability of e-learning tools, ACM. Proceedings of the working conference on advanced visual interfaces, 80–84.
Aronson, E. (1978). The jigsaw classroom. Sage.
Association, A. L. (2000). Information literacy competency standards for higher education.
Austin, D. R., Lee, Y., & Getz, D. A. (2008). A Delphi study of trends in special and inclusive recreation. Leisure/Loisir, 32(1), 163–182.
Bevan, N. (1998). ISO 9241: Ergonomic requirements for office work with visual display terminals (VDTs)—Part 11: Guidance on usability. TC159.
Bourgeois, J., Pugmire, L., Stevenson, K., Swanson, N., & Swanson, B. (2006). The Delphi Method: A qualitative means to a better future. URL: http://www.freequality.org/documents/knowledge/Delphimethod.pdf. (Cited 2 November 2011).
Brooke, J. (1996). SUS—A quick and dirty usability scale. Usability evaluation in industry, 189(194), 4–7.
Burkes, K. M. E. (2007). Applying cognitive load theory to the design of online learning.
Capretz, L. F., Ali, A., & Ouda, A. (2012). A conceptual framework for measuring the quality aspect of mobile learning. Bulletin of the IEEE Technical Committee on Learning Technologies, 14(4), 31.
Cheong, C., Bruno, V., & Cheong, F. (2012). Designing a mobile-app-based collaborative learning system. Journal of Information Technology Education: Innovations in Practice, 11(1), 94–119.
Cheung, C. M., & Thadani, D. R. (2012). The impact of electronic word-of-mouth communication: A literature analysis and integrative model. Decision Support Systems, 54(1), 461–470.
Chiou, W.-C., Lin, C.-C., Perng, C., & Tsai, J.-T. (2009). E-learning usability measurement using technology acceptance model and usability test. 15th International Conference on Industry, Engineering, & Management Systems (IEMS), 9–11.
Chiu, P.-S., & Huang, Y.-M. (2016). The development of a decision support system for mobile learning: A case study in Taiwan. Innovations in Education and Teaching International, 53(5), 532–544.
Chu, H.-C., Hwang, G.-J., Tsai, C.-C., & Tseng, J. C. (2010). A two-tier test approach to developing location-aware mobile learning systems for natural science courses. Computers & Education, 55(4), 1618–1627.
Costabile, M. F., De Marsico, M., Lanzilotti, R., Plantamura, V. L., & Roselli, T. (2005). On the usability evaluation of e-learning applications, IEEE. HICSS'05: Proceedings of the 38th annual Hawaii international conference on system sciences, 6b.
Coutinho, W., Couto, E., Biase, C., Fernandes, P., & Bonifacio, B. (2015). Improving an educational mobile application through usability evaluation, Proceedings, Spain. 9th international technology, education and development conference, 5812–5820.
De Villiers, R. (2004). Usability evaluation of an e-learning tutorial: Criteria, questions and case study, South African Institute for Computer Scientists and Information Technologists. Proceedings of the 2004 annual research conference of the South African institute of computer scientists and information technologists on IT research in developing countries, 284–291.
Delbecq, A. L., Van de Ven, A. H., & Gustafson, D. H. (1975). Group techniques for program planning: A guide to nominal group and Delphi processes. Glenview, IL: Scott, Foresman.
Ellis, T., & Hafner, W. (2008). Building a framework to support project-based collaborative learning experiences in an asynchronous learning network. Interdisciplinary Journal of E-Learning and Learning Objects, 4(1), 167–190.
Fardoun, H., Montero, F., & Jaquero, V. L. (2009). eLearniXML: Towards a model-based approach for the development of e-learning systems considering quality. Advances in Engineering Software, 40(12), 1297–1305.
Følstad, A., Law, E., & Hornbæk, K. (2012). Analysis in practical usability evaluation: A survey study, ACM. Proceedings of the SIGCHI conference on human factors in computing systems, 2127–2136.
Fozdar, B. I., & Kumar, L. S. (2007). Mobile learning and student retention. International Review of Research in Open and Distance Learning, 8(2), 1–18.
Georgieva, E. S., Smrikarov, A. S., & Georgiev, T. S. (2011). Evaluation of mobile learning system. Procedia Computer Science, 3, 632–637.
Gorla, N., Somers, T. M., & Wong, B. (2010). Organizational impact of system quality, information quality, and service quality. The Journal of Strategic Information Systems, 19(3), 207–228.
Green, P. (1982). The content of a college-level outdoor leadership course.
Green, R. A. (2014). The Delphi technique in educational research. SAGE Open, 4(2), 2158244014529773.
Hamann, D. T. (2015). The construction and validation of an M-learning framework for online and blended learning environments.
Harrison, R., Flood, D., & Duce, D. (2013). Usability of mobile applications: Literature review and rationale for a new usability model. Journal of Interaction Science, 1(1), 1–16.
Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.
Hsu, C.-C., & Sandford, B. A. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1–8.
Hsu, Y.-C., Ching, Y.-H., & Snelson, C. (2014). Research priorities in mobile learning: An international Delphi study. Canadian Journal of Learning and Technology, 40(2), 1–23.
Huang, Y.-M., Liao, Y.-W., Huang, S.-H., & Chen, H.-C. (2014). A jigsaw-based cooperative learning approach to improve learning outcomes for mobile situated learning. Educational Technology & Society, 17(1), 128–140.
Hussain, Z. (2017). Presentation: Measuring usability of the mobile learning app for the children.
Hussain, A., Hashim, N. L., Nordin, N., & Tahir, H. M. (2013). A metric-based evaluation model for applications on mobile phones. Journal of ICT, 12, 55–71.
Hussain, A., Mkpojiogu, E. O., Musa, J. A., & Mortada, S. (2017). A user experience evaluation of Amazon Kindle mobile application. AIP conference proceedings, Vol. 1891, 020060. AIP Publishing.
Hutchings, M., Hadfield, M., Howarth, G., & Lewarne, S. (2007). Meeting the challenges of active learning in Web-based case studies for sustainable development. Innovations in Education and Teaching International, 44(3), 331–343.
Hwang, G.-J., & Chang, H.-F. (2011). A formative assessment-based mobile learning approach to improving the learning attitudes and achievements of students. Computers & Education, 56(4), 1023–1031.
Iacob, C., Harrison, R., & Faily, S. (2013). Online reviews as first class artifacts in mobile app development. Mobile computing, applications, and services. Springer, pp. 47–53.
ISO, I. (1999). 13407: Human-centred design processes for interactive systems. Geneva: ISO.
ISO (2001). IEC 9126-1: Software engineering-product quality-part 1: Quality model. Geneva, Switzerland: International Organization for Standardization, 27.
Ivanc, D., Vasiu, R., & Onita, M. (2012). Usability evaluation of a LMS mobile web interface. Information and software technologies. Springer, pp. 348–361.
Jacobs, J. M. (1996). Essential assessment criteria for physical education teacher education programs: A Delphi study.
Jalal, D., & Al-Debei, M. M. (2013). Developing and implementing a web portal success model. Jordan Journal of Business Administration, 9(1), 161–190.
Jou, M., Tennyson, R. D., Wang, J., & Huang, S.-Y. (2016). A study on the usability of E-books and APP in engineering courses: A case study on mechanical drawing. Computers & Education, 92, 181–193.
Khomokhoana, P. J. (2011). Using mobile learning applications to encourage active classroom participation: Technical and pedagogical considerations. University of the Free State.
Kiget, N. K., Wanyembi, G., & Peters, A. I. (2014). Evaluating usability of E-learning systems in universities.
Kuhnel, M., Seiler, L., Honal, A., & Ifenthaler, D. (2017). Mobile learning analytics in higher education: Usability testing and evaluation of an APP prototype. International Association for Development of the Information Society.
Lan, Y.-F., & Sie, Y.-S. (2010). Using RSS to support mobile learning based on media richness theory. Computers & Education, 55(2), 723–732.
Lettner, F., & Holzmann, C. (2012). Automated and unsupervised user interaction logging as basis for usability evaluation of mobile applications, ACM. Proceedings of the 10th international conference on advances in mobile computing & multimedia, 118–127.
Lin, J. M., Huang, R., Zhao, J. Y., & Dai, Q. (2013). Mobile internet oriented M-learning system. Applied mechanics and materials, Vol. 411. Trans Tech Publ, pp. 2883–2887.
Markett, C., Sánchez, I. A., Weber, S., & Tangney, B. (2006). Using short message service to encourage interactivity in the classroom. Computers & Education, 46(3), 280–293.
Metzger, M. J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091.
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220.
Miller, L. (2006). Determining what could/should be: The Delphi technique and its application. Columbus, Ohio: Meeting of the 2006 annual meeting of the Mid-Western Educational Research Association.
Mohamadi, Z. (2018). Comparative effect of online summative and formative assessment on EFL student writing ability. Studies in Educational Evaluation, 59, 29–40.
Mohammadi, H. (2015). Social and individual antecedents of m-learning adoption in Iran. Computers in Human Behavior, 49, 191–207.
Moore, M. G. (1989). Editorial: Three types of interaction.
Motiwalla, L. F. (2007). Mobile learning: A framework and evaluation. Computers & Education, 49(3), 581–596.
Nacenta, M. A., Kamber, Y., Qiang, Y., & Kristensson, P. O. (2013). Memorability of pre-designed and user-defined gesture sets, ACM. Proceedings of the SIGCHI conference on human factors in computing systems, 1099–1108.
Ng, W., & Nicholas, H. (2013). A framework for sustainable mobile learning in schools. British Journal of Educational Technology, 44(5), 695–715.
Nielsen, J. (1994a). Heuristic evaluation. Usability Inspection Methods, 17(1), 25–62.
Nielsen, J. (1994b). Usability engineering. Elsevier.
Nielsen, J., & Budiu, R. (2013). Mobile usability. MITP-Verlags GmbH & Co. KG.
Othman, M. K. (2012). Measuring visitors' experiences with mobile guide technology in cultural spaces. University of York.
Paas, F. G., & Van Merriënboer, J. J. (1994). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6(4), 351–371.

Parsazadeh, N., Ali, R., & Rezaei, M. (2018). A framework for cooperative and interactive mobile learning to improve online information evaluation skills. Computers & Education, 120, 75–89.
Peterson, C. (2003). Bringing ADDIE to life: Instructional design at its best. Journal of Educational Multimedia and Hypermedia, 12(3), 227–242.
Pollard, C., & Pollard, R. (2004). Research priorities in educational technology: A Delphi study. Journal of Research on Technology in Education, 37(2), 145–160.
Qureshi, K., & Irfan, M. (2009). Usability evaluation of e-learning applications: A case study of it's learning from a student's perspective. Master thesis. Blekinge Institute of Technology.
Rabi'u, S., Ayobami, A. S., & Hector, O. P. (2012). Usability characteristics of mobile applications. Kampar, Malaysia: Proceedings of International Conference on Behavioural & Social Science Research (ICBSSR), Vol. 2. Indexed by Thomson Reuters.
Reeves, T. C., Benson, L., Elliott, D., Grant, M., Holschuh, D., Kim, B., et al. (2002). Usability and instructional design heuristics for E-learning evaluation.
Rice, K. (2009). Priorities in K–12 distance education: A Delphi study examining multiple perspectives on policy, practice, and research. Journal of Educational Technology & Society, 12(3), 163–177.
Ryan, M., Carlton, K. H., & Ali, N. S. (1999). Evaluation of traditional classroom teaching methods versus course delivery via the World Wide Web. Journal of Nursing Education, 38(6), 272–277.
Saleh, A., Isamil, R. B., & Fabil, N. B. (2015). Extension of PACMAD model for usability evaluation metrics using goal question metrics (GQM) approach. Journal of Theoretical & Applied Information Technology, 79(1).
Santosa, P. I. (2009). Usability of E-learning portal and how it affects students' attitude and satisfaction, an exploratory study. PACIS 2009 Proceedings, 71.
Sha, L., Looi, C. K., Chen, W., & Zhang, B. H. (2012). Understanding mobile learning from the perspective of self-regulated learning. Journal of Computer Assisted Learning, 28(4), 366–378.
Sharpe, H., Rogers, Y., & Preece, J. (2007). Interaction design: Beyond human-computer interaction (2nd ed.). John Wiley & Sons Ltd.
Sharples, M. (2009). Methods for evaluating mobile learning. Researching mobile Learning: Frameworks, Tools and Research Designs, 17–39.
Shih, J.-L., Chuang, C.-W., & Hwang, G.-J. (2010). An inquiry-based mobile learning approach to enhancing social science learning effectiveness. Journal of Educational Technology & Society, 13(4), 50–62.
Shitkova, M., Holler, J., Heide, T., Clever, N., & Becker, J. (2015). Towards usability guidelines for mobile websites and applications. Wirtschaftsinformatik, pp. 1603–1617.
Skulmoski, G., Hartman, F., & Krahn, J. (2007). The Delphi method for graduate research. Journal of Information Technology Education: Research, 6(1), 1–21.
Soon, K. H., Sook, K., Jung, C., & Im, K. (1999). The effects of internet-based distance learning in nursing. Computers in Nursing, 18(1), 19–25.
Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202.
Swart, A., Bere, A., & Mafunda, B. (2017). Mobile learning usability evaluation using two adoption models. IEEE Xplore: Global engineering education conference (EDUCON).
Sweller, J., Van Merrienboer, J. J., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296.
Taharim, N. F., Mohd Lokman, A., Isa, W. M., Rahim, W. A., Noor, M., & Laila, N. (2013). A relationship model of playful interaction, interaction design, kansei engineering and mobile usability in mobile learning, IEEE. 2013 IEEE conference on open systems (ICOS), 22–26.
Tan, T.-H., & Liu, T.-Y. (2004). The mobile-based interactive learning environment (MOBILE) and a case study for assisting elementary school English learning, IEEE Proceedings. IEEE international conference on advanced learning technologies, 530–534.
Thornes, S. L. (2012). Creating an online tutorial to support information literacy and academic skills development. Journal of Information Literacy, 6(1).
Thurmond, V. A., Wambach, K., Connors, H. R., & Frey, B. B. (2002). Evaluation of student satisfaction: Determining the impact of a web-based environment by controlling for student characteristics. The American Journal of Distance Education, 16(3), 169–190.
Traxler, J., & Vosloo, S. (2014). Introduction: The prospects for mobile learning. Prospects, 44(1), 13–28.
Treeratanapon, T. (2012). Design of the usability measurement framework for mobile applications. Proceedings of the international conference on computer and information technology (ICCIT'2012), 16–17.
Vavoula, G., & Sharples, M. (2009). Meeting the challenges in evaluating mobile learning: A 3-level evaluation framework. International Journal of Mobile and Blended Learning, 1, 54–75.
Verhoeven, L., Schnotz, W., & Paas, F. (2009). Cognitive load in interactive knowledge construction. Elsevier.
Vidal, L.-A., Marle, F., & Bocquet, J.-C. (2011). Using a Delphi process and the Analytic Hierarchy Process (AHP) to evaluate the complexity of projects. Expert Systems with Applications, 38(5), 5388–5405.
Wang, Y.-S. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41(1), 75–86.
Wang, M., Shen, R., Novak, D., & Pan, X. (2009). The impact of mobile learning on students' learning behaviours and performance: Report from a large blended classroom. British Journal of Educational Technology, 40(4), 673–695.
Wendeson, S., Ahmad, W., Fatimah, W., & Haron, N. S. (2010). University students awareness on M-learning.
Windell, D., & Wiebe, E. (2007). Measuring cognitive load in multimedia instruction: A comparison of two instruments. Annual meeting of the American Educational Research Association.
Witold, A. A., Suryn, W., Khelifi, A., Rilling, J., Seffah, A., & Robert, F. (2003). Consolidating the ISO usability models, Citeseer. Submitted to the 11th international software quality management conference and the 8th annual INSPIRE conference.
Wixom, B. H., & Todd, P. A. (2005). A theoretical integration of user satisfaction and technology acceptance. Information Systems Research, 16(1), 85–102.
Wu, J.-H., & Wang, Y.-M. (2006). Measuring KMS success: A respecification of the DeLone and McLean's model. Information & Management, 43(6), 728–739.
Xu, J. D., Benbasat, I., & Cenfetelli, R. T. (2013). Integrating service quality with system and information quality: An empirical test in the e-service context. MIS Quarterly, 37(3), 777–794.
Zelazny, L. M., Belanger, F., & Teagarden, D. (2012). Toward a model of information system development success: Perceptions of information systems development team members.
